Re: Any known issues with Solaris event ports?

2010-01-22 Thread Theo Schlossnagle
This looks really good.

Your description of the problems in the ticket looks spot on, and the
implementation looks correct as well.

I'll see what I can do about pulling this and running it somewhere -- but I'm 
swamped with some rollouts right now.

On Jan 22, 2010, at 2:56 PM, Nils Goroll wrote:

> Hi Theo and all,
> 
> I'd appreciate reviews of some fixes to the event port waiter:
> 
> http://varnish.projects.linpro.no/ticket/629
> 
> These look promising to me, but I might have missed something important.
> 
> Thank you, Nils

--
Theo Schlossnagle
http://omniti.com/is/theo-schlossnagle





___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Varnish on OpenBSD

2009-03-13 Thread Theo Schlossnagle
Does anyone here run Varnish on OpenBSD?  I'm interested in either  
successes or known feature issues that would make it less than ideal  
as a platform.

Thanks!

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Timeouts

2009-03-02 Thread Theo Schlossnagle
Indeed!

On Mar 2, 2009, at 10:56 AM, Nils Goroll wrote:

> Theo,
>
>> I see two approaches:
>
> Thank you for your thoughts. To me, this sounds like it might  
> actually be more appropriate to spend the effort on the Solaris  
> source instead.
>
> Cheers,
>
> Nils

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Timeouts

2009-03-01 Thread Theo Schlossnagle

On Mar 1, 2009, at 11:49 AM, Nils Goroll wrote:

> Theo,
>
>> http://bugs.opensolaris.org/view_bug.do?bug_id=4641715
>> Either that or application level support for timeouts.  Given  
>> Varnish's design, this wouldn't be that hard, but still a bit of  
>> work.
>
> thank you for your answer. I was hoping that someone might have
> started work on this, but I understand that this won't be too easy  
> to implement.

I see two approaches:

(1) the traditional: replace all the read/write/readv/writev/send/
recv/sendfile calls with non-blocking counterparts and wrap them in a
poll loop that handles the timeout management.

(2) the contemporary: create a timeout management thread that
orchestrates interrupting the system calls in the threads.  It's kinda
magical.  It's basically an implementation of alarm() in each thread,
where the alarms are actually managed by a watcher thread that raises a
signal in the requested thread using pthread_kill() explicitly.  I've
implemented this before and it works.  But it is a bit painful, and
given that this implementation would exist to work around _only_ a
kernel shortcoming in Solaris, it seems crazy.  I think I'll go with the
general Varnish developer attitude here: "we expect the OS to support
the advanced features we need."  The upside is that it would work well
with sendfile.
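
For illustration only, a minimal sketch of the "traditional" approach
(1), assuming the fd has already been put in non-blocking mode (none of
this is from the Varnish tree); approach (2) would instead install a
no-op signal handler and have the watcher thread pthread_kill() any
thread whose deadline has passed, so the blocked call fails with EINTR:

#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Returns bytes read, 0 on EOF, -1 on error, -2 on timeout. */
static ssize_t
read_with_timeout(int fd, void *buf, size_t len, int timeout_ms)
{
    struct pollfd pfd;
    int i;

    pfd.fd = fd;
    pfd.events = POLLIN;
    do {
        i = poll(&pfd, 1, timeout_ms);
    } while (i < 0 && errno == EINTR);
    if (i == 0)
        return (-2);                 /* timed out */
    if (i < 0)
        return (-1);
    return (read(fd, buf, len));     /* fd is non-blocking */
}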

I still wish all network I/O was done in an event system... and we  
just had a lot of concurrently operating event loops all consuming  
events full-tilt.  I've had better success with that. Que sera sera.   
Varnish is still the fastest thing around.

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Any known issues with Solaris event ports? / tests b17&c22 failing on Solaris

2009-02-28 Thread Theo Schlossnagle

On Feb 28, 2009, at 2:41 PM, Nils Goroll wrote:
>
> By the way, does anyone have an idea yet how to make timeouts work  
> on Solaris (tests b20-b25 failing)?

If it has to do with TCP send and receive timeouts... we're waiting on  
this:

http://bugs.opensolaris.org/view_bug.do?bug_id=4641715

Either that or application level support for timeouts.  Given  
Varnish's design, this wouldn't be that hard, but still a bit of work.
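
The socket options in question are presumably SO_RCVTIMEO/SO_SNDTIMEO
(see the Solaris-support thread later in this archive); as an
illustration of what the kernel-level side would look like where it is
honored, setting them is just a setsockopt() pair (sketch, not Varnish
code):

#include <sys/socket.h>
#include <sys/time.h>

static int
set_socket_timeouts(int fd, int seconds)
{
    struct timeval tv;

    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) != 0)
        return (-1);
    return (setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv));
}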

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Any known issues with Solaris event ports?

2009-02-26 Thread Theo Schlossnagle
We haven't run into that problem.  However, we are running trunk and  
not 2.0.3.

Varnish 2.0.3 appears to fail b17 and c22 tests in the suite for me.   
I haven't had a chance to look deeper yet... it's on my todo list.   
Also, as a note, the configure.in with 2.0.3 managed to disable  
sendfile (which works on Solaris).  I have a patch that reenables  
that.  Once I track down the b17/c22 issues and fix and/or explain  
them, I'll send in the patch.

On Feb 26, 2009, at 5:50 PM, Nils Goroll wrote:

> Hi,
>
> I have not dug deeply enough into this issue, but I believe to have  
> stepped on
> an issue surfacing in "hanging" client connections with the Solaris  
> event ports
> interface.
>
> Using poll seems to avoid the issue.
>
> Is this a known problem?
>
> Cheers,
>
> Nils
> ___
> varnish-dev mailing list
> varnish-dev@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: is Varnish 2.0 beta1 supposed to compile/test on sol10?

2008-09-04 Thread Theo Schlossnagle
Looks like:

REPORT(LOG_ERR, "mgt_child_inherit(%d, %s)", fd, what);

In mgt_child_inherit(int fd, const char *what) in bin/varnishd/ 
mgt_child.c

is throwing a wrench in the management communication channel.  Not  
sure how this doesn't break all platforms, though.  If you comment that
line out, all tests pass (at least for me).

On Sep 4, 2008, at 6:38 PM, Keith J Paulson wrote:

> Theo,
>
> I had used a slightly different line, but did have the -mt; I repeated
> the config/build with your example and still have 61 FAILs.
>
> Keith
>
> Theo Schlossnagle wrote:
>> You likely need to specify -mt in your CFLAGS as Varnish is
>> multi-threaded.  You'd need to do that with gcc too.
>>
>> My build looks like:
>>
>> CFLAGS="$CFLAGS -m64 -mt" \
>> LDFLAGS="$LDFLAGS -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64" \
>> CC=cc CXX=CC \
>> ./configure [typical args]
>>
>> On Sep 4, 2008, at 6:04 PM, Keith J Paulson wrote:
>>
>>>
>>> I have tried Feature complete Varnish 2.0 Beta 1 on sol10 with  
>>> gcc, and
>>> sun's studio 12 compiler, and in neither case does it work.  With  
>>> sun
>>> cc, latest results are:
>>>
>>> ===
>>> 61 of 73 tests failed
>>> Please report to varnish-dev@projects.linpro.no
>>> ===
>>>
>>> for which
>>>
>>> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>>> Condition(i == 0) not true.
>>>
>>> seems a significant cause (response not received?)
>>>
>>> One complete test:
>>>
>>> ./varnishtest -v  tests/b2.vtc
>>> #TEST tests/b2.vtc starting
>>> #TEST Check that a pass transaction works
>>> ##   s1   Starting server
>>> ###  s1   listen on 127.0.0.1:9080 (fd 3)
>>> ##   v1   Launch
>>> ###  v1   CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a
>>> '127.0.0.1:9081' -T 127.0.0.1:9001
>>> ##   s1   Started on 127.0.0.1:9080
>>> ###  v1   opening CLI connection
>>>  v1   debug| storage_file: filename: ./varnish.sUaWlN (unlinked)
>>> size 166 MB.\n
>>>  v1   debug| mgt_child_inherit(4, storage_file)\n
>>>  v1   debug| Using old SHMFILE\n
>>>  v1   debug| Notice: locking SHMFILE in core failed: Not owner
>>>  v1   debug| \n
>>>  v1   debug| Debugging mode, enter "start" to start child\n
>>> ###  v1   CLI connection fd = 4
>>>  v1   CLI TX| vcl.inline vcl1 "backend s1 { .host =  
>>> \"127.0.0.1\";
>>> .port = \"9080\"; }\n\n\tsub vcl_recv {\n\t\tpass;\n\t}\n"
>>>  v1   CLI RX| VCL compiled.
>>> ###  v1   CLI STATUS 200
>>>  v1   CLI TX| vcl.use vcl1
>>> ###  v1   CLI STATUS 200
>>> ##   v1   Start
>>>  v1   CLI TX| start
>>>  v1   debug| mgt_child_inherit(9, sock)\n
>>>  v1   debug| mgt_child_inherit(10, cli_in
>>>  v1   debug| )\n
>>>  v1   debug| mgt_child_inherit(13, cli_out)\n
>>>  v1   debug| child (20083
>>>  v1   debug| ) Started\n
>>>  v1   debug| mgt_child_inherit(
>>> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>>> Condition(i == 0) not true.
>>> Abort
>>>
>>>
>>>
>>> Any suggestions?
>>>
>>> Keith
>>>
>>>
>>> ___
>>> varnish-dev mailing list
>>> varnish-dev@projects.linpro.no
>>> http://projects.linpro.no/mailman/listinfo/varnish-dev
>>
>> -- 
>> Theo Schlossnagle
>> Principal/CEO
>> OmniTI Computer Consulting, Inc.
>> Web Applications & Internet Architectures
>> w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911
>>
>>
>>
>>
>
>
> ___
> varnish-dev mailing list
> varnish-dev@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: is Varnish 2.0 beta1 supposed to compile/test on sol10?

2008-09-04 Thread Theo Schlossnagle
I get the same errors on the beta1.

On Sep 4, 2008, at 6:38 PM, Keith J Paulson wrote:

> Theo,
>
> I had used a slightly different line, but did have the -mt; I repeated
> the config/build with your example and still have 61 FAILs.
>
> Keith
>
> Theo Schlossnagle wrote:
>> You likely need to specify -mt in your CFLAGS as Varnish is
>> multi-threaded.  You'd need to do that with gcc too.
>>
>> My build looks like:
>>
>> CFLAGS="$CFLAGS -m64 -mt" \
>> LDFLAGS="$LDFLAGS -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64" \
>> CC=cc CXX=CC \
>> ./configure [typical args]
>>
>> On Sep 4, 2008, at 6:04 PM, Keith J Paulson wrote:
>>
>>>
>>> I have tried Feature complete Varnish 2.0 Beta 1 on sol10 with  
>>> gcc, and
>>> sun's studio 12 compiler, and in neither case does it work.  With  
>>> sun
>>> cc, latest results are:
>>>
>>> ===
>>> 61 of 73 tests failed
>>> Please report to varnish-dev@projects.linpro.no
>>> ===
>>>
>>> for which
>>>
>>> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>>> Condition(i == 0) not true.
>>>
>>> seems a significant cause (response not received?)
>>>
>>> One complete test:
>>>
>>> ./varnishtest -v  tests/b2.vtc
>>> #TEST tests/b2.vtc starting
>>> #TEST Check that a pass transaction works
>>> ##   s1   Starting server
>>> ###  s1   listen on 127.0.0.1:9080 (fd 3)
>>> ##   v1   Launch
>>> ###  v1   CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a
>>> '127.0.0.1:9081' -T 127.0.0.1:9001
>>> ##   s1   Started on 127.0.0.1:9080
>>> ###  v1   opening CLI connection
>>>  v1   debug| storage_file: filename: ./varnish.sUaWlN (unlinked)
>>> size 166 MB.\n
>>>  v1   debug| mgt_child_inherit(4, storage_file)\n
>>>  v1   debug| Using old SHMFILE\n
>>>  v1   debug| Notice: locking SHMFILE in core failed: Not owner
>>>  v1   debug| \n
>>>  v1   debug| Debugging mode, enter "start" to start child\n
>>> ###  v1   CLI connection fd = 4
>>>  v1   CLI TX| vcl.inline vcl1 "backend s1 { .host =  
>>> \"127.0.0.1\";
>>> .port = \"9080\"; }\n\n\tsub vcl_recv {\n\t\tpass;\n\t}\n"
>>>  v1   CLI RX| VCL compiled.
>>> ###  v1   CLI STATUS 200
>>>  v1   CLI TX| vcl.use vcl1
>>> ###  v1   CLI STATUS 200
>>> ##   v1   Start
>>>  v1   CLI TX| start
>>>  v1   debug| mgt_child_inherit(9, sock)\n
>>>  v1   debug| mgt_child_inherit(10, cli_in
>>>  v1   debug| )\n
>>>  v1   debug| mgt_child_inherit(13, cli_out)\n
>>>  v1   debug| child (20083
>>>  v1   debug| ) Started\n
>>>  v1   debug| mgt_child_inherit(
>>> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>>> Condition(i == 0) not true.
>>> Abort
>>>
>>>
>>>
>>> Any suggestions?
>>>
>>> Keith
>>>
>>>
>>> ___
>>> varnish-dev mailing list
>>> varnish-dev@projects.linpro.no
>>> http://projects.linpro.no/mailman/listinfo/varnish-dev
>>
>> -- 
>> Theo Schlossnagle
>> Principal/CEO
>> OmniTI Computer Consulting, Inc.
>> Web Applications & Internet Architectures
>> w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911
>>
>>
>>
>>
>
>
> ___
> varnish-dev mailing list
> varnish-dev@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: is Varnish 2.0 beta1 supposed to compile/test on sol10?

2008-09-04 Thread Theo Schlossnagle
You likely need to specify -mt in your CFLAGS as Varnish is
multi-threaded.  You'd need to do that with gcc too.

My build looks like:

CFLAGS="$CFLAGS -m64 -mt" \
LDFLAGS="$LDFLAGS -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64" \
CC=cc CXX=CC \
./configure [typical args]

On Sep 4, 2008, at 6:04 PM, Keith J Paulson wrote:

>
> I have tried Feature complete Varnish 2.0 Beta 1 on sol10 with gcc,  
> and
> sun's studio 12 compiler, and in neither case does it work.  With sun
> cc, latest results are:
>
> ===
> 61 of 73 tests failed
> Please report to varnish-dev@projects.linpro.no
> ===
>
> for which
>
> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>  Condition(i == 0) not true.
>
> seems a significant cause (response not received?)
>
> One complete test:
>
> ./varnishtest -v  tests/b2.vtc
> #TEST tests/b2.vtc starting
> #TEST Check that a pass transaction works
> ##   s1   Starting server
> ###  s1   listen on 127.0.0.1:9080 (fd 3)
> ##   v1   Launch
> ###  v1   CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a
> '127.0.0.1:9081' -T 127.0.0.1:9001
> ##   s1   Started on 127.0.0.1:9080
> ###  v1   opening CLI connection
>  v1   debug| storage_file: filename: ./varnish.sUaWlN (unlinked)
> size 166 MB.\n
>  v1   debug| mgt_child_inherit(4, storage_file)\n
>  v1   debug| Using old SHMFILE\n
>  v1   debug| Notice: locking SHMFILE in core failed: Not owner
>  v1   debug| \n
>  v1   debug| Debugging mode, enter "start" to start child\n
> ###  v1   CLI connection fd = 4
>  v1   CLI TX| vcl.inline vcl1 "backend s1 { .host = \"127.0.0.1\";
> .port = \"9080\"; }\n\n\tsub vcl_recv {\n\t\tpass;\n\t}\n"
>  v1   CLI RX| VCL compiled.
> ###  v1   CLI STATUS 200
>  v1   CLI TX| vcl.use vcl1
> ###  v1   CLI STATUS 200
> ##   v1   Start
>  v1   CLI TX| start
>  v1   debug| mgt_child_inherit(9, sock)\n
>  v1   debug| mgt_child_inherit(10, cli_in
>  v1   debug| )\n
>  v1   debug| mgt_child_inherit(13, cli_out)\n
>  v1   debug| child (20083
>  v1   debug| ) Started\n
>  v1   debug| mgt_child_inherit(
> Assert error in varnish_ask_cli(), vtc_varnish.c line 97:
>  Condition(i == 0) not true.
> Abort
>
>
>
> Any suggestions?
>
> Keith
>
>
> ___
> varnish-dev mailing list
> varnish-dev@projects.linpro.no
> http://projects.linpro.no/mailman/listinfo/varnish-dev

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
w: http://omniti.com   p: +1.443.325.1357 x201   f: +1.410.872.4911




___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


commit 3113 all good

2008-08-20 Thread Theo Schlossnagle
Trunk past 3113 compiles out of the box on Solaris 10+ and passes all  
the varnishtests.

Thanks!

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: waitpid EINTR issue in varnishd (& testcases)

2008-08-15 Thread Theo Schlossnagle


On Aug 15, 2008, at 4:32 AM, Poul-Henning Kamp wrote:



> I have committed the EINTR patch and, I hope, fixed the two testcases,
> please check this and send me your latest solaris patch.



This looks good and now the full test suite passes.

Attached is the Solaris patch.

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/



varnish-cache-3095-solaris.patch
Description: Binary data


___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Two varnishtest failures on Solaris

2008-08-13 Thread Theo Schlossnagle
##   s1   Starting server
###  s1   listen on 127.0.0.1:9080 (fd 3)
##   v1   Launch
###  v1   CMD: cd ../varnishd && ./varnishd -d -d -n /tmp/__v1 -a  
'127.0.0.1:9081' -T 127.0.0.1:9001 -p thread_pools=1 -w1,1,300
##   s1   Started on 127.0.0.1:9080
###  v1   opening CLI connection
 v1   debug| storage_file: filename: ./varnish.2jaqEs (unlinked)  
size 100 MB.\n
 v1   debug| Using old SHMFILE\n
 v1   debug| Notice: locking SHMFILE in core failed:
 v1   debug| Not owner\n
 v1   debug| Debugging mode, enter "start" to start child\n
###  v1   CLI connection fd = 7
 v1   CLI TX| vcl.inline vcl1 "\n\tbackend b1 {\n\t\t.host =  
\"localhost\";\n\t\t.port = \"9080\";\n\t}\n"
 v1   CLI RX| VCL compiled.
###  v1   CLI STATUS 200
 v1   CLI TX| vcl.use vcl1
###  v1   CLI STATUS 200
##   v1   Start
 v1   CLI TX| start
 v1   debug| child (
 v1   debug| 9477) Started\n
 v1   debug| Child (9477) said Closed fds: 3 5 7 8 11 12 14 15\n
###  v1   CLI STATUS 200
 v1   CLI TX| debug.xid 1000
 v1   debug| Child (9477) said Child starts\n
 v1   debug| Child (
 v1   debug| 9477
 v1   debug| ) said managed to mmap 105172992 bytes of 105172992\n
 v1   debug| Child (9477) said
 v1   debug| Ready\n
 v1   CLI RX| XID is 1000\n
###  v1   CLI STATUS 200
##   c1   Starting client
##   s1   Waiting for server
##   c1   Started
###  c1   Connect to 127.0.0.1:9081
###  c1   Connected to 127.0.0.1:9081 fd is 12
 c1   txreq| GET / HTTP/1.1\r\n
 c1   txreq| \r\n
###  c1   rxresp
###  s1   Accepted socket fd is 11
###  s1   rxreq
 s1   rxhdr| GET / HTTP/1.1\r\n
 s1   rxhdr| X-Varnish: 1001\r\n
 s1   rxhdr| X-Forwarded-For: 127.0.0.1\r\n
 s1   rxhdr| Host: localhost\r\n
 s1   rxhdr| \r\n
 s1   http[ 0] | GET
 s1   http[ 1] | /
 s1   http[ 2] | HTTP/1.1
 s1   http[ 3] | X-Varnish: 1001
 s1   http[ 4] | X-Forwarded-For: 127.0.0.1
 s1   http[ 5] | Host: localhost
 s1   txresp| HTTP/1.1 200 Ok\r\n
 s1   txresp| \r\n
###  s1   shutting fd 11
##   s1   Ending
##   c1   Waiting for client
 c1   rxhdr| HTTP/1.1 200 Ok\r\n
 c1   rxhdr| Content-Length: 0\r\n
 c1   rxhdr| Date: Wed, 13 Aug 2008 19:25:40 GMT\r\n
 c1   rxhdr| X-Varnish: 1001\r\n
 c1   rxhdr| Age: 0\r\n
 c1   rxhdr| Via: 1.1 varnish\r\n
 c1   rxhdr| Connection: keep-alive\r\n
 c1   rxhdr| \r\n
 c1   http[ 0] | HTTP/1.1
 c1   http[ 1] | 200
 c1   http[ 2] | Ok
 c1   http[ 3] | Content-Length: 0
 c1   http[ 4] | Date: Wed, 13 Aug 2008 19:25:40 GMT
 c1   http[ 5] | X-Varnish: 1001
 c1   http[ 6] | Age: 0
 c1   http[ 7] | Via: 1.1 varnish
 c1   http[ 8] | Connection: keep-alive
###  c1   Closing fd 12
##   c1   Ending
##   v1   as expected: n_backend (1) == 1
##   v1   as expected: n_vcl_avail (1) == 1
##   v1   as expected: n_vcl_discard (0) == 0
##   s2   Starting server
###  s2   listen on 127.0.0.1:9180 (fd 3)
##   s2   Started on 127.0.0.1:9180
 v1   CLI TX| vcl.inline vcl2 "\n\tbackend b2 {\n\t\t.host =  
\"localhost\";\n\t\t.port = \"9180\";\n\t}\n"
 v1   CLI RX| VCL compiled.
###  v1   CLI STATUS 200
 v1   CLI TX| vcl.use vcl2
###  v1   CLI STATUS 200
##   v1   as expected: n_backend (2) == 2
##   v1   as expected: n_vcl_avail (2) == 2
##   v1   as expected: n_vcl_discard (0) == 0
 v1   CLI TX| debug.backend
 v1   CLI RX| 4ff790 b1 1\n
 v1   CLI RX| 4ff850 b2 1\n
###  v1   CLI STATUS 200
##   v1   CLI 200 
 v1   CLI TX| vcl.list
 v1   CLI RX| available   1 vcl1\n
 v1   CLI RX| active  0 vcl2\n
###  v1   CLI STATUS 200
##   v1   CLI 200 
 v1   CLI TX| vcl.discard vcl1
 v1   debug| unlink ./vcl.ORk8t3RP.so\n
###  v1   CLI STATUS 200
##   v1   CLI 200 
##   v1   as expected: n_backend (2) == 2
##   v1   as expected: n_vcl_avail (1) == 1
##   v1   as expected: n_vcl_discard (1) == 1
##   c1   Starting client
 v1   CLI TX| debug.backend
##   c1   Started
###  c1   Connect to 127.0.0.1:9081
 v1   CLI RX| 4ff790 b1 1\n
 v1   CLI RX| 4ff850 b2 1\n
###  v1   CLI STATUS 200
##   v1   CLI 200 
 v1   CLI TX| vcl.list
 v1   CLI RX| discarded   1 vcl1\n
 v1   CLI RX| active  0 vcl2\n
###  v1   CLI STATUS 200
##   v1   CLI 200 
 v1   Not true: n_backend (2) == 1 (1)



--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


waitpid EINTR issue in varnishd

2008-08-13 Thread Theo Schlossnagle
On Solaris, I am occasionally seeing a SIGCHLD from the completion of
the VCL compile fire during the waitpid() call, causing waitpid() to
return -1 with errno set to EINTR.  Here is a patch that addresses that
condition.
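
The attached patch isn't reproduced here; as a rough sketch of the usual
shape of such a fix (not necessarily exactly what the patch does), the
interrupted call is simply retried:

#include <errno.h>
#include <sys/types.h>
#include <sys/wait.h>

static pid_t
waitpid_eintr(pid_t pid, int *status, int options)
{
    pid_t r;

    do {
        r = waitpid(pid, status, options);
    } while (r == (pid_t)-1 && errno == EINTR);
    return (r);
}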


--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/



vcc_waitpid.patch
Description: Binary data
___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Fresh patch for Solaris

2008-08-11 Thread Theo Schlossnagle
Next pass:

http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3080-2.diff

On Aug 11, 2008, at 10:47 AM, Poul-Henning Kamp wrote:

> You still have the err.h compat stuff, that's not necessary any more,
> I removed the two uses of err().

Didn't catch that.  Fixed.

> Why is the tcp.c patch necessay ?

FIONBIO isn't available by default as it is a BSD thing.  You're  
"supposed" to use fcntl on Solaris.  fcntl is slower and ioctl _will_  
work, but the define of FIONBIO is in sys/filio.h.  You get it if you  
turn on BSD compatibility when including ioctl.h, but we don't want  
the other stuff that comes with that.  So far, that single line of  
code is the only one requiring BSD_COMP on the Solaris side, so I'm
hesitant to -DBSD_COMP the whole thing.
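
To make the trade-off concrete, here is an illustrative (not from the
patch) way to get non-blocking mode either way; on Solaris the FIONBIO
define comes from <sys/filio.h>, as described above:

#include <fcntl.h>
#include <sys/ioctl.h>
#ifdef __sun
#include <sys/filio.h>          /* FIONBIO, without -DBSD_COMP */
#endif

static int
set_nonblocking(int fd)
{
#ifdef FIONBIO
    int on = 1;

    return (ioctl(fd, FIONBIO, &on));
#else
    int flags = fcntl(fd, F_GETFL, 0);

    if (flags == -1)
        return (-1);
    return (fcntl(fd, F_SETFL, flags | O_NONBLOCK));
#endif
}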

> You have not answered my questions about .so and sendfile ?

.o -> .so is not purely cosmetic.  There is an issue with the Sun  
Studio toolchain that makes .o not work.
sendfile on Solaris should be safe.  When the call returns, no bits
should be referenced at all.

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Fresh patch for Solaris

2008-08-11 Thread Theo Schlossnagle

On Aug 11, 2008, at 5:30 AM, Poul-Henning Kamp wrote:

> In message <[EMAIL PROTECTED]>, Theo  
> Schlossnagle
> writes:
>
>> http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3071.diff
>
> OK, I've picked the obvious stuff out.

Next pass:

http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3080.diff

cleaned up some redundant includes
removed the flock stuff (not Solaris related)
made the cache_acceptor_ports use ->pass
removed unnecessary things (includes, defines and initializations...  
removed from my patch only)
made the umem allocator a "new file" instead of a modified svn copy of  
the malloc allocator.

Patch is about half the size of the previous one.  AWESOME progress.   
This is great.  Thanks for your attention phk!

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Fresh patch for Solaris

2008-08-11 Thread Theo Schlossnagle

On Aug 11, 2008, at 5:30 AM, Poul-Henning Kamp wrote:

> In message <[EMAIL PROTECTED]>, Theo  
> Schlossnagle
> writes:
>
>> http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3071.diff
>
> OK, I've picked the obvious stuff out.
>
> Various questions about the rest:
>
> Doesn't Solaris have fcntl(F_SETLK) as mandated by POSIX ?
> flock() is not a standardized API, and I havn't seen a system yet
> which supports it, which doesn't also have fcntl(F_SETLK), so I
> would rather not mess up the source with a pointless check.

Ah, good catch.  That wasn't a patch for Solaris.  It was for Linux.
We've run into some issues with flock being more reliable than fcntl
on Linux in high-concurrency environments.  We ran into this
problem on another project and thought to make a preemptive strike in
Varnish.

However, my patch prefers flock over fcntl, which is clearly wrong as
it might do that on BSD*.  flock() on Solaris is only available with
-lucb (the meager BSD-ish compatibility layer).  So, with that patch,
Solaris still uses fcntl. (Go POSIX!)
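
For reference, the POSIX locking that ends up being used on Solaris
looks roughly like this (illustrative only, not the actual Varnish
code):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int
lock_whole_file(int fd)
{
    struct flock fl;

    memset(&fl, 0, sizeof fl);
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                       /* 0 == lock to end of file */
    return (fcntl(fd, F_SETLKW, &fl));  /* blocks until acquired */
}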

The ugly way we did it on the other project (FastXSL) that had this  
issue is here:

http://labs.omniti.com/trac/fastxsl/changeset/43

>
> Do you know for sure that sendfile on Solaris has no reference to the
> relevant parts of the file when it returns ?  Otherwise it is not safe
> to use.

I asked the internal engineering team at Sun, and they said that it has
no references.  I asked more forcefully and they did a code review
(albeit very short) and said again that it had no references.  So, at
this point, I believe that it is safe to use in Varnish.

>
> Why is the extra include of  necessary in  
> storage_file.c ?

My mistake.  I wasn't thorough when I merged up to trunk.  No conflict  
and my lack of attention.  Ignore that.

>
> Is there a reason to name the shared object .so instead of .o or is it
> just cosmetics ?

The Sun Studio toolchain barfs on the .o.  It "knows better" and seems
to ignore the request to make it shared.  I could get around this with
a shell script (as you had once suggested), but this change makes it
work in all the toolchains I have tried "out of the box".

>
> In cache_acceptor.c, please implement the "->pass" function which does
> the port_send() call.

Will do.  Not sure how that got reverted actually.

> Why the initialization in mgt_child.c ?


I believe that is left over from something else.  A valgrind warning  
-- but that could only have been due to other code I added and then  
removed.  It looks clean and safe without initialization.

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Fresh patch for Solaris

2008-08-10 Thread Theo Schlossnagle
http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3071.diff

This is cleaned up a bit.  Nothing really new, just updated to trunk  
3071.

Some nice things worth cherry picking (IMHO)

err/errx in libvarnishcompat (from NetBSD)
replacement of the ifdef'd compile stuff with a define set in configure
umem allocator -- should scale much better than malloc
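
(A hedged aside on what the umem stevedore builds on -- this is
libumem's public interface, not the stevedore code itself.  Note that
umem_free() wants the original size back, which is the main API
difference from malloc/free.)

#include <stdlib.h>
#include <umem.h>

void *
buf_alloc(size_t sz)
{
    return (umem_alloc(sz, UMEM_DEFAULT));  /* may return NULL */
}

void
buf_free(void *p, size_t sz)
{
    umem_free(p, sz);       /* size must match the allocation */
}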

Anyway, anyone running Varnish on Solaris, please rev your engines and  
give this a spin.

autogen.sh
CC=cc CFLAGS="-mt -m64" ./configure
make

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/


___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: cache_acceptor_poll.c oddities (Solaris)

2008-03-15 Thread Theo Schlossnagle

On Mar 14, 2008, at 8:49 PM, Jyri J. Virkki wrote:

> Once upon a time Dag-Erling Smørgrav wrote:
>>
>> The poll code has seen very little maintenance or testing lately,  
>> simply
>> because there are better alternatives on all the platforms we  
>> officially
>> support.  Theo Schlossnagle has an acceptor implementation that uses
>> Solaris event ports; you may have more luck with that than with poll.
>>
>> In any case, thank you for the patch; I will commit it as soon as
>> possible.
>
> While I'd meant it as more of an illustrative than final patch, I see
> you massaged it during checkin already, so thanks!
>
> The other immediate problem I saw I've filed as ticket 222[1] and
> included the patch there (related to IOV_MAX limit on Solaris, which I
> see from the archives was discussed earlier and the define was  
> changed,
> but didn't quite work).
>
> I realize the poll implementation isn't that interesting but I figured
> I'd start with it as an initial experiment and add an event port
> alternative later, but nice to hear it's been done.
>
> Theo, are you planning on contributing it to the main trunk to make it
> easier to access and keep in sync?

Here's a more recent patch:

http://lethargy.org/~jesus/misc/varnish-cache-2543-sol10-2.diff

I'd love to have the changes put back.  There are a few outstanding  
issues:

1) the ping/pong stuff (cross notify) is done to leverage the  
efficiency of event ports, but it is a bad hack from the integration  
perspective.  I can fix this when I get some time -- don't have enough  
of that at the moment.
2) The socket timeouts aren't supported, which is bad and nontrivial to
fix.  However, adding a very thin read(v)/write(v)/sendfile(v)  
abstraction layer would make this very easy and make the SSL  
featureset trivial to add as well.  I could add this in if the  
developers are interested in it?
3) The VCL compiler stuff is changed to support a shortcoming of the
Sun Studio toolchain.  And while the change should work everywhere,
phk expressed some dislike for it.  Apparently, it's a system("")
call, so I could do some trickery by chaining commands to work around
the issue.  I think my work-around is more to-the-point.  I doubt that
will be taken back, which is fine -- it's a subjective argument over
which approach is better.
4) The IOV_MAX stuff is still an issue as the Sun Studio C  
preprocessor doesn't work with
#if (IOV_MAX < (HTTP_HDR_MAX * 2))

Aside from those issues, the patch is working well for us in
production.  We've hit no issues with (2) at this point, which is the
only part that is a bona fide technical problem.

For 2) above, the approach I would take is to change the session fd
into an abstraction like:

typedef struct varnish_io_opset {
   int (*read)(void *op_ctx, void *buf, size_t nbyte);
   int (*write)(void *op_ctx, const void *buf, size_t nbyte);
   /* illustrative signatures for the members sketched above */
   int (*readv)(void *op_ctx, const struct iovec *iov, int iovcnt);
   int (*writev)(void *op_ctx, const struct iovec *iov, int iovcnt);
   int (*sendfile)(void *op_ctx, int fd, off_t *off, size_t nbyte);
} varnish_io_opset_t;
typedef struct varnish_socket {
   varnish_io_opset_t *ops;
   void *op_ctx;
} varnish_socket_t;
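With something like that in place, a call site such as read(fd, buf,
len) would become sp->sock->ops->read(sp->sock->op_ctx, buf, len)
(field names purely illustrative), and an SSL opset or a
Solaris-specific opset could be dropped in without touching the callers.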

This would also obviate the IOV_MAX trickery as Solaris would get its  
own I/O opset.

Best regards,

Theo

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Load Balancing Algorithms in VCL

2008-03-04 Thread Theo Schlossnagle

On Mar 4, 2008, at 6:34 AM, Poul-Henning Kamp wrote:

> In message <[EMAIL PROTECTED]>, Shiraz Kanga writes:
>>
>> Will it be possible to implement custom Load Balancing algorithms in
>> VCL? There are numerous algorithms that can be used to distribute  
>> load like
>> Round Robin, Weighted Round Robin, Least Connections, Fastest  
>> Response,
>> Most Bandwidth, etc.
>
> The plan is to implement such "directors" in C (we already have  
> "random")
> and I'm not convinced that doing it in VCL makes much sense, but can
> be persuaded otherwise by good arguments.


Load balancing algorithms can be complicated over larger sets of
backends (if that is the desired use case).  mod_backhand was written
specifically as a platform to test different algorithms in the lab.
What I learned from that is that if you have a good (and very simple)
API to implement new algorithms, then it is easy enough to test.  For
example, we found interesting and successful approaches using
cost-benefit-based algorithms and randomized-log2-window algorithms
(much better than simple random).  As a note, simple random performs
really well in the lab (better than least-connections and
fastest-response); in practice it doesn't do as well, but it still does
better than most alternatives -- so it's a good first algorithm to pick.

On the other hand, VCL compiles to C.  It seems you have an excellent  
platform to expose that algorithm composition up the stack and make it  
even more accessible without compromising efficiency.  At the end of  
the day, it would only be a convenience thing.  If not in VCL, I hope
that it could at least be dlopen()ed, so algorithms can be tested
post-install.
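
Purely for illustration -- this is not Varnish's actual director API --
the kind of minimal, dlopen()-able interface argued for above might
look like:

#include <stdlib.h>

struct lb_backend {
    const char  *name;
    int          healthy;
};

struct lb_director {
    const char  *algo;
    /* pick a backend for one request; priv holds algorithm state */
    struct lb_backend *(*choose)(void *priv,
        struct lb_backend *backends, int n);
    void        *priv;
};

/* "simple random": the baseline that tests surprisingly well */
static struct lb_backend *
choose_random(void *priv, struct lb_backend *backends, int n)
{
    (void)priv;
    return (&backends[random() % n]);
}

struct lb_director random_director = { "random", choose_random, NULL };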

Best regards,

Theo

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: development efforts on the Solaris side.

2008-02-14 Thread Theo Schlossnagle

On Feb 14, 2008, at 5:48 AM, Dag-Erling Smørgrav wrote:

> Theo Schlossnagle <[EMAIL PROTECTED]> writes:
>> [re: stv = stevedores = stevedores->next;]
>>
>> [...]  Was worried that the assumption that both assignments were
>> atomic.  One is, then the next is.  Perhaps that doesn't matter.  It
>> does seem like a hard to trigger race condition to me because while
>> the assignment is atomic, the access to stevedores->next is not
>> guaranteed to be view consistent in that assignment:
>>
>> T1: get stevedores->next in R(T1,a)
>> T2: get stevedores->next in R(T2,a)
>> T1: set stevedores to R(T1,a)
>> T2: set stevedores to R(T2,a)
>>
>> Now T1 and T2 both "advanced the pointer" but they did the same work
>> and the stv they have is the same.  Is that not a problem?  Perhaps
>> I don't understand the impact of that scenario correctly.
>
> There is a race, but it's irrelevant.  The point of this code is to
> spread the load between the stevedores.  It doesn't matter if two
> simultaneous allocations go to the same stevedore as long as *all*
> allocations don't go to the same stevedore.  It will balance out in
> the long run.

The comment made me think that the code was relying on different  
threads getting different stevedores for safety reasons -- not  
efficiency.  Thanks for clearing that up.
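
For readers without the source handy, the line being discussed amounts
to this (types illustrative; the real code is presumably in the
stevedore selection path):

struct stevedore {
    struct stevedore    *next;      /* circular list of stevedores */
    /* ... */
};

static struct stevedore *stevedores;

static struct stevedore *
pick_stevedore(void)
{
    struct stevedore *stv;

    /* deliberately unlocked: two threads may grab the same stevedore,
     * which is harmless since the goal is only to spread the load */
    stv = stevedores = stevedores->next;
    return (stv);
}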

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: development efforts on the Solaris side.

2008-01-09 Thread Theo Schlossnagle

On Jan 9, 2008, at 4:40 AM, Poul-Henning Kamp wrote:
>
> sendfile():
>   Does the solaris sendfile guarantee that storage is no longer
>   touched when it returns ?  Otherwise it's as little use as
>   the FreeBSD and Linux versions.


I just conferred with a Solaris kernel engineer and...

Solaris guarantees that the address ranges (and file data) referenced
as the source for data in calls to both sendfile() and sendfilev()
will not be touched after control returns to the caller.

Destination guarantees are more complicated.  If the destination is a  
file, it may not be on disk after the call returns.  Opening the file
with O_DSYNC or calling fsync() would be required.  However, I don't see
how this aspect will apply to varnish at all as the destination is a  
network socket -- and for the sake of this discussion we really only  
care about the semantics of the source data and how that is accessed  
relative to the return of control to the caller.
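
For reference, Solaris' sendfilev(3EXT) lets one call mix an in-memory
header with file data, and per the guarantee above neither source is
referenced once it returns.  A hedged sketch (not Varnish code):

#include <stdint.h>
#include <sys/types.h>
#include <sys/sendfile.h>

static ssize_t
send_hdr_and_file(int sock, const char *hdr, size_t hdrlen,
    int filefd, off_t off, size_t len)
{
    struct sendfilevec vec[2];
    size_t xferred;

    vec[0].sfv_fd = SFV_FD_SELF;            /* source is our memory */
    vec[0].sfv_flag = 0;
    vec[0].sfv_off = (off_t)(uintptr_t)hdr;
    vec[0].sfv_len = hdrlen;

    vec[1].sfv_fd = filefd;                 /* source is the file */
    vec[1].sfv_flag = 0;
    vec[1].sfv_off = off;
    vec[1].sfv_len = len;

    return (sendfilev(sock, vec, 2, &xferred));
}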

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
W: http://omniti.com
P: +1.443.325.1357 x201
F: +1.410.872.4911







___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: development efforts on the Solaris side.

2008-01-09 Thread Theo Schlossnagle
 nvec to writev(2) on Solaris.  It is strictly  
limited to IOV_MAX.  So, the app breaks.  After reading the code, it  
looked like that was the right place to fix it.  Perhaps there should  
be an autoconf fragment that detects the "real" OS IOV_MAX and then
uses that in the event that it is lower than HTTP_HDR_MAX*2.  Or am I
thinking about this all wrong?
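
One way to make the constraint concrete, sketched here only as an
illustration (short writes within a chunk are ignored for brevity), is
to feed the iovec array to writev() in IOV_MAX-sized chunks:

#include <limits.h>
#include <sys/uio.h>

static ssize_t
writev_chunked(int fd, struct iovec *iov, int iovcnt)
{
    ssize_t total = 0, n;
    int chunk;

    while (iovcnt > 0) {
        chunk = iovcnt > IOV_MAX ? IOV_MAX : iovcnt;
        n = writev(fd, iov, chunk);
        if (n < 0)
            return (-1);
        total += n;         /* NB: assumes the whole chunk was written */
        iov += chunk;
        iovcnt -= chunk;
    }
    return (total);
}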

>
> cache_acceptor.c
>   Under no circumstances should #ifdef HAVE_PORT_CREATE be
>   necessary here.  If a new method is necessary for the
>   acceptors, so be it.

Completely agreed.  I noted that it was a hack in a previous email.
I'd like to add the ping stuff as a function in each acceptor so
they can have their own approach to waking the acceptor thread up.
Solaris' portfs is much more like kqueue than epoll and supports more
than just file descriptors.  It allows user-space eventing, so it is
really easy to have one thread just say "dude, wake up" to another
waiting in port_get().  People tend to have strong preferences about
adding functions to structures in C, so I was pretty confident that I
should propose the design before implementing it, as it would likely
be redone.
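
To make the "dude, wake up" part concrete, a minimal sketch of the
user-event mechanism (illustrative only, not the cache_acceptor_ports.c
code):

#include <port.h>

static int wakeup_port;     /* wakeup_port = port_create(); at init */
#define WAKEUP_TOKEN ((void *)1)

static void
acceptor_wakeup(void)       /* called from another thread */
{
    (void)port_send(wakeup_port, 0, WAKEUP_TOKEN);
}

static int
acceptor_wait(void)         /* the thread blocked in port_get() */
{
    port_event_t ev;

    if (port_get(wakeup_port, &ev, NULL) != 0)
        return (-1);
    if (ev.portev_source == PORT_SOURCE_USER &&
        ev.portev_user == WAKEUP_TOKEN)
        return (1);         /* woken up on purpose */
    return (0);             /* some other event */
}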

Best regards,

Theo

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Solaris support

2008-01-08 Thread Theo Schlossnagle

On Jan 8, 2008, at 3:45 PM, Dag-Erling Smørgrav wrote:

> Theo Schlossnagle <[EMAIL PROTECTED]> writes:
>> Dag-Erling Smørgrav <[EMAIL PROTECTED]> writes:
>>> Theo Schlossnagle <[EMAIL PROTECTED]> writes:
>>>> We've been running this for a while on Solaris.  Works really well.
>>> Only because you haven't noticed the bugs yet...  for instance,
>>> session timeout is broken (commented out, actually) in your patch,  
>>> so
>>> broken backends and / or clients will bog you down.
>> Sure.  When I said "works well," I meant "as well as on Linux.
>
> Uh, no, Linux actually supports SO_{RCV,SND}TIMEO, so Varnish does
> *not* work as well on Solaris as on Linux, with or without your patch.

And Solaris supports portfs, which is better than epoll.  It's not
really a competition.  I'm kinda lost as to how this turned into an
argument.  I have had a good experience on Solaris so far and I don't
know what is gained by rebutting my comments assuming I don't realize
there might be bugs.  The patch is against /trunk/ and as such I would
assume many bugs, half-baked features, prototype code, etc.  I'd
assume both bugs in my patch as well as in the /trunk/ to which it is
applied.  Varnish as a whole only seems to work because of the bugs we
haven't noticed, right?

I just look forward to having solid support for the OS features that  
are applicable.

I'd note that the performance from the umem stevedore implementation  
is pretty nice.  And that works on FreeBSD and Linux now that libumem  
is ported there.  Obviously, every implementation has its advantages
and disadvantages, but the umem stuff is an excellent alternative to the
malloc-based stevedore under similar usage.

>
>>>> sendfile and sendfilev on solaris
>>> Probably not a good idea unless sendfile() semantics are  
>>> significantly
>>> better on Solaris than on FreeBSD and Linux.
>> It's sendfile, it has all the advantages of sendfile.  To support
>> them, you have to conform to their APIs.  I just added support so it
>> could say "oh, look, I know how to use that sendfile..." and then
>> actually use it (just as linux and freebsd now).  And I think
>> sendfilev on Solaris is pretty slick.
>
> So you've missed the numerous threads on sendfile() bugs affecting
> Varnish, and the more recent threads on sendfile() in FreeBSD and
> Linux being broken by design so that Varnish cannot reliably use it,
> and Poul-Henning's commit disabling the sendfile() detection in
> configure.ac to stop the whining.

Well, so far so good.  There are some bugs in Solaris' sendfile as  
well, of course.  I haven't been able to tickle them in my testing.

I didn't miss the discussion, but I did miss that commit.

>
>>>> using fcntl() when flock() is unavailable
>>> There are issues here as well; the semantics are subtly different  
>>> from
>>> OS to OS.  For instance, what happens if separate threads in the  
>>> same
>>> process try to lock the same file?  It's even less fun if you take
>>> into consideration systems that support both.
>> As I see it you only supported flock().
>
> You've got it exactly backwards - Varnish has used fcntl() locks
> exclusively for... what... five months now?  ever since I determined
> that in addition to being more portable, fcntl() tends to be the least
> broken on platforms that support both (though not on FreeBSD, where
> flock() is slightly better, but I didn't consider it "better enough"
> to warrant an #ifdef).  I even credited you in the commit log.

It looks like when updating to trunk that part was in conflict.  I had  
removed my code as yours did the trick.  So, that patch was reverted  
in my set a while ago and wasn't in the one I linked to.

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: Solaris support

2008-01-08 Thread Theo Schlossnagle

On Jan 8, 2008, at 2:18 PM, Dag-Erling Smørgrav wrote:

> Theo Schlossnagle <[EMAIL PROTECTED]> writes:
>> We've been running this for a while on Solaris.  Works really well.
>
> Only because you haven't noticed the bugs yet...  for instance,
> session timeout is broken (commented out, actually) in your patch, so
> broken backends and / or clients will bog you down.

Sure.  When I said "works well," I meant "as well as on Linux."

>
>> What we need is the function in cache_acceptor.c:
>> [...]
>> I see this as a cleaner mechanism regardless as there is no reason  
>> for
>> the generic cache_acceptor to care about int vca_pipes[2]; -- it's an
>> implementation detail.  How's that sound?
>
> That sounds like a good idea.  Even better if you can submit an
> isolated patch :)

Okay, I hadn't tackled that, I'll look at submitting a patch specific  
to that.

>
>>   sendfile and sendfilev on solaris
>
> Probably not a good idea unless sendfile() semantics are significantly
> better on Solaris than on FreeBSD and Linux.

It's sendfile; it has all the advantages of sendfile.  To support
them, you have to conform to their APIs.  I just added support so it
could say "oh, look, I know how to use that sendfile..." and then
actually use it (just as Linux and FreeBSD do now).  And I think
sendfilev on Solaris is pretty slick.

>
>>   using fcntl() when flock() is unavailable
>
> There are issues here as well; the semantics are subtly different from
> OS to OS.  For instance, what happens if separate threads in the same
> process try to lock the same file?  It's even less fun if you take
> into consideration systems that support both.

As I see it you only supported flock().  If I don't have it, I use
fcntl().  It will certainly not break any system that currently works.
It doesn't use fcntl() when available... it uses fcntl() only when
flock() isn't available.  fcntl() on Linux acts really strange -- so
it would be bad -- as you intimate above.

Best regards,

Theo

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
W: http://omniti.com
P: +1.443.325.1357 x201
F: +1.410.872.4911







___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Re: development efforts on the Solaris side.

2008-01-08 Thread Theo Schlossnagle

On Jan 8, 2008, at 9:32 AM, Poul-Henning Kamp wrote:

> In message <[EMAIL PROTECTED]>, Theo  
> Schlossnagle
> writes:
>> Hi guys,
>>
>> I'd really like to be able to contribute some of the improvements
>> we've made to varnish back.  Is there a way I can get access to
>> commit.  I'd be happy to stay in my own branch.  My current patch set
>> is unwieldy and I'm very tempted to just start my own repos... That,
>> of course, seems silly.  I've fixed up (removed) some of the gccisms
>> in favor of more portability (#include over -include).  I've fixed a few
>> bugs, made the VCC line a bit smarter and more accepting of non-gcc
>> compilers, I've added a portfs acceptor and built a storage_umem
>> allocator facility that rides on Solaris' excellent libumem (highly
>> scalable allocator) which we also ported to run on Linux and FreeBSD
>> (and Mac OS X): https://labs.omniti.com/trac/portableumem
>>
>> Next steps?
>
> Can you mail me a link to the patch ?

http://lethargy.org/~jesus/misc/varnish-solaris-trunk-2328.diff

Enjoy!

--
Theo Schlossnagle
Principal/CEO
OmniTI Computer Consulting, Inc.
W: http://omniti.com
P: +1.443.325.1357 x201
F: +1.410.872.4911







___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


development efforts on the Solaris side.

2008-01-08 Thread Theo Schlossnagle
Hi guys,

I'd really like to be able to contribute some of the improvements
we've made to varnish back.  Is there a way I can get access to
commit?  I'd be happy to stay in my own branch.  My current patch set
is unwieldy and I'm very tempted to just start my own repos... That,
of course, seems silly.  I've fixed up (removed) some of the gccisms in
favor of more portability (#include over -include).  I've fixed a few
bugs, made the VCC line a bit smarter and more accepting of non-gcc
compilers, I've added a portfs acceptor and built a storage_umem
allocator facility that rides on Solaris' excellent libumem (a highly
scalable allocator), which we also ported to run on Linux and FreeBSD
(and Mac OS X): https://labs.omniti.com/trac/portableumem

Next steps?

Best regards,

Theo

--
Theo Schlossnagle
Principal/CTO
OmniTI Computer Consulting, Inc.
W: http://omniti.com
P: +1.443.325.1357 x201
F: +1.410.872.4911






___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev


Solaris support

2007-12-20 Thread Theo Schlossnagle
Hello all,

We've been running this for a while on Solaris.  Works really well.

We have some minor modifications to the source tree to make it work,  
but there is one minor design change to make things work well with our  
ports cache_acceptor.

What we need is the function in cache_acceptor.c:

void vca_return_session(struct sess *sp);

to be delegated to the acceptor implementations.  Solaris allows for
very efficient user-initiated notifications across its event system,
and our ports acceptor does that by talking directly to the port and
not using the vca_pipes.

I'd like to have a return_session element added to the acceptor  
structure that is void (*return_session)(struct sess *);

And the vca_return_session function be:

void
vca_return_session(struct sess *sp)
{
   vca->return_session(sp);
}

I see this as a cleaner mechanism regardless as there is no reason for  
the generic cache_acceptor to care about int vca_pipes[2]; -- it's an  
implementation detail.  How's that sound?

I'd like to see this change in trunk before I submit my patch as it  
will pretty solidly affect how the return session stuff is handled in  
our changeset.

My current patch supports:
   sendfile and sendfilev on Solaris
   using fcntl() when flock() is unavailable
   A cache_acceptor_ports.c that embraces Solaris' port eventing system

--
Theo Schlossnagle
Esoteric Curio -- http://lethargy.org/
OmniTI Computer Consulting, Inc. -- http://omniti.com/

___
varnish-dev mailing list
varnish-dev@projects.linpro.no
http://projects.linpro.no/mailman/listinfo/varnish-dev