Re: Any known issues with Solaris event ports?
This looks really good. Your description of the problems in the ticket looks spot on, and the implementation looks correct as well. I'll see what I can do about pulling this and running it somewhere -- but I'm swamped with some rollouts right now. On Jan 22, 2010, at 2:56 PM, Nils Goroll wrote: Hi Theo and all, I'd appreciate reviews of some fixes to the event port waiter: http://varnish.projects.linpro.no/ticket/629 These look promising to me, but I might have missed something important. Thank you, Nils -- Theo Schlossnagle http://omniti.com/is/theo-schlossnagle ___ varnish-dev mailing list varnish-dev@projects.linpro.no http://projects.linpro.no/mailman/listinfo/varnish-dev
Varnish on OpenBSD
Does anyone here run Varnish on OpenBSD? I'm interested in either successes or known feature issues that would make it less than ideal as a platform. Thanks! -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: Timeouts
On Mar 1, 2009, at 11:49 AM, Nils Goroll wrote: Theo, http://bugs.opensolaris.org/view_bug.do?bug_id=4641715 Either that or application-level support for timeouts. Given Varnish's design, this wouldn't be that hard, but still a bit of work. thank you for your answer. I was hoping that someone might have started work on this, but I understand that this won't be too easy to implement. I see two approaches: (1) the traditional: replace all the read/write/readv/writev/send/recv/sendfile calls with non-blocking counterparts and wrap them in a poll loop with the timeout management. (2) the contemporary: create a timeout management thread that orchestrates interrupting the system calls in the threads. It's kinda magical. It's basically an implementation of alarm() in each thread, where the alarms are actually managed in a watcher thread, which raises a signal in the requested thread using pthread_kill explicitly. I've implemented this before and it works. But it is a bit painful, and given that this implementation would exist only to work around the kernel lacking in Solaris, it seems crazy. I think I'll go with the general Varnish developer attitude here: we expect the OS to support the advanced features we need. Upside is that it would work well with sendfile. I still wish all network I/O was done in an event system... and we just had a lot of concurrently operating event loops all consuming events full-tilt. I've had better success with that. Que sera sera. Varnish is still the fastest thing around. -- Theo Schlossnagle Principal/CEO OmniTI Computer Consulting, Inc. Web Applications Internet Architectures w: http://omniti.com p: +1.443.325.1357 x201 f: +1.410.872.4911
Re: Any known issues with Solaris event ports?
We haven't run into that problem. However, we are running trunk and not 2.0.3. Varnish 2.0.3 appears to fail the b17 and c22 tests in the suite for me. I haven't had a chance to look deeper yet... it's on my todo list. Also, as a note, the configure.in in 2.0.3 managed to disable sendfile (which works on Solaris). I have a patch that re-enables it. Once I track down the b17/c22 issues and fix and/or explain them, I'll send in the patch. On Feb 26, 2009, at 5:50 PM, Nils Goroll wrote: Hi, I have not dug deeply enough into this issue, but I believe I have stepped on a problem surfacing as hanging client connections with the Solaris event ports interface. Using poll seems to avoid the issue. Is this a known problem? Cheers, Nils -- Theo Schlossnagle
Re: is Varnish 2.0 beta1 supposed to compile/test on sol10?
Looks like: REPORT(LOG_ERR, "mgt_child_inherit(%d, %s)", fd, what); in mgt_child_inherit(int fd, const char *what) in bin/varnishd/mgt_child.c is throwing a wrench in the management communication channel. Not sure how this doesn't break all platforms though. If you comment that line out, all tests pass (at least for me). On Sep 4, 2008, at 6:38 PM, Keith J Paulson wrote: Theo, I had used a slightly different line, but did have the -mt; I repeated the config/build with your example and still have 61 FAILs. Keith Theo Schlossnagle wrote: You likely need to specify -mt in your CFLAGS as Varnish is multi-threaded. You'd need to do that with gcc too. My build looks like: CFLAGS="$CFLAGS -m64 -mt" \ LDFLAGS="$LDFLAGS -L/usr/sfw/lib/amd64 -R/usr/sfw/lib/amd64" \ CC=cc CXX=CC \ ./configure [typical args] On Sep 4, 2008, at 6:04 PM, Keith J Paulson wrote: I have tried the feature-complete Varnish 2.0 Beta 1 on sol10 with gcc and with Sun's Studio 12 compiler, and in neither case does it work. With Sun cc, the latest results are: === 61 of 73 tests failed Please report to varnish-dev@projects.linpro.no === for which Assert error in varnish_ask_cli(), vtc_varnish.c line 97: Condition(i == 0) not true. seems a significant cause (response not received?) 
One complete test: ./varnishtest -v tests/b2.vtc #TEST tests/b2.vtc starting #TEST Check that a pass transaction works ## s1 Starting server ### s1 listen on 127.0.0.1:9080 (fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection v1 debug| storage_file: filename: ./varnish.sUaWlN (unlinked) size 166 MB.\n v1 debug| mgt_child_inherit(4, storage_file)\n v1 debug| Using old SHMFILE\n v1 debug| Notice: locking SHMFILE in core failed: Not owner v1 debug| \n v1 debug| Debugging mode, enter start to start child\n ### v1 CLI connection fd = 4 v1 CLI TX| vcl.inline vcl1 backend s1 { .host = \127.0.0.1\; .port = \9080\; }\n\n\tsub vcl_recv {\n\t\tpass;\n\t}\n v1 CLI RX| VCL compiled. ### v1 CLI STATUS 200 v1 CLI TX| vcl.use vcl1 ### v1 CLI STATUS 200 ## v1 Start v1 CLI TX| start v1 debug| mgt_child_inherit(9, sock)\n v1 debug| mgt_child_inherit(10, cli_in v1 debug| )\n v1 debug| mgt_child_inherit(13, cli_out)\n v1 debug| child (20083 v1 debug| ) Started\n v1 debug| mgt_child_inherit( Assert error in varnish_ask_cli(), vtc_varnish.c line 97: Condition(i == 0) not true. Abort Any suggestions? Keith -- Theo Schlossnagle
commit 3113 all good
Trunk past 3113 compiles out of the box on Solaris 10+ and passes all the varnishtests. Thanks! -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: waitpid EINTR issue in varnishd ( testcases)
On Aug 15, 2008, at 4:32 AM, Poul-Henning Kamp wrote: I have committed the EINTR patch and, I hope, fixed the two testcases, please check this and send me your latest solaris patch. This looks good and now the full test suite passes. Attached is the Solaris patch. -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/ varnish-cache-3095-solaris.patch Description: Binary data
Two varnishtest failures on Solaris
(fd 3) ## v1 Launch ### v1 CMD: cd ../varnishd ./varnishd -d -d -n /tmp/__v1 -a '127.0.0.1:9081' -T 127.0.0.1:9001 -p thread_pools=1 -w1,1,300 ## s1 Started on 127.0.0.1:9080 ### v1 opening CLI connection v1 debug| storage_file: filename: ./varnish.2jaqEs (unlinked) size 100 MB.\n v1 debug| Using old SHMFILE\n v1 debug| Notice: locking SHMFILE in core failed: v1 debug| Not owner\n v1 debug| Debugging mode, enter start to start child\n ### v1 CLI connection fd = 7 v1 CLI TX| vcl.inline vcl1 \n\tbackend b1 {\n\t\t.host = \localhost\;\n\t\t.port = \9080\;\n\t}\n v1 CLI RX| VCL compiled. ### v1 CLI STATUS 200 v1 CLI TX| vcl.use vcl1 ### v1 CLI STATUS 200 ## v1 Start v1 CLI TX| start v1 debug| child ( v1 debug| 9477) Started\n v1 debug| Child (9477) said Closed fds: 3 5 7 8 11 12 14 15\n ### v1 CLI STATUS 200 v1 CLI TX| debug.xid 1000 v1 debug| Child (9477) said Child starts\n v1 debug| Child ( v1 debug| 9477 v1 debug| ) said managed to mmap 105172992 bytes of 105172992\n v1 debug| Child (9477) said v1 debug| Ready\n v1 CLI RX| XID is 1000\n ### v1 CLI STATUS 200 ## c1 Starting client ## s1 Waiting for server ## c1 Started ### c1 Connect to 127.0.0.1:9081 ### c1 Connected to 127.0.0.1:9081 fd is 12 c1 txreq| GET / HTTP/1.1\r\n c1 txreq| \r\n ### c1 rxresp ### s1 Accepted socket fd is 11 ### s1 rxreq s1 rxhdr| GET / HTTP/1.1\r\n s1 rxhdr| X-Varnish: 1001\r\n s1 rxhdr| X-Forwarded-For: 127.0.0.1\r\n s1 rxhdr| Host: localhost\r\n s1 rxhdr| \r\n s1 http[ 0] | GET s1 http[ 1] | / s1 http[ 2] | HTTP/1.1 s1 http[ 3] | X-Varnish: 1001 s1 http[ 4] | X-Forwarded-For: 127.0.0.1 s1 http[ 5] | Host: localhost s1 txresp| HTTP/1.1 200 Ok\r\n s1 txresp| \r\n ### s1 shutting fd 11 ## s1 Ending ## c1 Waiting for client c1 rxhdr| HTTP/1.1 200 Ok\r\n c1 rxhdr| Content-Length: 0\r\n c1 rxhdr| Date: Wed, 13 Aug 2008 19:25:40 GMT\r\n c1 rxhdr| X-Varnish: 1001\r\n c1 rxhdr| Age: 0\r\n c1 rxhdr| Via: 1.1 varnish\r\n c1 rxhdr| Connection: keep-alive\r\n c1 rxhdr| \r\n c1 http[ 0] | HTTP/1.1 c1 
http[ 1] | 200 c1 http[ 2] | Ok c1 http[ 3] | Content-Length: 0 c1 http[ 4] | Date: Wed, 13 Aug 2008 19:25:40 GMT c1 http[ 5] | X-Varnish: 1001 c1 http[ 6] | Age: 0 c1 http[ 7] | Via: 1.1 varnish c1 http[ 8] | Connection: keep-alive ### c1 Closing fd 12 ## c1 Ending ## v1 as expected: n_backend (1) == 1 ## v1 as expected: n_vcl_avail (1) == 1 ## v1 as expected: n_vcl_discard (0) == 0 ## s2 Starting server ### s2 listen on 127.0.0.1:9180 (fd 3) ## s2 Started on 127.0.0.1:9180 v1 CLI TX| vcl.inline vcl2 \n\tbackend b2 {\n\t\t.host = \localhost\;\n\t\t.port = \9180\;\n\t}\n v1 CLI RX| VCL compiled. ### v1 CLI STATUS 200 v1 CLI TX| vcl.use vcl2 ### v1 CLI STATUS 200 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl_avail (2) == 2 ## v1 as expected: n_vcl_discard (0) == 0 v1 CLI TX| debug.backend v1 CLI RX| 4ff790 b1 1\n v1 CLI RX| 4ff850 b2 1\n ### v1 CLI STATUS 200 ## v1 CLI 200 debug.backend v1 CLI TX| vcl.list v1 CLI RX| available 1 vcl1\n v1 CLI RX| active 0 vcl2\n ### v1 CLI STATUS 200 ## v1 CLI 200 vcl.list v1 CLI TX| vcl.discard vcl1 v1 debug| unlink ./vcl.ORk8t3RP.so\n ### v1 CLI STATUS 200 ## v1 CLI 200 vcl.discard vcl1 ## v1 as expected: n_backend (2) == 2 ## v1 as expected: n_vcl_avail (1) == 1 ## v1 as expected: n_vcl_discard (1) == 1 ## c1 Starting client v1 CLI TX| debug.backend ## c1 Started ### c1 Connect to 127.0.0.1:9081 v1 CLI RX| 4ff790 b1 1\n v1 CLI RX| 4ff850 b2 1\n ### v1 CLI STATUS 200 ## v1 CLI 200 debug.backend v1 CLI TX| vcl.list v1 CLI RX| discarded 1 vcl1\n v1 CLI RX| active 0 vcl2\n ### v1 CLI STATUS 200 ## v1 CLI 200 vcl.list v1 Not true: n_backend (2) == 1 (1) -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: Fresh patch for Solaris
On Aug 11, 2008, at 5:30 AM, Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Theo Schlossnagle writes: http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3071.diff OK, I've picked the obvious stuff out. Various questions about the rest: Doesn't Solaris have fcntl(F_SETLK) as mandated by POSIX ? flock() is not a standardized API, and I haven't seen a system yet which supports it, which doesn't also have fcntl(F_SETLK), so I would rather not mess up the source with a pointless check. Ah, good catch. That wasn't a patch for Solaris. It was for Linux. We've run into some issues with flock being more reliable than fcntl on Linux in high-concurrency environments. We ran into this problem on another project and thought to make a preemptive strike in Varnish. However, my patch prefers flock over fcntl, which is clearly wrong as it might do that on BSD*. flock() on Solaris is only available with -lucb (the meager BSD-ish compatibility layer). So, with that patch, Solaris still uses fcntl. (Go POSIX!) The ugly way we did it on the other project (FastXSL) that had this issue is here: http://labs.omniti.com/trac/fastxsl/changeset/43 Do you know for sure that sendfile on Solaris has no reference to the relevant parts of the file when it returns ? Otherwise it is not safe to use. I asked the internal engineering team at Sun, and they said that it has no references. I asked more forcefully and they did a code review (albeit very short) and said again that it had no references. So, at this point, I believe that it is safe to use in Varnish. Why is the extra include of sys/statvfs.h necessary in storage_file.c ? My mistake. I wasn't thorough when I merged up to trunk. No conflict and my lack of attention. Ignore that. Is there a reason to name the shared object .so instead of .o or is it just cosmetics ? The Sun Studio toolchain barfs on the .o. It knows better and seems to ignore the request to make it shared. 
I could get around this with a shell script (as you had once suggested), but this change makes it work out of the box in all the toolchains I have tried. In cache_acceptor.c, please implement the -pass function which does the port_send() call. Will do. Not sure how that got reverted actually. Why the initialization in mgt_child.c ? I believe that is left over from something else. A valgrind warning -- but that could only have been due to other code I added and then removed. It looks clean and safe without the initialization. -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: Fresh patch for Solaris
On Aug 11, 2008, at 5:30 AM, Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Theo Schlossnagle writes: http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3071.diff OK, I've picked the obvious stuff out. Next pass: http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3080.diff cleaned up some redundant includes removed the flock stuff (not Solaris related) made the cache_acceptor_ports use -pass removed unnecessary things (includes, defines and initializations... removed from my patch only) made the umem allocator a new file instead of a modified svn copy of the malloc allocator. The patch is about half the size of the previous one. AWESOME progress. This is great. Thanks for your attention, phk! -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: Fresh patch for Solaris
Next pass: http://lethargy.org/~jesus/misc/varnish-solaris-trunk-3080-2.diff On Aug 11, 2008, at 10:47 AM, Poul-Henning Kamp wrote: You still have the err.h compat stuff, that's not necessary any more, I removed the two uses of err(). Didn't catch that. Fixed. Why is the tcp.c patch necessary ? FIONBIO isn't available by default as it is a BSD thing. You're supposed to use fcntl on Solaris. fcntl is slower and ioctl _will_ work, but the define of FIONBIO is in sys/filio.h. You get it if you turn on BSD compatibility when including ioctl.h, but we don't want the other stuff that comes with that. So far, that single line of code is the only one requiring BSD_COMP on the Solaris side, so I'm hesitant to -DBSD_COMP the whole thing. You have not answered my questions about .so and sendfile ? The .o to .so change is not purely cosmetic. There is an issue with the Sun Studio toolchain that makes .o not work. sendfile on Solaris should be safe. When the call returns, no bits should be referenced at all. -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Solaris support
Hello all, We've been running this for a while on Solaris. Works really well. We have some minor modifications to the source tree to make it work, but there is one minor design change needed to make things work well with our ports cache_acceptor. What we need is for the function in cache_acceptor.c: void vca_return_session(struct sess *sp); to be delegated to the acceptor implementations. Solaris allows for very efficient user-initiated notifications across its event system, and our acceptor does that by talking directly to the port rather than using the vca_pipes. I'd like to have a return_session element added to the acceptor structure that is void (*return_session)(struct sess *); And the vca_return_session function becomes: void vca_return_session(struct sess *sp) { vca->return_session(sp); } I see this as a cleaner mechanism regardless, as there is no reason for the generic cache_acceptor to care about int vca_pipes[2]; -- it's an implementation detail. How's that sound? I'd like to see this change in trunk before I submit my patch, as it will pretty solidly affect how the return-session path is handled in our changeset. My current patch supports: sendfile and sendfilev on Solaris using fcntl() when flock() is unavailable A cache_acceptor_ports.c that embraces Solaris' port eventing system -- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/