Thanks!  Using b instead of ab, I was able to make the error occur
occasionally in my test environment.  I found one bug that could
cause the previously accepted socket to be pushed back onto the
fdqueue if an accept failed.  The change I just committed to
unixd.c fixes this.  I'm no longer seeing the segfault in my test
environment, though that's no guarantee that the bug is fixed,
since I was only able to catch the error occasionally before the
change.  With the latest unixd.c code, do you still see the
segfaults in your environment?
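
For anyone following along, the failure mode was roughly the pattern
below.  This is a simplified sketch with made-up names (fd_queue_push,
worker_queue), not the actual unixd.c code:

    /* Listener-loop sketch.  apr_accept() is the real APR call; the
     * queue names are illustrative. */
    apr_socket_t *csd = NULL;
    for (;;) {
        apr_status_t rv;

        csd = NULL;   /* the fix: clear the previously accepted socket */
        rv = apr_accept(&csd, listener, ptrans);
        if (rv != APR_SUCCESS) {
            /* The buggy version reached this point with csd still
             * pointing at the socket accepted on the last iteration,
             * so the same connection could be queued, and later
             * closed, twice: hence the segfault. */
            continue;
        }
        fd_queue_push(worker_queue, csd);   /* hand off to a worker */
    }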

Thanks,
--Brian


Jeff Trawick wrote:

>Brian Pane <[EMAIL PROTECTED]> writes:
>
>>I just tried to debug this with the current CVS HEAD on
>>Solaris, but I can't reproduce the crash.  The test case
>>that I tried was: set the httpd's ulimit on file descriptors
>>to a small number and run ab to generate enough concurrent
>>requests to exceed the fd limit.  Is there a better test case
>>that will trigger the segfault?
>>
>
>ulimit -n 100
>apachectl start
>
>from another shell with higher ulimit:
>
>b -n 100000 -c 200 127.0.0.1:8080/
>
>Note that / does a lot of negotiation work (extra file opens) using
>the default configuration.
>
>It didn't take more than a minute to hit a segfault.
>
>For some reason it seems easier to hit with my b than with standard ab
>(which should do pretty much the same thing with that command-line).
>
>b is at www.apache.org/~trawick/public_html/b.c
>
>To build it, "grep HAVE_ b.c" and see what to define on the cc
>invocation :)
>
>(I suspect I could hit it with ab if I were more patient.
>Historically b had better concurrency because it does non-blocking
>connects.  I don't know if 2.0 ab has been fixed to do non-blocking
>connects.)
>
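
For reference, the non-blocking connect Jeff mentions is the standard
POSIX pattern: set O_NONBLOCK on the socket, call connect(), and treat
EINPROGRESS as a connection in flight to be completed once the socket
becomes writable.  A minimal sketch (error handling abbreviated):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    /* Returns 0 if connected immediately, 1 if the connect is in
     * progress (wait for writability, then check SO_ERROR), and
     * -1 on error. */
    static int connect_nonblocking(int fd, const struct sockaddr *sa,
                                   socklen_t salen)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
            return -1;
        if (connect(fd, sa, salen) == 0)
            return 0;
        return (errno == EINPROGRESS) ? 1 : -1;
    }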


