I just tried to debug this with the current CVS HEAD on Solaris, but I can't reproduce the crash. The test case that I tried was: set the httpd's ulimit on file descriptors to a small number and run ab to generate enough concurrent requests to exceed the fd limit. Is there a better test case that will trigger the segfault?
Thanks,
--Brian

Jeff Trawick wrote:

> [EMAIL PROTECTED] writes:
>
>> brianp      02/01/11 00:07:07
>>
>>   Modified:    .        STATUS
>>   Log:
>>   Updated STATUS to cover the worker segfault fixes
>>
>> -    * The worker MPM on Solaris segfaults when it runs out of file
>> -      descriptors. (This may affect other MPMs and/or platforms.)
>
> I can still readily hit this on current code (the same code that no
> longer segfaults with graceful restart).
>
> [Fri Jan 11 07:26:37 2002] [error] (24)Too many open files: apr_accept: (client socket)
> [Fri Jan 11 07:26:37 2002] [error] [client 127.0.0.1] (24)Too many open files: file permissions deny server access: /export/home/trawick/apacheinst/htdocs/index.html.en
> [Fri Jan 11 07:26:37 2002] [error] [client 127.0.0.1] (24)Too many open files: cannot access type map file: /export/home/trawick/apacheinst/error/HTTP_FORBIDDEN.html.var
> [Fri Jan 11 07:26:38 2002] [notice] child pid 25493 exit signal Segmentation fault (11), possible coredump in /export/home/trawick/apacheinst
>
> This is the same coredump I saw before:
>
> #0  0xff33a3cc in apr_wait_for_io_or_timeout (sock=0x738360,
>     for_read=1) at sendrecv.c:70
> 70          FD_SET(sock->socketdes, &fdset);
>
> The socket has already been closed so trying to set bit -1 segfaults.
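
For reference, the backtrace above dies in apr_wait_for_io_or_timeout() because FD_SET() is reached with sock->socketdes already set to -1 after the socket has been closed. Below is a minimal sketch of the kind of guard that would avoid indexing the fd_set with a negative descriptor; this is not the actual APR source, and the function name, return convention, and guard are just illustrative assumptions:

#include <errno.h>
#include <sys/select.h>
#include <sys/time.h>

/* Illustrative sketch only, not the real APR code: it mirrors the shape
 * of apr_wait_for_io_or_timeout(), building an fd_set for one descriptor
 * and select()ing on it, with a guard added against a descriptor that
 * has already been closed (set to -1). */
static int wait_for_io_sketch(int sockfd, int for_read, struct timeval *tv)
{
    fd_set fdset;
    int rc;

    if (sockfd < 0) {
        /* Socket already closed: fail cleanly instead of letting
         * FD_SET() index the fd_set with -1 and segfault. */
        return -1;
    }

    FD_ZERO(&fdset);
    FD_SET(sockfd, &fdset);

    do {
        rc = select(sockfd + 1,
                    for_read ? &fdset : NULL,
                    for_read ? NULL : &fdset,
                    NULL, tv);
    } while (rc == -1 && errno == EINTR);   /* retry if interrupted */

    return rc;   /* >0 ready, 0 timed out, -1 error */
}

Whether the right fix is a guard like this inside apr_wait_for_io_or_timeout() or keeping the worker MPM from ever handing a closed socket down to it is a separate question.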
