Lothar ? wrote:

Does anyone know how to increase the time Apache will wait
before sending a SIGKILL to CGI processes?
(Or how to change the kill policy used with subprocesses?)
Problem background:


I'm using BerkeleyDB from my Perl CGI scripts, but I don't know how
to close the database reliably when a signal arrives. I do this in
my signal handlers (SIGTERM and SIGPIPE), but occasionally the
closing takes longer than 3 seconds, and then Apache uses SIGKILL
to *kill* the child.

Have you patched the code to use a longer timeout? Does that resolve the entire problem for you?


I believe that, since closing the database involves removing handles
from shared memory, the CGI process is waiting for an exclusive lock
on the shared memory region.

It would probably be good to verify that theory with a patch to Apache.


There is nothing you can do about a SIGKILL, and so the shared memory
region is left inconsistent.

yessir


Since this is a load-dependent issue, these time values should be set
in the config... but any ideas?

IMO adding a new config directive is premature...we don't even know if a longer timeout helps you.


Reading the source, I saw that free_proc_chain invokes the killing of
the subprocesses, and this cleanup runs before the server waits for
a new connection. How can I prevent this cleanup from happening?

Then what do we do about CGI subprocesses that refuse to respond to any signal (except SIGKILL)? I think the logical first step is for you to try a patch with a longer timeout, then tell us if that helps.


Greg


