Robert Landrum wrote:
I have some very large httpd processes (35 MB) running our
application software. Every so often, one of the processes will grow
infinitely large, consuming all available system resources. After 300
seconds the process dies (as specified in the config file), and the
On Mon, 5 Feb 2001, Perrin Harkins wrote:
First, BSD::Resource can save you from these. It will do hard limits on
memory and CPU consumption. Second, you may be able to register a
handler for a signal that will generate a stack trace. Look at
Devel::StackTrace (I think) for how to do it.
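A minimal sketch of what such hard limits might look like in a startup.pl, assuming BSD::Resource is installed; the 64 MB address-space cap and 300/360-second CPU limits are illustrative values, not figures from this thread:

```perl
use strict;
use warnings;

# Only apply limits if BSD::Resource is actually available.
if (eval { require BSD::Resource; 1 }) {
    BSD::Resource->import(qw(setrlimit RLIMIT_AS RLIMIT_CPU));

    # Hard cap on address space: a runaway child is stopped by the kernel
    # instead of consuming all available memory (64 MB is illustrative).
    setrlimit(RLIMIT_AS(), 64 * 1024 * 1024, 64 * 1024 * 1024)
        or warn "setrlimit RLIMIT_AS failed: $!";

    # Soft/hard CPU-time limits in seconds (illustrative values).
    setrlimit(RLIMIT_CPU(), 300, 360)
        or warn "setrlimit RLIMIT_CPU failed: $!";
}
```

The eval guard means the server still starts if the module is missing; the limits are per-process, so each httpd child is capped individually.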
Dave Rolsky wrote:
On Mon, 5 Feb 2001, Perrin Harkins wrote:
First, BSD::Resource can save you from these. It will do hard limits on
memory and CPU consumption. Second, you may be able to register a
handler for a signal that will generate a stack trace. Look at
Devel::StackTrace
On Mon, 5 Feb 2001, Perrin Harkins wrote:
Nope, that's not it. I wrote that one and it doesn't talk about that at
all.
I meant "for how to generate a stack trace". Using it with a signal
handler was demonstrated on this list about two weeks ago, but I can't
recall who did it. It was
On Mon, 29 Jan 2001, Robert Landrum wrote:
I have yet to solve the runaway problem, but I came up with a way of
identifying the URLs that are causing the problems.
First, I added the following to a startup.pl script...
$SIG{'USR2'} = \&apache_runaway_handler;
setting that to
Actually, I've had some bad experiences with the Carp module. I was
using Carp for all my errors and warnings within mod_perl on our
development server, but when I moved it to our production server
(both similarly configured) it caused every request to core dump. I
never figured out what the
On Wed, 31 Jan 2001, Robert Landrum wrote:
Has anyone else had problems with the Carp module and mod_perl?
There were bugs related to Carp in 5.6.0, fixed in 5.6.1-trial1 and trial2.
On Mon, 29 Jan 2001, Robert Landrum wrote:
I have some very large httpd processes (35 MB) running our
mod_perl is not freeing memory when httpd is doing the cleanup phase.
Me too :).
Use the MaxRequestsPerChild directive in httpd.conf.
After my investigation, it seems to be the only way to
build a
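For reference, the directive (note the plural "Requests") goes in httpd.conf like this; 500 is an illustrative value, not a recommendation from the thread:

```apache
# Recycle each child after it has served this many requests, so memory
# leaked or retained by the application is returned to the OS when the
# child exits and a fresh one is forked.
MaxRequestsPerChild 500
```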
I have some very large httpd processes (35 MB) running our
application software. Every so often, one of the processes will grow
infinitely large, consuming all available system resources. After 300
seconds the process dies (as specified in the config file), and the
system usually returns to
I solved (?) this by writing a little perl script to run from cron
and watch for and kill these runaways, but it's an admittedly lame
solution. I've meant for a while to look into Stas'
Apache::Watchdog::RunAway module to handle these more cleanly, but never
did get around to doing this.
Let us
a state where
it consumes lots of memory and cpu cycles. I did some investigation, but
(like you, it sounds) couldn't garner any useful info from gdb traces.
I solved (?) this by writing a little perl script to run from cron
and watch for and kill these runaways, but it's an admittedly lame
solution.
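Such a watchdog script might look roughly like the following; the 300-second limit, the "httpd" process name, and the use of procps's etimes keyword are all assumptions for illustration, not details from the poster's script:

```perl
#!/usr/bin/perl
# Hedged sketch of a cron job that kills long-running httpd children.
use strict;
use warnings;

my $limit = 300;    # illustrative threshold in seconds

# One line per process: pid, elapsed seconds, command name.
for my $line (`ps -eo pid=,etimes=,comm=`) {
    my ($pid, $secs, $comm) = split ' ', $line;
    next unless defined $comm and $comm eq 'httpd';
    kill 'TERM', $pid if $secs > $limit;
}
```

Run from crontab every minute or so; sending TERM lets Apache's parent reap and replace the child cleanly, rather than KILL-ing it mid-request.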
scoreboard and kills anything that's
been running for "X" amount of time.
Yep, we've had a few of these too -- but it seems I can avoid these if I
kill the runaways early enough before they become too brain dead.
You could, in theory, just reduce the "Timeout" option in a
I have yet to solve the runaway problem, but I came up with a way of
identifying the URLs that are causing the problems.
First, I added the following to a startup.pl script...
$SIG{'USR2'} = \&apache_runaway_handler;
sub apache_runaway_handler {
print RUNFILE "\%ENV contains:\n";
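A self-contained version of that handler might look like the following; the log path is a hypothetical choice (the original's RUNFILE destination is not shown in the quoted excerpt), and note the \& needed to take a code reference:

```perl
use strict;
use warnings;

# Hypothetical log location -- one file per child pid.
my $logfile = "/tmp/runaway.$$";

sub apache_runaway_handler {
    open my $runfile, '>>', $logfile or return;
    print {$runfile} scalar(localtime), " pid $$ caught SIGUSR2\n";
    print {$runfile} "%ENV contains:\n";
    print {$runfile} "  $_=$ENV{$_}\n" for sort keys %ENV;
    close $runfile;
}

# \& (not a bare backslash) is required to take a reference to the sub;
# \apache_runaway_handler would be a reference to a bareword string.
$SIG{USR2} = \&apache_runaway_handler;
```

Under mod_perl, %ENV carries the request's CGI variables (REQUEST_URI, QUERY_STRING, and so on), so sending USR2 to a runaway child records which URL it was serving.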