Yes, I've seen this happen often, maybe once a day on a relatively heavily
used site running mod_perl, where a child process goes into a state where
it consumes lots of memory and cpu cycles.  I did some investigation, but
(like you, it sounds) couldn't garner any useful info from gdb traces.

I solved (?) this by writing a little perl script to run from cron
and watch for and kill these runaways, but it's an admittedly lame
solution.  I've been meaning for a while to look into Stas'
Apache::Watchdog::RunAway module to handle these more cleanly, but never
got around to it.
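The cron script I mentioned looks roughly like the sketch below -- this is not
the actual script, just a minimal reconstruction.  The 50 MB threshold, the
`runaway_pids` helper name, and the `ps -eo pid,rss,comm` invocation are all
my assumptions; adjust for your platform's ps flags:

```perl
#!/usr/bin/perl -w
# runaway-watch.pl -- hypothetical sketch of a cron-driven watchdog.
# Scans `ps` output for httpd children whose resident set size (RSS)
# has grown past a threshold and sends them SIGTERM.  The names and
# the 50 MB threshold here are assumptions, not the real script.
use strict;

my $MAX_RSS_KB = 50 * 1024;    # kill anything over ~50 MB resident

# Given lines of `ps -eo pid,rss,comm` output, return the PIDs of
# httpd processes whose RSS (in KB) exceeds the given limit.
sub runaway_pids {
    my ($max_kb, @lines) = @_;
    my @pids;
    for my $line (@lines) {
        my ($pid, $rss, $comm) = split ' ', $line;
        next unless defined $comm and $comm eq 'httpd';
        push @pids, $pid if $rss > $max_kb;
    }
    return @pids;
}

# Only touch live processes when invoked with --run (e.g. from cron).
if (@ARGV and $ARGV[0] eq '--run') {
    my @ps = `ps -eo pid,rss,comm`;
    shift @ps;    # drop the header line
    for my $pid (runaway_pids($MAX_RSS_KB, @ps)) {
        print "killing runaway httpd $pid\n";
        kill 'TERM', $pid;
    }
}

1;
```

Run it from cron every minute or so with the --run flag.  Sending TERM first
(rather than KILL) gives Apache a chance to clean up the child.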

Let us know if you do get to the bottom of this.

<Steve>

On Mon, 29 Jan 2001, Robert Landrum wrote:

> I have some very large httpd processes (35 MB) running our 
> application software.  Every so often, one of the processes will grow 
> infinitely large, consuming all available system resources.  After 300 
> seconds the process dies (as specified in the config file), and the 
> system usually returns to normal.  Is there any way to determine what 
> is eating up all the memory?  I need to pinpoint this to a particular 
> module.  I've tried coredumping during the incident, but gdb has yet 
> to tell me anything useful.
> 
> I was actually playing around with the idea of hacking the perl 
> source so that it will change $0 to whatever the current package 
> name is, but I don't know whether this will translate back to mod 
> perl correctly, as $0 is the name of the configuration file from 
> within mod perl.
> 
> Has anyone had to deal with this sort of problem in the past?
> 
> Robert Landrum
> 

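For what it's worth, the $0 idea above shouldn't require hacking the perl
source -- assigning to $0 from Perl rewrites the process title on most
Unixes, so a handler can tag itself and `ps` will show which code a runaway
child was running.  A minimal sketch, assuming a made-up package name and
helper (nothing here is from Robert's actual code):

```perl
# Hypothetical sketch: let each handler advertise what it is running,
# so a stuck child shows up in `ps` as e.g. "httpd [My::App::Search]".
package My::StatusName;
use strict;

sub note_package {
    my ($pkg) = @_;
    # Assigning to $0 changes the title `ps` reports on most Unixes.
    # In mod_perl you would call this at the top of your handler,
    # and restore the old value afterward if you care.
    $0 = "httpd [$pkg]";
    return $0;
}

1;
```

In a handler you'd call something like My::StatusName::note_package(__PACKAGE__)
before doing any real work; the caveat Robert raises still applies, since
mod_perl sets $0 itself at startup.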
=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=
Steve Reppucci                                       [EMAIL PROTECTED] |
Logical Choice Software                          http://logsoft.com/ |
