Michael - it depends on the OS, but you could look at Apache::SizeLimit,
which kills off an Apache child once its per-process memory use gets too
large. It works well for the system we use at work...
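As a rough sketch (not our exact config - the limits below are placeholder
numbers, and it's worth checking the Apache::SizeLimit docs for the exact
variable names and handler phase for mod_perl 1.x), you set the limits as
package variables and install the module as a handler in httpd.conf:

    # httpd.conf - illustrative only, tune the limits for your box
    PerlModule Apache::SizeLimit
    <Perl>
        # sizes are in KB
        $Apache::SizeLimit::MAX_PROCESS_SIZE       = 150_000; # exit child above ~150 MB
        $Apache::SizeLimit::MIN_SHARE_SIZE         = 10_000;  # or if shared memory drops too low
        $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;       # only check every 5th request
    </Perl>
    PerlFixupHandler Apache::SizeLimit

The child isn't killed mid-request; it finishes the request it's serving
and then exits, so clients don't see errors, and Apache just forks a fresh
child to replace it.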
If you're on a Unix/Linux based system, "top" is your friend, as it will
show you the memory usage of each process.
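Since your box becomes unreachable once it starts thrashing, it may also
help to record those figures continuously rather than trying to log in
after the fact. A rough sketch (the log path and one-minute interval are
just assumptions for illustration):

    # cron entry: snapshot per-process httpd memory (RSS/VSZ in KB) every minute
    * * * * * ( date; ps -eo pid,rss,vsz,args | grep '[h]ttpd' ) >> /var/log/httpd-mem.log

Looking at that log after a crash should tell you whether one child grew
without bound (a leak in a particular script) or whether all of them crept
up slowly until the box ran out of swap.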
On Sun, 15 Jun 2008, Michael Gardner wrote:
I've inherited an existing Apache+mod_perl 1.3.x server. I'm not very
experienced with Apache or mod_perl, so I have to pick things up as I go
along.
Recently I built a new version of Apache (1.3.41) with static mod_perl 1.30,
and it seems to work. The problem is that every few days or so, the server
apparently runs out of memory, grinding to a halt and necessitating a hard
reset.
I suspect mod_perl is the primary memory user here, since most of the pages
we serve are Perl scripts. But I don't know how to go about diagnosing the
problem, especially since the server gets so bogged down when it happens that
I can't access it to get info on running processes, memory usage, etc. I have
noticed that the memory usage of each httpd process seems to grow over time,
but it's usually very slow growth, and I can't tell if that's really a leak
or just normal behavior.
Workarounds would be helpful, but naturally I'd prefer to eliminate the cause
of the problem. I've been looking through the documentation, but haven't made
much progress so far. How can I get to the bottom of this?
-Michael