Hi Torsten,

On 26.10.2009 at 12:12, Torsten Foertsch wrote:

> On Sat 24 Oct 2009, Rolf Schaufelberger wrote:
>> Starting some weeks ago the server sometimes hangs with an
>> out-of-memory problem.
>
> Assuming you are running on Linux, the following sysctls may help to
> find the culprit:
>
> vm.overcommit_memory=2
> vm.overcommit_ratio=90
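
For the archives, applying those would presumably look like this (run
as root; the way to persist them may differ per distribution):

  # take effect immediately, but do not survive a reboot
  sysctl -w vm.overcommit_memory=2
  sysctl -w vm.overcommit_ratio=90

  # persist across reboots
  echo 'vm.overcommit_memory=2' >> /etc/sysctl.conf
  echo 'vm.overcommit_ratio=90' >> /etc/sysctl.conf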


How about setting PERL_RLIMIT_DATA?
See http://perl.apache.org/docs/2.0/api/Apache2/Resource.html

Would that work?
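
From the docs there, the configuration would be something like the
following (the value is a soft:hard limit on the data segment in
megabytes; untested on our setup):

  # httpd.conf
  PerlModule Apache2::Resource
  # limit each child's data segment to 32 MB soft / 48 MB hard
  PerlSetEnv PERL_RLIMIT_DATA 32:48
  PerlChildInitHandler Apache2::Resource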

> By default (vm.overcommit_memory=0), when a process asks for more
> memory, Linux simply says "okay, here you are", no matter whether the
> memory is actually available or not. This is based on the assumption
> that most processes never touch all of the memory they allocate. Only
> later, when the process really accesses the memory, is a page fault
> generated and the memory actually assigned to the process. But by
> then it is too late to signal out-of-memory to the process, so Linux
> has to obtain the memory somehow. Under such short-of-memory
> conditions Linux starts the OOM killer, which uses a heuristic to
> choose the "best fitting" processes to kill. These "best fitting"
> processes may be totally unrelated to the original problem.
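
If I understand it right, with vm.overcommit_memory=2 the allocation
itself fails instead, so the offending process gets the error. A crude
way to see the difference (the size is arbitrary; pick one that
exceeds the commit limit):

  # with vm.overcommit_memory=2, perl itself dies with "Out of memory!";
  # with the default setting the request may be granted and the OOM
  # killer may later kill this -- or a completely unrelated -- process
  perl -e '$x = "x" x (4 * 1024**3)'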

> I once had a case where a Perl program processed mailbox files (using
> Mail::Box) on a box that also ran a PostgreSQL database.
> Unfortunately Mail::Box reads in the whole mailbox file. Normally our
> mailbox files were about 1-10 MB and the program had worked for
> years. But suddenly we had one of >1 GB. Instead of signaling
> out-of-memory to the Perl process, Linux killed the PostgreSQL
> database.
>
> Make sure you have enough swap space (at least the size of your RAM)
> before experimenting with those sysctls.
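
Checking how much swap is configured before flipping those switches is
straightforward:

  # show memory and swap totals in megabytes
  free -m
  # list the active swap areas
  swapon -s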

> Torsten
>
> --
> Need professional mod_perl support?
> Just hire me: torsten.foert...@gmx.net

Kind regards,
Rolf Schaufelberger

plusW GmbH
Stuttgarter Str. 26    Tel. 07183 30 21 36
73635 Rudersberg       Fax  07183 30 21 85

www.plusw.de
www.mypixler.com
www.calendrino.de




