On Sat 24 Oct 2009, Rolf Schaufelberger wrote:
> Starting some weeks ago the server sometimes hangs with an out of  
> memory problem.

Assuming you are running on Linux, the following sysctls may help to find 
the culprit.

vm.overcommit_memory=2
vm.overcommit_ratio=90
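With overcommit_memory=2 the kernel refuses allocations beyond the 
CommitLimit documented in proc(5): swap plus RAM scaled by 
overcommit_ratio. A minimal sketch of that arithmetic (hugetlb pages 
ignored for simplicity):

```python
def commit_limit_kb(ram_kb, swap_kb, overcommit_ratio):
    """CommitLimit per proc(5) for vm.overcommit_memory=2:
    swap + RAM * overcommit_ratio / 100 (hugetlb pages ignored)."""
    return swap_kb + ram_kb * overcommit_ratio // 100

# Example: 8 GB RAM, 8 GB swap, ratio 90 -> roughly 15.2 GB committable
ram_kb, swap_kb = 8 * 1024 * 1024, 8 * 1024 * 1024
print(commit_limit_kb(ram_kb, swap_kb, 90))  # 15938355 (KB)
```

The live value is visible as CommitLimit in /proc/meminfo once the 
sysctls are set.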

By default (overcommit_memory=0), when a process asks for more memory 
Linux simply says "okay, here you are", whether or not the memory is 
actually available. This is based on the assumption that most processes 
never touch all of the memory they allocate. Only later, when the 
process actually accesses the memory, does a page fault occur and Linux 
allocate the pages. But at that point it is too late to signal 
out-of-memory to the process, so Linux has to come up with memory no 
matter what. In short-of-memory conditions it therefore starts the OOM 
killer, which uses a heuristic to choose the "best fitting" processes 
to kill. These "best fitting" processes may be totally unrelated to the 
original problem.
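The kernel exposes its current per-process badness as 
/proc/&lt;pid&gt;/oom_score (higher means more likely to be killed). As a 
hedged, Linux-only sketch, this lists the processes the OOM killer 
would currently favor; the function name and ranking are mine, only the 
/proc files are the kernel's:

```python
import os

def top_oom_candidates(n=5):
    """Return (oom_score, pid, comm) for the n processes the OOM killer
    currently scores highest. Linux-only; returns [] where /proc is
    unavailable."""
    candidates = []
    entries = os.listdir("/proc") if os.path.isdir("/proc") else []
    for entry in entries:
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/oom_score") as f:
                score = int(f.read())
            with open(f"/proc/{entry}/comm") as f:
                comm = f.read().strip()
        except (OSError, ValueError):
            continue  # process exited, or not readable
        candidates.append((score, int(entry), comm))
    return sorted(candidates, reverse=True)[:n]

if __name__ == "__main__":
    for score, pid, comm in top_oom_candidates():
        print(f"{score:6d}  {pid:7d}  {comm}")
```

Running this while the server is under memory pressure shows which 
processes are at risk, which is often not the one leaking memory.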

I once had a case where a Perl program processed mailbox files (using 
Mail::Box) on a box that also ran a Postgres database. Unfortunately, 
Mail::Box reads the whole mailbox file into memory. Normally our 
mailbox files were about 1-10 MB and the program had worked for years, 
but suddenly we got one of >1 GB. Instead of signaling out-of-memory to 
the Perl process, Linux killed the Postgres database.

Make sure you have enough swap space (at least the size of your RAM) 
before experimenting with those sysctls.

Torsten

-- 
Need professional mod_perl support?
Just hire me: torsten.foert...@gmx.net
