I had this same problem with the OOM for a while. Very frustrating to have
to reboot a server to bring it back to life.

I found out the OOM killer only ran when the swap file was about 99% full.
The servers I had this problem on had 16GB and 24GB of RAM, but only 2GB
of swap. I increased the swap on the 24GB servers to 48GB and on the 16GB
servers to 32GB. The swap never fills to over 60% now, I haven't had any
OOM problems since, and the systems run great.

I've also set vm.swappiness=0 in /etc/sysctl.conf
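
In case it helps anyone, roughly the steps I used (the swap file path and
size below are just examples; all of this needs root):

```shell
# Create and enable a 32GB swap file (path/size are examples)
fallocate -l 32G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Keep it across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab

# Tell the kernel to avoid swapping until it really has to
echo 'vm.swappiness = 0' >> /etc/sysctl.conf
sysctl -p
```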

-JW

On Fri, Apr 13, 2012 at 7:17 AM, Reindl Harald <h.rei...@thelounge.net> wrote:

> the following may be useful for most server systems
>
> OOM-killer acts if some process allocates more and more
> memory and the kernel randomly kills unimportant tasks
> using huge memory
>
> in case of a running mysqld the classification "unimportant"
> is nearly always wrong and can cause huge damage and rework
> in other words: you really never want the database server killed
> randomly instead of "dbmail-imapd", which can be restarted via
> systemd without pain and may be the root cause of the OOM
> _________________________________________
>
> with one single command you can protect processes from getting killed
> i started to run this every 15 minutes to make sure it is also
> active after restarts
>
> i am considering including this in "mysqld.service" as
> "ExecStartPost=-/usr/local/bin/mysql-no-oom.sh" in our
> internal mysqld-packages and including the script as well
> _________________________________________
>
> [root@mail:~]$ cat /etc/crontab | grep oom
> 0,15,30,45 * * * * root bash /usr/local/bin/mysql-no-oom.sh
>
> [root@mail:~]$ cat /usr/local/bin/mysql-no-oom.sh
> #!/bin/bash
> pgrep -f "/usr/libexec/mysqld" | while read PID; do
>   echo -1000 > /proc/$PID/oom_score_adj
> done
>
>
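
As a side note: instead of a cron job, systemd itself can set the score
when the service starts, via its OOMScoreAdjust= directive. A sketch of a
drop-in doing that (the drop-in path below is just an example):

```shell
# Sketch: let systemd set the OOM score at service start instead of cron
# (OOMScoreAdjust= is the systemd directive; the drop-in path is an example)
mkdir -p /etc/systemd/system/mysqld.service.d
cat > /etc/systemd/system/mysqld.service.d/no-oom.conf <<'EOF'
[Service]
OOMScoreAdjust=-1000
EOF
systemctl daemon-reload
systemctl restart mysqld.service
```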


-- 
-----------------------------
Johnny Withers
601.209.4985
joh...@pixelated.net
