On Friday, July 22 at 19:55 (+0100), Peter Humphrey said:

> > Wouldn't a sufficiently large swap (100GB for example) completely
> > prevent out of memory conditions and the oom-killer?
> 
> Of course, on any system with more than a few dozen MB of RAM, but I
> can't imagine any combination of running programs whose size could
> add up to even a tenth of that, with or without library sharing
> (somebody will be along with an example in a moment).

The *prime* example is a program with a memory leak (omg, we have
programs with memory leaks?).

On a system with only, say, 2GB of swap, that program will cause the
oom killer to kick in fairly quickly. On a system with 100GB of swap,
the system will have to chew through all 100GB of swap before the oom
killer kicks in, and by then it will probably be thrashing like hell.

There is no way you can completely guarantee a system won't run out of
virtual memory, unless you can guarantee that there are no misbehaving
applications and that some clueless guy isn't going to try to open a
database dump in vi.*

* Well, you could set process/user limits so that a process gets an
error once it tries to allocate beyond a set amount of memory.
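A minimal sketch of that footnote, using Python's resource module on
Linux (the 1 GiB cap is an arbitrary figure picked for illustration;
RLIMIT_AS behaviour is Linux-specific and the same idea applies to
ulimit -v in a shell):

```python
import resource

# Cap this process's total address space (RLIMIT_AS) at 1 GiB.
# The exact figure is an illustrative assumption; tune it per workload.
soft = hard = 1 * 1024**3
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

refused = False
try:
    # Ask for 4 GiB, well past the cap: the kernel refuses the mapping
    # and Python raises MemoryError, so the leaky process gets an error
    # instead of dragging the whole box into swap and the oom killer.
    hog = bytearray(4 * 1024**3)
except MemoryError:
    refused = True

print("allocation refused:", refused)
```

With a limit like this in place, a leaking process dies (or handles the
error) on its own long before it can exhaust 100GB of swap.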
