In an LWN discussion thread on how Google uses the kernel, I found the following:

==================
2) Mike asked why the kernel tries so hard to allocate memory: why not just 
fail the allocation if there is too much pressure? Why isn't disabling 
overcommit enough?

Posted Oct 24, 2009 1:26 UTC (Sat) by Tomasu (subscriber, #39889) [Link]

2) Probably because they actually want some overcommit, but they don't want 
the OOM killer to go wild killing everything, and definitely not the WRONG 
thing.

Posted Oct 25, 2009 19:24 UTC (Sun) by oak (subscriber, #2786) [Link]

In the Maemo (at least the Diablo release) kernel source there are
configurable limits for when the kernel starts to deny allocations and when
to OOM-kill (besides notifying user space about crossing these and some
earlier limits). If a process is marked as "OOM-protected", its allocations
will always succeed. If "OOM-protected" processes waste all the memory
in the system, then they can still get killed.

===================
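
For anyone wondering what "disabling overcommit" means concretely:
Linux exposes the policy through /proc/sys/vm/overcommit_memory
(0 = heuristic, 1 = always allow, 2 = strict accounting against swap
plus overcommit_ratio percent of RAM).  Below is a quick sketch of
mode 2 from user space; the chunk size and the loop are just
illustration, and writing the sysctl needs root.

/*
 * Sketch: strict overcommit accounting (mode 2).  Under mode 2
 * the kernel refuses mappings that would push the committed
 * total past swap + overcommit_ratio% of RAM, so malloc() fails
 * up front instead of succeeding and getting OOM-killed later.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int set_overcommit_mode(int mode)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (!f)
        return -1;                  /* needs root */
    fprintf(f, "%d\n", mode);
    return fclose(f);
}

int main(void)
{
    if (set_overcommit_mode(2) != 0)
        perror("overcommit_memory (are you root?)");

    /* Grab 256 MiB chunks until the commit limit says no. */
    size_t chunk = 256 * 1024 * 1024;
    for (int i = 0; ; i++) {
        void *p = malloc(chunk);
        if (!p) {
            printf("allocation %d denied up front (ENOMEM)\n", i);
            break;
        }
        memset(p, 0xAA, chunk);     /* touch it so it is really used */
    }
    return 0;
}

With mode 0 (the default) the same loop tends to sail well past the
real memory size and end with the OOM killer shooting something,
which is exactly the behavior Mike was asking about.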
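
oak's point about user-space notification is worth a look too.
Mainline has nothing like Maemo's kernel-side low-memory signal, so
the best a stock kernel offers is polling.  Here is a rough watcher;
the MemFree + Cached heuristic, the 32 MiB threshold, and the poll
interval are my guesses, not anything from the Maemo source.

/*
 * Sketch: crude user-space low-memory watcher.  Polls
 * /proc/meminfo and warns when "available" memory (MemFree +
 * Cached) drops below a threshold.  Cached overstates what is
 * actually reclaimable, so treat the numbers as approximate.
 */
#include <stdio.h>
#include <unistd.h>

/* Return MemFree + Cached in KiB, or -1 on error. */
static long available_kib(void)
{
    char line[128];
    long memfree = -1, cached = -1;
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        sscanf(line, "MemFree: %ld kB", &memfree);
        sscanf(line, "Cached: %ld kB", &cached);
    }
    fclose(f);
    return (memfree < 0 || cached < 0) ? -1 : memfree + cached;
}

int main(void)
{
    const long threshold = 32 * 1024;   /* warn below 32 MiB */

    for (;;) {
        long avail = available_kib();
        if (avail >= 0 && avail < threshold)
            fprintf(stderr, "low memory: %ld KiB left\n", avail);
        sleep(5);                       /* poll interval */
    }
}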

Working the table at the Boston Book Festival, I was reminded how painful the 
OOM situation is on a gen 1. The demo machines were in this state a lot, as 
each visitor would open up a new program.  Basically you have to just turn the 
unit off and restart, since trying to recover is futile.  The 1 GiB of memory 
on the 1.5 will help with this somewhat, but in most cases it just shifts the 
problem.  Users being users will still open up too much, and the 1 GiB isn't 
an option for gen 1 users.

The OOM topic, and what to do in that case, has come up several times in the 
past.  If Maemo thinks they have a reasonable solution to the problem, then 
someone should look at trying to add that to our kernel and user space.
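
For what it's worth, mainline already carries a crude cousin of that
"OOM-protected" flag: /proc/<pid>/oom_adj, where the special value
-17 (OOM_DISABLE) exempts a task from the OOM killer entirely.  A
small sketch follows; the helper name is mine, and lowering a task's
oom_adj needs CAP_SYS_RESOURCE.

/*
 * Sketch: exempt a process from the OOM killer via oom_adj.
 * Valid values run -16..15 on current kernels, with -17
 * (OOM_DISABLE) meaning "never kill this task".
 */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define OOM_DISABLE (-17)

static int set_oom_adj(pid_t pid, int adj)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/proc/%d/oom_adj", (int)pid);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%d\n", adj);
    return fclose(f);
}

int main(void)
{
    /* Protect the current process, e.g. a session-critical daemon. */
    if (set_oom_adj(getpid(), OOM_DISABLE) != 0) {
        perror("oom_adj (needs CAP_SYS_RESOURCE)");
        return 1;
    }
    printf("pid %d is now exempt from the OOM killer\n", (int)getpid());
    return 0;
}

That only covers the "never kill me" half, though; the
allocation-denial limits oak describes would still need the Maemo
patches brought over.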

-- 
Richard A. Smith  <rich...@laptop.org>
One Laptop per Child
