On Aug 25, 2010, at 8:52 AM, Peter Zijlstra wrote:
> Also, there's a good reason for disliking (a): it's a deadlock scenario.
> Suppose we need to write out data to get free pages, but the writing out
> is blocked on requiring free pages.
> 
> There's really nothing the page allocator can do to help you there; it's
> a situation you have to avoid getting into.

Well, if all of these users start having their own private pools of emergency 
memory, I'm not sure that's such a great idea either.
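
The kernel's existing mempool API is roughly what such a private reserve
looks like; here's a minimal sketch, with the request structure and the
pool size made up purely for illustration:

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mempool.h>
#include <linux/gfp.h>

struct my_io_request {
	void *buffer;	/* whatever the subsystem needs per request */
};

static struct kmem_cache *req_cache;
static mempool_t *req_pool;

static int __init my_subsys_init(void)
{
	req_cache = KMEM_CACHE(my_io_request, 0);
	if (!req_cache)
		return -ENOMEM;

	/* Keep 16 objects in reserve so writeback can still make
	 * forward progress when the page allocator has nothing left. */
	req_pool = mempool_create_slab_pool(16, req_cache);
	if (!req_pool) {
		kmem_cache_destroy(req_cache);
		return -ENOMEM;
	}
	return 0;
}

static struct my_io_request *get_request(void)
{
	/* Falls back to the reserved elements if the slab allocation
	 * fails; may sleep until an element is freed back to the pool. */
	return mempool_alloc(req_pool, GFP_NOIO);
}

Every subsystem carrying a reserve like this pins its min_nr elements
whether or not they ever get used, which is exactly the problem with
everyone growing one.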

And in some cases, there *is* extra memory.  For example, if the page 
allocator failed because there isn't enough memory left in the current 
process's cgroup, the allocation might be important enough that the kernel 
decides it's more important not to bring down the entire system than to 
honor whatever ridiculous limit some yahoo set on that particular cgroup, 
say 512MB, when there's plenty of free physical memory, just not in that 
cgroup.
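
Something like the following, purely as a sketch of that fallback; this is
not an existing interface, and memcg_charge_page() / memcg_force_charge_root()
are made-up names for "charge this page to the task's cgroup" and "fall back
to charging the root cgroup":

static struct page *alloc_page_or_escape_cgroup(gfp_t gfp_mask)
{
	struct page *page = alloc_page(gfp_mask);

	if (!page)
		return NULL;	/* the machine really is out of memory */

	/* assume this returns 0 when the charge fits the cgroup limit */
	if (!memcg_charge_page(page, gfp_mask))
		return page;

	/*
	 * The cgroup limit (say, 512MB) is exhausted even though the
	 * system still has free pages: account the page to the root
	 * cgroup rather than failing a critical allocation and taking
	 * the whole box down.
	 */
	memcg_force_charge_root(page);
	return page;
}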

-- Ted
