On Wed, Feb 19, 2014 at 09:08:59PM +0700, Robert Elz wrote:
> The kernel code for handling resource limits attempts to share the
> limits structure between processes (wholly reasonable, as limits are
> inherited, and rarely changed).  A shared limits struct (which is all
> of them when a new one is created) is marked as !pl_writeable.
> (Then when a process modifies one of its limits, it is given a copy
> of the limits struct, marked pl_writeable that it can modify as needed).
I do not see anything wrong with your analysis, but I only skimmed it.

I skimmed your email expecting you to mention a problem with process
resource limits that came up several years ago: after a process fork()s,
the child's resource use does not count against the parent's limits, but
against the child's own copy of the parent's resource limits.  Also, we
may configure a system-wide limit on the number of processes, and we may
individually limit the number of processes simultaneously belonging to
each user, but there is no limit on the number of processes created by a
process and its descendants.

All of this means that a user has very little protection against a
program that constantly forks and allocates memory: where N is the
user's process limit, and M the per-process memory limit in bytes, the
program and its descendants can use N * M bytes of memory and all N of
the processes available to the user.  In this way a "fork bomb" can run
away with all of the user's resources, and it might cripple the system,
too.

It seems to me that the whole area of resource limits is ripe for
reconsideration, if somebody had the time and level of interest.  These
days it makes more sense to arbitrate access to system resources using
power budgets, noise budgets/limits, and latency goals than to enforce
some of the traditional limits.  Limits should also be enforceable by
users on the processes that run on their behalf.

Dave

-- 
David Young
dyo...@pobox.com    Urbana, IL    (217) 721-9981