In the interests of academia (and from the idea of setting up a public
Plan 9 cluster) comes the following mail. I'm sure people will brush
some of this off as a non-issue, but I'm curious what others think.

It doesn't seem that Plan 9 does much to protect the kernel from
memory / resource exhaustion. When it comes to kernel memory, the
standard philosophy of ``add more memory'' doesn't quite cut it:
there's a limited amount for the kernel, and if a user can exhaust
that, it's not a Good Thing. (Another argument I heard today was
``deal with the offending user swiftly,'' but that does little against
full disclosure.) There are two potential ways to combat this (and
each has additional advantages beyond solving this particular
problem):

1) Introduce more memory pools with tunable limits.

The idea here would be to make malloc() default to its current
behavior: just allocate space from available arenas in mainmem. An
additional interface (talloc?) would be provided for type-based
allocations. These would be additional pools that serve to store
specific kernel data structures (Blocks, Chans, Procs, etc.). This
provides two benefits (a rough sketch follows the list below):

 o Protection against kernel memory starvation by exhaustion of a
specific resource
 o Some level of debugalloc-style memory information without all of the overhead
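
To make the first idea concrete, here is a minimal sketch of what a
typed allocator layered on the kernel's malloc() could look like. The
names (Tpool, talloc, tfree, blockpool) and the field layout are all
invented for illustration; a real implementation would more likely
build directly on the existing Pool machinery in port/alloc.c instead
of keeping its own counters:

/*
 * Sketch only: hypothetical typed allocation pools layered on
 * top of the kernel's malloc().  All names here are invented.
 */
#include	"u.h"
#include	"../port/lib.h"
#include	"mem.h"
#include	"dat.h"
#include	"fns.h"

typedef struct Tpool Tpool;
struct Tpool
{
	Lock	lk;
	char	*name;		/* e.g. "block", "chan", "proc" */
	ulong	cursize;	/* bytes currently charged to the pool */
	ulong	maxsize;	/* tunable ceiling; 0 means no limit */
	ulong	nalloc;		/* outstanding allocations, for accounting */
};

Tpool	blockpool;	/* hypothetical pool for Blocks; maxsize set at boot or via a ctl file */

/*
 * Allocate size bytes charged against pool p.  Returns nil when
 * the pool's ceiling would be exceeded, just as malloc() returns
 * nil when mainmem is exhausted.
 */
void*
talloc(Tpool *p, ulong size)
{
	void *v;

	lock(&p->lk);
	if(p->maxsize != 0 && p->cursize+size > p->maxsize){
		unlock(&p->lk);
		return nil;
	}
	p->cursize += size;
	p->nalloc++;
	unlock(&p->lk);

	v = malloc(size);
	if(v == nil){		/* mainmem itself ran out; undo the charge */
		lock(&p->lk);
		p->cursize -= size;
		p->nalloc--;
		unlock(&p->lk);
	}
	return v;
}

/*
 * The caller passes size back explicitly to keep the sketch
 * simple; a real version could remember it per block.
 */
void
tfree(Tpool *p, void *v, ulong size)
{
	if(v == nil)
		return;
	free(v);
	lock(&p->lk);
	p->cursize -= size;
	p->nalloc--;
	unlock(&p->lk);
}

A Block allocation in the IP stack would then become
talloc(&blockpool, sz) instead of malloc(sz), and each subsystem's
pool could be capped independently.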

I suppose it would be possible to allow for tunable settings as well
by providing a file system interface to set e.g. minarena or maxsize
on each pool.
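
Such a control file's write handler could be as small as the
following. The message format and tpoollookup() are made up, and it
leans on the Tpool sketch above; the real thing would presumably use
parsecmd/Cmdbuf like other devices:

/*
 * Sketch: handle a ctl write of the form
 *	block maxsize 4194304
 * to a hypothetical pool control file.
 */
Tpool*	tpoollookup(char*);	/* invented: find a Tpool by name */

static void
tpoolctl(char *msg)
{
	char *f[3];
	Tpool *p;

	if(tokenize(msg, f, nelem(f)) != 3)
		error(Ebadarg);
	p = tpoollookup(f[0]);
	if(p == nil)
		error(Ebadarg);
	if(strcmp(f[1], "maxsize") == 0){
		lock(&p->lk);
		p->maxsize = strtoul(f[2], nil, 0);
		unlock(&p->lk);
	}else
		error(Ebadarg);
}

Reading the same file back could report name, cursize and maxsize for
each pool, which is also where the debugalloc-style information
mentioned above would come from.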

The benefit of this approach is that it gives us an extremely easy
way to add new constraints as needed (simply create another tunable
pool) without changing the API or touching multiple subsystems,
beyond converting the relevant malloc calls. The limits could be
checked on a per-process or per-user basis (or both).

We already have a pool for kernel memory (mainmem) and a pool for
kernel draw memory (imagmem). It seems reasonable that we could have
pools for network buffers, files, processes and the like.

2) Introduce a `devlimit' device, which imposes limits on specific
kernel resources. The limits would be set on either a per-process or
per-user basis (or both, depending on the nature of the limit).

#2 seems more like the unixy rlimit model, and the more I think about
it, the less I like it. It's a bit more difficult to `get right', it
doesn't `feel' very Plan 9-ish, and adding new limits requires more
incestuous code. However, it does allow the limits to be more finely
tuned.
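
To make the comparison concrete, a devlimit check might boil down to
something like this (kernel-style C, assuming the usual kernel
headers). The enum, the Rlimits structure and the limitcharge() name
are all invented, and the counters would presumably hang off Proc or
a per-user table:

/*
 * Sketch: hypothetical per-process (or per-user) resource
 * ceilings for a devlimit-style device.
 */
enum
{
	Lnchan,			/* open Chans */
	Lnblock,		/* queued network Blocks */
	Lkmem,			/* bytes of kernel memory */
	Nlimit,
};

typedef struct Rlimits Rlimits;
struct Rlimits
{
	ulong	cur[Nlimit];	/* current usage */
	ulong	max[Nlimit];	/* 0 means unlimited */
};

/*
 * This is the incestuous part: calls like these would have to be
 * sprinkled through newchan(), allocb() and so on, so every
 * subsystem ends up knowing about the limit machinery.
 */
int
limitcharge(Rlimits *r, int which, ulong n)
{
	if(r->max[which] != 0 && r->cur[which]+n > r->max[which])
		return -1;	/* over the limit; caller errors out */
	r->cur[which] += n;
	return 0;
}

void
limituncharge(Rlimits *r, int which, ulong n)
{
	if(r->cur[which] >= n)
		r->cur[which] -= n;
	else
		r->cur[which] = 0;
}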

Just wondering what thoughts people have on this: which approach
seems more feasible, whether anybody feels it's a `good idea,' and so
on. I got mixed (though mostly positive from those who understood the
issue) feedback on IRC when I brought up the problem. I don't have any
sample cases in which it would be possible to starve the kernel of
memory.

--dho
