On 28 January 2015 at 06:50, arisawa <aris...@ar.aichi-u.ac.jp> wrote:

> Eric’s users are all gentlemen, all careful people, and all skillful
> programmers.
> If your system serves university students, you will think differently.
>

You might also be running services on it, and it's reasonable that the
system should manage its resources.
It's best to refuse new load if it won't fit, but that requires fairly
careful system design and works
best in closed systems. In a capability system you'd have to present a
capability to create a process,
and those capabilities are easy to ration. Inside Plan 9, one could have a
nested resource allocation scheme
for the main things without too much trouble. Memory is a little different,
because you either under-use it
(by reserving space) or over-commit, as usually happens with file-system
quotas.
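
To make the rationing idea concrete, here is a rough sketch in ordinary C,
not kernel code; the names (Ration, rationuse, rationfree) and the limits
are invented for illustration. Each allocator draws from its parent, so a
per-user cap nests naturally inside a system-wide one:

#include <stdio.h>

typedef struct Ration Ration;
struct Ration {
	Ration	*parent;
	long	avail;		/* slots still usable at this level */
};

/* take one slot at this level and every level above; undo on failure */
static int
rationuse(Ration *r)
{
	Ration *p;

	for(p = r; p != NULL; p = p->parent){
		if(p->avail <= 0){
			for(; r != p; r = r->parent)
				r->avail++;	/* roll back levels already charged */
			return 0;
		}
		p->avail--;
	}
	return 1;
}

/* give the slot back at every level */
static void
rationfree(Ration *r)
{
	for(; r != NULL; r = r->parent)
		r->avail++;
}

int
main(void)
{
	Ration sys = { NULL, 100 };	/* system-wide cap */
	Ration user = { &sys, 10 };	/* per-user cap nested inside it */
	int i, granted = 0;

	for(i = 0; i < 15; i++)
		granted += rationuse(&user);
	printf("granted %d of 15 requested process slots\n", granted);

	rationfree(&user);		/* one process exits */
	printf("after one exit: %s\n", rationuse(&user) ? "granted" : "refused");
	return 0;
}

A process-creation capability would then simply be the right to call
rationuse on some node of that tree.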

With memory, I had some success with a hybrid scheme, still not intended
for multi-user use,
but it could be extended to do that. The system accounts in advance for
the probable demands of
things like fork/exec and segment growth (including stack growth). If that
preallocation is then not
used (eg, because of a copy-on-write fork before exec) it is gradually
written off. When allocation fails even so,
the system looks at the most recently created process groups (note groups)
and kills them.
It differs from the OOM killer because it doesn't assume that the biggest
thing is actually the problem.
Instead it looks back through the most recently started applications,
since that corresponded to my usage.
It's quite common to have long-running components that
soak up lots of memory (eg, a cache process or fossil) and they often
aren't the ones that caused
the problem. Instead I assume that something ill-advised will have started
recently. It would be better
to track allocation history across process groups and use that instead,
but I needed something simple,
and it was reasonably effective.
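
For what it's worth, the shape of it is roughly this (a self-contained C
sketch with invented names and arbitrary numbers, not the code I actually
used): reservations are charged up front and can be written off if unused,
and when even the reservation step fails the newest note group is the
victim, not the biggest:

#include <stdio.h>

enum { Ngrp = 3 };

typedef struct Pgrp Pgrp;
struct Pgrp {
	int	id;
	long	ctime;		/* when the note group was created */
	long	pages;		/* pages charged to it */
};

static long memfree = 1000;	/* pages believed available */
static long reserved;		/* advance charges not yet consumed */

/* charge a probable demand (fork/exec, segment growth) up front */
static int
memresv(long pages)
{
	if(reserved + pages > memfree)
		return 0;	/* refuse the new load now, not at fault time */
	reserved += pages;
	return 1;
}

/* write off part of a reservation that was never used,
 * eg a copy-on-write fork that exec'd before touching much */
static void
writeoff(long pages)
{
	reserved -= pages;
	if(reserved < 0)
		reserved = 0;
}

/* when even the reservation step fails, pick the newest note group,
 * not the biggest one */
static Pgrp*
newestgrp(Pgrp *g, int n)
{
	Pgrp *newest = NULL;
	int i;

	for(i = 0; i < n; i++)
		if(newest == NULL || g[i].ctime > newest->ctime)
			newest = &g[i];
	return newest;
}

int
main(void)
{
	Pgrp grps[Ngrp] = {
		{ 1, 100, 600 },	/* long-lived cache: old and big */
		{ 2, 400, 50 },
		{ 3, 900, 120 },	/* started most recently */
	};

	printf("reserve 800: %s\n", memresv(800) ? "ok" : "refused");
	printf("reserve 300: %s\n", memresv(300) ? "ok" : "refused");
	writeoff(400);		/* half of the first reservation never materialised */
	printf("reserve 300 again: %s\n", memresv(300) ? "ok" : "refused");

	printf("on failure: kill note group %d (newest), not group %d (biggest)\n",
		newestgrp(grps, Ngrp)->id, grps[0].id);
	return 0;
}

In the real thing the write-off happens gradually rather than in one lump,
but the effect is the same.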

For resources such as processes (slots), network connections, etc, I think a
