On Sat, 10 Apr 1999, Matthew Dillon wrote:

> :Kevin
> 
>     Plausible, yes.  Useful:  probably not as useful as you might think.  I
>     wouldn't even consider doing something like that for BEST; it could lead
>     to cascade failures.
> 
>     For example, if a user is running procmail or cron on a relatively loaded
>     system, the user's share of the cpu relative to other users might not be
>     sufficient to handle the user's medium term mail and/or job load.  If
>     there are a few hundred users logged in, this could rapidly devolve into a
>     cascade failure which fills the system's process table.  This in turn can
>     lock up sendmails waiting to lock the user's mailbox, which in turn can
>     lead to a cascade failure with the root-run sendmails.
> 
>     Sometimes, the most noble of purposes can make a machine less stable in
>     non-obvious ways.  In the above example, limitations on a user's processes
>     might lead to a backup of root-run services.
> 
>     A user-run CGI is another example.  Say you have a web server which runs
>     CGIs under a user id.  If the web site is loaded down and the user happens
>     to run a log processing script, execs of the user's CGIs might slow down
>     due to the load balancing 'feature'.  The web server may now wind up in the
>     situation where it is forking CGIs faster than it can retire them, leading
>     to another cascade failure.
> 
>     In general, it is best to schedule a process without taking into
>     account the other processes run by the same user.
> 
>     If a user misbehaves, the best solution is to stomp on him until he does
>     behave, not to try to shove the misbehavior under the rug by making the
>     system 'control' the user's resources.  If you do not actively control the
>     behavior of your users, all you will accomplish is to have a large number
>     of them misbehaving continuously rather than just one or two.  The result
>     is going to be the same:  a loaded down server and lots of complaints from
>     users.

Matt, I agree with what you're saying, but what would you think about
something that looked at the total cpu time a process group had
accumulated in the previous 120 seconds, and just killed the process
group leader when it went over a limit?  That window would be, I think,
plenty long enough to catch most inadvertent things.  You could allow
some users to have very high limits, but an attacker, someone purposely
bringing the machine down (a hacker), would find only 2 minutes' worth
of capability.

Do you think something like this would still contribute to the cascade
failure scenarios?
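
Just to make the idea concrete, here's roughly what I'm picturing, as a
userland sketch only -- it isn't kernel code and it isn't anything that
exists today.  The ps(1) keywords (pgid, cputime), the 60 cpu-second
limit and the rest of the numbers are placeholders:

/*
 * Rough sketch only -- a userland watchdog, not kernel code and not an
 * existing FreeBSD facility.  Every WINDOW seconds it snapshots the
 * accumulated cpu time of each process group (via ps(1); the pgid and
 * cputime keywords are assumed here) and kills the group leader of any
 * group that used more than LIMIT cpu seconds during the window.  A real
 * version would key the limit off the owning uid so some users could be
 * given much higher limits, or none at all.
 */
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define WINDOW  120             /* seconds between snapshots */
#define LIMIT   60              /* cpu seconds allowed per window (made up) */
#define MAXPG   4096            /* max process groups tracked */

struct pgcpu {
        pid_t   pgid;
        long    secs;           /* accumulated cpu seconds for the group */
};

/* Snapshot per-process-group cpu use; returns the number of groups seen. */
static int
snapshot(struct pgcpu *tab, int max)
{
        FILE *fp;
        char line[256];
        int i, n = 0;

        if ((fp = popen("ps -axo pgid,cputime", "r")) == NULL)
                return (0);
        (void)fgets(line, sizeof(line), fp);    /* skip the header line */
        while (n < max && fgets(line, sizeof(line), fp) != NULL) {
                long pgid, min, sec;

                /* cputime is assumed to look like mm:ss.ss */
                if (sscanf(line, "%ld %ld:%ld", &pgid, &min, &sec) != 3)
                        continue;
                for (i = 0; i < n; i++)
                        if (tab[i].pgid == (pid_t)pgid)
                                break;
                if (i == n) {
                        tab[n].pgid = (pid_t)pgid;
                        tab[n].secs = 0;
                        n++;
                }
                tab[i].secs += min * 60 + sec;
        }
        (void)pclose(fp);
        return (n);
}

int
main(void)
{
        static struct pgcpu old[MAXPG], cur[MAXPG];
        int nold, ncur, i, j;

        nold = snapshot(old, MAXPG);
        for (;;) {
                sleep(WINDOW);
                ncur = snapshot(cur, MAXPG);
                for (i = 0; i < ncur; i++) {
                        long before = 0;

                        for (j = 0; j < nold; j++)
                                if (old[j].pgid == cur[i].pgid)
                                        before = old[j].secs;
                        if (cur[i].secs - before > LIMIT) {
                                /* the group leader's pid equals the pgid */
                                (void)printf("pgrp %ld: %ld cpu secs in %d "
                                    "secs, killing leader\n",
                                    (long)cur[i].pgid,
                                    cur[i].secs - before, WINDOW);
                                (void)kill(cur[i].pgid, SIGKILL);
                        }
                }
                memcpy(old, cur, sizeof(old));
                nold = ncur;
        }
        /* NOTREACHED */
}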

>  
>                                       -Matt
>                                       Matthew Dillon 
>                                       <dil...@backplane.com>

----------------------------+-----------------------------------------------
Chuck Robey                 | Interests include any kind of voice or data 
chu...@picnic.mat.net       | communications topic, C programming, and Unix.
213 Lakeside Drive Apt T-1  |
Greenbelt, MD 20770         | I run picnic (FreeBSD-current)
(301) 220-2114              | and jaunt (Solaris7).
----------------------------+-----------------------------------------------