On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> > It's not a hard limit, and I actually only have it check on every other
> > request.  We do use hard limits with BSD::Resource to set maximums on CPU
> > and RAM, in case something goes totally out of control.  That's just a
> > safety though.
> 
> <chokes> JUST a safety, huh? :-)

Why is that surprising?  We had a dev server get into a tight loop once
and use up all the CPU.  We fixed that problem, but wanted to be sure that
a similar problem couldn't take down a production server.
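
For the record, the limits are just a few lines with BSD::Resource, and
Apache::Resource will set them for you from PERL_RLIMIT_* variables in
httpd.conf.  Here's a rough sketch of doing it by hand in a child init
handler (the numbers are made up, and RLIMIT_AS isn't available on every
platform, so adjust for your own boxes):

    package My::Limits;
    use strict;
    use BSD::Resource qw(setrlimit RLIMIT_CPU RLIMIT_AS);

    # PerlChildInitHandler My::Limits in httpd.conf
    sub handler {
        # 120 CPU seconds, then the kernel kills the child
        setrlimit(RLIMIT_CPU, 120, 120)
            or warn "couldn't set CPU limit: $!";

        # cap the address space at 100MB so one runaway child
        # can't eat all the RAM on the box
        setrlimit(RLIMIT_AS, 100*1024*1024, 100*1024*1024)
            or warn "couldn't set memory limit: $!";

        return 0;   # OK
    }
    1;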

> since I never saw a worthwhile resolution to the thread "the edge of chaos,"

The problem of how to get a web server to still provide some service when
it's overwhelmed by traffic is pretty universal.  It's not exactly a
mod_perl problem.  Ultimately you can't fit 10 pounds of traffic in a 5
pound web server, so you have to improve performance or deny service to
some users.

> In a VERY busy mod_perl environment (and I'm taking 12.1M hits/mo right
> now), which has the potential to melt VERY badly if something hiccups (like,
> the DB gets locked into a transaction that holds up all MaxClient httpd
> processes, and YES it's happened more than once in the last couple of
> weeks),
> 
> What specific modules/checks/balances would you install into your webserver
> to prevent such a melt from killing a box?

The things I already mentioned prevent the box from running out of memory.  
Your web service can still become unresponsive if it depends on a shared
resource and that resource becomes unavailable (database down, etc.).  
You can put timers on your calls to those resources so that mod_perl will
continue if they hang (a rough sketch of that is below), but it's still
useless if you've got to have the database.  If there's a particularly
flaky resource that's only used in part of your application, you could
segregate that on its own mod_perl server so that it won't bring anything
else down with it, but the usefulness of this approach depends a lot on
the situation.
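
The usual idiom for those timers is eval/alarm around the call.  Something
like this ($dbh and $sql are placeholders, and keep in mind that alarm
can't always interrupt a driver that's blocked inside a C library):

    my $rows = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm(10);                        # give the query 10 seconds
        my $r = $dbh->selectall_arrayref($sql);
        alarm(0);                         # cancel the timer
        $r;
    };
    alarm(0);                             # in case the eval died early
    if ($@) {
        die $@ unless $@ eq "timeout\n";  # re-throw real errors
        # log it and send the user a "try again later" page
    }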

- Perrin
