On Wed, 30 Jan 2002, Bill Stoddard wrote:

> Not so. If you know your site has this problem and you can't fix it for
> whatever reason, you can preemptively set MaxRequestsPerChild to 0 or some
> suitably high number to give the admin time to notice the problem when it
> occurs. It is wrong to whack the entire site if you can avoid it. For some
> websites, every minute of downtime costs real $$$ (and substantial sums in
> some cases). Anything you can do to prevent downtime is goodness. Anything
> you can do to give you time to solve the real problem is goodness. I
> completely agree that an admin should -not- rely on this behaviour as a
> permanent way to avoid/mitigate the problem.
> 
> Real life scenario... Suppose you run a retail web site. You do 90% of your
> yearly business in the two weeks prior to Christmas. For some whacky reason,
> the parent process in your Apache site starts dying right at the same time
> as your peak shopping season starts.
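
For reference, the workaround being discussed is, as I read it, roughly the
following in httpd.conf (just a sketch, assuming the stock directive, where
0 means "never retire a child based on its request count"):

    # Children are never recycled for hitting a request limit, so existing
    # children keep serving even after the parent process dies (although no
    # new children can be spawned once the parent is gone).
    MaxRequestsPerChild 0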

1. I would rather have an easy-to-detect failure ("the server falls over")
that immediately alerts me that something is broken.  Checking that the
server is responding on port x is a very simple check (a sketch follows
below).  Checking that the parent process hasn't died, or that the server
is "slow", is much more complex, and the parent-process check in particular
is very software specific.

2. If I'm running such an important site, it is going to be behind a load
balancer, so it is much better to have one server drop out completely (and
therefore get no requests at all) than to have that one server half-working.
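
The first kind of check really is that simple; something along these lines
would do, sketched in Python with a placeholder host and port (how it gets
run and alerted on is whatever the monitoring setup already does):

    import socket
    import sys

    HOST, PORT = "www.example.com", 80   # placeholders

    # Liveness probe: exit 0 only if something accepts a TCP connection on
    # the port within the timeout; the monitor treats a nonzero exit as "down".
    try:
        socket.create_connection((HOST, PORT), timeout=5).close()
    except socket.error:
        sys.exit(1)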
