Yep, I am familiar with MaxClients .. there are two backend servers
of 10 modperl processes each (MaxClients = StartServers = 10). That's
sized about right. They can all pump away at the same time doing about
20 pages per second. The problem comes when they are asked to do
21 pages per second :-)
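
For the curious, the relevant bits of each backend httpd.conf look
roughly like this .. the MaxClients = StartServers = 10 part is the
real constraint; the spare-server and per-child values here are only
illustrative:

  # backend mod_perl server: keep the pool pinned at 10 heavy children
  StartServers        10
  MinSpareServers     10
  MaxSpareServers     10
  MaxClients          10
  MaxRequestsPerChild 500   # illustrative value; recycles children to limit memory growth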

There is one frontend mod_proxy .. it currently has MaxClients set
to 120 processes (it doesn't serve images). The actual number
in use near peak output varies from 60 to 100, depending on the
mix of clients using the system. KeepAlive is *off* on it
(again, since it doesn't serve images).
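
Roughly, and with a made-up loopback port for the backend, the
frontend looks like:

  # frontend mod_proxy: many cheap processes, no keepalive, no images
  MaxClients       120
  KeepAlive        Off
  ProxyPass        / http://127.0.0.1:8080/
  ProxyPassReverse / http://127.0.0.1:8080/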

When things get slow on the back end, the front end can fill with
120 *requests* .. all queued for the 20 available modperl slots ..
hence long queues for service, which means nobody gets anything,
which means a dead site. I don't mind performance limits; I just
don't like the idea that pushing beyond 100% (which can even happen
when one of those evil site hoovers hits you) results in site death.
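
To spell out where that virtual queue lives: each of the 120 frontend
slots holds one proxied request, and the backend's kernel listen
queue piles up more on top of the 10 busy children per box. One
untested thought is to keep the backend's accept queue short, so
excess requests fail fast (or at least faster) instead of waiting
behind a hopeless backlog .. though the kernel may clamp or ignore
the value:

  # backend -- untested idea, values are guesses
  ListenBacklog 10    # refuse connections beyond a short accept queue
  Timeout       60    # also shortens how long children stuck in "R" wait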

And dropping MaxClients on the front end just means you get clogged
up with slow readers instead, so that isn't an option ..

-Justin

On Wed, Jan 03, 2001 at 11:57:17PM -0600, Jeff Sheffield wrote:
> this is not the solution...
> but it could be a band-aid until you find one:
> set the MaxClients # lower.
> 
> # Limit on total number of servers running, i.e., limit on the number
> # of clients who can simultaneously connect --- if this limit is ever
> # reached, clients will be LOCKED OUT, so it should NOT BE SET TOO LOW.
> # It is intended mainly as a brake to keep a runaway server from taking
> # the system with it as it spirals down...
> #
> MaxClients 150
> 
> > On Wed, Jan 03, 2001 at 10:25:04PM -0500, Justin wrote:
> > Hi, and happy new year!
> > 
> > My modperl/mysql setup does not degrade gracefully when reaching
> > and pushing maximum pages per second :-) If you could plot
> > throughput, it rises to the ceiling, then collapses to half or less,
> > then slowly recovers .. rinse and repeat .. during the collapses,
> > nobody but really patient people are getting anything .. most page
> > production is wasted: it goes from modperl --> modproxy --> /dev/null
> > 
> > I know exactly why .. it is because of a long virtual
> > "request queue" enabled by the front end .. people "leave the
> > queue" but their requests do not .. pressing STOP in the browser
> > does not seem to signal mod_proxy to cancel its pending request,
> > or modperl to cancel its work, if it has started .. (in fact, if
> > things get really bad, you can even find much of your backend stuck
> > in the "R" state, waiting for the Apache Timeout variable
> > to tick down to zero..)
> > 
> > Any thoughts on solving this? Am I wrong in wishing that STOP
> > would function through all the layers?
> > 
> > thanks
> > -Justin
> Thanks, 
> Jeff
> 
> ---------------------------------------------------
> | "0201: Keyboard Error.  Press F1 to continue."  |
> |                          -- IBM PC-XT Rom, 1982 |
> ---------------------------------------------------
> | Jeff Sheffield                                  |
> | [EMAIL PROTECTED]                                  |
> | AIM=JeffShef                                    |
> ---------------------------------------------------
