Aaron Bannert <[EMAIL PROTECTED]> writes:

> On Wed, Sep 19, 2001 at 06:47:31PM -0000, [EMAIL PROTECTED] wrote:
> > trawick     01/09/19 11:47:31
> > 
> >   Modified:    server/mpm/worker worker.c
> >   Log:
> >   if we're gonna trash the connection due to a queue overflow, at the
> >   very least we should close the socket and write a log message (mostly
> >   to aid debugging, as this is a showstopper problem)
> >   
> >   this is no fix; there is a design issue to consider; hopefully this
> >   will
> 
> [I assume you had more to say?]

I forgot what the rest of the sentence was :)  I intended to follow up
here anyway, but in my rush home to beat the school bus you chimed in
first.

> Now that the queue represents the number of accepted-but-not-processed
> sockets, it does not necessarily need to be the size of the number of
> threads, but instead some other value that indicates the number of
> sockets we'll accept before sending back some "Server Busy" error.
> 
> So I have two questions:
> 
> 1) How do we send back that error?
> 
> 2) How long should the queue be? Should we just set some arbitrary constant,
>    defined in mpm_default.h, or should we come up with some heuristic?

Let's just pick a number for the queue length for now.  As far as I'm
concerned, the current number is a reasonable starting point.

It shouldn't be too low, since when the server is busy we always want
some work available for workers as they finish processing their
previous connections.

It shouldn't be too high because

1) the queue may be full because our workers are busy, while for all
   we know another httpd process has idle workers; every connection we
   accept and park in our queue is one that those idle workers can't
   process, so the larger our queue, the higher the risk of starving
   workers in another process even though we can't make any progress
   ourselves
2) (not terribly important to me) any connection sitting in the queue
   at graceful restart time is a connection which could have been
   processed with the new configuration but won't be since we were
   needlessly greedy
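
If we do end up putting a default in mpm_default.h as you suggested, I
picture something along these lines (the name and the choice of tying
it to the thread count are just illustration on my part, not existing
code):

    #ifndef DEFAULT_QUEUE_CAPACITY
    /* purely illustrative; today the queue is simply sized to the
     * number of worker threads in the child process */
    #define DEFAULT_QUEUE_CAPACITY DEFAULT_THREADS_PER_CHILD
    #endif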

If we avoid calling accept() when our queue is full, that should take
care of this issue.

So what to do?

query the state of the queue before accept (without grabbing a lock if
possible)...  if it's pretty full*, then instead of calling accept()
block on a mutex which a worker will post once we go back above some
threshold of available slots in the queue

*maybe some lack of exactness here can allow us to avoid a lock
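
Something like this is what I have in mind.  It's a very rough sketch,
not real worker.c/fdqueue.c code; the struct layout, the field names,
the not_full condition, and the margin constant are all invented here
just to show the shape of it:

    #include <pthread.h>

    /* Minimal stand-in for the real queue struct so the sketch hangs
     * together; none of these names are taken from fdqueue.c. */
    #define QUEUE_NEARLY_FULL_MARGIN 2

    typedef struct {
        int             nelts;     /* sockets currently in the queue */
        int             bounds;    /* queue capacity                 */
        pthread_mutex_t one_big_mutex;
        pthread_cond_t  not_full;  /* posted when room frees up      */
    } fd_queue_t;

    static int queue_nearly_full(fd_queue_t *queue)
    {
        /* unlocked read: a slightly stale answer only means we block
         * (or don't block) one accept earlier or later than we might
         * have, which is harmless */
        return queue->nelts >= queue->bounds - QUEUE_NEARLY_FULL_MARGIN;
    }

    /* listener thread, just before accept(): */
    static void wait_until_queue_has_room(fd_queue_t *queue)
    {
        if (queue_nearly_full(queue)) {
            pthread_mutex_lock(&queue->one_big_mutex);
            while (queue_nearly_full(queue)) {
                pthread_cond_wait(&queue->not_full,
                                  &queue->one_big_mutex);
            }
            pthread_mutex_unlock(&queue->one_big_mutex);
        }
        /* ... then accept() and push onto the queue as we do now ... */
    }

    /* worker thread, after popping an entry (still holding the queue
     * mutex): signal only when the count of free slots crosses the
     * threshold, so the listener isn't woken on every single pop */
    static void maybe_wake_listener(fd_queue_t *queue)
    {
        if (queue->bounds - queue->nelts
                == QUEUE_NEARLY_FULL_MARGIN + 1) {
            pthread_cond_signal(&queue->not_full);
        }
    }

The unlocked check is the "lack of exactness" from the footnote: the
worst it costs us is one extra trip through the mutex/condition, never
a dropped connection, since the listener re-checks under the lock
before it waits.
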
-- 
Jeff Trawick | [EMAIL PROTECTED] | PGP public key at web site:
       http://www.geocities.com/SiliconValley/Park/9289/
             Born in Roswell... married an alien...
