> -----Original Message-----
> From: Brian Pane [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, April 17, 2002 7:48 PM
> To: [EMAIL PROTECTED]
> Subject: Re: cvs commit: httpd-2.0/server/mpm/worker worker.c
> 
> 
> Rose, Billy wrote:
> 
> >If I could receive feedback on the following email made on the 11th,
> >I'd be willing to burn some hours to make the following MPM for testing:
> >
> >>I hope my emails are not annoying you guys. To give a more complete
> >>picture of this (pulled from methods I used in a client server app):
> >>
> >>The initial process creates a shared memory area for the queue and then
> >>a new thread, or child process, whose sole purpose is to dispatch
> >>connections to workers. The shared memory area is a FIFO for queuing up
> >>connections. The dispatcher process then goes on to set up the other
> >>children, each of which has access to the shared memory so they may get
> >>at the connection FIFO. The FIFO is a linked list containing connection
> >>objects that get filled in by the listener thread/process. The
> >>dispatcher maintains a list of all child workers that it created, and
> >>sits in a loop sleeping with some set timeout. When the listener
> >>accepts a connection and puts it in the queue, it wakes the dispatcher
> >>if need be. Once awakened, the dispatcher looks to see if connections
> >>are queued, and if so, it looks in its worker list for the next
> >>available worker and awakens it.
> >>
> 
> I can think of a few factors that will complicate the implementation
> of this design:
> 
> * The role of the dispatcher in this design adds some extra context
>   switching to the critical path for accepting a connection.  By my
>   count, three or four separate threads--distributed among two or
>   three separate processes--will have to get involved to accept each
>   new connection.  Based on the relative performance of the other
>   MPM designs, I'd say that MPMs that involve fewer threads per
>   connection generally are fastest (that's part of why prefork
>   has outperformed the original worker design.)
> 

The extra overhead of communication between contexts will be offset by
the sheer volume of connections handled. I would not use this prospective
MPM on a light-duty web server, but rather on one that gets thousands of
requests per minute. The machinery employed here dictates that up to some
number N of connections, the context switching adds overhead. Beyond N,
response times begin to flatten out at the rate dictated by the hardware,
as very little switching occurs until a thread is finished with a
request. I suppose this MPM is much like a tricked-out V8 engine: it
won't perform until enough RPMs have been gained. One extra context
switch between accepting a connection and servicing it is a low price to
pay for an MPM that could handle instant spikes into the thousands of
requests. Not everyone would benefit from this, but I'm sure some out
there would.

Here's an idea: how about a hybrid MPM that starts out in prefork, and
then switches to something like this when N is reached?

> * When the dispatcher looks for an available worker, is it looking for
>   a worker process, or for a worker thread within one of the worker
>   processes?
> 

Thread. The dispatcher is responsible for managing the creation of all
processes, and all threads therein. It keeps track of which thread in
which process is handling a given connection via the "task list" (for
lack of a better term). Each process would have an entry point into a
thread-creation function that the dispatcher could invoke via IPC.
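To make the "wake a specific thread in another process" part concrete: on
platforms where PTHREAD_PROCESS_SHARED is supported, each task-list slot
could carry a process-shared mutex and condition variable, so the
dispatcher can signal exactly one worker thread without burning a pipe or
socket per worker. A minimal sketch (the structure and all names here are
my own illustration, not anything from the existing worker MPM):

```c
/* Sketch: per-worker wakeup slots in a shared segment, assuming
 * process-shared pthread mutexes/condvars are available. All names
 * (task_list, worker_slot, etc.) are hypothetical. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sys/mman.h>
#include <stddef.h>

#define MAX_WORKERS 64

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  wake;     /* dispatcher signals this to wake the thread */
    int             has_work; /* guarded by lock */
} worker_slot;

typedef struct {
    worker_slot slots[MAX_WORKERS];
} task_list;

/* Create the task list in anonymous shared memory; forked children
 * inherit the mapping, so every process sees the same slots. */
static task_list *task_list_create(void)
{
    task_list *tl = mmap(NULL, sizeof(*tl), PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (tl == MAP_FAILED)
        return NULL;

    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;
    pthread_mutexattr_init(&ma);
    pthread_condattr_init(&ca);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);

    for (int i = 0; i < MAX_WORKERS; i++) {
        pthread_mutex_init(&tl->slots[i].lock, &ma);
        pthread_cond_init(&tl->slots[i].wake, &ca);
        tl->slots[i].has_work = 0;
    }
    return tl;
}

/* Dispatcher side: wake worker thread i. The worker side would block in
 * pthread_cond_wait() on its slot until has_work becomes nonzero. */
static void dispatch_to(task_list *tl, int i)
{
    pthread_mutex_lock(&tl->slots[i].lock);
    tl->slots[i].has_work = 1;
    pthread_cond_signal(&tl->slots[i].wake);
    pthread_mutex_unlock(&tl->slots[i].lock);
}
```

The catch, as Brian notes, is portability: not every platform of that era
implements process-shared condvars, so this could only be one of several
backends.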

>   If the dispatcher wakes up a worker process, then that process will
>   need to figure out which of its threads will handle the connection.
>   To do this, it will need to either: 1) let its idle threads take turns
>   handling connections (meaning that it looks a lot like leader/follower)
>   or 2) have one dedicated thread that delegates the connection to an
>   idle worker (meaning that it looks a lot like the threadpool MPM or
>   the original worker MPM).
> 
>   Alternately, if the dispatcher wakes up a worker thread directly, it
>   needs a means to signal a specific thread within another process.
>   It may be very difficult to do that portably.  I suppose each idle
>   worker thread could do a blocking read on a separate socket or pipe,
>   and the dispatcher could write a byte to the one that it wants to
>   wake up.  But that would slow down connection processing and use up
>   lots of file descriptors.
> 

shmget(), or perhaps msgget()?
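A System V message queue would sidestep the one-descriptor-per-worker
problem: a single queue serves all workers, and the dispatcher targets
one specific worker by using that worker's id as the message type, since
msgrcv() can select messages by mtype. A rough sketch of that idea (the
struct and function names are mine, purely illustrative):

```c
/* Sketch: one SysV message queue for all workers; the dispatcher wakes a
 * specific worker by sending a message whose mtype is that worker's id.
 * wake_msg, wake_worker, wait_for_work are hypothetical names. */
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct wake_msg {
    long mtype;     /* worker id (must be > 0) */
    int  conn_slot; /* index of the queued connection in the shared FIFO */
};

/* Dispatcher: wake worker `worker_id`, pointing it at FIFO slot `slot`. */
static int wake_worker(int qid, long worker_id, int slot)
{
    struct wake_msg m;
    m.mtype = worker_id;
    m.conn_slot = slot;
    /* msgsz excludes the leading mtype field */
    return msgsnd(qid, &m, sizeof(m) - sizeof(long), 0) == -1 ? -1 : 0;
}

/* Worker: block until the dispatcher sends a message with our mtype;
 * returns the FIFO slot to service, or -1 on error. */
static int wait_for_work(int qid, long my_id)
{
    struct wake_msg m;
    if (msgrcv(qid, &m, sizeof(m) - sizeof(long), my_id, 0) == -1)
        return -1;
    return m.conn_slot;
}
```

Whether SysV IPC is portable enough (and fast enough under load) for an
MPM is an open question, but it does keep the fd count flat.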

Side note:
It's too bad there's no way to segregate a port among the threads
handling it by addressing them directly, i.e. 6900:0001
(port 6900, thread 0001). One file handle, many handlers...

> >>The worker then locks the FIFO head pointer and grabs the connection,
> >>moves the pointer to the next FIFO node, and then unlocks the FIFO
> >>head pointer. If no workers are available, the dispatcher creates more
> >>workers, if not at a set maximum, in order to take the connection. If
> >>at the maximum, it sleeps again with a shorter timeout and, when woken
> >>up again, checks for an available worker again. This repeats until a
> >>worker is available. The listener in all of this is totally
> >>independent of the workers, and only knows that it accepts connections
> >>and puts them in a FIFO, and finally notifies the dispatcher of the
> >>connection. The connection queue could span thousands of queued
> >>connections if desired. The dispatcher is responsible for coordinating
> >>the workers, and the queue resides in one place only, that being the
> >>shared memory segment.
> >>
> 
> Having a single queue definitely is the right solution.  The catch
> is that managing that single queue within the httpd is tricky (based
> on the issues noted above).  The alternate way to get a single-queue
> solution is to let the OS manage the queue for us, which is what
> the prefork, leader/follower, and threadpool MPMs do.
> 

I'll begin work on it...
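For the record, here is roughly what I have in mind for the shared FIFO
itself. I've sketched it as a fixed-size ring rather than a linked list,
since a ring avoids pointer-relocation headaches between processes, with
the head/tail guarded by a process-shared mutex exactly as described
above (lock the head, grab the connection, advance, unlock). All names
are illustrative, not committed code:

```c
/* Sketch of the shared connection FIFO described above: a fixed-size
 * ring in anonymous shared memory, head/tail guarded by a process-shared
 * mutex. conn_fifo, fifo_put, fifo_get are hypothetical names. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sys/mman.h>

#define FIFO_SIZE 1024

typedef struct {
    int fd; /* the accepted connection's descriptor (or an identifier) */
} conn_obj;

typedef struct {
    pthread_mutex_t lock; /* guards head/tail */
    unsigned head, tail;  /* monotonic counters; slot = counter % FIFO_SIZE */
    conn_obj ring[FIFO_SIZE];
} conn_fifo;

static conn_fifo *fifo_create(void)
{
    conn_fifo *q = mmap(NULL, sizeof(*q), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (q == MAP_FAILED)
        return NULL;
    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&q->lock, &ma);
    q->head = q->tail = 0;
    return q;
}

/* Listener: enqueue an accepted connection; returns 0, or -1 if full. */
static int fifo_put(conn_fifo *q, conn_obj c)
{
    int rc = -1;
    pthread_mutex_lock(&q->lock);
    if (q->tail - q->head < FIFO_SIZE) {
        q->ring[q->tail++ % FIFO_SIZE] = c;
        rc = 0;
    }
    pthread_mutex_unlock(&q->lock);
    return rc;
}

/* Worker: lock the head, grab the next connection, advance, unlock;
 * returns 0, or -1 if the queue is empty. */
static int fifo_get(conn_fifo *q, conn_obj *out)
{
    int rc = -1;
    pthread_mutex_lock(&q->lock);
    if (q->head != q->tail) {
        *out = q->ring[q->head++ % FIFO_SIZE];
        rc = 0;
    }
    pthread_mutex_unlock(&q->lock);
    return rc;
}
```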

> >>Is this too drastic an alteration to the current worker MPM, i.e.
> >>would it be a separate MPM if it came to fruition?
> >>
> 
> Separate MPM
> 
> --Brian
> 
> 
