If I could get feedback on the following email, sent on the 11th, I'd be
willing to burn some hours making the following MPM for testing:

>I hope my emails are not annoying you guys. To give a more complete picture
>of this (pulled from methods I used in a client/server app):
>
>The initial process creates a shared memory area for the queue and then a
>new thread, or child process, whose sole purpose is to dispatch connections
>to workers. The shared memory area is a FIFO for queuing up connections. The
>dispatcher process then goes on to set up the other children, each of which
>has access to the shared memory so they may get at the connection FIFO. The
>FIFO is a linked list containing connection objects that get filled in by
>the listener thread/process. The dispatcher maintains a list of all child
>workers that it created, and sits in a loop sleeping with some set timeout.
>When the listener accepts a connection and puts it in the queue, it wakes
>the dispatcher if need be. Once awakened, the dispatcher looks to see if
>connections are queued, and if so, it looks in its worker list for the next
>available worker and awakens it. The worker then locks the FIFO head
>pointer, grabs the connection, moves the pointer to the next FIFO node, and
>then unlocks the FIFO head pointer. If no workers are available and it is
>not yet at a set maximum, the dispatcher creates more workers to take the
>connection. If at the maximum, it sleeps again with a shorter timeout and,
>on waking, checks again for an available worker; this repeats until a
>worker is available. The listener in all of this is totally independent of
>the workers: it only accepts connections, puts them in the FIFO, and
>notifies the dispatcher of each one. The connection queue could span
>thousands of queued connections if desired. The dispatcher is responsible
>for coordinating the workers, and the queue resides in one place only: the
>shared memory segment.
>
>Is this too drastic an alteration to the current worker MPM, i.e. would it
>be a separate MPM if it came to fruition?
>
>Billy Rose 
>[EMAIL PROTECTED]
>
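The queue Billy describes maps naturally onto a mutex-protected linked list. Below is a minimal single-process sketch using plain pthreads in one address space as a stand-in for the cross-process shared-memory segment; all names are illustrative, not from the actual patch:

```c
#include <pthread.h>
#include <stdlib.h>

/* A connection as it would sit in the FIFO; here reduced to just an fd. */
typedef struct conn_node {
    int fd;
    struct conn_node *next;
} conn_node;

/* The FIFO described above: the listener pushes at the tail; a worker
 * locks the head pointer, grabs a connection, advances the head, and
 * unlocks. */
typedef struct {
    conn_node *head, *tail;
    pthread_mutex_t lock;     /* guards the head/tail pointers */
    pthread_cond_t nonempty;  /* listener wakes sleepers "if need be" */
} conn_fifo;

void fifo_init(conn_fifo *q) {
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

/* Listener side: queue a connection and signal a sleeping consumer. */
void fifo_push(conn_fifo *q, int fd) {
    conn_node *n = malloc(sizeof(*n));
    n->fd = fd;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Worker side: block until a connection is queued, then dequeue it. */
int fifo_pop(conn_fifo *q) {
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->nonempty, &q->lock);
    conn_node *n = q->head;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    int fd = n->fd;
    free(n);
    return fd;
}
```

A real cross-process version would allocate the nodes from the shared segment and use a process-shared mutex, but the locking discipline is the same.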
> -----Original Message-----
> From: Brian Pane [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, April 11, 2002 2:49 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [PATCH] convert worker MPM to leader/followers design
> 
> 
> Rose, Billy wrote:
> 
> >Would the solution in my last email do what you are looking for?
> >
> 
> My one concern with your solution is that it puts a queue in the
> httpd child processes.  I think that putting a queue in each child
> is always going to be tricky because you can get things stuck in the
> queue of one busy child process while other child processes are idle.
> 
> What I like about designs like leader/follower and prefork is that
> they share one queue across all child processes (and the queue is
> maintained by the TCP stack).
> 
> --Brian
> 
> 

Billy Rose 
[EMAIL PROTECTED]
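Brian's point, that prefork shares one queue across all children because the queue *is* the kernel's accept queue, can be demonstrated directly: fork several children that all block in accept() on the same listen socket, and each incoming connection is dequeued by exactly one of them. A minimal sketch over loopback, with illustrative names:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Start a loopback listener, fork nchildren that all block in accept()
 * on the same socket, make one connection per child, and return the
 * number of connections that got served. */
int prefork_demo(int nchildren) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                      /* kernel picks a free port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);
    socklen_t len = sizeof(addr);
    getsockname(lfd, (struct sockaddr *)&addr, &len);

    /* "Workers": every child sits in accept() on the shared listen
     * socket; the kernel's accept queue is the one and only queue. */
    for (int i = 0; i < nchildren; i++) {
        if (fork() == 0) {
            int cfd = accept(lfd, NULL, NULL);
            char byte = 'x';
            (void)write(cfd, &byte, 1);     /* "handle" the request */
            close(cfd);
            _exit(0);
        }
    }

    /* "Clients": each connect is taken off the queue by one child. */
    int served = 0;
    for (int i = 0; i < nchildren; i++) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        connect(s, (struct sockaddr *)&addr, sizeof(addr));
        char byte;
        if (read(s, &byte, 1) == 1 && byte == 'x')
            served++;
        close(s);
    }
    while (wait(NULL) > 0)
        ;
    close(lfd);
    return served;
}
```

No user-space queue, dispatcher, or shared memory is involved; the TCP stack does all the bookkeeping, which is exactly the property Brian is defending.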

> -----Original Message-----
> From: Aaron Bannert [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, April 17, 2002 12:35 PM
> To: [EMAIL PROTECTED]
> Subject: Re: cvs commit: httpd-2.0/server/mpm/worker worker.c
> 
> 
> > Aren't global pools still cleaned up on exit? If the threads are still
> > running we'll still have the same problem. The only way I see to fix
> > this is to make sure that all threads have terminated before cleaning
> > up the pool.
> > 
> > I don't see that they're getting cleaned up on exit.
> 
> Pools that are created with a NULL parent are actually created as
> child-pools of the global_pool. The global_pool is destroyed in
> apr_pool_terminate(), which is called from apr_terminate(), which
> is registered with atexit().
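Aaron's description of the pool hierarchy can be modelled in a few lines. This is a toy sketch of the *semantics* only (a NULL parent means child of a hidden global pool, and terminating the global pool tears down everything under it), not APR's actual implementation; all names are illustrative:

```c
#include <stdlib.h>

/* Toy model of the behaviour described above. Real APR pools allocate
 * and free memory; here a pool just records that it was destroyed so
 * the teardown order is observable. */
typedef struct pool {
    struct pool *parent;
    struct pool *first_child, *sibling;
    int destroyed;
} pool;

static pool global_pool;    /* stands in for APR's internal global_pool */

pool *pool_create(pool *parent) {
    if (parent == NULL)
        parent = &global_pool;     /* NULL parent => child of global */
    pool *p = calloc(1, sizeof(*p));
    p->parent = parent;
    p->sibling = parent->first_child;
    parent->first_child = p;
    return p;
}

void pool_destroy(pool *p) {       /* destroys a pool and its subtree */
    for (pool *c = p->first_child; c; c = c->sibling)
        pool_destroy(c);
    p->destroyed = 1;              /* real code would free memory here */
}

void pool_terminate(void) {        /* what the atexit() hook triggers */
    pool_destroy(&global_pool);
}
```

This is why "created with a NULL parent" does not mean "never cleaned up": the global pool's atexit-driven teardown reaches every such pool, including ones that running threads may still be using.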
> 
> > As far as making sure all threads have terminated before cleaning up
> > the pool:  How do we do that in a graceless shutdown?  If we hang
> > around much longer, the parent is going to kill us and we won't be
> > able to run cleanups anyway.
> 
> I don't see any good way out of this situation; here are two bad ways
> out that I do see:
> 
> 1) Check for workers_may_exit upon returning from any EINTR-returning
>    syscall.
>    Con: yuck, the number of places where we would have to do this is
>         way too large.
> 
> 2) Introduce apr_thread_cancel().
>    Con: a) Many thread libraries leak kernel resources when threads are
>            cancelled.
>         b) We'd also have to introduce apr_thread_setcancelstate(), et
>            al., *and* we would have to be sure to clean up things like
>            the accept mutex (it would be bad to be cancelled while
>            holding the accept mutex).
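Con (b) is the crux: a thread cancelled while holding the accept mutex would leave it locked forever. POSIX's answer is a cleanup handler pushed before the cancellation point. Since apr_thread_cancel() does not exist, here is a sketch in raw pthreads with illustrative names:

```c
#include <pthread.h>
#include <unistd.h>

/* Stand-in for the accept mutex being worried about above. */
static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;

static void drop_accept_mutex(void *arg) {
    pthread_mutex_unlock((pthread_mutex_t *)arg);
}

/* A worker that would otherwise die holding accept_mutex: it is
 * cancelled inside pthread_cond_wait() (a cancellation point), and the
 * cleanup handler guarantees the mutex is released on the way out. */
static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&accept_mutex);
    pthread_cleanup_push(drop_accept_mutex, &accept_mutex);
    for (;;)
        pthread_cond_wait(&wakeup, &accept_mutex); /* cancelled here */
    pthread_cleanup_pop(1);  /* never reached; balances the push */
    return NULL;
}

/* Cancel the worker and verify the accept mutex was released. */
int cancel_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    usleep(100000);                 /* let the worker reach cond_wait */
    pthread_cancel(t);
    void *ret;
    pthread_join(t, &ret);
    if (ret != PTHREAD_CANCELED)
        return -1;
    if (pthread_mutex_trylock(&accept_mutex) != 0)
        return -1;                  /* still locked: cleanup failed */
    pthread_mutex_unlock(&accept_mutex);
    return 0;
}
```

This illustrates why option 2 is so invasive: every lock a cancellable thread can hold across a cancellation point needs a matching cleanup handler.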
> 
> A couple questions:
>   - What happens when we call shutdown() on the listen socket? Are
>     accepted sockets allowed to remain? (I'm assuming so.)
> 
>   - Could the listener thread keep a list of accepted socket descriptors
>     and then close them all when we receive the signal to gracelessly
>     stop the server? We could optionally _not_ handle the resulting
>     socket errors. (After all, that might be good information to have --
>     the admin just intentionally killed off a bunch of connections.)
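The second idea, the listener keeping a list of accepted descriptors and closing them all on a graceless stop, amounts to a few lines of bookkeeping. A hypothetical single-threaded sketch (a real listener would need locking around the table, since workers close descriptors too):

```c
#include <unistd.h>

#define MAX_TRACKED 1024

/* Hypothetical bookkeeping for the idea above: the listener records
 * every descriptor it accepts. */
static int tracked_fds[MAX_TRACKED];
static int ntracked = 0;

void track_fd(int fd) {
    if (ntracked < MAX_TRACKED)
        tracked_fds[ntracked++] = fd;
}

/* Graceless stop: close every tracked descriptor. In-flight reads and
 * writes on them start failing, which is exactly the signal the worker
 * threads would then see (and could choose not to treat as an error). */
int close_all_tracked(void) {
    int closed = 0;
    for (int i = 0; i < ntracked; i++)
        if (close(tracked_fds[i]) == 0)
            closed++;
    ntracked = 0;
    return closed;
}
```

The appeal is that it needs no cancellation and no per-syscall flag checks: forcing the descriptors closed unblocks the workers from wherever they are sleeping.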
> 
> -aaron
> 
