I've been thinking about strategies for building a
multiple-connection-per-thread MPM for 2.0.  It's
conceptually easy to do this:

 * Start with worker.

 * Keep the model of one worker thread per request,
   so that blocking or CPU-intensive modules don't
   need to be rewritten as state machines.

 * In the core output filter, instead of doing
   actual socket writes, hand off the output
   brigades to a "writer thread" (a rough sketch
   of this hand-off appears after this list).

 * As soon as the worker thread has sent an EOS
   to the writer thread, let the worker thread
   move on to the next request.

 * In the writer thread, use a big event loop
   (with /dev/poll or RT signals or kqueue, depending
   on platform) to do nonblocking writes for all
   open connections.
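
As a very rough illustration of the hand-off, the async
variant of the core output filter might look something
like this.  The writer_queue_push() call is invented for
illustration; it stands in for whatever mechanism queues
the brigade for the writer thread, and the rest is just
the standard output filter interface:

    #include "httpd.h"
    #include "util_filter.h"

    /* Hypothetical async variant of the core output filter.
     * writer_queue_push() is made up: it would hand the brigade
     * off to the writer thread's per-connection queue. */
    static apr_status_t async_core_output_filter(ap_filter_t *f,
                                                 apr_bucket_brigade *bb)
    {
        conn_rec *c = f->c;

        /* No socket writes here; just queue the brigade and return,
         * so that once the worker thread has queued the EOS it can
         * move on to the next request. */
        writer_queue_push(c, bb);

        return APR_SUCCESS;
    }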

This would allow us to use a much smaller number of
worker threads for the same amount of traffic
(at least for typical workloads in which the network
write time constitutes the majority of each request's
duration).

The problem, though, is that passing brigades between
threads is unsafe:

 * The bucket allocator alloc/free code isn't
   thread-safe, so bad things will happen if the
   writer thread tries to free a bucket (that's
   just been written to the client) at the same
   time that a worker thread is allocating a new
   bucket for a subsequent request on the same
   connection.

 * If we delete the request pool when the worker
   thread finishes its work on the request, the
   pool cleanup will close the underlying objects
   for the request's file/pipe/mmap/etc buckets.
   When the writer thread tries to output these
   buckets, the writes will fail.

There are other ways to structure an async MPM, but
in almost all cases we'll face the same problem:
buckets that get created by one thread must be
delivered and then freed by a different thread, and
the current memory management design can't handle
that.

The cleanest solution I've thought of so far is:

 * Modify the bucket allocator code to allow
   thread-safe alloc/free of buckets.  For the
   common cases, it should be possible to do
   this without mutexes by using apr_atomic_cas()-
   based spin loops; a rough sketch follows this
   list.  (There will be at most two threads
   contending for the same allocator--one worker
   thread and the writer thread--so the amount
   of spinning should be minimal.)

 * Don't delete the request pool at the end of
   a request.  Instead, delay its deletion until
   the last bucket from that request is sent.
   One way to do this is to create a new metadata
   bucket type that stores the pointer to the
   request pool.  The worker thread can append
   this metadata bucket to the output brigade,
   right before the EOS.  The writer thread then
   reads the metadata bucket and deletes (or
   clears and recycles) the referenced pool after
   sending the response (sketched below).  This
   would mean, however,
   that the request pool couldn't be a subpool of
   the connection pool.  The writer thread would have
   to be careful to clean up the request pool(s)
   upon connection abort.
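
For the allocator change, here is the kind of CAS loop I
have in mind for the per-size-class free lists.  This is
only a sketch: the node/freelist structures are stand-ins
for the allocator's real internal structures, and it
assumes a pointer-sized compare-and-swap (apr_atomic_casptr()
or equivalent) is available:

    #include "apr_atomic.h"

    struct node {
        struct node *next;
    };

    struct freelist {
        struct node *volatile head;   /* one free list per size class */
    };

    /* Push a freed block; may be called by either thread. */
    static void freelist_push(struct freelist *fl, struct node *n)
    {
        struct node *old;
        do {
            old = fl->head;
            n->next = old;
        } while (apr_atomic_casptr((volatile void **)&fl->head, n, old)
                 != old);
    }

    /* Pop a block for reuse; returns NULL if the list is empty and
     * a fresh block must be allocated from the system. */
    static struct node *freelist_pop(struct freelist *fl)
    {
        struct node *old;
        do {
            old = fl->head;
            if (old == NULL) {
                return NULL;
            }
        } while (apr_atomic_casptr((volatile void **)&fl->head,
                                   old->next, old) != old);
        return old;
    }

(A real version would also need to guard the pop against the
ABA problem, e.g. by pairing the head pointer with a change
counter.)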
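
And for the request-pool bucket, the usage might look
roughly like this.  The pool_lifetime_* helpers (and the
POOL_LIFETIME_BUCKET_IS check) are invented here; the real
patch would define a proper metadata bucket type next to
the existing ones.  r, c, and bb are the usual request_rec,
conn_rec, and output brigade:

    /* Worker thread, at the end of the request: tag the brigade
     * with the request pool so the writer thread can dispose of
     * it once the response has been written. */
    apr_bucket *b = pool_lifetime_bucket_create(r->pool,
                                                c->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_eos_create(c->bucket_alloc));

    /* Writer thread, after the last byte for this request has been
     * written to the client: find the pool-lifetime bucket and
     * recycle the pool it carries. */
    apr_bucket *e;
    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        if (POOL_LIFETIME_BUCKET_IS(e)) {
            apr_pool_t *rpool = pool_lifetime_bucket_get(e);
            apr_pool_destroy(rpool);   /* or clear and recycle it */
        }
    }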

I'm eager to hear comments from others who have looked
at the async design issues.

Thanks,
Brian



