Cliff Woolley wrote:

> On Sat, 31 Aug 2002, Brian Pane wrote:
>
>> Wouldn't it be sufficient to guarantee that:
>>   * each *bucket* can only be processed by one thread at a time, and
>>   * allocating/freeing buckets is thread-safe?
>
> No.  You'd also need to guarantee that all of the buckets sharing a
> private data structure (copies or splits of a single bucket) were, as a
> group, processed by only one thread at a time (and those buckets can
> even exist across multiple brigades).


I *think* this one can be solved by making the increment/decrement of the bucket refcount atomic.
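To make that concrete, here's a minimal sketch of what an atomic refcount could look like, assuming the apr_atomic_*32 API and a 32-bit counter.  The helper names (shared_addref/shared_release) are hypothetical, not existing APR functions; the stock shared-bucket code just uses plain ++/-- on the assumption that only one thread touches the shared buckets at a time.

#include "apr_atomic.h"

/* Hypothetical refcount for a shared bucket's private data.  The real
 * apr_bucket_refcount uses a plain integer and non-atomic ++/--. */
typedef struct {
    volatile apr_uint32_t refcount;
} shared_refcount;

/* Taken when a bucket is copied or split: another bucket now points at
 * the same private data. */
static void shared_addref(shared_refcount *r)
{
    apr_atomic_inc32(&r->refcount);
}

/* Dropped when a bucket is destroyed.  apr_atomic_dec32() returns zero
 * only when the count reaches zero, i.e. this was the last reference
 * and the caller may free the underlying file/mmap/heap data. */
static int shared_release(shared_refcount *r)
{
    return (apr_atomic_dec32(&r->refcount) == 0);
}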

> You'd also have to guarantee that no buckets are added to or removed
> from a given brigade by more than one thread at a time.

This part is easy to guarantee. When the worker thread passes buckets to the writer thread, it hands off a whole brigade at once, so that ownership of the brigade passes from one thread to another.
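For illustration, here is a minimal sketch of that hand-off using apr-util's apr_queue_t as the worker-to-writer channel.  The queue and helper names are hypothetical; the point is simply that the worker pushes the whole brigade and never touches it again, so only one thread owns it at any time.  (The brigade's pool and bucket allocator would also have to outlive the hand-off, which this sketch glosses over.)

#include "apr_queue.h"
#include "apr_buckets.h"

/* Worker side: give the whole brigade to the writer thread.  After a
 * successful push the worker must not touch bb again. */
static apr_status_t hand_off_brigade(apr_queue_t *writer_queue,
                                     apr_bucket_brigade *bb)
{
    return apr_queue_push(writer_queue, bb);
}

/* Writer side: block until a brigade arrives, then take ownership. */
static apr_status_t next_brigade(apr_queue_t *writer_queue,
                                 apr_bucket_brigade **bb)
{
    void *item;
    apr_status_t rv = apr_queue_pop(writer_queue, &item);
    if (rv == APR_SUCCESS) {
        *bb = item;
    }
    return rv;
}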

> When you add up the implications of all these things, it basically ends
> up with the whole request being in one thread at a time.

If we can overcome this limitation, it will be straightforward to build an async MPM. If not, the fallback solution (sketched in code below) would be:

 * Each worker thread does its own network writes, up until the
   point where it sees EOS.
 * At that point, the worker thread hands the remaining brigade
   off to the writer thread.  (In doing so, it's basically transferring
   the entire request to the writer thread.)

This would give us the benefits of async writes for static files,
where the core_output_filter could immediately transfer the
response_header+file_bucket+EOS brigade to the writer thread and
let the worker thread go on to work on other requests.  For large
streamed responses, though, the worker would end up writing almost
the entire response.
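Roughly, the fallback logic in a core_output_filter-like function might look like the sketch below.  The names async_writer_queue and write_brigade() are made up for illustration (write_brigade() stands in for the existing blocking write path); the brigade-walking macros are the real apr_buckets.h ones.

#include "apr_buckets.h"
#include "apr_network_io.h"
#include "apr_queue.h"

/* Stand-in for the existing synchronous write path (hypothetical). */
static apr_status_t write_brigade(apr_socket_t *sock,
                                  apr_bucket_brigade *bb);

static apr_status_t fallback_output(apr_queue_t *async_writer_queue,
                                    apr_socket_t *sock,
                                    apr_bucket_brigade *bb)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        if (APR_BUCKET_IS_EOS(b)) {
            /* The end of the response is in hand: transfer the whole
             * remaining brigade (e.g. header + file bucket + EOS for a
             * static file) to the writer thread and let this worker
             * move on to another request. */
            return apr_queue_push(async_writer_queue, bb);
        }
    }

    /* No EOS yet (e.g. a large streamed response): the worker writes
     * this chunk itself, as it does today. */
    return write_brigade(sock, bb);
}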

Brian



