> Am 13.05.2016 um 16:11 schrieb Eric Covener <cove...@gmail.com>:
> 
> On Fri, May 13, 2016 at 7:02 AM, Stefan Eissing
> <stefan.eiss...@greenbytes.de> wrote:
>> That would allow HTTP/2 processing to become fully async and it would no 
>> longer need its own worker thread pool, at least with mpm_event.
>> 
>> Thoughts?
> 
> One bit I am still ignorant of is all of the beam-ish stuff (the
> problem and the solution) and how moving the main connection from
> thread-to-thread might impact that.

The bucket beams have no thread affinity. What I describe in the comments as 
'red side' and 'green side' are just names for whichever thread is *currently* 
handling the red or green pool.

So, the red and green pools are fixed during the lifetime of a beam. The 
threads may vary. Whichever thread currently owns operations on the red 
pool, I call the red thread.
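
Maybe a sketch makes this concrete. In C, with invented names (the real 
h2_bucket_beam internals differ in detail), the point is that the beam pins 
pools and bucket lists, never threads:

    #include <apr_buckets.h>
    #include <apr_thread_mutex.h>
    #include <apr_thread_cond.h>

    /* Illustration only, not the actual struct. */
    typedef struct beam {
        apr_pool_t *red_pool;         /* fixed for the beam's lifetime */
        apr_pool_t *green_pool;       /* fixed for the beam's lifetime */
        apr_thread_mutex_t *lock;     /* taken by whatever thread calls in */
        apr_thread_cond_t *change;    /* signaled when green buckets go away */
        apr_bucket_brigade *send_bb;  /* red buckets sent, not yet received */
        apr_bucket_brigade *hold_bb;  /* red buckets whose green twins still live */
        apr_bucket_brigade *purge_bb; /* red buckets safe to destroy */
    } beam;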

The beam just manages the lifetimes of buckets and their pool affinity without 
unnecessary copying. The buckets being sent, the red buckets, are only ever 
manipulated in calls from the sending side. The buckets received, the green 
buckets, are separate instances, so they can be safely manipulated by the 
receiver (split/read/destroy).
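
In API terms (names modeled on the beam functions, signatures simplified) 
that looks roughly like:

    /* red thread: the only place red buckets are ever touched */
    h2_beam_send(beam, red_bb, APR_BLOCK_READ);

    /* green thread: green_bb gets filled with new bucket instances that
       the receiver may split/read/destroy without touching the red ones */
    h2_beam_receive(beam, green_bb, APR_NONBLOCK_READ);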

The trick is that a green bucket may expose red data when read. That means the 
red bucket must stay around at least as long as the green one. So, a green 
bucket calls its beam when its last share gets destroyed. The beam then knows 
that the corresponding red bucket is no longer needed.
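
Sketched (invented names again; the real green bucket type differs), that 
notification is just the destroy hook of the shared green bucket:

    /* Private data of a green bucket; a sketch, not the real type. */
    typedef struct {
        apr_bucket_refcount refcount; /* green buckets can be split/shared */
        beam *b;
        apr_bucket *red;              /* red bucket whose data we expose */
    } green_data;

    static void green_destroy(void *data)
    {
        green_data *gd = data;
        /* apr_bucket_shared_destroy() is non-zero only for the last share */
        if (apr_bucket_shared_destroy(gd)) {
            apr_thread_mutex_lock(gd->b->lock);
            APR_BUCKET_REMOVE(gd->red);                        /* off hold list */
            APR_BRIGADE_INSERT_TAIL(gd->b->purge_bb, gd->red); /* onto purge list */
            apr_thread_cond_broadcast(gd->b->change);          /* wake a blocked shutdown */
            apr_thread_mutex_unlock(gd->b->lock);
            apr_bucket_free(gd);
        }
    }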

Red buckets that are no longer needed are placed on a 'purge' list. This list 
gets cleared, and the buckets destroyed, on the next call from the sending 
side, the red side.
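
As a sketch, the purge is a few lines that run under the beam lock whenever 
the red side calls in:

    /* Red side only: destroy red buckets whose green twins are gone. */
    static void beam_purge(beam *b)
    {
        while (!APR_BRIGADE_EMPTY(b->purge_bb)) {
            /* removes the bucket from the brigade and destroys it */
            apr_bucket_delete(APR_BRIGADE_FIRST(b->purge_bb));
        }
    }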

The fragile part is the cleanup of the red pool. That must not happen while 
the beam still has outstanding green buckets. For this, the beam has a 
shutdown method that may block until no green buckets remain.
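
Roughly, and again with invented names (a sketch of the idea only):

    static apr_status_t beam_shutdown(beam *b)
    {
        apr_status_t rv = APR_SUCCESS;
        apr_thread_mutex_lock(b->lock);
        /* wait until no green bucket can expose red data any more;
           green_destroy() broadcasts b->change as shares go away */
        while (!APR_BRIGADE_EMPTY(b->hold_bb)
               && (rv = apr_thread_cond_wait(b->change, b->lock)) == APR_SUCCESS) {
            /* re-check after each wakeup */
        }
        beam_purge(b); /* now the red pool may safely be cleaned up */
        apr_thread_mutex_unlock(b->lock);
        return rv;
    }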

> Maybe you could have a pipe() with a writing end in each slave, and
> read by the master, that the event loop watches to re-schedule the
> master?

Hmm, I do not see the need. Those pipe()s would only generate events when 
another part of the same process writes to them, and that part could just as 
well signal the event loop directly. Unless we are talking about spreading 
HTTP/2 processing across multiple processes and using the pipes to transfer 
the actual data. And I am not convinced that this is a good idea.

And signaling would also need to go the other direction: from master to slave. 
Which would then require 2 (4?) file handles per active request, I assume?

-Stefan
