> On 07.10.2015 at 17:55, Graham Leggett <minf...@sharp.fm> wrote:
> 
> On 07 Oct 2015, at 4:30 PM, Stefan Eissing <stefan.eiss...@greenbytes.de> 
> wrote:
>> [...]
>> Due to the non-multithreadability of apr_buckets, no buckets are ever moved 
>> across threads. Non-meta buckets are read, meta buckets are deleted. That 
>> should work fine for EOR buckets, as all data has been copied already when 
>> they arrive.
> 
> A key part of the httpd v2.x design was to achieve zero copy - ideally we 
> should be using bucket setaside to pass the bucket between pools rather than 
> copying the buckets.
> 
> Can you explain "non-multithreadability of apr_buckets" in more detail? I 
> take it this is the problem with passing a bucket from one allocator to 
> another?
> 
> If so then the copy makes more sense.

Yes, I wrote about this on the list a while ago. When a bucket is destroyed, its 
allocator tries to put it back on its free list, and there is no locking 
protecting that list.
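To make that concrete, here is a minimal sketch (not mod_h2 code; copy_to_master() 
is a hypothetical helper) of the copy approach: non-meta buckets are read in the 
slave thread and their data is rewritten into a brigade owned by the master's own 
allocator, so no bucket and no free list is ever shared across threads.

#include "apr_buckets.h"

/* Hedged sketch, not mod_h2 code: copy_to_master() is a hypothetical
 * helper. Data of non-meta buckets is read on the slave side and
 * rewritten into a brigade that uses the master connection's bucket
 * allocator; the slave's buckets themselves never cross the thread
 * boundary, so the unsynchronized free lists are never shared. */
static apr_status_t copy_to_master(apr_bucket_brigade *slave_bb,
                                   apr_bucket_brigade *master_bb)
{
    apr_bucket *b;
    for (b = APR_BRIGADE_FIRST(slave_bb);
         b != APR_BRIGADE_SENTINEL(slave_bb);
         b = APR_BUCKET_NEXT(b)) {
        if (!APR_BUCKET_IS_METADATA(b)) {
            const char *data;
            apr_size_t len;
            apr_status_t rv = apr_bucket_read(b, &data, &len,
                                              APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            /* creates HEAP buckets from master_bb's allocator */
            apr_brigade_write(master_bb, NULL, NULL, data, len);
        }
        /* meta buckets (FLUSH, EOS, EOR) are recreated or handled on
         * the master side, never moved */
    }
    return APR_SUCCESS;
}

That is also why EOR buckets are unproblematic here: by the time they arrive, all 
data they guard has already been copied.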

>> Stream pool destruction is synched with 
>> 1. slave connection being done and no longer writing to it
> 
> How do you currently know the slave connection is done?
> 
> Normally a connection is cleaned up by the MPM that spawned the connection, I 
> suspect you’ll need to replicate the same logic the MPMs use to tear down the 
> connection using the c->aborted and c->keepalive flags.
> 
> Crucially the slave connection needs to tell you that it’s done. If you kill 
> a connection early, data will be lost.
> 
> I suspect part of the problem is not implementing the algorithm that async 
> MPMs used to kick filters with data in them. Without this kick, data in the 
> slave stacks will never be sent. In theory, when the http2 filter receives a 
> kick, it should pass the kick on to all slave connections.

I am not sure what you mean by that "kick". I'd have to look at your async 
filter design some more...

>> 2. h2 stream having been written out to the client or otherwise being closed
>> Only after 1+2 have happened will this memory be reclaimed.
> 
> In the case of the h2 stream you probably need to implement the same 
> mechanism with c->aborted and c->keepalive so the MPM cleans up the h2 stream 
> for you.
> 
> You would need to implement cleanups on the slave connections which would 
> then mark the master h2 stream for cleanup by the MPM based on whether the 
> number of slave connections has reached zero.

I think you misunderstood me. mod_h2 uses ap_process_connection() just like the 
core does.
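As a rough sketch (h2_slave_run() is a hypothetical name, and the csd handling is 
simplified), the slave connection is handed to the standard entry point, so "slave 
connection being done" simply means that this call has returned:

#include "httpd.h"
#include "http_connection.h"

/* Hedged sketch, not actual mod_h2 code: h2_slave_run() is a
 * hypothetical name. The slave conn_rec goes through the same entry
 * point the core uses for MPM connections, which runs the
 * pre_connection and process_connection hooks and honours
 * c->aborted. When this returns, condition 1 above ("slave
 * connection being done and no longer writing") is satisfied. */
static void h2_slave_run(conn_rec *slave, void *csd)
{
    ap_process_connection(slave, csd);
}

So there is no separate teardown protocol to replicate on the slave side; the 
stream pool is only destroyed once both conditions 1 and 2 hold.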

Maybe these async changes just shine a light on a bug that has always been there 
but never showed up due to timing. I will look some more tomorrow. Originally I 
planned to do something else, but I am running out of subversion branches where I 
can work...

//Stefan

