On Wed, Apr 27, 2016 at 7:28 AM, Stefan Eissing
<stefan.eiss...@greenbytes.de> wrote:
>
>
>> Am 26.04.2016 um 23:57 schrieb Yann Ylavic <ylavic....@gmail.com>:
>>
>> On Tue, Apr 26, 2016 at 4:49 PM, Stefan Eissing
>> <stefan.eiss...@greenbytes.de> wrote:
>>> Today I Learned the difference between writing
>>>  DATA + META
>>> and
>>>  DATA + META + FLUSH
>>> to the core output filters. I am astonished that
>>> my code ever worked.
>>>
>>> Seriously, what is the reason for this kind of
>>> implementation?
>>
>> Do you mean why META could be destroyed before (setaside-)DATA, in
>> remove_empty_buckets()?
>
> Yes. That was unexpected. My understanding of the pass_brigade contract was 
> that buckets get destroyed in order of appearance (might be after the call).

Actually, after re-reading the code, the core output filter looks good to me.
remove_empty_buckets() will only remove buckets up to the first DATA
bucket, and indeed httpd couldn't work otherwise.

>
> The h2 use case is to pass a meta bucket that, when destroyed, signals the 
> end of life for data belonging to one stream. So all buckets before that one 
> should have been either destroyed or setaside already.

That looks very similar to how EOR works.

>
> With http/1.x and EOR, my current reading of the code, this does not happen 
> since EORs are always FLUSHed.

No, EOR is never flushed explicitly (it is sent alone in the brigade by
ap_process_request_after_handler); the flush occurs if/when pipelining
stops (or MAX_REQUESTS_IN_PIPELINE is reached).
While requests are pipelined, so are responses (the usual case is no
pipelining, though, so EOR may look flushed, but not always...).

> And even if, releasing the request pool early will probably go unnoticed as 
> WRITE_COMPLETION will read all memory *before* a new pool is opened on the 
> conn pool.

I debugged pipelining a while ago and would have noticed it; accurate
ap_run_log_transaction() depends on this too.
This is not something that crashes httpd, at least :)

>
> This (again, if my reading is correct) does no longer hold in h2. request 
> (stream) starting and ending is happening all the time, conn child pools are 
> created/destroyed all the time.
>
> What I saw was memory nullified, by assumedly, new apr_pcallocs, that should 
> not have been freed already. With more likelihood the more load and the less 
> logging, of course.

So I'm rather thinking of a pool lifetime issue in h2's bucket
handling. Did you see the comment in ap_bucket_eor_create()? That case
does not seem to be handled in h2_beam_bucket_make() (if that's the
bucket you are talking about).
I'm not familiar with your recent Beam changes and don't know what
lifetime the bucket is supposed to have, but couldn't your issue be
that beam_bucket_destroy() is called too late?

Regards,
Yann.
