Ok, tests show that when a handler returns SUSPENDED, no EOS or EOR buckets
are written, but ap_process_request() nevertheless passes a FLUSH down the
connection filters.

Maybe that should not be done on a suspended connection, not sure.
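
For reference, a handler doing this could look roughly like the sketch
below - module and handler names are invented, and the resume mechanics
just mirror what mod_dialup does with ap_mpm_register_timed_callback(),
so this needs the event MPM:

/* Sketch of a SUSPENDED-returning handler; all names invented. */
#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "http_request.h"
#include "mpm_common.h"

static void resume_cb(void *baton)
{
    request_rec *r = baton;
    conn_rec *c = r->connection;
    apr_bucket_brigade *bb = apr_brigade_create(r->pool, c->bucket_alloc);

    /* Produce the (rest of the) response on the resumed request. */
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_immortal_create("resumed\n", 8, c->bucket_alloc));
    APR_BRIGADE_INSERT_TAIL(bb, apr_bucket_eos_create(c->bucket_alloc));
    ap_pass_brigade(r->output_filters, bb);

    /* This is what finally sends the EOR down the connection filters
     * and lets the request free itself. */
    ap_process_request_after_handler(r);
}

static int suspend_handler(request_rec *r)
{
    if (!r->handler || strcmp(r->handler, "suspend-test")) {
        return DECLINED;
    }
    ap_set_content_type(r, "text/plain");

    /* Come back to this request in ~1s; until then the worker thread
     * is free for other connections. */
    ap_mpm_register_timed_callback(apr_time_from_sec(1), resume_cb, r);

    /* No EOS/EOR written here, and the core skips them as well when it
     * sees SUSPENDED - only the trailing FLUSH from ap_process_request()
     * still goes out, as observed above. */
    return SUSPENDED;
}

static void register_hooks(apr_pool_t *p)
{
    ap_hook_handler(suspend_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA suspend_test_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    register_hooks
};

With that, the request only finishes once the callback has passed the EOS
and the core's EOR; in between the worker thread is free.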

-Stefan

> Am 11.03.2016 um 12:02 schrieb Stefan Eissing <[email protected]>:
> 
> Comments to the later parts, re suspending requests.
> 
>> Am 10.03.2016 um 22:24 schrieb Yann Ylavic <[email protected]>:
>> [...]
>> On Thu, Mar 10, 2016 at 5:38 PM, Stefan Eissing
>> <[email protected]> wrote:
>>> 
>>> Backend Engines
>>> ---------------
>>> How can we do that? Let's discuss one master http2 connection with
>>> no proxy request ongoing. The next request handled by proxy_http2 is
>>> the "first" - it sees no other ongoing request against the same
>>> backend. It registers itself with the master http2 session for that
>>> backend, performs the request in the usual way, de-registers itself
>>> again and returns. Same as with HTTP/1, plus some registration dance
>>> of what I named a "request engine".
>>> 
>>> If another worker handles a proxy_http2 request while the "first"
>>> engine is ongoing, it sees that someone has already registered for
>>> the backend it also needs. It then "hands over" the request_rec to
>>> the "first" engine and immediately returns as if done.
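
For illustration, the hand-over could look roughly like this - every name
below is invented, none of it is an existing API:

#include "httpd.h"

/* Hypothetical sketch only: an opaque "request engine" plus made-up
 * helpers for the registration dance. */
typedef struct h2_req_engine h2_req_engine;

h2_req_engine *engine_find(request_rec *r, const char *backend);
h2_req_engine *engine_register(request_rec *r, const char *backend);
void engine_push(h2_req_engine *ngn, request_rec *r);  /* hand-over */
int  process_via_engine(h2_req_engine *ngn, request_rec *r);
void engine_unregister(h2_req_engine *ngn);
const char *backend_key_of(request_rec *r);

static int proxy_http2_handler(request_rec *r)
{
    const char *backend = backend_key_of(r);
    h2_req_engine *ngn = engine_find(r, backend);
    int status;

    if (ngn) {
        /* Someone already drives an h2 connection to this backend:
         * hand the request_rec over and return as if we were done. */
        engine_push(ngn, r);
        return OK;
    }

    /* We are the "first": register an engine for this backend, process
     * the request ourselves, then de-register again. */
    ngn = engine_register(r, backend);
    status = process_via_engine(ngn, r);
    engine_unregister(ngn);
    return status;
}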
>> 
>> Which sends out the EOR since the core thinks the request is done, right?
>> If so, wouldn't returning SUSPENDED be more appropriate (that would
>> free the worker thread for some other work)?
> 
> Now I learned something new! Have to try this.
> 
> So, a handler may return SUSPENDED, which inhibits the EOR write,
> puts the connection into state SUSPENDED as well and leaves the
> process_connection loop - but only with an async MPM. Right?
> 
> I assume that the resume handling is then supposed to happen in the
> in/out filters only. How much of that is available in 2.4? I saw
> that there are several improvements for that part in trunk...
> 
> But on the h2 slave connections, this could work in both branches. I 
> will give this a shot.
> 
>>> [...]
>>> 
>>> When the proxy_http2 worker for that request returns, mod_http2
>>> notices the frozen request, thaws it and signals the backend connection
>>> worker that the request is ready for processing. The backend worker
>>> picks it up, handles it and *closes the output*, which writes out
>>> the buckets set aside earlier. This also writes the EOR and the
>>> request deallocates itself. During this time, all the input/output
>>> filters etc. remain in place and keep working.
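
Roughly, the "closes the output" step boils down to something like this
(the helper name is made up; the bucket calls are the real ones):

#include "httpd.h"
#include "util_filter.h"

/* Made-up helper: flush the data set aside for the frozen request,
 * followed by the EOR whose destruction finally frees the request. */
static apr_status_t close_request_output(request_rec *r,
                                         apr_bucket_brigade *setaside)
{
    conn_rec *c = r->connection;

    APR_BRIGADE_INSERT_TAIL(setaside,
        ap_bucket_eor_create(c->bucket_alloc, r));
    APR_BRIGADE_INSERT_TAIL(setaside,
        apr_bucket_flush_create(c->bucket_alloc));
    return ap_pass_brigade(c->output_filters, setaside);
}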
>> 
>> With my description above, that would translate to: before the worker
>> thread that ends up handling the last bytes of a response sends them,
>> it also flushes the EOR.
>> 
>> Each worker thread could still do as much work as possible (as you
>> describe) to save some queuing in the MPM, but that's possibly a further
>> optimization.
> 
> Still learning about all the interactions between MPM, event and core.
> That is quite a lot of code...
> 
> Cheers,
> 
>  Stefan
