Hi Jacob, Helmut!

2017-07-06 20:54 GMT+02:00 Jacob Champion <champio...@gmail.com>:

> On 07/06/2017 11:13 AM, Jim Jagielski wrote:
>
>> works 4 me...
>>
>
> Doesn't for me. E.g. with a script like
>
> <?php
>   print("hi!\n")
>   flush();
>   sleep(1);
>   print("hi!\n");
> ?>
>
> it takes 1 second to receive a single chunk with both lines in it.
>
> From a quick skim I assume this is because we don't use nonblocking
> sockets in the proxy implementation. (There's even a note in mod_proxy_fcgi
> that says, "Yes it sucks to [get the actual data] in a second recv call,
> this will eventually change when we move to real nonblocking recv calls.")


I did a quick check from my side too, so I'm not sure if this makes sense.
IIUC the comment is about reading the fixed-size FCGI record header from
the backend in one call (get_data_full(..., AP_FCGI_HEADER_LEN)), and then
getting the record's payload (the [actual data]) in a second one.
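
To make the two-step read concrete, here is a rough sketch (not the actual
mod_proxy_fcgi code, and the function names are invented) of reading one
FastCGI record: first the fixed 8-byte record header, then the payload
whose size the header announces. It also shows why a loop around
apr_socket_recv is needed, since a single call may return fewer bytes than
requested:

  #include <apr_network_io.h>

  #define FCGI_HEADER_LEN 8

  /* keep calling apr_socket_recv until exactly 'want' bytes have arrived */
  static apr_status_t read_exact(apr_socket_t *s, char *buf, apr_size_t want)
  {
      apr_size_t got = 0;
      while (got < want) {
          apr_size_t len = want - got;
          apr_status_t rv = apr_socket_recv(s, buf + got, &len);
          if (rv != APR_SUCCESS) {
              return rv;            /* error or EOF before 'want' bytes */
          }
          got += len;
      }
      return APR_SUCCESS;
  }

  /* read one FCGI record: header first, then contentLength + padding */
  static apr_status_t read_one_record(apr_socket_t *s, char *body,
                                      apr_size_t body_max,
                                      apr_size_t *body_len)
  {
      unsigned char hdr[FCGI_HEADER_LEN];
      apr_size_t clen, plen;
      apr_status_t rv;

      rv = read_exact(s, (char *)hdr, FCGI_HEADER_LEN);
      if (rv != APR_SUCCESS) {
          return rv;
      }
      clen = (hdr[4] << 8) | hdr[5];  /* contentLengthB1 / contentLengthB0 */
      plen = hdr[6];                  /* paddingLength */
      if (clen + plen > body_max) {
          return APR_EINVAL;          /* caller's buffer is too small */
      }
      rv = read_exact(s, body, clen + plen);
      *body_len = clen;               /* padding bytes are read and discarded */
      return rv;
  }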

I checked mod_fcgi as Helmut suggested, and it seems to me that its -flush
feature simply means "flush every bit of data as soon as you receive it".
So I tested the following patch with Jacob's PHP example code, and it seems
to do what Helmut asked for (please correct me if I am wrong).

Caveat: I had to set output_buffering = Off in my php-fpm's php.ini config
file.

http://home.apache.org/~elukey/httpd-2.4.x-mod_proxy_fcgi-force_flush.patch

I am still not sure whether the get_data function (responsible for reading
the body after the headers in dispatch()) behaves this way only in my local
test, but it seems to simply call apr_socket_recv with a buffer length and
return whatever data comes back from the backend. IIUC apr_socket_recv
should return data even if it is less than the buffer size specified.
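
For reference, this is roughly the calling pattern I mean (sketch only, the
function name is made up, and the caller is assumed to pass a buffer of at
least AP_IOBUFSIZE bytes):

  #include "httpd.h"            /* AP_IOBUFSIZE */
  #include "apr_network_io.h"

  /* one non-looping read, the way I understand get_data to work */
  static apr_status_t read_whatever_is_there(apr_socket_t *backend_sock,
                                             char *buf, apr_size_t *len)
  {
      *len = AP_IOBUFSIZE;      /* ask for a full buffer... */
      return apr_socket_recv(backend_sock, buf, len);
      /* ...but on return *len holds only the bytes actually read, which
       * can be far fewer than AP_IOBUFSIZE */
  }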

So if what I wrote makes any sense, we could think about adding a new
directive to mod_proxy_fcgi that enables explicit flushing, so that admins
can tweak its behavior if needed?
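
Just to make the idea a bit more concrete, below is a rough sketch of how
the config-side wiring could look (the directive name ProxyFCGIForceFlush,
the struct, and the function names are all invented by me; the actual
flushing would still have to happen in the handler, e.g. by appending a
FLUSH bucket after each piece of backend data when the flag is on):

  #include "httpd.h"
  #include "http_config.h"

  typedef struct {
      int force_flush;   /* 1 = flush each chunk as soon as it arrives */
  } fcgi_flush_dconf;

  /* per-directory config constructor, defaulting to today's behaviour */
  static void *create_flush_dconf(apr_pool_t *p, char *dummy)
  {
      fcgi_flush_dconf *conf = apr_pcalloc(p, sizeof(*conf));
      conf->force_flush = 0;
      return conf;
  }

  static const char *set_force_flush(cmd_parms *cmd, void *cfg, int flag)
  {
      fcgi_flush_dconf *conf = cfg;
      conf->force_flush = flag;
      return NULL;
  }

  static const command_rec flush_cmds[] = {
      AP_INIT_FLAG("ProxyFCGIForceFlush", set_force_flush, NULL,
                   RSRC_CONF | ACCESS_CONF,
                   "Send data to the client as soon as the FCGI backend "
                   "returns it, instead of waiting for a full buffer"),
      {NULL}
  };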



> On 07/06/2017 11:08 AM, Helmut K. C. Tessarek wrote:
> > What is your take on this?
>
> If I'm honest, my brutally blunt take on it is "stop using HTTP to try to
> emulate push notifications within a single response; pretty much everything
> in the ecosystem is actively working against you at this point; responses
> are designed to be cacheable and deliverable as a unit, not as multiple
> pieces; and we've had *real* solutions like WebSocket for five years now
> rabble rabble rabble." But I don't actually think that's going to be the
> accepted answer.
>
> It probably makes sense to work on a nonblocking architecture for proxied
> responses in general.
>
> -Jacob
>

I completely agree with Jacob, this use case might not be the best fit for
HTTP :)

Luca
