Hi Willy.

On 17.01.2019 at 04:25, Willy Tarreau wrote:
> Hi Aleks,
> 
> On Wed, Jan 16, 2019 at 11:52:12PM +0100, Aleksandar Lazic wrote:
>> For service routing, the standard haproxy content routing options
>> (path, header, ...) are possible, right?
> 
> Yes absolutely.
> 
>> If someone wants to route based on grpc content, they can use lua with the
>> body content, right?
>>
>> For example this library https://github.com/Neopallium/lua-pb
> 
> Very likely, yes. If you want to inspect the body you simply have to
> enable "option http-buffer-request" so that haproxy waits for the body
> before executing rules. From there, indeed you can pass whatever Lua
> code on req.body. I don't know if there would be any value in trying
> to implement some protobuf converters to decode certain things natively.
> What I don't know is if the contents can be deserialized even without
> compiling the proto files.

Agreed. It would be interesting to hear a good use case and a solution for that;
at least haproxy has the possibility to do it ;-)
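
For the archives, a rough sketch of what such a setup could look like (untested;
the Lua fetch name and the protobuf part are just placeholders, only "option
http-buffer-request" and req.body come from your explanation):

    # haproxy.cfg
    global
        lua-load /etc/haproxy/grpc_route.lua

    frontend fe_grpc
        mode http
        option http-buffer-request   # wait for the body before running rules
        use_backend be_users if { lua.grpc_service -m str users }
        default_backend be_default

    -- grpc_route.lua
    core.register_fetches("grpc_service", function(txn)
        local body = txn.f:req_body()   -- the buffered request body
        if body == nil or #body == 0 then return "" end
        -- here lua-pb could deserialize the protobuf payload; this placeholder
        -- just pretends every request belongs to the "users" service
        return "users"
    end)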

>>> That's about all. With each major release we feel like version dot-2
>>> works pretty well. This one is no exception. We'll see in 6 months if
>>> it was wise :-)
>>
>> So you would say I can use it in production with htx ;-)
> 
> As long as you're still a bit careful, yes, definitely. haproxy.org has
> been running it in production since 1.9-dev9 or so. Since 1.9.0 was
> released, we've had one crash a few times (fixed in 1.9.1) and two
> massive slowdowns due to non-expiring connections reaching the frontend's
> maxconn limit (fixed in 1.9.2).

Yep, agreed. In prod it's always good to keep an eye on it.

>> and the docker image is also updated ;-)
>>
>> https://hub.docker.com/r/me2digital/haproxy19
> 
> Thanks.
> 
>> Now that we have a separate protocol handling layer (htx), how difficult is
>> it to add `mode fast-cgi` like `mode http`?
> 
> We'd like to have this for 2.0. But it wouldn't be "mode fast-cgi" but
> rather "proto fast-cgi" on the server lines to replace the htx-to-h1 mux
> with an htx-to-fcgi one, because fast-cgi is another representation of
> HTTP. The "mode http" setting is what enables all HTTP processing
> (http-request rules, cookie parsing etc). Thus you definitely want to
> have it enabled.

Full Ack.

This means that, in the future, I can use QUIC+HTTP/3 => php-fpm with
haproxy ;-)

FastCGI isn't a bad protocol (IMHO), but sadly it never became as widespread as
http(s), even though it has multiplexing and keep-alive built in.
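
Something like the flow below is what I have in mind; the "proto fcgi" part is
purely hypothetical syntax to illustrate the idea of selecting an htx-to-fcgi
mux on the server line, the rest is normal config:

    frontend fe_main
        mode http
        use_backend be_php if { path_end .php }
        default_backend be_static

    backend be_php
        mode http
        # hypothetical: htx-to-fcgi mux on the server line, does not exist yet
        server fpm1 127.0.0.1:9000 proto fcgi

    backend be_static
        mode http
        server static1 127.0.0.1:8080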

>> I ask because php does not have a production-ready http implementation, but
>> it has a robust fastcgi process manager (php-fpm). There are several possible
>> solutions to add http to php (nginx+php-fpm, uwsgi+php-fpm, uwsgi+embedded
>> php), but all these solutions require an additional hop.
>>
>> My wish is to have such a flow.
>>
>> haproxy -> *.php          => php-fpm
>>         -> *.static-files => nginx,h2o
> 
> It's *exactly* what I've been wanting for a long time as well. Mind you
> that Thierry implemented some experimental fast-cgi code many years ago
> in 1.3! By then we were facing some strong architectural limitations,
> but now I think we should have everything ready thanks to the muxes.

Oh wow 1.3. 8-O

In 2014 Baptiste wrote a blog post on how to do health checks for php-fpm, so it
looks like fastcgi has been on the table for a long time.

https://alohalb.wordpress.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/
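
For reference, the probe from that post boils down to something like the sketch
below (my own reconstruction of a FCGI_GET_VALUES record asking for
FCGI_MPXS_CONNS, so please double-check the bytes before relying on it):

    backend be_php_fpm
        mode tcp
        option tcp-check
        # FCGI_GET_VALUES record: version 1, type 9, request id 0,
        # content length 17, padding 7
        tcp-check send-binary 0109000000110700
        # name-value pair: name length 15, value length 0, name "FCGI_MPXS_CONNS"
        tcp-check send-binary 0f00464347495f4d5058535f434f4e4e53
        # 7 bytes of padding
        tcp-check send-binary 00000000000000
        # expect a FCGI_GET_VALUES_RESULT record (version 1, type 10) back
        tcp-check expect binary 010a
        server fpm1 127.0.0.1:9000 check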

Just in case it's interesting, here are some links to receiver implementations
in popular servers.

https://github.com/php-src/php/blob/master/main/fastcgi.h
https://github.com/php-src/php/blob/master/main/fastcgi.c

https://github.com/unbit/uwsgi/blob/master/proto/fastcgi.c
https://github.com/unbit/uwsgi/blob/master/plugins/router_fcgi/router_fcgi.c

https://golang.org/src/net/http/fcgi/fcgi.go
https://golang.org/src/net/http/fcgi/child.go

https://docs.rs/crate/fastcgi/1.0.0/source/src/lib.rs

All of them look at the keep-alive flag but not at the multiplexing flag.

Python is different, as always; it mostly uses wsgi, AFAIK.
https://wsgi.readthedocs.io/en/latest/

uwsgi also has its own protocol:
https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html

>> I have taken a look at the fcgi protocol, but sadly I'm not a good enough
>> programmer for that task. I can offer to test the implementation.
> 
> That's good to know, thanks!
> 
> Cheers,
> Willy

Regards
Aleks
