On 19.12.2025 at 17:02, Aleksandar Lazic wrote:
Hi.

On 2025-12-19 (Fr.) 14:08, Hank Bowen wrote:


On 19.12.2025 at 13:32, Amaury Denoyelle wrote:
On Fri, Dec 19, 2025 at 01:10:16PM +0100, Hank Bowen wrote:

[...]
OK, but the quoted fragment suggests that it is possible to have a single haproxy <-> server TCP connection carrying streams (in the case of HTTP/2 - HTTP/2 streams, as I understand) belonging to different sessions (although it would require at least the "aggressive" mode of http-reuse). But given that a connection is recognized as idle only after all its requests have been sent and their responses fully received, I cannot see how that would be possible.

Protocols such as HTTP/2 support multiplexing, i.e. multiple
requests/responses can be exchanged in parallel. Thus, even if an HTTP/2
connection is not idle, it can be reused unless there are already too
many active streams on it. Such connections are labelled internally as
"available".

"Thus, even if an HTTP/2 connection is not idle, it can be reused" - OK, everything is clear, that is the the crucial part, I assumed that each haproxy <-> server TCP connection can only be reused by another session if it is idle. Thank you both, Willy Tarreau and you for comprehensive explanations!

BTW, this behavior is not identical for QUIC, as that protocol does not
suffer from the same head-of-line blocking limitation, hence a
connection can be used in parallel by multiple sessions even at the
http-reuse safe level.
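
(For reference only, here is a minimal configuration sketch of the directives being discussed; the backend name, server name and address are made up for illustration, and whether H2 to the servers fits a given setup is a separate question:)

    backend be_app
        # Reuse server-side connections across sessions even when they are
        # not idle; with an HTTP/2 connection to the server, streams from
        # different sessions can then share one TCP connection.
        http-reuse aggressive

        # Hypothetical server entry; "alpn h2" negotiates HTTP/2 over TLS
        # to the backend so requests can be multiplexed on one connection.
        server app1 192.0.2.10:443 ssl verify none alpn h2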


At the risk of confusing everyone, let me also suggest taking a look at HTX, HAProxy's internal, protocol-agnostic representation of a request :-)

https://www.haproxy.com/blog/haproxy-1-9-has-arrived

https://www.haproxy.com/blog/sanitizing-http1-a-technical-deep-dive-into-haproxys-htx-abstraction-layer


Thanks! I'll try to read that, as it seems very interesting, although perhaps a somewhat difficult read.

However, having touched on the subject of HTTP/2, I'm wondering whether I correctly understand why, with http-reuse set to "aggressive" or "always", one client can cause a head-of-line blocking problem for the rest of the clients. Is it that the connection's TCP buffer then fills up significantly, so when other (fast) clients request data, haproxy can only read as much as the remaining space in that TCP buffer allows, which is little, and therefore has to perform the transfer in many separate reads, each of which carries some overhead (compared to the situation where all the data is read into haproxy's TCP buffer at once)?

If we have a sequence of frames from the server and they are destined for different clients, haproxy does not have to wait for the n-th frame to be sent to one client before sending the (n+1)-th frame to another client, am I right?
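
In case it helps to make the question concrete, here is a rough sketch of settings that, as far as I understand, could limit how much a single server-side connection ends up being shared (the value and names are purely illustrative):

    global
        # Cap the number of concurrent streams multiplexed on a single H2
        # connection (20 is an arbitrary value chosen for illustration;
        # the default is higher).
        tune.h2.max-concurrent-streams 20

    backend be_app
        # The default "safe" policy is more conservative about putting a
        # new session onto a connection that is already in use by another
        # one (see the explanation quoted above).
        http-reuse safe
        server app1 192.0.2.10:443 ssl verify none alpn h2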


I also have some more questions, although I'm not sure whether it's best to send them here or to start a new thread; they are rather closely related to this discussion.


