Hi List,

In the process of figuring out bandwidth limits (previous thread), we stumbled 
on a much more interesting problem: under some conditions (especially high 
request rates), H3 connections simply stall. This appears to be more prevalent 
in some browsers than others (Safari on iOS is the worst offender) and seems 
to coincide with a high packet_loss counter in `show quic`; I'm not sure 
whether that's cause or effect. I've tried raising the Retry threshold (to 
1000) and max-frame-loss (to 100), with no change.
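For concreteness, this is roughly how those tunables sit in our global section (assuming the standard `tune.quic.retry-threshold` and `tune.quic.max-frame-loss` global keywords; the values are the ones we tested):

```
global
        tune.quic.socket-owner connection
        # Raised well above the default; Retry only engages past this many
        # half-open connections, so it shouldn't trigger at our rates anyway.
        tune.quic.retry-threshold 1000
        # Raised above the default; the connection is only killed after a
        # frame has been detected as lost this many times.
        tune.quic.max-frame-loss 100
```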

On the client side, this manifests as requests spinning indefinitely until 
HAProxy expires the connection or the client gives up. A refresh usually 
fixes it, but not always. HAProxy doesn't crash and still shows the 
connection in `show quic`, but no packets flow (and the counters in `show 
quic`, other than packet loss, are all zero). Some browsers smooth this over 
by establishing a new connection or falling back to H2 (but not always). It 
occurs over both IPv4 and IPv6, across networks and devices, even on separate 
continents. 😁 All backend connections are HTTP/1.1.

All affected log entries end with CD-- or CL-- termination states and 
durations roughly matching the QUIC connection timeout (~30s).

What output would be helpful for debugging this? I can send over a JSFiddle 
that makes the behavior fairly easy to reproduce on iOS.
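If QUIC traces would help, I can capture them via the runtime API while reproducing the stall. A sketch of what I had in mind (assumes the stats socket lives at /var/run/haproxy.sock and socat is installed; paths are ours, adjust as needed):

```shell
# Dump the QUIC connection table to identify the stalled connection.
echo "show quic" | socat stdio /var/run/haproxy.sock

# Enable developer-level QUIC traces into the buf0 ring, then read the
# ring back after triggering the stall.
echo "trace quic sink buf0; trace quic level developer; trace quic start now" \
  | socat stdio /var/run/haproxy.sock
echo "show events buf0" | socat stdio /var/run/haproxy.sock
```
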

Relevant config:

global
        tune.quic.socket-owner connection
frontend
        bind quic4@0.0.0.0:443,quic6@[::]:443 ssl crt <cert> allow-0rtt
        http-response set-header alt-svc "h3=\":443\";ma=86400"
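For what it's worth, H3 negotiation itself works from the command line; a curl built with HTTP/3 support connects fine (hostname below is a placeholder):

```shell
# Needs a curl built with HTTP/3 support ("curl -V" should list HTTP3).
# The verbose output shows which protocol was actually negotiated.
curl -v --http3 https://example.com/ -o /dev/null
```
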

I'm a bit at a loss as to what to try next, so we've disabled H3 for most 
traffic.

Thanks!

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder & CEO
stadiamaps.com
