On Tue, Oct 10, 2023 at 10:03:32PM +0000, Lukas Tribus wrote:
> On Tue, 10 Oct 2023 at 20:22, Willy Tarreau <w...@1wt.eu> wrote:
> >
> > So at this point I'm still failing to find any case where this attack
> > hurts haproxy more than any of the benchmarks we're routinely inflicting
> > it, given that it acts exactly like a client configured with a short
> > timeout (e.g. if you configure haproxy with "timeout server 1" and
> > have an h2 server, you will basically get the same traffic pattern).
> 
> This is pretty much the situation with nginx as well:
> 
> https://mailman.nginx.org/pipermail/nginx-devel/2023-October/S36Q5HBXR7CAIMPLLPRSSSYR4PCMWILK.html

Thanks for the pointer Lukas. I'm not surprised at all, and Maxim has
all my sympathy on this when he speaks about FUD. Stacks like haproxy
and nginx, which have been exposed for a long time to amazing amounts
of garbage (and even to their own bugs sometimes), naturally had to be
made more robust against this type of misbehavior.

I'm noticing that they're counting resets and enforcing some limits,
but this can very easily be triggered by a legitimate client clicking
stop a few times while loading many objects in parallel. Stefan Eissing
said somewhere that Apache implements a "mood" on a connection,
indicating how Apache feels about that connection: as bad stuff happens
its mood degrades, and it progressively reduces the processing on that
connection. I find this quite interesting because instead of taking
many parameters into account, any event can say "that peer is not very
nice with us". We could track the ratio of streams that are reset
before the response to the total number of streams: a few such resets
are normal for a client clicking stop, but the ratio should never get
high. We could then imagine sending a SETTINGS frame with a reduced
SETTINGS_MAX_CONCURRENT_STREAMS when bad stuff like this happens.
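
To illustrate the idea, here is a minimal sketch in C of what such a
reset-ratio counter could look like. None of this is actual haproxy
code; all names and thresholds are made up:

    /* Minimal sketch (not haproxy code; names and thresholds are
     * invented) of the reset-ratio idea: count streams the client
     * resets before we could respond, compare to the total number
     * of streams, and shrink the advertised
     * SETTINGS_MAX_CONCURRENT_STREAMS when the ratio looks abusive. */
    #include <stdint.h>

    struct h2c_mood {
        uint32_t total_streams;  /* streams opened on this connection */
        uint32_t early_resets;   /* RST_STREAM before any response */
        uint32_t max_concurrent; /* value advertised in SETTINGS */
    };

    /* Called when the client resets a stream we never answered. */
    static void h2c_note_early_reset(struct h2c_mood *m)
    {
        m->early_resets++;
        /* A user clicking "stop" causes a few resets; an attacker
         * resets nearly every stream. Only react once we have
         * enough samples and the ratio is clearly abnormal. */
        if (m->total_streams >= 100 &&
            m->early_resets > m->total_streams / 2 &&
            m->max_concurrent > 1) {
            m->max_concurrent /= 2;  /* the mood degrades */
            /* then emit a SETTINGS frame with the new value */
        }
    }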

I remember that we mentioned a while ago that it could be useful to
implement a total stream limit per connection to force clients to
reconnect, but my primary use case was to force h2load to
disconnect/reconnect in tests. It could also increase the cost a
little bit for the client here, though it would also increase the cost
for the proxy by forcing new SSL connections more often.
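
As a sketch (again with invented names and numbers), the accounting
for such a limit is trivial:

    /* Sketch of a per-connection total stream limit; the value and
     * names are made up. After N streams, the server sends GOAWAY so
     * the client has to open a fresh connection, paying the TLS
     * handshake cost again. */
    #include <stdint.h>

    #define STREAMS_PER_CONN_LIMIT 1000

    struct h2_conn {
        uint32_t total_streams;  /* streams seen on this connection */
    };

    /* Returns nonzero when the caller should emit GOAWAY(NO_ERROR)
     * and stop accepting new streams on this connection. */
    static int h2_conn_stream_opened(struct h2_conn *c)
    {
        return ++c->total_streams >= STREAMS_PER_CONN_LIMIT;
    }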

Amaury told me that H3/QUIC was not affected by this thanks to the
way streams are flow-controlled, since the server must send a frame
to allow the client to create new streams. And it turns out that
Martin Thomson of the IETF HTTP WG proposed something very similar,
which could improve the situation:

  https://martinthomson.github.io/h2-stream-limits/draft-thomson-httpbis-h2-stream-limits.html
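
For illustration, that flow-control approach boils down to a
cumulative stream credit that only the server can raise. Here is a
rough sketch; the names are purely illustrative and do not come from
the draft:

    /* Illustrative sketch of the QUIC-style stream credit; names
     * are invented and not taken from the draft. The client may
     * only open a stream while it holds credit, and the server
     * replenishes credit explicitly, so resetting streams no longer
     * buys the client free new ones. */
    #include <stdbool.h>
    #include <stdint.h>

    struct stream_credit {
        uint64_t max_streams; /* cumulative limit granted by server */
        uint64_t opened;      /* streams the client has opened */
    };

    /* Client side: opening a new stream requires available credit,
     * whether or not previous streams were reset. */
    static bool can_open_stream(const struct stream_credit *c)
    {
        return c->opened < c->max_streams;
    }

    /* Server side: raise the limit only as earlier streams complete,
     * so an abusive client is paced automatically. 'completed' is
     * the number of streams the server has fully processed. */
    static void grant_credit(struct stream_credit *c, uint64_t completed)
    {
        c->max_streams = completed + 100; /* allow 100 in flight */
    }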

So we can expect improvements in that area in the foreseeable future.

Willy
