Hi all,
This just got disclosed:
-
https://cloud.google.com/blog/products/identity-security/google-cloud-mitigated-largest-ddos-attack-peaking-above-398-million-rps/
-
https://cloud.google.com/blog/products/identity-security/how-it-works-the-novel-http2-rapid-reset-ddos-attack
Seems like a clever update to the "good old" h2 multiplexing abuse vectors:
1. the client opens a lot of H2 streams on a connection
2. spams requests on them
3. immediately sends h2 RST_STREAM frames for all of them
4. goes back to 1 and repeats
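To make the frame sequence concrete, here is a small stdlib-only sketch of what one iteration of that loop puts on the wire. It only builds the RST_STREAM frames (the HEADERS frames need HPACK encoding, which is elided); the frame layout follows RFC 9113, and the helper names are mine, not from any of the write-ups:

```python
import struct

def h2_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    # Generic HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags,
    # 31-bit stream identifier (RFC 9113, section 4.1).
    header = struct.pack(">I", len(payload))[1:] + bytes([ftype, flags])
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)
    return header + payload

def rst_stream(stream_id: int, error_code: int = 0x8) -> bytes:
    # RST_STREAM (type 0x3) carries a single 32-bit error code;
    # 0x8 is CANCEL (RFC 9113, section 7).
    return h2_frame(0x3, 0, stream_id, struct.pack(">I", error_code))

# Steps 1-4 above: client-initiated streams use odd stream ids; each
# HEADERS frame (HPACK payload elided here) would be followed
# immediately by its RST_STREAM, then the whole cycle starts over.
burst = b"".join(rst_stream(sid) for sid in range(1, 200, 2))
```

The point is how cheap this is for the client: each cancellation is a fixed 13-byte frame, while the server has already committed per-stream state by the time it arrives.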
The idea is to cause resource exhaustion, both on the server/proxy,
which at least allocates stream-related buffers and other per-stream
state, and on the underlying server, since it likely sees the requests
before they get cancelled.
Looking at HAProxy, I'd like to know whether anyone is aware of a
decent mitigation option.
My current first guess was to adapt F5's recommendation here:
https://my.f5.com/manage/s/article/K000137106, i.e. set
tune.h2.fe.max-concurrent-streams 10
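For reference, that cap would sit in the global section like below. The value 10 is just the F5 article's suggestion, not a tuned number, and I believe the fe/be-specific variant only exists on recent versions (older ones have the shared tune.h2.max-concurrent-streams instead):

```
global
    # cap advertised SETTINGS_MAX_CONCURRENT_STREAMS on frontend connections;
    # 10 is F5's suggested value, adjust for legitimate multiplexing needs
    tune.h2.fe.max-concurrent-streams 10
```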
But I'm not sure whether there's a way to apply the measure suggested
(by Google this time) of catching misbehaving clients (opening more
streams than advertised, or sending a lot of RST_STREAM frames).
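Lacking a direct counter for resets, the closest thing I can think of is a generic stick-table sketch like the one below: it doesn't see RST_STREAM frames at all, but since the proxy still processes the cancelled requests, per-client request and connection rates should bound the damage. All the thresholds here are made-up placeholders, and the bind line is illustrative:

```
frontend fe_main
    bind :443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
    # track per-source request and connection rates (sizes/windows are
    # placeholder values, not recommendations)
    stick-table type ip size 100k expire 30s store http_req_rate(10s),conn_rate(10s)
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_rate gt 20 }
    http-request deny deny_status 429 if { sc0_http_req_rate gt 1000 }
```

Again, this is rate limiting in general rather than a rapid-reset-specific defense, so it would be good to hear if there's something better.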
Finally, I'll note that I don't currently have access to a good way to
test how much this affects HAProxy. Maybe it doesn't, due to some
peculiar implementation detail, but that would be good to know.
Regards,
Tristan