As Matthias pointed out, the underlying issue here is generally certificate sizes.
While I agree that we should work to get those smaller, we have very little
control over that. Updating certificates can be a pretty slow process, as it
often requires updating a lot of surrounding processes as well (at least at
MSFT).

But to your question, Paul: the limit we set doesn't matter to the servers/stacks
that are already ignoring it. There's no protocol police, so they do what they
want anyway (and, as you can see, a majority already do). That said, I don't
expect that raising the limit will prompt servers to go out and get even bigger
certificates, so I don't expect the change to push deployments any larger.

This is really more of a selfish question about updating our stack (and that of
anyone else who currently follows the rules) to be more in line with what the
rest of the industry is already doing in practice. The 3x number was picked
fairly arbitrarily in the first place, and I don't see a practically significant
difference between 3x and 5x in the end (especially considering the
amplification factors some other protocols already allow today).
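
To make the arithmetic concrete, here is a minimal sketch (in C, purely
illustrative; this is not MsQuic's actual code, and all names are made up) of
the anti-amplification accounting a conforming server does before the client's
address is validated. The only thing that changes between a 3x and a 5x policy
is the factor:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical build-time knob: 3 today, 5 under this proposal. */
    #define AMPLIFICATION_FACTOR 3

    /* Per-connection anti-amplification state, tracked only until the
     * peer's address is validated (RFC 9000, Section 8). */
    typedef struct {
        uint64_t bytes_received;   /* bytes received from the unvalidated peer */
        uint64_t bytes_sent;       /* bytes already sent back to that peer */
        bool     address_validated;
    } AmplificationState;

    /* How many more bytes may the server send right now? */
    static uint64_t AllowedSendBudget(const AmplificationState *s) {
        if (s->address_validated) {
            return UINT64_MAX;     /* no limit once the address is validated */
        }
        uint64_t cap = AMPLIFICATION_FACTOR * s->bytes_received;
        return (cap > s->bytes_sent) ? (cap - s->bytes_sent) : 0;
    }

Since a client's first Initial datagram must be at least 1200 bytes, a 3x factor
guarantees the server roughly 3600 bytes for its first flight, which many
certificate chains exceed; a 5x factor raises that to roughly 6000 bytes.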

Thanks,
- Nick

From: Paul Vixie <paul=40redbarn....@dmarc.ietf.org>
Sent: Tuesday, July 30, 2024 8:53 PM
To: IETF QUIC WG <quic@ietf.org>; Nick Banks <niba...@microsoft.com>
Subject: Re: Proposal: Increase QUIC Amplification Limit to 5x

Do we know a reason why the system's behavior won't move beyond the new limit
the same way it moved beyond the old one? If it's some bizarre kind of leaky
bucket, let's have the showdown now rather than later, when everything is larger
and ossification has begun.

p vixie

On Jul 30, 2024 07:16, Nick Banks <nibanks=40microsoft....@dmarc.ietf.org> wrote:
Hello Folks,

We've had this discussion on Slack in the past, and I wanted to bring it here to
get some additional feedback. As some of you know, I have a project on GitHub
(microsoft/quicreach<https://github.com/microsoft/quicreach>) that is a simple
ping-like reachability tool for QUIC, and I run a periodic action that tests the
top 5000 hostnames for QUIC reachability and then breaks each handshake down by
whether it (a) requires multiple round trips, (b) exceeds the specified
amplification limit, or (c) connects in 1-RTT under the limit. It produces this
dashboard<https://microsoft.github.io/quicreach/>:

[Image: quicreach dashboard screenshot]
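
For anyone who hasn't looked at the tool, a rough sketch of that bucketing logic
(in C, purely illustrative; this is not quicreach's actual code, and the type
and field names are made up):

    #include <stdint.h>

    /* Hypothetical per-handshake measurements a client probe could record. */
    typedef struct {
        uint32_t round_trips;          /* round trips until the handshake completed */
        uint64_t client_initial_bytes; /* bytes the client sent in its first flight */
        uint64_t server_reply_bytes;   /* bytes the server sent back before the
                                          client's next flight */
    } HandshakeSample;

    typedef enum {
        RESULT_MULTI_RTT,        /* (a) needed more than one round trip */
        RESULT_OVER_AMP_LIMIT,   /* (b) server exceeded the amplification limit */
        RESULT_SINGLE_RTT_OK     /* (c) 1-RTT handshake, under the limit */
    } HandshakeResult;

    static HandshakeResult Classify(const HandshakeSample *s, uint64_t amp_factor) {
        if (s->server_reply_bytes > amp_factor * s->client_initial_bytes) {
            return RESULT_OVER_AMP_LIMIT;  /* server ignored the limit */
        }
        if (s->round_trips > 1) {
            return RESULT_MULTI_RTT;       /* limit respected, but handshake slower */
        }
        return RESULT_SINGLE_RTT_OK;
    }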

The main point of this email is the large percentage of servers that are
ignoring the 3x amplification limit today, and what we should do (if anything)
about that. This morning I ran a quick experiment
(PR<https://github.com/microsoft/quicreach/pull/243>) to see how the breakdown
would look with different amplification limits
(3x<https://github.com/microsoft/quicreach/actions/runs/10161649574/job/28100572606#step:6:1>,
4x<https://github.com/microsoft/quicreach/actions/runs/10162466467/job/28103201648#step:6:1>,
5x<https://github.com/microsoft/quicreach/actions/runs/10162939158/job/28104656720#step:6:1>)
and found that with a 5x limit we would be in a place where most servers are
under the limit.

[Image: experiment results screenshot]

So, my ask to the group is whether we should more officially bless a 5x limit as
acceptable for servers to use. This would mostly affect the servers that
currently take multiple round trips because they correctly enforce the 3x limit
on themselves, resulting in longer handshake times. If we say they can/should
change their logic from 3x to 5x, their handshake times will improve, and things
will largely speed up for clients using QUIC. Personally, I'd like to update
MsQuic to use this new limit based on this data, but I wanted to get a feel from
the group first.

Thanks,
- Nick

