On 06/02/2022 22:19, Martin Thomson wrote:
Hi Al,

On Sun, Feb 6, 2022, at 09:11, Al Morton via Datatracker wrote:
What rerodering extent (yes, that's a metric) would be required to cause
failure or unnecessary retransmission?
For my benefit: Is "rerodering" a typo, or a clever joke?

As for the question, I believe that QUIC implementations will deal with any 
amount of reordering within their timeout period (which varies, but is 
commonly 10 or 30 seconds).

There are some narrow cases where packets might be discarded, but that would be 
down to specific constraints on the deployment, not the protocol itself.  For 
example, a server might choose not to take on arbitrary state based on the 
first packet, so it might discard any data that arrives out of order.  This is 
still likely to resolve as the client is required to retransmit.  Worst case, 
it takes N round trips to resolve, where N is the number of packets in the 
flight.  If that ends up taking more than the timeout (on either endpoint), 
then it will fail.  N == 1 in most current cases, so this is more of a 
theoretical problem than a practical one.
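The worst case described above can be sketched in a few lines. This is a hypothetical model, not any real implementation: a server that keeps no reassembly state discards any packet arriving out of order, so each round trip delivers at most one packet of an N-packet flight.

```python
# Hypothetical sketch of the worst case above: the server accepts only the
# next-expected packet from each flight and discards the rest, so the client
# must retransmit the remainder on the next round trip.

def worst_case_round_trips(n_packets):
    """Round trips until all packets are accepted, assuming adversarial
    reordering on every flight (only the next-expected packet survives)."""
    next_expected = 0
    round_trips = 0
    while next_expected < n_packets:
        round_trips += 1
        # Client retransmits packets [next_expected, n_packets); the server
        # accepts only next_expected and drops everything else.
        next_expected += 1
    return round_trips

print(worst_case_round_trips(1))  # the common case: one packet, one RTT
print(worst_case_round_trips(4))  # N round trips for an N-packet flight
```

With N == 1, as in most current deployments, the worst case collapses to a single round trip, which is why this is more theoretical than practical.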

This scenario was extensively tested. interop.seemann.io has a bunch of tests 
of handshake robustness, which sometimes produces a lot of reordering.  You can 
see there that quant, which has the above behaviour, fails the L1 and C1 tests 
in some cases as a result.

That's probably more detail than the doc needs though.

The transport uses pretty standard mechanisms here: reordering *can* impact loss detection and hence lead to spurious retransmission (the DupACK threshold is at least 3 by default for Reno; the thresholds actually used depend on the sender implementation). Spurious retransmission can in turn lead to a spurious congestion-control reaction (mitigated or avoided if the original data is later ACKed as received).
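The threshold-based loss detection Gorry mentions can be illustrated with a small sketch. This models the QUIC-style packet threshold from RFC 9002 (kPacketThreshold = 3, analogous to TCP's DupACK threshold), not any particular implementation: a packet is declared lost once a packet sent at least 3 packet numbers later has been acknowledged, so reordering by fewer than 3 packets causes no spurious retransmission, while deeper reordering does.

```python
# Sketch of packet-threshold loss detection (RFC 9002, kPacketThreshold = 3).
# A sent packet is declared lost once a packet sent at least 3 packet
# numbers later has been acknowledged before it.

K_PACKET_THRESHOLD = 3

def detect_losses(ack_order):
    """Given packet numbers in the order their ACKs arrive, return the
    packet numbers (spuriously) declared lost under the packet threshold."""
    acked = set()
    lost = set()
    largest_acked = -1
    for pn in ack_order:
        acked.add(pn)
        largest_acked = max(largest_acked, pn)
        # Any unacked packet K_PACKET_THRESHOLD or more below the largest
        # acknowledged packet number is declared lost.
        for candidate in range(largest_acked):
            if (candidate not in acked and candidate not in lost
                    and largest_acked - candidate >= K_PACKET_THRESHOLD):
                lost.add(candidate)
    return lost

# Packet 0's ACK arrives after packets 1 and 2: reordering extent 2,
# below the threshold, so no spurious loss is declared.
print(detect_losses([1, 2, 0]))     # -> set()

# Packet 0's ACK arrives after packets 1, 2 and 3: reordering extent 3,
# so packet 0 is spuriously declared lost and retransmitted.
print(detect_losses([1, 2, 3, 0]))  # -> {0}
```

In the second case the retransmission is spurious; as noted above, the congestion-control reaction can be undone if the ACK for the original packet does eventually arrive.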

Gorry


