Hello Tong Li,

This message will provide comments on the draft that may seem negative. Just to be clear, I am very pleased to see you researching the general topic of measuring time and delays in transport protocols, and would like to encourage you. On the other hand, there are hard questions in this area, such as how much the endpoints of a QUIC connection should trust each other, or how to track increases of the min RTT. Research improves if we outline these questions!

On 11/17/2025 12:56 AM, Tong Li wrote:
Q3: It would be interesting to discuss the purpose of the minRTT estimation.
  Most implementations do not really use delay estimations for loss
  detection and recovery -- like RFC 9002 they use RACK, which does not
  require precise timeout estimates. Do you want to use the min RTT for
  congestion control?
A3: Thank you for your question. Here are some scenarios where an accurate 
minRTT might be essential (see section 3 in 
draft-li-quic-minimum-rtt-estimation):
(1) Congestion Control: As you suspected, several congestion controllers, most 
notably BBR and its variants, fundamentally depend on an accurate minRTT to 
estimate the bandwidth-delay product (BDP) and size their congestion window 
effectively.

Maybe. BBR mainly depends on an accurate estimate of the bottleneck bandwidth, and uses the min RTT to size the running CWND as the product of the estimated rate and the min RTT. However, in the presence of delay jitter, using the min RTT may be counter-productive. It is better to pick a somewhat higher value to "absorb the jitter" without blocking transmission. There is ongoing research and experimentation in that area. This kind of discussion is happening in the CCWG working group list.
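To make the point concrete, here is a toy sketch (my own, not BBR's actual code) of sizing a window from rate times base RTT, with extra headroom to absorb jitter; the gain and jitter parameters are illustrative assumptions:

```python
# Toy sketch, not from any BBR implementation: cwnd from an estimated
# bottleneck bandwidth and a base RTT, plus headroom for delay jitter
# so that short delay spikes do not stall transmission.

def cwnd_bytes(est_bw_bps: float, base_rtt_s: float,
               jitter_s: float = 0.0, gain: float = 2.0) -> float:
    bdp = est_bw_bps / 8.0 * base_rtt_s      # bytes in flight at the BDP
    headroom = est_bw_bps / 8.0 * jitter_s   # extra bytes to cover jitter
    return gain * bdp + headroom

# 100 Mbps, 20 ms base RTT, 5 ms of jitter headroom
print(cwnd_bytes(100e6, 0.020, 0.005))
```

Using the strict min RTT here (jitter_s = 0) gives the smallest window; the jitter term is exactly the "somewhat higher value" mentioned above.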

The main issue with RTT and BBR is not really accuracy, but rather the "latecomer advantage". The min RTT is defined as the min of all samples since the beginning of the measurement era. The latecomer starts measuring the RTT at a later time than the earlier connections. The min from that later time to the current time is always greater than or equal to what the earlier connections observed. That issue is not really due to inaccuracies and ack delays, but rather to the building of standing queues. BBR solves that with the ProbeRTT algorithm. Again, this kind of discussion is happening in the CCWG working group list.
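One common mitigation is to keep the min only over a sliding window, so a stale minimum eventually expires and forces a fresh measurement, in the spirit of BBR's windowed min filter. A small sketch (the 10-second window and field names are my assumptions, not from any spec):

```python
# Sketch of a windowed min-RTT filter using a monotonic deque:
# samples older than `window_s` are discarded, so the reported min
# always reflects recent network conditions.
from collections import deque

class WindowedMinRtt:
    def __init__(self, window_s: float = 10.0):
        self.window_s = window_s
        self.samples = deque()   # (time_s, rtt_s), rtts strictly increasing

    def update(self, now_s: float, rtt_s: float) -> float:
        # Expire samples that fell out of the window.
        while self.samples and now_s - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        # Drop samples that can never be the min again.
        while self.samples and self.samples[-1][1] >= rtt_s:
            self.samples.pop()
        self.samples.append((now_s, rtt_s))
        return self.samples[0][1]   # current windowed min

f = WindowedMinRtt(window_s=10.0)
f.update(0.0, 0.050)
f.update(5.0, 0.040)
print(f.update(20.0, 0.060))  # the 40 ms min has expired
```

Note that windowing alone does not fix the latecomer problem: if standing queues never drain, every window still measures an inflated min, which is why BBR additionally drains the queue during ProbeRTT.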

(2) Loss Detection Validation: QUIC loss detection (RFC 9002) uses minRTT to 
reject implausibly small RTT samples, which helps maintain the robustness of 
the loss detection algorithm.

The loss detection algorithm in RFC 9002 has a minimal dependency on RTT samples. Packet losses are detected by looking for holes in the sequence of incoming acknowledgements. The RTT estimates are only used in the "probe timeout" (PTO) part, in case of long silences. The endpoint sends a probe to trigger a new acknowledgement, and thus confirm that there was a hole in the sequence of ACKs. There is indeed a benefit to not sending probes too soon, which causes overhead, and not sending probes too late, which delays loss repair. A better min RTT estimate would help, but that's mostly a second order effect.
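For reference, the PTO in RFC 9002 is driven by the smoothed RTT and RTT variance, not by the min RTT, which is roughly:

```python
# Rough rendering of the RFC 9002 probe timeout formula:
#   PTO = smoothed_rtt + max(4 * rttvar, kGranularity) + max_ack_delay
# kGranularity is the 1 ms timer granularity constant from the RFC.

K_GRANULARITY_S = 0.001  # 1 ms

def probe_timeout(smoothed_rtt: float, rttvar: float,
                  max_ack_delay: float) -> float:
    return smoothed_rtt + max(4 * rttvar, K_GRANULARITY_S) + max_ack_delay

print(probe_timeout(0.030, 0.002, 0.025))
```

The min RTT does not appear in this formula at all, which is why better min RTT estimates only help PTO indirectly.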

The "implausibility" mechanism addresses a different problem. If the sender trusts the "ack delay" announced by the receiver, adversarial receivers can lie, announcing made-up ACK delays, and trick the sender into sending at a higher rate, resulting in potential DoS attacks. The "implausibility" test is there to protect against such attacks. That test effectively mandates that the min RTT be computed based on the actual measurements, without taking into account the ack delays provided by the peer.
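The rule from Section 5.3 of RFC 9002 can be sketched as follows (simplified; the real pseudocode also handles handshake confirmation and clamping to max_ack_delay):

```python
# Simplified rendering of the RFC 9002 Section 5.3 rule:
# min_rtt is updated from the raw sample, and the peer-reported
# ack_delay is subtracted only when the result would still be at
# least min_rtt; otherwise the ack delay is considered implausible
# and ignored.

def update_rtt(min_rtt: float, latest_rtt: float, ack_delay: float):
    min_rtt = min(min_rtt, latest_rtt)        # raw sample, no ack_delay
    adjusted_rtt = latest_rtt
    if latest_rtt >= min_rtt + ack_delay:     # plausibility check
        adjusted_rtt = latest_rtt - ack_delay
    return min_rtt, adjusted_rtt

# Honest peer: 50 ms sample, 10 ms ack delay -> ack delay subtracted
print(update_rtt(0.030, 0.050, 0.010))
# Lying peer claims 40 ms ack delay -> implausible, ignored
print(update_rtt(0.030, 0.050, 0.040))
```

The key point for the draft is the first line: min_rtt never depends on the peer-supplied ack_delay.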

Switching from ack delays to timestamps does not prevent this attack. Adversarial receivers could fake their timestamps in the same way that they would fake ACK delays. If we want the same level of protection against adversarial peers as the "implausibility" rule provides, we will end up requiring that the min RTT be updated only by "end to end" measurements, without taking into account the timestamps provided by the peer.

(3) ACK Frequency Optimization: In mechanisms like the one described in "TACK: 
Improving Wireless Transport Performance by Taming Acknowledgments," the minRTT is 
used to dynamically adapt the ACK frequency to the network connection's bandwidth-delay 
product, improving scalability and efficiency.

This is very debatable. The QUIC ACK Frequency draft takes the opposite approach. Section 8.1 states:

"When setting the Requested Max Ack Delay as a function of the RTT, it
is usually better to use the Smoothed RTT (smoothed_rtt) (Section 5.3
of [QUIC-RECOVERY]) or another estimate of the typical RTT, but not
the minimum RTT (min_rtt) (Section 5.2 of [QUIC-RECOVERY])."

One of the reasons is that the smoothed RTT incorporates an estimate of delay jitter, while the min RTT does not.
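An illustrative sketch of the suggested approach (the 1/8 fraction and the 1 ms floor are my own assumptions, not values from the draft):

```python
# Illustrative only: deriving a Requested Max Ack Delay from the
# smoothed RTT, per the Section 8.1 advice quoted above. The divisor
# and floor are made-up example parameters, not from the draft.

def requested_max_ack_delay(smoothed_rtt: float,
                            floor_s: float = 0.001) -> float:
    return max(smoothed_rtt / 8.0, floor_s)

print(requested_max_ack_delay(0.080))  # keyed to smoothed RTT, not min RTT
```

Because smoothed_rtt already tracks jitter, the resulting ack delay grows when the path gets noisier, which a min-RTT-based value would not do.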

In my experience, the advantage of measuring one way delays is not in acquiring better min RTT estimates, but in differentiating between congestion on the sending path and congestion on the ACK path. The RTT alone does not provide that information. Measuring one way delays could also be advantageous in multipath connections, in which ACKs may be sent over different paths. There are certainly many areas of potential research around these two topics, probably better discussed in a research group like ICCRG than in the QUIC WG.
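The nice property is that unsynchronized clocks do not matter for this purpose: any constant clock offset cancels out when you look at the increase of a one-way delay over its own minimum. A tiny sketch of that idea (mine, not from any draft):

```python
# Sketch: the queueing component of a one-way delay. Both the sample
# and the minimum include the same unknown clock offset between the
# two endpoints, so the offset cancels in the subtraction.

def queueing(owd_sample: float, owd_min: float) -> float:
    return owd_sample - owd_min

fwd = queueing(0.045, 0.030)  # ~15 ms of queue built on the data path
rev = queueing(0.012, 0.012)  # no queue on the ACK path
print(fwd, rev)
```

Comparing the forward and reverse queueing terms tells you which direction is congested, which the RTT alone cannot.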

-- Christian Huitema
