> Anyone who cares about performance is motivated to use a recent-enough TCP
> implementation.

Emphatically agree.
I'd add that any IoT host that doesn't support out-of-sequence packets is probably using a 2.4 GHz Wi-Fi 4 radio that suffers 50 ms of latency from the WLAN link alone.

--
Michael Overcash
Principal Architect, CPE Premises Engineering

From: David Schinazi <[email protected]>
Sent: Friday, February 7, 2025 2:28 PM
To: Neal Cardwell <[email protected]>
Cc: Martin Thomson <[email protected]>; Greg White <[email protected]>; Ingemar Johansson S <[email protected]>; [email protected]; [email protected]
Subject: [EXTERNAL] [tsvwg] Re: Robustness to packet reordering

On Thu, Feb 6, 2025 at 6:53 PM Neal Cardwell <[email protected]> wrote:

> Given how modern implementations work, reordering at the link-layer is almost
> always harmful.

For modern implementations of the RACK style or newer, that sounds true. But I would have guessed there is a large installed base of TCP senders out there with pre-RACK TCP stacks. And for those, presumably, as Greg noted, "older TCP implementations that don't support RACK would have problems with reordering." So AFAICT there may be some tricky trade-offs.

You're absolutely right that there is a nontrivial base of TCP senders out there that don't have RACK and could benefit from link-layer reordering. That said, I don't think anyone cares about the performance of those senders. Anyone who cares about performance is motivated to use a recent-enough TCP implementation. I'm sure there's a small number of counter-examples out there, but in general we should optimize for the main case. I have embedded devices with minimal TCP stacks, but I really don't care if my temperature reading is slightly slower than it could be. This isn't a case of "you need to be a large tech company to have fast TCP"; this is just an "update your open source code to something from this decade". To me the trade-off is very easy: let's disable the link-layer reordering. Anyone who cares enough to notice a performance regression will be able to upgrade their TCP. And of course, this isn't a concern at all for QUIC, since there's no pre-RACK legacy device problem.

David

If most 5G link-layer retransmissions can be completed in X milliseconds (X less than, say, 10 ms), then it may be worth the user's 5G cell phone modem buffering received packets for X milliseconds to try to deliver in-order data to the receiver, if this can be done in a low-latency manner. AFAIK various layers of networking software (and humans using it) already need to tolerate ~10 ms of jitter due to various link-layer effects for the most common last-mile link-layer technologies (Wi-Fi, cellular, DOCSIS) and CPU scheduling effects for networking stacks (especially in user space). But I'm sure there are other factors I'm not taking into account... :-)

neal

On Thu, Feb 6, 2025 at 5:15 PM David Schinazi <[email protected]> wrote:

Having the link-layer delay packets to provide them in-order to the transport layer was helpful multiple decades ago, when TCP implementations had more naive algorithms and were very resource-constrained. Given how modern implementations work, reordering at the link-layer is almost always harmful. TCP and QUIC stacks can handle reordering well, thanks to both protocol features and implementations having more memory to work with. The delay induced by this reordering will generally cause more harm than good. Furthermore, one of the biggest motivators for QUIC was to break head-of-line blocking and allow out-of-order delivery to the application layer. We have large bodies of data showing that this improves performance. Please disable reordering at the 5G layer.

David
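To make the RACK point in this exchange concrete, here is a minimal Python sketch contrasting classic duplicate-ACK loss detection with RACK-style time-based detection. The constants and helper names are illustrative and heavily simplified assumptions, not taken from RFC 8985 or from any real stack.

# Rough sketch of why reordering trips up a pre-RACK TCP sender but not a
# RACK-style one. Constants and names are illustrative only; see RFC 8985
# for the real algorithm.

DUPACK_THRESHOLD = 3  # classic fast-retransmit trigger

def classic_marks_lost(dupacks_for_segment):
    # Pre-RACK: duplicate ACKs caused by reordered segments look exactly
    # like loss, so three of them trigger a spurious retransmission and a
    # congestion-window reduction.
    return dupacks_for_segment >= DUPACK_THRESHOLD

def rack_marks_lost(segment_sent_time, newest_delivered_sent_time,
                    rtt, reordering_window, now):
    # RACK-style (in the spirit of RFC 8985): a segment is marked lost only
    # if some segment sent after it has already been delivered, and the
    # original segment has had a full RTT plus a reordering window (roughly
    # RTT/4) to show up. Mild link-layer reordering stays inside that window.
    return (newest_delivered_sent_time >= segment_sent_time and
            now >= segment_sent_time + rtt + reordering_window)

# A burst reordered by a couple of milliseconds on a 30 ms RTT path:
print(classic_marks_lost(dupacks_for_segment=3))                 # True
print(rack_marks_lost(segment_sent_time=0.000,
                      newest_delivered_sent_time=0.002,
                      rtt=0.030, reordering_window=0.0075,
                      now=0.020))                                # False

The same few milliseconds of reordering that cause a spurious retransmission and cwnd reduction on the pre-RACK sender are simply absorbed by the RACK-style reordering window.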
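Neal's buffering proposal above can be sketched the same way: a link-layer reorder buffer that holds out-of-order packets for at most X milliseconds before giving up. The class, the 10 ms default, and the sequence-number model are illustrative assumptions, not any real 3GPP mechanism.

# Sketch of the kind of link-layer reorder buffer discussed above: hold
# out-of-order packets for at most X ms in the hope that a link-layer
# retransmission fills the gap, then give up and release what we have.

import heapq
import time

class ReorderBuffer:
    def __init__(self, deadline_s=0.010):
        self.deadline_s = deadline_s  # X: longest we are willing to hold a packet
        self.next_seq = 0             # next sequence number owed to the host
        self.heap = []                # (seq, arrival_time, payload)

    def push(self, seq, payload, now=None):
        now = time.monotonic() if now is None else now
        heapq.heappush(self.heap, (seq, now, payload))
        return self._release(now)

    def poll(self, now=None):
        # Call periodically so packets stuck behind a gap are flushed once
        # their deadline expires.
        now = time.monotonic() if now is None else now
        return self._release(now)

    def _release(self, now):
        out = []
        while self.heap:
            seq, arrived, payload = self.heap[0]
            in_order = (seq == self.next_seq)
            expired = (now - arrived) >= self.deadline_s
            if not (in_order or expired):
                break  # hold everything behind the gap and keep waiting
            heapq.heappop(self.heap)
            if seq < self.next_seq:
                continue  # duplicate of something already delivered
            out.append(payload)
            self.next_seq = seq + 1  # advance, skipping the gap if we gave up on it
        return out

buf = ReorderBuffer(deadline_s=0.010)
print(buf.push(1, "pkt1"))  # []                -- held, waiting for seq 0
print(buf.push(0, "pkt0"))  # ['pkt0', 'pkt1']  -- gap filled, both released at once

The trade-off being argued about is visible here: every packet behind a gap can sit in the buffer for up to the full deadline, even though a RACK-era TCP or a QUIC receiver could have used it immediately.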
On Thu, Feb 6, 2025 at 2:15 PM Christian Huitema <[email protected]> wrote:

On 2/6/2025 12:55 PM, Martin Thomson wrote:
>
> On Fri, Feb 7, 2025, at 04:59, Greg White wrote:
>> This is an important topic relating to the expectations and
>> requirements that transport protocols place on layer 2 protocols. In
>> layer 2 standards bodies that I've been involved in, it has been
>> understood that "the upper layers" expect in-order delivery,
>
> As far as QUIC goes, it is sensitive to reordering in the network. Some
> reordering will be interpreted as damage (Christian cited the relevant parts)
> and performance suffers in a few minor ways when things arrive out of order
> (ACKs are less efficient, data needs to be held, memory accesses are less
> likely to be contiguous, etc.).
>
> However, the idea that the network might seek to "fix" these problems, when
> doing so necessarily involves extra work and delays, is not a good trade.
> Stuff that is delayed to "fix" a reordering that happened might delay signals
> that the QUIC stack could use, even if some data needs to be held at the
> endpoint. QUIC packets contain many things, some of which don't need to be
> strictly ordered to be useful.

Applications that are sensitive to delays will break their traffic into multiple QUIC streams. In case of packet loss, only one of those streams will be blocked; the others will be delivered without head-of-line blocking. Implementing L2 correction will make the response worse, not better, because all the streams will be delayed for the duration of the L2 correction instead of just one.

--
Christian Huitema
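As a toy illustration of that last point, the Python sketch below models per-stream reassembly: a missing packet only stalls the stream whose data it carried. The Stream class and the byte offsets are simplified assumptions, not tied to any real QUIC implementation.

# Per-stream ordering: a lost or delayed packet blocks only its own stream.

class Stream:
    def __init__(self):
        self.next_offset = 0
        self.pending = {}  # offset -> data held for in-order reassembly

    def receive(self, offset, data):
        # Buffer a frame and return whatever can now be delivered in order.
        self.pending[offset] = data
        delivered = []
        while self.next_offset in self.pending:
            chunk = self.pending.pop(self.next_offset)
            delivered.append(chunk)
            self.next_offset += len(chunk)
        return delivered

streams = {1: Stream(), 2: Stream()}

# The packet carrying stream 1, offset 0 is lost or delayed in the network.
# Later packets for stream 2 still deliver immediately:
print(streams[2].receive(0, b"hello"))   # [b'hello']           -- not blocked
# Stream 1 data at offset 5 waits only for its own gap:
print(streams[1].receive(5, b"world"))   # []                   -- only stream 1 waits
# When the retransmission arrives, stream 1 catches up:
print(streams[1].receive(0, b"HELLO"))   # [b'HELLO', b'world']

If the link layer instead held every packet behind the gap for its retransmission deadline, the stream 2 data in this example would have been delayed as well, which is the "all the streams will be delayed" outcome described above.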
