Thank you, David!

On Thu, Jul 08, 2021 at 04:14:00PM -0400, David P. Reed wrote:
> 
> Keep It Simple, Stupid.
>  
> That's a classic architectural principle that still applies. Unfortunately 
> folks who only think hardware want to add features to hardware, but don't 
> study the actual real world version of the problem.
>  
> IMO, and it's based on 50 years of experience in network and operating 
> systems performance, latency (response time) is almost always the primary 
> measure users care about. They never care about maximizing "utilization" of 
> resources. After all, in a city, you get maximum utilization of roads when 
> you create a traffic jam. That's not the normal state. In communications, the 
> network should always be at about 10% utilization, because you never want a 
> traffic jam across the whole system to accumulate. Even the old Bell System 
> was engineered to not saturate the links on the worst minute of the worst 
> hour of the worst day of the year (which was often Mother's Day, but could be 
> when a power blackout occurs).
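
To put a number on why maximum utilization and low latency are incompatible
goals, here is the standard queueing-theory illustration (mine, not David's):
for a simple M/M/1 queue, the mean time in system is
T = service_time / (1 - rho), where rho is the utilization.

    # M/M/1 queue: mean time in system T = 1/(mu - lambda),
    # equivalently T = service_time / (1 - rho), rho = utilization.
    service_time = 1.0  # normalized transmission time of one packet
    for rho in (0.1, 0.5, 0.9, 0.99):
        T = service_time / (1 - rho)
        print(f"utilization {rho:4.0%}: mean delay {T:6.1f}x service time")
    # 10% -> 1.1x, 50% -> 2.0x, 90% -> 10.0x, 99% -> 100.0x

Delay diverges as rho approaches 1 - exactly the traffic-jam regime.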
>  
> Yet, academics become obsessed with achieving constant very high utilization. 
> And sometimes low-level communications folks adopt that value system, until 
> their customers start complaining.
>  
> Why doesn't this penetrate the Net-Shaped Heads of switch designers and 
> others?
>  
> What's excellent about what we used to call "best efforts" packet delivery 
> (drop early and often to signal congestion) is that it is robust and puts the 
> onus on the senders of traffic to sort out congestion as quickly as possible. 
> The senders ALL observe congested links quite early if their receivers are 
> paying attention, and they can collaborate *without even knowing who the 
> others congesting the link are*. And by picking the heaviest congestors to 
> drop with higher probability, fq_codel pushes back in a "fair", 
> probabilistic way when congestion actually crops up.
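
Since fq_codel just came up: the push-back on the heaviest congestor falls
out of its flow-queuing structure almost for free. A toy sketch of that one
idea (illustrative only - this is not the Linux kernel code, and real
fq_codel also runs CoDel per queue plus DRR scheduling):

    from collections import deque

    class FlowQueueSketch:
        # Toy model: flows hash into per-flow queues; on overflow,
        # drop from the head of the longest queue, so the heaviest
        # sender is the one most likely to receive the signal.
        def __init__(self, nqueues=1024, limit=10240):
            self.queues = [deque() for _ in range(nqueues)]
            self.limit = limit
            self.backlog = 0

        def enqueue(self, flow_id, pkt):
            self.queues[hash(flow_id) % len(self.queues)].append(pkt)
            self.backlog += 1
            if self.backlog > self.limit:
                max(self.queues, key=len).popleft()  # drop from fattest flow
                self.backlog -= 1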
>  
> It isn't the responsibility of routers to get packets through at any cost. 
> It's their responsibility to signal congestion early enough that it doesn't 
> persist very long at all due to source based rate adaptation.
> In other words, a router's job is to route packets and provide useful 
> telemetry to the endpoints using it at that instant.
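
Concretely, that telemetry can be as simple as a sojourn-time test at
dequeue, CoDel-style. A hedged sketch (the 5 ms target is CoDel's default;
the interval-based state machine is simplified away, and the pkt fields
here are illustrative):

    TARGET_MS = 5.0  # CoDel's default sojourn-time target

    def on_dequeue(pkt, now_ms):
        # Early congestion signal: if the packet waited longer than
        # the target, mark it (ECN) or drop it, and let the sender's
        # rate adaptation clear the queue.
        sojourn_ms = now_ms - pkt.enqueue_ms
        if sojourn_ms > TARGET_MS:
            if pkt.ect:        # sender negotiated ECN
                pkt.ce = True  # mark instead of dropping
            else:
                return None    # drop: this *is* the telemetry
        return pkt             # forward as usual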
>  
> Please stop focusing on what is an irrelevant metric (maximum throughput with 
> maximum utilization in a special situation only).
>  
> Focus on what routers can do well because they actually observe it 
> (instantaneous congestion events) and keep them simple.
> On Thursday, July 8, 2021 10:40am, "Jonathan Morton" <chromati...@gmail.com> 
> said:
> 
> > > On 8 Jul, 2021, at 4:29 pm, Matt Mathis via Bloat
> > <bloat@lists.bufferbloat.net> wrote:
> > >
> > > That said, it is also true that multi-stream BBR behavior is quite
> > > complicated and needs more queue space than single stream. This
> > > complicates the story around the traditional workaround of using
> > > multiple streams to compensate for Reno & CUBIC lameness at larger
> > > scales (ordinary scales today). Multi-stream does not help BBR
> > > throughput and raises the queue occupancy, to the detriment of other
> > > users.
> > 
> > I happen to think that using multiple streams for the sake of maximising
> > throughput is the wrong approach - it is a workaround employed
> > pragmatically by some applications, nothing more. If BBR can do just as
> > well using a single flow, so much the better.
> > 
> > Another approach to improving the throughput of a single flow is
> > high-fidelity congestion control (HFCC). The L4S approach to this,
> > derived rather directly from DCTCP, is fundamentally flawed in that,
> > not being fully backwards compatible with ECN, it cannot safely be
> > deployed on the existing Internet.
> > 
> > An alternative HFCC design using non-ambiguous signalling would be
> > incrementally deployable (thus applicable to Internet scale) and
> > naturally overlaid on existing window-based congestion control. It's
> > possible to imagine such a flow reaching optimal cwnd by way of
> > slow-start alone, then "cruising" there in a true equilibrium with
> > congestion signals applied by the network. In fact, we've already shown
> > this occurring under lab conditions; in other cases it still takes one
> > CUBIC cycle to get there. BBR's periodic probing phases would not be
> > required here.
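
The "cruising" behaviour Jonathan describes can be pictured as a DCTCP-style
fine-grained response grafted onto slow-start. A rough sketch of the concept
(my own illustration, not the actual design under discussion):

    def update_cwnd(cwnd, marked_frac, in_slow_start):
        # Slow-start doubles cwnd per RTT until the first
        # high-fidelity congestion signal arrives...
        if in_slow_start:
            if marked_frac > 0:
                return cwnd, False  # operating point reached
            return cwnd * 2, True
        # ...then cruise: small multiplicative corrections in
        # proportion to the fraction of marked packets, instead of
        # Reno/CUBIC's drastic halve-and-regrow cycle.
        return cwnd * (1 - marked_frac / 2), False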
> > 
> > > IMHO, two approaches seem to be useful:
> > > a) congestion-window-based operation with paced sending
> > > b) rate-based/paced sending with limiting the amount of inflight data
> > 
> > So this corresponds to approach a) in Roland's taxonomy.
> > 
> > - Jonathan Morton
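
To make that taxonomy concrete: the two approaches differ in which variable
is the primary control and which is the safety bound. A minimal sketch
(function names and the 2x inflight cap are illustrative, not taken from
any real stack):

    def window_based_paced(cwnd, inflight, rtt):
        # (a) cwnd is the control variable; pacing merely spreads
        # cwnd worth of data evenly across the RTT.
        pacing_rate = cwnd / rtt
        return pacing_rate, inflight < cwnd  # (rate, may_send)

    def rate_based_capped(rate, inflight, rtt, cap=2.0):
        # (b) the sending rate is the control variable; an inflight
        # cap of cap * BDP bounds how much queue can build up.
        bdp = rate * rtt
        return rate, inflight < cap * bdp    # (rate, may_send)
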
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
