> On 16 Mar, 2019, at 1:43 am, David P. Reed <dpr...@deepplum.com> wrote:
> 
> Now the other thing that is crucial is that the optimal state almost all of 
> the time of every link in the network is at a utilization far from max 
> capacity. The reason for this is the fact that the Internet (like almost all 
> networks) is bursty and fractal.

That can be said about some types of links but not others.

Last-mile links in particular are often saturated by their users' individual 
traffic for minutes or even hours at a time, especially on slower link 
technologies such as ADSL and 4G.  The user wants their hundred-gigabyte game 
update installed as soon as possible, even if they only have 10 Mbps to suck it 
through, and they still want to use their connection for other things while 
they wait.  And this is not unreasonable; I do it regularly.

At peak times, ISP backhaul capacity can often be enough of a bottleneck for 
users to notice the congestion and induced latency; it is far from the case 
that all ISPs worldwide massively over-provision their networks to avoid 
routine congestion, even in modern technologically advanced nations.  There are 
remote islands where hundreds or thousands of users must share a single 
satellite or microwave uplink.  Cell towers are also a shared medium with 
decidedly finite capacity.

Only core networks, and the backhaul networks of certain particularly 
conscientious ISPs, can typically be described as congestion-free.  And that is 
why we discuss AQM and ECN in such detail in the first place; without 
congestion, they wouldn't be required.

The extent to which traffic classification is needed on particular types of 
link can be debated.  It could fairly be argued that we've done remarkably well 
without the benefit of a functioning Diffserv ecosystem, so there is no 
particular urgency to create one.  At the same time, it's worth discussing 
*why* Diffserv fails to do its intended job, and how a better system *could* be 
designed, because that may reveal lessons for the future.

I will say this: there are times, even with the benefit of everything Cake does 
for me, when I would prefer BitTorrent traffic to automatically defer 
to other stuff (as it was supposedly designed to).  Several game updaters, 
including Wargaming.net's, use BitTorrent under the skin - opening and using 
several hundred flows in parallel, and thereby swamping any other traffic going 
to the same host.  It would be very straightforward for them to mark all that 
traffic as Minimum Cost, while their games themselves use Minimum Latency for 
battle traffic.
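
For what it's worth, the marking itself is trivial at the application level.  
Here's a minimal sketch in Python - the codepoint choices (LE from RFC 8622 
for the bulk flows, EF for the battle traffic) are my own assumptions for 
illustration, not anything these updaters actually do:

    import socket

    # Assumed DSCP mapping for this sketch:
    #   LE (Lower Effort, RFC 8622)  ~ "Minimum Cost" for bulk update flows
    #   EF (Expedited Forwarding)    ~ "Minimum Latency" for battle traffic
    DSCP_LE = 0x01
    DSCP_EF = 0x2E

    def set_dscp(sock, dscp):
        """Mark every packet this IPv4 socket sends with the given DSCP."""
        # The DSCP occupies the upper six bits of the old ToS byte.
        # (An IPv6 socket would use IPV6_TCLASS instead of IP_TOS.)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

    # Bulk updater connection: defer to everything else under load.
    bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    set_dscp(bulk, DSCP_LE)

    # Battle traffic: ask the network to keep latency down.
    battle = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    set_dscp(battle, DSCP_EF)

One setsockopt call per socket at setup time is all it would take.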

Minimum Cost is a natural choice for any transport using LEDBAT, or with a 
similarly altruistic philosophy.

Minimum Latency is a natural choice for any application requiring realtime 
response - games, VoIP, remote shells.

Minimum Loss is a natural choice for applications involved in network control, 
where dropped packets could have a much greater than normal impact on overall 
network performance.

Maximum Throughput is a natural choice for general-purpose applications not 
fitting any of the above.

Pricing is not required.  Making the wrong choice will simply make your 
application perform badly on a loaded network, as the bottleneck link optimises 
for the thing you told it to optimise for, rather than for what you actually 
want.  That's all that's really needed.
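
To make that concrete, here is a toy sketch of a bottleneck that takes the 
four classes at their word.  This is my own strict-priority simplification, 
not Cake's actual algorithm; the queue depths and service order are assumed 
purely for illustration:

    from collections import deque

    class ToyBottleneck:
        # Per-class buffer depth: the latency queue is kept short so that even
        # its own traffic can't build up much delay; the bulk ("minimum cost")
        # queue is deep, because that traffic cares about completion, not delay.
        LIMITS = {
            "min_latency":    8,      # short queue: low delay, but drops early
            "min_loss":       256,    # deep queue: dropped only as a last resort
            "max_throughput": 64,     # ordinary best-effort depth
            "min_cost":       256,    # deep queue, served only when others are idle
        }

        SERVICE_ORDER = ("min_latency", "min_loss", "max_throughput", "min_cost")

        def __init__(self):
            self.queues = {c: deque() for c in self.LIMITS}

        def enqueue(self, packet, klass):
            q = self.queues[klass]
            if len(q) >= self.LIMITS[klass]:
                return False          # queue full: the arriving packet is dropped
            q.append(packet)
            return True

        def dequeue(self):
            # Latency-sensitive traffic jumps ahead of everything else;
            # "minimum cost" traffic sees the link only when the rest are idle.
            for klass in self.SERVICE_ORDER:
                if self.queues[klass]:
                    return self.queues[klass].popleft()
            return None

A bulk transfer that mislabels itself as Minimum Latency ends up with an 
eight-packet buffer and heavy drops instead of the deep queue it actually 
needed, while a correctly-marked one simply waits its turn - the incentive 
structure described above, with no pricing involved.  A real qdisc would add 
weighted sharing and AQM within each class so that nothing can be starved 
outright, but the principle is the same.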

 - Jonathan Morton
