> On 29 Nov, 2016, at 04:55, Matt Mathis <mattmat...@google.com> wrote:
> 
> Bob's point is that fq_anything forfeits any mechanism for an application or 
> user to imply the value of the traffic by how much congestion they are 
> willing to inflict on other traffic.

Yes, it does.

I actually consider that a good thing, because most applications will, given 
the choice, inflict more congestion on other traffic in order to boost their 
own performance.  There are honourable exceptions, but it’s not a behaviour we 
can rely on by itself.

For example, Steam uses between four and eight parallel TCP streams (I can’t 
figure out what the number depends on) to receive game updates, when one or two 
would already saturate most domestic Internet connections.  This magnifies the 
impact on other things the user might be doing with that connection, such as - 
ironically enough - playing multiplayer games.  You’d think Valve, of all 
companies, would keep that in mind.

> This concept is the foundation of ConEx and related technologies which could 
> move the capacity allocation problem into the economic domain.

“Economic domain” only works if there is a financial cost borne by the causer 
of the congestion.  Good luck making that work, in a world where IoT device 
manufacturers don’t bear the costs of DDoSes launched through them.

> That said, fq_anything does not work at core router scale.

Correct.

> Note that there are two views, each of which is self consistent:
> 
> 1) You need fq_* to isolate flows; prioritization must be done with 
> IP/TOS/DSCP bits; aggressive flows can't hurt other flows; low delays require 
> that flows sharing a Q to be nice to each other and respond to AQM

…or that different flows are carefully kept in different queues.

Cake uses set-associative flow hashing to achieve flow isolation much more 
reliably than the current version of fq_codel.
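
To illustrate the idea (a toy sketch with illustrative parameters, not Cake’s 
actual code): a flow hashes to a *set* of several “ways”, and a new flow takes 
any free way in its set before it is forced to share a queue with an existing 
flow.  A plain modulo hash, by contrast, collides as soon as two flows land on 
the same index.

```python
# Hypothetical sketch of set-associative flow hashing.  NUM_SETS, WAYS
# and the CRC32 hash are illustrative choices, not Cake's parameters.
import zlib

NUM_SETS = 128   # number of sets in the flow table
WAYS = 8         # ways (slots) per set

# table[set][way] = flow tag currently occupying that slot (None = free)
table = [[None] * WAYS for _ in range(NUM_SETS)]

def classify(flow_tag: str) -> tuple[int, int, bool]:
    """Return (set_index, way, collided) for a flow identifier."""
    h = zlib.crc32(flow_tag.encode())
    s = h % NUM_SETS
    ways = table[s]
    # Already-resident flow keeps its slot.
    for w, tag in enumerate(ways):
        if tag == flow_tag:
            return s, w, False
    # Otherwise claim a free way in the set, avoiding a collision.
    for w, tag in enumerate(ways):
        if tag is None:
            ways[w] = flow_tag
            return s, w, False
    # All ways busy: fall back to sharing a queue (a true collision).
    return s, h % WAYS, True
```

With 8 ways per set, two flows only share a queue when nine or more active 
flows hash to the same set, which is far rarer than a single-slot collision.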

Cake also applies Codel and BLUE in parallel, each covering a different AQM 
regime - BLUE takes over if and when Codel fails to control a particular queue. 
 If both Codel and BLUE fail to control the queue, Cake uses head-drops from 
the longest queue to remain within a total memory budget.  All of these avoid 
penalising well-behaved traffic whenever possible.
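
The layered defence can be sketched roughly as follows (my own simplification, 
not Cake’s source; the budget and BLUE increment are illustrative).  Codel 
itself runs at dequeue time and is omitted here; this shows the escalation 
path on enqueue - BLUE dropping for a queue with a history of overflow, then a 
head-drop from the longest queue to enforce the memory budget:

```python
# Hedged sketch of the BLUE + memory-budget escalation on enqueue.
import random

class QueueState:
    def __init__(self):
        self.packets = []       # (size, payload) tuples
        self.blue_prob = 0.0    # BLUE's drop probability for this queue

MEMORY_BUDGET = 4 * 1024 * 1024  # illustrative: 4 MB across all queues

def enqueue(queues, q, size, payload, total_bytes):
    """Enqueue one packet; returns the updated total byte count."""
    # BLUE: probabilistically drop if this queue keeps overflowing.
    if random.random() < q.blue_prob:
        return total_bytes                     # dropped by BLUE
    q.packets.append((size, payload))
    total_bytes += size
    # Hard memory budget: head-drop from the *longest* queue, which
    # spares well-behaved flows and hits the heaviest offender.
    while total_bytes > MEMORY_BUDGET:
        longest = max(queues, key=lambda x: sum(s for s, _ in x.packets))
        dropped_size, _ = longest.packets.pop(0)
        total_bytes -= dropped_size
        # An overflow event also raises BLUE's probability for that
        # queue (the classic BLUE update; 0.0025 is an arbitrary step).
        longest.blue_prob = min(1.0, longest.blue_prob + 0.0025)
    return total_bytes
```

Head-dropping (rather than tail-dropping) also gets the congestion signal to 
the sender sooner, since the oldest packet is the one discarded.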

Cake also has mechanisms which consider “per host” fairness simultaneously with 
“per flow” fairness.  Incidentally, the Linux Wi-Fi stack gained “per station 
airtime fairness” along with the fq_codel upgrade, achieving a similar aim by 
different means.
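
The arithmetic behind “per host plus per flow” is simple: capacity is first 
split equally between active hosts, then each host’s share is split equally 
between that host’s active flows.  (Cake implements this with nested deficit 
round robin; this minimal sketch shows only the resulting shares.)

```python
# Minimal sketch of two-level fairness shares (illustrative, not
# Cake's code - the real scheduler uses nested DRR, not arithmetic).

def fair_shares(flows_per_host: dict[str, int], capacity: float) -> dict:
    """Return the per-flow share for each host's flows."""
    hosts = [h for h, n in flows_per_host.items() if n > 0]
    per_host = capacity / len(hosts)            # equal split across hosts
    return {h: per_host / flows_per_host[h] for h in hosts}

# Example: host A opens 8 flows (like the Steam case above), host B one.
shares = fair_shares({"A": 8, "B": 1}, capacity=100.0)
# Each of A's flows gets 6.25; B's single flow gets 50.0.
```

So the eight-stream downloader cannot crowd out the neighbour’s single flow, 
regardless of how many streams it opens.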

SFB uses a Bloom filter to a similar end, though that’s not strictly an FQ 
qdisc.
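
For comparison, SFB’s trick looks roughly like this (an illustrative sketch, 
not the sfb qdisc’s code; the level/bin counts are arbitrary): each packet 
hashes into one bin per level, each bin keeps a BLUE probability, and a flow’s 
effective probability is the *minimum* over its bins - the Bloom-filter 
property, so a flow is only penalised if every bin it touches is congested.

```python
# Rough sketch of SFB's multi-level Bloom-filter accounting.
import zlib

LEVELS, BINS = 4, 16                       # illustrative geometry
p_mark = [[0.0] * BINS for _ in range(LEVELS)]

def bins_for(flow: str) -> list[int]:
    """One independent bin index per level, via salted hashes."""
    return [zlib.crc32(f"{lvl}:{flow}".encode()) % BINS
            for lvl in range(LEVELS)]

def flow_prob(flow: str) -> float:
    # Minimum over the flow's bins: only a flow whose bins are ALL
    # saturated (i.e. a genuinely non-responsive flow, with high
    # probability) gets the full drop probability.
    return min(p_mark[lvl][b] for lvl, b in enumerate(bins_for(flow)))
```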

> 2) Uniform AQM/drop/mark per packet permits shared economic view of the value 
> of the traffic (e.g. a price) ; traffic is prioritized by how aggressive of 
> CC you choose; low delay [is/should be] a design property of the shared CC 
> and AQM algorithms.

It’s fair to say that “uniform AQM” is the only mechanism available at the core 
level, mainly because there are too many flows there to treat individually.  
But at that level, network engineering is supposedly all about providing 
sufficient link capacity so that AQM of any kind is unnecessary, because there 
is no congestion.  Still, plain AQM is a valid and potentially useful 
mitigation against transient overloads.

There are exceptions.  Some ISPs have been known to restrict peering capacity 
in certain directions, deliberately causing congestion to specific types of 
traffic, supposedly as a financial lever.  These ISPs would not be interested 
in applying AQM to reduce the impact of this deliberately-induced congestion.

I do have to ask, though, what protection this “shared economic view” provides 
against a single bulk flow which simply ignores all congestion signals?

> If you have a way to create proper incentives about congestion (e.g. price 
> and chargeback), #2 is probably a strong system; if that fails #1 is probably 
> stronger.
> 
> Note that half solutions or solutions split between the models don't work. 
> period.  Arguing about incomplete systems that are missing some of the parts 
> is pointless because they don't work at some level (often layer 8 or 9).

Indeed.  I’m not aware of any “complete” systems in category 2 - and I don’t 
count “data caps” among them, unpopular though they are.

 - Jonathan Morton

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
