>
> I agree with the complement language. I don't mind if they are separable.
> Integration, however, is highly advantageous.

I started another thread on the backlog issue.

>
>> Because scheduling requires policy and AQM doesn't.
>
> Machine gunning down packets randomly until the flows start to behave
> does not require any policy, agreed. A 5-tuple fq system is not a lot
> of policy to impose. Certainly qos and rate scheduling systems impose
> a lot more policy.

Actually, I'm going to retract part of what I just said. Everything is a policy.

Drop tail is a policy; it's useful for e2e mechanisms like LEDBAT if
the queue size is greater than 100ms. Not helpful for bufferbloat.

Drop head is a policy; it's useful for VoIP (and actually useful for
TCP too). Not helpful for LEDBAT.

Shooting randomly, and increasingly often, until flows get under
control is a policy too: a decent compromise between drop head and
drop tail, but one that also shoots at a lot of packets it doesn't
need to.

drr is a policy that does better mixing and byte fairness.
sfq is a policy that does better mixing and packet fairness.
qfq does weighted fq.

red/ared/wred is a policy.
hfsc is a policy that does interesting scheduling and dropping all its own.
htb-based policies are often complex and interesting.

So the problem lies in defining what policies are needed and what
algorithms can be used to implement those policies. May the ones that
provide the best QoE for the end user succeed in the marketplace, and
may networks get ever better.

https://www0.comp.nus.edu/~bleong/publications/pam14-ispcheck.pdf

>> So operators
>> don't want to have to face the dilemma of needing the AQM part, but not
>> being able to have it because they don't want the policy implicit in the
>> scheduling part.

A dilemma of choosing which single line of code to incorporate in an
otherwise far more complex system? I certainly do wish it were entirely
parameterless, and perhaps a future version could be more so than it is
today.

I can write up the complexity required to do, for example, qfq + pie,
but it would be a great deal longer than what's below, and qfq + RED,
or even RED alone, is much longer still. Scripting is needed to
configure those...
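
For contrast, even standalone RED needs its queue thresholds sized by
hand for the link; for a ~10Mbit link it looks something like this
(the numbers here are illustrative, not a tuned recommendation):

tc qdisc add dev your_device root red limit 400000 min 30000 max 90000 \
    avpkt 1000 burst 55 bandwidth 10Mbit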

# to do both AQM + DRR at the same time, with reasonable defaults for
# 4Mbit-10Gbit

tc qdisc add dev your_device root fq_codel

# AQM only
# ecn not presently recommended

tc qdisc add dev your_device root codel

# or (functional equivalent)

tc qdisc add dev your_device root fq_codel flows 1 noecn

# (you could also replace the default tc filter to get, for example,
# a 4-queue system based on DSCP...)

# DRR + SQF-like behavior with minimal AQM, probably mostly reverting
# to drop head from largest queue (with the largest delay I consider
# even slightly reasonable)

tc qdisc add dev your_device root fq_codel target 250ms interval 2500ms

# if your desire is to completely rip out the codel portion of fq_codel,
# that's doable. I know an fq_pie exists, too.

# a reasonable default for satellite systems (the target might need to be
# closer to 120ms, and given the speed of most satellite links, quantum 300
# makes sense, as does a reduced MTU and IW)

tc qdisc add dev your_device root fq_codel target 60ms interval 1200ms

# a useful option for lower-bandwidth systems is quantum 300
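# e.g. (one illustrative way to apply it):

tc qdisc add dev your_device root fq_codel quantum 300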

# Data-center-only use can run with a reduced target and interval

tc qdisc add dev your_device root fq_codel target 500us interval 10ms

# above 10Gbit, increasing the packet limit is good; probably a good
# idea to increase flows as well
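# e.g. (values purely illustrative, not tested recommendations):

tc qdisc add dev your_device root fq_codel limit 20480 flows 4096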

# a current problematic interaction with htb below 2.5Mbit leads to a
# need for a larger target
# (it would be better to fix htb or to write a better rate limiter)
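# e.g., one illustrative workaround under a 2Mbit htb rate limiter (the
# 15ms target is just a guess at "larger", not a tested value):

tc qdisc add dev your_device root handle 1: htb default 1
tc class add dev your_device parent 1: classid 1:1 htb rate 2mbit
tc qdisc add dev your_device parent 1:1 fq_codel target 15ms quantum 300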

It's about a page of directions to handle every use case. I'd LOVE to
have similar guideline and cookbook page(s) for EVERY well-known AQM
and packet scheduling system - notably RED and ARED. I lack data on
PIE's scalability at present, too.

Most rate shaping code on top of this sort of stuff, and most
shaping/QoS-related code in general, is orders of magnitude more
complex than this. Take htb's compensator for ATM and/or PPPoE framing.
Please. Or the hideous QoS schemes people have designed using DPI.

As things stand, fq_codel is a simpler/faster/better drop-in
replacement for tons of code that shaped and used RED, or shaped and
did sfq.

Sensing the line rate, choosing an appropriate packet limit based on
available memory, and auto-choosing the number of flows are things the
C code could be smarter about.

They are things I currently do in a shell script (which also tries to
figure out ATM framing and sets up a 3-tier QoS system).
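
A minimal sketch of the kind of heuristic involved (the rate-to-limit
table below is made up for illustration; the real script does
considerably more):

#!/bin/sh
# crude fq_codel packet-limit selection by link rate (Mbit/s)
# thresholds below are illustrative only
IFACE=${1:-eth0}
RATE=${2:-100}
if [ "$RATE" -le 10 ]; then
        LIMIT=600
elif [ "$RATE" -le 1000 ]; then
        LIMIT=1200
else
        LIMIT=10240
fi
tc qdisc replace dev "$IFACE" root fq_codel limit "$LIMIT"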

I think that adding a rate limiter directly to an fq_codel- or wfq +
codel-derived algorithm is a great idea and would be better than htb
or hfsc + X. I've been meaning to polish up the code...


>>
>> This is critical for fq_codel, because apparently CoDel alone is not
>> recommended (which I would agree with).

The present version of that is useful (without ECN) in many scenarios.
It has been used in combination with hfsc and htb, and standalone.

We've had a version that outcompetes pie in more scenarios, in both
ns2 and cerowrt, for 2 years now... but it falls off a little more at
longer RTTs than we'd like. And pie, in its current form and with its
current defaults in the Linux kernel, performs nowhere near as well as
it did in the paper, or versus codel or ns2_codel.

fq_codel is merely better in every respect we care about - better
utilization & lower latency across a wide range of benchmarks on real
traffic. We have some voip-related benchmarks coming up soon...

I certainly agree that the data center case needs something faster
than "codel by default". I have had pretty good results with target
500us interval 10ms in that case, and I am seriously encouraged by the
new e2e approaches of sch_fq + pacing, and for that matter DCTCP (if
only someone else would port it to a modern kernel).
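
For reference, the basic sch_fq invocation (pacing is enabled by
default there):

tc qdisc add dev your_device root fq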

So maybe, if we clearly separate out the use cases - moving the data
center discussion over to the newly formed dclc group, for example -
we can get moving on fixing the edge of the Internet?
