going back to the integrated vs separable vs complementary debate...

In order for any queueing system to have an effect, there needs to be a
queue for it to work on. If the egress rate exceeds the ingress rate,
no queue ever forms, and no queueing system is needed.
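To make that concrete, here's a toy fluid model (numbers and function name are
mine, purely for illustration):

```python
# Toy model: a queue only builds when ingress exceeds egress.
def queue_after(seconds, ingress_pps, egress_pps):
    """Queue occupancy (packets) after `seconds` at constant rates,
    starting from an empty queue. Occupancy can never go negative."""
    return max(0, (ingress_pps - egress_pps) * seconds)

# Egress faster than ingress: no queue forms, nothing for AQM/FQ to do.
print(queue_after(10, ingress_pps=100, egress_pps=120))  # -> 0
# Ingress faster than egress: a queue builds, and now AQM/FQ matter.
print(queue_after(10, ingress_pps=120, egress_pps=100))  # -> 200
```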

Now, both AQM and FQ systems need a queue to work on. If
you hook them up serially, one or the other has no queue backpressure
to act on,
unless you batch up delivery or deliberately keep a standing queue between them.

Doing both AQM + FQ at the same time works more universally, with a
greater probability of both shooting at AND scheduling the right stuff,
starting at a queue length as short as 3 packets.
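A drastically simplified sketch of doing both at once, in the spirit of
fq_codel but NOT its actual algorithm (the bucket count, 5 ms target, and
plain head-drop are all my simplifications): packets hash into per-flow
queues (the FQ half), and on dequeue each flow's head packet is dropped if
it has sat longer than the target (the AQM half). Even with only 3 packets
queued in total, the flow structure tells you which flow to schedule next
and which packet to shoot at.

```python
import time
from collections import deque

TARGET = 0.005  # 5 ms sojourn target (illustrative, not codel's control law)
NFLOWS = 8      # number of hash buckets (real fq_codel defaults to 1024)

class FqAqm:
    """Per-flow queues plus a crude sojourn-time AQM on each head packet."""
    def __init__(self):
        self.flows = [deque() for _ in range(NFLOWS)]
        self.rr = 0  # round-robin pointer: the scheduling half

    def enqueue(self, flow_id, packet, now=None):
        now = time.monotonic() if now is None else now
        # The FQ half: hash the flow into its own queue, timestamp on entry.
        self.flows[hash(flow_id) % NFLOWS].append((now, packet))

    def dequeue(self, now=None):
        now = time.monotonic() if now is None else now
        for _ in range(NFLOWS):
            q = self.flows[self.rr]
            self.rr = (self.rr + 1) % NFLOWS
            # The AQM half: shoot head packets that sat longer than TARGET.
            while q and now - q[0][0] > TARGET:
                q.popleft()  # drop
            if q:
                return q.popleft()[1]
        return None  # all queues empty
```

For example, a packet queued 100 ms ago gets dropped on dequeue while a
fresh packet from another flow is delivered, regardless of arrival order.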

If you settle on having a standing queue between the two (say, for example,
10 packets), you are in a losing situation vis-à-vis latency in the first place.
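Back of the envelope, assuming 1500-byte packets on a 100 Mbit/s link (my
numbers, chosen purely for illustration):

```python
# Delay added by a standing queue = queued bits / link rate.
PKT_BYTES = 1500   # assumed full-size ethernet frames
LINK_BPS = 100e6   # assumed 100 Mbit/s link

def standing_queue_delay_ms(packets):
    return packets * PKT_BYTES * 8 / LINK_BPS * 1000

print(standing_queue_delay_ms(10))  # -> 1.2 (ms)
```

That's 1.2 ms of delay baked in before either algorithm even sees a packet,
and it scales down with link rate: the same 10 packets cost 12 ms at 10 Mbit/s.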

The situation is mildly better when packet delivery is batched (take
802.11n wireless or cable as examples) - there you could schedule the
flows to end-points, apply aqm/fq+aqm to the flows, and furthermore treat
the bundles sanely at the hardware aggregation level, deciding which
packets you are willing to drop rather than retransmit.

At least in the case of wifi, the whole process (variable-rate packet
scheduler -> aqm/fq -> qos -> txop batching -> packet drops in the
air/retransmits) needs to be a lot more complex to work well, and doesn't
fit into the line-rate ethernet debate well at all. There hasn't yet
been enough work done on it to get anywhere, although I hold out great
hope for latency-aware AQMs getting wireless far more right.

All modern Linux ethernet systems are doing some form of batching in
order to run at 10 gigabit speeds - but JUST enough to keep the hardware
busy - that was the miracle of BQL. Most DMA hardware, though, is not
presently smart enough to do anything other than a TX ring for
handling packets in those batches.
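The BQL idea, in rough outline (a toy sketch of the concept only - the
kernel's dynamic queue limit code also grows and shrinks the limit
automatically, which I'm leaving out):

```python
class ByteQueueLimit:
    """Toy BQL: admit bytes to the driver TX ring only while the
    in-flight byte count is under a limit, so just enough is queued
    to keep the hardware busy and everything else stays up in the
    qdisc, where AQM/FQ can still act on it."""
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.inflight = 0

    def try_send(self, pkt_bytes):
        if self.inflight >= self.limit:
            return False  # push back: packet stays in the qdisc
        self.inflight += pkt_bytes
        return True

    def tx_complete(self, pkt_bytes):
        self.inflight -= pkt_bytes  # hardware finished; admit more
```

With a 3000-byte limit, two 1500-byte packets fill the ring, the third is
pushed back to the qdisc, and a TX completion lets the next one through.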

-- 
Dave Täht

_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
