On 6/14/2015 10:26 PM, Dave Taht wrote:
> On Sun, Jun 14, 2015 at 4:10 PM, Simon Barber <si...@superduper.net> wrote:
>> Indeed - I believe that Codel will drop too much to allow maximum bandwidth
>> utilization when there are very few flows and the RTT is significantly
>> greater than target.
> Interval. Not target. Interval defaults to 100ms. Target is 5ms. Dropping
> behaviors stop when the queue falls below the target.
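
For anyone following along, here is roughly the control law in question, sketched in Python (simplified from the published CoDel algorithm in the ACM Queue paper - this is my paraphrase, not the real implementation):

from math import sqrt

TARGET = 0.005    # 5 ms: acceptable standing queue delay
INTERVAL = 0.100  # 100 ms: how long delay must persist before dropping starts

class CoDelState:
    def __init__(self):
        self.first_above_time = 0.0  # when sojourn first exceeded TARGET
        self.dropping = False
        self.count = 0               # drops in the current dropping episode
        self.drop_next = 0.0         # time of the next scheduled drop

    def on_dequeue(self, now, sojourn):
        """Return True if this packet should be dropped."""
        if sojourn < TARGET:
            # Below target: dropping stops (the behavior described above).
            self.first_above_time = 0.0
            self.dropping = False
            return False
        if self.first_above_time == 0.0:
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            # Delay stayed above TARGET for a full INTERVAL: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Subsequent drops come faster: interval / sqrt(count).
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False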

In this case I specifically mean target, not interval. Dropping stops when the queue falls below target, but by then it's too late. In the case I'm talking about (cwnd cut by more than the queue length), a period of link idle occurs, and so bandwidth is hurt. It happens repeatedly.
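
To put rough numbers on it (all values assumed for illustration - a 10 Mbit/s bottleneck, 100 ms base RTT, 1500 byte packets, and codel holding the queue near its 5 ms target):

rate_pps = 10e6 / (1500 * 8)      # ~833 packets/s through the bottleneck
bdp = rate_pps * 0.100            # ~83 packets in flight at 100 ms RTT
queue = rate_pps * 0.005          # ~4 packets of standing queue at 5 ms target

cwnd_at_drop = bdp + queue        # ~87 packets outstanding when codel drops
cut = cwnd_at_drop / 2            # Reno halving withholds ~44 packets

# The ~4 packet queue drains almost immediately; the link then sits idle
# for roughly the remaining packet-times until ACK clocking resumes, and
# even then cwnd is still below the BDP.
idle_ms = (cut - queue) / rate_pps * 1000
print(f"cut ~{cut:.0f} pkts, queue ~{queue:.0f} pkts, "
      f"dead air ~{idle_ms:.0f} ms per cycle")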

> Range of tests from near zero to 300ms RTT codel does quite well with reno,
> better with cubic, on single flows. 4 flows, better. fq_codel does better
> than that on more than X flows in general.
The effect is not huge, but the bandwidth loss is there. More flows significantly reduce the effect, since the other flows keep the link busy; this bandwidth reduction only happens with very few flows. I think TCP Reno will be worse than Cubic, due to its 50% reduction in cwnd on drop vs Cubic's 20% reduction - but Cubic's RTT-independent increase in cwnd after the drop may make the effect happen more often at larger RTTs.
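
For what it's worth, the condition for avoiding the idle period entirely is that the standing queue absorb the whole cwnd cut: beta * (BDP + q) >= BDP, i.e. q >= BDP * (1 - beta) / beta, or rtt * (1 - beta) / beta expressed as queueing delay. A quick sketch comparing Reno (beta = 0.5) with the 20% cut (beta = 0.8) figure for Cubic, against a 5 ms standing queue:

for name, beta in [("reno", 0.5), ("cubic", 0.8)]:
    for rtt_ms in (10, 50, 100, 300):
        need_ms = rtt_ms * (1 - beta) / beta  # queue delay needed for 100% use
        verdict = "ok at 5ms" if need_ms <= 5 else "goes idle"
        print(f"{name:5s} rtt={rtt_ms:3d}ms needs >={need_ms:5.1f}ms of queue"
              f" -> {verdict}")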

What results have you seen for codel on a single flow at these larger RTTs?

> You can easily do whatever experiments you like with off the shelf hardware
> and RTTs around half the planet to get the observations you need to confirm
> your thinking. Remember that a drop tail queue of various sizes has problems
> of its own.

> I have a long overdue rant in progress of being wikified about how to use
> netem correctly to properly emulate any rtt you like.
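
No idea what the rant will cover, but the pitfalls I know of are stacking netem on the same interface as the qdisc under test, and leaving netem's default limit of 1000 packets in place, which silently drops once the delay-bandwidth product gets large. A sketch of a delay middlebox setup (interface names and numbers are placeholders; half the RTT applied in each direction):

import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

rtt_ms = 100
for dev in ("eth0", "eth1"):  # the two interfaces of the delay box
    # limit sized well above the delay-bandwidth product, not the default 1000
    sh(f"tc qdisc replace dev {dev} root netem delay {rtt_ms // 2}ms limit 10000")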

> I note that a main aqm goal is not maximum bandwidth utilization, but maximum
> bandwidth while still having working congestion avoidance and minimal queue
> depth so other new flows can rapidly grab their fair share of the link. The
> bufferbloat problem was the result of wanting maximum bandwidth for single
> flows.
Indeed - with many TCP CC algorithms it's just not possible to achieve maximum bandwidth utilization with only 5ms induced latency when the RTTs are long, and a single queue (no FQ, only drop tail or single queue AQM). The multiplicative decrease part of TCP CC simply does not allow it unless the decrease is smaller than the queue (PRR might mitigate a little here). Now add in FQ and you can have the best of both worlds.
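
To put a number on "just not possible", here is a crude fluid model of a single Reno flow through a queue capped at 5 ms of delay (all values assumed for illustration; it ignores slow start, timeouts, and the RTT inflation from the queue itself, so treat the output as a rough shape, not a measurement):

def reno_utilization(rtt_s, rate_pps, qcap_pkts, rounds=100000):
    bdp = rate_pps * rtt_s            # packets needed to fill the pipe
    wmax = bdp + qcap_pkts            # cwnd at which a drop is forced
    cwnd, sent = wmax / 2.0, 0.0
    for _ in range(rounds):
        sent += min(cwnd, bdp)        # per RTT the link moves at most the BDP
        cwnd += 1.0                   # Reno additive increase: +1 MSS per RTT
        if cwnd >= wmax:
            cwnd = wmax / 2.0         # multiplicative decrease on drop
    return sent / (bdp * rounds)      # fraction of link capacity used

rate = 10e6 / (1500 * 8)              # 10 Mbit/s in 1500 byte packets
for rtt_ms in (10, 50, 100, 300):
    u = reno_utilization(rtt_ms / 1000, rate, rate * 0.005)
    print(f"rtt={rtt_ms:3d}ms -> ~{u:.0%} of link capacity")
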
>> The theory is - with a Reno based CC the cwnd gets cut in half
>> on a drop. If the drop in cwnd is greater than the number of packets in the
>> queue, then the queue will empty out, and the link will then be idle for a

Idle for a flight + queue, to be precise. When cwnd gets cut by N packets, the sender stops sending while ACKs for N data packets are received. If the queue holds fewer than N data packets, it will empty out, resulting in an idle link at that point, and eventually at the receiver (hence bandwidth loss).

>> while. If you want to keep the data flowing uninterrupted, then you
>> must have a full unloaded RTT's worth of data in the queue at that point. A

> Do the experiment? Recently landed in flent is the ability to monitor
> queue depth while running another test.

>> drop will happen, the cwnd will be halved (assuming a Reno TCP), and the
>> sender will stop sending until one (unloaded) RTT's worth of data has been
>> received. At that point the queue will just hit empty as the sender starts
>> sending again.

> And reno is dead. Long live reno!

Simon



>> On 6/9/2015 10:30 AM, Jonathan Morton wrote:
>>> Wouldn't that be a sign of dropping too much, in contrast to your previous
>>> post suggesting it wouldn't drop enough?
>>>
>>> In practice, statistical multiplexing works just fine with fq_codel, and
>>> you do in fact get more throughput with multiple flows in those cases where
>>> a single flow fails to reach adequate utilisation.
>>>
>>> Additionally, utilisation below 100% is really characteristic of Reno on
>>> any worthwhile AQM queue and significant RTT. Other TCPs, particularly CUBIC
>>> and Westwood+, do rather better.
>>>
>>> - Jonathan Morton




_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
