> On 17 Jun 2015, at 06:37, Simon Barber <si...@superduper.net> wrote:
> 
> The need for a full BDP comes from the way that TCP can release a full BDP in 
> one go, and in fact this happens on restart of an idle stream. cwnd is 
> open to a full BDP, no data is in the pipe, so a full-cwnd burst can go out 
> at once. Pacing is the fix for this - hence the recent discussions of sch_fq.
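
To put rough numbers on the size of that burst - a back-of-the-envelope Python
sketch, with an assumed 10 Mbit/s bottleneck and 100 ms RTT; this is just the
arithmetic of why pacing helps, not how sch_fq itself is implemented:

    # Un-paced restart: a full cwnd can hit the bottleneck back to back.
    mss_bits = 1500 * 8
    link_bps = 10e6                                  # assumed bottleneck rate
    rtt_s = 0.100                                    # assumed unloaded RTT
    bdp_pkts = link_bps * rtt_s / mss_bits           # ~83 packets fill the pipe
    burst_queue_s = bdp_pkts * mss_bits / link_bps   # the burst parks ~1 RTT
                                                     # (~100 ms) of queue
    paced_gap_s = rtt_s / bdp_pkts                   # pacing at cwnd/RTT spreads
                                                     # packets ~1.2 ms apart
    print(bdp_pkts, burst_queue_s, paced_gap_s)
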
> 
> Auto-tuning of target is exactly what I think is needed to mitigate the 
> problem of CoDel limiting available bandwidth usage in certain situations. 
> The penalty is a little extra latency, but if used with fq, that extra 
> latency penalty is isolated from other traffic.

I agree - with fq, estimating the number of recently active flows shouldn't be 
hard, and such auto-tuning could be feasible.
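
Just to sketch what I mean (Python, purely illustrative - the sqrt(N) scaling
rule, the 5 ms floor and the 100 ms ceiling are assumptions of mine, not
anything CoDel or fq_codel does today):

    import math

    def adaptive_target(rtt_est, n_active_flows, floor=0.005, ceiling=0.100):
        # Allow roughly one "unloaded RTT / sqrt(N)" of standing queue,
        # never going below the usual 5 ms floor (all values in seconds).
        n = max(1, n_active_flows)
        target = rtt_est / math.sqrt(n)
        return min(max(target, floor), ceiling)

    print(adaptive_target(rtt_est=0.100, n_active_flows=1))    # 0.1   (~100 ms)
    print(adaptive_target(rtt_est=0.100, n_active_flows=16))   # 0.025 (~25 ms)
    print(adaptive_target(rtt_est=0.040, n_active_flows=64))   # 0.005 (the 5 ms floor)
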

Cheers,
Michael



> 
> Simon
> 
> On 6/15/2015 4:40 PM, Agarwal, Anil wrote:
>> I guess this is pointing to the age-old problem - what is the right buffer 
>> size (or equivalent delay limit) at which packets should be dropped or 
>> ECN-marked, so that the link is never under-utilized?
>> 
>> For a single TCP connection, the answer is the bandwidth-delay product (BDP).
>> For a large number of connections, it is BDP / sqrt(numConnections).
>> 
>> Hence, one size does not fit all.
>> E.g., for an RTT of 100 ms or 500 ms, a CoDel target delay of 5 or 10 ms is 
>> too short when handling a small number of connections.
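
A quick numeric check of that sizing rule (Python; the 50 Mbit/s link rate and
1500-byte packets are assumed for illustration):

    import math

    link_bps = 50e6                          # assumed bottleneck rate
    pkt_bits = 1500 * 8
    for rtt in (0.020, 0.100, 0.500):
        bdp = link_bps * rtt / pkt_bits                  # packets in the pipe
        for n in (1, 4, 100):
            buf = bdp / math.sqrt(n)                     # BDP / sqrt(numConnections)
            delay_ms = buf * pkt_bits / link_bps * 1000  # queueing delay implied
            print(f"RTT {rtt*1000:3.0f} ms, {n:3d} flows: "
                  f"buffer ~{buf:5.0f} pkts (~{delay_ms:3.0f} ms of queue)")

Even with 100 flows, the implied standing queue at 500 ms RTT is ~50 ms - an
order of magnitude above a 5-10 ms target.
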
>> 
>> Perhaps there is a need for a design that adapts the queue size (or delay) 
>> target dynamically by estimating numConnections!
>> 
>> Anil
>> 
>> 
>> -----Original Message-----
>> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Simon Barber
>> Sent: Monday, June 15, 2015 2:01 PM
>> To: Dave Taht
>> Cc: Jonathan Morton; aqm@ietf.org; Steven Blake
>> Subject: Re: [aqm] CoDel on high-speed links
>> 
>> On 6/14/2015 10:26 PM, Dave Taht wrote:
>>> On Sun, Jun 14, 2015 at 4:10 PM, Simon Barber <si...@superduper.net> wrote:
>>>> Indeed - I believe that Codel will drop too much to allow maximum
>>>> bandwidth utilization when there are very few flows and the RTT is
>>>> significantly greater than the target.
>>> Interval. Not target. Interval defaults to 100ms. Target is 5ms.
>>> Dropping behaviors stop when the queue falls below the target.
>> In this case I specifically mean target, not interval. Dropping stops when 
>> the queue falls below target, but by then it's too late. In the case I'm 
>> talking about (cwnd cut by more than the queue length) a period of link idle 
>> occurs, and so bandwidth is hurt. It happens repeatedly.
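
Here is that idle period made concrete - a very idealized Python sketch
(single Reno flow, ACK clocking at the bottleneck rate, packets still in
transit toward the bottleneck ignored; the 100 Mbit/s / 100 ms numbers are
just assumed):

    def idle_packet_times(bdp_pkts, queue_pkts):
        # At the drop: queue_pkts are queued and ~bdp_pkts are in the pipe.
        # Reno halves cwnd to (bdp + queue)/2, so the sender goes silent
        # until that many ACKs arrive (one per bottleneck packet time).
        silent = (bdp_pkts + queue_pkts) / 2
        busy = queue_pkts                  # the queue keeps the link busy this long
        return max(0.0, silent - busy)     # bottleneck idle time, in packet times

    # e.g. 100 Mbit/s, 100 ms RTT (~830 pkt BDP), 5 ms of queue (~41 pkts):
    print(idle_packet_times(830, 41))      # ~395 packet times (~47 ms) idle
    print(idle_packet_times(830, 830))     # full-BDP queue: no idle time
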
>> 
>>> Over a range of tests from near zero to 300ms RTT, codel does quite well
>>> with reno, better with cubic, on single flows. 4 flows, better. fq_codel
>>> does better than that on more than X flows in general.
>> The effect is not huge, but the bandwidth loss is there. More flows 
>> significantly reduce the effect, since the other flows keep the link busy. 
>> This bandwidth reduction effect only happens with very few flows.
>> I think TCP Reno will be worse than Cubic, due to its 50% reduction in 
>> cwnd on drop vs Cubic's 20% reduction - but Cubic's RTT-independent 
>> increase in cwnd after the drop may make the effect happen more often with 
>> larger RTTs.
>> 
>> What results have you seen for codel on a single flow at these larger RTTs?
>> 
>>> You can easily do whatever experiments you like with off-the-shelf
>>> hardware and RTTs around half the planet to get the observations you
>>> need to confirm your thinking. Remember that a drop-tail queue of
>>> various sizes has problems of its own.
>>> 
>>> I have a long overdue rant in progress of being wikified about how to
>>> use netem correctly to properly emulate any rtt you like.
>>> 
>>> I note that a main aqm goal is not maximum bandwidth utilization, but
>>> maximum bandwidth while still having working congestion avoidance and
>>> minimal queue depth so other new flows can rapidly grab their fair
>>> share of the link. The bufferbloat problem was the result of wanting
>>> maximum bandwidth for single flows.
>> Indeed - with many TCP CC algorithms it's just not possible to achieve 
>> maximum bandwidth utilization with only 5 ms of induced latency when the 
>> RTTs are long and there is a single queue (no FQ, only drop tail or a 
>> single-queue AQM). The multiplicative decrease part of TCP CC simply does 
>> not allow it unless the decrease is smaller than the queue (PRR might 
>> mitigate a little here). Now add in FQ and you can have the best of both 
>> worlds.
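
Writing the condition down (idealized single flow, ignoring PRR): with a
multiplicative-decrease factor beta, the link stays busy across a drop only if
beta * (BDP + Q) >= BDP, i.e. Q >= ((1 - beta) / beta) * BDP. A quick Python
check, expressing the queue as delay (Reno beta 0.5; Linux CUBIC reduces by
roughly 0.7):

    def min_queue_ms(rtt_ms, beta):
        # Smallest standing queue (as delay) that keeps the pipe full
        # across a multiplicative decrease of factor beta.
        return (1 - beta) / beta * rtt_ms

    for rtt in (20, 100, 500):
        print(f"RTT {rtt:3d} ms: Reno needs ~{min_queue_ms(rtt, 0.5):.0f} ms of queue, "
              f"CUBIC (beta ~0.7) ~{min_queue_ms(rtt, 0.7):.0f} ms - vs a 5 ms target")
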
>>>> The theory is - with a Reno-based CC the cwnd gets cut in half on a
>>>> drop. If the drop in cwnd is greater than the number of packets in
>>>> the queue, then the queue will empty out, and the link will then be
>>>> idle for a
>>> flight + queue.
>> When cwnd gets cut by N packets, the sender stops sending data while the 
>> ACKs for N data packets are received. If the queue has fewer than N data 
>> packets, then it will empty out, resulting in an idle link at that point, 
>> and eventually at the receiver (hence the bandwidth loss).
>>>> while. If you want to keep the data flowing uninterrupted, then
>>>> you must have a full unloaded RTT's worth of data in the queue at
>>>> that point. A
>>> Do the experiment? Recently landed in flent is the ability to monitor
>>> queue depth while running another test.
>>> 
>>>> drop will happen, the cwnd will be halved (assuming a Reno TCP), and
>>>> the sender will stop sending until one (unloaded) RTT's worth of data
>>>> has been received. At that point the queue will just hit empty as the
>>>> sender starts sending again.
>>> And reno is dead. Long live reno!
>>> 
>>>> Simon
>>>> 
>>>> 
>>>> 
>>>> On 6/9/2015 10:30 AM, Jonathan Morton wrote:
>>>>> Wouldn't that be a sign of dropping too much, in contrast to your
>>>>> previous post suggesting it wouldn't drop enough?
>>>>> 
>>>>> In practice, statistical multiplexing works just fine with fq_codel,
>>>>> and you do in fact get more throughput with multiple flows in those
>>>>> cases where a single flow fails to reach adequate utilisation.
>>>>> 
>>>>> Additionally, utilisation below 100% is really characteristic of
>>>>> Reno on any worthwhile AQM queue and significant RTT. Other TCPs,
>>>>> particularly CUBIC and Westwood+, do rather better.
>>>>> 
>>>>> - Jonathan Morton
>>>>> 
>>> 
> 

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
