Hi Sebastian,

Since dslreports disables its ping measurement due to the high latency, I ran
a separate ping test while the speed test was running:

--- youtube-ui.l.google.com ping statistics ---
61 packets transmitted, 61 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 336.631/369.733/493.473/30.946 ms
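
For reference, the invocation was roughly the following (the count is just
inferred from the 61 packets reported above, so treat this as a sketch):

  ping -c 61 youtube-ui.l.google.com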

It looks to me like configuring a larger interval for fq_codel (350 ms) does
improve latency under load.
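
For anyone who wants to reproduce this, the interval can be passed through the
SQM "advanced option strings" roughly as follows (a sketch, assuming the
iqdisc_opts/eqdisc_opts options in /etc/config/sqm; the exact option names may
differ between sqm-scripts versions):

  uci set sqm.@queue[0].iqdisc_opts='interval 350ms'
  uci set sqm.@queue[0].eqdisc_opts='interval 350ms'
  uci commit sqm
  /etc/init.d/sqm restart
  # verify the new interval took effect
  tc -s qdisc show dev eth1 | grep interval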

On Sun, Jul 24, 2016 at 9:53 AM, Loganaden Velvindron
<logana...@gmail.com> wrote:
> I've set the interval to 350 ms using the advanced option string:
>
> Result is better:
>
> http://www.dslreports.com/speedtest/4520697
>
>
> I will submit a patch for sqm-luci so that the interval is easily
> configurable via the web UI, for African countries where the latency
> tends to be on the order of 300 ms to 600 ms, particularly on 3G
> connections.
>
>
> tc -s qdisc
> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
> quantum 300 target 5.0ms interval 100.0ms ecn
>  Sent 3493746 bytes 5455 pkt (dropped 0, overlimits 0 requeues 2)
>  backlog 0b 0p requeues 2
>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>   new_flows_len 0 old_flows_len 0
> qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12
> direct_packets_stat 3 direct_qlen 1000
>  Sent 4869533 bytes 18710 pkt (dropped 0, overlimits 8200 requeues 0)
>  backlog 0b 0p requeues 0
> qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024
> quantum 300 target 41.1ms interval 350.0ms
>  Sent 12742 bytes 115 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>   maxpacket 256 drop_overlimit 0 new_flow_count 84 ecn_mark 0
>   new_flows_len 0 old_flows_len 1
> qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024
> quantum 300 target 41.1ms interval 350.0ms
>  Sent 4854326 bytes 18569 pkt (dropped 85, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>   maxpacket 1454 drop_overlimit 0 new_flow_count 4831 ecn_mark 0
>   new_flows_len 1 old_flows_len 1
> qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024
> quantum 300 target 41.1ms interval 350.0ms
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>   new_flows_len 0 old_flows_len 0
> qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
>  Sent 21938964 bytes 23336 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
> qdisc mq 0: dev wlan0 root
>  Sent 18874833 bytes 18494 pkt (dropped 0, overlimits 0 requeues 2)
>  backlog 0b 0p requeues 2
> qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10
> direct_packets_stat 0 direct_qlen 32
>  Sent 22265602 bytes 23335 pkt (dropped 0, overlimits 5193 requeues 0)
>  backlog 0b 0p requeues 0
> qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024
> quantum 300 target 5.0ms interval 350.0ms ecn
>  Sent 22265602 bytes 23335 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>   maxpacket 1454 drop_overlimit 0 new_flow_count 4392 ecn_mark 0
>   new_flows_len 1 old_flows_len 1
>
> On Sat, Jul 23, 2016 at 10:59 PM, Loganaden Velvindron
> <logana...@gmail.com> wrote:
>> After going through:
>>
>> https://tools.ietf.org/html/draft-ietf-aqm-fq-codel-06
>>
>> I think I should remove the itarget and etarget settings, and instead set
>> the interval to 500 ms.
>>
>>
>>
>>    The _interval_ parameter has the same semantics as CoDel and is used
>>    to ensure that the minimum sojourn time of packets in a queue used as
>>    an estimator by the CoDel control algorithm is a relatively up-to-
>>    date value.  That is, CoDel only reacts to delay experienced in the
>>    last epoch of length interval.  It SHOULD be set to be on the order
>>    of the worst-case RTT through the bottleneck to give end-points
>>    sufficient time to react.
>>
>>    The default interval value is 100 ms.
>>
>> The default interval value is not suited to my 3G connection, where
>> the worst-case RTT is much higher.
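
For a worst-case RTT of around 500 ms, the corresponding raw tc change would
look roughly like this (a sketch only: the device, parent and handle are taken
from the qdisc output quoted below, and the 25 ms target just follows the
usual 5-10%-of-interval rule of thumb from the CoDel drafts):

  tc qdisc replace dev eth1 parent 1:12 handle 120: fq_codel \
      limit 1001 quantum 300 target 25ms interval 500ms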
>>
>>
>>
>> On Sat, Jul 23, 2016 at 10:36 PM, Loganaden Velvindron
>> <logana...@gmail.com> wrote:
>>>>         It seems the initial burst-like behavior from the earlier test was
>>>> a false positive; these tests are still not beautiful, but at least they do
>>>> not indicate that HTB+fq_codel as configured on your system suffers
>>>> from uncontrolled burstiness. Unfortunately, I have no real insight or
>>>> advice to offer on how to improve your situation (short of setting the
>>>> shaper rates considerably lower).
>>>>         BTW, “tc -d qdisc” and “tc -s qdisc” give a bit more information,
>>>> and “tc class show dev eth1” and “tc class show dev ifb4eth1” will also
>>>> offer more detail about your setup.
>>>>
>>>>
>>>> Best Regards
>>>>         Sebastian
>>>>
>>>>
>>>>
>>>
>>> tc -d qdisc:
>>>
>>> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
>>> quantum 300 target 5.0ms interval 100.0ms ecn
>>> qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12
>>> direct_packets_stat 5 ver 3.17 direct_qlen 1000
>>> qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>> qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>> qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>> qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
>>> qdisc mq 0: dev wlan0 root
>>> qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10
>>> direct_packets_stat 0 ver 3.17 direct_qlen 32
>>> qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms ecn
>>>
>>> tc -s qdisc:
>>>
>>> qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
>>> quantum 300 target 5.0ms interval 100.0ms ecn
>>>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>>   new_flows_len 0 old_flows_len 0
>>> qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12
>>> direct_packets_stat 5 direct_qlen 1000
>>>  Sent 23456195 bytes 109776 pkt (dropped 0, overlimits 86210 requeues 0)
>>>  backlog 0b 4p requeues 0
>>> qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>>  Sent 14760 bytes 164 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>>   maxpacket 256 drop_overlimit 0 new_flow_count 163 ecn_mark 0
>>>   new_flows_len 1 old_flows_len 0
>>> qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>>  Sent 23440300 bytes 109600 pkt (dropped 115, overlimits 0 requeues 0)
>>>  backlog 5816b 4p requeues 0
>>>   maxpacket 1454 drop_overlimit 0 new_flow_count 15749 ecn_mark 0
>>>   new_flows_len 0 old_flows_len 1
>>> qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms
>>>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>>   new_flows_len 0 old_flows_len 0
>>> qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
>>>  Sent 190858989 bytes 149884 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>> qdisc mq 0: dev wlan0 root
>>>  Sent 194287835 bytes 153002 pkt (dropped 0, overlimits 0 requeues 7)
>>>  backlog 0b 0p requeues 7
>>> qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10
>>> direct_packets_stat 0 direct_qlen 32
>>>  Sent 192953486 bytes 149877 pkt (dropped 0, overlimits 41505 requeues 0)
>>>  backlog 0b 0p requeues 0
>>> qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024
>>> quantum 300 target 500.0ms interval 100.0ms ecn
>>>  Sent 192953486 bytes 149877 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>>   maxpacket 1454 drop_overlimit 0 new_flow_count 16778 ecn_mark 0
>>>   new_flows_len 1 old_flows_len 2
>>>
>>>
>>> tc class show dev eth1
>>> class htb 1:11 parent 1:1 leaf 110: prio 1 rate 128Kbit ceil 100Kbit
>>> burst 1600b cburst 1600b
>>> class htb 1:1 root rate 300Kbit ceil 300Kbit burst 1599b cburst 1599b
>>> class htb 1:10 parent 1:1 prio 0 rate 300Kbit ceil 300Kbit burst 1599b
>>> cburst 1599b
>>> class htb 1:13 parent 1:1 leaf 130: prio 3 rate 50Kbit ceil 284Kbit
>>> burst 1600b cburst 1599b
>>> class htb 1:12 parent 1:1 leaf 120: prio 2 rate 50Kbit ceil 284Kbit
>>> burst 1600b cburst 1599b
>>> class fq_codel 110:188 parent 110:
>>> class fq_codel 120:3d6 parent 120:
>>>
>>>
>>> tc class show dev ifb4eth1:
>>> class htb 1:10 parent 1:1 leaf 110: prio 0 rate 19600Kbit ceil
>>> 19600Kbit burst 1597b cburst 1597b
>>> class htb 1:1 root rate 19600Kbit ceil 19600Kbit burst 1597b cburst 1597b
>>> class fq_codel 110:2f1 parent 110:
>>> class fq_codel 110:330 parent 110:
>>>
>>> I changed the target from 350 ms to 500 ms for both ingress and egress,
>>> and the throughput seems to be better:
>>>
>>> http://www.dslreports.com/speedtest/4515961
_______________________________________________
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake
