>         It seems the initial burst-like behavior from the earlier test was a 
> false positive; these tests are still not beautiful, but at least they do not 
> indicate that HTB+fq_codel as configured on your system suffers from 
> uncontrolled burstiness. Unfortunately, I have no real insight or advice to 
> offer on how to improve your situation (short of setting the shaper rates 
> considerably lower).
>         BTW, “tc -d qdisc” and “tc -s qdisc” give a bit more information, and 
> “tc class show dev eth1” and “tc class show dev ifb4eth1” will also offer 
> more detail about your setup.
>
>
> Best Regards
>         Sebastian
>
>
>
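A quick way to capture all four of the suggested diagnostics in one go is a
throwaway shell loop like the one below (interface names as in the output
that follows; adjust if your setup differs):

    for c in "tc -d qdisc" "tc -s qdisc" \
             "tc class show dev eth1" "tc class show dev ifb4eth1"; do
        # print a header, then run the command (unquoted $c splits into words)
        echo "### $c"; $c
    done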

tc -d qdisc:

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
quantum 300 target 5.0ms interval 100.0ms ecn
qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12
direct_packets_stat 5 ver 3.17 direct_qlen 1000
qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
qdisc mq 0: dev wlan0 root
qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10
direct_packets_stat 0 ver 3.17 direct_qlen 32
qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms ecn

tc -s qdisc:

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 1024p flows 1024
quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc htb 1: dev eth1 root refcnt 2 r2q 10 default 12
direct_packets_stat 5 direct_qlen 1000
 Sent 23456195 bytes 109776 pkt (dropped 0, overlimits 86210 requeues 0)
 backlog 0b 4p requeues 0
qdisc fq_codel 110: dev eth1 parent 1:11 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
 Sent 14760 bytes 164 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 163 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 120: dev eth1 parent 1:12 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
 Sent 23440300 bytes 109600 pkt (dropped 115, overlimits 0 requeues 0)
 backlog 5816b 4p requeues 0
  maxpacket 1454 drop_overlimit 0 new_flow_count 15749 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 130: dev eth1 parent 1:13 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
 Sent 190858989 bytes 149884 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev wlan0 root
 Sent 194287835 bytes 153002 pkt (dropped 0, overlimits 0 requeues 7)
 backlog 0b 0p requeues 7
qdisc htb 1: dev ifb4eth1 root refcnt 2 r2q 10 default 10
direct_packets_stat 0 direct_qlen 32
 Sent 192953486 bytes 149877 pkt (dropped 0, overlimits 41505 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 110: dev ifb4eth1 parent 1:10 limit 1001p flows 1024
quantum 300 target 500.0ms interval 100.0ms ecn
 Sent 192953486 bytes 149877 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1454 drop_overlimit 0 new_flow_count 16778 ecn_mark 0
  new_flows_len 1 old_flows_len 2


tc class show dev eth1
class htb 1:11 parent 1:1 leaf 110: prio 1 rate 128Kbit ceil 100Kbit
burst 1600b cburst 1600b
class htb 1:1 root rate 300Kbit ceil 300Kbit burst 1599b cburst 1599b
class htb 1:10 parent 1:1 prio 0 rate 300Kbit ceil 300Kbit burst 1599b
cburst 1599b
class htb 1:13 parent 1:1 leaf 130: prio 3 rate 50Kbit ceil 284Kbit
burst 1600b cburst 1599b
class htb 1:12 parent 1:1 leaf 120: prio 2 rate 50Kbit ceil 284Kbit
burst 1600b cburst 1599b
class fq_codel 110:188 parent 110:
class fq_codel 120:3d6 parent 120:


tc class show dev ifb4eth1:
class htb 1:10 parent 1:1 leaf 110: prio 0 rate 19600Kbit ceil
19600Kbit burst 1597b cburst 1597b
class htb 1:1 root rate 19600Kbit ceil 19600Kbit burst 1597b cburst 1597b
class fq_codel 110:2f1 parent 110:
class fq_codel 110:330 parent 110:
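
For anyone trying to reproduce this, the egress class hierarchy above roughly
corresponds to commands of the following shape (handles, rates, prios and the
odd rate/ceil pair on 1:11 are copied verbatim from the listing; this is only
a sketch, not necessarily how the setup script on the router actually builds it):

    tc qdisc add dev eth1 root handle 1: htb default 12
    tc class add dev eth1 parent 1: classid 1:1 htb rate 300kbit ceil 300kbit
    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 300kbit ceil 300kbit prio 0
    tc class add dev eth1 parent 1:1 classid 1:11 htb rate 128kbit ceil 100kbit prio 1
    tc class add dev eth1 parent 1:1 classid 1:12 htb rate 50kbit ceil 284kbit prio 2
    tc class add dev eth1 parent 1:1 classid 1:13 htb rate 50kbit ceil 284kbit prio 3
    tc qdisc add dev eth1 parent 1:11 handle 110: fq_codel limit 1001 flows 1024 quantum 300 target 500ms noecn
    tc qdisc add dev eth1 parent 1:12 handle 120: fq_codel limit 1001 flows 1024 quantum 300 target 500ms noecn
    tc qdisc add dev eth1 parent 1:13 handle 130: fq_codel limit 1001 flows 1024 quantum 300 target 500ms noecn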

I changed the target from 350ms to 500ms for both ingress and egress,
and the throughput seems to be better:

http://www.dslreports.com/speedtest/4515961
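
A change like this can be applied with "tc qdisc change" on each leaf
fq_codel; for example (parent and handle taken from the listings above, the
same form applies on the ifb4eth1 side; shown as an illustration, not the
exact commands used here):

    tc qdisc change dev eth1 parent 1:12 handle 120: fq_codel target 500ms
    tc qdisc change dev ifb4eth1 parent 1:10 handle 110: fq_codel target 500ms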
