Hi Mikael,

Thanks! That looks like one fully saturated core, no? I do not know how to parse htop's meter symbols here, so I am not sure what class of load the star denotes, but I would guess it includes softirqs. Either way, the average across the four CPUs is only ~49% load, while one CPU is clearly pegged already. I assume the htop data is from the HGW...
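For what it is worth, the per-CPU softirq share can also be read directly from /proc/stat, sidestepping htop's symbols entirely. A quick sketch (the field order user/nice/system/idle/iowait/irq/softirq is from proc(5); the helper names are my own):

```python
def parse_stat(text):
    """Parse /proc/stat text into {cpu_name: [counters...]} for per-CPU lines."""
    out = {}
    for line in text.splitlines():
        f = line.split()
        # keep cpu0, cpu1, ... but skip the aggregate "cpu" line
        if f and f[0].startswith("cpu") and f[0] != "cpu":
            out[f[0]] = [int(x) for x in f[1:]]
    return out

def softirq_share(before, after):
    """Per-CPU softirq time as a percentage of elapsed jiffies between two snapshots."""
    pct = {}
    for cpu, old in before.items():
        delta = [a - b for a, b in zip(after[cpu], old)]
        total = sum(delta)
        # 7th counter of a /proc/stat cpu line (index 6 here) is softirq time
        pct[cpu] = 100.0 * delta[6] / total if total else 0.0
    return pct
```

Feed it two reads of /proc/stat a few seconds apart (e.g. during the iperf3 run) and it prints how much of each CPU's time went to softirq, which should confirm whether one core is the bottleneck.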
best regards
	Sebastian

> On Sep 4, 2020, at 15:37, Mikael Abrahamsson <swm...@swm.pp.se> wrote:
>
> On Thu, 3 Sep 2020, Sebastian Moeller wrote:
>
>> Mmmh, how did you measure the sirq percentage? Some top versions show
>> overall percentage with 100% meaning all CPUs, so 35% in a quadcore could
>> mean 1 fully maxed out CPU (25%) plus an additional 10% spread over the
>> other three, or something more benign. Better top (so not busybox's) or htop
>> versions also can show the load per CPU which is helpful to pinpoint
>> hotspots...
>
> If I run iperf3 with 10 parallel sessions then htop shows this (in the CAKE
> upstream direction I believe):
>
>   1 [*                            0.7%]   Tasks: 19, 0 thr; 2 running
>   2 [***************************100.0%]   Load average: 0.48 0.16 0.05
>   3 [#************                44.4%]   Uptime: 10 days, 04:46:37
>   4 [****************             54.2%]
>   Mem[|#*                  36.7M/3.84G]
>   Swp[                           0K/0K]
>
> The other direction (-R), typically this:
>
>   1 [#***                         13.0%]   Tasks: 19, 0 thr; 2 running
>   2 [****************             53.9%]   Load average: 0.54 0.25 0.09
>   3 [#***************             55.8%]   Uptime: 10 days, 04:47:36
>   4 [************************     84.4%]
>
> Topology is:
>
> PC - HGW -> Internet
>
> iperf3 is run on the PC, HGW has CAKE in the -> Internet direction.
>
>> Best Regards
>> 	Sebastian
>>
>>> root@OpenWrt:~# tc -s qdisc
>>> qdisc noqueue 0: dev lo root refcnt 2
>>>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>>  backlog 0b 0p requeues 0
>>> qdisc cake 8034: dev eth0 root refcnt 9 bandwidth 900Mbit diffserv3
>>> triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms raw
>>> overhead 0
>>>  Sent 1111772001 bytes 959703 pkt (dropped 134, overlimits 221223 requeues 179)
>>>  backlog 0b 0p requeues 179
>>>  memory used: 2751976b of 15140Kb
>>>  capacity estimate: 900Mbit
>>>  min/max network layer size:           42 /    1514
>>>  min/max overhead-adjusted size:       42 /    1514
>>>  average network hdr offset:           14
>>>
>>>                 Bulk  Best Effort        Voice
>>>  thresh    56250Kbit      900Mbit      225Mbit
>>>  target        5.0ms        5.0ms        5.0ms
>>>  interval    100.0ms      100.0ms      100.0ms
>>>  pk_delay        0us         22us        232us
>>>  av_delay        0us          6us          7us
>>>  sp_delay        0us          4us          5us
>>>  backlog          0b           0b           0b
>>>  pkts              0       959747           90
>>>  bytes             0   1111935437        39440
>>>  way_inds          0        22964            0
>>>  way_miss          0          275            2
>>>  way_cols          0            0            0
>>>  drops             0          134            0
>>>  marks             0            0            0
>>>  ack_drop          0            0            0
>>>  sp_flows          0            3            1
>>>  bk_flows          0            1            0
>>>  un_flows          0            0            0
>>>  max_len           0        68130         3714
>>>  quantum        1514         1514         1514
>>>
>>> --
>>> Mikael Abrahamsson    email: swm...@swm.pp.se
>>> _______________________________________________
>>> Bloat mailing list
>>> Bloat@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
>
> --
> Mikael Abrahamsson    email: swm...@swm.pp.se
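P.S.: for anyone wanting to reproduce the quoted test, something like the following should approximate it. The server address, interface name, and run length are my assumptions, not taken from the thread; only the CAKE options shown in the quoted stats (bandwidth 900Mbit, diffserv3, triple-isolate) come from the output above.

```shell
# On the HGW: shape the -> Internet direction with CAKE, matching the
# "bandwidth 900Mbit diffserv3 triple-isolate" line from the quoted stats.
tc qdisc replace dev eth0 root cake bandwidth 900mbit diffserv3 triple-isolate

# On the PC: 10 parallel upstream sessions, then the reverse direction (-R).
iperf3 -c iperf.example.net -P 10 -t 30
iperf3 -c iperf.example.net -P 10 -t 30 -R

# Inspect the per-tin CAKE counters on the HGW afterwards:
tc -s qdisc show dev eth0
```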