Thanks for the feedback... I've been trying out the following based on debloat.sh:

The ath10k access point has two interfaces for these tests:
1. virtual access point - vap1
tc qdisc add dev vap1 handle 1 root mq
tc qdisc add dev vap1 parent 1:1 fq_codel target 30ms quantum 4500 noecn
tc qdisc add dev vap1 parent 1:2 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:3 fq_codel target 30ms quantum 4500
tc qdisc add dev vap1 parent 1:4 fq_codel target 30ms quantum 4500 noecn

2. ethernet - eth1
tc qdisc add dev eth1 root fq_codel

For the netperf-wrapper tests, the 4 stations in use are configured as:
tc qdisc add dev sta101 root fq_codel target 30ms quantum 300
tc qdisc add dev sta102 root fq_codel target 30ms quantum 300
tc qdisc add dev sta103 root fq_codel target 30ms quantum 300
tc qdisc add dev sta104 root fq_codel target 30ms quantum 300
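To sanity-check that the settings took and to watch drop/mark counters during a run, something like this works (a sketch, assuming standard iproute2 tc and the device names above):

```shell
# Show per-qdisc stats (target, quantum, drops, ECN marks) for each station.
for dev in sta101 sta102 sta103 sta104; do
    tc -s qdisc show dev $dev
done
```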

I'm planning to re-run with these settings and then again at a lower mcs.



On 03/27/2015 08:31 PM, Dave Taht wrote:
Wonderful dataset, Isaac! A lot to learn there and quite a bit I can explain, which might take me days to do with graphs and the like.

But it's late, and unless you are planning on doing another test run I will defer.

It is mildly easier to look at this stuff in bulk, so I did a wget -l 1 -m http://candelatech.com/downloads/wifi-reports/trial1/ on the data.

Quick top-level notes rather than a massive blog entry with graphs....

-1) These are totally artificial tests, stressing out queue management. There are no winners or losers per se, only data. Someday we can get to winners and losers, but we have a zillion interrelated variables to isolate and fix first. So consider this data a *baseline* for what wifi - at the highest rate possible - looks like today - and I'd dearly like some results that are below mcs4 on average also as a baseline....

Typical wifi traffic looks nothing like rrul, for example. rrul vs rrul_be is useful for showing how badly 802.11e queues actually work today, however.

0) Pretty hard to get close to the underlying capability of the mac, isn't it? Plenty of problems besides queue management could exist, including running out of cpu....

1) SFQ has a default packet limit of 128 packets which does not appear to be enough at these speeds. Bump it to 1000 for a more direct comparison to the other qdiscs.

You will note a rather big difference in cwnd in your packet captures, and bandwidth usage closer to pfifo_fast. That's what I would expect, anyway.
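A sketch of that deeper-queue SFQ setup (device name hypothetical; assumes a kernel recent enough that sfq accepts a limit above the old 127-packet cap):

```shell
# SFQ with a 1000-packet limit for a more direct comparison
# to the other qdiscs at these speeds.
tc qdisc replace dev sta101 root sfq limit 1000 perturb 10
```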

2) I have generally felt that txops needed more of a "packing" approach to wedging packets into a txop rather than a pure sfq or drr approach, as losses tend to be bursty, and maximizing the number of flows in a txop is a goodness. SFQ packs better than DRR.

That said, there is so much compensating machinery (like retries) getting in the way right now...

3) The SFQ results being better than the fq_codel results in several cases are also due in part to an interaction of the drr quantum and a target not high enough to compensate for wifi jitter.

But in looking at SFQ you can't point to a lower latency and say that's "better" when you also have a much lower achieved bandwidth.

So I would appreciate a run where the stations had an fq_codel quantum of 300 and a target of 30ms. APs, on the other hand, would do better with a larger (incalculable, but say 4500) quantum, a similar target, and a per-dst filter rather than the full 5-tuple.
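One way that suggestion could look, as a sketch (device names match the earlier setup; the flow classifier attached to fq_codel is an assumption about how to approximate a per-dst hash, not a tested recipe):

```shell
# Station side: small quantum, larger target to ride out wifi jitter.
tc qdisc replace dev sta101 root fq_codel target 30ms quantum 300

# AP side: bigger quantum, same target, and hash on destination only
# rather than the full 5-tuple, via a flow filter on the fq_codel root.
tc qdisc replace dev vap1 root handle 1: fq_codel target 30ms quantum 4500
tc filter add dev vap1 parent 1: protocol ip prio 1 \
    flow hash keys dst divisor 1024
```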



On Fri, Mar 27, 2015 at 12:00 PM, Isaac Konikoff <[email protected] <mailto:[email protected]>> wrote:

    Thanks for pointing out horst.

    I've been trying wireshark io graphs such as:
    retry comparison:  wlan.fc.retry==0 (line) to wlan.fc.retry==1
    (impulse)
    beacon delays:  wlan.fc.type_subtype==0x08 AVG
    frame.time_delta_displayed
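The same numbers behind those io graphs can be pulled on the command line; a sketch with tshark (capture filename hypothetical):

```shell
# Count retried vs. non-retried frames in the capture.
tshark -r capture-64sta.pcap -Y "wlan.fc.retry == 1" | wc -l
tshark -r capture-64sta.pcap -Y "wlan.fc.retry == 0" | wc -l

# Dump the displayed inter-beacon gaps, as in the beacon-delay graph.
tshark -r capture-64sta.pcap -Y "wlan.fc.type_subtype == 0x08" \
    -T fields -e frame.time_delta_displayed
```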

    I've uploaded my pcap files, netperf-wrapper results and lanforge
    script reports which have some aggregate graphs below all of the
    pie charts. The pcap files with 64sta in the name correspond to
    the script reports.

    candelatech.com/downloads/wifi-reports/trial1
    <http://candelatech.com/downloads/wifi-reports/trial1>

    I'll upload more once I try the qdisc suggestions and I'll
    generate comparison plots.

    Isaac


    On 03/27/2015 10:21 AM, Aaron Wood wrote:


    On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith
    <[email protected] <mailto:[email protected]>> wrote:

        Using horst I've discovered that the major reason our WiFi
        network sucks is because 90% of the packets are sent at the
        6mbit rate.  Most of the rest show up in the 12 and 24mbit
        zone with a tiny fraction of them using the higher MCS rates.

        Trying to couple the radiotap info with the packet decryption
        to discover the sources of those low-bit rate packets is
        where I've been running into difficulty.  I can see the what
        but I haven't had much luck on the why.

        I totally agree with you that tools other than wireshark for
        analyzing this seem to be non-existent.


    Using the following filter in Wireshark should get you all that
    6Mbps traffic:

    radiotap.datarate == 6

    Then it's pretty easy to dig into what those are (by wifi
    frame-type, at least). At my network, that's mostly broadcast
    traffic (AP beacons and whatnot), as the corporate wifi has been
    set to use that rate as the broadcast rate.
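A quick way to get that by-frame-type breakdown of the 6Mbps traffic, as a sketch (capture filename hypothetical):

```shell
# Tally 6Mbps frames by type/subtype to see what is eating airtime
# at the lowest rate.
tshark -r capture.pcap -Y "radiotap.datarate == 6" \
    -T fields -e wlan.fc.type_subtype | sort | uniq -c | sort -rn
```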

    Without capturing the WPA exchange, the contents of the data
    frames can't be seen, of course.

    -Aaron





--
Dave Täht
Let's make wifi fast, less jittery and reliable again!

https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb


_______________________________________________
Codel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/codel
