> On May 6, 2018, at 9:27 PM, Toke Høiland-Jørgensen <t...@toke.dk> wrote:
>> 
>> The backhaul I’d like to test it on uses mostly NSM5s as the wireless
>> devices and APUs for routing, QoS, etc. The QoS scripts use the htb,
>> sfq and prio qdiscs. I’m hoping I can just add a prio qdisc / tc
>> filter somewhere in the existing rules.
> 
> Ah, so this is a wired connection? And you are only targeting a
> particular setup?

The backhaul is a mixture of wired and wireless links, each with an ALIX or APU 
router. A 5 GHz wireless node might contain:

NanoStation M5   <—Ethernet—>   APU   <—Ethernet—>   NanoStation M5

So the links between nodes are usually wireless, while the connections to the 
APU routers are obviously wired. Some inter-node links are fiber, 10 GHz, 
60 GHz, etc...

http://mapa.czfree.net/#lat=50.75798855854454&lng=15.045905113220215&zoom=13&autofilter=1&type=satellite&geolocate=98%7C114%7C111%7C117%7C109%7C111%7C118%7C115%7C107%7C97&node=19444&tilt=0&heading=0&aponly=1&bbonly=1&actlink=1&actnode=1&
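
As for adding a prio qdisc / tc filter to the existing rules (from the quoted 
text above), here's roughly what I have in mind. This is only a sketch: the 
interface, the htb class handle 1:10 and the exact filter placement are 
assumptions, not taken from the actual QoS scripts:

  # hypothetical: hang a prio qdisc off an existing htb leaf class 1:10
  tc qdisc add dev eth0 parent 1:10 handle 110: prio bands 3
  # steer irtt's UDP packets (default port 2112) into band 0
  tc filter add dev eth0 parent 110: protocol ip u32 \
      match ip protocol 17 0xff \
      match ip dport 2112 0xffff \
      flowid 110:1

Whether irtt traffic even reaches class 1:10 depends on the existing htb 
filters, of course, so the real change would have to fit into those rules.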

> If you are running sfq you are probably not going to
> measure a lot of queueing delay, though, as the UDP flow will just get
> its own queue…

Yeah, great point. :) I guess there will be some inter-flow latency due to fair 
queueing though, depending on the number of flows. In the lab, Cake and 
fq_codel can improve upon sfq’s inter-flow latency.
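
In tc terms, that lab comparison is just a matter of swapping the leaf qdisc, 
e.g. (interface and class handle are again placeholders):

  # replace an sfq leaf under htb class 1:10 with fq_codel
  tc qdisc replace dev eth0 parent 1:10 fq_codel
  # or let cake do the shaping itself, without htb (rate is made up)
  # tc qdisc replace dev eth0 root cake bandwidth 50Mbit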

So far, what I’m proposing is only meant to determine the relative 
contribution to latency in the backhaul from channel contention vs. queueing, 
but my ultimate goal is to demonstrate how Cake or fq_codel improves (or does 
not improve) upon sfq in the backhaul. I could run flent through various paths 
in the backhaul, if I’m allowed. It would also be nice if, in addition to 
that, I could say: here’s how much Cake or fq_codel improves intra-flow and 
inter-flow latency for real-world traffic. If some improvement can be shown, 
it may justify the development of the ISP-flavored Cake that has been 
discussed.
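
Concretely, I’d expect the per-path tests to look something like this (the 
host name and title are placeholders):

  # 60 s RRUL test against a server on the far side of a given path
  flent rrul -l 60 -H netperf.example.net -t path-A-sfq
  # plot latency from the saved data file afterwards
  flent -i rrul-*path-A-sfq*.flent.gz -p ping_cdf -o path-A-sfq.png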

>> I now added SmokePing ICMP and IRTT targets for the same LAN host, and
>> can look at that distribution after a day to judge the overhead.

I’m already getting the picture (attached), which probably confirms what we 
already know: irtt shows slightly higher and more variable response times than 
ICMP, but not tens of milliseconds higher. That should hold true under higher 
network load as well, assuming the irtt endpoints themselves are not loaded.

> Yeah, ICMP is definitely treated differently in many places... Another
> example: routers and switches that have a data plane implemented in
> hardware will reply to ICMP from the software control plane, which is
> way *slower* than regular forwarding... Also, ICMP is often subject to
> rate limiting... etc, etc...

Ok, that’s also interesting. It makes it harder to determine what ping is 
actually measuring. :)
