On 18 November 2017 13:43:23 CET, Pete Heist <notificati...@github.com> wrote:
>Maybe of interest, the per-packet upstream vs downstream loss detection
>now works properly, if you want to use it. Limitations are documented here:
>https://github.com/peteheist/irtt#64-bit-received-window
>
>But how to plot that? Maybe there could be an up arrow for `lost_up`, a
>down arrow for `lost_down`, or an X for the generic `lost`, shown at y=0
>on the latency plot, along with a gap in the plotted line?
Yeah, currently I'm skipping drops entirely, which is probably not the right
thing to do. Same with the ping runner. Will think about how to fix that.
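
Your marker idea might look something like this in matplotlib (just a
rough sketch, not flent's actual plotting code; the lost_up/lost_down
masks and the sample data here are made up):

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(20, dtype=float)           # send times (s)
latency = 20 + 5 * np.random.rand(20)    # RTT samples (ms)

# Hypothetical per-sample loss masks, as irtt might report them.
lost = np.zeros(20, dtype=bool)
lost_up = np.zeros(20, dtype=bool)
lost_down = np.zeros(20, dtype=bool)
lost[[5, 12, 17]] = True
lost_up[5] = True                        # direction known: upstream
lost_down[12] = True                     # direction known: downstream
unknown = lost & ~lost_up & ~lost_down   # direction unknown

latency[lost] = np.nan   # NaN makes matplotlib leave a gap in the line

fig, ax = plt.subplots()
ax.plot(t, latency, label="latency")
ax.plot(t[lost_up], np.zeros(lost_up.sum()), "^", label="lost_up")
ax.plot(t[lost_down], np.zeros(lost_down.sum()), "v", label="lost_down")
ax.plot(t[unknown], np.zeros(unknown.sum()), "x", label="lost")
ax.set_xlabel("time (s)")
ax.set_ylabel("latency (ms)")
ax.legend()
plt.show()
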
On a separate plot, drops could simply be shown as "drops per interval";
that's what the qdisc stats plot does.
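
For that, binning the loss timestamps into fixed windows would be
enough, along these lines (again a sketch; the timestamps and the
0.2 s interval are made up):

import numpy as np
import matplotlib.pyplot as plt

drop_times = np.array([0.3, 0.9, 1.1, 1.15, 2.4])  # loss events (s)
interval = 0.2                                     # bin width (s)
duration = 3.0                                     # test length (s)

edges = np.arange(0.0, duration + interval, interval)
counts, _ = np.histogram(drop_times, bins=edges)

fig, ax = plt.subplots()
ax.step(edges[:-1], counts, where="post")
ax.set_xlabel("time (s)")
ax.set_ylabel("drops per interval")
plt.show()
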
>IRTT for the voip tests would be awesome! (I hope) Just an idea,
>instead of having separate voip tests, any of the tests with UDP flows
>could become a voip test by accepting a "voip_codec" test-parameter
>that sets the interval and packet length. Common codec values:
>https://www.cisco.com/c/en/us/support/docs/voice/voice-quality/7934-bwidth-consume.html#anc1
>
>I guess having generic parameters to explicitly set interval and packet
>length would still be useful...
Are VoIP packets always sent at a fixed interval, or would you need to
support various distributions?
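
If it is always a fixed interval, the mapping would be simple enough,
something like the sketch below. The voip_codec lookup table and the
helper are hypothetical; the table values are what I recall from the
linked Cisco page, and the irtt flag spellings are from memory of its
CLI, so both are worth double-checking:

# Hypothetical codec table for a "voip_codec" test parameter:
# codec name -> (packet interval in ms, voice payload in bytes).
# Values per the linked Cisco page; verify before relying on them.
VOIP_CODECS = {
    "G.711":   (20, 160),
    "G.729":   (20, 20),
    "G.723.1": (30, 24),
    "G.726":   (20, 80),
}

def irtt_client_args(codec):
    """Turn a codec name into irtt client arguments. The -i/-l flag
    spellings are from memory; verify against irtt's docs."""
    interval_ms, payload = VOIP_CODECS[codec]
    return ["-i", "%dms" % interval_ms, "-l", str(payload)]

print(irtt_client_args("G.711"))  # ['-i', '20ms', '-l', '160']
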
-Toke