What I don't know is how rapidly VOIP applications will adjust their latency + jitter window (the operating point that they choose for their operation). They can't adjust it instantly: rapid transitions from one operating point to another would themselves cause problems, so any adjustment will necessarily be gradual.
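As a rough illustration of that kind of gradual adjustment, here is a minimal sketch of a playout-delay controller that only nudges its operating point a small fraction of the way toward a jitter-derived target on each update. The `alpha` smoothing factor and the 4x jitter headroom multiplier are assumptions for illustration, not values from any real VOIP stack:

```python
def update_playout_target(current_ms, observed_jitter_ms, alpha=0.05):
    """Move the playout-delay target a small step toward a value that
    covers the observed jitter, so the operating point drifts slowly
    rather than jumping between values.

    alpha and the 4x headroom multiplier below are illustrative
    assumptions, not parameters of any particular implementation.
    """
    desired = 4 * observed_jitter_ms  # headroom over measured jitter (assumption)
    return current_ms + alpha * (desired - current_ms)

# Example: starting at 40 ms with 20 ms of observed jitter, the target
# creeps toward 80 ms in small steps rather than snapping to it.
target = 40.0
for _ in range(3):
    target = update_playout_target(target, 20.0)
```

With `alpha=0.05` the first update moves the target from 40 ms to 42 ms; reaching the 80 ms goal takes many updates, which is the point — the statistics window used to drive it should be at least that long.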
So the time period over which one computes jitter statistics should probably be related to that behavior. Ideally, we need to get someone involved in WebRTC to help with this, to present statistics that may be useful to end users to predict the behavior of their service. I'll see if I can get someone working on that to join the discussion.

- Jim

On Thu, May 7, 2015 at 7:44 AM, Mikael Abrahamsson <swm...@swm.pp.se> wrote:
> On Thu, 7 May 2015, jb wrote:
>
>> There is a web socket based jitter tester now. It is very early stage but
>> works ok.
>>
>> http://www.dslreports.com/speedtest?radar=1
>>
>> So the latency displayed is the mean latency from a rolling 60-sample
>> buffer. Minimum latency is also displayed, and the +/- PDV value is the
>> mean difference between sequential pings in that same rolling buffer. It is
>> quite similar to the std.dev actually (not shown).
>
> So I think there are two schools here: either you take the average and display
> +/- from that, but I think I prefer to take the lowest of the last 100
> samples (or something), and then display PDV from that "floor" value, i.e.
> PDV can't ever be negative, it can only be positive.
>
> Apart from that, the above multi-place RTT test is really really nice,
> thanks for doing this!
>
> --
> Mikael Abrahamsson    email: swm...@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
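The two "schools" of PDV described above can be sketched in a few lines. This is a minimal illustration under stated assumptions (RTT samples in milliseconds, a fixed-size rolling window; the function name and window sizes are invented for the example): the first metric is the mean absolute difference between consecutive samples (jb's +/- figure), the second is the mean delay above the window's minimum (Mikael's "floor" variant, which can never be negative):

```python
from collections import deque

def jitter_stats(rtts_ms, window=60):
    """Compute jitter metrics over a rolling window of RTT samples (ms).

    Returns (mean_rtt, min_rtt, mean_pdv, floor_pdv):
      - mean_pdv: mean absolute difference between consecutive samples,
        the +/- PDV figure from the dslreports tester description.
      - floor_pdv: mean delay above the window minimum, the "floor"
        variant, which is always >= 0 by construction.
    """
    buf = deque(rtts_ms[-window:], maxlen=window)
    samples = list(buf)
    mean_rtt = sum(samples) / len(samples)
    min_rtt = min(samples)
    # Mean of |successive differences|; 0 if there is only one sample.
    mean_pdv = sum(abs(b - a) for a, b in zip(samples, samples[1:])) / max(len(samples) - 1, 1)
    # Mean excursion above the floor (lowest RTT seen in the window).
    floor_pdv = sum(s - min_rtt for s in samples) / len(samples)
    return mean_rtt, min_rtt, mean_pdv, floor_pdv
```

For a sample stream like `[20, 25, 20, 30]`, the floor-based figure measures queueing delay above the best-case path RTT, while the successive-difference figure tracks sample-to-sample variation; the two can diverge noticeably when delay ramps up slowly, which is one argument for the floor approach in a bufferbloat context.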