Regarding ping during the test: although one ping per second isn't ideal, it appears to be enough to identify problems and to consistently get the same grade - to within one grade level, anyway.
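To put rough numbers on the ping-rate question, here is a back-of-envelope sketch of what higher-frequency pings would cost on the upload channel if each ping becomes its own packet. The per-packet sizes are illustrative assumptions, not measurements:

```python
# Rough upload cost of high-frequency pings, assuming each ping turns
# into its own packet rather than coalescing. Sizes are illustrative:
# ~40 bytes of IP+TCP headers plus an assumed ~60-byte ping payload.
BYTES_PER_PING = 40 + 60

def upload_cost_kbps(pings_per_second, bytes_per_ping=BYTES_PER_PING):
    """Upstream bandwidth consumed by pings, in kilobits per second."""
    return pings_per_second * bytes_per_ping * 8 / 1000

for rate in (1, 10, 100):
    kbps = upload_cost_kbps(rate)
    # Express it as a fraction of a typical 1 Mbit/s ADSL upload channel.
    print(f"{rate:>3} Hz: {kbps:6.1f} kbit/s ({kbps / 1000:.1%} of 1 Mbit/s up)")
```

Under these assumptions, 100 Hz eats on the order of 8% of a 1 Mbit/s upload before even counting TCP ACKs for the downloads, which is one reason to keep the default rate low on slow links.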
An "A" or "A+" is not going to get a "C", and a "D" or "F" is never going to get a "B", no matter how many times the test is re-run. (Regarding the transition between idle and downloading: the downloads are phased in, not all started at once, so any conclusion about transition response has to take that into account as well.)

I can increase the ping frequency when the connection is seen as fast, but 10 Hz or 100 Hz would have issues. For one, there is no visibility into whether a browser is using TCP push; for another, doing 10 or 100 pings a second - if they turn into packets - takes a lot of capacity out of the upload channel. If they coalesce, then the measurements just add noise to the result.

My hope is that the test evolves but stays balanced, producing pointers to problems that may require other, more specialised tests to fully explore. It has taken almost half a million tests to mostly avoid buggy browser versions and platforms and to get a repeatable, largely correct speed measurement. In a clean lab network, that phase of things would have been over and done with after a dozen tests.

I hope there can be a user-settable option for a finer view of latency under load, or another tool designed just for it. I don't see any issue with a solid desktop PC running a current browser, connected to a server dedicated to listening and emitting a 10-100 Hz WebSocket ping while also doing a bunch of downloads, if that were the entire purpose of the exercise.

In the meantime I'd like to add a way for users to easily tag the equipment they are using, because at the moment we're getting all this useful grade information without any context. We don't even know which home users have made an attempt to ameliorate problems.

On Sun, May 3, 2015 at 2:25 AM, Dave Taht <dave.t...@gmail.com> wrote:
> In one of the threads I saw that the dslreports test is one http ping
> every second. I am not really sure how that is handled - if the
> connection is tcp (?)
> and persistent, that measures 1 packet RTT, if it is a new
> connection, it is quite a few RTTs.
>
> And it is really not enough pings for valid statistical sampling.
>
> IF tcp, It would be vastly better to attempt a tcp ping every 10ms on
> an established connection (or whatever can be achieved, with 20ms
> being a good interval for most voip, 100ms seems easily doable,
> but...). This would accomplish two things:
>
> 1) A single packet loss would not cause a RTO (usually 250ms) but be
> flushed out (resent) on the next packet sent. So you would see replies
> get bunched in relation to loss and delay.
>
> 2) More pings more accurately track actual latency over a much tighter
> interval in general, particularly during the slow start phases at the
> beginning of the test where things tend to get really out of hand when
> you fire up tons of flows.
>
> In terms of plotting, I am quite fond of smokeping's methods, so you
> could still show the bar chart on a per second basis, but colored as
> per smokeping.
>
> (It had been my hope to one day leverage the webrtc apis to be able to
> test udp.)
>
> On what interval is it feasible to fire off a new http ping, and can
> the difference between a persistent connection and a new one be
> determined from within the browser?
>
> --
> Dave Täht
> Open Networking needs **Open Source Hardware**
>
> https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
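For what it's worth, the persistent-connection tcp ping Dave describes is easy to sketch outside the browser. This is a toy illustration against a localhost echo server using only the standard socket module - the interval and sample count are arbitrary, and it is not the dslreports implementation:

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one client and echo single-byte pings back until it closes."""
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(1):
            conn.sendall(data)

def tcp_ping(host, port, count=20, interval=0.01):
    """Measure RTTs (seconds) for `count` pings on one persistent connection."""
    rtts = []
    with socket.create_connection((host, port)) as s:
        # Disable Nagle so each one-byte ping goes out immediately.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(count):
            start = time.monotonic()
            s.sendall(b"p")
            s.recv(1)  # block until the echo comes back
            rtts.append(time.monotonic() - start)
            time.sleep(interval)  # ~100 Hz pacing, per Dave's 10ms suggestion
    return rtts

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

samples = tcp_ping("127.0.0.1", listener.getsockname()[1])
print(f"{len(samples)} pings, min {min(samples)*1e6:.0f} us, "
      f"max {max(samples)*1e6:.0f} us")
```

Because every reply rides the established connection, a single loss shows up as one delayed sample rather than a fresh connection setup; the closest a browser gets to this is a long-lived WebSocket.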
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat