Sergey - I wasn't assuming anything about fast.com. The document you shared
wasn't clear about the methodology's details here. Others, sadly, have actually
used ICMP pings in the way I described. I was making a general comment of
concern.
That said, it sounds like what you are doing is really helpful (especially given
that your measure is aimed at end-user experiential qualities).
Good luck!
On Saturday, May 2, 2020 3:00pm, "Sergey Fedorov" <[email protected]> said:
Dave, thanks for sharing interesting thoughts and context.

> I am still a bit worried about properly defining "latency under load" for a
> NAT routed situation. If the test is based on ICMP Ping packets *from the
> server*, it will NOT be measuring the full path latency, and if the potential
> congestion is in the uplink path from the access provider's residential box
> to the access provider's router/switch, it will NOT measure congestion caused
> by bufferbloat reliably on either side, since the bufferbloat will be outside
> the ICMP Ping path.
>
> I realize that a browser based speed test has to be basically run from the
> "server" end, because browsers are not that good at time measurement on a
> packet basis. However, there are ways to solve this and avoid the ICMP Ping
> issue, with a cooperative server.
This erroneously assumes that fast.com measures latency from the server side.
It does not. The measurements are done from the client, over HTTP, with one or
more connections in parallel to the same or a similar set of servers, by
sending empty requests over a previously established connection (you can see
this in the browser's web inspector).
It should be noted that the value is not precisely the "RTT on a TCP/UDP flow
that is loaded with traffic", but "user delay given the presence of heavy
parallel flows". With that, some of the challenges you mentioned do not apply.
In line with another point I've shared earlier - the goal is to measure and
explain the user experience, not to be a diagnostic tool showing internal
transport metrics.
SERGEY FEDOROV
Director of Engineering
[email protected]
121 Albright Way | Los Gatos, CA 95032
On Sat, May 2, 2020 at 10:38 AM David P. Reed <[email protected]> wrote:
I am still a bit worried about properly defining "latency under load" for a NAT
routed situation. If the test is based on ICMP Ping packets *from the server*,
it will NOT be measuring the full path latency, and if the potential congestion
is in the uplink path from the access provider's residential box to the access
provider's router/switch, it will NOT measure congestion caused by bufferbloat
reliably on either side, since the bufferbloat will be outside the ICMP Ping
path.
I realize that a browser based speed test has to be basically run from the
"server" end, because browsers are not that good at time measurement on a
packet basis. However, there are ways to solve this and avoid the ICMP Ping
issue, with a cooperative server.
I once built a test that fixed this issue reasonably well. It carefully created
a TCP-based RTT measurement channel (over HTTP) that forced the echo to
traverse the whole end-to-end path, which is the best and only way to
accurately define lag under load from the user's perspective. The client end of
an unloaded TCP connection can depend on TCP (properly prepared by getting it
past slow start) to generate a single-packet response.
This "TCP ping" thus lets the server end obtain a true end-to-end RTT
measurement.
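A rough sketch of how such a cooperative-server echo could be structured
(Node.js/TypeScript; the endpoints and token scheme here are illustrative and
not the original test): the server stamps a tiny response, the client bounces
it straight back, and the server differences its own clock, so the measurement
covers the full end-to-end path through any NAT.

  // Hypothetical sketch of a cooperative-server "TCP ping" over HTTP.
  import { createServer } from "node:http";
  import { performance } from "node:perf_hooks";

  const sentAt = new Map<string, number>();   // token -> time the server sent it (ms)

  createServer((req, res) => {
    res.setHeader("Cache-Control", "no-store");
    if (req.url === "/ping") {
      // Tiny stamped response; on a warm keep-alive connection (past slow
      // start) it should ride in a single segment each way.
      const token = Math.random().toString(36).slice(2);
      sentAt.set(token, performance.now());
      res.end(token);
    } else if (req.url?.startsWith("/echo/")) {
      // The client echoes the token straight back over the same connection,
      // so the round trip is measured entirely on the server's clock.
      const token = req.url.slice("/echo/".length);
      const t0 = sentAt.get(token);
      sentAt.delete(token);
      res.end(JSON.stringify({ rttMs: t0 === undefined ? null : performance.now() - t0 }));
    } else {
      res.statusCode = 404;
      res.end();
    }
  }).listen(8080);

The measured value includes the client's turnaround time, but on a warm
connection that is typically small compared with queueing delay under load.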
It's like the tcptraceroute tool, in that middleboxes treat the probe as a
real, serious packet rather than an optional, low-priority one.
The same issue comes up with non-browser-based techniques for measuring true
lag-under-load.
Now, as HTTP moves to QUIC, this actually gets easier to do.
One other opportunity I haven't explored, but which is pregnant with potential,
is the use of WebRTC, which runs over UDP internally. Since JavaScript has
direct access to create WebRTC connections (multiple ones), this makes detailed
testing in the browser quite reasonable.
And the time measurements can resolve well below 100 microseconds if the JS is
based on modern JIT compilation (Chrome, Firefox, and Edge all reach
machine-code speed if the code is restricted and in a loop). Then again, there
is WebAssembly if you want to write C code that runs fast in the browser.
WebAssembly is a low-level language that compiles to machine code in the
browser's execution environment and still has access to all the browser's
networking facilities.
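A sketch of what a WebRTC-based probe could look like, done entirely from
browser JavaScript/TypeScript (the two peers are looped back in the same page
only to keep the example self-contained; a real test would connect to a
cooperative remote peer):

  async function webrtcPingDemo(): Promise<void> {
    const a = new RTCPeerConnection();
    const b = new RTCPeerConnection();
    // Trickle ICE candidates directly between the two in-page peers.
    a.onicecandidate = e => { if (e.candidate) b.addIceCandidate(e.candidate).catch(() => {}); };
    b.onicecandidate = e => { if (e.candidate) a.addIceCandidate(e.candidate).catch(() => {}); };

    // Unordered, no-retransmit data channel behaves like UDP: lost probes just vanish.
    const probe = a.createDataChannel("probe", { ordered: false, maxRetransmits: 0 });
    b.ondatachannel = ev => { ev.channel.onmessage = m => ev.channel.send(m.data); };  // echo

    // Minimal in-page offer/answer "signaling".
    await a.setLocalDescription(await a.createOffer());
    await b.setRemoteDescription(a.localDescription!);
    await b.setLocalDescription(await b.createAnswer());
    await a.setRemoteDescription(b.localDescription!);

    probe.onopen = () => {
      probe.onmessage = m => {
        const rtt = performance.now() - Number(m.data);
        console.log(`data-channel RTT: ${rtt.toFixed(3)} ms`);
      };
      setInterval(() => probe.send(String(performance.now())), 200);  // ~5 probes/second
    };
  }

Load could be generated on separate data channels or on parallel HTTP transfers
while the probe channel keeps sampling.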
On Saturday, May 2, 2020 12:52pm, "Dave Taht" <[email protected]> said:
> On Sat, May 2, 2020 at 9:37 AM Benjamin Cronce <[email protected]> wrote:
> >
> > > Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms
>
> I guess one of my questions is that with a switch to BBR netflix is
> going to do pretty well. If fast.com is using bbr, well... that excludes
> much of the current side of the internet.
>
> > For download, I show 6ms unloaded and 6-7 loaded. But for upload the loaded
> > shows as 7-8 and I see it blip upwards of 12ms. But I am no longer using
> > any traffic shaping. Any anti-bufferbloat is from my ISP. A graph of the
> > bloat would be nice.
>
> The tests do need to last a fairly long time.
>
> > On Sat, May 2, 2020 at 9:51 AM Jannie Hanekom <[email protected]> wrote:
> >>
> >> Michael Richardson <[email protected]>:
> >> > Does it find/use my nearest Netflix cache?
> >>
> >> Thankfully, it appears so. The DSLReports bloat test was interesting, but
> >> the jitter on the ~240ms base latency from South Africa (and other parts
> >> of the world) was significant enough that the figures returned were often
> >> unreliable and largely unusable - at least in my experience.
> >>
> >> Fast.com reports my unloaded latency as 4ms, my loaded latency as ~7ms and
> >> mentions servers located in local cities. I finally have a test I can
> >> share with local non-technical people!
> >>
> >> (Agreed, upload test would be nice, but this is a huge step forward from
> >> what I had access to before.)
> >>
> >> Jannie Hanekom
> >>
> >
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake