Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-11 Thread Dick Roy via Bloat

-Original Message-
From: Starlink [mailto:starlink-boun...@lists.bufferbloat.net] On Behalf Of
Sebastian Moeller via Starlink
Sent: Wednesday, January 11, 2023 12:01 PM
To: Rodney W. Grimes
Cc: Dave Taht via Starlink; mike.reyno...@netforecast.com; libreqos; David
P. Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

Hi Rodney,

> On Jan 11, 2023, at 19:32, Rodney W. Grimes wrote:
> 
> Hello,
> 
> Yall can call me crazy if you want.. but... see below [RWG]
> 
>> Hi Bob,
>> 
>>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink wrote:
>>> 
>>> My biggest barrier is the lack of clock sync by the devices, i.e. very
>>> limited support for PTP in data centers and in end devices. This limits
>>> the ability to measure one-way delays (OWD); most assume that OWD is 1/2
>>> the RTT, which typically is a mistake. We know this intuitively with
>>> airplane flight times or even car commute times, where the one-way time
>>> is not 1/2 a round-trip time. Google Maps directions provide a time
>>> estimate for the one-way link. It doesn't compute a round trip and
>>> divide by two.
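[ED] To put numbers on the RTT/2 pitfall described above, a toy sketch (all figures hypothetical):

```python
# Toy numbers for an asymmetric path: slow uplink, fast downlink.
owd_up_ms = 40.0    # one-way delay, client -> server
owd_down_ms = 10.0  # one-way delay, server -> client

rtt_ms = owd_up_ms + owd_down_ms   # what ping reports: 50 ms
naive_owd_ms = rtt_ms / 2          # the common RTT/2 assumption: 25 ms

# The RTT/2 estimate is wrong by 15 ms in *both* directions on this path.
error_up_ms = abs(naive_owd_ms - owd_up_ms)
error_down_ms = abs(naive_owd_ms - owd_down_ms)
print(rtt_ms, naive_owd_ms, error_up_ms, error_down_ms)  # 50.0 25.0 15.0 15.0
```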

>>> 
>>> For those that can get clock sync working, the iperf 2 --trip-times
>>> option is useful.
>> 
>> [SM] +1; and yet even with unsynchronized clocks one can try to measure
>> how latency changes under load, and that can be done per direction. Sure,
>> this is far inferior to real, reliably measured OWDs, but if life/the
>> internet deals you lemons...
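[ED] The per-direction trick [SM] describes can be sketched as follows; the helper and the numbers are hypothetical, and the sketch assumes the clock offset stays constant (negligible drift) over the test:

```python
# Each sample pairs a sender timestamp with a receiver timestamp taken on
# two unsynchronized clocks: (rx - tx) = true OWD + unknown clock offset.
def owd_increase_under_load(idle_samples, loaded_samples):
    """How much one-way delay grew under load, in timestamp units.
    The unknown constant clock offset cancels in the subtraction."""
    def min_pseudo_owd(samples):
        # min() filters out transient queueing noise in each phase
        return min(rx - tx for tx, rx in samples)
    return min_pseudo_owd(loaded_samples) - min_pseudo_owd(idle_samples)

# Receiver clock is 1000 s ahead; true OWD is 10 ms idle, 60 ms under load.
idle = [(t, t + 0.010 + 1000.0) for t in (0.0, 1.0, 2.0)]
loaded = [(t, t + 0.060 + 1000.0) for t in (10.0, 11.0, 12.0)]
print(round(owd_increase_under_load(idle, loaded), 3))  # -> 0.05 (50 ms of bloat)
```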

> 
> [RWG] iperf2/iperf3, etc. are already moving large amounts of data back
> and forth -- for that matter, so does any rate test -- so why not reuse
> some of that data and add the fundamental NTP clock-sync exchange,
> bidirectionally passing each end's concept of "current time"? IIRC (it's
> been 25 years since I worked on NTP at this level) you *should* be able
> to get a fairly accurate clock delta between each end, and then use that
> info and timestamps in the data stream to compute OWDs. You need to put
> 4 timestamps in the packet, and with that you can compute the "offset".
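[ED] The four-timestamp exchange [RWG] refers to is the classic NTP on-wire calculation (RFC 5905); a minimal sketch with made-up numbers:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client
    receive (t1/t4 on the client's clock, t2/t3 on the server's).
    Returns (clock offset, server minus client; round-trip path delay).
    Note: the offset estimate silently assumes the two path legs are
    symmetric -- the same caveat that limits OWD accuracy."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Hypothetical: server clock 5 s ahead, 0.1 s per path leg, 0.02 s turnaround.
t1 = 100.0
t2 = t1 + 0.1 + 5.0          # arrival stamped on the server clock (+5 s)
t3 = t2 + 0.02               # server replies 20 ms later
t4 = t1 + 0.1 + 0.02 + 0.1   # arrival back on the client clock
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(round(offset, 6), round(delay, 6))  # -> 5.0 0.2
```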

[RR] For this to work at a reasonable level of accuracy, the timestamping
circuits on both ends need to be deterministic and repeatable, as I recall.
Any uncertainty in that process adds to the synchronization
errors/uncertainties.

[SM] Nice idea. I would guess that all timeslot-based access technologies
(Starlink, DOCSIS, GPON, LTE?) distribute "high-quality time" carefully to
the "modems", so maybe all that would be needed is to expose that
high-quality time to the LAN side of those modems, dressed up as an NTP
server?

[RR] It's not that simple! Distributing "high-quality time", i.e.
"synchronizing all clocks", does not solve the communication problem in
synchronous slotted MAC/PHYs! All the technologies you mentioned above are
essentially P2P, not intended for broadcast. The point is, there is a point
controller (aka PoC), often called a base station (eNodeB, gNodeB, ...),
that actually "controls everything that is necessary to control" at the UE,
including time, frequency, and sampling-time offsets; these are critical to
get right if you want to communicate, and they are ALL subject to the laws
of physics (cf. the speed of light)! It turns out that what is necessary
for the system to function anywhere near capacity is for all the clocks
governing transmissions from the UEs to be "unsynchronized" such that all
the UE transmissions arrive at the PoC at the same (prescribed) time! For
some technologies, in particular 5G, these considerations are ESSENTIAL.
Feel free to scour the 3GPP LTE/5G RLC and PHY specs if you don't believe
me! :-)

> 
>> 
>>> --trip-times
>>> enable the measurement of end-to-end write-to-read latencies (client
>>> and server clocks must be synchronized)
> 
> [RWG] --clock-skew
> enable the measurement of the wall clock difference between sender and
> receiver
> 
>> [SM] Sweet!
>> 
>> Regards
>> Sebastian
>> 
>>> Bob

>>>> I have many kvetches about the new latency under load tests being
>>>> designed and distributed over the past year. I am delighted! that they
>>>> are happening, but most really need third party evaluation, and
>>>> calibration, and a solid explanation of what network pathologies they
>>>> do and don't cover. Also a RED team attitude towards them, as well as
>>>> thinking hard about what you are not measuring (operations research).
>>>> I actually rather love the new cloudflare speedtest, because it tests
>>>> a single TCP connection, rather than dozens, and at the same time folk
>>>> are complaining that it doesn't find the actual "speed!". yet... the
>>>> test itself more closely emulates a user experience than speedtest.net
>>>> does. I am personally pretty convinced that the fewer numbers of flows
>>>> that a web page opens improves the likelihood of a good user
>>>> experience, but lack data on it.
>>>> To try to tackle the evaluation and calibration part, I've reached 

Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-11 Thread rjmcmahon via Bloat
Iperf 2 is designed to measure network i/o. Note: it doesn't have to
move large amounts of data. It can support data profiles that don't
drive TCP's CCA, as an example.


Two things I've been asked for and avoided:

1) Integrate clock sync into iperf's test traffic
2) Measure and output CPU usages

I think both of these are outside the scope of a tool designed to test 
network i/o over sockets, rather these should be developed & validated 
independently of a network i/o tool.


Clock error really isn't about the amount/frequency of traffic but rather
about getting a periodic high-quality reference. I tend to lock the local
system oscillator to a GPS pulse-per-second signal. As David says, most
every modern handheld computer already has the GPS chips to do this. So to
me it seems more a policy choice between data center operators and device
manufacturers, and less a technical issue.


Bob

Hello,

Yall can call me crazy if you want.. but... see below [RWG]

Hi Bob,


> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink 
 wrote:
>
> My biggest barrier is the lack of clock sync by the devices, i.e. very limited 
support for PTP in data centers and in end devices. This limits the ability to measure 
one-way delays (OWD); most assume that OWD is 1/2 the RTT, which typically is a 
mistake. We know this intuitively with airplane flight times or even car commute times 
where the one-way time is not 1/2 a round-trip time. Google Maps directions 
provide a time estimate for the one-way link. It doesn't compute a round trip and 
divide by two.
>
> For those that can get clock sync working, the iperf 2 --trip-times option 
is useful.

	[SM] +1; and yet even with unsynchronized clocks one can try to 
measure how latency changes under load and that can be done per 
direction. Sure this is far inferior to real reliably measured OWDs, 
but if life/the internet deals you lemons


 [RWG] iperf2/iperf3, etc are already moving large amounts of data
back and forth, for that matter any rate test, why not abuse some of
that data and add the fundamental NTP clock sync data and
bidirectionally pass each other's concept of "current time".  IIRC (it's
been 25 years since I worked on NTP at this level) you *should* be
able to get a fairly accurate clock delta between each end, and then
use that info and time stamps in the data stream to compute OWD's.
You need to put 4 time stamps in the packet, and with that you can
compute "offset".




>
> --trip-times
>  enable the measurement of end to end write to read latencies (client and 
server clocks must be synchronized)

 [RWG] --clock-skew
	enable the measurement of the wall clock difference between sender and 
receiver




[SM] Sweet!

Regards
Sebastian

>
> Bob
>> I have many kvetches about the new latency under load tests being
>> designed and distributed over the past year. I am delighted! that they
>> are happening, but most really need third party evaluation, and
>> calibration, and a solid explanation of what network pathologies they
>> do and don't cover. Also a RED team attitude towards them, as well as
>> thinking hard about what you are not measuring (operations research).
>> I actually rather love the new cloudflare speedtest, because it tests
>> a single TCP connection, rather than dozens, and at the same time folk
>> are complaining that it doesn't find the actual "speed!". yet... the
>> test itself more closely emulates a user experience than speedtest.net
>> does. I am personally pretty convinced that the fewer numbers of flows
>> that a web page opens improves the likelihood of a good user
>> experience, but lack data on it.
>> To try to tackle the evaluation and calibration part, I've reached out
>> to all the new test designers in the hope that we could get together
>> and produce a report of what each new test is actually doing. I've
>> tweeted, linked in, emailed, and spammed every measurement list I know
>> of, and only to some response, please reach out to other test designer
>> folks and have them join the rpm email list?
>> My principal kvetches in the new tests so far are:
>> 0) None of the tests last long enough.
>> Ideally there should be a mode where they at least run to "time of
>> first loss", or periodically, just run longer than the
>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> there! It's really bad science to optimize the internet for 20
>> seconds. It's like optimizing a car, to handle well, for just 20
>> seconds.
>> 1) Not testing up + down + ping at the same time
>> None of the new tests actually test the same thing that the infamous
>> rrul test does - all the others still test up, then down, and ping. It
>> was/remains my hope that the simpler parts of the flent test suite -
>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> tests would provide calibration to the test designers.
>> we've got zillions of flent results in the archive published here:
>> https://blog.cerowrt.org/post

Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-11 Thread Sebastian Moeller via Bloat
Hi Rodney,




> On Jan 11, 2023, at 19:32, Rodney W. Grimes  
> wrote:
> 
> Hello,
> 
>   Yall can call me crazy if you want.. but... see below [RWG]
>> Hi Bob,
>> 
>> 
>>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink 
>>>  wrote:
>>> 
>>> My biggest barrier is the lack of clock sync by the devices, i.e. very 
>>> limited support for PTP in data centers and in end devices. This limits the 
>>> ability to measure one-way delays (OWD); most assume that OWD is 1/2 
>>> the RTT, which typically is a mistake. We know this intuitively with airplane 
>>> flight times or even car commute times where the one way time is not 1/2 a 
>>> round trip time. Google maps & directions provide a time estimate for the 
>>> one way link. It doesn't compute a round trip and divide by two.
>>> 
>>> For those that can get clock sync working, the iperf 2 --trip-times option 
>>> is useful.
>> 
>>  [SM] +1; and yet even with unsynchronized clocks one can try to measure 
>> how latency changes under load and that can be done per direction. Sure this 
>> is far inferior to real reliably measured OWDs, but if life/the internet 
>> deals you lemons
> 
> [RWG] iperf2/iperf3, etc are already moving large amounts of data back and 
> forth, for that matter any rate test, why not abuse some of that data and add 
> the fundamental NTP clock sync data and bidirectionally pass each other's 
> concept of "current time".  IIRC (it's been 25 years since I worked on NTP at 
> this level) you *should* be able to get a fairly accurate clock delta between 
> each end, and then use that info and time stamps in the data stream to 
> compute OWD's.  You need to put 4 time stamps in the packet, and with that 
> you can compute "offset".

[SM] Nice idea. I would guess that all timeslot based access 
technologies (so starlink, docsis, GPON, LTE?) all distribute "high quality 
time" carefully to the "modems", so maybe all that would be needed is to expose 
that high quality time to the LAN side of those modems, dressed up as NTP 
server?


> 
>> 
>> 
>>> 
>>> --trip-times
>>> enable the measurement of end to end write to read latencies (client and 
>>> server clocks must be synchronized)
> [RWG] --clock-skew
>   enable the measurement of the wall clock difference between sender and 
> receiver
> 
>> 
>>  [SM] Sweet!
>> 
>> Regards
>>  Sebastian
>> 
>>> 
>>> Bob
 I have many kvetches about the new latency under load tests being
 designed and distributed over the past year. I am delighted! that they
 are happening, but most really need third party evaluation, and
 calibration, and a solid explanation of what network pathologies they
 do and don't cover. Also a RED team attitude towards them, as well as
 thinking hard about what you are not measuring (operations research).
 I actually rather love the new cloudflare speedtest, because it tests
 a single TCP connection, rather than dozens, and at the same time folk
 are complaining that it doesn't find the actual "speed!". yet... the
 test itself more closely emulates a user experience than speedtest.net
 does. I am personally pretty convinced that the fewer numbers of flows
 that a web page opens improves the likelihood of a good user
 experience, but lack data on it.
 To try to tackle the evaluation and calibration part, I've reached out
 to all the new test designers in the hope that we could get together
 and produce a report of what each new test is actually doing. I've
 tweeted, linked in, emailed, and spammed every measurement list I know
 of, and only to some response, please reach out to other test designer
 folks and have them join the rpm email list?
 My principal kvetches in the new tests so far are:
 0) None of the tests last long enough.
 Ideally there should be a mode where they at least run to "time of
 first loss", or periodically, just run longer than the
 industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
 there! It's really bad science to optimize the internet for 20
 seconds. It's like optimizing a car, to handle well, for just 20
 seconds.
 1) Not testing up + down + ping at the same time
 None of the new tests actually test the same thing that the infamous
 rrul test does - all the others still test up, then down, and ping. It
 was/remains my hope that the simpler parts of the flent test suite -
 such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
 tests would provide calibration to the test designers.
 we've got zillions of flent results in the archive published here:
 https://blog.cerowrt.org/post/found_in_flent/
 ps. Misinformation about iperf 2 impacts my ability to do this.
>>> 
 The new tests have all added up + ping and down + ping, but not up +
 down + ping. Why??
 The behaviors of what happens in that case are really non-intuitive, I
 know, but... it's jus

Re: [Bloat] Dave's wonderful rant (was: grinch...)

2023-01-11 Thread Rich Brown via Bloat
Thanks Dave for summarizing the current state of speedtests at 
https://blog.cerowrt.org/post/speedtests/. (Perhaps this post should be linked 
from the Bufferbloat.net home page?)

I really enjoyed Jim Roskind's presentation when I watched it on Youtube: 
https://youtu.be/_uaaCiyJCFA?t=499 I found the part about outliers and why p50 
and p90 are problematic to be delightfully intuitive. (This Youtube link is 
queued up at that point, but start at the beginning to see the full 
presentation.) 
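[ED] A quick illustration (made-up numbers) of the point about p50/p90 hiding outliers:

```python
import statistics

# Hypothetical trace: 97% of requests are snappy, 3% stall for 2 seconds.
samples_ms = [20.0] * 97 + [2000.0] * 3

cuts = statistics.quantiles(samples_ms, n=100)  # 99 percentile cut points
p50, p90, p99 = cuts[49], cuts[89], cuts[98]
print(p50, p90, p99)  # p50 and p90 both read 20 ms; only p99 shows the stalls
```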

Rich

> Message: 2
> Date: Tue, 10 Jan 2023 21:07:00 -0800
> From: Dave Taht 
> To: "Luis A. Cornejo" 
> Cc: dick...@alum.mit.edu, Cake List ,
>   "MORTON JR., AL" , IETF IPPM WG ,
>   libreqos , Rpm
>   ,  bloat 
> Subject: Re: [Bloat] [Rpm] [Starlink] [LibreQoS] the grinch meets
>   cloudflare's christmas present
> Message-ID:
>   
> Content-Type: text/plain; charset="UTF-8"
> 
> Dear Luis:
> 
> You hit 17 seconds of delay on your test.
> 
> I got you beat, today, on my LTE connection, I cracked 182 seconds.
> 
> I'd like to thank Verizon for making it possible for me to spew 4000
> words on my kvetches about the current speedtest regimes of speedtest,
> cloudflare, and so on, by making my network connection so lousy today
> that I sat in front of emacs to rant - and y'all for helping tone
> down, a little, this blog entry:
> 
> https://blog.cerowrt.org/post/speedtests/
> 

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat