Re: [Bloat] [NNagain] CFP march 1 - network measurement conference

2023-12-07 Thread rjmcmahon via Bloat

iperf 2 supports OWD in multiple forms.

A raspberry pi 5 has a realtime clock and hardware PTP and gpio PPS. The 
retail cost for a pi5 with GPS atomic clock and active fan is less than 
$150


[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 --bounceback 
--trip-times  --bounceback-period 0 -i 1 -t 4


Client connecting to 192.168.1.35, TCP port 5001 with pid 48142 (1/0 
flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0 
usecs & tcp_quickack)

TCP congestion control using cubic
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)

[  1] local 192.168.1.103%enp4s0 port 50558 connected with 192.168.1.35 
port 5001 (prefetch=16384) (bb w/quickack req/reply/hold=100/100/0) 
(trip-times) (sock=3) (icwnd/mss/irtt=14/1448/541) (ct=0.59 ms) on 
2023-12-07 22:01:39.240 (PST)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS(avg)
[  1] 0.00-1.00 sec   739 KBytes  6.05 Mbits/sec
7566=0.130/0.099/0.627/0.007 ms0   14K/115 us7666 rps
[  1] 0.00-1.00 sec  OWD (ms) Cnt=7566 TX=0.072/0.038/0.163/0.002 
RX=0.058/0.047/0.156/0.004 Asymmetry=0.015/0.001/0.103/0.004
[  1] 1.00-2.00 sec   745 KBytes  6.10 Mbits/sec
7630=0.130/0.082/0.422/0.005 ms0   14K/114 us7722 rps
[  1] 1.00-2.00 sec  OWD (ms) Cnt=7630 TX=0.073/0.027/0.364/0.004 
RX=0.057/0.048/0.097/0.003 Asymmetry=0.016/0.000/0.306/0.005
[  1] 2.00-3.00 sec   749 KBytes  6.14 Mbits/sec
7671=0.129/0.085/0.252/0.004 ms0   14K/113 us7756 rps
[  1] 2.00-3.00 sec  OWD (ms) Cnt=7671 TX=0.073/0.031/0.193/0.003 
RX=0.056/0.047/0.102/0.003 Asymmetry=0.017/0.000/0.134/0.004
[  1] 3.00-4.00 sec   737 KBytes  6.04 Mbits/sec
7546=0.131/0.085/0.290/0.004 ms0   14K/115 us7629 rps
[  1] 3.00-4.00 sec  OWD (ms) Cnt=7546 TX=0.073/0.030/0.231/0.003 
RX=0.058/0.047/0.105/0.003 Asymmetry=0.015/0.000/0.172/0.004
[  1] 0.00-4.00 sec  2.90 MBytes  6.08 Mbits/sec
30414=0.130/0.082/0.627/0.005 ms0   14K/376 us7693 rps
[  1] 0.00-4.00 sec  OWD (ms) Cnt=30414 TX=0.073/0.027/0.364/0.003 
RX=0.057/0.047/0.156/0.004 Asymmetry=0.016/0.000/0.306/0.004
[  1] 0.00-4.00 sec  OWD-TX(f)-PDF: 
bin(w=100us):cnt(30414)=1:30393,2:19,3:1,4:1 
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[  1] 0.00-4.00 sec  OWD-RX(f)-PDF: bin(w=100us):cnt(30414)=1:30400,2:14 
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[  1] 0.00-4.00 sec  BB8(f)-PDF: 
bin(w=100us):cnt(30414)=1:6,2:30392,3:14,5:1,7:1 
(5.00/95.00/99.7%=2/2/2,Outliers=16,obl/obu=0/0)


Bob
On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain 
 wrote:
What would be a comprehensive measurement? Should cover all/most 
relevant areas?


It’s easy to specify a suite of measurements which is too heavy to be
easily implemented or supported on the network.  Also, as you point
out, many things can be derived from raw data, so don’t necessarily
require additional specific measurements.


Payload Size: The size of data being transmitted.
Event Rate: The frequency at which payloads are transmitted.
Bitrate: The combination of rate and size transferred in a given test.
Throughput: The data transfer capability achieved on the test path.


All of that can probably be derived from sufficiently finely-grained
TCP data.  i.e. if you had a PCAP of a TCP flow that constituted the
measurement, you’d be able to derive all of the above.


Bandwidth: The data transfer capacity available on the test path.


Presumably the goal of a TCP transaction measurement would be to
enable this calculation.

Transfer Efficiency: The ratio of useful payload data to the overhead 
data.


This is a how-its-used rather than a property-of-the-network.  If
there are network-inherent overheads, they’re likely to be not
directly visible to endpoints, only inferable, and might require
external knowledge of the network.  So, I’d put this out-of-scope.

Round-Trip Time (RTT): The ping delay time to the target server and 
back.

RTT Jitter: The variation in the delay of round-trip time.
Latency: The transmission delay time to the target server and back.
Latency Jitter: The variation in delay of latency.


RTT is measurable.  If Latency is RTT minus processing delay on the
remote end, I’m not sure it’s really measurable, per se, without the
remote end being able to accurately clock itself, or an independent
vantage point adjacent to the remote end.  This is the 
old[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 
--bounceback --trip-times  --bounceback-period 0 -i 1 -t 4


Client connecting to 192.168.1.35, TCP port 5001 with pid 46358 (1/0 
flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0 
usecs & tcp_quickack)

TCP congestion control 

Re: [Bloat] [NNagain] massively less drafty FCC NOI response on raising the broadband standard speeds

2023-11-28 Thread rjmcmahon via Bloat
I think I'm being schedule to present iperf 2 to the FCC TAC sometime in 
January 2024. I'll know more soon.


I plan to have a hands on session, going over

o) WiFi/Broadband key latency technologies
o) Iperf 2 tooling and metrics, including bloat (in units of memory)
o) A WiFi diagnostics latency panel from a chip perspective
o) Suggested actions the FCC TAC can take to facilitate better in home 
wireless networks to support low latency needs of all Americans


The assumption is they realize that low latency is long overdue and that 
solutions are in the realm of today's engineering and, of course, 
nature's physics.


Bob

I would like to thank everyone that lept into making a contribution to
this document today! It is in much better shape than it was yesterday.
2 days to go!!!

Please comment on the document here:
https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit

If you like where the document is going, please put in your signature.
I will submit a final draft for global review a few hours before the
deadline.

I would also like to humbly thank Steve Crocker for contributing $500
to the gofundme drive for this effort. Steve is the author of the very
first RFC[1], which defined the processes by which the Arpanet and
internet were defined and refined, to this day. He made many other
fundamental contributions along the way. RFC1's history is well worth
reviewing:

https://www.internetsociety.org/blog/2015/04/46-years-of-rfcs-celebrating-the-anniversary-of-rfc-1/

I am hoping our efforts here, this week, lay a foundation for a much
better internet moving forward!

A huge thanks to all the other donors to date also! Aside from
gratitude I do not know what to say.  I sent "Tanya" my last "this
machine kills vogons" sticker, but I have new one - since I have
mellowed - that says "This Machine CURES Vogons"  - would anyone want
one of these?), bringing the sum to a magical $ as I write. This
is more than enough for a press release! When we did the successful
make-wifi-fast fcc fight, we also brought on board a media manager,
did some outreach, and my uni (karstaad) threw in for a plane ticket
to fly to Washington, and the total cost was about 12k all told.
Presently I plan to focus on getting a good document in by the
deadline, with as many signatures as possible, and do PR later.

gofundme link: https://gofund.me/c1f3ad18

PS I would also like to thank "robert" for throwing in $250 (via other
means), although I am not sure if he was throwing in to keep the flent
fleet alive or this effort.

On Mon, Nov 27, 2023 at 4:42 AM Dave Taht  wrote:


This is so drafty that normally I would not be distributing it this
early, but family matters have intruded overmuch (in a good way, I
hope everyone had a great thanksgiving!!) and the Dec 1 deadline is
looming..

Co-authors (and commentors) desired! I primarily wanted to hit
latency, latency under load, bufferbloat, as usual, as well as MTBF
and MTTR as well as how complex networks are, . Just three pages, +
repeatable experiments in the appendix, so at least those are in the
public record. Cite the bitag latency report, and this cloudflare
piece, especially:

https://blog.cloudflare.com/making-home-internet-faster/

Anyway the far too drafty draft is here:

https://docs.google.com/document/d/19ADByjakzQXCj9Re_pUvrb5Qe5OK-QmhlYRLMBY4vH4/edit?usp=sharing

In terms of things not flowing for me the concept of equating the need
for more bandwidth as a solution to everything... to the "chewbacca
defense" is entertaining but possibly not useful. People are already
telling me the conclusion is too strong, and it's 4:30 AM, and I am
going back to bed.

Please comment there, and not here. Thanks for any spare brain cells
you might have! gnight.

--
:( My old R campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] [Starlink] net neutrality back in the news

2023-09-28 Thread rjmcmahon via Bloat
Here's is the point for TLDR by Noam. Neutral traffic acceptance is not 
no priorities. We want traffic priorities despite all the b.s. that 
they're unfair.


"All of common carriages free-flow, goals of low transaction cost, and 
no-liability goals are thus preserved by a system of (a) non-exclusive 
interconnection (b) neutral traffic acceptance."


Back to TLDR per Noam. This is the pertinent part. First, few in the 
U.S. want the IAPs to be common carriers. It would really bad.


The following factors are important in determining common carriage:
...
law and regulations define the responsibilities of the parties.

For contract carriers, on the other hand:
...
contracts define parties' responsibilities.

And then, the issue isn't so much about CPE side but peering or 
interconnection of networks.


Interconnectivity is critical to the future network system. Yet 
interconnectivity does not happen by itself; that is the lesson of 
decades of American experience. Open network architecture, comparably 
efficient interconnection, and collocation are part of this evolution.


Such interconnection arrangements do not depend on common carriage, 
though they are inspired by it. Therefore, its is possible,


Then Noam's suggestions on how to go forward to protect common carriage 
principals with contract carriage operators through "neutral" 
interconnections. Notice there is no mandate of equal traffic priority 
only neutral access to the network. Priorities can be negotiated per 
business contracts e.g. peering agreements.


VIII. What for the Future?

...

This suggests that new policy instruments will have to be found to deal 
with the negatives effect on information diversity and flow.


A way to do so is by replacing the principle of common carriage by a new 
principle of neutral interconnection. A carrier can elect to be private 
by running its own self-contained infrastructure, and having full 
control over its content, use and access. But if it interconnects into 
other networks and accepts transmission traffic from them, it cannot 
pick some bits over other bits. This means that while a private carrier 
can be selective in its direct customers, whether they are end-users or 
content providers, it cannot be selective in what it accepts from 
another interconnected carrier.


Among interconnected carriers, no carrier can transmit selectively 
traffic passed on to it by another carrier, based on content, uses, or 
usage, or refuse interconnection on these grounds. Any carrier offering 
interconnection to some carriers must offer it to other carriers, too, 
within technical constraints.


This does not require interconnection on equal terms, as in the case of 
common carriage. But it establishes the possibility of arbitrage if 
differentiated pricing occurs. All of common carriages free-flow, goals 
of low transaction cost, and no-liability goals are thus preserved by a 
system of (a) non-exclusive interconnection (b) neutral traffic 
acceptance.


Bob

On 9/28/23, 12:45, "Starlink on behalf of Dave Taht via Starlink"
mailto:starlink-boun...@lists.bufferbloat.net> on behalf of
starl...@lists.bufferbloat.net
> wrote:

It would be nice, if as a (dis)organisation... the bufferbloat team

could focus on somehow getting both sides of the network neutrality
debate deeplying understanding the technological problem their
pre-conceptions face, and the (now readily available and inexpensive)
solutions that could be deployed, by most ISPs, over a weekend. We are
regularly bringing up a few thousand people a week on libreqos (that
we know of), and then of course, there are all the home routers and
CPE that are increasingly capable of doing the right thing.

[JL] The FCC will soon (maybe today) open a notice of proposed
rulemaking - aka NPRM. That process provides an opportunity for anyone
to file and filings from technical experts are always highly valued.




___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] net neutrality back in the news

2023-09-27 Thread rjmcmahon via Bloat
Common Carriage goes way beyond our lifes. Eli Noam's write up in 1994 
is a good one.


http://www.columbia.edu/dlc/wp/citi/citinoam11.html

Beyond Liberalization II:
The Impending Doom of Common Carriage
Eli M. Noam
Professor of Finance and Economics
Columbia University, Graduate School of Business
March 15, 1994

I. Introduction 1

This article argues that the institution of common carriage, 
historically the foundation of the way telecommunications are delivered, 
will not survive. To clarify: "common carriers" (the misnomer often used 
to refer to telephone companies) will continue to exist, but the status 
under which they operate -- offering service on a non-discriminatory 
basis, neutral as to use and user -- will not.


...

VII. A Contract-Carrier Based Telecommunications System?

The conclusion of the analysis has been that common carriage will erode 
in time, and that a hybrid co-existence will not be stable. This is not 
to say that the common carriers qua carriers will become extinct; many 
of them will remain significant players, but they will conduct their 
business as contract carriers. But common carriage as such will 
disappear. This will not happen overnight, of course. Intermediate 
arrangements can buy several decades of transition time. But the basic 
dynamics will eventually assert themselves.


This conclusion is reached with much regret, because the socially 
positive aspects of common carriage are strong, and because the absence 
to common carriage often means gatekeeper power. But we should not let 
preferences obscure the clarity of analysis.


Bob

Jason just did a beautiful thread as to what was the original source
of the network neutrality
bittorrent vs voip bufferbloat blowup.

https://twitter.com/jlivingood/status/1707078242857849244

Seeing all the political activity tied onto it since (and now again)
reminds of two families at war about an incident that had happened
generations and generations before, where the two sides no longer
remembered why they hated each other so, but just went on hating, and
not forgiving, and not moving on.

Yes, there are entirely separate and additional NN issues, but the
technical problem of providing common carriage between two very
different network application types (voip/gaming vs file transfer) is
thoroughly solved now, and if only all sides recognised at least this
much, and made peace over it, and worked together to deploy those
solutions, maybe, just maybe, we could find mutually satisfactory
solutions to the other problems that plague the internet today, like
security, and the ipv6 rollout.

If anyone here knows anyone more political, still vibrating with 10+
years of outrage about NN on this fronts, on one side or the other, if
you could sit them down, over a beer, and try to explain that at the
start it was a technical problem nobody understood at the time, maybe
that would help.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] receive window bug fix

2023-06-03 Thread rjmcmahon via Bloat
I think better tooling can help and I am always interested in 
suggestions on what to add to iperf 2 for better coverages.


I've thought it good for iperf 2 to support some sort of graph which 
drives socket read/write/delays vs a simplistic pattern of AFAP. It for 
sure stresses things differently, even in drivers. I've seen huge delays 
in some 10G drivers where some UDP packets seem to get stuck in queues 
and where the e2e latency is driven by the socket write rates vs the 
network delays. This is most obvious using burst patterns where the last 
packet of a latency burst is coupled to the first packet of the 
subsequent burst. The coupling between the syscalls to network 
performance is nonobvious and sometimes hard to believe.


We've been adding more "traffic profile" knobs for socket testing and 
have much of the latency metrics incorporated. Most don't use these. 
They seem to be hard to generalize. Cloudflare seems to have crafted 
specific tests after obtaining knowledge of causality.


Bob

PS. As a side note, I'm now being asked how to generate "AI loads" into 
switch fabrics, though there it probably won't be based upon socket 
syscalls but maybe using io_urings - not sure.



This is good work!  I love reading their posts on scale like this.

It’s wild to me that the Linux kernel has (apparently) never
implemented shrinking the receive window, or handling the case of
userspace starting a large transfer and then just not ever reading
it…  the latter is less surprising, I guess, because that’s an
application bug that you probably would catch separately, and would be
focused on fixing in the application layer…

-Aaron

On Sat, Jun 3, 2023 at 1:04 AM Dave Taht via Rpm
 wrote:


these folk do good work, and I loved the graphs



https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/


--
Podcast:


https://www.linkedin.com/feed/update/urn:li:activity:7058793910227111937/

Dave Täht CSO, LibreQos
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

 --
- Sent from my iPhone.
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] receive window bug fix

2023-06-03 Thread rjmcmahon via Bloat

these folk do good work, and I loved the graphs

https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/


Very cool. Thanks for sharing.

I've been considering adding stress tests to iperf 2. Looks like 
Cloudfare has at least two


Small reads & writes with short delay to stress receive window 
processing per


  At the sending host, run a TCP program with an infinite loop, sending 
1500B packets, with a 1 ms delay between each send.
  At the receiving host, run a TCP program with an infinite loop, 
reading 1B at a time, with a 1 ms delay between each read.


And then, rx buffer limit tests, from 
https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency/


  reads as fast as it can, for five seconds this is called fast mode, 
opens up the window
  calculates 5% of the high watermark of the bytes reader during any 
previous one second

  for each second of the next 15 seconds: this is called slow mode
  reads that 5% number of bytes, then stops reading
  sleeps for the remainder of that particular second
  most of the second consists of no reading at all
  steps 1-3 are repeated in a loop three times, so the entire run is 60 
seconds


  This has the effect of highlighting any issues in the handling of 
packets when the buffers repeatedly hit the limit.


Curious about any other traffic scenarios driven by socket read/write 
behaviors that could be useful. Or any others that might apply to WiFi 
aggregation.


Then, if there is a way to generalize these types of send/read/delay 
graphs with a parametric command line?


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] iperf 2 bounceback - independent request/reply sizes

2023-05-12 Thread rjmcmahon via Bloat

I use virtual machines from linode (which was bought by Akamai)

You may want to use --permit-key (or -t on the server side) to protect 
against unauthorized use.


--permit-key [=]
Set a key value that must match for the server to accept traffic on a 
connection. If the option is given without a value on the server a key 
value will be autogenerated and displayed in its initial settings 
report. The lifetime of the key is set using --permit-key-timeout and 
defaults to twenty seconds. The value is required on clients. The value 
will also be used as part of the transfer id in reports. The option set 
on the client but not the server will also cause the server to reject 
the client's traffic. TCP only, no UDP support.


--permit-key-timeout 
Set the lifetime of the permit key in seconds. Defaults to 20 seconds if 
not set. A value of zero will disable the timer.


-t, --time n
time in seconds to listen for new traffic connections, receive traffic 
or send traffic


Bob


Hi Bob,


funny, that is a feature we wanted recently for cake-autorate (not for
the controller but for hypothesis testing of what funny things might
happen over LTE). Our "poor man's" version was ICMP echo requests
against 8.8.8.8 as google accepts large echo requests, but only sends
"truncated" replys

Have a real tool like iperf2 allow to request the size per direction
directly is much better (well, it leaves the challenge of getting
one's own iperf2 server up somewhee accessible on the internet).

Regards
Sebastian


On May 12, 2023, at 17:46, rjmcmahon via Rpm 
 wrote:


Hi All,

I received a recent diff for iperf 2 to support independent request 
and reply sizes for the bounceback test. It's nice to get diffs that 
can be patched in!


[root@ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback 
--bounceback-reply 512K


Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 
flows)
Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold 
req=0 usecs & tcp_quickack)

Bursting request 10 times every 1.00 second(s)
TCP congestion control using reno
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)

[  1] local 192.168.1.15%enp2s0 port 42800 connected with 
192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3) 
(icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163 
(PDT)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS(avg)
[  1] 0.00-1.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.924/7.497/27.463/5.971 ms0   14K/3992 us92 rps
[  1] 1.00-2.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.068/7.274/21.120/3.963 ms0   14K/4307 us99 rps
[  1] 2.00-3.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.674/8.148/17.413/2.798 ms0   14K/4243 us103 rps
[  1] 3.00-4.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.858/7.587/20.889/3.961 ms0   14K/4474 us101 rps
[  1] 4.00-5.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.872/7.558/17.720/2.842 ms0   14K/4692 us101 rps
[  1] 5.00-6.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.649/6.844/18.537/3.205 ms0   14K/4301 us104 rps
[  1] 6.00-7.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.502/7.083/19.839/3.697 ms0   14K/4153 us105 rps
[  1] 7.00-8.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.965/7.747/22.194/4.350 ms0   14K/4357 us100 rps
[  1] 8.00-9.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.072/7.936/20.307/3.730 ms0   14K/4442 us99 rps
[  1] 9.00-10.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.031/8.109/19.907/3.551 ms0   14K/4086 us100 rps
[  1] 0.00-10.02 sec  50.0 MBytes  41.9 Mbits/sec
100=9.962/6.844/27.463/3.740 ms0   14K/4152 us100 rps
[  1] 0.00-10.02 sec  BB8(f)-PDF: 
bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1 
(5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)


Bob
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] iperf 2 bounceback - independent request/reply sizes

2023-05-12 Thread rjmcmahon via Bloat
Glad to hear that. Hopefully, it's useful!! Disclaimer: Very limited 
testing.


Bob

Hi Bob,


funny, that is a feature we wanted recently for cake-autorate (not for
the controller but for hypothesis testing of what funny things might
happen over LTE). Our "poor man's" version was ICMP echo requests
against 8.8.8.8 as google accepts large echo requests, but only sends
"truncated" replys

Have a real tool like iperf2 allow to request the size per direction
directly is much better (well, it leaves the challenge of getting
one's own iperf2 server up somewhee accessible on the internet).

Regards
Sebastian


On May 12, 2023, at 17:46, rjmcmahon via Rpm 
 wrote:


Hi All,

I received a recent diff for iperf 2 to support independent request 
and reply sizes for the bounceback test. It's nice to get diffs that 
can be patched in!


[root@ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback 
--bounceback-reply 512K


Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 
flows)
Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold 
req=0 usecs & tcp_quickack)

Bursting request 10 times every 1.00 second(s)
TCP congestion control using reno
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)

[  1] local 192.168.1.15%enp2s0 port 42800 connected with 
192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3) 
(icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163 
(PDT)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS(avg)
[  1] 0.00-1.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.924/7.497/27.463/5.971 ms0   14K/3992 us92 rps
[  1] 1.00-2.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.068/7.274/21.120/3.963 ms0   14K/4307 us99 rps
[  1] 2.00-3.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.674/8.148/17.413/2.798 ms0   14K/4243 us103 rps
[  1] 3.00-4.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.858/7.587/20.889/3.961 ms0   14K/4474 us101 rps
[  1] 4.00-5.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.872/7.558/17.720/2.842 ms0   14K/4692 us101 rps
[  1] 5.00-6.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.649/6.844/18.537/3.205 ms0   14K/4301 us104 rps
[  1] 6.00-7.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.502/7.083/19.839/3.697 ms0   14K/4153 us105 rps
[  1] 7.00-8.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.965/7.747/22.194/4.350 ms0   14K/4357 us100 rps
[  1] 8.00-9.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.072/7.936/20.307/3.730 ms0   14K/4442 us99 rps
[  1] 9.00-10.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.031/8.109/19.907/3.551 ms0   14K/4086 us100 rps
[  1] 0.00-10.02 sec  50.0 MBytes  41.9 Mbits/sec
100=9.962/6.844/27.463/3.740 ms0   14K/4152 us100 rps
[  1] 0.00-10.02 sec  BB8(f)-PDF: 
bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1 
(5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)


Bob
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] iperf 2 bounceback - independent request/reply sizes

2023-05-12 Thread rjmcmahon via Bloat

For completeness, here are the bounceback cli options.

--bounceback[=n]
run a TCP bounceback or rps test with optional number writes in a burst 
per value of n. The default is ten writes every period and the default 
period is one second (Note: set size with --bounceback-request). See 
NOTES on clock unsynchronized detections.

--bounceback-hold n
request the server to insert a delay of n milliseconds between its read 
and write (default is no delay)

--bounceback-no-quickack
request the server not set the TCP_QUICKACK socket option (disabling TCP 
ACK delays) during a bounceback test (see NOTES)

--bounceback-period[=n]
request the client schedule its send(s) every n seconds (default is one 
second, use zero value for immediate or continuous back to back)

--bounceback-request n
set the bounceback request size in units bytes. Default value is 100 
bytes.

--bounceback-reply n
set the bounceback reply size in units bytes. This supports asymmetric 
message sizes between the request and the reply. Default value is zero, 
which uses the value of --bounceback-request.


Note: Coming up with a weighted graph (delays & sizes) for working-load 
is on the todo list. Thoughts about this are appreciated. Defining a 
graph on the cli seems a requirement.


Bob


Hi All,

I received a recent diff for iperf 2 to support independent request
and reply sizes for the bounceback test. It's nice to get diffs that
can be patched in!

[root@ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback 
--bounceback-reply 512K


Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 
flows)

Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold
req=0 usecs & tcp_quickack)
Bursting request 10 times every 1.00 second(s)
TCP congestion control using reno
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)

[  1] local 192.168.1.15%enp2s0 port 42800 connected with
192.168.1.231 port 5001 (bb w/quickack len/hold=100/0) (sock=3)
(icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163
(PDT)
[ ID] IntervalTransferBandwidth BB
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS(avg)
[  1] 0.00-1.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.924/7.497/27.463/5.971 ms0   14K/3992 us92 rps
[  1] 1.00-2.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.068/7.274/21.120/3.963 ms0   14K/4307 us99 rps
[  1] 2.00-3.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.674/8.148/17.413/2.798 ms0   14K/4243 us103 rps
[  1] 3.00-4.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.858/7.587/20.889/3.961 ms0   14K/4474 us101 rps
[  1] 4.00-5.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.872/7.558/17.720/2.842 ms0   14K/4692 us101 rps
[  1] 5.00-6.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.649/6.844/18.537/3.205 ms0   14K/4301 us104 rps
[  1] 6.00-7.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.502/7.083/19.839/3.697 ms0   14K/4153 us105 rps
[  1] 7.00-8.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.965/7.747/22.194/4.350 ms0   14K/4357 us100 rps
[  1] 8.00-9.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.072/7.936/20.307/3.730 ms0   14K/4442 us99 rps
[  1] 9.00-10.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.031/8.109/19.907/3.551 ms0   14K/4086 us100 rps
[  1] 0.00-10.02 sec  50.0 MBytes  41.9 Mbits/sec
100=9.962/6.844/27.463/3.740 ms0   14K/4152 us100 rps
[  1] 0.00-10.02 sec  BB8(f)-PDF:
bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1
(5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)

Bob

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


[Bloat] iperf 2 bounceback - independent request/reply sizes

2023-05-12 Thread rjmcmahon via Bloat

Hi All,

I received a recent diff for iperf 2 to support independent request and 
reply sizes for the bounceback test. It's nice to get diffs that can be 
patched in!


[root@ctrl1fc35 ~]# iperf -c 192.168.1.231 --bounceback 
--bounceback-reply 512K


Client connecting to 192.168.1.231, TCP port 5001 with pid 305401 (1 
flows)
Bounceback test (req/reply size = 100 Byte/ 512 KByte) (server hold 
req=0 usecs & tcp_quickack)

Bursting request 10 times every 1.00 second(s)
TCP congestion control using reno
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)

[  1] local 192.168.1.15%enp2s0 port 42800 connected with 192.168.1.231 
port 5001 (bb w/quickack len/hold=100/0) (sock=3) 
(icwnd/mss/irtt=14/1448/3302) (ct=3.36 ms) on 2023-05-12 08:36:57.163 
(PDT)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS(avg)
[  1] 0.00-1.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.924/7.497/27.463/5.971 ms0   14K/3992 us92 rps
[  1] 1.00-2.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.068/7.274/21.120/3.963 ms0   14K/4307 us99 rps
[  1] 2.00-3.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.674/8.148/17.413/2.798 ms0   14K/4243 us103 rps
[  1] 3.00-4.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.858/7.587/20.889/3.961 ms0   14K/4474 us101 rps
[  1] 4.00-5.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.872/7.558/17.720/2.842 ms0   14K/4692 us101 rps
[  1] 5.00-6.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.649/6.844/18.537/3.205 ms0   14K/4301 us104 rps
[  1] 6.00-7.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.502/7.083/19.839/3.697 ms0   14K/4153 us105 rps
[  1] 7.00-8.00 sec  5.00 MBytes  42.0 Mbits/sec
10=9.965/7.747/22.194/4.350 ms0   14K/4357 us100 rps
[  1] 8.00-9.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.072/7.936/20.307/3.730 ms0   14K/4442 us99 rps
[  1] 9.00-10.00 sec  5.00 MBytes  42.0 Mbits/sec
10=10.031/8.109/19.907/3.551 ms0   14K/4086 us100 rps
[  1] 0.00-10.02 sec  50.0 MBytes  41.9 Mbits/sec
100=9.962/6.844/27.463/3.740 ms0   14K/4152 us100 rps
[  1] 0.00-10.02 sec  BB8(f)-PDF: 
bin(w=100us):cnt(100)=69:1,71:1,73:1,75:1,76:3,77:1,78:2,79:3,80:3,81:1,82:6,83:7,84:1,85:3,86:4,87:4,88:4,89:5,90:7,91:3,92:4,93:2,95:8,96:3,97:1,98:1,99:1,101:3,102:1,103:1,104:1,106:2,123:1,175:1,178:1,186:1,199:1,200:1,204:1,209:1,212:1,222:1,275:1 
(5.00/95.00/99.7%=76/204/275,Outliers=1,obl/obu=0/0)


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] cloudflare on a roll

2023-04-18 Thread rjmcmahon via Bloat

https://blog.cloudflare.com/making-home-internet-faster/


I wonder if we're all still missing it a bit. We're complaining that 
internet providers are using speed applied to a rated link capacity and 
then we say to use latency or responsiveness in its place. It's like 
saying a road has a speed and a latency. The road really has neither as 
direct attributes. It's stationary as are waveguides. Sure, we can come 
up with a rating of link capacity and link delay per what's attached to 
those waveguides and we also need to add the highly variable "working 
conditions" in order to take a synthetic measurement.


Us now saying "speed is the wrong metric" so use network latency can be 
equally confusing and equally wrong, e.g. if the app thread is CPU 
limited.


I think it's the travel times that matter to the end users. But the user 
doesn't know their destinations, A to B so-to-speak, so instead of 
execution times, we need to find a metric to hint at the users awaiting 
their devices (helping engineers to mitigate and eliminate the dreaded 
indeterminate progress indicators which is sad way to spend device 
energy.)


An indirect way of measuring travel times may be to measure the thread 
write delays. A thread will run as fast as possible (AFAP) when its 
threads don't block, e.g. on network i/o.


I've added support for --tcp-write-times in iperf 2. This gives the 
amount of time the thread's select() blocks awaiting on the ability to 
write (or, with linux, the amount of time awaiting the syscall write() 
to complete. This along with --tcp-write-prefetch sets TCP_NOTSENT_LOWAT 
should give an idea of the amount of time awaiting network availability 
by a thread.


[rjmcmahon@ryzen3950 iperf2-code]$ src/iperf -c mail.rjmcmahon.com 
--tcp-write-times --histograms=1m --tcp-write-prefetch 16K -i 1 -t4


Client connecting to mail.rjmcmahon.com, TCP port 5001 with pid 212310 
(1 flows)

Write buffer size: 131072 Byte (writetimer-enabled)
TCP congestion control using cubic
TOS set to 0x0 (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
Enabled write histograms bin-width=1.000 ms, bins=10

[  1] local 192.168.1.99%enp7s0 port 38538 connected with 45.33.58.123 
port 5001 (prefetch=16384) (sock=3) (icwnd/mss/irtt=14/1448/12335) 
(ct=12.45 ms) on 2023-04-18 10:52:02.956 (PDT)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry 
Cwnd/RTTNetPwr  write-times avg/min/max/stdev (cnt)
[  1] 0.00-1.00 sec  5.25 MBytes  44.0 Mbits/sec  42/1  0 
360K/65206 us  84.43  24.173/13.509/39.347/4.196 ms (42)
[  1] 0.00-1.00 sec W8-PDF: 
bin(w=1ms):cnt(42)=14:1,15:1,19:1,21:2,23:5,24:12,25:11,26:2,27:2,28:1,29:1,33:1,35:1,40:1 
(5.00/95.00/99.7%=19/33/40,Outliers=0,obl/obu=0/0)
[  1] 1.00-2.00 sec  4.75 MBytes  39.8 Mbits/sec  38/0  6 
173K/35105 us  142  26.079/22.403/39.766/4.142 ms (38)
[  1] 1.00-2.00 sec W8-PDF: 
bin(w=1ms):cnt(38)=23:6,24:7,25:9,26:6,27:1,28:1,30:1,32:1,33:3,34:1,35:1,40:1 
(5.00/95.00/99.7%=23/35/40,Outliers=0,obl/obu=0/0)
[  1] 2.00-3.00 sec  4.88 MBytes  40.9 Mbits/sec  39/0  4 
100K/19518 us  262  25.673/22.276/35.668/2.602 ms (39)
[  1] 2.00-3.00 sec W8-PDF: 
bin(w=1ms):cnt(39)=23:2,24:6,25:10,26:9,27:5,28:5,35:1,36:1 
(5.00/95.00/99.7%=23/35/36,Outliers=0,obl/obu=0/0)
[  1] 3.00-4.00 sec  5.00 MBytes  41.9 Mbits/sec  40/0  1 
101K/19337 us  271  25.073/14.430/35.911/2.864 ms (40)
[  1] 3.00-4.00 sec W8-PDF: 
bin(w=1ms):cnt(40)=15:1,23:3,24:7,25:13,26:4,27:6,28:3,29:1,30:1,36:1 
(5.00/95.00/99.7%=23/30/36,Outliers=0,obl/obu=0/0)
[  1] 0.00-4.06 sec  20.0 MBytes  41.3 Mbits/sec  160/2 11 
103K/20126 us  257  25.230/13.509/39.766/3.563 ms (160)
[  1] 0.00-4.06 sec W8(f)-PDF: 
bin(w=1ms):cnt(160)=14:1,15:2,19:1,21:2,23:16,24:32,25:43,26:21,27:15,28:10,29:2,30:2,32:1,33:4,34:1,35:3,36:2,40:2 
(5.00/95.00/99.7%=23/34/40,Outliers=0,obl/obu=0/0)


Bob

https://developer.android.com/reference/android/widget/ProgressBar

Indeterminate Progress
Use indeterminate mode for the progress bar when you do not know how 
long an operation will take. Indeterminate mode is the default for 
progress bar and shows a cyclic animation without a specific amount of 
progress indicated. The following example shows an indeterminate 
progress bar.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-29 Thread rjmcmahon via Bloat

Hi Sebastian,

I'm fine with municipal broadband projects. I do think they'll need to 
leverage the economy of scale driven by others. An ASIC tape out, just 
for the design, is ~$80M and a minimum of 18 mos of high-skill, 
engineering work by many specialties, signal integrity, etc. Then, after 
all that, one has to get in line with a foundry that needs to produce in 
volume per their mfg economies of scale. These markets fundamentally 
have to be driven by large orders from providers with millions of 
subscribers. That's just the market & engineering reality of things.


An aspect of the FiWi argument is that these NRE spends today and 
tomorrow are mostly from SERDES & lasers/optics in the data centers and 
the CMOS radios & PHYs in handsets. Let us look here for the thousands 
of engineers needed and for the supply of parts for the next decade+. I 
don't see it coming from anywhere else.


Then we need the in-premise fiber installers and the OSP labor forces 
who are critical to our success.


And finally, it's the operations & management and the reduction of those 
expenses in a manner that scales.


Bob

Hi Bob,



On Mar 28, 2023, at 19:47, rjmcmahon  wrote:

Interesting. I'm skeptical that our cities in the U.S. can get this 
(structural separation) right.


There really isn't that much to get wrong, you built the access
network and terminate the per household fibers in arge enough
"exchanges" there you offer ISPs to lighten up the fibers on the
premise that customers can use any ISP they want (that is present in
the exchange)... and on ISP change will just be patched differently in
the exchange.
While I think that local "government" also could successfully run
internet access services, I see no reason why they should do so
(unless there is no competition).
The goal here is to move the "natural monopoly" of the access network
out of the hand of the "market" (as markets simply fail as optimizing
resource allocation instruments under mono- and oligopoly conditions,
on either side).




Pre-coaxial cable & contract carriage, the FCC licensed spectrum to 
the major media companies and placed a news obligation on them for 
these OTA rights. A society can't run a democracy well without quality 
and factual information to the constituents. Sadly, contract carriage 
got rid of that news as a public service obligation as predicted by 
Eli Noam. http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we 
get January 6th and an insurrection.






It takes a staff of 300 to produce 30 minutes of news three times a 
day. The co-axial franchise agreements per each city traded this 
obligation for a community access channel and a small studio, and 
annual franchise fees. History has shown this is insufficient for a 
city to provide quality news to its citizens. Community access 
channels failed miserably.


I would argue this is that there are things where cities excel and
some where they simply are mediocre... managing monopoly
infrastructure (like roads, water, sometime power) with long
amortization times is something they do well (either directly or via
companies they own and operate).

Another requirement was two cables so there would be "competition" in 
the coaxial offerings. This rarely happened because of natural 
monopoly both in the last mile and in negotiating broadcast rights 
(mostly for sports.) There is only one broadcast rights winner, e.g. 
NBC for the Olympics, and only one last mile winner. That's been 
proven empirically in the U.S.


Yes, that is why the operator of the last mile, should really not
offer services over that mile itself. Real competition on the access
lines themselves is not going to happen (at least not is sufficient
number to make a market solution viable), but there is precedence of
getting enough service providers to offer their services over access
lines (e.g. Amsterdam).

Now cities are dependent on those franchise fees for their budgets. 
And the cable cos rolled up to a national level. So it's mostly the 
FCC that regulates all of this where they care more about Janet 
Jackson's breast than providing accurate news to help a democracy 
function well. 
https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy


It gets worse as people are moving to unicast networks for their 
"news." But we're really not getting news at all, we're gravitating to 
emotional validations per our dysfunctions. Facebook et al happily 
provide this because it sells more ads. And then the major equipment 
providers claim they're doing great engineering because they can carry 
"AI loads!!" and their stock goes up in value.  This means ads & news 
feeds that trigger dopamine hits for addicts are driving the money 
flows. Which is a sad theme for undereducated populations.


I am not 100% sure this is a uni- versus broadcast issue... even on
uni-cast I can consume traditional middle-of the road news and even on
broadcast I can opt for pretend-news. Sure 

Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-28 Thread rjmcmahon via Bloat
If it doesn't align with privacy & security, what we know of physics, 
what can be achieved by world class engineering, what will be funded by 
market models or behaviors based upon payments & receipts, increase job 
creation for blue collar workers, reduce power consumption, etc. then I 
agree FiWi should, and likely will, fail.


Russia came very late to the industrial revolution because its leaders 
were against technological progress, e.g. trains. That was a critical 
juncture for them. 
https://blogs.lt.vt.edu/jhoran/2014/08/31/transportation-and-industrialization/


It seems likely to me we are at our own critical juncture. I hope we get 
it more or less right so that inclusive human societies, societies that 
learn to care for others, built from our technologies, technologies 
derived from the works & ideas of those who came before us, can benefit 
long after we each depart as has been done with potable water supplies 
for many (but not all.)


Bob

PS. I tend to ignore things that have no chance. I find it better to 
spend my time & energy on things that do have some possibility of 
impact. I find our lives are too short to do otherwise.



IMO, there is a very near zero chance of this ‘FiWi’ coming to
fruition.  No one wants it.  I don’t want it, I see nothing but
flaws, single points of failure, security issues, erosion of privacy
in homes and business,  and general consumer mistrust of such a model
and well as consolidation and monopolization of internet access.  I
will actively speak out against this, is bad in just about every way
you can talk about.  I cannot find a single benefit it offers.

On Mar 28, 2023 at 3:31:40 PM, rjmcmahon 
wrote:


Agreed though, from a semiconductor perspective, 100K units over
ten+
years isn't going to drive a foundry to produce the parts required.
Then, a small staff makes the same decisions for all 100K premises
regardless of things like the ability to pay for differentiators as
they
have no differentiators (we all get Model T black.) These staffs are

also trying to predict the future without any real ability to affect

that future. It's worse than a tragedy of the commons because the
sunk
mistakes get magnified every passing year.

A FiWi architecture with pluggable components may have the
opportunity
to address these issues and do it in volume and at fair prices and
also
reduce climate impacts per taking in account capacity / (latency *
distance * power), by making that aspect field upgradeable.

Bob


https://sifinetworks.com/residential/cities/simi-valley-ca/







I'm due to get it to my area Q2 (or so). we're a suburb outside
LA,



but 100k+ people so not tiny.







David Lang











On Tue, 28 Mar 2023, rjmcmahon wrote:







There are municipal broadband projects. Most are in rural areas



partially funded by the federal government via the USDA. Glasgow



started a few decades ago. Similar to LUS in Lafayette, LA.



https://www.usda.gov/broadband







Rural areas get a lot of federal money for things, a la the farm

bill



which also pays for food stamps instituted as part of the New

Deal



after the Great Depression.













https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/







None of this is really relevant to the vast majority of our

urban



populations that get broadband from investor-owned companies.

These



companies don't receive federal subsidies though sometimes they

get



access to municipal revenue bonds when doing city

infrastructures.







Bob



https://www.linkedin.com/in/christopher-mitchell-79078b5 and

the like



are doing a pretty good job (given the circumstances) here in

the US.



At least, that’s my understanding of his work.







All the best,







Frank



Frantisek (Frank) Borsik







https://www.linkedin.com/in/frantisekborsik







Signal, Telegram, WhatsApp: +421919416714 [2]







iMessage, mobile: +420775230885 [3]







Skype: casioa5302ca







frantisek.bor...@gmail.com







On 28 March 2023 at 7:47:33 PM, rjmcmahon

(rjmcma...@rjmcmahon.com)



wrote:







Interesting. I'm skeptical that our cities in the U.S. can get

this



(structural separation) right.







Pre-coaxial cable & contract carriage, the FCC licensed

spectrum to



the



major media companies and placed a news obligation on them for

these



OTA



rights. A society can't run a democracy well without quality

and



factual



information to the constituents. Sadly, contract carriage got

rid of







that news as a public service obligation as predicted by Eli

Noam.



http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we

get



January



6th and an insurrection.







It takes a staff of 300 to produce 30 minutes of news three

times a



day.



The co-axial franchise agreements per each city traded this



obligation



for a community access channel and a small studio, and annual



franchise



fees. History has shown this is insufficient 

Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-28 Thread rjmcmahon via Bloat
Agreed though, from a semiconductor perspective, 100K units over ten+ 
years isn't going to drive a foundry to produce the parts required. 
Then, a small staff makes the same decisions for all 100K premises 
regardless of things like the ability to pay for differentiators as they 
have no differentiators (we all get Model T black.) These staffs are 
also trying to predict the future without any real ability to affect 
that future. It's worse than a tragedy of the commons because the sunk 
mistakes get magnified every passing year.


A FiWi architecture with pluggable components may have the opportunity 
to address these issues and do it in volume and at fair prices and also 
reduce climate impacts per taking in account capacity / (latency * 
distance * power), by making that aspect field upgradeable.


Bob

https://sifinetworks.com/residential/cities/simi-valley-ca/

I'm due to get it to my area Q2 (or so). we're a suburb outside LA,
but 100k+ people so not tiny.

David Lang


On Tue, 28 Mar 2023, rjmcmahon wrote:

There are municipal broadband projects. Most are in rural areas 
partially funded by the federal government via the USDA. Glasgow 
started a few decades ago. Similar to LUS in Lafayette, LA. 
https://www.usda.gov/broadband


Rural areas get a lot of federal money for things, a la the farm bill 
which also pays for food stamps instituted as part of the New Deal 
after the Great Depression.


https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/

None of this is really relevant to the vast majority of our urban 
populations that get broadband from investor-owned companies. These 
companies don't receive federal subsidies though sometimes they get 
access to municipal revenue bonds when doing city infrastructures.


Bob

https://www.linkedin.com/in/christopher-mitchell-79078b5 and the like
are doing a pretty good job (given the circumstances) here in the US.
At least, that’s my understanding of his work.

All the best,

Frank
Frantisek (Frank) Borsik

https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714 [2]

iMessage, mobile: +420775230885 [3]

Skype: casioa5302ca

frantisek.bor...@gmail.com

On 28 March 2023 at 7:47:33 PM, rjmcmahon (rjmcma...@rjmcmahon.com)
wrote:


Interesting. I'm skeptical that our cities in the U.S. can get this
(structural separation) right.

Pre-coaxial cable & contract carriage, the FCC licensed spectrum to
the
major media companies and placed a news obligation on them for these
OTA
rights. A society can't run a democracy well without quality and
factual
information to the constituents. Sadly, contract carriage got rid of

that news as a public service obligation as predicted by Eli Noam.
http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get
January
6th and an insurrection.

It takes a staff of 300 to produce 30 minutes of news three times a
day.
The co-axial franchise agreements per each city traded this
obligation
for a community access channel and a small studio, and annual
franchise
fees. History has shown this is insufficient for a city to provide
quality news to its citizens. Community access channels failed
miserably.

Another requirement was two cables so there would be "competition"
in
the coaxial offerings. This rarely happened because of natural
monopoly
both in the last mile and in negotiating broadcast rights (mostly
for
sports.) There is only one broadcast rights winner, e.g. NBC for the

Olympics, and only one last mile winner. That's been proven
empirically
in the U.S.

Now cities are dependent on those franchise fees for their budgets.
And
the cable cos rolled up to a national level. So it's mostly the FCC
that
regulates all of this where they care more about Janet Jackson's
breast
than providing accurate news to help a democracy function well.


https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy



It gets worse as people are moving to unicast networks for their
"news."
But we're really not getting news at all, we're gravitating to
emotional
validations per our dysfunctions. Facebook et al happily provide
this
because it sells more ads. And then the major equipment providers
claim
they're doing great engineering because they can carry "AI loads!!"
and
their stock goes up in value. This means ads & news feeds that
trigger
dopamine hits for addicts are driving the money flows. Which is a
sad
theme for undereducated populations.

And ChatGPT is not the answer for our lack of education and a public

obligation to support those educations, which includes addiction
recovery programs, and the ability to think critically for
ourselves.

Bob
Here is an old (2014) post on Stockholm to my class "textbook":



https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html



[1]
Stockholm: 19 years of municipal broadband success [1]
The Stokab report should be required reading for all local
government
officials. Stockholm is one of the top Internet cities 

Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-28 Thread rjmcmahon via Bloat
There are municipal broadband projects. Most are in rural areas 
partially funded by the federal government via the USDA. Glasgow started 
a few decades ago. Similar to LUS in Lafayette, LA. 
https://www.usda.gov/broadband


Rural areas get a lot of federal money for things, a la the farm bill 
which also pays for food stamps instituted as part of the New Deal after 
the Great Depression.


https://sustainableagriculture.net/our-work/campaigns/fbcampaign/what-is-the-farm-bill/

None of this is really relevant to the vast majority of our urban 
populations that get broadband from investor-owned companies. These 
companies don't receive federal subsidies though sometimes they get 
access to municipal revenue bonds when doing city infrastructures.


Bob

https://www.linkedin.com/in/christopher-mitchell-79078b5 and the like
are doing a pretty good job (given the circumstances) here in the US.
At least, that’s my understanding of his work.

All the best,

Frank
Frantisek (Frank) Borsik

https://www.linkedin.com/in/frantisekborsik

Signal, Telegram, WhatsApp: +421919416714 [2]

iMessage, mobile: +420775230885 [3]

Skype: casioa5302ca

frantisek.bor...@gmail.com

On 28 March 2023 at 7:47:33 PM, rjmcmahon (rjmcma...@rjmcmahon.com)
wrote:


Interesting. I'm skeptical that our cities in the U.S. can get this
(structural separation) right.

Pre-coaxial cable & contract carriage, the FCC licensed spectrum to
the
major media companies and placed a news obligation on them for these
OTA
rights. A society can't run a democracy well without quality and
factual
information to the constituents. Sadly, contract carriage got rid of

that news as a public service obligation as predicted by Eli Noam.
http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get
January
6th and an insurrection.

It takes a staff of 300 to produce 30 minutes of news three times a
day.
The co-axial franchise agreements per each city traded this
obligation
for a community access channel and a small studio, and annual
franchise
fees. History has shown this is insufficient for a city to provide
quality news to its citizens. Community access channels failed
miserably.

Another requirement was two cables so there would be "competition"
in
the coaxial offerings. This rarely happened because of natural
monopoly
both in the last mile and in negotiating broadcast rights (mostly
for
sports.) There is only one broadcast rights winner, e.g. NBC for the

Olympics, and only one last mile winner. That's been proven
empirically
in the U.S.

Now cities are dependent on those franchise fees for their budgets.
And
the cable cos rolled up to a national level. So it's mostly the FCC
that
regulates all of this where they care more about Janet Jackson's
breast
than providing accurate news to help a democracy function well.


https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy



It gets worse as people are moving to unicast networks for their
"news."
But we're really not getting news at all, we're gravitating to
emotional
validations per our dysfunctions. Facebook et al happily provide
this
because it sells more ads. And then the major equipment providers
claim
they're doing great engineering because they can carry "AI loads!!"
and
their stock goes up in value. This means ads & news feeds that
trigger
dopamine hits for addicts are driving the money flows. Which is a
sad
theme for undereducated populations.

And ChatGPT is not the answer for our lack of education and a public

obligation to support those educations, which includes addiction
recovery programs, and the ability to think critically for
ourselves.

Bob
Here is an old (2014) post on Stockholm to my class "textbook":



https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html



[1]
Stockholm: 19 years of municipal broadband success [1]
The Stokab report should be required reading for all local
government
officials. Stockholm is one of the top Internet cities in the
worl...

cis471.blogspot.com [1]

-

From: Starlink  on behalf of

Sebastian Moeller via Starlink 
Sent: Sunday, March 26, 2023 2:11 PM
To: David Lang 
Cc: dan ; Frantisek Borsik
; libreqos
; Dave Taht via Starlink
; rjmcmahon
;
bloat 
Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
w/Comcast chat

Hi David,

On Mar 26, 2023, at 22:57, David Lang  wrote:

On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:

The point of the thread is that we still do not treat digital

 communications infrastructure as life support critical.


Well, let's keep things in perspective, unlike power, water

 (fresh and waste), and often gas, communications infrastructure is
mostly not critical yet. But I agree that we are clearly on a path in
that direction, so it is time to look at that from a different
perspective.


Personally, I am a big fan of putting the access network into

 communal hands, as these guys already do a decent job with other
critical infrastructure (see list 

Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-28 Thread rjmcmahon via Bloat
Interesting. I'm skeptical that our cities in the U.S. can get this 
(structural separation) right.


Pre-coaxial cable & contract carriage, the FCC licensed spectrum to the 
major media companies and placed a news obligation on them for these OTA 
rights. A society can't run a democracy well without quality and factual 
information to the constituents. Sadly, contract carriage got rid of 
that news as a public service obligation as predicted by Eli Noam. 
http://www.columbia.edu/dlc/wp/citi/citinoam11.html Hence we get January 
6th and an insurrection.


It takes a staff of 300 to produce 30 minutes of news three times a day. 
The co-axial franchise agreements per each city traded this obligation 
for a community access channel and a small studio, and annual franchise 
fees. History has shown this is insufficient for a city to provide 
quality news to its citizens. Community access channels failed 
miserably.


Another requirement was two cables so there would be "competition" in 
the coaxial offerings. This rarely happened because of natural monopoly 
both in the last mile and in negotiating broadcast rights (mostly for 
sports.) There is only one broadcast rights winner, e.g. NBC for the 
Olympics, and only one last mile winner. That's been proven empirically 
in the U.S.


Now cities are dependent on those franchise fees for their budgets. And 
the cable cos rolled up to a national level. So it's mostly the FCC that 
regulates all of this where they care more about Janet Jackson's breast 
than providing accurate news to help a democracy function well. 
https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII_halftime_show_controversy


It gets worse as people are moving to unicast networks for their "news." 
But we're really not getting news at all, we're gravitating to emotional 
validations per our dysfunctions. Facebook et al happily provide this 
because it sells more ads. And then the major equipment providers claim 
they're doing great engineering because they can carry "AI loads!!" and 
their stock goes up in value.  This means ads & news feeds that trigger 
dopamine hits for addicts are driving the money flows. Which is a sad 
theme for undereducated populations.


And ChatGPT is not the answer for our lack of education and a public 
obligation to support those educations, which includes addiction 
recovery programs, and the ability to think critically for ourselves.


Bob

Here is an old (2014) post on Stockholm to my class "textbook":
 
https://cis471.blogspot.com/2014/06/stockholm-19-years-of-municipal.html



 [1]
 Stockholm: 19 years of municipal broadband success [1]
 The Stokab report should be required reading for all local government
officials. Stockholm is one of the  top Internet cities in the worl...

 cis471.blogspot.com

-

From: Starlink  on behalf of
Sebastian Moeller via Starlink 
Sent: Sunday, March 26, 2023 2:11 PM
To: David Lang 
Cc: dan ; Frantisek Borsik
; libreqos
; Dave Taht via Starlink
; rjmcmahon ;
bloat 
Subject: Re: [Starlink] [Bloat] On fiber as critical infrastructure
w/Comcast chat

Hi David,


On Mar 26, 2023, at 22:57, David Lang  wrote:

On Sun, 26 Mar 2023, Sebastian Moeller via Bloat wrote:


The point of the thread is that we still do not treat digital

communications infrastructure as life support critical.


  Well, let's keep things in perspective, unlike power, water

(fresh and waste), and often gas, communications infrastructure is
mostly not critical yet. But I agree that we are clearly on a path in
that direction, so it is time to look at that from a different
perspective.

  Personally, I am a big fan of putting the access network into

communal hands, as these guys already do a decent job with other
critical infrastructure (see list above, plus roads) and I see a PtP
fiber access network terminating in some CO-like locations a viable
way to allow ISPs to compete in the internet service field all the
while using the communally build access network for a few. IIRC this
is how Amsterdam organized its FTTH roll-out. Just as POTS wiring has
beed essentially unchanged for decades, I estimate that current fiber
access lines would also last for decades requiring no active component
changes in the field, making them candidates for communal management.
(With all my love for communal ownership and maintenance, these
typically are not very nimble and hence best when we talk about life
times of decades).


This is happening in some places (the town where I live is doing

such a rollout), but the incumbant ISPs are fighting this and in many
states have gotten laws created that prohibit towns from building such
systems.

A resistance that in the current system is understandable*...
btw, my point is not wanting to get rid of ISPs, I really just think
that the access network is more of a natural monopoly and if we want
actual ISP competition, the access network is the wrong place to
implement it... as it is unlikely that we will 

Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-26 Thread rjmcmahon via Bloat
Thanks for this. Yeah, I can understand MDUs are complex and present 
unique issues for both their Boards and companies to service them.  
Condo trusts, LLC non profits, co-ops, etc. Too many attorneys to boot. 
My attorney fees cost more than my training youth to install FiWi infra. 
The expensive, existing cos are asking $80K per building. The fire alarm 
installer is asking $100K per building. I figure we can get both for 
less than $180K but it's going to take some figuring out. And once we 
sink the money, it needs to be world-class with swappable parts. Others 
may then notice and follow suit.


Then for dark fiber to a private colo about 1.5 miles away the ask is 
$5K per month. Buy my own switch and SFPs. Peering and ISP services are 
not included.


So I do see the value Comcast brings. I think a challenge is that 
different options are needed for different customers. That's why I think 
pluggable optics, serdes and cmos radios are critical to the design for 
when we eventually go full fiber & wireless for the last meters.


Bob

Happy to help (you can ping me off-list). The main products are DOCSIS
and PON these days and it kind of depends where you are, whether it is
a new build, etc. As others said, it gets super complicated in MDUs
and the infrastructure in place and the building agreements vary quite
a bit.

Jason

From: Bloat  on behalf of Nathan
Owens via Bloat 
Reply-To: Nathan Owens 
Date: Sunday, March 26, 2023 at 09:07
To: Robert McMahon 
Cc: Rpm , dan ,
Frantisek Borsik , Bruce Perens
, libreqos , Dave
Taht via Starlink , bloat

Subject: Re: [Bloat] [Starlink] On fiber as critical infrastructure
w/Comcast chat

Comcast's 6Gbps service is a niche product with probably <1000
customers. It requires knowledge and persistence from the customer to
actually get it installed, a process that can take many months (It's
basically MetroE). It requires you to be within 1760ft of available
fiber, with some limit on install cost if trenching is required. In
some cases, you may be able to trench yourself, or cover some of the
costs (usually thousands to tens of thousands).

On Sat, Mar 25, 2023 at 5:04 PM Robert McMahon via Bloat
 wrote:


The primary cost is the optics. That's why they're p in sfp and pay
go

Bob

On Mar 25, 2023, at 4:35 PM, David Lang  wrote:

On Sat, 25 Mar 2023, Robert McMahon via Bloat wrote:

The fiber has basically infinite capacity.

in theory, but once you start aggregating it and having to pay for
equipment
that can handle the rates, your 'infinite capaicty' starts to run
out really
fast.

David Lang

-

Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat [1]


___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat [1]

Links:
--
[1] 
https://urldefense.com/v3/__https:/lists.bufferbloat.net/listinfo/bloat__;!!CQl3mcHX2A!EOSY1k9O_PBuVuNNoTVKtyE8K5P8zDDQD-_ns2m_whJemleFOcMrd25veZFZqbIvJ292Ut9e47Owc0kkGpayyP8rW_bkOQ$

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-26 Thread rjmcmahon via Bloat
I don't think so. The govt. just bailed out SVB for billionaires who 
were woefully underinsured. The claim is that it protected our financial 
system. Their risk officers didn't price in inflation and those impacts, 
i.e. they eliminated insurance without eliminating the liability.


Texas govt sells windstorm insurance https://www.twia.org/ so the real 
estate industry will build houses in Hurricane prone areas. Society is 
good with that.


Liabilities that will stop people from installing quality FiWi fire 
alarms are a failure that needs to be fixed too.


We've got a lot of ground to cover.

Bob


if you want to eliminate insurance, then you need to eliminate the
liability, which I don't think you want to do if you want to claim
that this is 'life critical'

David Lang

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] On fiber as critical infrastructure w/Comcast chat

2023-03-26 Thread rjmcmahon via Bloat
rselves to better serve others. Most won't act until they 
can actually see what's possible. So let's start to show them.


Sure, having real implemented examples always helps!

Regards
Sebastian




Bob



P.S.: Bruce's point about placing ducts/conduits seems like to only
way to gain some future-proofeness. For multi-story and/or
multi-dweller units this introduces the question how to stop fire
using these conduits to "jump" between levels, but I assume that is a
solved problem already, and can be squelches with throwing money in
its direction.



*)A IIRC charter technician routing coaxial cable on the outside of
the two story building and drilling through the (wooden) wall to set
the cable socket inside, all the while casually cutting the Dish
coaxial cable that was still connected to a satellite dish... Not that
I cared, we were using ADSL at the time, and in accordance with the
old "when in Rome..." rule, I bridged over the deteriorated in-house
phone wiring by running a 30m Cat5 cable on the outside of the
building to the first hand-over box.





Hi Bob,
somewhat sad. Have you considered that your described requirements 
and

the use-case might be outside of the mass-market envelope for which
the big ISPs taylor/rig their processes? Maybe, not sure that is an
option, if you approach this as a "business"* asking for a fiber
uplink for an already "wired" 5 unit property you might get better
service? You still would need to do the in-house re-wiring, but you
likely would avoid scripted hot-lines that hang up when in the
allotted time the agent sees little chance of "closing" the call. All
(big) ISPs I know treat hotline as a cost factor and not as the first
line of customer retention...
I would also not be amazed if Boston had smaller ISPs that are 
willing

and able to listen to customers (but that might be a bit more
expensive than the big ISPs).
That or try to get your foot into Comcast's PR department to sell 
them

on the "reference installation" for all Boston historic buildings, so
they can offset the custom tailoring effort with the expected good
press of doing the "right thing" publicly.
Good luck
Sebastian
*) I understand you are not, but I assume the business units to have
more leeway to actually offer more bespoke solutions than the likely
cost-optimized to Mars and back residental customer unit.
On Mar 25, 2023, at 20:39, rjmcmahon via Bloat 
 wrote:

Hi All,
I've been trying to modernize a building in Boston where I'm an HOA 
board member over the last 18 mos. I perceive the broadband network 
as a critical infrastructure to our 5 unit building.
Unfortunately, Comcast staff doesn't seem to agree. The agent 
basically closed the chat on me mid-stream (chat attached.) I've 
been at this for about 18 mos now.
While I think bufferbloat is a big issue, the bigger issue is that 
our last-mile providers must change their cultures to understand 
that life support use cases that require proper pathways, conduits & 
cabling can no longer be ignored. These buildings have coaxial 
thrown over the exterior walls done in the 80s then drilling holes 
without consideration of structures. This and the lack of 
environmental protections for our HOA's critical infrastructure is 
disheartening. It's past time to remove this shoddy work on our 
building and all buildings in Boston as well as across the globe.
My hope was by now I'd have shown through actions what a historic 
building in Boston looks like when we, as humans in our short lives, 
act as both stewards of history and as responsible guardians to 
those that share living spaces and neighborhoods today & tomorrow. 
Motivating humans to better serve one another is hard.

Bob___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] On fiber as critical infrastructure w/Comcast chat

2023-03-25 Thread rjmcmahon via Bloat
The cost of the labor is less than one might think. I've found it's 
cheaper to train young people in the trades to do this work vs using an 
overpriced company that mostly targets "rich corporations."


It's also a golden egg or geese that can lay golden eggs thing. Let's 
train our youth well here. Some of us will be pushing up daisies before 
they finish. None of us have a guarantee of tomorrow.


Bob

I've never met a Comcast sales person who was able to operate at the
level you're talking about. I think you would do better with a smaller
company.

I think you were also unrealistic if not disingenuous about lives put
at risk. Alarms do not require more than 300 baud.

Comcast would actually like to sell individual internet service for
each of the five units. That's what they're geared to do. You're not
going to get that very high speed rate for that ridiculously low price
and fan it out to five domiciles. They would offer that for a single
home and the users that could be expected in a single home, or maybe a
small business but I think they would charge a business more. I pay
Comcast more for a very small business at a lower rate.

I think realistically the fiber connections you're talking about at
the data rate you request in the privilege of fanning out to five
domiciles should cost about $2400 per month.

I get the complaint about wires on the outside etc. But who are you
expecting to do that work? If you expect Comcast and their competitors
to do that as part of their standard installation, you're asking for
tens of thousands of dollars of work, and if that is to be the
standard then everyone must pay much more than today. Nobody wants
that, and most folks don't care about the current standard of
installation. If this mattered enough to your homeowners association,
they could pay for it.

On Sat, Mar 25, 2023, 12:39 rjmcmahon via Starlink
 wrote:


Hi All,

I've been trying to modernize a building in Boston where I'm an HOA
board member over the last 18 mos. I perceive the broadband network
as a
critical infrastructure to our 5 unit building.

Unfortunately, Comcast staff doesn't seem to agree. The agent
basically
closed the chat on me mid-stream (chat attached.) I've been at this
for
about 18 mos now.

While I think bufferbloat is a big issue, the bigger issue is that
our
last-mile providers must change their cultures to understand that
life
support use cases that require proper pathways, conduits & cabling
can
no longer be ignored. These buildings have coaxial thrown over the
exterior walls done in the 80s then drilling holes without
consideration
of structures. This and the lack of environmental protections for
our
HOA's critical infrastructure is disheartening. It's past time to
remove
this shoddy work on our building and all buildings in Boston as well
as
across the globe.

My hope was by now I'd have shown through actions what a historic
building in Boston looks like when we, as humans in our short lives,
act
as both stewards of history and as responsible guardians to those
that
share living spaces and neighborhoods today & tomorrow. Motivating
humans to better serve one another is hard.

Bob___
Starlink mailing list
starl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] On fiber as critical infrastructure w/Comcast chat

2023-03-25 Thread rjmcmahon via Bloat
It's not just one phone call. I've been figuring this out for about two 
years now. I've been working with some strategic people in Boston, colos 
& dark fiber providers, and professional installers that wired up many 
of the Boston universities, some universities themselves to offer co-ops 
to students to run networsk, trainings for DIC and other high value IoT 
offerings, blue collar principals (with staffs of about 100) to help 
them learn to install fiber and provide better jobs for their employees.


My conclusion is that Comcast is best suited for the job as the 
broadband provider, at least in Boston, for multiple reasons. One chat 
isn't going to block me ;)


The point of the thread is that we still do not treat digital 
communications infrastructure as life support critical. It reminds me of 
Elon Musk and his claims on FSD. I could do the whole thing myself - but 
that's not going to achieve what's needed. We need systems that our 
loved ones can call and those systems will care for them. Similar to how 
the medical community works, though imperfect, in caring for our loved 
one's and their healths.


I think we all are responsible for changing our belief sets & developing 
ourselves to better serve others. Most won't act until they can actually 
see what's possible. So let's start to show them.


Bob


Hi Bob,


somewhat sad. Have you considered that your described requirements and
the use-case might be outside of the mass-market envelope for which
the big ISPs taylor/rig their processes? Maybe, not sure that is an
option, if you approach this as a "business"* asking for a fiber
uplink for an already "wired" 5 unit property you might get better
service? You still would need to do the in-house re-wiring, but you
likely would avoid scripted hot-lines that hang up when in the
allotted time the agent sees little chance of "closing" the call. All
(big) ISPs I know treat hotline as a cost factor and not as the first
line of customer retention...
I would also not be amazed if Boston had smaller ISPs that are willing
and able to listen to customers (but that might be a bit more
expensive than the big ISPs).
That or try to get your foot into Comcast's PR department to sell them
on the "reference installation" for all Boston historic buildings, so
they can offset the custom tailoring effort with the expected good
press of doing the "right thing" publicly.

Good luck
Sebastian


*) I understand you are not, but I assume the business units to have
more leeway to actually offer more bespoke solutions than the likely
cost-optimized to Mars and back residental customer unit.


On Mar 25, 2023, at 20:39, rjmcmahon via Bloat 
 wrote:


Hi All,

I've been trying to modernize a building in Boston where I'm an HOA 
board member over the last 18 mos. I perceive the broadband network as 
a critical infrastructure to our 5 unit building.


Unfortunately, Comcast staff doesn't seem to agree. The agent 
basically closed the chat on me mid-stream (chat attached.) I've been 
at this for about 18 mos now.


While I think bufferbloat is a big issue, the bigger issue is that our 
last-mile providers must change their cultures to understand that life 
support use cases that require proper pathways, conduits & cabling can 
no longer be ignored. These buildings have coaxial thrown over the 
exterior walls done in the 80s then drilling holes without 
consideration of structures. This and the lack of environmental 
protections for our HOA's critical infrastructure is disheartening. 
It's past time to remove this shoddy work on our building and all 
buildings in Boston as well as across the globe.


My hope was by now I'd have shown through actions what a historic 
building in Boston looks like when we, as humans in our short lives, 
act as both stewards of history and as responsible guardians to those 
that share living spaces and neighborhoods today & tomorrow. 
Motivating humans to better serve one another is hard.


Bob___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] On fiber as critical infrastructure w/Comcast chat

2023-03-25 Thread rjmcmahon via Bloat
To be fair, this isn't unique to Comcast. I hit similar issues in NYC 
with Verizon.


I think we really need to educate people that life support capable 
communications networks are now critical infrastructure.


And, per climate impact, we may want to add Jaffe's network power 
(capacity over delay) over distance & energy. Fixed wireless offerings 
are an energy waste and generate excessive type 2 emissions. A cell 
tower is about 1-5kW for 60 connections or roughly 100-500W per remote 
client at 1 Gb/s with high latencies. A FiWi network will require 3-5W 
for 2.8 Gb/s and speed of light over fiber ultra low latencies.


I think we really need our broadband providers to lead here and that 
fiber to WiFi is the only viable end game if we care about our impacts.


"The average cellular base station, which comprises the tower and the 
radio equipment attached to it, can use anywhere from about one to five 
kilowatts (kW), depending on whether the radio equipment is housed in an 
air-conditioned building, how old the tower is and how many transceivers 
are in the base station. Most of the energy is used by the radio to 
transmit and receive cell-phone signals."


Bob

Hi All,

I've been trying to modernize a building in Boston where I'm an HOA
board member over the last 18 mos. I perceive the broadband network as
a critical infrastructure to our 5 unit building.

Unfortunately, Comcast staff doesn't seem to agree. The agent
basically closed the chat on me mid-stream (chat attached.) I've been
at this for about 18 mos now.

While I think bufferbloat is a big issue, the bigger issue is that our
last-mile providers must change their cultures to understand that life
support use cases that require proper pathways, conduits & cabling can
no longer be ignored. These buildings have coaxial thrown over the
exterior walls done in the 80s then drilling holes without
consideration of structures. This and the lack of environmental
protections for our HOA's critical infrastructure is disheartening.
It's past time to remove this shoddy work on our building and all
buildings in Boston as well as across the globe.

My hope was by now I'd have shown through actions what a historic
building in Boston looks like when we, as humans in our short lives,
act as both stewards of history and as responsible guardians to those
that share living spaces and neighborhoods today & tomorrow.
Motivating humans to better serve one another is hard.

Bob

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] [Starlink] [LibreQoS] On FiWi

2023-03-21 Thread rjmcmahon via Bloat
I was around when BGP & other critical junctures 
https://en.wikipedia.org/wiki/Critical_juncture_theory  the commercial 
internet. Here's a short write-up from another thread with some thoughts 
(Note: there are no queues in the Schramm Model 
https://en.wikipedia.org/wiki/Schramm%27s_model_of_communication )


On why we're here.

I think Stuart's point about not having the correct framing is spot on. 
I also think part of that may come from the internet's origin story 
so-to-speak. In the early days of the commercial internet, ISPs formed 
by buying MODEM banks from suppliers and connecting them to the 
telephone company central offices (thanks Strowger!) and then leasing T1 
lines from the same telco, connecting the two.  Products like a Cisco 
Access Gateway were used for the MODEM side. The 4K independent ISPs 
formed in the U.S. took advantage of statistical multiplexing per IP 
packets to optimize the PSTN's time division multiplexing (TDM) design. 
That design had a lot of extra capacity because of the mother's day 
problem - the network had to carry the peak volume of calls. It was 
always odd to me that the telephone companies basically contracted out 
statistical to TDM coupling of networks and didn't do it themselves. 
This was rectified with broadband and most all the independent ISPs went 
out of business.


IP statistical multiplexing was great except for one thing. The attached 
computers were faster than their network i/o so TCP had to do things 
like congestion control to avoid network collapse based on congestion 
signals (and a very imperfect control loop.) Basically, that extra TDM 
capacity for voice calls was consumed very quickly. This set in motion 
the idea that network channel capacity is a proxy for computer speed as 
when networks are underprovisioned and congested that's basically 
accurate. Van Jacobson's work was most always about congestion on what 
today are bandwidth constrained networks.


This also started a bit of a cultural war colloquially known as 
Bellheads vs Netheads. The human engineers took sides more or less. The 
netheads mostly kept increasing capacity. The market demand curve for 
computer connections drove this. It's come to a head though, in that 
netheads most always overprovisioned similar to solving the mother's day 
problem. (This is different from the electric build out where the goal 
is to drive peak and average loads to merge in order to keep generators 
efficient at a constant speed.)


Many were first stuck with the concept of bandwidth scarcity per those 
origins. But then came bandwidth abundance and many haven't adjusted. 
Mental block number one. Mental block two occurs when one sees all that 
bandwidth and says, let's use it all as it's going to be scarce, like a 
Great Depression-era person hoarding basic items.


A digression; This isn't that much different in the early days before 
Einstein. Einstein changed thinking by realizing that the speed of 
causality was defined or limited by the speed of massless particles, 
i.e. energy or photons. We all come from energy in one way or another. 
So of course it makes sense that our causality system, e.g. aging, is 
determined by that speed. It had to be relative for Maxwell's equations 
to be held true - which Einstein agreed with as true irrelevant of 
inertial frame. A leap for us comes when we realize that the speed of 
causality, i.e. time, is fundamentally the speed of energy.  It's true 
for all clocks, objects, etc. even computers.


So when we engineer systems that queue information, we don't slow down 
energy, we slow down information. Computers are mass information tools 
so slowing down information slows down distributed compute. As Stuart 
says, "It's the latency, stupid".  It's physics too.


I was trying to explain to a dark fiber provider that I wanted 100Gb/s 
SFPs to a residential building in Boston. They said, nobody needs 
100Gb/s and that's correct from a link capacity perspective. But the 
economics & energy required for the lowest latency ber bit delivered 
actually is 100Gb/s SERDES attached to lasers attached to fiber.


What we really want is low latency at the lowest energy possible, and 
also to be unleashed from cables (as we're not dogs.) Hence FiWi.


Bob


I do believe that we all want to get the best - latency and speed,
hopefully, in this particular order :-)
The problem was that from the very beginning of the Internet (yeah, I
was still not here, on this planet, when it all started), everything
was optimised for speed, bandwidth and other numbers, but not so much
for bufferbloat in general.
Some of the things that goes into it in the need for speed, are
directly against the fixing latency...and it was not setup for it.
Gamers and Covid (work from home, the need for the enterprise network
but in homes...) brings it into conversation, thankfully, and now we
will deal with it.

Also, there is another thing I see and it's a negative sentiment
against anything 

[Bloat] On FiWi power envelope

2023-03-20 Thread rjmcmahon via Bloat
If I'm reading things correctly, the per fire alarm power rating is 120V 
at 80 mA or 9.6 W. The per power FiWi transceiver estimate is 2 Watts 
per spatial stream at 160MhZ and 1 Watt for the fiber. Looks like a 
retrofit of a fire alarm system would have sufficient power for FiWi 
radio heads. Then it's punching a few holes, run fiber, splice, patch & 
paint which is very straightforward work for the trades. Rich people as 
early adopters could show off their infinitely capable in-home network. 
Installers could do a two-for deal, buy one and I'll install another in 
a less fortunate community. 
https://www.thespruce.com/install-hardwired-smoke-detectors-1152329


Sharktank passed on the Ring deal - imagine having a real, life-support 
capable, & future-proof network vs just a silly doorbell w/camera.


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


[Bloat] On metrics

2023-03-19 Thread rjmcmahon via Bloat

Hi All,

It seems getting the metrics right is critical. Our industry can't be 
reporting things that mislead or misassign blame. The medical community 
doesn't treat people for cancer without having a high degree they've 
gotten the diagnostics correct as an example.


An initial metric, per this group, would be geared towards 
responsiveness or the speed of causality. Here, we may need to include 
linear distance, the power required to achieve a responsiveness and to 
take account of Pareto efficiencies, where one device's better 
responsiveness can't make another's worse.


An example per a possible FiWi new & comprehensive metric: A rating 
could be something like 10K responses per second at 1Km terrestrial 
(fiber) cable / 6m radius free space range / 5W total / 0-impact to 
others. If consumers can learn to read nutrition labels they can also 
learn to read these.


Maybe a device produces a scan code qr based upon its e3e measurement 
and the scan code qr loads a page with human interpretable analysis? 
Similar to how we now pull up menus on our mobile phones listing the 
food items and the nutrition information that's available to seat at a 
table. Then, in a perfect world, there is a rating per each link hop or 
better, network jurisdiction. Each jurisdiction could decide if they 
want to participate or not, similar to connecting up an autonomous 
system or not. I think measurements of network jurisdictions without 
prior agreements are unfair. The lack of measurement capability is 
likely enough pressure needed to motivate actions.


Bob

PS. As a side note, and a shameless plug, iperf 2 now supports 
bounceback and a big issue has been clock sync for one way delays (OWD.) 
Per a comment from Jean Tourrhiles 
https://sourceforge.net/p/iperf2/tickets/242/ I added some unsync 
detections in the bounceback measurements. Contact me directly if your 
engineering team needs more information on iperf 2.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [LibreQoS] [Starlink] [Rpm] On FiWi

2023-03-18 Thread rjmcmahon via Bloat
 > All of the states use cases are already handled by inexpensive 
lorawan

sensors and are already covered by multiple lorawan networks in NYC
and most urban centers in the US.  There is no need for a new
infrastructure, it’s already there.  Not to mention NBIoT/catm
radios.

This is all just general cheapness and lack of liability keeping these
out of widespread deployment. It’s not lack of tech on the market
today.


What is the footprint of lorawan networks and what's the velocity of 
growth? What's the cost per square foot both capex and operations, 
maintaining & monitoring lorawan? What's that compared to the WiFi 
install base, i.e. now we have train even installers and maintainers on 
purpose built technology vs just use what most people know because it's 
common? This all looks like ethernet, token ring, fddi, netbios, decnet, 
etc. where the single approach of IP over WiFi/ethernet with fiber 
fronthaul wave guides and backhauls' waveguides per the ISP seems the 
effective way forward. I don't think it's in society's interest to have 
so disparate networks technologies as we have learned from IP and the 
internet. My guess is lorawan will never get built out across the planet 
as has been done for IP. I can tell that every country is adopting IP 
because they're using a free IP tool to measure their networks.


https://sourceforge.net/projects/iperf2/files/stats/map?dates=2014-02-06%20to%202023-03-18=daily

Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] On FiWi

2023-03-18 Thread rjmcmahon via Bloat

I'm curious as to why the detectors have to be replaced every 10
years.


Dust, grease from cooking oil vapors, insects, mold, etc. accumulate,
and it's so expensive to clean those little sensors, and there is so
much liability associated with them, that it's cheaper to replace the
head every 10 years. Electrolytic capacitors have a limited lifetime
and that is also a good reason to replace the device.

The basic sensor architecture is photoelectric, the older ones used an
americium pelllet that detected gas ionization which was changed by
the presence of smoke. The half-life on the americium ones is at least
400 years (there is more than one isotope, that's the shortest-life
one).


Thanks for this. That makes sense. I do think the FiWi transceivers & 
sensors need to be pluggable & detect failures, particularly early on 
due to infant mortality.


"Infant mortality is a special equipment failure mode that shows the 
probability of failure being highest when the equipment is first 
started, but reduces as time goes on. Eventually, the probability of 
failure levels off after time."


https://www.upkeep.com/blog/infant-mortality-equipment-failure#:~:text=Infant%20mortality%20is%20a%20special,failure%20levels%20off%20after%20time.

Also curious about thermal imaging inside a building - what sensor tech 
to use and at what cost? The Bronx fire occurred because poor people in 
public housing don't have access to electric heat pumps & used a space 
heater instead. It's very sad we as a society do this, i.e. make sure 
rich people can drive Teslas with heat pumps but only provide the worst 
type of heating to children from families that aren't so fortunate.


https://www.cnn.com/2022/01/10/us/nyc-bronx-apartment-fire-monday/index.html

"A malfunctioning electric space heater in a bedroom was the source of 
an apartment building fire Sunday in the Bronx that killed 17 people, 
including 8 children, making it one of the worst fires in the city’s 
history, New York Mayor Eric Adams said Monday."


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] On FiWi

2023-03-17 Thread rjmcmahon via Bloat
I'm curious as to why the detectors have to be replaced every 10 years. 
Regardless, modern sensors could give a thermal map of the entire 
complex 24x7x365. Fire officials would have a better set of eyes when 
they showed up as the sensor system & network could provide thermals as 
a time series.


Also, another "killer app" for Boston is digital image correlation & the 
cameras monitor stresses and strains on historic buildings valued at 
about $10M each. And that's undervalued because they're really 
irreplaceable. Similar for some in the Netherladns. Monitoring the 
groundwater with samples every 4 mos is ok - better to monitor the 
structure itself 24x7x365.


https://www.sciencedirect.com/topics/engineering/digital-image-correlation
https://www.bostongroundwater.org/

Bob

On 2023-03-17 13:37, Bruce Perens wrote:

On Fri, Mar 17, 2023 at 12:19 PM rjmcmahon via Starlink
 wrote:You’ll hardly ever have to
deal with the annoying


“chirping” that occurs when a battery-powered smoke detector
begins to
go dead, and your entire family will be alerted in the event that a
fire
does occur since hardwire smoke detectors can be interconnected.


Off-topic, but the sensors in these hardwired units expire after 10
years, and they start beeping. The batteries in modern battery-powered
units with wireless links expire after 10 years, along with the rest
of the unit, and they start beeping.

There are exceptions, the first-generation Nest was pretty bad.


___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] [Starlink] On FiWi

2023-03-17 Thread rjmcmahon via Bloat
I think the low-power transceiver (or RRH) and fiber fronthaul is doable 
within the next 5 years. The difficult part to me seems the virtual APs 
that could service 12-256 RRHs including security monitoring & customer 
privacy.


Is there a VMWARE NSX approach to reducing the O costs by at least 1/2 
for the FiWi head end systems?


For power: My approach to the Boston historic neighborhood where my kids 
now live would be AC wired CPE treated as critical, life support 
infrastructure. But better may be to do as modern garage door openers 
and have standard AC charge a battery so one can operate even during 
power outages.


https://www.rsandrews.com/blog/hardwired-battery-powered-smoke-alarms-you/

Our Recommendation: Hardwired Smoke Alarms
Hardwired smoke alarms, while they require slightly more work upfront, 
are the clear choice if you’re considering replacing your home’s smoke 
alarm system. You’ll hardly ever have to deal with the annoying 
“chirping” that occurs when a battery-powered smoke detector begins to 
go dead, and your entire family will be alerted in the event that a fire 
does occur since hardwire smoke detectors can be interconnected.


Bob

Hi Dave,



On Mar 17, 2023, at 17:38, Dave Taht via Starlink 
 wrote:


This is a pretty neat box:

https://mikrotik.com/product/netpower_lite_7r

What are the compelling arguments for fiber vs copper, again?


As far as I can tell:

Copper:
can carry electric power

Fiber-PON:
much farther reach even without amplifiers (10 Km, 20 Km, ...
depending on loss budget)
cheaper operation (less active power needed by the headend/OLT)
less space need than all active alternatives (AON, copper ethernet)
likely only robust passive components in the field
Existing upgrade path for 25G and 50G is on the horizon over the same
PON infrastructure
mostly resistant to RF ingress along the path (as long as a direct
lightning hit does not melt the glas ;) )

Fiber-Ethernet:
like fiber-PON but
no density advantage (needs 1 port per end device)
even wider upgrade paths


I guess it really depends on how important "carry electric power" is
to you ;) feeding these from the client side is pretty cool for
consenting adults, but I would prefer not having to pay the electric
bill for my ISPs active gear in the field outside the CPE/ONT...

Regards
Sebastian





On Tue, Mar 14, 2023 at 4:10 AM Mike Puchol via Rpm 
 wrote:

Hi Bob,

You hit on a set of very valid points, which I'll complement with my 
views on where the industry (the bit of it that affects WISPs) is 
heading, and what I saw at the MWC in Barcelona. Love the FiWi term 
:-)


I have seen the vendors that supply WISPs, such as Ubiquiti, Cambium, 
and Mimosa, but also newer entrants such as Tarana, increase the 
performance and on-paper specs of their equipment. My examples below 
are centered on the African market, if you operate in Europe or the 
US, where you can charge customers a higher install fee, or even 
charge them a break-up fee if they don't return equipment, the 
economics work.


Where currently a ~$500 sector radio could serve ~60 endpoints, at a 
cost of ~$50 per endpoint (I use this term in place of ODU/CPE, the 
antenna that you mount on the roof), and supply ~2.5 Mbps CIR per 
endpoint, the evolution is now a ~$2,000+ sector radio, a $200 
endpoint, capability for ~150 endpoints per sector, and ~25 Mbps CIR 
per endpoint.


If every customer a WISP installs represents, say, $100 CAPEX at 
install time ($50 for the antenna + cabling, router, etc), and you 
charge a $30 install fee, you have $70 to recover, and you recover 
from the monthly contribution the customer makes. If the contribution 
after OPEX is, say, $10, it takes you 7 months to recover the full 
install cost. Not bad, doable even in low-income markets.


Fast-forward to the next-generation version. Now, the CAPEX at install 
is $250, you need to recover $220, and it will take you 22 months, 
which is above the usual 18 months that investors look for.


The focus, thereby, has to be the lever that has the largest effect on 
the unit economics - which is the per-customer cost. I have drawn what 
my ideal FiWi network would look like:




Taking you through this - we start with a 1-port, low-cost EPON OLT 
(or you could go for 2, 4, 8 ports as you add capacity). This OLT has 
capacity for 64 ONUs on its single port. Instead of connecting the 
typical fiber infrastructure with kilometers of cables which break, 
require maintenance, etc. we insert an EPON to Ethernet converter (I 
added "magic" because these don't exist AFAIK).


This converter allows us to connect our $2k sector radio, and serve 
the $200 endpoints (ODUs) over wireless point-to-multipoint up to 10km 
away. Each ODU then has a reverse converter, which gives us EPON 
again.


Once we are back on EPON, we can insert splitters, for example, 
pre-connectorized outdoor 1:16 boxes. 

Re: [Bloat] [Rpm] [Starlink] [LibreQoS] On FiWi

2023-03-15 Thread rjmcmahon via Bloat

I have sometimes thought that LiFi (https://lifi.co/) would suddenly
come out of the woodwork,
and we would be networking over that through the household.


I think the wishful thinking is "coming from woodwork" vs coming from 
the current and near future state of engineering. Engineering comes from 
humans solving problems who typically get paid to do so.


FiWi would leverage SFP tech. The Fi side of FiWi comes from mass NRE 
investments into the data center networks. The Wi side from mass 
investment into billions of mobile phones. Leveraging WiFi & SFP parts 
is critical to success as semiconductors are a by-the-pound business. I 
think a 1X25G VCSEL SFP, which is tolerant to dust over MMF, has a 
retail price of $40 today.  The sweet spot for DC SFP today is driven by 
1x100Gb/s serdes and I suspect angel investors are trying to improve the 
power significantly of the attached lasers. It's been said that one 
order of improvement in lowering laser power gives multiple orders of 
laser MTBF improvements. So lasers, SERDES & CMOS radios are not static 
and will constantly improve year to year per thousands of engineers 
working on them today, tomorrow & on.


The important parts of FiWi have to be pluggable - just like a light 
bulb is. The socket and wiring last (a la the fiber and antennas) - we 
just swap a bulb if it burns out, if we want a different color, if we 
want a higher foot candle rating, etc. This allows engineering cadences 
to match market cadences and pays staffs. Most engineers don't like to 
wait decades between releases so-to-speak and don't like feast & famine 
lifestyles. Moore's law was and is about human cadences too.


I don't see any engineering NRE that LiFi could leverage. Sounds cool 
though.


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [LibreQoS] [Rpm] On FiWi

2023-03-15 Thread rjmcmahon via Bloat
Agreed, AQM is like an emergency brake. Go ahead and keep it but hope to 
never need to use it.


Bob

Hi Bob,

I like your design sketch and the ideas behind it.


On Mar 15, 2023, at 18:32, rjmcmahon via Bloat 
 wrote:


The 6G is a contiguous 1200MhZ. It has low power indoor (LPI) and very 
low power (VLP) modes. The pluggable transceiver could be color coded 
to a chanspec, then the four color map problem can be used by 
installers per those chanspecs. 
https://en.wikipedia.org/wiki/Four_color_theorem


Maybe design this to be dual band from the start to avoid the up/down
"tdm" approach we currently use? Better yet go full duplex, which
might be an option if we get enough radios that not much
beamforming/MIMO is necessary? I obviously lack deep enough
understanf=dingwhether this makes any sense or is just buzzword bingo
from my side :)




There is no CTS with microwave "interference" The high-speed PHY rates 
combined with low-density AP/STA ratios, ideally 1/1, decrease the 
probability of time signal superpositions. The goal with wireless 
isn't high densities but to unleash humans. A bunch of humans stuck in 
a dog park isn't really being unleashed. It's the ability to move from 
block to block so-to-speak. FiWi is cheaper than sidewalks, sanitation 
systems, etc.


The goal now is very low latency. Higher phy rates can achieve that 
and leave the medium free the vast most of the time and shut down the 
RRH too. Engineering extra capacity by orders of magnitude is better 
than AQM. This has been the case in data centers for decades. 
Congestion? Add a zero (or multiple by 10)


I am weary of this kind of trust in continuous exponential growth...
at one point we reach a limit and will need to figure out how to deal
with congestion again, so why drop this capability on the way? The
nice thing about AQMs is if there is no queue build up these basically
do nothing... (might need some design changes to optimize an AQM to be
as cheap as possible for the uncontended case)...

Note: None of this is done. This is a 5-10 year project with zero 
engineering resources assigned.


Bob

On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
 wrote:

the AP needs to blast a CTS so every other possible conversation has
to halt.

The wireless network is not a bus. This still ignores the hidden
transmitter problem because there is a similar network in the next
room.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] [Starlink] [LibreQoS] On FiWi

2023-03-15 Thread rjmcmahon via Bloat
My brother and I installed irrigation systems in Texas where it rains a 
lot. No problem with getting business. Digging trenches, laying & gluing 
PVC pipe, installing controller wires, etc is good, respectable work.


I wonder if too many white-collar workers avoided blue-collar work and 
don't understand that blue-collar workers actually are very interested 
in installing fiber (or Actifi) and being part of improving things.


Bob

I think the big problem with this is users per domicile. It's easy
enough to support one floor of a residence with a single AP. There is
an upper limit on the bandwidth that one user can ever require. It is
probably what is needed for full-sphere VR at the perceptual limit. We
have long achieved the perceptual limit of ears, on top of that we
have a lot of tweaking and self-deception. We will get to the limit of
eyes. Multiply this by eight users per domicile for a limit that most
would fit in. We can probably do that with one AP. The additional
equipment and maintenance outlay for structural fiber and an AP per
room doesn't really seem worth it.

On Wed, Mar 15, 2023, 09:17 Aaron Wood  wrote:


I like the general idea, especially if there was a site-wide
controller module that can do the sort of frequency allocation that
network engineers do in dense AP deployments today:  adjacent APs
run on different frequency bands so that they reduce the likelihood
of stepping on each others transmissions.

One of the biggest knowledge gaps that I see people have around
wireless is that it IS a shared medium.  It both is, and isn’t a
bus.  Shared like a bus, but with the hidden transmissions that
remove the csma abilities that get with a bus.

But the main issue will be deployment.  This would be great for
commercial buildings that get retrofitted every decade or so with
new gear.

This will be near-impossible in the US except for new construction
or big remodels of existing structures.  The cost of opening the
walls to run the fiber will make the cost of the hardware itself
insignificant.

OTOH, because the STAs aren’t specialized, the existing ones
“just work”, and so you don’t have the usual bootstrap issue
that plagues tech like zigbee and Zwave, where there isn’t enough
infra to justify the devices, or not enough devices to justify the
infra.

-Aaron

On Tue, Mar 14, 2023 at 10:21 PM Bruce Perens via Rpm
 wrote:

On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
 wrote:

the AP needs to blast a CTS so every other possible conversation has
to halt.
The wireless network is not a bus. This still ignores the hidden
transmitter problem because there is a similar network in the next
room.
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

 --
- Sent from my iPhone.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [LibreQoS] [Rpm] On FiWi

2023-03-15 Thread rjmcmahon via Bloat
The 6G is a contiguous 1200MhZ. It has low power indoor (LPI) and very 
low power (VLP) modes. The pluggable transceiver could be color coded to 
a chanspec, then the four color map problem can be used by installers 
per those chanspecs. https://en.wikipedia.org/wiki/Four_color_theorem


There is no CTS with microwave "interference" The high-speed PHY rates 
combined with low-density AP/STA ratios, ideally 1/1, decrease the 
probability of time signal superpositions. The goal with wireless isn't 
high densities but to unleash humans. A bunch of humans stuck in a dog 
park isn't really being unleashed. It's the ability to move from block 
to block so-to-speak. FiWi is cheaper than sidewalks, sanitation 
systems, etc.


The goal now is very low latency. Higher phy rates can achieve that and 
leave the medium free the vast most of the time and shut down the RRH 
too. Engineering extra capacity by orders of magnitude is better than 
AQM. This has been the case in data centers for decades. Congestion? Add 
a zero (or multiple by 10)


Note: None of this is done. This is a 5-10 year project with zero 
engineering resources assigned.


Bob

On Tue, Mar 14, 2023 at 5:11 PM Robert McMahon
 wrote:


the AP needs to blast a CTS so every other possible conversation has
to halt.


The wireless network is not a bus. This still ignores the hidden
transmitter problem because there is a similar network in the next
room.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] Netgear wifi7 router is claiming 100x less latency

2023-03-14 Thread rjmcmahon via Bloat

It's based upon 802.11be which is quite extensive

https://en.wikipedia.org/wiki/IEEE_802.11be

Bob

I wonder where that number comes from?

https://www.engadget.com/netgears-first-wifi-7-router-offers-extra-low-latency-for-gaming-123037814.html

My joy in seeing this, is not in what the actual underlying facts may 
be,

but  "The Nighthawk RS700S also provides speeds up to 5Gbps." is the
*also* bit, in a smaller font.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [LibreQoS] [Rpm] [Starlink] On FiWi

2023-03-14 Thread rjmcmahon via Bloat

I am old fashioned this way, also, but I think most modern users would
not care, any more about this. They are used to pretty much having all
their data exposed to the internet, available via cellphone, and used
to having their security cameras and other personal information, gone,
out there.

They just want internet.


I think people want privacy it's just that those in leadership roles, 
e.g. Eric Schmidt, rationalized their behaviors with comments like, 
"Privacy is over. Get used to it." At the same time, Google algorithms 
were advertising breast implants to women who just learned from their 
doctors they had breast cancer. Google gleaned this from her information 
search on her recently diagnosed condition.


Life support use cases and privacy have to be added back in as a base 
feature. It's past time we as a society tolerated this behavior from 
billionaires who see us as nothing more than subjects to their targeted 
ads.


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [LibreQoS] [Rpm] [Starlink] On FiWi

2023-03-14 Thread rjmcmahon via Bloat

The design has to be flexible so DIY w/local firewall is fine.

I'll disagree though that early & late majority care about firewalls. 
They want high-quality access that is secure & private. Both of these 
require high skill network engineers on staff. DIY is hard here. 
Intrusion detection systems, etc. are non-trivial. The days of broadcast 
NFL networks are over.


I disagree to with nobody wanting to pay for quality access to knowledge 
based networks. Not that many years ago, nobody wanted to pay to teach 
women to read either. Then, nobody wanted to pay for university. I grew 
up in the latter and figured out that I needed come up with payment 
somehow to develop my brain. Otherwise, I was screwed.


So, if it's a chatGPT, advertising system - sure wrong market. Free 
shit, even provided by Google, is mostly shit.


Connect to something real without the privacy invasions, no queueing, 
etc. I think it's worth it in spades despite the idea that we shouldn't 
invest so people, regardless of gender, etc. can learn to read.


Bob


end users are still going to want their own router/firewall.  That's
my point, I don't see how you can have that on-prem firewall while
having a remote radio that's useful.

I would adamantly oppose anyone I know passing their firewall off to
the upstream vendor.   I run an MSP and I would offer a customer to
drop my services if they were to buy into something like this on the
business side.

So I really only see this sort of concept for campus networks where
the end users are 'part' of the entity.

On Tue, Mar 14, 2023 at 12:14 PM Robert McMahon 
 wrote:


It's not  discrete routers. It's more like a transceiver. WiFi is 
already splitting at the MAC for MLO. I perceive two choices for the 
split, one at the PHY DAC or, two, a minimalist 802.3 tunneling of 
802.11 back to the FiWi head end. Use 802.3 to leverage merchant 
silicon supporting up to 200 or so RRHs or even move the baseband DSP 
there. I think a split PHY may not work well but a thorough eng 
analysis is still warranted.


Bob



Get BlueMail for Android
On Mar 14, 2023, at 10:54 AM, dan  wrote:


 You could always do it yourself.

 Most people need high skilled network engineers to provide them IT 
services. This need is only going to grow and grow. We can help by 
producing better and simpler offerings, be they DIY or by service 
providers.


 Steve Job's almost didn't support the iPhone development because he 
hated "the orifices." Probably time for many of us to revisit our 
belief set. Does it move the needle, even if imperfectly?


 FiWi blows the needle off the gauge by my judgment. Who does it is 
secondary.


 Bob



most people are unwilling to pay for those services also lol.

I don't see the paradigm of discreet routers/nat per prem anytime
soon.  If you subtract that piece of it then we're basically just
talking XGSPON or similar.

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


[Bloat] On FiWi

2023-03-13 Thread rjmcmahon via Bloat

To change the topic - curious to thoughts on FiWi.

Imagine a world with no copper cable called FiWi (Fiber,VCSEL/CMOS 
Radios, Antennas) and which is point to point inside a building 
connected to virtualized APs fiber hops away. Each remote radio head 
(RRH) would consume 5W or less and only when active. No need for things 
like zigbee, or meshes, or threads as each radio has a fiber connection 
via Corning's actifi or equivalent. Eliminate the AP/Client power 
imbalance. Plastics also can house smoke or other sensors.


Some reminders from Paul Baran in 1994 (and from David Reed)

o) Shorter range rf transceivers connected to fiber could produce a 
significant improvement - - tremendous improvement, really.
o) a mixture of terrestrial links plus shorter range radio links has the 
effect of increasing by orders and orders of magnitude the amount of 
frequency spectrum that can be made available.
o) By authorizing high power to support a few users to reach slightly 
longer distances we deprive ourselves of the opportunity to serve the 
many.

o) Communications systems can be built with 10dB ratio
o) Digital transmission when properly done allows a small signal to 
noise ratio to be used successfully to retrieve an error free signal.
o) And, never forget, any transmission capacity not used is wasted 
forever, like water over the dam. Not using such techniques represent 
lost opportunity.


And on waveguides:

o) "Fiber transmission loss is ~0.5dB/km for single mode fiber, 
independent of modulation"
o) “Copper cables and PCB traces are very frequency dependent.  At 
100Gb/s, the loss is in dB/inch."
o) "Free space: the power density of the radio waves decreases with the 
square of distance from the transmitting antenna due to spreading of the 
electromagnetic energy in space according to the inverse square law"


The sunk costs & long-lived parts of FiWi are the fiber and the CPE 
plastics & antennas, as CMOS radios+ & fiber/laser, e.g. VCSEL could be 
pluggable, allowing for field upgrades. Just like swapping out SFP in a 
data center.


This approach basically drives out WiFi latency by eliminating shared 
queues and increases capacity by orders of magnitude by leveraging 10dB 
in the spatial dimension, all of which is achieved by a physical design. 
Just place enough RRHs as needed (similar to a pop up sprinkler in an 
irrigation system.)


Start and build this for an MDU and the value of the building improves. 
Sadly, there seems no way to capture that value other than over long 
term use. It doesn't matter whether the leader of the HOA tries to 
capture the value or if a last mile provider tries. The value remains 
sunk or hidden with nothing on the asset side of the balance sheet. 
We've got a CAPEX spend that has to be made up via "OPEX returns" over 
years.


But the asset is there.

How do we do this?

Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] [LibreQoS] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA

2023-03-13 Thread rjmcmahon via Bloat

[SM] It is doe because:
a) TCP needs some capacity estimate
b) preferably quickly
c) in a way gentler than what was used before the congestion collapse.


Right, but we're moving away from capacity shortages to a focus on 
better latencies. The speed of distributed compute (or speed of 
causality) is now mostly latency constrained.


Also, it's impossible per Jaffe & others for a TCP link to figure out 
the on-demand capacity so trying to get one via a "broken control loop" 
seems futile. I believe control theory states control loops need to be 
an order greater than what they're trying to control. I don't think an 
app or transport layer can do more than make educated guesses at for its 
control loop. Using a rating might help with that but for sure it's not 
accurate in space-time samples. (Note: many APs are rated 60+ Watts. 
What's the actual? Has to be sampled and that's only a sample. This 
leads to poor PoE designs - but I digress.)


Let's assume the transport layer should be designed to optimize the 
speed of causality. This also seems impossible because the e2e jitter is 
worse with respect to end host discovery so there seems no way to adapt 
from end host only.


If it's true that the end can only guess, maybe the solution domain 
comes from incorporating network measurements via telemetry with the ECN 
or equivalent? And an app can signal to the network elements to capture 
the e2e telemetry. I think this all has to happen within a few RTTs if 
the transport or host app is going to adjust.


Bob

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] [LibreQoS] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA

2023-03-13 Thread rjmcmahon via Bloat

On 2023-03-13 11:51, Sebastian Moeller wrote:

Hi Bob,



On Mar 13, 2023, at 19:42, rjmcmahon  wrote:


[SM] not really, given enough capacity, typical streaming protocols
will actually not hit the ceiling, at least the one's I look at every
now and then tend to stay well below actual capacity of the link.
I think DASH type protocol will hit link peaks. An example with iperf 
2's burst option a controlled WiFi test rig, server side first.


[SM] I think that depends, each segment has only a finite length and
if this can delivered before slow start ends that burst might never
hit the capacity?

Regards


I believe most CDNs are setting the initial CWND so TCP can bypass slow 
start. Slow start seems an engineering flaw from the perspective of low 
latency. It's done for "fairness" whatever that means.


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] [LibreQoS] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA

2023-03-13 Thread rjmcmahon via Bloat

[SM] not really, given enough capacity, typical streaming protocols
will actually not hit the ceiling, at least the one's I look at every
now and then tend to stay well below actual capacity of the link.

I think DASH type protocol will hit link peaks. An example with iperf 
2's burst option a controlled WiFi test rig, server side first.



[root@ctrl1fc35 ~]# iperf -s -i 1 -e --histograms

Server listening on TCP port 5001 with pid 23764
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
Enabled receive histograms bin-width=0.100 ms, bins=1 (clients 
should use --trip-times)

TCP window size:  128 KByte (default)

[  1] local 192.168.1.15%enp2s0 port 5001 connected with 192.168.1.234 
port 34894 (burst-period=1.00s) (trip-times) (sock=4) (peer 2.1.9-rc2) 
(icwnd/mss/irtt=14/1448/5170) on 2023-03-13 11:37:24.500 (PDT)
[ ID] Burst (start-end)  Transfer Bandwidth   XferTime  (DC%)
 Reads=Dist  NetPwr
[  1] 0.00-0.13 sec  10.0 MBytes   633 Mbits/sec  132.541 ms (13%)
209=29:31:31:88:11:2:1:16  597
[  1] 1.00-1.11 sec  10.0 MBytes   755 Mbits/sec  111.109 ms (11%)
205=34:30:22:83:11:2:6:17  849
[  1] 2.00-2.12 sec  10.0 MBytes   716 Mbits/sec  117.196 ms (12%)
208=33:39:20:81:13:1:5:16  763
[  1] 3.00-3.11 sec  10.0 MBytes   745 Mbits/sec  112.564 ms (11%)
203=27:36:30:76:6:3:6:19  828
[  1] 4.00-4.11 sec  10.0 MBytes   787 Mbits/sec  106.621 ms (11%)
193=29:26:19:80:10:4:6:19  922
[  1] 5.00-5.11 sec  10.0 MBytes   769 Mbits/sec  109.148 ms (11%)
208=36:25:32:86:6:1:5:17  880
[  1] 6.00-6.11 sec  10.0 MBytes   760 Mbits/sec  110.403 ms (11%)
206=42:30:22:73:8:3:5:23  860
[  1] 7.00-7.11 sec  10.0 MBytes   775 Mbits/sec  108.261 ms (11%)
171=20:21:21:58:12:1:11:27  895
[  1] 8.00-8.11 sec  10.0 MBytes   746 Mbits/sec  112.405 ms (11%)
203=36:31:28:70:9:3:2:24  830
[  1] 9.00-9.11 sec  10.0 MBytes   748 Mbits/sec  112.133 ms (11%)
228=41:56:27:73:7:2:3:19  834
[  1] 0.00-10.00 sec   100 MBytes  83.9 Mbits/sec  
113.238/106.621/132.541/7.367 ms  2034=327:325:252:768:93:22:50:197
[  1] 0.00-10.00 sec F8(f)-PDF: 
bin(w=100us):cnt(10)=1067:1,1083:1,1092:1,1105:1,1112:1,1122:1,1125:1,1126:1,1172:1,1326:1 
(5.00/95.00/99.7%=1067/1326/1326,Outliers=0,obl/obu=0/0) (132.541 
ms/1678732644.500333)



[root@fedora ~]# iperf -c 192.168.1.15 -i 1 -t 10 --burst-size 10M 
--burst-period 1 --trip-times


Client connecting to 192.168.1.15, TCP port 5001 with pid 132332 (1 
flows)

Write buffer size: 131072 Byte
Bursting: 10.0 MByte every 1.00 second(s)
TOS set to 0x0 (Nagle on)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)

[  1] local 192.168.1.234%eth1 port 34894 connected with 192.168.1.15 
port 5001 (prefetch=16384) (trip-times) (sock=3) 
(icwnd/mss/irtt=14/1448/5489) (ct=5.58 ms) on 2023-03-13 11:37:24.494 
(PDT)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry 
Cwnd/RTT(var)NetPwr
[  1] 0.00-1.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5517K/18027(1151) us  582
[  1] 1.00-2.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5584K/13003(2383) us  806
[  1] 2.00-3.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5613K/16462(962) us  637
[  1] 3.00-4.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5635K/19523(671) us  537
[  1] 4.00-5.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5594K/10013(1685) us  1047
[  1] 5.00-6.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5479K/14008(654) us  749
[  1] 6.00-7.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5613K/17752(283) us  591
[  1] 7.00-8.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5599K/17743(436) us  591
[  1] 8.00-9.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
5577K/11214(2538) us  935
[  1] 9.00-10.00 sec  10.0 MBytes  83.9 Mbits/sec  80/0 0 
4178K/7251(993) us  1446
[  1] 0.00-10.01 sec   100 MBytes  83.8 Mbits/sec  800/0 0 
4178K/7725(1694) us  1356

[root@fedora ~]#

Note: Client side output is being updated to support outputs based upon 
the bursts. This allows one to see that a DASH type protocol can drive 
the bw bottleneck queue.


Bob

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] so great to see ISPs that care

2023-03-12 Thread rjmcmahon via Bloat
Our current WiFi designs, at least in residential, are like garden hoses 
attached to rectangular sprinklers - flexible and suboptimal. What's 
needed is an irrigation system approach where physical dimensions and 
spray patterns are designed in by a qualified designer. (I was 16 when I 
got my Texas irrigation license - needed it for summer work.) WiFi 
designers can learn from irrigation, e.g things like just enough spray 
overlap and don't spray down the street.


Also, by fire code CPE smoke detectors can be no further than 30' from a 
habitable space as humans need to be alerted. 20' radius is better.


It is silly that we don't really take advantage of this and design a 
proper WiFi network. The distances, EMF patterns, and local devices are 
known ahead of time (as our plants, yard, main pipes, etc. with 
irrigation.)


I started my career working on a network design for the International 
Space Station. The *first* requirement was to carry "life support use 
cases" for the astronauts. None of this stuff of, well it's just 
entertainment so we don't need to worry about downtime and rebooting a 
device is just fine. Also, none of the hand waving as Elon Musk does 
conflating recycling with life support. 
https://www.youtube.com/watch?v=sOpMrVnjYeY=4619s


I believe skilled engineers must take the lead here. It's not going to 
come from customers complaining, nor from exec managements looking for 
the next increment. All problems aren't bufferbloat either.


We as engineers can do better. Not sure why it's been so hard to date 
but it seems to be the case. My hope is we figure it out sooner than 
later. I also think most ISPs actually do care despite, the supposition 
in the subject line. Rather we just haven't figured out as a group how 
to do our engineering at a world-class level.


Sometimes an increment is ok. Other times we need to rethink our design. 
Maybe we need to do a bit more of the latter.


Bob


Hi Bob,



On Mar 12, 2023, at 22:02, rjmcmahon  wrote:

iperf 2 uses responses per second and also provides the bounce back 
times as well as one way delays.


The hypothesis is that network engineers have to fix KPI issues, 
including latency, ahead of shipping products.


Asking companies to act on consumer complaints is way too late. It's 
also extremely costly. Those running Amazon customer service can 
explain how these consumer calls about their devices cause things like 
device returns (as that's all the call support can provide.) This 
wastes energy to physically ship things back, causes a stack of 
working items that now go to ewaste, etc.


It's really on network operators, suppliers and device mfgs to get 
ahead of this years before consumers get their stuff.


[SM] As much as I like to tinker, I agree with you to make an impact,
doing this one network at a time scaled poorly, and a joined effort
seems way more effective and yes that better started yesterday than
today ;)




As a side note, many devices select their WiFi chanspec (AP channel+) 
based on the strongest RSSI. The network paths should be based on KPIs 
like low latency. Strong signal just means an AP is yelling to loudly 
and interfering with the neighbors. Try the optimal AP chanspec that 
has 10dB separation per spatial dimension and the whole apartment 
complex would be better for it.


[SM] Sidenote, with DSL ISP are actively optimizing the per link
transmit power in both directions. They seem to do this partially to
save energy/cost and partially to optimize group transmission rates.
Ever since vectoring was introduced to deal with crosstalk the signal
fate of all links connected to a DSLAM agare a partial common fate. In
the DSLAM to CPE direction the DSLAM will "pre-distort" each lines
signal dynamically so that after the unavoidable crosstalk interaction
between the lines the resulting "pulse shapes" are clean(er) again
when they reach the CPE (I am simplifying but the principle holds). In
CPE to DSLAM direction that is not possible (since there is no entity
seeing all concurrent transmissions and hence no possibility to
calculate or apply the pre-distortion, so the method of choice is to
simply try to decode all lines together, and to help with that CPE
transmit power sees to be adjusted that signal level at the DSLAM is
equalized. (For very short links that often results in less than
maximally possible capacity, but over the whole set of links that
method seems to increase total capacity). I would guess in theory
these methods are also applied on RF links (except RF with its 3D
propagation is probably way more challenging).





We're so focused on buffer bloat we're ignoring everything else where 
incremental engineering has led to poor products & offerings.


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e 
--bounceback --trip-times


Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 
flows)


Re: [Bloat] [Rpm] so great to see ISPs that care

2023-03-12 Thread rjmcmahon via Bloat

for completeness, here is a concurrent "working load" example:

 [root@ryzen3950 iperf2-code]# iperf -c 192.168.1.58%enp4s0 -i 1 -e 
--bounceback --working-load=up,4 -t 3


Client connecting to 192.168.1.58, TCP port 5001 with pid 3125575 via 
enp4s0 (1 flows)

Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)

TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)

[  2] local 192.168.1.69%enp4s0 port 49268 connected with 192.168.1.58 
port 5001 (bb w/quickack len/hold=100/0) (sock=7) 
(icwnd/mss/irtt=14/1448/243) (ct=0.29 ms) on 2023-03-12 14:18:25.658 
(PDT)
[  5] local 192.168.1.69%enp4s0 port 49244 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=3) (qack) (icwnd/mss/irtt=14/1448/260) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[  4] local 192.168.1.69%enp4s0 port 49254 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=4) (qack) (icwnd/mss/irtt=14/1448/295) 
(ct=0.35 ms) on 2023-03-12 14:18:25.658 (PDT)
[  1] local 192.168.1.69%enp4s0 port 49256 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=6) (qack) (icwnd/mss/irtt=14/1448/270) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[  3] local 192.168.1.69%enp4s0 port 49252 connected with 192.168.1.58 
port 5001 (prefetch=16384) (sock=5) (qack) (icwnd/mss/irtt=14/1448/263) 
(ct=0.31 ms) on 2023-03-12 14:18:25.658 (PDT)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry 
Cwnd/RTT(var)NetPwr
[  5] 0.00-1.00 sec  41.8 MBytes   351 Mbits/sec  438252/0 3 
  73K/53(3) us  826892
[  1] 0.00-1.00 sec  39.3 MBytes   330 Mbits/sec  412404/024 
  39K/45(3) us  916455
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS
[  2] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.323/0.093/2.147/0.641 ms0   14K/119 us3098 rps
[  4] 0.00-1.00 sec  34.2 MBytes   287 Mbits/sec  358210/015 
  55K/53(3) us  675869
[  3] 0.00-1.00 sec  33.4 MBytes   280 Mbits/sec  349927/011 
 127K/53(4) us  660241

[SUM] 0.00-1.00 sec   109 MBytes   917 Mbits/sec  1146389/029
[  5] 1.00-2.00 sec  42.1 MBytes   353 Mbits/sec  441376/0 1 
  73K/55(9) us  802502
[  1] 1.00-2.00 sec  39.6 MBytes   333 Mbits/sec  415644/0 0 
  39K/51(6) us  814988
[  2] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.079/0.056/0.127/0.019 ms0   14K/67 us12658 rps
[  4] 1.00-2.00 sec  33.8 MBytes   283 Mbits/sec  354150/0 0 
  55K/58(7) us  610603
[  3] 1.00-2.00 sec  33.7 MBytes   283 Mbits/sec  353392/0 2 
 127K/53(6) us  666777

[SUM] 1.00-2.00 sec   110 MBytes   919 Mbits/sec  1148918/0 3
[  5] 2.00-3.00 sec  42.2 MBytes   354 Mbits/sec  442685/0 0 
  73K/50(8) us  885370
[  1] 2.00-3.00 sec  36.9 MBytes   310 Mbits/sec  387381/0 0 
  39K/48(4) us  807044
[  2] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.073/0.058/0.093/0.012 ms0   14K/60 us13774 rps
[  4] 2.00-3.00 sec  33.9 MBytes   284 Mbits/sec  355533/0 0 
  55K/52(4) us  683717
[  3] 2.00-3.00 sec  29.4 MBytes   247 Mbits/sec  308725/0 1 
 127K/54(4) us  571713

[SUM] 2.00-3.00 sec   106 MBytes   886 Mbits/sec  1106943/0 1
[  5] 0.00-3.00 sec   126 MBytes   353 Mbits/sec  1322314/0 4
   73K/57(18) us  773072
[  2] 0.00-3.00 sec  7.81 KBytes  21.3 Kbits/sec
40=0.134/0.053/2.147/0.328 ms0   14K/58 us7489 rps
[  2] 0.00-3.00 sec BB8(f)-PDF: bin(w=100us):cnt(40)=1:31,2:8,22:1 
(5.00/95.00/99.7%=1/2/22,Outliers=1,obl/obu=0/0)
[  3] 0.00-3.00 sec  96.5 MBytes   270 Mbits/sec  1012045/014
  127K/57(6) us  591693
[  1] 0.00-3.00 sec   116 MBytes   324 Mbits/sec  1215431/024
   39K/51(5) us  794234
[  4] 0.00-3.00 sec   102 MBytes   285 Mbits/sec  1067895/015
   55K/55(9) us  647061

[SUM] 0.00-3.00 sec   324 MBytes   907 Mbits/sec  3402254/033
[ CT] final connect times (min/avg/max/stdev) = 0.292/0.316/0.352/22.075 
ms (tot/err) = 5/0



iperf 2 uses responses per second and also provides the bounce back
times as well as one way delays.

The hypothesis is that network engineers have to fix KPI issues,
including latency, ahead of shipping products.

Asking companies to act on consumer complaints is way too late. It's
also extremely costly. Those running Amazon customer service can
explain how these consumer calls about their devices cause things like
device returns (as that's all the call support can provide.) This
wastes energy to physically ship things back, causes a stack of
working items that now go to ewaste, etc.

It's really on network operators, suppliers and device mfgs to get
ahead of this years before consumers 

Re: [Bloat] [Rpm] so great to see ISPs that care

2023-03-12 Thread rjmcmahon via Bloat
iperf 2 uses responses per second and also provides the bounce back 
times as well as one way delays.


The hypothesis is that network engineers have to fix KPI issues, 
including latency, ahead of shipping products.


Asking companies to act on consumer complaints is way too late. It's 
also extremely costly. Those running Amazon customer service can explain 
how these consumer calls about their devices cause things like device 
returns (as that's all the call support can provide.) This wastes energy 
to physically ship things back, causes a stack of working items that now 
go to ewaste, etc.


It's really on network operators, suppliers and device mfgs to get ahead 
of this years before consumers get their stuff.


As a side note, many devices select their WiFi chanspec (AP channel+) 
based on the strongest RSSI. The network paths should be based on KPIs 
like low latency. Strong signal just means an AP is yelling to loudly 
and interfering with the neighbors. Try the optimal AP chanspec that has 
10dB separation per spatial dimension and the whole apartment complex 
would be better for it.


We're so focused on buffer bloat we're ignoring everything else where 
incremental engineering has led to poor products & offerings.


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e 
--bounceback --trip-times


Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 
flows)

Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)

TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)

[  1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 
port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) 
(sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 
14:01:24.820 (PDT)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.311/0.209/0.755/0.159 ms0   14K/202 us3220 rps
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.254/0.180/0.335/0.051 ms0   14K/210 us3934 rps
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.266/0.168/0.468/0.088 ms0   14K/210 us3754 rps
[  1] 3.00-4.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.294/0.184/0.442/0.078 ms0   14K/233 us3396 rps
[  1] 4.00-5.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.263/0.150/0.427/0.077 ms0   14K/215 us3802 rps
[  1] 5.00-6.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.325/0.237/0.409/0.056 ms0   14K/258 us3077 rps
[  1] 6.00-7.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.259/0.165/0.410/0.077 ms0   14K/219 us3857 rps
[  1] 7.00-8.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.277/0.193/0.415/0.068 ms0   14K/224 us3608 rps
[  1] 8.00-9.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.292/0.206/0.465/0.072 ms0   14K/231 us3420 rps
[  1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec
10=0.256/0.157/0.439/0.082 ms0   14K/211 us3908 rps
[  1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec
100=0.280/0.150/0.755/0.085 ms0   14K/1033 us3573 rps
[  1] 0.00-10.01 sec  OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 
From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.0493573 
rps
[  1] 0.00-10.01 sec BB8(f)-PDF: 
bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 
(5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)



Bob

Dave,

your presentation was awesome, I fully agree with you ;). I very much
liked your practical funnel demonstration which was boiled down to the
bare minimum (I only partly asked myself, will the liquid spill in in
your laptops keyboard, and if so is it water-proof, but you clearly
had rehearsed/tried that before).
BTW, I always have to think of this
h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
live from the marina ;)


I am still not through watching all of the presentations and panels,
but can already say, team L4S continues to over-promise and
under-deliver, but Koen's presentation itself was done well and might
(sadly) convince people to buy-in into L4(S) = 2L2L = too little, too
late.

Stuart's RPM presentation was great, making a convincing point.
(Except for pitching L4S and LLD as "solutions", I will accept them as
a step in the right direction, but why not go in all the way and
embrace proper scheduling?)

In detail though, I am not fully convinced about the decision of
taking the inverse of delay increase as singular measure here as I
consider that as a bit of a squandered opportunity at public
outreach/education and as comparing idle and working RPM is
non-intuitive, while idle and working RTT can immediately subtracted
to see the 

Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-15 Thread rjmcmahon via Bloat
hmm, interesting. I'm thinking that GPS PPS is sufficient from iperf 2 & 
classical mechanics perspective.


Have you looked at white rabbit per CERN?

https://kt.cern/article/white-rabbit-cern-born-open-source-technology-sets-new-global-standard-empowering-world#:~:text=White%20Rabbit%20(WR)%20is%20a,the%20field%20of%20particle%20physics.

This discussion does make me question if there is a better metric than 
one way delay, i.e. "speed of causality as limited by network i/o" taken 
per each end of the e2e path? My expertise is quite limited w/respect to 
relativity so I don't know if the below makes any sense or not. I also 
think a core issue is the simultaneity of the start which isn't obvious 
on how to discern.


Does comparing the write blocking times (or frequency) histograms to the 
read blocking times (or frequency) histograms which are coupled by tcp's 
control loop do anything useful? The blocking occurs because of a 
coupling & awating per the remote. Then compare those against a write to 
read thread on the same chip (which I think should be the same in each 
reference frame and the fastest i/o possible for an end.) The frequency 
differences might be due to what you call "interruptions" & one way 
delays (& error) assuming all else equal??


Thanks in advance for any thoughts on this.

Bob

-Original Message-
From: rjmcmahon [mailto:rjmcma...@rjmcmahon.com]
Sent: Thursday, January 12, 2023 11:40 PM
To: dick...@alum.mit.edu
Cc: 'Sebastian Moeller'; 'Rodney W. Grimes';
mike.reyno...@netforecast.com; 'libreqos'; 'David P. Reed'; 'Rpm';
'bloat'
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in
USA

Hi RR,

I believe quality GPS chips compensate for relativity in pulse per

second which is needed to get position accuracy.

_[RR] Of course they do.  That 38usec/day really matters! They assume
they know what the gravitational potential is where they are, and they
can estimate the potential at the satellites so they can compensate,
and they do.  Point is, a GPS unit at Lake Tahoe (6250') runs faster
than the one in San Francisco (sea level).  How do you think these two
"should be synchronized"!   How do you define "synchronization" in
this case?  You synchronize those two clocks, then what about all the
other clocks at Lake Tahoe (or SF or anywhere in between for that
matter __J)??? These are not trivial questions. However if all one
cares about is seconds or milliseconds, then you can argue that we
(earthlings on planet earth) can "sweep such facts under the
proverbial rug" for the purposes of latency in communication networks
and that's certainly doable.  Don't tell that to the guys whose
protocols require "synchronization of all unit to nanoseconds" though!
 They will be very, very unhappy __J __J And you know who you are __J
__J _

_ _

_J_

Bob


Hi Sebastian (et. al.),







[I'll comment up here instead of inline.]







Let me start by saying that I have not been intimately involved with




the IEEE 1588 effort (PTP), however I was involved in the 802.11



efforts along a similar vein, just adding the wireless first hop



component and it's effects on PTP.







What was apparent from the outset was that there was a lack of



understanding what the terms "to synchronize" or "to be

synchronized"


actually mean.  It's not trivial … because we live in a



(approximately, that's another story!) 4-D space-time continuum

where


the Lorentz metric plays a critical role.  Therein, simultaneity

(aka


"things happening at the same time") means the "distance" between

two


such events is zero and that distance is given by sqrt(x^2 + y^2 +

z^2


- (ct)^2) and the "thing happening" can be the tick of a clock



somewhere. Now since everything is relative (time with respect to



what? / location with respect to where?) it's pretty easy to see

that


"if you don't know where you are, you can't know what time it is!"



(English sailors of the 18th century knew this well!) Add to this

the


fact that if everything were stationary, nothing would happen (as



Einstein said "Nothing happens until something moves!"), special



relativity also pays a role.  Clocks on GPS satellites run approx.



7usecs/day slower than those on earth due to their "speed" (8700 mph




roughly)! Then add the consequence that without mass we wouldn't

exist


(in these forms at leastJ), and gravitational effects (aka General



Relativity) come into play. Those turn out to make clocks on GPS



satellites run 45usec/day faster than those on earth!  The net

effect


is that GPS clocks run about 38usec/day faster than clocks on earth.




So what does it mean to "synchronize to GPS"?  Point is: it's a



non-trivial question with a very complicated answer.  The reason it

is


important to get all this right is that the "what that ties time and




space together" is the speed of light and that turns out to be a



"foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means


Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-12 Thread rjmcmahon via Bloat

Hi RR,

I believe quality GPS chips compensate for relativity in pulse per 
second which is needed to get position accuracy.


Bob

Hi Sebastian (et. al.),

[I'll comment up here instead of inline.]

Let me start by saying that I have not been intimately involved with
the IEEE 1588 effort (PTP), however I was involved in the 802.11
efforts along a similar vein, just adding the wireless first hop
component and it's effects on PTP.

What was apparent from the outset was that there was a lack of
understanding what the terms "to synchronize" or "to be synchronized"
actually mean.  It's not trivial … because we live in a
(approximately, that's another story!) 4-D space-time continuum where
the Lorentz metric plays a critical role.  Therein, simultaneity (aka
"things happening at the same time") means the "distance" between two
such events is zero and that distance is given by sqrt(x^2 + y^2 + z^2
- (ct)^2) and the "thing happening" can be the tick of a clock
somewhere. Now since everything is relative (time with respect to
what? / location with respect to where?) it's pretty easy to see that
"if you don't know where you are, you can't know what time it is!"
(English sailors of the 18th century knew this well!) Add to this the
fact that if everything were stationary, nothing would happen (as
Einstein said "Nothing happens until something moves!"), special
relativity also pays a role.  Clocks on GPS satellites run approx.
7usecs/day slower than those on earth due to their "speed" (8700 mph
roughly)! Then add the consequence that without mass we wouldn't exist
(in these forms at leastJ), and gravitational effects (aka General
Relativity) come into play. Those turn out to make clocks on GPS
satellites run 45usec/day faster than those on earth!  The net effect
is that GPS clocks run about 38usec/day faster than clocks on earth.
So what does it mean to "synchronize to GPS"?  Point is: it's a
non-trivial question with a very complicated answer.  The reason it is
important to get all this right is that the "what that ties time and
space together" is the speed of light and that turns out to be a
"foot-per-nanosecond" in a vacuum (roughly 300m/usec).  This means if
I am uncertain about my location to say 300 meters, then I also am not
sure what time it is to a usec AND vice-versa!

All that said, the simplest explanation of synchronization is
probably: Two clocks are synchronized if, when they are brought
(slowly) into physical proximity ("sat next to each other") in the
same (quasi-)inertial frame and the same gravitational potential (not
so obvious BTW … see the FYI below!), an observer of both would say
"they are keeping time identically". Since this experiment is rarely
possible, one can never be "sure" that his clock is synchronized to
any other clock elsewhere. And what does it mean to say they "were
synchronized" when brought together, but now they are not because they
are now in different gravitational potentials! (FYI, there are land
mine detectors being developed on this very principle! I know someone
who actually worked on such a project!)

This all gets even more complicated when dealing with large networks
of networks in which the "speed of information transmission" can vary
depending on the medium (cf. coaxial cables versus fiber versus
microwave links!) In fact, the atmosphere is one of those media and
variations therein result in the need for "GPS corrections" (cf. RTCM
GPS correction messages, RTK, etc.) in order to get to sub-nsec/cm
accuracy.  Point is if you have a set of nodes distributed across the
country all with GPS and all "synchronized to GPS time", and a second
identical set of nodes (with no GPS) instead connected with a network
of cables and fiber links, all of different lengths and composition
using different carrier frequencies (dielectric constants vary with
frequency!) "synchronized" to some clock somewhere using NTP or PTP),
the synchronization of the two sets will be different unless a common
reference clock is used AND all the above effects are taken into
account, and good luck with that! J

In conclusion, if anyone tells you that clock synchronization in
communication networks is simple ("Just use GPS!"), you should feel
free to chuckle (under your breath if necessaryJ)

Cheers,

RR

-Original Message-
From: Sebastian Moeller [mailto:moell...@gmx.de]
Sent: Thursday, January 12, 2023 12:23 AM
To: Dick Roy
Cc: Rodney W. Grimes; mike.reyno...@netforecast.com; libreqos; David
P. Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in
USA

Hi RR,


On Jan 11, 2023, at 22:46, Dick Roy  wrote:















-Original Message-



From: Starlink [mailto:starlink-boun...@lists.bufferbloat.net] On

Behalf Of Sebastian Moeller via Starlink


Sent: Wednesday, January 11, 2023 12:01 PM



To: Rodney W. Grimes



Cc: Dave Taht via Starlink; mike.reyno...@netforecast.com; libreqos;

David P. Reed; Rpm; rjmcmahon; bloat


Subject: Re: 

Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-12 Thread rjmcmahon via Bloat

For WiFi there is the TSF

https://en.wikipedia.org/wiki/Timing_synchronization_function

We in test & measurement use that in our internal telemetry. The TSF of 
a Wifi device only needs frequency-sync for some things typically 
related to access to the medium. A phase locked loop does it. A device 
that decides to go to sleep, as an example, will also stop its TSF 
creating a non-linearity. It's difficult to synchronize it to the system 
clock or the GPS atomic clock - though we do this for internal testing 
reasons so it can be done.


What's mostly missing for T with WiFi is the GPS atomic clock as 
that's a convenient time domain to use as the canonical domain.


Bob

Hi RR,



On Jan 11, 2023, at 22:46, Dick Roy  wrote:



-Original Message-
From: Starlink [mailto:starlink-boun...@lists.bufferbloat.net] On 
Behalf Of Sebastian Moeller via Starlink

Sent: Wednesday, January 11, 2023 12:01 PM
To: Rodney W. Grimes
Cc: Dave Taht via Starlink; mike.reyno...@netforecast.com; libreqos; 
David P. Reed; Rpm; rjmcmahon; bloat
Subject: Re: [Starlink] [Rpm] Researchers Seeking Probe Volunteers in 
USA


Hi Rodney,




> On Jan 11, 2023, at 19:32, Rodney W. Grimes  
wrote:
>
> Hello,
>
> Yall can call me crazy if you want.. but... see below [RWG]
>> Hi Bib,
>>
>>
>>> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink 
 wrote:
>>>
>>> My biggest barrier is the lack of clock sync by the devices, i.e. very limited 
support for PTP in data centers and in end devices. This limits the ability to measure one 
way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a mistake. We 
know this intuitively with airplane flight times or even car commute times where the one way 
time is not 1/2 a round trip time. Google maps & directions provide a time estimate for 
the one way link. It doesn't compute a round trip and divide by two.
>>>
>>> For those that can get clock sync working, the iperf 2 --trip-times options 
is useful.
>>
>>[SM] +1; and yet even with unsynchronized clocks one can try to measure 
how latency changes under load and that can be done per direction. Sure this is far 
inferior to real reliably measured OWDs, but if life/the internet deals you lemons
>
> [RWG] iperf2/iperf3, etc are already moving large amounts of data back and forth, for that matter 
any rate test, why not abuse some of that data and add the fundemental NTP clock sync data and 
bidirectionally pass each others concept of "current time".  IIRC (its been 25 years since I 
worked on NTP at this level) you *should* be able to get a fairly accurate clock delta between each 
end, and then use that info and time stamps in the data stream to compute OWD's.  You need to put 4 
time stamps in the packet, and with that you can compute "offset".
[RR] For this to work at a reasonable level of accuracy, the 
timestamping circuits on both ends need to be deterministic and 
repeatable as I recall. Any uncertainty in that process adds to 
synchronization errors/uncertainties.


  [SM] Nice idea. I would guess that all timeslot based access 
technologies (so starlink, docsis, GPON, LTE?) all distribute "high 
quality time" carefully to the "modems", so maybe all that would be 
needed is to expose that high quality time to the LAN side of those 
modems, dressed up as NTP server?
[RR] It’s not that simple!  Distributing “high-quality time”, i.e. 
“synchronizing all clocks” does not solve the communication problem in 
synchronous slotted MAC/PHYs!


[SM] I happily believe you, but the same idea of "time slot" needs to
be shared by all nodes, no? So the clockss need to be reasonably
similar rate, aka synchronized (see below).


 All the technologies you mentioned above are essentially P2P, not 
intended for broadcast.  Point is, there is a point controller (aka 
PoC) often called a base station (eNodeB, gNodeB, …) that actually 
“controls everything that is necessary to control” at the UE including 
time, frequency and sampling time offsets, and these are critical to 
get right if you want to communicate, and they are ALL subject to the 
laws of physics (cf. the speed of light)! Turns out that what is 
necessary for the system to function anywhere near capacity, is for 
all the clocks governing transmissions from the UEs to be 
“unsynchronized” such that all the UE transmissions arrive at the PoC 
at the same (prescribed) time!


[SM] Fair enough. I would call clocks that are "in sync" albeit with
individual offsets as synchronized, but I am a layman and that might
sound offensively wrong to experts in the field. But even without the
naming my point is that all systems that depend on some idea of shared
time-base are halfway there of exposing that time to end users, by
"translating it into an NTP time source at the modem.


For some technologies, in particular 5G!, these considerations are 
ESSENTIAL. Feel free to scour the 3GPP LTE 5G RLC and PHY specs if you 
don’t believe me! J


[SM Far be it 

Re: [Bloat] [Starlink] [Rpm] Researchers Seeking Probe Volunteers in USA

2023-01-11 Thread rjmcmahon via Bloat
Iperf 2 is designed to measure network i/o. Note: It doesn't have to 
move large amounts of data. It can support data profiles that don't 
drive TCP's CCA as an example.


Two things I've been asked for and avoided:

1) Integrate clock sync into iperf's test traffic
2) Measure and output CPU usages

I think both of these are outside the scope of a tool designed to test 
network i/o over sockets, rather these should be developed & validated 
independently of a network i/o tool.


Clock error really isn't about amount/frequency of traffic but rather 
getting a periodic high-quality reference. I tend to use GPS pulse per 
second to lock the local system oscillator to. As David says, most every 
modern handheld computer has the GPS chips to do this already. So to me 
it seems more of a policy choice between data center operators and 
device mfgs and less of a technical issue.


Bob

Hello,

Yall can call me crazy if you want.. but... see below [RWG]

Hi Bib,


> On Jan 9, 2023, at 20:13, rjmcmahon via Starlink 
 wrote:
>
> My biggest barrier is the lack of clock sync by the devices, i.e. very limited 
support for PTP in data centers and in end devices. This limits the ability to measure 
one way delays (OWD) and most assume that OWD is 1/2 and RTT which typically is a 
mistake. We know this intuitively with airplane flight times or even car commute times 
where the one way time is not 1/2 a round trip time. Google maps & directions 
provide a time estimate for the one way link. It doesn't compute a round trip and 
divide by two.
>
> For those that can get clock sync working, the iperf 2 --trip-times options 
is useful.

	[SM] +1; and yet even with unsynchronized clocks one can try to 
measure how latency changes under load and that can be done per 
direction. Sure this is far inferior to real reliably measured OWDs, 
but if life/the internet deals you lemons


 [RWG] iperf2/iperf3, etc are already moving large amounts of data
back and forth, for that matter any rate test, why not abuse some of
that data and add the fundemental NTP clock sync data and
bidirectionally pass each others concept of "current time".  IIRC (its
been 25 years since I worked on NTP at this level) you *should* be
able to get a fairly accurate clock delta between each end, and then
use that info and time stamps in the data stream to compute OWD's.
You need to put 4 time stamps in the packet, and with that you can
compute "offset".




>
> --trip-times
>  enable the measurement of end to end write to read latencies (client and 
server clocks must be synchronized)

 [RWG] --clock-skew
	enable the measurement of the wall clock difference between sender and 
receiver




[SM] Sweet!

Regards
Sebastian

>
> Bob
>> I have many kvetches about the new latency under load tests being
>> designed and distributed over the past year. I am delighted! that they
>> are happening, but most really need third party evaluation, and
>> calibration, and a solid explanation of what network pathologies they
>> do and don't cover. Also a RED team attitude towards them, as well as
>> thinking hard about what you are not measuring (operations research).
>> I actually rather love the new cloudflare speedtest, because it tests
>> a single TCP connection, rather than dozens, and at the same time folk
>> are complaining that it doesn't find the actual "speed!". yet... the
>> test itself more closely emulates a user experience than speedtest.net
>> does. I am personally pretty convinced that the fewer numbers of flows
>> that a web page opens improves the likelihood of a good user
>> experience, but lack data on it.
>> To try to tackle the evaluation and calibration part, I've reached out
>> to all the new test designers in the hope that we could get together
>> and produce a report of what each new test is actually doing. I've
>> tweeted, linked in, emailed, and spammed every measurement list I know
>> of, and only to some response, please reach out to other test designer
>> folks and have them join the rpm email list?
>> My principal kvetches in the new tests so far are:
>> 0) None of the tests last long enough.
>> Ideally there should be a mode where they at least run to "time of
>> first loss", or periodically, just run longer than the
>> industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
>> there! It's really bad science to optimize the internet for 20
>> seconds. It's like optimizing a car, to handle well, for just 20
>> seconds.
>> 1) Not testing up + down + ping at the same time
>> None of the new tests actually test the same thing that the infamous
>> rrul test does - all the others still test up, then down, and ping. It
>> was/remains my hope that the simpler parts of the flent test suite -
>> such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
>> tests would provide calibration to the test designers.
>> we've got zillions of flent results in the archive published here:
>> 

Re: [Bloat] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
Also released is python code. It's based on python 3's asyncio. It just 
needs password-less ssh to be able to create the pipes. This opens up 
the stats processing to a vast majority of tools used by data scientists 
at large.


https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/
https://docs.python.org/3/library/asyncio.html

Creating traffic profiles is basically instantiate then run.  Here is an 
example facetime test.



#instantiate DUT host and NIC devices
wifi1 = ssh_node(name='WiFi_A', ipaddr=args.host_wifi1, device='eth1', 
devip='192.168.1.58')
wifi2 = ssh_node(name='WiFi_B', ipaddr=args.host_wifi2, device='eth1', 
devip='192.168.1.70')


#instantiate traffic objects or flows

video=iperf_flow(name='VIDEO_FACETIME_UDP', user='root', server=wifi2, 
client=wifi1, dstip=wifi2.devip, proto='UDP', interval=1, debug=False, 
srcip=wifi1.devip, srcport='6001', dstport='6001', 
offered_load='30:600K',trip_times=True, tos='ac_vi', latency=True, 
fullduplex=True)
audio=iperf_flow(name='AUDIO_FACETIME_UDP', user='root', server=wifi2, 
client=wifi1, dstip=wifi2.devip, proto='UDP', interval=1, debug=False, 
srcip=wifi1.devip, srcport='6002', dstport='6002', 
offered_load='50:25K',trip_times=True, tos='ac_vo', latency=True, 
fullduplex=True)


ssh_node.open_consoles(silent_mode=True)

traffic_flows = iperf_flow.get_instances()
try:
if traffic_flows:
for runid in range(args.runcount) :
for traffic_flow in traffic_flows:
print("Running ({}/{}) {} traffic client={} server={} 
dest={} with load {} for {} seconds".format(str(runid+1), 
str(args.runcount), traffic_flow.name, traffic_flow.client, 
traffic_flow.server, traffic_flow.dstip, traffic_flow.offered_load, 
args.time))

gc.disable()
iperf_flow.run(time=args.time, flows='all', epoch_sync=True)
gc.enable()
try :
gc.collect()
except:
pass

for traffic_flow in traffic_flows :

traffic_flow.compute_ks_table(directory=args.output_directory, 
title=args.test_name)


else:
print("No traffic Flows instantiated per test 
{}".format(args.test_name))


finally :
ssh_node.close_consoles()
if traffic_flows:
iperf_flow.close_loop()
logging.shutdown()


Bob

A peer likes gnuplot and sed. There are many, many visualization
tools. An excerpt below:

My quick hack one-line parser was based on just a single line from the
iperf output, not the entire log:

[  1] 0.00-1.00 sec T8-PDF:
bin(w=1ms):cnt(849)=1:583,2:112,3:9,4:8,5:11,6:10,7:7,8:8,9:7,10:2,11:3,12:2,13:2,14:2,15:2,16:3,17:2,18:3,19:1,21:2,22:2,23:3,24:2,26:3,27:2,28:3,29:2,30:2,31:3,32:2,33:2,34:2,35:5,37:1,39:1,40:3,41:5,42:2,43:3,44:3,45:3,46:3,47:3,48:1,49:2,50:3,51:2,52:1,53:1
(50.00/99.7/99.80/%=1/51/52,Outliers=0,obl/obu=0/0)

Your log contains 30 such histograms.  A very crude approach would be
to filter only the lines that have T8-PDF:

plot "< sed -n '/T8-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}'
lat.txt" with lp

or

plot "< sed -n '/T8(f)-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}'
lat.txt" with lp

http://www.gnuplot.info/

Bob

On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon  
wrote:


The write to read latencies (OWD) are on the server side in CLT form.
Use --histograms on the server side to enable them.


Thx. It is far more difficult to instrument things on the server side
of the testbed but we will tackle it.


Your client side sampled TCP RTT is 6ms with less than a 1 ms of
variance (or sqrt of variance as variance is typically squared)  No
retries suggest the network isn't dropping packets.


Thank you for analyzing that result. the cake aqm, set for a 5ms
target, with RFC3168-style ECN, is enabled on this path, on this
setup, at the moment. So the result is correct.

A second test with ecn off showed the expected retries.

I have emulations also of fifos, pie, fq-pie, fq-codel, red, blue,
sfq, with various realworld delays, and so on... but this is a bit
distracting at the moment from our focus, which was in optimizing the
XDP + ebpf based bridge and epping based sampling tools to crack
25Gbit.

I think iperf2 will be great for us after that settles down.

All the newer bounceback code is only master and requires a compile 
from
source. It will be released in 2.1.9 after testing cycles. Hopefully, 
in

early March 2023


I would like to somehow parse and present those histograms.


Bob

https://sourceforge.net/projects/iperf2/

> The DC that so graciously loaned us 3 machines for the testbed (thx
> equinix!), does support ptp, but we have not configured it yet. In ntp
> tests between these hosts we seem to be within 500us, and certainly
> 50us would be great, in the future.
>
> I note that in all my kvetching about the new tests' needing
> validation today... I kind of elided that I'm pretty happy with
> iperf2's new tests that landed last august, and are now appearing in
> linux package managers around the 

Re: [Bloat] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
A peer likes gnuplot and sed. There are many, many visualization tools. 
An excerpt below:


My quick hack one-line parser was based on just a single line from the 
iperf output, not the entire log:


[  1] 0.00-1.00 sec T8-PDF: 
bin(w=1ms):cnt(849)=1:583,2:112,3:9,4:8,5:11,6:10,7:7,8:8,9:7,10:2,11:3,12:2,13:2,14:2,15:2,16:3,17:2,18:3,19:1,21:2,22:2,23:3,24:2,26:3,27:2,28:3,29:2,30:2,31:3,32:2,33:2,34:2,35:5,37:1,39:1,40:3,41:5,42:2,43:3,44:3,45:3,46:3,47:3,48:1,49:2,50:3,51:2,52:1,53:1 
(50.00/99.7/99.80/%=1/51/52,Outliers=0,obl/obu=0/0)


Your log contains 30 such histograms.  A very crude approach would be to 
filter only the lines that have T8-PDF:


plot "< sed -n '/T8-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}' 
lat.txt" with lp


or

plot "< sed -n '/T8(f)-PDF/{s/.*)=//;s/ (.*//;s/,/\\n/g;s/:/ /g;p}' 
lat.txt" with lp


http://www.gnuplot.info/

Bob

On Mon, Jan 9, 2023 at 12:46 PM rjmcmahon  
wrote:


The write to read latencies (OWD) are on the server side in CLT form.
Use --histograms on the server side to enable them.


Thx. It is far more difficult to instrument things on the server side
of the testbed but we will tackle it.


Your client side sampled TCP RTT is 6ms with less than a 1 ms of
variance (or sqrt of variance as variance is typically squared)  No
retries suggest the network isn't dropping packets.


Thank you for analyzing that result. the cake aqm, set for a 5ms
target, with RFC3168-style ECN, is enabled on this path, on this
setup, at the moment. So the result is correct.

A second test with ecn off showed the expected retries.

I have emulations also of fifos, pie, fq-pie, fq-codel, red, blue,
sfq, with various realworld delays, and so on... but this is a bit
distracting at the moment from our focus, which was in optimizing the
XDP + ebpf based bridge and epping based sampling tools to crack
25Gbit.

I think iperf2 will be great for us after that settles down.

All the newer bounceback code is only master and requires a compile 
from
source. It will be released in 2.1.9 after testing cycles. Hopefully, 
in

early March 2023


I would like to somehow parse and present those histograms.


Bob

https://sourceforge.net/projects/iperf2/

> The DC that so graciously loaned us 3 machines for the testbed (thx
> equinix!), does support ptp, but we have not configured it yet. In ntp
> tests between these hosts we seem to be within 500us, and certainly
> 50us would be great, in the future.
>
> I note that in all my kvetching about the new tests' needing
> validation today... I kind of elided that I'm pretty happy with
> iperf2's new tests that landed last august, and are now appearing in
> linux package managers around the world. I hope more folk use them.
> (sorry robert, it's been a long time since last august!)
>
> Our new testbed has multiple setups. In one setup - basically the
> machine name is equal to a given ISP plan, and a key testing point is
> looking at the differences between the FCC 25-3 and 100/20 plans in
> the real world. However at our scale (25gbit) it turned out that
> emulating the delay realistically has problematic.
>
> Anyway, here's a 25/3 result for iperf (other results and iperf test
> type requests gladly accepted)
>
> root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1
> 
> Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
> Write buffer size: 131072 Byte
> TOS set to 0x0 (Nagle on)
> TCP window size: 85.3 KByte (default)
> 
> [  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
> 5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
> on 2023-01-09 20:13:37 (UTC)
> [ ID] IntervalTransferBandwidth   Write/Err  Rtry
>Cwnd/RTT(var)NetPwr
> [  1] 0.-1. sec  3.25 MBytes  27.3 Mbits/sec  26/0  0
>  19K/6066(262) us  562
> [  1] 1.-2. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  15K/4671(207) us  673
> [  1] 2.-3. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  13K/5538(280) us  568
> [  1] 3.-4. sec  3.12 MBytes  26.2 Mbits/sec  25/0  0
>  16K/6244(355) us  525
> [  1] 4.-5. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  19K/6152(216) us  511
> [  1] 5.-6. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  22K/6764(529) us  465
> [  1] 6.-7. sec  3.12 MBytes  26.2 Mbits/sec  25/0  0
>  15K/5918(605) us  554
> [  1] 7.-8. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  18K/5178(327) us  608
> [  1] 8.-9. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>  19K/5758(473) us  546
> [  1] 9.-10. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
>   16K/6141(280) us  512
> [  1] 0.-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
> 0   19K/5924(491) us  537
>
>
> On Mon, Jan 9, 2023 at 11:13 AM 

Re: [Bloat] [LibreQoS] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
The target audience for iperf 2 latency metrics is network engineers and 
not end users. My belief is that a latency complaint from an end user is 
a defect escape, i.e. it should have been caught earlier by experts in 
our industry. That's part of the reason why I think open source tooling 
that is accurate and trustworthy is critical to our industry moving 
forward & improving. Minimize barriers to measuring & understanding 
issues so to speak.


I do hope one day we move to segment routing where latency telemetry 
drives forwarding planes. The early days of the internet were about 
connectivity. Then came capacity as demand grew. Now we need to improve 
the speed of causality per what's become a massively distributed 
computer system owned by no one single entity.


https://www.segment-routing.net/tutorials/2018-03-06-sr-delay-measurement/

Unfortunately, the performance of e2e latency experiences a form of 
tragedy of the commons as each segment tends to be unaware of the full 
path and their relative contributions.


The ancient Greek philosopher Aristotle pointed out the problem with 
common resources: ‘What is common to many is taken least care of, for 
all men have greater regard for what is their own than for what they 
possess in common with others.’


Bob

I'm not offering a complete solution here  I'm not so keen on
speed tests.  It's akin to testing your car's performance by flooring
it til you hit the governor and hard breaking til you stop *while in
traffic*.   That doesn't demonstrate the utility of the car.

Data is already being transferred, let's measure that.Doing some
routine simple tests intentionally during low, mid, high congestion
periods to see how the service is actually performing for the end
user.  You don't need to generate the traffic on a link to measure how
much traffic a link can handle.  And determining congestion on a
service in a fairly rudimentary way would be frequent latency tests to
'known good' service ie high capacity services that are unlikely to
experience congestion.

There are few use cases that matche a 2 minute speed test outside of
'wonder what my internet connection can do'.  And in those few use
cases such as a big file download, a routine latency test is a really
great measure of the quality of a service.  Sure, troubleshooting by
the ISP might include a full bore multi-minute speed test but that's
really not useful for the consumer.

Further, exposing this data to the end users, IMO, is likely better as
a chart of congestion and flow durations and some scoring.  ie, slice
out 7-8pm, during this segment you were able to pull 427Mbps without
congestion, netflix or streaming service use approximately 6% of
capacity.  Your service was busy for 100% of this time ( likely
measuring buffer bloat ).Expressed as a pretty chart with consumer
friendly language.


When you guys are talking about per segment latency testing, you're
really talking about metrics for operators to be concerned with, not
end users.  It's useless information for them.  I had a woman about 2
months ago complain about her frame rates because her internet
connection was 15 emm ess's and that was terrible and I needed to fix
it.  (slow computer was the problem, obviously) but that data from
speedtest.net didn't actually help her at all, it just confused her.

Running timed speed tests at 3am (Eero, I'm looking at you) is pretty
pointless.  Running speed tests during busy hours is a little bit
harmful overall considering it's pushing into oversells on every ISP.

I could talk endlessly about how useless speed tests are to end user 
experience.



On Mon, Jan 9, 2023 at 12:20 PM rjmcmahon via LibreQoS
 wrote:


User based, long duration tests seem fundamentally flawed. QoE for 
users
is driven by user expectations. And if a user won't wait on a long 
test

they for sure aren't going to wait minutes for a web page download. If
it's a long duration use case, e.g. a file download, then latency 
isn't

typically driving QoE.

Not: Even for internal tests, we try to keep our automated tests down 
to

2 seconds. There are reasons to test for minutes (things like phy cals
in our chips) but it's more of the exception than the rule.

Bob
>> 0) None of the tests last long enough.
>
> The user-initiated ones tend to be shorter - likely because the
> average user does not want to wait several minutes for a test to
> complete. But IMO this is where a test platform like SamKnows, Ookla's
> embedded client, NetMicroscope, and others can come in - since they
> run in the background on some randomized schedule w/o user
> intervention. Thus, the user's time-sensitivity is no longer a factor
> and a longer duration test can be performed.
>
>> 1) Not testing up + down + ping at the same time
>
> You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in
> IPPM...
>
> JL
>
> ___
> Rpm mailing list
> r...@lists.bufferbloat.net
> 

Re: [Bloat] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
The write to read latencies (OWD) are on the server side in CLT form. 
Use --histograms on the server side to enable them.


Your client side sampled TCP RTT is 6ms with less than a 1 ms of 
variance (or sqrt of variance as variance is typically squared)  No 
retries suggest the network isn't dropping packets.


All the newer bounceback code is only master and requires a compile from 
source. It will be released in 2.1.9 after testing cycles. Hopefully, in 
early March 2023


Bob

https://sourceforge.net/projects/iperf2/


The DC that so graciously loaned us 3 machines for the testbed (thx
equinix!), does support ptp, but we have not configured it yet. In ntp
tests between these hosts we seem to be within 500us, and certainly
50us would be great, in the future.

I note that in all my kvetching about the new tests' needing
validation today... I kind of elided that I'm pretty happy with
iperf2's new tests that landed last august, and are now appearing in
linux package managers around the world. I hope more folk use them.
(sorry robert, it's been a long time since last august!)

Our new testbed has multiple setups. In one setup - basically the
machine name is equal to a given ISP plan, and a key testing point is
looking at the differences between the FCC 25-3 and 100/20 plans in
the real world. However at our scale (25gbit) it turned out that
emulating the delay realistically has problematic.

Anyway, here's a 25/3 result for iperf (other results and iperf test
type requests gladly accepted)

root@lqos:~# iperf -6 --trip-times -c c25-3 -e -i 1

Client connecting to c25-3, TCP port 5001 with pid 2146556 (1 flows)
Write buffer size: 131072 Byte
TOS set to 0x0 (Nagle on)
TCP window size: 85.3 KByte (default)

[  1] local fd77::3%bond0.4 port 59396 connected with fd77::1:2 port
5001 (trip-times) (sock=3) (icwnd/mss/irtt=13/1428/948) (ct=1.10 ms)
on 2023-01-09 20:13:37 (UTC)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry
   Cwnd/RTT(var)NetPwr
[  1] 0.-1. sec  3.25 MBytes  27.3 Mbits/sec  26/0  0
 19K/6066(262) us  562
[  1] 1.-2. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 15K/4671(207) us  673
[  1] 2.-3. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 13K/5538(280) us  568
[  1] 3.-4. sec  3.12 MBytes  26.2 Mbits/sec  25/0  0
 16K/6244(355) us  525
[  1] 4.-5. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 19K/6152(216) us  511
[  1] 5.-6. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 22K/6764(529) us  465
[  1] 6.-7. sec  3.12 MBytes  26.2 Mbits/sec  25/0  0
 15K/5918(605) us  554
[  1] 7.-8. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 18K/5178(327) us  608
[  1] 8.-9. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
 19K/5758(473) us  546
[  1] 9.-10. sec  3.00 MBytes  25.2 Mbits/sec  24/0  0
  16K/6141(280) us  512
[  1] 0.-10.0952 sec  30.6 MBytes  25.4 Mbits/sec  245/0
0   19K/5924(491) us  537


On Mon, Jan 9, 2023 at 11:13 AM rjmcmahon  
wrote:


My biggest barrier is the lack of clock sync by the devices, i.e. very
limited support for PTP in data centers and in end devices. This 
limits
the ability to measure one way delays (OWD) and most assume that OWD 
is
1/2 and RTT which typically is a mistake. We know this intuitively 
with

airplane flight times or even car commute times where the one way time
is not 1/2 a round trip time. Google maps & directions provide a time
estimate for the one way link. It doesn't compute a round trip and
divide by two.

For those that can get clock sync working, the iperf 2 --trip-times
options is useful.

--trip-times
   enable the measurement of end to end write to read latencies 
(client

and server clocks must be synchronized)

Bob
> I have many kvetches about the new latency under load tests being
> designed and distributed over the past year. I am delighted! that they
> are happening, but most really need third party evaluation, and
> calibration, and a solid explanation of what network pathologies they
> do and don't cover. Also a RED team attitude towards them, as well as
> thinking hard about what you are not measuring (operations research).
>
> I actually rather love the new cloudflare speedtest, because it tests
> a single TCP connection, rather than dozens, and at the same time folk
> are complaining that it doesn't find the actual "speed!". yet... the
> test itself more closely emulates a user experience than speedtest.net
> does. I am personally pretty convinced that the fewer numbers of flows
> that a web page opens improves the likelihood of a good user
> experience, but lack data on it.
>
> To try to tackle the evaluation and calibration part, I've reached out
> to all the new test designers in the hope that we could get 

Re: [Bloat] [Rpm] [EXTERNAL] Re: [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
User based, long duration tests seem fundamentally flawed. QoE for users 
is driven by user expectations. And if a user won't wait on a long test 
they for sure aren't going to wait minutes for a web page download. If 
it's a long duration use case, e.g. a file download, then latency isn't 
typically driving QoE.


Not: Even for internal tests, we try to keep our automated tests down to 
2 seconds. There are reasons to test for minutes (things like phy cals 
in our chips) but it's more of the exception than the rule.


Bob

0) None of the tests last long enough.


The user-initiated ones tend to be shorter - likely because the
average user does not want to wait several minutes for a test to
complete. But IMO this is where a test platform like SamKnows, Ookla's
embedded client, NetMicroscope, and others can come in - since they
run in the background on some randomized schedule w/o user
intervention. Thus, the user's time-sensitivity is no longer a factor
and a longer duration test can be performed.


1) Not testing up + down + ping at the same time


You should consider publishing a LUL BCP I-D in the IRTF/IETF - like in 
IPPM...


JL

___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] [Starlink] Researchers Seeking Probe Volunteers in USA

2023-01-09 Thread rjmcmahon via Bloat
My biggest barrier is the lack of clock sync by the devices, i.e. very 
limited support for PTP in data centers and in end devices. This limits 
the ability to measure one way delays (OWD) and most assume that OWD is 
1/2 and RTT which typically is a mistake. We know this intuitively with 
airplane flight times or even car commute times where the one way time 
is not 1/2 a round trip time. Google maps & directions provide a time 
estimate for the one way link. It doesn't compute a round trip and 
divide by two.


For those that can get clock sync working, the iperf 2 --trip-times 
options is useful.


--trip-times
  enable the measurement of end to end write to read latencies (client 
and server clocks must be synchronized)


Bob

I have many kvetches about the new latency under load tests being
designed and distributed over the past year. I am delighted! that they
are happening, but most really need third party evaluation, and
calibration, and a solid explanation of what network pathologies they
do and don't cover. Also a RED team attitude towards them, as well as
thinking hard about what you are not measuring (operations research).

I actually rather love the new cloudflare speedtest, because it tests
a single TCP connection, rather than dozens, and at the same time folk
are complaining that it doesn't find the actual "speed!". yet... the
test itself more closely emulates a user experience than speedtest.net
does. I am personally pretty convinced that the fewer numbers of flows
that a web page opens improves the likelihood of a good user
experience, but lack data on it.

To try to tackle the evaluation and calibration part, I've reached out
to all the new test designers in the hope that we could get together
and produce a report of what each new test is actually doing. I've
tweeted, linked in, emailed, and spammed every measurement list I know
of, and only to some response, please reach out to other test designer
folks and have them join the rpm email list?

My principal kvetches in the new tests so far are:

0) None of the tests last long enough.

Ideally there should be a mode where they at least run to "time of
first loss", or periodically, just run longer than the
industry-stupid^H^H^H^H^H^Hstandard 20 seconds. There be dragons
there! It's really bad science to optimize the internet for 20
seconds. It's like optimizing a car, to handle well, for just 20
seconds.

1) Not testing up + down + ping at the same time

None of the new tests actually test the same thing that the infamous
rrul test does - all the others still test up, then down, and ping. It
was/remains my hope that the simpler parts of the flent test suite -
such as the tcp_up_squarewave tests, the rrul test, and the rtt_fair
tests would provide calibration to the test designers.

we've got zillions of flent results in the archive published here:
https://blog.cerowrt.org/post/found_in_flent/
ps. Misinformation about iperf 2 impacts my ability to do this.



The new tests have all added up + ping and down + ping, but not up +
down + ping. Why??

The behaviors of what happens in that case are really non-intuitive, I
know, but... it's just one more phase to add to any one of those new
tests. I'd be deliriously happy if someone(s) new to the field
started doing that, even optionally, and boggled at how it defeated
their assumptions.

Among other things that would show...

It's the home router industry's dirty secret than darn few "gigabit"
home routers can actually forward in both directions at a gigabit. I'd
like to smash that perception thoroughly, but given our starting point
is a gigabit router was a "gigabit switch" - and historically been
something that couldn't even forward at 200Mbit - we have a long way
to go there.

Only in the past year have non-x86 home routers appeared that could
actually do a gbit in both directions.

2) Few are actually testing within-stream latency

Apple's rpm project is making a stab in that direction. It looks
highly likely, that with a little more work, crusader and
go-responsiveness can finally start sampling the tcp RTT, loss and
markings, more directly. As for the rest... sampling TCP_INFO on
windows, and Linux, at least, always appeared simple to me, but I'm
discovering how hard it is by delving deep into the rust behind
crusader.

the goresponsiveness thing is also IMHO running WAY too many streams
at the same time, I guess motivated by an attempt to have the test
complete quickly?

B) To try and tackle the validation problem:ps. Misinformation about 
iperf 2 impacts my ability to do this.




In the libreqos.io project we've established a testbed where tests can
be plunked through various ISP plan network emulations. It's here:
https://payne.taht.net (run bandwidth test for what's currently hooked
up)

We could rather use an AS number and at least a ipv4/24 and ipv6/48 to
leverage with that, so I don't have to nat the various emulations.
(and funding, anyone got funding?) Or, as the code is GPLv2 

Re: [Bloat] [Starlink] [Rpm] [LibreQoS] the grinch meets cloudflare'schristmas present

2023-01-06 Thread rjmcmahon via Bloat
yeah, I'd prefer not to output CLT sample groups at all but the 
histograms aren't really human readable and users constantly ask for 
them. I thought about providing a distance from the gaussian as output 
too but so far few would understand it and nobody I found would act upon 
it. The tool produces the full histograms so no information is really 
missing except for maybe better time series analysis.


The open source flows python code also released with iperf 2 does use 
the komogorov-smirnov distances & distance matrices to cluster when the 
number of histograms are just too much. We've analyzed 1M runs to fault 
isolate the "unexpected interruptions" or "bugs" and without statistical 
support it is just not doable. This does require instrumentation of the 
full path with mapping to a common clock domain (e.g. GPS) and not just 
e2e stats. I find an e2e complaint by an end user about "poor speed" as 
useful as telling a pharmacist I have a fever. Not much diagnostically 
is going on. Take an aspirin.


https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py

Bob

See below …

-Original Message-
From: Starlink [mailto:starlink-boun...@lists.bufferbloat.net] On
Behalf Of rjmcmahon via Starlink
Sent: Friday, January 6, 2023 12:39 PM
To: MORTON JR., AL
Cc: Dave Taht via Starlink; IETF IPPM WG; libreqos; Cake List; Rpm;
bloat
Subject: Re: [Starlink] [Rpm] [LibreQoS] the grinch meets
cloudflare'schristmas present

Some thoughts are not to use UDP for testing here. Also, these speed

tests have little to no information for network engineers about what's


going on. Iperf 2 may better assist network engineers but then I'm

biased ;)

Running iperf 2 https://sourceforge.net/projects/iperf2/ with

--trip-times. Though the sampling and central limit theorem averaging
is

hiding the real distributions (use --histograms to get those)

_[RR] FWIW (IMNBWM __J)… If the output/final histograms indicate the
PDF is NOT Gaussian, then any application of the CLT is
inappropriate/contra-indicated! The CLT is a "proof under certain
regularity conditions/assumptions of underlying/constituent PDFs, that
the resulting PDF (after all the necessary convolutions are performed
to get to the PDF of the output) will asymptotically approach a
Gaussian with only a mean and a std. dev. left to specify. _

Below are 4 parallel TCP streams from my home to one of my servers in

the cloud. First where TCP is limited per CCA. Second is source side

write rate limiting. Things to note:

o) connect times for both at 10-15 ms

o) multiple TCP retries on a few rites - one case is 4 with 5 writes.

Source side pacing eliminates retries

o) Fairness with CCA isn't great but quite good with source side write


pacing

o) Queue depth with CCA is about 150 Kbytes about 100K byte with
source

side pacing

o) min write to read is about 80 ms for both

o) max is 220 ms vs 97 ms

o) stdev for CCA write/read is 30 ms vs 3 ms

o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as

TCP_QUICACK and TCP_NODELAY are both enabled.

[ CT] final connect times (min/avg/max/stdev) =

10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0

[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e

--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N



Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4

flows)

Write buffer size: 131072 Byte

TOS set to 0x0 and nodelay (Nagle off)

TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)

Event based writes (pending queue watermark at 16384 bytes)



[  1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port

5001 (prefetch=16384) (trip-times) (sock=3) (qack)

(icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56

(PST)

[  4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port

5001 (prefetch=16384) (trip-times) (sock=5) (qack)

(icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56

(PST)

[  3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port

5001 (prefetch=16384) (trip-times) (sock=6) (qack)

(icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56

(PST)

[  2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port

5001 (prefetch=16384) (trip-times) (sock=4) (qack)

(icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56

(PST)

[ ID] IntervalTransferBandwidth   Write/Err  Rtry

Cwnd/RTT(var)NetPwr

...

[  4] 4.00-5.00 sec  1.38 MBytes  11.5 Mbits/sec  11/0 3


29K/21088(1142) us  68.37

[  2] 4.00-5.00 sec  1.62 MBytes  13.6 Mbits/sec  13/0 2


31K/19284(612) us  88.36

[  1] 4.00-5.00 sec   896 KBytes  7.34 Mbits/sec  7/0 5

16K/18996(658) us  48.30

[  3] 4.00-5.00 sec  1.00 MBytes  8.39 Mbits/sec  8/0 5

18K/18133(208) us  57.83


Re: [Bloat] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present

2023-01-06 Thread rjmcmahon via Bloat
For responsiveness, the bounceback seems reasonable even with upstream 
competition. Bunch more TCP retries though.


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e 
--trip-times -i 1 --bounceback -t 3


Client connecting to (**hidden**), TCP port 5001 with pid 111022 (1 
flows)

Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)

TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)

[  1] local *.*.*.86%enp7s0 port 36976 connected with *.*.*.123 port 
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) 
(sock=3) (icwnd/mss/irtt=14/1448/9862) (ct=9.90 ms) on 2023-01-06 
12:42:18 (PST)
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec
10=12.195/9.298/16.457/2.679 ms0   14K/11327 us82 rps
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec
10=12.613/9.271/15.489/2.788 ms0   14K/12165 us79 rps
[  1] 2.00-3.00 sec  1.95 KBytes  16.0 Kbits/sec
10=13.390/9.376/15.986/2.520 ms0   14K/13164 us75 rps
[  1] 0.00-3.03 sec  5.86 KBytes  15.8 Kbits/sec
30=12.733/9.271/16.457/2.620 ms0   14K/15138 us79 rps
[  1] 0.00-3.03 sec  OWD Delays (ms) Cnt=30 To=7.937/4.634/11.327/2.457 
From=4.778/4.401/5.350/0.258 Asymmetry=3.166/0.097/6.311/2.31879 rps
[  1] 0.00-3.03 sec BB8(f)-PDF: 
bin(w=100us):cnt(30)=93:2,94:3,95:2,97:1,100:1,102:1,105:1,114:2,142:1,143:1,144:2,145:3,146:1,147:1,148:1,151:1,152:1,154:1,155:1,156:1,160:1,165:1 
(5.00/95.00/99.7%=93/160/165,Outliers=0,obl/obu=0/0)


[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e 
--trip-times -i 1 --bounceback -t 3 --bounceback-congest=up,4


Client connecting to (**hidden**), TCP port 5001 with pid 111069 (1 
flows)

Write buffer size:  100 Byte
Bursting:  100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs & 
tcp_quickack)

TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 16.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)

[  2] local *.*.*.85%enp4s0 port 38342 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=3) (qack) 
(icwnd/mss/irtt=14/1448/10613) (ct=10.66 ms) on 2023-01-06 12:42:36 
(PST)
[  1] local *.*.*.85%enp4s0 port 38360 connected with *.*.*.123 port 
5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) 
(sock=4) (icwnd/mss/irtt=14/1448/14901) (ct=14.96 ms) on 2023-01-06 
12:42:36 (PST)
[  3] local *.*.*.85%enp4s0 port 38386 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=7) (qack) 
(icwnd/mss/irtt=14/1448/15295) (ct=15.31 ms) on 2023-01-06 12:42:36 
(PST)
[  4] local *.*.*.85%enp4s0 port 38348 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=5) (qack) 
(icwnd/mss/irtt=14/1448/14901) (ct=14.95 ms) on 2023-01-06 12:42:36 
(PST)
[  5] local *.*.*.85%enp4s0 port 38372 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=6) (qack) 
(icwnd/mss/irtt=14/1448/15371) (ct=15.42 ms) on 2023-01-06 12:42:36 
(PST)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry 
Cwnd/RTT(var)NetPwr
[  3] 0.00-1.00 sec  1.29 MBytes  10.8 Mbits/sec  13502/0   115  
 28K/22594(904) us  59.76
[  4] 0.00-1.00 sec  1.63 MBytes  13.6 Mbits/sec  17048/0   140  
 42K/22728(568) us  75.01
[ ID] IntervalTransferBandwidth BB 
cnt=avg/min/max/stdev Rtry  Cwnd/RTTRPS
[  1] 0.00-1.00 sec  1.95 KBytes  16.0 Kbits/sec
10=76.140/17.224/123.195/43.168 ms0   14K/68136 us13 rps
[  5] 0.00-1.00 sec  1.04 MBytes  8.72 Mbits/sec  10893/082  
 25K/23400(644) us  46.55

[SUM] 0.00-1.00 sec  3.95 MBytes  33.2 Mbits/sec  41443/0   337
[  2] 0.00-1.00 sec  1.10 MBytes  9.25 Mbits/sec  11566/077  
 22K/23557(432) us  49.10
[  3] 1.00-2.00 sec  1.24 MBytes  10.4 Mbits/sec  13037/020  
 28K/14427(503) us  90.37
[  4] 1.00-2.00 sec  1.43 MBytes  12.0 Mbits/sec  14954/031  
 12K/13348(407) us  112
[  1] 1.00-2.00 sec  1.95 KBytes  16.0 Kbits/sec
10=14.581/10.801/20.356/3.599 ms0   14K/27791 us69 rps
[  5] 1.00-2.00 sec  1.26 MBytes  10.6 Mbits/sec  13191/016  
 12K/14749(675) us  89.44

[SUM] 1.00-2.00 sec  3.93 MBytes  32.9 Mbits/sec  41182/067
[  2] 1.00-2.00 sec  1000 KBytes  8.19 Mbits/sec  10237/013  
 19K/14467(1068) us  70.76
[  3] 2.00-3.00 sec  1.33 MBytes  11.2 Mbits/sec  13994/0 4  
 24K/20749(495) us  

Re: [Bloat] [Rpm] [LibreQoS] the grinch meets cloudflare's christmas present

2023-01-06 Thread rjmcmahon via Bloat
Some thoughts are not to use UDP for testing here. Also, these speed 
tests have little to no information for network engineers about what's 
going on. Iperf 2 may better assist network engineers but then I'm 
biased ;)


Running iperf 2 https://sourceforge.net/projects/iperf2/ with 
--trip-times. Though the sampling and central limit theorem averaging is 
hiding the real distributions (use --histograms to get those)


Below are 4 parallel TCP streams from my home to one of my servers in 
the cloud. First where TCP is limited per CCA. Second is source side 
write rate limiting. Things to note:


o) connect times for both at 10-15 ms
o) multiple TCP retries on a few rites - one case is 4 with 5 writes. 
Source side pacing eliminates retries
o) Fairness with CCA isn't great but quite good with source side write 
pacing
o) Queue depth with CCA is about 150 Kbytes about 100K byte with source 
side pacing

o) min write to read is about 80 ms for both
o) max is 220 ms vs 97 ms
o) stdev for CCA write/read is 30 ms vs 3 ms
o) TCP RTT is 20ms w/CCA and 90 ms with ssp - seems odd here as 
TCP_QUICACK and TCP_NODELAY are both enabled.


[ CT] final connect times (min/avg/max/stdev) = 
10.326/13.522/14.986/2150.329 ms (tot/err) = 4/0
[rjmcmahon@ryzen3950 iperf2-code]$ iperf -c *** --hide-ips -e 
--trip-times -i 1 -P 4 -t 10 -w 4m --tcp-quickack -N


Client connecting to (**hidden**), TCP port 5001 with pid 107678 (4 
flows)

Write buffer size: 131072 Byte
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)
Event based writes (pending queue watermark at 16384 bytes)

[  1] local *.*.*.85%enp4s0 port 42480 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=3) (qack) 
(icwnd/mss/irtt=14/1448/10534) (ct=10.63 ms) on 2023-01-06 12:17:56 
(PST)
[  4] local *.*.*.85%enp4s0 port 42488 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=5) (qack) 
(icwnd/mss/irtt=14/1448/14023) (ct=14.08 ms) on 2023-01-06 12:17:56 
(PST)
[  3] local *.*.*.85%enp4s0 port 42502 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=6) (qack) 
(icwnd/mss/irtt=14/1448/14642) (ct=14.70 ms) on 2023-01-06 12:17:56 
(PST)
[  2] local *.*.*.85%enp4s0 port 42484 connected with *.*.*.123 port 
5001 (prefetch=16384) (trip-times) (sock=4) (qack) 
(icwnd/mss/irtt=14/1448/14728) (ct=14.79 ms) on 2023-01-06 12:17:56 
(PST)
[ ID] IntervalTransferBandwidth   Write/Err  Rtry 
Cwnd/RTT(var)NetPwr

...
[  4] 4.00-5.00 sec  1.38 MBytes  11.5 Mbits/sec  11/0 3   
29K/21088(1142) us  68.37
[  2] 4.00-5.00 sec  1.62 MBytes  13.6 Mbits/sec  13/0 2   
31K/19284(612) us  88.36
[  1] 4.00-5.00 sec   896 KBytes  7.34 Mbits/sec  7/0 5   
16K/18996(658) us  48.30
[  3] 4.00-5.00 sec  1.00 MBytes  8.39 Mbits/sec  8/0 5   
18K/18133(208) us  57.83

[SUM] 4.00-5.00 sec  4.88 MBytes  40.9 Mbits/sec  39/015
[  4] 5.00-6.00 sec  1.25 MBytes  10.5 Mbits/sec  10/0 4   
29K/14717(489) us  89.06
[  1] 5.00-6.00 sec  1.00 MBytes  8.39 Mbits/sec  8/0 4   
16K/15874(408) us  66.06
[  3] 5.00-6.00 sec  1.12 MBytes  9.44 Mbits/sec  9/0 4   
16K/15826(382) us  74.54
[  2] 5.00-6.00 sec  1.50 MBytes  12.6 Mbits/sec  12/0 6
9K/14878(557) us  106

[SUM] 5.00-6.00 sec  4.88 MBytes  40.9 Mbits/sec  39/018
[  4] 6.00-7.00 sec  1.75 MBytes  14.7 Mbits/sec  14/0 4   
25K/15472(496) us  119
[  2] 6.00-7.00 sec  1.00 MBytes  8.39 Mbits/sec  8/0 2   
26K/16417(427) us  63.87
[  1] 6.00-7.00 sec  1.25 MBytes  10.5 Mbits/sec  10/0 5   
16K/16268(679) us  80.57
[  3] 6.00-7.00 sec  1.00 MBytes  8.39 Mbits/sec  8/0 6   
15K/16629(799) us  63.06

[SUM] 6.00-7.00 sec  5.00 MBytes  41.9 Mbits/sec  40/017
[  4] 7.00-8.00 sec  1.75 MBytes  14.7 Mbits/sec  14/0 4   
22K/13986(519) us  131
[  1] 7.00-8.00 sec  1.12 MBytes  9.44 Mbits/sec  9/0 4   
16K/12679(377) us  93.04
[  3] 7.00-8.00 sec   896 KBytes  7.34 Mbits/sec  7/0 5   
14K/12971(367) us  70.74
[  2] 7.00-8.00 sec  1.12 MBytes  9.44 Mbits/sec  9/0 6   
15K/14740(779) us  80.03

[SUM] 7.00-8.00 sec  4.88 MBytes  40.9 Mbits/sec  39/019

[root@bobcat iperf2-code]# iperf -s -i 1 -e --hide-ips -w 4m

Server listening on TCP port 5001 with pid 233615
Read buffer size:  128 KByte (Dist bin width=16.0 KByte)
TCP window size: 7.63 MByte (WARNING: requested 3.81 MByte)

[  1] local *.*.*.123%eth0 port 5001 connected with *.*.*.171 port 42480 
(trip-times) (sock=4) (peer 2.1.9-master) (qack) 
(icwnd/mss/irtt=14/1448/11636) on 2023-01-06 12:17:56 (PST)
[  2] local 

Re: [Bloat] [Starlink] [Rpm] the grinch meets cloudflare'schristmas present

2023-01-05 Thread rjmcmahon via Bloat


_[RR] ... IMO, a more useful concept of latency is the
excess transit time over the theoretical minimum that results from all
the real-world "interruptions" in the transmission path(s) including
things like regeneration of optical signals in long cables, switching
of network layer protocols in gateways (header manipulation above
layer 4), and yes, of course, buffering in switches and routers __J
These are things that can be "minimized" by appropriate system design
(the topic of these threads actually!).  "


I think this is worth repeating. Thanks for pointing it out. (I'm 
wondering if better inline network telemetry can also help forwarding 
planes use tech like segment routing to bypass and mitigate any 
"temporal interruptions.")


The only way to decrease transit time is to "go wireless everywhere, 
eliminate our atmosphere,

and then get physically closer to each other"! __J Like it or not, we
live in a Lorentz-ian space-time continuum also know as "our world"


This reminds me of the spread networks approach (who then got beat out 
by microwave for HFT.)


https://en.wikipedia.org/wiki/Spread_Networks

"According to a WIRED article, the estimated roundtrip time for an 
ordinary cable is 14.5 milliseconds, giving users of Spread Networks a 
slight advantage. However, because glass has a higher refractive index 
than air (about 1.5 compared to about 1), the roundtrip time for fiber 
optic cable transmission is 50% more than that for transmission through 
the air. Some companies, such as McKay Brothers, Metrorede and 
Tradeworx, are using air-based transmission to offer lower estimated 
roundtrip times (8.2 milliseconds and 8.5 milliseconds respectively) 
that are very close to the theoretical minimum possible (about 7.9-8 
milliseconds)."


Bob
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] the grinch meets cloudflare's christmas present

2023-01-04 Thread rjmcmahon via Bloat
The thing that works for gamers are colors, e.g. green, yellow and red. 
Basically, if the game slows down to a bothersome experience the 
"latency indicator" goes from green to yellow. If the game slows down to 
be unplayable it goes to red and the "phone" mfg gets lots of 
complaints. Why we call a handheld computer a phone is a whole other 
discussion.


Bob

On the other hand, we would like to be comprehensible to normal users,
especially when we want them to press their providers to deal with
bufferbloat. Differences like speed and rate would go right over their
heads.

On Wed, Jan 4, 2023 at 1:16 PM Ulrich Speidel via Starlink
 wrote:


The use of the term "speed" in communications used to be restricted
to the speed of light (or whatever propagation speed one happened to
be dealing with. Everything else was a "rate". Maybe I'm
old-fashioned but I think talking about "speed tests" muddies the
waters rather a lot.

--

Dr. Ulrich Speidel

Department of Computer Science

Room 303S.594
Ph: (+64-9)-373-7599 ext. 85282

The University of Auckland
u.spei...@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/ [1]


-

From: Starlink  on behalf of
rjmcmahon via Starlink 
Sent: Thursday, January 5, 2023 9:02 AM
To: j...@jonathanfoulkes.com 
Cc: Cake List ; IETF IPPM WG
; libreqos ; Dave
Taht via Starlink ; Rpm
; bloat 
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's
christmas present

Curious to why people keep calling capacity tests speed tests? A
semi at
55 mph isn't faster than a porsche at 141 mph because its load
volume is
larger.

Bob

HNY Dave and all the rest,

Great to see yet another capacity test add latency metrics to the
results. This one looks like a good start.

Results from my Windstream DOCSIS 3.1 line (3.1 on download only,

up

is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter

Pro

(an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
reliable low-latency unless you shave a good bit off the targets.

My

local loop is pretty congested.

Here’s the latest Cloudflare test:




And an Ookla test run just afterward:




They are definitely both in the ballpark and correspond to other

tests

run from the router itself or my (wired) MacBook Pro.

Cheers,

Jonathan



On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
 wrote:

Please try the new, the shiny, the really wonderful test here:
https://speed.cloudflare.com/ [2]

I would really appreciate some independent verification of
measurements using this tool. In my brief experiments it appears

- as

all the commercial tools to date - to dramatically understate the
bufferbloat, on my LTE, (and my starlink terminal is out being
hacked^H^H^H^H^H^Hworked on, so I can't measure that)

My test of their test reports 223ms 5G latency under load , where
flent reports over 2seconds. See comparison attached.

My guess is that this otherwise lovely new tool, like too many,
doesn't run for long enough. Admittedly, most web objects (their
target market) are small, and so long as they remain small and

not

heavily pipelined this test is a very good start... but I'm

pretty

sure cloudflare is used for bigger uploads and downloads than

that.

There's no way to change the test to run longer either.

I'd love to get some results from other networks (compared as

usual to

flent), especially ones with cake on it. I'd love to know if they
measured more minimum rtts that can be obtained with fq_codel or

cake,

correctly.

Love Always,
The Grinch

--
This song goes out to all the folk that thought Stadia would

work:





https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz

[3]

Dave Täht CEO, TekLibre, LLC




___

Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm [4]



___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm [4]

___
Starlink mailing list
starl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
___
Starlink mailing list
starl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink


--

Bruce Perens K6BP

Links:
--
[1] http://www.cs.auckland.ac.nz/%7Eulrich/
[2] https://speed.cloudflare.com
[3] 
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz

[4] https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Starlink] [Rpm] the grinch meets cloudflare's christmas present

2023-01-04 Thread rjmcmahon via Bloat
Well, from an iperf 2 perspective channel capacity of a TCP socket is 
information/time. I think that's also more or less how Shannon defined 
it. I don't think channel capacity matters if it's measured or somehow 
otherwise computed, or maybe never even known. It exists on its own 
merits regardless ;)


"Shannon's Theorem gives an upper bound to the capacity of a link, in 
bits per second (bps), as a function of the available bandwidth and the 
signal-to-noise ratio of the link."


"the channel capacity of a given channel is the highest information rate 
(in units of information per unit time) that can be achieved with 
arbitrarily small error probability."


https://en.wikipedia.org/wiki/Channel_capacity

Then, there is latency which is delay in units time. So why do we use 
the term "speed" when we're talking about delay? I find it similar to 
the speed of causality except applied to us mere mortal computer 
programmers. Our programs block while waiting on those delays. Network 
engineers reducing those delays in turn can increase the speed of the 
programmer's objectives. So speed here really is the speed of a coupled 
distributed computer system. Never to exceed the speed of light but we 
should try to get there anyway.


Bob


OK, so now we are all showing our age!  And yes, the lexicon has
become really muddied … generally the result of someone who doesn't
know (and thinking they do J), speaking the loudest and the longest
and whaddaya know, all of a sudden we have "speed tests" and "capacity
tests", when really what is happening is that
"data/information/communication rate" is being "measured/estimated".
Neither "speed" nor "capacity" is being "tested". Oh, for the good ole
days when … J

-

From: Starlink [mailto:starlink-boun...@lists.bufferbloat.net] On
Behalf Of Ulrich Speidel via Starlink
Sent: Wednesday, January 4, 2023 1:17 PM
To: j...@jonathanfoulkes.com; rjmcmahon
Cc: Dave Taht via Starlink; bloat
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
present

The use of the term "speed" in communications used to be restricted to
the speed of light (or whatever propagation speed one happened to be
dealing with. Everything else was a "rate". Maybe I'm old-fashioned
but I think talking about "speed tests" muddies the waters rather a
lot.

--

Dr. Ulrich Speidel

Department of Computer Science

Room 303S.594
Ph: (+64-9)-373-7599 ext. 85282

The University of Auckland
u.spei...@auckland.ac.nz
http://www.cs.auckland.ac.nz/~ulrich/ [1]


-

From: Starlink  on behalf of
rjmcmahon via Starlink 
Sent: Thursday, January 5, 2023 9:02 AM
To: j...@jonathanfoulkes.com 
Cc: Cake List ; IETF IPPM WG
; libreqos ; Dave Taht
via Starlink ; Rpm
; bloat 
Subject: Re: [Starlink] [Rpm] the grinch meets cloudflare's christmas
present

Curious to why people keep calling capacity tests speed tests? A semi
at
55 mph isn't faster than a porsche at 141 mph because its load volume
is
larger.

Bob

HNY Dave and all the rest,

Great to see yet another capacity test add latency metrics to the
results. This one looks like a good start.

Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
(an i5 x86) with Cake set for 710/31 as this ISP can't deliver
reliable low-latency unless you shave a good bit off the targets. My
local loop is pretty congested.

Here's the latest Cloudflare test:




And an Ookla test run just afterward:




They are definitely both in the ballpark and correspond to other

tests

run from the router itself or my (wired) MacBook Pro.

Cheers,

Jonathan



On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm
 wrote:

Please try the new, the shiny, the really wonderful test here:
https://speed.cloudflare.com/ [2]

I would really appreciate some independent verification of
measurements using this tool. In my brief experiments it appears -

as

all the commercial tools to date - to dramatically understate the
bufferbloat, on my LTE, (and my starlink terminal is out being
hacked^H^H^H^H^H^Hworked on, so I can't measure that)

My test of their test reports 223ms 5G latency under load , where
flent reports over 2seconds. See comparison attached.

My guess is that this otherwise lovely new tool, like too many,
doesn't run for long enough. Admittedly, most web objects (their
target market) are small, and so long as they remain small and not
heavily pipelined this test is a very good start... but I'm pretty
sure cloudflare is used for bigger uploads and downloads than that.
There's no way to change the test to run longer either.

I'd love to get some results from other networks (compared as usual

to

flent), especially ones with cake on it. I'd love to know if they
measured more minimum rtts that can be obtained with fq_codel or

cake,


Re: [Bloat] [Rpm] the grinch meets cloudflare's christmas present

2023-01-04 Thread rjmcmahon via Bloat
Curious to why people keep calling capacity tests speed tests? A semi at 
55 mph isn't faster than a porsche at 141 mph because its load volume is 
larger.


Bob

HNY Dave and all the rest,

Great to see yet another capacity test add latency metrics to the
results. This one looks like a good start.

Results from my Windstream DOCSIS 3.1 line (3.1 on download only, up
is 3.0) Gigabit down / 35Mbps up provisioning. Using an IQrouter Pro
(an i5 x86) with Cake set for 710/31 as this ISP can’t deliver
reliable low-latency unless you shave a good bit off the targets. My
local loop is pretty congested.

Here’s the latest Cloudflare test:




And an Ookla test run just afterward:




They are definitely both in the ballpark and correspond to other tests
run from the router itself or my (wired) MacBook Pro.

Cheers,

Jonathan


On Jan 4, 2023, at 12:26 PM, Dave Taht via Rpm 
 wrote:


Please try the new, the shiny, the really wonderful test here:
https://speed.cloudflare.com/

I would really appreciate some independent verification of
measurements using this tool. In my brief experiments it appears - as
all the commercial tools to date - to dramatically understate the
bufferbloat, on my LTE, (and my starlink terminal is out being
hacked^H^H^H^H^H^Hworked on, so I can't measure that)

My test of their test reports 223ms 5G latency under load , where
flent reports over 2seconds. See comparison attached.

My guess is that this otherwise lovely new  tool, like too many,
doesn't run for long enough. Admittedly, most web objects (their
target market) are small, and so long as they remain small and not
heavily pipelined this test is a very good start... but I'm pretty
sure cloudflare is used for bigger uploads and downloads than that.
There's no way to change the test to run longer either.

I'd love to get some results from other networks (compared as usual to
flent), especially ones with cake on it. I'd love to know if they
measured more minimum rtts that can be obtained with fq_codel or cake,
correctly.

Love Always,
The Grinch

--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-698135607352320-FXtz
Dave Täht CEO, TekLibre, LLC
___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm



___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] [Rpm] Fwd: [Make-wifi-fast] make-wifi-fast 2016 & crusader

2022-12-11 Thread rjmcmahon via Bloat
Thanks for the well-written response Sebastian. I need to think more 
about the load vs no load OWD differentials and maybe offer that as an 
integrated test. Thanks for bringing it up (again.) I do think a 
low-duty cycle bounceback test to the AP could be interesting too.


I don't know of any projects working on iperf 2 & containers but it has 
been suggested as useful.


Bob


Mail mail provider unhelpfully labelled my post as SPAM, and
apparently all receivers rejected to receive my "SPAM"
Hence I try forwarding a slightly edited version of my response below,
hoping not to trigger GMX's SPAM detection again.



Begin forwarded message:

From: Sebastian Moeller 
Subject: Re: [Make-wifi-fast] [Rpm] make-wifi-fast 2016 & crusader
Date: December 8, 2022 at 11:15:12 GMT+1
To: rjmcmahon 
Cc: rjmcmahon via Make-wifi-fast 
, Dave Täht 
, Rpm , libreqos 
, Dave Taht via Starlink 
, bloat 


Hi Bob,

thanks for the detailed response.



On Dec 7, 2022, at 20:28, rjmcmahon  wrote:

Hi Sebastian,

Per Aristotle: "That which is common to the greatest number gets the 
least amount of care. Men pay most attention to what is their own: 
they care less for what is common."


I think a challenge for many of us providing open source tooling is 
the lack of resource support to supply goods for all. Both the iperf 
2 and iperf 3 teams are under-resourced so we try not to duplicate 
each other too much except for where that duplication adds value 
(e.g. having two independently written socket measurement tools.) The 
iperf 3 team has provided public servers, I think at their costs.


	[SM] I should probably clarify my position, I was not trying to argue 
that you (or your employer) should operate public iperf2 servers, but 
that the availability of such servers probably is what made iperf3 the 
most popular of the iperf2/iperf3/netperf triple. I did not realize 
that the iperf3 team operates some of the public servers, as I have 
already seen ISPs (see e.g. hxxps://speedtest.wtnet.de) that offer 
iperf3 as mean for their existing users to run speedtest via iperf3. 
So my argument should gone more along the lines of, "to make iperf2 as 
popular as it deserves to be some publicity and available servers will 
help a lot". And actually having servers operated by other parties 
than the toll maker is an added "vote of confidence".



I've been holding off on iperf 2 public servers until I found an 
additional value add and a way to pay for them.


	[SM] Understood, and I formulated inartfully, implying you should 
host iperf2 servers; that was not my intent.


Much of the iperf 2 work has been around one way delay (OWD) or 
latency. Doing this well requires GPS clock sync on both the data 
center servers and the end host devices. I checked into this a few 
years ago and found that this level of clock sync wasn't available 
via rented servers (e.g. linode or Hurricane Electric) so I put on 
hold any further investigation of public servers for iperf 2 as being 
redundant with iperf 3. Those that need true e2e latency (vs RTTs) 
have to build their own so-to-speak.


	[SM] Yepp, except for congestion detection all that is really 
required is sufficiently stable clocks, as the delay differences 
between idle and loaded tests are quite informative and offering OWDs 
allows to pinpoint the direction of congestion.


I know of two nonprofit measurement labs being mlabs and ripe (there 
may be more) that could take an interest but neither has:


hxxps://www.ripe.net/
hxxps://www.measurementlab.net/


	[SM] I think ripe especially their ATLAS network is somewhat 
"sensitive" about throughput tests, as quite some nodes likely are 
operated by enthusiasts in their leaf networks that are not well 
suited as generic speedtest servers... (however that would allow great 
studies of achievable throughput comparing different ASs).


There could be a market opportunity for somebody to build a 
measurement system in the cloud that supported any generic sensors 
and could signal anomalies. Then one could connect iperf 2 public 
servers to that as an offering.


Note: Some GPS atomic clock options for RPi:
hxxps://store.uputronics.com/index.php?route=product/product_id=81
hxxps://store.timebeat.app/products/gnss-raspberry-pi-cm4-module?variant=41934772764843


	[SM] I followed your lead several moths ago, and have an 
GPS-disciplined NTP server in my homenetwork already, so I am prepared 
for true OWD measurements ;)




Also needed is the latest iperf 2 on an openwrt router.


	[SM] That will work well for the low throughput test, but I often see 
that routers that are fully capable of routing X Mbps get into issues 
when trying to source and/or sink the same X Mbps, so it becomes 
essential to monitor router "load" while running tests (something that 
is also still on the TODO list for cake-autorate, we should throttle 
our shapers if the traffic load exceeds a router's capability to 
schedule CPU slots timely to the shaper qdiscs).


Better may 

Re: [Bloat] [Make-wifi-fast] [Rpm] make-wifi-fast 2016 & crusader

2022-12-11 Thread rjmcmahon via Bloat

Hi Sebastian,

Per Aristotle: "That which is common to the greatest number gets the 
least amount of care. Men pay most attention to what is their own: they 
care less for what is common."


I think a challenge for many of us providing open source tooling is the 
lack of resource support to supply goods for all. Both the iperf 2 and 
iperf 3 teams are under-resourced so we try not to duplicate each other 
too much except for where that duplication adds value (e.g. having two 
independently written socket measurement tools.) The iperf 3 team has 
provided public servers, I think at their costs.


I've been holding off on iperf 2 public servers until I found an 
additional value add and a way to pay for them. Much of the iperf 2 work 
has been around one way delay (OWD) or latency. Doing this well requires 
GPS clock sync on both the data center servers and the end host devices. 
I checked into this a few years ago and found that this level of clock 
sync wasn't available via rented servers (e.g. linode or Hurricane 
Electric) so I put on hold any further investigation of public servers 
for iperf 2 as being redundant with iperf 3. Those that need true e2e 
latency (vs RTTs) have to build their own so-to-speak.


I know of two nonprofit measurement labs being mlabs and ripe (there may 
be more) that could take an interest but neither has:


https://www.ripe.net/
https://www.measurementlab.net/

There could be a market opportunity for somebody to build a measurement 
system in the cloud that supported any generic sensors and could signal 
anomalies. Then one could connect iperf 2 public servers to that as an 
offering.


Note: Some GPS atomic clock options for RPi:
https://store.uputronics.com/index.php?route=product/product_id=81
https://store.timebeat.app/products/gnss-raspberry-pi-cm4-module?variant=41934772764843

Also needed is the latest iperf 2 on an openwrt router. Better may be to 
have that router also run ptp4l or equivalent and behave as a PTP 
grandmaster.


Unfortunately, my day job requires me to focus on "shareholder 
interests" and, in that context, it's very difficult to provide public 
goods that are nonrivalrous and nonexcludable. 
https://tinyurl.com/mr63p52k


Finally, we all have to deal with "why we sleep" in order to be most 
productive (despite what Mr. Musk thinks.)


https://en.wikipedia.org/wiki/Why_We_Sleep

and there are only so many "awake hours" for us "non-exceptional" 
engineers ;-) (A joke, everybody has value by my book.)


Thanks,
Bob

Hi Bob,

What simple end users would need is (semi-)public iperf2 servers
accessible over the internet to be comparably easy to use as
iperf3

Regards
Sebastian

On 6 December 2022 18:46:18 CET, rjmcmahon via Make-wifi-fast
 wrote:


Nice write up and work over the years.

On tooling:

iperf 2 supports full duplex, multiple parallel streams, tx start
times, bounceback, isochronous, etc. Man page is here

https://iperf2.sourceforge.io/iperf-manpage.html

The flows code in the flows directory

https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/

is written in python 3 and leverages asyncio.

https://docs.python.org/3/library/asyncio.html

This is all released as open source.

Bob

This is where things stood on the wifi front, back in 2016. Nobody
understood us...





https://docs.google.com/document/d/1Se36svYE1Uzpppe1HWnEyat_sAGghB3kE285LElJBW4/edit#


So I sort of enjoyed re-reading that this morning, and all the
enthusiastic commentary we'd had on it. Perhaps we can reshape it
and
find ways to move forward today?

I am happy to have seen so many products hitting the market 5+
years
later that leverage this work, many openwrt derived, like
evenroute,
quantum, and openwifi, others from pure linux, like eero and
google
fiber, and so far as I can tell, in many a chromebook, and of
course
ios and osx.

Still, there was so much work left to be done, and the work
applied to
all forms of wireless technology, be it 6 or 12ghz, or 60ghz, or
starlink. Just the other day I was watching a 5G engineer that was
struggling to get decent simultaneous throughput up and down, the
test
tool showing that, but not the 25 seconds of buffering built into
the
rmnet driver in poor conditions, and "only" 150ms perfect ones.
This
test tool shows "perfect" throughput for this device:

https://www.spinics.net/lists/netdev/msg865852.html
(anyone know which tool it was? see image here:




https://drive.google.com/file/d/1gSbozrtd9h0X63i6vdkNpN68d-9sg8f9/view

)

vs the actual, underlying, unusable 25 seconds!!! - result - if
only
that test tool attempted to start up even one more flow partially
through the test, perhaps we'd be getting somewhere. An
increasingly
favorite test of mine is the staggered start "squarewave" tests in
the
flent suite. For those that haven't tried it, crusader is the
first
tool I've seen that not only has a staggered start latency under
load
test, but as its written in rust, runs on every OS in the planet.
Give
it a shot?

Re: [Bloat] [Rpm] make-wifi-fast 2016 & crusader

2022-12-11 Thread rjmcmahon via Bloat

Nice write up and work over the years.

On tooling:

iperf 2 supports full duplex, multiple parallel streams, tx start times, 
bounceback, isochronous, etc. Man page is here


https://iperf2.sourceforge.io/iperf-manpage.html

The flows code in the flows directory

https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/

is written in python 3 and leverages asyncio.

https://docs.python.org/3/library/asyncio.html

This is all released as open source.

Bob

This is where things stood on the wifi front, back in 2016. Nobody
understood us...

https://docs.google.com/document/d/1Se36svYE1Uzpppe1HWnEyat_sAGghB3kE285LElJBW4/edit#

So I sort of enjoyed re-reading that this morning, and all the
enthusiastic commentary we'd had on it. Perhaps we can reshape it and
find ways to move forward today?

I am happy to have seen so many products hitting the market 5+ years
later that leverage this work, many openwrt derived, like evenroute,
quantum, and openwifi, others from pure linux, like eero and google
fiber, and so far as I can tell, in many a chromebook, and of course
ios and osx.

Still, there was so much work left to be done, and the work applied to
all forms of wireless technology, be it 6 or 12ghz, or 60ghz, or
starlink. Just the other day I was watching a 5G engineer that was
struggling to get decent simultaneous throughput up and down, the test
tool showing that, but not the 25 seconds of buffering built into the
rmnet driver in poor conditions, and "only" 150ms perfect ones. This
test tool shows "perfect" throughput for this device:

https://www.spinics.net/lists/netdev/msg865852.html
(anyone know which tool it was? see image here:
https://drive.google.com/file/d/1gSbozrtd9h0X63i6vdkNpN68d-9sg8f9/view
)

vs the actual, underlying, unusable 25 seconds!!! - result - if only
that test tool attempted to start up even one more flow partially
through the test, perhaps we'd be getting somewhere. An increasingly
favorite test of mine is the staggered start "squarewave" tests in the
flent suite. For those that haven't tried it, crusader is the first
tool I've seen that not only has a staggered start latency under load
test, but as its written in rust, runs on every OS in the planet. Give
it a shot?

https://github.com/Zoxc/crusader/releases/tag/v0.0.9-testing

___
Rpm mailing list
r...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm

___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat