Re: [Cerowrt-devel] risc-v 2 ethernet port router

2023-05-07 Thread David P. Reed via Cerowrt-devel

Celerons and Atoms have been doing just fine with fq_codel on 1 GigE. However, 
the prices for 2 port boxes seem to be about $200 with RAM and enough flash to 
boot Linux.
 
I personally think the time has come to start using 2.5 GigE on the typical 2 
port home routers, and avoid USB, as it just makes support harder - use PCIe.
 
But I'm a special case: my home lab and all of my home are on a 10 GigE 
backbone, with 2.5 GigE to the wireless APs. This calendar year, I'm moving to 
either 25 or 40 GigE on the backbone and in the lab, but I haven't picked the 
final setup. Partly it depends on what I can get from my external ISP for a 
reasonable price - hopefully 2 Gig or better will come here, since we have many 
competitors (3 cable providers and various small-business fixed wireless).
 
I'm wondering about how fq_codel will handle these higher speeds on inexpensive 
computers.
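 
A back-of-envelope sketch of the worry (illustrative Python; it assumes fixed 
packet sizes at line rate, with no batching or offloads):

def budget_ns(link_gbps, pkt_bytes=1500):
    """Nanoseconds of CPU budget per packet at line rate."""
    pps = link_gbps * 1e9 / (pkt_bytes * 8)   # packets per second
    return 1e9 / pps

for gbps in (1, 2.5, 10, 25):
    print(f"{gbps:>4} GigE: {budget_ns(gbps):7.0f} ns per 1500B packet,"
          f" {budget_ns(gbps, 64):6.1f} ns per 64B packet")

At 1 GigE there are about 12 microseconds per full-size packet; at 25 GigE, 
under 500 nanoseconds, and small packets are far worse. That's the budget a 
software qdisc on a cheap CPU has to live within.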
 
David
 
On Sunday, May 7, 2023 12:36pm, "Michael Richardson via Cerowrt-devel" 
 said:



> Dave Taht  wrote:
> > Well, I do not know if it is fq_codel capable. A lot of folk have been
> > missing out on adding BQL support of late, and as to whether it has the
> 
> I mean... at $119, if it can do fq_codel, then the price is totally
> acceptable :-)
> 
> > horsepower to do SQM at these rates is undetermined - I have generally
> > felt gobs of cache were required to do soft rate shaping.
> 
> > Secondly, you can always add a 3rd ethernet port via usb nowadays.
> 
> Yes, it's just less elegant if you want to have 5-6 of them in a lab.
> 


[Cerowrt-devel] Interesting

2023-04-16 Thread David P. Reed via Cerowrt-devel
AT&T Wireless traffic shaping apparently making some websites unusable - 
https://adriano.fyi/post/2023/2023-04-16-att-traffic-shaping-makes-websites-unusable/

Maybe Jason Livingood might want to comment (though as a Comcast exec, he 
probably won't point out an issue with ATT, another ISP)

I don't think it is "traffic shaping" per se; some observers on Hacker News 
have suggested it is outright discrimination against Netflix IP addresses, 
because ATT wants to slow "video streaming".

This is a practice that, back when I testified at the Harvard FCC hearing on 
Network Management, we got the FCC to say was bad, when Comcast was caught 
selectively slowing traffic (by injecting TCP reset packets using Sandvine DPI 
gear) because they had bufferbloat in their setup, and their management was 
clueless about why heavy users of BitTorrent had terrible experiences, and said 
it was piracy that caused the lag under load!

Anyway, given the clueless idiots on the Starlink discussion, ATT might be 
thinking they can block Netflix... 



Re: [Cerowrt-devel] 2.5gbit for $59

2022-06-02 Thread David P. Reed

There are small, low-TDP Intel systems for up to ~$250 or so (including case) 
that use current-generation Celerons with four 2.5 GigE ports, and with the I/O 
bandwidth to easily support a full-on router at wirespeed on those ports.
 
I'm thinking of upgrading my entry-router (which is based on Fedora Server 36 
now, not Cerowrt, just because that's my general go-to distro on x86_64 and 
Aarch64) from an old Celeron system with two full speed 1 GigE ports to 2.5 
GigE, in advance of my expectation that 2.5 GigE DOCSIS 3.1 will become cheap 
enough soon at my home.
 
The problem with the low-end boards is that you need enough PCIe lanes to move 
packets at 10 Gb/sec bidirectionally. The ARM chips on them may be fast enough 
in principle, but the board and the PCIe are a bottleneck.
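 
The lane math is easy to sanity-check (a sketch; generation speeds and line 
codes are from the PCIe specs, protocol overhead is ignored, and note that each 
PCIe lane is already full duplex):

import math

GT_PER_LANE = {"gen2": 5.0, "gen3": 8.0}        # GT/s per lane
ENCODING = {"gen2": 8 / 10, "gen3": 128 / 130}  # line-code efficiency

def usable_gbps_per_lane(gen):
    """Usable Gb/s per lane, per direction (lanes are full duplex)."""
    return GT_PER_LANE[gen] * ENCODING[gen]

need = 10.0  # Gb/s each way, per the paragraph above
for gen in ("gen2", "gen3"):
    per_lane = usable_gbps_per_lane(gen)
    print(f"PCIe {gen}: {per_lane:.2f} Gb/s/lane -> "
          f"{math.ceil(need / per_lane)} lane(s) for {need:.0f} Gb/s each way")

A single x1 link of either generation falls short of 10 Gb/s each way, which is 
exactly the kind of bottleneck described above.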
 
AliExpress sells such boards and also barebones, but prices and specs vary.
 
On Tuesday, May 31, 2022 8:05pm, "Dave Taht"  said:



> "LAN – 2x 2.5GbE RJ45 ports (via 2x Realtek RTL8125BG PCIe controller)
> tested up to 2.35 Gbps (Rx) and 1.85 Gbps (Tx)
> WAN – 1x Gigabit Ethernet RJ45 port (via Realtek RTL8211F) tested up
> to 941 Mbps (Tx and Rx)"
> 
> My guess is - none of these at the same time. Still... $59!
> 
> 
> https://www.cnx-software.com/2022/05/30/buy-nanopi-r5s-rockchip-rk3568-mini-router-sbc/
> 
> 
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] Minirouter with pi compute module 4

2022-05-19 Thread David P. Reed

Dave -
I'm certain that I have non-x86 devices that can forward more than a gbit/sec 
in both directions, if only because I have a very nice system based on an 
LX2160A ARM implementation that I use to forward 10 GigE traffic in both 
directions, though the packets are not tiny. It's not cheap, of course. I am 
running Fedora Server on it in my lab. The LX2160A carrier board comes from a 
company called SolidRun.
 
I do understand that many of the low end ARM boards (and maybe even the Pi4 
Compute Module) have driver/bus weaknesses.  Have you tried to understand what 
the issues are? Could be Linux kernel and driver issues specific to the GigE 
hardware. Though I'd assume that USB 3.1 would not inherently get in the way.
 
I currently use a Celeron board with two 1 GigE interfaces for my 1 Gig cable 
connection, running Linux configured according to my preferences. As you say, 
x86 boards and PCIe ethernet interfaces tend to be fine with two 1 GigE ports.
 
So maybe you want to be a bit more specific about what you mean by "device on 
the market"?
 
On Thursday, May 19, 2022 2:46pm, "Dave Taht"  said:



> I am sadly re-discovering there is not a single device on the market
> outside the x86 universe that can actually forward a gbit in both
> directions at the same time.
> 
> 
> On Thu, May 19, 2022 at 1:36 PM Matt Taggart  wrote:
> >
> > This looks like an interesting router candidate
> >
> >
> https://www.seeedstudio.com/Dual-GbE-Carrier-Board-with-4GB-RAM-32GB-eMMC-RPi-CM4-Case-p-5029.html
> >
> > Description says:
> > * one NIC is Broadcom BCM54210PE (from the CM4)
> > * the other is "Microchip's LAN7800" behind usb3
> > * 2 additional usb3 ports
> > * the usb3 uses the CM4's PCIe 2.0 x1 (500MB/s)
> > * wifi/BLE is the CM4's onboard, I think "Cypress CYW43455"?
> >
> > It sort of reminds me of the Espressobin device from a few years back,
> > but much faster and the pi has a much larger installed base, better
> > support, etc.
> >
> > --
> > Matt Taggart
> > m...@lackof.org
> 
> 
> 
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] I wish we had a good definition of mesh

2022-03-03 Thread David P. Reed

I have avoided using the term "mesh networks" for a reason. It's way too broad 
already. This also goes for "ad hoc networking" (which was supposedly what 
MANET was about, but the instigators didn't think very clearly about what the 
possibilities were - they focused on what "tactical warfighters" might use in 
the field only.)
 
Getting to the particulars quickly is far better than creating some kind of 
"umbrella" term that can be distorted and confused. Here's two problems with 
"mesh" as used by others.
 
1) It is often assumed that the nodes of a mesh are standardized and all the 
same. To me this focuses on the wrong problem - it's like talking about 10baseT 
networks as if they are distinct and necessary. Or 802.11. While I can imagine 
that in some ways it might be simpler to design a uniformly identical set of 
mesh hardware, the "layers on top" of such a platform are NOT uniform in their 
requirements. So why stultify what is only one of many underlying solutions?
 
2) It is often assumed that the "network is intelligent" unto itself. Often 
this leads to a design that puts a "control plane" into the mesh. There are 
reasons to have mesh node *protocols* coordinate, but only those things that 
are truly necessary. And those can't be predicted at design time, not at all.
 
That's why this "constellation project" seems to be of limited value. It 
reminds me of the ARPA BCR crypto based network security project. It was the 
*sole* investment in network security by ARPA at the time when Vint was the 
program manager. It wasn't a bad concept (but all the solution space was 
narrowed to NSA's beliefs at the time that the only crypto that could be used 
for security was a link layer protection with a sealed black box to which keys 
had to be periodically distributed. End-to-end security (which required 
software encryption and much more decentralized key distribution, among other 
things) was actually *ruled out* by ARPA as valuable. So BBN built the BCR 
boxes, spec'ed how to install them on, say, the Internet links, and may have 
even pointed out that inside routers the data was en clair, so this required 
all routers to be guarded to the level needed to protect the most critical 
communications. I've never understood deeply why my work and Steve Kent's and 
Roger Needham's and Mike Schroeder's work on end-to-end security in the context 
of the TCP protocol and UDP was not supported. Steve and I offered to include a 
complete solution for TCP that he had done with me involved a little bit, and 
we were told to stop working on it. (partly for national security reasons, but 
actually since it offended NSA's rules of thumb).

As a *research* project, learning to use lasers among LEO satellites seems 
appropriate, but this is NOT proposed as research. It seems like DoD thinks the 
problem is solved already. I am very sure it is NOT solved. It's like a grand 
project from the security sector a few years ago where I tried to help - the 
so-called National Cyber Range and its test platform (hardware and operating 
environment). Again, it should have started with R&D, but was instead put out 
for bid prematurely, as if a working system could be built and demonstrated. 
The idea proposed by the teams was to use lots of virtual machines in a cloud 
to simulate any and all government IT uses and any and all attacks. Now virtual 
machines are not real hardware, so simulating security vulnerabilities of real 
hardware in them is either a bear of a research problem yet to be solved, or 
you have to define the problems of real life (like Microsoft Windows running on 
real laptops throughout the world in DoD, Dept. of State, and even just the 
gear in the Pentagon) as not existing except to the extent that you can build 
models inside a cloud.
 
Anyway, that's a distraction. The USG wastes a hell of a lot of money with 
these kinds of boondoggles.
 
Anyway, if you have a bunch of satellites up there with lasers and are willing 
to plan to throw the first prototype away, you'll get really good results from 
smart engineers.
 
I suspect this project will show why Iridium, the first space "mesh", might 
best be viewed as what constitutes a "mesh" - and why it would be better to 
choose a new name entirely.
 
My term for what would be far more interesting (except it isn't focused on 
hardware at all) is a "near earth Internet". That is, a set of protocols and 
operating methods that don't specify the hardware at all, but which can 
incorporate in a scalable way a huge variety of hardware on the earth and in 
LEO and even on non-orbital space vehicles, assuming distances on the order of 
"light milliseconds" are achieved among the nodes.
 
But hey, that's what they did at Guifi and Porto. Mix and match and make 
something that is robustly independent of the underlying hardware.
 
Instead, yet another boondoggle. Let it be called a "mesh" because that sounds 
futuristic.
 
On Tuesday, March 1, 2022 9:47pm, "Dave Taht"  said:

Re: [Cerowrt-devel] 10gige and 2.5gige

2021-12-19 Thread David P. Reed

Leviton has wallplates for fiber, and the tools for fiber are cheaper than the 
tools for CAT6.
Pulling fiber through walls hasn't been a problem for me. No more than pulling 
CAT6.
 
I know I shouldn't kink or pull fiber hard. In the worst case, I pull light 
flexible conduit through walls with pull strings so I can add arbitrary numbers 
of fibers. This is good practice, anyway (for wires or fibers).
 
 
On Friday, December 17, 2021 3:18am, "Sebastian Moeller"  said:



> To add to Joel's point,
> 
> I can do my own catX cable runs and connect sockets/plugs to the cables, but I
> lack the tools for fiber splicing... as cool as that would be, it is going to
> be hard to justify multi-100s of EUR for a splicer. That still leaves short
> distances in the main computing area of an apartment/house, but I doubt that
> many consumers have a high enough concentration of gear to justify the costs
> even there.
> 
> What I do see over here in Europe, with FTTH roll-out speeding up, is CPE that
> offer SFP/SFP+ cages for the WAN side though, SFP+ becoming more common since
> ISPs started to deploy XGS-PON (gross 10 Gbps bidirectionally, after FEC ~8.5
> Gbps).
> 
> 
> Regards
> Sebastian
> 
> P.S.: I have not jumped on the 2.5 Gbps or higher train just yet; none of my
> devices seems massively underserved with just 1 Gbps (with the potential
> exception of a single link where >= 2 Gbps would be nice, since I am one cable
> short and > 2 Gbps would allow multiplexing two 1 Gbps connections over that
> cable).
> 
> 
> > On Dec 16, 2021, at 22:57, Joel Wirāmu Pauling 
> wrote:
> >
> > Yes, but as much as I like fibre, it's too fragile for the average household
> > structured cabling real-world use case. Not to mention nothing consumer comes
> > with SFP+ in the home space.
> >
> > On Fri, 17 Dec 2021, 10:43 am David Lang,  wrote:
> > another valuable feature of fiber for home use is that fiber can't
> > contribute to ground loops the way that copper cables can.
> >
> > and for the paranoid (like me :-) ) fiber also means that any electrical
> > disaster that happens to one end won't propagate through and fry other
> > equipment
> >
> > David Lang
> >
> > On Thu, 16 Dec 2021, David P. Reed wrote:
> >
> > > Thanks, That's good to know...The whole SFP+ adapter concept has seemed
> to me to be a "tweener" in hardware design space. Too many failure points. 
> That
> said, I like fiber's properties as a medium for distances.
> > >
> > >
> > > On Thursday, December 16, 2021 2:31pm, "Joel Wirāmu Pauling"
>  said:
> > >
> > >
> > >
> > >
> > > Heat issues you mention with UTP are gone with the 802.3bz stuff (i.e.
> > > NBASE-T).
> > > It was mostly due to the 10G-Base-T spec being old and out of line with
> the SFP+ spec ; which led to higher power consumption than SFP+ cages were 
> rated
> to draw and aforementioned heat problems; this is not a problem with newer 
> kit.
> > > It went away with the move to smaller silicon processes and now UTP
> based 10G in the home devices are more common and don't suffer from the 
> fragility
> issues of the earlier copper based 10G spec. The AQC chipsets were the first 
> to
> introduce it but most other vendors have finally picked it up after 5 years of
> foot-dragging.
> > >
> > >
> > > On Fri, Dec 17, 2021 at 7:16 AM David P. Reed <dpr...@deepplum.com> wrote:
> > > Yes, it's very cheap and getting cheaper.
> > >
> > > Since its price fell to the point I thought was cheap, my home has a 10
> GigE fiber backbone, 2 switches in my main centers of computers, lots of 10 
> GigE
> NICs in servers, and even dual 10 GigE adapters in a Thunderbolt 3 external
> adapter for my primary desktop, which is a Skull Canyon NUC.
> > >
> > > I strongly recommend people use fiber and sfp+ DAC cabling because
> twisted pair, while cheaper, actually is problematic at speeds above 1 Gig -
> mostly due to power and heat.
> > >
> > > BTW, it's worth pointing out that USB 3.1 can handle 10 Gb/sec, too, and
> USB-C connectors and cables can carry Thunderbolt at higher rates. Those 
> adapters
> are REALLY CHEAP. There's nothing inherently different about the electronics;
> if anything, USB 3.1 is more complicated logic than the ethernet MAC.
> > >
> > > So the reason 10 GigE is still far more expensive than USB 3.1 is mainly
> market volume - if 10 GigE were a consumer product, not a datacenter product,
> you'd think it would already be as cheap as USB 3.1 in computers and switches.

Re: [Cerowrt-devel] 10gige and 2.5gige

2021-12-16 Thread David P. Reed

Thanks, That's good to know...The whole SFP+ adapter concept has seemed to me 
to be a "tweener" in hardware design space. Too many failure points. That said, 
I like fiber's properties as a medium for distances.
 
 
On Thursday, December 16, 2021 2:31pm, "Joel Wirāmu Pauling" 
 said:




Heat issues you mention with UTP are gone with the 802.3bz stuff (i.e. 
NBASE-T).
It was mostly due to the 10G-Base-T spec being old and out of line with the 
SFP+ spec ; which led to higher power consumption than SFP+ cages were rated to 
draw and aforementioned heat problems; this is not a problem with newer kit.
It went away with the move to smaller silicon processes and now UTP based 10G 
in the home devices are more common and don't suffer from the fragility issues 
of the earlier copper based 10G spec. The AQC chipsets were the first to 
introduce it but most other vendors have finally picked it up after 5 years of 
foot-dragging.


On Fri, Dec 17, 2021 at 7:16 AM David P. Reed <dpr...@deepplum.com> wrote:
Yes, it's very cheap and getting cheaper.
 
Since its price fell to the point I thought was cheap, my home has a 10 GigE 
fiber backbone, 2 switches in my main centers of computers, lots of 10 GigE 
NICs in servers, and even dual 10 GigE adapters in a Thunderbolt 3 external 
adapter for my primary desktop, which is a Skull Canyon NUC.
 
I strongly recommend people use fiber and sfp+ DAC cabling because twisted 
pair, while cheaper, actually is problematic at speeds above 1 Gig - mostly due 
to power and heat.
 
BTW, it's worth pointing out that USB 3.1 can handle 10 Gb/sec, too, and USB-C 
connectors and cables can carry Thunderbolt at higher rates.  Those adapters 
are REALLY CHEAP. There's nothing inherently different about the electronics; 
if anything, USB 3.1 is more complicated logic than the ethernet MAC.
 
So the reason 10 GigE is still far more expensive than USB 3.1 is mainly market 
volume - if 10 GigE were a consumer product, not a datacenter product, you'd 
think it would already be as cheap as USB 3.1 in computers and switches.
 
Since DOCSIS can support up to 5 Gb/s, I think, when will Internet Access 
Providers start offering "Cable Modems" that support customers who want more 
than "a full Gig"? Given all the current DOCSIS 3 CMTS's etc. out there, it's 
just a configuration change. 
 
So when will consumer "routers" support 5 Gig, 10 Gig?
 
On Thursday, December 16, 2021 11:20am, "Dave Taht" <dave.t...@gmail.com> said:



> has really got cheap.
> 
> https://www.tomshardware.com/news/innodisk-m2-2280-10gbe-adapter
> 
> On the other hand users are reporting issues with actually using
> 2.5GbE cabling with this router in particular, halving the achieved rate
> when negotiating 2.5gbit vs negotiating 1gbit.
> 
> https://forum.mikrotik.com/viewtopic.php?t=179145#p897836
> 
> 
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
> 
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] 10gige and 2.5gige

2021-12-16 Thread David P. Reed

Yes, it's very cheap and getting cheaper.
 
Since its price fell to the point I thought was cheap, my home has a 10 GigE 
fiber backbone, 2 switches in my main centers of computers, lots of 10 GigE 
NICs in servers, and even dual 10 GigE adapters in a Thunderbolt 3 external 
adapter for my primary desktop, which is a Skull Canyon NUC.
 
I strongly recommend people use fiber and sfp+ DAC cabling because twisted 
pair, while cheaper, actually is problematic at speeds above 1 Gig - mostly due 
to power and heat.
 
BTW, it's worth pointing out that USB 3.1 can handle 10 Gb/sec, too, and USB-C 
connectors and cables can carry Thunderbolt at higher rates.  Those adapters 
are REALLY CHEAP. There's nothing inherently different about the electronics; 
if anything, USB 3.1 is more complicated logic than the ethernet MAC.
 
So the reason 10 GigE is still far more expensive than USB 3.1 is mainly market 
volume - if 10 GigE were a consumer product, not a datacenter product, you'd 
think it would already be as cheap as USB 3.1 in computers and switches.
 
Since DOCSIS can support up to 5 Gb/s, I think, when will Internet Access 
Providers start offering "Cable Modems" that support customers who want more 
than "a full Gig"? Given all the current DOCSIS 3 CMTS's etc. out there, it's 
just a configuration change. 
 
So when will consumer "routers" support 5 Gig, 10 Gig?
 
On Thursday, December 16, 2021 11:20am, "Dave Taht"  said:



> has really got cheap.
> 
> https://www.tomshardware.com/news/innodisk-m2-2280-10gbe-adapter
> 
> On the other hand users are reporting issues with actually using
> 2.5GbE cabling with this router in particular, halving the achieved rate
> when negotiating 2.5gbit vs negotiating 1gbit.
> 
> https://forum.mikrotik.com/viewtopic.php?t=179145#p897836
> 
> 
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
> 
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] uplink bufferbloat and scheduling problems

2021-12-01 Thread David P. Reed

What's the difference between uplink and downlink? In DOCSIS the rate 
asymmetry was the issue. But in WiFi, the air interface is completely symmetric 
(802.11ax, though, maybe not, because of central polling).
 
In any CSMA link (WiFi), there is no "up" or "down". There is only sender and 
receiver, and each station and the AP are always doing both.
 
The problem with shared media links is that the "waiting queue" is distributed, 
so to manage queue depth, ALL of the potential senders must respond 
aggressively to excess packets.
 
This is why a lot (maybe all) of the silicon vendors are making really bad 
choices w.r.t. bufferbloat by adding buffering in the transmitter chip itself, 
and not discarding or marking when queues build up. It's the same thing that 
constantly leads hardware guys to think that more memory for buffers improves 
throughput - when it only improves advertised throughput.
 
To say it again: More memory *doesn't* improve throughput when the queue depths 
exceed one packet on average, and it degrades "goodput" at higher levels by 
causing the ultimate sender to "give up" due to long latency. (At the extreme, 
users will just click again on a slow URL, causing all the throughput to be 
"badput", because they force the system to transmit it again, while leaving 
packets clogging the queues.)
 
So, if you want good performance on a shared radio medium, you need to squish 
each flow's queue depth down from sender to receiver to "average < 1 in queue", 
and also drop packets when there are too many simultaneous flows competing for 
airtime. And if your source process can't schedule itself frequently enough, 
don't expect the network to replace buffering at the TCP source and destination 
- it is not intended to be a storage system.
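 
A toy simulation of the point above (a sketch, not a model of any real radio or 
NIC: Poisson arrivals at 110% load into a tail-drop FIFO with fixed service 
time):

import random

def run(buffer_pkts, load=1.1, n=200_000, seed=1):
    """Tail-drop FIFO at a fixed-rate link; returns (throughput, mean delay)."""
    random.seed(seed)
    service = 1.0        # time units to transmit one packet
    busy_until = 0.0     # when the link drains its current backlog
    t = 0.0
    sent = 0
    delay_sum = 0.0
    for _ in range(n):
        t += random.expovariate(load / service)          # next arrival
        backlog_pkts = max(0.0, busy_until - t) / service
        if backlog_pkts >= buffer_pkts:
            continue                                     # tail drop
        busy_until = max(busy_until, t) + service
        delay_sum += busy_until - t                      # sojourn time
        sent += 1
    return sent / t, delay_sum / sent

for b in (1, 10, 100, 1000):
    thr, delay = run(b)
    print(f"buffer={b:5d} pkts  throughput={thr:.3f}  mean delay={delay:8.1f}")

Throughput is pinned near one packet per service time for any buffer much past 
a single packet, while the delay scales with the buffer. That is bufferbloat in 
a few lines of arithmetic.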
 
 
 
On Tuesday, November 30, 2021 7:13pm, "Dave Taht"  said:



> Money quote: "Figure 2a is a good argument to focus latency
> research work on downlink bufferbloat."
> 
> It peaked at 1.6s in their test:
> https://hal.archives-ouvertes.fr/hal-03420681/document
> 
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
> 
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] risc-v options?

2021-11-30 Thread David P. Reed

For what? I have recently gotten a MicroSemi RISC-V SoC board with an embedded 
FPGA (or maybe it is better thought of as an FPGA board with a multicore 
hard-logic RISC-V host). It runs Linux very fast. It's not set up to be a 
router, though - not unless I populate its PCIe slot with NICs. Standard Linux 
drivers for PCIe devices all work quite well, so far.
 
These early 64 bit RISC-V implementations are pretty darn good, but unlike 
Intel's Xeons, they don't yet handle memory channel performance very well.
 
(The 32-bit RISC-V's are really competing with ARM based microcontrollers for 
embedded systems. I don't find them interesting, though I have a couple sample 
boards with 32 bit RISC-V cores).
 
A random guess on my part: even consumer routers will be moving to 64-bit 
processor designs in the next couple years. That's because the price difference 
is getting quite small, as a percentage of total product cost, and because it 
is hard to buy "small memory address space" DIMMs. I could be wrong, but 
extrapolation from today's trends suggests that is more likely than not.
 
 
On Friday, November 26, 2021 3:02pm, "Dave Taht"  said:



> has anyone tried the latest generations of risc-v?
> 
> https://linuxgizmos.com/17-sbc-runs-linux-on-allwinner-d1-risc-v-soc/
> 
> --
> I tried to build a better future, a few times:
> https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org
> 
> Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] [Starlink] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-09-26 Thread David P. Reed

Pretty good list, thanks for putting this together.
 
The only thing I'd add, and I'm not able to formulate it very elegantly, is 
this personal insight - one that I would research, because it can be a LOT more 
useful in the end-to-end control loop than stuff like ECN, L4S, RED, ...
 
Fact: Detecting congestion by allowing a queue to build up is a very lagging 
indicator of incipient congestion in the forwarding system. The delay added to 
all paths by that queue buildup slows down the control loop's ability to 
respond by slowing the sources. It's the control loop delay that creates both 
instability and continued congestion growth.
Observation: current forwarders forget what they have forwarded as soon as it 
is transmitted. This loses all the information about incipient congestion and 
"fairness" among multiple sources. Yet, there is no need to forget recent 
history at all after the packets have been transmitted.
 
An idea I keep proposing is the idea of remembering the last K seconds of 
packets, their flow ids (source and destination), the arrival time and 
departure time, and their channel occupancy on the outbound shared link. Then 
using this information to reflect incipient congestion information to the flows 
that need controlling, to be used in their control loops.
 
So far, no one has taken me up on doing the research to try this in the field. 
Note: the signalling can be simple (sending ECN flags on all flows that transit 
the queue, even though there is no backlog yet, when the queue is empty but 
transient overload seems likely), but the key thing is that we already assume 
that recent history of packets is predictive of future overflow.
This can be implemented locally on any routing path that tends to be a 
bottleneck link, such as the uplink of a home network. It should work with TCP 
as-is if the signalling causes window reduction (at first, just signal by 
dropping packets prematurely, but if TCP will handle ECN aggressively - a 
single ECN mark causing window reduction - then it will help that, too).
 
The insight is that from an "information and control theory" perspective, the 
packets that have already been forwarded are incredibly valuable for congestion 
prediction.
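 
To make the proposal concrete, here is a minimal sketch (my own naming and 
thresholds, invented purely for illustration - not an implementation of any 
existing qdisc):

import time
from collections import deque, defaultdict

class FlowHistory:
    """Sliding window over already-forwarded packets, per the idea above."""

    def __init__(self, window_s=2.0, link_bps=1e9):
        self.window_s = window_s
        self.link_bps = link_bps
        self.pkts = deque()          # (departure_time, flow_id, nbytes)

    def record(self, flow_id, nbytes, departure=None):
        """Call on every packet departure; a cheap append."""
        self.pkts.append((departure or time.monotonic(), flow_id, nbytes))

    def flows_to_signal(self, now=None, occupancy_thresh=0.8):
        """Flows to ECN-mark (or early-drop), even if the queue is empty now."""
        now = now or time.monotonic()
        while self.pkts and self.pkts[0][0] < now - self.window_s:
            self.pkts.popleft()                  # forget beyond K seconds
        if not self.pkts:
            return []
        airtime = defaultdict(float)             # seconds of link time per flow
        for _, flow, nbytes in self.pkts:
            airtime[flow] += nbytes * 8 / self.link_bps
        total = sum(airtime.values())
        if total < occupancy_thresh * self.window_s:
            return []                            # no incipient congestion
        fair = total / len(airtime)
        return [f for f, s in airtime.items() if s > fair]

The forwarding path calls record() on each departure; the signalling decision 
consults only recent history, so it can fire before a standing queue exists.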
 
Please, if possible, if anyone actually works on this and publishes, give me 
credit for suggesting this.
Just because I've been suggesting it for about 15 years now, and been ignored. 
It would be a mitzvah.
 
 
On Thursday, September 23, 2021 1:46pm, "Bob McMahon" 
 said:



Hi All,
I do appreciate this thread as well. As a test & measurement guy, here are my 
conclusions around network performance. Thanks in advance for any comments.

Congestion can be mitigated in the following ways:
o) Size queues properly to minimize/negate bloat (easier said than done with 
tech like WiFi)
o) Use faster links on the service side such that a queue's service rate 
exceeds the arrival rate; no congestion even in bursts, if possible
o) Drop entries during oversubscribed states (queue processing can't "speed up" 
like water flow through a constricted pipe, must drop)
o) Identify aggressor flows per congestion if possible
o) Forwarding planes can signal back to the sources "earlier" to minimize 
queue build-ups per a "control loop request" asking sources to pace their writes
o) transport layers use techniques a la BBR
o) Use "home gateways" that support tech like FQ_CODEL
Latency can be mitigated in the following ways:
o) Mitigate or eliminate congestion, particularly around queueing delays
o) End host apps can use TCP_NOTSENT_LOWAT along with write()/select() to 
reduce host sends of "better never than late" messages (see the sketch after 
this list)
o) Move servers closer to the clients per fundamental limit of the speed of 
light (i.e. propagation delay of energy over the wave guides), a la CDNs
(Except if you're a HFT, separate servers across geography and make sure to 
have exclusive user rights over the lowest latency links)
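 
The TCP_NOTSENT_LOWAT item above, sketched in Python (Linux-specific; the 
endpoint is hypothetical, and the fallback constant 25 is the Linux value of 
TCP_NOTSENT_LOWAT):

import select
import socket

# Fall back to the Linux value (25) if this Python build doesn't expose it.
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Cap unsent bytes buffered in the kernel at 16 KB.
sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 16 * 1024)
sock.connect(("example.net", 9000))   # hypothetical endpoint
sock.setblocking(False)

def send_freshest(make_message):
    # select() reports writable only while the unsent backlog is below the
    # low-water mark, so each write carries fresh data instead of queueing
    # "better never than late" bytes behind stale ones.
    _, writable, _ = select.select([], [sock], [], 1.0)
    if writable:
        sock.send(make_message())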

Transport control loop(s)
o) Transport layer control loops are non-linear systems, so network tooling 
will struggle to emulate "end user experience"
o) 1/2 RTT does not equal the OWD used to compute the bandwidth delay product; 
imbalance and effects need to be measured
o) Forwarding planes signaling congestion to sources wasn't designed into TCP 
originally, but the industry trend seems to be moving towards this per things 
like L4S
Photons, radio & antenna design
o) Find experts who have experience & knowledge, e.g. many do here
o) Photons don't really have mass nor size, at least per my limited 
understanding of particle physics and QED, which, I must admit, came from 
reading things on the internet

Bob


On Mon, Sep 20, 2021 at 7:40 PM Vint Cerf <v...@google.com> wrote:
see https://mediatrust.com/
v


On Mon, Sep 20, 2021 at 10:28 AM Steve Crocker <st...@shinkuro.com> wrote:

Related but slightly different: Attached is a slide some of my colleag

Re: [Cerowrt-devel] [Cake] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-09-20 Thread David P. Reed

The top posting may be confusing, but "the example" here is the example of the 
>100 TCP destinations and dozens of DNS queries that are needed (unless cached) 
to display the front page of CNN today.
That's "one website" home page. If you look at the JavaScript resource loading 
code, and now the "service worker" javascript code, the idea that it is like 
fetching a file using FTP is just wrong. Do NANOG members understand this? I 
doubt it.
 
On Monday, September 20, 2021 5:30pm, "David P. Reed"  
said:



I use the example all the time, but not for interviewing. What's sad is that 
the answers seem to be quoting from some set of textbooks or popular 
explanations of the Internet that really have got it all wrong, but which many 
professionals seem to believe is true.
 
The same phenomenon appears in the various subfields of the design of radio 
communications at the physical and front end electronics level. The examples of 
mental models that are truly broken that are repeated by "experts" are truly 
incredible, and cover all fields. Two or three:
 
1. why do the AM commercial broadcast band (540-1600 kHz) signals you receive 
in your home travel farther than VHF band TV signals and UHF band TV signals?  
How does this explanation relate to the fact that we can see stars a million 
light-years away using receivers that respond to 500 Terahertz radio (visible 
light antennas)?
 
2. What is the "aperture" of an antenna system? Does it depend on frequency of 
the radiation? How does this relate to the idea of the size of an RF photon, 
and the mass of an RF photon? How big must a cellphone be to contain the 
antenna needed to receive and transmit signals in the 3G phone frequencies?
 
3. We can digitize the entire FM broadcast frequency band into a sequence of 
14-bit digital samples at the Nyquist sampling rate of about 40 Mega-samples 
per second, which covers the 20 MHz bandwidth of the FM band. Does this allow a 
receiver to use a digital receiver to tune into any FM station that can be 
received with an "analog FM radio" using the same antenna? Why or why not?
 
I'm sure Dick Roy understands all three of these questions, and what is going 
on. But I'm equally sure that the designers of WiFi radios or broadcast radios 
or even the base stations of cellular data systems include few who understand.
 
And literally no one at the FCC or CTIA understands how to answer these 
questions.  But the problem is that they are *confident* that they know the 
answers, and that they are right.
 
The same is true about the packet layers and routing layers of the Internet. 
Very few engineers, much less lay people realize that what they have been told 
by "experts" is like how Einstein explained how radio works to a teenaged kid:
 
  "Imagine a cat whose tail is in New York and his head is in Los Angeles. If 
you pinch his tail in NY, he howls in Los Angeles. Except there is no cat."
 
Though others have missed it, Einstein was not making a joke. The non-cat is 
the laws of quantum electrodynamics (or classically, the laws of Maxwell's 
Equations). The "cat" would be all the stories people talk about how radio 
works - beams of energy (or puffs of energy), modulated by some analog 
waveform, bouncing off of hard materials, going through less dense materials, 
"hugging the ground", "far field" and "near field" effects, etc.
 
Einstein's point was that there is no cat - that is, all the metaphors and 
models aren't accurate or equivalent to how radio actually works. But the 
underlying physical phenomenon supporting radio is real, and scientists do 
understand it pretty deeply.
 
Same with how packet networks work. There are no "streams" that behave like 
water in pipes; the connection you have to a shared network has no "speed" in 
megabits per second built into it; a "website" isn't coming from one place in 
the world; and bits don't have inherent meaning.
 
There is NO CAT (not even a metaphorical one that behaves like the Internet 
actually works).
 
But in the case of the Internet, unlike radio communications, there is no deep 
mystery that requires new discoveries to understand it, because it's been built 
by humans. We don't need metaphors like "streams of water" or "sites in a 
place". We do it a disservice by making up these metaphors, which are only apt 
in a narrow context.
 
For example, congestion in a shared network is just unnecessary queuing delay 
caused by multiplexing the capacity of a particular link among different users. 
It can be cured by slowing down all the different packet sources in some more 
or less fair way. The simplest approach is just to discard from the queue 
excess packets that make that queue longer than can fit through the link. Then 
there can't be any congestion.

Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-09-20 Thread David P. Reed

I use the example all the time, but not for interviewing. What's sad is that 
the answers seem to be quoting from some set of textbooks or popular 
explanations of the Internet that really have got it all wrong, but which many 
professionals seem to believe is true.
 
The same phenomenon appears in the various subfields of the design of radio 
communications at the physical and front end electronics level. The examples of 
mental models that are truly broken that are repeated by "experts" are truly 
incredible, and cover all fields. Two or three:
 
1. why do the AM commercial broadcast band (540-1600 kHz) signals you receive 
in your home travel farther than VHF band TV signals and UHF band TV signals?  
How does this explanation relate to the fact that we can see stars a million 
light-years away using receivers that respond to 500 Terahertz radio (visible 
light antennas)?
 
2. What is the "aperture" of an antenna system? Does it depend on frequency of 
the radiation? How does this relate to the idea of the size of an RF photon, 
and the mass of an RF photon? How big must a cellphone be to contain the 
antenna needed to receive and transmit signals in the 3G phone frequencies?
 
3. We can digitize the entire FM broadcast frequency band into a sequence of 
14-bit digital samples at the Nyquist sampling rate of about 40 Mega-samples 
per second, which covers the 20 MHz bandwidth of the FM band. Does this allow a 
receiver to use a digital receiver to tune into any FM station that can be 
received with an "analog FM radio" using the same antenna? Why or why not?
 
I'm sure Dick Roy understands all three of these questions, and what is going 
on. But I'm equally sure that the designers of WiFi radios or broadcast radios 
or even the base stations of cellular data systems include few who understand.
 
And literally no one at the FCC or CTIA understands how to answer these 
questions.  But the problem is that they are *confident* that they know the 
answers, and that they are right.
 
The same is true about the packet layers and routing layers of the Internet. 
Very few engineers, much less lay people realize that what they have been told 
by "experts" is like how Einstein explained how radio works to a teenaged kid:
 
  "Imagine a cat whose tail is in New York and his head is in Los Angeles. If 
you pinch his tail in NY, he howls in Los Angeles. Except there is no cat."
 
Though others have missed it, Einstein was not making a joke. The non-cat is 
the laws of quantum electrodynamics (or classically, the laws of Maxwell's 
Equations). The "cat" would be all the stories people talk about how radio 
works - beams of energy (or puffs of energy), modulated by some analog 
waveform, bouncing off of hard materials, going through less dense materials, 
"hugging the ground", "far field" and "near field" effects, etc.
 
Einstein's point was that there is no cat - that is, all the metaphors and 
models aren't accurate or equivalent to how radio actually works. But the 
underlying physical phenomenon supporting radio is real, and scientists do 
understand it pretty deeply.
 
Same with how packet networks work. There are no "streams" that behave like 
water in pipes; the connection you have to a shared network has no "speed" in 
megabits per second built into it; a "website" isn't coming from one place in 
the world; and bits don't have inherent meaning.
 
There is NO CAT (not even a metaphorical one that behaves like the Internet 
actually works).
 
But in the case of the Internet, unlike radio communications, there is no deep 
mystery that requires new discoveries to understand it, because it's been built 
by humans. We don't need metaphors like "streams of water" or "sites in a 
place". We do it a disservice by making up these metaphors, which are only apt 
in a narrow context.
 
For example, congestion in a shared network is just unnecessary queuing delay 
caused by multiplexing the capacity of a particular link among different users. 
It can be cured by slowing down all the different packet sources in some more 
or less fair way. The simplest approach is just to discard from the queue 
excess packets that make that queue longer than can fit through the link. Then 
there can't be any congestion. However, telling the sources to slow down 
somehow would be an improvement, hopefully before any discards are needed.
 
There is no "back pressure", because there is no "pressure" at all in a packet 
network. There are just queues and links that empty queues of packets at a 
certain rate. Thinking about back pressure comes from thinking about sessions 
and pipes. But 90% of the Internet has no sessions and no pipes. Just as there 
is "no cat" in real radio systems.
 
On Monday, September 20, 2021 12:09am, "David Lang"  said:



> On Mon, 20 Sep 2021, Valdis Klētnieks wrote:
> 
> > On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:
> >> what actually happens during a web page load,
> >
> > I'm pretty sure that nobody actually

Re: [Cerowrt-devel] [Bloat] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-09-03 Thread David P. Reed

Regarding "only needs to be solved ... high density" - Musk has gone on record 
as saying that Starlink probably will never support dense subscriber areas. 
Which of course contradicts many other statements by Starlink and Starfans that 
they can scale up to full coverage of the world. My point in this regard is 
that "armchair theorizing" is not going to discover how scalable Starlink 
technology (or LEO technology) can be, because there are many, many physical 
factors besides constellation size that will likely limit scaling.
 
It really does bug me that Musk and crew have promised very low latency as a 
definite feature of Starlink, but then couldn't seem to even bother to get 
congestion control in their early trial deployments.
That one should be solvable.
 
But they are declaring victory and claiming they have solved every problem, so 
they should get FCC permission to roll out more of their unproven technology, 
right now. Reminds me of ATT deploying the iPhone. As soon as it stopped 
working very well after the early raving reviews from early adopters, ATT's top 
technology guy (John Donovan) went on a full-on rampage against Apple for 
having a "defective product", when in fact it was ATT's HSPA network that was 
getting severely congested due to its extreme bufferbloat design. (It wasn't 
ATT, it was actually Alcatel-Lucent that did the terrible design, but ATT 
continued to blame Apple.)
 
Since some on this list want to believe that Starlink is the savior, but others 
are technically wise, I'm not sure where the discussion will go. I hope that 
there will be some feedback to Starlink rather than just a fan club or 
user-support group.
 
 
On Friday, September 3, 2021 10:35am, "Matt Mathis"  
said:



I am very wary of a generalization of this problem: software engineers who 
believe that they can code around arbitrary idiosyncrasies of network hardware. 
They often succeed, but generally at a severe performance penalty.
How much do we know about the actual hardware? As far as I understand the math, 
some of the prime calculations used in Machine Learning are isomorphic to 
multidimensional correlators and convolutions, which are the same computations 
as needed to do phased array beam steering. One can imagine scenarios where 
Tesla (plans to) substantially overbuild the computational HW by recycling some 
ML technology, and then beefing up the SW over time as they better understand 
reality.
Also note that the problem really only needs to be solved in areas where they 
will eventually have high density. Most of the early deployment will never 
have this problem.









Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay

We must not tolerate intolerance;
   however our response must be carefully measured: 
too strong would be hypocritical and risks spiraling out of control;
too weak risks being mistaken for tacit approval.


On Thu, Sep 2, 2021 at 10:36 AM David P. Reed <dpr...@deepplum.com> wrote:
I just want to thank Dick Roy for backing up the arguments I've been making 
about physical RF communications for many years, and clarifying terminology 
here. I'm not the expert - Dick is an expert with real practical and 
theoretical experience - but what I've found over the years is that many who 
consider themselves "experts" say things that are actually nonsense about radio 
systems.
 
It seems to me that Starlink is based on a propagation model that is quite 
simplistic, and probably far enough from correct that what seems "obvious" will 
turn out not to be true. That doesn't stop Musk and cronies from asserting 
these things as absolute truths (backed by actual professors, especially 
professors of Economics like Coase, but also CS professors, network protocol 
experts, etc. who aren't physicists or practicing RF engineers).
 
The fact is that we don't really know how to build a scalable LEO system. 
Models can be useful, but a model can be a trap that causes even engineers to 
be cocky. Or as the saying goes, a Clear View doesn't mean a Short Distance.
 
If there are 40 satellites serving 10,000 ground terminals simultaneously, 
exactly what is the propagation environment like? I can tell you one thing: if 
the phased array is digitized at some sample rate and some equalization and 
some quantization, the propagation REALLY matters in serving those 10,000 
ground terminals scattered randomly on terrain that is not optically flat and 
not fully absorbent.
 
So how will Starlink scale? I think we literally don't know. And the modeling 
matters.
 
Recently a real propagation expert (Ted Rappaport and his students) did a study 
of how well 70 GHz RF signals propagate in an urban environment - Brooklyn. 
The standard model would say that coverage would be terrible!

Re: [Cerowrt-devel] [Cake] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-09-02 Thread David P. Reed

I just want to thank Dick Roy for backing up the arguments I've been making 
about physical RF communications for many years, and clarifying terminology 
here. I'm not the expert - Dick is an expert with real practical and 
theoretical experience - but what I've found over the years is that many who 
consider themselves "experts" say things that are actually nonsense about radio 
systems.
 
It seems to me that Starlink is based on a propagation model that is quite 
simplistic, and probably far enough from correct that what seems "obvious" will 
turn out not to be true. That doesn't stop Musk and cronies from asserting 
these things as absolute truths (backed by actual professors, especially 
professors of Economics like Coase, but also CS professors, network protocol 
experts, etc. who aren't physicists or practicing RF engineers).
 
The fact is that we don't really know how to build a scalable LEO system. 
Models can be useful, but a model can be a trap that causes even engineers to 
be cocky. Or as the saying goes, a Clear View doesn't mean a Short Distance.
 
If there are 40 satellites serving 10,000 ground terminals simultaneously, 
exactly what is the propagation environment like? I can tell you one thing: if 
the phased array is digitized at some sample rate and some equalization and 
some quantization, the propagation REALLY matters in serving those 10,000 
ground terminals scattered randomly on terrain that is not optically flat and 
not fully absorbent.
 
So how will Starlink scale? I think we literally don't know. And the modeling 
matters.
 
Recently a real propagation expert (Ted Rappaport and his students) did a study 
of how well 70 GHz RF signals propagate in an urban environment - Brooklyn. 
The standard model would say that coverage would be terrible! Why? Because 
supposedly 70 GHz is like visible light - line of sight is required or nothing 
works.
 
But in fact, Ted, whom I've known from being on the FCC Technological Advisory 
Committee (TAC) together when it was actually populated with engineers and 
scientists, not lobbyists, discovered that scattering and diffraction at 70 GHz 
in an urban environment significantly expands coverage of a single transmitter. 
Remarkably so. Enough that "cellular architecture" doesn't make sense in that 
propagation environment.
 
So all the professional experts are starting from the wrong place, and amateurs 
perhaps even more so.
 
I hope Starlink views itself as a "research project". I'm afraid it doesn't - 
partly driven by Musk, but equally driven by the FCC itself, which demands that 
before a system is deployed the entire plan be shown to work (which would 
require a "model" that is actually unknowable, because something like this has 
never been tried). This is a problem with today's regulation of spectrum - 
experiments are barred, both by law, and by competitors who can claim your 
system will destroy theirs and not work.
 
But it is also a problem when "fans" start setting expectations way too high. 
Like claiming that Starlink will eliminate any need for fiber. We don't know 
that at all!
 
 
 
 
 
 
 
On Tuesday, August 10, 2021 2:11pm, "Dick Roy"  said:




To add a bit more, as is easily seen below, the amplitudes of each of the 
transfer functions between the three transmit and three receive antennas are 
extremely similar.  This is to be expected, of course, since the “aperture” of 
each array is very small compared to the distance between them.  What is much 
more interesting and revealing is the relative phases.  Obviously this requires 
coherent receivers, and ultimately, if you want to control the spatial 
distribution of power (aka SDMA, or MIMO in some circles), coherent 
transmitters. It turns out that just knowing the amplitude of the transfer 
functions is not really all that useful for anything other than detecting a 
broken solder joint :^)))
 
Also, do not forget that depending on how these experiments were conducted, the 
estimates are either of the RF channel itself (aka path loss), or of the RF 
channel in combination with the transfer functions of the transmitters and/or 
receivers. What this means is the CALIBRATION is CRUCIAL! Those who do not 
calibrate are doomed to fail. I suspect that it is in calibration where 
the major difference in performance between vendors' products can be found 
:^)))
 
It’s complicated … 
 


From: Bob McMahon [mailto:bob.mcma...@broadcom.com] 
Sent: Tuesday, August 10, 2021 10:07 AM
To: dick...@alum.mit.edu
Cc: Rodney W. Grimes; Cake List; Make-Wifi-fast; 
starl...@lists.bufferbloat.net; codel; cerowrt-devel; bloat
Subject: Re: [Starlink] [Cake] [Make-wifi-fast] [Cerowrt-devel] Due Aug 2: 
Internet Quality workshop CFP for the internet architecture board
 

The slides show that for WiFi every transmission produces a complex frequency 
response, aka the h-matrix. This is valid for that one transmission only.  The 
slides show an amplitude plot for a 3 radio device, hence the 9 elements

Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-07-13 Thread David P. Reed
Without actually knowing how resources are shared (fair share as in
>> WiFi, FIFO as nearly everywhere else) it becomes very difficult to
>> interpret the results or provide a proper argument on latency. You are
>> right - TCP stats are a proxy for user experience but I believe they are
>> difficult to reproduce (we are always talking about very short TCP flows -
>> the infinite TCP flow that converges to a steady behavior is purely
>> academic).
>>
>> By the way, Little's law is a strong tool when it comes to averages. To be
>> able to say more (e.g. 1% of the delays is larger than x) one requires more
>> information (e.g. the traffic - On-OFF pattern) see [1].  I am not sure
>> when does such information readily exist.
>>
>> Best
>> Amr
>>
>> [1] https://dl.acm.org/doi/10.1145/3341617.3326146 or if behind a paywall
>> https://www.dcs.warwick.ac.uk/~florin/lib/sigmet19b.pdf
>>
>> 
>> Amr Rizk (amr.r...@uni-due.de)
>> University of Duisburg-Essen
>>
>> -Original Message-
>> From: Bloat  On Behalf Of Ben Greear
>> Sent: Monday, July 12, 2021 22:32
>> To: Bob McMahon 
>> Cc: starl...@lists.bufferbloat.net; Make-Wifi-fast <
>> make-wifi-f...@lists.bufferbloat.net>; Leonard Kleinrock ;
>> David P. Reed ; Cake List ;
>> co...@lists.bufferbloat.net; cerowrt-devel <
>> cerowrt-devel@lists.bufferbloat.net>; bloat 
>> Subject: Re: [Bloat] Little's Law mea culpa, but not invalidating my main
>> point
>>
>> UDP is better for getting actual packet latency, for sure.  TCP is
>> typical-user-experience-latency though, so it is also useful.
>>
>> I'm interested in the test and visualization side of this.  If there were
>> a way to give engineers a good real-time look at a complex real-world
>> network, then they have something to go on while trying to tune various
>> knobs in their network to improve it.
>>
>> I'll let others try to figure out how build and tune the knobs, but the
>> data acquisition and visualization is something we might try to
>> accomplish.  I have a feeling I'm not the first person to think of this,
>> however... probably someone already has done such a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> > I believe end host's TCP stats are insufficient as seen per the
>> > "failed" congested control mechanisms over the last decades. I think
>> > Jaffe pointed this out in
>> > 1979 though he was using what's been deemed on this thread as "spherical
>> cow queueing theory."
>> >
>> > "Flow control in store-and-forward computer networks is appropriate
>> > for decentralized execution. A formal description of a class of
>> > "decentralized flow control algorithms" is given. The feasibility of
>> > maximizing power with such algorithms is investigated. On the
>> > assumption that communication links behave like M/M/1 servers it is
>> shown that no "decentralized flow control algorithm" can maximize network
>> power. Power has been suggested in the literature as a network performance
>> objective. It is also shown that no objective based only on the users'
>> throughputs and average delay is decentralizable. Finally, a restricted
>> class of algorithms cannot even approximate power."
>> >
>> > https://ieeexplore.ieee.org/document/1095152
>> >
>> > Did Jaffe make a mistake?
>> >
>> > Also, it's been observed that latency is non-parametric in its
>> > distributions and computing gaussians per the central limit theorem
>> > for OWD feedback loops aren't effective. How does one design a control
>> loop around things that are non-parametric? It also begs the question, what
>> are the feed forward knobs that can actually help?
>> >
>> > Bob
>> >
>> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear
>> > <gree...@candelatech.com> wrote:
>> >
>> > Measuring one or a few links provides a bit of data, but seems like
>> if someone is trying to understand
>> > a large and real network, then the OWD between point A and B needs
>> to just be input into something much
>> > more grand.  Assuming real-time OWD data exists between 100 to 1000
>> endpoint pairs, has anyone found a way
>> > to visualize this in a useful manner?
>> >
>> > Also, considering something better than ntp may not reall

Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-07-12 Thread David P. Reed
 
On Monday, July 12, 2021 9:46am, "Livingood, Jason" 
 said:

> I think latency/delay is becoming seen to be as important certainly, if not a 
> more direct proxy for end user QoE. This is all still evolving and I have to 
> say is a super interesting & fun thing to work on. :-)
 
If I could manage to sell one idea to the management hierarchy of 
communications industry CEOs (operators, vendors, ...) it is this one:

"It's the end-to-end latency, stupid!"

And I mean, by end-to-end, latency to complete a task at a relevant layer of 
abstraction.

At the link level, it's packet send to packet receive completion.

But at the transport level including retransmission buffers, it's datagram (or 
message) origination until the acknowledgement arrives for that message being 
delivered after whatever number of retransmissions, freeing the retransmission 
buffer.

At the WWW level, it's mouse click to display update corresponding to 
completion of the request.

What should be noted is that lower-level latencies don't directly predict the 
magnitude of higher-level latencies. But longer lower-level latencies almost 
always amplify higher-level latencies. Often non-linearly.

Throughput is very, very weakly related to these latencies, in contrast.

The amplification process has to do with the presence of queueing. Queueing is 
ALWAYS bad for latency, and throughput only helps if it is in exactly the right 
place (the so-called input queue of the bottleneck process, which is often a 
link, but not always).

Can we get that slogan into Harvard Business Review? Can we get it taught in 
Managerial Accounting at HBS? (which does address logistics/supply chain 
queueing).
 
 
 
 
 

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Little's Law mea culpa, but not invalidating my main point

2021-07-09 Thread David P. Reed
sitting in queues more than a couple 
milliseconds MADE THE USERS HAPPY. Apparently the required capacity was there 
all along! 
 
So I conclude that the 4 second delay was the largest delay users could barely 
tolerate before deciding the network was DOWN and going away. And that the 
backup was the accumulation of useless packets sitting in queues because none 
of the end systems were receiving congestion signals (which for the Internet 
stack begins with packet dropping).
 
I should say that most operators, and especially ATT in this case, do not 
measure end-to-end latency. Instead they use Little's Lemma to query routers 
for their current throughput in bits per second, and calculate latency as if 
Little's Lemma applied. This results in reports to management that literally 
say:
 
  The network is not dropping packets, utilization is near 100% on many of our 
switches and routers.
 
And management responds, Hooray! Because utilization of 100% of their hardware 
is their investors' metric of maximizing profits. The hardware they are 
operating is fully utilized. No waste! And users are happy because no packets 
have been dropped!
 
Hmm... what's wrong with this picture? I can see why Donovan, CTO, would accuse 
Apple of lousy software that was ruining iPhone user experience!  His network 
was operating without ANY problems.
So it must be Apple!
 
Well, no. The entire problem, as we saw when ATT just changed to shorten egress 
queues and drop packets when the egress queues overflowed, was that ATT's 
network was amplifying instability, not at the link level, but at the network 
level.
 
And queueing theory can help with that, but *intro queueing theory* cannot.
 
And a big part of that problem is the pervasive belief that, at the network 
boundary, *Poisson arrival* is a reasonable model for use in all cases.
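 
(An illustrative aside, with made-up numbers: even the "spherical cow" M/M/1 
model says that driving a link toward 100% utilization explodes latency, so 
the management report above is self-refuting on its own terms. A minimal 
sketch, assuming the textbook mean-sojourn-time formula T = 1/(mu - lambda):)
 
    # Mean time in system for M/M/1: T = 1 / (mu - lambda).
    mu = 1000.0                          # service rate, packets/sec
    for rho in (0.5, 0.9, 0.99, 0.999):  # utilization = lambda / mu
        lam = rho * mu
        T_ms = 1000.0 / (mu - lam)       # mean sojourn time, milliseconds
        print(f"utilization {rho:6.1%} -> mean delay {T_ms:8.1f} ms")
    # 50.0% -> 2 ms, 99.9% -> 1000 ms: "full utilization" and "low latency"
    # cannot coexist, even under the friendliest arrival assumptions.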
 
 
 
 
 
 
 
 
 
 
On Friday, July 9, 2021 6:05am, "Luca Muscariello"  said:







For those who might be interested in Little's law
there is a nice paper by John Little on the occasion 
of the 50th anniversary  of the result.
https://www.informs.org/Blogs/Operations-Research-Forum/Little-s-Law-as-Viewed-on-its-50th-Anniversary
https://www.informs.org/content/download/255808/2414681/file/little_paper.pdf
 
Nice read. 
Luca 
 
P.S. 
Who has not a copy of L. Kleinrock's books? I do have and am not ready to lend 
them!

On Fri, Jul 9, 2021 at 11:01 AM Leonard Kleinrock <l...@cs.ucla.edu> wrote:
David,
I totally appreciate  your attention to when and when not analytical modeling 
works. Let me clarify a few things from your note.
First, Little's law (also known as Little’s lemma or, as I use in my book, 
Little’s result) does not assume Poisson arrivals -  it is good for any arrival 
process and any service process and is an equality between time averages.  It 
states that the time average of the number in a system (for a sample path w) is 
equal to the average arrival rate to the system multiplied by the time-averaged 
time in the system for that sample path.  This is often written as 
N_TimeAvg = λ·T_TimeAvg.  Moreover, if the system is also ergodic, then the 
time average equals the ensemble average and we often write it as N̄ = λT̄.  In any 
case, this requires neither Poisson arrivals nor exponential service times.  
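 
(A minimal simulation sketch, my addition with arbitrary numbers, makes this 
concrete: feed a FIFO server a deliberately bursty, non-Poisson stream and 
measure all three quantities over the sample path. The two sides of N̄ = λT̄ 
agree exactly, because over a finished sample path both reduce to the area 
under N(t) divided by the length of the observation window:)
 
    import random
    random.seed(1)

    # Bursty, decidedly non-Poisson arrivals: back-to-back bursts
    # separated by long idle gaps.
    t, arrivals = 0.0, []
    for _ in range(5000):
        for _ in range(random.randint(1, 20)):
            t += 0.001
            arrivals.append(t)
        t += random.expovariate(2.0)          # idle gap, mean 0.5 s

    # Single FIFO server with a fixed 20 ms service time.
    free_at, departures = 0.0, []
    for a in arrivals:
        free_at = max(a, free_at) + 0.020
        departures.append(free_at)

    window = departures[-1]                   # observation window
    sojourns = [d - a for a, d in zip(arrivals, departures)]
    lam = len(arrivals) / window              # arrival rate
    T = sum(sojourns) / len(arrivals)         # time-avg time in system
    N = sum(sojourns) / window                # time-avg number in system
    print(f"lambda*T = {lam * T:.2f}, N = {N:.2f}")   # identical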
 
Queueing theorists often do study the case of Poisson arrivals.  True, it makes 
the analysis easier, yet there is a better reason it is often used, and that is 
because the sum of a large number of independent stationary renewal processes 
approaches a Poisson process.  So nature often gives us Poisson arrivals.  
Best,
Len


On Jul 8, 2021, at 12:38 PM, David P. Reed <dpr...@deepplum.com> wrote:


I will tell you flat out that the arrival time distribution assumption made by 
Little's Lemma that allows "estimation of queue depth" is totally unreasonable 
on ANY Internet in practice.
 
The assumption is a Poisson Arrival Process. In reality, traffic arrivals in 
real internet applications are extremely far from Poisson, and, of course, 
using TCP windowing, become highly intercorrelated with crossing traffic that 
shares the same queue.
 
So, as I've tried to tell many, many net-heads (people who ignore applications 
layer behavior, like the people that think latency doesn't matter to end users, 
only throughput), end-to-end packet arrival times on a practical network are 
incredibly far from Poisson - and they are more like fractal probability 
distributions, very irregular at all scales of time.
 
So, the idea that iperf can estimate queue depth by Little's Lemma by just 
measuring saturation of capacity of a path is bogus.

Re: [Cerowrt-devel] [Bloat] Abandoning Window-based CC Considered Harmful (was Re: Bechtolschiem)

2021-07-08 Thread David P. Reed

Keep It Simple, Stupid.
 
That's a classic architectural principle that still applies. Unfortunately, 
folks who think only in hardware want to add features to hardware, but don't 
study the actual real-world version of the problem.
 
IMO, and it's based on 50 years of experience in network and operating systems 
performance, latency (response time) is almost always the primary measure users 
care about. They never care about maximizing "utilization" of resources. After 
all, in a city, you get maximum utilization of roads when you create a traffic 
jam. That's not the normal state. In communications, the network should always 
be at about 10% utilization, because you never want a traffic jam across the 
whole system to accumulate. Even the old Bell System was engineered to not 
saturate the links on the worst minute of the worst hour of the worst day of 
the year (which was often Mother's Day, but could be when a power blackout 
occurs).
 
Yet, academics become obsessed with achieving constant very high utilization. 
And sometimes low-level communications folks adopt that value system, until 
their customers start complaining.
 
Why doesn't this penetrate the Net-Shaped Heads of switch designers and others?
 
What's excellent about what we used to call "best efforts" packet delivery 
(drop early and often to signal congestion) is that it is robust and puts the 
onus on the senders of traffic to sort out congestion as quickly as possible. 
The senders ALL observe congested links quite early if their receivers are 
paying attention, and they can collaborate *without even knowing who the others 
congesting the link are*. And by picking the heaviest congestors with higher 
probability to drop, fq_codel pushes back in a "fair" way when congestion 
actually crops up (probabilistically).
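 
(For readers who haven't met it, here is a toy sketch of the CoDel control 
law behind those drops -- my simplification, not the actual Linux qdisc: 
leave traffic alone while packets spend less than a target time in the queue, 
and once they've been above target for a full interval, drop, then keep 
dropping at increasing frequency until the standing queue drains.)
 
    TARGET = 0.005      # 5 ms acceptable standing queue delay
    INTERVAL = 0.100    # 100 ms estimation window

    class ToyCodel:
        def __init__(self):
            self.first_above = None   # when sojourn first exceeded TARGET
            self.drop_next = None     # when the next drop is scheduled
            self.count = 0            # drops in this congestion episode

        def should_drop(self, now, enqueue_time):
            sojourn = now - enqueue_time
            if sojourn < TARGET:            # queue is fine: reset state
                self.first_above = None
                self.drop_next = None
                self.count = 0
                return False
            if self.first_above is None:    # start the interval timer
                self.first_above = now + INTERVAL
                return False
            if now < self.first_above:      # not above target long enough
                return False
            if self.drop_next is None or now >= self.drop_next:
                self.count += 1             # drop; schedule the next one
                self.drop_next = now + INTERVAL / self.count ** 0.5
                return True
            return False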
 
It isn't the responsibility of routers to get packets through at any cost. It's 
their responsibility to signal congestion early enough that it doesn't persist 
very long at all due to source based rate adaptation.
In other words, a router's job is to route packets and do useful telemetry for 
the endpoints using it at that instant.
 
Please stop focusing on what is an irrelevant metric (maximum throughput with 
maximum utilization in a special situation only).
 
Focus on what routers can do well because they actually observe it 
(instantaneous congestion events) and keep them simple.
On Thursday, July 8, 2021 10:40am, "Jonathan Morton"  
said:



> > On 8 Jul, 2021, at 4:29 pm, Matt Mathis via Bloat
>  wrote:
> >
> > That said, it is also true that multi-stream BBR behavior is quite
> complicated and needs more queue space than single stream. This complicates 
> the
> story around the traditional workaround of using multiple streams to 
> compensate
> for Reno & CUBIC lameness at larger scales (ordinary scales today). 
> Multi-stream does not help BBR throughput and raises the queue occupancy, to 
> the
> detriment of other users.
> 
> I happen to think that using multiple streams for the sake of maximising
> throughput is the wrong approach - it is a workaround employed pragmatically 
> by
> some applications, nothing more. If BBR can do just as well using a single 
> flow,
> so much the better.
> 
> Another approach to improving the throughput of a single flow is high-fidelity
> congestion control. The L4S approach to this, derived rather directly from 
> DCTCP,
> is fundamentally flawed in that, not being fully backwards compatible with 
> ECN, it
> cannot safely be deployed on the existing Internet.
> 
> An alternative HFCC design using non-ambiguous signalling would be 
> incrementally
> deployable (thus applicable to Internet scale) and naturally overlaid on 
> existing
> window-based congestion control. It's possible to imagine such a flow reaching
> optimal cwnd by way of slow-start alone, then "cruising" there in a true
> equilibrium with congestion signals applied by the network. In fact, we've
> already shown this occurring under lab conditions; in other cases it still 
> takes
> one CUBIC cycle to get there. BBR's periodic probing phases would not be 
> required
> here.
> 
> > IMHO, two approaches seem to be useful:
> > a) congestion-window-based operation with paced sending
> > b) rate-based/paced sending with limiting the amount of inflight data
> 
> So this corresponds to approach a) in Roland's taxonomy.
> 
> - Jonathan Morton
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Starlink] [Make-wifi-fast] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-07-08 Thread David P. Reed

I will tell you flat out that the arrival time distribution assumption made by 
Little's Lemma that allows "estimation of queue depth" is totally unreasonable 
on ANY Internet in practice.
 
The assumption is a Poisson Arrival Process. In reality, traffic arrivals in 
real internet applications are extremely far from Poisson, and, of course, 
using TCP windowing, become highly intercorrelated with crossing traffic that 
shares the same queue.
 
So, as I've tried to tell many, many net-heads (people who ignore applications 
layer behavior, like the people that think latency doesn't matter to end users, 
only throughput), end-to-end packet arrival times on a practical network are 
incredibly far from Poisson - and they are more like fractal probability 
distributions, very irregular at all scales of time.
 
So, the idea that iperf can estimate queue depth by Little's Lemma by just 
measuring saturation of capacity of a path is bogus. The less Poisson, the worse 
the estimate gets, by a huge factor.
 
 
Where does the Poisson assumption come from?  Well, like many theorems, it is 
the simplest tractable closed form solution - it creates a simplified view, by 
being a "single-parameter" distribution (the parameter is called lambda for a 
Poisson distribution).  And the analysis of a simple queue with poisson arrival 
distribution and a static, fixed service time is the first interesting Queueing 
Theory example in most textbooks. It is suggestive of an interesting 
phenomenon, but it does NOT characterize any real system.
 
It's the queueing theory equivalent of "First, we assume a spherical cow..." 
in doing an example in a freshman physics class.
 
Unfortunately, most networking engineers understand neither queueing theory nor 
application networking usage in interactive applications. Which makes them 
arrogant. They assume all distributions are Poisson!
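 
(To see how big the error can be, a small sketch of my own with arbitrary 
numbers: a Poisson source and a bursty ON-OFF source offer the same 80% load 
to the same fixed-rate server, yet the bursty one sees roughly an order of 
magnitude more queueing delay. Any queue-depth estimate calibrated on Poisson 
arrivals will badly underestimate reality:)
 
    import random
    random.seed(7)

    def mean_wait(gaps, service=1.0):
        # Lindley recursion for a FIFO single server; wait = queueing delay.
        t, free_at, total = 0.0, 0.0, 0.0
        for g in gaps:
            t += g
            total += max(free_at - t, 0.0)
            free_at = max(free_at, t) + service
        return total / len(gaps)

    n = 100_000
    poisson = [random.expovariate(0.8) for _ in range(n)]    # rho = 0.8

    bursty = []                                              # also rho = 0.8
    while len(bursty) < n:
        k = random.randint(1, 60)                            # burst size
        bursty.append(random.expovariate(1.0 / (1.25 * k)))  # long silence
        bursty.extend([0.0] * (k - 1))                       # then a burst

    print(f"Poisson mean wait: {mean_wait(poisson):6.1f} service times")
    print(f"Bursty  mean wait: {mean_wait(bursty):6.1f} service times")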
 
 
On Tuesday, July 6, 2021 9:46am, "Ben Greear"  said:



> Hello,
> 
> I am interested to hear wish lists for network testing features. We make test
> equipment, supporting lots
> of wifi stations and a distributed architecture, with built-in udp, tcp, ipv6,
> http, ... protocols,
> and open to creating/improving some of our automated tests.
> 
> I know Dave has some test scripts already, so I'm not necessarily looking to
> reimplement that,
> but more fishing for other/new ideas.
> 
> Thanks,
> Ben
> 
> On 7/2/21 4:28 PM, Bob McMahon wrote:
> > I think we need the language of math here. It seems like the network
> power metric, introduced by Kleinrock and Jaffe in the late 70s, is something
> useful.
> > Effective end/end queue depths per Little's law also seems useful. Both are
> available in iperf 2 from a test perspective. Repurposing test techniques to
> actual
> > traffic could be useful. Hence the question around what exact telemetry
> is useful to apps making socket write() and read() calls.
> >
> > Bob
> >
> > On Fri, Jul 2, 2021 at 10:07 AM Dave Taht <dave.t...@gmail.com> wrote:
> >
> > In terms of trying to find "Quality" I have tried to encourage folk to
> > both read "zen and the art of motorcycle maintenance"[0], and Deming's
> > work on "total quality management".
> >
> > My own slice at this network, computer and lifestyle "issue" is aiming
> > for "imperceptible latency" in all things. [1]. There's a lot of
> > fallout from that in terms of not just addressing queuing delay, but
> > caching, prefetching, and learning more about what a user really needs
> > (as opposed to wants) to know via intelligent agents.
> >
> > [0] If you want to get depressed, read Pirsig's successor to "zen...",
> > lila, which is in part about what happens when an engineer hits an
> > insoluble problem.
> > [1] https://www.internetsociety.org/events/latency2013/
> >
> >
> >
> > On Thu, Jul 1, 2021 at 6:16 PM David P. Reed <dpr...@deepplum.com> wrote:
> > >
> > > Well, nice that the folks doing the conference  are willing to
> consider that quality of user experience has little to do with signalling 
> rate at
> the
> > physical layer or throughput of FTP transfers.
> > >
> > >
> > >
> > > But honestly, the fact that they call the problem "network quality"
> suggests that they REALLY, REALLY don't understand the Internet isn't the 
> hardware
> or
> > the routers or even the routing algorithms *to its users*.
> > >
> > >
> > >
> > > By ignoring the diversity of app

Re: [Cerowrt-devel] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-07-01 Thread David P. Reed

Well, nice that the folks doing the conference  are willing to consider that 
quality of user experience has little to do with signalling rate at the 
physical layer or throughput of FTP transfers.
 
But honestly, the fact that they call the problem "network quality" suggests 
that they REALLY, REALLY don't understand the Internet isn't the hardware or 
the routers or even the routing algorithms *to its users*.
 
By ignoring the diversity of applications now and in the future, and the fact 
that we DON'T KNOW what will be coming up, this conference will likely fall 
into the usual trap that net-heads fall into - optimizing for some imaginary 
reality that doesn't exist, and in fact will probably never be what users 
actually will do given the chance.
 
I saw this issue in 1976 in the group developing the original Internet 
protocols - a desire to put *into the network* special tricks to optimize ASR33 
logins to remote computers from terminal concentrators (aka remote login), bulk 
file transfers between file systems on different time-sharing systems, and 
"sessions" (virtual circuits) that required logins. And then trying to exploit 
underlying "multicast" by building it into the IP layer, because someone 
thought that TV broadcast would be the dominant application.
 
Frankly, to think of "quality" as something that can be "provided" by "the 
network" misses the entire point of "end-to-end argument in system design". 
Quality is not a property defined or created by The Network. If you want to 
talk about Quality, you need to talk about users - all the users at all times, 
now and into the future, and that's something you can't do if you don't bother 
to include current and future users talking about what they might expect to 
experience that they don't experience.
 
There was much fighting back in 1976 that basically involved "network experts" 
saying that the network was the place to "solve" such issues as quality, so 
applications could avoid having to solve such issues.
 
What some of us managed to do was to argue that you can't "solve" such issues. 
All you can do is provide a framework that enables different uses to 
*cooperate* in some way.
 
Which is why the Internet drops packets rather than queueing them, and why 
diffserv cannot work.
(I know the latter is controversial, but at the moment, ALL of diffserv 
attempts to talk about end-to-end application-specific metrics, but never, ever 
explains what the diffserv control points actually do w.r.t. what the IP layer 
can actually control. So it is meaningless - another violation of the so-called 
end-to-end principle).
 
Networks are about getting packets from here to there, multiplexing the 
underlying resources. That's it. Quality is a whole different thing. Quality 
can be improved by end-to-end approaches, if the underlying network provides 
some kind of thing that actually creates a way for end-to-end applications to 
affect queueing and routing decisions, and more importantly getting "telemetry" 
from the network regarding what is actually going on with the other end-to-end 
users sharing the infrastructure.
 
This conference won't talk about it this way. So don't waste your time.
 
 
 
On Wednesday, June 30, 2021 8:12pm, "Dave Taht"  said:



> The program committee members are *amazing*. Perhaps, finally, we can
> move the bar for the internet's quality metrics past endless, blind
> repetitions of speedtest.
> 
> For complete details, please see:
> https://www.iab.org/activities/workshops/network-quality/
> 
> Submissions Due: Monday 2nd August 2021, midnight AOE (Anywhere On Earth)
> Invitations Issued by: Monday 16th August 2021
> 
> Workshop Date: This will be a virtual workshop, spread over three days:
> 
> 1400-1800 UTC Tue 14th September 2021
> 1400-1800 UTC Wed 15th September 2021
> 1400-1800 UTC Thu 16th September 2021
> 
> Workshop co-chairs: Wes Hardaker, Evgeny Khorov, Omer Shapira
> 
> The Program Committee members:
> 
> Jari Arkko, Olivier Bonaventure, Vint Cerf, Stuart Cheshire, Sam
> Crawford, Nick Feamster, Jim Gettys, Toke Hoiland-Jorgensen, Geoff
> Huston, Cullen Jennings, Katarzyna Kosek-Szott, Mirja Kuehlewind,
> Jason Livingood, Matt Mathis, Randall Meyer, Kathleen Nichols,
> Christoph Paasch, Tommy Pauly, Greg White, Keith Winstein.
> 
> Send Submissions to: network-quality-workshop...@iab.org.
> 
> Position papers from academia, industry, the open source community and
> others that focus on measurements, experiences, observations and
> advice for the future are welcome. Papers that reflect experience
> based on deployed services are especially welcome. The organizers
> understand that specific actions taken by operators are unlikely to be
> discussed in detail, so papers discussing general categories of
> actions and issues without naming specific technologies, products, or
> other players in the ecosystem are expected. Papers should not focus
> on specific protocol solutions.
> 
> The workshop will be by invitatio

Re: [Cerowrt-devel] [Cake] access to cmsg from go?

2021-06-23 Thread David P. Reed
(They closed the issue on the golang link.)


I'm not a golang user. One language too many for me. It sounds like a library 
issue.

My suggestion would be to use the openness of open source. Generate a patchset 
that extends the interface properly. Don't try to "improve" what you don't like 
- communities like stability and backward compatibility. Explain the added 
semantics in documentation.

Then, maintain your fork. I don't know how the golang community works with 
versioning of libraries, but Python, Rust, Haskell, and NodeJS all have ways to 
let projects use variants of libraries.

Then, submit that patchset upstream to the golang community. Advocate for 
upstreaming it, and develop a community that uses the patched library. 
Eventually, you may be able to stop maintaining your variant toolset. Or you 
will develop an alternate library user base that disagrees with upstream's 
decisions.

(Analogy, Android's Linux Kernel vs. Linus Torvalds's. Google Android rejects 
to some extent Linus's crew's unwillingness to accept what Android needs as 
improvements.)

This is "modern open source community" practice for getting things done. 
Pragmatic innovations in shared codebases sometimes have to wait for the 
original egos to die off.

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] wireguard almost takes a bullet

2021-03-30 Thread David P. Reed

Theodore -
 
I appreciate you showing the LF executive salary numbers are not quite as high 
as I noted. My numbers may have been inflated, but I've definitely seen a 
$900,000 package for at least one executive reported in the press (an executive 
who was transferred in from a F100 company which is close to the LF).
 
On the other hand, they are pretty damn high salaries for a non-profit. Are 
they appropriate? Depends. There are no stockholders and no profits, just a 
pretty substantial net worth.
 
Regarding the organization of "Linux, Inc." as a hierarchical control structure 
- I'll just point out that hierarchical control of the development of Linux 
suggests that it is not at all a "community project" (if it ever was). It's a 
product development organization with multiple levels of management.
 
Yet the developers are employees of a small number of major corporations. In 
this sense, it is like a "joint venture" among those companies.
 
To the extent that those companies gain (partial) control of the Linux kernel, 
as appears to be the case, I think Linux misrepresents itself as a "community 
project", and in particular, the actual users of the software may have little 
say in the direction development takes going forwards.
 
There's little safeguard, for example, against "senior management" biases in 
support of certain vendors, if other vendors are excluded from effective 
participation by one of many techniques. In other words, there's no way it can 
be a level playing field for innovation.
 
In that sense, the Linux kernel community has reached a point very much like 
Microsoft Windows development reached in 1990 or so. I note that date because 
at that point, Microsoft was challenged with a variety of anti-trust actions 
based on the fact that it used its Windows monopoly status to put competitors 
in the application space, and competitors producing innovative operating 
systems out of business (GO Computer Corporation being one example of many).
 
This troubles me. It may not trouble the developers who are in the Linux 
community and paid by the cartel of companies that control its direction.
 
I have no complaint about the technical competence of individual developers - 
the quality is pretty high, at least as good as those who worked on Windows and 
macOS. But it's becoming clear that their is a narrowing of control of an OS 
that has a lot of influence in a few hands. That those few hands don't work for 
one company doesn't eliminate its tendency to become a cartel. (one that is not 
transparent at all about functioning as such - preferring to give the 
impression that the kernel is developed by part-time voluntary "contributions").
 
The contrast with other open source communities is quite sharp now. There is 
little eleemosynary intent that can be detected any more. I think that is too 
bad, but things change.
 
This is just the personal opinion of someone who has been developing systems 
for 50+ years now. I'm kind of disappointed, but my opinion does not really 
matter much.
 
David
 
 
 
 
On Monday, March 29, 2021 9:52pm, "Theodore Ts'o"  said:



> On Mon, Mar 29, 2021 at 04:28:11PM -0400, David P. Reed wrote:
> >
> >
> > What tends to shape Linux and FreeBSD, etc. are the money sources
> > that flow into the communities. Of course Linux is quite
> > independently wealthy now. The senior executives of the Linux
> > Foundation are paid nearly a million dollars a year, each. Which
> > just indicates that major corporations are seriously interested in
> > controlling the evolution of Linux (not the Gnu part, the part that
> > has Linus Torvalds at its center).
> 
> First of all, I don't believe your salary numbers are correct.
> 
> https://nonprofitlight.com/ca/san-francisco/linux-foundation
> 
> Secondly, the "senior executives" of the Linux Foundation don't have
> any control over "the evolution of Linux". The exception to that are
> the "Fellows" (e.g., Linus Torvalds, Greg K-H, etc.) and I can assure
> you that they don't take orders from Jim Zemlin, the executive
> director, or any one else at the Linux Foundation.
> 
> The senior developers of Linux do tend to work for the big
> corporations, but culturally, we do try to keep our "corporate hats"
> and our "community" hats quite separate, and identify when we our
> company hats on. Many senior developers have transitioned between
> multiple companies, and over time, it's been understood that their
> primarily allegiance is to Linux, and not to the company. In fact,
> the primary job of maintainers is to say "no" to companies when they
> try to push crap code into the kernel. And that's because it's the
>

Re: [Cerowrt-devel] wireguard almost takes a bullet

2021-03-29 Thread David P. Reed

Dave -
 
I've spent a fair amount of time orbiting the FreeBSD community over the past 
few years. It's not as sad as you might think.
However, the networking portion of the FreeBSD community is organized quite 
differently than it is in Linux.
 
What tends to shape Linux and FreeBSD, etc. are the money sources that flow 
into the communities. Of course Linux is quite independently wealthy now. The 
senior executives of the Linux Foundation are paid nearly a million dollars a 
year, each.  Which just indicates that major corporations are seriously 
interested in controlling the evolution of Linux (not the Gnu part, the part 
that has Linus Torvalds at its center).
 
FreeBSD, in contrast, is a loose alliance of what you might call "embedded 
hardware vendors" like NetApp as an example. They value an open, portable, 
efficient operating environment, but not for servers, laptops or smartphones.
 
They overlap at the intersection of network routing and storage platforms, 
where Linux doesn't seem to fit well, except in the case of "home routers".
 
At least that's my view. The major controllers of architectural elements are 
not terribly interested in FreeBSD's positive qualities. FreeBSD is not very 
visible at Intel and ARM at all, interms of their product planning. IBM has no 
"Power" FreeBSD.
 
Take for example, bufferbloat as an issue that routing and switching hardware 
ought to address. This is a serious weakness in the FreeBSD community (where it 
should matter!) There's not been much demand by the major corporate spenders on 
FreeBSD in fixing bufferbloat. But then again, there's not been much visibility 
regarding bufferbloat in the IETF, either. I'm not sure Torvalds has ever even 
heard of it (and I suspect he would try to argue it isn't a problem at all, 
given his tendency to not think clearly about systems scale issues, so what's 
caused Linux to even bother is the fringes in OpenWRT land and mesh networking 
land, plus Jim Gettys).
 
Anyway, FreeBSD and FreeRTOS and a few other very strong but small communities 
have solutions that are far better for their actual needs than the behemoth 
mess that Linux has become. And for those communities, they work very well. 
They are disentangled from Gnu, which is both a good and a bad thing depending 
on your perspective.
 
I just spent 9 months trying to get a very tiny fix to the Linux kernel into 
the mainline kernel. I actually gave up, because it seemed utterly pointless, 
even though it was clearly a design error that I was fixing, and I was trying 
to meet all the constraints on patches. No one was fighting me, no one said it 
was wrong. I found the problem in a personal research project where it was a 
blocking bug, so I had to maintain it as an add-on private patch (and I still 
do) that I needed to verify every release of the Linux kernel. Why is this? 
Well, it shows how Linux excludes ideas by the very bureaucracy of its 
management structure. (and I'd suggest that the mess that "init" has turned 
into in the OS, which the kernel actually requires in order to be useful, 
called "systemd", is an example of how not to modularize a portable OS kernel).
 
So FreeBSD, compared to Linux, in some ways, is far more pleasant to deal with. 
The community doesn't have rude and clueless and entitled members like Torvalds 
and Alan Cox have been. It isn't being driven by a consortium of F100 companies 
in a near-cartel.
 
So there are pluses and minuses. I suspect this is why many, many Linux 
developers actually use macOS as their personal computer for development. A 
paradox, given that macOS is completely proprietary.
 
 
 
On Sunday, March 28, 2021 11:56am, "Dave Taht"  said:



> I am sad about the state of freebsd today, and of companies
> contracting outside the authors of the code to get crappy things
> committed without review and testing.
> 
> https://lwn.net/Articles/850757/
> 
> (long rant of mine in the comments).
> 
> My hat is off to jason for sinking a frantic week into vastly
> improving that wireguard implementation, and I hope he and his team
> gets caught up on sleep now.
> 
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
> 
> d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] smart queue management conflict

2021-03-22 Thread David P. Reed

I know I am seen as too outspoken on these things, but this paper was frankly 
nonsense! I mean, it starts from the idea that because sensor and actuator 
devices use a small bit rate averaged over a 24-hour period, they should be 
assigned a "narrowband channel" (which would be fine if real sensors were 
Constant Bit Rate sources that required isochronous bit-level synchronization 
with extremely narrow jitter bounds).
 
But real sensors and actuators in real machine-to-machine automation systems in 
real life are not AT ALL like that.
 
In fact many such sensors and actuators are cheap video sources with pattern 
recognition on them (these cost < $7 in quantity 1 today with fully encrypted 
WiFi connectivity supporting both IPv4 and IPv6).
 
So, as they say, WTF are these LTE affiliated researchers smoking?
 
The whole revolution of "packet networking" (from the very beginning) was not 
about creating "sessions" that have nailed-up constant bit rates.
 
Let's say I have a sensor on my dog's collar, or on a fork lift in a warehouse? 
Do these researchers honestly think that each sensor will make a *phone call* 
over the LTE fabric and continue that phone call for all the time the device is 
powered on? Is the cost going to be proportional to the delivered bit-rate? In 
"message units" for each 3 minutes of call time?
 
And then there's the term "end-to-end queue management". What that seems to 
mean is that the "call" is tracked at every intermediate multiplexing/switching 
point in the network, fully allocating "reliable bits".
 
And QoS?  I know, in telephony circles (LTE) QoS is measured by the number of 
retransmissions of individual bits, and the whole point is never to drop any 
bit once it is transmitted from the source, so the source doesn't need to 
remember the bit so it can be retransmitted upon the receiver's request.
 
They do still teach this concept of communications in engineering schools. In 
fact, you can still occasionally observe it in electrical engineering classes 
on topics that arose before the ARPANET was invented. And I know that many of 
these engineers infect the telephony profession, where hopefully they get 
promoted into management roles before they actually encounter packet switching.
 
But really! It's been more than 4 decades since this whole set of concepts 
investigated in the paper become obsolescent. Even ATM is dead with its 
"circuit-like" use of packets that aren't end-to-end acknowledged.
 
I have long suspected that IETF has got itself infected by this nonsense 
antiquarian view of communications.
 
Note: service quality (properly redefined for packet switching networks with 
freedom to use arbitrary redundant paths, and without pre-allocation) is 
important. But when most engineers use QoS, they are talking about a 
meaningless issue around creating a near-isochronous path that does not ever 
retransmit any bit once it enters the physical network "owned" by a single 
company (or oligopoy) from end-to-end.
 
Please let us hope that IoT (whatever it is in fact) isn't being defined by the 
community that thinks this paper is working on an interesting problem.
 
On Monday, March 22, 2021 1:28am, "Dave Taht"  said:



> see: https://www.mdpi.com/1424-8220/20/8/2324
> 
> I do kind of wish the authors hadn't overloaded our term "smart queue
> management" (SQM) with their interpretation in "End-to-End QoS “Smart
> Queue” Management" (
> https://www.bufferbloat.net/projects/cerowrt/wiki/Smart_Queue_Management/
> )
> 
> It's not trademarked but, well, read the paper for some insight into 
> lte-think.
> 
> 
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
> 
> d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] a start at the FCC filing

2021-03-07 Thread David P. Reed

Politically incorrect recommendations of mine:
 
Start by how a changed FCC approach will do at solving problems in Detroit, 
Milwaukee and LA.
 
"Rural" broadband is not a good leverage point to change policy. It's just a 
box to tick to promise something that will get rural votes, but the only real 
problem with rural networking is getting funding there because the private 
sector won't invest. That's why Rural Broadband basically sank Open Internet in 
Genachowski's time - it was just a promise of a "handout". Hence, I point out 
cities that have lots of votes but no Internet services.
 
Republicans have a strong aversion to government programs or redistribution of 
wealth from rich to poor. The FCC has no budget for programs, so that isn't a 
fundamental issue, but they do regulate some very wealthy industries. Do not 
make this look or act as a "redistribution" project.
 
Many of the folks in DC (of all partisan stripes) view the FCC as a "revenue 
generator" for the government - a sort of "taxing authority" without the word 
"tax" being involved. That is, the major function of the FCC to them is 
"running auctions" to fund the government - the auctions being spectrum 
auctions.
 
In that sense, the FCC is thought to be like the Dept' of the Interior - it 
takes the Property Rights of the US Government and hands them out for 
"development". (now my personal view is that this idea that the USG has 
Property Rights is like the idea that the King/Queen of England owns all the 
land of England, and the people live on it at sufferance of the Royals - an 
absurd idea, but one that comes from the idea that governments rule, and people 
who live there are just tenants).
 
Nearly anything you propose to do with the FCC touches on its primary role as 
manager of the intangible Property that belongs solely to the Gov't. 
Republicans generally believe Property is sacred. Democrats generall believe 
Property is best managed by complex Government bureaucratic control. 
 
Free Speech, Free Assembly, and Free Press are excluded from the FCC's ambit, 
by a history of Supreme Court decisions. That group of ideas has no sway in US 
communications policy, because people don't have property rights, and 
communicaitons is property.
 
To win Republican or Democratic minds (with only a few exceptions) you have to 
understand these issues of perception.
 
That's separate from the ideas of pseudo-physics that permeate society (like 
the idea that there is a small bounded set of opportunities to interconnect 
over wires, fiber, or radio waves, or certain "beachfront property" which 
justifies excluding communications except among a select few operators that are 
regulated). Some of us have made a few dents in questioning this 
pseudo-physical scarcity argument in radio, but not in wires and fiber. But 
where we lost wasn't there - where we lost was in not understanding the FCC's 
role as described above.
 
On Friday, March 5, 2021 1:15am, "Stephen Hemminger" 
 said:



Start with Ron Wyden


On Thu, Mar 4, 2021, 7:54 PM Dave Taht <dave.t...@gmail.com> wrote:
I am planning to take my time on this. I
would like for example, to
 at least communicate well with a republican senator and a democratic one.

 Admittedly, if we can upgrade everybody to 100Mbit, everybody can have
 all 4 home members being couch potatoes in front of HD netflix and
 there won't be much motivation to do anything else.

https://news.slashdot.org/story/21/03/04/1722256/senators-call-on-fcc-to-quadruple-base-high-speed-internet-speeds

 Anybody know these guys?

 On Sun, Feb 21, 2021 at 8:50 AM David P. Reed <dpr...@deepplum.com> wrote:
 >
 > This is an excellent proposal. I am happy to support it somehow.
 >
 >
 >
 > I strongly recommend trying to find a way to make sure it doesn't become a 
 > proposal put forward by "progressive" political partisans. (this is hard 
 > for me, because my politics are more aligned with the Left than with the 
 > self-described conservatives and right-wing libertarians.
 >
 >
 >
 > This is based on personal experience starting in 2000 and continuing through 
 > 2012 or so with two issues:
 >
 >
 >
 > 1. Open Spectrum (using computational radio networking to make a scalable 
 > framework for dense wireless extremely wideband internetworking). I along 
 > with a small number of others started this as a non-partisan effort. It 
 > became (due to lobbyists and "activists") considered to be a socialist 
 > taking of property from spectrum "owners". After

Re: [Cerowrt-devel] easic from intel

2021-03-02 Thread David P. Reed

These are ASICs, not fpgas. Presumably they are manufactured on Intel fabs 
using Intel processes, after designing them.
 
The overview references RTL design specs. Now Verilog and VHDL can specify RTL 
designs (but are a bit more general).
 
Also, the I/O pins seem to be a bit more specialized than those of FPGAs I use 
from Xilinx. A typical FPGA has a fixed number of specialized I/O pins that can 
do very fast SERDES, for example. But I presume that these ASICs can have 
varying numbers of specialized I/O's.
 
What typically distinguishes FPGAs, MCUs and ASICs from general purpose CPU 
chips is the I/O pins' potential configurability at design time. General 
purpose CPUs don't have GPIO or configurable specialized I/O. They typically 
dedicate each pin to a particular electrical bus signaling physical layer.
 
In contrast, look at the STM32 or RP2040 chips' I/O pin specs: configurability 
from input to output to differential, and from voltage level to voltage level, 
on each single pin. These are things that software guys don't understand are 
important. And they aren't that important compared to logic and memory in 
general purpose CPUs like the X86 or the ARM general purpose chips.
 
 
On Monday, March 1, 2021 10:10pm, "Dave Taht"  said:



> Got no idea how these really differ from fpgas. Do like the number of
> gates tho. quad core 64bit arm also. Anyone seen the design tools?
> 
> https://www.intel.com/content/www/us/en/products/programmable/asic/easic-devices/n5x.html
> 
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
> 
> d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] a start at the FCC filing

2021-02-21 Thread David P. Reed

This is an excellent proposal. I am happy to support it somehow.
 
I strongly recommend trying to find a way to make sure it doesn't become a 
proposal put forward by "progressive" political partisans. (this is hard for 
me, because my politics are more aligned with the Left than with the 
self-described conservatives and right-wing libertarians.)
 
This is based on personal experience starting in 2000 and continuing through 
2012 or so with two issues:
 
1. Open Spectrum (using computational radio networking to make a scalable 
framework for dense wireless extremely wideband internetworking). I along with 
a small number of others started this as a non-partisan effort. It became (due 
to lobbyists and "activists") considered to be a socialist taking of property 
from spectrum "owners". After that, it became an issue where a subset of the 
Democratic Party (progressives) decided to make it a wedge issue in political 
form. (It should be noted that during this time, a Republican Secretary of 
Commerce took up the idea of making UWB legal, and fought off lobbyists to some 
extent, though the resulting regulation was ineffective because it was too weak 
to be usable).
 
2. Network Neutrality or Open Internet. Here the key issue was really about 
keeping Internet routing intermediaries from being selective about what packets 
they would deliver and what ones they would not. The design of the Internet was 
completely based on open carriage of all packets without the routers billing 
for or metering based on end-to-end concerns. Again, for a variety of reasons, 
this simple idea got entangled with partisanship politically - such that 
advocates for an Open Internet were seen to be promoting both Democratic Party 
and Silicon Valley Tech interests. In fact, the case for Open Internet is not 
primarily political. It's about scalability of the infrastructure and the 
ability to carry Internet packets over any concatenation of paths, for mutual 
benefit to all users. (That "mutual benefit" concept does seem to be alien to a 
certain kind of individualist libertarian cult thinking that is a small subset 
of Republican Party membership).
 
If this becomes yet another Democratic Party initiative, it will encounter 
resistance, both from Republican-identified polarizing reaction, and also from 
the corporate part of the Democratic Party (so called Blue Dog Democrats where 
telecom providers provide the largest quantity of funding to those Democrats).
 
Some "progressive" Democrats will reach out to add this to their "platform" as 
a partisan issue.
 
It may feel nice to have some of them on your side. Like you aren't alone. But 
by accepting this "help" on this issue, you may be guaranteeing its failure.
 
In a world where compromise is allowed to generate solutions to problems, 
polarizing would not be effective to kill a good idea, rather merely raising 
the issue would lead to recognizing the problem is important and joint work to 
create a solution. In 1975, the Internet was not partisan. Its designers 
weren't party members or loyalists. We were solving a problem of creating a 
scalable, efficient alternative to the "Bell System" model of communications 
where every piece of gear got involved in deciding what to do with each bit of 
information, where there were "voice bits" and "data bits", "business bits" and 
"residential bits", and every piece of equipment had to be told everything 
about each bits (through call setup).
 
But today, compromise is not considered possible, even at the level of defining 
the problem!
 
So this simple architectural approach to clearing out the brush that has grown 
like weeds throughout the Internet, especially at the "access provider" will 
become political. 
 
Since at the end of the day it threatens to reduce control and revenues to edge 
"access providers" that come from selling higher-rate pipes, the natural 
opposition will likely come from lobbyists for telecom incumbents, funded by 
equipment providers for those incumbents (Cisco, Alcatel Lucent and their 
competitors), with Republicans and Blue-Dog Democrats carrying their water. 
That's tthe likely polarization axis. I can say that Progressive members of the 
Democratic Party will love to have a new issue to raise funds. I can make the 
argument that it should be supported by Republicans or Independents, though. If 
so, it will be opposed by Democrats and Progressives, and the money will flow 
through Blue Dogs to them.
 
Either way, you won't get it adopted at scale, IF you make it a Party Loyalist 
issue.
 
So please look that "gift horse" of Democratic Party support in the mouth when 
it comes.
 
Accept the support, ONLY if you can be assured it isn't accompanied by a use in 
polarization of the issue. In other words, if you can get support from 
Republicans, too.
 
Since I am neither an R or a D, I'd be happy to support it however it is 
supported. Personally, I don't want it to be affiliated with stances on 
abortio

Re: [Cerowrt-devel] cringley rants well on bloat

2021-02-09 Thread David P. Reed

Hmmm... good post, I guess. But aren't WiFi 6 and StarLink being built by 
people who have proved their genius by being billionaires?
 
It's sad, though, to read through the comments. There's a whole 'nother world 
out there now.
Apparently the world of commenters is largely convinced bufferbloat doesn't 
exist, and never did.
 
Perhaps that is our problem? We are hallucinating and disconnected from reality?
 
I constantly hear that IETF attendees don't believe it is a problem. And folks 
like Andy Bechtolsheim get loads of VC money to create Ethernet 10+ GigE 
switches that are full of buffers, therefore creating lots of lag under load in 
datacenters where they buy this gear because Andy is famous. AndyB's company 
even produces white papers that claim more buffering improves performance by 
keeping all the links running at wirespeed (the hell with latency).
 
And some of the commenters seem to want to insult Jim Gettys, too, by name, 
saying that he "got it wrong".
 
I've concluded that COVID-19 reflects a general infection of brains with some 
kind of arrogant ignorance, including many of the folks who get boondoggles 
from their company to attend IETF.
 
But it may be the opposite, after all. Maybe we folks don't understand what is 
true in this real Bizarro World, and bufferbloat is a hoax like COVID. Q is 
probably right, and we are probably all pederasts like Hillary Clinton.
 
 
On Monday, February 8, 2021 8:45pm, "Dave Taht"  said:



> https://www.cringely.com/2021/02/04/2021-prediction-4-wifi-6-is-a-bust-for-now-as-bufferbloat-returns-thanks-to-isp-greed/
> 
> 
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
> 
> d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] my thx to spacex (and kerbal space program) forcheering me up all year

2021-01-01 Thread David P. Reed
It has bufferbloat? 

Why am I not surprised?

I can share that one stack hasn't had it from the start, by design. That is one 
implemented for trading at 10+ GB/sec, implemented in Verilog, and now 
apparently in production use at one of the largest NY trading intermediaries.

Why? Simply two reasons:

1. People who design parallel hardware systems are trained to focus on closing 
timing constraints. Which means never using FIFOs that are longer than absolute 
minimum. The designer is a VLSI designer by trade, not a networking guy.

2. Trading is all about managing delay. In this case, 100 msec packet delay is 
worst allowable case end to end.

Yet it is full TCP in hardware.

Can't share more, because I don't know more, it being all proprietary to the 
bank in question.

Now, one wonders: why can't Starlink get it right first time?

It's not like bufferbloat is hard on a single bent pipe hop, which is all 
Starlink does today.

-Original Message-
From: "Dave Taht" 
Sent: Thu, Dec 31, 2020 at 1:37 pm
To: "bloat" , "cerowrt-devel" 
, "Scott Manley" 

Cc: "bloat" , "cerowrt-devel" 
, "Scott Manley" 

Subject: [Cerowrt-devel] my thx to spacex (and kerbal space program) 
forcheering me up all year

If it wasn't for such a long list of wonderful accomplishments in
space, it would have been a sadder year. i just re-recorded my song
"one first landing" out on my dinghy:

https://www.youtube.com/watch?v=wjur0RG-v-I&feature=youtu.be

Maybe someday I'll get scott manley to do his verse on this. Try as I
might over the past few years, I still can't cop his accent.

Now if we can only fix starlink's bufferbloat! It looks to me like the
firmware is QCA's openwrt derivative...

-- 
"For a successful technology, reality must take precedence over public
relations, for Mother Nature cannot be fooled" - Richard Feynman

d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] apparently this is an end goal of a lot of ipv6 work in the ietf

2020-07-02 Thread David P. Reed
Interop 2019 gave this an award?

I have to say, it reads like a clone of the Bell System Technical Manual (or 
some of the LTE spec).

In the tutorial it doesn't seem to say what problem it is solving.

But hey, maybe the IAB loves it? They seem to be clueless as hell about 
internetworking as a concept, seeming to think thet "interoperation" is 
irrelevant, and that the job of the Internet is to defend a collusion of big 
corporations from being accused of anti-trust violation, by colluding to 
exclude folks who aren't insiders from connecting to the Internet without 
paying incumbents.


On Thursday, July 2, 2020 2:04pm, "Dave Taht"  said:

> who knew?
> 
> https://www.ipv6plus.net/
> 
> --
> "For a successful technology, reality must take precedence over public
> relations, for Mother Nature cannot be fooled" - Richard Feynman
> 
> d...@taht.net  CTO, TekLibre, LLC Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] nostalgia

2020-05-08 Thread David P. Reed

Yeah. In 1969, Bruce Daniels was a neighbor in my dorm (Random Hall, MIT) and 
Tim Anderson was working in the same office space at Project MAC as Carl Hewitt 
in around 1974, when I was, among other things (working on the Multics kernel 
and building MACLISP), helping Carl with implementing Planner, and Tim started 
working on Zork soon after that. This was before the Apple II existed, it's 
worth remembering.
 
Something worth noting about this: the ONLY computer gaming worth anything at 
the time was being built by students in ARPA funded labs like MIT Project MAC, 
and Xerox PARC. Not ARPA funded games (those came much latter as battle 
simulators were interesting), but games were great for computer languages. In 
fact, the Planner effort was entangled with the Muddle language, which I think 
the Zork folks worked on with the folks working with Carl on Planner around 
that time.
 
The idea of a "packaged software product" really didn't happen until the Apple 
II started taking off (along with the TRS-80). There was no such thing, no such 
market. But what was really smart about the Zork guys was that they saw that 
opportunity for what it was, and started their company. And in some sense, they 
were the "killer app" for gaming (given the character displays of the early PCs 
like the Apple and Tandy machines). Just as Visicalc was the killer app for 
business. (Printers were so terrible that word processing had no real 
opportunity for business users, just for hobbyists, that came later with laser 
printers that became cheap). My point here is that there are folks who have 
something just about ready for a technology change, and thus they can, if 
smart, move first and define the industry that results.
 
On Friday, May 8, 2020 2:58am, "Dave Taht"  said:



> https://github.com/MITDDC/zork
> 
> I still haven't finished zork II. Had to reverse engineer zork 1 to win.
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] real time text

2020-04-16 Thread David P. Reed

It does seem awfully complicated compared to how I would imagine the 
functionality could be implemented if you just did it on top of UDP. One of the 
costs of using UDP is that one needs to support protocol-specific end-to-end 
congestion control as well as protocol-specific datagram-loss handling.
 
To me a far simpler idea would be to start with "UDP congestion control" that 
didn't assume UDP datagrams arrived in-order and at-most-once, using observed 
drops and ECN marks, or end-to-end delay (by timestamping packets).
 
Then, on top of that logic, layer a sort of erasure coding (allowing 
reconstruction of packets containing backspace/delete) that allows out-of-order 
delivery as information becomes known. Erasure coding (like Digital Fountain 
codes) is more efficient than retransmitting duplicates of packets - if there 
are N packets queued in the network, you'd need some kind of SACK-like scheme, 
but SACK doesn't work very well when the buffering is a backup in the network, 
rather than in the receive endpoint's OS queueing. Digital Fountains and their 
successors work great! (and I think the patent finally expired).
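 
Here's the flavor of what I mean, as a toy sketch (the framing and degree 
distribution are placeholders of mine - a real fountain code uses a tuned 
degree distribution, not a uniform one):

    # Toy LT-style fountain code: each coded packet is the XOR of a
    # pseudo-random subset of source blocks, reproducible from a seed.
    # Illustration only; not a tuned or efficient code.
    import random

    BLOCK = 64  # bytes per source block (placeholder size)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def neighbors(seed, n_blocks):
        rng = random.Random(seed)
        degree = rng.randint(1, n_blocks)   # uniform degree: a placeholder
        return set(rng.sample(range(n_blocks), degree))

    def encode(blocks, seed):
        payload = bytes(BLOCK)
        for i in neighbors(seed, len(blocks)):
            payload = xor(payload, blocks[i])
        return seed, payload

    def decode(n_blocks, packets):
        # Peeling decoder: solve degree-1 equations, substitute, repeat.
        eqs = [(neighbors(s, n_blocks), p) for s, p in packets]
        solved = {}
        progress = True
        while progress and len(solved) < n_blocks:
            progress = False
            for idxs, payload in eqs:
                pending = idxs - set(solved)
                if len(pending) == 1:
                    i = pending.pop()
                    for j in idxs - {i}:
                        payload = xor(payload, solved[j])
                    solved[i] = payload
                    progress = True
        return [solved.get(i) for i in range(n_blocks)]

    blocks = [bytes([i]) * BLOCK for i in range(8)]
    coded = [encode(blocks, seed) for seed in range(30)]
    random.shuffle(coded)   # arrival order is irrelevant to the decoder
    print(decode(len(blocks), coded) == blocks)  # usually True

Note the property that matters: any sufficiently large subset of coded packets, 
in any order, recovers the originals - no SACK bookkeeping at all.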
 
Up to this point, encryption hasn't been mentioned. But there are encryption 
schemes that work very well for UDP - emulating a "one-time pad" based on a 
random start value fed back into a good cipher. Ideally it would be inserted 
under the erasure code layer. What you need to know to decrypt a block to feed 
into the erasure-code decoder is just a sequence number for the transmitted 
block, so you can index into the OTP.
 
Very simple.
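 
Something like this sketch, say (the names are mine; SHA-256 as a keystream 
generator is purely for illustration - a real implementation would use a 
vetted stream cipher):

    # Sketch: a one-time-pad-like keystream indexed by block sequence
    # number, so any block decrypts independently of arrival order.
    import hashlib

    def keystream(key: bytes, seq: int, length: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + seq.to_bytes(8, "big")
                                  + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def seal(key: bytes, seq: int, block: bytes) -> bytes:
        pad = keystream(key, seq, len(block))
        return bytes(a ^ b for a, b in zip(block, pad))

    unseal = seal   # XOR is its own inverse: decrypting needs (key, seq)

    key = b"\x00" * 32     # placeholder shared secret
    ct = seal(key, 1234, b"erasure-coded block")
    print(unseal(key, 1234, ct))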
 
But doing this on top of WebRTC (not a bad protocol, just a complicated 
platform) etc. seems to introduce problems that need to be patched around.
 
 
On Wednesday, April 15, 2020 7:34pm, "Dave Taht"  said:



> dave
> 
> I am a big fan of udp. but reading about how this was implemented made
> my head hurt. Then add crypto.
> 
> https://www.meetecho.com/blog/realtime-text-sip-and-webrtc/
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] 800gige

2020-04-12 Thread David P. Reed
Sadly, out-of-order delivery tolerance was a "requirement" when we designed TCP 
originally. There was a big motivation: spreading traffic across a variety of 
roughly equivalent paths, when you look at the center of the network activity 
(not the stupid image called "backbone" that forces you to think it is just one 
pipe in the middle).
Instead a bunch of bell-head, circuit-oriented thought was engineered into 
TCP's later assumptions (though not UDP, thank the lord). And I mean to be 
insulting there.

It continues to appall me how much the post-1990 TCP tinkerers have assumed 
"almost perfectly in-order" delivery of packets that are in transit in the 
network between endpoints, and how much they screw up when that isn't true.

Almost every paper in the literature (and RFC's) makes the assumption. 

But here's the point. With a little careful thought, it is unnecessary to make 
this assumption in almost all cases. For example: you can get the effect of 
SACK without having to assume that delivery is almost in-order. And the result 
will be a protocol that works better for out-of-order delivery, and also has 
much better performance when in-order delivery happens to occur. (and that's 
not even taking advantage of erasure coding like the invention called "Digital 
Fountains", which is also an approach for out-of-order delivery in TCP).

This is another example of the failure to adhere to the end-to-end argument. 
You don't need to put "near-in-order-delivery" as a function into the network 
to get the result you want (congestion control, efficient error-tolerance). So 
don't put that requirement on the network. Let it choose a different route for 
every packet from A to B.



On Saturday, April 11, 2020 7:08pm, "Dave Taht"  said:

> The way I've basically looked at things since 25Gbit ethernet was that
> improvements in single stream throughput were dead. I see a lot of
> work on out of order delivery tolerance as an outgrowth of that,
> but... am I wrong?
> 
> https://ethernettechnologyconsortium.org/wp-content/uploads/2020/03/800G-Specification_r1.0.pdf
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] openwrt or "open" security cams?

2020-04-04 Thread David P. Reed
I have some servers with 512 GB of RAM, and my company sells "Software Defined 
Server" capability to relatively inexpensively make virtualized systems with 10 
TB of RAM out of these 512 GB systems. They fit in less than a single 19" rack.

[Couldn't resist :-) https://www.tidalscale.com/technology is the place to 
learn more.

More OT: We deal with "bloat" at a different scale in our Ethernet internal to 
our software defined server implementation, since our interconnects are 10 GigE 
up to 100 GigE, but I can tell you it is a problem there, too. Serious problem 
if you buy Arista gear, which is bloated by intent :-( because latency doesn't 
occur to Bechtolsheim to be a problem, only throughput. Fortunately, we (my 
design) control the network stack in the hyperkernel, and can use specialized 
end-to-end protocols that don't use the bloat.

Anyway, my actual non-work desktop has 32 GB RAM. So 128 GB isn't surprising in 
my context.]

On Friday, April 3, 2020 7:10pm, "Jonathan Morton"  said:

>> On 4 Apr, 2020, at 2:08 am, Joel Wirāmu Pauling  wrote:
>>
>> 128G of Ram
> 
> That's somewhat more than I have in my desktop PCs.  Did you mean 128MB?
> 
>  - Jonathan Morton
> 
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] openwrt or "open" security cams?

2020-04-03 Thread David P. Reed
The ESP32-CAM device (which is under $10 quantity 1 from lots of sources, just 
google) is a WiFi enabled camera board with lots of functionaliy built in, 
including a full WiFi (2.4 GHz) and TCP/IP with TLS stack.
I have been playing with a couple, as have my friends. Various folks have 3D 
printed cases for particular uses, or you can just use any little box with a 
hole drilled for the camera.

It's programmable with the Arduino tools, or with Micropython, or with an 
embedded JavaScript framework. You need an FTDI USB device to boot it, program 
it, ...

Folks have used it effectively for security camera applications. The camera 
usually sold with it is a very teeny camera indeed - smaller than a black 
bean; I almost lost it the first time I opened the package with the board and 
camera.

Easily battery powered. You can find a lot of support from the hacker community.

It does a simple (imperfect) face recognition onboard as an option, and can do 
single frames or streams, and has a number of GPIO pins you can use to trigger 
it, if triggering by motion isn't what you want.
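 
For a sense of how little code a triggered capture takes, here is a MicroPython 
sketch (this assumes one of the community camera-enabled MicroPython builds for 
the ESP32-CAM - the camera module and its init/capture calls are that 
firmware's, and the pin number is a placeholder):

    # Sketch: save a JPEG frame whenever a motion-sensor GPIO goes high.
    # Assumes a camera-enabled MicroPython build for the ESP32-CAM.
    import camera            # module provided by that firmware build
    import time
    from machine import Pin

    pir = Pin(13, Pin.IN)    # PIR motion sensor input (placeholder pin)

    camera.init(0, format=camera.JPEG)   # firmware-specific init
    try:
        while True:
            if pir.value():
                frame = camera.capture()     # JPEG byte buffer
                with open("frame.jpg", "wb") as f:
                    f.write(frame)
            time.sleep_ms(100)
    finally:
        camera.deinit()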

On Thursday, April 2, 2020 2:05pm, "Dave Taht"  said:

> I am considering doing a security camera deployment, but am concerned
> about the overall security of
> security cams. Are there any with a reasonably rebuildable set of sources? 
> ipv6?
> 
> Anyone have recent experience with zoneminder, jitsi or big blue button?
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] AQL in openwrt head, but not 19 stable

2020-03-29 Thread David P. Reed
Pragmatically, I solve this by a mixed, manual strategy. My entry router at 
home isn't OpenWRT based, it only connects a WAN GigE port to a home LAN GigE 
port. I use multiple APs, and for now solve the "make wifi fast" problem by 
using one 5 GHz channel per AP, and enough APs so I can have only one laptop or 
phone per AP/channel.
And the bulk of my heavy duty use is wired to a 10 GigE backbone in the house. 
That way, most of my file transfers only interfere with the same endpoints' 
interactions.

I'd really like to not have to do this, but to be honest, maintaining OpenWRT, 
and dealing with the super-proprietary garbage in WiFi chipsets is just a waste 
of my time, which is spent on other efforts - I can buy my way out by treating 
APs as disposable, suboptimal crap.

I'd really like to see someone fund you guys and actually learn what you 
know.

The Atheros, Broadcom, and other chip providers are the obvious source of 
support, but for some reason they don't want to compete on getting rid of bloat 
in the airwaves.

Maybe some Chinese company is motivated to beat Qualcomm and Broadcom by going 
open in their packet handling driver code and letting you guys make it work?

ESP32 devices show that you don't have to be Broadcom or Qualcomm/Atheros to do 
WiFi chips. They aren't that open, but they don't seem to be focused on locking 
out innovators from the market. Maybe Huawei is motivated, since Qualcomm is 
the big company behind Trump's trade war against them.

On Sunday, March 29, 2020 2:44pm, "Dave Taht"  said:

> being that I have got absolutely miserable performance out of the
> ath10k based ubnt mesh lite and pro at the moment,
> I guess I'm going to bite the bullet and try head.
> 
> Not clear what the right things were to get the ath10k up to speed.
> 
> https://git.openwrt.org/?p=openwrt/openwrt.git;a=commit;h=f0aff72c2bfae884b5482a288a191cc33a37f66b
> 
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferncing in an age of bloated uplinks?

2020-03-28 Thread David P. Reed
Regarding EDF.

I've been pushing folks to move latency sensitive computing in ALL OS's to a 
version of EDF since about 1976. This was when I was in grad school working on 
distributed computing on LANs. In fact, it is where I got the idea for my Ph.D. 
thesis (completed in 1978) which pointed out a bigger idea - that getting ACID 
consistency [ACID hadn't been invented then as a term, we called it atomic 
actions] on data in a distributed system being processed by concurrent 
distributed transactions can be done by using timestamps that behave like the 
"deadlines" in EDF. In fact, the scheduling of code in my thesis was a 
generalized version of EDF, approximated because of the impossibility of 
perfect synchronization.

The Croquet system (a real-time, edge-based decentralized system with no 
central server, which we demonstrated with a Second-Life-style virtual world 
that worked entirely on a set of laptops that could be across the country from 
each other) was based on an OS implemented in a variant of the Squeak 
programming language, where the scheduling and object model was not process 
based, but message based with replicated computation synchronized via a shared 
"timestamp" that was used for execution scheduling (essentially distributed 
EDF). The latency requirements for this distributed virtual world were on the 
order of 100 msec. simultaneity for mouse clicks affecting all participating 
nodes across the country in a virtual 3D world, with sound, etc.

Croquet was built in 2 years by 3 people (starting from scratch).  And 
scheduling was never a problem, nor was variable network delay (our protocol 
was based on UDP frames synchronized by the same timestamps used to synchronize 
each object method execution).

The operating system model is one I created within that modified Squeak 
environment as part of its base "interpreter", which wasn't a loop, but a 
scheduler using EDF.
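 
For anyone who hasn't seen EDF reduced to practice: the core of such an 
"interpreter" is tiny - a priority queue ordered by deadline (a sketch; the 
message structure is invented for illustration):

    # Sketch: an EDF core - always run the pending message with the
    # earliest deadline. Message structure is illustrative only.
    import heapq
    import time

    class EDFScheduler:
        def __init__(self):
            self.queue = []   # heap of (deadline, seq, fn, args)
            self.seq = 0      # tie-breaker keeps equal deadlines FIFO

        def submit(self, deadline, fn, *args):
            heapq.heappush(self.queue, (deadline, self.seq, fn, args))
            self.seq += 1

        def run(self):
            while self.queue:
                deadline, _, fn, args = heapq.heappop(self.queue)
                if time.monotonic() > deadline:
                    print("deadline missed:", fn.__name__)
                fn(*args)

    sched = EDFScheduler()
    now = time.monotonic()
    sched.submit(now + 0.020, print, "video frame")  # tighter deadline wins
    sched.submit(now + 0.100, print, "file chunk")
    sched.run()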

To make this work properly, the programming model has to be unified around this 
kind of scheduling.

And here's why I am mentioning this. To put EDF *only* into the networking 
stack, but leave the userspace application living with the stupid Linux 
timesharing system scheduler, optimized for people typing commands on terminals 
every few seconds and running batch compilation is the *worst of all possible 
ways to use EDF*.

Because it creates a huge mess bridging those two ideas.

Croquet is a much more complicated thing than a teleconferencing system, 
because it actually lets end users write simple programs that control the user 
interactive experience, 30 frames per second across the entire US, replicated 
on each computer, in the Squeak variant of Smalltalk. And we did it with 3 
coders in a couple of years. (yes, they are skilled people - me, David A. 
Smith, and the late Andreas Raab, who died way too young).

In contrast, trying to bridge between EDF and regular Linux processes running 
under the ordinary scheduler, even with "nice" and all kinds of hacks, just to 
do a video conferencing system with fixed, non-programmable behavior, would 
take far more design, far more lines of code, etc.

So this is why I think timesharing OS's are really obsolescent for modern 
distributed interactive systems. Yeah, "rsync" and "git" are nice for batch 
replication of files. And yeah, EDF can help make them perform faster in their 
file transferring.

But to make an immersive, real-time experience (which is what computing today 
is all about, on all time scales, even in the servers other than HPC) it is ALL 
wrong, and incrementally patching little pieces of Linux ain't gonna get there. 
Windows or BSD (macOS) ain't gonna do it either.

I'm old. Why is Linux living in the idea space of operating systems that 
preceded networking, distributed computing, media sharing?

My opinion, and it is only an opinion based on experience, is that it really is 
time for networking to stop focusing on file transfers, and OS's to stop 
focusing on timesharing behavior. The world is "live" and time-based. It may 
not be hard-real-time. But latency is what matters.

Since networking will remain separate from OS's, the interface concepts in both 
really need to be matched to get to that future.

It's why I pushed so hard for UDP, not reliable in-order streams alone. And in 
my view, though no one ever implemented it, those UDP packets would be carrying 
timestamps, essential for synchronization of coordinated operations at all the 
endpoints of the computation.

I'd love to see that happen before this old guy dies. I think it will make it a 
whole lot easier to make networked programs work.

Decentralization isn't "blockchain". My thesis, in 1978, talked about one way 
to decentralize computation, not just data structures. And timing is critical.

Sorry for the rant. I'm tired of waiting for "backwards compatibility" with 
Unix version 1 to allow us to go forward. To me, Linux is a great version of a 
subset of the operating systems I w

Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferncing in an age of bloated uplinks?

2020-03-27 Thread David P. Reed
Congestion control for real-time video is quite different than for streaming. 
Streaming really is dealt with by big enough (multi-second) buffering, and 
can in principle work great over TCP (if debloated).

UDP congestion control MUST be end-to-end and done in the application layer, 
which is usually outside the OS kernel. This makes it tricky, because you end 
up with latency variation due to the OS's process scheduler that is on the order 
of magnitude of the real-time requirements for air-to-air or light-to-light 
response (meaning the physical transition from sound or picture to and from the 
transducer).

This creates a godawful mess when trying to do an app. Whether in WebRTC (peer 
to peer UDP) or in a Linux userspace app, the scheduler has huge variance in 
delay.

Now getting rid of bloat currently requires TCP to respond to congestion 
signalling. UDP in the kernel doesn't do that, and it doesn't tell userspace 
much either (you can try to detect packet drops in userspace, but coding that 
up is quite hard because the schedulers get in the way of measurement, and 
forget about ECN being seen in userspace)

This is OS architecture messiness, not a layer 2 or 3 issue.

I've thought about this a lot. Here's my thoughts:

I hate putting things in the kernel! It's insecure. But what this says is that 
for very historical and stupid reasons (related to the ideas of early 
timesharing systems like Unix and Multics) folks try to make real-time 
algorithms look like ordinary "processes" whose notion of controlling temporal 
behavior is abstracted away.

So: 
1. We really should rethink how timing-sensitive algorithms are expressed, and 
it isn't gonna be good to base them on semaphores and threads that run at 
random rates. That means a very different OS conceptual framework. Can this 
share with, say, the Linux we know and love - yes, the hardware can be shared. 
One should be able to dedicate virtual processors that are not running Linux 
processes, but instead another computational model (dataflow?).
An example of this (though clunky and unsupported by good tools) is in FreeBSD: 
it's called *netgraph*. It's a structured way to write reactive algorithms that 
are demand or arrival driven. It also has some security issues, and since it is 
heavily based on passing mbufs around it's really quirky. But I have found it 
useful for the kind of things that need to get done in teleconferencing voice 
and video.

2. eBPF is interesting, because it is more secure, and is again focused on 
running code at kernel level, event-driven.  I think it would be a seriously 
difficult lift to get it to the point where one could program the networked 
media processing in BPF.

3. One of the nice things about KVM (hardware virtualization) is that 
potentially it lets different low level machine models share a common machine. 
It occurs to me that using VIRTIO network devices and some kind of VIRTIO media 
processing devices, that a KVM virtual machine could be hooked up to the 
packet-level networking drivers in the end device, isolating the 
teleconferencing from the rest of the endpoint OS, and creating the right kind 
of near-bare--metal environment for managing the timing of network packets and 
the paths to the screen and audio that would be simple and clean and tightly 
scheduled. KVM could "own" one or more of the physical cores during the 
teleconference.

You can see, though, that this isn't just a "network protocol design" problem. 
This is only partly a network protocol issue, but one that is coupled with the 
architecture of the end systems.

I reminisce a little bit thinking back to the 1970's and 80's when TCP/IP and 
UDP/IP were being designed. Sadly, it was one of the big problems of 
communicating between the OS community and the protocol community that the OS 
community couldn't think outside the "timesharing" system box, and the protocol 
community thought of networking like phone calls (sessions). This is where the 
need for control of timing and buffering got lost. The timesharing folks 
largely thought of networks as for reliable timeless sequential "streams" of 
data that had no particular urgency. The network protocol folks were focused on 
ARQ.
Only a few of us cared about end-to-end latency bounds (where ends meant 
keyboard click or audio sample to screen display change or speaker motion). The 
packet speech guys did, but most networking guys wanted to toss them under the 
bus as annoying. And those of us doing distributed multinode algorithms did, 
but the remote login and FTP guys were skeptical that would ever matter.

It's the latency, stupid. Not the reliability, nor the consistency, nor 
throughput. Unless both the OS and the path are focused on minimizing latency, 
a vast set of applications will suck. Unfortunately, both the OS and network 
communities are *stuck* in a world where latency is uncontrollable, and there 
are no tools for getting it better.

 

On Friday, March 27, 2020 1:2

Re: [Cerowrt-devel] [Bloat] OT: Netflix vs 6in4 from HE.net

2020-03-24 Thread David P. Reed
Thanks, Colin, for the info. Sadly, I learned all about the licensing of 
content in the industry back about 20 years ago when I was active in the 
battles about Xcasting rights internationally (extending "broadcast rights" to 
the Web, which are rights that exist only in the EU, having to do with 
protecting broadcasters whose signals are powerful enough to cross borders of 
countries, so a whole new, non-copyright-based Intellectual Property Right was 
invented. WIPO wanted to argue that the Web was just like broadcasting across 
borders, so web pages should be burdened by Xcasting rights, along with all 
other copyrighted things.)

What I wanted to know was exactly what you just said in passing: that he.net's 
address space was entirely blocked by Netflix because it wasn't accurately 
geolocated for "region restriction" enforcement.

Whether I think that is "correct" or "reasonable", I just want to be able to 
get Netflix in my US house. Not to be any sort of "pirate" intentionally trying 
to break the license. I really just want that stuff to work as the license 
between Netflix and content provider requires (I'm sure the license doesn't say 
"block he.net").


On Tuesday, March 24, 2020 11:11am, "Colin Dearborn"  
said:

> HE IPv6 space has been tagged as a vpn type service by Netflix, since it has 
> users
> all over the world, but it's space is all geolocated in the US. If HE had
> geolocated the blocks of each POP to the country the POP resided in, and put 
> some
> rules around geolocation of using each POP (IE Canadian residents can only use
> Canadian POPs) this could have been avoided, but it also would have been a 
> large
> amount of work on HE's side just to make geolocation accurate-ish.
> 
> Fortunately, my ISP got IPv6 working natively shortly after Netflix started
> blocking HE's space, so I didn't have to suffer for too long (but lost my US
> netflix.)
> 
> Content licensing is a very complex thing. While you might believe that your
> subscription equals the license, in reality the license is the agreement 
> between
> Netflix and the content providers. Content providers put strict geolocation 
> rules
> of where content can be played on Netflix, and Netflix can be sued by them if 
> it
> appears that they're not doing enough to protect these rules. This is to 
> protect
> the value of the content providers content, when they sell it to someone other
> than Netflix, or start their own streaming service.  For example, in Canada, 
> we
> have a streaming service called Crave. There's a lot of content on there that
> would be available to Netflix in the States, so if Netflix didn't properly 
> adhere
> to geolocation rules, Crave could legitimately either sue Netflix directly, 
> or get
> the content provider to do it for them (again, depending on the licensing
> agreement).
> This is why when you travel, you get the local Netflix content, not the 
> content of
> the country where you pay the subscription.
> 
> Your option of using a cloud server may work. :)
> 
> 
> This might turn out to be a problem for me - I have a "smart TV" that I watch
> Netflix on, and it appears to use IPv4. What specifically triggers Netflix to
> reject specific IPv6 clients? Is it the player's IPv6 address? Is all of 
> he.net's
> address space blocked?
> 
> I've been planning to move more of my home networks to routed IPv6.
> 
> In principle, Netflix as a business shouldn't care - it's just doing its best
> efforts to protect its content's licensing requirements. So if I'm actually 
> in the
> US, and my net claims correctly to be in US (by whatever trickery I use), 
> neither
> Netflix nor I am violating any license from a legal point of view.
> 
> So all I need to do would be to get a legit US IPv6 address (I have one /64 
> on a
> public cloud server), and tunnel it to my house and give it to my TV. Not 
> ideal,
> but until Netflix does its geofencing *correctly* according to the license, 
> rather
> than according to IP address, I'd say it's a proper thing.
> 
> 
> 
> On Saturday, March 21, 2020 8:47pm, "Rich Brown"  
> said:
> 
>>  I love knowing smart people.
>>
>> Yes, it does appear to be Netflix geo-fencing their services. Given that I 
>> only
>> watch Netflix on one computer, I am taking Sebastian's advice and turning off
>> IPv6
>> DNS queries in Firefox.
>>
>> Thanks again for these responses.
>>
>> Rich
>>
>>> On Mar 21, 2020, at 6:14 PM, Sebastian Moeller  wrote:
>>>
>>> Hi Rich,
>>>
>>> since it seems to be IPv6 related, why not use firefox for netflix and 
>>> disable
>>> IPv6 in firefox (see
>>> https://support.mozilla.org/en-US/kb/firefox-cant-load-websites-other-browsers-can#w_ipv6)
>>> maybe that works well enough?
>>>
>>> Best Regards
>>>  Sebastian
>>>
>>>
>>>
>>>
 On Mar 21, 2020, at 21:20, Rich Brown  wrote:

 to Bloat & CeroWrt folks: This is a little OT for either of these lists, 
 but I
 figured there are plenty of experts here, and I would be delighted to g

Re: [Cerowrt-devel] [Bloat] OT: Netflix vs 6in4 from HE.net

2020-03-24 Thread David P. Reed
Sadly, my home provider, RCN, which is otherwise hugely better than Comcast and 
Verizon provisioning wise, still won't provide IPv6 to its customers. It's a 
corporate level decision. I know the regional network operations guys, which is 
why I know about the provisioning - they have very high-end DOCSIS 3.1 fabric, 
with extra capacity, unlike Comcast and Verizon, who are not replacing gear 
with newer gear until it breaks.

Unfortunately I haven't found a really local place to get IPv6 tunneling from. 
I've had my he.net /56 forever, but the best tunnel goes down to NYC. And the 
peering wars are just truly annoying when he.net is blocked as it is from some 
AS's.

On Tuesday, March 24, 2020 1:47pm, "Dave Taht"  said:

> It is easy to use a nearby linode server as an ipv6 vpn. Back when I was still
> doing it (I too went native ipv6), I used wireguard and babel and
> source specific routing to bring ipv6 anywhere I felt I needed it.
> Linode will give you your own ipv6/64 if asked. If asked especially
> nicely you can get a /56
> 
> whether or not they meet netflix's requirements for geolocation i don't know.
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] OT: Netflix vs 6in4 from HE.net

2020-03-22 Thread David P. Reed
This might turn out to be a problem for me - I have a "smart TV" that I watch 
Netflix on, and it appears to use IPv4. What specifically triggers Netflix to 
reject specific IPv6 clients? Is it the player's IPv6 address? Is all of 
he.net's address space blocked?

I've been planning to move more of my home networks to routed IPv6.

In principle, Netflix as a business shouldn't care - it's just doing its best 
efforts to protect its content's licensing requirements. So if I'm actually in 
the US, and my net claims correctly to be in US (by whatever trickery I use), 
neither Netflix nor I am violating any license from a legal point of view.

So all I need to do would be to get a legit US IPv6 address (I have one /64 on 
a public cloud server), and tunnel it to my house and give it to my TV. Not 
ideal, but until Netflix does its geofencing *correctly* according to the 
license, rather than according to IP address, I'd say it's a proper thing.



On Saturday, March 21, 2020 8:47pm, "Rich Brown"  said:

>  I love knowing smart people.
> 
> Yes, it does appear to be Netflix geo-fencing their services. Given that I 
> only
> watch Netflix on one computer, I am taking Sebastian's advice and turning off 
> IPv6
> DNS queries in Firefox.
> 
> Thanks again for these responses.
> 
> Rich
> 
>> On Mar 21, 2020, at 6:14 PM, Sebastian Moeller  wrote:
>>
>> Hi Rich,
>>
>> since it seems to be IPv6 related, why not use firefox for netflix and 
>> disable
>> IPv6 in firefox (see
>> https://support.mozilla.org/en-US/kb/firefox-cant-load-websites-other-browsers-can#w_ipv6)
>> maybe that works well enough?
>>
>> Best Regards
>>  Sebastian
>>
>>
>>
>>
>>> On Mar 21, 2020, at 21:20, Rich Brown  wrote:
>>>
>>> to Bloat & CeroWrt folks: This is a little OT for either of these lists, 
>>> but I
>>> figured there are plenty of experts here, and I would be delighted to get 
>>> your
>>> thoughts.
>>>
>>> I just tried to view a Netflix movie and got a F7111-5059 error message. 
>>> This
>>> prevented the video from playing. (As recently as a month or two ago, it 
>>> worked
>>> fine.)
>>>
>>> Googling the error message gets to this page
>>> https://help.netflix.com/en/node/54085 that singles out use of an IPv6 Proxy
>>> Tunnel.
>>>
>>> Sure enough, I'm have a 6in4 tunnel through Hurricane Electric on WAN6. 
>>> Stopping
>>> that WAN6 interface caused Netflix to work.
>>>
>>> What advice could you offer? (I could, of course, turn off WAN6 to watch 
>>> movies.
>>> But that's a drag, and other family members couldn't do this.) Many thanks.
>>>
>>> Rich
>>> ___
>>> Bloat mailing list
>>> bl...@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/bloat
>>
> 
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] I got ipv6 on my cell tether this morning

2020-01-28 Thread David P. Reed
Good grief, can't we kill off NAT? IPv4.01 isn't IPv6.

What does the other end think your IPv6 source address is? Can your tethered 
systems pass addressable endpoints to the other end, and expect them to work? 
Or will there be STUN6/TURN6 needed to do, say, WebRTC peering?

On Tuesday, January 28, 2020 12:51pm, "Michael Richardson"  
said:

> Dave Taht  wrote:
> > I got a new phone for christmas, and was surprised to see I finally had 
> an
> IPv6
> > allocation on my tether. More surprising was seeing fc:: used... and
> > while my laptop
> 
> Sounds like your phone has decided to use NAT66, as fc:: is a ULA.
> 
> --
> ]   Never tell me the odds! | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works|IoT architect   [
> ] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails
> [
> 
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] 5g nas protocol

2020-01-19 Thread David P. Reed
What a beautiful document. I noticed a few bugs in the protocol here and there 
as I skimmed it, and what appear to just be typos.

I'm sure it all just works perfectly, after reading the proof of correctness 
that I found on github.

:-(

On Sunday, January 19, 2020 8:54am, "Dave Taht"  said:

> If anyone out there is having trouble sleeping, I highly recommend
> trying to make heads or tails of the 3gpp nas protocol which governs
> how user equipment connects and moves about it. I got as far as sec
> 6.2.5.1.1.2.
> 
> https://www.etsi.org/deliver/etsi_ts/124500_124599/124501/15.00.00_60/ts_124501v15p.pdf
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] starlink as a mesh network

2020-01-13 Thread David P. Reed
Fun, but it is all just a dynamic geometry game. 

It's worth remembering that Congestion Control will be a huge problem here. 
It's far from obvious that current TCP congestion control (which assumes all 
packets in a virtual circuit traverse the same path in a very deep way indeed) 
will do the job. Thus there is a serious risk of congestion collapse if windows 
larger than one packet are allowed to operate.

So the Dijkstra algorithm reachability is not at all predictive of how this 
network will respond once it has a moderate load percentage on any path (over 
10% of average link capacity).

Glad it's being funded by billionaires who may have a long timeframe in mind.

Iridium took a much different approach, focused on 14 kb/sec constant-rate 
virtual circuits (compressed voice).

My guess is that, just as in the Internet, nobody understands bufferbloat, or 
deeply understands the TCP congestion control approach's limitations. And I bet 
they will throw in "differential service" as if it were a solved problem, and 
maybe network layer multicast, too. Why not create a huge mess based on 
assuming you can just figure it out after the satellites are up?

Are there any queueing theory and control theory folks among the leadership 
here? There are few in the IETF, and few in the cellular community, too, who 
can explore a completely new topology...

On Sunday, January 12, 2020 7:59am, "Dave Taht"  said:

> Mark  ignores retries and loss. I'm really far from confident this can
> be avoided, however perhaps with multiple terminals retransmitting...
> 
> http://nrg.cs.ucl.ac.uk/mjh/starlink/hotnets19.pdf
> 
> And it's still a bear to cross an ocean.
> 
> --
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] will starlink have bufferbloat?

2019-05-23 Thread David P. Reed

Sorry, I can't help - I never spend time or effort on Twitter, Reddit, etc. 
because I see no value and lots of problems in doing that.
 
Musk probably wouldn't love my views on most of his companies, anyway.
I hope he gets something right on this one.
 
On Wednesday, May 22, 2019 6:45am, "Dave Taht"  said:



> And I tried my first ever post to reddit, not realizing it didn't take 
> html
> 
> https://www.reddit.com/r/Starlink/comments/brn6gg/will_starlink_have_bufferbloat/
> 
> and slashdot also.
> 
> On Wed, May 22, 2019 at 11:53 AM Dave Taht  wrote:
> >
> > With the first major starlink launch stuck on the pad, and having
> > never got a straight answer about how starlink was going to manage
> > satellite handovers and admission control, I came up with an
> > "annoyer-in-chief" idea to see if we could find out what the plan was.
> >
> > If everybody here (500+ members of these mailing lists!) could take 2
> > minutes to compose an interesting tweet or reply to elon musk on the
> > subject, maybe we'd get somewhere. I just did one (
> > https://twitter.com/mtaht/status/1131131277413822464 ) but a huge
> > variety of posts on the theme are possible, other thoughts were things
> > like:
> >
> > @elonmusk There seems to be no intelligent life among ISPs down here.
> > Has #starlink handled the #bufferbloat problem? (
> > https://blog.tohojo.dk/media/bufferbloat-and-beyond.pdf )
> >
> > @elonmusk Keep hoping #bufferbloat will be solved by #starlink - got a
> > plan? ( https://blog.tohojo.dk/media/bufferbloat-and-beyond.pdf )
> >
> > do it on replies to anything he says about starlink, keep doin it, and
> > perhaps, an answer will appear.
> >
> > Sometimes ya gotta be loud.
> >
> > --
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-205-9740
> 
> 
> 
> --
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-205-9740
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


[Cerowrt-devel] Does Ubiquiti Unifi have bufferbloat and unfair/bloated WiFi scheduling?

2019-05-23 Thread David P. Reed

I have been chatting with a startup in the Multi-User Dwelling networking 
operations space, and they seem to really be attracted to Ubiquiti Unifi 
systems. I can't blame them for wanting a comprehensive and evolving system.
 
But on the questions related to bufferbloat and making wifi both low latency 
and fast, I really don't know much about these products. (I have a Unifi 10 
Gb/sec switch as my home/lab fiber backbone, but that's not really relevant to 
answering this question).
 
So, since you, Dave, and others have been talking about real-world fq_codel 
etc. and faster wifi scheduling, does anyone know what the status at Ubiquiti 
is?
 
I know some here run OpenWRT/LEDE on Unifi APs, but that's not my question, 
really.
 
Any knowledge out there?
 
(I'd like to recommend that they run some actual load tests - flent RRUL, etc. 
- but I think they might need some help).
Personally I have no stake whatever in their use, but I'd love to get someone 
to start solving the bloat and queueing/scheduling problems.
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] (no subject)

2019-05-18 Thread David P. Reed

Correction accepted. Between the US east and west coasts, the time of flight of 
packets on fiber or cable is about 23 msec. (Boston-LA, driving route, over 
fiber, at 207 Mm/sec).
 
So, if all intermediate links are equal in rate, at say, 10 Gb/sec, that means 
that there should be no more than 10,000,000,000 * 0.023 bits actually in 
transit on the actual fiber, plus a packet in each intermediate router's outbound 
queue. Let's say a packet is around 1500 bytes, or 12,000 bits, since that is 
the MTU we stupidly enforce even today, and there are 10 hops (typical today 
between Boston and LA.)
 
So we would expect the optimal window size sum, for all flows on any hop of 
that path, to be 10 * 12000 bits + 230,000,000 bits:
 
230,120,000 bits in flight between BOS and LAX. Divide that by 12,000 
bits/packet, and you get about 19,177 packets along that path. At most points 
along the path, you would expect about 10 different flows or more to be in 
flight, so there would be, optimally, about 1,918 1500 byte packets. Each flow 
would get 1 Gb/sec as its share.
 
If the connection is limited to < 1 Gb/sec at either endpoint, then there's no 
reason for any intermediate node to buffer that much of the flow.
 
This gives a reasonable understanding of where AQM should be ending up in terms 
of the cwnd needed to sustain max throughput and optimum end-to-end latency 
(which would be about 23 msec + 10 hops * 12000 bits / 10 Gb/sec = 23.012 msec for 
a packet to get from one end to the other).
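 
Spelled out, for anyone who wants to re-check the arithmetic:

    # The bandwidth-delay arithmetic above, in a few lines.
    LINK = 10e9           # bits/sec on every hop
    TOF = 0.023           # sec, one-way time of flight BOS-LAX over fiber
    HOPS = 10
    MTU_BITS = 1500 * 8   # 12,000 bits

    in_flight = LINK * TOF + HOPS * MTU_BITS   # 230,120,000 bits
    packets = in_flight / MTU_BITS             # ~19,177 packets
    per_flow = packets / 10                    # ~1,918 packets per flow
    latency = TOF + HOPS * MTU_BITS / LINK     # ~0.023012 sec
    print(round(in_flight), round(packets), round(per_flow), latency)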
 
 
 
On Saturday, May 18, 2019 6:57pm, "Jonathan Morton"  
said:



> > On 19 May, 2019, at 1:36 am, David P. Reed 
> wrote:
> >
> > Pardon, but cwnd should NEVER be larger than the number of forwarding hops
> between source and destination.
> > Kleinrock and students recently proved that the optimum cwnd for both
> throughput and minimized latency is achieved when there is one packet or less 
> in
> each outbound queue from source to destination (including cross traffic - 
> meaning
> other flows sharing the same outbound queue.
> 
> This argument holds only if time-of-flight *between* nodes is negligible. 
> Trivially, a geosynchronous satellite hop adds only two nodes but 
> approximately
> half a second to the one-way path delay, with potentially thousands of packets
> existing only as radio waves in the distance between, not in a queue.
> 
> - Jonathan Morton
> 
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] (no subject)

2019-05-18 Thread David P. Reed
more advanced algorithms like cake could take hold.
> >
> > On Wed, May 15, 2019 at 9:32 AM Sebastian Moeller 
> wrote:
> >>
> >> Hi All,
> >>
> >>
> >> I believe the following to be relevant to this discussion:
> https://apenwarr.ca/log/20180808
> >> Where he discusses a similar idea including implementation albeit aimed
> at lower bandwidth and sans the automatic bandwidth tracking.
> >>
> >>
> >>> On May 15, 2019, at 01:34, David P. Reed 
> wrote:
> >>>
> >>>
> >>> Ideally, it would need to be self-configuring, though... I.e.,
> something
> >>> like the IQRouter auto-measuring of the upstream bandwidth to tune
> the
> >>> shaper.
> >>
> >> @Jonathan from your experience how tricky is it to get reliable speedtest
> endpoints and how reliable are they in practice. And do you do any 
> sanitization,
> like take another measure immediate if the measured rate differs from the 
> last by
> more than XX% or something like that?
> >>
> >>
> >>>
> >>> Sure, seems like this is easy to code because there are exactly two
> ports to measure, they can even be labeled physically "up" and "down" to 
> indicate
> their function.
> >>
> >> IMHO the real challenge is automated measurements over the internet at
> Gbps speeds. It is not hard to get some test going (by e.g. tapping into 
> ookla's
> fast net of confederated measurement endpoints) but getting something where 
> the
> servers can reliably saturate 1Gbps+ seems somewhat trickier (last time I 
> looked
> one required a 1Gbps connection to the server to participate in speedtest.net,
> obviously not really suited for measuring Gbps speeds).
> >> In the EU there exists a mandate for national regulators to establish
> and/or endorse an anointed "official" speedtests, untended to keep ISP 
> marketing
> honest, that come with stricter guarantees (e.g. the official German 
> speedtest,
> breitbandmessung.de will only admit tests if the servers are having sufficient
> bandwidth reserves to actually saturate the link; the enduser is required to
> select the speed-tier giving them a strong hint about the required rates I
> believe).
> >> For my back-burner toy project "per-packet-overhead estimation on
> arbitrary link technology" I am currently facing the same problem, I need a
> traffic sink and source that can reliably saturate my link so I can measure
> maximum achievable goodput, so if anybody in the list has ideas, I am all
> ears/eyes.
> >>
> >>>
> >>> For reference, the GL.iNet routers are tiny and nicely packaged, and
> run
> >>> OpenWrt; they do have one with Gbit ports[0], priced around $70. I
> very
> >>> much doubt it can actually push a gigabit, though, but I haven't had
> a
> >>> chance to test it. However, losing the WiFi, and getting a slightly
> >>> beefier SoC in there will probably be doable without the price going
> >>> over $100, no?
> >>>
> >>> I assume the WiFi silicon is probably the most costly piece of
> intellectual property in the system. So yeah. Maybe with the right parts being
> available, one could aim at $50 or less, without sales channel markup. 
> (Raspberry
> Pi ARM64 boards don't have GigE, and I think that might be because the GigE
> interfaces are a bit pricey. However, the ARM64 SoC's available are typically
> Celeron-class multicore systems. I don't know why there aren't more ARM64 
> systems
> on a chip with dual GigE, but I suspect searching for them would turn up 
> some).
> >>
> >> The turris MOX (https://www.turris.cz/en/specification/) might be a
> decent startimg point as it comes with one Gbethernet port and both a SGMII 
> and a
> PCIe signals routed on a connector, they also have a 4 and an 8 port switch
> module, but for our purposes it might be possible to just create a small 
> single Gb
> ethernet port board to get started.
> >>
> >> Best Regards
> >> Sebastian
> >>
> >>>
> >>> -Toke
> >>>
> >>> [0] https://www.gl-inet.com/products/gl-ar750s/
> >>> ___
> >>> Cerowrt-devel mailing list
> >>> Cerowrt-devel@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> >>
> >> ___
> >> Bloat mailing list
> >> bl...@lists.bufferbloat.net
> >> https://lists.bufferbloat.net/listinfo/bloat
> >
> >
> >
> > --
> >
> > Dave Täht
> > CTO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-831-205-9740
> > ___
> > Bloat mailing list
> > bl...@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
> 
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Huawei banned by US gov...

2019-05-16 Thread David P. Reed

Thanks for the song share. It's timely. I've been recommending this to all my 
Democrat friends: https://youtu.be/bLqKXrlD1TU and https://youtu.be/GqNxne97ubc 
(the two song versions together). (The 
Republicans are too far gone to bother). I think you had to be there, though. 
They mostly don't get the point. The song describes the Democratic Party 
leading us into the Big Muddy back then, and now they think that party gave us 
civil rights and progress, and saved us from disaster. It didn't, it wasn't the 
sergeant. We did, by being the sergeant ourselves, recognizing the Parties were 
both part of the problem.
 
By the way, can I see the letters to the editor? Did they get published?
 
 
On Thursday, May 16, 2019 10:44am, "Dave Taht"  said:



> One thing I've been trying to do (again) is more outreach outside our
> direct circles, on various subjects, in various ways. Up until
> recently I was pretty happy with the overall progress of the fq_codel
> deployment, and it was things like this not bufferbloat-related that
> were getting me down the most.
> 
> Jim, esr, and I wrote letters to the editor on this subject of the
> washington post, guardian and the economist, recently. This is an
> ancient technique, but so long as we're persistent about having a (or
> multiple) letters like that, at a low level of effort, perhaps that is
> one "new" way to "get through". We need to keep trying various
> avenues. The rules, though, of letters to the editor is that they have
> to be unique, and well, give each one a week or three, then try
> another pub, I figure is unique enough. After a while, perhaps an open
> letter. I have no idea... but we have to try! More of us, have to try.
> If someone(s) from here can merely get something on some subject they
> care about into their local newspaper, it's a plus.
> 
> I've had quite a lot of solace in playing a ton of rock and roll of
> late, notably an updated version of "working class hero" that I should
> sit down and record. Playing the guitar is just about the only way I
> feel even halfway connected to anything of late. "It gpls me"
> recently got the most hits of any song I've ever posted.
> 
> Buying a press release a we did before on the fcc fight, did work, but
> it was expensive, and never crossed over into the business press.
> Trying to create an environment when something suddenly becomes
> "obvious" to a lot of people, requires a supersaturated solution. For
> all I know the world (I certainly am) is at its breaking point
> regarding all the security (and bufferbloat!) problems in the
> computing world and ready to accept something new instead of business
> as usual.
> 
> Recently I had one of the weirder things happen in a while. For about
> a month, I've been using in various public and private conversations
> an analogy "about me being a scared and scarred survivor of a poetry
> slam between vogons and bokononists", and realizing how few had read
> Vonnegut's "cat's cradle" to understand what I meant, fully.
> Yesterday, or the day before, slashdot had a whole bunch of people
> refer to that book and I felt a bit less mis-understood. Co-incidence?
> no idea
> 
> One of the things that cheers me up is that book was published in the
> early 60s and civilization survived, after, admittedly, getting neck
> deep in the big muddy.
> So anyway, here's that song, that has a fascinating history:
> 
> https://www.youtube.com/watch?v=uXnJVkEX8O4
> 
> and to me applies to a lot of folk, currently in power. Perhaps the
> times are a changin, too.
> 
> On Thu, May 16, 2019 at 4:12 PM David P. Reed  wrote:
> >
> > In my personal view, the lack of any evidence that Huawei has any more
> government-controlled or classified compartmented Top Secret offensive 
> Cyberwar
> exploits than Cisco, Qualcomm, Broadcom, Mellanox, F5, NSO group, etc. is 
> quite a
> strong indication that there's no relevant "there" there.
> >
> >
> >
> > Given the debunking of both the Supermicro and Huawei fraudulent claims 
> > (made
> by high level "government sources" in the intelligence community), this entire
> thing looks to me like an attempt to use a fake National Emergency to achieve
> Trade War goals desired by companies close to the US Government agencies 
> (esp. now
> that the Secretary of Defense is a recent Boeing CEO who profits directly from
> such imaginary threats).
> >
> >
> >
> > Now, I think that this "open up the sources" answer is a really good part of
> a 

Re: [Cerowrt-devel] Huawei banned by US gov...

2019-05-16 Thread David P. Reed

In my personal view, the lack of any evidence that Huawei has any more 
government-controlled or classified compartmented Top Secret offensive Cyberwar 
exploits than Cisco, Qualcomm, Broadcom, Mellanox, F5, NSO group, etc. is quite 
a strong indication that there's no relevant "there" there.
 
Given the debunking of both the Supermicro and Huawei fraudulent claims (made 
by high level "government sources" in the intelligence community), this entire 
thing looks to me like an attempt to use a fake National Emergency to achieve 
Trade War goals desired by companies close to the US Government agencies (esp. 
now that the Secretary of Defense is a recent Boeing CEO who profits directly 
from such imaginary threats).
 
Now, I think that this "open up the sources" answer is a really good part of a 
solution. The other parts are having resiliency built in to our systems. The 
Internet is full of resiliency today. A balkanized and "sort of air-gapped" US 
transport network infrastructure is far more fragile and subject to both random 
failure and targeted disruption.
 
But who is asking me?  Fear is being stoked.
 
 
On Thursday, May 16, 2019 5:58am, "Dave Taht"  said:



> And we labor on...
> 
> https://tech.slashdot.org/story/19/05/15/2136242/trump-signs-executive-order-barring-us-companies-from-using-huawei-gear
> 
> To me, the only long term way to even start to get out of this
> nightmare (as we cannot trust anyone else's gear either, and we have
> other reminders of corruption like the volkswagon scandal) is to
> mandate the release of source code, with reproducible builds[1], for
> just about everything connected to the internet or used in safety
> critical applications, like cars. Even that's not good enough, but it
> would be a start. Even back when we took on the FCC on this issue, (
> http://www.taht.net/~d/fcc_saner_software_practices.pdf ) I never
> imagined it would get this bad.
> 
> 'round here we did produce one really trustable router in the cerowrt
> project, which was 100% open source top to bottom, which serves as an
> existence proof - and certainly any piece of gear reflashed with
> openwrt is vastly better and more secure than what we get from the
> manufacturer - but even then, I always worried that my build
> infrastructure for cerowrt was or could be compromised and took as
> many steps as I could to make sure it wasn't - cross checking builds,
> attacking it with various attack tools, etc.
> 
> Friends don't let friends run factory firmware, we used to say. Being
> able to build from sources yourself is a huge improvement in potential
> trustability - (but even then the famous paper on reflections on
> trusting trust applies). And so far, neither the open source or
> reproducable builds concepts have entered the public debate.
> 
> Every piece of hardware nowadays is rife with binary blobs and there
> are all sorts of insecurities in all the core cpus and co-processors
> designed today.
> 
> And it isn't of course, just security in huawei's case - intel just
> exited the business - they are way ahead of the US firms in general in
> so many areas.
> 
> I have no idea where networked computing can go anymore, particularly
> in the light of the latest MDS vulns revealed over the past few days (
> https://lwn.net/Articles/788522/ ). I long ago turned off
> hyperthreading on everything I cared about, moved my most critical
> resources out of the cloud, but I doubt others can do that. I know
> people that run a vm inside a vm. I keep hoping someone will invest
> something major into the mill computing's cpu architecture - which
> does no speculation and has some really robust memory and stack
> smashing protection features (
> http://millcomputing.com/wiki/Protection ), and certainly there's hope
> that risc-v chips could be built with a higher layer of trust than any
> arm or intel cpu today (but needs substancial investment into open
> on-chip peripherals)
> 
> This really isn't a bloat list thing, but the slashdot discussion is
> toxic. Is there a mailing list where these sorts of issues can be
> rationally discussed?
> 
> Maybe if intel just released all their 5G IP into the public domain?
> 
> /me goes back to bed
> 
> [1] https://en.wikipedia.org/wiki/Reproducible_builds
> 
> --
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-205-9740
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] fq_codel is SEVEN years old today...

2019-05-14 Thread David P. Reed

 

> Ideally, it would need to be self-configuring, though... I.e., something
> like the IQRouter auto-measuring of the upstream bandwidth to tune the
> shaper.
 
Sure, seems like this is easy to code because there are exactly two ports to 
measure, they can even be labeled physically "up" and "down" to indicate their 
function.
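 
The measuring half is a few lines, too (a sketch; the URL, margin, and duration 
are placeholders, and a production version would want several samples plus a 
sanity check against the previous measurement):

    # Sketch: time a bulk fetch to estimate the downstream rate, then
    # derive a shaper setting at ~90% of it.
    import time
    import urllib.request

    def measure_bps(url, seconds=10):
        start = time.monotonic()
        got = 0
        with urllib.request.urlopen(url) as resp:
            while time.monotonic() - start < seconds:
                chunk = resp.read(65536)
                if not chunk:
                    break
                got += len(chunk)
        return got * 8 / (time.monotonic() - start)

    rate = measure_bps("http://speedtest.example.net/big")  # placeholder URL
    shaper = 0.9 * rate    # headroom so the queue stays in our box
    print(int(shaper), "bits/sec for the shaper")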

> For reference, the GL.iNet routers are tiny and nicely packaged, and run
> OpenWrt; they do have one with Gbit ports[0], priced around $70. I very
> much doubt it can actually push a gigabit, though, but I haven't had a
> chance to test it. However, losing the WiFi, and getting a slightly
> beefier SoC in there will probably be doable without the price going
> over $100, no?
 
I assume the WiFi silicon is probably the most costly piece of intellectual 
property in the system. So yeah. Maybe with the right parts being available, 
one could aim at $50 or less, without sales channel markup. (Raspberry Pi ARM64 
boards don't have GigE, and I think that might be because the GigE interfaces 
are a bit pricey. However, the ARM64 SoC's available are typically 
Celeron-class multicore systems. I don't know why there aren't more ARM64 
systems on a chip with dual GigE, but I suspect searching for them would turn 
up some).

> -Toke
> 
> [0] https://www.gl-inet.com/products/gl-ar750s/

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] fq_codel is SEVEN years old today...

2019-05-14 Thread David P. Reed

I wonder if an interesting project to design and pitch for CrowdSupply to fund 
would be a little board that packages sch_cake or something in the minimal 
hardware package that could sit between a 1 GigE symmetric port and either an 
asymmetric GigE or a symmetric 1 GigE connection into a 10 GigE switch.
The key point is that it needs to support wire-rate forwarding of small packets 
at Gigabit throughput. Ideally, it also supports a dnsmasq NAT and 
wireguard optionally.
 
I know a Celeron with 2 GB of RAM can easily do it (because that is what I 
use). We know (well that's what you guys tell me) that the dinky MIPS 
processors are underpowered to handle sch_cake at such packet rates. The 
Linksys and Netgear and TP-link guys seem to see no market at all for any such 
thing. But I see it as a useful jellybean device if it could be cheap and 
simple.
 
Could maybe design, produce, and sell this for $100? No one else seems to want 
to make such a thing. I could just barely design and implement the board and 
get it made, but to be honest I'm better at spec'ing and prototyping than 
making manufacturable hardware designs. I suspect I could find someone to do 
the PCB design, layout and parts selection as a project.
 
The idea for this hardware "product" is to decouple this buffer management from 
the WiFi compatibility and driver mess, and make it easy for people, maybe to 
demonstrate that it could be a great product. Forget designing the packaging, 
negotiating a sales channel, etc. Just do what is needed to make a few thousand 
for the CrowdSupply market.
 
Thoughts?
 
-Original Message-
From: "David P. Reed" 
Sent: Tuesday, May 14, 2019 2:38pm
To: "Valdis Klētnieks" 
Cc: "Rich Brown" , "cerowrt-devel" 
, "bloat" 
Subject: Re: [Cerowrt-devel] fq_codel is SEVEN years old today...



Well, of all the devices in my house (maybe 100), only the router attached to 
the cable modem (which is a 2x GigE Intel Linux board based on Fedora 29 server 
with sch_cake configured) is running fq_codel. And setting that up was a labor 
of love. But it works a charm for my asymmetric Gigabit cable service.
 
My home's backbone is 10 GigE fiber, so I suppose fq_codel would be helpful for 
devices that run on 1 GigE subnets like my 2 802.11ac access points when 
talking to my NAS's.
However, the 802.11ac access point high speed functionality doesn't seem to be 
supportable by LEDE. So what can I do? 
 
I suppose I could stick some little custom Intel Linux 2x GigE devices between 
access points and the 10 GigE backbone, and put fq_codel in there.
 
My point is, to get the primary benefit of bufferbloat reduction, one has to 
stick little Linux boxes everywhere, because fq_codel is not supported except 
via DIY hacking.
 
And indeed, 10 GigE->1 GigE buffering does affect storage access latency in bad 
ways.
 
We see the same problem in datacenter networks that have excessive buffering - 
a famous switch company backed by Andy Bechtolsheim is really problematic 
because they claim building up huge buffers is a "feature" not a bug.
-Original Message-
From: "Valdis Klētnieks" 
Sent: Tuesday, May 14, 2019 1:57pm
To: "Rich Brown" 
Cc: "cerowrt-devel" , "bloat" 

Subject: Re: [Cerowrt-devel] fq_codel is SEVEN years old today...



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
On Tue, 14 May 2019 08:16:06 -0400, Rich Brown said:

> Let's all pat ourselves on the back for this good work!

Do we have an estimate of what percent of connected devices
are actually using fq_codel or other modern anti-bloat methods?
I'm reasonably sure my TV, my PS3, and my PS4 are still
behind the curve.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] fq_codel is SEVEN years old today...

2019-05-14 Thread David P. Reed

Well, of all the devices in my house (maybe 100), only the router attached to 
the cable modem (which is a 2x GigE Intel Linux board based on Fedora 29 server 
with sch_cake configured) is running fq_codel. And setting that up was a labor 
of love. But it works a charm for my asymmetric Gigabit cable service.
 
My home's backbone is 10 GigE fiber, so I suppose fq_codel would be helpful for 
devices that run on 1 GigE subnets like my 2 802.11ac access points when 
talking to my NAS's.
However, the 802.11ac access point high speed functionality doesn't seem to be 
supportable by LEDE. So what can I do? 
 
I suppose I could stick some little custom Intel Linux 2x GigE devices between 
access points and the 10 GigE backbone, and put fq_codel in there.
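On such a box the fq_codel part is nearly free; roughly, assuming a modern kernel and leaving the bridging setup aside:

# make fq_codel the default qdisc for every interface
sysctl -w net.core.default_qdisc=fq_codel
# or pin it on the 1 GigE side, where the 10G -> 1G standing queue forms
tc qdisc replace dev eth1 root fq_codel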
 
My point is, to get the primary benefit of bufferbloat reduction, one has to 
stick little Linux boxes everywhere, because fq_codel is not supported except 
via DIY hacking.
 
And indeed, 10 GigE->1 GigE buffering does affect storage access latency in bad 
ways.
 
We see the same problem in datacenter networks that have excessive buffering - 
a famous switch company backed by Andy Bechtolsheim is really problematic 
because they claim building up huge buffers is a "feature" not a bug.
-Original Message-
From: "Valdis Klētnieks" 
Sent: Tuesday, May 14, 2019 1:57pm
To: "Rich Brown" 
Cc: "cerowrt-devel" , "bloat" 

Subject: Re: [Cerowrt-devel] fq_codel is SEVEN years old today...



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
On Tue, 14 May 2019 08:16:06 -0400, Rich Brown said:

> Let's all pat ourselves on the back for this good work!

Do we have an estimate of what percent of connected devices
are actually using fq_codel or other modern anti-bloat methods?
I'm reasonably sure my TV, my PS3, and my PS4 are still
behind the curve.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Spectrum Auctions Are Killing Competition AndFailing Rural Access

2019-04-08 Thread David P. Reed
I've spent almost 25 years trying to address this problem, technologically.

First with UWB, then with technologies that scale capacity with the number of 
users in a band, then with the FCC Spectrum Policy Task Force, then on the FCC 
Technolocical Advisory Committee.

Each time, most radio engineers, having paychecks from monopoly wireless 
businesses, joined with their bosses to fight us, with the engineers spouting 
bogus technical arguments in support of technology ideation that dates back to 
Crystal radio receivers. 

Folks like me, Paul Baran, Dewayne Hendricks, Tim Shepard, TAPRS, have 
demonstrated a diverse set of technical ways forward.

But the Spectrum Auctions idea, due to a completely flawed understanding of 
physics and information theory by an economist/lawyer named Coase, is beloved 
by the monopolists who control government-regulated access to airwaves 
worldwide, and it wins every time.

We haven't had any support worth a damn. We're old now. There aren't any 
disciples pushing this. Google, Intel, and Microsoft actively support auctions 
and dynamic property rights to spectrum at the same time they support network 
neutrality.

When I read pieces like this, I think, you've got no program, no plans, you 
haven't designed an alternative approach. Just whining is all you can do.

If you believe that the current approach is wrong, put some goddamn skin in the 
game!

Learn about why this continues, and push for a new way.

Walk the talk.

I'm now retired. And tired. Today's radio is being swallowed by a hype frenzy 
called 5G. It's a bunch of voodoo designed to suggest that governments around 
the world should extend cellular monopolies' monopoly on the airwaves 
themselves.


-Original Message-
From: "Dave Taht" 
Sent: Mon, Apr 8, 2019 at 1:25 pm
To: "cerowrt-devel" , "Make-Wifi-fast" 

Cc: "cerowrt-devel" , "Make-Wifi-fast" 

Subject: [Cerowrt-devel] Spectrum Auctions Are Killing Competition AndFailing 
Rural Access

I worked with steve a while back on the mesh potato

https://manypossibilities.net/2019/04/spectrum-auctions-are-killing-competition-and-failing-rural-access/

-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] plenty of huawei in the news today

2019-03-28 Thread David P. Reed

Yes, yes, yes, yes!
 
Defense in depth is also good. We long ago learned that you don't design any 
large scale system without a lot of attention avoiding single-point 
catastrophes.  One really major example is to achieve content protection with 
end-to-end security and authentication based on solid key distribution systems. 
Then "APT" in the switching gear and routing masquerading to send traffic to a 
MITM can't succeed. Doesn't matter what vendor you buy from!
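WireGuard is one concrete shape of that discipline (a sketch, not a claim that it is the only one): every datagram is authenticated against a peer's public key, so gear in the middle that redirects traffic gets packets it can neither read nor forge.

umask 077
wg genkey | tee privatekey | wg pubkey > publickey
# peers are pinned by public key in the config; no CA, no middlebox to trust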
Another defense in depth approach for telecommunications is decentralized and 
redundant routing, rather than centralized static routing.  Then the system 
components can route-around-damage.
 
And this doesn't depend on the Nationality of the designers, manufacturers, 
etc. At least for any system that has lots of components assembled by the 
operator, as telecom does.
 
The whole idea is nonsense that in today's world "National Allegiance" is the 
core frame for thinking about systems reliability and security. I don't think 
anyone in the world should trust companies infiltrated by NSA (Cisco) or GCHQ 
(BT) or companies who build infrastructure for governments (Google for US DoD 
and China, Amazon for vast swaths of USG) fully.
 
That's not because these companies or governments are "Russian" or "Chinese" or 
"American". They aren't. They have power within and power over, but they don't 
answer to us humans. They answer to themselves or their "owners".
 
Just don't trust them.  You can buy their stuff and use it because it is pretty 
darn functional, but don't put your life entirely in their hands, even if they 
have similar facial features to you.
 
-Original Message-
From: "Jim Gettys" 
Sent: Thursday, March 28, 2019 2:44pm
To: "David P. Reed" 
Cc: "Dave Taht" , "cerowrt-devel" 
, "bloat" 
Subject: Re: [Bloat] [Cerowrt-devel] plenty of huawei in the news today





It's worth looking at the UK government oversight report:
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/790270/HCSEC_OversightBoardReport-2019.pdf
Not clear that Huawei is worse than other 5g vendors, if our experience with 
other embedded system vendors is any clue.  Certainly I was unimpressed by 
ALU's software engineering practices when I was at Bell Labs.  The ownership 
structure of Huawei is "interesting", to say the least.
My solution is more radical: all the vendors should be held to much higher 
standards, including reproducible builds (something that the UK government has 
been trying to get them to do for years, and failed).
- Jim


On Thu, Mar 28, 2019 at 2:32 PM David P. Reed <dpr...@deepplum.com> wrote:
Look, the existence of security flaws in software isn't news. Real news would 
be if there were systems discovered to have no flaws at all...
 
So what does this article really say? 
 
It says that Britain and the US intelligence officials are now going after 
Huawei in a new way, because the idea that Huawei just steals intellectual 
property no longer flies - they actually have great technology that the 
non-Chinese never had.
 
And there is a massive Trade War currently aimed between Trump and China.
 
And recently, the UK, including GCHQ, said it was NOT going to stop plans to 
deploy Huawei telecom gear, because it saw no particular flaws worth worrying 
about if UK operators wanted to use Huawei "5G" gear because it was better and 
cheaper.
 
You can see, of course, that the US diplomatic efforts under Pompeo might go 
into high gear to get some kind of supportive public response from somewhere in 
the UK, even if the UK government itself wasn't going to support the US.
 
Hence, the PR guys figured out how to get a story into the NYTimes and other 
papers that appears to contradict the UK decision. 
 
This is how the game is played.
 
This is how Trade Wars are conducted (we haven't seen them for decades, so we 
aren't used to them, but we had the big fearmongering about Japan back in the 
'80's that was similar, and the Japanese "lead" with its "Fifth Generation 
Computing" effort required major tax dollars to protect the US from becoming a 
third world country)
 
Humans don't think. They react emotionally, and tribally.
 
-Original Message-
From: "Dave Taht" <[ dave.t...@gmail.com ]( mailto:dave.t...@gmail.com )>
Sent: Thursday, March 28, 2019 2:16pm
To: "David P. Reed" <[ dpr...@deepplum.com ]( mailto:dpr...@deepplum.com )>
Cc: "cerowrt-devel" <[ cerowrt-devel@lists.bufferbloat.net ]( 
mailto:cerowrt-de

Re: [Cerowrt-devel] plenty of huawei in the news today

2019-03-28 Thread David P. Reed

Look, the existence of security flaws in software isn't news. Real news would 
be if there were systems discovered to have no flaws at all...
 
So what does this article really say? 
 
It says that Britain and the US intelligence officials are now going after 
Huawei in a new way, because the idea that Huawei just steals intellectual 
property no longer flies - they actually have great technology that the 
non-Chinese never had.
 
And there is a massive Trade War currently aimed between Trump and China.
 
And recently, the UK, including GCHQ, said it was NOT going to stop plans to 
deploy Huawei telecom gear, because it saw no particular flaws worth worrying 
about if UK operators wanted to use Huawei "5G" gear because it was better and 
cheaper.
 
You can see, of course, that the US diplomatic efforts under Pompeo might go 
into high gear to get some kind of supportive public response from somewhere in 
the UK, even if the UK government itself wasn't going to support the US.
 
Hence, the PR guys figured out how to get a story into the NYTimes and other 
papers that appears to contradict the UK decision. 
 
This is how the game is played.
 
This is how Trade Wars are conducted (we haven't seen them for decades, so we 
aren't used to them, but we had the big fearmongering about Japan back in the 
'80's that was similar, and the Japanese "lead" with its "Fifth Generation 
Computing" effort required major tax dollars to protect the US from becoming a 
third world country)
 
Humans don't think. They react emotionally, and tribally.
 
-Original Message-
From: "Dave Taht" 
Sent: Thursday, March 28, 2019 2:16pm
To: "David P. Reed" 
Cc: "cerowrt-devel" , "bloat" 

Subject: Re: [Cerowrt-devel] plenty of huawei in the news today



Well, it's a widely placed story in every newspaper.

On Thu, Mar 28, 2019 at 11:16 AM David P. Reed  wrote:
>
> The NYTimes has become a mouthpiece for those who want to see China as the 
> new evil empire. Recent pieces by David Sanger have hyped the idea that the 
> US has a "5G Gap" and that China (Huawei) will threaten to conquer the world 
> with 5G superiority, so we should be vigilantly opposing Huawei.
>
>
>
> Worth noting that Cisco, ALU, ... are not any better than Huawei appears to 
> be in these matters. But they aren't getting headlines in the NYTimes.
>
>
>
> Remember, Judith Miller wrote NYTimes headlines based on "leaks from senior 
> intelligence officials" that Saddam Hussein was on the verge of deploying 
> dirty bombs, nuclear missiles and biowarfare agents.
>
>
>
> Recently, Bloomberg got scammed by "leaks from senior intelligence officials" 
> that Supermicro (Chinese) had built and sold server motherboards that had 
> special chips soldered into them that didn't belong there [the stories were 
> completely debunked by the companies supposedly targeted].
>
>
>
> Personally, I think the cynical fearmongering here does the legitimate 
> security engineering community no good at all. It's just more "wag the dog" 
> psyops, designed to let all the pseudo-security-experts take over the story 
> and get their 15 minutes in the headlines.
>
>
>
> The Qualcomms and Ciscos of the US are happy to get the USG to help scare 
> countries off of Chinese brandnames. But the open secret is that Qualcomm and 
> Cisco's systems are designed and made in China, too. There's no US 
> manufacturing of switches, and precious few entirely American hardware design 
> centers, either.
>
>
>
> So be a little skeptical. Check the story behind the story. Don't believe 
> stories based on "intelligence agency" leaks.
>
>
>
> -Original Message-
> From: "Dave Taht" 
> Sent: Thursday, March 28, 2019 1:55pm
> To: "cerowrt-devel" , "bloat" 
> 
> Subject: [Cerowrt-devel] plenty of huawei in the news today
>
> https://www.nytimes.com/2019/03/28/technology/huawei-security-british-report.html
>
> --
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-205-9740
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] plenty of huawei in the news today

2019-03-28 Thread David P. Reed

The NYTimes has become a mouthpiece for those who want to see China as the new 
evil empire. Recent pieces by David Sanger have hyped the idea that the US has 
a "5G Gap" and that China (Huawei) will threaten to conquer the world with 5G 
superiority, so we should be vigilantly opposing Huawei.
 
Worth noting that Cisco, ALU, ... are not any better than Huawei appears to be 
in these matters. But they aren't getting headlines in the NYTimes.
 
Remember, Judith Miller wrote NYTimes headlines based on "leaks from senior 
intelligence officials" that Saddam Hussein was on the verge of deploying dirty 
bombs, nuclear missiles and biowarfare agents.
 
Recently, Bloomberg got scammed by "leaks from senior intelligence officials" 
that Supermicro (Chinese) had built and sold server motherboards that had 
special chips soldered into them that didn't belong there [the stories were 
completely debunked by the companies supposedly targeted].
 
Personally, I think the cynical fearmongering here does the legitimate security 
engineering community no good at all. It's just more "wag the dog" psyops, 
designed to let all the pseudo-security-experts take over the story and get 
their 15 minutes in the headlines.
 
The Qualcomms and Ciscos of the US are happy to get the USG to help scare 
countries off of Chinese brandnames. But the open secret is that Qualcomm and 
Cisco's systems are designed and made in China, too. There's no US 
manufacturing of switches, and precious few entirely American hardware design 
centers, either.
 
So be a little skeptical. Check the story behind the story. Don't believe 
stories based on "intelligence agency" leaks.
 
-Original Message-
From: "Dave Taht" 
Sent: Thursday, March 28, 2019 1:55pm
To: "cerowrt-devel" , "bloat" 

Subject: [Cerowrt-devel] plenty of huawei in the news today



https://www.nytimes.com/2019/03/28/technology/huawei-security-british-report.html

-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Will transport innovation collapse the Internet?

2019-03-23 Thread David P. Reed

Dave -
 
I tend to agree with Christian's thesis, despite the flaws in his "history".
 
HTTP/3 does NOT specify a congestion control algorithm, and in fact seems to 
encourage experimentation with wacky concepts.  That's a terrible approach to 
standardization.
 
Roskind is not my kind of a hero.  As Bob Dylan said, "to live outside the law 
you must be honest". But Roskind's approach has not been honest, and the race 
to deploy *at huge scale* that Google pursues has not been backed up by any 
serious experimentation to understand its effects. In fact, that approach, 
non-scientific in the extreme, has become typical of Google's PR-driven and 
deceptive actions. (See the extremely flawed claims about its "AI" 
flu-prediction experiment being "better than any public health department" and 
its flacking of a Jeff Dean Nature publication that was reframed by Google PR 
and Jeff himself as *proving that upon admission to a hospital, a patient's 
life or death outcome was predicted far more accurately than was done by a 
control group of doctors*.  NEITHER of those are valid interpretations of the 
actual experimental results. The Flu test has never been reproduced since, 
suggesting it was (un?)intentionally cherry-picked to show off)
 
Now whether Roskind is typical of Google's Aim, Fire, Ready approach to science 
and engineering or not, it seems you are calling him a "hero" because of that 
cowboy approach.
 
Even though I find the IETF of today full of non-science and corrupt influence, 
and even granting the idea that Roskind is "better" than that, I can't stomach 
the idea that QUIC or HTTP/3 are "good" merely because they are different and 
the product of a renegade.
 
 
-Original Message-
From: "Dave Taht" 
Sent: Saturday, March 23, 2019 2:15am
To: "Matt Taggart" 
Cc: "cerowrt-devel" 
Subject: Re: [Cerowrt-devel] Will transport innovation collapse the Internet?



On Sat, Mar 23, 2019 at 1:31 AM Matt Taggart  wrote:
>
> This is from Jan 12th but I hadn't seen it yet.
>
> https://huitema.wordpress.com/2019/01/12/will-transport-innovation-collapse-the-internet/

I am awaiting moderation on this comment:

While I agree with your thesis, about the problem!

I am very bothered by your descriptions of the timelines and who were
involved. Other processes are required long before something hits the
ietf and recent attempts to file the serial numbers off in favor of
corporate “innovation” rather bug me, so:

1) “More recent algorithms were developed in the IETF AQM Working
Group to address the “buffer bloat” problem, such as FQ-CODEL or PIE.
”

fq_codel sprang from an outside the ietf effort (bufferbloat.net)
founded by myself and jim gettys in 2011. In may of 2012 (after many
other innovations in linux such as BQL (which made FQ and AQM
technology possible), multiple other fixes in the stack, the
publication of Van Jacobson’s and Kathie nichols’s paper on codel
(which we turned from ns2 to linux in a week, and arrived in linux
mainline the week later)… and two weeks later .- fq_codel incorporated
the best of all our research. It took 6 hours to write, and while
there have been many minor tweaks along the way, it then took 6 years
to standardize in the IETF while achieving a near total deployment in
linux today, and is now in freebsd.

The ietf AQM working group was founded only because of and after VJ
and Kathie’s breakthrough AQM design. It was a hard fight to even get
fair queuing as part of the charter.

2) QUIC’s real history started with a renegade engineer (Jim Roskind,
father of QUIC) that gathered a small team inside google to re-examine
tcp in the context of web traffic around 2011 – 3 years before you
claim it happened. See the commit logs. They re-evaluated 30 years of
discarded tcp ideas, and retried them, much in the manner of how
edison would try 3000 ideas to get one. Month in month out, they built
release after release, wrote the code, deployed it, made changes to
the code and protocol on a monthly basis. They faced enormous barriers
by folk that thought we should just fix tcp, or laughed at each new
idea and said that couldn’t (or shouldn’t) be done.

They just went ahead and did it.

Every time they made a “breaking change” in the protocol they bumped
the version number. Sometimes it was crypto, sometimes frames,
sometimes congestion control, etc.

It went through *20* *deployed* revisions before it made the ietf.

Looking at the wire spec
https://docs.google.com/document/d/1WJvyZflAO2pq77yOLbp9NsGjC1CHetAXV8I0fQe-B_U/edit

You can see the long list of recent versions:

Q009: added priority as the first 4 bytes on spdy streams.
Q010: renumber the various frame types
Q011: shrunk the fnv128 hash on NULL encrypted packets from 16 bytes
to 12 bytes.
Q012: optimize the ack frame format to reduce the size and better
handle ranges of nacks, which should make truncated acks virtually
impossible. Also adding an explicit flag for truncated acks and moving
the ack outside of the connection close frame.
Q013: 

Re: [Cerowrt-devel] friends don't let friends run factory firmware

2019-02-05 Thread David P. Reed

Well, pots and kettles - I bet there are, amongst the huge number of 
LEDE/OpenWrt packages, some very useful DDoS amplification vectors. So it's 
really not a strong proof of the claim that "factory firmware" is bad.

My own home border router I built myself, and yet it acquires new problems with 
new updates (as well as having some fixed).

And, one thing that scares the bejeezus out of me is the passion for stuff like 
code allowing injection of binary code into the kernel (eBPF) being thrown into 
the Linux Kernel for "performance reasons". Hacking the clever network 
developer has never been easier - just throw them some complicated and subtle 
code that runs in the kernel that "everybody thinks is the coolest new thing". 
Here's the description of eBPF from the documentation I use: "The extended BPF 
(eBPF) variant has become a universal in-kernel virtual machine, that has hooks 
all over the kernel. " Lovely. So userspace can make the kernel do completely 
untestable things.
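For anyone who wants to see the surface being described, any recent kernel will enumerate it (assuming bpftool is installed):

bpftool prog show   # every eBPF program currently loaded, and where it hooks
bpftool map show    # the maps userspace shares with those programs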
 
There are lots of great things about creating the freedom to experiment, modify 
your own devices' firmware, etc. I think the existence of that community makes 
the world generally safer (more eyeballs, more innovation, etc.).
 
But this idea that everybody benefits by running some non-standard firmware 
they choose for themselves?  That's bizarre to me, unjustifiable by any very 
good argument.
 
UBNT here seems to be doing the right thing - developing an update and 
distributing it to all its customers.

-Original Message-
From: "Dave Taht" 
Sent: Monday, February 4, 2019 3:41pm
To: "cerowrt-devel" 
Subject: [Cerowrt-devel] friends don't let friends run factory firmware

https://www.zdnet.com/article/over-485000-ubiquiti-devices-vulnerable-to-new-attack/

-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] https://tools.ietf.org/html/draft-ietf-tsvwg-le-phb-06 is in last call

2019-02-03 Thread David P. Reed
Well, you all know that I think of diffserv as an abortion. It's based on 
thinking that assumes central, hierarchical administrative agreements among what 
should be autonomous systems.

Yeah, at layer 2 for packets that stay within an administratively uniform 
domain, diffserv can be useful.

But even "Paris Metro" scheduling (2 classes, priced dynamically) is highly 
unstable.

And the nature of networks is that they MUST operate almost all the time well 
below their capacity. (This is true of packet nets, railroads, highways, ...). 
It's called the "Mother's Day problem". When Mother's Day happens, you should 
have enough capacity to absorb vast demand. Therefore what you do all the other 
days doesn't matter. And on Mother's Day, if you have congestion, there's no 
way in hell that anybody is happy.

This fairy story about traffic giving way to higher priority traffic being a 
normal mode of operation is just that. A made up story, largely used by folks 
who want to do selective pricing based on what customers are willing to pay, 
not on value received. (that's a business story, thouhg - like the Xerox 
machines that were supposed to charge more for billion dollar real estate 
contract copies and less for notices to put up near coffee machines - Xerox 
wanted a share of the real-estate business cash flow).

Which doesn't mean that there might not be better ways to do large scale 
traffic engineering balancing of flows - but that's not an end-to-end problem. 
It's a network management problem that involves changing routing tables.

-Original Message-
From: "Dave Taht" 
Sent: Sunday, February 3, 2019 1:39pm
To: "cerowrt-devel" , "Cake List" 

Subject: [Cerowrt-devel] https://tools.ietf.org/html/draft-ietf-tsvwg-le-phb-06 
is in last call

And seems likely to be adopted.

There seems to be an urge to make this codepoint starvable, which
since I subscribe to nagle's dictum "every application has a right to
one packet in the network" - doesn't work for me - but the draft is
vaguely worded enough to just start dumping this codepoint into the
background queue of both sqm and cake and worry about it in a decade
or three.

it's 01 which I guess is:

diff --git a/sch_cake.c b/sch_cake.c
index 3a26db0..67263b3 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -343,7 +343,7 @@ static const u8 diffserv4[] = {
 };

 static const u8 diffserv3[] = {
-   0, 0, 0, 0, 2, 0, 0, 0,
+   0, 1, 0, 0, 2, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,

(or is that reversed? my big endian/little endian chops suck, and
nobody modified the gen_cake_const tool to match what cake expects
now)
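One way to sanity-check where the codepoint lands once a build carries this change (a sketch; the server address is a placeholder, and iperf3's --tos option takes the DSCP shifted left two bits, so DSCP 1 = 0x04):

iperf3 -c 192.0.2.1 --tos 0x04   # generate LE-marked test traffic
tc -s qdisc show dev eth0        # the Bulk tin counters, not Best Effort, should move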

on my off days I kind of wish the diffserv lookup we do in cake had
managed to make it into the linux mqprio/prio stuff by default.

-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


[Cerowrt-devel] Hmmm... Worth reading re router security

2018-12-16 Thread David P. Reed
A look at home routers, and a surprising bug in Linux/MIPS - 
https://cyber-itl.org/2018/12/07/a-look-at-home-routers-and-linux-mips.html



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] dlte

2018-12-09 Thread David P. Reed
Conquer the spectrum licensing and device certification nexus. Or else your 
cell will pwn yr physical world.

LTE over UNII band is not even as good as CSMA at sharing and cooperation, and 
without coordination at installation planning time, it doesn't work well.

802.11ax has the same fragility in Multi Unit Dwellings due to requiring a 
radio propagation plan and coordination so neighbors don't completely jam 
neighbors.

Don't obsess about throughput at the link layer, when the design assumes 
exclusive rights to transmit.

There are techniques for cooperative space-time-rate multiplexing that scale. 
LTE licensed or unlicensed or 802.11ax are not such techniques.

Small cells are a fantasy of the carriers that they can put their licensed gear 
on your property at points they choose. Technically it appears to work in an 
abstract fantasy prototype. In the real world, it can't scale unless you let 
the phone company invade your premises and control all your placement of 
furniture, doors, mirrors, etc.

-Original Message-
From: "Toke Høiland-Jørgensen" 
Sent: Fri, Dec 7, 2018 at 4:08 am
To: "Dave Taht" 
Cc: "Dave Taht" , "cerowrt-devel" 

Subject: Re: [Cerowrt-devel] dlte

Dave Taht  writes:

> Toke Høiland-Jørgensen  writes:
>
>> Mikael Abrahamsson  writes:
>>
>>> On Tue, 4 Dec 2018, Dave Taht wrote:
>>>
 I expect dave reed to comment, so I'll withhold mine for now

 https://kurti.sh/pubs/dLTE-Johnson-HotNets-2018.pdf
>>>
>>> When I read the first page I was hopeful, then unfortunately I got 
>>> disappointed and just quickly scanned the rest. It's still tunneled and 
>>> the same architecture, just more distributed.
>>
>> OK, now I read the paper, and I think you may have missed the part where
>> they say that they terminate the tunnelling at the AP and assign new IPs
>> whenever a client roams. So it's basically WiFi APs over the LTE
>> layer-2... Which is pretty cool, I think :)
>
> It's still based on the false optimism that users will ever get to own
> and control their own LTE AP.

Well, they did say they had done proof of concept tests; and that they
could build a base station for $8000... So might not be completely
impossible...

-Toke
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] security guidelines for home routers

2018-11-28 Thread David P. Reed

Michael Richardson asked: "So where would it go, if not the FTC?"
 
I think Congress has to create a function in some organization that has 
technical and policy capabilities, and the powers to regulate manufacturers.
 
It could be in the Dept. of Commerce, but it needs things the FTC doesn't have. 
I know NIST (also in Commerce) has a number of initiatives in non-military 
security, but not privacy or individual rights. They have the technical 
capabilities in house, and define standards where appropriate. But NIST doesn't 
do policy nor have any power to regulate.
 
Much like the FDA has powers to regulate medical device makers and sellers, 
because there are important public goods in medical treatment, I think it might 
be time to begin dealing with *essential* devices like routers in an 
appropriate way. Doing so while retaining low cost and maximizing innovation is 
hard, but it need not be done the same way medical devices are regulated (in 
fact, medical device regulation should probably be rethought after 100 years of 
progress in technology and medicine).
 

FYI: This whole idea, which seems necessary, makes part of me personally 
uncomfortable. I don't trust Congress to get it right, given the huge amount of 
money available to drive them in the wrong direction. FB and Google have run 
extremely successful propaganda campaigns to convince America that they "serve 
their users" and it is too hard to do the right thing, so we should admire 
their tiny amount of concern about their own bad behavior.  But the real truth 
is that they "serve their users to their customers on a platter", where their 
customers are not their users at all, but a vast advertising and data-brokerage 
system that lives to maximize surveillance of of every behavior of every human 
on the planet, and then to find new exploits that can "monetize" the observed 
behavior.
 
We didn't build the Internet protocols to enable mass surveillance by anybody. 
We built it for simplifying communications among willing participants. But the 
latter good is lost, as the Pied Piper solved our communications concerns using 
the Internet, and then demanded control of our children.

 
-Original Message-
From: "Michael Richardson" 
Sent: Wednesday, November 28, 2018 4:14am
To: "David P. Reed" 
Cc: "Sebastian Moeller" , "cerowrt-devel" 

Subject: Re: [Cerowrt-devel] security guidelines for home routers



David P. Reed  wrote:
 > Personally, I think it's time to move "security" out of the military
 > sector of government..

+1

 > But maybe not in the FCC, which is in a weird part of the USG, with no
 > budget for technical expertise at all. (Congress doesn't want them to
 > have technical resources)

So where would it go, if not the FTC?

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] security guidelines for home routers

2018-11-26 Thread David P. Reed

> I would like it very much if my country attempted to get to something
> similar as a requirement for FCC certification or import. Stronger
> yes, would be nice, but there was
> nothing horrible in here that I could see.
 
Dave T. - You may remember from when I helped get you in contact with the FCC 
regarding their attempt to rule against software updates of routers. Subsequent 
to that, I and others were brought into an ex parte discussion with the top 
policy people in the FCC regarding their role in supporting security reviews of 
routers and development of secure routers for WiFi. The FCC lawyers have 
asserted that they have no legal authority whatsoever in regard to assuring 
security of routers.
 
They haven't been interested in communications security at all, in all of my 
work with them over the last 20 years. Personally, I don't see Congress passing 
laws on router security, or for that matter, "Internet of Things" security. 
There is some thought that the Federal Trade Commission might have authority 
under "consumer protection" and "product safety" laws. But FTC is generally 
weak and uninterested in regulating most technologies.
 
One of the US's problems (which may have parallels in Europe) is that ALL 
responsibility for communications security in the USG resides in the NSA, which 
is in the Department of Defense. Every other part of the USG depends on NSA 
support. (Even the Federal Information Processing Standards for commercial 
encryption are vetted officially by NSA, because they are the only agency that 
has security competencies)
 
Is it a good thing to bring NSA into regulating the security of home routers or 
IoT? Technically, they and their contractors are very sharp on this. I've 
worked with them since I began work in computer security in my research group 
at MIT in 1973. (we did no classified research, but NSA was part of our 
support, and the chief scientist of the NSA shared an office with me when he 
visited us).
 
Personally, I think it's time to move "security" out of the military sector of 
government..
 
But maybe not in the FCC, which is in a weird part of the USG, with no budget 
for technical expertise at all. (Congress doesn't want them to have technical 
resources)
 
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] closing up my make-wifi-fast lab

2018-08-26 Thread David P. Reed
Baran: I got the year wrong. I remember it as 1993, but it was the 1994 CNGN 
speech he made, which is resurrected here:
https://www.eff.org/pages/false-scarcity-baran-cngn-94

Paul was educated in EE, as was I. So radio made sense to him. Unlike kids 
brought up on the idea that bits are and must be physically discrete spatial 
and temporal mechanical things.

You know, one can have 1/10 of a bit of information, and store it in 1/10 of a 
bit of storage. Or transmit a symbol that passes through local noise and comes 
out the other side uncorrupted.
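That fractional-bit claim is just Shannon entropy; a one-liner makes it concrete (awk here as a stand-in for any calculator): a binary source that emits "1" 99% of the time carries only about 0.08 bits per symbol.

awk 'BEGIN { p = 0.99; print -(p*log(p) + (1-p)*log(1-p)) / log(2) }'   # ~0.0808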

But kids trained in fancy CS depts. assume that bits require clear, empty, 
noiseless, pristine paths. Pure Bullshit. But CS and now many EE depts. and the 
FCC all proselytize such crap.

So scarcity is invented and sustained.

There is a reigning Supreme Court opinion, the law of the land, that says that 
there is by law a "finite number" of usable frequencies, and only one 
transmitter can be allowed to use it at a time. Like legislating that pi = 3 in 
a state, to make math easier.

Except it is totally designed to create scarcity. And the State/Industry Nexus 
maintains it at every turn. It's why lunatic economists claim that spectrum is 
a form of property that can be auctioned. Like creating property rights to each 
acre of the sea, allowing owners to block shipping by buying a connected path 
down the mid Atlantic.

We live in a Science Ignorant world. Intentionally. Even trained Ph.D. 
engineers testify before the FCC to preserve these lies.

Yeah, I sound nuts. Check it out.


-Original Message-
From: "Dave Taht" 
Sent: Sat, Aug 25, 2018 at 5:22 pm
To: dpr...@deepplum.com
Cc: bloat-annou...@lists.bufferbloat.net, "bloat" 
, "Make-Wifi-fast" 
, cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] closing up my make-wifi-fast lab

On Sat, Aug 25, 2018 at 1:04 PM David P. Reed  wrote:
>
> WiFi is a bit harder than IP. But you know that.
>
> I truly believe that we need to fix the phy/waveform/modulation space to 
> really scale up open wireless networking capability. LBT is the basic bug in 
> WiFi, and it is at that layer, below the MAC.
>
> I have tried for 20 years now to find a way to begin work at that project, by 
> the way. There is also no major donor anywhere to be found for that work. 
> Instead, any funds that seem to be appearing get attacked and sucked into 
> projects that miss the point, being controlled by folks who oppose openness 
> (e.g. WISPs wanting exclusive ownership of a market, such as so called 
> SuperWiFi or whitespaces). I did once come close to a useful award when I was 
> at MIT Media Lab, from NSF. But after the award, the funding was cut by 90%, 
> leaving just enough to support a Master's thesis on co-channel sharing, using 
> two 1st Gen USRPs. Using my own funds, spare time, and bubblegum and baling 
> wire, I've slowly begun work on extra wideband FPGA based sounding-centric 
> sharing in the 10 GHz Ham band. (500 MHz wide modulation), where I can self 
> certify multiple stations in a network.
>
> But the point is, I've failed, because there is less than zero support. There 
> is active opposition, on top of cluelessness.
>
> Paul Baran tried in 1993 to push forward a similar agenda, famously. 99% of 
> his concepts died.

Cite?

One of the things that bothers me about packet processing is that
Donald Davies (the oft uncredited other founder of the concept) wrote
11 volumes on this subject. So far as I know, those have vanished to
history.

Periodically, when I get stuck on something in this field, I fantasize
that scribbled in the margin of volume 9 was the solution to the
problem.

> Thanks to Apple, and lots of others, we got WiFi, barely. Industry hated 
> that, and vow never to let that ever happen again.

It really was a strange convolution of circumstances that led to wifi.
When i first got it working in 1998, metricom ruled the world. They
failed. After that, nobody thought it was feasible at scale until the
concept of a mac retry emerged to fix the packet loss problem, and APs
to provide a central clock (best we could do with the DSPs then).  So
a window emerged (and yes, hugely driven by apple, but also by huge
popular demand for "wireless freedom") to put "buggy" wireless tech on
the crap 2.4 band in the hands of the people, it got established and
made the coffee shop a workplace, and bigcos attempting to wipe it out
(and largely, in the last few years, succeeding in dislodging it) have
had an uphill battle.

If metricom had succeeded, or the celluar folk got their
implementations working only a few years faster, it would be a very
different world.

(this history is all covered in my MIT preso here:
https://www.youtube.com/watch?v=Wksh2DPHCDI&t=2007s - david was at
that one)

>
> So Dave, I salute you and Toke and the ot

Re: [Cerowrt-devel] closing up my make-wifi-fast lab

2018-08-25 Thread David P. Reed
WiFi is a bit harder than IP. But you know that. 

I truly believe that we need to fix the phy/waveform/modulation space to really 
scale up open wireless networking capability. LBT is the basic bug in WiFi, and 
it is at that layer, below the MAC.

I have tried for 20 years now to find a way to begin work at that project, by 
the way. There is also no major donor anywhere to be found for that work. 
Instead, any funds that seem to be appearing get attacked and sucked into 
projects that miss the point, being controlled by folks who oppose openness 
(e.g. WISPs wanting exclusive ownership of a market, such as so called 
SuperWiFi or whitespaces). I did once come close to a useful award when I was 
at MIT Media Lab, from NSF. But after the award, the funding was cut by 90%, 
leaving just enough to support a Master's thesis on co-channel sharing, using 
two 1st Gen USRPs. Using my own funds, spare time, and bubblegum and baling 
wire, I've slowly begun work on extra wideband FPGA based sounding-centric 
sharing in the 10 GHz Ham band. (500 MHz wide modulation), where I can self 
certify multiple stations in a network.

But the point is, I've failed, because there is less than zero support. There 
is active opposition, on top of cluelessness.

Paul Baran tried in 1993 to push forward a similar agenda, famously. 99% of his 
concepts died. Thanks to Apple, and lots of others, we got WiFi, barely. 
Industry hated that, and vow never to let that ever happen again.

So Dave, I salute you and Toke and the others. I salute Tim Shepard, who also 
moved the ball in his PhD thesis, only to hit the same wall of opposition.

It's so sad. We get shit like the "Obama band" proposed by PCAST, and are told 
to be thankful. 

UWB failed miserably, too.

My advice to any young smart innovator: don't touch wireless unless you are 
working for an incumbent. Expect the incumbents and governments to close and 
destroy wireless innovation.

Really. You will be in a world of hurt, and NO ONE will support anything. Not 
even VCs.

Very sorry to say this. I had hoped Make WiFi Fast would have gone somewhere. I 
mourn its passing.

-Original Message-
From: "Dave Taht" 
Sent: Fri, Aug 24, 2018 at 4:10 pm
To: bloat-annou...@lists.bufferbloat.net, "bloat" 
, "Make-Wifi-fast" 
, cerowrt-devel@lists.bufferbloat.net
Cc: bloat-annou...@lists.bufferbloat.net, "bloat" 
, "Make-Wifi-fast" 
, cerowrt-devel@lists.bufferbloat.net
Subject: [Cerowrt-devel] closing up my make-wifi-fast lab

All:

It is with some regret that I am announcing the closing of my
make-wifi-fast lab at the end of this month.

Over the years we have relied on the donation of lab space from
ISC.org, georgia tech, the LINCs, and the University of Karstadt and
elsewhere - but my main base of operation has always been the
"yurtlab", in a campground deep in the los gatos hills where I could
both experiment and deploy wifi fixes[0] at scale. CeroWrt, in
particular, was made here.

During the peak of the make-wifi-fast effort I rented additional space
on the same site, which at peak had over 30 routers in a crowded
space, competing. Which I (foolishly) kept, despite the additional
expense. Having heat in the winter and aircond in the summer was
helpful.

With ongoing donations running at $90/month[1] - which doesn't even
cover bufferbloat.net's servers in the cloud - my biggest expense has
been keeping the lab at lupin open at $1800/mo.

I kept the lab going through the sch_cake and openwrt 18.06 release
process, and I'm now several months behind on rent[3], and given how
things have gone for the past 2 years I don't see much use for it in
the future. Keeping it open, heated and dry in the winter has always
been a problem also. I'm also aware of a few larger, much better
equipped wifi labs that have thoroughly tested our "fq_codel for
wifi"[4] work that finally ends the "wifi performance anomaly". it's
in multiple commercial products now, we're seeing airtime fairness
being actually *marketed* as a wifi feature, and I kind of expect
deployment be universal across all mediatek mt76, and qualcomm ath9k
and ath10k based products in the next year or two. We won, big, on
wifi. Knocked it out of the park. Thanks all!

Despite identifying all kinds of other work[5] that can be done to
make wifi better, no major (or even minor) direct sponsor has ever
emerged[2] for the make-wifi-fast project. We had a small grant from
comcast, a bit of support from nlnet also, I subsidized what I did
here from other work sources, toke had his PHD support, and all the
wonderful volunteers here... and that's it.

Without me being able, also, to hire someone to keep the lab going, as
I freely admit to burnout and PTSD on perpetually reflashing and
reconfiguring routers...

I'm closing up shop here to gather enough energy, finances, and time
for the next project, whatever it is.

The make-wifi-fast mailing list and project will continue, efforts to
make more generic the new API also, and hopefully ther

Re: [Cerowrt-devel] dnsmasq CVEs

2017-10-04 Thread David P Reed
I share your concern for updates, and support for same.

However, there are architectural solutions we should have pursued a long time 
ago, which would bound the damage of such vulnerabilities. Make the system far 
more robust.

There's no reason for dnsmasq to run with privileges. Nor should packet 
parsing. All datagrams should be end-to-end authenticated.

We developed these rules in 1973-78, both in Multics and in the MIT part of the 
Internet design. Recommended a specific embedding of cryptography in TCP.

They were rejected as unnecessary by Unix and by the TCP decision-makers.

Now Fedora Server uses SELinux in its packaged version of dnsmasq, so dnsmasq 
can't do anything it is not permitted to do, or access resources it isn't 
supposed to. My personal home router is Fedora 26 Server, so I feel very calm 
about using dnsmasq.

But the "community" rejects SELinux! Turns it off after install. I know it is a 
pain, but it works. And it is based on the Multics concepts that Unix ignored. 
The principle of least privilege.
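On a Fedora box that confinement is easy to verify (standard tools; sesearch assumes the setools package and the stock targeted policy):

sestatus                          # confirm SELinux is actually enforcing
ps -eZ | grep dnsmasq             # should run in the dnsmasq_t domain, not unconfined_t
sesearch -A -s dnsmasq_t | head   # what the policy actually lets it do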


Sent from BlueMail

On Oct 3, 2017, 8:50 PM, Dave Taht  wrote:
>Back before I was trying to keep my blood pressure reliably low, I
>would have responded to this set of dnsmasq vulns
>
>https://www.cso.com.au/article/628031/prehistoric-bugs-dnsmasq-strike-android-linux-google-kubernetes/
>
>with an impassioned plea to keep a financial floor under the primary
>authors of network facing software as an insurance policy for network
>society. I also have long hoped that we would see useful risk
>assessments vs costs of prevention emerge from network vulnerable
>companies and insurance houses.
>
>Billions of devices run dnsmasq, and it had been through multiple
>security audits before now. Simon had done the best job possible, I
>think. He got beat. No human and no amount of budget would have found
>these problems before now, and now we face the worldwide costs, yet
>again, of something ubiquitous now, vulnerable.
>
>I'd long hoped, also, we'd see rapid updates enter the entire IoT
>supply chain, which remains a bitter joke. "Prehistoric" versions of
>dnsmasq litter that landscape, and there is no way they will ever be
>patched, and it would be a good bet that many "new" devices for the
>next several years will ship with a vulnerable version.
>
>I've grown quite blase' I guess, since heartbleed, and the latest list
>of stuff[1,2,3,4] that scared me only just last week, is now topped by
>this one, affecting a humongous list of companies and products.
>
>http://www.kb.cert.org/vuls/byvendor?searchview&Query=FIELD+Reference=973527&SearchOrder=4
>
>I am glad to see lede and google reacting so fast to distribute
>updates... and I'm sure the container folk and linux distros will also
>react quickly...
>
>... but,  it will take decades for the last vulnerable router to be
>taken out of the field. And that hardly counts all the android boxes,
>all the linux distros that use dnsmasq, all the containers you'll find
>dnsmasq in, and elsewhere. Those upgrades, might only take years.
>
>[1]
>http://bits-please.blogspot.com/2016/06/trustzone-kernel-privilege-escalation.html
>(many others, just google for "trustzone vulnerability")
>[2]
>http://www.zdnet.com/article/researchers-say-intels-management-engine-feature-can-be-switched-off/
>[3] https://www.kb.cert.org/vuls/id/240311
>[4]
>https://arstechnica.com/information-technology/2013/09/researchers-can-slip-an-undetectable-trojan-into-intels-ivy-bridge-cpus/
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Problems testing sqm

2015-10-24 Thread David P. Reed
Understand.

On Oct 24, 2015, Jonathan Morton  wrote:
>
>> On 24 Oct, 2015, at 19:34, David P. Reed  wrote:
>>
>> Not trying to haggle. Just pointing out that this test configuration
>> has a very short RTT, maybe too short for our SQM to adjust to.
>
>It should still get the bandwidth right.  When it does, we’ll know that
>the setup is correct.
>
> - Jonathan Morton

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Problems testing sqm

2015-10-24 Thread David P. Reed
Not trying to haggle. Just pointing out that this test configuration has a very 
short RTT, maybe too short for our SQM to adjust to.

On Oct 24, 2015, Sebastian Moeller  wrote:
>Hi David,
>
>On Oct 24, 2015, at 00:53 , David P. Reed  wrote:
>
>> In particular, the DUT should probably have no more than 2 packets
>> of outbound queueing given the very small RTT. 2xRTT is the most
>> buffering you want in the loop.
>
>   Let’s not haggle about the precise amount of queueing we deem
>acceptable, as long as we all agree that >= 2 seconds is simply not
>acceptable ;) (the default sqm will approximately limit the latency
>under load increase (LULI) to roughly twice the target or typically 10
>ms; note that this LULI only applies to unrelated flows). The exact
>number of queued packets seems to correlate with the beefiness of the
>DUT, the beefier the fewer packets should work, wimpier devices might
>need to batch some processing up, resulting in  higher LULI…
>
>Best Regards
>   Sebastian
>
>>
>> On Oct 23, 2015, Richard Smith  wrote:
>> On 10/23/2015 02:41 PM, Michael Richardson wrote:
>> Richard Smith  wrote:
>> My test setup:
>>
>> Laptop<--1000BaseT-->DUT<--1000baseT-->Server
>>
>> So, given that the DUT is the only real constraint in the network, what
>> do you expect to see from this setup?
>>
>> Given that the DUT probably can't forward at Gb/s, and it certainly can't
>> shape anything, it's gonna drop packets, and it's probably gonna drop them in
>> Rx, having overrun the Rx-queue (so tail-drop). If there is too much ram
>> (bufferbloated), then you'll see different results...
>>
>> Setting ingress/egress to 10Mbit/s I expected to see the speed
>> measurements bounce around those limits with the ping times staying in
>> the low double digits of ms. What I saw, however, was the data rates
>> going well past the 10Mbit limit and pings up to 2000 ms.
>>
>> This is what I've seen in prior rrul testing using the 50/10 cable
>> link at our office and my 25(ish)/6 link at my apartment and a well
>> connected server on the net. That however was using QoS and not SQM.
>>
>> Is that a reasonable expectation?
>>
>> -- Sent with K-@ Mail - the evolution of emailing.
>___
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Problems testing sqm

2015-10-23 Thread David P. Reed
In particular,  the DUT should probably have no more than 2 packets of outbound 
queueing given the very small RTT. 2xRTT is the most buffering you want in the 
loop.
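The back-of-envelope arithmetic behind that, assuming ~1 ms of RTT on this all-GigE path and the 10 Mbit/s shaped rate used in the test:

# BDP = rate x RTT = 10 Mbit/s x 1 ms = 10,000 bits
echo $(( 10000000 / 8 / 1000 ))   # -> 1250 bytes, less than one 1500-byte packet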

On Oct 23, 2015, Richard Smith  wrote:
>On 10/23/2015 02:41 PM, Michael Richardson wrote:
>> Richard Smith  wrote:
>>  > My test setup:
>>
>>  > Laptop<--1000BaseT-->DUT<--1000baseT-->Server
>>
>> So, given that the DUT is the only real constraint in the network, what
>> do you expect to see from this setup?
>>
>> Given that the DUT probably can't forward at Gb/s, and it certainly can't
>> shape anything, it's gonna drop packets, and it's probably gonna drop them in
>> Rx, having overrun the Rx-queue (so tail-drop).  If there is too much ram
>> (bufferbloated), then you'll see different results...
>
>Setting ingress/egress to 10Mbit/s I expected to see the speed
>measurements bounce around those limits with the ping times staying in
>the low double digits of ms.  What I saw, however, was the data rates
>going well past the 10Mbit limit and pings up to 2000 ms.
>
>This is what I've seen in prior rrul testing using the 50/10 cable
>link at our office and my 25(ish)/6 link at my apartment and a well
>connected server on the net.  That however was using QoS and not SQM.
>
>Is that a reasonable expectation?

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Problems testing sqm

2015-10-23 Thread David P. Reed
Sqm is a way to deal with the dsl or cable modem having bufferbloat. In the 
configuration described neither end is the problem ... the DUT itself may have 
bufferbloat. That would be terrible.
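A quick, read-only way to check that on the DUT itself (the interface name is a guess):

tc -s qdisc show dev eth0   # cake/fq_codel present, or a bare pfifo_fast?
                            # watch "backlog" climb here during an rrul run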

On Oct 23, 2015, Richard Smith  wrote:
>On 10/23/2015 02:41 PM, Michael Richardson wrote:
>> Richard Smith  wrote:
>>  > My test setup:
>>
>>  > Laptop<--1000BaseT-->DUT<--1000baseT-->Server
>>
>> So, given that the DUT is the only real constraint in the network, what
>> do you expect to see from this setup?
>>
>> Given that the DUT probably can't forward at Gb/s, and it certainly can't
>> shape anything, it's gonna drop packets, and it's probably gonna drop them in
>> Rx, having overrun the Rx-queue (so tail-drop).  If there is too much ram
>> (bufferbloated), then you'll see different results...
>
>Setting ingress/egress to 10Mbit/s I expected to see the speed
>measurements bounce around those limits with the ping times staying in
>the low double digits of ms.  What I saw however, was the data rates
>going well past 10Mbit limit and pings up to 2000 ms.
>
>This is what I've seen in prior rrul testing using a the 50/10 cable
>link at our office and my 25(ish)/6 link at my apartment and a well
>connected server on the net.  That however was using QoS and not SQM.
>
>Is that a reasonable expectation?

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [FCC] some comments from elsewhere on the lockdown

2015-10-01 Thread David P. Reed
I love copyleft thinking. I worry that I can't sign something so provocative, 
because it invokes regulatory overreach.

The letter is taking a totalitarian turn, asking government to go beyond 
choice. I thought we were reducing the power of the FCC as an institution, but 
now it is a call for extreme control.

Instead of innovation it seeks control over innovators.

On Sep 30, 2015, valdis.kletni...@vt.edu wrote:
>On Wed, 30 Sep 2015 16:11:38 -0400, Christopher Waid said:
>
>> > Apparently, they were of the opinion that the mere fact that I
>might
>> > die of a heart attack a year after distributing something doesn't
>> > excuse me from complying.)
>>
>> I don't know if it does excuse you from complying, but I say good
>luck
>> to the person trying to get it enforced.
>
>They could quite possibly hassle the executor of my estate if they were
>sufficiently determined.
>
>But given that abandonware (both software and hardware) is a big chunk
>of the problem, we really *do* need to address the problem of companies
>that can't provide patches because they've gone under.  Possibly a
>requirement that they open-source the hardware/software if possible?
>(That's another can-o-worms - consider that a big chunk of why NVidia
>doesn't open-source their proprietary graphics drivers is because
>there's
>a lot of OpenGL-related patents and trade secrets that Microsoft bought
>when
>there was the big fire sale when SGI got out of the graphics market -
>so
>it's quite possible that a vendor *can't* open-source it when they go
>under due to licensing issues...)

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] some comments from elsewhere on the lockdown

2015-09-25 Thread David P. Reed
Sounds great to me

On Sep 25, 2015, Dave Taht  wrote:
>The core of the FCC letter is currently this. comments?
>
>snip snip
>
>In place of last year’s and the new proposed regulations, we propose a
>system of rules that would foster innovation, improve security, make
>Wi-Fi better, and overall improve usage of the Wi-Fi spectrum for
>everybody.
>
>1) Mandate that: for a SDR, wireless, or Wi-Fi radio of any sort - in
>order to achieve FCC compliance - full and maintained source code for
>at least the device driver and radio firmware be made publicly
>available in a source code repository on the internet, available for
>review and improvement by all.
>
>2) Mandate that: the vendor supply a continuous update stream, one
>that must respond to regulatory transgressions and CVEs within 45 days
>of disclosure, for the warranted lifetime of the product + 5 years
>after last customer ship.
>
>3) Mandate that: secure update of firmware be working at shipment, and
>that update streams be under ultimate control of the owner of the
>equipment. Problems with compliance can then be fixed going forward.
>
>4)  Failure to comply with these regulations will result in FCC
>decertification of the existing product and in severe cases, bar new
>products from that vendor from being considered for certification.
>
>5) In addition, we ask that the FCC review and rescind the rules for
>anything that conflicts with open source best practices, and/or which
>causes the vendors to believe they should hide the mechanisms they use
>by shipping undocumented “binary blobs” of compiled code.  This had
>been an ongoing problem to all in the internet community trying to do
>change control and error correction on safety-critical systems.
>
>
>On Fri, Sep 25, 2015 at 9:16 PM, David P. Reed  wrote:
>> Those of us who innovate at the waveform and MAC layer would argue
>> differently.  The cellular operators are actually the responsible
>control
>> operators and hold licenses for that. They may want to lock down
>phones'
>> cellular transmitters. But U-NII and ism bands are not licensed to
>these
>> operators. There is no license requirement for those bands to use
>particular
>> waveforms or MAC layers.
>>
>> So this is massive overreach. The control operator of the "licensed
>by rule"
>> Part 15 radios in your phone or home are licensed to the device user
>and not
>> to the mfr at all. For example, the user is responsible that the
>device not
>> interfere with licensed services, and that the device stop
>transmitting if
>> such harmful interference is called to their attention, *even* if the
>device
>> passed certification.
>>
>> Lock down has not been demonstrated to be necessary. This is all due
>to
>> fearful what - if speculation by people who have no data to justify
>the
>> need, plus attempt to stop innovation by licensees who want to
>exclude
>> competitors from being created, like LTE operators proposing LTE-U
>which
>> will be locked down and is the stalking horse for taking back open
>part 15
>> operation into a licensed regime based on property rights to
>spectrum.
>>
>>
>> On Sep 24, 2015, Dave Taht  wrote:
>>>
>>> a commenter that I will keep anonymous wrote:
>>>
>>>
>>> Regarding the FCC firmware lockdown issue, I’m sure you’re aware
>that
>>> baseband firmware in cellphones has been subject to similar
>>> restrictions for some time. In fact, the FCC effectively mandates
>that
>>> baseband functionality is implemented on a whole separate subsystem
>>> with its own CPU to make it easier to isolate and protect. Also, the
>>> cellphone system is designed so that a misbehaving node can be
>easily
>>> identified and blocked from the network, making it useless and
>>> removing most of the incentive to find ways around regulatory
>>> restrictions. Wi-Fi devices have none of these protections.
>>>
>>> I believe this new attention to Wi-Fi devices is a consequence of
>many
>>> factors:
>>>
>>> The precedent from cellphone baseband firmware control; regulators
>are
>>> easily inspired by success stories in related areas
>>> The substantial increase in flexibility offered by SDR
>implementations
>>> Technical ignorance, for example of the difference between OS,
>>> protocol, and UI firmware and baseband firmware
>>> The expansion of allowed capabilities in Wi-Fi hardware (from 5.8
>GHz
>>> ISM to the U-NII bands, increases in transmit power allowances,
>etc.)
>>> The improved spectrum utilizat

Re: [Cerowrt-devel] some comments from elsewhere on the lockdown

2015-09-25 Thread David P. Reed
Those of us who innovate at the waveform and MAC layer would argue differently. 
 The cellular operators are actually the responsible control operators and hold 
licenses for that. They may want to lock down phones' cellular transmitters. 
But U-NII and ism bands are not licensed to these operators. There is no 
license requirement for those bands to use particular waveforms or MAC layers.

So this is massive overreach. The control operator of the "licensed by rule" 
Part 15 radios in your phone or home are licensed to the device user and not to 
the mfr at all. For example, the user is responsible that the device not 
interfere with licensed services, and that the device stop transmitting if such 
harmful interference is called to their attention, *even* if the device passed 
certification.

Lock down has not been demonstrated to be necessary. This is all due to fearful 
what - if speculation by people who have no data to justify the need, plus 
attempt to stop innovation by licensees who want to exclude competitors from 
being created, like LTE operators proposing LTE-U which will be locked down and 
is the stalking horse for taking back open part 15 operation into a licensed 
regime based on property rights to spectrum.

On Sep 24, 2015, Dave Taht  wrote:
>a commenter that I will keep anonymous wrote:
>
>
>Regarding the FCC firmware lockdown issue, I’m sure you’re aware that
>baseband firmware in cellphones has been subject to similar
>restrictions for some time. In fact, the FCC effectively mandates that
>baseband functionality is implemented on a whole separate subsystem
>with its own CPU to make it easier to isolate and protect. Also, the
>cellphone system is designed so that a misbehaving node can be easily
>identified and blocked from the network, making it useless and
>removing most of the incentive to find ways around regulatory
>restrictions. Wi-Fi devices have none of these protections.
>
>I believe this new attention to Wi-Fi devices is a consequence of many
>factors:
>
>The precedent from cellphone baseband firmware control; regulators are
>easily inspired by success stories in related areas
>The substantial increase in flexibility offered by SDR implementations
>Technical ignorance, for example of the difference between OS,
>protocol, and UI firmware and baseband firmware
>The expansion of allowed capabilities in Wi-Fi hardware (from 5.8 GHz
>ISM to the U-NII bands, increases in transmit power allowances, etc.)
>The improved spectrum utilization of newer Wi-Fi modulation schemes
>Inconsistencies among international regulations for spectrum allocation
>Spectrum sharing between Wi-Fi and life safety applications
>The relative lack of attention to (and sometimes, the deliberate
>flouting of) regulatory constraints in open-source firmware
>The increased availability of open-source firmware for higher-power
>and narrow-beam Wi-Fi devices (not just the WRT-54G :-)
>
>
>And probably more I can’t think of off the top of my head, but which
>regulators are obsessing over every day.
>
>Although I agree with the spirit of your FCC email draft letter, it
>does not address most of these factors, so it’s likely to be seen as
>missing the point by regulators. If you want to reach these people,
>you have to talk about the things they’re thinking about.
>
>What you ought to be pushing for instead is that Wi-Fi devices be
>partitioned the same way cellphones are, defining a baseband section
>that can be locked down so that the device can’t operate in ways that
>are prohibited by the relevant local regulations, so that the OS,
>protocol, and UI code on the device can be relatively more open for
>the kinds of optimizations and improvements we all want to see.
>
>It’s possible that the partition could be in software alone, or in
>some combination of hardware and software, that doesn’t require a
>cellphone-style independent baseband processor, which would add a lot
>of cost to Wi-Fi devices. For example, the device vendor could put
>baseband-related firmware into a trusted and _truly minimal_ binary
>module that the OS has to go through to select the desired frequency,
>power, and modulation scheme, even for open-source solutions. That
>doesn’t mean the source code for the binary module can’t be published,
>or even that there can’t be a mandate to publish it.
>
>I’m sure that doesn’t sound like a great solution to you, but making
>it easy for end users to configure commercial devices to transmit at
>maximum power on unauthorized frequencies using very dense modulation
>schemes doesn’t sound like a great solution to regulators, and the
>difference between you and the regulators is that they are more
>determined and, frankly, better armed. It will do you no good to
>constrain the range of the solutions you’ll accept so that it doesn’t
>overlap with the solutions they will accept.
>
>.   png
>
>
>On Sep 21, 2015, at 5:10 AM, Dave Taht  wrote:
>
>
>Dave,
>
>
>Huh. I have been interested in mesh networking 

Re: [Cerowrt-devel] [Bloat] marketing #102 - giving netperf-wrapper a better name?

2015-03-20 Thread David P. Reed
Drag is a fluid dynamic term that suggests a meaning close to this... flow 
rate dependent friction.

But what you really want to suggest is a flow rate dependent *delay* that 
people are familiar with quantifying.

Fq_codel limits the delay as flow rate increases and is fair.

The max buffer limits delay due to Little's lemma also.

The actual delay limit in practice is what one measures in most cases. Codel is 
just softer than the hard limit of a small buffer.

So there are two qualitative measures - delay limit in units of milliseconds, 
and softness or stiffness in units of milliseconds per queue depth, I'd guess. 
Softness gives the recovery rate after a burst.

You should divide the delay limit by the elephant packet's size expressed in 
milliseconds, based on the channel rate.
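
To make the units concrete - a sketch with assumed numbers (1500-byte elephant 
packet, 10 Mbit/s channel, 5 ms delay limit):

  # one 1500-byte packet at 10 Mbit/s (= 10000 bits per ms)
  $ echo "scale=2; 1500 * 8 / 10000" | bc
  1.20
  # a 5 ms delay limit is then about 4 packet-times of queue
  $ echo "scale=1; 5 / 1.2" | bc
  4.1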

I'd think the scaled delay limit should

On Mar 20, 2015, "Bill Ver Steeg (versteb)"  wrote:
>I was kidding about "sucks-less", and forgot the smiley in my initial
>note.
>
>We do need a metric with an end-user-friendly name, though. Most people
>understand "lag", and understand that lower numbers are better. You
>could probably explain "lag-while-loaded" to most users (particularly
>people who care, like gamers) in a manner that got the point across.
>
>Bvs
>
>
>-Original Message-
>From: Jonathan Morton [mailto:chromati...@gmail.com]
>Sent: Friday, March 20, 2015 4:26 PM
>To: Bill Ver Steeg (versteb)
>Cc: Rémi Cardona; bloat; cerowrt-devel@lists.bufferbloat.net
>Subject: Re: [Bloat] marketing #102 - giving netperf-wrapper a better
>name?
>
>
>> On 20 Mar, 2015, at 22:08, Bill Ver Steeg (versteb)
> wrote:
>>
>> We should call the metric "sucks-less". As in "Box A sucks less than
>Box B", or "Box C scored a 17 on the sucks less test".
>
>I suspect real marketing drones would get nervous at a
>negative-sounding name.
>
>My idea - which I’ve floated in the past, more than once - is that the
>metric should be “responsiveness”, measured in Hertz.  The baseline
>standard would be 10Hz, corresponding to a dumb 100ms buffer.  Get down
>into the single-digit millisecond range, as fq_codel does, and the
>Responsiveness goes up above 100Hz, approaching 1000Hz.
>
>Crucially, that’s a positive sort of term, as well as trending towards
>bigger numbers with actual improvements in performance, and is thus
>more potentially marketable.
>
> - Jonathan Morton
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David P. Reed
I think this is because there are a lot of packets in flight from end to end, 
meaning that the window is wide open and has way overshot the mark. This can 
happen if the receiving end keeps opening its window and has not encountered a 
lost frame.  That is: the dropped or marked packets are not happening early 
enough.

Evaluating an RTO measure from an out-of-whack system that is not sending 
congestion signals is not a good source of data, unless you also show the 
internal state of the endpoints at the same time.

Do the control theory.
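
For reference, the textbook RTO arithmetic (RFC 6298) that the quoted draft 
below measures against - the numbers here are purely illustrative:

  # RTO = SRTT + max(G, 4*RTTVAR); assume SRTT = 100 ms, RTTVAR = 50 ms
  $ echo "100 + 4 * 50" | bc
  300
  # bloated queues inflate both SRTT and RTTVAR, so the RTO - and the
  # stall after a tail-of-burst loss - grows to many times the base RTT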

On Mar 20, 2015, Michael Welzl  wrote:
>
>> On 20. mar. 2015, at 17.31, Jonathan Morton 
>wrote:
>>
>>
>>> On 20 Mar, 2015, at 16:54, Michael Welzl  wrote:
>>>
>>> I'd like people to understand that packet loss often also comes with
>delay - for having to retransmit.
>>
>> Or, turning it upside down, it’s always a win to drop packets (in the
>service of signalling congestion) if the induced delay exceeds the
>inherent RTT.
>
>Actually, no: as I said, the delay caused by a dropped packet can be
>more than 1 RTT - even much more under some circumstances. Consider
>this quote from the intro of
>https://tools.ietf.org/html/draft-dukkipati-tcpm-tcp-loss-probe-01  :
>
>***
>To get a sense of just how long the RTOs are in relation to
>   connection RTTs, following is the distribution of RTO/RTT values on
>   Google Web servers. [percentile, RTO/RTT]: [50th percentile, 4.3];
>   [75th percentile, 11.3]; [90th percentile, 28.9]; [95th percentile,
>   53.9]; [99th percentile, 214].
>***
>
>That would be for the unfortunate case where you drop a packet at the
>end of a burst and you don't have TLP or anything, and only an RTO
>helps...
>
>Cheers,
>Michael

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David P. Reed
SamKnows is carefully constructed politically to claim that everyone has great 
service and no problems are detected. They were constructed by opponents of 
government supervision - the corporate FCC lobby.

Don't believe they have any incentive to measure customer-relevant metrics.

M-Lab is better by far. But control by Google automatically discredits its 
data, as do the claims by operators that measurements by independent parties 
violate their trade secrets. Winning that battle requires a group that can 
measure while supporting a very expensive defense against lawsuits by 
operators making such claims of trade secrecy.

Criticizing M-Lab is just fodder for the operators' lobby in DC.

On Mar 20, 2015, "Livingood, Jason"  wrote:
>>*I realize not everyone likes the Ookla tool, but it is popular and
>about
>>as "sexy" as you are going to get with a network performance tool.
>
>Ookla has recently been acquired by Ziff-Davis
>(http://finance.yahoo.com/news/ziff-davis-acquires-ookla-120100454.html).
>I am not sure how that may influence their potential involvement. I
>have
>suggested they add this test previously. I also suggested it be added
>to
>the FCC's SamKnows / Measuring Broadband America platform and that the
>FCC potentially does a one-off special report on the results.
>
>- Jason

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David P. Reed
The mystery in most users' minds is that a ping at a time when there is no 
load doesn't tell them anything at all about why the network connection will 
suck when their kid is uploading to YouTube.

So giving them ping time is meaningless.
I think most network engineers think ping time is a useful measure of a badly 
bufferbloated system. It is not.

The only measure is ping time under maximum load of raw packets.

And that requires a way to test maximum load rtt.
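
That is essentially what the rrul test automates; a minimal manual version 
from the shell (the server name is a placeholder):

  $ ping -i 0.2 netperf.example.net &                    # latency probe
  $ netperf -H netperf.example.net -t TCP_STREAM -l 30   # saturate the link
  $ kill %1
  # the number that matters is how far the ping RTT rises while the
  # stream runs, not the unloaded ping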

There is no problem with that ... other than that to understand why and how 
that is relevant you have to understand Internet congestion control.

Having had to testify before CRTC about this, I learned that most access 
providers (the Canadian ones) claim that such measurements are never made as a 
measure of quality, and that you can calculate expected latency by using 
Little's lemma from average throughput. And that dropped packets are the right 
measure of quality of service.

Ookla ping time is useless in a context where even the "experts" wearing ties 
from the top-grossing Internet firms are so confused. And maybe deliberately 
misleading... they had to be forced to provide any data they had 
about  congestion in their networks by a ruling during the proceeding and then 
responded that they had no data - they never measured queueing delay and 
disputed that it mattered. The proper measure of congestion was throughput.

I kid you not.

So Ookla ping time is useless against such public ignorance.



That's completely wrong for

On Mar 20, 2015, MUSCARIELLO Luca IMT/OLN  wrote:
>I agree. Having that ping included in Ookla would help a lot more
>
>Luca
>
>
>On 03/20/2015 12:18 AM, Greg White wrote:
>> Netalyzr is great for network geeks, hardly consumer-friendly, and
>even so
>> the "network buffer measurements" part is buried in 150 other
>statistics.
>> Why couldn't Ookla* add a simultaneous "ping" test to their
>throughput
>> test?  When was the last time someone leaned on them?
>>
>>
>> *I realize not everyone likes the Ookla tool, but it is popular and
>about
>> as "sexy" as you are going to get with a network performance tool.
>>
>> -Greg
>>
>>
>>
>> On 3/19/15, 2:29 PM, "dpr...@reed.com"  wrote:
>>
>>> I do think engineers operating networks get it, and that Comcast's
>>> engineers really get it, as I clarified in my followup note.
>>>
>>> The issue is indeed prioritization of investment, engineering
>resources
>>> and management attention. The teams at Comcast in the engineering
>side
>>> have been the leaders in "bufferbloat minimizing" work, and I think
>they
>>> should get more recognition for that.
>>>
>>> I disagree a little bit about not having a test that shows the
>issue, and
>>> the value the test would have in demonstrating the issue to users.
>>> Netalyzer has been doing an amazing job on this since before the
>>> bufferbloat term was invented. Every time I've talked about this
>issue
>>> I've suggested running Netalyzer, so I have a personal set of
>comments
>> >from people all over the world who run Netalyzer on their home
>networks,
>>> on hotel networks, etc.
>>>
>>> When I have brought up these measurements from Netalyzr (which are
>not
>>> aimed at showing the problem as users experience) I observe an
>>> interesting reaction from many industry insiders:  the results are
>not
>>> "sexy enough for stupid users" and also "no one will care".
>>>
>>> I think the reaction characterizes the problem correctly - but the
>second
>>> part is the most serious objection.  People don't need a measurement
>>> tool, they need to know that this is why their home network sucks
>>> sometimes.
>>>
>>>
>>>
>>>
>>>
>>> On Thursday, March 19, 2015 3:58pm, "Livingood, Jason"
>>>  said:
>>>
 On 3/19/15, 1:11 PM, "Dave Taht"  wrote:

> On Thu, Mar 19, 2015 at 6:53 AM,   wrote:
>> How many years has it been since Comcast said they were going to
>fix
>> bufferbloat in their network within a year?
 I¹m not sure anyone ever said it¹d take a year. If someone did
>(even if
 it
 was me) then it was in the days when the problem appeared less
 complicated
 than it is and I apologize for that. Let¹s face it - the problem is
 complex and the software that has to be fixed is everywhere. As I
>said
 about IPv6: if it were easy, it¹d be done by now. ;-)

>> It's almost as if the cable companies don't want OTT video or
>> simultaneous FTP and interactive gaming to work. Of course not.
>They'd
>> never do that.
 Sorry, but that seems a bit unfair. It flies in the face of what we
>have
 done and are doing. We¹ve underwritten some of Dave¹s work, we got
 CableLabs to underwrite AQM work, and I personally pushed like heck
>to
 get
 AQM built into the default D3.1 spec (had CTO-level awareness &
>support,
 and was due to Greg White¹s work at CableLabs). We are starting to
>field
 test D3.1 gear now, by the way. We made some bad bets too, such as
 trying
 

Re: [Cerowrt-devel] DOCSIS 3+ recommendation?

2015-03-17 Thread David P. Reed
It is not the cable modem itself that is bufferbloated. It is the head end 
working with the cable modem. DOCSIS 3 has mechanisms to avoid queue buildup 
but they are turned on by the head end.

I don't know for sure but I believe that the modem itself cannot measure or 
control the queueing in the system to minimize latency.

You can use codel or whatever if you bound your traffic upward and stifle 
traffic downward. But that doesn't deal with the queueing in the link away from 
your home.
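
Bounding upward is plain egress shaping; stifling downward is usually done by 
redirecting ingress through an ifb device and shaping below the provisioned 
downstream rate, so the head-end queue never gets a chance to build. A hedged 
sketch - device names and the 90mbit figure are placeholders:

  $ ip link add ifb0 type ifb && ip link set ifb0 up
  $ tc qdisc add dev eth0 handle ffff: ingress
  $ tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
  $ tc qdisc add dev ifb0 root handle 1: htb default 10
  $ tc class add dev ifb0 parent 1: classid 1:10 htb rate 90mbit
  $ tc qdisc add dev ifb0 parent 1:10 fq_codel

It only works by sacrificing a slice of the downstream rate; the queue in the 
head end itself stays out of reach, exactly as described above.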

On Mar 17, 2015, valdis.kletni...@vt.edu wrote:
>On Mon, 16 Mar 2015 13:35:32 -0700, Matt Taggart said:
>> Hi cerowrt-devel,
>>
>> My cable internet provider (Comcast) has been pestering me (monthly
>email
>> and robocalls) to upgrade my cable modem to something newer. But I
>_like_
>> my current one (no wifi, battery backup) and it's been very stable
>and can
>> handle the data rates I am paying for. But they are starting to roll
>out
>> faster service plans and I guess it would be good to have that option
>(and
>> eventually they will probably boost the speed of the plan I'm paying
>for).
>> So...
>>
>> Any recommendations for cable modems that are known to be solid and
>less
>> bufferbloated?
>
>I've been using the Motorola Surfboard SB6141 on Comcast with good
>results.
>Anybody got a good suggestion on how to test a cablemodem for
>bufferbloat,
>or what you can do about it anyhow (given that firmware is usually
>pushed
>from the ISP side)?
>
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] make-wifi-fast videoconference?

2015-02-28 Thread David P. Reed
I am booked solid on Monday and Tuesday - speaking at the F2C conference in NYC 
about the big issues in evolving the wireless Internet to be less centralized 
and more scalable. I will briefly mention Cerowrt  as a good model for making a 
difference and mention the WiFi challenge. It won't be technically deep, but 
will frame why leaving it to the cellular companies as offload is far worse 
than improving the architecture to scale and interoperate well.

On Feb 28, 2015, Dave Taht  wrote:
>in trying to get make-wifi-fast off the ground, I am certainly finding
>email to be a release (for ranting & venting!) but not very productive
>for planning and figuring out various bits and pieces of what else is
>needed to be done.
>
>There is an awful lot to be done, and I am thinking that maybe pulling
>together a videoconference of everyone interested would be useful.
>These days I mostly use appear.in. I am (presently) in PDT, and
>typically awake between 8AM and 1AM.
>
>So does anyone have time next week (say, monday afternoon or evening)
>to chat more interactively?
>
>I could try to go back to operating over irc, also.
>
>--
>Dave Täht
>Let's make wifi fast, less jittery and reliable again!
>
>https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent with K-@ Mail - the evolution of emailing.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Cerowrt-devel Digest, Vol 37, Issue 24

2014-12-21 Thread David P. Reed
All microwave frequencies heat water molecules, fyi. The early ovens used a 
magnetron that was good at 2.4 GHz because it was available and cheap enough. 
But they don't radiate much. 5.8 GHz was chosen because the band's primary was 
a government band at EOL.

Yes... higher frequency bands have not been used for broadcasting. That's 
because, at lower frequencies, planetary curvature can be conquered by 
refraction near the earth's surface and reflection by the ionosphere; power 
doesn't help were we to use higher frequencies for broadcasting. But data 
communications is not broadcasting. So satellite broadcasters can use higher 
frequencies for broadcasting. And they do, because it's a lot easier to build 
directional antennas at higher frequencies. Same for radar and GPS.

Think about acoustics.  Higher frequencies from a tweeter propagate through air 
just as well as lower frequencies from subwoofers. But our ears are more 
directional antennae at the higher frequencies. Similar properties apply to EM 
waves. And low frequencies refract around corners and along the ground better.  
The steel of a car body does not couple to higher frequencies so it reradiates 
low freq sounds better than high freq ones. Hence the loud car stereo bass is 
much louder than treble when the cabin is sealed.

On Dec 21, 2014, David Lang  wrote:
>On Sat, 20 Dec 2014, David P. Reed wrote:
>
>> Neither 2.4 GHZ nor 5.8 GHz are absorbed more than other bands.
>That's an old
>> wives tale. The reason for the bands' selection is that they were
>available at
>> the time. The water absorption peak frequency is 10x higher.
>
>well, microwave ovens do work at around 2.4GHz, so there's some
>interaction with
>water at that frequency.
>
>> Don't believe what people repeat without checking. The understanding
>of radio
>> propagation by CS and EE folks is pitiful. Some even seem to think
>that RF
>> energy travels less far the higher the frequency.
>
>I agree that the RF understanding is poor, but given that it's so far
>outside
>their area of focus, that's understandable.
>
>the mistake about higher frequencies traveling less is easy to
>understand, since
>higher frequency transmistters tend to be lower power than lower
>frequencies,
>there is a correlation between frequency and distance with commonly
>available
>equipment that is easy to mistake for causation.
>
>David Lang
>
>> Please don't repeat nonsense.
>>
>> On Dec 20, 2014, Mike O'Dell  wrote:
>>> 15.9bps/Hz is unlikely to be using simple phase encoding
>>>
>>> that sounds more like 64QAM with FEC.
>>> given the chips available these days for DTV, DBS,
>>> and even LTE, that kind of processing is available
>>> off-the-shelf (relatively speaking - compared to
>>> writing your own DSP code).
>>>
>>> keep in mind that the reason the 2.4 and 5.8 ISM bands
>>> are where they are is specifically because of the ready
>>> absorption of RF at those frequencies. the propagation
>>> is *intended* to be problematic. that said, with
>>> good-enough antennas mounted with sufficient stability
>>> and sufficient power on the TX end and a good enough
>>> noise floor on the RX end, one can push a bunch of bits
>>> pretty far.
>>>
>>> Bdale Garbee (of Debian fame) had a 10GHz bent-pipe repeater
>>> up on the mountain above Colo Spgs for quite some time. X-band
>>> Gunnplexers were not hard to come by and retune for the
>>> 10GHz ham band. i believe he just FM'ed the Gunnplexer
>>> with the output of a 10Mbps ethernet chip and ran
>>> essentially pure Aloha. X-band dishes are relatively
>>> small and with just a few stations in the area he had fun.
>>>
>>>  -mo
>>> ___
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>> -- Sent from my Android device with K-@ Mail. Please excuse my
>brevity.
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Cerowrt-devel Digest, Vol 37, Issue 24

2014-12-20 Thread David P. Reed
Neither 2.4 GHz nor 5.8 GHz are absorbed more than other bands. That's an old 
wives' tale. The reason for the bands' selection is that they were available 
at the time. The water absorption peak frequency is 10x higher.

Don't believe what people repeat without checking. The understanding of radio 
propagation by CS and EE folks is pitiful. Some even seem to think that RF 
energy travels less far the higher the frequency.

Please don't repeat nonsense.

On Dec 20, 2014, Mike O'Dell  wrote:
>15.9bps/Hz is unlikely to be using simple phase encoding
>
>that sounds more like 64QAM with FEC.
>given the chips available these days for DTV, DBS,
>and even LTE, that kind of processing is available
>off-the-shelf (relatively speaking - compared to
>writing your own DSP code).
>
>keep in mind that the reason the 2.4 and 5.8 ISM bands
>are where they are is specifically because of the ready
>absorption of RF at those frequencies. the propagation
>is *intended* to be problematic. that said, with
>good-enough antennas mounted with sufficient stability
>and sufficient power on the TX end and a good enough
>noise floor on the RX end, one can push a bunch of bits
>pretty far.
>
>Bdale Garbee (of Debian fame) had a 10GHz bent-pipe repeater
>up on the mountain above Colo Spgs for quite some time. X-band
>Gunnplexers were not hard to come by and retune for the
>10GHz ham band. i believe he just FM'ed the Gunnplexer
>with the output of a 10Mbps ethernet chip and ran
>essentially pure Aloha. X-band dishes are relatively
>small and with just a few stations in the area he had fun.
>
>  -mo
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] an option for a new platform?

2014-12-13 Thread David P. Reed
Has anyone measured what the actual bottleneck in 300 Mb/s shaping is? On an 
Intel platform you can measure a running piece of code pretty accurately. I 
ask because it is not obvious a CPU needs to touch much of a frame to do 
shaping, so it seems more likely that the driver and memory management 
structures are the bottleneck.

But it is really easy to write very slow code in a machine with limited cache. 
So maybe that is it.

On a multi-core Intel machine these days it is a surprising fact that a 
single core can't use up more than about 25 percent of a socket's memory 
cycles, so to get full I/O speed you need to be running your code on 4 cores 
or more... this kind of thing can really limit achievable speed of a poorly 
threaded design.  Architectural optimization needs more than LLVM and clean 
code. You need to think about the whole software pipeline. Debian may not be 
great out of the box for this reason - it was never designed for routing 
throughput.
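
On an Intel box, one direct way to start answering the bottleneck question - 
standard perf usage; the event list is just an example:

  # watch hot kernel paths while the shaper is under load
  $ perf top -g
  # or count cache behavior on the core taking the softirqs
  $ perf stat -C 0 -e cycles,instructions,cache-misses sleep 10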

On Dec 12, 2014, Dave Taht  wrote:
>There was a review of that hardware that showed it couldn't push more
>than 600Mbit natively (without shaping). I felt that the ethernet
>driver could be improved significantly after looking it over, but
>didn't care for the 600mbit as a starting point to have to improve
>from.
>
>Not ruling it out, though! It met quite a few requirements we have.
>
>On Thu, Dec 11, 2014 at 11:33 PM, Erkki Lintunen 
>wrote:
>>
>> Hello,
>>
>> while enjoying and reading another thread from the list...
>>
>>>  Forwarded Message 
>>> Subject: Re: [Cerowrt-devel] how is everyone's uptime?
>>> Date: Thu, 11 Dec 2014 16:42:37 -0800
>>> From: Dave Taht 
>> [snip]
>>> But frankly, I would prefer for most of the chaos there to subside
>and to find
>>> a new, additional platform, to be working on before resuming work,
>>> that can do inbound shaping at up to 300mbit. And
>>> to be more openwrt compatible in whatever we do, whatever that is.
>>
>> this reminded me that another day I passed a web-page of a platform
>and
>> in the hope this has not been on the list yet passing it forward.
>>
>> 
>>
>> An interesting tidbit in the platform is the choice of firmware, I
>> think. Haven't seen any board yet with the similar choice by the
>> manufacturer. With a quick summing from the vendor part catalog, the
>> platform is sub 200 EUR (238 USD in current exchange rate) for an
>about
>> working assembly of 3x 1GbE, 4G ram, 1G flash, 802.11a/b/g/n radio...
>>
>> I can't say anything how capable the hw might be for the stated
>inbound
>> shaping performance. I have had an ALIX board from their previous
>> generation for years and its been humming nicely though I haven't
>pushed
>> it to its envelope.
>>
>> Best
>> Erkki
>> ___
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>
>
>
>--
>Dave Täht
>
>thttp://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] tinc vpn: adding dscp passthrough (priorityinherit), ecn, and fq_codel support

2014-12-04 Thread David P. Reed
n my home, avoiding their
>extinction/pawning by keeping the predators away; as fitness is
>relative. Might not work perfectly, but “good enough” would do ;)
>   To cite the russians: Dowerjai, no prowerjai, "Trust, but verify”…
>
>
>> We could also put congestion control in the network by re-creating
>admission control and requiring contractual agreements to carry traffic
>across every intermediary.  But I think that basically destroys almost
>all the value of an "inter" net.  It makes it a balkanized proprietary
>set of subnets that have dozens of reasons why you can't connect with
>anyone else, and no way to be free to connect.
>>
>>
>>
>>
>>
>> On Wednesday, December 3, 2014 2:44pm, "Dave Taht"
> said:
>>
>> > On Wed, Dec 3, 2014 at 6:17 AM, David P. Reed 
>wrote:
>> > > Tor needs this stuff very badly.
>> >
>> > Tor has many, many problematic behaviors relevant to congestion
>control
>> > in general. Let me paste a bit of private discussion I'd had on it
>in a second,
>> > but a very good paper that touched upon it all was:
>> >
>> > DefenestraTor: Throwing out Windows in Tor
>> > http://www.cypherpunks.ca/~iang/pubs/defenestrator.pdf
>> >
>> > Honestly tor needs to move to udp, and hide in all the upcoming
>> > webrtc traffic
>> >
>> >
>http://blog.mozilla.org/futurereleases/2014/10/16/test-the-new-firefox-hello-webrtc-feature-in-firefox-beta/
>> >
>> > webrtc needs some sort of non-centralized rendezvous mechanism, but
>I am REALLY
>> > happy to see calls and video stay entirely inside my network when
>they can be
>> > negotiated as such.
>> >
>> > https://plus.google.com/u/0/107942175615993706558/posts/M4xUtpCKJ4P
>> >
>> > And of course, people are busily reinventing torrent in webrtc
>without
>> > paying attention to congestion control at all.
>> >
>> > https://github.com/feross/webtorrent/issues/39
>> >
>> > Giving access to udp to javascript programmers... what could go
>wrong?
>> > :/
>> >
>> > > I do wonder whether we should focus on vpn's rather than end to
>end
>> > > encryption that does not leak secure information through from
>inside as the
>> > > plan seems to do.
>> >
>> > "plan"?
>> >
>> > I like e2e encryption. I also like overlay networks. And meshes.
>> > And working dns and service discovery. And low latency.
>> >
>> > vpns are useful abstractions for sharing an address space you
>> > may not want to share more widely.
>> >
>> > and: I've taken a lot of flack about how fq doesn't help on
>conventional
>> > vpns, and well, just came up with an unconventional vpn idea,
>> > that might have some legs here... (certainly in my case tinc
>> > as constructed already, no patches, solves hooking together the
>> > 12 networks I have around the globe, mostly)
>> >
>> > As for "leaking information", packet size and frequency is
>generally
>> > an obvious indicator of a given traffic type, some padding added or
>> > no. There is one piece of plaintext
>> > in tinc (the seqno), also. It also uses a fixed port number for
>both
>> > sides of the connection (perhaps it shouldn't)
>> >
>> > So I don't necessarily see a difference between sending a whole lot
>of
>> > varying data on one tuple
>> >
>> > 2001:db8::1 <-> 2001:db8:1::1 on port 655
>> >
>> > vs
>> >
>> > 2001:db8::1 <-> 2001:db8:1::1 port 655
>> > 2001:db8::2 <-> 2001:db8:1::1 port 655
>> > 2001:db8::3 <-> 2001:db8:1::1 port 655
>> > 2001:db8::4 <-> 2001:db8:1::1 port 655
>> > 
>> >
>> > which solves the fq problem on a vpn like tinc neatly. A security
>feature
>> > could be source specific routing where we send stuff over different
>paths
>> > from different ipv6 source addresses... and mixing up the src/dest
>ports
>> > more but that complexifies the fq portion of the algo my
>thought
>> > for an initial implementation is to just hard code the ipv6 address
>range.
>> >
>> > I think however that adding tons and tons of ipv6 addresses to a
>given
>> > interface is probably slow,
>> > and might break things like nd and/or multicast...
>> >
>> > what would be cooler would be if you could al

Re: [Cerowrt-devel] tinc vpn: adding dscp passthrough (priorityinherit), ecn, and fq_codel support

2014-12-03 Thread David P. Reed
Tor needs this stuff very badly.

I do wonder whether we should focus on VPNs rather than end-to-end encryption 
that does not leak secure information through from inside, as the plan seems 
to do.



On Dec 3, 2014, Guus Sliepen  wrote:
>On Wed, Dec 03, 2014 at 12:07:59AM -0800, Dave Taht wrote:
>
>[...]
>> https://github.com/dtaht/tinc
>> 
>> I successfully converted tinc to use sendmsg and recvmsg, acquire (at
>> least on linux) the TTL/Hoplimit and IP_TOS/IPv6_TCLASS packet
>fields,
>
>Windows does not have sendmsg()/recvmsg(), but the BSDs support it.
>
>> as well as SO_TIMESTAMPNS, and use a higher resolution internal
>clock.
>> Got passing through the dscp values to work also, but:
>>
>> A) encapsulation of ecn capable marked packets, and availability in
>> the outer header, without correct decapsulationm doesn't work well.
>>
>> The outer packet gets marked, but by default the marking doesn't make
>> it back into the inner packet when decoded.
>
>Is the kernel stripping the ECN bits provided by userspace? In the code
>in your git branch you strip the ECN bits out yourself.
>
>> So communicating somehow that a path can take ecn (and/or diffserv
>> markings) is needed between tinc daemons. I thought of perhaps
>> crafting a special icmp message marked with CE but am open to ideas
>> that would be backward compatible.
>
>PMTU probes are used to discover whether UDP works and how big the path
>MTU is, maybe it could be used to discover whether ECN works as well?
>Set one of the ECN bits on some of the PMTU probes, and if you receive
>a
>probe with that ECN bit set, also set it on the probe reply. If you
>succesfully receive a reply with ECN bits set, then you know ECN works.
>Since the remote side just echoes the contents of the probe, you could
>also put a copy of the ECN bits in the probe payload, and then you can
>detect if the ECN bits got zeroed. You can also define an OPTION_ECN in
>src/connection.h, so nodes can announce their support for ECN, but that
>should not be necessary I think.
>
>> B) I have long theorized that a lot of userspace vpns bottleneck on
>> the read and encapsulate step, and being strict FIFOs,
>> gradually accumulate delay until finally they run out of read socket
>> buffer space and start dropping packets.
>
>Well, encryption and decryption takes a lot of CPU time, but context
>switches are also bad.
>
>Tinc is treating UDP in a strictly FIFO way, but actually it does use a
>RED algorithm when tunneling over TCP. That said, it only looks at its
>own buffers to determine when to drop packets, and those only come into
>play once the kernel's TCP buffers are filled.
>
>> so I had a couple thoughts towards using multiple rx queues in the
>> vtun interface, and/or trying to read more than one packet at a time
>> (via recvmmsg) and do some level of fair queueing and queue
>management
>> (codel) inside tinc itself. I think that's
>> pretty doable without modifying the protocol any, but I'm not sure of
>> it's value until I saturate some cpu more.
>
>I'd welcome any work in this area :)
>
>> (and if you thought recvmsg was complex, look at recvmmsg)
>
>It seems someone is already working on that, see
>https://github.com/jasdeep-hundal/tinc.
>
>> D)
>>
>> the bottleneck link above is actually not tinc but the gateway, and
>as
>> the gateway reverts to codel behavior on a single encapsulated flow
>> encapsulating all the other flows, we end up with about 40ms of
>> induced delay on this test. While I have a better codel (gets below
>> 20ms latency, not deployed), *fq*_codel by identifying individual
>> flows gets the induced delay on those flows down below 5ms.
>
>But that should improve with ECN if fq_codel is configured to use that,
>right?
>
>> At one level, tinc being so nicely meshy means that the "fq" part of
>> fq_codel on the gateway will have more chance to work against the
>> multiple vpn flows it generates for all the potential vpn
>endpoints...
>>
>> but at another... lookie here! ipv6! 2^64 addresses or more to use!
>> and port space to burn! What if I could make tinc open up 1024 ports
>> per connection, and have it fq all it's flows over those? What could
>> go wrong?
>
>Right, hash the header of the original packets, and then select a port
>or address based on the hash? What about putting that hash in the flow
>label of outer packets? Any routers that would actually treat those as
>separate flows?

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] High Performance (SSH) Data Transfers using fq_codel?

2014-11-14 Thread David P. Reed
Filling intermediate buffers doesn't make the TCP congestion algorithms work.  
They just kick in when the buffers are full! And then you end up with a pile 
of packets that will be duplicated, which amplifies the pressure on buffers!

If there could be no buffering, the big file transfers would home in on the 
available capacity more quickly and waste fewer retransmits - while being 
friendly to folks sharing the bottleneck!

The HPC guys really don't understand a thing about control theory...
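
The concrete test Dave recommends below is a one-liner (server name is a 
placeholder):

  $ netperf-wrapper -H netperf.example.net rrul
  # plots upload, download and latency under simultaneous load - exactly
  # the case the buffer-tuning advice ignores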

On Nov 13, 2014, Aaron Wood  wrote:
>I have a couple friends in that crowd, and they _also_ aren't using
>shared
>lines.  So they don't worry in the slightest about congestion when
>they're
>trying to keep dedicated links fully saturated.  They're big issue with
>dropped packets is that some of the TCP congestion-control algorithms
>kick
>in on a single dropped packet:
>http://fasterdata.es.net/network-tuning/tcp-issues-explained/packet-loss/
>
>I'm thinking that some forward-error-correction would make their lives
>much, much better.
>
>-Aaron
>
>On Thu, Nov 13, 2014 at 7:11 PM, Dave Taht  wrote:
>
>> One thing the HPC crowd has missed is that in their quest for big
>> buffers for contenental distances, they hurt themselves on shorter
>> ones...
>>
>> ... and also that big buffers with FQ on them works just fine in the
>> general case.
>>
>> As always I recomend benchmarking - do a rrul test between the two
>> points, for example, with their recomendations.
>>
>>
>> On Fri, Oct 17, 2014 at 4:21 PM, Frank Horowitz 
>wrote:
>> > G’Day folks,
>> >
>> > Long time lurker. I’ve been using Cero for my home router for quite
>a
>> while now, with reasonable results (modulo bloody OSX wifi stuffola).
>> >
>> > I’m running into issues doing zfs send/receive over ssh across a
>> (mostly) internet2 backbone between Cornell (where I work) and West
>> Virginia University (where we have a collaborator on a DOE sponsored
>> project. Both ends are linux machines running fq_codel configured
>like so:
>> > tc qdisc
>> > qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows
>1024
>> quantum 1514 target 5.0ms interval 100.0ms ecn
>> >
>> > I stumbled across hpn-ssh 
>and
>> —  of particular interest to this group — their page on tuning TCP
>> parameters:
>> >
>> > 
>> >
>> > N.B. their advice to increase buffer size…
>> >
>> > I’m curious, what part (if any) of that advice survives with
>fq_codel
>> running on both ends?
>> >
>> > Any advice from the experts here would be gratefully received!
>> >
>> > (And thanks for all of your collective and individual efforts!)
>> >
>> > Cheers,
>> > Frank Horowitz
>> >
>> >
>> > ___
>> > Cerowrt-devel mailing list
>> > Cerowrt-devel@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>> >
>>
>>
>>
>> --
>> Dave Täht
>>
>> thttp://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
>> ___
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] wifi over narrow channels

2014-10-14 Thread David P. Reed
David -

1) you are right that LBT fails because propagation does not allow a 
transmitter to hear the stations that might be transmitting on the same 
channel. But LBT need not be the optimal approach for the general idea of 
packet multiplexing. A better decentralized approach is RTS/CTS. Think about 
it. Start with that idea and improve it... I'm leaving it there as a learning 
moment. You'll learn more by improving it than by having me tell you.

2) co-channel separation was hard before computing and sampling became good. It 
is no longer hard unless you handicap your thinking by thinking it is hard.

3) spread spectrum is the first non-narrowband modulation technique. It is far 
from the best.  Consider very wideband wavelet coding...

4) subdivision of space-time is wasteful in the extreme. Why try to create 
walls when the space around the walls becomes useless? The denser the walls in 
space-time and frequency the more waste...

Radio is not the sin or cos function.  There is nothing in either Maxwell or 
QED  that invokes Fourier. Nothing.

On Oct 14, 2014, David Lang  wrote:
>The problem is that you really have trouble with different transmitters
>sharing
>one channel over a wide area. By having more channels and smaller areas
>that
>each one covers you end up with better coverage overall, less
>interference and
>greater throughput.
>
>If it was really possible to have different transmitters talking at the
>same
>time on the same channel, this wouldn't be a problem, but we really
>can't in
>practice [*], so it is a problem.
>
>The key problems show up because you can have two mobile stations on
>opposite
>sides of the AP that can't hear each other, but can hear the AP just
>fine. When
>they both transmit, the AP can't listen to both at once (it's
>technically
>possible if the two signals are very close to the same strength, but if
>one is
>significantly more powerful than the other, the weaker one gets lost in
>the
>noise)
>
>So in practice, you are better off with more APs on different channels
>with the
>APs connected together than you are with fewer, faster channels
>covering a
>larger area per channel.
>
>Things like a fixed "listen before transmit" wait time just emphisise
>this
>because they mean that as the bit rate of the channel goes up, you are
>wasting
>a higher percentage of the capability when you are transmitting the
>same amount
>of data.
>
>If you wait 4ms and then transmit for 28ms, you are wasting 1/8 of the
>channel
>bandwidth, but if you can transmit the same data in 12ms, you are
>wasting 1/4 of
>the channel bandwidth.
>
>Now, shorter transmissions do mean that there is less chance that
>someone else
>who you can't hear will transmit, but having that other station on a
>different 
>channel talking to an AP that's closer to it will do even better.
>
>
>[*] In theory, spread spectrum transmission allows you to have
>different
>stations talking on the same wideband channel without interfering with
>each
>other.
>
>In practice there are a few problems with this.
>
>1. the different sets of transmissions need to have different
>transmission
>patterns (keys), and these patterns need to be distributed to all the
>stations
>using a particular key.
>
>2. not all keys are equally good, especially when combined with other
>keys in
>use, so coordination of the keys is needed.
>
>3. timing on all the different radios in the network is critical so
>that they
>all start looking at the same time. This is something that is very hard
>to get
>in practice without some external timekeeping being provided (some
>equipment
>would use the broadcast TV sync timing before it went digital for
>example)
>
>4. You still have the problem of not being able to transmit and receive
>at the
>same time.
>
>5. The RF signal gets amplified by a variable amount to make the
>resulting
>signal a constant strength before it's decoded. The RF signal from
>different
>transmitters is falling off at roughly the cube of the distance, and
>the
>decoders have 12-14 bits of resolution. If a more distant station is
>transmitting at the same time as a nearby station, this makes the
>distant staton
>unreadable because it's signal ends up being represented by too few
>bits in the
>decoder.
>
>David Lang
>
>On Thu, 9 Oct 2014, dpr...@reed.com wrote:
>
>> Wideband is far better for scaling than narrowband, though.  This may
>seem
>> counterintuitive, but narrowband systems are extremely inefficient.
>They
>> appeal to 0/1 thinking intuitively, but in actual fact the wider the
>bandwidth
>> the more sharing and the more scaling is possible (and not be
>"balkanization"
>> or "exclusive channel negotiation").
>>
>> Two Internets are far worse than a single Internet that combines
>both.
>> That's because you have more degrees of freedom in a single network
>than you
>> can in two distinct networks, by a combinatorial factor.
>>
>> The analogy holds that one wide band is far better than two disjoint
>bands in
>> terms of scaling an

Re: [Cerowrt-devel] bulk packet transmission

2014-10-10 Thread David P. Reed
I do know that. I would say that benchmarks rarely match real-world problems of 
real systems - they come from sources like academia and technical marketing 
depts. My job for the last few years has been looking at systems with dozens of 
processors across 2 and 4 sockets and multiple 10 GigE adapters.

There are few benchmarks that look like real workloads. And even smaller 
systems do very poorly compared to what is possible.  Linux is slowly getting 
better but not so much in the network area at scale.  That would take a plan 
and a rethinking, beyond incremental tweaks. My opinion ... ymmv.

On Oct 10, 2014, David Lang  wrote:
>I've been watching Linux kernel development for a long time and they
>add locks
>only when benchmarks show that a lock is causing a bottleneck. They
>don't just
>add them because they can.
>
>They do also spend a lot of time working to avoid locks.
>
>One thing that you are missing is that you are thinking of the TCP/IP
>system as
>a single thread of execution, but there's far more going on than that,
>especially when you have multiple NICs and cores and have lots of
>interrupts
>going on.
>
>Each TCP/IP stream is not a separate queue of packets in the kernel,
>instead
>the details of what threads exist is just a table of information. The
>packets
>are all put in a small number of queues to be sent out, and the
>low-level driver
>picks the next packet to send from these queues without caring about
>what TCP/IP
>stream it's from.
>
>David Lang
>
>On Fri, 10 Oct 2014, dpr...@reed.com wrote:
>
>> The best approach to dealing with "locking overhead" is to stop
>thinking that
>> if locks are good, more locking (finer grained locking) is better.
>OS
>> designers (and Linux designers in particular) are still putting in
>way too
>> much locking.  I deal with this in my day job (we support systems
>with very
>> large numbers of cpus and because of the "fine grained" locking
>obsession, the
>> parallelized capacity is limited).  If you do a thoughtful design of
>your
>> network code, you don't need lots of locking - because TCP/IP streams
>don't
>> have to interact much - they are quite independent.  But instead OS
>designers
>> spend all their time thinking about doing "one thing at a time".
>>
>> There are some really good ideas out there (e.g. RCU) but you have to
>think
>> about the big picture of networking to understand how to use them.
>I'm not
>> impressed with the folks who do the Linux networking stacks.
>>
>>
>> On Thursday, October 9, 2014 3:48pm, "Dave Taht"
> said:
>>
>>
>>
>>> I have some hope that the skb->xmit_more API could be used to make
>>> aggregating packets in wifi on an AP saner. (my vision for it was
>that
>>> the overlying qdisc would set xmit_more while it still had packets
>>> queued up for a given station and then stop and switch to the next.
>>> But the rest of the infrastructure ended up pretty closely tied to
>>> BQL)
>>>
>>> Jesper just wrote a nice piece about it also.
>>>
>http://netoptimizer.blogspot.com/2014/10/unlocked-10gbps-tx-wirespeed-smallest.html
>>>
>>> It was nice to fool around at 10GigE for a while! And
>netperf-wrapper
>>> scales to this speed also! :wow:
>>>
>>> I do worry that once sch_fq and fq_codel support is added that there
>>> will be side effects. I would really like - now that there are all
>>> these people profiling things at this level - to see profiles including
>>> those qdiscs.
>>>
>>> /me goes grumbling back to thinking about wifi.
>>>
>>> On Thu, Oct 9, 2014 at 12:40 PM, David Lang  wrote:
>>> > lwn.net has an article about a set of new patches that avoid some
>locking
>>> > overhead by transmitting multiple packets at once.
>>> >
>>> > It doesn't work for things with multiple queues (like fq_codel) in
>it's
>>> > current iteration, but it sounds like something that should be
>looked at and
>>> > watched for latency related issues.
>>> >
>>> > http://lwn.net/Articles/615238/
>>> >
>>> > David Lang
>>> > ___
>>> > Cerowrt-devel mailing list
>>> > Cerowrt-devel@lists.bufferbloat.net
>>> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>>
>>>
>>>
>>> --
>>> Dave Täht
>>>
>>> https://www.bufferbloat.net/projects/make-wifi-fast
>>> ___
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>>

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] Fixing bufferbloat: How about an open letter to the web benchmarkers?

2014-09-12 Thread David P. Reed
I have a working ping-over-http mobile browser app at alt.reed.com. Feel free
to try it and look at the underlying packet stream with wireshark. I did a
prototype of a RRUL test using WebSockets and a modified nginx websocket
module as a server that could be commanded to generate precise traffic and
server-end measurements ... it showed this can work up to a few tens of Mb/s.

It's slightly tricky and requires a good understanding of the WebSocket
protocol stack.
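
For the curious, here is a minimal sketch of the same ping-over-WebSockets
idea in Python rather than browser javascript (the echo URI is a made-up
placeholder, and the third-party "websockets" package is assumed):

import asyncio
import time
import websockets

async def ws_ping(uri, count=10):
    # Application-layer RTT over one established TCP+WebSocket connection:
    # send a tiny frame, wait for the server to echo it back, time the gap.
    async with websockets.connect(uri) as ws:
        for seq in range(count):
            t0 = time.monotonic()
            await ws.send(str(seq))
            await ws.recv()
            print(f"seq={seq} rtt={(time.monotonic() - t0) * 1000:.1f} ms")

asyncio.run(ws_ping("wss://example.net/echo"))  # hypothetical echo server

Because this measures RTT above TCP, queueing delay in a bloated buffer shows
up in it directly - which is exactly what makes it useful here.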


On Sep 12, 2014, Rick Jones  wrote:
>On 09/11/2014 06:48 PM, Rich Brown wrote:
>> Jonathan,
>>
>>> Could we make use of the existing test servers (running netperf) for
>that demonstration?  How hard is the protocol to fake in Javascript?
>>
>> Not having coded a stitch of this, I *think* it would require the
>following:
>>
>> - Web page on netperf-xxx.bufferbloat.net that served out the
>javascript (required to get around cross-domain protections within the
>browser)
>>
>> - Javascript function to connect back to that host on port 12865 and
>fake out the netserver with TCP_STREAM or TCP_MAERTS request
>>
>> - Javascript that's efficient enough to source/swallow full-rate data
>stream
>>
>> - Cloning the code from https://github.com/apenwarr/blip to make fake
>pings from TCP requests
>>
>> Anyone know more than I do about this?
>
>Not about the javascript stuff, but your high level description of the
>netperf side sounds plausible.  There are a few control messages
>netperf
>will exchange with netserver that if you want to leverage a remote
>netserver will need to be included.  You can run a netperf command with
>
>a higher debug level to see them.
>
>rick jones

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] still trying to find hardware for the next generation worth hacking on

2014-08-22 Thread David P. Reed
You missed the on-board switch, which is a major differentiator.

On Aug 22, 2014, William Katsak  wrote:
>This is a nice board, but other than the form factor, it isn’t much
>different than this Supermicro board which is readily available:
>
>http://www.supermicro.com/products/motherboard/Atom/X10/A1SRi-2758F.cfm
>
>SuperBiiz.com has it for $326.99:
>http://www.superbiiz.com/detail.php?name=MB-A1SR2F
>
>They also have a similar board in a larger MicroATX form factor.
>
>I have the Avoton equivalent of this board (everything the same except
>Avoton instead of Rangeley) and it is super nice.
>
>-Bill
>
>On Aug 21, 2014, at 11:11 PM, Dave Taht  wrote:
>
>> On Sun, Aug 17, 2014 at 12:13 PM,   wrote:
>>>
>http://www.habeyusa.com/products/fwmb-7950-rangeley-network-communication-board/
>>> looks intriguing.
>>
>> I have to say that looks very promising as a testbed vehicle. Perhaps
>> down the road a candidate for
>> a head-end solution... or a corporate edge gateway.
>>
>> I also spoke to an intel rep at linuxcon
>> that mentioned a rangeley board with 10GigE capability onboard.
>>
>> Have you contacted habeyusa?
>
>
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] still trying to find hardware for the next generation worth hacking on

2014-08-15 Thread David P. Reed
Anybody got a TI connection? The Wandboard, based on the i.MX6, is nice, but it
is not ideal for a router.

On Aug 15, 2014, Jonathan Morton  wrote:
>> one promising project is this one: https://www.turris.cz/en/
>
>That does look promising. The existing software is OpenWRT, so porting
>CeroWRT shouldn't be difficult.
>
>The P2020 CPU is a PowerPC Book E type - basically a 603e with the FPU
>ripped out, then turned into an SoC. It should have loads of
>performance,
>and enough I/O to feed those GigE ports effectively.
>
>The only real software concern should be that it's big-endian, but
>since I
>already use an old PowerBook as a firewall, that's unlikely to be a big
>hurdle. Fq_codel works well on it.
>
>- Jonathan Morton
>
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] Check out www.speedof.me - no Flash

2014-07-25 Thread David P. Reed
It's important to note that modern browsers have direct access to TCP
connections, up and down, using WebSockets from javascript threads. It is quite
feasible to drive such connections at near wire speed in both directions...
I've done it in my own experiments. A knowledgeable network testing expert
should be able to create a javascript library that can be used by any GUI... so
beauty and measurement quality can be improved by experts in the relevant
fields.

My experiments are a bit dated but if someone wants advice on how please ask. 
I'm flat out on my day job for the next month.
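
As a rough illustration of driving such a connection hard (a sketch only: the
sink URI is hypothetical, and a real test would want warm-up time and parallel
streams), the upstream direction might look like this in Python with the
"websockets" package:

import asyncio, time, websockets

async def ws_upload(uri, seconds=10, chunk=64 * 1024):
    # Push binary frames as fast as the socket accepts them and report an
    # apparent send rate. Caveat: send() returning only means the data
    # entered local buffers, so short runs overstate the true rate.
    payload = b"\x00" * chunk
    async with websockets.connect(uri) as ws:
        sent, t0 = 0, time.monotonic()
        while time.monotonic() - t0 < seconds:
            await ws.send(payload)
            sent += chunk
        elapsed = time.monotonic() - t0
        print(f"~{sent * 8 / elapsed / 1e6:.1f} Mbit/s apparent upstream")

asyncio.run(ws_upload("wss://example.net/sink"))  # hypothetical data sink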

On Jul 25, 2014, Sebastian Moeller  wrote:
>Hi Neil,
>
>
>On Jul 25, 2014, at 14:24 , Neil Davies  wrote:
>
>> Rich
>>
>> I have a deep worry over this style of single point measurement - and
>hence speed - as an appropriate measure.
>
>   But how do you propose to measure the (bottleneck) link capacity then?
>It turns out that for current CPE and CMTS/DSLAM equipment one typically
>cannot rely on good QoE out of the box, since typically these devices do
>not use their (largish) buffers wisely. Instead the current remedy is
>to take back control over the bottleneck link by shaping the actually
>sent traffic to stay below the hardware link capacity, thereby avoiding
>the consequences of the over-buffering. But to do this it is
>quite helpful to have an educated guess at what the bottleneck link's
>capacity actually is. And for that purpose a speed test seems useful.
>
>
>> We know, and have evidence, that throughput/utilisation is not a good
>proxy for the network delivering suitable quality of experience. We
>work with organisations (Telcos, large system integrators, etc.) where we
>spend a lot of time having to “undo” the consequences of “maximising
>speed”. Just like there is more to life than work, there is more to QoE
>than speed.
>>
>> For more specific comments see inline
>>
>> On 25 Jul 2014, at 13:09, Rich Brown  wrote:
>>
>>> Neil,
>>>
>>> Thanks for the note and the observations. My thoughts:
>>>
>>> 1) I note that speedof.me does seem to overstate the speed results.
>At my home, it reports 5.98mbps down, and 638kbps up, while
>betterspeedtest.sh shows 5.49/0.61 mbps. (speedtest.net gives numbers
>similar to the betterspeedtest.net script.)
>>>
>>> 2) I think we're in agreement about the peak upload rate that you
>point out is too high. Their measurement code runs in the browser. It
>seems likely that the browser pumps out a few big packets before
>getting flow control information, thus giving the impression that they
>can send at a higher rate. This comports with the obvious decay that
>ramps toward the long-term rate.
>>
> I think that it's simpler than that: it is measuring the rate at which
>it can push packets out the interface - its real-time rate is precisely
>that - which cannot be the rate seen at the far end, and can
>never exceed the limiting link. The long-term average (if it is like
>other speed testers we’ve had to look into) is being measured at the
>TCP/IP SDU level by taking the difference in time between the first
>and last timestamps of the data stream and dividing that into the total
>data sent. Their “over-estimate” is because there are packets buffered
>in the CPE that have left the machine but not arrived at the far end.
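
To put that over-estimate in illustrative numbers (made-up values, not
measurements of speedof.me):

sent_bytes = 4_000_000     # bytes the client handed to the socket
buffered   = 256_000       # assumed still queued in the CPE at "end"
elapsed_s  = 5.0

reported  = sent_bytes * 8 / elapsed_s / 1e6
delivered = (sent_bytes - buffered) * 8 / elapsed_s / 1e6
print(f"reported {reported:.2f} Mbit/s vs delivered {delivered:.2f} Mbit/s")
# -> reported 6.40 Mbit/s vs delivered 5.99 Mbit/s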
>
>   Testing from an openwrt router located at a high-symmetric-bandwidth
>location shows that speedof.me does not scale higher than ~ 130 Mbps
>server to client and ~15Mbps client to server (on the same connection I
>can get 130Mbps S2C and ~80Mbps C2S, so the asymmetry in the speedof.me
>results is not caused by my local environment).
>   @Rich and Dave, this probably means that for the upper end of fiber
>and cable and VDSL connections speed of.me is not going to be a
>reliable speed measure… Side note: www.speedtest.net shows ~100Mbps S2C
>and ~100Mbps C2S, so might be better suited to high-upload links...
>
>>
>>>
>>> 3) But that long-term speed should be at or below the theoretical
>long-term rate, not above it.
>>
>> Agreed, but in this case knowing the sync rate already defines that
>maximum.
>
>   I fully agree, but for ADSL the sync rate also contains a lot of
>encapsulation, so the maximum achievable TCP rate is at best ~90% of
>link rate. Note that for cerowrt’s SQM system the link rate is exactly the
>right number to start out with, as that system can take the
>encapsulation into account. But even then it is somewhat unintuitive to
>deduce the expected good-put from the link rate.
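
To see roughly where that ~90% comes from, an illustrative accounting in
Python (it ignores AAL5 trailer/padding and any PPPoE overhead, which push the
number lower still):

sync_kbps = 8128           # example ADSL2+ downstream sync rate
atm_eff   = 48 / 53        # ATM carries 48 payload bytes per 53-byte cell
tcp_eff   = 1460 / 1500    # TCP payload per 1500-byte IP packet
goodput   = sync_kbps * atm_eff * tcp_eff
print(f"{goodput:.0f} kbit/s, ~{goodput / sync_kbps:.0%} of sync")
# -> 7165 kbit/s, ~88% of sync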
>
>>
>>>
>>> Two experiments for you to try:
>>>
>>> a) What does betterspeedtest.sh show? (It's in the latest CeroWrt,
>in /usr/lib/CeroWrtScripts, or get it from github:
>https://github.com/richb-hanover/CeroWrtScripts )
>>>
>>> b) What does www.speedtest.net show?
>>>
>>> I will add your question (about the inaccuracy) to the note that I
>want to send out to speedof.me this weekend. I will also ask that they
>include min/max latency measurements to their test, and an o

Re: [Cerowrt-devel] Check out www.speedof.me - no Flash

2014-07-20 Thread David P. Reed
Include Doc in the discussion you have... need his email address?

On Jul 20, 2014, Rich Brown  wrote:
>Doc Searls
>(http://blogs.law.harvard.edu/doc/2014/07/20/the-cliff-peronal-clouds-need-to-climb/)
>mentioned in passing that he uses a new speed test website. I checked
>it out, and it was very cool…
>
>www.speedof.me is an all-HTML5 website that seems to make accurate
>measurements of the up and download speeds of your internet connection.
>It’s also very attractive, and the real-time plots of the speed show
>interesting info. (screen shot at:
>http://richb-hanover.com/speedof-me/)
>
>Now if we could get them to a) allow longer/bigger tests to circumvent
>PowerBoost, and b) include a latency measurement so people could point
>out their bufferbloated equipment.
>
>I'm going to send them a note. Anything else I should add?
>
>Rich
>
>
>
>
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] viability of the data center in the internet of the future

2014-06-28 Thread David P. Reed
I hope it is obvious I am in violent agreement. 

The folks who think a centralized structure is more efficient or more practical 
just have not thought it through.

The opposite is true. Sadly, people's intuitions are trained to ignore evidence
and sound argument.

So we have a huge population of engineers who go along without thinking because 
they honestly think centralized systems are better for some important reason 
they never question.

This means that the non engineering public has no chance at understanding.

Whenever I have looked at why centralized designs are 'needed' it has turned 
out to be the felt need for 'control' of something by one small group or 
individual.

Sometimes it is the designer. Shame on him/her. 

Sometimes it is the builder. Ditto.

Sometimes it is the operator. Do we need one operator?

Sometimes it is the owner. Don't the users own their uses and purposes?

Sometimes it is the fearful. I sympathize. But they don't really want to cede
collective control to a small group they can't trust or even understand. Or
maybe they do...

Sometimes it is the wannabe sovereign. 

The weakness of the argument is that control need not be centralized. In fact 
centralized control is inefficient and unnecessary.

I've devoted much of my work to that last sentence.

For example... In Croquet we (me and 3 others) demonstrated that it's pretty
easy to build a real-time shared multimedia virtual world that works without a
single central server. It really worked, and scaled linearly, with users adding
their own computer when they entered the world and removing it when they got
disconnected. (Just pulling the plug was fine.)

Same with decentralized wireless ... no need for centralized spectrum 
allocation... linear growth of capacity with transceiver participation coming 
from the actual physics of the real propagation environment.

Equating centralized control with efficiency or necessary management is a false 
intuition.

Always be skeptical of the claim that centralized control is good. Cui bono?

On Jun 28, 2014, Dave Taht  wrote:
>I didn't care for my name in the subject line in the first place,
>although it did inspire me to do some creative venting elsewhere, and
>now here. And this is still way off topic for the bloat list...
>
>One of the points in the wired article that kicked this thread off was
>this picture of what the internet is starting to look like:
>
>http://www.wired.com/wp-content/uploads/2014/06/net_neutral.jpg.jpeg
>
>I don't want it to look like that. I worked pretty hard to defuse the
>"fast vs slow" lane debate re peering because it was so inaccurate,
>and it does look like it has died down somewhat, but
>that doesn't mean I like the concentration of services that is going
>on.
>
>I want the "backbone" to extend all the way to the edge.
>
>I want the edge to be all connected together, so in the unlikely event
>comcast goes out of business tomorrow, I can get re-routed 1 hop out
>from my house through verizon, or joe's mom and pop fiber shop, or
>wherever. I want a network that can survive multiple backhoe events,
>katrinas, and nuclear wars, all at the same time. I'd like to be able
>to get my own email,
>and do my own phone and videoconferencing calls with nobody in the
>middle, not even for call setup, and be able to host my own my own
>services on my own hardware, with some level of hope that anything
>secret or proprietary stays within my premise. I want a static ip
>address range, and
>control over my own dns.
>
>I don't mind at all sharing some storage for the inevitable
>advertising if the cdn's co-located inside my business are also
>caching useful bits of javascript, etc, just so I can save on latency
>on wiping the resulting cruft from my eyeballs. I want useful
>applications, running, directly, on my own devices, with a minimum
>amount of connectivity to the outside world required to run them. I
>want the 83 items in my netflix queue already downloaded, overnight,
>so I can pick and choose what to see without ever having a "Buffering"
>event. I want my own copy of wikipedia, and a search engine that
>doesn't share everything you are looking for with the universe.
>
>I want the legal protections, well established for things inside your
>home, that are clearly not established in data centers.
>
>I'd like it if the software we had was robust, reliable, and secure
>enough to do that. I'd like it if it were easy to make offsite
>backups, as well as mirror services with friends and co-authors.
>
>And I'd like my servers to run on a couple watts, at most, and not
>require special heating, or cooling.
>
>And I'd like (another) beer and some popcorn. Tonight's

Re: [Cerowrt-devel] [Bloat] Dave Täht quoted in the ACLU blog

2014-06-27 Thread David P. Reed
Maybe I am misunderstanding something... it just took my MacBook Pro doing an
rsync to copy a TB of data from a small NAS at work yesterday, sustaining about
700 Mb/sec on a GigE office network for hours.

I had to do that in our Santa Clara office rather than from home outside
Boston, which is where I work 90% of the time.

That's one little computer and one user...

What does my MacBook Pro draw doing that? 80 Watts?
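
Assuming that ~700 Mb/sec reading, the arithmetic is consistent with the
"for hours" part:

tb_bytes = 1e12            # one terabyte
rate_bps = 700e6           # ~700 Mbit/s sustained on GigE
print(f"{tb_bytes * 8 / rate_bps / 3600:.1f} hours")  # -> 3.2 hours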

On Jun 27, 2014, David Lang  wrote:
>On Tue, 24 Jun 2014, Michael Richardson wrote:
>
>> Rick Jones  wrote:
>>> Perhaps, but where does having gigabit fibre to a business imply
>the business
>>> has the space, power, and cooling to host all the servers it
>might need/wish
>>> to have?
>>
>> That's a secondary decision.
>> Given roof space, solar panels and/or snow outside, maybe the answer is
>> that I regularly have 2 out of 3 of those available in a decentralized way.
>
>given the amount of processing capacity that you can get today in a passively
>cooled system, you can do quite a bit of serving from a small amount of space
>and power.
>
>The days when it took rooms of Sun boxes to saturate a Gb line are long gone;
>you can do that with just a handful of machines.
>
>David Lang
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BQL, txqueue lengths and the internet of things

2014-06-11 Thread David P. Reed
Maybe you can do a quick blog howto? I'd bet the same could be done for the
Raspberry Pi, and perhaps my other toy, the Wandboard, which has a GigE adapter
and SCSI, making it a nice iSCSI target or NFS server.
De-bloating the world... one step at a time.
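
A quick bit of arithmetic shows why the BQL results quoted below matter so
much at 100mbit (illustrative Python; worst-case full queues of 1500-byte
packets assumed):

rate_bps = 100e6                       # 100mbit interface
mtu = 1500                             # bytes per full-size packet
for label, pkts in [("pfifo, txqueuelen 1000", 1000),
                    ("pfifo, txqueuelen 100", 100),
                    ("BQL-limited tx ring", 2)]:
    delay_ms = pkts * mtu * 8 / rate_bps * 1000
    print(f"{label:24s} up to {delay_ms:7.2f} ms of queue delay")
# -> pfifo, txqueuelen 1000   up to  120.00 ms of queue delay
# -> pfifo, txqueuelen 100    up to   12.00 ms of queue delay
# -> BQL-limited tx ring      up to    0.24 ms of queue delay

Two packets' worth of queue is the difference between sub-millisecond and
triple-digit-millisecond worst cases.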

On Jun 11, 2014, Dave Taht  wrote:
>The bloat problem and its solutions are not limited to routers;
>hosts need fixing too.
>
>Nearly every low-end board I've seen out there forgoes a GigE ethernet
>interface in favor of a lower-power, lower-cost 100mbit interface.
>
>No distro I've seen modifies the default pfifo txqueuelen from the
>current 1000 packet default down to a more reasonable 100 packet
>default in that case. And, while many ethernet devices in this
>category are hooked up via usb (and currently hard to add BQL support
>to), some are not, and byte queue limit support can be easily added to
>those.
>
>Sadly byte queue limits (BQL) is only implemented on a bunch of top
>end ethernet drivers. (about 10, last I looked)
>
>I needed a break from big problems, so a couple late nights later, I
>have a very small patch adding support for BQL to the beaglebone
>black:
>
>http://snapon.lab.bufferbloat.net/~d/0001-Add-BQL-support-to-cpsw-beaglebone-driver.patch
>
>And the results were quite pleasing at 100mbit. BQL holds things down
>to two full size packets in the tx ring and we see an enormous
>improvement in bidirectional throughput, jitter, and latency.
>
>http://snapon.lab.bufferbloat.net/~d/beagle_bql/bql_makes_a_difference.png
>http://snapon.lab.bufferbloat.net/~d/beagle_bql/beaglebonewins.png
>
>The default linux behavior ( pfifo fast, txqueue 1000 ) prior to this
>patch looked pretty awful:
>
>http://snapon.lab.bufferbloat.net/~d/beagle_nobql/pfifo_nobql_tsq3028txqueue1000.svg
>
>and went to looking like this:
>
>http://snapon.lab.bufferbloat.net/~d/beagle_bql/pfifo_bql_tsq3028txqueue1000.svg
>
>And adding the new fq scheduler looked like this:
>
>http://snapon.lab.bufferbloat.net/~d/beagle_bql/fq_bql_tsq3028.svg
>
>(fq_codel was similar)
>
>The fact that we don't achieve full upload throughput on this last
>test is probably
>due to having a tail dropping switch in the way, and/or some dma
>dequeuing
>cleanup conflicts between the low level transmit and receive queues on
>this device (they share an interrupt AND use napi which seems
>puzzling).
>
>But any day I can get a 4-10x improvement in latency and throughput is
>a good day. One IoT device down, thousands to go. It would be nice if
>the chipmakers were incorporating bql into boxes destined for the
>internet of things.
>
>-- 
>Dave Täht
>___
>Cerowrt-devel mailing list
>Cerowrt-devel@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-29 Thread David P. Reed
Good points...

On May 29, 2014, Michael Richardson  wrote:
>
>David P. Reed  wrote:
>> ECN-style signaling has the right properties ... just like TTL it can
>> provide
>
>How would you send these signals?
>
>> A Bloom style filter can remember flow statistics for both of these
>local
>> policies. A great use for the memory no longer misapplied to
>> buffering
>
>Well.
>
>On the higher speed dataflow equipment, the buffer is general purpose
>memory,
>so reuse like this is particularly possible.
>
>On routers built around general purpose architectures, the limiting
>factor
>in performance is often memory throughput; adding memory rarely
>increases
>total throughput. Packet I/O is generally quite sequential and so
>makes
>good use of wide memory data paths and multiple accesses per address
>cycle.
>Updating of tables such as a Bloom filter or any other hash has a big
>impact
>due to the RMW and random access nature.
>
>All I'm saying is that quantity of memory is seldom the problem, but
>access
>to it, is.
>
>I do like the entire idea; it seems that it has to be implemented at
>the places where the flows converge, which is often in the DSL line card, or
>CMTS...
>
>--
>]   Never tell me the odds! | ipv6 mesh
>networks [
>]   Michael Richardson, Sandelman Software Works| network
>architect  [
>] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on
>rails[

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-29 Thread David P. Reed
ECN-style signaling has the right properties ... just like TTL it can provide
valid and current sampling of the packet's environment as it travels. The
idea is to sample what is happening at a bottleneck for the packet's flow.
The bottleneck is the link with the most likelihood of a collision from flows
sharing that link.

A control-theoretic estimator of recent collision likelihood is easy to do at
each queue. All active flows would receive that signal, with the busiest ones
getting it most quickly. Also it is reasonable to count all potentially
colliding flows at all outbound queues, and report that.

The estimator can then provide the signal that each flow responds to.

The problem of "defectors" is best dealt with by punishment... An aggressive
packet-drop policy that makes causing congestion reduce the offender's
throughput and increase its latency is the best kind of answer. Since the
router can remember recent flow behavior, it can penalize recently
misbehaving flows.

A Bloom style filter can remember flow statistics for both of these local 
policies. A great use for the memory no longer misapplied to buffering

Simple?
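
Simple enough to sketch. A toy version of that Bloom-style flow memory
(illustrative Python; the table size, hash choice, and halving decay are all
arbitrary assumptions):

import hashlib

SLOTS, HASHES = 4096, 3

class FlowMemory:
    """Counting Bloom filter: approximate per-flow drop counts in fixed space."""
    def __init__(self):
        self.counts = [0] * SLOTS

    def _slots(self, flow):
        # Derive HASHES independent slot indices from the flow 5-tuple.
        h = hashlib.sha256(repr(flow).encode()).digest()
        return [int.from_bytes(h[4 * i:4 * i + 4], "big") % SLOTS
                for i in range(HASHES)]

    def record_drop(self, flow):
        for s in self._slots(flow):
            self.counts[s] += 1

    def score(self, flow):
        # The minimum over the hashed slots can overestimate (collisions)
        # but never underestimates the flow's true drop count.
        return min(self.counts[s] for s in self._slots(flow))

    def decay(self):
        # Halve everything periodically so only recent behavior counts.
        self.counts = [c // 2 for c in self.counts]

mem = FlowMemory()
flow = ("10.0.0.2", "192.0.2.1", 5555, 443, "tcp")
mem.record_drop(flow)
print(mem.score(flow))  # -> 1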

On May 28, 2014, David Lang  wrote:
>On Wed, 28 May 2014, dpr...@reed.com wrote:
>
>> I did not mean that "pacing".  Sorry I used a generic term.  I meant
>what my 
>> longer description described - a specific mechanism for reducing
>bunching that 
>> is essentially "cooperative" among all active flows through a
>bottlenecked 
>> link.  That's part of a "closed loop" control system driving each TCP
>endpoint 
>> into a cooperative mode.
>
>how do you think we can get feedback from the bottleneck node to all
>the 
>different senders?
>
>what happens to the ones who try to play nice if one doesn't, including what
>happens if one isn't just ignorant of the new cooperative mode, but actively
>tries to cheat? (as I understand it, this is the fatal flaw in many of the
>past buffering improvement proposals)
>
>While the in-house router is the first bottleneck that a user's traffic
>hits, the bigger problems happen when the bottleneck is in the peering
>between ISPs, many hops away from any sender, with many different senders
>competing for the available bandwidth.
>
>This is where the new buffering approaches win. If the traffic is below the
>congestion level, they add very close to zero overhead, but when congestion
>happens, they manage the resulting buffers in a way that works better for
>people (allowing short, fast connections to be fast with only a small impact
>on very long connections)
>
>David Lang
>
>> The thing you call "pacing" is something quite different.  It is
>disconnected 
>> from the TCP control loops involved, which basically means it is
>flying blind. 
>> Introducing that kind of "pacing" almost certainly reduces
>throughput, because 
>> it *delays* packets.
>> 
>> The thing I called "pacing" is in no version of Linux that I know of.
> Give it 
>> a different name: "anti-bunching cooperation" or "timing phase
>management for 
>> congestion reduction". Rather than *delaying* packets, it tries to
>get packets 
>> to avoid bunching only when reducing window size, and doing so by
>tightening 
>> the control loop so that the sender transmits as *soon* as it can,
>not by 
>> delaying sending after the sender dallies around not sending when it
>can.
>> 
>> 
>> 
>> 
>> 
>>
>>
>> On Tuesday, May 27, 2014 11:23am, "Jim Gettys" 
>said:
>>
>>
>>
>>
>>
>>
>>
>> On Sun, May 25, 2014 at 4:00 PM, 
><dpr...@reed.com> wrote:
>>
>> Not that it is directly relevant, but there is no essential reason to
>require 50 ms. of buffering.  That might be true of some particular
>QOS-related router algorithm.  50 ms. is about all one can tolerate in
>any router between source and destination for today's networks - an
>upper-bound rather than a minimum.
>> 
>> The optimum buffer state for throughput is 1-2 packets worth - in
>other words, if we have an MTU of 1500, 1500 - 3000 bytes. Only the
>bottleneck buffer (the input queue to the lowest speed link along the
>path) should have this much actually buffered. Buffering more than this
>increases end-to-end latency beyond its optimal state.  Increased
>end-to-end latency reduces the effectiveness of control loops, creating
>more congestion.
>> 
>> The rationale for having 50 ms. of buffering is probably to avoid
>disruption of bursty mixed flows where the bursts might persist for 50
>ms. and then die. One reason for this is that source nodes run
>operating systems that tend to release packets in bursts. That's a
>whole other discussion - in an ideal world, source nodes would avoid
>bursty packet releases by letting the control by the receiver window be
>"tight" timing-wise.  That is, to transmit a packet immediately at the
>instant an ACK arrives increasing the window.  This would pace the flow
>- current OS's tend (due to scheduling mismatches) to send bursts of
>packets, "catching up" on sending that 

Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-26 Thread David P. Reed
Codel and PIE are excellent first steps... but I don't think they are the best
eventual approach. I want to see them deployed ASAP in CMTSes and server
load-balancing networks... it would be a disaster not to deploy the far better
option we have today immediately at the point of most leverage. The best is the
enemy of the good.

But the community needs to learn once and for all that throughput and latency
do not trade off. We can in principle get far better latency while maintaining
high throughput, and we need to start thinking about that. That means that
the framing of the issue as AQM is counterproductive.

On May 26, 2014, Mikael Abrahamsson  wrote:
>On Mon, 26 May 2014, dpr...@reed.com wrote:
>
>> I would look to queue minimization rather than "queue management"
>(which 
>> implied queues are often long) as a goal, and think harder about the 
>> end-to-end problem of minimizing total end-to-end queueing delay
>while 
>> maximizing throughput.
>
>As far as I can tell, this is exactly what CODEL and PIE tries to do.
>They 
>try to find a decent tradeoff between having queues to make sure the
>pipe 
>is filled, and not making these queues big enough to seriously affect 
>interactive performance.
>
>The latter part looks like what LEDBAT does?
>
>
>Or are you thinking about something else?

-- Sent from my Android device with K-@ Mail. Please excuse my brevity.___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel

