Re: [Cerowrt-devel] Random thought - reactions?

2017-12-17 Thread dpreed
Good point about separating concerns. I would suggest that home-router-to-POP 
encryption would satisfy the first. End-to-end encryption can be done at the 
endpoints on top of that, as it should be.

The home-router-to-POP link need not be tappable for the NSA to be able to spy; 
that encryption is not end to end.

Sent from Nine

From: David Lang 
Sent: Friday, December 15, 2017 6:14 PM
To: Joel Wirāmu Pauling
Cc: David Reed; cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] Random thought - reactions?

There are two different issues here. 

1. the last mile ISP plays games with the traffic for their own benefit (and 
their competitors' detriment) 

2. the government wants to spy on everybody 

It's possible for the VPN tunnel providers to solve problem #1 without solving 
problem #2 

k 
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Random thought - reactions?

2017-12-15 Thread dpreed

Thanks for this. I hadn't seen it yet.
 
On Friday, December 15, 2017 2:32pm, "tapper"  said:



> Motherboard & VICE Are Building a Community Internet Network
> https://motherboard.vice.com/en_us/article/j5djd7/motherboard-and-vice-are-building-a-community-internet-network-to-protect-net-neutrality
> It seems that people are all thinking the same thing, but coming up with
> different things!


I'm all for what Motherboard and VICE are contemplating. It's a great option, 
and may create an interesting opportunity for wireless mobile, too. But that's 
far more complex to fund and maintain than constructing an overlay over an 
already subscribable infrastructure. I wish them well, and I hope that the key 
idea of maximizing interoperability of all functions (including paying for 
upstream capacity) will be front and center in their minds. Balkanization of 
the subnets of the public Internet is a big worry - boundaries will destroy the 
Internet as effectively as content selectivity and content-based rate limiting 
will.
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


[Cerowrt-devel] Random thought - reactions?

2017-12-15 Thread dpreed

The disaster in the FCC's move to reverse the Open Internet Order will probably 
continue.
 
As some of you may know, but most probably don't, I have a somewhat nuanced 
view of the best way to preserve what is called network neutrality. That's 
because I have a precise definition of what the Internet architecture is based 
on. Essentially, access providers (or for that matter anyone who stands between 
one part of the Internet and another) should forward packets as specified in 
the IPv4 or IPv6 header, with best efforts. In particular, that means: meet the 
protocol specification of the IP layer, and base routing, queueing, and 
discarding only on the information contained in that header. "Best efforts" 
does not mean queueing or discarding packets selectively based on addresses or 
protocol. However, ToS can be used.
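
As a very rough sketch of what "base routing, queueing, and discarding only on 
the information in the IP header" means in practice (Python; the field names, 
queue labels, and route lookup are illustrative assumptions, not any real 
router's API):

# Sketch: a "best efforts" forwarder whose routing/queueing decisions depend
# only on IP-header fields, never on the payload. Illustrative only.

from dataclasses import dataclass

@dataclass
class IPHeader:
    src: str      # source address
    dst: str      # destination address
    proto: int    # protocol number (6 = TCP, 17 = UDP, ...)
    tos: int      # ToS/DSCP byte - usable, per the definition above

def route_lookup(dst: str) -> str:
    # Placeholder for a longest-prefix-match routing table.
    return "upstream"

def forwarding_decision(hdr: IPHeader, payload: bytes) -> dict:
    next_hop = route_lookup(hdr.dst)               # forwarding keyed on the header's dst
    # Queue selection may honor ToS/DSCP, but queueing and discarding must not
    # single out particular addresses or protocols, and 'payload' is deliberately
    # never examined - inspecting it (DPI) is exactly what the definition excludes.
    queue = "low_delay" if hdr.tos & 0x10 else "best_effort"
    return {"next_hop": next_hop, "queue": queue}

print(forwarding_decision(IPHeader("192.0.2.1", "198.51.100.7", 6, 0x10), b"..."))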
 
It turns out that the Open Internet Order pretty much matched that definition 
in effect.
 
But we are about to enter a new age, where arbitrary content inspection, 
selective queueing, and modification are allowed at the access provider's 
switching fabric, based on any information in the packet. Also, data collection 
and archiving of content information (e.g. wiretapping) is likely to be OK as 
well, as long as the data is "protected" and there is a contract with the 
customer that sort of discloses the potential of such collection.
 
Companies like Sandvine, Ellacoya, Phorm, NebuAd and more modern instantiations 
will be ramping up production of "Deep Packet Inspection" gear that can be 
customized and deployed by access providers. (10-15 years ago they ramped up to 
sell exactly this capability to access providers).
 
I have never viewed the FCC rulemaking approach as the right way for the 
Internet to deal with this attack by one piece of the transport network on the 
integrity of the Internet architecture as a whole. However, it was a very 
practical solution until now.
 
So I've been thinking hard about this for the last 15 years.
 
The best and most open Internet we had for end users was available when the 
Internet was "dialup". That includes modems, ISDN digital, and some DSL 
connectivity to non-telco POPs. There was competition that meant that screwing 
with traffic, if detected, could be dealt with by switching what were then 
called ISPs - owners of POPs. This died when Cable and Telco monopolies 
eliminated the POPs, and made it impossible to decide where to connect the 
"last mile" to the Internet.
 
So can we recreate "dialup"?  Well, I think we can. We have the technical 
ingredients. The key model here is IPv6 "tunnel brokers" (I don't mean the 
specific ones we have today, which are undercapitalized and not widely 
dispersed). Today's Home Routers (minus their embedded WiFi access points) 
could be the equivalent of ISDN modems.
 
What we need is to rethink the way we transport IP packets, so that they are 
not visible or corruptible by the access provider, just as they were not 
visible or corruptible by the phone company during the "dialup" era.
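
As a very rough sketch of the encapsulation direction (Python, assuming the 
third-party "cryptography" package; the framing, key handling, and addresses 
are illustrative, not a proposed wire format): the home router seals each IP 
packet with an AEAD and ships it inside a UDP datagram to the POP, so the 
access provider sees only opaque bytes in transit.

# Sketch: home router -> POP encapsulation of one IP packet.
# Assumes the third-party 'cryptography' package; the framing is illustrative.

import os
import socket
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

KEY = ChaCha20Poly1305.generate_key()     # agreed out of band with the POP
POP_ADDR = ("192.0.2.1", 4789)            # hypothetical POP endpoint

aead = ChaCha20Poly1305(KEY)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_to_pop(ip_packet: bytes, seq: int) -> None:
    # 96-bit nonce: 4 random bytes plus a 64-bit sequence number
    # (a nonce must never repeat for a given key).
    nonce = os.urandom(4) + struct.pack("!Q", seq)
    sealed = aead.encrypt(nonce, ip_packet, None)   # ciphertext + 16-byte tag
    # The access provider sees only a UDP datagram to the POP carrying
    # uniformly random-looking bytes; the inner headers and payload are hidden.
    sock.sendto(nonce + sealed, POP_ADDR)

send_to_pop(b"\x45" + b"\x00" * 19, seq=1)   # toy 20-byte "IPv4 header"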
 
I don't think I am the first to think of this. But the CeroWRT folks are a 
great resource for one end of this, if there were companies willing to invest 
in creating the POPs. I know of some folks who might want to capitalize the 
latter, if there would be a return on investment.
 
Under the Open Internet Order, there was no meaningful potential of a return on 
investment. Now there is.
 
I think the missing piece is a "stealth" approach to carrying packets over the 
access provider's link that cannot be practically disrupted by DPI gear, even 
very high speed gear with good computing power in it. That involves encryption 
and sort-of-steganography. Tor can't solve the problem, and is not really 
needed, anyway.
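
On the sort-of-steganography side, a toy sketch of one framing idea (my own 
illustrative framing, not a proposal): pad the already-encrypted tunnel payload 
out to fixed-size "cells" so that packet sizes carry as little information as 
possible; with AEAD output, every byte already looks uniformly random.

# Sketch: pad AEAD-sealed tunnel payloads to fixed-size "cells" so that packet
# sizes (and, with constant pacing, timing) leak as little as possible to DPI
# gear. Toy framing for illustration, not a proposed standard.

import os
import struct

CELL = 1200  # bytes of payload per cell; chosen to fit common path MTUs

def to_cells(sealed):
    cells = []
    # 2-byte length prefix so the POP can strip the padding again.
    framed = struct.pack("!H", len(sealed)) + sealed
    for off in range(0, len(framed), CELL):
        chunk = framed[off:off + CELL]
        pad = os.urandom(CELL - len(chunk))   # random padding, not zeros
        cells.append(chunk + pad)
    return cells

print([len(c) for c in to_cells(os.urandom(1700))])   # -> [1200, 1200]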
 
Anyway, I have some protocol ideas for transporting arbitrary IPv6 and IPv4 
packets to POPs, and some ideas for how to evolve POPs in this novel context.
 
I'm interested in thoughts by the CeroWRT developers. Not just technical 
thoughts, but practical ones. And especially "services" that such POP operators 
could offer that would allow them to charge a bit of cost/profit, on top of the 
basic access provider services that will be needed to reach them.
 
BTW, the same applies to cellular, where I think the problem of breaking the 
Internet architecture will be a lot worse. We need to make cellular Internet 
access more like "dialup".___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] DC behaviors today

2017-12-13 Thread dpreed

Just to be clear, I have built and operated a whole range of network platforms, 
as well as diagnosing problems and planning deployments of systems that include 
digital packet delivery in real contexts where cost and performance matter, for 
nearly 40 years now. So this isn't only some kind of radical opinion, but 
hard-won knowledge across my entire career. I also have a very strong 
theoretical background in queueing theory and control theory -- enough to teach 
a graduate seminar, anyway.
That said, there are lots of folks out there who have opinions different than 
mine. But far too many (such as those who think big buffers are "good", who 
brought us bufferbloat) are not aware of how networks are really used or the 
practical effects of their poor models of usage.
 
If it comforts you to think that I am just stating an "opinion", which must be 
wrong because it is not the "conventional wisdom" in the circles where you 
travel, fine. You are entitled to dismiss any ideas you don't like. But I would 
suggest you get data about your assumptions.
 
I don't know if I'm being trolled, but a couple of comments on the recent 
comments:
 
1. Statistical multiplexing viewed as averaging/smoothing is, in my personal 
opinion and experience measuring real network behavior, a description of a 
theoretical phenomenon that is not real (e.g. "consider a spherical cow"), 
however amenable it may be to theoretical analysis. Such theoretical analysis 
can make some gross estimates, but it breaks down quickly. The same thing is 
true of common economic theory that models practical markets with linear models 
(linear systems of differential equations are common) and gaussian probability 
distributions (gaussians are easily analyzed, but wrong; you can read the 
popular books by Nassim Taleb for an entertaining and enlightening deeper 
understanding of the economic problems with such modeling).
 
One of the features well observed in real measurements of real systems is that 
packet flows are "fractal", which means that there is a self-similarity of rate 
variability at all time scales from micro to macro. As you look at smaller and 
smaller time scales, or larger and larger time scales, the packet request 
density per unit time never smooths out due to "averaging over sources". That 
is, there's no practical "statistical multiplexing" effect. There's also 
significant correlation among many packet arrivals - assuming they are 
statistically independent (which is required for the "law of large numbers" to 
apply) is often far from the real situation - flows that are assumed to be 
independent are usually strongly coupled.
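
For anyone who wants to see this effect for themselves, here is a rough 
simulation sketch (Python + numpy; the source count, tail index, and time 
scales are arbitrary illustrative choices): aggregate many on/off sources with 
heavy-tailed period lengths and compare how the burstiness decays with 
averaging scale against a Poisson stream of the same mean rate.

# Rough simulation sketch: aggregate heavy-tailed on/off sources and compare
# how burstiness decays with time scale against a Poisson stream of the same
# mean. Parameters are arbitrary choices, just to illustrate the effect.

import numpy as np

rng = np.random.default_rng(1)
T = 200_000          # number of unit time slots
N = 50               # number of sources
ALPHA = 1.5          # Pareto tail index < 2 => infinite variance => self-similar aggregate

def pareto_len():
    # Pareto-distributed period length, minimum 1 slot.
    return int(np.ceil(rng.pareto(ALPHA) + 1.0))

def onoff_source(T):
    x, t, on = np.zeros(T), 0, rng.random() < 0.5
    while t < T:
        d = min(pareto_len(), T - t)
        if on:
            x[t:t + d] = 1.0       # emits 1 unit per slot while "on"
        t, on = t + d, not on
    return x

agg = sum(onoff_source(T) for _ in range(N))          # heavy-tailed aggregate
pois = rng.poisson(agg.mean(), size=T).astype(float)  # memoryless comparison

def cv_at_scale(x, m):
    # Coefficient of variation of the rate after averaging over blocks of m slots.
    blocks = x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1)
    return blocks.std() / blocks.mean()

for m in (1, 10, 100, 1000):
    print(f"m={m:5d}  heavy-tailed CV={cv_at_scale(agg, m):.3f}  "
          f"Poisson CV={cv_at_scale(pois, m):.3f}")
# The Poisson CV falls roughly like 1/sqrt(m); the heavy-tailed aggregate's CV
# falls much more slowly - the "averaging over sources/time" never really shows up.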
 
The one exception where flows average out at a constant rate is when there is a 
"bottleneck". Then, there being no more capacity, the constant rate is forced, 
not by statistical averaging but by a very different process. One that is 
almost never desirable.
 
This is just what is observed in case after case.  Designers may imagine that 
their networks have "smooth averaging" properties. There's a strong thread in 
networking literature that makes this pretty-much-always-false assumption the 
basis of protocol designs, thinking about "Quality of Service" and other sorts 
of things. You can teach graduate students about a reality that does not exist, 
and get papers accepted in conferences where the reviewers have been trained in 
the same tradition of unreal assumptions.
 
2. I work every day with "datacenter" networking and distributed systems on 10 
GigE and faster Ethernet fabrics with switches and trunking. I see the packet 
flows driven by distributed computing in real systems. Whenever the sustained 
peak load on a switch path reaches 100%, that's not "good", that's not 
"efficient" resource usage. That is a situation where computing is experiencing 
huge wasted capacity due to network congestion that is dramatically slowing 
down the desired workload.
 
Again this is because *real workloads* in distributed computation don't have 
smooth or averagable rates over interconnects. Latency is everything in that 
application too!
 
Yes, because one buys switches from vendors who don't know how to build or 
operate a server or a database at all, you see vendors trying to demonstrate 
their amazing throughput, but the people who build these systems (me, for 
example) are not looking at throughput or statistical multiplexing at all! We 
use "throughput" as a proxy for "latency under load". (and it is a poor proxy! 
Because vendors throw in big buffers, causing bufferbloat. See Arista Networks' 
attempts to justify their huge buffers as a "good thing" -- when it is just a 
case of something you have to design around by clocking the packets so they 
never accumulate in a buffer).
 
So, yes, the peak transfer rate matters, of course. And sometimes it is 
utilized for very good reason (when the latency of a file transfer as a whole 
is the latency that matters). But to be clear, just because as a user I want 

Re: [Cerowrt-devel] [Bloat] DC behaviors today

2017-12-04 Thread dpreed

I suggest we stop talking about throughput, which has been the mistaken idea 
about networking for 30-40 years.
 
Almost all networking ends up being about end-to-end response time in a 
multiplexed system.
 
Or put another way: "It's the Latency, Stupid".
 
I get (and have come to expect) 27 msec. RTT's under significant load, from 
Boston suburb to Sunnyvale, CA.
 
I get 2 microsecond RTT's within my house (using 10 GigE).
 
What will we expect tomorrow?
 
This is related to Bufferbloat, because queueing delay is just not a good thing 
in these contexts - contexts where Latency Matters. We provision multiplexed 
networks based on "peak capacity" never being reached.
 
Consequently, 1 Gig to the home is "table stakes". And in DOCSIS 3.1 
deployments that is what is being delivered, cheap, today.
 
And 10 Gig within the home is becoming "table stakes", especially for 
applications that need quick response to human interaction.
 
One NVMe drive already delivers around 11 Gb/sec at its interface. That's what is 
needed in the network to "impedance match".
 
802.11ax already gives around 10 Gb/sec. wireless (and will be on the market 
soon).
 
The folks who think that having 1 Gb/sec to the home would only be important if 
you had to transfer at that rate 8 hours a day are just not thinking clearly 
about what "responsiveness" means.
 
For a different angle on this, think about what the desirable "channel change 
time" is if a company like Netflix were covering all the football (what the US 
calls soccer) games in the world. You'd like to fill the "buffer" in 100 msec. 
so changing to some new channel is responsive. 100 msec. of 4K sports, which 
you are watching in "real time", needs to be buffered, and you want no more 
than a second or two of delay from camera to your screen. So buffering up 1 
second of a newly selected 4K video stream in 100 msec. on demand is why you 
need such speeds. Do the math.
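
Doing that math with rough assumed numbers (the 25 Mb/sec figure for a 4K 
stream is an assumption of this sketch, not a measurement):

# Back-of-envelope version of the "channel change" math above.
# The 25 Mb/s figure for a 4K stream is an assumption of this sketch.

STREAM_MBPS = 25.0        # assumed steady-state 4K stream rate
BUFFER_SECONDS = 1.0      # playout buffer to fill on a channel change
FILL_TIME_S = 0.1         # target: buffer full within 100 msec

bits_needed = STREAM_MBPS * 1e6 * BUFFER_SECONDS        # 25 Mbit
burst_rate_mbps = bits_needed / FILL_TIME_S / 1e6       # rate needed during the fill

print(f"burst rate needed: {burst_rate_mbps:.0f} Mb/s")  # -> 250 Mb/s
# Filling 1 second of a 25 Mb/s stream in 100 msec needs a 10x burst (~250 Mb/s);
# a couple of concurrent viewers, or higher-rate 4K/VR encodes, gets you to
# gigabit-class "table stakes" even though the average rate stays modest.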
 
VR sports coverage - even more so.
 
 


On Monday, December 4, 2017 7:44am, "Mikael Abrahamsson"  
said:



> On Mon, 4 Dec 2017, Pedro Tumusok wrote:
> 
> > Looking at chipsets coming/just arrived from the chipset vendors, I think
> > we will see CPE with 10G SFP+ and 802.11ax Q3/Q4 this year.
> > Price is of course a bit steeper than the 15USD USB DSL modem :P, but
> > probably fits nicely for the SMB segment.
> 
> https://kb.netgear.com/31408/What-SFP-modules-are-compatible-with-my-Nighthawk-X10-R9000-router
> 
> This has been available for a while now. Only use-case I see for it is
> Comcast 2 gigabit/s service, that's the only one I know of that would fit
> this product (since it has no downlink 10GE ports).
> 
> --
> Mikael Abrahamsson email: swm...@swm.pp.se
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] dnsmasq CVEs

2017-10-09 Thread dpreed

Sooner or later it dawns on all security professionals that the idea of a 
"provably secure system" is a pipedream. Same with systems designers working on 
fault tolerance who are tasked with making their systems "NonStop" as the 
Tandem claim used to go.
 
What we see here in most of the examples is a failure of design, especially of 
modularity.
 
Modularity is there to tame complexity; that's really its only reason to exist.
 
One thing that helps enforce modularity is end-to-end encryption, though there 
are other such concepts. The baseband processor shouldn't have the encryption 
keys - it has no "need to know".
 
The bigger problem is that those who design these "modules" don't understand 
the way to make systems containing them modular. A module, for example, 
performs a function that can be described abstractly, without knowing what is 
inside it. And it should hide information about "how" and "what" it does to the 
largest extent possible (Parnas' Information Hiding Principle). There are other 
time-tested notions of modular design that help.
 
But what we see often are what we used to describe in software as "taking the 
listing and breaking it into files by tearing on page boundaries".  The 
tendency is toward throwing a grab bag of functions on a chip that have no 
consistent and modular functionality. (e.g. the so-called "chipsets" that 
accompany new Intel CPU releases).
 
I don't know what can be done, other than to design systems that have modular 
structure where we can.
The one remaining thing we can do is limit the bad things that can happen in a 
design, by compartmentalizing failure and security risk using fairly strong and 
redundant approaches. Defense in depth. Don't assume that your firewall will 
prevent errors from creeping in.
 


On Monday, October 9, 2017 4:32am, "Mikael Abrahamsson"  said:



> On Sat, 7 Oct 2017, valdis.kletni...@vt.edu wrote:
> 
> > Know how x86 people complain that SSM mode introduces jitter? That's
> > just the tip of the iceberg. Believe it or not, there's an entire
> > IPv4/IPv6 stack *and a webserver* hiding in there...
> >
> > https://schd.ws/hosted_files/ossna2017/91/Linuxcon%202017%20NERF.pdf
> >
> > Gaak. Have some strong adult beverage handy, you'll be needing it
> 
> Also see the wifi processor remote exploit that Apple devices (and others
> I presume) had problems with.
> 
> Mobile baseband processors behave in the same way, and also have their own
> stack. I have talked to mobile developers who discovered all of a sudden
> the baseband would just silently "steal" a bunch of UDP ports from the
> host OS and just grab these packets. At least with IPv6, the baseband can
> have its own IPv6 address, separated from the host stack IPv6 addresses.
> 
> Just to illustrate (incompletely) what might be going on when you're
> tethering through a mobile phone.
> 
> Packet comes in on the 4G interface. It now hits the baseband processor
> (that runs code), which might send the packet to either the host OS, or
> via a packet accelerator path (which the host OS might or might not have a
> control plane into), and this accelerator runs code, and then it hits the
> wifi chip, which also runs code.
> 
> So while the host OS programmer might see their host OS doing something,
> in real life the packet potentially hits at least three other things that
> run code using their own firmware. Also, these components are manufactured
> in factories, how do we verify that these actually do what they were
> intended to do, and not modified between design and production? How do we
> know the firmware we load is actually loaded and it's not intercepted and
> real time patched before execution? Oh, also, the OS is loaded from
> permanent storage, that is also running code. There are several talks
> about people modifying the storage controller (which also runs code of
> course) to return different things depending on usage pattern. So it's not
> safe for the OS to read data, check that it passes integrity checks, and
> then read it again, and execute. The storage might return different things
> the second time.
> 
> I don't know that we as humanity know how to do this securely. I've
> discussed this with vendors in different sectors, and there's a lot of
> people who aren't even aware of the problem.
> 
> I'd say the typical smartphone today probably has 10 things or more
> running code/firmware, all susceptible to bugs, and all of them at risk: even
> with exposed security problems, they're never going to be patched.
> 
> So the IoT problem isn't only for "smart meters" etc, it's for everything.
> We've created devices that are impossible to verify without destroying
> them (sanding down ICs and looking at billions of gates), and in order to
> verify them, you need equipment and skills that are not available to most
> people.
> 
> --
> Mikael Abrahamsson email: swm...@swm.pp.se
> ___
Cerowrt-devel mailing list

Re: [Cerowrt-devel] solar wifi ap designs?

2017-06-05 Thread dpreed
It doesn't jump to mind, but a radio carrying bits near the edge probably won't 
be used near capacity most of the 24 hours it is operating. Just as Iridium was 
designed to quiesce most of its electronics on the dark side of the earth, 
extending its battery life, you can probably assume that a radio in a tree 
won't be heavily used most of the hours of a 24 hour cycle. 
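
A back-of-envelope version of that, reusing the figures from the power-budget 
discussion quoted below (30 W peak supply, ~70% usable PV efficiency, ~5 
full-sun hours per day); the duty-cycle values are illustrative assumptions:

# Back-of-envelope: how the radio's duty cycle changes the PV sizing, using the
# figures from the quoted discussion (30 W peak, 70% usable PV efficiency,
# ~5 full-sun hours/day). The duty-cycle fractions are illustrative assumptions.

PEAK_W = 30.0          # WNDR3700v2-class supply rating (12 V x 2.5 A)
PV_EFFICIENCY = 0.70   # input-to-usable efficiency of the PV + storage system
SUN_HOURS = 5.0        # average full-sun hours per day

def pv_watts_needed(avg_fraction_of_peak):
    avg_w = PEAK_W * avg_fraction_of_peak
    daily_wh = avg_w * 24.0
    return daily_wh / PV_EFFICIENCY / SUN_HOURS

for frac in (0.30, 0.10, 0.03):   # always busy, lightly used, mostly quiesced
    print(f"avg load {PEAK_W*frac:4.1f} W -> PV panel ~{pv_watts_needed(frac):5.1f} W")
# 0.30 reproduces the ~62 W figure quoted below; an AP that is mostly idle
# (and aggressively power-managed) drops the panel into the 6-20 W range,
# which is a much more tree-mountable proposition.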



On Monday, June 5, 2017 1:52pm, dpr...@reed.com said:

> "Deep discharge" batteries work in LEO satellites for such applications. But 
> they
> are extraordinarily expensive, because the designs are specialized, and that 
> use
> case doesn't have the 2-3 day solar outage problem.
> 
> You are not going to put a good enough system for an AP up in a tree. Maybe 
> on an
> antenna mast structure with solid base and guy wires. Roofs and ground are 
> better
> choices.
> 
> But I would wonder whether redesigning the AP itself to be power-conserving 
> would
> be the place to start. They are not designed to be "low power" - they are 
> designed
> to be inexpensive.
> 
> So, for example: why 12V??? No logic needs 12V. Integrate the battery into 
> the AP
> and run it at 3V, eliminating multiple conversion losses.
> 
> You can use 12/20 V off the solar panel to charge the 3V battery system (high
> current only while charging).
> 
> Pay lots of attention to the duty cycle of the radio. If you really expect the
> radio to be on 100% of the time, you may have to power it all the time. 
> Otherwise,
> minimize uptime.  Similarly, the processor need not be on most of the time if 
> it
> is mostly idle while accepting and sending packets from memory. (ARM 
> BIG.little
> might be helpful).
> 
> Get rid of Linux if possible. Linux is not a low-power OS - needs a lot of 
> work in
> configuring or rewriting drivers to cut power. (there's a need for an LP 
> Linux,
> but like Desktop Linux, Linus, and his coterie, isn't terribly interested in
> fixing his server OS to optimize for non-servers, so "server power saving" is 
> the
> only design point for power).
> 
> 
> 
> 
> On Monday, June 5, 2017 12:01pm, "Richard Smith"  said:
> 
>> On 06/04/2017 08:49 PM, Dave Taht wrote:
>>> I keep finding nicely integrated solar/battery/camera/wifi designs
>>>
>>> https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Delectronics=solar+wifi=n%3A172282%2Ck%3Asolar+wifi
>>>
>>> But what I want is merely an solar/battery/AP design well supported by
>>> lede... and either the ath9k or ath10k chipset - or mt72 - that I can
>>> hang off a couple trees. I've not worked with solar much in the past
>>> years, and picking the right inverter/panel/etc seems like a pita, but
>>> perhaps there are ideas out there?
>>
>> This is something I was up against constantly when I worked for OLPC.
>> There's a big gap for products that use more power than a cell phone but
>> less than an RV or a off-grid cabin.
>>
>> For the XO itself we worked around it by designing the front end of the
>> XO to be able to handle the range of output voltages from "12V" panels
>> (open circuit voltages up to 20V) and to implement an MPPT algorithim in
>> the EC firmware.  You can plug up any solar panel with a Voc of 20V or
>> less to an XO-1.5 to XO-4 and it will DTRT.
>>
>> Figuring out what to do with the deployment's APs though was always a
>> struggle.
>>
>> Solutions exist but you need to get a good estimate of what sort of
>> power budget you need.  It makes a big difference in what equipment you
>> need.
>>
>> Unless its a really low power device the numbers can get large fast.
>>
>> My WNDR 3700v2 power supply is rated at 12V 2.5A which is a peak of 30W.
>>
>> Lets assume your average is 30% of peak.  That's 9W.  Your 24h energy
>> requirement is 216Wh.  A reasonable input to usable efficiency for a PV
>> system is 70%.  Given average 5 hour window of full sun you need a PV
>> output of at least 62W.  It only goes up from there.
>>
>> Realistically you need to survive a 2-3 day period of terrible solar
>> output.  So your storage requirements should be at least 2-3x that.
>> When you do get sun again you need excess PV capacity to be able to
>> recharge your batteries.  You would probably need a PV output in the
>> 100W-150W range to make a system you could count on to have 100%
>> availability 24/7.
>>
>> That's going to be a pretty big chunk of hardware up in a tree.
>>
>> If the average power draw is more in the 3W or 1W range then things look
>> a lot better.   That starts to get down into the 40 and 20W range.
>>
>>> so am I the only one left that likes edison batteries? you don't need
>>> a charge controller... they last for a hundred years
>>> ___
>>
>> I've never used this battery type but it looks like the resistant to
>> overcharge assumes you replace the electrolyte.  All the cells I've
>> looked at on a few sites seem to be flooded which means maintenance.
>> Are there sealed maintenance free versions?
>>

Re: [Cerowrt-devel] solar wifi ap designs?

2017-06-05 Thread dpreed
"Deep discharge" batteries work in LEO satellites for such applications. But 
they are extraordinarily expensive, because the designs are specialized, and 
that use case doesn't have the 2-3 day solar outage problem.

You are not going to put a good enough system for an AP up in a tree. Maybe on 
an antenna mast structure with solid base and guy wires. Roofs and ground are 
better choices.

But I would wonder whether redesigning the AP itself to be power-conserving 
would be the place to start. They are not designed to be "low power" - they are 
designed to be inexpensive.

So, for example: why 12V??? No logic needs 12V. Integrate the battery into the 
AP and run it at 3V, eliminating multiple conversion losses.

You can use 12/20 V off the solar panel to charge the 3V battery system (high 
current only while charging).

Pay lots of attention to the duty cycle of the radio. If you really expect the 
radio to be on 100% of the time, you may have to power it all the time. 
Otherwise, minimize uptime.  Similarly, the processor need not be on most of 
the time if it is mostly idle while accepting and sending packets from memory. 
(ARM big.LITTLE might be helpful).

Get rid of Linux if possible. Linux is not a low-power OS - it needs a lot of 
work in configuring or rewriting drivers to cut power. (There's a need for a 
low-power Linux, but as with Desktop Linux, Linus and his coterie aren't 
terribly interested in fixing his server OS to optimize for non-servers, so 
"server power saving" is the only design point for power.)




On Monday, June 5, 2017 12:01pm, "Richard Smith"  said:

> On 06/04/2017 08:49 PM, Dave Taht wrote:
>> I keep finding nicely integrated solar/battery/camera/wifi designs
>>
>> https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Delectronics=solar+wifi=n%3A172282%2Ck%3Asolar+wifi
>>
>> But what I want is merely an solar/battery/AP design well supported by
>> lede... and either the ath9k or ath10k chipset - or mt72 - that I can
>> hang off a couple trees. I've not worked with solar much in the past
>> years, and picking the right inverter/panel/etc seems like a pita, but
>> perhaps there are ideas out there?
> 
> This is something I was up against constantly when I worked for OLPC.
> There's a big gap for products that use more power than a cell phone but
> less than an RV or a off-grid cabin.
> 
> For the XO itself we worked around it by designing the front end of the
> XO to be able to handle the range of output voltages from "12V" panels
> (open circuit voltages up to 20V) and to implement an MPPT algorithim in
> the EC firmware.  You can plug up any solar panel with a Voc of 20V or
> less to an XO-1.5 to XO-4 and it will DTRT.
> 
> Figuring out what to do with the deployment's APs though was always a
> struggle.
> 
> Solutions exist but you need to get a good estimate of what sort of
> power budget you need.  It makes a big difference in what equipment you
> need.
> 
> Unless its a really low power device the numbers can get large fast.
> 
> My WNDR 3700v2 power supply is rated at 12V 2.5A which is a peak of 30W.
> 
> Lets assume your average is 30% of peak.  That's 9W.  Your 24h energy
> requirement is 216Wh.  A reasonable input to usable efficiency for a PV
> system is 70%.  Given average 5 hour window of full sun you need a PV
> output of at least 62W.  It only goes up from there.
> 
> Realistically you need to survive a 2-3 day period of terrible solar
> output.  So your storage requirements should be at least 2-3x that.
> When you do get sun again you need excess PV capacity to be able to
> recharge your batteries.  You would probably need a PV output in the
> 100W-150W range to make a system you could count on to have 100%
> availability 24/7.
> 
> That's going to be a pretty big chunk of hardware up in a tree.
> 
> If the average power draw is more in the 3W or 1W range then things look
> a lot better.   That starts to get down into the 40 and 20W range.
> 
>> so am I the only one left that likes edison batteries? you don't need
>> a charge controller... they last for a hundred years
>> ___
> 
> I've never used this battery type but it looks like the resistant to
> overcharge assumes you replace the electrolyte.  All the cells I've
> looked at on a few sites seem to be flooded which means maintenance.
> Are there sealed maintenance free versions?
> 
> For discharge nominal is 1.2V but charging is listed as ~1.6V/cell so
> you are going to need 16V to charge.  I don't really see how you can
> build a workable system with out some sort of setup that can isolate
> your 12V loads from a 16V charge.
> 
> Perhaps undercharge them at a lower voltage and live with the capacity
> loss?
> 
> --
> Richard A. Smith
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 



Re: [Cerowrt-devel] Fwd: License File for Open Source Repositories

2016-12-23 Thread dpreed
My understanding is that it is already settled case law that contributing code 
to a GPL-licensed project implicitly grants a perpetual, royalty-free license 
to use any applicable patent the author uses in the code.

Of course there is no case law regarding patents in other licenses, in 
particular MIT and BSD, which have no strong copyleft provisions.

This issue of submarine patent traps is important in communications protocol 
invention. Protocol patents are far worse than software patents... IMO, 
communications protocols should never be property. IESG is struggling to create 
a middle ground, where there should be no middle, IMO.





-Original Message-
From: "Marc Petit-Huguenin" 
Sent: Fri, Dec 23, 2016 at 2:23 pm
To: "Dave Taht" , "cerowrt-devel@lists.bufferbloat.net" 

Cc: "Dave Taht" , "cerowrt-devel@lists.bufferbloat.net" 

Subject: Re: [Cerowrt-devel] Fwd: License File for Open Source Repositories

On 12/23/2016 08:05 AM, Dave Taht wrote:
> I have no idea what they are trying to do.

This is to prevent people from proposing text to be included in a specification 
without disclosing that it may be relevant to a patent or patent application 
they own or know about.  As soon as you make a contribution, you are supposed to 
disclose such IPR in the IETF database.  This text makes it explicit that 
anything done in such a repository is covered by the same requirements.

An alternative would have been a variant of the Signed-off-by header, but as 
the repository does not extend to the RFC Editor or the IETF Trust, that's the 
best that can be done for now.

> 
> 
> -- Forwarded message --
> From: IESG Secretary 
> Date: Fri, Dec 23, 2016 at 7:36 AM
> Subject: License File for Open Source Repositories
> To: IETF Announcement List 
> Cc: i...@ietf.org, i...@ietf.org
> 
> 
> The IESG has observed that many working groups work with open source
> repositories even for their work on specifications. That's great, and
> we're happy to see this development, as it fits well the working style
> of at least some of our working groups. This style is also likely to be
> more popular in the future.
> 
> As always, we'd like to understand areas where we can either be helpful
> in bringing in some new things such as tooling, or where we need to
> integrate better between the repository world and the IETF process. As
> an example of the latter, we're wondering whether it would be helpful to
> have a standard boilerplate for these repositories with respect to the
> usual copyright and other matters. The intent is for such text to be
> placed in a suitable file (e.g., "CONTRIBUTING"), probably along with
> some additional information that is already present in these files in
> many repositories. The idea is that people should treat, e.g., text
> contributions to a draft-foo.xml in a repository much in the same way as
> they treat text contributions on the list, at least when it comes to
> copyright, IPR, and other similar issues.
> 
> We have worked together with the IETF legal team and few key experts
> from the IETF who are actively using these repositories, and suggest the
> following text.
> 
> We're looking to make a decision on this matter on our January 19th,
> 2017 IESG Telechat, and would appreciate feedback before then. This
> message will be resent after the holiday period is over to make sure it
> is noticed. Please send comments to the IESG (i...@ietf.org) by 2017-01-17.
> 
> The IESG
> 
> ——
> 
> This repository relates to activities in the Internet Engineering Task
> Force(IETF). All material in this repository is considered Contributions
> to the IETF Standards Process, as defined in the intellectual property
> policies of IETF currently designated as BCP 78
> (https://www.rfc-editor.org/info/bcp78), BCP 79
> (https://www.rfc-editor.org/info/bcp79) and the IETF Trust Legal
> Provisions (TLP) Relating to IETF Documents
> (http://trustee.ietf.org/trust-legal-provisions.html).
> 
> Any edit, commit, pull-request, comment or other change made to this
> repository constitutes Contributions to the IETF Standards Process. You
> agree to comply with all applicable IETF policies and procedures,
> including, BCP 78, 79, the TLP, and the TLP rules regarding code
> components (e.g. being subject to a Simplified BSD License) in
> Contributions.
> 
> 
> 


-- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: https://marc.petit-huguenin.org
Profile: https://www.linkedin.com/in/petithug

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Intel latency issue

2016-12-04 Thread dpreed
The language used in the article seems confused. However, since "firmware" 
sometimes means software (the OS kernel, for example) and this is "lag under 
load", it's just barely possible that this is bufferbloat of a sort. Would we 
be surprised?

200 ms. can also be due to interrupt mishandling, recovered by a watchdog. It's 
common, for performance, to reduce interrupt overhead by switching from 
interrupt-driven to polled operation while packets are arriving at full rate, 
and then back again when the traffic has a gap. If you don't turn interrupts 
back on correctly (there's a race between deciding to re-enable interrupts and 
actually doing so, during which a packet can arrive without raising an 
interrupt), then you end up waiting for some "watchdog" (every 200 ms?) to 
handle the incoming packets.

The idea that something actually runs for 200 ms. blocking everything seems to 
be the least likely situation - of course someone might have written code that 
held a lock while waiting for something or masked interrupts while waiting for 
something. But actually executing code for 200 ms.? Probably not.






On Sunday, December 4, 2016 3:27am, "Jonathan Morton"  
said:

> 
>> On 4 Dec, 2016, at 10:25, Matt Taggart  wrote:
>>
>> "Modems powered by Intel's Puma 6 chipset that suffer from bursts of
>> game-killing latency include the Arris Surfboard SB6190, the Hitron
>> CGNV4, and the Compal CH7465-LG, and Puma 6-based modems rebadged by
>> ISPs, such as Virgin Media's Superhub 3 and Comcast's top-end Xfinity
>> boxes. There are other brands, such as Linksys and Cisco, that use the
>> system-on-chip that may also be affected."
> 
> I do have to ask: the Atom isn’t very powerful, but WTF is it doing for
> 200ms every few seconds?
> 
>  - Jonathan Morton
> 
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] anybody know anything about the armada 3700?

2016-10-06 Thread dpreed
Reading between the lines on the datasheet, the "packet processor" doesn't look 
to be anything fancy or problematic.  It contains:

DMA - meaning that it will transfer into and out of the Tx and Rx rings in RAM 
automatically. Every NIC does DMA at the packet level.

PTP (IEEE 1588) - essentially this just timestamps packets as they are being 
sent and as they are being received on the wire. It has to be in the hardware 
device to make PTP precise enough to do nanosecond-level clock sync. But the 
PTP protocol itself (which does all kinds of fancy "frequency lock" algorithms, 
etc.) won't be in there - just the timestamping and perhaps a high-res clock 
register.

Buffer management - as a frame arrives off of the cable, you can't just stream 
it into and out of RAM without some buffering to cope with the multiportedness 
of coherent RAM and the scatter/gather of data for a frame into RAM buffers in 
the Tx/Rx rings. This would just be the logic for that.
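
To make the "nothing fancy" point concrete, here is roughly the shape such a 
descriptor ring takes (Python/ctypes; the field layout is invented for 
illustration and is not the actual Marvell descriptor format):

# Illustrative sketch of a DMA Rx descriptor ring with a PTP timestamp field.
# The layout is invented - NOT the Armada 3700's actual descriptor format -
# just the generic shape of "DMA rings + hardware timestamp + buffers".

import ctypes

RING_SIZE = 256

class RxDescriptor(ctypes.Structure):
    _fields_ = [
        ("buf_addr", ctypes.c_uint64),   # physical address of the packet buffer (DMA target)
        ("buf_len",  ctypes.c_uint16),   # bytes the NIC wrote into the buffer
        ("status",   ctypes.c_uint16),   # OWN bit, error bits, end-of-ring, ...
        ("ptp_ns",   ctypes.c_uint64),   # IEEE 1588 hardware timestamp (nanoseconds)
    ]

RxRing = RxDescriptor * RING_SIZE        # the ring the NIC walks via DMA

ring = RxRing()
ring[0].buf_addr = 0x1000_0000           # driver points descriptor 0 at a buffer
ring[0].status = 0x8000                  # hand ownership to the NIC (hypothetical OWN bit)
print(ctypes.sizeof(RxDescriptor), "bytes per descriptor,",
      ctypes.sizeof(RxRing), "bytes for the ring")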

Also, if you look at the description of "features" there are no networking 
"features" listed that suggest advanced functionality or another specialized 
microcontroller doing magic.

So I suspect that the packet processor is less complex than a typical 1 GigE 
NIC - no checksum offload, no TSO, no ... there's not even a switch between the 
two ports. You get to do it all in software, which is great.

Having NBASE-T is also pretty nice, though there's not a lot of gear for the 
other end of the NBASE-T connection out there (though NBASE-T switches are 
becoming a standard for attaching 802.11ac in enterprise campuses to aggregate 
them into a 10 GigE or faster datacenter switch).

I do have CAT-6A throughout my house, so I wonder if I can wire my house with 
NBASE-T if I replace my GigE switch... :-)






On Tuesday, October 4, 2016 12:18pm, "Dave Taht"  said:

> On Tue, Oct 4, 2016 at 2:46 AM, Mikael Abrahamsson  wrote:
>> On Mon, 3 Oct 2016, Dave Taht wrote:
>>
>>>
>>> https://www.kickstarter.com/projects/874883570/marvell-espressobin-board?token=6a67e544
>>
>>
>> Oh, oh, another device with a "packet processor".
>>
>> http://www.marvell.com/embedded-processors/assets/Marvell-88F37xx-Product-Brief-20160830.pdf
>>
>> Do we know anything about this packet processor and FOSS support for it? I
>> guess the "buffer manager" is very much of interest to anti-bufferbloat...
> 
> Well, it's a competitor to the edgerouter X pricewise, and my hope
> would be with the cache coherent I/O and the arm v8s that it could
> push 1Gbit with ease out each port, regardless of offloads. USB3 makes
> for high speed nas capability, (although I have high hopes for usb-c
> on something router-ish someday). Also I am gradually thinking we'll
> start seeing more 2.5gbit ethernet over TP. And there's a mini-pcie
> slot for wifi-card-of-choice...
> 
> all at a pricepoint that's lower than almost anything I've seen with
> these capabilities.
> 
> Who knows, perhaps the "full SDK" will allow for programming the
> packet coprocessor?
> /me drinks some kool-aid
> 
> Downsides: Globalscale, historically, has had heat issues in their
> designs. And it is quite far from shipping, as yet.
> 
>>
>> --
>> Mikael Abrahamsson email: swm...@swm.pp.se
> 
> 
> 
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-21 Thread dpreed
On Wednesday, September 21, 2016 2:00pm, "Mikael Abrahamsson" 
 said:

> Yes, I guess people who have been staring at traffic graphs and Netflow
> collector system output for 10-15 years have no insight into what and how
> much traffic is going where.

That's exactly what I am saying.



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-21 Thread dpreed
Don't want to dwell on this, but Sandvine is not an unbiased source.  And it is 
apparently the *only* source - and 50% is a LOT.  Even Trump and Clinton don't 
have 50% of the electorate each. :-)

Does Sandvine have the resources to examine a true sample of all Internet 
traffic?

Maybe the NSA does.



On Wednesday, September 21, 2016 2:42pm, "Alan Jenkins" 
 said:

> On 20/09/2016, dpr...@reed.com  wrote:
>> I constantly see the claim that >50% of transmitted data on the Internet are
>> streaming TV. However, the source seems to be as hard to nail down as the
>> original claim that >50% of Internet traffic was pirated music being sent
>> over bittorrent.
> 
> uh, ibid.
> 
> 50-60% "upstream bandwidth", 2010 and 2008 respectively.
> 
> I'm quite happy to believe the trend, at least. Do you have a
> preferred assessment or even a rebuttal (back of the envelope,
> whatever) for around that time?
> 
> BT for media is a real sweet spot.  Music particularly because people
> _collect_, though I don't know what the timeline would look like for
> music v.s. video.
> 
> Not as if the original figure was being cited as scientific gospel.
> 
> The last paper out of Netflix, they said it works best to stomp out
> the isochronous behaviour and run in FTP mode as much as possible :).
> (Subject to upper limits on quality and application buffers).  Even
> dumb chunk downloading uses TCP; it's not isochronous in the way
> that's usually used to describe RTP etc.
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-21 Thread dpreed
On Wednesday, September 21, 2016 3:32am, "Alan Jenkins" 
 said:

> On 20/09/16 21:27, dpr...@reed.com wrote:
> I don't think the source is hard to identify.  It's Sandvine press
> releases.  That's what the periodic stories on Ars Technica are always
> derived from.
> 
> https://www.sandvine.com/pr/2015/12/7/sandvine-over-70-of-north-american-traffic-is-now-streaming-video-and-audio.html

Press releases have almost no scientific verifiability and validity, and 
Sandvine is self-interested in a biased outcome. (Sad that Ars Technica just 
repeats these without questioning the numbers). In the past, I have actually 
questioned Sandvine directly, and had friends in the measurement community have 
asked for the raw data and methodology.  The response: "trade secrecy" and 
"customer privacy" prevent release of both raw data and methods.

If one is predisposed to "like" the result, one then repeats it as a "citation" 
often omitting the actual source and quoting the place where it appeared (e.g. 
Ars Technica said, rather than Sandvine said).

This is how propaganda works. It's exactly how propaganda works - I happen to 
have propaganda textbooks from the late 1940's that have chapters about these 
techniques.

Of course, the best propaganda is the stuff that you can get engineers in the 
field to promulgate based on their "gut feel".

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-21 Thread dpreed
bittorrent is a kind of "FTP" rather than semi-isochronous (i.e. rate-bounded) 
"TV" in my personal classification.  As I'm sure you know, the two are quite 
different in their effects on congestion and their handling of congestion.

The idea that bittorrent dominated the Internet traffic volume at any point in 
time in the past is just plain not credible.  And an observation of one's own 
personal household by a high-end network user is not extrapolatable at scale.

(for example, it would require that all enterprise traffic through the Internet 
be dominated by bittorrent, unless enterprise traffic on the Internet is 
insignificant. There were never significant bittorrent users in enterprises, 
either connecting out from the companies' networks, or connecting "in" to the 
companies' externally facing sites).

In any case, there is no scientific validity to "I seem to remember" claims.





On Wednesday, September 21, 2016 2:24am, "Mikael Abrahamsson" 
 said:

> On Tue, 20 Sep 2016, dpr...@reed.com wrote:
> 
>> I constantly see the claim that >50% of transmitted data on the Internet
>> are streaming TV. However, the source seems to be as hard to nail down
>> as the original claim that >50% of Internet traffic was pirated music
>> being sent over bittorrent.
> 
> It's my firm opinion that in the past 5-15 years (depending on market),
> more than 50% of Internet traffic is video, in some form or another.
> 
> This is from working at ISPs and seeing where traffic went. First it was
> bitorrent (pirated video), now it's Youtube, Netflix and other kind of
> streaming video. I wouldn't call this "TV" though.
> 
> In my household (2 adults, 1 6 year old), 75% of the average weekly
> traffic by volume, is over IPv6. I haven't checked in detail, but my guess
> is that Youtube+Netflix is the majority of this traffic. I come to this
> conclusion by looking at when the traffic occurs and what the traffic
> levels are, and from remembering what was done at the time.
> 
> --
> Mikael Abrahamsson email: swm...@swm.pp.se
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-20 Thread dpreed
I constantly see the claim that >50% of transmitted data on the Internet are 
streaming TV. However, the source seems to be as hard to nail down as the 
original claim that >50% of Internet traffic was pirated music being sent over 
bittorrent.

You recently repeated that statistic as if it were a verified fact.

I remember that in the early days of WiFi DSSS availability the claim was 
repeatedly made from podiums at conferences I attended that "the amount of WiFi 
in parking lots on Sand Hill Road [then the location of most major Silicon 
Valley VC firms] had made it so that people could not open their car doors with 
their remote keys".  This was not intended as hyperbole or a joke - I got into 
the habit of asking the speakers how they knew this, and they told me that 
their VC friends had all had it happen to them...

Propaganda consists of clever stories that "sound plausible" and which are 
spread by people because they seem to support something they *wish* were true 
for some reason.

I suspect that this 70% number is more propaganda of this sort.

In case it is not obvious, the beneficiaries of this particular propaganda are 
those who want to claim various things - for example, that the Internet is now 
just TV broadcasting and thus should be treated that way (Internet access 
providers should select "channels", charge for allowing them through to 
customers, improve the "quality of programming", and censor anything 
offensive).

So I am extremely curious as to an actual source of such a number, how it was 
measured, and how its validity can be tested reproducibly.

Some may remember that the original discovery of "bufferbloat" was due to the 
fact that Comcast deployed Sandvine gear in its network to send RST packets for 
any connections that involved multiple concurrent TCP uploads (using DPI 
technology to guess what TCP connections to RST and the right header data to 
put on the RST packets).

Their argument for why they *had* to do that was that they "had data" that said 
that their network was being overwhelmed by bittorrent pirates.

In fact, the problem was bufferbloat - DOCSIS 2.0 gear that was designed to 
fail miserably under any intense upload.  The part about bittorrent piracy was 
based on claimed measurements of the type of packets that were causing the 
problem - measurements that apparently were never in fact performed.

Hence: I know it is a quixotic thing on my part, but the scientist in me wants 
to see the raw data and see the methods used to obtain it.

I have friends who actually measure Internet traffic (kc claffy, for example), 
and they do a darn good job.  The difficulty in getting data that could provide 
the 70% statistic is *so high* that it seems highly likely that no such 
measurement has ever been done, in fact.

But if someone has done such a measurement (directly or indirectly), defining 
their terms and methodology sufficiently so that it is a reproducible result, 
it would probably merit an award for technical excellence.

Otherwise, please, please, please don't lend your name to promulgating 
nonsense, even if it seems useful to argue your case.  Verify your sources.



On Monday, September 19, 2016 4:26pm, "Dave Taht"  said:

> ok, I got BBR built with net-next + v2 of the BBR patch. If anyone
> wants .deb files for ubuntu, I can put them up somewhere. Some quick
> results:
> 
> http://blog.cerowrt.org/post/bbrs_basic_beauty/
> 
> I haven't got around to testing cubic vs bbr in a drop tail
> environment, my take on matters is with fq (fq_codel) in place, bbr
> will work beautifully against cubic, and I just wanted to enjoy the
> good bits for a while before tearing apart the bad... and staying on
> fixing wifi.
> 
> I had to go and rip out all the wifi patches to get here... as some
> code landed to the ath10k that looks to break everything there, so
> need to test that as a baseline first - and I wanted to see if
> sch_fq+bbr did anything to make the existing ath9k driver work any
> better.
> 
> 
> 
> 
> On Sat, Sep 17, 2016 at 2:33 PM, Dave Taht  wrote:
>> On Sat, Sep 17, 2016 at 2:11 PM,   wrote:
>>> The assumption that each flow on a path has a minimum, stable  RTT fails in
>>> wireless and multi path networks.
>>
>> Yep. But we're getting somewhere serious on having stabler RTTs for
>> wifi, and achieving airtime fairness.
>>
>> http://blog.cerowrt.org/flent/crypto_fq_bug/airtime_plot.png
>>
>>>
>>>
>>>
>>> However, it's worth remembering two things: buffering above a certain level 
>>> is
>>> never an improvement,
>>
>> which BBR recognizes by breaking things up into separate bandwidth and
>> RTT analysis phases.
>>
>>>and flows through any shared router come and go quite frequently on the real
>>> Internet.
>>
>> Very much why I remain an advocate of fq on the routers is that your
>> congestion algorithm for your particular flow gets more independent of
>> the other flows, and ~0 latency and 

Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next

2016-09-17 Thread dpreed
The assumption that each flow on a path has a minimum, stable RTT fails in 
wireless and multipath networks.



However, it's worth remembering two things: buffering above a certain level is 
never an improvement, and flows through any shared router come and go quite 
frequently on the real Internet.

Thus RTT on a single flow is not a reasonable measure of congestion. ECN 
marking is far better and packet drops are required for bounding time to 
recover after congestion failure.

The authors suffer from typical naivete by thinking all flows are for file 
transfer and that file transfer throughput is the right basic perspective, 
rather than end to end latency/jitter due to sharing, and fair sharing 
stability.
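
A toy sketch of that distinction (generic AQM-style logic for illustration 
only, not CoDel's or cake's actual control law): when the queue decides a 
packet should carry a congestion signal, an ECN-capable packet gets a CE mark 
and is still delivered, while a non-ECN packet can only carry the signal by 
being dropped.

# Sketch of "mark if you can, drop if you must". Generic AQM-style logic for
# illustration only - not any specific AQM's actual control law.

ECT0, ECT1, CE, NOT_ECT = 0b10, 0b01, 0b11, 0b00   # RFC 3168 ECN codepoints

def congestion_signal(packet):
    """Called when the AQM decides this packet should carry a congestion signal.
    Returns the packet to forward, or None if it had to be dropped."""
    if packet["ecn"] in (ECT0, ECT1):
        packet["ecn"] = CE          # mark: the signal arrives without losing data
        return packet
    return None                     # not ECN-capable: only a drop can say "slow down"

print(congestion_signal({"ecn": ECT0, "payload": b"x"}))     # marked, forwarded
print(congestion_signal({"ecn": NOT_ECT, "payload": b"x"}))  # dropped (None)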




-Original Message-
From: "Jonathan Morton" 
Sent: Sat, Sep 17, 2016 at 4:11 pm
To: "Maciej Soltysiak" 
Cc: "Maciej Soltysiak" , 
"cerowrt-devel@lists.bufferbloat.net" 
Subject: Re: [Cerowrt-devel] BBR congestion control algorithm for TCP innet-next


> On 17 Sep, 2016, at 21:34, Maciej Soltysiak  wrote:
> 
> Cake and fq_codel work on all packets and aim to signal packet loss early to 
> network stacks by dropping; BBR works on TCP and aims to prevent packet loss. 

By dropping, *or* by ECN marking.  The latter avoids packet loss.

 - Jonathan Morton

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Making AQM harder...

2016-08-12 Thread dpreed
Maybe we all need to buy some Sandvine devices for our homes that detect IW10 
and forge TCP RSTs?  Or maybe Ellacoya will develop one for consumers?

That's what Comcast did when bufferbloat was killing their gear on upload, as 
I'm sure we all remember.  Except the FCC is not going to ask me to testify on 
network management practices, nor is the CRTC or Ofcom, and a lame duck 
commission or a Brexiting UK is not going to stop folks from deploying DPI and 
packet forging technology.

So maybe the point of cake is past time. Let chaos reign.



On Friday, August 12, 2016 9:21am, "moeller0"  said:

> Hi List,
> 
> according to (which most of you probably know already) MS is jumping on the 
> IW10
> train, so brace for impact ;) Also windows “exposes” LEDBAT, if
> “LEDBAT is only exposed through an undocumented socket option at the
> moment” can actually be called exposure…
>   Also I noticed some discussions of windows 10 update traffic, 
> effectively
> monopolizing links with (competent) AQWM/QoS setups, it seems we need cake to
> fully bake soon ;)
> 
> 
> Best Regards
>   Sebastian
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] not exactly the most positive outcome

2016-07-26 Thread dpreed
It's a terrible outcome.  However, there is literally no significant support 
for the alternative, either from a policy basis or from entrepreneurial folks. 
N-O-N-E.

I am biased towards the entrepreneurial side, but have been fighting this on 
the policy side as well in one form or another since 1994 (when we began trying 
to legalize UWB).

People just take for granted that having their communications controlled 
"end-to-end" by some third party (e.g. The Phone Company) is optimal for them.  
After all, AT&T Bell Labs created the Internet and the WWW.



On Tuesday, July 26, 2016 1:23pm, "Dave Taht"  said:

> From: https://www.cirrent.com/emerging-wi-fi-trends/
> 
> "As Wi-Fi becomes a bigger part of our daily lives, expect that our
> broadband ISPs will likely move to extend their demarcation
> point(demarc for short). This demarc will quickly shift away from
> where it resides today, on the cable modem or DSL router, to the air
> interface on the Wi-Fi access points in our home. Carriers have
> already been using Wi-Fi enabled cable modems and DSL routers for some
> years now. However, with the advances I’ve mentioned, I expect to see
> almost every broadband ISP offer an in-home Wi-Fi mesh solution as a
> standard part of their broadband service over the next two to three
> years.
> 
> The main motivation for the ISPs is to quickly get to a point where
> they can offer their users a high quality, secure Wi-Fi network that
> provides improved coverage throughout the whole home. These
> carrier-controlled Wi-Fi mesh networks will allow us to still have our
> private networks running alongside their networks and both will run on
> the same equipment. The upshot is that we won’t have to buy our own
> Wi-Fi mesh solutions and will be able to use those provided by our
> ISPs, like the vast majority of us already do today."
> 
> 
> 
> 
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demandfor better wifi

2016-06-24 Thread dpreed
Without custom silicon, doing what I was talking about would involve non 
standard MAC power management, which would require all devices to agree.

David Lang's explanation was the essence of what I meant. The transmission from 
the access point on multiple channels is just digital addition if the DACs have 
enough bits per sample. To make sure that the signals to the AP are equalized, 
just transmit at a power that makes that approximately true... which means a 
power amp with at most 30 dB of dynamic gain setting. Typical dynamic path 
attenuation range (strongest-to-weakest ratio) among stations served by an AP 
is < 20 dB from my past experiments on well-operating installations, but 25 dB 
can be seen in reflection-heavy environments.

-Original Message-
From: "David Lang" 
Sent: Fri, Jun 24, 2016 at 1:19 am
To: "Bob McMahon" 
Cc: "Bob McMahon" , 
make-wifi-f...@lists.bufferbloat.net, "cerowrt-devel@lists.bufferbloat.net" 

Subject: Re: [Make-wifi-fast] more well funded attempts showing market 
demand for better wifi

well, with the kickstarter, I think they are selling a bill of goods.

Just using the DFS channels and aggregating them as supported by N and AC 
standards would do wonders (as long as others near you don't do the same)

David Lang

On Thu, 23 Jun 2016, Bob McMahon wrote:

> Date: Thu, 23 Jun 2016 20:01:22 -0700
> From: Bob McMahon 
> To: David Lang 
> Cc: dpr...@reed.com, make-wifi-f...@lists.bufferbloat.net,
> "cerowrt-devel@lists.bufferbloat.net"
> 
> Subject: Re: [Make-wifi-fast] more well funded attempts showing market demand
> for better wifi
> 
> Thanks for the clarification.   Though now I'm confused about how all the
> channels would be used simultaneously with an AP only solution (which is my
> understanding of the kickstarter campaign.)
>
> Bob
>
> On Thu, Jun 23, 2016 at 7:14 PM, David Lang  wrote:
>
>> I think he is meaning when one unit is talking to one AP the signal levels
>> across multiple channels will be similar. Which is probably fairly true.
>>
>>
>> David Lang
>>
>> On Thu, 23 Jun 2016, Bob McMahon wrote:
>>
>> Curious, where does the "in a LAN setup, the variability in [receive]
>>> signal strength is likely small enough" assertion come from?   Any specific
>>> power numbers here? We test with many combinations of "signal strength
>>> variability" (e.g. deltas range from 0 dBm -  50 dBm) and per different
>>> channel conditions.  This includes power variability within the spatial
>>> streams' MiMO transmission.   It would be helpful to have some physics
>>> combined with engineering to produce some pragmatic limits to this.
>>>
>>> Also, mobile devices have a goal of reducing power in order to be
>>> efficient
>>> with their battery (vs a goal to balance power such that an AP can
>>> receive simultaneously.)  Power per bit usually trumps most other design
>>> goals.  The market for battery-powered wi-fi devices drives a
>>> semi-conductor mfg's revenue, so my information comes with that bias.
>>>
>>> Bob
>>>
>>> On Thu, Jun 23, 2016 at 1:48 PM,  wrote:
>>>
>>> The actual issues of transmitting on multiple channels at the same time
 are quite minor if you do the work in the digital domain (pre-DAC).  You
 just need a higher sampling rate in the DAC and add the two signals
 together (and use a wideband filter that covers all the channels).  No RF
 problem.

 Receiving multiple transmissions in different channels is pretty much the
 same problem - just digitize (ADC) a wider bandwidth and separate in the
 digital domain.  the only real issue on receive is equalization - if you
 receive two different signals at different receive signal strengths, the
 lower strength signal won't get as much dynamic range in its samples.

 But in a LAN setup, the variability in signal strength is likely small
 enough that you can cover that with more ADC bits (or have the MAC
 protocol
 manage the station transmit power so that signals received at the AP are
 nearly the same power.)

 Equalization at transmit works very well when there is a central AP (as
 in
 cellular or normal WiFi systems).



 On Thursday, June 23, 2016 4:28pm, "Bob McMahon" <bob.mcma...@broadcom.com> said:

 ___
> Make-wifi-fast mailing list
> make-wifi-f...@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> An AP per room/area, reducing the tx power (beacon range) has been my
> approach and has scaled very well.   It does require some wires to each AP
> but I find that paying an electrician to run some quality wiring to things
> that are to remain stationary has been well worth the cost.
>
> just my $0.02,
> Bob
>
> On Thu, Jun 23, 2016 at 

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread dpreed




On Thursday, June 23, 2016 4:52pm, "David Lang"  said:

> On Thu, 23 Jun 2016, dpr...@reed.com wrote:
> 
>> The actual issues of transmitting on multiple channels at the same time are
>> quite minor if you do the work in the digital domain (pre-DAC).  You just 
>> need
>> a higher sampling rate in the DAC and add the two signals together (and use a
>> wideband filter that covers all the channels).  No RF problem.
> 
> that works if you are using channels that are close together, and is how the
> current standard wide channels in N and AC work.
> 
> If you try to use channels that aren't adjacent, this is much harder to do.
>
The whole 5 GHz U-NII band is not that wide.  It's easy to find DACs that run 
at 1 Gsps or better. On transmission you don't need to filter the in-between 
bands where you put no energy (or not much).
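Back-of-envelope, in case the sample-rate claim seems hand-wavy (band edges below are my assumption of the U-NII allocations circa 2016, not a precise FCC table):

# Band edges are an assumption; check the current FCC tables before relying on them.
band_lo_ghz, band_hi_ghz = 5.150, 5.850
span_mhz = (band_hi_ghz - band_lo_ghz) * 1000.0
print(span_mhz)               # ~700 MHz end to end
# With complex (I/Q) sampling, a converter running at fs captures roughly fs of
# RF bandwidth, so a 1 Gsps I/Q pair can cover the whole span at once and leave
# the per-channel selection to digital filtering.
fs_msps = 1000.0
print(fs_msps >= span_mhz)    # True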
 
> Remember that the current adjacent channel use goes up to 160MHz wide, going
> wider than that starts getting hard.
> 
>> Receiving multiple transmissions in different channels is pretty much the 
>> same
>> problem - just digitize (ADC) a wider bandwidth and separate in the digital
>> domain.  the only real issue on receive is equalization - if you receive two
>> different signals at different receive signal strengths, the lower strength
>> signal won't get as much dynamic range in its samples.
>>
>> But in a LAN setup, the variability in signal strength is likely small enough
>> that you can cover that with more ADC bits (or have the MAC protocol manage
>> the station transmit power so that signals received at the AP are nearly the
>> same power.)
>>
>> Equalization at transmit works very well when there is a central AP (as in
>> cellular or normal WiFi systems).
> 
> define 'normal WiFi system'
Ones based on access points. In general, in typical WiFi deployments one 
prefers to make smaller cells so that the signal level variation between "near" 
and "far" signals is modest, which makes equalization much easier or even 
optional. If there is a large variation of power received at the access point 
then CSMA is hard to achieve, and the far stations have to run at slow rates, 
occupying more than their fair share of airtime.
(a non-normal system would be a peer-to-peer mesh over a wide enough area that 
you end up with "hidden terminal" issues all over the place)
> 
> It's getting very common for even moderate size houses to need more than one 
> AP
> to cover the entire house.
> 
Agree. No question about that.

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread dpreed
The actual issues of transmitting on multiple channels at the same time are 
quite minor if you do the work in the digital domain (pre-DAC).  You just need 
a higher sampling rate in the DAC and add the two signals together (and use a 
wideband filter that covers all the channels).  No RF problem.
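A minimal numerical sketch of that digital-domain addition, with an assumed composite sample rate, assumed channel offsets, and toy QPSK symbols standing in for real 802.11 modulation:

import numpy as np

fs = 160e6                      # composite DAC sample rate (assumed), Hz
n = 4096
t = np.arange(n) / fs

def qpsk_symbols(k):
    # Random QPSK symbols, magnitude 1: stand-ins for real 802.11 modulation.
    bits = np.random.randint(0, 4, k)
    return np.exp(1j * (np.pi / 4 + bits * np.pi / 2))

# Two ~20 MHz-wide channels (8x oversampled symbols), each mixed to its own
# center-frequency offset inside the DAC's bandwidth.
ch_a = np.repeat(qpsk_symbols(n // 8), 8) * np.exp(2j * np.pi * (-40e6) * t)
ch_b = np.repeat(qpsk_symbols(n // 8), 8) * np.exp(2j * np.pi * (+40e6) * t)

composite = ch_a + ch_b         # "just digital addition" ahead of the DAC

# Headroom check: the worst-case amplitude roughly doubles, i.e. each doubling
# of simultaneous channels costs about one extra DAC bit.
print(np.max(np.abs(composite)) / np.max(np.abs(ch_a)))   # close to 2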

Receiving multiple transmissions in different channels is pretty much the same 
problem - just digitize (ADC) a wider bandwidth and separate in the digital 
domain.  the only real issue on receive is equalization - if you receive two 
different signals at different receive signal strengths, the lower strength 
signal won't get as much dynamic range in its samples.

But in a LAN setup, the variability in signal strength is likely small enough 
that you can cover that with more ADC bits (or have the MAC protocol manage the 
station transmit power so that signals received at the AP are nearly the same 
power.)

Equalization at transmit works very well when there is a central AP (as in 
cellular or normal WiFi systems).



On Thursday, June 23, 2016 4:28pm, "Bob McMahon"  
said:

> ___
> Make-wifi-fast mailing list
> make-wifi-f...@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> An AP per room/area, reducing the tx power (beacon range) has been my
> approach and has scaled very well.   It does require some wires to each AP
> but I find that paying an electrician to run some quality wiring to things
> that are to remain stationary has been well worth the cost.
> 
> just my $0.02,
> Bob
> 
> On Thu, Jun 23, 2016 at 1:10 PM, David Lang  wrote:
> 
>> Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks
>> would be a huge improvement, not many people are using them (yet), and the
>> wide channels let you get a lot of data out at once. If everything is
>> within a good range of the AP, this would work pretty well. If you end up
>> needing multiple APs, or you have many stations, I expect that you will be
>> better off with more APs at lower power, each using different channels.
>>
>> David Lang
>>
>>
>>
>>
>> On Thu, 23 Jun 2016, Bob McMahon wrote:
>>
>> Date: Thu, 23 Jun 2016 12:55:19 -0700
>>> From: Bob McMahon 
>>> To: Dave Taht 
>>> Cc: make-wifi-f...@lists.bufferbloat.net,
>>> "cerowrt-devel@lists.bufferbloat.net"
>>> 
>>> Subject: Re: [Make-wifi-fast] more well funded attempts showing market
>>> demand
>>> for better wifi
>>>
>>>
>>> hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
>>> due to RF issues.   Even if that is somehow resolved, to increase throughput
>>> usually requires some form of channel bonding, i.e. needed on both sides,
>>> and brings in issues with preserving frame ordering.  If this is just
>>> channel hopping, that needs coordination between both sides (and isn't
>>> simultaneous, possibly costing more than any potential gain.)   An AP only
>>> solution can use channel switch announcements (CSA) but there is a cost to
>>> those as well.
>>>
>>> I guess I don't see any breakthrough here and the marketing on the site
>>> seems
>>> to indicate something beyond physics, at least the physics that I
>>> understand.  Always willing to learn and be corrected if I'm
>>> misunderstanding things.
>>>
>>> Bob
>>>
>>> On Wed, Jun 22, 2016 at 10:18 AM, Dave Taht  wrote:
>>>
>>> On Wed, Jun 22, 2016 at 10:03 AM, Dave Taht  wrote:

>
>
 https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi?ref=backerkit

>
> "Portal is the first and only router specifically engineered to cut
> through and avoid congestion, delivering consistent, high-performance
> WiFi with greater coverage throughout your home.
>
> Its proprietary spectrum turbocharger technology provides access to
> 300% more of the radio airwaves than any other router, improving
> performance by as much as 300x, and range and coverage by as much as
> 2x in crowded settings, such as city homes and multi-unit apartments"
>
> It sounds like they are promising working DFS support.
>

 It's not clear what chipset they are using (they are claiming wave2) -
 but they are at least publicly claiming to be using openwrt. So I
 threw in enough to order one for september, just so I could comment on
 their kickstarter page. :)

 I'd have loved to have got in earlier (early shipments are this month
 apparently), but those were sold out.



 https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi/comments



> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org
>



 --
 Dave Täht
 Let's go make home routers and wifi faster! 

Re: [Cerowrt-devel] trying to make sense of what switch vendors say wrt buffer bloat

2016-06-10 Thread dpreed

Just today I found out that a datacenter my company's engineering group is 
expanding into is putting us on Arista 7050's. And our very preliminary tests 
of our systems there are showing what seems to be a latency problem under load. 
 I can't get in the way of the deployment process, but it's 
interesting/worrying that "big buffers" are there in the middle of our system, 
which is highly latency sensitive.

I may also need a diagnostic test that would detect the potential occurrence of 
bufferbloat within a 10 GigE switch, now.  Our software layers are not prepared 
to self-diagnose at the ethernet layer very well.

My thought is to use an ethernet ping while our system is loaded.  (our 
protocol is at the Ethernet layer, no IP stack). Anyone have an idea of the 
simplest way to do that?
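One possible sketch, assuming Linux raw packet sockets (root/CAP_NET_RAW) and a peer on the far side of the switch that simply echoes these frames back; the echo responder, interface name, and experimental EtherType are my assumptions, not something our stack provides:

import socket, struct, time

IFACE = "eth0"                       # assumed interface name
ETHERTYPE = 0x88B5                   # IEEE 802 "local experimental" EtherType
DST = bytes.fromhex("ffffffffffff")  # replace with the peer's MAC address

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETHERTYPE))
s.bind((IFACE, 0))
s.settimeout(1.0)
src = s.getsockname()[4]             # our own MAC address

frame = DST + src + struct.pack("!H", ETHERTYPE) + b"latency-probe" + bytes(50)

for _ in range(100):
    t0 = time.monotonic()
    s.send(frame)
    try:
        while True:
            reply = s.recv(2048)
            if reply[6:12] != src:   # packet sockets also see our own outgoing
                break                # frame, so skip it and wait for the echo
        print("rtt %.1f us" % ((time.monotonic() - t0) * 1e6))
    except socket.timeout:
        print("timeout")
    time.sleep(0.01)

Run it while the production load is going through the same switch port and watch for the RTT tail growing; the reported numbers include host stack jitter on both ends, so only the under-load delta is meaningful.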

On Tuesday, June 7, 2016 1:51pm, "Eric Johansson"  said:



 

On 6/6/2016 10:58 PM, dpr...@reed.com wrote:
Even better, it would be fun to get access to an Arista switch and some high 
performance TCP sources and sinks, and demonstrate extreme bufferbloat compared 
to a small-buffer switch.  Just a demo, not a simulation full of assumptions 
and guesses.
 I'm in the middle of a server room/company move.  I can make available a 
XSM4348S NETGEAR M4300-24X24F, and probably an Arista 7050T-52 for a short time 
frame as part of my "testing".  Tell me what you need for a test setup, give me 
a script I can run and where I should send the results.  I really need a cut 
and paste test because I have no time to think about anything more than the 
move.

 thanks.

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] trying to make sense of what switch vendors say wrt buffer bloat

2016-06-06 Thread dpreed

Even better, it would be fun to get access to an Arista switch and some high 
performance TCP sources and sinks, and demonstrate extreme bufferbloat compared 
to a small-buffer switch.  Just a demo, not a simulation full of assumptions 
and guesses.
 
RRUL, basically.
 


On Monday, June 6, 2016 10:52pm, dpr...@reed.com said:



So did anyone write a response debunking their paper?   Their NS-2 simulation 
is most likely the erroneous part of their analysis - the white paper would not 
pass a review by qualified referees because there is no way to check their 
results and some of what they say beggars belief.
 
Bechtolsheim is one of those guys who can write any damn thing and it becomes 
"truth" - mostly because he co-founded Sun. But that doesn't mean that he can't 
make huge errors - any of us can.
 
The so-called TCP/IP Bandwidth Capture effect that he refers to doesn't sound 
like any capture effect I've ever heard of.  There is an "Ethernet Capture 
Effect" (which is cited), which is due to properties of CSMA/CD binary 
exponential backoff, not anything to do with TCP's flow/congestion control.  So 
it has that "truthiness" that makes glib people sound like they know what they 
are talking about, but I'd like to see a reference that says this is a property 
of TCP!
 
What's interesting is that the reference to the Ethernet Capture Effect in that 
white paper proposes a solution that involves changing the backoff algorithm 
slightly at the Ethernet level - NOT increasing buffer size!
 
Another thing that would probably improve matters a great deal would be to 
drop/ECN-mark packets when a contended output port on an Arista switch develops 
a backlog.  This will throttle TCP sources sharing the path.
 
The comments in the white paper that say that ACK contention in TCP in the 
reverse direction are the problem that causes the "so-called TCP/IP Bandwidth 
Capture effect" that is invented by the authors appears to be hogwash of the 
first order.
 
Debunking Bechtolsheim credibly would get a lot of attention to the bufferbloat 
cause, I suspect.
 


On Monday, June 6, 2016 5:16pm, "Ketan Kulkarni"  said:



some time back they had this whitepaper -
"Why Big Data Needs Big Buffer Switches"

http://www.arista.com/assets/data/pdf/Whitepapers/BigDataBigBuffers-WP.pdf
the type of apps they talk about is big data, hadoop etc


On Mon, Jun 6, 2016 at 11:37 AM, Mikael Abrahamsson <swm...@swm.pp.se> wrote:
On Mon, 6 Jun 2016, Jonathan Morton wrote:

At 100ms buffering, their 10Gbps switch is effectively turning any DC it’s 
installed in into a transcontinental Internet path, as far as peak latency is 
concerned.  Just because RAM is cheap these days…

Nono, nononononono. I can tell you they're spending serious money on inserting 
this kind of buffering memory into these kinds of devices. Buying these devices 
without deep buffers is a lot lower cost.

 These types of switch chips either have on-die memory (usually 16MB or less), 
or they have very expensive (a direct cost of lowered port density) off-chip 
buffering memory.

 Typically you do this:

 ports ---|------|
 ports ---|      |
 ports ---| chip |
 ports ---|------|

 Or you do this

 ports ---|------|---buffer
 ports ---| chip |---TCAM
          |------|

 or if you do a multi-linecard-device

 ports ---|------|---buffer
          | chip |---TCAM
          |------|
             |
       switch fabric

 (or any variant of them)

 So basically if you want to buffer and if you want large L2-L4 lookup tables, 
you have to sacrifice ports. Sacrifice lots of ports.

 So never say these kinds of devices add buffering because RAM is cheap. This 
is most definitely not why they're doing it. Buffer memory for them is 
EXTREMELY EXPENSIVE.

 -- 
 Mikael Abrahamsson    email: swm...@swm.pp.se
___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] [Babel-users] perverse powersave bug with sta/ap mode

2016-04-28 Thread dpreed
Discovery is a special case, that is not quite multicast. Discovery is 
"noticing".  A node wishing to be discovered must be noticed by one (or maybe 
more) already existent stations in a group (groups are noticed by any member 
being noticed by a member of another group).

So you don't need any facility to "reach all" in one message.  It's sufficient 
to "reach any". From that point on, it's a higher level problem of "association 
management" (tracking members and their reachability).  I use these general 
terms in quotes to step outside the frame limited to 802.11 and its rigid 
culture.

So the key to discovery is *anycast* not multicast.

So for example, a station that is not yet associated could follow some 
predictable sequence of transmissions, using a variety of MI transmissions 
(multiple input, i.e. multiple antennas transmitting simultaneously) with a 
variety of waveforms, where that sequence was determined to have a high 
probability of being noticed by at least one member of the group to be joined. 
A station noticing such a signal could then use the signal's form itself to 
respond and begin to bring that station into the group of stations that can 
hear each other, discovering further information (like mutual propagation 
characteristics (multipath/MIMO coefficients, attenuation (for equalization), 
noise)).

By conflating discovery with multicast, one loses design options for discovery 
and cooperative transmission. So yes, the "normative" centralized access point 
discovery now practiced in 802.11 nets assumes a sort of "multicast", but that 
is because we have "centralized" architectures, not mesh at the phy level.

On Thursday, April 28, 2016 9:43am, "Toke Høiland-Jørgensen"  
said:

> Juliusz Chroboczek  writes:
> 
>> For discovery, multicast is unavoidable -- there's simply no way you're
>> going to send a unicast to a node that you haven't discovered yet.
> 
> Presumably the access point could transparently turn IP-level multicast
> into a unicast frame to each associated station? Not sure how that would
> work in an IBSS network, though... Does the driver (or mac80211 stack)
> maintain a list of neighbours at the mac/phy level?
> 
> -Toke
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] perverse powersave bug with sta/ap mode

2016-04-28 Thread dpreed
Interesting stuff.  A deeper problem with WiFi-type protocols is that the very 
idea of "multicast" on the PHY level (air interface) is flawed, based on a 
model of propagation that assumes that every station can be easily addressed 
simultaneously, at the same bitrate, etc. Multicast is seductive to designers 
who ignore the realities of propagation and channel coding issues, because they 
think it works one way, but the reality is quite different.

So just as years were wasted in the RTP and media streaming world on 
router/switch layer multicast (thought to be easy and more efficient), my 
personal opinion is that any wireless protocol that tries to solve problems 
with multicast at the PHY layer is a fragile, brittle design that will waste 
years of effort trying to make the horse dance on its forelegs.

The list of issues is enormous, but the most obvious ones are a) equalization, 
b) inability to use MIMO, and c) PHY layer acknowledgment complexity.

The usual argument is that in some special case circumstance, using multicast 
is "optimal".  But how much better is that "optimal" than the non-multicast 
general solution, and how does that "optimization" make the normal operation 
worse, in common conditions?
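A back-of-envelope version of that comparison, with made-up but plausible rates and all MAC/PHY overhead ignored:

payload_bits = 1500 * 8
basic_rate_mbps = 6.0        # typical multicast/basic rate on 5 GHz (assumed)
unicast_rate_mbps = 150.0    # plausible per-station unicast rate (assumed)
multicast_airtime_us = payload_bits / basic_rate_mbps
for n_stations in (2, 5, 10, 25):
    unicast_airtime_us = n_stations * payload_bits / unicast_rate_mbps
    print(n_stations, multicast_airtime_us, unicast_airtime_us)
# With these numbers the unicast copies stay cheaper until roughly
# n_stations ~ unicast_rate / basic_rate = 25 receivers.

So even the "obvious" efficiency win of a single multicast frame only shows up with a fairly large group of receivers, and it evaporates entirely once you account for the lost MIMO gain and per-station acknowledgment.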

Whenever someone says that a "cross layer optimization" or a complicated 
special case added into a robust design is "optimal", I check that my wallet is 
still in my pocket.  Because "optimal" is a magic word often used to distract 
one's attention from what really matters.

So "multicast" considered harmful is my view.


On Tuesday, April 26, 2016 7:27pm, "Aaron Wood"  said:

> ___
> Make-wifi-fast mailing list
> make-wifi-f...@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast
> Has anyone modeled what the multicast to multiple-unicast efficiency
> threshold is?  The point where you go from it being more efficient to send
> multicast traffic to individual STAs instead of sending a monstrous (in
> time) multicast-rate packet?
> 
> 2, 5, 10 STAs?
> 
> The per-STA-queue work should make that relatively easy, by allowing the
> packet to be dumped into each STA's queue...
> 
> -Aaron
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [bufferbloat-fcc-discuss] [Make-wifi-fast] arstechnica confirmstp-link router lockdown

2016-03-14 Thread dpreed
Well, the answer is basically no on there being a current chip that is only 
concerned with the PHY layer. Because chip count is a crucial part of system 
cost, years ago it became the practice to put the PHY and MAC, and now the 
protocol processing too, on the same mixed-signal chip. For example the ESP8266 
(the IoT favorite WiFi maker gadget) has a 32 bit general purpose processor and 
all of the elements of basic MAC and PHY on the chip. You can program the 
ESP8266 and use its WiFi just fine.

But I would argue that it is no longer true that division by a hardware 
interface routed through the PC board need be the solution.  Modularity on 
board the chip (as in the ESP8266) is sufficient to do what I described - the 
PHY layer can isolated (even if it is implemented in updatable firmware) if the 
firmware that controls the transmission DAC is isolated, locked down, and 
enforces the band limit and power limit required by the FCC. It can even be 
updatable, as long as it is protected with an adequate barrier to consumer 
modification. It need not be secret - in fact it would be better if it could be 
reviewed by those concerned with ensuring it will actually limit the output.

Now in other radios, less integrated, there are separate chips for the transmit 
DAC path from the protocol path.  I use those chips in my experimental Part 97 
transceivers on 2.4 and 5 GHz, and there are chips from MAXIM and Analog 
Devices for example, that might be appropriate if you want to design your own 
router, however they are never going to be part of consumer devices because 
using them is costly. If you want to enforce a filter, you can do that fairly 
easily. But to implement full 802.11ac OFDM + MIMO would be a bear of a DSP 
program to build from scratch for these chips, though in principle they are 
capable of doing that.

But in the fully integrated designs, there is enough modularity *on-chip* that 
one could make sure that there are no signals emitted that are outside the 
relevant band and power limits.  (one might even just do this by detecting that 
limits are exceeded and powering off the transmit amplifier, which would 
probably make the FCC even more happy... the algorithm to detect excess power 
or out-of-band emissions could be quite simple, compared to a filter spliced in 
the signal path).

An external "limit-exceeding signal detector" could also be very inexpensive, 
if it did not need to do ADC from the transmitted signal, but could get access 
to the digital samples and do a simple power measurement.
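A rough sketch of the kind of check such a detector could run on the digital samples (entirely my own illustration; the sample rate, band edges, and thresholds are arbitrary):

import numpy as np

def limits_exceeded(iq, fs_hz, band_lo_hz, band_hi_hz,
                    max_total_power, max_oob_fraction):
    """Flag a block of transmit IQ samples whose total power or out-of-band
    energy exceeds the configured limits (illustrative thresholds only)."""
    spectrum = np.fft.fftshift(np.fft.fft(iq))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / fs_hz))
    power = np.abs(spectrum) ** 2
    total = power.sum()
    in_band = power[(freqs >= band_lo_hz) & (freqs <= band_hi_hz)].sum()
    oob_fraction = 1.0 - in_band / total if total > 0 else 0.0
    return total > max_total_power or oob_fraction > max_oob_fraction

# Example: a clean in-band tone versus one with a spur near the band edge,
# sampled at an assumed 80 Msps complex rate.
fs = 80e6
t = np.arange(8192) / fs
clean = np.exp(2j * np.pi * 5e6 * t)                  # tone well inside the band
dirty = clean + 0.5 * np.exp(2j * np.pi * 35e6 * t)   # spur outside +/- 10 MHz
print(limits_exceeded(clean, fs, -10e6, 10e6, 1e8, 0.01))  # False (expected)
print(limits_exceeded(dirty, fs, -10e6, 10e6, 1e8, 0.01))  # True  (expected)

On silicon this would be a running power accumulator and a coarse filter bank rather than an FFT, but the decision rule is the same: compare in-band and out-of-band energy against fixed limits and kill the PA if either is exceeded.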






On Monday, March 14, 2016 12:16pm, "Wayne Workman" 
 said:

> Is there an existing chip that is only concerned with layer 1?
> On Mar 14, 2016 9:15 AM, "Jonathan Morton"  wrote:
> 
>>
>> > On 14 Mar, 2016, at 16:02, dpr...@reed.com wrote:
>> >
>> > The WiFi protocols themselves are not a worry of the FCC at all.
>> Modifying them in software is ok. Just the physical emissions spectrum must
>> be certified not to be exceeded.
>> >
>> > So as a practical matter, one could even satisfy this rule with an
>> external filter and power limiter alone, except in part of the 5 GHz band
>> where radios must turn off if a radar is detected by a specified algorithm.
>> >
>> > That means that the radio software itself could be tasked with a
>> software filter in the D/A converter that is burned into the chip, and not
>> bypassable. If the update path requires a key that is secret, that should
>> be enough, as key based updating is fine for all radios sold for other uses
>> that use digital modulation using DSP.
>> >
>> > So the problem is that 802.11 chips don't split out the two functions,
>> making one hard to update.
>>
>> To put this another way, what we need is a cleaner separation of ISO
>> Layers 1 (physical) and 2 (MAC).
>>
>> The FCC is concerned about locking down Layer 1 for RF compliance.  We’re
>> concerned with keeping Layer 2 (and upwards) open for experimentation and
>> improvement.
>>
>> These are compatible goals, at the fundamental level, but there is a
>> practical problem with existing implementations which mix the layers
>> inappropriately.
>>
>>  - Jonathan Morton
>>
>> ___
>> bufferbloat-fcc-discuss mailing list
>> bufferbloat-fcc-disc...@lists.redbarn.org
>> http://lists.redbarn.org/mailman/listinfo/bufferbloat-fcc-discuss
>>
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] hardware from hell

2016-03-05 Thread dpreed
My Jetway NU93-2930 board is a NUC with dual Ethernet.  It's a bare board, but 
I made my own case quickly enough out of acrylic sheet using my bandsaw and 
glue. Works fine.  Takes a wall wart power supply as long as it delivers 
between 12 V and 36 V.

I recommend it highly.  Jetway probably sells it in a case, as well, but I 
generally don't like their superexpensive cases.

I added an mSATA drive (which mounts into the board) and RAM.  That's all you 
need for a router with two GigE ports.  There's also a slot under the mSATA 
slot for whatever mini-PCIe WLAN card you want - I'm not sure yet what I want 
to put on it. Any suggestions for an 802.11ac-capable card with either 5 GHz-only 
or dual-band capability?




On Thursday, March 3, 2016 9:16pm, "Dave Täht"  said:

> I am A) still fiddling with alternate web site generators and B) just
> finished writing up (grousing) about all the hardware I just tried to
> make work.
> 
> http://the-edge.taht.net/post/hardware_from_hell/
> 
> I am about to tear apart the dual ethernet nuc we discussed here, again,
> swapping out everything in it to see if I can get it to work. I fear I
> fried it by trying to compile a kernel on it, or something
> 
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] odroid C1+ status

2016-03-05 Thread dpreed
I have a Banana Pi, but I can't imagine why it would be useful as a router.
Why would Comcast even bother?  A Raspberry Pi 3 would be better, and far more 
available. (though I think the slow Ethernet port and low end WiFi on the 
Raspberry Pi 3 would make it sort of marginal, it's certainly quite fine for a 
low-end OpenWRT machine if you want to live at 50 Mb/sec)



On Saturday, March 5, 2016 3:23pm, "Dave Taht"  said:

> wow, thx for all the suggestions on alternate x86 router hardware... I
> will read more later.
> 
> Would using a blog format for things like the following work better
> for people? I could more easily revise, including graphics, etc,
> etc... could try to hit on our hot buttons (upgradability, bloat,
> reliability, kernel versions, manufacturer support) with some sort of
> grading system...
> 
> http://the-edge.taht.net/post/odroid_c1_plus/ in this case
> 
> ...
> 
> I got the odroid C1+ to work better. (either a cable or power supply
> issue, I swapped both). On output it peaks at about 416Mbits with 26%
> of cpu being spent in a softirq interrupt.  On input I can get it to
> gbit, with 220% of cpu in use.
> 
> The rrul tests were pretty normal, aside from the apparent 400mbit
> upload limit causing contention on rx/tx (at the moment I have no good
> place to put these test results since snapon is now behind a firewall.
> I'd like to get more organized about how we store and index these
> results also)
> 
> There is no BQL support in the odroid driver for it, and it ships with
> linux 3.10.80. At least it's an LTS version. I am totally unfamiliar
> with the odroid ecosystem but maybe there is active kernel dev on it
> somewhere?
> 
> (The pi 2, on the other hand, is kernel 4.1.17-v7 AND only has a
> 100mbit phy, so it is hard to complain about only getting 400mbit from
> the odroid c1+, but, dang it, a much later kernel would be nice in the
> odroid)
> 
> My goal in life, generally, is to have a set of boxes with known
> characteristics to drive tests with, that are reliable enough to setup
> once and ignore.
> 
> A) this time around, I definitely wanted variety, particularly in tcp
> implementations, kernel versions, ethernet and wifi chips - as it
> seemed like drawing conclusions from "perfect" drivers like the e1000e
> all the time was a bad idea. We have a very repeatable testbed in
> karlstad, already - I'm interested in what random sort of traffic can
> exist on a home network that messes life up.
> 
> One of the things I noticed while using kodi is that the box announces
> 2k of multicast ipv4 packets every 30 seconds or so on the upnp
> port... AND over 4k of multicast ipv6 packets, if ipv6 is enabled.
> 
> B) Need to be able to drive 802.11ac as hard as possible with as many
> stations as possible.
> 
> C) needs to be low power and quiet (cheap is good too!)
> 
> Has anyone tried the banana pi? That's what comcast is using in their 
> tests
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] better service discovery

2016-01-26 Thread dpreed
There's a paper from Cambridge University that focuses on evaluating Raft.  In 
particular they have some key findings about performance tuning, plus 
discovering some potential livelocks.

I'm interested in Consul on a planet-wide scale - not sure it scales 
effectively but if it or something like it can be made to, I have a really 
revolutionary use for it that I've been exploring.  So I will be playing with 
it - like to see if it can survive attacks in a non-friendly environment (not a 
datacenter) as well.

I don't know of any projects that would have experience with it in production, 
at least not yet.



On Tuesday, January 26, 2016 2:07am, "Aaron Wood"  said:

> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> Consul is based on Raft, so anyone using Consul is using Raft.
> 
> (and we're poking around at it at my company, but I don't have any insight
> to give on it, yet).  But in general, I also like distributed redundancy
> (as opposed to primary/backup redundancy).
> 
> -Aaron
> 
> On Mon, Jan 25, 2016 at 9:40 AM, Dave Täht  wrote:
> 
>> While at last week's scale conference I ran across a guy doing
>> interesting things in tinc. One of the things he'd pointed out was the
>> general availability of service discovery options using a very flexible
>> many master/client protocol called "raft" - including using it as a dns
>> substitute in his environment.
>>
>> https://raft.github.io/
>>
>> I like things that have redundancy and distributed state. Has anyone
>> been using this in any scenario?
>> ___
>> Cerowrt-devel mailing list
>> Cerowrt-devel@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] first 802.11ad appearance

2016-01-20 Thread dpreed

I'm trying to imagine what its intended market is.  Open factory/warehouse 
floor networking?  The atmospheric absorption of that band is a problem, since 
oxygen's absorption peak is at 60 GHz.  The ability to use multipath 
constructively due to the number of antennas (BLAST style MIMO) may help.  But 
this is worth thinking about:

http://faculty.poly.edu/~tsr/Publications/%20ICC_2012.pdf
 


On Wednesday, January 20, 2016 1:36pm, "Outback Dingo"  
said:







On Wed, Jan 20, 2016 at 7:27 PM, Dave Täht <d...@taht.net> wrote:
It would be so nice, of course, if this was open source from the getgo.

http://arstechnica.com/gadgets/2016/01/tp-link-unveils-worlds-first-802-11ad-wigig-router/
Sweet! nice design also, ask the for the source :) maybe its GPL who knows.
 
 ___
 Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] FCC: We aren’t banning DD-WRT on Wi-Fi routers

2015-11-12 Thread dpreed
It's a start. (I'm optimistic that there is room to move the ball farther and 
that the FCC folks are willing to listen, though they didn't directly address 
concerns about the closed, buggy, insecure quality of the factory software as 
being important and in the public interest).



On Thursday, November 12, 2015 5:22pm, "Jim Reisert AD1C" 
 said:

> http://arstechnica.com/information-technology/2015/11/fcc-we-arent-banning-dd-wrt-on-wi-fi-routers/
> 
> From the article:
> 
> Today, the FCC issued an updated version of the guidance that strips
> out the DD-WRT reference. Instead, it now says:
> 
>Describe, if the device permits third-party software or firmware
> installation, what mechanisms
>are provided by the manufacturer to permit integration of such
> functions while ensuring that
>the RF parameters of the device cannot be operated outside its
> authorization for operation
>in the US. In the description include what controls and/or
> agreements are in place with
>providers of third-party functionality to ensure the devices’
> underlying RF parameters are
>unchanged and how the manufacturer verifies the functionality.
> 
> FCC Engineering and Technology Chief Julius Knapp also wrote a blog
> post titled, "Clearing the Air on Wi-Fi Software Updates."
> 
> https://www.fcc.gov/blog/clearing-air-wi-fi-software-updates
> 
> 
> --
> Jim Reisert AD1C, , http://www.ad1c.us
> ___
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
> 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Editorial questions for response to the FCC

2015-10-02 Thread dpreed

This is good. Regarding my concern about asking for mandated FCC-centered 
control - we need less of that. It establishes a bad precedent for the means of 
"protecting the airwaves".  (or reinforces a precedent that must be changed 
soon - of establishing de-facto, anti-innovation monopolies that derive from 
regulatory mandates). And it creates a problematic "us vs. them" debate where 
there is none.
 
A vast majority of folks want interoperability of all kinds of communications, 
and that includes sharability of the wireless medium.  That means finding 
approaches for coexistence and innovation, without mandates for the 
technological solution based on a particular implementation of hardware or 
systems architecture. An evolutionary approach.
 
So we don't have to specify alternative rules in detail.  And by going into 
detail too much, we only lose support.
 
What we have to demonstrate is that there is *at least one* better approach, while 
also pointing out what will be lost with the proposed approach in the NPRM.  
Most of the letter focuses on these, quite correctly, but it might be useful to 
clarify those two lines of argument by an editorial pass.  The approach 
suggested in the letter should be presented as open to further improvement that 
achieves the shared goals.
 
This is why emphasizing "mandates" and punishments is probably 
counterproductive. The goals that stand behind the proposed mandates should be 
emphasized, the mandates left to be developed in detail based on those goals.
 
The battle here is part of a longer term, necessary reframing of regulatory 
practice.  Current regulation is based on means rather than ends, and doesn't 
consider what is technologically possible. And it is centered on control of 
what capabilities are delivered in fixed function devices, rather than in how 
those devices are used or how they integrate into systems. As an absurd 
example, we certainly would like to prevent people from being electrocuted.  
But does that mean we must specify how an electric vehicle is built down to the 
voltages of batteries used and what kind of switching transistors must be used 
in the power supply?  Or that the only battery charging that is allowed must be 
done at stations owned by the vendor of the vehicle?
 
That longer term reframing must take into account the characteristics of the 
trillions of wireless nodes that will be deployed in the next 25 years.  
There's been no study of that by the FCC (though some of us have tried, as I 
did at the TAC when I was on it) at all.
 
That can't be accomplished in this NPRM.  The battle is to prevent this 
regulation from being deployed, because it is a huge step backwards.  
Fortunately, we *don't* need to rewrite the regulation, so quickly.
 
We need to argue that they need to go back to the drawing board with a new 
perspective.  The approach proposed should focus that effort, but it need not 
be adopted now.  It might turn out even worse...  So the goal is to stop the 
NPRM.
 


On Friday, October 2, 2015 10:22am, "Rich Brown"  said:



> Folks,
> 
> I have screwed up my nerve to take an editorial pass over the document. It 
> has a
> lot of good information and many useful citations, but it needs focus to be
> effective.
> 
> As I read through (yesterday's) draft of the document and the comments, I 
> came up
> with observations to confirm and questions that I want to understand before
> charging ahead.
> 
> Observations:
> 
> 1) Unfortunately, we are coming to this very late: if I understand the 
> timeline,
> the FCC proposed these rules a year ago, the comment period for the NPRM 
> closed 2
> months ago, and we only got an extra month's extension of the deadline because
> their computer was going to be down on the original filing date. (Note - that
> doesn't challenge any of our points' validity, only that we need to be very
> specific in what we say/ask for.)
> 
> 2) The FCC will view all this through the lens of "Licensed use has priority 
> for
> spectrum over unlicensed." That's just the rules. Any effort to say they 
> should
> change their fundamental process will cause our comments to be disregarded.
> 
> 3) The *operator* (e.g., homeowner) is responsible for the proper operation 
> of a
> radio. If the FCC discovers your home router is operating outside its allowed
> parameters *you* must (immediately?) remediate it or take it off the air.
> 
> 4) We must clearly and vigorously address the FCC admonishment to "prevent
> installing DD-WRT"
> 
> 5) [Opinion] I share dpreed's concern that the current draft overplays our 
> hand,
> requesting more control/change than the FCC would be willing to allow. See
> Question 7 below for a possible alternative.
> 
> Questions:
> 
> 1) What is our request? What actions would we like the FCC to take?
> 
> 2) How much of a deviation from their current rules (the ones we're 
> commenting on)
> are we asking them to embrace?
> 
> 3) How much dust 

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-08 Thread dpreed

David - I find it interesting that you think I am an idiot.  I design waveforms 
for radios, and am, among other things, a fully trained electrical engineer 
with deep understanding of information theory, EM waves, propagation, etc. as 
well as an Amateur Radio builder focused on building experimental radio network 
systems in the 5 GHz and 10 GHz Amateur Radio bands.
 
I know a heck of a lot about 802.11 PHY layer and modulation, propagation, 
etc., and have been measuring the signals in my personal lab, as well as having 
done so when I was teaching at MIT, working on cooperative network diversity 
protocols (physical layers for mesh cooperation in digital networks).
 
And I was there with Metcalfe and Boggs when they designed Ethernet's PHY and 
MAC, and personally worked on the protocol layers in what became the Token Ring 
standard as well - so I understand the backoff and other issues associated with 
LANs.  (I wrote an invited paper in IEEE Proceedings, "An Introduction to Local 
Area Networks", that appeared in the same special issue as the Cerf and Kahn 
paper entitled "A Transmission Control Protocol" that described the first 
Internet protocol concept.)
 
I guess what I'm saying is not that I'm always correct - no one is, but I would 
suggest that it's worth considering that I might know a little more than most 
people about some things - especially the physical and MAC layers of 802.11, 
but also about the internal electronic design of radio transceivers and digital 
interfaces to them. From some of your comments below, I think you either 
misunderstood my point (my fault for not explaining it better) or are 
misinformed.

There's a lot of folklore out there about radio systems and WiFi that is 
quite wrong, and you seem to be quoting some of it - e.g. the idea that the 1 
Mb/s waveform of 802.11b DSSS is somehow more reliable than the lowest-rate 
OFDM modulations, which is often false.  The 20 MHz-wide M0 modulation with 
800ns GI gives 6.2 Mb/s and typically much more reliable than than the 802.11b 
standard 1 Mb/sec DSSS signals in normal environments, with typical receiver 
designs. It's not the case that beacon frames are transmitted at 1 Mb/sec. - 
that is only true when there are 802.11b stations *associated* with the access 
point (which cannot happen at 5 GHz). Nor is it true that the preamble for ERP 
frames is wastefully long. The preamble for an ERP (OFDM operation) frame is 
about 6 microseconds long, except in the odd case on 2.4GHz of 
compatibility-mode (OFDM-DSSS) operation, where the DSSS preamble is used.   
The DSSS preamble is 72 usec. long, because 72 bits at 1 Mb/sec takes that 
long, but the ERP frame's preamble is much shorter.
 
In any case, my main points were about the fact that channel estimation is 
the key issue in deciding on a modulation to use (and MIMO settings to use), 
and the problem with that is that channels change characteristics quite quickly 
indoors! A spinning fan blade can create significant variation in the impulse 
response over a period of a couple milliseconds.  To do well on channel 
estimation to pick a high data rate, you need to avoid a backlog in the 
collection of outbound packets on all stations - which means minimizing queue 
buildup (even if that means sending shorter packets, getting a higher data rate 
will minimize channel occupancy).
 
Long frames make congested networks work badly - ideally there would only be 
one frame ready to go when the current frame is transmitted, but the longer the 
frame, the more likely more than one station will be ready, and the longer the 
frames will be (if they are being combined).  That means that the penalty due 
to, and frequency of, collisions where more than one frame are being sent at 
the same time grows, wasting airtime with collisions.  That's why CTS/RTS is 
often a good approach (the CTS/RTS frames are short, so a collision will be 
less wasteful of airtime).  But due to preamble size, etc., CTS/RTS can't be 
very short, so an alternative hybrid approach is useful (assume that all 
stations transmit CTS frames at the same time, you can use the synchronization 
acquired during the CTS to mitigate the need for a preamble on the packet sent 
after the RTS).   (One of the papers I did with my student Aggelos Bletsas on 
Cooperative Diversity uses CTS/RTS in this clever way - to measure the channel 
while acquiring it).
 
 
 

On Friday, August 7, 2015 6:31pm, David Lang da...@lang.hm said:



 On Fri, 7 Aug 2015, dpr...@reed.com wrote:
 
  On Friday, August 7, 2015 4:03pm, David Lang da...@lang.hm said:
 
 
  Wifi is the only place I know of where the transmit bit rate is going to
 vary
  depending on the next hop address.
 
 
  This is an interesting core issue. The question is whether additional
  queueing helps or hurts this, and whether the MAC protocol of WiFi deals 
  well
  or poorly with this issue. It is clear that this is a peculiarly WiFi'ish
  issue.
 
  It's not clear that the best 

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-04 Thread dpreed

On Monday, August 3, 2015 8:13pm, David Lang da...@lang.hm said:
 

 That requires central coordination of the stations. Something we don't have in
 wifi. Wifi lives and dies with 'listen for a gap, try transmitting, and if you
 collide, backoff a random period'


Central coordination is not the only form of coordination... there are 
perfectly fine decentralized coordination schemes that do better than LBT. 
Depends on your definition of 802.11, but I did point out that the MAC layer 
could be a lot better, and internode coordination can be both decentralized and 
far more power efficient, in principle. It's important to realize that the 
preparation of an OFDM modulated waveform can be pipelined, so that a 
transmitter can have the physical waveform built (via DFT, etc.) while 
waiting for its time to go.  And the collision resolution can and should be 
an arbitration process that starts before the current packet in the air is 
finished.

What prevents this is unnecessary legacy compatibility - making high speed 
modulated packets suffer because there are still stupid 2 Mb/sec. 802.11b 
devices on the 2.4 GHz band.  There are ways to coexist with legacy systems 
that are better than transmitting the prefix on the front of every packet (you 
can transmit a fake 802.11b prefix that will lock out the 2.4 GHz competitors 
for a period of time when many turbo stations occupy the air using better 
cooperating physical layer methods, as a conceptually trivial example).
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-07-31 Thread dpreed

Hardware people tend to think about queues way too much in general.  Queues 
should be almost never occupied.  That causes the highest throughput possible.  
And getting there is simple: push queueing back to the source.
 
The average queue length into a shared medium should be as close to zero as 
possible, and the variance should be as close to zero as possible.  This is why 
smaller packets are generally better (modulo switching overhead).
 
The ideal network is a network that maintains what I call a ballistic phase.  
(like a perfect metallic phase in a conductive material).
 
It's easy to prove (as Kleinrock recently did with a student) that a network 
working optimally will have an average queue length everywhere that is less 
than 1 packet.
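For flavor, here is the textbook M/M/1 version of that kind of result (my illustration; I don't know which model the proof actually uses): Kleinrock's "power" metric, throughput divided by delay, peaks at 50% load, where the average number of packets in the system is one and the queue proper holds only half a packet.

import numpy as np

mu = 1.0                             # service rate (packets per unit time)
rho = np.linspace(0.01, 0.99, 9801)  # offered load
lam = rho * mu                       # arrival rate
delay = 1.0 / (mu - lam)             # M/M/1 mean time in system
power = lam / delay                  # Kleinrock's "power" = throughput / delay
rho_star = rho[np.argmax(power)]
print(rho_star)                          # ~0.5
print(rho_star / (1.0 - rho_star))       # mean packets in system at optimum: ~1.0
print(rho_star ** 2 / (1.0 - rho_star))  # mean queue excluding service: ~0.5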
 
I think that is achievable, *even if there is a WiFi network in the middle*, by 
thinking about the fact that the shared airwaves in a WiFi network behaves like 
a single link, so all the queues on individual stations are really *one queue*, 
and that the optimal behavior of that link will be achieved if there is at most 
one packet queued at a time.
 
The problem with hardware folks and link folks is that they conflate the link 
with the network - two very different things.  The priority (if there is any) 
should be resolved by pushing back at the source, NOT by queueing low priority 
traffic inside the network!

If you think deeply about this, it amounts to a distributed priority-managed 
source-endpoint-located queuing strategy.  That is not actually hard to think 
about - when packets are dropped/ECN'd, the node that does the dropping knows a 
lot about the other competing traffic - in particular, it implicitly reflects 
some information about the existence of competing traffic to the source/dest 
pair (and in ECN, that can be rich information, like the stated urgency of the 
competing traffic).  Then the decision about retransmitting can be pushed to 
the sources, with a lot of information about what's competing in the congested 
situation.
 
This is *far* better than leaving a lot of low priority stuff clogging the 
intermediate nodes.

So ignore the hardware folks who can't think about the fact that their link is 
embedded in a context that the link doesn't understand at all!   Don't let them 
convince you to queue things, especially lower priority things - instead 
push congestion back to the source!!!
 
I know it is really, really productive of *research papers* to try to make a 
DSCP-based switching decision inside the network.  But it is totally 
ass-backwards in the big picture of an Internet.


On Thursday, July 30, 2015 11:27pm, Sebastian Moeller moell...@gmx.de said:



 Hi Jonathan,
 
 
 On July 30, 2015 11:56:23 PM GMT+02:00, Jonathan Morton
 chromati...@gmail.com wrote:
 Hardware people tend to think in terms of simple priority queues, much
 like
 old fashioned military communications (see the original IP precedence
 spec). Higher priority thus gets higher throughput as well as lower
 latency.
 
 I note also that in 802.11e, leftover space in a TXOP can't be (or at
 least
 generally isn't) used opportunistically for traffic from another class,
 because the four queues are so rigidly separated.
 
 I think the hardware people are shortsighted in this respect. It's so
 easy
 to game simple priority queues when there's no filter on the field
 controlling it. That's why cake's Diffserv layer works the way it does.
 And
 if I ever get the chance to do a Wi-Fi specific version, I'll avoid
 both of
 the above problems.
 
 - Jonathan Morton
 
 Thanks for the insight. Now I start to realize why my home network behaves as
 it does. When I run RRUL locally from my macbook over WiFi with cerowrt as AP
 (which if I recall correctly only uses AC_BE) the macbook's send starves the AP
 and hence the macbook's receive tanks. Since macos seems to exercise the
 AC_V[I|O] queues, it hogs airtime and all systems using lower AC classes see
 less airtime, less bandwidth and higher latency. I guess my gut feeling would
 be to run the AP always at AC_VO so it does not get starved. But really calling
 such a system where any station can inflict that much pain/badness on others
 'quality of service' makes me wonder. Then again it certainly affects quality
 of service, just not deterministically or in an overall positive way ;)
 
 Best Regards
 Sebastian
 --
 Sent from my Android device with K-9 Mail. Please excuse my brevity.
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] wrt1900ac v1 vs v2

2015-07-07 Thread dpreed

https://community.linksys.com/t5/Wireless-Routers/WRT1900AC-V2/td-p/940588
 
Shows a v2 with 512M of memory, actually purchased.  I would think that the 512 
is definitely useful.
 


On Tuesday, July 7, 2015 2:09am, Mikael Abrahamsson swm...@swm.pp.se said:



 On Mon, 6 Jul 2015, John Yates wrote:
 
  There are refurbished wrt1900ac units available quite cheap ($10 or $20
  more than wrt1200ac). I assume that they are v1 units as the v2 units have
  only been on the market for a few months. From lurking on this list I get
  the sense that these will support full sqm in short order (correct?).
 
  So what are the differences between wrt1900ac v1 and v2? Is there any
  reason to pay nearly $100 more for a v2?
 
 v1 has Armada XP chipset which has packet accelerator HW in it that
 OpenWrt doesn't use. v2 has Armada 385 which doesn't have a packet
 accelerator, but instead has a much better CPU for forwarding packets.
 
 So basically if you buy a v1 you'll get a third or so less forwarding
 performance than the v2 with OpenWrt. With the Linksys firmware I could
 imagine the v1 is faster than the v2. The v1 has a fan, v2 does not.
 
 --
 Mikael Abrahamsson email: swm...@swm.pp.se
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] DSL Reports Speed Test results (WNDR3800, SQM=fc_codel)

2015-07-02 Thread dpreed

Wonderful!


On Wednesday, July 1, 2015 9:46pm, Jim Reisert AD1C jjreis...@alum.mit.edu 
said:



 Model: NETGEAR WNDR3800
 Firmware Version: OpenWrt Chaos Calmer r46069 / LuCI Master
 (git-15.168.50780-bae48b6)
 
 Comcast cable modem, 60 Mbps down/6 Mbps up (nominal)
 
 Motorola SB6141
 Hardware Version: 7.0
 Firmware Name: SB_KOMODO-1.0.6.14-SCM03-NOSH
 Boot Version: PSPU-Boot(25CLK) 1.0.12.18m3
 
 
 without SQM:
 
 59.8 Mbps down, 5.97 Mbps up
 Bloat grade: F
 
 Full report: http://www.dslreports.com/speedtest/782774
 
 
 with SQM:
 
 Interface name: eth1
 Download: 57000
 Upload: 5700
 Queueing discipline: fq_codel
 Queue setup script: simple.qos
 
 54.7 Mbps down, 5.26 Mbps up
 Bloat grade: A
 
 Full report: http://www.dslreports.com/speedtest/782801
 
 --
 Jim Reisert AD1C, jjreis...@alum.mit.edu, http://www.ad1c.us
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] performance numbers from WRT1200AC (Re: Latest build test - new sqm-scripts seem to work; cake overhead 40 didn't)

2015-07-02 Thread dpreed

Having not bought a 1200ac yet, I was wondering if I should splurge for the 
1900ac v2 (which has lots of memory unlike the 1900ac v1).
 
Any thoughts on the compatibility of this with the 1200ac?
 
Current plans are to deploy Supermicro Mini ITX A1SRI-2558F-O Quad Core 
(Rangely) as my externally facing router and services platform, and either 
one of the above as my experimental wireless solution.



On Thursday, July 2, 2015 11:47am, Toke Høiland-Jørgensen t...@toke.dk said:



 Mikael Abrahamsson swm...@swm.pp.se writes:
 
  Do you have a link to your .config for your builds somewhere?
 
  http://swm.pp.se/aqm/wrt1200ac.config
 
 Cool, thanks!
 
  BUT! I have had problems getting WPA2 to work properly with this
  .config. I must have missed something that is needed that has to do
  with the password/crypto handling.
 
  There already is a profile for the WRT1200AC (caiman) in Chaos Calmer
  RC and trunk, so it's actually not that hard to get working. The
  biggest problem is finding all those utilities one wants and making
  sure they're compiled into the image so one doesn't have to add them
  later.
 
 Yeah, realise that. Still have my old .config from when I used to build
 cerowrt for the WNDR lying around somewhere, so will take a look at
 that and make sure everything is in there :)
 
 -Toke
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Build instructions for regular OpenWRT with Ceropackages

2015-06-30 Thread dpreed

What happens if the SoC ports aren't saturated, but the link is GigE?  That is, 
suppose this is an access link to a GigE home or office LAN with wired servers?


On Tuesday, June 30, 2015 9:58am, Mikael Abrahamsson swm...@swm.pp.se said:



 On Mon, 29 Jun 2015, dpr...@reed.com wrote:
 
  I would love to try out cake in my environment. However, as a
  non-combatant, it would be nice to have an instruction sheet on how to
  set the latest version up, and what hardware it works best on
  (WRT1200AC?). Obviously this is a work in progress, so that will
  change, but it would be nice to have a summarized wiki page.
 
 WRT1200AC seems to be the most powerful around, however it can't really be
 used for PHY 100M testing since both SoC ports goes to a switch that then
 terminate all the external ports. Therefore it's hard to test
 AQM-on-metal with it because the SoC links are never saturated, you always
 have to use an encompassing shaper (htb).
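 
 (For reference, an encompassing shaper of that sort is just an htb class with
 fq_codel underneath; a rough sketch, rates purely illustrative:)
 
 tc qdisc del dev eth0 root 2>/dev/null
 tc qdisc add dev eth0 root handle 1: htb default 10
 tc class add dev eth0 parent 1: classid 1:10 htb rate 95mbit ceil 95mbit
 tc qdisc add dev eth0 parent 1:10 handle 110: fq_codel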
 
 Here is a short writeup from what I learnt from the past days. I haven't
 verified every step, this is from memory.
 
 Get ubuntu 14.04 LTS according to:
 
 http://www.acme-dot.com/building-openwrt-14-07-barrier-breaker-on-ubuntu-and-os-x/
 
 Check out either trunk (git clone git://git.openwrt.org/openwrt.git) or
 Chaos Calmer RC (clone git://git.openwrt.org/15.05/openwrt.git).
 
 Copy feeds.conf.default to feeds.conf in the openwrt dir. Add first in
 file:
 
 src-git cero https://github.com/dtaht/ceropackages-3.10.git
 
 scripts/feeds update -a
 scripts/feeds install luci luci-app-sqm sqm-scripts tc-adv ip ethtool
 kmod-sched-cake kmod-sched-fq_pie
 
 make menuconfig
 
 Now comes the hard part because you want to change * for everything that
 you want to install as default in the resulting image. M means it compiles
 the package but doesn't include it in the resulting image (for utilities).
 What the above does is only to make it available to make menuconfig as
 packages. I tend to choose traceroute, tcpdump and all the other nice to
 have utilities. You can download and use
 http://swm.pp.se/aqm/wrt1200ac.config (it's for trunk, ie the nightly,
 don't know if it works for CC RC2) and put as .config in the openwrt
 directory as a template.
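 
 (In .config terms the distinction is just =y versus =m; a sketch, package
 names only as examples:)
 
 # built and installed into the image
 CONFIG_PACKAGE_luci-app-sqm=y
 CONFIG_PACKAGE_kmod-sched-cake=y
 # built as an installable .ipk only, not included in the image
 CONFIG_PACKAGE_tcpdump=m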
 
 I recommend building on a fast machine with an SSD; i/o is usually the
 limiting factor. I use a 3.5GHz core i5 dual core (4 with HT) and SSD,
 and with make -j 10 it compiles in an hour or so. Subsequent compiles
 are quicker.
 
 You need to find your platform etc. If you get a WRT1200AC you can use my 
 builds if you want to. After this you need to go into the luci sqm scripts
 and set queueing algorithm etc.
 
 some good commands to see what's going on:
 
 tc -d qdisc
 tc -s qdisc
 
 Hope it helps.
 
 --
 Mikael Abrahamsson email: swm...@swm.pp.se
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] performance numbers from WRT1200AC (Re: Latest build test - new sqm-scripts seem to work; cake overhead 40 didn't)

2015-06-29 Thread dpreed

I would love to try out cake in my environment.  However, as a non-combatant, 
it would be nice to have an instruction sheet on how to set the latest version 
up, and what hardware it works best on (WRT1200AC?).  Obviously this is a work 
in progress, so that will change, but it would be nice to have a summarized 
wiki page.
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems

2015-05-18 Thread dpreed

I'm curious as to why one would need low priority class if you were using 
fq_codel?  Are the LEDBAT flows indistinguishable?  Is there no congestion 
signalling (no drops, no ECN)? The main reason I ask is that end-to-end flows 
should share capacity well enough without magical and rarely implemented things 
like diffserv and intserv.


On Monday, May 18, 2015 8:30am, Simon Barber si...@superduper.net said:





I am likely out of date about Windows Update, but there's many other programs 
that do background downloads or uploads that don't implement LEDBAT or similar 
protection. The current AQM recommendation draft in the IETF will make things 
worse, by not drawing attention to the fact that implementing AQM without 
implementing a low priority traffic class (such as DSCP 8 - CS1) will prevent 
solutions like LEDBAT from working, or there being any alternative. Would 
appreciate support on the AQM list in the importance of this.
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com
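
(One way to get bulk traffic into such a class today is simply to mark it CS1
on the way out; an illustrative iptables sketch, with the port number only an
example of a bulk/background uploader:)

iptables -t mangle -A POSTROUTING -p tcp --sport 51413 -j DSCP --set-dscp-class CS1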

On May 18, 2015 4:42:43 AM Eggert, Lars l...@netapp.com wrote:

 On 2015-5-18, at 07:06, Simon Barber si...@superduper.net wrote:

  Windows update will kill your Skype call.

 Really? AFAIK Windows Update has been using a LEDBAT-like scavenger-type
 congestion control algorithm for years now.

 Lars
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] heisenbug: dslreports 16 flow test vs cablemodems

2015-05-17 Thread dpreed

What's your definition of 802.11 performing well?  Just curious.  Maximizing 
throughput at all costs or maintaining minimal latency for multiple users sharing 
an access point?

Of course, if all you are doing is trying to do point-to-point outdoor links 
using 802.11 gear, the issue is different - similar to dallying to piggyback 
ACKs in TCP, which is great when you have bidirectional bulk flows, but lousy if 
each packet has a small latency requirement.
 
To me this is hardly so obvious. Maximizing packet sizes is actually 
counterproductive for many end-to-end requirements.  But of course for hot rod 
benchmarkers applications don't matter at all - just the link performance 
numbers.
 
One important use of networking is multiplexing multiple users.  Otherwise, 
bufferbloat would never matter.
 
Which is why I think actual numbers rather than hand waving claims matter.


On Friday, May 15, 2015 10:36am, Simon Barber si...@superduper.net said:





One question about TCP small queues (which I don't think is a good solution to 
the problem). For 802.11 to be able to perform well it needs to form maximum 
size aggregates. This means that it needs to maintain a minimum queue size of 
at least 64 packets, and sometimes more. Will TCP small queues prevent this?
Simon
Sent with AquaMail for Android
http://www.aqua-mail.com

On May 15, 2015 6:44:21 AM Jim Gettys j...@freedesktop.org wrote:



On Fri, May 15, 2015 at 9:09 AM, Bill Ver Steeg (versteb) vers...@cisco.com wrote:
Lars-

 You make some good points. It boils down to the fact that there are several 
things that you can measure, and they mean different things.

 Bvs




 -Original Message-
 From: Eggert, Lars [mailto:l...@netapp.com]
 Sent: Friday, May 15, 2015 8:44 AM
 To: Bill Ver Steeg (versteb)
 Cc: Aaron Wood; c...@lists.bufferbloat.net; Klatsky, Carl; cerowrt-devel@lists.bufferbloat.net; bloat
 Subject: Re: [Bloat] [Cerowrt-devel] heisenbug: dslreports 16 flow test vs 
cablemodems


 I disagree. You can use them to establish a lower bound on the delay an 
application over TCP will see, but not get an accurate estimate of that 
(because socket buffers are not included in the measurement.) And you rely on 
the network to not prioritize ICMP/UDP but otherwise leave it in the same 
queues.

On recent versions of Linux and Mac, you can get most of the socket buffers 
to go away.  I forget the socket option offhand.

And TCP small queues in Linux means that Linux no longer gratuitously 
generates packets just to dump them into the queue discipline system where they 
will rot.
How accurate this now can be is still an interesting question: but has clearly 
improved the situation a lot over 3-4 years ago.
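
(The TCP small queues limit itself is visible as a sysctl on kernels that have
it; a minimal sketch, purely illustrative:)

# per-socket cap on bytes queued below TCP, i.e. in qdiscs and driver rings
sysctl net.ipv4.tcp_limit_output_bytes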


  If you can instrument TCP in the kernel to make instantaneous RTT available 
  to the application, that might work. I am not sure how you would roll that 
  out in a timely manner, though.


Well, the sooner one starts, the sooner it gets deployed.
Jim
___
 Bloat mailing list
 bl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] better business bufferbloat monitoring tools?

2015-05-14 Thread dpreed

Tools, tools, tools.  Make it trivially easy to capture packets in the home 
(don't require cerowrt, for obvious reasons).  For example, an iPhone app that 
does a tcpdump and sends it to us would be fantastic for diagnosing "make wifi 
fast" issues and also bufferbloat issues.  Give feedback that is helpful to 
every one who contributes data.  (That's what made netalyzr work so well... you 
got feedback ASAP that could be used to understand your own situation).
 
Not sure an iPhone app can be disseminated.  An Android app might be, as could 
a MacBook app and a WIndows app.
 
Linux/FreeBSD options: One could  generate a memstick app that would boot Linux 
on a standard windows laptop to run tcpdump and upload the results, or 
something that would run in Parallels or VMWare fusion on a Mac.
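 
(A minimal sketch of the capture step such a memstick image might run; the
upload target is hypothetical:)

timeout 300 tcpdump -i any -s 128 -w /tmp/home-capture.pcap
curl -F pcap=@/tmp/home-capture.pcap https://example.net/upload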
 
I've started looking at a hardware measurement platform for my "make WiFi fast" 
work - currently it looks like a Rangeley board will do the trick.  But that won't 
scale well outside my home since it costs a few hundred bucks for the hardware.

On Wednesday, May 13, 2015 11:30am, Jim Gettys j...@freedesktop.org said:






On Wed, May 13, 2015 at 9:20 AM, Bill Ver Steeg (versteb) vers...@cisco.com wrote:
Time scales are important. Any time you use TCP to send a moderately large 
file, you drive the link into congestion. Sometimes this is for a few 
milliseconds per hour and sometimes this is for 10s of minutes per hour.

 For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps 
link with no cross traffic can cause significant bloat, particularly on older 
tail drop middleboxes.  The host code does an HTTP get every N seconds, and 
drives the link as hard as it can until it gets the video chunk. It waits a 
second or two and then does it again. Rinse and Repeat. You end up with a very 
characteristic delay plot. The bloat starts at 0, builds until the middlebox 
provides congestion feedback, then sawtooths around at about the buffer size. 
When the burst ends, the middlebox burns down its buffer and bloat goes back to 
zero. Wait a second or two and do it again.

It's time to do some packet traces to see what the video providers are doing.  
In YouTube's case, I believe the traffic is using the new sched_fq qdisc, which 
does packet pacing; but exactly how this plays out by the time packets reach 
the home isn't entirely clear to me. Other video providers/CDN's may/may not 
have started generating clues.
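
(Checking for this on a sending host is straightforward; a sketch:)

tc qdisc replace dev eth0 root fq   # pacing-capable fq on the sender
tc -s qdisc show dev eth0           # the "throttled" counter shows pacing at work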


Also note that so far, no one is trying to pace the IW transmission at all.

 You can't fix this by adding bandwidth to the link. The endpoint's TCP 
sessions will simply ramp up to fill the link. You will shorten the congested 
phase of the cycle, but TCP will ALWAYS FILL THE LINK (given enough time to 
ramp up)

That has been the behavior in the past, but it's no longer safe to presume we 
should tar everyone with the same brush; rather, we should do a bit of science, 
and then try to hold the feet of those who do not play nice with the network to 
the fire.
Some packet captures in the home can easily sort this out.
Jim
 The new AQM (and FQ_AQM) algorithms do a much better job of controlling the 
oscillatory bloat, but you can still see ABR video patterns in the delay 
figures.

 Bvs




 -Original Message-
 From: bloat-boun...@lists.bufferbloat.net [mailto:bloat-boun...@lists.bufferbloat.net] On Behalf Of Dave Taht
 Sent: Tuesday, May 12, 2015 12:00 PM
 To: bloat; cerowrt-devel@lists.bufferbloat.net
 Subject: [Bloat] better business bufferbloat monitoring tools?

 One thread bothering me on dslreports.com is that 
some folk seem to think you only get bufferbloat if you stress test the 
network, where transient bufferbloat is happening all the time, everywhere.

 On one of my main sqm'd network gateways, day in, day out, it reports about 
6000 drops or ecn marks on ingress, and about 300 on egress.
 Before I doubled the bandwidth that main box got, the drop rate used to be 
much higher, and a great deal of the bloat, drops, etc, has now moved into the 
wifi APs deeper into the network where I am not monitoring it effectively.
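 
 (Those counters come straight out of the qdisc statistics; a rough sketch of
 watching them accumulate on an sqm'd interface, assuming fq_codel on eth0:)
 
 watch -n 5 'tc -s qdisc show dev eth0 | grep -E "fq_codel|dropped|ecn_mark"'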

 I would love to see tools like mrtg, cacti, nagios and smokeping[1] be more 
closely integrated, with bloat related plugins, and in particular, as things 
like fq_codel and other ecn enabled aqms deploy, start also tracking congestive 
events like loss and ecn CE markings on the bandwidth tracking graphs.

 This would counteract to some extent the classic 5 minute bandwidth summaries 
everyone looks at, that hide real traffic bursts, latencies and loss at sub 5 
minute timescales.

 mrtg and cacti rely on snmp. While loss statistics are deeply part of snmp, I 
am not aware of there being a mib for CE events and a quick google search was 
unrevealing. ?


Re: [Cerowrt-devel] DOCSIS 3+ recommendation?

2015-03-19 Thread dpreed
How many years has it been since Comcast said they were going to fix 
bufferbloat in their network within a year?

And LTE operators haven't even started.

That's a sign that the two dominant sectors of the Internet access business are 
refusing to support quality Internet service (the old saying about the monopoly-era 
ATT, "we don't care, we don't have to", applies to these sectors).

Have fun avoiding bufferbloat in places where there is no home router you can 
put fq_codel into.

It's almost as if the cable companies don't want OTT video or simultaneous FTP 
and interactive gaming to work. Of course not. They'd never do that.



On Wednesday, March 18, 2015 3:50pm, Jonathan Morton chromati...@gmail.com 
said:

 Right, so until 3.1 modems actually become available, it's probably best to
 stick with a modem that already supports your subscribed speed, and manage
 the bloat separately with shaping and AQM.
 
 - Jonathan Morton
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-19 Thread dpreed
I do think engineers operating networks get it, and that Comcast's engineers 
really get it, as I clarified in my followup note.

The issue is indeed prioritization of investment, engineering resources and 
management attention. The teams at Comcast in the engineering side have been 
the leaders in bufferbloat minimizing work, and I think they should get more 
recognition for that.

I disagree a little bit about not having a test that shows the issue, and the 
value the test would have in demonstrating the issue to users.  Netalyzr has 
been doing an amazing job on this since before the bufferbloat term was 
invented. Every time I've talked about this issue I've suggested running 
Netalyzr, so I have a personal set of comments from people all over the world 
who run Netalyzr on their home networks, on hotel networks, etc.

When I have brought up these measurements from Netalyzr (which are not aimed at 
showing the problem as users experience it), I observe an interesting reaction from 
many industry insiders: the results are "not sexy enough for stupid users" and 
also "no one will care".

I think the reaction characterizes the problem correctly - but the second part 
is the most serious objection.  People don't need a measurement tool, they need 
to know that this is why their home network sucks sometimes.





On Thursday, March 19, 2015 3:58pm, Livingood, Jason 
jason_living...@cable.comcast.com said:

 On 3/19/15, 1:11 PM, Dave Taht dave.t...@gmail.com wrote:
 
On Thu, Mar 19, 2015 at 6:53 AM,  dpr...@reed.com wrote:
 How many years has it been since Comcast said they were going to fix
bufferbloat in their network within a year?
 
 I'm not sure anyone ever said it'd take a year. If someone did (even if it
 was me) then it was in the days when the problem appeared less complicated
 than it is and I apologize for that. Let's face it - the problem is
 complex and the software that has to be fixed is everywhere. As I said
 about IPv6: if it were easy, it'd be done by now. ;-)
 
It's almost as if the cable companies don't want OTT video or
simultaneous FTP and interactive gaming to work. Of course not. They'd
never do that.
 
 Sorry, but that seems a bit unfair. It flies in the face of what we have
 done and are doing. We've underwritten some of Dave's work, we got
 CableLabs to underwrite AQM work, and I personally pushed like heck to get
 AQM built into the default D3.1 spec (had CTO-level awareness & support,
 and was due to Greg White's work at CableLabs). We are starting to field
 test D3.1 gear now, by the way. We made some bad bets too, such as trying
 to underwrite an OpenWRT-related program with ISC, but not every tactic
 will always be a winner.

 As for existing D3.0 gear, it's not for lack of trying. Has any DOCSIS
 network of any scale in the world solved it? If so, I have something to
 use to learn from and apply here at Comcast - and I'd **love** an
 introduction to someone who has so I can get this info.
 
 But usually there are rational explanations for why something is still not
 done. One of them is that the at-scale operational issues are more
 complicated that some people realize. And there is always a case of
 prioritization - meaning things like running out of IPv4 addresses and not
 having service trump more subtle things like buffer bloat (and the effort
 to get vendors to support v6 has been tremendous).
 
I do understand there are strong forces against us, especially in the USA.
 
 I'm not sure there are any forces against this issue. It's more a question
 of awareness - it is not apparent it is more urgent than other work in
 everyone's backlog. For example, the number of ISP customers even aware of
 buffer bloat is probably 0.001%; if customers aren't asking for it, the
 product managers have a tough time arguing to prioritize buffer bloat work
 over new feature X or Y.

 One suggestion I have made to increase awareness is that there be a nice,
 web-based, consumer-friendly latency under load / bloat test that you
 could get people to run as they do speed tests today. (If someone thinks
 they can actually deliver this, I will try to fund it - ping me off-list.)
 I also think a better job can be done explaining buffer bloat - it's hard
 to make an 'elevator pitch' about it.
 
 It reminds me a bit of IPv6 several years ago. Rather than saying in
 essence 'you operators are dummies' for not already fixing this, maybe
 assume the engineers all 'get it' and want to do it. Because we really do
 get it and want to do something about it. Then ask those operators what
 they need to convince their leadership and their suppliers and product
 managers and whomever else that it needs to be resourced more effectively
 (see above for example).

 We're at least part of the way there in DOCSIS networks. It is in D3.1 by
 default, and we're starting trials now. And probably within 18-24 months
 we won't buy any DOCSIS CPE that is not 3.1.
 
 The question for me is how and when to address it in 

Re: [Cerowrt-devel] DOCSIS 3+ recommendation?

2015-03-19 Thread dpreed
I'll look up the quote, when I get home from California, in my email archives.  
It may have been private email from Richard Woundy (an engineering SVP at 
Comcast who is the person who drove the CableLabs effort forward, working with 
Jim Gettys - doing the in-house experiments...). To be clear, I am not blaming 
Comcast's engineers or technologists for the most part. I *am* blaming the 
failure of the Comcast leadership to invest in deploying the solution their own 
guys developed. I was skeptical at the time (and I think I can find that email 
to Rich Woundy, too, as well as a note to Jim Gettys expressing the same 
skepticism when he was celebrating the CableLabs experiments and their best 
practices regarding AQM).

It's worth remembering that CableLabs, while owned jointly by all cable 
operators, does not actually tell the operators what to do in any way.  So 
recommendations are routinely ignored in favor of profitable operations.  I'm 
sure you know that.  It's certainly common knowledge among those who work at 
CableLabs (I had a number of conversations with Richard Green when he ran the 
place on this very subject).

So like any discussion where we anthropomorphize companies, it's probably not 
useful to pin blame.

I wasn't trying to pin blame anywhere in particular - just observing that Cable 
companies still haven't deployed the actual AQM options they already have.

Instead the cable operators seem obsessed with creating a semi-proprietary 
"game lane" that involves trying to use diffserv, even though they don't (and 
can't) have end-to-end agreement on the meaning of the DSCP used, and therefore 
will try to use that as a basis for requiring gaming companies to directly peer 
with the cable distribution network, where the DSCP will work (as long as you 
buy only special gear) to give the gaming companies a "fast lane" that they 
have to pay for (to bypass the bloat that they haven't eliminated by upgrading 
their deployments).

Why will the game providers not be able to just use the standard Internet 
access service, without peering to every cable company directly?  Well, because 
when it comes to spending money on hardware upgrades, there's more money in it 
to pay for the upgrade.

That's just business logic, when you own a monopoly on Internet access.  You 
want to maximize the profits from your monopoly, because competition can't 
exist. [Fixing bufferbloat doesn't increase profits for a monopoly. In fact it 
discourages people from buying more expensive service, so it probably decreases 
profits.]

It's counterintuitive, I suppose, to focus on the business ecology distortions 
caused by franchise monopolies in a technical group. But engineering is not 
just technical - it's about economics in a very fundamental way.  Network 
engineering in particular.

If you want better networks, eliminate the monopolies who have no interest in 
making them better for users.

On Thursday, March 19, 2015 10:11am, JF Tremblay 
jean-francois.tremb...@viagenie.ca said:

 
 On Mar 19, 2015, at 9:53 AM, dpr...@reed.com wrote:

 How many years has it been since Comcast said they were going to fix 
 bufferbloat
 in their network within a year?
 
 Any quote on that?
 
 THat's a sign that the two dominant sectors of Internet Access business are
 refusing to support quality Internet service.
 
 I’m not sure this is a fair statement. Comcast is a major (if not
 “the”) player in CableLabs, and they made it clear that for DOCSIS
 3.1, AQM was one of the important targets. This might not have happened
 without all the noise around bloat that Jim and Dave made for years.
 (Now peering and transit disputes are another ball game.)
 
 While cable operators started pretty much with a blank slate in the early 
 days of
 Docsis, they now have to deal with legacy and a huge tail of old devices. So 
 in
 this respect, yes they are now a bit like the DSL incumbents, introduction of 
 new
 technologies is over a 3-4 years timeframe at least.
 
 It's almost as if the cable companies don't want OTT video or simultaneous 
 FTP
 and interactive gaming to work. Of course not. They'd never do that.
 
 
 You might be surprised at how much they care for gamers, these are often their
 most vocal users. And those who will call to get things fixed. Support calls 
 and
 truck rolls are expensive and touch the bottom line, where it hurts…
 
 JF
 (a former cable operator)
 
 
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Fwd: Dave's wishlist [was: Source-specific routing merged]

2015-03-17 Thread dpreed
I agree wholeheartedly with your point, David.

One other clarifying point (I'm not trying to be pedantic, here, but it may 
sound that way):

Reliability is not the same as Availability.  The two are quite different.

 Bufferbloat is pretty much an availability issue, not a reliability issue.  
In other words, packets are not getting lost.  The system is just preventing 
desired use.

Availability issues can be due to actual failures of components, but there are 
lots of availability issues that are caused (as you suggest) by attempts to 
focus narrowly on loss of data or component failures.

When you build a system, there is a temptation to apply what is called the 
Fallacy of Composition (look it up on Wikipedia for a precise definition).  The 
key thing in the Fallacy of Composition is the mistaken assumption that for a 
system of components to have a property as a whole, every component of the 
system must by definition have that property.

(The end-to-end argument is a specific rule that is based on a recognition of 
the Fallacy of Composition in one case.)

We all know that there is never a single moment when any moderately large part 
of the Internet does not contain failed components.  Yet the Internet has 
*very* high availability - 24x7x365, and we don't need to know very much about 
what parts are failing.  That's by design, of course. And it is a design that 
does not derive its properties from a trivial notion of proof of correctness, 
or even bug-freeness.

The relevance of a failure or even a design flaw to system availability is 
a matter of a much bigger perspective of what the system does, and what its 
users perceive as to whether they can get work done.




On Tuesday, March 17, 2015 3:30pm, David Lang da...@lang.hm said:

 On Tue, 17 Mar 2015, Dave Taht wrote:
 
 My quest is always for an extra 9 of reliability. Anyplace where you can
 make something more robust (even if it is out at the .99) level, I
 tend to like to do in order to have the highest MTBF possible in
 combination with all the other moving parts on the spacecraft (spaceship
 earth).
 
 There are different ways to add reliability
 
 one is to try and make sure nothing ever fails
 
 the second is to have a way of recovering when things go wrong.
 
 
 Bufferbloat came about because people got trapped into the first mode of
 thinking (packets should never get lost), when the right answer ended up being
 to realize that we have a recovery method and use it.
 
 Sometimes trying to make sure nothing ever fails adds a lot of complexity to
 the code to handle all the corner cases; overall reliability will often
 improve if you instead simplify the normal flow, even if that adds a small
 number of failures, when it means you can have a common set of recovery code
 that is well exercised and tested.

 As you are talking about losing packets with route changes, watch out that
 you don't fall into this trap.
 
 David Lang
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] great interview on the scale conference wifi successes

2015-03-12 Thread dpreed
On Thursday, March 12, 2015 11:31am, Richard Smith smithb...@gmail.com said:

 On 03/10/2015 05:12 PM, Dave Taht wrote:
 

 This year I deployed 53 APs. I'll make an updated map showing where they
 were deployed.

 So far as I know all the APs were fq-codeled, but the firewall/gw was
 not.
 
 How does this work?  I thought the AP's in this setup were run as bridges.

Pretty good question! Of course if AP's running as Ethernet bridges are bloated 
(meaning their queues can grow quite large) that's yet another reason that we 
need to make WiFi fast (by putting codel into the bridge function).

Ethernet bridges should definitely manage their outbound queues to keep the 
queues in them on the average quite small (average < 2 frames in steady state). 
Otherwise, if the outbound queue runs at 802.11b rates, and the inbound queues 
run at 802.11ac rates, there will be a serious disaster.

Since you can't ECN generalized Ethernet packets, codel would have to drop 
packets. And this might have been what David Lang is doing. (of course, it's 
perfectly reasonable if you know that the LAN is transporting an IP datagram, 
to ECN-mark those datagrams.  This is what an Internet transport layer is 
allowed to do, which is why ECN is part of the envelope, not the contents of 
the end-to-end packet.)

The same argument applies to packets held for retransmission over an 802.11 
link. It's perfectly OK to hold a packet outside the outbound queue for 
retransmission when the conditions to the destination get better, but that 
packet should not block the next packet coming in going to a different 
destination.  The retransmission queue (which is there to improve reliability) 
is a different thing.  [However, my intuition suggests that only one packet per 
next hop should be in the retransmission queue, and it should not stay there 
very long - after a period of time, let the sender at the next layer up figure 
out what to do. Propagation changes in the 10's of millisecond time frame. It 
won't get better if you wait 1/2 second or more]




___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [aqm] [Bloat] ping loss considered harmful

2015-03-04 Thread dpreed
It's a heavy burden to place on ICMP ping to say that it should tell you about 
all aspects of its path through all the networks between source and destination.

On the other hand, I'll suggest that Fred's point - treat ICMP Ping like any 
other IP datagram with the same header options is the essence of Ping's 
function.

I'd suggest that a more flexible rule would be for the echo reply to set header 
options (including DSCP) based on what the ping packet's content tells it to set 
them to.

DSCP should not be changed en route, so the receiver of the echo reply should 
be able to know what DSCP was used on the reply packet.

Clearly the value of Ping is its standardized form and its ubiquity.  Being 
able to control all header options from the sender is useful for that function. 
 If the receiver cannot satisfy the request (e.g. it doesn't support the DSCP 
mechanism), it can just refuse to set it. That way, Ping acquires an option, 
but the option is upward compatible if not supported.
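
(The sender half of that already exists in Linux iputils: ping's -Q flag sets
the TOS/DSCP byte on the echo request; what comes back is up to the far end and
the path. A sketch, marking requests CS5:)

ping -Q 0xa0 -c 5 example.net   # 0xa0 = DSCP CS5 shifted into the TOS byte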

(I specifically talk about all header options here, rather than DSCP in 
particular.  For example, one could request ECN marking in the same way, with 
the same rules. I'm not a big fan of DSCP because I think the code points are 
poorly defined and so forth, but that's irrelevant to the thinking about Ping 
vs. envelope option - fully end-to-end modular services like ECN clearly 
should be testable in this way, and in the case of ECN, the notion of just not 
doing it if you can't do it fits into the Ping conceptual framework).




On Tuesday, March 3, 2015 1:00pm, Fred Baker (fred) f...@cisco.com said:

 
 On Mar 3, 2015, at 9:29 AM, Wesley Eddy w...@mti-systems.com wrote:

 On 3/3/2015 12:20 PM, Fred Baker (fred) wrote:

 On Mar 1, 2015, at 7:57 PM, Dave Taht dave.t...@gmail.com
 mailto:dave.t...@gmail.com wrote:

 How can we fix this user perception, short of re-prioritizing ping in
 sqm-scripts?

 IMHO, ping should go at the same priority as general traffic - the
 default class, DSCP=0. When I send one, I am asking whether a random
 packet can get to a given address and get a response back. I can imagine
 having a command-line parameter to set the DSCP to another value of my
 choosing.

 I generally agree, however ...

 The DSCP of the response isn't controllable though, and likely the DSCP
 that is ultimately received will not be the one that was sent, so it
 can't be as simple as echoing back the same one.  Ping doesn't tell you
 latency components in the forward or return path (some other protocols
 can do this though).

 So, setting the DSCP on the outgoing request may not be all that useful,
 depending on what the measurement is really for.
 
 Note that I didn’t say “I demand”… :-)
 
 I share the perception that ping is useful when it’s useful, and that it is
 at best an approximation. If I can get a packet to the destination and a 
 response
 back, and I know the time I sent it and the time I received the response, I 
 know
 exactly that - messages went out and back and took some amount of total time. 
 I
 don’t know anything about the specifics of the path, of buffers en route, or
 delay time in the target. Traceroute tells me a little more, at the cost of a 
 more
 intense process. In places I use ping, I tend to send a number of them over a
 period of time and observe on the statistics that result, not a single ping
 result.
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] the cerowrt easter egg

2015-03-04 Thread dpreed
+1



On Wednesday, March 4, 2015 3:19pm, Dave Taht dave.t...@gmail.com said:

 As you can see over the last month I have been laughing in despair
 about the futility we seem to face in getting solutions for
 bufferbloat out there, ever since the streamboost and gogo-in-flight
 data came in.
 
 So it occurs to me that most of you may have missed the easter egg I
 long, long ago tossed into cerowrt.
 
 I LOVE easter eggs! they are so fun to find in so many games and other 
 products!
 
 I was going to stick R. Goldberg, R. Feynman, H. Mencken, and M. Twain
 in there at one point (all of whose various attributes and attitudes
 towards life have actually been a great help to me, at least, during
 this project!), but I thought that would have been too obvious.
 
 However, in
 
 http://cero2.bufferbloat.net/cerowrt/credits.html
 
 George Burdell has also been a great help, overall, in aiding my
 coping skills. He is, also a bot we use for various things.
 
 http://en.wikipedia.org/wiki/George_P._Burdell
 
 
 
 --
 Dave Täht
 Let's make wifi fast, less jittery and reliable again!
 
 https://plus.google.com/u/0/107942175615993706558/posts/TVX3o84jjmb
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Lost access to Web GUI

2015-02-25 Thread dpreed
Dave - I understand the rationale, but the real issue here is with the 
printers, etc.

Their security model is completely inappropriate - even WITHIN a home 
network... depending on peripheral protection doesn't work anywhere very well.  
It's so easy to break into someone's home net... either by exploiting a hole, 
or by social engineering.  And it is worse in any enterprise network.

At my last two large company employers, the amount of attack traffic within the 
Intranet, behind the firewall, was measured, and it is FAR worse than in the 
public Internet (and Sony showed us how risky that is).

IPP should be default secure, with authenticated users only able to use it.  
There's a way to make this simpler - establish an authentication server in the 
home, and require every device to have a relationship with the authentication 
server, so the category of authorized devices is easy to set up.

NAT is not a proper security solution.  Its features can be useful to reduce 
the frequency of some attacks, and maybe DoS.

Maybe CeroWRT should have a good authentication server in it, turned on by 
default, and with a well-designed user experience to make adding a device easy, 
and provide that device with the authorized credentials.

We've known how to do this since Kerberos was designed - it was designed to 
work exactly this way.

Instead this crappy border protection model has become the only mental model 
of security.

Companies have locked desks, locked offices, badges, etc.  And they don't place 
their entire security bet on the receptionist at the front door.



On Wednesday, February 25, 2015 12:21pm, Dave Taht dave.t...@gmail.com said:

 On Mon, Feb 16, 2015 at 4:41 AM, Rich Brown richb.hano...@gmail.com wrote:
 I figured it out.

 The default blockconfig2 rule in the firewall prevents access to the
 configuration ports of CeroWrt from any host in a *guest* network. Switching 
 to a
 non-guest network (from CEROwrt-guest5 to CEROwrt5) allows me to log in with
 ease.

 D'oh!
 
 Yes. I also note that by default stuff connected via the meshy wifi
 interfaces are also blocked from access to the web gui. And my
 reasoning for using port 81 rather than 80, is that it is easy to
 accidentally open up port 80 to the universe via ipv6, and preferred
 to avoid that. In fact, I really do wish that the world of embedded
 devices had chosen a non-default port for their configuration web gui
 in light of their potential exposure to the universe via ipv6, or
 those that did were more careful about only accepting local
 connections by default.
 
 That said, even that is not enough.
 
 There are going to be an awful lot of printers exposed to the ipv6
 internet on port 631 (ipp), for example, and hacks have already been
 deployed that can get your printer to emit spam, at the very least.
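 
 (Keeping IPP off the open IPv6 internet is one firewall rule; a sketch,
 assuming eth1 is the WAN-facing interface:)
 
 ip6tables -I FORWARD -i eth1 -p tcp --dport 631 -j DROP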
 
 I note that I do, in my own, trusted environments, I do allow the
 meshy interfaces full reign to the gui and secure network (it makes
 them a lot more useful), but ya know, I am generally careful enough to
 actually change admin passwords and the like, everywhere, on
 everything.
 
 Rich


 On Feb 9, 2015, at 7:54 AM, Rich Brown richb.hano...@gmail.com wrote:

 I cannot currently get to the web GUI on CeroWrt. When I browse to
 gw.home.lan:80, I see the short Welcome to CeroWrt page. Clicking the link
 gives a page that says Redirecting with 
 http://gw.home.lan/bgi-bin/redir.sh in
 the URL bar. Shortly thereafter the page times out. telnet gw.home.lan 81 
 fails
 - I get Trying 172.30.42.1... but no connection.

 The router is otherwise working just fine.

 I first noticed this yesterday, and I didn't have time to write it up. I 
 believe
 this has happened before. In the past, I believe that I waited a few days 
 and it
 came back. (only 80% sure of this - I may just have rebooted.)

 I'm using CeroWrt 3.10.50-1. Uptime is 46+ days. I can ssh in, so I can get
 diagnostic info including the cerostats.sh output (below).

 Any thoughts? What else should I check/try? Thanks.

 Rich

 # I have not changed any of the lighttpd .conf files
 root@cerowrt:/usr/lib/CeroWrtScripts# ps | grep http
 1334 www-data  3956 S/usr/sbin/lighttpd -D -f /etc/lighttpd/cerowrt.conf
 3389 root  4840 S/usr/sbin/lighttpd -D -f 
 /etc/lighttpd/lighttpd.conf
 19089 root  1388 Sgrep http

 # somebody's listening on ports 80  81
 root@cerowrt:/usr/lib/CeroWrtScripts# netstat -ant
 Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address   Foreign Address State
 tcp0  0 0.0.0.0:139 0.0.0.0:*   LISTEN
 tcp0  0 0.0.0.0:80  0.0.0.0:*   LISTEN
 tcp0  0 0.0.0.0:81  0.0.0.0:*   LISTEN
 tcp0  0 0.0.0.0:53  0.0.0.0:*   LISTEN
 tcp0  0 0.0.0.0:21  0.0.0.0:*   LISTEN
 tcp0  0 0.0.0.0:23  0.0.0.0:*   

Re: [Cerowrt-devel] [Bloat] Two d-link products tested for bloat...

2015-02-20 Thread dpreed

+1 for this idea.  It really worked for Anand's and Tom's - their reviews 
caught fire and got followed so much that they could become profitable 
businesses from the ads.
 
Craigslist style business model, funding both reviewing and CeroWRT promotion 
activities would be the logical thing.  And I love the names! (free + some 
premium service that doesn't compromise the purity and freeness of the 
reviews)...
 
Thoughts on the premium service that might go with this:
 
1) some kind of support service that links people with skilled support for 
WiFi in their area (for a percentage on each referral)
 
2) Premium insider news  content (like LWN.net, which I subscribe to at the 
professional level, because it is so great).
 
The point of this is not to maximize the likelihood of buyout for billions of 
dollars.  I don't oppose that outcome, but it is tricky to aim for that goal 
without compromising the review (and news if there) quality.  You don't want 
vendor sponsorship.  You might want early access to upcoming products, as 
long as it is on your own terms and not a way of letting vendors buy your 
integrity, which they would certainly attempt.
 
I don't normally do this, but I would contribute content at a modest level - 
and I'm sure others would.  The key missing feature is an editor (e.g. Jonathan 
Corbet, Michael Swaine, Doc Searls, .. - that type of editor, not necessarily 
those people).
 
 
 


On Friday, February 20, 2015 3:47am, Jonathan Morton chromati...@gmail.com 
said:



Out of curiosity, perhaps you could talk to AA about their FireBrick router. 
They make a big point of having written the firmware for it themselves, and 
they might be more interested in having researchers poke at it in interesting 
ways than the average big name.  AA are an ISP, not a hardware manufacturer by 
trade.
Meanwhile, I suspect the ultimate hardware vendors don't care because their 
customers, the big brands, don't care. They in turn don't care because neither 
ISPs nor consumers care (on average). A coherent, magazine style review system 
with specific areas given star ratings might have a chance of fixing that, if 
it becomes visible enough. I'm not sure that a rant blog would gain the same 
sort of traction.
Some guidance can be gained from the business of reviewing other computer 
hardware. Power supplies are generally, at their core, one of a few standard 
designs made by one of a couple of big subcontractors. The quality of the 
components used to implement that design, and ancillary hardware such as 
heatsinks and cabling, are what distinguish them in the marketplace. Likewise 
motherboards are all built around a standard CPU socket, chipset and form 
factor, but the manufacturers find lots of little ways to distinguish 
themselves on razor thin margins; likewise graphics cards. Laptops are usually 
badly designed in at least one stupid way despite the best efforts of 
reviewers, but thanks to them it is now possible to sort through the general 
mess and find one that doesn't completely suck at a reasonable price.
As for the rating system itself:
- the Communications Black Hole, for when we can't get it to work at all. Maybe 
we can shrink a screen grab from Interstellar for the job.
- the Tin Cans  String, for when it passes packets okay (out of the box) but 
is horrible in every other important respect.
- the Carrier Pigeon. Bonus points if we can show it defecating on the message 
(or the handler's wrist).
- the Telegraph Pole (or Morse Code Key). Maybe put the Titanic in the 
background just to remind people how hard they are failing.
- the Dial-Up Modem. Perhaps products which become reliable and useful if the 
user installs OpenWRT should get at least this rating.
- the Silver RJ45, for products which contrive to be overall competent in all 
important respects.
- the Golden Fibre, for the very best, most outstanding examples of best 
practice, without any significant faults at all. Bonus Pink Floyd reference.
I've been toying with the idea of putting up a website on a completely 
different subject, but which might have similar structure. Being able to use 
the same infrastructure for two different sites might spread the costs in an 
interesting way...
- Jonathan Morton___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Fwd: Throughput regression with `tcp: refine TSO autosizing`

2015-02-01 Thread dpreed

Just to clarify, managing queueing in a single access point WiFi network is 
only a small part of the problem of fixing the rapidly degrading performance of 
WiFi based systems.  Similarly, mesh routing is only a small part of the 
problem with the scalability of cooperative meshes based on the WiFi MAC.
 
So I don't disagree with work on queue management (which is not good in any 
commercial product).
 
But Dave T has done some great talks on fixing WiFi that don't have to do 
with queueing very much at all.
 
For example, rate selection for various packets is terrible.  When you have 
nearly 1000:1 ratios of transmission rates and codes that are not backward 
compatible, there's a huge opportunity for improvement.  Similarly, choice of 
frequency bandwidth and center frequency at each station offers huge 
opportunities for practical scalability of systems.  Also, as we noted earlier, 
handoff from one next hop to another is a huge problem with performance in 
practical deployments (a factor of 10x at least, just in that).
 
Propagation information is not used at all when 802.11 systems share a channel, 
even in single AP deployments, yet all stations can measure propagation quite 
accurately in their hardware.
 
Finally, Listen-before-talk is highly wasteful for two reasons: 1) any random 
radio noise from other sources unnecessarily degrades communications (and in 
the 5.8 GHz band, the rule about radar avoidance requires treating very low 
level noise as a signal to shut the net down by law, but there is a loophole 
if you can tell that it's not actually radar (the technique requires two or 
more stations to measure the same noise event, and if the power is 
significantly different - more than a few dB - then it can't possibly be due to 
a distant transmitter, and therefore can be ignored). 2) the transmitter cannot 
tell when the intended receiver will be perfectly able to decode the signal 
without interference with the station it hears (this second point is actually 
proven in theory in a paper by Jon Peha that argued against trivial 
etiquettes as a mechanism for sharing among uncooperative and 
non-interoperable stations).
 
Dave T has discussed more, as have I in other venues.
 
The reason no one is making progress on any of these particular issues is that 
there is no coordination at the systems level around creating rising tides 
that lift all boats in the WiFi-ish space.  It's all about ripping the 
competition by creating stuff that can sell better than the other guys' stuff, 
and avoiding cooperation at all costs.
 
I agree that, to the extent that managing queues in a single box or a single 
operating system doesn't require cooperation, it's much easier to get such 
things into the market.  That's why CeroWRT has been as effective as it has 
been.  But has Microsoft done anything at all about it?   Do the better ECN 
signals that can arise from good queue management get used by the TCP 
endpoints, or for that matter UDP-based protocol endpoints?
 
But the big wins in making WiFi better are going begging.  As WiFi becomes more 
closed, as it will as the major Internet Access Providers and Gadget builders 
(Google, Apple) start excluding innovators in wireless from the market by 
closed, proprietary solutions, the problem WILL get worse.  You won't be able 
to fix those problems at all.  If you have a solution you will have to convince 
the oligopoly to even bother trying it.
 
So, let me reiterate.  The problem is not just getting Minstrel adopted, 
though I have nothing against that as a subgoal.  The problem is to find good 
systems-level answers, and to find a strategy to deliver those answers to a 
WiFi ecology that spans the planet, and where the marketing value-story focuses 
on things one can measure between two stations in a Faraday cage, and never on 
any systems-level issues.
 
 
I personally think that things like promoting semi-closed, essentially 
proprietary ESSID-based bridged distribution systems as good ideas are 
counterproductive to this goal.  But that's perhaps too radical for this crowd. 
 It reminds me of Cisco's attempt to create a proprietary Internet technology 
with IOS, which fortunately was not the success Cisco hoped for, or Juniper 
would not have existed. Maybe IOS would have been a fine standard, but it would 
have killed the evolution of the Internet as we know it.

On Sunday, February 1, 2015 5:47am, Jonathan Morton chromati...@gmail.com 
said:



Since this is going to be a big job, it's worth prioritising parts of it 
appropriately.
Minstrel is probably already the single best feature of the Linux Wi-Fi stack. 
AFAIK it still outperforms any other rate selector we know about. So I don't 
consider improving it further to be a high priority, although that trick of 
using it as a sneaky random packet loss inducer is intriguing.
Much more important and urgent is getting some form of functioning SQM closer 
to the hardware, where the information is. I don't 

Re: [Cerowrt-devel] Fwd: Throughput regression with `tcp: refine TSO autosizing`

2015-01-31 Thread dpreed

I think we need to create an Internet focused 802.11 working group that would 
be to the OS wireless designers and IEEE 802.11 standards groups as the 
WHATWG group was to W3C.
 
W3C was clueless about the real world at the point WHATWG was created.  And 
WHATWG was a revenge of the real against W3C - advancing a wide variety of 
important practical innovations rather than attending endless standards 
meetings with people who were not focused on solving actually important 
problems.
 
It took a bunch of work to get WHATWG going, and it offended W3C, who became 
unhelpful.  But the approach actually worked - we now have a Web that really 
uses browser-side expressivity and that would never have happened if W3C were 
left to its own devices.
 
The WiFi consortium was an attempt to wrest control of pragmatic direction from 
802.11 and the proprietary-divergence folks at Qualcomm, Broadcom, Cisco, etc.  
But it failed, because it became thieves on a raft, more focused on picking 
each others' pockets than on actually addressing the big issues.
 
Jim has seen this play out in the Linux community around X.  Though there are 
lots of interests who would benefit by moving the engineering ball forward, 
everyone resists action because it means giving up the chance at dominance, and 
the central group is far too weak to do anything beyond adjudicating the worst 
battles.
 
When I say we I definitely include myself (though my time is limited due to 
other commitments and the need to support my family), but I would only play 
with people who actually are committed to making stuff happen - which includes 
raising hell with the vendors if need be, but also effective engineering steps 
that can achieve quick adoption.
 
Sadly, and I think it is manageable at the moment, there are moves out there 
being made to get the FCC to protect WiFi from interference.  The current 
one was Marriott, who petitioned the FCC for a rule to make it legal to disrupt 
and block use of WiFi in people's rooms in their hotels, except with their 
access points.  This also needs some technical defense.  I believe any issues 
with WiFi performance in actual Marriott hotels are due to bufferbloat in their 
hotel-wide systems, just as the issues with GoGo are the same.  But it's 
possible that queueing problems in their own WiFi gear are bad as well.
 
I mention this because it is related, and to the layperson, or 
non-radio-knowledgeable executive, indistinguishable.  It will take away the 
incentive to actually fix the 802.11 implementations to be better performing, 
making the problem seem to be a management issue that can be solved by making 
WiFi less interoperable and less flexible by rules, rather than by engineering.
 
However, solving the problems of hotspot networks and hotel networks are 
definitely real world issues, and quite along the same lines you mention, 
Dave.  FQ is almost certainly a big deal both in WiFi and in the distribution 
networks behind WiFi. Co-existence is also a big deal (RTS/CTS-like mechanisms 
can go a long way to remediate hidden-terminal disruption of the basic 
protocols). Roaming and scaling need work as well.
 
It would even be a good thing to invent pragmatic ways to provide low rate 
subnets and high rate subnets that can coexist, so that compatibility with 
ancient b networks need not be maintained on all nets, at great cost - just 
send beacons at a high rate, so that the b NICs can't see them but you 
need pragmatic stack implementations.
 
But the engineering is not the only challenge. The other challenge is to take 
the initiative and get stuff deployed.  In the case of bufferbloat, the grade 
currently is a D for deployments, maybe a D-.  Beautiful technical work, 
but the economic/business/political side of things has been poor.  Look at how 
slow IETF has been to achieve anything (the perfect is truly the enemy of the 
good, and Dave Clark's rough consensus and working code has been replaced by 
technocratic malaise, and what appears to me to be a class of people who love 
traveling the world to a floating cocktail party without getting anything 
important done).
 
The problem with communications is that you can't just ship a product with a 
new feature, because the innovation only works if widely adopted.  Since 
there is no Linux Desktop (and Linus hates the idea, to a large extent) Linux 
can't be the sole carrier of the idea.  You pretty much need iOS and Android 
both to buy in or to provide a path for easy third-party upgrades.  How do you 
do that?  Well, that's where the WHATWG-type approach is necessary.
 
I don't know if this can be achieved, and there are lots of details to be 
worked out.  But I'll play.
 
 


On Saturday, January 31, 2015 4:05pm, Dave Taht dave.t...@gmail.com said:



I would like to have somehow assembled all the focused resources to make a go 
at fixing wifi, or at least having a f2f with a bunch of people in the late 
march timeframe. This message of mine to 

Re: [Cerowrt-devel] Recording RF management info _and_ associated traffic?

2015-01-25 Thread dpreed

Looking up an address in a routing table is O(1) if the routing table is a hash 
table.  That's much more efficient than a TCAM.  My simple example just 
requires a delete/insert in each node's route lookup table.
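
(To make that concrete, here is a minimal Python sketch of such a per-node 
lookup table - a plain hash map keyed by host address. The names are 
illustrative only; this is not code from any particular router.)

  # Toy per-node route table: host IP -> current location (an AP name, say).
  # A dict gives O(1) average-case lookup, insert and delete.
  class RouteTable:
      def __init__(self):
          self.next_hop = {}                 # e.g. "10.0.3.17" -> "ap-12"

      def lookup(self, dst_ip):
          return self.next_hop.get(dst_ip)   # O(1) hash lookup

      def host_moved(self, host_ip, new_ap):
          # the entire cost of a move: one delete plus one insert
          self.next_hop.pop(host_ip, None)
          self.next_hop[host_ip] = new_ap

  table = RouteTable()
  table.host_moved("10.0.3.17", "ap-7")
  assert table.lookup("10.0.3.17") == "ap-7"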
 
My point was about collections of WLAN's bridged together.  Look at what 
happens (at the packet/radio layer) when a new node joins a bridged set of 
WLANs using STP.  It is not exactly simple to rebuild the Ethernet layer's 
bridge routing tables in a complex network.  And the limit of 4096 entries in 
many inexpensive switches is not a trivial limit.
 
Routers used to be memory-starved (a small number of KB of RAM was the norm).  
Perhaps the thinking then (back before 2000) has not been revised, even though 
the hardware is a lot more capacious.
 
Remember, the Ethernet layer in WLANs is implemented by microcontrollers, 
typically not very capable ones, plus TCAMs which are pretty limited in their 
flexibility.
 
While it is tempting to use the pre-packaged, proprietary Ethernet switch 
functionality, routing gets you out of the binary blobs, and lets you be a lot 
smarter and more scalable.  Given that it does NOT cost more to do routing at 
the IP layer, building complex Ethernet bridging is not obviously a win.
 
BTW, TCAMs are used in IP layer switching, too, and also are used in packet 
filtering.  Maybe not in cheap consumer switches, but lots of Gigabit switches 
implement IP layer switching and filtering.  At HP, their switches routinely 
did all their IP layer switching entirely in TCAMs.


On Sunday, January 25, 2015 9:58pm, Dave Taht dave.t...@gmail.com said:



 On Sun, Jan 25, 2015 at 6:43 PM, David Lang da...@lang.hm wrote:
  On Sun, 25 Jan 2015, Dave Taht wrote:
 
  To your roaming point, yes this is certainly one place where migrating
  bridged vms across machines breaks down, and yet more and more vm
  layers are doing it. I would certainly prefer routing in this case.
 
 
  What's the difference between roaming and moving a VM from one place in
  the network to another?
 
 I think most people think of roaming as moving fairly rapidly from one
 piece of edge connectivity to another, and moving a vm is a great deal more
 permanent operation.
 
  As far as layer 2 vs layer 3 goes. If you try to operate at layer 3, you are
  going to have quite a bit of smarts in the endpoint. Even if it's only
  connected via a single link. If you think about it, even if your network
  routing tables list every machine in our environment individually, you still
  have a problem of what gateway the endpoint uses. It would have to change
  every time it moved. Since DHCP doesn't update frequently enough to be
  transparent, you would need to have each endpoint running a routing
  protocol.
 
 Hmm? I don't ever use a dhcp-supplied default gateway, I depend on the routing
 protocol to supply that. In terms of each vm running a routing protocol,
 well, no, I would rely on the underlying bare metal OS to be doing
 that, supplying
 the FIB tables to the overlying vms, if they need it, but otherwise the vms
 just see a default route and don't bother with it. They do need to inform 
 the
 bare metal OS (better term for this please? hypervisor?) of what IPs they own.
 
 static default gateways are evil. and easily disabled. in linux you
 merely comment
 out the routers in /etc/dhcp/dhclient.conf, in openwrt, set
 defaultroute 0 for the
 interface fetching dhcp.
 
 When a box migrates, it tells the hypervisor it's addresses, and then that box
 propagates out the route change to elsewhere.
 
 
  This can work for individual hobbyists, but not when you need to support
  random devices (how would you configure an iPhone to support this?)
 
 Carefully. :)
 
 I do note that this stuff does (or at least did) work on some of the open
 source variants of android. I would rather like it if android added ipv6
 tethering soon, and made it possible to mesh together multiple phones.
 
 
 
  Letting the layer 2 equipment deal with the traffic within the building and
  invoking layer 3 to go outside the building (or to a different security
  domain) makes a lot of sense. Even if that means that layer 2 within a
  building looks very similar to what layer 3 used to look like around a city.
 
 Be careful what you wish for.
 
 
 
  back to the topic of wifi, I'm not aware of any APs that participate in the
  switch protocols at this level. I also don't know of any reasonably priced
  switches that can do anything smarter than plain spanning tree when
  connected through multiple paths (I'd love to learn otherwise)
 
  David Lang
 
 
 
 --
 Dave Täht
 
 http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Recording RF management info _and_ associated traffic?

2015-01-25 Thread dpreed

Disagree. See below.


On Saturday, January 24, 2015 11:35pm, David Lang da...@lang.hm said:



 On Sat, 24 Jan 2015, dpr...@reed.com wrote:
  A side comment, meant to discourage continuing to bridge rather than route.
 
  There's no reason that the AP's cannot have different IP addresses, but a
  common ESSID. Roaming between them would be like roaming among mesh subnets.
  Assuming you are securing your APs' air interfaces using encryption over the
  air, you are already re-authenticating as you move from AP to AP. So using
  routing rather than bridging is a good idea for all the reasons that routing
  rather than bridging is better for mesh.
 
 The problem with doing this is that all existing TCP connections will break 
 when
 you move from one AP to another and while some apps will quickly notice this 
 and
 establish new connections, there are many apps that will not and this will 
 cause
 noticeable disruption to the user.
 
 Bridging allows the connections to remain intact. The wifi stack 
 re-negotiates
 the encryption, but the encapsulated IP packets don't change.


There is no reason why one cannot set up an enterprise network to support 
roaming, yet maintaining the property that IP addresses don't change while 
roaming from AP to AP.  Here's a simple concept, that amounts to moving what 
would be in the Ethernet bridging tables up to the IP layer.
 
All addresses in the enterprise are assigned from a common prefix (XXX/16 in 
IPv4, perhaps).  Routing in each access point is used to decide whether to send 
the packet on its LAN, or to reflect it to another LAN.  A node's preferred 
location would be updated by the endpoint itself, sending its current location 
to its current access point (via ARP or some other protocol).   The access 
point that hears of a new node that it can reach tells all the other access 
points that the node is attached to it.  Delivery of a packet to a node is done 
by the access point that receives the packet by looking up the destination IP 
address in its local table, and sending it to the access point that currently 
has the destination IP address.
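 
(A toy Python sketch of the bookkeeping just described; the announcement 
protocol, ARP details and the actual forwarding path are assumed away, and all 
names are illustrative.)

  # Each AP keeps a map of host IP -> AP currently serving that host.
  # When a host attaches, its AP announces the new location to all peer APs -
  # the IP-layer analogue of rebuilding Ethernet bridge tables, but O(1) per move.
  class AccessPoint:
      def __init__(self, name):
          self.name = name
          self.peers = []               # the other APs in the enterprise
          self.location = {}            # host IP -> serving AP name

      def host_attached(self, host_ip):
          self.location[host_ip] = self.name
          for ap in self.peers:         # tell everyone where the host is now
              ap.location[host_ip] = self.name

      def deliver(self, dst_ip, packet):
          owner = self.location.get(dst_ip)
          if owner == self.name:
              return f"{self.name}: sent {packet} on the local WLAN"
          if owner is not None:
              return f"{self.name}: forwarded {packet} to {owner}"
          return f"{self.name}: no entry for {dst_ip}, dropped"

  ap1, ap2 = AccessPoint("ap1"), AccessPoint("ap2")
  ap1.peers, ap2.peers = [ap2], [ap1]
  ap1.host_attached("10.1.0.42")           # host joins at ap1
  print(ap2.deliver("10.1.0.42", "pkt"))   # ap2 forwards toward ap1
  ap2.host_attached("10.1.0.42")           # host roams; its IP address never changes
  print(ap1.deliver("10.1.0.42", "pkt"))   # now ap1 forwards toward ap2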
 
This is far better than bridging at the Ethernet level from a functionality 
point of view - it is using routing, not bridging.  Bridging at the Ethernet 
level uses Ethernet's STP feature, which doesn't work very well in collections 
of wireless LAN's (it is slow to recalculate when something moves, because it 
was designed for unplug/plug of actual cables, and moving the host from one 
physical location to another).
 
IMO, Ethernet sometimes aspires to solve problems that are already well-solved 
in the Internet protocols. (for example the 802.11s mess which tries to do a 
mesh entirely in the Ethernet layer, and fails pretty miserably).
Of course that's only my opinion, but I think it applies to overuse of bridging 
at the Ethernet layer when there are better approaches at the next layer up.
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Nyt missed bloat on airplane WiFi entirely

2015-01-24 Thread dpreed

Yeah.  Someone should send them to my blog post.


On Thursday, January 22, 2015 11:16pm, Dave Taht dave.t...@gmail.com said:



http://mobile.nytimes.com/2015/01/22/style/the-sorry-state-of-in-flight-wi-fi.html?_r=2referrer=
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


[Cerowrt-devel] SInce I mentioned this crew's work in a post, I don't want anyone to be surprised.

2015-01-06 Thread dpreed

GoGo does not need to run “Man in the Middle Attacks” on YouTube: 
http://www.reed.com/blog-dpr/?p=174
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Cerowrt-devel Digest, Vol 37, Issue 24

2014-12-22 Thread dpreed

Hi Sebastian -
 
So reading this chart, which is consistent with my reference materials: at 6 
GHz, I see additional attenuation from water vapour of about 0.002 dB/km, on 
top of the dry-air attenuation of about 0.0075 dB/km already due to the 
atmosphere at 5.8 GHz.
 
So over 5 km (about 3 miles), a signal will be attenuated by about 0.01 dB by 
water vapour, in addition to 0.0375 dB of attenuation by the atmosphere.  But 
the attenuation due to path loss at distance d, which typically goes as 
k*log(d) - where 2 <= k <= 4 depending on a variety of factors - will be 
somewhere between -122 dB and ~ -190 dB (assuming the antennas are dipoles).
 
So the contribution of water vapour at 5.8 GHz is pretty insignificant.
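 
(The same arithmetic in a few lines of Python, as a sanity check; the last 
figure assumes the standard free-space formula 20*log10(4*pi*d/lambda) with 
roughly isotropic antennas, which lands near the -122 dB end of the range 
quoted above.)

  import math

  d_km = 5.0
  water_db   = 0.002  * d_km            # water vapour, ~0.002 dB/km at 5.8 GHz
  dry_air_db = 0.0075 * d_km            # dry air, ~0.0075 dB/km

  lam = 3e8 / 5.8e9                     # wavelength at 5.8 GHz, ~5.2 cm
  fspl_db = 20 * math.log10(4 * math.pi * d_km * 1000 / lam)

  print(water_db)        # ~0.01 dB
  print(dry_air_db)      # ~0.0375 dB
  print(fspl_db)         # ~122 dB - path loss utterly dominates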
 
 


On Sunday, December 21, 2014 2:20pm, Sebastian Moeller moell...@gmx.de said:



 Hi David,
 
 
 On Dec 21, 2014, at 17:45 , David P. Reed dpr...@reed.com wrote:
 
  All microwave frequencies heat water molecules, fyi. The early ovens used a
 magnetron that was good at 2.4 GHz because it was available and cheap enough. 
 But
 they don't radiate much. 5.8 GHz was chosen because the band's primary was a
 government band at EOL.
 
 Looking at figure 5 of
 http://www.itu.int/dms_pubrec/itu-r/rec/p/R-REC-P.676-10-201309-I!!PDF-E.pdf 
 it
 pretty much looks like there is higher attenuation at 5GHz compared to 2.4GHz
 (roughly 126% more attenuation @5GHz due to water in air), so there are 
 some
 propagation differences at different frequencies, no?
 
 
  Yes... higher frequency bands have not been used for broadcasting. That's
 because planetary curvature can be conquered by refraction near the earth's
 surface and reflection by the ionosphere. That's why power doesn't help were 
 we to
 use higher frequencies for broadcasting. But data communications is not
 broadcasting. So satellite broadcasters can use higher frequencies for
 broadcasting. And they do, because it's a lot easier to build directional 
 antennas
 at higher frequencies. Same for radar and GPS.
 
  Think about acoustics. Higher frequencies from a tweeter propagate through
 air just as well as lower frequencies from subwoofers.
 
 But look at https://ccrma.stanford.edu/~jos/HarrisJASA66.pdf figure 5; air 
 seems
 to attenuate sound waves as a function of frequency, so high frequencies do 
 not
 travel as far as low frequencies (but are more “efficiently” converted into
 heat). But that looks similar to RF waves in air (see link above)...
 
  But our ears are more directional antennae at the higher frequencies.
 
 True, once the inter-ear distance is down to 1/4 wavelength there is no 
 useable
 intensity and phase difference between the signal at both ears, hence the
 inability to localize the subwoofer (that allows one to get away with one 
 subwoofer
 in a stereo system). But this depends a good deal on the inter-ear distance
 (e.g. elephants can reliably localize sounds that humans can not due to the
 bigger head…)
 
 Best Regards
 Sebastian
 
  Similar properties apply to EM waves. And low frequencies refract around
 corners and along the ground better. The steel of a car body does not couple 
 to
 higher frequencies so it reradiates low freq sounds better than high freq 
 ones.
 Hence the loud car stereo bass is much louder than treble when the cabin is
 sealed.
 
  On Dec 21, 2014, David Lang da...@lang.hm wrote:
  On Sat, 20 Dec 2014, David P. Reed wrote:
 
  Neither 2.4 GHZ nor 5.8 GHz are absorbed more than other bands. That's an 
  old
  wives tale. The reason for the bands' selection is that they were available
 at
  the time. The water absorption peak frequency is 10x higher.
 
  well, microwave ovens do work at around 2.4GHz, so there's some interaction
 with
  water at that frequency.
 
  Don't believe what people repeat without checking. The understanding of 
  radio
  propagation by CS and EE folks is pitiful. Some even seem to think that RF
  energy travels less far the higher the frequency.
 
  I agree that the RF understanding is poor, but given that it's so far 
  outside
  their area of focus, that's understandable.
 
  the mistake about higher frequencies traveling less is easy to understand,
 since
 higher frequency transmitters tend to be lower power than lower 
  frequencies,
  there is a correlation between frequency and distance with commonly 
  available
  equipment that is easy to mistake for causation.
 
  David Lang
 
  Please don't repeat nonsense.
 
  On Dec 20, 2014, Mike O'Dell m...@ccr.org wrote:
  15.9bps/Hz is unlikely to be using simple phase encoding
 
  that sounds more like 64QAM with FEC.
  given the chips available these days for DTV, DBS,
  and even LTE, that kind of processing is available
  off-the-shelf (relatively speaking - compared to
  writing your own DSP code).
 
  keep in mind that the reason the 2.4 and 5.8 ISM bands
  are where they are is specifically because of the ready
  absorption of RF at those frequencies. the propagation
  is *intended* to be problematic. 

Re: [Cerowrt-devel] tinc vpn: adding dscp passthrough (priorityinherit), ecn, and fq_codel support

2014-12-03 Thread dpreed

Awesome start on the issue, in your note, Dave.  Tor needs to change for 
several reasons - not that it isn't great, but with IPv6 and other things 
coming on line, plus the understanding of fq_codel's rationale, plus ... - the 
world can do much better.  Same with VPNs.
 
I hope we can set our sights on a convergent target that doesn't get bogged 
down in the tradeoffs that were made when VPNs were originally proposed.  The 
world is no longer a bunch of disconnected networks protected by Cheswick 
firewalls.  Cheswick said they were only temporary, and they've outlived their 
usefulness - they actually create security risks more than they fix them 
(centralizing security creates points of failure and attack that exponentially 
decrease the attackers' work factor).  To some extent that is also true for Tor 
after these many years.
 
By putting the intelligence about security in the network, you basically do all 
the bad things that the end-to-end argument encourages you to avoid.  We could 
also put congestion control in the network by re-creating admission control and 
requiring contractual agreements to carry traffic across every intermediary.  
But I think that basically destroys almost all the value of an inter net.  It 
makes it a balkanized proprietary set of subnets that have dozens of reasons 
why you can't connect with anyone else, and no way to be free to connect.
 
 
 


On Wednesday, December 3, 2014 2:44pm, Dave Taht dave.t...@gmail.com said:



 On Wed, Dec 3, 2014 at 6:17 AM, David P. Reed dpr...@reed.com wrote:
  Tor needs this stuff very badly.
 
 Tor has many, many problematic behaviors relevant to congestion control
 in general. Let me paste a bit of private discussion I'd had on it in a 
 second,
 but a very good paper that touched upon it all was:
 
 DefenestraTor: Throwing out Windows in Tor
 http://www.cypherpunks.ca/~iang/pubs/defenestrator.pdf
 
 Honestly tor needs to move to udp, and hide in all the upcoming
 webrtc traffic
 
 http://blog.mozilla.org/futurereleases/2014/10/16/test-the-new-firefox-hello-webrtc-feature-in-firefox-beta/
 
 webrtc needs some sort of non-centralized rendezvous mechanism, but I am 
 REALLY
 happy to see calls and video stay entirely inside my network when they can be
 negotiated as such.
 
 https://plus.google.com/u/0/107942175615993706558/posts/M4xUtpCKJ4P
 
 And of course, people are busily reinventing torrent in webrtc without
 paying attention to congestion control at all.
 
 https://github.com/feross/webtorrent/issues/39
 
 Giving access to udp to javascript programmers... what could go wrong?
 :/
 
  I do wonder whether we should focus on vpn's rather than end to end
  encryption that does not leak secure information through from inside as the
  plan seems to do.
 
 plan?
 
 I like e2e encryption. I also like overlay networks. And meshes.
 And working dns and service discovery. And low latency.
 
 vpns are useful abstractions for sharing an address space you
 may not want to share more widely.
 
 and: I've taken a lot of flack about how fq doesn't help on conventional
 vpns, and well, just came up with an unconventional vpn idea,
 that might have some legs here... (certainly in my case tinc
 as constructed already, no patches, solves hooking together the
 12 networks I have around the globe, mostly)
 
 As for leaking information, packet size and frequency is generally
 an obvious indicator of a given traffic type, some padding added or
 no. There is one piece of plaintext
 in tinc (the seqno), also. It also uses a fixed port number for both
 sides of the connection (perhaps it shouldn't)
 
 So I don't necessarily see a difference between sending a whole lot of
 varying data on one tuple
 
 2001:db8::1 - 2001:db8:1::1 on port 655
 
 vs
 
 2001:db8::1 - 2001:db8:1::1 port 655
 2001:db8::2 - 2001:db8:1::1 port 655
 2001:db8::3 - 2001:db8:1::1 port 655
 2001:db8::4 - 2001:db8:1::1 port 655
 
 
 which solves the fq problem on a vpn like tinc neatly. A security feature
 could be source specific routing where we send stuff over different paths
 from different ipv6 source addresses... and mixing up the src/dest ports
 more, but that complexifies the fq portion of the algo. My thought
 for an initial implementation is to just hard code the ipv6 address range.
 
 I think however that adding tons and tons of ipv6 addresses to a given
 interface is probably slow,
 and might break things like nd and/or multicast...
 
 what would be cooler would be if you could allocate an entire /64 (or
 /118) to the vpn daemon
 
 bindtoaddress(2001:db8::/118) (give me all the data for 1024 ips)
 
 but I am not sure how to go about doing that..
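
(A small Python sketch of the address-spreading idea - this is not tinc code; 
it only shows how hashing an inner flow's 5-tuple onto one of a handful of 
local source addresses produces distinct outer tuples for a flow-queueing box 
on the path to work with. The prefix, the flow key and the count of addresses 
are all illustrative.)

  import hashlib, ipaddress

  base = ipaddress.IPv6Address("2001:db8::1")   # block the VPN daemon owns
  sources = [base + i for i in range(4)]

  def outer_source(inner_flow):
      """Pick a stable outer source address for an inner 5-tuple."""
      key = "|".join(str(x) for x in inner_flow).encode()
      h = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
      return sources[h % len(sources)]

  # Two inner flows between the same two hosts become two distinct outer tuples.
  print(outer_source(("10.0.0.2", "10.1.0.9", 6, 40000, 22)))    # ssh
  print(outer_source(("10.0.0.2", "10.1.0.9", 6, 40001, 443)))   # https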
 
 ...moving back to a formerly private discussion about tors woes...
 
 
 This conversation is a bit separate from #11197 (which is an
 implementation issue in obfsproxy), so separate discussion somewhere
 would probably be required.
 
 So, there appears to be a slight misconception on how tor traffic
 travels across the 

Re: [Cerowrt-devel] SQM: tracking some diffserv related internet drafts better

2014-11-13 Thread dpreed

The IETF used to require rough consensus and *working code*.  The latter seems 
to be out of fashion - especially with a zillion code points for which no 
working code has been produced, and worse yet, no real world testing has 
demonstrated any value whatsoever.
 
It's also true that almost no actual agreements exist at peering points about 
what to do at those points.  That's why diffserv appears to be a lot of energy 
wasted on something that has little to do with inter-networking.
 
intserv was a plausible idea, because within a vertically integrated system, 
one can enforce regularity and consistency.
 
So my view, for what it is worth, is that there are a zillion better things to 
put effort into than incorporating diffserv into CeroWRT!
 
Cui bono?
 


On Thursday, November 13, 2014 12:26pm, Dave Taht dave.t...@gmail.com said:



 This appears to be close to finalization, or finalized:
 
 http://tools.ietf.org/html/draft-ietf-dart-dscp-rtp-10
 
 And this is complementary:
 
 http://tools.ietf.org/html/draft-ietf-tsvwg-rtcweb-qos-03
 
 While wading through all this is tedious, and much of the advice 
 contradictory,
 there are a few things that could be done more right in the sqm system
 that I'd like to discuss. (feel free to pour a cup of coffee and read
 the drafts)
 
 -1) They still think the old style tos imm bit is obsolete. Sigh. Am I
 the last person that uses ssh or plays games?
 
 0) Key to this draft is expecting that the AF code points on a single
 5-tuple not be re-ordered, which means dumping AF41 into a priority
 queue and AF42 into the BE queue is incorrect.
 
 1) SQM only prioritizes a few diffserv codepoints (just the ones for
 which I had tools doing classification, like ssh). Doing so with tc
 rules is very inefficient presently. I had basically planned on
 rolling a new tc and/or iptables filter to do the right thing to map
 into all 64 codepoints via a simple lookup table (as what is in the
 wifi code already), rather than use the existing mechanism... and
 hesitated
 as nobody had nailed down the definitions of each one.
 
 That said, I have not measured recently the impact of the extra tc
 filters and iptables rules required.
 
 1a) Certainly only doing AF42 in sqm is pretty wrong (that was left
 over from my test patches against mosh - mosh ran with AF42 for a
 while until they crashed a couple routers with it)
 
 The relevant lines are here:
 
 https://github.com/dtaht/ceropackages-3.10/blob/master/net/sqm-scripts/files/usr/lib/sqm/functions.sh#L411
 
 1b) The cake code presently does it pretty wrong, which is eminately fixable.
 
 1c) And given that the standards are settling, it might be time to
 start baking them into a new tc or iptables filter. This would be a
 small, interesting project for someone who wants to get their feet wet
 writing this sort of thing, and examples abound of how to do it.
 
 2) A lot of these diffserv specs - notably all the AFxx codepoints -
 are all about variable drop probability. (Not that this concept has
 been proven to work in the real world) We don't do variable drop
 probability... and I haven't the slightest clue as to how to do it in
 fq_codel. But keeping variable diffserv codepoints in order on the
 same 5 tuple seems to be the way things are going. Still I have
 trouble folding these ideas into the 3 basic queue system fq_codel
 uses, it looks to me as most of the AF codepoints end up in the
 current best effort queue, as the priority queue is limited to 30% of
 the bandwidth by default.
 
 
 3) Squashing inbound dscp should still be the default option...
 
 4) My patch set to the wifi code for diffserv support disables the VO
 queue almost entirely in favor of punting things to the VI queue
 (which can aggregate), but I'm not sure if I handled AFxx
 appropriately.
 
 5) So far as I know, no browser implements any of this stuff yet. So
 far as I know nobody actually deployed a router that tries to do smart
 things with this stuff yet.
 
 6) I really wish there were more codepoints for background traffic than cs1.
 
 --
 Dave Täht
 
 http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Torrents are too fast

2014-11-03 Thread dpreed

In other words, rather than share the capacity of the link fairly among flows 
(as TCP would if you eliminated excess buffer-bloat), you want to impose 
control on an endpoint from the middle?
 
This seems counterproductive... what happens when the IP address changes, new 
services arise, and more ports are involved?
 
TCP and other protocols generally are responsive when packets are dropped at a 
congested point, and they generally end up sharing the available capacity 
relatively fairly.  If you want even more fairness, use fq_codel. It should do 
pretty much what you want without even having to identify the source addresses.
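
(A cartoon of why per-flow queueing gets you that without naming any addresses: 
hash each packet's 5-tuple into its own queue and serve the queues round-robin. 
This is only the flavour of the idea behind fq_codel, not its algorithm - no 
CoDel, no DRR deficits.)

  from collections import defaultdict, deque

  class ToyFlowQueue:
      def __init__(self):
          self.queues = defaultdict(deque)        # flow hash -> packets

      def enqueue(self, pkt):
          key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
          self.queues[hash(key)].append(pkt)

      def dequeue_round(self):
          """One packet from each active flow per round: equal shares, no config."""
          sent = []
          for key in list(self.queues):
              sent.append(self.queues[key].popleft())
              if not self.queues[key]:
                  del self.queues[key]
          return sent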


On Monday, November 3, 2014 2:53am, Dane Medic dm7...@gmail.com said:





Hi,

what lines do I have to add to simple.qos script on cerowrt to slow down bulk 
traffic from a specific IP address (172.30.42.6) and from a specific port 
(18224)?


Thank you guys
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] bulk packet transmission

2014-10-10 Thread dpreed

The best approach to dealing with locking overhead is to stop thinking that 
if locks are good, more locking (finer grained locking) is better.  OS 
designers (and Linux designers in particular) are still putting in way too much 
locking.  I deal with this in my day job (we support systems with very large 
numbers of cpus and because of the fine grained locking obsession, the 
parallelized capacity is limited).   If you do a thoughtful design of your 
network code, you don't need lots of locking - because TCP/IP streams don't 
have to interact much - they are quite independent.   But instead OS designers 
spend all their time thinking about doing one thing at a time.
 
There are some really good ideas out there (e.g. RCU) but you have to think 
about the big picture of networking to understand how to use them.  I'm not 
impressed with the folks who do the Linux networking stacks.


On Thursday, October 9, 2014 3:48pm, Dave Taht dave.t...@gmail.com said:



 I have some hope that the skb->xmit_more API could be used to make
 aggregating packets in wifi on an AP saner. (my vision for it was that
 the overlying qdisc would set xmit_more while it still had packets
 queued up for a given station and then stop and switch to the next.
 But the rest of the infrastructure ended up pretty closely tied to
 BQL)
 
 Jesper just wrote a nice piece about it also.
 http://netoptimizer.blogspot.com/2014/10/unlocked-10gbps-tx-wirespeed-smallest.html
 
 It was nice to fool around at 10GigE for a while! And netperf-wrapper
 scales to this speed also! :wow:
 
 I do worry that once sch_fq and fq_codel support is added that there
 will be side effects. I would really like - now that there are al
 these people profiling things at this level to see profiles including
 those qdiscs.
 
 /me goes grumbling back to thinking about wifi.
 
 On Thu, Oct 9, 2014 at 12:40 PM, David Lang da...@lang.hm wrote:
  lwn.net has an article about a set of new patches that avoid some locking
  overhead by transmitting multiple packets at once.
 
  It doesn't work for things with multiple queues (like fq_codel) in it's
  current iteration, but it sounds like something that should be looked at and
  watched for latency related issues.
 
  http://lwn.net/Articles/615238/
 
  David Lang
  ___
  Cerowrt-devel mailing list
  Cerowrt-devel@lists.bufferbloat.net
  https://lists.bufferbloat.net/listinfo/cerowrt-devel
 
 
 
 --
 Dave Täht
 
 https://www.bufferbloat.net/projects/make-wifi-fast
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] wifi over narrow channels

2014-10-09 Thread dpreed

Wideband is far better for scaling than narrowband, though.  This may seem 
counterintuitive, but narrowband systems are extremely inefficient.  They 
appeal to 0/1 thinking intuitively, but in actual fact the wider the bandwidth 
the more sharing and the more scaling is possible (and not by balkanization 
or exclusive channel negotiation).
 
Two Internets are far worse than a single Internet that combines both.  That's 
because you have more degrees of freedom in a single network than you can in 
two distinct networks, by a combinatorial factor.
 
The analogy holds that one wide band is far better than two disjoint bands in 
terms of scaling and adaptation. The situation only gets better because of the 
physics of multipath, which creates more problems the more narrowband the 
signal, and when the signal is a single frequency, multipath is disastrous.
 
The same is true if you try to divide space into disjoint channels (as 
cellular tries to).
 
So in the near term, narrowband wifi might be a short-term benefit, but 
long-term it is 180 degrees away from where you want to go.
 
(the listen-before-talk protocol in WiFi is pragmatic because it is built into 
hardware today, but terrible for wideband signals, because you can't shorten 
the 4 usec. pre-transmit delay, and probably need to lengthen it, since 4 usec. 
is about 1.25 km or 0.8 miles, and  holds 40 bits at 10 Mb/s, or 4000 bits at 1 
Gb/sec).
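 
(Checking that arithmetic in a few lines - illustrative only.)

  t = 4e-6            # listen-before-talk guard time, seconds
  print(t * 3e8)      # ~1200 m of propagation, on the order of a kilometre
  print(t * 10e6)     # ~40 bits of airtime at 10 Mb/s
  print(t * 1e9)      # ~4000 bits of airtime at 1 Gb/s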
 
Either for distance or for rate, the Ethernet MAC+PHY was designed for short 
coax or hub domains. It's not good for digital wireless Internet, except for 
one thing: it is based on distributed control that does not require any advance 
planning.
 
If you want to improve open wireless, you have to a) go wide, b) maintain 
distributed control, c) get rid of listen-before-talk to replace it with a 
mixture of co-channel decoding and propagation negotiation.  Then you can beat 
cellular trivially.
 
I wish I could attract investment away from the short term WiFi thinking, but 
in the last 15 years, I've failed.  Meanwhile WiFi also attracts those people 
who want to add bufferbloat into the routers because they don't understand 
congestion control.
 
Sad.


On Wednesday, October 8, 2014 6:14pm, Dave Taht dave.t...@gmail.com said:



 https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final142.pdf
 
 I've had 5mhz channels working in the ath9k at various points in
 cerowrt's lifetime. (using it for meshy stuff) After digesting most of
 the 802.11ac standard I do find myself wishing they'd gone towards
 narrower channels rather than wider.
 
 The netgear x4 defaults to a 160mhz wide channel. :sigh:
 
 The above paper has some nifty ideas in it.
 
 --
 Dave Täht
 
 https://www.bufferbloat.net/projects/make-wifi-fast
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] full duplex wifi?

2014-09-18 Thread dpreed

This is not completely crazy.  A couple of grad students and I demonstrated 
this type of thing with USRP's in my lab at MIT. The problem you, David Lang, 
refer to is basically the key thing to deal with, but the physics and 
information theory issues can be dealt with.
 
There's significant work in the RADAR (not radio) field that bears on the 
design of this.  I am sure there is more of that that is currently classified.
 
There are a lot of practical design issues in the front-end and the waveform 
design to be able to do this sort of thing well - especially in the field 
rather than the lab.   Your receive antenna will receive echoes of your own 
transmission that have to be separated from your signal and the source you are 
listening to.
 
Since this is full-duplex, there are only two signals involved and each knows 
its own signal's waveform pretty precisely - you can even attenuate the antenna 
output to get a precise measure of your signal.
 
So I think in a few years this might be practical - but a protocol to exploit 
this capability optimally would be complicated because of the need to 
compensate for the propagation environment effects.

On Tuesday, September 16, 2014 11:08pm, David Lang da...@lang.hm said:



 On Tue, 16 Sep 2014, David Lang wrote:
 
  On Tue, 16 Sep 2014, Dave Taht wrote:
 
  It would be very nice to get some TXOPs back:
 
  Is this crazy or not?
 
  http://web.stanford.edu/~skatti/pubs/sigcomm13-fullduplex.pdf
 
  I start off _extremely_ skeptical of the idea. While it would be a
  revolutionary improvement if it can work, there are some very basic points 
  of
  physics that make this very hard to achieve.
 
  If they can do it, they double the capacity of existing wireless systems,
  which helps, but it's not really that much (the multipath directed
  beamforming helps more)
 
  I'll read though the paper and comment more later.
 
 Ok, they are working on exactly the problem I described. They do a significant
 amount of the work in digital, which is probably why they get an 87% 
 improvement
 instead of a 2x improvement. This also will eat a fair bit of the DSP 
 processing
 capacity.
 
 As they note, this only works with single antenna systems. They list support 
 for
 multi-antenna systems as future work, and that's going to be quite a bit of 
 work
 (not impossible, but very hard)
 
 This will be a great thing for point-to-point infrastructure type links, but
 isn't that useful for more 'normal' situations (let alone high density
 environments)
 
 MIMO multi-destination can provide as much or more airtime saving when you
 actually have multiple places to send the data
 
 think of it as the core frequency vs core count type of tradeoff.
 
 David Lang
 
 
  warning, radio primer below
 
  the strength of a radio signal drops off FAST ( distance^3 in the worst 
  case,
  but close to distance^2 if you have pretty good antennas)
 
  you lose a lot of signal in the transition from the antenna wire to the air
  and from the air to the antenna wire.
 
  The result of this is that your inbound signal is incredibly tiny compared 
  to
  your outbound signal.
 
  In practice, this is dealt with by putting a very high power amplifier on 
  the
  inbound signal to make it large enough for our electronics to deal with. to
  do this effectively for signals that vary wildly in strength, this amplifier
  is variable, and amplifies all the signals that it gets until the strongest
  one is at the limits of the amplifier's output.
 
  Because of this, a receiver without a good input filter can get into a
  situation where it cannot receive its desired signal because some other
  signal somewhat near the signal it wants is strong enough to cause problems.
 
  digital signal processing is no help here. If you digitize the signal (let's
  talk 8 bits for the moment, although 12-14 bits is more common in the real
  world), and you have one signal that's 100 times as strong as the other
  (which could be that one is 10 ft away and the other 100 ft away), the near
  signal is producing samples of 0-255, while the far signal is producing
  samples 0-2. There's not much you can do to get good fidelity when you only
  have 3 possible values for your data.
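
(The quantization arithmetic made explicit, using the numbers above.)

  import math

  bits = 8
  levels = 2 ** bits                   # 256 codes
  ratio = 100                          # near signal 100x the far one's amplitude

  strong_peak = levels - 1             # AGC puts the strong signal at full scale
  weak_peak = strong_peak / ratio      # the far signal then spans ~2.5 codes

  print(weak_peak)                     # ~2.5 - barely above the noise floor
  print(20 * math.log10(levels))       # ~48 dB ideal dynamic range of 8 bits
  print(20 * math.log10(ratio))        # 40 dB eaten just separating the two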
 
  Real radios deal with this by having analog filters to cut out the strong
  signal so that they can amplify the weak signal more before it hits the
  digital section.
 
  But if we are trying to transmit and receive at the same time, on the same
  channel, then we are back to the problem of the transmit vs receive power.
 
  Taking a sample radio, the Baofeng uv-5r handheld (because I happen to have
  it's stats handy)
 
  on transmit, it is producing 5w into a 50ohm load, or ~15v (v=sqrt(P*R)),
  while it is setup to receive signals of 0.2u volt.
 
  being able to cancel the transmitting signal perfectly enough to be able to
  transmit and at the same time receive a weak signal on a nearby frequency
  with the same antenna is a HARD thing 

Re: [Cerowrt-devel] Fixing bufferbloat: How about an open letter to the web benchmarkers?

2014-09-11 Thread dpreed

I will sign.  It would be better if we had an actual demonstration of how to 
implement a speedtest improvement.
 


On Thursday, September 11, 2014 12:03pm, Dave Taht dave.t...@gmail.com said:



 The theme of networks being engineered for speedtest has been a
 common thread in nearly every conversation I've had with ISPs and
 vendors using every base technology out there, be it dsl, cable,
 ethernet, or fiber, for the last 4 years. Perhaps, in pursuing better
 code, and RFCs, and the like, we've been going about fixing
 bufferbloat the wrong way.
 
 If Verizon can petition the FCC to change the definition of
 broadband... why can't we petition speedtest to *change their test*?
 Switching to merely reporting the 98th percentile results for ping
 during an upload or download, instead of the baseline ping, would be a
 vast improvement on what happens today, and no doubt we could suggest
 other improvements.
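
(Roughly what the changed report would look like - a sketch that assumes we 
already have ping samples collected while the transfer is running; the numbers 
are invented.)

  def p98(samples_ms):
      """98th-percentile latency from pings taken during the up/download."""
      ordered = sorted(samples_ms)
      return ordered[min(len(ordered) - 1, round(0.98 * (len(ordered) - 1)))]

  idle_ping = 12.0
  loaded = [14, 15, 18, 250, 380, 16, 410, 19, 395, 17]
  print("reported today:", idle_ping, "ms")
  print("proposed:      ", p98(loaded), "ms")   # the bloat becomes visible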
 
 What if we could publish an open letter to the benchmark makers such
 as speedtest, explaining how engineering for their test does *not*
 make for a better internet? The press fallout from that letter, would
 improve some user education, regardless if we could get the tests
 changed or not.
 
 Who here would sign?
 
 
 On Wed, Sep 10, 2014 at 2:54 PM, Joel Wirāmu Pauling
 j...@aenertia.net wrote:
  I have been heavily involved with the UFB (Ultrafast Broadband) PON
  deployment here in New Zealand.
 
  I am not sure how the regulated environment is playing out in Canada
  (I am moving there in a month so I guess I will find out). But here
  the GPON architecture is METH based and Layer2 only. Providers (RSP's)
  are the ones responsible for asking for Handoffer buffer tweaks to the
  LFC(local fibre companies; the layer 0-2 outfits-) which have mandated
  targets for Latency (at most 4.5ms) accross their PON Access networks
  to the Handover port.
 
  Most of the time this has been to 'fix' Speedtest.net TCP based
  results to report whatever Marketed service (100/30 For example) is in
  everyones favourite site speedtest.net.
 
  This has meant at least for the Chorus LFC regions where they use
  Alcatel-Lucent 7450's as the handover/aggregation switches we have
  deliberately introduced buffer bloat to please the RSP's - who
  otherwise get whingy about customers whinging about speedtest not
  showing 100/30mbit. Of course user education is 'too hard' .
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] Fixing bufferbloat: How about an open letter to the web benchmarkers?

2014-09-11 Thread dpreed

The speedof.me API probably can be used directly as the measurement of download 
and upload - you can create a competing download or upload in Javascript using 
a WebWorker talking to another server that supports the websocket API to force 
buffer overflow.  (sort of poor man's RRUL).
 
The speedof.me API would give you the measured performance, while the other 
path would just be an easier-to-code test load to a source/sink.
 
Not sure that would help, but for a prototype it's not bad.


On Thursday, September 11, 2014 8:42pm, Jonathan Morton 
chromati...@gmail.com said:



 
 On 12 Sep, 2014, at 3:35 am, dpr...@reed.com wrote:
 
  Among friends of mine, we can publicize this widely. But those friends
 probably would like to see how the measurement would work.
 
 Could we make use of the existing test servers (running netperf) for that
 demonstration? How hard is the protocol to fake in Javascript?
 
 Or would a netperf-wrapper demonstration suffice? We've already got that, but
 we'd need to extract the single-figures-of-merit from the data.
 
 I wonder if the speedof.me API can already be tricked into doing the right 
 thing?
 
 - Jonathan Morton
 
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] 10GigE nics and SFP+ modules?

2014-09-10 Thread dpreed

I'm confused
 
SFP is not SFP+.   SFP carries at most 4.25 Gb/sec.   SFP+ works at 10 Gb/sec. 
  So, it's not clear that the MikroTik is very useful in a 10 Gig world.  It's 
not immediately clear, but VERY likely, that this is an edge switch that is 
intended for collapsing the GigE copper traffic onto a potentially bottlenecked 
GigE local backbone.
 
Of course if you want to go from GigE fiber to GigE copper, that board might be 
useful.


On Wednesday, September 10, 2014 2:05pm, Dave Taht dave.t...@gmail.com said:



 On Wed, Sep 10, 2014 at 12:01 AM, Mikael Abrahamsson swm...@swm.pp.se
 wrote:
 
  I don't like when people create their cable plant to match what GPON needs.
  It's done because of the illusion that long-haul fiber is expensive. It
  isn't, if you have to dig anyway. The difference in cost of a 12 fiber
  cable, and a 1000 fiber cable, isn't huge compared to the digging costs.
  Splicing a 1000 fiber cable isn't huge either. Point-to-point fiber cabling
  is the way to go. If you then decide to light it up using PON of some kind,
  fine, that's up to you, at least you have the flexibility to change
  technology in the future.
 
  and several costs there have dropped significantly, routerboard is
  making a SFP+ capable
  5 port switch for like 50 dollars
 
 
  URL?
 
 Nick weaver (of ICSI) just turned me onto them -
 
 http://www.cloudrouterswitches.com/RB260GS.asp?gclid=Cj0KEQjw7b-gBRC45uLY_avSrdgBEiQAD3Olx8_iFXJ_xKjZInc2T54XEu5VyMsTe42Rla3GTRKrkwwaAu2M8P8HAQ
 
 He also steered me to a nifty port mirroring POE passthrough device:
 
 http://www.dual-comm.com/gigabit_port-mirroring-LAN_switch.htm
 
 Haven't tried either yet, personally.
 
 It turns out that both he and I are using the nearly same model nucs
 for load testing, with a standard 2.5inch sata 3 slot, and 2 mini pcie
 slots for a half length and full length wifi device.
 
 (The D54250WYK1, which is a dual core, dual thread/core i5 based with
 16 GB Ram and a 120 GB SSD). The only disadvantage is it doesn't
 include the Vpro lights out management suite present in the 3rd gen
 model.
 
 I am using the i3 versions. I'd written up a review of the one without
 the 2.5 inch slot here:
 
 http://snapon.lab.bufferbloat.net/~cero2/nuc-to-puck/results.html
 
 and later upgraded to the one with the slot, as sata is faster than
 msata, and the best wifi (atheros ath9k and ath10k) cards are all full
 length).
 
 These have e1000e cards in them, which support BQL under linux 3.6 and
 later, and although the (i3 at least) can't drive gigE to saturation
 without TSO, they have been quite nice and quite quiet so far, and
 have been giving solid results. They are a really good desktop, too,
 under linux, and mounting them on the back of the monitor (or, as I
 do, on a pegboard), is helpful too.
 
 
 
  --
  Mikael Abrahamsson email: swm...@swm.pp.se
 
 
 
 --
 Dave Täht
 
 https://www.bufferbloat.net/projects/make-wifi-fast
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] 10GigE nics and SFP+ modules?

2014-09-06 Thread dpreed

I have been happy with the PicoPSU power supplies which are tiny ATX PSU's that 
go up to 160 W.
They take 12V input, which can be supplied by an external brick or a small 12V 
power supply of the sort used to supply power to lighting circuits (I use a 
Meanwell NES-350-12 to power 3 boards with individual PicoPSU's).
 
For home experiments I have been using the less expensive X520-DA2 Dual copper 
Cat6 10 GigE boards from Intel, at about $500 per card. They are well supported 
in Linux.  SFP+ is the way to go for fiber.
 
Also, Netgear sells a small 8 port copper interface 10 GigE switch for  $1000. 
http://www.netgear.com/business/products/switches/unmanaged-plus/10g-plus-switch.aspx
 It's an 8-port desktop switch, which means unmanaged, etc.  I haven't tried 
that.  For various reasons my home machines are connected directly with each 
other, without a switch, but you want a switch, and this is far less pricey.
 
I think the copper option might get you 100m of distance, but I'm not sure.
 
All that said, if you want high end stuff, you might want to go for the X540 
fiber product line and switches that seem to be $5,000 or more, plus the cost 
of the SFP+ cables, ...  At least a factor of 10 more expensive, for better 
latency in the switch, fancy management features in the switch and more reach 
in the cabling for less electrical power.  Depends on what you want.
 
We use the higher end stuff at work.
 
 
 
 


On Saturday, September 6, 2014 2:36pm, Dave Taht dave.t...@gmail.com said:



 Given that the rangeley series of processors apparently has support in
 openwrt already, I picked up one last week.
 
 http://www.amazon.com/Supermicro-Atom-C2758-Motherboards-MBD-A1SRI-2758F-O/dp/B00FM4M7TQ/ref=sr_1_1?ie=UTF8qid=undefinedsr=8-1keywords=C2758
 
 I'm still looking for a good case for it - the first rack-mount case I
 got had mini-itx mounts but a non-ATX power supply, suggestions? Only
 need 50 watts or less of power supply...
 
 Where my brain falls off a cliff is sorting through the SFP+ options.
 I've been told to seek out the intel chipset cards as the best
 supported under linux, so would this be good? Are there other options?
 
 http://www.amazon.com/Intel-Gigabit-Dual-Server-Adapter/dp/B001AGFXTQ/ref=sr_1_10?ie=UTF8qid=1410027344sr=8-10keywords=10GigE+nic
 
 Or these?
 
 http://www.amazon.com/Intel-Ethernet-X520-SR2-Server-Adapter/dp/B002I9JCQY/ref=pd_cp_pc_0
 
 I'd like single mode (20km+) fiber support, but also to try whatever
 mode is more common in DCs... ?
 
 http://www.amazon.com/Intel-Ethernet-ETHERNET-MODULE-10GBase-SR/dp/B009KZNWE2/ref=sr_1_1?s=electronicsie=UTF8qid=1410028448sr=1-1keywords=Intel+SFP%2B
 
 These are the most expensive items (with the exception of snapon) I've
 ever bought on behalf of bufferbloat.net, and it feels weird to be
 doing this after fighting all month with 100mbit issues... but I've
 been dying to get some 10GigE data, so...
 
 --
 Dave Täht
 
 https://www.bufferbloat.net/projects/make-wifi-fast
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] still trying to find hardware for the next generation worth hacking on

2014-08-22 Thread dpreed

Yes.
 


On Friday, August 22, 2014 11:12am, William Katsak wkat...@gmail.com said:


On the FWMB-7950? Are you referring to the bypass switch?
-Bill


On Aug 22, 2014, at 10:19 AM, David P. Reed dpr...@reed.com wrote:


You missed the on board switch which is a major differentiator.


On Aug 22, 2014, William Katsak wkat...@gmail.com wrote:

This is a nice board, but other than the form 
factor, it isn’t much different than this Supermicro board which is readily 
available:
http://www.supermicro.com/products/motherboard/Atom/X10/A1SRi-2758F.cfm
SuperBiiz.com ( http://superbiiz.com/ ) has it for 326.99:
http://www.superbiiz.com/detail.php?name=MB-A1SR2F
They also have a similar board in a larger MicroATX form factor.
I have the Avoton equivalent of this board (everything the same except Avoton 
instead of Rangeley) and it is super nice.
-Bill


On Aug 21, 2014, at 11:11 PM, Dave Taht dave.t...@gmail.com wrote:
On Sun, Aug 17, 2014 at 12:13 PM, dpr...@reed.com wrote:
http://www.habeyusa.com/products/fwmb-7950-rangeley-network-communication-board/
looks intriguing.
I have to say that looks very promising as a testbed vehicle. Perhaps
down the road a candidate for
a head-end solution... or a corporate edge gateway.

I also spoke to an intel rep at linuxcon
that mentioned a rangeley board with 10GigE capability onboard.

Have you contacted habeyusa?
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
-- Sent from my Android device with K-@ Mail 
( https://play.google.com/store/apps/details?id=com.onegravity.k10.pro2 ). 
Please excuse my brevity.
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] Check out www.speedof.me - no Flash

2014-07-25 Thread dpreed

I think what is being discussed is how to measure the quality of one 
endpoint's experience of the entire Internet over all time or over a specific 
interval of time.
 
Yet the systems that are built on top of the Internet transport do not have any 
kind of uniform dependence on the underlying transport behavior in terms of 
their quality.  Even something like VoIP's quality as experienced by two humans 
talking over it has a dependency on the Internet's behavior, but one that is 
hardly simple.
 
As an extreme, if one endpoint experiences a direct DDoS attack or is 
indirectly affected by one somewhere in the path, the quality of the experience 
might be dramatically reduced.
 
So any attempt to define a delta-Q that has  meaning in terms of user 
experience appears pointless and even silly - the endpoint experience is 
adequate under a very wide variety of conditions, but degrades terribly under 
certain kinds of conditions.
 
As a different point, let's assume that the last-mile is 80% utilized, but the 
latency variation in that utilization is not larger than 50 msec.  This is a 
feasible-to-imagine operating point, but it requires a certain degree of tight 
control that may be very hard to achieve over thousands of independent 
application services through that point, so its feasibility is contingent on 
lots of factors. Then if the 20% capacity is far larger than 64 kb/sec we know 
that toll-quality audio can be produced with a small endpoint jitter buffer.  
There's no delta-Q there at all - quality is great.
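 
(The arithmetic behind "small endpoint jitter buffer", with the numbers used 
above.)

  jitter_s  = 0.050                 # worst-case latency variation on the link
  voice_bps = 64_000                # toll-quality audio stream

  buffer_bytes = voice_bps * jitter_s / 8
  print(buffer_bytes)               # 400 bytes - a trivially small jitter buffer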
 
So the point is: a single number or even a single morphism (whatever that is) 
to a specific algebraic domain element (a mapping to a semi-lattice with 
non-Abelian operators?) does not allow one to define a measure of an endpoint 
of the Internet that can be used to compute quality of all applications.
 
Or in purely non-abstract terms: if there were a delta-Q it would be useless 
for most network applications, but might be useful for a single network 
application.
 
So I submit that delta-Q is a *metaphor* and not a very useful one at that.  
It's probably as useful as providing a funkiness measure for an Internet 
access point.  We can certainly talk about and make claims about the relative 
funkiness of different connections and different providers.  We might even 
claim that cable providers make funkier network providers than cellular 
providers.
 
But to what end?


On Friday, July 25, 2014 5:13pm, David Lang da...@lang.hm said:



 On Fri, 25 Jul 2014, Martin Geddes wrote:
 
  So what is ΔQ and how do you compute it (to the extent it is a
 computed
  thing)?
 
 don't try to reduce it to a single number, we have two numbers that seem to
 matter
 
 1. throughput (each direction)
 
 2. latency under load
 
 Currently the speed test sites report throughput in each direction and ping 
 time
 while not under load
 
 If they could just add a ping time under load measurement, then we could talk
 meaningfully about either the delta or ratio of the ping times as the
 bufferbloat factor
 
  no, it wouldn't account for absolutely every nuance, but it would come pretty
 close.
 
 If a connection has good throughput and a low bufferbloat factor, it should be
 good for any type of use.
 
  If it has good throughput, but a horrid bufferbloat factor, then you need to
  artificially limit your traffic to stay clear of saturating the bandwidth
  (sacrificing throughput)
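
(The two candidate figures of merit as one-liners; the sample numbers are 
invented.)

  def bloat_delta(idle_ms, loaded_ms):
      return loaded_ms - idle_ms            # extra queueing delay added under load

  def bloat_ratio(idle_ms, loaded_ms):
      return loaded_ms / idle_ms            # how many times worse it gets

  print(bloat_delta(12, 480), bloat_ratio(12, 480))   # 468 ms extra, 40x worse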
 
 David Lang
 
  Starting point: the only observable effect of a network is to lose and
  delay data -- i.e. to attenuate quality by adding the toxic effects of
  time to distributed computations. ΔQ is a *morphism* that relates the
  quality attenuation that the network imposes to the application
  performance, and describes the trading spaces at all intermediate layers of
  abstraction. It is shown in the attached graphic.
 
  Critically, it frames quality as something that can only be lost
  (attenuated), both by the network and the application. Additionally, it
  is stochastic, and works with random variables and distributions.
 
  At its most concrete level, it is the individual impairment encountered by
   every packet when the network is in operation. But we don't want to have to
  track every packet - 1:1 scale maps are pretty useless. So we need to
  abstract that in order to create a model that has value.
 
  Next abstraction: an improper random variable. This unifies loss and delay
  into a single stochastic object.
  Next abstraction: received transport, which is a CDF where we are
  interested in the properties of the tail.
 
  Next abstraction, that joins network performance and application QoE (as
  relates to performance): relate the CDF to the application through a
  Quality Transport Agreement. This stochastic contract is both necessary
  and sufficient to deliver the application outcome.
 
  Next concretisation towards QoE: offered load of demand, as a CDF.
  Next concretisation towards QoE: breach hazard metric, which 

Re: [Cerowrt-devel] Low Power UPSes (Was: Re: [Bloat] Dave Täht quoted in the ACLU blog)

2014-06-30 Thread dpreed

Good suggestions.  Also, if you have 12V charging the relevant battery, you can 
power 5V stuff with a cheap, off-the-shelf UBEC.  In a system I built recently, 
I powered a Wandboard, an SSD (SSD's typically only use their 5V supply) and an 
8 port GigE desktop switch with one that puts out 5 A at 5 V:
 
http://www.robotmarketplace.com/products/0-DYS30055.html.
 
There are lots of UBEC's out there in the robotics and radio control suppliers. 
Motors and batteries like to be higher than 5V, and the electronics and small 
servos like 5V.  You could design your own, but why bother...
 
 
 


On Sunday, June 29, 2014 11:45pm, David Lang da...@lang.hm said:



 On Sat, 28 Jun 2014, Joseph Swick wrote:
 
  On 06/28/2014 12:28 AM, Dave Taht wrote:
 
  One thing that does bug me is most UPSes are optimized to deliver a
 large
  load over a short time, a UPS capable of driving 5 watts for, say, 3 days
 is
  kind of rare.
 
 
  I think this is something that's in need of a new approach/disruption.
  For low power devices like NUCs and RasPi servers, running them off of a
  traditional UPS is hugely waste-full, since you're going from your Line
  voltage (120VAC or 240VAC in many places) to 12 or 24VDC (Or 48VDC for a
  bigger UPS). Then when the UPS has to kick in, it converts the battery
  voltage back to your line voltage.
 
  A better approach would be to have a UPS that had a good intelligent
  charger for your deep-cycle type battery that coming off the battery,
  you kept it at the correct DC level for your NUC or Raspi. Which for
  many of these devices is 5 or 12VDC. So in a sense, it becomes your
  low-power device's power suppy, it just happens to have the added
  benefit of having a built-in backup battery.
 
  Coming from a Ham Radio perspective, some hams run their base stations
  off of deep-cycle marine batteries with some form of charger keeping
  them topped off. This way, the radio operator can operate his or her
  station for days just on emergency power. Since a lot of ham gear is
  designed to operate off of 12VDC (with some notable exceptions like your
  high-power amplifiers).
 
  It shouldn't be hard to develop a decent grade Low-power UPS for home or
  small office use that can run these low power devices for days at a time
  with out all the inefficiencies of converting VAC to VDC and back again.
  And there's probably a bunch of Raspi (or similar low-power computer
  boards) enthusiasts who already have for their own personal use.
 
 I think a lot of people are just using li battery packs with USB output to run
 their Pi type computers, with a wall charger into the battery pack.
 
 it may not be the best thing for the batteries, but it's off-the-shelf and
 cheap.
 
 for 12v computers, it's easy to just float a gel cell on the output of a 
 power
 supply. If you want to be a purist, have some sort of current limiting 
 resistor
 so that when the battery is extremely low you don't overload the power supply,
 but in practice, the power supplies are cheap (getting hold of an old PC power
 supply is probably free, and they tend to have a fairly hefty 12v output), 
 and
 gel cells are pretty forgiving of abuse, so you can get away with the
 dirt-simple PS - battery - device the vast majority of the time.
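
The current-limit arithmetic mentioned above, as a small sketch (the float
voltage, low-battery voltage and 2 A limit are assumed example numbers, not a
recommendation for any particular battery):

    # Sizing the series resistor that caps charge current into a deeply
    # discharged gel-cell floated on a fixed supply. Values are assumptions.
    def series_resistor_ohms(supply_v, batt_low_v, max_charge_a):
        return (supply_v - batt_low_v) / max_charge_a

    def worst_case_watts(supply_v, batt_low_v, max_charge_a):
        return (supply_v - batt_low_v) * max_charge_a

    # 13.6 V float supply, battery sagged to 10.5 V, inrush limited to 2 A
    print("R = %.2f ohm, dissipating up to %.1f W" %
          (series_resistor_ohms(13.6, 10.5, 2.0), worst_case_watts(13.6, 10.5, 2.0)))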
 
 It helps that 12v equipment tends to actually be speced to run off of
 automotive power, which is about the ugliest power source you can deal with.
 
 David Lang
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-28 Thread dpreed

Interesting conversation.   A particular switch has no idea of the latency 
budget of a particular flow - so it cannot have its *own* latency budget.   
The switch designer has no choice but to assume that his latency budget is near 
zero.
 
The number of packets that should be sustained in flight to maintain maximum 
throughput between the source (entry) switch and destination (exit) switch of 
the flow need be no higher than
 
the flow's share of bandwidth of the bottleneck
 
multiplied by
 
the end-to-end delay (including packet forwarding, but not queueing).
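
To make the product above concrete, a one-line calculation (the 10 Mbit/s
share, 20 ms delay and 1500-byte MTU are assumed example values):

    # packets in flight = (flow's share of the bottleneck) x (end-to-end delay)
    def packets_in_flight(share_bps, end_to_end_delay_s, mtu_bytes=1500):
        return share_bps / 8.0 * end_to_end_delay_s / mtu_bytes

    # a flow entitled to 10 Mbit/s of the bottleneck across a 20 ms path
    print("%.1f MTU-sized packets" % packets_in_flight(10e6, 0.020))  # ~16.7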
 
All buffering needed for isochrony (jitter buffer) and alternative path 
selection can be moved to either before the entry switch or after the exit 
switch.
 
If you have multiple simultaneous paths, the number of packets in flight 
involves replacing bandwidth of the bottleneck with aggregate bandwidth 
across the minimum cut-set of the chosen paths used for the flow.
 
Of course, these are dynamic - the flow's share and paths used for the flow 
change over short time scales.  That's why you have a control loop that needs 
to measure them.
 
The whole point of minimizing buffering is to make the measurements more timely 
and the control inputs more timely.  This is not about convergence to an 
asymptote.
 
A network where every internal buffer is driven hard toward zero makes it 
possible to handle multiple paths, alternate paths, etc. more *easily*.   
That's partly because you allow endpoints to see what is happening to their 
flows more quickly so they can compensate.
 
And of course for shared wireless resources, things change more quickly because 
of new factors - more sharing, more competition for collision-free slots, 
varying transmission rates, etc.
 
The last thing you want is long-term standing waves caused by large buffers and 
very loose control.
 


On Tuesday, May 27, 2014 11:21pm, David Lang da...@lang.hm said:



 On Tue, 27 May 2014, Dave Taht wrote:
 
  On Tue, May 27, 2014 at 4:27 PM, David Lang da...@lang.hm wrote:
  On Tue, 27 May 2014, Dave Taht wrote:
 
  There is a phrase in this thread that is begging to bother me.
 
  Throughput. Everyone assumes that throughput is a big goal - and
 it
  certainly is - and latency is also a big goal - and it certainly is
 -
  but by specifying what you want from throughput as a compromise
 with
  latency is not the right thing...
 
  If what you want is actually high speed in-order packet delivery -
  say, for example a movie,
  or a video conference, youtube, or a video conference - excessive
  latency with high throughput, really, really makes in-order packet
  delivery at high speed tough.
 
 
  the key word here is excessive, that's why I said that for max
 throughput
  you want to buffer as much as your latency budget will allow you to.
 
  Again I'm trying to make a distinction between throughput, and packets
  delivered-in-order-to-the-user. (for-which-we-need-a-new-word-I think)
 
  The buffering should not be in-the-network, it can be in the application.
 
  Take our hypothetical video stream for example. I am 20ms RTT from netflix.
  If I artificially inflate that by adding 50ms of in-network buffering,
  that means a loss can
  take 120ms to recover from.
 
  If instead, I keep a 3*RTT buffer in my application, and expect that I have
 5ms
  worth of network-buffering, instead, I recover from a loss in 40ms.
 
  (please note, it's late, I might not have got the math entirely right)
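
A sketch of that arithmetic, using a deliberately simple model in which a
repaired loss costs roughly one extra round trip as inflated by the standing
queue (the model and constants are illustrative assumptions, not a claim about
any particular TCP implementation, and the 120/40 ms figures above were
themselves approximate):

    # In-network queueing inflates the RTT the loss-recovery loop sees.
    def recovery_ms(base_rtt_ms, network_queue_ms):
        effective_rtt = base_rtt_ms + network_queue_ms
        return 2 * effective_rtt   # detect the hole, then receive the retransmit

    print("50 ms of in-network buffering:", recovery_ms(20, 50), "ms")  # 140 ms
    print(" 5 ms of in-network buffering:", recovery_ms(20, 5), "ms")   #  50 ms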
 
 but you aren't going to be tuning the retry wait time per connection. what is
 the retry time that is set in your stack? It's something huge to survive
 international connections with satellite paths (so several seconds worth). If
 your server-to-eyeball buffering is shorter than this, you will get a window
 where you aren't fully utilizing the connection.
 
 so yes, I do think that if your purpose is to get the maximum possible 
 in-order
 packets delivered, you end up making different decisions than if you are just
  trying to stream an HD video, or do other normal things.
 
  The problem is thinking that this absolute throughput is representative of
 normal use.
 
  As physical RTTs grow shorter, the advantages of smaller buffers grow
 larger.
 
  You don't need 50ms queueing delay on a 100us path.
 
  Many applications buffer for seconds due to needing to be at least
  2*(actual buffering+RTT) on the path.
 
 For something like streaming video, there's nothing wrong with the application
  buffering aggressively (assuming you have the space to do so on the client 
 side),
 the more you have gotten transmitted to the client, the longer it can survive 
 a
  disruption of its network.
 
  There's nothing wrong with having an hour of buffered data between the server
  and the viewer's eyes. Now, this buffering should not be in the network
  devices, it should be in the client app, but this isn't because there's
  something wrong with buffering, it's just because the client device has so
  much more 

Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-26 Thread dpreed

 
On Monday, May 26, 2014 9:02am, Mikael Abrahamsson swm...@swm.pp.se said:



 So, I'd agree that a lot of the time you need very little buffers, but
 stating you need a buffer of 2 packets deep regardless of speed, well, I
 don't see how that would work.

 
My main point is that looking to increased buffering to achieve throughput 
while maintaining latency is not that helpful, and often causes more harm than 
good. There are alternatives to buffering that can be managed more dynamically 
(managing bunching and something I didn't mention - spreading flows or packets 
within flows across multiple routes when a bottleneck appears - are some of 
them).
 
I would look to queue minimization rather than queue management (which 
implied queues are often long) as a goal, and think harder about the end-to-end 
problem of minimizing total end-to-end queueing delay while maximizing 
throughput.
 
It's clearly a totally false tradeoff between throughput and latency - in the 
IP framework.  There is no such tradeoff for the operating point.  There may be 
such a tradeoff for certain specific implementations of TCP, but that's not 
fixed in stone.
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-25 Thread dpreed

Not that it is directly relevant, but there is no essential reason to require 
50 ms. of buffering.  That might be true of some particular QOS-related router 
algorithm.  50 ms. is about all one can tolerate in any router between source 
and destination for today's networks - an upper-bound rather than a minimum.
 
The optimum buffer state for throughput is 1-2 packets worth - in other words, 
if we have an MTU of 1500, 1500 - 3000 bytes. Only the bottleneck buffer (the 
input queue to the lowest speed link along the path) should have this much 
actually buffered. Buffering more than this increases end-to-end latency beyond 
its optimal state.  Increased end-to-end latency reduces the effectiveness of 
control loops, creating more congestion.
 
The rationale for having 50 ms. of buffering is probably to avoid disruption of 
bursty mixed flows where the bursts might persist for 50 ms. and then die. One 
reason for this is that source nodes run operating systems that tend to release 
packets in bursts. That's a whole other discussion - in an ideal world, source 
nodes would avoid bursty packet releases by letting the control by the receiver 
window be tight timing-wise.  That is, to transmit a packet immediately at 
the instant an ACK arrives increasing the window.  This would pace the flow - 
current OS's tend (due to scheduling mismatches) to send bursts of packets, 
catching up on sending that could have been spaced out and done earlier if 
the feedback from the receiver's window advancing were heeded.
 
That is, endpoint network stacks (TCP implementations) can worsen congestion by 
dallying.  The ideal end-to-end flows occupying a congested router would have 
their packets paced so that the packets end up being sent in the least bursty 
manner that an application can support.  The effect of this pacing is to move 
the backlog for each flow quickly into the source node for that flow, which 
then provides back pressure on the application driving the flow, which 
ultimately is necessary to stanch congestion.  The ideal congestion control 
mechanism slows the sender part of the application to a pace that can go 
through the network without contributing to buffering.
 
Current network stacks (including Linux's) don't achieve that goal - their 
pushback on application sources is minimal - instead they accumulate buffering 
internal to the network implementation.  This contributes to end-to-end latency 
as well.  But if you think about it, this is almost as bad as switch-level 
bufferbloat in terms of degrading user experience.  The reason I say almost 
is that there are tools, rarely used in practice, that allow an application to 
specify that buffering should not build up in the network stack (in the kernel 
or wherever it is).  But the default is not to use those APIs, and to buffer 
way too much.
 
Remember, the network send stack can act similarly to a congested switch (it is 
a switch among all the user applications running on that node).  IF there is a 
heavy file transfer, the file transfer's buffering acts to increase latency for 
all other networked communications on that machine.
 
Traditionally this problem has been thought of only as a within-node fairness 
issue, but in fact it has a big effect on the switches in between source and 
destination due to the lack of dispersed pacing of the packets at the source - 
in other words, the current design does nothing to stem the burst groups from 
a single source mentioned above.
 
So we do need the source nodes to implement less bursty sending stacks.  This 
is especially true for multiplexed source nodes, such as web servers 
implementing thousands of flows.
 
A combination of codel-style switch-level buffer management and the stack at 
the sender being implemented to spread packets in a particular TCP flow out 
over time would improve things a lot.  To achieve best throughput, the optimal 
way to spread packets out on an end-to-end basis is to update the receive 
window (sending ACK) at the receive end as quickly as possible, and to respond 
to the updated receive window as quickly as possible when it increases.
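
A toy sketch of that send-side discipline (byte-counting window, single flow;
it illustrates the no-dallying idea only and is not how any real stack is
implemented):

    # Transmit the instant the receiver's window opens, so packets leave
    # spaced the way ACKs arrive instead of accumulating into a burst.
    class AckClockedSender:
        def __init__(self, mss=1500):
            self.mss = mss
            self.in_flight = 0        # unacknowledged bytes

        def on_ack(self, acked_bytes, advertised_window, send):
            self.in_flight -= acked_bytes
            while self.in_flight + self.mss <= advertised_window:
                send(self.mss)        # release as soon as headroom appears
                self.in_flight += self.mss

    sent = []
    s = AckClockedSender()
    s.on_ack(0, 6000, sent.append)     # window opens: fill it immediately
    s.on_ack(1500, 6000, sent.append)  # each later ACK releases exactly what it freed
    print(sent)                        # [1500, 1500, 1500, 1500, 1500]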
 
Just like the bufferbloat issue, the problem is caused by applications like 
streaming video, file transfers and big web pages that the application 
programmer sees as not having a latency requirement within the flow, so the 
application programmer does not have an incentive to control pacing.  Thus the 
operating system has got to push back on the applications' flow somehow, so 
that the flow ends up paced once it enters the Internet itself.  So there's no 
real problem caused by large buffering in the network stack at the endpoint, as 
long as the stack's delivery to the Internet is paced by some mechanism, e.g. 
tight management of receive window control on an end-to-end basis.
 
I don't think this can be fixed by cerowrt, so this is out of place here.  It's 
partially ameliorated by cerowrt, if it aggressively drops packets from flows 

Re: [Cerowrt-devel] Fwd: qos in open commotion?

2014-05-25 Thread dpreed

Besides my diversionary ramble in previous post, let me observe this.
 
Until you realize that maintaining buffers inside the network never helps with 
congestion in a resource limited network, you don't really understand the 
problem.
 
The only reason to have buffers at all is to deal with transient burst 
arrivals, and the whole goal is to stanch the sources quickly.
 
Therefore I suspect that commotion would probably work better by reducing 
buffering to fit into small machines' RAM constraints, and on larger machines, 
just letting the additional RAM be used for something else.
 
For many network experts this idea is heresy because they've been told that 
maximum throughput is gained only by big buffers in the network. Buffers are 
thought of like processor cache memories - the more the better for throughput.
 
That statement is generally not true. It is true in certain kinds of throughput 
tests (single flows between the same source and destination, where end-to-end 
packet loss rates are high and not mitigated by link-level retrying once or 
twice). But in those tests there is no congestion, just an arbitrarily high 
queueing delay, which does not matter for pure throughput tests.
 
Buffering *is* congestion, when a link is shared among multiple flows.
 


On Sunday, May 25, 2014 3:56pm, Dave Taht dave.t...@gmail.com said:



 meant to cc cerowrt-devel on this...
 
 
 -- Forwarded message --
 From: Dave Taht dave.t...@gmail.com
 Date: Sun, May 25, 2014 at 12:55 PM
 Subject: qos in open commotion?
 To: andyg...@opentechinstitute.org, commotion-...@lists.chambana.net
 
 
 Dear Andy:
 
 In response to your thread on qos in open commotion my list started a thread
 
 https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/003044.html
 
 summary:
 
 You can and should run packet scheduling/aqm/qos in routers with 32MB
 of memory or less. Some compromises are needed:
 
 https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/003048.html
 
 FIRST:
 
 We strongly recomend that your edge gateways have aqm/packet
 scheduling/qos on all their connections to the internet. See
 innumerable posting on bufferbloat and the fixes for it...
 
 http://gettys.wordpress.com/
 
 Feel free to lift cerowrt's SQM scripts and gui from the ceropackages
 repo for your own purposes. Openwrt barrier breaker qos-scripts are
 pretty good too but don't work with ipv6 at the moment...
 
 http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_SQM_for_CeroWrt_310
 
 For the kind of results we get on cable:
 
 http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html
 
 Wifi has a built in QoS (802.11e) system but it doesn't work well in
 congested environments
 and optimizing wireless-n aggregation works better.
 
 As for fixing wifi, well, we know what to do, but never found any
 funding for it. Ath9k is still horribly overbuffered and while
 fq_codel takes some of the edge off of wifi (and recently we disabled
 802.11e entirely in favor of fq_codel), and in cerowrt we reduce
 aggregation to get better latency also - much more work remains to
  truly make it scale down to levels of latency we consider reasonable.
  (In other words, wifi latencies suck horribly now no matter
  what, yet we think we know how to improve that. Feel free to do
 measurements of your mesh with tools like netperf-wrapper. There are
 also a few papers out there now showing how bad wifi can get nowadays)
 
 As for replacing pfifo_fast, openwrt barrier breaker replaced
 pfifo_fast with fq_codel in barrier breaker a year ago.
 
 fq_codel by default is essentially zero cost (64k per interface*hw
 queues) and the default in openwrt on all interfaces by default now...
 
 but the typical router cpus are so weak it is rare it kicks in except
 at 100mbit and below. (where it can be wonderful) - and it's on a rate
 limited (eg dsl or cable) system where it's most obviously useful.
 
 Presently.
 
 Lastly, I've been running a deployed babel mesh network for 2 years
 with fq_codel in it, 2 SSIDs per nanostation m5 and picostation, and
 it runs pretty good. Recent tests on the ubnt edgerouter went well, as
 well...
 
 Please give this stuff a shot. Your users will love it.
 
 --
 Dave Täht
 
 NSFW:
 https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
 
 
 --
 Dave Täht
 
 NSFW:
 https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ubiquiti QOS

2014-05-25 Thread dpreed

Len Kleinrock and his student proved that the optimal state for throughput in 
the internet is the 1-2 buffer case.  It's easy to think this through...
 
A simple intuition is that each node that qualifies as a bottleneck (meaning 
that the input rate exceeds the service rate of the outbound queue) will work 
optimally if it is in a double buffering state - that is a complete packet 
comes in for the outbound link during the time that the current packet goes out.
 
That's topology independent.   It's equivalent to saying that the number of 
packets in flight along a path in an optimal state between two endpoints is 
equal to the path's share of the bottleneck link's capacity times the physical 
minimum RTT for the MTU packet - the amount of pipelining that can be 
achieved along that path.
 
Having more buffering can only make the throughput lower or at best the same. 
In other words, you might get the same throughput with more buffering, but more 
likely the extra buffering will make things worse.  (the rare special case is 
the hot rod scenario of maximum end-to-end throughput with no competing 
flows.)
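
A toy single-bottleneck calculation of that claim (link rate, path RTT and the
window values are assumed examples): once the window covers the
bandwidth-delay product, throughput stops improving and every additional byte
of window becomes standing queue, i.e. added delay.

    # Throughput saturates at the BDP; extra window only adds queueing delay.
    def steady_state(window_bytes, capacity_bps, base_rtt_s):
        bdp_bytes = capacity_bps / 8.0 * base_rtt_s
        throughput_bps = min(window_bytes * 8.0 / base_rtt_s, capacity_bps)
        standing_queue = max(0.0, window_bytes - bdp_bytes)
        rtt_s = base_rtt_s + standing_queue * 8.0 / capacity_bps
        return throughput_bps, rtt_s

    for w in (25000, 50000, 100000, 200000):        # 50 kB is the BDP here
        tput, rtt = steady_state(w, 20e6, 0.020)    # 20 Mbit/s link, 20 ms path
        print("window %6d B: %4.1f Mbit/s, RTT %5.1f ms" % (w, tput / 1e6, rtt * 1e3))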
 
The real congestion management issue, which I described, is the unnecessary 
bunching of packets in a flow.  The bunching can be ameliorated at the source 
endpoint (or controlled by the receive endpoint transmitting an ack only when 
it receives a packet in the optimal state, but immediately responding to it to 
increase the responsiveness of the control loop: analogous to impedance 
matching in complex networks of transmission lines - bunching analogously 
corresponds to standing waves that reduce power transfer when impedance is not 
matched approximately.  The maximum power transfer does not happen if some 
intermediate point includes a bad impedance match, storing energy that 
interferes with future energy transfer).
 
Bunching has many causes, but it's better to remove it at the entry to the 
network than to allow it to clog up latency of competing flows.
 
I'm deliberately not using queueing theory descriptions, because the queueing 
theory and control theory associated with networks that can drop packets and 
have finite buffering with end-to-end feedback congestion control is quite 
complex, especially for non-Poisson traffic - far beyond what is taught in 
elementary queueing theory.
 
But if you want, I can dig that up for you.
 
The problem of understanding the network congestion phenomenon as a whole is 
that one can not carry over intuitions from a single, multi hop, linear network 
of nodes to the global network congestion control problem.
 
[The reason a CSMA/CD (wired) or CSMA/CA (wireless) Ethernet has collision-driven 
exponential-random back off is the same rationale - it's important to de-bunch 
the various flows that are competing for the Ethernet segment.  The right level 
of randomness creates local de-bunching (or pacing) almost as effectively as a 
perfect, zero-latency admission control that knows the rates of all incoming 
queues. That is, when a packet ends, all senders with a packet ready to 
transmit do so.  They all collide, and back off for different times - 
de-bunching the packet arrivals that next time around. This may not achieve 
maximal throughput, however, because there's unnecessary blocking of newly 
arrived packets on the backed-off NICs - but fixing that is a different 
story, especially when the Ethernet is an internal path in the Internet as a 
whole - there you need some kind of buffer limiting on each NIC, and ideally to 
treat each flow as distinct back-off entity.]
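
A tiny illustration of the de-bunching effect described in that paragraph
(uniform draws from a doubling window; the 9 us slot time is an assumed
example, and this is not a faithful CSMA/CD or 802.11 model):

    import random

    def backoff_slots(collisions_so_far, max_exp=10):
        # uniform draw from [0, 2^k - 1], with the usual cap on the exponent
        return random.randrange(2 ** min(collisions_so_far, max_exp))

    random.seed(1)
    slot_us = 9
    for k in (1, 2, 3, 4):       # the window doubles with each successive collision
        retries = sorted(backoff_slots(k) * slot_us for _ in range(5))
        print("after collision %d: retries at" % k, retries, "us")

Five stations that collided at the same instant come back spread over a wider
and wider range of slots, which is exactly the de-bunching being described.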
 
The same basic idea - using collision-based back-off to force competing flows 
to de-bunch themselves - and keeping the end-to-end feedback loops very, very 
short by avoiding any more than the optimal buffering, leads to a network that 
can operate at near-optimal throughput *and* near-optimal latency.
 
This is what I've been calling in my own terminology, a ballistic state of 
the network - analogous to, but not the same as, a gaseous rather than a liquid 
or solid phase of matter. The least congested state that has the most fluidity, 
and therefore the highest throughput of individual molecules (rather than a 
liquid which transmits pressure very well, but does not transmit individual 
tagged molecules very fast at all).
 
That's what Kleinrock and his student showed.  Counterintuitive though it may 
seem. (It doesn't seem counterintuitive to me at all, but many, many network 
engineers are taught and continue to think that you need lots of buffering to 
maximize throughput).
 
I conjecture that it's an achievable operating mode of the Internet based 
solely on end-to-end congestion-control algorithms, probably not very different 
from TCP, running over a network where each switch signals congestion to all 
flows passing through it.  It's probably the most desirable operating mode, 
because it maximizes throughput while minimizing latency simultaneously.  
There's no inherent tradeoff 

Re: [Cerowrt-devel] Ideas on how to simplify and popularize bufferbloat control for consideration.

2014-05-21 Thread dpreed

Besides deployment in cerowrt and openwrt, what would really have high leverage 
is that the techniques developed in cerowrt's exploration (including fq_codel) 
get deployed where they should be deployed: in the access network systems: 
CMTS's, DSLAM's, Enterprise boundary gear, etc. from the major players.
 
Cerowrt's fundamental focus has been proving that the techniques really, really 
work at scale.
 
However, the fundamental bloat-induced experiences are actually occurring due 
to bloat at points where fast meets slow.  Cerowrt can't really fix the 
problem in the download direction (currently not so bad because of high 
download speeds relative to upload speeds in the US) - that's in the CMTS's and 
DSLAM's.
 
What's depressing to me is that the IETF community spends more time trying to 
convince themselves that bloat is only a theoretical problem, never encountered 
in the field.  In fact, every lab I've worked at (including the startup 
accelerator where some of my current company work) has had the network managers 
complaining to me that a single heavy FTP I'm running causes all of the other 
users in the site to experience terrible web performance.  But when they call 
Cisco or F5 or whomever, they get told there's nothing to do but buy 
complicated flow-based traffic management boxes to stick in line with the 
traffic (so they can slow me down).
 
Bloat is the most common invisible elephant on the Internet.  Just fixing a few 
access points is a start, but even if we fix all the access points so that 
uploads interfere less, there's still more impact this one thing can have.
 
So, by all means get this stuff into mainstream, but it's time to start pushing 
on the access network technology companies (and there are now open switches 
from Cumulus and even Arista to hack).
 


On Wednesday, May 21, 2014 7:42am, Frits Riep r...@riepnet.com said:



 Thanks Dave for your responses.  Based on this, it is very good that 
 qos-scripts
 is available now through openwrt, and as I experienced, it provides a huge
 advantage for most users.  I would agree prioritizing ping is in and of 
 itself not
 the key goal, but based on what I've read so far, fq-codel provides 
 dramatically
 better responsiveness for any interactive application such as web-browsing, 
 voip,
 or gaming, so it qos-scripts would be advantageous for users like your mom if 
 she
 were in an environment where she had a slow and shared internet connection.  
 Is
 that a valid interpretation?  I am interested in further understanding the
 differences based on the brief differences you provide.  It is true that few
 devices provide DSCP marking, but if the latency is controlled for all 
 traffic,
 latency sensitive traffic benefits tremendously even without prioritizing by 
 l7
 (layer 7 ?). Is this interpretation also valid?
 
 Yes, your mom wouldn't be a candidate for setting up ceroWRT herself, but if 
 it
 were set up for her, or if it could be incorporated into a consumer router 
 with
 automatically determining speed parameters, she would benefit totally from the
 performance improvement.  So the technology ultimately needs to be taken
 mainstream, and yes that is a huge task.
 
 Frits
 
 -Original Message-
 From: Dave Taht [mailto:dave.t...@gmail.com]
 Sent: Tuesday, May 20, 2014 7:14 PM
 To: Frits Riep
 Cc: cerowrt-devel@lists.bufferbloat.net
 Subject: Re: [Cerowrt-devel] Ideas on how to simplify and popularize 
 bufferbloat
 control for consideration.
 
 On Tue, May 20, 2014 at 3:11 PM, Frits Riep r...@riepnet.com wrote:
  The concept of eliminating bufferbloat on many more routers is quite
  appealing.  Reading some of the recent posts makes it clear there is a
  desire to  get to a stable code, and also to find a new platform
  beyond the current Netgear.  However, as good as some of the proposed
  platforms maybe for developing and for doing all of the new
  capabilities of CeroWRT, I also would like to propose that there also
  be some focus on reaching a wider and less sophisticated audience to
  help broaden the awareness and make control of bufferbloat more available 
  and
 easier to attain for more users.
 
 I agree that reaching more users is important. I disagree we need to reach 
 them
 with cerowrt. More below:
 
 
 
  · It appears there is a desire to merge the code into an
 upcoming
  OpenWRT barrier breaker release, which is excellent as it will make it
  easier to fight buffer bloat on a wide range of platforms and provide
  users with a much easier to install firmware release.  I’d like to be
  able to download luci-qos-scripts and sqm-scripts and have basic
  bufferbloat control on a much greater variety of devices and to many
  more users.  From an awareness perspective this would be a huge win.
  Is the above scenario what is being planned, is it likely to happen in the
 reasonable future?
 
 Yes, I'd submitted sqm for review upstream, got back a few comments. Intend to
 resubmit again when I get a 

Re: [Cerowrt-devel] Ideas on how to simplify and popularize bufferbloat control for consideration.

2014-05-21 Thread dpreed

The end-to-end argument against putting functionality in the network is a 
modularity principle, as you know. The exception is when there is a function 
that you want to provide that is not strictly end-to-end.
 
Congestion is one of them - there is a fundamental issue with congestion that 
it happens because of collective actions among independent actors.
 
So if you want to achieve the goals of the modularity principle, you need to 
find either a) the minimal sensing and response you can put in the network that 
allows the independent actors to cooperate, or b) require the independent 
actors to discover and communicate amongst each other individually.
 
Any solution that tries to satisfy the modularity principle has the property 
that it provides sufficient information, in a sufficiently timely manner, for 
the independent actors to respond cooperatively to resolve the issue (by 
reducing their transmission volume in some - presumably approximately fair - 
way).
 
Sufficiently timely is bounded by the draining time of a switch's outbound 
link's queue.  For practical applications of the Internet today, the draining 
time should never exceed about 30-50 msec., at the outbound link's rate.  
However, the optimal normal depth of the queue should be no larger than the 
size needed to keep the outbound link continuously busy at its peak rate 
whatever that is (for a shared WiFi access point the peak rate is highly 
variable as you know).
 
This suggests that the minimal function the network must provide to the 
endpoints is the packet's instantaneous contribution to the draining time of 
the most degraded link on the path.
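
The draining-time quantities above, made concrete (the 10 Mbit/s link rate and
64 kB queue depth are assumed example values):

    # Draining time of an outbound queue, and one packet's contribution to it.
    def drain_time_ms(queue_bytes, link_bps):
        return queue_bytes * 8.0 / link_bps * 1e3

    link_bps = 10e6                                  # assumed 10 Mbit/s uplink
    print("64 kB standing queue drains in %.1f ms" % drain_time_ms(64 * 1024, link_bps))
    print("one 1500 B packet contributes %.2f ms" % drain_time_ms(1500, link_bps))
    # the 30-50 ms bound above corresponds to roughly 37-62 kB of queue at this rate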
 
Given this information, a pair of endpoints know what to do.  If it is a 
receiver-managed windowed protocol like TCP, the window needs to be adjusted to 
minimize the contribution to the draining time of the currently bottlenecked 
node, to stop pipelined flows from its sender as quickly as possible.
 
In that case, cooperative behavior is implicit.  The bottleneck switch needs 
only to inform all independent flows of their contribution, and with an 
appropriate control loop on each node, approximate fairness can result.
 
And this is the most general approach.  Switches have no idea of the meaning 
of the flows, so beyond timely and accurate reporting, they can't make useful 
decisions about fixing congestion.
 
Note that this all is an argument about architectural principles and the 
essence of the congestion problem.
 
I could quibble about whether fq_codel is the simplest or best choice for the 
minimal functionality an internetwork could provide.  But it's pretty nice 
and simple.  Not clear it works for a decentralized protocol like WiFi as a 
link - but something like it would seem to be the right thing.
 
 


On Wednesday, May 21, 2014 12:30pm, Dave Taht dave.t...@gmail.com said:



 On Wed, May 21, 2014 at 9:03 AM,  dpr...@reed.com wrote:
  In reality we don't disagree on this:
 
 
 
  On Wednesday, May 21, 2014 11:19am, Dave Taht dave.t...@gmail.com
 said:
 
 
 
  Well, I disagree somewhat. The downstream shaper we use works quite
  well, until we run out of cpu at 50mbits. Testing on the ubnt edgerouter
  has had the inbound shaper work up a little past 100mbits. So there is
  no need (theoretically) to upgrade the big fat head ends if your cpe is
  powerful enough to do the job. It would be better if the head ends did
 it,
  of course
 
 
 
 
  There is an advantage for the head-ends doing it, to the extent that each
  edge device has no clarity about what is happening with all the other cpe
  that are sharing that head-end. When there is bloat in the head-end even if
  all cpe's sharing an upward path are shaping themselves to the up to speed
  the provider sells, they can go into serious congestion if the head-end
  queues can grow to 1 second or more of sustained queueing delay.  My
  understanding is that head-end queues have more than that.  They certainly
  do in LTE access networks.
 
 Compelling argument! I agree it would be best for the devices that have the
 most information about the network to manage themselves better.
 
 It is deeply ironic to me that I'm arguing for an e2e approach on fixing
 the problem in the field, with you!
 
 http://en.wikipedia.org/wiki/End-to-end_principle
 
 
 
 
 
 
 --
 Dave Täht
 
 NSFW:
 https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Ideas on how to simplify and popularize bufferbloat control for consideration.

2014-05-21 Thread dpreed

On Wednesday, May 21, 2014 1:53pm, Dave Taht dave.t...@gmail.com said:
 

 Or we can take a break, and write books about how we learned to relax and
 stop worrying about the bloat.
 
Leading to waistline bloat?___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] fq_codel is two years old

2014-05-16 Thread dpreed

I agree with you Jim about being careful with QoS.  That's why Andrew Odlyzko 
proposed the experiment with exactly two classes, and proposed it as an 
*experiment*. So many researchers and IETF members seem to think we should just 
turn on diffserv and everything will work great... I've seen very senior 
members of IETF actually propose diffserv become a provider-wide standard as 
soon as possible.  I suppose they have a couple of ns2 runs that show nothing 
can go wrong. :-)
 
(that's why I'm so impressed by the fq_codel work - it's more than just 
simulation, but has been tested and more or less stressed in real life, yet it 
is quite simple).
 
I don't agree with the idea that switches alone can solve global system 
problems by themselves. That's why the original AIMD algorithms use packet 
drops as signals, but make the endpoints responsible for managing congestion.  
The switches have nothing to do with the AIMD algorithm, they just create the 
control inputs.
 
So it is kind of telling that Valdis cites a totally switch-centric view from 
NANOG's perspective.  It's not the job of switches to manage congestion, just 
as it is not the job of endpoints to program switches.  There's a separation of 
concerns.
 
The simpler observation would be if you are a switch, there is NOTHING you can 
do to stop congestion.  Even dropping packets doesn't ameliorate congestion.  
However, if you are a switch there are some things you can tell the endpoints, 
in particular the receiving endpoints of flows traveling across the switch, 
about the local 'collision' of packets trying to get through the switch at the 
same time.
 
Since the Internet end-to-end protocols are receiver controlled (TCP's 
receive window is what controls the sender's flow, but it is set by the 
receiver), the locus of decision making is the collection of receivers.
 
Buffering is not the real issue - the issue is the frequency with which the 
packets of all the flows going through a particular switch collide.  The 
control problem is to make that frequency of collision quite small.
 
The nice thing about packet drops is that collisions are remediated 
immediately, rather than creating sustained bottlenecks that increase the 
collision cross section of that switch, increasing the likelihood of 
collisions in the switch dramatically. Replacing a collided/dropped packet with 
a much smaller token that goes on to the receiver would keep the collision 
cross section from growing, but provide better samples of collision info to the 
receiver.  For fairness, you want all packets involved in a collision to carry 
information, and ideally all near collisions to also carry information about 
near collisions.
 
A collision in this scheme is simply defined: a packet that enters a switch is 
considered to have collided with any other packets that have not completed 
traversal of the switch when it arrives.
 
You can expand packets' virtual time in the switch by thinking of them as 
virtually still in the switch for some number of bit times after they exit.   
Then a near collision happens between a packet and any packets that are still 
virtually in the switch.  Near collisions are signals that can keep the system 
inside the ballistic region of the phase space.
 
(you can track near collisions by a little memory on each outbound link state - 
and even use Bloom Filters to quickly detect collisions, but that is for a 
different lecture).
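
A minimal sketch of that bookkeeping, with an exact list standing in for the
Bloom filter (the link rate, the size of the virtual tail and the example
arrival times are all assumptions for illustration):

    # An arriving packet collides with every packet still (really or
    # virtually) traversing the outbound link when it arrives.
    class OutboundLink:
        def __init__(self, link_bps, near_window_bits=0):
            self.link_bps = float(link_bps)
            self.virtual_tail_s = near_window_bits / float(link_bps)
            self.resident = []                  # (flow_id, virtual departure time)

        def arrival(self, now_s, flow_id, size_bytes):
            self.resident = [(f, t) for (f, t) in self.resident if t > now_s]
            colliders = {f for (f, _) in self.resident if f != flow_id}
            departs = now_s + size_bytes * 8.0 / self.link_bps + self.virtual_tail_s
            self.resident.append((flow_id, departs))
            return colliders                    # flows whose receivers get a signal

    link = OutboundLink(10e6, near_window_bits=12000)  # ~1.2 ms of virtual residence
    print(link.arrival(0.0000, "A", 1500))             # set(): nothing to collide with
    print(link.arrival(0.0005, "B", 1500))             # {'A'}: a real collision
    print(link.arrival(0.0028, "C", 1500))             # {'B'}: a near collision only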
 
Please steal this idea and develop it.
 
 
 


On Friday, May 16, 2014 12:06pm, Jim Gettys j...@freedesktop.org said:







On Fri, May 16, 2014 at 10:52 AM, valdis.kletni...@vt.edu wrote:

On Thu, 15 May 2014 16:32:55 -0400, dpr...@reed.com 
said:

  And in the end of the day, the problem is congestion, which is very
  non-linear.  There is almost no congestion at almost all places in the 
  Internet
  at any particular time.  You can't fix congestion locally - you have to slow
  down the sources across all of the edge of the Internet, quickly.

There's a second very important point that somebody mentioned on the NANOG
 list a while ago:

 If the local router/net/link/whatever isn't congested, QoS cannot do anything
 to improve life for anybody.

 If there *is* congestion, QoS can only improve your service to the normal
 uncongested state - and it can *only do so by making somebody else's experience
 suck more*

The somebody else might be you, in which case life is much better. Once you 
have the concept of flows (at some level of abstraction), you can make more 
sane choices.
​Personally, I've mostly been interested in QOS in the local network: as 
hints, for example, that it is worth more aggressive bidding for transmit 
opportunities in WiFi, for example to ensure my VOIP, teleconferencing, gaming, 
music playing and other actually real time packets get priority over bulk data 
(which includes web traffic), and may need 

Re: [Cerowrt-devel] [Bloat] fq_codel is two years old

2014-05-15 Thread dpreed

Well done.  I'm optimistic for deployment everywhere, except CMTS's, the LTE 
and HSPA+ access networks, and all corporate firewall and intranet gear.
 
The solution du jour is to leave bufferbloat in place, while using the real 
fads: prioritization (diffserv once we have the fast lanes fully legal) and 
software defined networking appliances that use DPI for packet routing and 
traffic management.
 
Fixing buffer bloat allows the edge- and enterprise- networks to be more 
efficient, whereas not fixing it lets the edge networks move users up to more 
and more expensive plans due to frustration and to sell much more gear into 
Enterprises because they are easy marks for complex gadgets.
 
But maybe a few engineers who operate and design gear for such networks will 
overcome the incredible business biases against fixing this.
 
That's why all the efforts you guys have put forth are immensely worth it.  I 
think this is one of the best innovations in recent years (Bram Cohen's 
original BitTorrent is another, for fully decentralizing data delivery for the 
very first time in a brilliant way.) I will certainly push everywhere I can to 
see fq_codel deployed.
 
If there were a prize for brilliant projects, this would be top on my list.
 


On Wednesday, May 14, 2014 9:25pm, Dave Taht dave.t...@gmail.com said:



 On Wed, May 14, 2014 at 3:32 PM, Kathleen Nichols nich...@pollere.com
 wrote:
 
  Thanks, Rich.
 
  And to show you what an amazing bit of work that first fq_codel was, I have
  on my calendar that I first exposed CoDel to a small group in a
  meeting room
  and on the phone at ISC on April 24.
 
 And we had all sorts of trouble with the phone, (eric didn't hear
 much) and we then spent hours and hours afterwards discussing wifi
 instead of codel... it was too much to take in...
 
 me, I'd started jumping up and down in excitement about 20 minutes
 into kathies preso...
 
 May 3rd, 2012 was the last 24 hr coding stint I think I'll ever have.
 
 https://lists.bufferbloat.net/pipermail/codel/2012-May/23.html
 
 Ahh, the good ole days, when bufferbloat was first solved and we all
 thought the internet would speed up overnight, and we were going to be
 rock stars, invited to all the best parties, acquire fame and fortune,
 and be awarded medals and given awards. Re-reading all this brought
 back memories (heck, there's still a couple good ideas in that
 thread left unimplemented)
 
 https://lists.bufferbloat.net/pipermail/codel/2012-May/thread.html
 
 It looks by may 5th we were getting in shape, and then there were a
 few other issues along the way with the control law and so on... and
 eric rewrote the whole thing and made it oodles faster and then as
 best as I recall came up with fq_codel on saturday may 5th(?) -
 
  Ah, I haven't had so much fun in years. My life since then seems
 like an endless string of meetings, politics, and bugfixing.
 
 The code went from sim/paper, to code, to testing, to mainline linux
 in another week. I wish more research went like that!
 
 commit 76e3cc126bb223013a6b9a0e2a51238d1ef2e409
 Author: Eric Dumazet eduma...@google.com
 Date:   Thu May 10 07:51:25 2012 +
 
 codel: Controlled Delay AQM
 
 Now, as I recall the story, eric came up with fq_codel on a saturday
 afternoon, so I guess that was may 5th - cinco de mayo!
 
 And that too, landed in mainline...
 
 commit 4b549a2ef4bef9965d97cbd992ba67930cd3e0fe
 Author: Eric Dumazet eduma...@google.com
 Date:   Fri May 11 09:30:50 2012 +
 
 fq_codel: Fair Queue Codel AQM
 
 let's not forget tom herbert  BQL
 
 commit 75957ba36c05b979701e9ec64b37819adc12f830
 Author: Tom Herbert therb...@google.com
 Date:   Mon Nov 28 16:32:35 2011 +
 
 dql: Dynamic queue limits
 
  Implementation of dynamic queue limits (dql).  This is a library which
 allows a queue limit to be dynamically managed.  The goal of dql is
 to set the queue limit, number of objects to the queue, to be minimized
 without allowing the queue to be starved.
 
 
 
 
  It was really amazing to me to watch
  something Van and I had been discussing (okay, arguing) about privately for
  6 months and I'd been tinkering with be turned into real code on real
  networks.
   Jim Gettys is an incredible (and constructive) nagger, Eric Dumazet an
   amazing
  coder, and the entire open source community a really nifty group of folks.
 
  Maybe someday we will actually update the first article with some of the
  stuff
  we got into the last internet draft
 
  be the change,
  Kathie
 
  On 5/14/14 2:01 PM, Rich Brown wrote:
  Folks,
 
  I just noticed that the announcement for the first testable
  implementation of fq_codel was two days ago today - 14 May 2012.
  Build 3.3.6-2 was the first to include working fq_codel.
 
 https://lists.bufferbloat.net/pipermail/cerowrt-devel/2012-May/000233.html
 
   Two years later, we see fq_codel being adopted lots of places. As
  more and more people/organizations 

Re: [Cerowrt-devel] [Bloat] fq_codel is two years old

2014-05-15 Thread dpreed

I don't think that at all. I suspect you know that. But if I confused you, let 
me assure you that my view of the proper operating point of the Internet as a 
whole is that the expected buffer queue on any switch anywhere in the Internet 
is < 1 datagram.
 
fq_codel is a good start, but it still requires letting buffer queueing 
increase.  However, mathematically, one need not have the queues build up to 
sustain the control loop that fq_codel creates.
 
I conjecture that one can create an equally effective congestion control 
mechanism as fq_codel without any standing queues being allowed to build up. 
(Someone should try the exercise of trying to prove that an optimal end-to-end 
feedback control system requires queueing delay to be imposed. I've tried and 
it's led me to the conjecture that one can always replace a standing queue with 
a finite memory of past activities - and if one does, the lack of a standing 
queue means that the algorithm is better than any that end up with a standing 
queue).
 
fq_codel could be redesigned into a queue-free fq_codel.


On Thursday, May 15, 2014 7:46pm, David Lang da...@lang.hm said:



 If you think fast lanes will actually increase performance for any traffic,
 you are dreaming.
 
 the people looking for fast lanes are't trying to get traffic through any
 faster, they are looking to charge more for the traffic they are already
 passing.
 
 David Lang
 
   On Thu, 15 May 2014, dpr...@reed.com wrote:
 
  Well done.  I'm optimistic for deployment everywhere, except CMTS's, the LTE
 and HSPA+ access networks, and all corporate firewall and intranet gear.
 
  The solution du jour is to leave bufferbloat in place, while using the real
 fads: prioritization (diffserv once we have the fast lanes fully legal) and
 software defined networking appliances that use DPI for packet routing and
 traffic management.
 
  Fixing buffer bloat allows the edge- and enterprise- networks to be more
 efficient, whereas not fixing it lets the edge networks move users up to more 
 and
 more expensive plans due to frustration and to sell much more gear into
 Enterprises because they are easy marks for complex gadgets.
 
  But maybe a few engineers who operate and design gear for such networks will
 overcome the incredible business biases against fixing this.
 
  That's why all the efforts you guys have put forth are immensely worth it.  
  I
 think this is one of the best innovations in recent years (Bram Cohen's 
 original
 BitTorrent is another, for fully decentralizing data delivery for the very 
 first
 time in a brilliant way.) I will certainly push everywhere I can to see 
 fq_codel
 deployed.
 
  If there were a prize for brilliant projects, this would be top on my list.
 
 
 
  On Wednesday, May 14, 2014 9:25pm, Dave Taht dave.t...@gmail.com
 said:
 
 
 
  On Wed, May 14, 2014 at 3:32 PM, Kathleen Nichols
 nich...@pollere.com
  wrote:
  
   Thanks, Rich.
  
   And to show you what an amazing bit of work that first fq_codel was,
 I have
   on my calendar that I first exposed CoDel to a small group in a
   meeting room
   and on the phone at ISC on April 24.
 
  And we had all sorts of trouble with the phone, (eric didn't hear
  much) and we then spent hours and hours afterwards discussing wifi
  instead of codel... it was too much to take in...
 
  me, I'd started jumping up and down in excitement about 20 minutes
  into kathies preso...
 
  May 3rd, 2012 was the last 24 hr coding stint I think I'll ever have.
 
  https://lists.bufferbloat.net/pipermail/codel/2012-May/23.html
 
  Ahh, the good ole days, when bufferbloat was first solved and we all
  thought the internet would speed up overnight, and we were going to be
  rock stars, invited to all the best parties, acquire fame and fortune,
  and be awarded medals and given awards. Re-reading all this brought
  back memories (heck, there's still a couple good ideas in that
  thread left unimplemented)
 
  https://lists.bufferbloat.net/pipermail/codel/2012-May/thread.html
 
  It looks by may 5th we were getting in shape, and then there were a
  few other issues along the way with the control law and so on... and
  eric rewrote the whole thing and made it oodles faster and then as
  best as I recall came up with fq_codel on saturday may 5th(?) -
 
   Ah, I haven't had so much fun in years. My life since then seems
  like an endless string of meetings, politics, and bugfixing.
 
  The code went from sim/paper, to code, to testing, to mainline linux
  in another week. I wish more research went like that!
 
  commit 76e3cc126bb223013a6b9a0e2a51238d1ef2e409
  Author: Eric Dumazet eduma...@google.com
  Date:   Thu May 10 07:51:25 2012 +
 
  codel: Controlled Delay AQM
 
  Now, as I recall the story, eric came up with fq_codel on a saturday
  afternoon, so I guess that was may 5th - cinco de mayo!
 
  And that too, landed in mainline...
 
  commit 4b549a2ef4bef9965d97cbd992ba67930cd3e0fe
  Author: Eric Dumazet 

Re: [Cerowrt-devel] Had to disable dnssec today

2014-04-26 Thread dpreed

Is this just a dnsmasq issue or is the DNSSEC mechanism broken at these sites?  
 If it is the latter, I can get attention from executives at some of these 
companies (Heartbleed has sensitized all kinds of companies to the need to 
strengthen security infrastructure).
 
If the former, the change process is going to be more tricky, because dnsmasq 
is easily dismissed as too small a proportion of the market to care.  (wish it 
were not so).


On Saturday, April 26, 2014 7:38am, Aaron Wood wood...@gmail.com said:



Just too many sites aren't working correctly with dnsmasq and using Google's 
DNS servers.
- Bank of America (sso-fi.bankofamerica.com)
- Weather Underground (cdnjs.cloudflare.com)
- Akamai (e3191.dscc.akamaiedge.net.0.1.cn.akamaiedge.net)
And I'm not getting any traction with reporting the errors to those sites, so 
it's frustrating in getting it properly fixed.
While Akamai and cloudflare appear to be issues with their entries in google 
dns, or with dnsmasq's validation of them being insecure domains, the BofA 
issue appears to be an outright bad key.  And BofA isn't being helpful (just a 
continual we use ssl sort of quasi-automated response).
So I'm disabling it for now, or rather, falling back to using my ISP's dns 
servers, which don't support DNSSEC at this time.  I'll be periodically turning 
it back on, but too much is broken (mainly due to the cdns) to be able to rely 
on it at this time.
-Aaron___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] comcast provisioned rates?

2014-04-19 Thread dpreed

Very good.   So the idea, rather than Comcast implementing codel or something 
proper in the DOCSIS 3.0 systems they have in the field, is to emulate power 
boost to impedance match the add-on router-based codel approach to some kind 
of knowledge of what the DOCSIS CMTS buffering state looks like
 
And of course nothing can be done about downstream bufferbloat in the Comcast 
DOCSIS deployment.
 
So instead of fixing Comcast's stuff correctly, we end up with a literal 
half measure.
 
Who does Comcast buy its CMTS gear from, and if it has a Heartbleed bug, maybe 
some benevolent hacker should just fix it for them?
 
It's now been 2 years since Comcast said they were deploying a fix.  Was that 
just them hoping the critics would dissipate their time and effort?   And is 
Comcast still using its Sandvine DPI gear?
 
I'm afraid that monopolists really don't care.  Even friendly-sounding ones.  
Especially when they can use their technical non-deployments to get paid more 
by Netflix.

On Saturday, April 19, 2014 1:57pm, Dave Taht dave.t...@gmail.com said:



 The features of the PowerBoost feature are well documented at this
 point. A proper
 emulation of them is in the ns2 code. It has been a persistent feature
 request, to
 add support to some Linux rate shaper to properly emulate PowerBoost,
 but no funding
 ever arrived.
 
 Basically  you get 10 extra megabytes above the base rate at whatever
 rate the line
 can sustain before it settles back to the base rate.
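
A conceptual sketch of what a PowerBoost-style shaper amounts to: a token
bucket refilled at the base rate, with a bucket deep enough to hold the boost
allowance, so a quiet line can run above the base rate until the allowance is
spent. This is an illustration of the idea only, not the ns2 model and not any
operator's actual implementation; the 10 MB allowance comes from the
description above, while the 12 and 30 Mbit/s rates are assumed examples.

    class BoostShaper:
        def __init__(self, base_bps, boost_bytes):
            self.base_bps = float(base_bps)
            self.capacity = float(boost_bytes)   # allowance above the base rate
            self.tokens = self.capacity
            self.last_s = 0.0

        def admit(self, now_s, packet_bytes):
            self.tokens = min(self.capacity,
                              self.tokens + (now_s - self.last_s) * self.base_bps / 8.0)
            self.last_s = now_s
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes      # boosted: goes out at line rate
                return True
            return False                         # allowance spent: held to the base rate

    shaper = BoostShaper(base_bps=12e6, boost_bytes=10e6)
    t, admitted = 0.0, 0
    for _ in range(40000):                       # 16 s of 1500 B packets at 30 Mbit/s
        t += 1500 * 8 / 30e6
        admitted += shaper.admit(t, 1500)
    print("admitted %d of 40000 packets in %.1f s" % (admitted, t))

With these numbers the allowance drains in roughly 4.4 s, after which only
about 12 Mbit/s worth of packets are admitted.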
 
 You can also see that as presently implemented, at least on a short
 RTT path, the feature
 does not prevent bufferbloat.
 
 http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html
 
 I'd like a faster, less cpu intense rate shaper than sch_htb in
 general, and powerboost emulation would be nice.
 
 
 On Sat, Apr 19, 2014 at 9:38 AM, Aaron Wood wood...@gmail.com wrote:
  Based on these results:
 
  http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html
 
  And talking off-list with Jim, I think that the PowerBoost is above the
  quoted rate, as the 24/4 service hits 36Mbps TCP data rate.  I'm
 definitely
  sad that using SQM in the router instead of the modem loses features like
  that.  But I'll just be happy to have upload over 1Mbps again.
 
  I do know that the FCC was cracking down on advertised vs. actual rates, and
  started a measuring broadband in America project:
 
  http://www.fcc.gov/measuring-broadband-america
 
  -Aaron
 
 
  On Sat, Apr 19, 2014 at 6:21 PM, dpr...@reed.com wrote:
 
  As a non-Comcast-customer, I am curious too.  I had thought their
 boost
  feature allowed temporary rates *larger* than the quoted up to rates.
  (but I remember the old TV-diagonal games and disk capacity games, where
 any
  way to get a larger number was used in the advertising, since the FTC
 didn't
  have a definition that could be applied).
 
 
 
  I wonder if some enterprising lawyer might bring the necessary consumer
  fraud class-action before the FTC to get clear definitions of the
 numbers?
  It's probably too much to ask for Comcast to go on the record with a
 precise
  definition.
 
 
 
 
 
  On Saturday, April 19, 2014 8:55am, Aaron Wood
 wood...@gmail.com said:
 
  I'm setting up new service in the US, and I'm currently assuming that
 all
  of Comcast's rates are boosted rates, not the provisioned rates.
  So if they quote 50/10Mbps, I assume that's not what will need to be set
  in SQM with CeroWRT.
  Does anyone have good info on the provisioned rates that go with each
 of
  the Comcast tiers?
  Basically, I'm trying to get to an apples-to-apples comparison with
  Sonic.net DSL (I'll be close enough to the CO to run in Annex M upload
  priority mode and get ~18/2 service).
  Thanks,
  Aaron
 
 
 
  ___
  Cerowrt-devel mailing list
  Cerowrt-devel@lists.bufferbloat.net
  https://lists.bufferbloat.net/listinfo/cerowrt-devel
 
 
 
 
 --
 Dave Täht
 
 NSFW:
 https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [aqm] chrome web page benchmarker fixed

2014-04-18 Thread dpreed

Why is the DNS PLR so high?  1% is pretty depressing.
 
Also, it seems odd to eliminate 19% of the content retrieval because the tail 
is fat and long rather than short.  Wouldn't it be better to have 1000 servers?
 
 


On Friday, April 18, 2014 2:15pm, Greg White g.wh...@cablelabs.com said:



 Dave,
 
 We used the 25k object size for a short time back in 2012 until we had
 resources to build a more advanced model (appendix A).  I did a bunch of
 captures of real web pages back in 2011 and compared the object size
 statistics to models that I'd seen published.  Lognormal didn't seem to be
 *exactly* right, but it wasn't a bad fit to what I saw.  I've attached a
 CDF.
 
 The choice of 4 servers was based somewhat on logistics, and also on a
 finding that across our data set, the average web page retrieved 81% of
 its resources from the top 4 servers.  Increasing to 5 servers only
 increased that percentage to 84%.
 
 The choice of RTTs also came from the web traffic captures. I saw
 RTTmin=16ms, RTTmean=53.8ms, RTTmax=134ms.
 
 Much of this can be found in
 https://tools.ietf.org/html/draft-white-httpbis-spdy-analysis-00
 
 In many of the cases that we've simulated, the packet drop probability is
 less than 1% for DNS packets.  In our web model, there are a total of 4
 servers, so 4 DNS lookups assuming none of the addresses are cached. If
 PLR = 1%, there would be a 3.9% chance of losing one or more DNS packets
 (with a resulting ~5 second additional delay on load time).  I've probably
 oversimplified this, but Kathie N. and I made the call that it would be
 significantly easier to just do this math than to build a dns
 implementation in ns2.  We've open sourced the web model (it's on Kathie's
 web page and will be part of ns2.36) with an encouragement to the
 community to improve on it.  If you'd like to port it to ns3 and add a dns
 model, that would be fantastic.
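
For reference, the arithmetic behind the ~3.9% figure (per-packet loss rate
and the four lookups as stated above):

    # probability that at least one of n independent DNS lookups is lost
    def p_any_dns_loss(plr, lookups=4):
        return 1.0 - (1.0 - plr) ** lookups

    print("%.1f%%" % (100 * p_any_dns_loss(0.01)))   # 3.9% at PLR = 1%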
 
 -Greg
 
 
 On 4/17/14, 3:07 PM, Dave Taht dave.t...@gmail.com wrote:
 
 On Thu, Apr 17, 2014 at 12:01 PM, William Chan (陈智昌)
 willc...@chromium.org wrote:
  Speaking as the primary Chromium developer in charge of this relevant
 code,
  I would like to caution putting too much trust in the numbers
 generated. Any
  statistical claims about the numbers are probably unreasonable to make.
 
 Sigh. Other benchmarks such as the apache (ab) benchmark
 are primarily designed as stress testers for web servers, not as realistic
 traffic. Modern web traffic has such a high level of dynamicism in it,
 that static web page loads along any distribution, seem insufficient,
 passive analysis of aggregated traffic feels incorrect relative to the
 sorts of home and small business traffic I've seen, and so on.
 
 Famous papers, such as this one:
 
 http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-leland.pdf
 
 Seem possibly irrelevant to draw conclusions from given the kind
 of data they analysed and proceeding from an incorrect model or
 gut feel for how the network behaves today seems to be foolish.
 
 Even the most basic of tools, such as httping, had three basic bugs
 that I found in a few minutes of trying to come up with some basic
 behaviors yesterday:
 
 https://lists.bufferbloat.net/pipermail/bloat/2014-April/001890.html
 
 Those are going to be a lot easier to fix than diving into the chromium
 codebase!
 
 There are very few tools worth trusting, and I am always dubious
 of papers that publish results with unavailable tools and data. The only
 tools I have any faith in for network analysis are netperf,
 netperf-wrapper,
 tcpdump and xplot.org, and to a large extent wireshark. Toke and I have
 been tearing apart d-itg and I hope to one day be able to add that to
 my trustable list... but better tools are needed!
 
 Tools that I don't have a lot of faith in include that benchmark, iperf, anything
 written
 in Java or other high level languages, speedtest.net, and things like
 shaperprobe.
 
 Have very little faith in ns2, slightly more in ns3, and I've been meaning
 to look over the mininet and other simulators whenever I got some spare
 time; the mininet results stanford gets seem pretty reasonable and I
 adore their reproducing results effort. Haven't explored ndt, keep meaning
 to...
 
  Reasons:
  * We don't actively maintain this code. It's behind the command line
 flags.
  They are broken. The fact that it still results in numbers on the
 benchmark
  extension is an example of where unmaintained code doesn't have the UI
  disabled, even though the internal workings of the code fail to
 guarantee
  correct operation. We haven't disabled it because, well, it's
 unmaintained.
 
 As I mentioned I was gearing up for a hacking run...
 
 The vast majority of results I look at are actually obtained via
 looking at packet captures. I mostly use benchmarks as abstractions
 to see if they make some sense relative to the captures and tend
 to compare different benchmarks against each other.
 
 I realize others don't go into that level of detail, so you have 

Re: [Cerowrt-devel] Full blown DNSSEC by default?

2014-04-13 Thread dpreed

I'd be for A.  Or C with a very, very strong warning that would encourage users 
to pressure their broken upstream.  Users in China will never not have a broken 
upstream, of course, but they know that already... :-)
 
Similarly, I hope we don't have Heartbleed in our SSL.  Maybe we should put a 
probe in Cero's SSL that tests clients to see if they have Heartbleed fixed on 
their side, and warns them.
 
Any DNS provider that doesn't do DNSSEC should be deprecated strongly (I'm 
pretty sure OpenDNS cannot do so, since it deliberately fakes its lookups, 
redirecting to man in the middle sites that it runs).
 


On Sunday, April 13, 2014 12:26am, Dave Taht dave.t...@gmail.com said:



 I am delighted that we have the capability now to do dnssec.
 
 I am not surprised that various domain name holders are doing it
 wrong, nor that some ISPs and registrars don't support doing it
 either. We are first past the post here, and kind of have to expect
 some bugs...
 
 but is the overall sense here:
 
 A) we should do full dnssec by default, and encourage users to use
 open dns resolvers like google dns that support it when their ISPs
 don't?
 
 B) or should we fall back to the previous partial dnssec
 implementation that didn't break as hard, and encourage folk to turn
 it up full blast if supported correctly by the upstream ISP?
 
 C) or come up with a way of detecting a broken upstream and falling
 back to a public open resolver?
 
 Is there a D?
 
 --
 Dave Täht
 
 NSFW:
 https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Got Bloat?

2014-03-14 Thread dpreed

I can tell you that when I originally spoke to AT&T about their 4G HSPA+ 
network's buffer bloat (which was before it had that name and when the folks in 
the IETF said I must have been incompetently measuring the system), AT&T's Sr. 
VP of network operations and his chief technical person refused to even try to 
repeat my measurements (which I had done in 5 cities and on a Boston area 
commuter-rail that got service from AT&T).
 
After intervention by friends of AT&T technical management, they agreed to 
measure and discovered that my measurements were accurate.  They apparently 
went back to their vendor, who apparently claimed that I could not possibly be 
right and that AT&T could not possibly be right, either, making some comment or 
other about channel scheduling being the problem and arguing that since they 
got maximum *throughput* that was all that mattered - multi-second latency was 
just not on their radar screen.
 
So, I'm sure that it won't be easy to talk to the access providers.  They don't 
care, because they don't have to.
 
And they can get peer-reviewed surveys from IETF that say this is not a 
problem for anyone but a minuscule fraction of cranky experts.
 
Reality denial is really, really good from the cellular industry.   AFAIK, even 
the stuff that Jim Gettys got his research buddies in ALU to demonstrate in LTE 
has not been fixed in shipping products or in the field.
 
Of course, they have a monopoly, so why fix anything? - just sell upgrades to 
premium service instead.
 


On Friday, March 14, 2014 4:57pm, Rich Brown richb.hano...@gmail.com said:



 I'm riding on the bus to Boston. It's wifi equipped, but the
 connection's terribly slow.  A ping (attached) shows:
 
 - No responses for 10 seconds, then the first ping returns.  (!)
 - This trace gets as bad as 12 seconds, but I have seen another with 20 
 seconds
 
 I wonder what it would take to get the bus company to talk to their
 radio vendor,
 and get a better SQM in their router and head end...
 
 Rich
 
 bash-3.2$ ping gstatic.com
 PING gstatic.com (74.125.196.120): 56 data bytes
 Request timeout for icmp_seq 0
 Request timeout for icmp_seq 1
 Request timeout for icmp_seq 2
 Request timeout for icmp_seq 3
 Request timeout for icmp_seq 4
 Request timeout for icmp_seq 5
 Request timeout for icmp_seq 6
 Request timeout for icmp_seq 7
 Request timeout for icmp_seq 8
 Request timeout for icmp_seq 9
 Request timeout for icmp_seq 10
 64 bytes from 74.125.196.120: icmp_seq=0 ttl=35 time=11080.951 ms
 64 bytes from 74.125.196.120: icmp_seq=1 ttl=35 time=10860.209 ms
 Request timeout for icmp_seq 13
 64 bytes from 74.125.196.120: icmp_seq=2 ttl=35 time=12432.495 ms
 64 bytes from 74.125.196.120: icmp_seq=3 ttl=35 time=11878.852 ms
 64 bytes from 74.125.196.120: icmp_seq=4 ttl=35 time=1.612 ms
 64 bytes from 74.125.196.120: icmp_seq=5 ttl=35 time=11170.454 ms
 64 bytes from 74.125.196.120: icmp_seq=6 ttl=35 time=10774.446 ms
 64 bytes from 74.125.196.120: icmp_seq=7 ttl=35 time=9991.265 ms
 64 bytes from 74.125.196.120: icmp_seq=8 ttl=35 time=9068.379 ms
 64 bytes from 74.125.196.120: icmp_seq=9 ttl=35 time=8162.352 ms
 64 bytes from 74.125.196.120: icmp_seq=10 ttl=35 time=7321.143 ms
 64 bytes from 74.125.196.120: icmp_seq=11 ttl=35 time=6553.093 ms
 64 bytes from 74.125.196.120: icmp_seq=12 ttl=35 time=6205.100 ms
 64 bytes from 74.125.196.120: icmp_seq=13 ttl=35 time=5384.352 ms
 64 bytes from 74.125.196.120: icmp_seq=14 ttl=35 time=4903.169 ms
 64 bytes from 74.125.196.120: icmp_seq=15 ttl=35 time=4821.944 ms
 64 bytes from 74.125.196.120: icmp_seq=16 ttl=35 time=4438.738 ms
 64 bytes from 74.125.196.120: icmp_seq=17 ttl=35 time=4239.312 ms
 64 bytes from 74.125.196.120: icmp_seq=18 ttl=35 time=5573.525 ms
 64 bytes from 74.125.196.120: icmp_seq=19 ttl=35 time=5023.965 ms
 64 bytes from 74.125.196.120: icmp_seq=20 ttl=35 time=4994.414 ms
 64 bytes from 74.125.196.120: icmp_seq=21 ttl=35 time=4679.299 ms
 64 bytes from 74.125.196.120: icmp_seq=22 ttl=35 time=5013.662 ms
 64 bytes from 74.125.196.120: icmp_seq=23 ttl=35 time=5557.759 ms
 ^C
 --- gstatic.com ping statistics ---
 32 packets transmitted, 24 packets received, 25.0% packet loss
 round-trip min/avg/max/stddev = 4239.312/7551.687/12432.495/2805.706 ms
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] uplink_buffer_adjustment

2014-02-25 Thread dpreed
I've measured buffer size with TCP, when there is no fq_codel or whatever doing 
drops.  After all, this is what caused me to get concerned.

And actually, since UDP packets are dropped by fq_codel the same as TCP 
packets, it's easy to see how big fq_codel lets the buffers get.

If the buffer gets to be 1200 msec. long with UDP, that's a problem with 
fq_codel - just think about it.  Someone's tuning fq_codel to allow excess 
buildup of queueing, if that's observed.

So I doubt this is a netalyzr bug at all.  Operator error more likely, in 
tuning fq_codel.
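
For a sense of scale, converting between an observed queueing delay and the standing queue that produces it is straightforward; a quick sketch, with the 20 Mbit/s uplink rate as a purely illustrative assumption:

def queued_bytes(delay_s, link_bps):
    """Standing queue (bytes) needed to produce a given delay at a given drain rate."""
    return delay_s * link_bps / 8

def queue_delay_ms(buffered_bytes, link_bps):
    """Delay (ms) implied by a given standing queue at a given drain rate."""
    return buffered_bytes * 8 / link_bps * 1000

# 1200 ms of standing queue on an (assumed) 20 Mbit/s uplink is ~3 MB of buffered data.
print(queued_bytes(1.2, 20e6) / 1e6, "MB")   # -> 3.0 MB
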




On Tuesday, February 25, 2014 11:46am, Jim Gettys j...@freedesktop.org said:

 On Tue, Feb 25, 2014 at 11:02 AM, Nicholas Weaver nwea...@icsi.berkeley.edu
 wrote:
 

 On Feb 25, 2014, at 7:59 AM, Jim Gettys j...@freedesktop.org wrote:
  So it is arguably a bug in netalyzr.  It is certainly extremely
 misleading.
 
  Nick?

 Rewriting it as a TCP-based stresser is definitely on our to-do list.

 
 Good; though I'm not sure you'll be able to build a TCP one that fills the
 buffers fast enough to determine some of the buffering out there (at least
 without hacking the TCP implementation, anyway).
 
 The other piece of this is detecting flow queuing being active; this makes
 a bigger difference to actual latency than mark/drop algorithms do by
 themselves.
   - Jim
 
 


 --
 Nicholas Weaver               it is a tale, told by an idiot,
 nwea...@icsi.berkeley.edu     full of sound and fury,
 510-666-2903                  .signifying nothing
 PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc


 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] Friends don't let friends run factory firmware

2014-02-18 Thread dpreed
Apropos of this topic construed broadly, just got the following in my email.  
I'm thinking about a MicroZed network appliance anyway, so a PMOD interface is 
interesting because that's the MicroZed peripheral standard.  But wouldn't it 
be nice if one could have this kind of authentication in a router?  

http://www.maximintegrated.com/app-notes/index.mvp/id/5822

It's a nice little chip, easy to interface to almost anything.  Pretty easy to 
make a PCB that can be added to almost any commercial home router.



On Tuesday, February 18, 2014 5:21pm, Dave Taht dave.t...@gmail.com said:

 On Tue, Feb 18, 2014 at 5:13 PM, Dave Taht dave.t...@gmail.com wrote:
 While we are at it. (wobbly wednesday)

 http://www.ioactive.com/news-events/IOActive_advisory_belkinwemo_2014.html

 Don't leave home with it on.

 At least they left the signing keys for the certificate in the
 firmware, so that bad guys can exploit it, and good guys, improve it.



 On Tue, Feb 18, 2014 at 5:10 PM, Rich Brown richb.hano...@gmail.com wrote:
 More excitement...

 https://isc.sans.edu/forums/diary/Linksys+Worm+TheMoon+Summary+What+we+know+so+far/17633
 
 I was incidentally quite surprised to see the original limited scope
 of the DNS changer worm. I didn't think we'd busted the folk involved
 in the scam soon enough, nor was I happy with the ensuing publicity,
 nor with how long it took for Paul to be able to turn off the the
 servers supplying the (4+m) busted routers with corrected data.
 
 The world has been ripe for the same attack or worse, across over half
 the home routers in the universe, as
 well as much CPE.
 
 This is in part why I'm so adamant about getting DNSSEC support out
 there, adding sensors to cerowrt,
 improving security, doing bcp38 and source sensitive routing and the like.
 
 
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel



 --
 Dave Täht

 Fixing bufferbloat with cerowrt: 
 http://www.teklibre.com/cerowrt/subscribe.html
 
 
 
 --
 Dave Täht
 
 Fixing bufferbloat with cerowrt: 
 http://www.teklibre.com/cerowrt/subscribe.html
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
 


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] hwrngs

2014-02-02 Thread dpreed

Any idea what the price will be in quantity?   The fact that it supports both 
BB black and RPi is great news for makers interested in authentication and 
security.
 


On Saturday, February 1, 2014 11:11pm, Dave Taht dave.t...@gmail.com said:



 I am still quite irked by having to use /dev/urandom for important
 tasks like dnssec key generation, and in wireless WPA. And like
 others, distrust having only one source of random numbers in the mix.
 
 I just ordered some of these
 
 http://cryptotronix.com/2013/12/27/hashlet_random_tests/
 
 Simultaneously while I was getting nsupdate dns working on cerowrt
 from the yurt to the dynamic ipv6 stuff, my main dns server died, and
 I decided
 I'd move dns to a beaglebone black, so running across this hwrng made
 me feel better about randomness on embedded systems.
 
 I bought the last 5 Joshua had, sorry about that! I'd like to find something
 that could run off the internal serial port on the wndr3800s... and
 worth incorporating in future designs. (multiple vendors)
 
 --
 Dave Täht
 
 Fixing bufferbloat with cerowrt: 
 http://www.teklibre.com/cerowrt/subscribe.html
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] hwrngs

2014-02-02 Thread dpreed

Ordered the RPi version (5 more in stock, if anyone wants one).  Thanks, Dave!
 


On Sunday, February 2, 2014 11:25am, Dave Taht dave.t...@gmail.com said:



 On Sun, Feb 2, 2014 at 8:17 AM,  dpr...@reed.com wrote:
  Any idea what the price will be in quantity?
 
 No. Pretty cheap, it's a very tiny board
 
 http://cryptotronix.com/2013/12/27/hashlet_random_tests/
 
 I got a discount for 5, and he has a couple left...
 
  The fact that it supports
  both BB black and RPi is great news for makers interested in authentication
  and security.
 
 yep.
 
 It is open hardware also, with a schematic supplied and an open source
 driver (not a kernel driver yet); I was very happy to support this
 project.
 
 I have looked for usb equivalents, btw, and haven't found anything
 inexpensive. And in the case of cero I'd wanted something that could
 run
 on the internal serial header...
 
 I note that theoretically the BBB also has an on-cpu hwrng but
 documentation on it from TI is lacking. Perhaps someone could lean on
 TI to free that information up in the post-snowden era?
 
 (my take on it is the more hwrngs the better: one from China, one from
 Russia, one from the USA...)
 
 /me goes back to converting his dns/mail/vpn server over to a BBB
 
 
 
 
 
 
  On Saturday, February 1, 2014 11:11pm, Dave Taht
 dave.t...@gmail.com
  said:
 
  I am still quite irked by having to use /dev/urandom for important
  tasks like dnssec key generation, and in wireless WPA. And like
  others, distrust having only one source of random numbers in the mix.
 
  I just ordered some of these
 
  http://cryptotronix.com/2013/12/27/hashlet_random_tests/
 
  Simultaneously while I was getting nsupdate dns working on cerowrt
  from the yurt to the dynamic ipv6 stuff, my main dns server died, and
  I decided
  I'd move dns to a beaglebone black, so running across this hwrng made
  me feel better about randomness on embedded systems.
 
  I bought the last 5 Joshua had, sorry about that! I'd like to find
  something
  that could run off the internal serial port on the wndr3800s... and
  worth incorporating in future designs. (multiple vendors)
 
  --
  Dave Täht
 
  Fixing bufferbloat with cerowrt:
  http://www.teklibre.com/cerowrt/subscribe.html
  ___
  Cerowrt-devel mailing list
  Cerowrt-devel@lists.bufferbloat.net
  https://lists.bufferbloat.net/listinfo/cerowrt-devel
 
 
 
 
 --
 Dave Täht
 
 Fixing bufferbloat with cerowrt: 
 http://www.teklibre.com/cerowrt/subscribe.html
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


[Cerowrt-devel] side issue, related to the bigger picture surrounding Cerowrt and Bufferbloat.

2014-01-25 Thread dpreed

On Friday, January 24, 2014 5:27pm, Dave Taht dave.t...@gmail.com said:
 

 and also, suddenly every device with a web server on it on 80 and 443
 is vulnerable, ranging from your printer to your fridge.
 
One of the reasons I like the Cerowrt project is that it focuses on fixing 
the aspects of the Internet plumbing that are due to careless practices like 
presuming that a printer or fridge will be protected inside a firewall and 
thus need not be designed correctly.
 
It reminds me of the attitude toward safety taken by the Auto Industry prior to 
Ralph Nader.  (whether you like Nader or not, his point was correct at the time 
- GM and Ford engineering did not design sufficiently safe cars, and that had a 
huge social impact that individuals could not cope with).
 
We now have printers and fridges that are unsafe at any speed, just as we 
have access networks that are knowingly designed to get bloated under stress, 
amplifying the stress rather than ameliorating it.
 
Now there may be temporary kludges that can protect the printers and fridges 
thus misdesigned - and NAT firewalls are possibly OK in that light.  But 
honestly, I want to be able to connect to my printer from anywhere.
 
For a few bucks I can probably build a front-end box for my printer that is a 
printer server based on encrypted connections (using SSL with certificates, 
perhaps). E.g. for each printer and fridge, a Raspberry Pi with a USB WiFi 
interface, connected directly on IPv6. That's about $50 per badly designed 
consumer electronics device.
 
I'd prefer, however, for the printer makers, etc., to make this a standard.  To 
do so, we need an open source project like Cerowrt to show the way, perhaps 
starting with the front-end box that implements the standard, since adding 
software to a printer or fridge itself is hard.
 
 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] treating 2.4ghz as -legacy?

2013-12-19 Thread dpreed

The tower is a slightly different situation.  There you are not between the 
antenna and ground - the ground is between the antenna and you
|/
| 0
|-+-
| /\
==
 
If the element is at the top of the mast at left, the path from it to your 
phone (e.g.) is not close to the ground,
so the losses along the path from you to the element are small.  But as the 
path gets closer to the ground, the electric field tends to be dragged toward 
zero.  So the higher the antenna the better, because if it were close to the 
ground, the energy along the path would tend to be absorbed by the ground 
because it is closer to the antenna than you.
 
But if the situation looks like this:
 
__  [antenna]
 
\
\
\
0
-+-
/\
=
 
There is no ground between you and the antenna, but the field is forced to zero 
at the earth-ground, and also reflected away from you back towards the sky as 
the slanted ray between you and the antenna will reflect off the ground to 
the right.  So you will find that since you are close to the ground, the field 
is zero near your head at the spot there, too.  There is no absorption of 
energy by the air/wood between you and the antenna, but the zero boundary 
condition at the ground makes the field weaker around you by attenuating the 
field and reflecting it away from you.
 
The differential equation solutions that describe the time varying EM fields in 
both situations are, of course, far more complicated (Maxwell's equation with 
fixed boundary conditions).  But what I'm saying is a rough characterization of 
the fields' energy structure, given the earth-ground being a roughly flat 
surface that is conductive enough to hold a near constant zero voltage.
 
Hope this helps.
 
 
On Thursday, December 19, 2013 7:50pm, Theodore Ts'o ty...@mit.edu said:



 Thanks for the detailed explanation about antenna issues.  One
 question, if I might:
 
  (and don't put your AP in the attic and expect a good signal near
  the ground or in the basement.  Physics will make sure that the
  signal is zero at any ground, so being closer to the ground than the
  antenna weakens the signal a lot!)
 
 I thought the opposite was true?  That is, ground losses go up when the
 antenna is closer to the ground, so it was good to have bigger, taller
 atenna towers?  If what you say is correct, how do antennas on cell
 towers mitigate this particular issue?
 
   - Ted
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] treating 2.4ghz as -legacy?

2013-12-18 Thread dpreed

Yes - there are significant differences in the physical design of access points 
that may affect 5 GHz and 2.4 GHz differently.  There are also modulation 
differences, and there may actually be scheduling/protocol differences.
 
All of these affect connectivity far more than center-frequency will.
 
1) Antennas.  One of the most obvious problems is antenna aperture.  That is 
a measure of the effective 2-D area of the antenna on the receiving side.  A 
dipole antenna (the cheapest kind, but not the only kind used in access points) 
is tuned by making its length a specific fraction of the wavelength.  Thus a 
5 GHz antenna of the dipole type has 1/4 the aperture of a dipole antenna for 
2.4 GHz.   This means that the 5 GHz antenna of the same design can access only 
1/4 of the radiated energy at 5 GHz.  But that's entirely due to antenna size.  
 If you hold the antenna size constant (which means using a design that is 
inherently twice as big as a dipole), you will find that range dramatically 
increases.   You can demonstrate this with parabolic reflecting *receive* 
antennas at the two frequencies. (the aperture can be kept constant by using 
the same dish diameter).   If you look at the antenna elements for 5 and 2.4 in 
an access point, you will probably see, if you understand the circuitry, that 
the 5 GHz antenna has a smaller aperture.
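
A back-of-the-envelope check of that 1/4 figure, using the standard effective-aperture relation A_e = lambda^2 * G / (4*pi) with the gain held fixed for the same dipole design:

from math import pi

C = 3e8  # speed of light, m/s

def effective_aperture_cm2(freq_hz, gain_linear=1.64):
    """A_e = lambda^2 * G / (4*pi); 1.64 is the gain of an ideal half-wave dipole."""
    wavelength = C / freq_hz
    return wavelength ** 2 * gain_linear / (4 * pi) * 1e4

a24 = effective_aperture_cm2(2.4e9)
a5 = effective_aperture_cm2(5.0e9)
print(f"2.4 GHz: {a24:.1f} cm^2, 5 GHz: {a5:.1f} cm^2, ratio ~{a24 / a5:.1f}")
# ratio ~ (5.0/2.4)^2 ~ 4.3, i.e. roughly the 1/4 aperture at 5 GHz noted above
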
 
The other problem is antenna directionality for the transmit and receive 
antennas.  Indeed almost all AP antennas have flattened doughnut radiation 
patterns in free-space.   Worse, however, is that indoors, the antenna patterns 
are shaped by reflectors and absorbers so that the energy is highly variable, 
and highly dependent on wavelength in the pattern.  So 5 GHz and 2.4 GHz 
signals received at any particular point have highly variable relative 
energies.   In one place the 5 GHz signal might be 10x the energy of a 2.4 GHz 
signal from the same AP, and in another, 1/10th. The point here is that a 
controlled experiment that starts at a point where 2.4 GHz works OK might 
find weak 5 GHz, but moving 1 foot to the side will cause 2.4 to be unworkable, 
whereas 5 works fine.   Distances of 1 foot completely change the situation in 
a diffusive propagation environment.
 
Fix: get the AP designers to hire smarter antenna designers.  Even big 
companies don't understand the antenna issue - remember the Apple iPhone design 
with the antenna that did not work if you held the phone at the bottom, but 
worked fine if you held it at the top?  Commercial APs are generally made of 
the cheapest parts, using the cheapest designs, in the antenna area.  And you 
buy them and use them with no understanding of how antennas actually work.  
Caveat emptor.  And get your antennas evaluated by folks who understand 
microwave antennas in densely complex propagation environments, not outdoor 
free-space.
 
(and don't put your AP in the attic and expect a good signal near the 
ground or in the basement.  Physics will make sure that the signal is zero 
at any ground, so being closer to the ground than the antenna weakens the 
signal a lot!)
 
2) Modulation and digitization.   Indoor environments are multipath-rich.   
OFDM, because it reduces the symbol rate, doesn't mind multipath as much as 
does DSSS.   But it does require a wider band and equalization across the band, 
in order to work well.  The problem with 802.11 as a protocol is that the 
receiver has only a  microsecond or so to determine how to equalize the signal 
from a transmitter, and to apply that equalization.   Since the AP is 
constantly receiving packets from multiple sources, with a high dynamic range, 
the radios may or may not succeed in equalizing enough.   The more bits/sample 
received, and the more variable the analog gain in the front-end can be 
adapted, the better the signal can be digitized.  Receiver designs are highly 
variable, and there is no particularly good standard for adjusting the power of 
transmitters to minimize the dynamic range of signals at the receiver end of a 
packet transmission.  This can be quite different in 5 GHz and 2.4 GHz due to 
the type of modulation used in the beacon packets sent by APs.   Since the 
endpoints are made by different designers, the PHY layer standards are 
required to do the job of making the whole system work.  Advanced modulation 
and digitization systems at 5 GHz are potentially better, but may in fact be 
far more incompatible with each other.  I've seen some terrible design choices.
 
3) Software/Protocol.   The most problematic software issue I know of is the 
idea of using RSSI as if it were meaningful for adaptation of rates, etc.  The 
rate achieved is the best measure of channel capacity, not signal strength!   
You can get remarkably good performance at lower signal strengths, and poor 
performance at higher signal strengths - because performance is only weakly 
affected by signal strength.   Even in the Shannon capacity law, inside the log 

Re: [Cerowrt-devel] happy 4th!

2013-07-08 Thread dpreed

I was suggesting that there is no reason to be intimidated.
 
And yes, according to the dictionary definition, they are ignorant - as in they 
don't know what they are talking about, and don't care to.
 
They may be influential, and they may have a great opinion of themselves.  And 
others may view them as knowledgeable.   The folks who told Galileo that he 
was wrong were all of that.  But they remained ignorant.
 
As to being constructive, I'm not convinced that these people can be convinced 
that their dismissal of bufferbloat and their idea that goodput is a useful 
Internet concept are incorrect.
 
If they are curious, experimental evidence might be useful.  But have they done 
their own experiments to validate what they accept as true?   I've been told 
by more than 50% of practicing professional EEs that Shannon's Law places a 
limit on all radio communications capacity.  But none of these EEs can even 
explain the Shannon-Hartley AWGN channel capacity theorem, its derivation, and 
its premises and range of applicability.  They just think they know what it 
means.  And they are incredibly arrogant and dismissive, while being totally 
*incurious*.
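
For reference, the theorem in question is C = B * log2(1 + S/N) for a single additive-white-Gaussian-noise channel; a quick illustration (with arbitrary example numbers) of how weakly capacity depends on signal power once it sits inside the log:

from math import log2

def awgn_capacity_mbps(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B * log2(1 + S/N) for one point-to-point AWGN channel."""
    return bandwidth_hz * log2(1 + snr_linear) / 1e6

# Doubling the received signal power (20 dB -> 23 dB SNR) in a 20 MHz channel
# buys less than one extra bit/s/Hz: capacity depends on power only logarithmically.
print(awgn_capacity_mbps(20e6, 100))   # ~133 Mbit/s
print(awgn_capacity_mbps(20e6, 200))   # ~153 Mbit/s
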
 
The same is true about most networking professionals.  Few understand 
queueing theory, its range of applicability, etc. *or even exactly how TCP 
works*.  But that doesn't stop them from ignoring evidence, evidence that is 
right in front of their eyes - every day.  It took Jim Gettys' curiosity of why 
his home network performance *sucked* to get him to actually dig into the 
problem.  And yet much of IETF still tries to claim that the problem doesn't 
exist!  They dismiss evidence - out of hand.
 
That's not science, it's not curiosity.  It's *dogmatism* - the opposite of 
science.  And those people are rarely going to change their minds.  After 45 
years in advanced computing and communications, I can tell you they will 
probably go to their graves spouting their old-wives-tales.
 
Spend your time on people who don't throw things in your face.  On the people 
who are actually curious enough to test your claims themselves (which is quite 
easy for anyone who can do simple measurements).  RRUL is a nice simple test.  
Let them try it!
 
 


On Sunday, July 7, 2013 8:24pm, Mikael Abrahamsson swm...@swm.pp.se said:



 On Sun, 7 Jul 2013, dpr...@reed.com wrote:
 
  So when somebody throws that in your face, just confidently use the
  words Bullshit, show me evidence, and ignore the ignorant person who
 
 Oh, the people that have told me this are definitely not ignorant. Quite
 the contrary.
 
 ... and by the way, they're optimising for the case where a single TCP
 flow from a 10GE connected host is traversing a 10G based backbone, and
 they want this single TCP session to use all the spare capacity the network
 has to give. Not 90% of available capacity, but 100%.
 
 This is the kind of people that have a lot of influence and causes core
 routers to get designed with 600 ms of buffering (well, latest generation
 ones are down to 50ms buffering). We're talking billion dollar investments
 by hardware manufacturers. We're talking core routers of latest generation
 that are still being put into production as we speak.
 
 Calling them ignorant and trying to wave them off by that kind of
 reasonsing isn't productive. Why not just implement the high RTT testing
 part and prove that you're right instead of just saying you're right?
 
 THe bufferbloat initiative is trying to change how things are done. Burden
 of proof is here. When I participate in IETF TCP WG, they talk goodput.
 They're not talking latency and interacting well with UDP based
 interactive streams. They're optimising goodput. If we want buffers to be
 lower, we need to convince people that this doesn't hugely affect goodput.
 
 I have not so far seen tests with FQ_CODEL with a simulated 100ms extra
 latency one-way (200ms RTT). They might be out there, but I have not seen
 them. I encourage these tests to be done.
 
 --
 Mikael Abrahamsson    email: swm...@swm.pp.se
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] happy 4th!

2013-07-08 Thread dpreed

Regarding Galileo, I think he did not bother trying to convince his enemies 
(who wanted to burn him at the stake, but had to turn to one of his followers 
to carry out that revenge).  I think he devoted time to explaining his ideas to 
people who were interested in learning about them.
 
He wrote a book making his scientific case, and did not spend (waste) time 
trying to figure out how to convert those that were trying to get the Pope to 
do him in, by using logic.
 
I don't think it is useful to try to convince James Imhofe that global warming 
has a scientific basis.  He is convinced that it is a fraud perpetrated by 
scientists, and *nothing* will change his mind.  If anything, trying to 
convince him makes him appear to be important beyond his importance in the 
matter.
 
And yes, the folks who set Internet Land Speed Records are just as important 
to the Internet as people who drive Indycars (a fun thing to watch) are to 
automobile engineering.
 
I respect their extremely narrow talents, but not necessarily their wisdom 
outside their narrow field.
 


On Tuesday, July 9, 2013 1:48am, Mikael Abrahamsson swm...@swm.pp.se said:



 On Mon, 8 Jul 2013, dpr...@reed.com wrote:
 
  I was suggesting that there is no reason to be intimidated.
 
 I was not intimidated, I just lacked data to actually reply to the
 statement made.
 
  And yes, according to the dictionary definition, they are ignorant - as
  in they don't know what they are talking about, and don't care to.
 
 I object to the last part of the statement. If you're a person who has
 been involved in winning an Internet Land Speed Record you probably care,
 but you have knowledge for a certain application and a certain purpose,
 which might not be applicable to the common type of home connection usage
 today. It doesn't mean the use case is not important or that the person is
 opposed to solving the bufferbloat problem.
 
  As to being constructive, I'm not convinced that these people can be
  convinced that their dismissal of bufferbloat and their idea that
  goodput is a useful Internet concept are incorrect.
 
 I haven't heard any dismissal of the problem, only that they optimize for
 a different use case, and they're concerned that their use case will
 suffer if buffers are smaller. This is the reason I want data: if
 FQ_CODEL gets similar results, so their use case is not hugely negatively
 affected, and there is data showing it helps a lot for a lot of other use
 cases, then they shouldn't have much to worry about and can stop arguing.
 
 Thinking of Galileo, he didn't walk around saying the earth revolves
 around the sun and when people questioned him, he said check it out for
 yourself, prove your point, I don't need to prove mine!, right?
 
 --
 Mikael Abrahamsson    email: swm...@swm.pp.se
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] happy 4th!

2013-07-07 Thread dpreed

Wherever the idea came from that you had to buffer RTT*2 in a midpath node, 
it is categorically wrong.
 
What is possibly relevant is that you will have RTT * bottleneck-bit-rate bits 
in flight from end-to-end in order not to be constrained by the 
acknowledgement time.   That is: TCP's outstanding window should be 
RTT*bottleneck-bit-rate to maximize throughput.   Making the window *larger* 
than that is not helpful.
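
A quick worked example of that window sizing (link rate and RTT picked arbitrarily):

def bdp_bytes(rtt_s, bottleneck_bps):
    """Bandwidth-delay product: the in-flight data needed to keep the bottleneck busy."""
    return rtt_s * bottleneck_bps / 8

# A 50 Mbit/s bottleneck with a 100 ms round trip needs ~625 KB in flight; a
# sender window (or a midpath buffer) much larger than this only adds queueing delay.
print(bdp_bytes(0.100, 50e6) / 1e3, "KB")   # -> 625.0 KB
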
 
So when somebody throws that in your face, just confidently use the words 
Bullshit, show me evidence, and ignore the ignorant person who is repeating 
an urban legend similar to the one about the size of crocodiles in New York's 
sewers that are supposedly there because people throw pet crocodiles down there.
 
If you need a simplified explanation of why buffering 
2*RTT-in-the-worst-case-around-the-world * maximum-bit-rate-on-the-path is harmful, all 
you need to think about is what happens when some intermediate huge bottleneck 
buffer fills up (which it certainly will, very quickly, since by definition the 
paths feeding it have much higher delivery rates than it can handle).
 
What will happen?  A packet will be silently discarded from the tail of the 
queue.  But that packet's loss will not be discovered by the endpoints until 
the bottleneck-bit-rate * the worst-case-RTT * 2 (or maybe 4 if the reverse 
path is similarly clogged) seconds later.  Meanwhile the sources would have 
happily *sustained* the size of the bottleneck's buffer, by putting out that 
many bits past the lost packet's position (thinking all is well).
 
And so what will happen?  most of the following packets behind the lost packet 
will be retransmitted by the source again.   This of course *doubles* the 
packet rate into the bottleneck.
 
And there is an infinite regression - all the while there being a solidly 
maintained extremely long queue of packets that are waiting for the bottleneck 
link.  Many, many seconds of end-to-end latency on that link, perhaps.
 
Only if all users give up and go home for the day on that link will the 
bottleneck link's send queue ever drain.  New TCP connections will open, and if 
lucky, they will see a link with delays from earth-to-pluto as its norm on 
their SYN/SYN-ACK.  But they won't get better service than that, while 
continuing to congest the node.
 
What you need is a message from the bottleneck link to say WHOA - I can't 
process all this traffic.  And that happens *only* when that link actually 
drops packets after about 50 msec. or less of traffic is queued.
 
 
 
 


On Thursday, July 4, 2013 1:57am, Mikael Abrahamsson swm...@swm.pp.se said:



 On Wed, 3 Jul 2013, Dave Taht wrote:
 
  Suggestions as to things to test and code to test them welcomed. In
 
 I'm wondering a bit what the shallow buffering depth means to higher-RTT
 connections. When I advocate bufferbloat solutions I usually get thrown in
 my face that shallow buffering means around-the-world TCP-connections will
 behave worse than with a lot of buffers (traditional truth being that you
 need to be able to buffer RTT*2).
 
 It would be very interesting to see what an added 100ms
 (http://stackoverflow.com/questions/614795/simulate-delayed-and-dropped-packets-on-linux)
 and some packet loss/PDV would result in. If it still works well, at least
 it would mean that people concerned about this could go back to rest.
 
 Also, it would be interesting to see if Google's proposed QUIC interacts well
 with the bufferbloat solutions. I imagine it will since it in itself
 measures RTT and FQ_CODEL is all about controlling delay, so I imagine
 QUIC will see a quite constant view of the world through FQ_CODEL.
 
 --
 Mikael Abrahamsson    email: swm...@swm.pp.se
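
Regarding the netem suggestion quoted above, a minimal sketch of adding 100 ms of delay and a little loss on a test machine (the interface name and figures are placeholders; this needs root, and it replaces whatever root qdisc is installed, so run it on a separate impairment box rather than on the router under test):

import subprocess

IFACE = "eth0"  # placeholder: whichever interface faces the path under test

def add_netem(delay_ms=100, loss_pct=0.5):
    """Install a netem qdisc that delays and randomly drops egress packets on IFACE."""
    subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
                    "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"], check=True)

def clear_netem():
    """Remove the netem qdisc again."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)
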
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] blip: a tool for seeing internet latency with javascript

2013-04-28 Thread dpreed


 
Actually, using HTTP 1.1 GET  that generates a single packet in each direction 
for a ping is quite reasonable.  In fact, it is better for measuring actual 
path latencies, since ICMP pings *could* be discriminated against in a router 
along the way (in the old days people in the routing community suggested that 
ICMP should be diverted off of the fast path to avoid degrading the user 
experience).
 
I've been using this technique to measure bufferbloat-induced delays on iPhones 
and Android phones for quite a while.  I have a couple of servers that use 
nginx status handlers to generate a short GET response without touching files 
as my targets.
 
Since it depends on HTTP 1.1's re-use of the underlying TCP connection for 
successive GET commands, it's a bit fragile.
 
Javascript can be made to do a lot of performance testing - you can access both 
TCP and DNS protocols from the browser, so if you play cards right, you can 
cause single TCP exchanges and single UDP exchanges to happen with cooperative 
servers (web servers using HTTP 1.1 and DNS resolvers using uncacheable UDP 
name lookups).
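
A minimal sketch of the keep-alive ping described above (the target URL is just an example; any server that answers a tiny GET on a persistent connection will do):

import http.client
import time

def http_pings(host="gstatic.com", path="/generate_204", count=10):
    """Reuse one HTTP/1.1 connection and time successive tiny GETs, ping-style."""
    conn = http.client.HTTPConnection(host, timeout=10)
    rtts = []
    for _ in range(count):
        t0 = time.monotonic()
        conn.request("GET", path, headers={"Connection": "keep-alive"})
        conn.getresponse().read()   # drain the (empty) body so the connection can be reused
        rtts.append((time.monotonic() - t0) * 1000)
    conn.close()
    return rtts

for i, ms in enumerate(http_pings()):
    print(f"http_seq={i} time={ms:.1f} ms")

Note that the first sample includes the TCP handshake; the later samples time only the request/response round trip over the reused connection.
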
 
 
 


On Sunday, April 28, 2013 10:56am, Rich Brown richb.hano...@gmail.com said:



 This is indeed a cool hack. I was astonished for a moment, because it was a
 bedrock belief that you can't send pings from Javascript. And in fact, that is
 still true.
 
 Apenwarr's code sends short HTTP queries of the format shown below to each of 
 two
 hosts:
 
 http://gstatic.com/generate_204
 http://apenwarr.ca/blip/
 
 The Blip tool shows ~60-70ms for the gstatic host, and ~130 msec for the 
 latter.
 Ping times are ~52 msec and 125msec, respectively. These times seem to track
 response times by my eye (no serious analysis) to load both on my primary
 (bloated) router and CeroWrt.
 
 Still a cool hack.
 
 Rich
 
 -
 HTTP Request  Response for typical blip ping
 
 OPTIONS /generate_204 HTTP/1.1
 Host: gstatic.com
 Connection: keep-alive
 Access-Control-Request-Method: GET
 Origin: http://gfblip.appspot.com
 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.31
 (KHTML, like Gecko) Chrome/26.0.1410.65 Safari/537.31
 Access-Control-Request-Headers: accept, origin, x-requested-with
 Accept: */*
 Referer: http://gfblip.appspot.com/
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
 
 HTTP/1.1 204 No Content
 Content-Length: 0
 Content-Type: text/html; charset=UTF-8
 Date: Sun, 28 Apr 2013 12:37:17 GMT
 Server: GFE/2.0
 
 
 On Apr 26, 2013, at 7:04 PM, Dave Taht dave.t...@gmail.com wrote:
 
  Apenwarr has developed a really unique tool for seeing latency and
  packet loss via javascript. I had no idea this was possible:
 
  http://apenwarr.ca/log/?m=201304#26
 
 
 
  --
  Dave Täht
 
  Fixing bufferbloat with cerowrt:
 http://www.teklibre.com/cerowrt/subscribe.html
  ___
  Bloat mailing list
  bl...@lists.bufferbloat.net
  https://lists.bufferbloat.net/listinfo/bloat
 
 ___
 Cerowrt-devel mailing list
 Cerowrt-devel@lists.bufferbloat.net
 https://lists.bufferbloat.net/listinfo/cerowrt-devel

 ___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel

