Re: [Cerowrt-devel] 10gige and 2.5gige

2021-12-16 Thread David Lang
another valuable feature of fiber for home use is that fiber can't contribute to 
ground loops the way that copper cables can.


and for the paranoid (like me :-) ), fiber also means that any electrical 
disaster that happens to one end won't propagate through and fry other equipment


David Lang

On Thu, 16 Dec 2021, David P. Reed wrote:


Thanks, that's good to know... The whole SFP+ adapter concept has seemed to me to be a 
"tweener" in hardware design space. Too many failure points. That said, I like 
fiber's properties as a medium for distances.


On Thursday, December 16, 2021 2:31pm, "Joel Wirāmu Pauling" said:




The heat issues you mention with UTP are gone with the 802.3bz stuff (i.e. NBASE-T).
They were mostly due to the 10GBASE-T spec being old and out of line with the SFP+ spec, which led to higher power consumption than SFP+ cages were rated to supply, plus the aforementioned heat problems; this is not a problem with newer kit.
It went away with the move to smaller silicon processes, and now UTP-based 10G devices in the home are more common and don't suffer from the fragility issues of the earlier copper-based 10G spec. The AQC chipsets were the first to introduce it, but most other vendors have finally picked it up after 5 years of foot-dragging.



On Fri, Dec 17, 2021 at 7:16 AM David P. Reed <dpr...@deepplum.com> wrote:
Yes, it's very cheap and getting cheaper.

Since its price fell to the point I thought was cheap, my home has a 10 GigE 
fiber backbone, 2 switches in my main centers of computers, lots of 10 GigE 
NICs in servers, and even dual 10 GigE adapters in a Thunderbolt 3 external 
adapter for my primary desktop, which is a Skull Canyon NUC.

I strongly recommend people use fiber and SFP+ DAC cabling, because twisted 
pair, while cheaper, is actually problematic at speeds above 1 Gig - mostly due 
to power and heat.

BTW, it's worth pointing out that USB 3.1 can handle 10 Gb/sec too, and USB-C 
connectors and cables can carry Thunderbolt at higher rates.  Those adapters 
are REALLY CHEAP. There's nothing inherently different about the electronics; 
if anything, USB 3.1 is more complicated logic than the ethernet MAC.

So the reason 10 GigE is still far more expensive than USB 3.1 is mainly market 
volume - if 10 GigE were a consumer product, not a datacenter product, you'd 
think it would already be as cheap as USB 3.1 in computers and switches.

Since DOCSIS can support up to 5 Gb/s, I think, when will Internet Access Providers start offering "Cable Modems" that support customers who want more than "a full Gig"? Given all the current DOCSIS 3 CMTS's etc. out there, it's just a configuration change. 


So when will consumer "routers" support 5 Gig, 10 Gig?

On Thursday, December 16, 2021 11:20am, "Dave Taht" <dave.t...@gmail.com> said:




10GigE has really gotten cheap.

https://www.tomshardware.com/news/innodisk-m2-2280-10gbe-adapter

On the other hand, users are reporting issues with actually using 2.5GbE 
copper with this router in particular: negotiating 2.5gbit halves the 
achieved rate compared to negotiating 1gbit.

https://forum.mikrotik.com/viewtopic.php?t=179145#p897836


--
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org

Dave Täht CEO, TekLibre, LLC
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Bloat] uplink bufferbloat and scheduling problems

2021-12-02 Thread David Lang

On Thu, 2 Dec 2021, Toke Høiland-Jørgensen wrote:


"Valdis Klētnieks"  writes:


On Wed, 01 Dec 2021 13:09:46 -0800, David Lang said:


with wifi where you can transmit multiple packets in one airtime slot, you need
enough buffer to handle the entire burst.


OK, I'll bite... roughly how many min-sized or max-sized packets can you fit
into one slot?


On 802.11n, 64kB; on 802.11ac, 4MB(!); on 802.11ax, no idea - the same as 
802.11ac?


As I understand it, 802.11ax can do 16MB (4MB to each of 4 different endpoints)

This is made significantly messier because the headers for each transmission are 
sent at FAR slower rates than the data can be, so if you send a single 64-byte 
packet in a timeslot that could send 4-16MB, it doesn't take 1/128,000 of the 
time (the ratio of the data); it's more like 1/2 of the time.


So it's really valuable for overall throughput to fill those transmit slots 
rather than having the data trickle out over many slots.
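The airtime arithmetic behind this can be sketched roughly. The data rate and per-transmission overhead below are illustrative assumptions, not measured 802.11 values:

```python
# Rough airtime comparison: a lone tiny frame vs. a filled aggregate.
# Assumed numbers for illustration only: 600 Mb/s MCS data rate, and
# ~100 us of fixed per-transmission overhead (preamble, PLCP header,
# ACK, inter-frame spacing).
DATA_RATE_BPS = 600e6      # assumed PHY data rate
OVERHEAD_S = 100e-6        # assumed fixed per-transmission overhead

def airtime(payload_bytes):
    """Seconds of airtime for one transmission opportunity."""
    return OVERHEAD_S + payload_bytes * 8 / DATA_RATE_BPS

small = airtime(64)          # a lone 64-byte packet
full = airtime(4 * 2**20)    # a filled 4 MB aggregate

print(f"64 B frame: {small * 1e6:8.1f} us")
print(f"4 MB burst: {full * 1e6:8.1f} us")
# The 64-byte frame carries ~65,000x less data but uses far more than
# 1/65,000 of the airtime: fixed overhead dominates small transmissions.
print(f"payload efficiency of the small frame: "
      f"{64 * 8 / DATA_RATE_BPS / small:.2%}")
```

With these assumed numbers, the lone frame spends under 1% of its airtime on actual data, which is the sense in which a near-empty timeslot costs a large fraction of a full one.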


David Lang


Re: [Cerowrt-devel] [Bloat] uplink bufferbloat and scheduling problems

2021-12-01 Thread David Lang

On Wed, 1 Dec 2021, David P. Reed wrote:

To say it again: More memory *doesn't* improve throughput when the queue 
depths exceed one packet on average


Slight disagreement here: the buffer improves throughput up to the point where 
it can handle one burst of packets. When packets are transmitted individually, 
that's about one packet (insert hand-waving about scheduling delays, etc.), but 
with wifi, where you can transmit multiple packets in one airtime slot, you need 
enough buffer to handle the entire burst.
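As a back-of-the-envelope check, the burst sizes quoted in this thread translate into driver-queue depths like so (the aggregate figures are the thread's rough numbers, not spec-exact):

```python
# Toy buffer-sizing rule: the queue must hold at least one full
# airtime burst, or the aggregate can never be filled.
# Aggregate sizes are the rough per-standard figures from this thread.
AGGREGATE_BYTES = {
    "802.11n": 64 * 1024,      # ~64 kB burst
    "802.11ac": 4 * 2**20,     # ~4 MB burst, as quoted above
}
PKT = 1500  # typical MTU-sized packet

for std, agg in AGGREGATE_BYTES.items():
    pkts = agg // PKT
    print(f"{std}: need >= {pkts} packets (~{agg / 1024:.0f} kB) "
          f"buffered to fill one burst")
```

So "one packet of queue on average" and "enough buffer for one burst" differ by three orders of magnitude on 802.11ac-class hardware under these assumptions.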


David Lang


Re: [Cerowrt-devel] [Bloat] uplink bufferbloat and scheduling problems

2021-12-01 Thread David Lang
with 802.11ac, the difference between uplink and downlink is that the AP can 
transmit to multiple users at the same time (multiple signals spatially 
multiplexed), but the users transmit back one at a time.


David Lang

On Wed, 1 Dec 2021, David P. Reed wrote:


What's the difference between uplink and downlink?  In DOCSIS the rate 
asymmetry was the issue. But in WiFi, the air interface is completely symmetric 
(802.11ax, though, maybe not, because of central polling).

In any CSMA link (WiFi), there is no "up" or "down". There is only sender and 
receiver, and each station and the AP are always doing both.

The problem with shared media links is that the "waiting queue" is distributed, 
so to manage queue depth, ALL of the potential senders must respond aggressively to 
excess packets.

This is why a lot (maybe all) of the silicon vendors are making really bad 
choices w.r.t. bufferbloat by adding buffering in the transmitter chip itself, 
and not discarding or marking when queues build up. It's the same thing that 
constantly leads hardware guys to think that more memory for buffers improves 
throughput, when it only improves advertised throughput.

To say it again: more memory *doesn't* improve throughput when the queue depths exceed one packet on average, 
and it degrades "goodput" at higher levels by causing the ultimate sender to "give up" 
due to long latency. (At the extreme, users will just click again on a slow URL, 
forcing the system to transmit everything again while the old packets still clog 
the queues, so all of that throughput becomes "badput".)

So, if you want good performance on a shared radio medium, you need to squish each flow's 
queue depth down from sender to receiver to "average < 1 in queue", and also 
drop packets when there are too many simultaneous flows competing for airtime. And if your 
source process can't schedule itself frequently enough, don't expect the network to replace 
buffering at the TCP source and destination - it is not intended to be a storage system.
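The "more memory doesn't help past one queued packet" claim can be illustrated with a toy fixed-rate queue simulation (purely illustrative, not a model of any real NIC):

```python
# Minimal fixed-rate queue: once the queue stays nonempty, extra
# buffer adds only latency, never throughput.
def run(buffer_pkts, arrivals_per_tick=2, ticks=1000):
    """Server drains 1 pkt/tick; arrivals exceed capacity (2x load)."""
    q, sent, delays = [], 0, []
    for t in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(q) < buffer_pkts:
                q.append(t)              # enqueue arrival timestamp
        if q:
            delays.append(t - q.pop(0))  # service one packet FIFO
            sent += 1
    return sent, sum(delays) / len(delays)

for buf in (10, 100, 1000):
    sent, avg_delay = run(buf)
    print(f"buffer={buf:5d}: throughput={sent} pkts, "
          f"avg delay={avg_delay:.0f} ticks")
# Throughput is pinned at the link rate for every buffer size;
# only the queueing delay grows with the buffer.
```

Every buffer size moves exactly one packet per tick once saturated; the only thing the extra memory buys is a longer standing queue, i.e. latency.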



On Tuesday, November 30, 2021 7:13pm, "Dave Taht"  said:




Money quote: "Figure 2a is a good argument to focus latency
research work on downlink bufferbloat."

It peaked at 1.6s in their test:
https://hal.archives-ouvertes.fr/hal-03420681/document

--
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org

Dave Täht CEO, TekLibre, LLC


Re: [Cerowrt-devel] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-09-19 Thread David Lang

On Mon, 20 Sep 2021, Valdis Klētnieks wrote:


On Sun, 19 Sep 2021 18:21:56 -0700, Dave Taht said:

what actually happens during a web page load,


I'm pretty sure that nobody actually understands that anymore, in any
more than handwaving levels.


This is my favorite interview question; the answers I get, even from supposedly 
senior security and networking people, are amazing and saddening.


David Lang
___
Bloat mailing list
bl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Cerowrt-devel] [Starlink] Anhyone have a spare couple a hundred million ... Elon may need to start a go-fund-me page!

2021-08-10 Thread David Lang
the biggest problem starlink faces is shipping enough devices (and launching the 
satellites to support them), not demand. There are enough people interested in 
paying full price that if the broadband subsidies did not exist, it wouldn't 
reduce the demand noticeably.


but if the feds are handing out money, SpaceX is foolish not to apply for it.

David Lang

On Tue, 10 Aug 2021, Jeremy Austin wrote:



A 5.7% reduction in funded locations for StarLink is… not dramatic. If the
project falls on that basis, they've got bigger problems. Much of that
discrepancy falls squarely on the shoulders of the FCC and incumbent ISPs
filing form 477, as well as the RDOF auction being held before improving
mapping — as Rosenworcel pointed out. The state of broadband mapping is
still dire.

If I felt like the reallocation of funds would be 100% guaranteed to
benefit the end Internet user… I'd cheer too.

If.

JHA

On Tue, Aug 10, 2021 at 12:16 PM Dick Roy  wrote:


You may find this of some relevance!




https://arstechnica.com/tech-policy/2021/07/ajit-pai-apparently-mismanaged-9-billion-fund-new-fcc-boss-starts-cleanup/



Cheers (or whatever!),



RR


___
Starlink mailing list
starl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink






Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-08-02 Thread David Lang

I agree that we don't want to make perfect the enemy of better.

A lot of the issues I'm calling out can be simulated/enhanced with different 
power levels.

Over wifi distances, I don't think time delays are going to be noticeable (we're 
talking 10s to low 100s of feet, not miles)


David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:


fair enough, but for this "RF emulator device", being able to support
distance matrices, even hollow symmetric ones, is much better than what's
typically done. The variable solid-state phase shifters are 0-360 degrees,
so they don't provide true time delays either.

This is another "something is better than nothing" type proposal. I think
it can be deployed at a relatively low cost which allows for more
standardized, automated test rigs and much less human interactions and
human errors.

Bob

On Mon, Aug 2, 2021 at 9:30 PM David Lang  wrote:


symmetry is not always (or usually) true. stations are commonly heard at
much larger distances than they can talk, mobile devices have much less
transmit power (because they are operating on batteries) than fixed
stations, and when you adjust the transmit power on a station, you don't
adjust its receive sensitivity.

David Lang

  On Mon, 2 Aug 2021, Bob McMahon wrote:



The distance matrix defines signal attenuations/loss between pairs.  It's
straightforward to create a distance matrix that has hidden nodes, because
all "signal loss" between pairs is defined.  Let's say a 120dB attenuation
path will cause a node to be hidden, as an example.

    A    B    C    D
A   -   35  120   65
B        -   65   65
C             -   65
D                  -

So in the above, A and C are hidden from each other but nobody else is. It
does assume symmetry between pairs, but that's typically true.
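The hidden-node condition described above reduces to a simple threshold check over the attenuation matrix; a minimal sketch, using the example values from the table (the 120dB "cannot hear" threshold is the example figure, not a standard):

```python
# Sketch of the hidden-node check over a symmetric attenuation matrix.
# Values and the 120 dB "cannot hear" threshold come from the example
# in this thread, not from any spec.
import itertools

HIDDEN_DB = 120
nodes = ["A", "B", "C", "D"]
# Upper-triangle pairwise losses in dB (matrix is hollow and symmetric).
loss = {("A", "B"): 35, ("A", "C"): 120, ("A", "D"): 65,
        ("B", "C"): 65, ("B", "D"): 65, ("C", "D"): 65}

def attenuation(x, y):
    """Symmetric lookup: loss from x to y equals loss from y to x."""
    return loss.get((x, y)) or loss.get((y, x))

hidden_pairs = [(x, y) for x, y in itertools.combinations(nodes, 2)
                if attenuation(x, y) >= HIDDEN_DB]
print(hidden_pairs)  # [('A', 'C')]: A and C are hidden from each other
```

Any pair at or above the threshold is mutually hidden while every other pair can still coordinate, which is exactly the topology the matrix was built to produce.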

The RF device takes these distance matrices as settings and calculates the
five branch tree values (as demonstrated in the video). There are
limitations to solutions, though, but I've found those not to be an issue
to date. I've been able to produce hidden nodes quite readily. Add the
phase shifters and spatial stream powers can also be affected, but this
isn't shown in this simple example.

Bob

On Mon, Aug 2, 2021 at 8:12 PM David Lang  wrote:


I guess it depends on what you are intending to test. If you are not going
to tinker with any of the over-the-air settings (including the number of
packets transmitted in one aggregate), the details of what happens over the
air don't matter much.

But if you are going to be doing any tinkering with what is getting sent,
and you ignore the hidden-transmitter type problems, you will create a
solution that seems to work really well in the lab and falls on its face
out in the wild, where spectrum overload and hidden transmitters are the
norm (at least in urban areas), not rare corner cases.

you don't need to include them in every test, but you need to have a way to
configure your lab to include them before you consider any
settings/algorithm ready to try in the wild.

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:


We find four nodes, a primary BSS and an adjunct one, quite good for lots of
testing.  The six nodes allow for a primary BSS and two adjacent ones. We
want to minimize complexity to necessary and sufficient.

The challenge we find is having variability (e.g. montecarlos) that's
reproducible and has relevant information. Basically, the distance matrices
have h-matrices as their elements. Our chips can provide these h-matrices.

The parts for solid state programmable attenuators and phase shifters
aren't very expensive. A device that supports a five branch tree and 2x2
MIMO seems a very good starting point.

Bob

On Mon, Aug 2, 2021 at 4:55 PM Ben Greear 

wrote:



On 8/2/21 4:16 PM, David Lang wrote:

If you are going to set up a test environment for wifi, you need to
include the ability to make a few cases that only happen with RF, not with
wired networks, and are commonly overlooked:

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A
3. station A can hear that station B is transmitting, but not with a
strong enough signal to decode the signal (yes, in theory you can work
around interference, but in practice interference is still a real thing)

David Lang



To add to this, I think you need lots of different station devices,
different capabilities (/n, /ac, /ax, etc)
different numbers of spatial streams, and different distances from the
AP.  From

Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-08-02 Thread David Lang
symmetry is not always (or usually) true. stations are commonly heard at much 
larger distances than they can talk, mobile devices have much less transmit 
power (because they are operating on batteries) than fixed stations, and when 
you adjust the transmit power on a station, you don't adjust its receive 
sensitivity.


David Lang

 On Mon, 2 Aug 2021, Bob McMahon wrote:



The distance matrix defines signal attenuations/loss between pairs.  It's
straightforward to create a distance matrix that has hidden nodes, because
all "signal loss" between pairs is defined.  Let's say a 120dB attenuation
path will cause a node to be hidden, as an example.

    A    B    C    D
A   -   35  120   65
B        -   65   65
C             -   65
D                  -

So in the above, A and C are hidden from each other but nobody else is. It
does assume symmetry between pairs, but that's typically true.

The RF device takes these distance matrices as settings and calculates the
five branch tree values (as demonstrated in the video). There are
limitations to solutions though but I've found those not to be an issue to
date. I've been able to produce hidden nodes quite readily. Add the phase
shifters and spatial stream powers can also be affected, but this isn't
shown in this simple example.

Bob

On Mon, Aug 2, 2021 at 8:12 PM David Lang  wrote:


I guess it depends on what you are intending to test. If you are not going
to tinker with any of the over-the-air settings (including the number of
packets transmitted in one aggregate), the details of what happens over the
air don't matter much.

But if you are going to be doing any tinkering with what is getting sent,
and you ignore the hidden-transmitter type problems, you will create a
solution that seems to work really well in the lab and falls on its face
out in the wild, where spectrum overload and hidden transmitters are the
norm (at least in urban areas), not rare corner cases.

you don't need to include them in every test, but you need to have a way to
configure your lab to include them before you consider any
settings/algorithm ready to try in the wild.

David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:


We find four nodes, a primary BSS and an adjunct one, quite good for lots of
testing.  The six nodes allow for a primary BSS and two adjacent ones. We
want to minimize complexity to necessary and sufficient.

The challenge we find is having variability (e.g. montecarlos) that's
reproducible and has relevant information. Basically, the distance matrices
have h-matrices as their elements. Our chips can provide these h-matrices.

The parts for solid state programmable attenuators and phase shifters
aren't very expensive. A device that supports a five branch tree and 2x2
MIMO seems a very good starting point.

Bob

On Mon, Aug 2, 2021 at 4:55 PM Ben Greear 

wrote:



On 8/2/21 4:16 PM, David Lang wrote:

If you are going to set up a test environment for wifi, you need to
include the ability to make a few cases that only happen with RF, not with
wired networks, and are commonly overlooked:

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A
3. station A can hear that station B is transmitting, but not with a
strong enough signal to decode the signal (yes, in theory you can work
around interference, but in practice interference is still a real thing)

David Lang



To add to this, I think you need lots of different station devices,
different capabilities (/n, /ac, /ax, etc)
different numbers of spatial streams, and different distances from the
AP.  From download queueing perspective, changing
the capabilities may be sufficient while keeping all stations at same
distance.  This assumes you are not
actually testing the wifi rate-ctrl alg. itself, so different throughput
levels for different stations would be enough.

So, a good station emulator setup (and/or pile of real stations) and a

few

RF chambers and
programmable attenuators and you can test that setup...

 From upload perspective, I guess same setup would do the job.
Queuing/fairness might depend a bit more on the
station devices, emulated or otherwise, but I guess a clever AP could
enforce fairness in upstream direction
too by implementing per-sta queues.

Thanks,
Ben

--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com











Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-08-02 Thread David Lang
I guess it depends on what you are intending to test. If you are not going to 
tinker with any of the over-the-air settings (including the number of packets 
transmitted in one aggregate), the details of what happens over the air don't 
matter much.

But if you are going to be doing any tinkering with what is getting sent, and 
you ignore the hidden-transmitter type problems, you will create a solution that 
seems to work really well in the lab and falls on its face out in the wild, 
where spectrum overload and hidden transmitters are the norm (at least in urban 
areas), not rare corner cases.

you don't need to include them in every test, but you need to have a way to 
configure your lab to include them before you consider any settings/algorithm 
ready to try in the wild.


David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:


We find four nodes, a primary BSS and an adjunct one, quite good for lots of
testing.  The six nodes allow for a primary BSS and two adjacent ones. We
want to minimize complexity to necessary and sufficient.

The challenge we find is having variability (e.g. montecarlos) that's
reproducible and has relevant information. Basically, the distance matrices
have h-matrices as their elements. Our chips can provide these h-matrices.

The parts for solid state programmable attenuators and phase shifters
aren't very expensive. A device that supports a five branch tree and 2x2
MIMO seems a very good starting point.

Bob

On Mon, Aug 2, 2021 at 4:55 PM Ben Greear  wrote:


On 8/2/21 4:16 PM, David Lang wrote:

If you are going to set up a test environment for wifi, you need to
include the ability to make a few cases that only happen with RF, not with
wired networks, and are commonly overlooked:

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A
3. station A can hear that station B is transmitting, but not with a
strong enough signal to decode the signal (yes, in theory you can work
around interference, but in practice interference is still a real thing)

David Lang



To add to this, I think you need lots of different station devices,
different capabilities (/n, /ac, /ax, etc)
different numbers of spatial streams, and different distances from the
AP.  From download queueing perspective, changing
the capabilities may be sufficient while keeping all stations at same
distance.  This assumes you are not
actually testing the wifi rate-ctrl alg. itself, so different throughput
levels for different stations would be enough.

So, a good station emulator setup (and/or pile of real stations) and a few
RF chambers and
programmable attenuators and you can test that setup...

 From upload perspective, I guess same setup would do the job.
Queuing/fairness might depend a bit more on the
station devices, emulated or otherwise, but I guess a clever AP could
enforce fairness in upstream direction
too by implementing per-sta queues.

Thanks,
Ben

--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com







Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-08-02 Thread David Lang
that matrix cannot create asymmetric paths (at least, not unless you are also 
tinkering with power settings on the nodes), and will have trouble making hidden 
transmitters (station A can hear stations B and C, but B and C cannot tell the 
other exists), as a node can hear that something is transmitting at much lower 
power levels than it can decode the signal.


David Lang

On Mon, 2 Aug 2021, Bob McMahon wrote:


On Mon, Aug 2, 2021 at 4:16 PM David Lang  wrote:


If you are going to set up a test environment for wifi, you need to include
the ability to make a few cases that only happen with RF, not with wired
networks, and are commonly overlooked:

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A
3. station A can hear that station B is transmitting, but not with a
strong enough signal to decode the signal (yes, in theory you can work
around interference, but in practice interference is still a real thing)

David Lang








Re: [Cerowrt-devel] [Cake] [Make-wifi-fast] [Starlink] Due Aug 2: Internet Quality workshop CFP for the internet architecture board

2021-08-02 Thread David Lang
If you are going to set up a test environment for wifi, you need to include the 
ability to make a few cases that only happen with RF, not with wired networks, 
and are commonly overlooked:

1. station A can hear station B and C but they cannot hear each other
2. station A can hear station B but station B cannot hear station A
3. station A can hear that station B is transmitting, but not with a strong 
enough signal to decode the signal (yes, in theory you can work around 
interference, but in practice interference is still a real thing)


David Lang



Re: [Cerowrt-devel] [Cake] [Bloat] Little's Law mea culpa, but not invalidating my main point

2021-07-12 Thread David Lang
I have seen some performance tests that do explicit DNS timing tests separate 
from other throughput/latency tests.


Since DNS uses UDP (even if it then falls back to TCP in some cases), UDP 
performance (and especially probability of loss at congested links) is very 
important.
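A minimal sketch of such a separate DNS timing test, using only the stdlib resolver (so it inherits whatever UDP/TCP fallback the system does; the hostname is an arbitrary example):

```python
# Time repeated DNS lookups separately from throughput/latency tests.
# Uses only the stdlib resolver; the hostname is an arbitrary example.
import socket
import statistics
import time

def time_lookups(host, n=5):
    """Return per-lookup wall times in ms for successful resolutions."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        try:
            socket.getaddrinfo(host, 80)
        except socket.gaierror:
            continue  # count only lookups that actually resolved
        samples.append((time.perf_counter() - t0) * 1000)
    return samples

samples = time_lookups("example.com")
if samples:
    print(f"median {statistics.median(samples):.1f} ms, "
          f"worst {max(samples):.1f} ms over {len(samples)} lookups")
```

Note this measures the whole resolver path (cache, local stub, upstream server), which is usually what the user experiences; a loss on the congested link shows up as a multi-second retry outlier in the "worst" figure.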


David Lang

On Mon, 12 Jul 2021, Ben Greear wrote:

UDP is better for getting actual packet latency, for sure.  TCP is 
typical-user-experience latency, though, so it is also useful.

I'm interested in the test and visualization side of this.  If there were a 
way to give engineers a good real-time look at a complex real-world network, 
then they have something to go on while trying to tune various knobs in their 
network to improve it.

I'll let others try to figure out how to build and tune the knobs, but the data 
acquisition and visualization is something we might try to accomplish.  I have 
a feeling I'm not the first person to think of this, however... probably 
someone has already done such a thing.

Thanks,
Ben

On 7/12/21 1:04 PM, Bob McMahon wrote:
I believe end hosts' TCP stats are insufficient, as seen per the "failed" 
congestion control mechanisms over the last decades. I think Jaffe pointed 
this out in 1979, though he was using what's been deemed on this thread as 
"spherical cow queueing theory":

"Flow control in store-and-forward computer networks is appropriate for 
decentralized execution. A formal description of a class of "decentralized 
flow control algorithms" is given. The feasibility of maximizing power with 
such algorithms is investigated. On the assumption that communication links 
behave like M/M/1 servers it is shown that no "decentralized flow control 
algorithm" can maximize network power. Power has been suggested in the 
literature as a network performance objective. It is also shown that no 
objective based only on the users' throughputs and average delay is 
decentralizable. Finally, a restricted class of algorithms cannot even 
approximate power."

https://ieeexplore.ieee.org/document/1095152

Did Jaffe make a mistake?

Also, it's been observed that latency is non-parametric in its distributions, 
and computing gaussians per the central limit theorem for OWD feedback loops 
isn't effective. How does one design a control loop around things that are 
non-parametric? It also begs the question: what are the feed-forward knobs 
that can actually help?

Bob
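One illustration of why gaussian summaries mislead here: on a skewed latency distribution, a mean-plus-two-sigma bound and the actual 99th percentile diverge badly. The lognormal samples below are a synthetic stand-in, not real OWD data:

```python
# Why a gaussian summary misleads for latency: heavy one-sided tails.
# Synthetic lognormal OWD samples stand in for real measurements.
import random
import statistics

random.seed(1)
owd_ms = [random.lognormvariate(1.0, 0.8) for _ in range(10_000)]

mean = statistics.mean(owd_ms)
sd = statistics.stdev(owd_ms)
p99 = statistics.quantiles(owd_ms, n=100)[98]  # 99th percentile

print(f"mean={mean:.1f} ms  mean+2sd={mean + 2 * sd:.1f} ms  "
      f"p99={p99:.1f} ms")
# On skewed data the "mean + 2 sigma" bound badly underestimates the
# tail; control loops keyed to order statistics (percentiles) track
# the delays users actually feel without assuming any distribution.
```

Percentiles make no distributional assumption at all, which is one pragmatic answer to the "non-parametric" objection.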

On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <gree...@candelatech.com> wrote:


Measuring one or a few links provides a bit of data, but it seems like if 
someone is trying to understand a large and real network, then the OWD between 
point A and B needs to just be input into something much more grand.  Assuming 
real-time OWD data exists between 100 to 1000 endpoint pairs, has anyone found 
a way to visualize this in a useful manner?

Also, considering something better than ntp may not really scale to 1000+ 
endpoints, maybe round-trip time is the only viable way to get this type of 
data.  In that case, maybe clever logic could use things like trace-route to 
get some idea of how long it takes to get 'onto' the internet proper, and so 
estimate the last-mile latency.  My assumption is that the last-mile latency 
is where most of the pervasive asymmetric network latencies would exist (or 
just ping 8.8.8.8, which is 20ms from everywhere due to $magic).

Endpoints could also triangulate a bit if needed, using some anchor points in 
the network under test.

Thanks,
Ben

On 7/12/21 11:21 AM, Bob McMahon wrote:
 > iperf 2 supports OWD and gives full histograms for TCP write to read, TCP 
connect times, latency of packets (with UDP), latency of "frames" with 
simulated video traffic (TCP and UDP), xfer times of bursts with low duty 
cycle traffic, and TCP RTT (sampling based). It also has support for sampling 
(per interval reports) down to 100 usecs if configured with 
--enable-fastsampling; otherwise the fastest sampling is 5 ms. We've released 
all this as open source.
 >
 > OWD only works if the end realtime clocks are synchronized using a "machine 
level" protocol such as IEEE 1588 or PTP. Sadly, most data centers don't 
provide a sufficient level of clock accuracy or the GPS pulse per second to 
colo and vm customers.
 >
 > https://iperf2.sourceforge.io/iperf-manpage.html
 >
 > Bob
 > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpr...@deepplum.com> wrote:
 >
 >     On Monday, July 12, 2021 9:46am, "Livingood, Jason" 
<jason_living...@comcast.com> sai

Re: [Cerowrt-devel] Fwd: geeks, internet

2021-03-31 Thread David Lang
with multiple geeks in the house, I've survived for years with 8M down 1M up (I 
live in southern california in the middle of a city of >100k people and it's 
only in the last year I've been able to get better, which is 600/30 for $300/m).


100M is a lot (especially 100M upload)

My sister is in rural Michigan and the best she can get is 2M (until starlink), 
with 3 kids doing remote learning and her teaching. Not great, but they survived 
2020 with it.


yes, more is nice, but saying that 100Mb is not enough is ignoring the huge 
population that isn't getting 1/10 of that today.


David Lang

On Wed, 31 Mar 2021, Karl Auerbach wrote:


Date: Wed, 31 Mar 2021 09:55:45 -0700
From: Karl Auerbach 
To: Dave Taht ,
William Allen Simpson 
Cc: cerowrt-devel 
Subject: Re: [Cerowrt-devel] Fwd: geeks, internet

100mbits/second is to my mind rather inadequate.  It is surprising how 
chatty my house is even in the wee hours in this era of IoT and massive 
software updates for phones, cars, and toasters.


I have concern that policy is being made using a simple number 
("bandwidth") to represent something too complex to be characterized by 
any single number.


I wrote a note about that a while back, I think it dovetails with your 
point about obtaining "better bandwidth" based on the way bandwidth is 
going to be used:


Why You Shouldn't Believe Network Speed Tests - 
https://blog.iwl.com/blog/do_not_trust_speed_tests


(In a slightly different direction, way back in time I did a quite 
partial design of a protocol to evaluate hop-by-hop path characteristics 
in a lightweight way and in not much more than a small multiple of 
round-trip time. 
https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html   I still 
think we need something like that in order to improve the way that 
clients choose among replicated resources on the net.)


    --karl--

On 3/31/21 5:48 AM, Dave Taht wrote:

It would be really nice if there was some string I could pull to get
the senators behind this



https://arstechnica.com/tech-policy/2021/03/100mbps-uploads-and-downloads-should-be-us-broadband-standard-senators-say/


to help morph this:



https://docs.google.com/document/d/1T21on7g1MqQZoK91epUdxLYFGdtyLRgBat0VXoC9e3I/edit?usp=sharing


into something actionable.

On Wed, Mar 31, 2021 at 3:39 AM William Allen Simpson
 wrote:

Thanks.  I didn't know about the internet-history mailing list.
If I survive my covid vaccination today, I'll join it.
(My father died within 4 hours of his 1st Moderna dose.)

I am terribly sorry to hear that. I worry a lot about the rapidity of
the rollout here without regard for potential side-effects, and since
I've been so successfully self isolating on my boat,
and kind of used to it, generally have felt that it was better that
early adopters and people that really need it get theirs first.

I also recently re-watched the stepford wives, which doesn't help.


Strongly agree with Karl Auerbach.  I've had the opportunity of

Karl is a fascinating person and more people should read him and his blog.


living with a (now former) Member of Congress for 20+ years.

As I've said many times, all human interaction involves politics.
We Internauts designing and implementing standards are also
involved in politics, but are very bad at it.

I am willing to re-enter it, reluctantly.



On 3/31/21 12:17 AM, Dave Taht wrote:

I note I really like the internet history mailing list.

-- Forwarded message -
From: Dave Täht 
Date: Tue, Mar 30, 2021 at 7:50 PM
Subject: geeks, internet
To: 


- Forwarded message from the keyboard of geoff goodfellow via
Internet-history  -

Date: Mon, 13 Jul 2020 06:52:58 -1000
From: the keyboard of geoff goodfellow via Internet-history
  
To: Internet-history 
Subject: Re: [ih] Keep the geeks in charge of the internet

-- Forwarded message -
From: Karl Auerbach 
Date: July 12, 2020 at 06:19:26 GMT+9

That piece demonstrates why "geeks" should *not* run the Internet.

Bodies such as ICANN have demonstrated time and time again that they are
incapable of resisting capture by organized business interests, such as the
trademark industry and the domain name registry industry (which, through
ICANN's decades-long self-blindness, has created a multi-billion-per-year
money pump of monopoly-rent profit).

Over the years I've spent a fair amount of time among both "geeks" and
"policymakers".

There are definitely many very intelligent people in those camps. However,
there are relatively few "geeks" who understand economics, law, or social
forces.  The same can be said of the policymakers - there are many whose
depth of understanding of the Internet is no deeper than having an AOL
email account.

The voice of experts who know how a thing works, from top to bottom, is
essential.  But our world is like the fabled elephant in the tale of the
blind men who each perceive the creature as on

Re: [Cerowrt-devel] [Make-wifi-fast] wireguard almost takes a bullet

2021-03-30 Thread David Lang
the 'control' that the various companies gain over parts of the kernel is less a 
matter of the company having control and more a matter of them hiring/sponsoring 
a developer who has the control. If the person leaves that company for another 
one, any control moves with that developer.


and while most of the developers do work for a relatively small group of 
companies, the list of developers does shift over time and people can 'break in' 
by submitting patches.


I'm not thrilled by the Linux Foundation, it was created to be a way to pay 
Linus without him working for a specific company (avoiding even the appearance 
of bias) but it's morphed to present at least the appearance of special access.


David Lang

On Tue, 30 Mar 2021, David P. Reed wrote:


Date: Tue, 30 Mar 2021 21:23:50 -0400 (EDT)
From: David P. Reed 
To: Theodore Ts'o 
Cc: Make-Wifi-fast ,
Cake List ,
cerowrt-devel ,
bloat 
Subject: Re: [Make-wifi-fast] [Cerowrt-devel] wireguard almost takes a bullet


Theodore -

I appreciate you showing the LF executive salary numbers are not quite as high 
as I noted. My numbers may have been inflated, but I've definitely seen a 
$900,000 package for at least one executive reported in the press (an executive 
who was transferred in from a F100 company which is close to the LF).

On the other hand, they are pretty damn high salaries for a non-profit. Are 
they appropriate? Depends. There are no stockholders and no profits, just a 
pretty substantial net worth.

Regarding the organization of "Linux, Inc." as a hierarchical control structure - I'll 
just point out that hierarchical control of the development of Linux suggests that it is not at all 
a "community project" (if it ever was). It's a product development organization with 
multiple levels of management.

Yet the developers are employees of a small number of major corporations. In this sense, 
it is like a "joint venture" among those companies.

To the extent that those companies gain (partial) control of the Linux kernel, as appears 
to be the case, I think Linux misrepresents itself as a "community project", 
and in particular, the actual users of the software may have little say in the direction 
development takes going forwards.

There's little safeguard, for example, against "senior management" biases in 
support of certain vendors, if other vendors are excluded from effective participation by 
one of many techniques. In other words, there's no way it can be a level playing field 
for innovation.

In that sense, the Linux kernel community has reached a point very much like 
Microsoft Windows development reached in 1990 or so. I note that date because 
at that point, Microsoft was challenged with a variety of anti-trust actions 
based on the fact that it used its Windows monopoly status to put competitors 
in the application space, and competitors producing innovative operating 
systems out of business (GO Computer Corporation being one example of many).

This troubles me. It may not trouble the developers who are in the Linux 
community and paid by the cartel of companies that control its direction.

I have no complaint about the technical competence of individual developers - the quality 
is pretty high, at least as good as those who worked on Windows and macOS. But it's 
becoming clear that there is a narrowing of control of an OS that has a lot of influence 
in a few hands. That those few hands don't work for one company doesn't eliminate its 
tendency to become a cartel. (one that is not transparent at all about functioning as 
such - preferring to give the impression that the kernel is developed by part-time 
voluntary "contributions").

The contrast with other open source communities is quite sharp now. There is 
little eleemosynary intent that can be detected any more. I think that is too 
bad, but things change.

This is just the personal opinion of someone who has been developing systems 
for 50+ years now. I'm kind of disappointed, but my opinion does not really 
matter much.

David




On Monday, March 29, 2021 9:52pm, "Theodore Ts'o"  said:




On Mon, Mar 29, 2021 at 04:28:11PM -0400, David P. Reed wrote:
>
>
> What tends to shape Linux and FreeBSD, etc. are the money sources
> that flow into the communities. Of course Linux is quite
> independently wealthy now. The senior executives of the Linux
> Foundation are paid nearly a million dollars a year, each. Which
> just indicates that major corporations are seriously interested in
> controlling the evolution of Linux (not the Gnu part, the part that
> has Linus Torvalds at its center).

First of all, I don't believe your salary numbers are correct.

https://nonprofitlight.com/ca/san-francisco/linux-foundation

Secondly, the "senior executives" of the Linux Foundation don't have
any control over "the evolution of Linux". The exception to that are

Re: [Cerowrt-devel] Looking for MORE SQM Router Recommendations !

2021-03-16 Thread David Lang

This is using the compute module, that does not have any on-board ports

so it's 2 Gig ports total

David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Cake] mo bettah open source multi-party videoconferncing in an age of bloated uplinks?

2020-03-27 Thread David Lang

On Fri, 27 Mar 2020, David P. Reed wrote:



Congestion control for real-time video is quite different than for streaming. 
Streaming really is dealt with by a big enough (multi-second) buffering, and 
can in principle work great over TCP (if debloated).

UDP congestion control MUST be end-to-end and done in the application layer, 
which is usually outside the OS kernel. This makes it tricky, because you end 
up with latency variation due to the OS's process scheduler that is on the order 
of magnitude of the real-time requirements for air-to-air or light-to-light 
response (meaning the physical transition from sound or picture to and from the 
transducer).


at some level this is correct, but if the link is clogged with TCP packets, it 
doesn't matter what your UDP application attempts to do, so installing cake to 
keep individual links from being too congested will allow your UDP application 
have a chance to operate.
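As a concrete (hypothetical) illustration of that cake setup - interface names and rates here are placeholders, and the shaped bandwidth should be set slightly below the true link rate so the qdisc, not the modem buffer, becomes the bottleneck:

```shell
# Egress shaping on the WAN interface (hypothetical name/rate)
tc qdisc replace dev eth0 root cake bandwidth 18mbit diffserv4

# Ingress shaping via an IFB device that eth0's ingress traffic
# has been redirected to; cake's "ingress" mode accounts for the
# fact that drops here happen after the bottleneck
tc qdisc replace dev ifb0 root cake bandwidth 95mbit ingress
```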


David Lang


Re: [Cerowrt-devel] I so love seeing stuff like this

2019-02-02 Thread David Lang

On Sat, 2 Feb 2019, Mikael Abrahamsson wrote:


On Fri, 1 Feb 2019, David Lang wrote:

I had high hopes for these, but the driver development is not working well, 
it's one guy at Marvell who does it in his spare time, nobody else has the 
info to be able to work on it.


It's the wifi you're worried about? I've read about people having problems 
with the wifi part.


Yes, it's the wifi development and support that is the biggest problem with 
these APs.


David Lang


Re: [Cerowrt-devel] I so love seeing stuff like this

2019-02-01 Thread David Lang

On Fri, 1 Feb 2019, Mikael Abrahamsson wrote:


On Fri, 1 Feb 2019, Matt Taggart wrote:


Anyone else have inexpensive, better cpu, and 802.11ac capable
replacements for WNDR3800?


Used WRT1200AC, WRT1900ACv2 or WRT1900ACS.


I had high hopes for these, but the driver development is not working well, it's 
one guy at Marvell who does it in his spare time, nobody else has the info to be 
able to work on it.


I'm working on the c2600 as my replacement for the wndr3800. I tried the C7 but 
it's not really much better than the wndr3800


David Lang


Re: [Cerowrt-devel] fq_pie for linux

2018-12-06 Thread David Lang

On Thu, 6 Dec 2018, Dave Taht wrote:


Toke Høiland-Jørgensen  writes:


Dave Taht  writes:


https://github.com/gautamramk/FQ-PIE-for-Linux-Kernel/issues/2


With all the variants of fq+AQM, maybe decoupling the FQ part and the
AQM part would be worthwhile, instead of reimplementing it for each
variant...


I actually sat down to write a userspace implementation of the fq bits
in C with a pluggable AQM a while back. I called it "drrp".

I think that there are many applications today that do too much
processing per packet, and end up with their recv socket buffer
overflowing (ENOBUFS) and tail-dropping in the kernel. I've certainly
seen this with babeld, in particular.

So by putting an intervening layer around the udp recv call to keep
calling that as fast as possible, and try to FQ and AQM the result, I
thought we'd get better fairness between different flows over udp and a
smarter means of shedding load when that was happening.

Then... there was all this activity recently around other approaches to
the udp problem in the kernel, and I gave up while that got sorted out.


one of these is (IIRC) recvmmsg(), which lets the app get all the pending packets 
from the Linux UDP stack in one system call rather than having to make one 
syscall per packet. In rsyslog this is a significant benefit at high packet 
rates.


David Lang


Re: [Cerowrt-devel] Spectre and EBPF JIT

2018-01-05 Thread David Lang

He does a good job of explaining these high-profile vulnerabilities.

On Fri, 5 Jan 2018, Jonathan Morton wrote:


On 5 Jan, 2018, at 5:35 pm, dpr...@deepplum.com wrote:

Of course the "press" wants everyone to be superafraid, so if they can say "KVM is 
affected" that causes the mob to start running for the exits!


Meanwhile, in XKCD land...

https://xkcd.com/1938/



Re: [Cerowrt-devel] solar wifi ap designs?

2017-06-05 Thread David Lang

On Mon, 5 Jun 2017, Richard Smith wrote:


My WNDR 3700v2 power supply is rated at 12V 2.5A which is a peak of 30W.


don't forget that this includes providing power out to the USB port as well.

yet another reason to measure things :-)

David Lang


Re: [Cerowrt-devel] Turris Omnia

2016-11-06 Thread David Lang

On Sun, 6 Nov 2016, James Cloos wrote:


I can't find any mailing list specifically about the Omnia, I hope
someone here may have some tips...

I've put my omnia into production, but the wireless is extremely
sub-optimal.

Things like the droid netinfo widget show 433 Mbps for the 11ac and 72
Mbps on the 2.4, but throughput sucks.

The throughput seems backwards:  upload speeds eclipse download speeds.

I'm using it only for my 802.11 wlan; its "wan" interface is on my
primary switch, and I've set it for routing rather than bridging, with a
/27 for each of the 5.0 and 2.4 (and a /24 for its wired lan ports, which
are not currently in use).

As an example, rsync/ssh/tcp/ip reports net throughput of around 6 Mbps
down on the 11ac, and around ten times that for upload.

I tried 40MHz and 20MHz as well as just 11n on the 5.0 and there was no
improvement over the 80MHz 11ac.

I'm doing a (Gentoo) emerge sync right now on the laptop, which only
does 11n on 2.4.  That is almost OK for a change.  On the omnia,
luci/admin/status/overview reports 135 Mbps for both up and down
from/to the laptop (during said emerge sync).  And the rsync(1)
output looks like it may be updating about that fast.

That is vastly better than I previously saw.

But even so, the 5.0 radio still never shows more than a 6Mbps download
speed and spikes of up to 300 Mbps upload.

I can't find anything online about backwards throughput for  802.11.
The only search results talk about wan links rather than wlan links.

Does anyone have any ideas of how to diagnose or fix this?


My first question is if you have checked that you don't have other 5GHz users on 
the same channel in your area.


Also, check that you have a country code set so that you can use the full 
power/frequency range for your location.


upload faster than download is unusual, but interference can cause strange 
issues.


Get the RF right before you worry about other things.
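Those RF checks can be run from the router's shell with iw(8); the interface name is a placeholder:

```shell
iw reg get                                  # regulatory domain / country code in effect
iw dev wlan0 scan | grep -E 'freq:|SSID:'   # neighboring networks and their channels
iw dev wlan0 station dump                   # per-client signal levels and bitrates
```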

David Lang


Re: [Cerowrt-devel] [Cake] conntrack and ipv6

2016-07-03 Thread David Lang

On Sat, 2 Jul 2016, Dave Täht wrote:


It is generally my hope that ipv6 nat will not be widely deployed.

Firewalls will be stateful instead, and thus there would be no need to
access the conntrack information for ipv6 in cake.


well, conntrack is the way that the firewall handles its state. Conntrack also 
has features to let you sync its state from one system to its backup so that 
failover maintains the state.



I'm not sure, however, to
what extent ipv6 conntrack is in openwrt today, certainly udp and tcp,
"in" is essentially blocked by default, and needs to be triggered by an
outgoing message.


It's compiled into the kernel and on by default (I fight to turn it off for 
Scale where I don't need to maintain state in the APs as they don't do any 
firewalling)


David Lang


Similarly I'm unfamiliar with the state of ipv6 upnp
and pcp support in openwrt or client applications at present.


On 6/30/16 10:33 AM, Kevin Darbyshire-Bryant wrote:



On 02/06/16 13:29, Jonathan Morton wrote:

On 2 Jun, 2016, at 14:09, Kevin Darbyshire-Bryant
<ke...@darbyshire-bryant.me.uk> wrote:

Cake uses the flow dissector API to do flow hashing...including per
host flows for dual/triple isolation.  The unfortunate bit is that
the qdisc inevitably gets placed after packets have been NATed on
egress and before they've been de-NATed on ingress.

When mentioned before Johnathan said "flow dissector ideally needs to
be tweaked to do this" or words to that effect.

I'd like to progress that idea...the thought of me kernel programming
should horrify everyone but really I'm asking for help in being
pointed in the right direction to ask for help...and go from there :-)

I believe Linux does NAT using a “connection tracker” subsystem.  That
would contain the necessary data for resolving NAT equivalents.  I
don’t know how easy it is to query in a qdisc context, though.

Imagine my joy of discovering http://fatooh.org/esfq-2.6/  - someone has
already bl**dy done itand I found it lurking in LEDE as part of a
patch.

So there relevant bits are something of the order:


+#ifdef CONFIG_NET_SCH_ESFQ_NFCT
+   enum ip_conntrack_info ctinfo;
+   struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
+#endif

+#ifdef CONFIG_NET_SCH_ESFQ_NFCT
+   /* defaults if there is no conntrack info */
+   info.ctorigsrc = info.src;
+   info.ctorigdst = info.dst;
+   info.ctreplsrc = info.dst;
+   info.ctrepldst = info.src;
+   /* collect conntrack info */
+   if (ct && ct != &nf_conntrack_untracked) {
+   if (skb->protocol == __constant_htons(ETH_P_IP)) {
+   info.ctorigsrc = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.ip;
+   info.ctorigdst = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u3.ip;
+   info.ctreplsrc = ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u3.ip;
+   info.ctrepldst = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3.ip;
+   }
+   else if (skb->protocol == __constant_htons(ETH_P_IPV6)) {
+   /* Again, hash ipv6 addresses into a single u32. */
+   info.ctorigsrc = jhash2(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.ip6, 4, q->perturbation);
+   info.ctorigdst = jhash2(ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u3.ip6, 4, q->perturbation);
+   info.ctreplsrc = jhash2(ct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u3.ip6, 4, q->perturbation);
+   info.ctrepldst = jhash2(ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3.ip6, 4, q->perturbation);
+   }
+
+   }
+#endif

I'd rip out the IPv6 conntrack stuff as I'm much more concerned by
handling IPv4 NAT.  And I'm not sure how to get it into cake's host
handling yet but

I can feel an experiment and hackery coming on later today :-)

Am overjoyed!
___
Cake mailing list
c...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake



Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demandfor better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


While the 802.11 ack doesn't need to do collision avoidance, it does need
to wait a SIFS, send a PHY header and its typically transmitted at lower
PHY rate.   My estimate is 40 us for that overhead.  So yes, one would have
to get rid of that too, e.g. assume a transmit without a collision
succeeded - hopefully negating the need for the 802.11 ack.


don't forget that while there is the 802.11 ack, there is also the TCP ack that 
will show up later as well.



(It does seem the wired engineers have it much easier per the point/point,
full duplex and wave guides.)


yep. even wireless is far easier when you can do point-to-point with highly 
directional antennas. Even if you don't do full duplex as well.


It's the mobility and unpredictability of the stations that makes things hard. 
The fact that Wifi works as well as it does is impressive, given how much 
things have changed since it was designed, and the fact that backwards 
compatibility has been maintained.


David Lang


Bob

On Mon, Jun 27, 2016 at 2:09 PM, David Lang <da...@lang.hm> wrote:


On Mon, 27 Jun 2016, Bob McMahon wrote:

packet size is smallest udp payload per a socket write() which in turn

drives the smallest packet supported by "the wire."

Here is a back of the envelope calculation giving ~100 microseconds per BE
access.

# Overhead estimates (slot time is 9 us):
# o DIFS 50 us or *AIFS (3 * 9 us) = 27 us
# o *Backoff Slot * CWmin,  9 us * rand[0,xf] (avg) = 7 * 9=63 us
# o 5G 20 us
# o Multimode header 20 us
# o PLCP (symbols) 2 * 4 us = 8 us
# o *SIFS 16 us
# o ACK 40 us



isn't the ack a separate transmission by the other end of the connection?
(subject to all the same overhead)

#

# Even if there is no collision and the CW stays at the aCWmin, the
average
# backoff time incurred by CSMA/CA is aDIFS + aCWmin/2 * aSlotTime = 16 µs
# +(2+7.5)*9 µs = 101.5 µs for OFDM PHY, while the data rate with OFDM PHY
# can reach 600 Mbps in 802.11n, leading to a transmission time of 20 µs
# for a 1500 byte packet.



well, are you talking a 64 byte packet or a 1500 byte packet?

But this is a good example of why good aggregation is desirable. It
doesn't have
to add a lot of latency. you could send 6x as much data in 2x the time by
sending 9K per transmission instead of 1.5K per transmission (+100us/7.5K)

if the aggregation is done lazily (send whatever's pending, don't wait for
more data if you have an available transmit slot), this can be done with
virtually no impact on latency, you just have to set a reasonable maximum,
and adjust it based on your transmission rate.

The problem is that right now things don't set a reasonable max, and they
do greedy aggregation (wait until you have a lot of data to send before you
send anything)

All devices in a BSSID would have to agree that the second radio is to be

used for BSSID "carrier state" information and all energy will be sourced
by the AP serving that BSSID.  (A guess is doing this wouldn't improve the
100 us by enough to justify the cost and that a new MAC protocol is
required.  Just curious to what such a protocol and phy subsystem would
look like assuming collision avoidance could be replaced with collision
detect.)



if the second radio is on a separate band, you have the problem that
propagation
isn't going to be the same, so it's very possible to be able to talk to
the AP
on the 'normal' channel, but not on the 'coordination' channel.

I'm also not sure what good it would do, once a transmission has been
stepped
on, it will need to be re-sent (I guess you would be able to re-send
immediately)


David Lang

Bob




On Mon, Jun 27, 2016 at 1:09 PM, David Lang <da...@lang.hm> wrote:

On Mon, 27 Jun 2016, Bob McMahon wrote:


The ~10K is coming from empirical measurements where all aggregation


technologies are disabled, i.e. only one small IP packet per medium
arbitration/access and where there is only one transmitter and one
receiver.  900Mb/sec is typically a peak-average throughput measurement
where max (or near max) aggregation occurs, amortizing the access
overhead
across multiple packets.



so 10K is minimum size packets being transmitted? Or around 200
transmissions/sec (plus 200 ack transmissions/sec)?

Yes, devices can be hidden from each other but not from the AP (hence the


use of RTS/CTS per hidden node mitigation.) Isn't it the AP's view of
the
"carrier state" that matters (at least in infrastructure mode?)  If
that's
the case, what about a different band (and different radio) such that
the
strong signal carrying the data could be separated from the BSSID's
"carrier/energy state" signal?



how do you solve the interference problem on this other band/radio? When
you have other APs in the area operating, you will have the same problem
there.

David Lang


Bob



On Mon, Jun 27, 2016 at 12:40 PM, David Lang <da...@lang.hm> wrote:

On Mon, 27 Jun 2016, Bob McMahon wrote

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demandfor better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


packet size is smallest udp payload per a socket write() which in turn
drives the smallest packet supported by "the wire."

Here is a back of the envelope calculation giving ~100 microseconds per BE
access.

# Overhead estimates (slot time is 9 us):
# o DIFS 50 us or *AIFS (3 * 9 us) = 27 us
# o *Backoff Slot * CWmin,  9 us * rand[0,xf] (avg) = 7 * 9=63 us
# o 5G 20 us
# o Multimode header 20 us
# o PLCP (symbols) 2 * 4 us = 8 us
# o *SIFS 16 us
# o ACK 40 us


isn't the ack a separate transmission by the other end of the connection? 
(subject to all the same overhead)



#
# Even if there is no collision and the CW stays at the aCWmin, the average
# backoff time incurred by CSMA/CA is aDIFS + aCWmin/2 * aSlotTime = 16 µs
# +(2+7.5)*9 µs = 101.5 µs for OFDM PHY, while the data rate with OFDM PHY
# can reach 600 Mbps in 802.11n, leading to a transmission time of 20 µs
# for a 1500 byte packet.


well, are you talking a 64 byte packet or a 1500 byte packet?

But this is a good example of why good aggregation is desirable. It doesn't have
to add a lot of latency. you could send 6x as much data in 2x the time by
sending 9K per transmission instead of 1.5K per transmission (+100us/7.5K)

if the aggregation is done lazily (send whatever's pending, don't wait for more 
data if you have an available transmit slot), this can be done with virtually no 
impact on latency, you just have to set a reasonable maximum, and adjust it 
based on your transmission rate.


The problem is that right now things don't set a reasonable max, and they do 
greedy aggregation (wait until you have a lot of data to send before you send 
anything)



All devices in a BSSID would have to agree that the second radio is to be
used for BSSID "carrier state" information and all energy will be sourced
by the AP serving that BSSID.  (A guess is doing this wouldn't improve the
100 us by enough to justify the cost and that a new MAC protocol is
required.  Just curious to what such a protocol and phy subsystem would
look like assuming collision avoidance could be replaced with collision
detect.)


if the second radio is on a separate band, you have the problem that propagation
isn't going to be the same, so it's very possible to be able to talk to the AP
on the 'normal' channel, but not on the 'coordination' channel.

I'm also not sure what good it would do, once a transmission has been stepped
on, it will need to be re-sent (I guess you would be able to re-send immediately)

David Lang


Bob



On Mon, Jun 27, 2016 at 1:09 PM, David Lang <da...@lang.hm> wrote:


On Mon, 27 Jun 2016, Bob McMahon wrote:

The ~10K is coming from empirical measurements where all aggregation

technologies are disabled, i.e. only one small IP packet per medium
arbitration/access and where there is only one transmitter and one
receiver.  900Mb/sec is typically a peak-average throughput measurement
where max (or near max) aggregation occurs, amortizing the access overhead
across multiple packets.



so 10K is minimum size packets being transmitted? Or around 200
transmissions/sec (plus 200 ack transmissions/sec)?

Yes, devices can be hidden from each other but not from the AP (hence the

use of RTS/CTS per hidden node mitigation.) Isn't it the AP's view of the
"carrier state" that matters (at least in infrastructure mode?)  If that's
the case, what about a different band (and different radio) such that the
strong signal carrying the data could be separated from the BSSID's
"carrier/energy state" signal?



how do you solve the interference problem on this other band/radio? When
you have other APs in the area operating, you will have the same problem
there.

David Lang


Bob


On Mon, Jun 27, 2016 at 12:40 PM, David Lang <da...@lang.hm> wrote:

On Mon, 27 Jun 2016, Bob McMahon wrote:


Hi All,



This is a very interesting thread - thanks to all for taking the time to
respond.   (Personally, I now have a better understanding of the
difficulties
associated with a PHY subsystem that supports a wide 1GHz.)

Not to derail the current discussion, but I am curious to ideas on
addressing the overhead associated with media access per collision
avoidance.  This overhead seems to be limiting transmits to about 10K
per
second (even when a link has no competition for access.)



I'm not sure where you're getting 10K/second from. We do need to limit
the
amount of data transmitted in one session to give other stations a chance
to talk, but if the AP replies immediately to ack the traffic, and the
airwaves are idle, you can transmit again pretty quickly.

people using -ac equipment with a single station are getting 900Mb/sec
today.

  Is there a way,


maybe using another dedicated radio, to achieve near instantaneous
collision detect (where the CD is driven by the receiver state) such
that
mobile devices can sample RF energy to get theses states and state
changes
much more quickly?



Thi

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demandfor better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


The ~10K is coming from empirical measurements where all aggregation
technologies are disabled, i.e. only one small IP packet per medium
arbitration/access and where there is only one transmitter and one
receiver.  900Mb/sec is typically a peak-average throughput measurement
where max (or near max) aggregation occurs, amortizing the access overhead
across multiple packets.


so 10K is minimum size packets being transmitted? Or around 200 transmissions/sec 
(plus 200 ack transmissions/sec)?
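For a rough sanity check on where a per-access ceiling like that comes from, here is a back-of-the-envelope airtime budget for one small, unaggregated frame. All the numbers below are nominal 802.11n/5 GHz values assumed for illustration; they are not measurements from this thread:

```python
# Rough per-access airtime budget for a single small 802.11 frame.
# Every constant here is a nominal assumption, not a measured value.
DIFS_US = 34.0           # DCF interframe space
SLOT_US = 9.0
AVG_BACKOFF_SLOTS = 7.5  # mean of a uniform backoff over CWmin=15
PREAMBLE_US = 40.0       # PHY preamble + PLCP header (slow-rate portion)
SIFS_US = 16.0
ACK_US = 44.0            # legacy-rate ACK frame

payload_bits = 100 * 8   # one small IP packet
phy_rate_mbps = 65.0     # single-stream high-MCS data rate
payload_us = payload_bits / phy_rate_mbps

per_access_us = (DIFS_US + AVG_BACKOFF_SLOTS * SLOT_US +
                 PREAMBLE_US + payload_us + SIFS_US + ACK_US)
accesses_per_sec = 1e6 / per_access_us
print(f"{per_access_us:.0f} us per access -> {accesses_per_sec:.0f} frames/sec")
```

With a couple hundred microseconds of fixed cost per channel access, a single unaggregated small frame caps out at a few thousand accesses per second, the same order of magnitude as the ~10K figure under discussion.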



Yes, devices can be hidden from each other but not from the AP (hence the
use of RTS/CTS per hidden node mitigation.) Isn't it the AP's view of the
"carrier state" that matters (at least in infrastructure mode?)  If that's
the case, what about a different band (and different radio) such that the
strong signal carrying the data could be separated from the BSSID's
"carrier/energy state" signal?


how do you solve the interference problem on this other band/radio? When you 
have other APs in the area operating, you will have the same problem there.


David Lang


Bob

On Mon, Jun 27, 2016 at 12:40 PM, David Lang <da...@lang.hm> wrote:


On Mon, 27 Jun 2016, Bob McMahon wrote:

Hi All,


This is a very interesting thread - thanks to all for taking the time to
respond.   (Personally, I now have a better understanding of the
difficulties associated with a PHY subsystem that supports a 1 GHz-wide band.)

Not to derail the current discussion, but I am curious about ideas on
addressing the overhead associated with media access and collision
avoidance.  This overhead seems to be limiting transmits to about 10K per
second (even when a link has no competition for access.)



I'm not sure where you're getting 10K/second from. We do need to limit the
amount of data transmitted in one session to give other stations a chance
to talk, but if the AP replies immediately to ack the traffic, and the
airwaves are idle, you can transmit again pretty quickly.

people using -ac equipment with a single station are getting 900Mb/sec
today.

  Is there a way,

maybe using another dedicated radio, to achieve near instantaneous
collision detect (where the CD is driven by the receiver state) such that
mobile devices can sample RF energy to get these states and state changes
much more quickly?



This gets back to the same problems (hidden transmitters, and the
simultaneous reception of wildly different signal strengths)

When you are sending, you will hear yourself as a VERY strong signal,
trying to hear if someone else is transmitting at the same time is almost
impossible (100 ft to 1 ft is 4 orders of magnitude, 1 ft to 1 inch is
another 2 orders of magnitude)

And it's very possible that the station that you are colliding with isn't
one you can hear at all.

Any AP is going to have a better antenna than any phone. (sometimes
several orders of magnitude better), so even if you were located at the
same place as the AP, the AP is going to hear signals that you don't.

Then consider the case where you and the other station are on opposite
sides of the AP at max range.

and then add cases where there is a wall between you and the other
station, but the AP can hear both of you.

David Lang




___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


Hi All,

This is a very interesting thread - thanks to all for taking the time to
respond.   (Personally, I now have a better understanding of the
difficulties associated with a PHY subsystem that supports a 1 GHz-wide band.)

Not to derail the current discussion, but I am curious about ideas on
addressing the overhead associated with media access and collision
avoidance.  This overhead seems to be limiting transmits to about 10K per
second (even when a link has no competition for access.)


I'm not sure where you're getting 10K/second from. We do need to limit the 
amount of data transmitted in one session to give other stations a chance to 
talk, but if the AP replies immediately to ack the traffic, and the airwaves are 
idle, you can transmit again pretty quickly.


people using -ac equipment with a single station are getting 900Mb/sec today.


  Is there a way,
maybe using another dedicated radio, to achieve near instantaneous
collision detect (where the CD is driven by the receiver state) such that
mobile devices can sample RF energy to get these states and state changes
much more quickly?


This gets back to the same problems (hidden transmitters, and the simultaneous 
reception of wildly different signal strengths)


When you are sending, you will hear yourself as a VERY strong signal, trying to 
hear if someone else is transmitting at the same time is almost impossible (100 
ft to 1 ft is 4 orders of magnitude, 1 ft to 1 inch is another 2 orders of 
magnitude)


And it's very possible that the station that you are colliding with isn't one 
you can hear at all.


Any AP is going to have a better antenna than any phone. (sometimes several 
orders of magnitude better), so even if you were located at the same place as 
the AP, the AP is going to hear signals that you don't.


Then consider the case where you and the other station are on opposite sides of 
the AP at max range.


and then add cases where there is a wall between you and the other station, but 
the AP can hear both of you.


David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Jason Abele wrote:


The reason you can not just add bits to the ADC is the thermal noise
floor: 
https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise#Noise_power_in_decibels

If you assume a maximum transmit power of ~20dBm (100mW) and a 160MHz
channel bandwidth (with a consequent thermal noise floor of -92 dBm),
the total possible dynamic range is ~112dB, if your receiver and
transmitter are coupled with no loss.  At ~6dB/bit in the ADC, anything
beyond 19bits is just quantizing noise and wasting power (which is
heat, which raises your local thermal noise floor, etc).  If your
channel bandwidth is 1GHz, the effective noise floor rises by another
~2bits, so ~17bits of dynamic range max, before accounting for path
loss and distortion.

Speaking of distortion, look at the intermod (IP3) or harmonic
distortion figures for those wideband ADCs sometime: if the signals of
interest are of widely varying amplitudes in narrower bandwidths, the
performance limit will usually be distortion from the strongest
signal, not the thermal noise floor.  This usually limits dynamic
range to less than 10 effective bits.

Also transmitters are usually only required to suppress their adjacent
channel noise to around -50dB below the transmit power, so a little
over 8bits of dynamic range before the ADC is quantizing an interferer
rather than the signal of interest.
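The noise-floor arithmetic above can be reproduced directly; the only assumed constant is the room-temperature thermal noise density of -174 dBm/Hz:

```python
import math

# Thermal noise floor at room temperature: -174 dBm/Hz plus 10*log10(BW).
def noise_floor_dbm(bw_hz):
    return -174 + 10 * math.log10(bw_hz)

tx_dbm = 20.0                      # ~100 mW maximum transmit power
nf_160 = noise_floor_dbm(160e6)    # ~ -92 dBm for a 160 MHz channel
dr_db = tx_dbm - nf_160            # ~112 dB total possible dynamic range
bits = dr_db / 6.02                # ~6 dB of range per ADC bit
print(f"160 MHz noise floor {nf_160:.1f} dBm, range {dr_db:.0f} dB ~ {bits:.1f} bits")
```

This lands on the same ~19-bit limit stated above: extra ADC bits past that point only digitize thermal noise.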


Thanks for the more detailed information.


I am surprised that 802.11 still uses the same spreading code for all
stations.  I am no expert on cellular CDMA deployments, but I think
they have been using different spreading codes for each station to
increase capacity and improve the ability to mathematically remove the
interference of other physically close stations for decades.


Cellular mostly works because they have hundreds/thousands of channels rather 
than tens.


As complex as the 802.11 MAC is becoming, I do not understand why an approach 
like MU-MIMO was chosen over negotiating a separate spreading code per 
station.


compatibility, and the fact that stations with different spreading algorithms 
still interfere with each other. Also, coordinating the 'right' spreading 
algorithm for each station with each AP (including ones with hidden SSIDs)
would be difficult.



My best guess is that it keeps the complexity (and therefore power) at
the AP rather than in the (increasingly mobile, power-constrained)
station.  Hopefully the rise of mesh / peer-to-peer networks in mobile
stations will apply the right engineering pressure to re-think the
idea of keeping all complexity in the AP.


Almost all the mesh work I see is using a mesh of APs, anything beyond that is 
wishful thinking.


Even mu-mimo requires some client support.

David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, moeller0 wrote:


Hi David,


On Jun 27, 2016, at 09:44 , David Lang <da...@lang.hm> wrote:

On Mon, 27 Jun 2016, Sebastian Moeller wrote:


On a wireless network, with 'normal' omnidirectional antennas, the signal drops 
off with the square of the distance. So if you want to service clients from 1 
ft to 100 ft away, your signal strength varies by 10,000 (4 orders of magnitude), 
this is before you include effects of shielding, bounces, bad antenna 
alignment, etc (which can add several more orders of magnitude of variation)

The receiver first normalizes the strongest part of the signal to a constant 
value, and then digitizes the result, (usually with a 12-14 bit AD converter). 
Since 10,000x is ~13 bits, the result of overlapping transmissions can be one signal 
at 14 bits, and another at barely 1 bit. This is why digital processing isn't able 
to receive multiple stations at the same time.


But if you add 10 bits to your AD converter you have basically solved this. Now, 
most likely this also needs to be of higher quality and of low internal noise, 
so probably expensive... Add to this the wide-band requirement of the 
sample-the-full-band approach and we are looking at a pricey AD converter. On the 
bright side, mass-producing that might lower the price for nice oscilloscopes...


well, TI only manufactures AD converters up to 16 bit at these speeds, so 24 bit 
converters are hardly something to just buy. They do make 24 and 32 bit ADCs, but 
only ones that could be used for signals <5MHz wide (and we are pushing to 160 
MHz wide channels on wifi)


	But David's idea was to sample the full 5GHz band simultaneously, so we 
would need something like a down-mixer and an ADC system with around 2GHz 
bandwidth (due to Nyquist), I believe multiplexing multiple slower ADCs as 
done in better oscilloscopes might work, but that will not help reduce the 
price nor solve the bit resolution question.


losing track of the Davids here :-)

it's not just the super high-speed, high precision ADCs needed, it's also the 
filters to block out the other stuff that you don't want.


If you want to filter a 1 GHz chunk of bandwidth, you need to try to filter out 
signals outside of that 1GHz range. The wider the range that you receive, the 
harder it is to end up with filters that block the stuff outside of it. A strong 
signal outside of the band that you are trying to receive, but that partially 
makes it through the filter is as harmful to your range as a strong signal in 
band.


also note my comment about walls/etc providing shielding that can add a few 
more orders of magnitude on the signals.


	Well, yes, but in the end the normalizing amplifier really can be 
considered a range adjustor that makes up for the ADC's lack of dynamic 
resolution. I would venture the guess that not having to normalize might allow 
speeding up the "wifi pre-amble", since there is one amplifier less to stabilize…


not really, you are still going to have to amplify the signal a LOT before you 
can process it at all, and legacy compatibility wouldn't let you trim the 
beginning of the signal anyway.


And then when you start being able to detect signals at that level, the first 
ones you are going to hit are bounces from your strongest signal off of all 
sorts of things.


	But that is independent of whether you sample the whole 5GHz range in one 
go or not? I would guess as long as the ADC/amplifier does not go into 
saturation both should perform similarly.


if you currently require 8 bits of clean data to handle the data rate (out of 14 
bits sampled) and you move to needing 16 bits of clean data to handle the 
improved data rate out of 24 bits sampled, you haven't gained much ability to 
handle secondary, weak signals


You will also find that noise and distortion in the legitimate strong signal 
is going to be at strengths close to the strength of the weak signal you are 
trying to hear.


	But if that noise and distortion appear in the weak signal's frequency 
band we have issues already today?


no, because we aren't trying to decode the weak signal at the same time the 
strong signal is there. We only try to decode the weak signal in the absence of 
the strong signal.


As I said, I see things getting better, but it’s going to be a very hard 
thing to do, and I'd expect to see reverse mu-mimo (similarly strong signals 
from several directions) long before the ability to detect wildly weaker 
signals.


You are probably right.



I also expect that as the ability to more accurately digitize the signal 
grows, we will first take advantage of it for higher speeds.


	Yes, but higher speed currently means mostly wider bands, and the full 
4-5GHz range is sort of the logical end-point ;).


not at all. There is nothing magical about round decimal numbers :-)

And there are other users nearby. As systems get able to handle faster signals, 
we will move up in frequency (say the 10GHz band where police radar guns 
operate)

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Sebastian Moeller wrote:

On a wireless network, with 'normal' omnidirectional antennas, the signal 
drops off with the square of the distance. So if you want to service clients 
from 1 ft to 100 ft away, your signal strength varies by 10,000 (4 orders of 
magnitude), this is before you include effects of shielding, bounces, bad 
antenna alignment, etc (which can add several more orders of magnitude of 
variation)


The receiver first normalizes the strongest part of the signal to a constant 
value, and then digitizes the result, (usually with a 12-14 bit AD 
converter). Since 10,000x is ~13 bits, the result of overlapping transmissions 
can be one signal at 14 bits, and another at barely 1 bit. This is why digital 
processing isn't able to receive multiple stations at the same time.


 But if you add 10 bits to your AD converter you have basically solved this. 
Now, most likely this also needs to be of higher quality and of low internal 
noise, so probably expensive... Add to this the wide-band requirement of the 
sample-the-full-band approach and we are looking at a pricey AD converter. On 
the bright side, mass-producing that might lower the price for nice 
oscilloscopes...


well, TI only manufactures AD converters up to 16 bit at these speeds, so 24 bit 
converters are hardly something to just buy. They do make 24 and 32 bit ADCs, 
but only ones that could be used for signals <5MHz wide (and we are pushing to 
160 MHz wide channels on wifi)


also note my comment about walls/etc providing shielding that can add a few more 
orders of magnitude on the signals.


And then when you start being able to detect signals at that level, the first 
ones you are going to hit are bounces from your strongest signal off of all 
sorts of things.


You will also find that noise and distortion in the legitimate strong signal is 
going to be at strengths close to the strength of the weak signal you are trying 
to hear.


As I said, I see things getting better, but it's going to be a very hard thing 
to do, and I'd expect to see reverse mu-mimo (similarly strong signals from 
several directions) long before the ability to detect wildly weaker signals.


I also expect that as the ability to more accurately digitize the signal grows, 
we will first take advantage of it for higher speeds.


David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-26 Thread David Lang

hitting your points almost in reverse order


3) a) Variable rates: arguably a huge engineering design mistake for
multiple reasons (though understandably done this way.)


We know how to send data faster now than we did 5 years ago, which was faster 
than we knew how to do 5 years before that, etc. We also will know how to send 
data faster 5 years from now.


Unless you plan to ignore all future improvements, you cannot stick with just a 
single data rate (and if you do, I'm sure that a competitor of yours who doesn't 
ignore future improvements will eat your lunch in a couple of years)


This completely ignores the fact that some modulation schemes require a better 
signal to noise ratio than others, so unless you want to either give up 
distance, or give up speed, you cannot pick the "one and true" rate to use.
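The underlying constraint is Shannon's capacity bound: the rate achievable in a fixed bandwidth depends on the signal-to-noise ratio, so near and far clients simply cannot share one optimal modulation rate. A minimal illustration, assuming a 20 MHz channel and ideal coding:

```python
import math

# Shannon capacity: C = B * log2(1 + SNR).  The same channel supports
# wildly different rates depending on how clean the signal is.
def capacity_mbps(bw_mhz, snr_db):
    snr = 10 ** (snr_db / 10)          # convert dB to a linear power ratio
    return bw_mhz * math.log2(1 + snr)

for snr_db in (5, 15, 25, 35):
    print(f"SNR {snr_db:2d} dB -> <= {capacity_mbps(20, snr_db):6.1f} Mb/s in 20 MHz")
```

A far station at 5 dB SNR tops out around a fifth of what a near station at 35 dB can carry, which is exactly why rate adaptation exists.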


This is why wifi defaults to slowing down when it has trouble getting through 
(if the problem is noise, this is the right thing to do; when the problem is 
hidden transmitters, it's exactly the wrong thing to do). At the time wifi was 
designed it was extremely expensive, so deployments were expected to be few, so 
the biggest problem expected was noise.



   b) Aggregation:  Artifact of media access latency that makes the
transport network incredibly difficult for things like TCP.  Possibly
another eng. design flaw.


it's not that simple. It's not the latency, it's the per-transmission overhead
that's the difference. This is the same reason why wired networks sometimes use
jumbo frames, the per-transmission overhead is the same, so by packing more data
into a single transmission, you get more efficient use of the media, so faster
effective speeds.
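A quick model shows how amortizing a fixed per-access overhead over more frames raises effective throughput. The overhead and PHY rate below are illustrative assumptions, not numbers from the thread:

```python
# Effective throughput with N frames aggregated into one channel access,
# assuming a fixed per-access cost (contention + preamble + ack exchange).
def effective_mbps(n_frames, frame_bytes=1500, phy_rate_mbps=600.0,
                   overhead_us=200.0):
    payload_us = n_frames * frame_bytes * 8 / phy_rate_mbps
    total_us = overhead_us + payload_us
    return n_frames * frame_bytes * 8 / total_us

for n in (1, 8, 64):
    print(f"{n:3d} frames/access -> {effective_mbps(n):6.1f} Mb/s effective")
```

One frame per access wastes most of the airtime on overhead; aggregating tens of frames recovers most of the nominal PHY rate, which is the same economics as jumbo frames on wired networks.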


2) Hidden transmitters.  Does it really matter?   Superposition applies
hidden or not so will fixing the "receiver confusion" get rid of this issue?


If transmitters aren't hidden from each other, they can wait until the airwaves
are quiet and then transmit. This works very well in practice. But eventually,
any successful network is going to grow until it exceeds the size at which all
stations can hear each other.


1) Drastic variation in signal strength.  Reality of physics?  Seems yes.
Solvable by math?  I suspect so but could be wrong.  Solvable by
engineering now? Not sure.  In ten years?  Not sure.


Improvements in 10 years, yes. Solutions, no.

you are constrained by physics. you can only detect the signal that arrives at
your antennas. If you have two transmitters next to each other transmitting at
the same time, they are going to interfere with each other in ways that nothing
is going to be able to solve.

But similarly to the way that mu-mimo is able to use multiple antennas sending
different signals to create useful interference patterns so that multiple
stations that are 'far enough' apart can each receive a useable signal. Further
improvements in signal processing and Analog to Digital converters could make it
so that stations that are far enough apart in angle, but close enough in power
could be deciphered at the same time. I'd give good odds that data rates below
the single-station peak will be required to make this practical.

But the problem of being able to hear a whisper from across the room at the same
time that someone is yelling in your ear is such a problem that I don't believe
that it is ever going to be 'solved' as a general case. The noise and distortion 
of the strong signal can be larger than the weak signal. And the strength of a 
bounce of the strong signal can be larger than the weak signal.


getting to the point where several signals of similar strength could be handled 
at the same time would be a big help.



I ask myself a question, "What happened in 2000 that wi-fi became viable
such that atheros and others were able to form new companies?   Did a
mathematician discover some new math or did a physicist  find a better way
to explain energy as a means for moving information?"


No, what happened was Moore's law came to help. It took equipment that was 
selling at ~$1000/station and cut its price to $100/station (and today better 
equipment is available at <$10/station).


I remember putting a $750 deposit down (the purchase price of the card) at a 
conference to borrow a 802.11b (1-11Mb/sec) pcmcia card for my laptop. At around 
the same time, I spent ~$500 to equip a small office with a proprietary AP and 
two 1Mb pcmcia cards (and considered it a bargain). Today you can buy USB 802.11n 
dongles for <$10 and for ~$100 you can get a 802.11ac device that (under the 
right conditions) can top 1Gb/sec.


The APs were several thousand dollars each. Today a $200-300 AP is near the high 
end of the consumer devices, and you can get them for ~$50 if you push (or ~$25 
if you are willing to buy used)


It didn't take higher speeds (AC, N, or even G) to make wifi popular, it just 
required that the equipment come down enough in price.


David Lang


On Sun, 26 Jun 2016, Bob McMahon wrote:

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-26 Thread David Lang
I don't think anyone is trying to do simultaneous receive of different stations. 
That is an incredibly difficult thing to do right.


MU-MIMO is aimed at having the AP transmit to multiple stations at the same 
time. For the typical browser/streaming use, this traffic is FAR larger than the 
traffic from the stations to the AP. As such, it is worth focusing on optimizing 
this direction.


While an ideal network may resemble a wired network without guides, I don't 
think it's a good idea to think about wifi networks that way.


The reality is that no matter how good you get, a wireless network is going to 
have lots of things that are just not going to happen with wired networks.


1. drastic variations in signal strength.

  On a wired network with a shared bus, the signal strength from all stations 
on the network is going to be very close to the same (a difference of 2x would 
be extreme)


  On a wireless network, with 'normal' omnidirectional antennas, the signal drops 
off with the square of the distance. So if you want to service clients from 1 ft 
to 100 ft away, your signal strength varies by 10,000 (4 orders of magnitude), 
this is before you include effects of shielding, bounces, bad antenna alignment, 
etc (which can add several more orders of magnitude of variation)


  The receiver first normalizes the strongest part of the signal to a constant 
value, and then digitizes the result, (usually with a 12-14 bit AD converter). 
Since 10,000x is ~13 bits, the result of overlapping transmissions can be one 
signal at 14 bits, and another at barely 1 bit. This is why digital processing isn't 
able to receive multiple stations at the same time.
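As a numeric sketch of that dynamic-range squeeze (free-space inverse-square only; shielding, bounces, and bad alignment add tens of dB more):

```python
import math

# Received power falls with the square of distance, so clients at 1 ft
# and 100 ft (same transmit power, free space) differ by (100/1)^2.
power_ratio = (100 / 1) ** 2               # 10^4
spread_db = 10 * math.log10(power_ratio)   # 40 dB spread
adc_bits = 14
adc_range_db = adc_bits * 6.02             # ~84 dB of ADC dynamic range
weak_bits = (adc_range_db - spread_db) / 6.02  # bits left for the weak signal
print(f"spread {spread_db:.0f} dB; weak signal keeps ~{weak_bits:.1f} of {adc_bits} bits")
```

Even in this best case the far station is left with roughly half the converter's bits once the near station is normalized to full scale; real-world shielding can push the weak signal below the quantization floor entirely.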


2. 'hidden transmitters'

  On modern wired networks, every link has exactly two stations on it, and both 
can transmit at the same time.


  On wireless networks, it's drastically different. You have an unknown number 
of stations (which can come and go without notice).


  Not every station can hear every other station. This means that they can't 
avoid colliding with each other. In theory you can work around this by having 
some central system coordinate all the clients (either by them being told when 
to transmit, or by being given a schedule and having very precise clocks). But 
in practice the central system doesn't know when the clients have something to 
say and so in practice this doesn't work as well (except for special cases like 
voice where there is a constant amount of data to transmit)


3. variable transmit rates and aggregation

  Depending on how strong the signal is between two stations, you have different 
limits to how fast you can transmit data. There are many different standard 
modulations that you can use, but if you use one that's too fast for the signal 
conditions, the receiver isn't going to be able to decode it. If you use one 
that's too slow, you increase the probability that another station will step on 
your signal, scrambling it as far as the receiver is concerned. We now have 
stations on the network that can vary in speed by 100x, and are nearing 1000x 
(1Mb/sec to 1Gb/sec)


  Because there is so much variation in transmit rates, and older stations will 
not be able to understand the newest rates, each transmission starts off with 
some data being transmitted at the slowest available rate, telling any stations 
that are listening that there is data being transmitted for X amount of time, 
even if they can't tell what's going on as the data is being transmitted.


  The combination of this header being transmitted inefficiently, and the fact 
that stations are waiting for a clear window to transmit, means that when you do 
get a chance to transmit, you should send more than one packet at a time. This 
is something Linux is currently not doing well, qdiscs tend to round-robin 
packets without regard to where they are headed. The current work being done 
here with the queues is improving both throughput and latency by fixing this 
problem.



You really need to think differently when dealing with wireless networks. The 
early wifi drivers tried to make them look just like a wired network, and we 
have found that we just needed too much other stuff for that mindset to be 
successful.


The Analog/Radio side of things really is important, and can't just be 
abstracted away.


David Lang

On Sun, 26 Jun 2016, Bob McMahon wrote:


Is there a specific goal in mind?  This seems an AP tx centric proposal,
though I may not be fully understanding it.  I'm also curious as why not
scale in spatial domain vs the frequency domain, i.e. AP and STAs can also
scale using MiMO.  Why not just do that? So many phones today are 1x1, some
2x2 and few 3x3.   Yet APs are moving to 4x4 and I think the standard
supports 8x8.  (I'm not sure the marginal transistor count increase per
each approach.)  On the AP tx side, MuMIMO is also there which I think is
similar to the DAC proposal.

I'm far from a PHY & DSP expert, but I think the simultaneous AP 

Re: [Cerowrt-devel] Why we are discussing ARM [was: Cross-compiling to armhf]

2016-06-24 Thread David Lang

On Fri, 24 Jun 2016, Eric Johansson wrote:


On 6/24/2016 7:04 AM, Juliusz Chroboczek wrote:

Agreed. Who's going to save us?


 * kickstart our own,
 * license a design and enhance it to our needs (rpi 4??)
 * work a deal with mikrotek to freeup docs on one of their boards so
   we can replace router os with one of our own. see
   
http://www.balticnetworks.com/mikrotik-hap-ac-lite-dual-band-indoor-access-point-built-in-antennas.html.


There are a number of mikrotik boards that are listed as having OpenWRT support, 
including one with 9 gig-e ports and 3 miniPCI slots that can hold radios


http://routerboard.com/RB493G
http://routerboard.com/RB493AH
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread David Lang

well, with the kickstarter, I think they are selling a bill of goods.

Just using the DFS channels and aggregating them as supported by N and AC 
standards would do wonders (as long as others near you don't do the same)


David Lang

On Thu, 23 Jun 2016, Bob McMahon wrote:


Date: Thu, 23 Jun 2016 20:01:22 -0700
From: Bob McMahon <bob.mcma...@broadcom.com>
To: David Lang <da...@lang.hm>
Cc: dpr...@reed.com, make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market demand
for better wifi

Thanks for the clarification.   Though now I'm confused about how all the
channels would be used simultaneously with an AP only solution (which is my
understanding of the kickstarter campaign.)

Bob

On Thu, Jun 23, 2016 at 7:14 PM, David Lang <da...@lang.hm> wrote:


I think he is meaning when one unit is talking to one AP the signal levels
across multiple channels will be similar. Which is probably fairly true.


David Lang

On Thu, 23 Jun 2016, Bob McMahon wrote:

Curious, where does the "in a LAN setup, the variability in [receive]

signal strength is likely small enough" assertion come from?   Any specific
power numbers here? We test with many combinations of "signal strength
variability" (e.g. deltas range from 0 dBm -  50 dBm) and per different
channel conditions.  This includes power variability within the spatial
streams' MiMO transmission.   It would be helpful to have some physics
combined with engineering to produce some pragmatic limits to this.

Also, mobile devices have a goal of reducing power in order to be efficient
with their battery (vs a goal to balance power such that an AP can
receive simultaneously.)  Power per bit usually trumps most other design
goals.  The market for battery powered wi-fi devices drives a
semiconductor mfg's revenue, so my information comes with that bias.

Bob

On Thu, Jun 23, 2016 at 1:48 PM, <dpr...@reed.com> wrote:

The actual issues of transmitting on multiple channels at the same time

are quite minor if you do the work in the digital domain (pre-DAC).  You
just need a higher sampling rate in the DAC and add the two signals
together (and use a wideband filter that covers all the channels).  No RF
problem.

Receiving multiple transmissions in different channels is pretty much the
same problem - just digitize (ADC) a wider bandwidth and separate in the
digital domain.  the only real issue on receive is equalization - if you
receive two different signals at different receive signal strengths, the
lower strength signal won't get as much dynamic range in its samples.

But in a LAN setup, the variability in signal strength is likely small
enough that you can cover that with more ADC bits (or have the MAC protocol
manage the station transmit power so that signals received at the AP are
nearly the same power).

Equalization at transmit works very well when there is a central AP (as in
cellular or normal WiFi systems).
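The "combine in the digital domain" idea can be sketched with a toy numpy example: two narrowband channels are summed into one wideband sample stream (as would happen pre-DAC), then pulled apart again by frequency-domain masking (as a receiver would post-ADC). The sample rate and tone placement are arbitrary illustration values, and quantization (the limited-ADC-bits problem discussed in the thread) is not modeled:

```python
import numpy as np

# Two single-tone "channels" combined into one wideband stream, then
# separated again by masking in the frequency domain.
n, fs = 4096, 100e6
t = np.arange(n) / fs
f_a = fs * 512 / n                     # channel A center: 12.5 MHz (exact FFT bin)
f_b = fs * 1024 / n                    # channel B center: 25 MHz (exact FFT bin)
ch_a = np.exp(2j * np.pi * f_a * t)
ch_b = np.exp(2j * np.pi * f_b * t)
wideband = ch_a + 0.1 * ch_b           # B arrives 20 dB weaker than A

spec = np.fft.fft(wideband)
freqs = np.fft.fftfreq(n, 1 / fs)
mask_b = np.abs(freqs - f_b) < 5e6     # keep only channel B's band
recovered_b = np.fft.ifft(np.where(mask_b, spec, 0))

# The weak channel separates cleanly because the strong one is out of band.
err = np.max(np.abs(recovered_b - 0.1 * ch_b))
print(f"max reconstruction error: {err:.2e}")
```

This illustrates why separation across channels is "no RF problem" in principle: the hard part, as the rest of the thread argues, is the ADC dynamic range when the co-digitized signals differ wildly in strength.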



On Thursday, June 23, 2016 4:28pm, "Bob McMahon" <
bob.mcma...@broadcom.com>
said:

___

Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast
An AP per room/area, reducing the tx power (beacon range) has been my
approach and has scaled very well.   It does require some wires to each AP,
but I find that paying an electrician to run some quality wiring to things
that are to remain stationary has been well worth the cost.

just my $0.02,
Bob

On Thu, Jun 23, 2016 at 1:10 PM, David Lang <da...@lang.hm> wrote:

Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks
would be a huge improvement, not many people are using them (yet), and the
wide channels let you get a lot of data out at once. If everything is
within a good range of the AP, this would work pretty well. If you end up
needing multiple APs, or you have many stations, I expect that you will be
better off with more APs at lower power, each using different channels.


David Lang




On Thu, 23 Jun 2016, Bob McMahon wrote:

Date: Thu, 23 Jun 2016 12:55:19 -0700


From: Bob McMahon <bob.mcma...@broadcom.com>
To: Dave Taht <dave.t...@gmail.com>
Cc: make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market
demand
for better wifi


hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
per RF issues.   Even if that is somehow resolved, to increase throughput
usually requires some form of channel bonding, i.e. needed on both sides,
and brings in issues with preserving frame ordering.  If this is just
channel hopping, that needs coordination between both
Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread David Lang
I think he means that when one unit is talking to one AP, the signal levels 
across multiple channels will be similar. Which is probably fairly true.


David Lang

On Thu, 23 Jun 2016, Bob McMahon wrote:


Curious, where does the "in a LAN setup, the variability in [receive]
signal strength is likely small enough" assertion come from?   Any specific
power numbers here? We test with many combinations of "signal strength
variability" (e.g. deltas range from 0 dBm to 50 dBm) and per different
channel conditions.  This includes power variability within the spatial
streams' MiMO transmission.   It would be helpful to have some physics
combined with engineering to produce some pragmatic limits to this.

Also, mobile devices have a goal of reducing power in order to be efficient
with their battery (vs a goal to balance power such that an AP can
receive simultaneously).  Power per bit usually trumps most other design
goals.  The market for battery-powered wi-fi devices drives a
semiconductor mfg's revenue, so my information comes with that bias.

Bob

On Thu, Jun 23, 2016 at 1:48 PM, <dpr...@reed.com> wrote:


The actual issues of transmitting on multiple channels at the same time
are quite minor if you do the work in the digital domain (pre-DAC).  You
just need a higher sampling rate in the DAC and add the two signals
together (and use a wideband filter that covers all the channels).  No RF
problem.
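The pre-DAC combining described above can be sketched numerically. This is a hypothetical illustration, not any chipset's actual pipeline: the sample rate, channel offsets, and test tone are made-up parameters. Two baseband signals are digitally shifted to different channel-center offsets and summed, so a single wideband DAC clocked at the higher rate can emit both channels at once.

```python
import numpy as np

def combine_channels(basebands, offsets_hz, fs):
    """Digitally shift each baseband signal to its channel offset and sum.

    basebands  : list of complex baseband sample arrays, all at rate fs
    offsets_hz : per-signal center-frequency offset within the wideband output
    fs         : the (wide) DAC sample rate covering all channels
    """
    n = len(basebands[0])
    t = np.arange(n) / fs
    out = np.zeros(n, dtype=complex)
    for sig, f in zip(basebands, offsets_hz):
        out += sig * np.exp(2j * np.pi * f * t)  # frequency shift, then add
    return out

fs = 100e6   # hypothetical wideband sample rate, covers both channels
n = 1000     # 100 kHz FFT bins, chosen so the test tones land on exact bins
tone = np.exp(2j * np.pi * 1e6 * np.arange(n) / fs)   # stand-in 1 MHz "signal"
wide = combine_channels([tone, tone], [-20e6, 20e6], fs)

# The wideband spectrum now carries energy at both channel positions:
# the 1 MHz tone appears at -19 MHz and at +21 MHz.
spec = np.abs(np.fft.fft(wide))
peaks = np.sort(np.fft.fftfreq(n, 1 / fs)[np.argsort(spec)[-2:]])
```

In a real transmitter the summed signal would then pass through the wideband reconstruction filter the quote mentions; the sketch only shows the digital-domain part.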

Receiving multiple transmissions in different channels is pretty much the
same problem - just digitize (ADC) a wider bandwidth and separate in the
digital domain.  The only real issue on receive is equalization - if you
receive two different signals at different receive signal strengths, the
lower-strength signal won't get as much dynamic range in its samples.

But in a LAN setup, the variability in signal strength is likely small
enough that you can cover that with more ADC bits (or have the MAC protocol
manage the station transmit power so that signals received at the AP are
nearly the same power).
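The "more ADC bits" trade-off above is easy to quantify with the usual rule of thumb that one ADC bit buys about 6.02 dB of dynamic range. The helper names below are illustrative assumptions, but the arithmetic shows why a small LAN power spread is cheap while a large one is not.

```python
def bits_lost(delta_db):
    """Approximate ADC resolution (in bits) consumed by a receive-power
    spread of delta_db between the strongest and weakest co-captured
    signal.  Rule of thumb: one ADC bit ~ 6.02 dB of dynamic range."""
    return delta_db / 6.02

def bits_left(adc_bits, delta_db):
    """Effective bits of sample resolution remaining for the weaker signal."""
    return adc_bits - bits_lost(delta_db)

# A modest 10 dB spread barely dents a 12-bit ADC (~10.3 bits remain),
# but a 50 dB spread consumes ~8.3 bits, leaving under 4 for the weak signal.
```

This is why the argument hinges on the LAN spread actually being small, or on the MAC actively balancing station transmit power.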

Equalization at transmit works very well when there is a central AP (as in
cellular or normal WiFi systems).



On Thursday, June 23, 2016 4:28pm, "Bob McMahon" <bob.mcma...@broadcom.com>
said:


___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast
An AP per room/area, reducing the tx power (beacon range) has been my
approach and has scaled very well.   It does require some wires to each AP
but I find that paying an electrician to run some quality wiring to things
that are to remain stationary has been well worth the cost.

just my $0.02,
Bob

On Thu, Jun 23, 2016 at 1:10 PM, David Lang <da...@lang.hm> wrote:


Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks
would be a huge improvement, not many people are using them (yet), and the
wide channels let you get a lot of data out at once. If everything is
within a good range of the AP, this would work pretty well. If you end up
needing multiple APs, or you have many stations, I expect that you will be
better off with more APs at lower power, each using different channels.

David Lang




On Thu, 23 Jun 2016, Bob McMahon wrote:

Date: Thu, 23 Jun 2016 12:55:19 -0700

From: Bob McMahon <bob.mcma...@broadcom.com>
To: Dave Taht <dave.t...@gmail.com>
Cc: make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market
demand
for better wifi


hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
per RF issues.   Even if that is somehow resolved, to increase throughput
usually requires some form of channel bonding, i.e. needed on both sides,
and brings in issues with preserving frame ordering.  If this is just
channel hopping, that needs coordination between both sides (and isn't
simultaneous, possibly costing more than any potential gain.)   An AP only
solution can use channel switch announcements (CSA) but there is a cost to
those as well.

I guess I don't see any breakthrough here, and the marketing on the site
seems to indicate something beyond physics, at least the physics that I
understand.  Always willing to learn and be corrected if I'm
misunderstanding things.

Bob

On Wed, Jun 22, 2016 at 10:18 AM, Dave Taht <dave.t...@gmail.com>

wrote:


On Wed, Jun 22, 2016 at 10:03 AM, Dave Taht <dave.t...@gmail.com>

wrote:








https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi?ref=backerkit




"Portal is the first and only router specifically engineered to cut
through and avoid congestion, delivering consistent, high-performance
WiFi with greater coverage throughout your home.

Its proprietary spectrum turbocharger technology provides access to
300% more of 

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread David Lang

On Thu, 23 Jun 2016, dpr...@reed.com wrote:

The actual issues of transmitting on multiple channels at the same time are 
quite minor if you do the work in the digital domain (pre-DAC).  You just need 
a higher sampling rate in the DAC and add the two signals together (and use a 
wideband filter that covers all the channels).  No RF problem.


that works if you are using channels that are close together, and is how the 
current standard wide channels in N and AC work.


If you try to use channels that aren't adjacent, this is much harder to do.

Remember that the current adjacent channel use goes up to 160MHz wide, going 
wider than that starts getting hard.


Receiving multiple transmissions in different channels is pretty much the same 
problem - just digitize (ADC) a wider bandwidth and separate in the digital 
domain.  The only real issue on receive is equalization - if you receive two 
different signals at different receive signal strengths, the lower-strength 
signal won't get as much dynamic range in its samples.

But in a LAN setup, the variability in signal strength is likely small enough 
that you can cover that with more ADC bits (or have the MAC protocol manage 
the station transmit power so that signals received at the AP are nearly the 
same power).


Equalization at transmit works very well when there is a central AP (as in 
cellular or normal WiFi systems).


define 'normal WiFi system'

It's getting very common for even moderate size houses to need more than one AP 
to cover the entire house.


David Lang




On Thursday, June 23, 2016 4:28pm, "Bob McMahon" <bob.mcma...@broadcom.com> 
said:


___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast
An AP per room/area, reducing the tx power (beacon range) has been my
approach and has scaled very well.   It does require some wires to each AP
but I find that paying an electrician to run some quality wiring to things
that are to remain stationary has been well worth the cost.

just my $0.02,
Bob

On Thu, Jun 23, 2016 at 1:10 PM, David Lang <da...@lang.hm> wrote:


Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks
would be a huge improvement, not many people are using them (yet), and the
wide channels let you get a lot of data out at once. If everything is
within a good range of the AP, this would work pretty well. If you end up
needing multiple APs, or you have many stations, I expect that you will be
better off with more APs at lower power, each using different channels.

David Lang




On Thu, 23 Jun 2016, Bob McMahon wrote:

Date: Thu, 23 Jun 2016 12:55:19 -0700

From: Bob McMahon <bob.mcma...@broadcom.com>
To: Dave Taht <dave.t...@gmail.com>
Cc: make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market
demand
for better wifi


hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
per RF issues.   Even if that is somehow resolved, to increase throughput
usually requires some form of channel bonding, i.e. needed on both sides,
and brings in issues with preserving frame ordering.  If this is just
channel hopping, that needs coordination between both sides (and isn't
simultaneous, possibly costing more than any potential gain.)   An AP only
solution can use channel switch announcements (CSA) but there is a cost to
those as well.

I guess I don't see any breakthrough here, and the marketing on the site
seems to indicate something beyond physics, at least the physics that I
understand.  Always willing to learn and be corrected if I'm
misunderstanding things.

Bob

On Wed, Jun 22, 2016 at 10:18 AM, Dave Taht <dave.t...@gmail.com> wrote:

On Wed, Jun 22, 2016 at 10:03 AM, Dave Taht <dave.t...@gmail.com> wrote:






https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi?ref=backerkit



"Portal is the first and only router specifically engineered to cut
through and avoid congestion, delivering consistent, high-performance
WiFi with greater coverage throughout your home.

Its proprietary spectrum turbocharger technology provides access to
300% more of the radio airwaves than any other router, improving
performance by as much as 300x, and range and coverage by as much as
2x in crowded settings, such as city homes and multi-unit apartments"

It sounds like they are promising working DFS support.



It's not clear what chipset they are using (they are claiming wave2) -
but they are at least publicly claiming to be using openwrt. So I
threw in enough to order one for september, just so I could comment on
their kickstarter page. :)

I'd have loved to have got in earlier (early shipments are this month
apparently), but those were sold out.



https://www.kickstarter.com/project

Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread David Lang
When I run the wifi network for the Scale conference, I will put multiple APs in 
each room. This last year I tried for ~1 per 50-75 seats in theater format 
(~25-30 in classroom format where there are tables).


David Lang


On Thu, 23 Jun 2016, Bob McMahon wrote:


An AP per room/area, reducing the tx power (beacon range) has been my
approach and has scaled very well.   It does require some wires to each AP
but I find that paying an electrician to run some quality wiring to things
that are to remain stationary has been well worth the cost.

just my $0.02,
Bob

On Thu, Jun 23, 2016 at 1:10 PM, David Lang <da...@lang.hm> wrote:


Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks
would be a huge improvement, not many people are using them (yet), and the
wide channels let you get a lot of data out at once. If everything is
within a good range of the AP, this would work pretty well. If you end up
needing multiple APs, or you have many stations, I expect that you will be
better off with more APs at lower power, each using different channels.

David Lang




On Thu, 23 Jun 2016, Bob McMahon wrote:

Date: Thu, 23 Jun 2016 12:55:19 -0700

From: Bob McMahon <bob.mcma...@broadcom.com>
To: Dave Taht <dave.t...@gmail.com>
Cc: make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market
demand
for better wifi


hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
per RF issues.   Even if that is somehow resolved, to increase throughput
usually requires some form of channel bonding, i.e. needed on both sides,
and brings in issues with preserving frame ordering.  If this is just
channel hopping, that needs coordination between both sides (and isn't
simultaneous, possibly costing more than any potential gain.)   An AP only
solution can use channel switch announcements (CSA) but there is a cost to
those as well.

I guess I don't see any breakthrough here, and the marketing on the site
seems to indicate something beyond physics, at least the physics that I
understand.  Always willing to learn and be corrected if I'm
misunderstanding things.

Bob

On Wed, Jun 22, 2016 at 10:18 AM, Dave Taht <dave.t...@gmail.com> wrote:

On Wed, Jun 22, 2016 at 10:03 AM, Dave Taht <dave.t...@gmail.com> wrote:






https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi?ref=backerkit



"Portal is the first and only router specifically engineered to cut
through and avoid congestion, delivering consistent, high-performance
WiFi with greater coverage throughout your home.

Its proprietary spectrum turbocharger technology provides access to
300% more of the radio airwaves than any other router, improving
performance by as much as 300x, and range and coverage by as much as
2x in crowded settings, such as city homes and multi-unit apartments"

It sounds like they are promising working DFS support.



It's not clear what chipset they are using (they are claiming wave2) -
but they are at least publicly claiming to be using openwrt. So I
threw in enough to order one for september, just so I could comment on
their kickstarter page. :)

I'd have loved to have got in earlier (early shipments are this month
apparently), but those were sold out.



https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi/comments




--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org





--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast



___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast


___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-23 Thread David Lang
Well, just using the 5GHz DFS channels in 80MHz or 160 MHz wide chunks would be 
a huge improvement, not many people are using them (yet), and the wide channels 
let you get a lot of data out at once. If everything is within a good range of 
the AP, this would work pretty well. If you end up needing multiple APs, or you 
have many stations, I expect that you will be better off with more APs at lower 
power, each using different channels.


David Lang




On Thu, 23 Jun 2016, Bob McMahon wrote:


Date: Thu, 23 Jun 2016 12:55:19 -0700
From: Bob McMahon <bob.mcma...@broadcom.com>
To: Dave Taht <dave.t...@gmail.com>
Cc: make-wifi-f...@lists.bufferbloat.net,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Make-wifi-fast] more well funded attempts showing market demand
for better wifi

hmm, I'm skeptical.   To use multiple carriers simultaneously is difficult
per RF issues.   Even if that is somehow resolved, to increase throughput
usually requires some form of channel bonding, i.e. needed on both sides,
and brings in issues with preserving frame ordering.  If this is just
channel hopping, that needs coordination between both sides (and isn't
simultaneous, possibly costing more than any potential gain.)   An AP only
solution can use channel switch announcements (CSA) but there is a cost to
those as well.

I guess I don't see any breakthrough here, and the marketing on the site seems
to indicate something beyond physics, at least the physics that I
understand.  Always willing to learn and be corrected if I'm
misunderstanding things.

Bob

On Wed, Jun 22, 2016 at 10:18 AM, Dave Taht <dave.t...@gmail.com> wrote:


On Wed, Jun 22, 2016 at 10:03 AM, Dave Taht <dave.t...@gmail.com> wrote:



https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi?ref=backerkit


"Portal is the first and only router specifically engineered to cut
through and avoid congestion, delivering consistent, high-performance
WiFi with greater coverage throughout your home.

Its proprietary spectrum turbocharger technology provides access to
300% more of the radio airwaves than any other router, improving
performance by as much as 300x, and range and coverage by as much as
2x in crowded settings, such as city homes and multi-unit apartments"

It sounds like they are promising working DFS support.


It's not clear what chipset they are using (they are claiming wave2) -
but they are at least publicly claiming to be using openwrt. So I
threw in enough to order one for september, just so I could comment on
their kickstarter page. :)

I'd have loved to have got in earlier (early shipments are this month
apparently), but those were sold out.


https://www.kickstarter.com/projects/portalwifi/portal-turbocharged-wifi/comments




--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org




--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast

___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] anyone tried a wrtnode?

2016-06-21 Thread David Lang
Let me know how things go. I'd be interested in sponsoring (or at least helping 
to provide hardware for) this sort of work.


I'm most interested in the crs125 or similar series of switches, it looks like a 
good number of their devices are supported already.


David Lang

On Tue, 21 Jun 2016, Eric Johansson wrote:


I assume you've seen this list https://wikidevi.com/wiki/Ath9k

I'm starting to play with mikrotek devices.  routeros is a consultants
full employment program so replacing it would be nice.

--- eric

On 6/21/2016 2:36 AM, Dave Taht wrote:

http://wrtnode.com/w/

...

What I am actually looking for is the smallest/cheapest minimum
possible box with an ath9k in it.




___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [LEDE-DEV] lede integration issues remaining from the detrius of cerowrt

2016-06-11 Thread David Lang

On Sat, 11 Jun 2016, Daniel Curran-Dickinson wrote:


Hi Dave,

I don't speak for the LEDE team, but it looks to me a lot of your
problem is that you are using LEDE/openwrt for far bigger iron than the
primary target (standard routers, including pre-AC non-NAND ones, which
are really quite small and low powered).  2 TB+ storage for example, or
using lighttpd instead of uhttpd are really things that don't affect the
primary use case and if you want to support this, you need to find a way
to do that does not negatively affect your typical router (without
external storage).


While CeroWRT has expanded its aim to be able to support today's faster network 
connections (up to and including the 1G connections now available), that's not 
really the issue here.


Even low-end devices now include a USB port, and it's really easy to plug in an 
external USB drive that's >2TB. 3TB drives are now <$100.


Now, if support for larger drives really does add a lot to the system footprint, 
it should be optional. But how much space are we talking about here? It should 
at least be an easy-to-select option.


David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [LEDE-DEV] WRT1900ACS Testing

2016-05-20 Thread David Lang
There was a new wifi firmware release today (not yet in LEDE or OpenWRT trunk) 
announced around post 11400 on this forum


https://forum.openwrt.org/post.php?tid=50173=325044

David Lang

On Fri, 20 May 2016, Dave Taht wrote:


Date: Fri, 20 May 2016 14:16:13 -0700
From: Dave Taht <dave.t...@gmail.com>
To: Dheeran Senthilvel <dheeranm...@gmail.com>
Cc: lede-...@lists.infradead.org,
"cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] [LEDE-DEV] WRT1900ACS Testing

We had found some pretty major performance problems on this hardware
as of a few months ago. I am curious if they still exist? A whole
bunch of benchmarks went by on the cerowrt-devel list, also.

They were:

1) broken local ethernet network stack - running 4 copies of netperf
against it - one would grab all the bandwidth and the other three
barely even start or hang completely.

2) No BQL in the ethernet driver

3) Wifi was horribly overbuffered in general

4) Wifi would crash against flent's rrul test.


On Thu, May 19, 2016 at 3:37 AM, Dheeran Senthilvel
<dheeranm...@gmail.com> wrote:

Hi,
I am currently running LEDE r274 build dated 18-May-2015. The build seems to be 
stable as of now, have been flashing the images ever since the snapshot was 
made available in the repository. Current Status of the device is as follows,

# Wireless - both 2.4Ghz (@297MB/s) and 5Ghz(@1079MB/s) are working 
fine.(Speed tested using iperf).

# USB - Not Tested

# LEDs - Only Power, WiFi & LAN leds are functioning. WAN & USB are not 
working (Similar OpenWrt Ticket - https://dev.openwrt.org/ticket/21825)

#Terminal Output



root@Shelby:~# lsmod


ahci_mvebu  1653  0
ahci_platform   2367  0
armada_thermal  3284  0
cfg80211  220675  2 mwlwifi
compat 13025  2 mac80211
crc_ccitt979  1 ppp_async
ehci_hcd   34246  2 ehci_orion
ehci_orion  2563  0
ehci_platform   4368  0
gpio_button_hotplug 5988  0
hwmon   2026  3 tmp421
i2c_core   18339  3 tmp421
i2c_dev 4551  0
i2c_mv64xxx 7033  0
ip6_tables  9625  3 ip6table_raw
ip6t_REJECT 1056  2
ip6table_filter  682  1
ip6table_mangle 1042  1
ip6table_raw 648  0
ip_tables   9775  4 iptable_nat
ipt_MASQUERADE   698  1
ipt_REJECT   914  2
iptable_filter   736  1
iptable_mangle   868  1
iptable_nat 1029  1
iptable_raw  702  0
ledtrig_usbdev  2307  0
libahci19501  3 ahci_mvebu
libahci_platform4501  2 ahci_mvebu
libata127382  5 ahci_mvebu
mac80211  401118  1 mwlwifi
mmc_block  21754  0
mmc_core   77838  2 mvsdio
mvsdio  7362  0
mwlwifi69628  0
nf_conntrack   60523  9 nf_nat_ipv4
nf_conntrack_ipv4   6125 11
nf_conntrack_ipv6   6564  6
nf_conntrack_rtcache2461  0
nf_defrag_ipv4   884  1 nf_conntrack_ipv4
nf_defrag_ipv6 13185  1 nf_conntrack_ipv6
nf_log_common   2407  2 nf_log_ipv4
nf_log_ipv4 3218  0
nf_log_ipv6 3663  0
nf_nat 10036  4 nf_nat_ipv4
nf_nat_ipv4 4054  1 iptable_nat
nf_nat_masquerade_ipv41509  1 ipt_MASQUERADE
nf_nat_redirect  919  1 xt_REDIRECT
nf_reject_ipv4  1911  1 ipt_REJECT
nf_reject_ipv6  2236  1 ip6t_REJECT
nls_base5190  1 usbcore
ppp_async   6521  0
ppp_generic19930  3 pppoe
pppoe   8047  0
pppox   1239  1 pppoe
pwm_fan 2840  0
sata_mv26825  0
scsi_mod   88117  3 usb_storage
sd_mod 23412  0
slhc4543  1 ppp_generic
thermal_sys20307  2 armada_thermal
tmp421  2500  0
usb_common  1676  1 usbcore
usb_storage37368  0
usbcore   120847  8 ledtrig_usbdev
x_tables   10689 26 ipt_REJECT
xhci_hcd   81489  2 xhci_plat_hcd
xhci_pci2324  0
xhci_plat_hcd   3897  0
xt_CT   2797  0
xt_LOG   851  0
xt_REDIRECT  825  0
xt_TCPMSS   2660  2
xt_comment   511 62
xt_conntrack2516 16
xt_id506129
xt_limit1241 20
xt_mac   631  0
xt_mark  704  0
xt_multiport1308  0
xt_nat  1329  0
xt_state 801  0
xt_tcpudp   1800 10
xt_time 1670  0


root@Shelby:~# cat /etc/config/system


config system
option hostname 'Shelby'
option timezone 'IST-5:30'
option ttylogin '0'

config timeserver 'ntp'
 

Re: [Cerowrt-devel] LimeSDR: Flexible, Next-generation, Open Source Software Defined Radio

2016-04-28 Thread David Lang

found it

https://www.crowdsupply.com/lime-micro/limesdr

early bird price $199; 100 kHz - 3.8 GHz, >60 MHz bandwidth at 12-bit sample depth

4 transmit and 6 receive antennas, two separate transmitters and two separate 
receivers.


USB 3 interface.

looks intereting

David Lang


On Thu, 28 Apr 2016, David Lang wrote:


Date: Thu, 28 Apr 2016 13:58:47 -0700 (PDT)
From: David Lang <da...@lang.hm>
To: Eric Johansson <e...@eggo.org>
Cc: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] LimeSDR: Flexible, Next-generation,
Open Source Software Defined Radio

so what are the technical specs, costs, and expected shipping date?

a 10 antenna SDR system with a FPGA opens up some very interesting 
possibilities.


David Lang

On Thu, 28 Apr 2016, Eric Johansson wrote:


Date: Thu, 28 Apr 2016 16:39:35 -0400
From: Eric Johansson <e...@eggo.org>
To: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: [Cerowrt-devel] LimeSDR: Flexible, Next-generation,
Open Source Software Defined Radio

http://qrznow.com/limesdr-flexible-next-generation-open-source-software-defined-radio

from the advertising copy:

A Software Defined Radio for Everyone
LimeSDR is a low cost, open source, apps-enabled (more on that later) 
software defined radio (SDR) platform that can be used to support just 
about any type of wireless communication standard, including UMTS, LTE, 
GSM, LoRa, Bluetooth, Zigbee, RFID, and Digital Broadcasting, to name but a 
few.


While most SDRs have remained the domain of RF and protocol experts, 
LimeSDR is usable by anyone familiar with the idea of an app store – 
LimeSDR is the first SDR to integrate with Snappy Ubuntu Core. This means 
you can easily download new LimeSDR apps from developers around the world. 
If you’re a developer yourself, then you can share and/or sell your LimeSDR 
apps through Snappy Ubuntu Core as well.



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] LimeSDR: Flexible, Next-generation, Open Source Software Defined Radio

2016-04-28 Thread David Lang

so what are the technical specs, costs, and expected shipping date?

a 10 antenna SDR system with a FPGA opens up some very interesting 
possibilities.


David Lang

On Thu, 28 Apr 2016, Eric Johansson wrote:


Date: Thu, 28 Apr 2016 16:39:35 -0400
From: Eric Johansson <e...@eggo.org>
To: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: [Cerowrt-devel] LimeSDR: Flexible, Next-generation,
Open Source Software Defined Radio

http://qrznow.com/limesdr-flexible-next-generation-open-source-software-defined-radio

from the advertising copy:

A Software Defined Radio for Everyone
LimeSDR is a low cost, open source, apps-enabled (more on that later) 
software defined radio (SDR) platform that can be used to support just 
about any type of wireless communication standard, including UMTS, LTE, 
GSM, LoRa, Bluetooth, Zigbee, RFID, and Digital Broadcasting, to name 
but a few.


While most SDRs have remained the domain of RF and protocol experts, 
LimeSDR is usable by anyone familiar with the idea of an app store – 
LimeSDR is the first SDR to integrate with Snappy Ubuntu Core. This 
means you can easily download new LimeSDR apps from developers around 
the world. If you’re a developer yourself, then you can share and/or 
sell your LimeSDR apps through Snappy Ubuntu Core as well.



___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] perverse powersave bug with sta/ap mode

2016-04-26 Thread David Lang

On Tue, 26 Apr 2016, Aaron Wood wrote:


Has anyone modeled what the multicast to multiple-unicast efficiency
threshold is?  The point where it becomes more efficient to send the
traffic to individual STAs as unicast instead of sending a monstrous (in
time) multicast-rate packet?


is the multicast packet actually multicast over the air? or is it a lot of 
unicast packets?


When the network is encrypted, how can they encrypt the multicast packet so that 
all nodes can hear it?


David Lang
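A back-of-the-envelope model of the threshold Aaron asks about can be sketched by comparing airtime: one frame sent at the slow multicast basic rate versus N copies sent at a fast unicast MCS. The overhead and rate numbers below are made-up assumptions (real 802.11 timing, aggregation, and retries are far more involved), but they show where a crossover appears.

```python
def airtime_us(payload_bytes, rate_mbps, overhead_us=50.0):
    """Rough per-frame airtime in microseconds: a fixed hypothetical
    overhead (preamble, IFS, ACK) plus payload serialization time."""
    return overhead_us + payload_bytes * 8 / rate_mbps

def unicast_beats_multicast(n_stas, payload=1500,
                            mcast_mbps=6.0, ucast_mbps=300.0):
    """True when n_stas unicast copies take less air than one multicast
    frame sent at the basic rate (assumed rates are illustrative)."""
    return (n_stas * airtime_us(payload, ucast_mbps)
            < airtime_us(payload, mcast_mbps))

# With these numbers a 1500-byte multicast at 6 Mbps costs ~2050 us of air,
# while each 300 Mbps unicast copy costs ~90 us, so roughly 22 stations
# can be served by unicast before multicast becomes cheaper.
```

Under these assumptions the per-STA-queue approach Aaron mentions would win for any realistic room-sized STA count; encryption (a shared group key vs per-STA pairwise keys) shifts the bookkeeping but not the airtime math.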



2, 5, 10 STAs?

The per-STA-queue work should make that relatively easy, by allowing the
packet to be dumped into each STA's queue...

-Aaron
___
Make-wifi-fast mailing list
make-wifi-f...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/make-wifi-fast
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] nuc stuff

2016-03-27 Thread David Lang

what sort of expected price?

On Mon, 28 Mar 2016, Outback Dingo wrote:


sorry guys, miss the reply all

http://www.jetwaycomputer.com/NF592.html

available soon


On Sun, Mar 27, 2016 at 3:01 AM, Valent Turkovic 
wrote:


If you are looking for the absolutely most powerful device but in a smaller form
factor than a NUC, then check out the PC Engines APU.1D; this amazing board is x86
so it blows all other non-x86 devices away... it should be similar to or more
powerful than the Mikrotik CCR Cloud Routers.

Software wise it can run FreeBSD, OpenWrt or Mikrotik OS.

[1]
http://www.pluscom.pl/en/pc-engines-apu-1d-3xgigabit-lan-2gb-ram-t40e-cpu-board-p2412.html
- http://www.pcengines.ch/apu1d.htm
[2] https://schemen.me/routeros-on-apu/
- https://wiki.openwrt.org/toh/pcengines/apu
- https://forum.openwrt.org/viewtopic.php?pid=316913

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] might be a good thing

2016-03-21 Thread David Lang
sounds like a good time for someone else to get in the mix, aiming for the low 
end with a simple/cheap RF chipset that supports doing all the 'hard stuff' in 
software. Someone who would be happy to have the community doing the hard stuff 
for them.


David Lang


On Fri, 18 Mar 2016, Aaron Wood wrote:


It will be interesting to see if that includes APs or just clients.  The
OEMs are going to lose one of their big levers on price, if they lose
Broadcom from the mix (and Qualcomm is going to make some good money).
Although Marvell's chunk of that space has been growing...

-Aaron

On Fri, Mar 18, 2016 at 3:46 PM, Dave Taht <dave.t...@gmail.com> wrote:



http://www.bidnessetc.com/65767-apple-component-maker-broadcom-abandon-low-margin-wifi-operations/


--
Dave Täht
Let's go make home routers and wifi faster! With better software!
https://www.gofundme.com/savewifi
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel

___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] [bufferbloat-fcc-discuss] arstechnica confirmstp-link router lockdown

2016-03-14 Thread David Lang

On Mon, 14 Mar 2016, Jonathan Morton wrote:


On 14 Mar, 2016, at 16:02, dpr...@reed.com wrote:

The WiFi protocols themselves are not a worry of the FCC at all. Modifying 
them in software is ok. Just the physical emissions spectrum must be 
certified not to be exceeded.


So as a practical matter, one could even satisfy this rule with an external 
filter and power limiter alone, except in part of the 5 GHz band where radios 
must turn off if a radar is detected by a specified algorithm.


That means that the radio software itself could be tasked with a software 
filter in the D/A converter that is burned into the chip, and not bypassable. 
If the update path requires a key that is secret, that should be enough, as 
key based updating is fine for all radios sold for other uses that use 
digital modulation using DSP.


So the problem is that 802.11 chips don't split out the two functions, making 
one hard to update.


To put this another way, what we need is a cleaner separation of ISO Layers 1 
(physical) and 2 (MAC).


The problem is that everything (not just in wifi chips, think about 'software 
defined networking/datacenter) is moving towards less separation of the 
different layers, not more. The benefits of less separation are far more 
flexibility, lower costs, and in some cases, the ability to do things that 
weren't possible with the separation.


Any position that requires bucking this trend is going to have a very hard time 
surviving.


David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [bufferbloat-fcc-discuss] arstechnica confirmstp-link router lockdown

2016-03-14 Thread David Lang

On Mon, 14 Mar 2016, dpr...@reed.com wrote:

An external "limit-exceeding signal detector" could also be very inexpensive, 
if it did not need to do ADC from the transmitted signal, but could get access 
to the digital samples and do a simple power measurement.


I agree with this, but have concerns about how you can lock down part of the 
firmware and not all of it.


You still have the problem of telling the chip/algorithm which set of rules to 
enforce, and updating it when the requirements change.


David Lang


Re: [Cerowrt-devel] [bufferbloat-fcc-discuss] arstechnica confirms tp-link router lockdown

2016-03-13 Thread David Lang

On Sun, 13 Mar 2016, Adrian Chadd wrote:


You do that in hardware. Do the Mac, phy and RF in hardware.

This is what the qca hardware does.


unfortunately, that's not what the existing chipsets do.

So unless you can create a new chipset, you can't just change what's done in 
hardware.


David Lang


a
On Mar 13, 2016 5:25 PM, "David Lang" <da...@lang.hm> wrote:


On Sat, 12 Mar 2016, Adrian Chadd wrote:

On 12 March 2016 at 11:14, Henning Rogge <hro...@gmail.com> wrote:



On Sat, Mar 12, 2016 at 3:32 PM, Wayne Workman
<wayne.workman2...@gmail.com> wrote:


I understand that Broadcom was paid to develop the Pi, a totally free
board.

And they already make wireless chipsets.



The question is how easy would it be to build a modern 802.11ac
halfmac chip... the amount of work these chips do (especially with 3*3
or 4*4 MIMO) is not trivial.



It's not that scary - most of the latency sensitive things are:

* channel change - eg background scans
* calibration related things - but most slow calibration could be done
via firmware commands, like the intel chips do!
* transmit a-mpdu / retransmit
* transmit rate control adaptation
* receiving / block-ack things - which is mostly done in hardware anyway
* likely some power save transition-y things too



you are ignoring MU-MIMO. The ability to transmit different signals from
each antenna, so that the interference patterns from the different signals
result in different readable data depending on where the receiver is in
relation to the access point, is not a trivial thing.

But it's one of the most valuable features in the spec.

David Lang






Re: [Cerowrt-devel] On building your own routers and the mass market

2016-03-13 Thread David Lang

On Sat, 12 Mar 2016, Outback Dingo wrote:


... and I'm going to order a couple, 'cause their wifi is not as good
as it could be (nobody's is), and he said I could visit periodically
with make-wifi-fast's upcoming fixes. https://eero.com/


the cznic team has already done this.

https://omnia.turris.cz/en/




Looks like a nice device, though can you buy one now, or is this another
smoke-and-mirrors project? They did get a ton of funding; now let's see if
they deliver. Can you source the devices now?


No, they are not shipping yet, but they are expecting to start delivering them 
next month (April 2016).


They have several thousand to deliver based on existing orders.

David Lang


Re: [Cerowrt-devel] [bufferbloat-fcc-discuss] arstechnica confirms tp-link router lockdown

2016-03-13 Thread David Lang

On Sat, 12 Mar 2016, Wayne Workman wrote:


@David lang,

Dude, Help make it happen.

I don't know all the details. I don't even pretend to know anything about
IC design and manufacturing.

Look if we want a platform that is open, then it'll be an open source
chipset. Yes, the first of its kind.

We need someone who is willing to do this in their free time(as you said),
or a company that is willing to be paid to do it.

We can be pioneers.

We CAN be pioneers.

All we have to do is figure this out. It's been done before, with the Linux
kernel. It can be done again. The person who has made a chipset in their
free time is out there. Let's find them!


I'll be happy to support anyone who tackles this. But your question was if 
something like this was a good use of 'our money'. If you are talking about the 
bufferbloat team, the make-wifi-fast team, or even the team trying to convince 
the FCC not to outlaw OpenWrt, then no, funding the development of a chipset from 
scratch is not a good use of 'our money'. It will have no effect for several 
years (by which time we may be looking at the next chipset), and we don't have 
anywhere close to the funding needed to pull it off.


It would do us no good to create a fully open chip if the FCC mandates that the 
firmware must be locked down.


Would I like someone to do this? Sure. I'll contribute towards a kickstarter, 
even if it's $100 for a mini-PCI card that is the equivalent of what we can get 
today for $30. But it would take tens of thousands of people doing that to fund 
the project, and I have serious doubts whether you can get that much funding for 
something with such a long lead time.


If someone does the research and puts together an FPGA version that works and is 
looking for funding to convert it to an ASIC, I think you could get funding. But 
that's not the question in front of us now.


David Lang


Re: [Cerowrt-devel] [bufferbloat-fcc-discuss] arstechnica confirms tp-link router lockdown

2016-03-12 Thread David Lang

On Sat, 12 Mar 2016, Adrian Chadd wrote:


On 12 March 2016 at 11:14, Henning Rogge <hro...@gmail.com> wrote:

On Sat, Mar 12, 2016 at 3:32 PM, Wayne Workman
<wayne.workman2...@gmail.com> wrote:

I understand that Broadcom was paid to develop the Pi, a totally free board.

And they already make wireless chipsets.


The question is how easy would it be to build a modern 802.11ac
halfmac chip... the amount of work these chips do (especially with 3*3
or 4*4 MIMO) is not trivial.


It's not that scary - most of the latency sensitive things are:

* channel change - eg background scans
* calibration related things - but most slow calibration could be done
via firmware commands, like the intel chips do!
* transmit a-mpdu / retransmit
* transmit rate control adaptation
* receiving / block-ack things - which is mostly done in hardware anyway
* likely some power save transition-y things too


you are ignoring MU-MIMO. The ability to transmit different signals from each 
antenna, so that the interference patterns from the different signals result in 
different readable data depending on where the receiver is in relation to the 
access point, is not a trivial thing.


But it's one of the most valuable features in the spec.

David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] arstechnica confirms tp-link router lockdown

2016-03-12 Thread David Lang

On Sat, 12 Mar 2016, Jonathan Morton wrote:

quick note, your quotes mixed things I said with things Alan said


But the biggest barrier is probably that the web interface is
cluttered with features you don't need, so there's a setup wizard you
go through once, and you don't touch that even if you're curious
because you're at risk of resetting it.


That’s a good observation, and suggests a design principle to follow in future.


I think this is a significant factor.


Just because they screwed up the WNDR3800 with too many different
coloured lights, it doesn't invalidate the principle.


It’s not just the WNDR, and not just Netgear.  Every router I’ve seen has too 
many lights which provide too little information - and even I have to squint 
and read the manual to figure out what it’s telling me.

Except Apple.  Then you have *one* light which provides too little information 
- but at least I don’t have to read the manual to figure it out.  :-)


some of the lights are fairly obvious, others less so.


You have a much larger display, which gives you room for help text and images, 
not just a handful of characters.


You might assume that I’m thinking of a 16x2 character display.  I’m not - 
that’s too small to be user-friendly.

Rather, something like this, which gives 128x64 pixels (equivalent to 21x8 
characters with a 6x8 font) and the freedom to draw icons and choose fonts:

https://www.adafruit.com/products/250


a 6x8 font on a 2.7" screen is unreadable for many people; this is about an 11pt 
font on something that is not at your optimum reading distance.
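That "about an 11pt" figure is easy to check from the numbers already quoted above (128x64 pixels, 2.7" diagonal, 8-pixel glyph height from the 6x8 font); a quick back-of-the-envelope sketch:

```python
import math

# 128x64-pixel display with a 2.7-inch diagonal, as discussed above
width_px, height_px, diag_in = 128, 64, 2.7

# pixel density: diagonal resolution divided by diagonal size
dpi = math.hypot(width_px, height_px) / diag_in   # about 53 dots per inch

# an 8-pixel-tall glyph converted to points (1 pt = 1/72 inch)
glyph_height_pt = 8 / dpi * 72

print(round(dpi), round(glyph_height_pt, 1))      # roughly 53 dpi, ~10.9 pt
```

So the claim checks out: at roughly 53 dpi, the 6x8 font renders at about 11 points.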



There are also small OLED displays which give a sharper, higher-contrast 
readout, but these are more expensive, lack the capacity of colour-coding 
anything, and appear to be so small that some people might have difficulty 
reading them despite the sharpness and high contrast.


OLEDs do color as well.


The original Macintosh put a whole desktop environment on a tiny (by modern 
standards) 512x384 mono display.  We don’t even need *that* level of 
sophistication.  I’m confident 128x64 mono will be enough if carefully designed 
for - it is substantially more than a classic Nokia phone provided.


don't ignore the DPI, it's not just the number of pixels, it's the size of the 
resulting characters.



A display is nicer than just LEDs, but it's also a lot more expensive.


Yes, it looks like a decent display+controller combination is more expensive 
than a mini-PCIe ath9k card (even discounting the markup associated with 
Adafruit providing a maker-friendly kit rather than raw devices).  It will 
therefore be a significant contributor to the BOM cost.  This is justifiable if 
it also contributes to the USP.  On the upside, with a status display we can 
reduce the number of LEDs and associated optical channels, perhaps all the way 
down to a single power light.


don't forget that you also have to have buttons/switches to go along with the 
display. don't assume that people are going to have a spare USB keyboard around 
to plug in.


There is a substantial population whose only computers are tablets, phones, TVs, 
and other non-traditional devices, but who need wifi to use them.


If I'm going to drag out a full size keyboard, I sure don't want to be trying to 
squint at a 2.7" screen.



I also don't like large glowing displays on devices. I frequently put tape over 
the LEDs to tone things down as well (especially in bedrooms)


An RGB LED backlight can inherently be dimmed - and this could occur 
automatically when out of setup mode (keyboard disconnected) and the overall 
status is OK.  Also, since it illuminates a relatively large area, the colour 
can be discerned without high brightness in the first place.


I don't know if you really can simplify the configuration the way you are 
wanting to, but I'd say give it a try. Take OpenWRT and make a configuration 
program that you think is better.


Yes, I probably should.

You even have a nice browser based tool to start with (luci). If you can make 
a browser based tool work well, then if your tool is better/easier, it can be 
widely used, or you can then try hardware versions of it.


Since the entire point of my proposal is to get away from the “web interface” 
concept altogether, and I have an allergic reaction to “web technology” such 
as JavaScript (spit), that’s *not* what I’m going to do.  Instead, I’ll 
prototype something based around an emulation of the display linked above.


But I will take a careful look at Luci to help generate a requirements 
checklist.


my point is that you can use a browser interface to mock-up what you would do on 
your local display without having to build custom hardware. Yes, it would mean 
you have to work with javascript/etc to build this mockup, but it would let you 
create a bitmap image with buttons/etc that will work the same way that your 
physical device would, but be able to tinker with things that would require 
hardware changes if it 

Re: [Cerowrt-devel] [Make-wifi-fast] arstechnica confirms tp-link router lockdown

2016-03-11 Thread David Lang

On Fri, 11 Mar 2016, Alan Jenkins wrote:


On 11/03/2016, Jonathan Morton <chromati...@gmail.com> wrote:



On 11 Mar, 2016, at 20:22, Luis E. Garcia <l...@bitamins.net> wrote:

Time to start building our own.


A big project in itself - but perhaps a worthwhile one.  We wouldn’t be able
to compete on price against the Taiwanese horde, but price is not the only
market force on the table.  Firmware quality is a bit abstract and nebulous
to sell to ordinary consumers, but there is one thing that might just get
their attention.

Making the damned thing easier to configure.

Almost every router now on the market is a blank box with some ports on the
back, some antennas on top and some lights on the front.  If you’re lucky,
there’ll be a button for WPS (which most consumers would still need to read
the manual to figure out how to use, and most tinkerers would turn right
off) and maybe one or two “feature switches”; my Buffalo device has one
which does “something” to the QoS setup in the stock firmware, and nothing
at all in OpenWRT.

The lights only tell you that “something is happening” and occasionally
“something is wrong”, and are invariably cryptic.  For example, a green
flashing light can mean “it’s setting up but not working yet” or “it’s
working and passing traffic right now”, often on the same light!  A critical
error, such as a cable not plugged in, is often signified only by the
*absence* of one of the several normal lights, which is invisible to the
untrained eye.

To actually configure it, you must first connect a computer to it and point
a Web browser at the right (usually numeric) URL.  This URL varies between
vendors and models, and sometimes even between firmware revisions; the only
infallible way to determine it is to delve into the configuration that DHCP
handed out.
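As an aside, on a Linux client that "delving" amounts to reading the default gateway out of the routing table; a minimal sketch (interface and address will vary):

```shell
# The router's web interface is almost always at the default gateway address.
ip route show default | awk '{print $3}'
# e.g. a line like "default via 192.168.1.1 dev eth0 proto dhcp" yields 192.168.1.1
```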


Also, many routers set up a 'standard' name you can go to, so you don't have to 
do it by IP.


But this can be dealt with by adding a QR code or NFC method to get at a basic 
configuration.



You and I can cope with that, but we want something better, and
less-technical people *need* something better if they are to trust their
equipment enough to start actually learning about it.


I don't know if you really can simplify the configuration the way you are 
wanting to, but I'd say give it a try. Take OpenWRT and make a configuration 
program that you think is better. You even have a nice browser based tool to 
start with (luci). If you can make a browser based tool work well, then if your 
tool is better/easier, it can be widely used, or you can then try hardware 
versions of it.



As a starting point, suppose we build a small display into the case, and
invite the user to temporarily plug a keyboard, console controller or even a
mouse directly into the USB port (which most routers now have) to do the
setup?  No Web browser required, and no potentially-vulnerable web server on
the device either.


There are very good reasons why browser setups have replaced built-in displays.

There's a limit to how much you can show on a built-in display, and you have to 
be able to see the display. Not everyone positions their wifi where they can 
easily see it, let alone plug it into a TV. The best place for a router to sit 
is usually not the easiest place to see or get at it.


You have a much larger display, which gives you room for help text and images, 
not just a handful of characters.


A display is nicer than just LEDs, but it's also a lot more expensive.

I also don't like large glowing displays on devices. I frequently put tape over 
the LEDs to tone things down as well (especially in bedrooms)


David Lang


When not in config mode, the input device can be disconnected and returned
to its primary role, and the display can offer status information in a
human-readable format; an RGB-controlled backlight would be sufficient for
at-a-glance is-everything-okay checks (which is all Apple gives you without
firing up their proprietary config software on a connected computer).  Some
high-end router models provide just this, without leveraging the possibility
of easier setup.

 - Jonathan Morton


IMO they already glow quite enough.  Better to invest in the software :P.

Alan


Re: [Cerowrt-devel] odroid C1+ status

2016-03-05 Thread David Lang

A blog format for hardware testing would be a good idea.

David Lang

On Sat, 5 Mar 2016, Dave Taht wrote:


Date: Sat, 5 Mar 2016 12:23:36 -0800
From: Dave Taht <dave.t...@gmail.com>
To: moeller0 <moell...@gmx.de>
Cc: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>
Subject: [Cerowrt-devel] odroid C1+ status

wow, thx for all the suggestions on alternate x86 router hardware... I
will read more later.

Would using a blog format for things like the following work better
for people? I could more easily revise, including graphics, etc,
etc... could try to hit on our hot buttons (upgradability, bloat,
reliability, kernel versions, manufacturer support) with some sort of
grading system...

http://the-edge.taht.net/post/odroid_c1_plus/ in this case

...

I got the odroid C1+ to work better. (either a cable or power supply
issue, I swapped both). On output it peaks at about 416Mbits with 26%
of cpu being spent in a softirq interrupt.  On input I can get it to
gbit, with 220% of cpu in use.

The rrul tests were pretty normal, aside from the apparent 400mbit
upload limit causing contention on rx/tx (at the moment I have no good
place to put these test results since snapon is now behind a firewall.
I'd like to get more organized about how we store and index these
results also)

There is no BQL support in the odroid driver for it, and it ships with
linux 3.10.80. At least it's an LTS version. I am totally unfamiliar
with the odroid ecosystem, but maybe there is active kernel dev on it
somewhere?

(The pi 2, on the other hand, is kernel 4.1.17-v7 AND only has a
100mbit phy, so it is hard to complain about only getting 400mbit from
the odroid c1+, but, dang it, a much later kernel would be nice in the
odroid)

My goal in life, generally, is to have a set of boxes with known
characteristics to drive tests with, that are reliable enough to setup
once and ignore.

A) this time around, I definitely wanted variety, particularly in tcp
implementations, kernel versions, ethernet and wifi chips - as it
seemed like drawing conclusions from "perfect" drivers like the e1000e
all the time was a bad idea. We have a very repeatable testbed in
karlstad, already - I'm interested in what random sort of traffic can
exist on a home network that messes life up.

One of the things I noticed while using kodi is that the box announces
2k of multicast ipv4 packets every 30 seconds or so on the upnp
port... AND over 4k of multicast ipv6 packets, if ipv6 is enabled.

B) Need to be able to drive 802.11ac as hard as possible with as many
stations as possible.

C) needs to be low power and quiet (cheap is good too!)

Has anyone tried the banana pi? That's what comcast is using in their tests


Re: [Cerowrt-devel] [Make-wifi-fast] AREDN and Make Wifi Fast

2016-02-26 Thread David Lang

On Sat, 6 Feb 2016, Dave Täht wrote:


Email lists themselves seem to have become passe' - the "discourse"
engine seems like a good idea - but I LIKE email.


Some people like e-mail, some like web forums.

You can combine them and let people use whichever interface to the message 
stream that they want.


David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] It's hardware eval time again (basic hardware selection)

2016-02-19 Thread David Lang

On Sat, 13 Feb 2016, Dave Täht wrote:


I have an increasing desire to build (or buy if one exists) a couple
boxes that can "aircap" packet capture across as many bands as possible,
in these locations (and elsewhere, if people here are interested)


while they won't do -ac, the wndr3800 is available for <$30 each on ebay; you 
just need a remote disk to stream the capture data to.


the number of channels you can listen to is limited to how many you want to 
have running in a stack :-)


David Lang


Re: [Cerowrt-devel] Problems testing sqm (solved)

2015-10-26 Thread David Lang

On Mon, 26 Oct 2015, Sebastian Moeller wrote:


Hi David,

On Oct 26, 2015, at 19:15 , David Lang <da...@lang.hm> wrote:


On Mon, 26 Oct 2015, Dave Taht wrote:


in terms of testing wifi, the most useful series of tests to conduct
at the moment - since we plan to fix per station queuing soon


how soon is 'soon'? I'm going to be building my image for Scale in the next 
few weeks, any chance of having this available to put in it?


	Oh, interesting, slightly related question, do you typically disable 
offloads for your scale deployed APs, or do you run with them enabled? (I am 
pondering whether to expose an offload toggle in luci-app-sqm, but that only 
makes sense if there are users willing to test the functionality more often 
than once off). (Then again you probably do not use the GUI in the first place 
in your deployment ;) or?)


I'm using WNDR3800's again this year (~120 of them). I don't believe that it 
uses offloads in the first place.


The APs operate in bridged mode connecting to the local wired network. A 
separate firewall/DHCP/etc server provides the Internet connectivity.


see 
https://www.usenix.org/conference/lisa12/technical-sessions/presentation/lang_david_wireless 
for details on what I did a few years ago. This year we are moving to a new, 
larger location, but the same strategy is going to be used.


David Lang


Re: [Cerowrt-devel] Problems testing sqm (solved)

2015-10-26 Thread David Lang

On Mon, 26 Oct 2015, Dave Taht wrote:


in terms of testing wifi, the most useful series of tests to conduct
at the moment - since we plan to fix per station queuing soon


how soon is 'soon'? I'm going to be building my image for Scale in the next few 
weeks, any chance of having this available to put in it?


David Lang


Re: [Cerowrt-devel] Problems testing sqm

2015-10-25 Thread David Lang

On Sat, 24 Oct 2015, Jonathan Morton wrote:


On 24 Oct, 2015, at 19:34, David P. Reed  wrote:

Not trying to haggle. Just pointing out that this test configuration has a very 
short RTT. maybe too short for our SQM to adjust to.


It should still get the bandwidth right.  When it does, we’ll know that the 
setup is correct.


bandwidth throttling is actually a much harder thing to do well under all 
conditions than eliminating bufferbloat.
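For reference, the throttling half that SQM has to get right boils down to this kind of tc setup (a sketch, not SQM's actual script; the device name and rate are assumptions for a gigabit link):

```shell
# Shape egress to just under line rate so the queue forms where we control it,
# then put fq_codel on the shaped class to keep that queue short.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 900mbit
tc qdisc add dev eth0 parent 1:10 fq_codel
```

Picking a rate that tracks the real bottleneck under all conditions is the hard part; fq_codel alone has no such tuning knob.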


David Lang


Re: [Cerowrt-devel] Problems testing sqm

2015-10-23 Thread David Lang

On Fri, 23 Oct 2015, Aaron Wood wrote:


On Fri, Oct 23, 2015 at 9:10 AM, Richard Smith <smithb...@gmail.com> wrote:


I have a shiny new Linksys WRT1900ACS to test.

I thought it might be nice to start with some comparisons of factory
firmware vs OpenWRT with sqm enabled.



Here are my results with the same router (unless the WRT1900ACS is
different from the WRT1900AC):
http://burntchrome.blogspot.com/2015/06/sqmscripts-before-and-after-at-160mbps.html


the ACS has more flash (and ram IIRC) and is 1.6GHz instead of 1.2GHz

David Lang






So I built and installed an openwrt trunk but the results were very
non-impressive.  Rrul test reported multi-seconds of latency and it was
equally non-impressive with sqm enabled or disabled.  So I assumed that sqm
in trunk on this device must not work yet.  Then I wondered how well sqm in
trunk was tested and that perhaps it's broken for all devices.



Trunk or the Chaos Calmer release?  (I'm running CC RC1)



My test setup:

Laptop<--1000BaseT-->DUT<--1000baseT-->Server



Which ports on the DUT were you using?  are those local ports, or is one of
those the "internet" port?

-Aaron


Re: [Cerowrt-devel] Problems testing sqm

2015-10-23 Thread David Lang
with the wrt1900ACS, the WAN ethernet is connected to a switch before connecting 
to the wire. I believe that this causes some issues with the sqm default setup 
(or is it with the fq_codel?)


David Lang

On Fri, 23 Oct 2015, Dave Taht wrote:


you are most likely applying the qdisc to the wrong ethernet device or
ethernet vlan.
Dave Täht
I just lost five years of my life to making wifi better. And, now...
the FCC wants to make my work, illegal for people to install.
https://www.gofundme.com/savewifi


On Fri, Oct 23, 2015 at 6:10 PM, Richard Smith <smithb...@gmail.com> wrote:

I have a shiny new Linksys WRT1900ACS to test.

I thought it might be nice to start with some comparisons of factory
firmware vs OpenWRT with sqm enabled.

So I built and installed an openwrt trunk but the results were very
non-impressive.  Rrul test reported multi-seconds of latency and it was
equally non-impressive with sqm enabled or disabled.  So I assumed that sqm
in trunk on this device must not work yet.  Then I wondered how well sqm in
trunk was tested and that perhaps it's broken for all devices.

So I tested openwrt trunk on my Netgear 3700v2 and saw the same results.
Then I tried openwrt cc and got the same results.

Finally, I went to the reference implementation: cerowrt 3.10.50-1 on my
3700v2.  Same results.

So at this point I'm thinking there's a PEBKAC issue and I'm not really
turning it on.

Here's my enable procedure:

Go the sqm tab in the GUI and set egress and ingress to 1, set the
interface to the upstream interface,  click enable, click save and apply.
Everything else is left at default. ie fq_codel and simple.qos.
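Those GUI steps correspond to a UCI config along these lines (a sketch; the section and interface names are assumptions for a typical WAN port):

```shell
# /etc/config/sqm (sqm-scripts / luci-app-sqm)
config queue 'wan'
        option enabled '1'
        option interface 'eth1'      # upstream interface (assumed name)
        option download '900000'     # ingress shaping, kbit/s
        option upload '900000'       # egress shaping, kbit/s
        option qdisc 'fq_codel'
        option script 'simple.qos'
```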

I've also tried a reboot after enabling those settings and then gone back to
the gui to verify they were still set.

My test setup:

Laptop<--1000BaseT-->DUT<--1000baseT-->Server

I run netperf-wrapper -H Server -l 30 rrul and look at the 'totals' or 'all'
plot.

If I run the above with this setup.

Laptop<--1000baseT-->Server

Then I get the expected 800-900Mbit/s with latencies < 15ms.  So I don't
think there's a problem with my test infrastructure.

What am I missing and or what's the next step in figuring out whats wrong?

--
Richard A. Smith



Re: [Cerowrt-devel] anyone have real info on the google router?

2015-08-19 Thread David Lang

On Wed, 19 Aug 2015, Steven Barth wrote:


Wow, I must imagine it to be especially painful to reinvent the wheel
for all things QoS, WiFi optimizations, IPv6, Firewalling etc. and
rebuilding that on top of ChromeOS.


It's all Linux under the covers, so there's probably far less reinventing than 
you think.


I'll bet a lot of that stuff is either used as-is (with a different config 
system), or they just don't implement it (I doubt it has QoS for example)


David Lang


Re: [Cerowrt-devel] google wifi

2015-08-18 Thread David Lang

On Tue, 18 Aug 2015, Matt Taggart wrote:


Matt Taggart writes:

Google is working with TP-LINK (and soon ASUS) on wifi (is there a
make-wifi-fast list this should have gone to?)

Google Blog: Meet OnHub: a new router for a new way to Wi-Fi
https://tinyurl.com/nloy3jm

product page
https://on.google.com/hub/


I talked to a friend that worked on it:

ggg the kernel for the Onhub router is under whirlwind project
name in the chromium.org source tree. The firmware is coreboot and is
also public.
ggg Openwrt has all the support for the Qualcom chipset but not
this board.
ggg Openwrt also require fastboot and won't work with Coreboot.
ggg Key bits are the Device Tree description of the HW in this directory:
ggg 
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.14/arch/arm/boot/dts/
ggg qcom-apq8084-mtp.dts
ggg qcom-apq8084.dtsi
ggg qcom-ipq8064-ap148.dts
ggg qcom-ipq8064-arkham.dts
ggg qcom-ipq8064-storm.dts
ggg qcom-ipq8064-thermal.dtsi
ggg qcom-ipq8064-v1.0.dtsi
ggg qcom-ipq8064-whirlwind-sp3.dts
ggg qcom-ipq8064-whirlwind-sp5.dts
ggg qcom-ipq8064.dtsi
ggg whirlwind-sp5 is what shipped. (AFAIK)
ggg btw, all of this was reviewed on a public chromium.org gerrit
server.
ggg openwrt does support AP148
ggg and at some point chromeos was booting on AP148 though I don't
expect it to work out of the box


sounds promising.

how open is the wifi driver? Is it something that we can dive into and modify 
for make-wifi-fast? or is it a typical vendor blob?


David Lang


[Cerowrt-devel] anyone have real info on the google router?

2015-08-18 Thread David Lang

http://googleblog.blogspot.com/2015/08/meet-onhub-new-router-for-new-way-to-wi.html

specifically the issues of firmware and drivers?

David Lang


Re: [Cerowrt-devel] anyone have real info on the google router?

2015-08-18 Thread David Lang

On Tue, 18 Aug 2015, Jim Gettys wrote:


On Tue, Aug 18, 2015 at 3:31 PM, David Lang da...@lang.hm wrote:



http://googleblog.blogspot.com/2015/08/meet-onhub-new-router-for-new-way-to-wi.html

specifically the issues of firmware and drivers?



It runs a very recent kernel (3.18, IIRC). The devel prototype originally ran
OpenWrt; should be pretty easy to make OpenWrt run on it again. Has a speaker
on it (so you can do proof of actual presence to help bootstrap security
stuff). Ath10k wireless. Has a TPM module on board, so you could do a poor
man's HSM for storing keys.


IIRC the Ath10k wireless is not very hackable (big binary blob), so that would 
mean that unless Google has been able to do something new here, it's not very 
hackable for make-wifi-fast.


Am I remembering correctly?

otherwise it sounds like a nice device for the price.

David Lang



Android-style unlockable bootloader. The intent is for it to be an open
platform. Has a hardware packet assist engine: there will be the usual
issues of latency vs. bandwidth; not much use of that this instant.

Current software admin is done entirely via Android apps and a google back
end: no web gui for it at the moment (Trond isn't comfortable with a web
server on the router).  Secure upgrade of the firmware.
   - Jim




David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-13 Thread David Lang

On Sun, 9 Aug 2015, Jonathan Morton wrote:


This is the difference between the typical 802.11n situation (one checksum
per aggregate) and the mandatory 802.11ac capability of a checksum per
packet.  As long as you also employ RTS/CTS when appropriate, the
possibility of collisions is no longer a reason to avoid aggregating.


you say the 'typical' 802.11n situation is one checksum per transmission. Is 
this configurable in OpenWRT? or is it a driver/hardware issue? Does it require 
special client support?


David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-13 Thread David Lang

On Fri, 14 Aug 2015, Jonathan Morton wrote:


The only mandatory form of aggregation in 'n' is of the one-checksum type,
even though other types are permitted.  The overhead at the data layer is
slightly less, and checksum failure handling on receive is simpler (just
throw the whole thing out and nak it), as is handling the nak at the
transmitter (just retransmit the whole aggregate at the next opportunity).
Most 'n' hardware thus caters only to this lowest common denominator.

I'm not sure whether soft-MAC type hardware (like ath9k) can also support
the more flexible type via driver support - I would hope so - but hard-MAC
hardware almost certainly can't be retrofitted in this way.


if the ath9k driver could support this, would this cause a problem on stock 
clients?



However, since 'ac' hardware is required to support individual-checksum
aggregation, a dual-band 'ac' card running on 2.4 GHz will effectively be
an 'n' card with such support, even if it's a hard-MAC.


right, I'm looking at what I can do to improve things even on non-ac stuff.

David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-09 Thread David Lang

On Sat, 8 Aug 2015, dpr...@reed.com wrote:

There's a lot of folklore out there about radio systems and WiFi that is 
quite wrong, and you seem to be quoting some of it - e.g. the idea that the 1 
Mb/s waveform of 802.11b DSSS is somehow more reliable than the lowest-rate 
OFDM modulations, which is often false.


I agree with you, but my understanding is that the current algorithms always 
assume that slower == more robust transmissions. My point was that in a 
weak-signal environment where you have trouble decoding individual bits, this 
is true (or close enough to true) for 'failed transmission - retransmit at a 
slower rate' to be a very useful algorithm, but in a congested environment 
where your biggest problem is being stepped on by other transmissions, it is 
close to suicide instead.


The 20 MHz-wide M0 modulation with 800ns GI gives 6.2 Mb/s and is typically 
much more reliable than the 802.11b standard 1 Mb/sec DSSS signals in normal 
environments, with typical receiver designs.


Interesting and good to know.

It's not the case that beacon frames are transmitted at 1 Mb/sec. - 
that is only true when there are 802.11b stations *associated* with the access 
point (which cannot happen at 5 GHz).


Also interesting. I wish I knew of a way to disable the 802.11b modes on the 
WNDR3800 or WRT1200 series APs. I've seen some documentation online talking 
about it, but it's never worked when I've tried it.


Dave Taht did some experimentation with cerowrt in increasing the broadcast 
rate, but my understanding is that he had to back out those changes because 
they didn't work well in the real world.


Nor is it true that the preamble for ERP 
frames is wastefully long. The preamble for an ERP (OFDM operation) frame is 
about 6 microseconds long, except in the odd case on 2.4GHz of 
compatibility-mode (OFDM-DSSS) operation, where the DSSS preamble is used. 
The DSSS preamble is 72 usec. long, because 72 bits at 1 Mb/sec takes that 
long, but the ERP frame's preamble is much shorter.


Is compatibility mode needed for 802.11g or 802.11b compatibility?

In any case, my main points were about the fact that channel estimation is 
the key issue in deciding on a modulation to use (and MIMO settings to use), 
and the problem with that is that channels change characteristics quite 
quickly indoors! A spinning fan blade can create significant variation in the 
impulse response over a period of a couple milliseconds.  To do well on 
channel estimation to pick a high data rate, you need to avoid a backlog in 
the collection of outbound packets on all stations - which means minimizing 
queue buildup (even if that means sending shorter packets, getting a higher 
data rate will minimize channel occupancy).


Long frames make congested networks work badly - ideally there would only be 
one frame ready to go when the current frame is transmitted, but the longer 
the frame, the more likely more than one station will be ready, and the longer 
the frames will be (if they are being combined).  That means that the penalty 
due to, and frequency of, collisions where more than one frame are being sent 
at the same time grows, wasting airtime with collisions.  That's why CTS/RTS 
is often a good approach (the CTS/RTS frames are short, so a collision will be 
less wasteful of airtime).
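The asymmetry is easy to quantify: the air wasted by a collision is roughly
the airtime of the longest colliding frame, so losing a 20-byte RTS costs far
less than losing a large aggregate. A rough calculation (the rates and sizes
below are illustrative, not from the spec):

```python
def wasted_airtime_us(collided_frame_bits, rate_mbps):
    """Air wasted when a collision destroys a frame: roughly the frame's
    own airtime (Mb/s == bits per microsecond, so bits/rate gives us)."""
    return collided_frame_bits / rate_mbps

# Colliding on a 20-byte (160-bit) RTS at 6 Mb/s wastes ~27 us of air;
# colliding on a 64-kbit aggregate at the same rate wastes over 10 ms.
rts_loss = wasted_airtime_us(160, 6)
aggregate_loss = wasted_airtime_us(64000, 6)
```

This is why the quoted argument favors RTS/CTS on congested channels: the
short control frames absorb the collisions cheaply.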


I run the wireless network for the SCALE conference, where we get a couple 
thousand people showing up with their equipment. I'm gearing up for next 
year's conference (deciding what I'm going to try, what equipment I'm going 
to need, etc). I would love to get any help you can offer on this, and I'm 
willing to do a fair bit of experimentation and a lot of measurements to see 
what's happening in the real world. I haven't been setting anything to 
specifically enable RTS/CTS in the past.


But due to preamble size, etc., CTS/RTS can't be 
very short, so an alternative hybrid approach is useful (assume that all 
stations transmit CTS frames at the same time, you can use the synchronization 
acquired during the CTS to mitigate the need for a preamble on the packet sent 
after the RTS).  (One of the papers I did with my student Aggelos Bletsas on 
Cooperative Diversity uses CTS/RTS in this clever way - to measure the channel 
while acquiring it).


how do you get the stations synchronized?

David Lang





On Friday, August 7, 2015 6:31pm, David Lang da...@lang.hm said:




On Fri, 7 Aug 2015, dpr...@reed.com wrote:

 On Friday, August 7, 2015 4:03pm, David Lang da...@lang.hm said:


 Wifi is the only place I know of where the transmit bit rate is going to
vary
 depending on the next hop address.


 This is an interesting core issue. The question is whether additional
 queueing helps or hurts this, and whether the MAC protocol of WiFi deals well
 or poorly with this issue. It is clear that this is a peculiarly WiFi'ish
 issue.

 It's not clear that the best transmit rate remains stable for very long, or
 even how to predict the best rate

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-08 Thread David Lang
I'll reply more later. I don't think you are an idiot; I think you are too 
caught up in the no-holds-barred world of your ham tinkering to take into 
account the realities of existing wifi deployments.


for make-wifi-fast we don't get to start from scratch; there is a huge 
installed base that we can't change and have to interact with. Like the 
bufferbloat effort, we can find better ways of doing things and try to get 
them out there, but they have to work seamlessly with the existing equipment 
and protocols.


also, I tend to give a lot of background to justify my conclusions, not 
because I assume you don't know any of it, but for two reasons: if I have a 
logic flaw or am basing my results on faulty info I can be corrected, and 
others in or watching the discussion who don't know these details can be 
brought up to speed and contribute.


David Lang


 On Sat, 8 Aug 2015, dpr...@reed.com wrote:


Date: Sat, 8 Aug 2015 16:46:05 -0400 (EDT)
From: dpr...@reed.com
To: David Lang da...@lang.hm
Cc: Jonathan Morton chromati...@gmail.com,
cerowrt-devel@lists.bufferbloat.net, make-wifi-f...@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on
draft-szigeti-tsvwg-ieee-802-11e


David - I find it interesting that you think I am an idiot.  I design waveforms 
for radios, and am, among other things, a fully trained electrical engineer 
with deep understanding of information theory, EM waves, propagation, etc. as 
well as an Amateur Radio builder focused on building experimental radio network 
systems in the 5 GHz and 10 GHz Amateur Radio bands.

I know a heck of a lot about 802.11 PHY layer and modulation, propagation, 
etc., and have been measuring the signals in my personal lab, as well as having 
done so when I was teaching at MIT, working on cooperative network diversity 
protocols (physical layers for mesh cooperation in digital networks).

And I was there with Metcalfe and Boggs when they designed Ethernet's PHY and MAC, and personally 
worked on the protocol layers in what became the Token Ring standard as well - so I understand the 
backoff and other issues associated with LANs.  (I wrote an invited paper in IEEE Proceedings 
An Introduction to Local Area Networks that appeared in the same special issue as the 
Cerf and Kahn paper entitled A Transmission Control Protocol that described the first 
Internet protocol concept..)

I guess what I'm saying is not that I'm always correct - no one is, but I would 
suggest that it's worth considering that I might know a little more than most 
people about some things - especially the physical and MAC layers of 802.11, 
but also about the internal electronic design of radio transceivers and digital 
interfaces to them. From some of your comments below, I think you either 
misunderstood my point (my fault for not explaining it better) or are 
misinformed.

There's a lot of folklore out there about radio systems and WiFi that is 
quite wrong, and you seem to be quoting some of it - e.g. the idea that the 1 Mb/s 
waveform of 802.11b DSSS is somehow more reliable than the lowest-rate OFDM modulations, 
which is often false.  The 20 MHz-wide M0 modulation with 800ns GI gives 6.2 Mb/s and 
is typically much more reliable than the 802.11b standard 1 Mb/sec DSSS signals in 
normal environments, with typical receiver designs. It's not the case that beacon frames 
are transmitted at 1 Mb/sec. - that is only true when there are 802.11b stations 
*associated* with the access point (which cannot happen at 5 GHz). Nor is it true that 
the preamble for ERP frames is wastefully long. The preamble for an ERP (OFDM operation) 
frame is about 6 microseconds long, except in the odd case on 2.4GHz of 
compatibility-mode (OFDM-DSSS) operation, where the DSSS preamble is used.   The DSSS 
preamble is 72 usec. long, because 72 bits at 1 Mb/sec takes that long, but the ERP 
frame's preamble is much shorter.


In any case, my main points were about the fact that channel estimation is 
the key issue in deciding on a modulation to use (and MIMO settings to use), and the 
problem with that is that channels change characteristics quite quickly indoors! A 
spinning fan blade can create significant variation in the impulse response over a period 
of a couple milliseconds.  To do well on channel estimation to pick a high data rate, you 
need to avoid a backlog in the collection of outbound packets on all stations - which 
means minimizing queue buildup (even if that means sending shorter packets, getting a 
higher data rate will minimize channel occupancy).

Long frames make congested networks work badly - ideally there would only be 
one frame ready to go when the current frame is transmitted, but the longer the 
frame, the more likely more than one station will be ready, and the longer the 
frames will be (if they are being combined).  That means that the penalty due 
to, and frequency of, collisions where more than one frame are being

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-07 Thread David Lang

On Fri, 7 Aug 2015, Jonathan Morton wrote:


On 7 Aug, 2015, at 15:22, Rich Brown richb.hano...@gmail.com wrote:

- At that time, the wifi driver requests packets from fq_codel until a) the 
fq_codel queues are empty, or b) the wifi frame is full. In either case, 
the wifi driver sends what it has.


There’s one big flaw with this: if packets are available for multiple 
destinations, fq_codel will generally give you a variety pack of packets for 
each of them.  But a wifi TXOP is for a single destination, so only some of 
the packets would be eligible for the same aggregate frame.


So what’s needed is a way for the wifi driver to tell the queue that it wants 
packets for the *same* destination as it’s transmitting to.


how about when the queue hands packets to the wifi driver, it hands all packets 
for that same destination, no matter where they are in the queue (up to a max 
size, and the queue may adjust that max size within a range for fairness)?
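A minimal sketch of that same-destination hand-off (the structure and names
here are invented for illustration; real fq_codel keeps per-flow queues and
its own DRR rotation rather than per-destination queues):

```python
from collections import deque

class PerDestQueue:
    """Toy queue that hands the wifi driver a same-destination batch."""
    def __init__(self, max_batch_bytes=64000):
        self.flows = {}            # destination -> deque of (packet, size)
        self.order = deque()       # round-robin rotation of destinations
        self.max_batch_bytes = max_batch_bytes

    def enqueue(self, dest, packet, size):
        if dest not in self.flows:
            self.flows[dest] = deque()
            self.order.append(dest)
        self.flows[dest].append((packet, size))

    def dequeue_batch(self):
        """Pop every queued packet for the next destination, up to the cap."""
        while self.order:
            dest = self.order.popleft()
            q = self.flows.get(dest)
            if not q:
                continue
            batch, total = [], 0
            while q and total + q[0][1] <= self.max_batch_bytes:
                pkt, size = q.popleft()
                batch.append(pkt)
                total += size
            if q:                  # leftovers keep their turn in the rotation
                self.order.append(dest)
            else:
                del self.flows[dest]
            return dest, batch
        return None, []
```

A driver would call `dequeue_batch()` once per TXOP and get everything
pending for one station, which is exactly what an aggregate frame needs.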


- Once the transmit opportunity has come around, it's a matter of 
microseconds (I assume) to pull in a wifi frame's worth of packets from 
fq_codel


This is hard to guarantee in software in a general-purpose OS.


you really want to have the packets assembled and ready to go rather than 
trying to pull them at that point.


But what happens right now is that the queue hands packets to the wifi driver, 
then the wifi driver has its own queues that it uses to gather the packets for 
each destination.


If we can find a way to make it reasonable to short-circuit the wifi driver 
queues by making it efficient to work from the main network queues, we can work 
to eliminate the second layer of queues.




so, thinking about the requirements from the driver point of view:

It needs to be able to pull a chunk of data to transmit (multiple packets). It 
isn't going to know very much ahead of time what speed it's going to use to 
talk to this destination, and this is going to drastically affect how long it 
takes to transmit the bits. So when it grabs data from the queue, it needs to 
feed back to the queue the transmit time for those bits, and the queue uses 
that instead of the count of bits to determine fairness.


The queue will be deciding fairness based on who is behind in their 'fair 
share' of transmit time. The wifi driver isn't going to know, when it asks for 
the next chunk of data to transmit, who it will be going to. So it will need 
to get the destination, see the speed to use to that destination, pass the 
speed to a calculation for how much data to send, then grab that much data 
(rounded to a packet boundary).
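That airtime-based accounting can be sketched as a toy deficit scheduler. The
quantum, the API, and the numbers are all invented for illustration; real
airtime fairness in mac80211 is considerably more involved:

```python
class AirtimeFairScheduler:
    """Deficit round-robin on airtime instead of bytes (illustrative)."""
    QUANTUM_US = 1000  # airtime credit added per round, in microseconds

    def __init__(self):
        self.deficit = {}   # station -> accumulated airtime credit (us)

    def pick_station(self, backlogged):
        # top up every backlogged station, then serve the one furthest ahead
        for sta in backlogged:
            self.deficit[sta] = self.deficit.get(sta, 0) + self.QUANTUM_US
        return max(backlogged, key=lambda s: self.deficit[s])

    def bytes_for(self, station, rate_mbps):
        """How many bytes fit in this station's airtime credit at its rate."""
        credit_us = self.deficit[station]
        return int(credit_us * rate_mbps / 8)  # Mb/s * us = bits; /8 = bytes

    def report_airtime(self, station, used_us):
        """Driver feeds back the actual transmit time for the batch."""
        self.deficit[station] -= used_us
```

A slow 802.11b station and a fast -ac station then get equal shares of *air*,
not equal byte counts, which is the fairness the paragraph above asks for.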




Is this sort of logic useful anywhere other than in wifi?

Wifi is the only place I know of where the transmit bit rate is going to vary 
depending on the next hop address.


I know that the inter-packet gap defined for Ethernet can amount to a large 
percentage of bandwidth at high speeds. Can multiple packets to the same 
destination be combined with a smaller gap between packets? (The gap timing 
was based on the speed-of-light time needed for the entire shared bus to 
quiesce back in the 10Mb half-duplex hub days.) If so, then there's value in 
bundling packets to the same destination together.
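For reference, the fixed per-frame costs on Ethernet (7-byte preamble, 1-byte
start-of-frame delimiter, 12-byte inter-frame gap, plus 18 bytes of
header/FCS) make that overhead easy to quantify:

```python
def ethernet_efficiency(payload_bytes):
    """Fraction of line time carrying payload for one Ethernet frame.
    Per-frame line overhead: 7B preamble + 1B SFD + 12B inter-frame gap;
    18B of header/FCS; minimum payload is padded to 46 bytes."""
    header_fcs = 18
    line_overhead = 7 + 1 + 12
    payload = max(payload_bytes, 46)
    frame = payload + header_fcs
    return payload_bytes / (frame + line_overhead)

# A 64-byte payload spends well over a third of its line time on overhead;
# a full 1500-byte payload is better than 97% efficient.
```

So the gap itself is a modest cost next to wifi's per-TXOP overhead, which is
why aggregation matters so much more on wireless.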



If this sort of logic is not useful anywhere other than in wifi, maybe the 
right answer is to have a way of short-circuiting the main OS queues and have 
a wifi-specific queue that can directly look at the per-client speeds/etc when 
deciding who goes next and how much to dispatch?


David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-07 Thread David Lang

On Fri, 7 Aug 2015, dpr...@reed.com wrote:


On Friday, August 7, 2015 4:03pm, David Lang da...@lang.hm said:





Wifi is the only place I know of where the transmit bit rate is going to vary
depending on the next hop address.



This is an interesting core issue.  The question is whether additional 
queueing helps or hurts this, and whether the MAC protocol of WiFi deals well 
or poorly with this issue.  It is clear that this is a peculiarly WiFi'ish 
issue.


It's not clear that the best transmit rate remains stable for very long, or 
even how to predict the best rate for the next station since the next 
station is one you may not have transmitted to for a long time, so your best 
rate information is old.


I wasn't even talking about the stability of the data rate to one destination. 
I was talking about the fact that you may have a 1.3Gb connection to system A 
(a desktop with an -ac 3x3 radio) and a 1Mb connection to machine B (an IoT 
802.11b thermostat).


trying to do BQL across 3+ orders of magnitude in speed isn't going to work 
without taking the speed into account.


Even if all you do is estimate with the last known speed, you will do better 
than ignoring the speed entirely.
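A sketch of what a speed-aware byte limit might look like (illustrative only:
the target queue time is a made-up number, and real BQL adapts its limit
dynamically from completion feedback rather than computing it from a rate):

```python
def bql_limit_bytes(last_rate_mbps, target_queue_us=2000):
    """Byte limit that keeps roughly `target_queue_us` of airtime queued,
    whatever the link rate (not the kernel's actual BQL algorithm)."""
    bits = last_rate_mbps * target_queue_us   # Mb/s * us = bits
    return max(int(bits / 8), 1514)           # never below one full frame

# A 1 Mb/s 802.11b client clamps to a single full-size frame (1514 bytes);
# a 1300 Mb/s 3x3 -ac client gets ~325 KB - the 3-orders-of-magnitude spread.
```

The point of the sketch: a single byte count cannot serve both ends of that
range, but a byte count derived from the last known rate can.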


If the wifi can 'return' data to the queue when the transmission fails, it can 
then fetch less data when it 're-transmits' the data at a lower speed.


 Queueing makes information about the channel older, 
by binding it too early.  Sending longer frames means retransmitting longer 
frames when they don't get through, rather than agilely picking a better rate 
after a few bits.


As I understand wifi, once a transmission starts, it must continue at that 
same data rate; it can't change mid-transmission (and there would be no way of 
getting feedback in the middle of a transmission to know that it would need to 
change).


The MAC protocol really should give the receiver some opportunity to control 
the rate of the next packet it gets (which it can do because it can measure 
the channel from the transmitter to itself, by listening to prior 
transmissions).  Or at least to signal channel changes that might require a 
new signalling rate.


This suggests that a transmitter might want to warn a receiver that some 
packets will be coming its way, so the receiver can preemptively change the 
desired rate.  Thus, perhaps an RTS-CTS like mechanism can be embedded in the 
MAC protocol, which requires that the device look ahead at the packets it 
might be sending.


the recipient will receive a signal at any data rate, you don't have to tell it 
ahead of time what rate is going to be sent. If it's being sent with a known 
encoding, it will be decoded.


The sender picks the rate based on a number of things

1. what the other end said they could do based on the mode that they are 
connected with (b vs g vs n vs bonded n vs ac vs 2x2 ac etc)


2. what has worked in the past. (with failed transmissions resulting in dropping 
the rate)


there may be other data like last known signal strength in the mix as well.
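A crude sketch of that 'capabilities plus what worked in the past' rate
selection (the thresholds are invented; production rate control such as
minstrel keeps per-rate success statistics rather than simple counters):

```python
class SimpleRateControl:
    """Fall back after consecutive failures, probe up after a run of
    successes - a toy model of history-based rate selection."""
    def __init__(self, supported_rates_mbps):
        self.rates = sorted(supported_rates_mbps)  # from association handshake
        self.idx = len(self.rates) - 1             # start optimistic
        self.fails = 0
        self.successes = 0

    def current_rate(self):
        return self.rates[self.idx]

    def tx_result(self, ok):
        if ok:
            self.fails = 0
            self.successes += 1
            if self.successes >= 10 and self.idx < len(self.rates) - 1:
                self.idx += 1          # probe the next faster rate
                self.successes = 0
        else:
            self.successes = 0
            self.fails += 1
            if self.fails >= 2 and self.idx > 0:
                self.idx -= 1          # drop to a more robust rate
                self.fails = 0
```

Note the failure path is exactly the 'slower == more robust' assumption
discussed earlier in the thread, with all the congestion caveats that implies.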


On the other hand, that only works if the transmitter deliberately congests 
itself so that it has a queue built up to look at.


no, the table of associated devices keeps track of things like the last known 
signal strength, connection mode, etc. no congestion needed.


The tradeoffs are not obvious here at all.  On the other hand, one could do 
something much simpler - just have the transmitter slow down to the worst-case 
rate required by any receiving system.


that's 1Mb/sec. This is the rate used for things like SSID broadcasts.

Once a system connects, you know from the connection handshake what speeds 
could work. No need to limit yourself to the minimum that they all can handle 
at that point.


As the number of stations in range gets larger, though, it seems unlikely that 
batching multiple packets to the same destination is a good idea at all - 
because to achieve that, one must have n_destinations * batch_size chunks of 
data queued in the system as a whole, and that gets quite large.  I suspect it 
would be better to find a lower level way to just keep the packets going out 
as fast as they arrive, so no clog occurs, and to slow down the stuff at the 
source as quickly as possible.


no, no, no

you are falling into the hardware designer trap that we just talked about :-)

you don't wait for the buffers to fill and always send full buffers; you 
opportunistically send data up to the max size.


you do want to send multiple packets if you have them waiting. Because if you 
can send 10 packets to machine A and 10 packets to machine B in the time that it 
would take to send one packet to A, one packet to B, a second packet to A and a 
second packet to B, you have a substantial win for both A and B at the cost of 
very little latency for either.
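Rough numbers make the win concrete. Assuming a hypothetical 100 us of fixed
per-transmission cost (preamble, IFS, contention - the figure is a round
number for illustration) and 1500-byte (12000-bit) packets at 300 Mb/s:

```python
def airtime_us(n_packets, pkt_bits, rate_mbps, per_txop_overhead_us=100):
    """Airtime for one transmission: fixed per-TXOP cost plus the data
    itself (Mb/s == bits per microsecond)."""
    return per_txop_overhead_us + n_packets * pkt_bits / rate_mbps

one_at_a_time = 10 * airtime_us(1, 12000, 300)   # ten TXOPs of one packet
aggregated = airtime_us(10, 12000, 300)          # one TXOP of ten packets
# one_at_a_time: 1400 us, aggregated: 500 us - same data in ~1/3 the air
```

The fixed cost dominates at small batch sizes, which is why sending ten
packets per TXOP nearly triples usable capacity in this toy example.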


If there is so little traffic that sending the packets out one at a time 
doesn't generate any congestion, then good, do that [1]. But when you max out 
the airtime

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-03 Thread David Lang

On Mon, 3 Aug 2015, dpr...@reed.com wrote:

I design and build physical layer radio hardware (using SDR reception and 
transmission in the 5 GHz and 10 GHz Amateur radio bands).


Fairness is easy in a MAC. 1 usec. is 1,000 linear feet.  If the next station 
knows when its turn is, it can start transmitting within a couple of 
microseconds of seeing the tail of the last packet, and if there is adequate 
sounding of the physical environment, you can calculate tighter bounds than 
that.  Even if the transmission is 1 Gb/sec, 1 usec. is only 1,000 bits at 
most.


That requires central coordination of the stations, something we don't have in 
wifi. Wifi lives and dies by 'listen for a gap, try transmitting, and if you 
collide, back off a random period'.


But at the end-to-end layer, today's networks are only at most 20 msec. 
end-to-end across a continent.  The most buffering you want to see on an 
end-to-end basis is 10 msec.


I disagree strongly that mice - small packets - need to be compressed. 
Most small packets are very, very latency sensitive (acks, etc.). As long as 
they are a relatively small portion of capacity, they aren't the place to 
trade latency degradation for throughput.  That's another example of focusing 
on the link rather than the end-to-end network context. (local link acks can 
be piggy-backed in various ways, so when the local Wireless Ethernet domain is 
congested, there should be no naked ack packets)


umm, you misread what I was saying. I didn't say that we should hold up 
packets in search of throughput. I said the opposite: instead of holding up 
small packets to maximize throughput, send the first small packet (and pay the 
overhead that makes it take a long time to do so), and while it is 
transmitting, additional packets will accumulate. The next transmission slot 
you have available, transmit all the packets you have pending (up to a cap).


this produces the best possible latency, but uses more air-time than the 
'normal' approach where they hold up packets for a short time to see if they can 
be combined with others.


Why does anyone measure a link in terms of a measurement of small-packet 
efficiency?  The end-to-end protocols shouldn't be sending small packets once 
a queue builds up at the source endpoint.


what if the small packets are not all from the same source? or even if they are 
from the same source IP, are from different ports? they can't just be combined 
at the IP layer.


David Lang




On Monday, August 3, 2015 12:14pm, David Lang da...@lang.hm said:




On Mon, 3 Aug 2015, dpr...@reed.com wrote:

 It's not infeasible to make queues shorter. In any case, the throughput of a
 link does not increase above the point where there is always one packet ready
 to go by the time the currently outgoing packet is completed. It physically
 cannot do better than that.

change 'one packet' to 'one transmissions worth of packets' and I'll agree

 If hardware designers can't create an interface that achieves that bound I'd
 be suspicious that they understand how to design hardware. In the case of
 WiFi, this also includes the MAC protocol being designed so that when the
 current packet on the air terminates, the next packet can be immediately
begun
 - that's a little more subtle.

on a shared medium (like radio) things are a bit messier.

There are two issues

1. You shouldn't just transmit to a new station once you finish sending to the
first. Fairness requires that you pause and give other stations a chance to
transmit as well.

2. There is per-transmission overhead (including the pause mentioned above) that
can be very significant for small packets, so there is considerable value in
sending multiple packets at once. It's a lighter version of what you run into
inserting into reliable databases. You can insert 1000 records in about the same
time you can insert 2 records separately.

The stock answer to this is for hardware and software folks to hold off on
sending anything in case there is more to send later that it can be batched
with. This maximizes throughput at the cost of latency.

What should be done instead is to send what you have immediately, and while it's
sending, queue whatever continues to arrive and the next chance you have to
send, you will have more to send. This scales the batch size with congestion,
minimizing latency at the cost of keeping the channel continually busy, but
inefficiently busy if you aren't at capacity.
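That 'send immediately, batch whatever accumulated meanwhile' behaviour can be
sketched with a tiny simulation (times are arbitrary units; this illustrates
the scaling, not driver code):

```python
def opportunistic_batches(arrival_times_us, pkt_airtime_us):
    """Send whatever is queued the moment the medium frees up; packets
    arriving during a transmission form the next batch.  Returns the
    batch sizes, which grow automatically under load."""
    batches, queue, now, i = [], 0, 0, 0
    n = len(arrival_times_us)
    while i < n or queue:
        # absorb arrivals up to 'now' into the pending batch
        while i < n and arrival_times_us[i] <= now:
            queue += 1
            i += 1
        if queue == 0:            # idle: jump ahead to the next arrival
            now = arrival_times_us[i]
            continue
        batches.append(queue)     # transmit everything pending at once
        now += pkt_airtime_us * queue
        queue = 0
    return batches

# Sparse arrivals go out one at a time; arrivals during a transmission
# coalesce into one larger batch - batch size scales with congestion.
```

With arrivals at t=0, 10, 20 and 500 and 100-unit packets, the first packet
goes out alone, the two that arrived during its transmission form a batch of
two, and the late arrival again goes out alone.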

 But my point here is that one needs to look at the efficiency of the system
as
 a whole (in context), and paradoxically to the hardware designer mindset, the
 proper way to think about that efficiency is NOT about link throughput
 maximization - instead it is an end-to-end property. One has very little to
 do with the other. Queueing doesn't affect link throughput beyond the
double
 buffering effect noted above: at most one packet queued behind the currently
 transmitting packet.

 Regarding TXOP overhead - rather than

Re: [Cerowrt-devel] wrt1900ac v1 vs v2

2015-07-07 Thread David Lang

On Tue, 7 Jul 2015, Mikael Abrahamsson wrote:


On Mon, 6 Jul 2015, John Yates wrote:


There are refurbished wrt1900ac units available quite cheap ($10 or $20
more than wrt1200ac).  I assume that they are v1 units as the v2 units have
only been on the market for a few months.  From lurking on this list I get
the sense that these will support full sqm in short order (correct?).

So what are the differences between wrt1900ac v1 and v2?  Is there any
reason to pay nearly $100 more for a v2?


v1 has Armada XP chipset which has packet accelerator HW in it that OpenWrt 
doesn't use. v2 has Armada 385 which doesn't have a packet accelerator, but 
instead has a much better CPU for forwarding packets.


So basically if you buy a v1 you'll get a third or so in forwarding 
performance over the v2 with OpenWrt. With the Linksys firmware I could 
imagine the v1 is faster than the v2. The v1 has a fan, v2 does not.


From watching the openwrt discussion, it looks like the 1900v2 and 1200 share 
the same driver, which has direct openwrt support from the vendor, while the 
1900v1 had a closed driver for which a source version was later discovered.


I would expect the future support for the 1900v2 and 1200 to be far better than 
the 1900v1 (especially as supplies of them dry up)


so the question is if 3x3 gives you enough value over 2x2 to make it worth 
getting a 1900v2 instead of a 1200


David Lang


Re: [Cerowrt-devel] performance numbers from WRT1200AC (Re: Latest build test - new sqm-scripts seem to work; cake overhead 40 didn't)

2015-07-06 Thread David Lang
It looks like the 1900v2 and the 1200 have the same chipset; I saw that the 
1900v2 got more memory and a faster CPU to bring it up to match the 1200.


My understanding is that the only difference between the two is 2x2 vs 3x3 and 
the cost.


David Lang


 On Thu, 2 Jul 2015, dpr...@reed.com wrote:


Having not bought a 1200ac yet, I was wondering if I should splurge for the 
1900ac v2 (which has lots of memory unlike the 1900ac v1).

Any thoughts on the compatibility of this with the 1200ac?

Current plans are to deploy Supermicro Mini ITX A1SRI-2558F-O Quad Core (Rangely) as my 
externally facing router and services platform, and either one of the above 
as my experimental wireless solution.



On Thursday, July 2, 2015 11:47am, Toke Høiland-Jørgensen t...@toke.dk said:




Mikael Abrahamsson swm...@swm.pp.se writes:

 Do you have a link to your .config for your builds somewhere?

 http://swm.pp.se/aqm/wrt1200ac.config

Cool, thanks!

 BUT! I have had problems getting WPA2 to work properly with this
 .config. I must have missed something that is needed that has to do
 with the password/crypto handling.

 There already is a profile for the WRT1200AC (caiman) in Chaos Calmer
 RC and trunk, so it's actually not that hard to get working. The
 biggest problem is finding all those utilities one wants and making
 sure they're compiled into the image so one doesn't have to add them
 later.

Yeah, realise that. Still have my old .config from when I used to build
cerowrt for the WNDR lying around somewhere, so will take a look at
that and make sure everything is in there :)

-Toke


Re: [Cerowrt-devel] Build instructions for regular OpenWRT with Ceropackages

2015-07-01 Thread David Lang

On Tue, 30 Jun 2015, Mikael Abrahamsson wrote:


On Tue, 30 Jun 2015, dpr...@reed.com wrote:

What happens if the SoC ports aren't saturated, but the link is GigE? That 
is, suppose this is an access link to a GigE home or office LAN with wired 
servers?


As far as I can tell, the device looks like this:

wifi2--
wifi1\|
SOC2 6-|
SOC1 5-|
WAN  4-|
LAN1 3-| (switch)
LAN2 2-|
LAN3 1-|
LAN4 0-|

LAN1-4 and SOC2 is in one vlan, and SOC1 and WAN is in a second vlan. This 
basically means there is no way to get traffic into SOC1 that goes out SOC2 
that will saturate either port, because they're both gige. Only way to 
saturate the SOC port would be if the SOC itself created traffic, for 
instance by being a fileserver, or if there is significant traffic on the 
wifi (which has PCI-E connectivity).


So it's impossible to congest SOC1 or SOC2 (egress) by running traffic 
LAN-WAN alone.


Not true: the switch doesn't give any way for traffic to get from one vlan to 
the other, so if you have gig-e connections on both sides, traffic going from 
one to the other has to go through the SoC. If there is more than 1Gb of 
traffic in either direction, the interface will be saturated.


The problem is that if you have a slower connection, the bottleneck is in the 
switch, not the SoC. You may be able to set the SoC-switch interface to 100Mb 
(make sure you have access through another interface in case you cut yourself 
off), and that would make the SoC see the queue directly.

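If you want to try that, a minimal sketch under stated assumptions: this assumes a Linux-based build where the SoC-facing port shows up as a normal ethtool-capable interface, and `eth0` is a placeholder for whatever that port is actually called on your device.

```shell
# Sketch only: drop the SoC<->switch port to 100 Mb/s so queueing happens
# on the SoC, where the qdisc can see it. "eth0" is a placeholder name.
# Make sure you have another way in first (serial console, second port)!
ethtool -s eth0 speed 100 duplex full autoneg off

# Confirm the link came back up at the forced speed:
ethtool eth0 | grep -i speed
```

Whether forcing the speed this way works depends on the switch/PHY driver in the build; some devices need swconfig or DSA-specific tooling instead.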

David Lang


Re: [Cerowrt-devel] [Bloat] [Cake] active sensing queue management

2015-06-12 Thread David Lang

On Fri, 12 Jun 2015, Daniel Havey wrote:


ii) Sebastian points out if you implement AQSM in the modem (as the paper
claims :p), you may as well BQL the modem drivers and run AQM.  *But that
doesn't work on ingress* - ingress requires tbf/htb with a set rate - but
the achievable rate is lower in peak hours. So run AQSM on ingress only!
Point being that download bloat could be improved without changing the other
end (CMTS).


This is pretty cool.  I had not considered BQL (though Dave and Jim
were evangelizing about it at the time :).  This solves the
upload/download problem which I was not able to get past in the paper.
BQL on the egress and ASQM for the ingress.  BQL will make sure that
the upload is under control so that ASQM can get a good measurement on
the download side.  Woot!  Woot!  Uncooperative ISP problem solved!

BTW...Why doesn't BQL work on the ingress?


For the same reason that AQM doesn't work on inbound connections: you're on the 
wrong side of the link :-)


implementing AQM and BQL on the ISP's device that's sending to the home user is 
a perfectly reasonable thing to do, and is really the right answer to the 
problem.


All the rest of this is a matter of: we can't get at where the problem really is 
to fix it, so let's figure out what else we can do to mitigate it.

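For reference, the usual workaround for shaping ingress from the wrong side of the link is to redirect inbound traffic through an IFB device and rate-limit it to just below the line rate, so the queue forms where an AQM like fq_codel can manage it (this is roughly what sqm-scripts automates). A hedged sketch; the interface name `eth0` and the 90mbit figure are placeholders, not recommendations:

```shell
# Sketch only -- interface name and rate are assumptions for illustration.
modprobe ifb
ip link set ifb0 up

# Redirect everything arriving on eth0 to ifb0:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
   action mirred egress redirect dev ifb0

# Shape to just under the (assumed) line rate, with fq_codel on top,
# so the bottleneck queue lives here instead of at the CMTS/DSLAM:
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 90mbit
tc qdisc add dev ifb0 parent 1:10 fq_codel
```

The cost is that you permanently give up the headroom between the shaped rate and the real line rate, which is exactly the gap ASQM tries to close by measuring the achievable rate instead of hard-coding it.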

David Lang


Re: [Cerowrt-devel] [Cake] active sensing queue management

2015-06-12 Thread David Lang

On Fri, 12 Jun 2015, Daniel Havey wrote:


On Thu, Jun 11, 2015 at 6:49 PM, David Lang da...@lang.hm wrote:

On Wed, 10 Jun 2015, Daniel Havey wrote:


We know (see Kathy and Van's paper) that AQM algorithms only work
when they are placed at the slowest queue.  However, the AQM is placed
at the queue that is capable of providing 8 Mbps and this is not the
slowest queue.  The AQM algorithm will not work in these conditions.



so the answer is that you don't deploy the AQM algorithm only at the
perimeter, you deploy it much more widely.

Eventually you get to core devices that have multiple routes they can use to
get to a destination. Those devices should notice that one route is getting
congested and start sending the packets through alternate paths.

Now, if the problem is that the aggregate of inbound packets to your
downstreams where you are the only path becomes higher than the available
downstream bandwidth, you need to be running an AQM to handle things.

David Lang



Hmmm, that is interesting.  There might be a problem with processing
power at the core, though.  It could be difficult to manage all of
those packets flying through the core routers.


And that is the question that people are looking at.

But part of the practical question is at what speeds do you start to run into 
problems?


the core of the Internet is already doing dynamic routing of packets, spreading 
them across multiple parallel paths (peering points have multiple 10G links 
between peers), so this should be more of the same, with possibly a small 
variation to use more expensive paths if the cheap ones are congested.


But as you move out from there towards the edge, the packet handling 
requirements drop rather quickly, and I'll bet that you don't have to get very 
far out before you can start affording to implement AQM algorithms. I'm betting 
that you reach that point before you get to the point in the network where you 
no longer have multiple paths available.



David does bring up an interesting point though.  The ASQM algorithm
was originally designed to solve the Uncooperative ISP problem.  I
coined the phrase, but, you can fill in your own adjective to fit your
personal favorite ISP :^)

The paper doesn't indicate this because I got roasted by a bunch of
reviewers for it, but why not use an ASQM-like algorithm in other places
than the edge?  Suppose you are Netflix and your ISP is shaping your
packets.  You can't do anything about the bandwidth reduction, but you
can at least reduce the queuing... Just food for thought. :^)


Unfortunately, if you are trapped by the ISP/Netflix peering war, reducing 
the number of packets in flight for yourself isn't going to help any. It would 
have to happen on the Netflix side of the bottleneck.


David Lang


Re: [Cerowrt-devel] [Bloat] [Cake] active sensing queue management

2015-06-12 Thread David Lang

On Fri, 12 Jun 2015, Benjamin Cronce wrote:


On 12/06/15 02:44, David Lang wrote:

On Thu, 11 Jun 2015, Sebastian Moeller wrote:



On Jun 11, 2015, at 03:05 , Alan Jenkins
alan.christopher.jenkins at gmail.com wrote:


On 10/06/15 21:54, Sebastian Moeller wrote:



One solution would be if ISPs made sure upload is 100% provisioned.
Could be cheaper than for (the higher rate) download.


Not going to happen, in my opinion, as economically unfeasible
for a publicly traded ISP. I would settle for that approach as long
as the ISP is willing to fix its provisioning so that
oversubscription episodes are reasonably rare, though.


not going to happen on any network, publicly traded or not.


Sorry if this is a tangent from where the current discussion has gone, but
I wanted to correct someone saying something is impossible.


snip


I guess I went off on this tangent because "Not going to happen, in my
opinion, as economically unfeasible" and "not going to happen on any
network, publicly traded or not" are too absolute. It can be done, it is
being done, it is being done for cheap, and being done with business-class 
professionalism. Charter Comm is 1/2 the download speed for the same
price, and they don't even have symmetrical or dedicated.


Not being oversubscribed includes the trunk. Who cares if there is no congestion 
within the ISP if you reach the trunk and everything comes to a screeching halt?


The reason I used the word "impossible" is that we only have the ability to 
make links so fast. Right now we have 10G common, 40G in places, and research 
into 100G; if you go back a few years, 1G was the limit. While the fastest 
connections have increased by a factor of 100, the home connections have 
increased by close to a factor of 1000 during that time (1.5Mb theoretical DSL 
vs 1Gb fiber), and 10G is getting cheap enough to be used for corporate 
networks.


so the ratio between the fastest link that's possible and the subscribers has 
dropped from 1000:1 to 100:1 with 10:1 uncomfortably close.


some of this can be covered up by deploying more lines, but that only goes so 
far.


If you are trying to guarantee no bandwidth limits on your network under any 
conditions, you need that sort of bandwidth between all possible sets of 
customers as well, which means that as you scale to more customers, your 
bandwidth requirements go up O(n^2).

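A quick illustration of that scaling (a sketch, not from the original post): guaranteeing full rate between every pair of n customers means provisioning for n*(n-1)/2 simultaneous flows.

```python
# Why "no congestion under any conditions" scales badly: the number of
# customer pairs you must provision for grows quadratically.
def pairs(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairs(n))
# 10 customers -> 45 pairs, 100 -> 4950, 1000 -> 499500: O(n^2) growth.
```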

And then the pricing comes into it. 1G fiber to the home at $100/month (if it 
exists at all) isn't going to pay for that sort of idle bandwidth.


But the good thing is, you don't actually need that much bandwidth to keep your 
customers happy either.


If the 'guaranteed bandwidth' is written into the contracts properly, there is a 
penalty to the company if they can't provide you the bandwidth. That leaves them 
the possibility of not actually building out the O(n^2) network, just keeping 
ahead of actual requirements, and occasionally paying a penalty when they don't.


David Lang


Re: [Cerowrt-devel] [Bloat] [Cake] active sensing queue management

2015-06-12 Thread David Lang

On Fri, 12 Jun 2015, Sebastian Moeller wrote:


Hi David,

On Jun 12, 2015, at 03:44 , David Lang da...@lang.hm wrote:

The problem shows up when either usage changes rapidly, or the network 
operator is not keeping up with required upgrades as gradual usage changes 
happen (including when they are prevented from upgrading because a peer won't 
cooperate)


	Good point, I was too narrowly focussing on the access link, but peering 
is another “hot potato”. Often end users try to use traceroute and friends and 
VPNs to uncongested peers to discern access-network congestion from 
“under-peering”, even though at the end of the day the effects are similar. 
Thinking of it, I believe that under-peering shows up more as a bandwidth loss 
as compared to the combined bandwidth loss and latency increase often seen on 
the access side (but this is conjecture, as I have never seen traffic data from 
a congested peering connection).


At the peering point where congestion happens, queues will expand to the max 
available and packets will be dropped. Level 3 had some posts showing stats of a 
congested peering point back before Netflix caved a year or so ago.




As for the 100% provisioning ideal, think through the theoretical aggregate 
and realize that before you get past very many layers, you get to a bandwidth 
requirement that it's not technically possible to provide.


	Well, I still believe that an ISP is responsible for keeping its part of a 
contract by at least offering a considerable percentage of the sold access 
bandwidth into its own core network. But 100% is not going to be that 
percentage, I agree, and I am happy to accept congestion as long as it is 
transient (and I do not mean that it gets bad every evening but clears up over 
night, but rather that the ISP increases bandwidth to keep congestion periods 
rare)…


I think that the target the ISP should be striving for is 0 congestion, not 0 
overprovisioning. And I deliberately say "striving for" because it's not going 
to be perfect. And the question of whether you are congested or not depends on 
the timescale you look at (at any instant, a given link is either 0% utilized or 
100% utilized, nowhere in between)


Good AQM will make it so that when congestion does happen, all that happens is 
that the bandwidth ends up getting shared. Everyone continues to operate with 
minimal noticeable degradation (ideally all suffered by the non-time-critical 
bulk data transfers)


After all, if you are streaming a video from netflix, does it really matter if 
the 2-hour movie is entirely delivered to your local box in 1 min instead of 20 
min? If you're downloading it to put it on a mobile device before you leave, 
possibly, but if you're just watching it, not at all.


David Lang


Re: [Cerowrt-devel] [Cake] active sensing queue management

2015-06-11 Thread David Lang

On Wed, 10 Jun 2015, Daniel Havey wrote:


We know (see Kathy and Van's paper) that AQM algorithms only work
when they are placed at the slowest queue.  However, the AQM is placed
at the queue that is capable of providing 8 Mbps and this is not the
slowest queue.  The AQM algorithm will not work in these conditions.


so the answer is that you don't deploy the AQM algorithm only at the perimeter, 
you deploy it much more widely.


Eventually you get to core devices that have multiple routes they can use to get 
to a destination. Those devices should notice that one route is getting 
congested and start sending the packets through alternate paths.


Now, if the problem is that the aggregate of inbound packets to your downstreams 
where you are the only path becomes higher than the available downstream 
bandwidth, you need to be running an AQM to handle things.


David Lang



Re: [Cerowrt-devel] [Bloat] [Cake] active sensing queue management

2015-06-11 Thread David Lang

On Thu, 11 Jun 2015, Sebastian Moeller wrote:



On Jun 11, 2015, at 03:05 , Alan Jenkins alan.christopher.jenk...@gmail.com 
wrote:


On 10/06/15 21:54, Sebastian Moeller wrote:

One solution would be if ISPs made sure upload is 100% provisioned. Could be 
cheaper than for (the higher rate) download.


	Not going to happen, in my opinion, as economically unfeasible for a 
publicly traded ISP. I would settle for that approach as long as the ISP is 
willing to fix its provisioning so that oversubscription episodes are 
reasonably rare, though.


not going to happen on any network, publicly traded or not.

The question is not "can the theoretical max of all downstream devices exceed 
the upstream bandwidth", because that answer is going to be yes for every 
network built, LAN or WAN, but rather "does the demand in practice of the 
combined downstream devices exceed the upstream bandwidth for long enough to be 
a problem"


It's not even a matter of by what percentage they are oversubscribed.

Someone with 100 1.5Mb DSL lines downstream and a 50Mb upstream (30% of 
theoretical requirements) is probably a lot worse off than someone with 100 1G 
lines downstream and a 10G upstream (10% of theoretical requirements), because 
it's far less likely that the users of the 1G lines are actually going to 
saturate them (let alone simultaneously for a noticeable timeframe), while it's 
very likely that the users of the 1.5Mb DSL lines are going to saturate their 
lines for extended timeframes.

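The arithmetic behind those two examples, as a quick sanity check (note that 50/150 is actually about 33%, which the post rounds to 30%):

```python
# Provisioning ratio = upstream capacity / theoretical downstream demand.
dsl_total = 100 * 1.5        # Mb/s theoretical demand from 100 DSL lines
dsl_ratio = 50 / dsl_total   # 50 Mb/s upstream

fiber_total = 100 * 1000            # Mb/s from 100 gigabit lines
fiber_ratio = 10_000 / fiber_total  # 10 Gb/s upstream

print(f"DSL: {dsl_ratio:.0%} provisioned, fiber: {fiber_ratio:.0%} provisioned")
```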

The problem shows up when either usage changes rapidly, or the network operator 
is not keeping up with required upgrades as gradual usage changes happen 
(including when they are prevented from upgrading because a peer won't 
cooperate)


As for the 100% provisioning ideal, think through the theoretical aggregate 
and realize that before you get past very many layers, you get to a bandwidth 
requirement that it's not technically possible to provide.


David Lang


Re: [Cerowrt-devel] [Bloat] sqm-scripts on WRT1900AC

2015-06-02 Thread David Lang
Ok, I think I'm understanding that unless the client is MIMO enabled, MIMO on 
the AP doesn't do any good. I'm focused on the high-density conference type 
setup and was wondering if going to these models would result in any more 
effective airtime. It sounds like the answer is no.


David Lang

On Fri, 29 May 2015, Pedro Tumusok wrote:


Is the 1900AC MU-MIMO? If not, then it's still normal airtime limitations,
unless you consider concurrent 2x2 2.4GHz and 3x3 5GHz as a MU setup.
Also there are very few devices with a builtin 3x3 ac client. From the top
of my head I can not think of one.

Pedro

On Tue, May 26, 2015 at 1:55 AM, David Lang da...@lang.hm wrote:


looking at the 1900ac vs the 1200ac, one question. what is needed to
benefit from the 3x3 vs the 2x2?

In theory the 3x3 can transmit to three clients at the same time while the
2x2 can transmit to two clients at the same time.

But does the client need specific support for this? (mimo or -ac) Or will
this work for 802.11n clients as well?

David Lang


On Sat, 23 May 2015, Aaron Wood wrote:

 Date: Sat, 23 May 2015 23:19:19 -0700

From: Aaron Wood wood...@gmail.com
To: bloat bl...@lists.bufferbloat.net,
cerowrt-devel cerowrt-devel@lists.bufferbloat.net,
Dave Taht dave.t...@gmail.com
Subject: Re: [Bloat] sqm-scripts on WRT1900AC


After more tweaking, and after Comcast's network settled down some, I have
some rather nice results:


http://burntchrome.blogspot.com/2015/05/sqm-scripts-on-linksys-wrt1900ac-part-1.html



So it looks like the WRT1900AC is a definite contender for our faster
cable
services.  I'm not sure if it will hold out to the 300Mbps that you want,
Dave, but it's got plenty for what Comcast is selling right now.

-Aaron

P.S.  Broken wifi to the MacBook was a MacBook issue, not a router issue
(sorted itself out after I put the laptop into monitor mode to capture
packets).

On Sat, May 23, 2015 at 10:17 PM, Aaron Wood wood...@gmail.com wrote:

 All,


I've been lurking on the OpenWRT forum, looking to see when the CC builds
for the WRT1900AC stabilized, and they seem to be so (for a very
beta-ish
version of stable).

So I went ahead and loaded up the daily ( CHAOS CALMER (Bleeding Edge,
r45715)).

After getting Luci and sqm-scripts installed, I did a few baseline tests.
Wifi to the MacBook Pro is...  broken.  30Mbps vs. 90+ on the stock
firmware.  iPhone is fine (80-90Mbps download speed from the internet).

After some rrul runs, this is what I ended up with:
http://www.dslreports.com/speedtest/538967

sqm-scripts are set for:
100Mbps download
10Mbps upload
fq_codel
ECN
no-squash
don't ignore

Here's a before run, with the stock firmware:
http://www.dslreports.com/speedtest/337392

So, unfortunately, it's still leaving 50Mbps on the table.

However, if I set the ingress limit higher (130Mbps), buffering is still
controlled.  Not as well, though.  from +5ms to +10ms, with lots of
jitter.  But it still looks great to the dslreports test:
http://www.dslreports.com/speedtest/538990

But the upside?  Load is practically nil.  The WRT1900AC, with its
dual-core processor, is more than enough to keep up with this (from a load
point of view), but it seems like the bottleneck isn't the raw CPU power
(cache?).

I'll get a writeup with graphs on the blog tomorrow (I hope).

-Aaron



___
Bloat mailing list
bl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat










Re: [Cerowrt-devel] [Bloat] sqm-scripts on WRT1900AC

2015-06-02 Thread David Lang

I'm not sure what the difference between MIMO and MU-MIMO is, pointer please?

David Lang

On Fri, 29 May 2015, Pedro Tumusok wrote:


From my understanding you need an AP that supports MU-MIMO, and then you
have different scenarios of how to support clients. If the client
supports MU-MIMO then you get the full MU-MIMO experience. If the client
does not support it, you do not get the full MU-MIMO experience for that
client or those clients.

For example, if you've got an 8x8 MU-MIMO AP, then you can for instance use 4
of those 8 streams for a MU-MIMO setup, and the last 4 can be used for 4 groups
of single-stream connections, or one 3x3 and one 1x1. And probably many more
combinations like that.
But I might be way off on this; I do not have any wave 2 products to play
with yet.

Pedro


On Fri, May 29, 2015 at 11:09 AM, David Lang da...@lang.hm wrote:


Ok, I think I'm understanding that unless the client is MIMO enabled, MIMO
on the AP doesn't do any good. I'm focused on the high-density
conference type setup and was wondering if going to these models would
result in any more effective airtime. It sounds like the answer is no.

David Lang


On Fri, 29 May 2015, Pedro Tumusok wrote:

 Is the 1900AC MU-MIMO? If not, then it's still normal airtime limitations,

unless you consider concurrent 2x2 2.4GHz and 3x3 5GHz as a MU setup.
Also there are very few devices with a builtin 3x3 ac client. From the top
of my head I can not think of one.

Pedro

On Tue, May 26, 2015 at 1:55 AM, David Lang da...@lang.hm wrote:

 looking at the 1900ac vs the 1200ac, one question. what is needed to

benefit from the 3x3 vs the 2x2?

In theory the 3x3 can transmit to three clients at the same time while
the
2x2 can transmit to two clients at the same time.

But does the client need specific support for this? (mimo or -ac) Or will
this work for 802.11n clients as well?

David Lang


On Sat, 23 May 2015, Aaron Wood wrote:

 Date: Sat, 23 May 2015 23:19:19 -0700


From: Aaron Wood wood...@gmail.com
To: bloat bl...@lists.bufferbloat.net,
cerowrt-devel cerowrt-devel@lists.bufferbloat.net,
Dave Taht dave.t...@gmail.com
Subject: Re: [Bloat] sqm-scripts on WRT1900AC


After more tweaking, and after Comcast's network settled down some, I have
some rather nice results:



http://burntchrome.blogspot.com/2015/05/sqm-scripts-on-linksys-wrt1900ac-part-1.html



So it looks like the WRT1900AC is a definite contender for our faster
cable
services.  I'm not sure if it will hold out to the 300Mbps that you
want,
Dave, but it's got plenty for what Comcast is selling right now.

-Aaron

P.S.  Broken wifi to the MacBook was a MacBook issue, not a router issue
(sorted itself out after I put the laptop into monitor mode to capture
packets).

On Sat, May 23, 2015 at 10:17 PM, Aaron Wood wood...@gmail.com wrote:

 All,



I've been lurking on the OpenWRT forum, looking to see when the CC
builds
for the WRT1900AC stabilized, and they seem to be so (for a very
beta-ish
version of stable).

So I went ahead and loaded up the daily ( CHAOS CALMER (Bleeding Edge,
r45715)).

After getting Luci and sqm-scripts installed, I did a few baseline
tests.
Wifi to the MacBook Pro is...  broken.  30Mbps vs. 90+ on the stock
firmware.  iPhone is fine (80-90Mbps download speed from the internet).

After some rrul runs, this is what I ended up with:
http://www.dslreports.com/speedtest/538967

sqm-scripts are set for:
100Mbps download
10Mbps upload
fq_codel
ECN
no-squash
don't ignore

Here's a before run, with the stock firmware:
http://www.dslreports.com/speedtest/337392

So, unfortunately, it's still leaving 50Mbps on the table.

However, if I set the ingress limit higher (130Mbps), buffering is
still
controlled.  Not as well, though.  from +5ms to +10ms, with lots of
jitter.  But it still looks great to the dslreports test:
http://www.dslreports.com/speedtest/538990

But the upside?  Load is practically nil.  The WRT1900AC, with its
dual-core processor, is more than enough to keep up with this (from a load
point of view), but it seems like the bottleneck isn't the raw CPU power
(cache?).

I'll get a writeup with graphs on the blog tomorrow (I hope).

-Aaron
















Re: [Cerowrt-devel] [Bloat] sqm-scripts on WRT1900AC

2015-05-28 Thread David Lang
looking at the 1900ac vs the 1200ac, one question. what is needed to benefit 
from the 3x3 vs the 2x2?


In theory the 3x3 can transmit to three clients at the same time while the 2x2 
can transmit to two clients at the same time.


But does the client need specific support for this? (mimo or -ac) Or will this 
work for 802.11n clients as well?


David Lang


On Sat, 23 May 2015, Aaron Wood wrote:


Date: Sat, 23 May 2015 23:19:19 -0700
From: Aaron Wood wood...@gmail.com
To: bloat bl...@lists.bufferbloat.net,
cerowrt-devel cerowrt-devel@lists.bufferbloat.net,
Dave Taht dave.t...@gmail.com
Subject: Re: [Bloat] sqm-scripts on WRT1900AC

After more tweaking, and after Comcast's network settled down some, I have
some rather nice results:

http://burntchrome.blogspot.com/2015/05/sqm-scripts-on-linksys-wrt1900ac-part-1.html



So it looks like the WRT1900AC is a definite contender for our faster cable
services.  I'm not sure if it will hold out to the 300Mbps that you want,
Dave, but it's got plenty for what Comcast is selling right now.

-Aaron

P.S.  Broken wifi to the MacBook was a MacBook issue, not a router issue
(sorted itself out after I put the laptop into monitor mode to capture
packets).

On Sat, May 23, 2015 at 10:17 PM, Aaron Wood wood...@gmail.com wrote:


All,

I've been lurking on the OpenWRT forum, looking to see when the CC builds
for the WRT1900AC stabilized, and they seem to be so (for a very beta-ish
version of stable).

So I went ahead and loaded up the daily ( CHAOS CALMER (Bleeding Edge,
r45715)).

After getting Luci and sqm-scripts installed, I did a few baseline tests.
Wifi to the MacBook Pro is...  broken.  30Mbps vs. 90+ on the stock
firmware.  iPhone is fine (80-90Mbps download speed from the internet).

After some rrul runs, this is what I ended up with:
http://www.dslreports.com/speedtest/538967

sqm-scripts are set for:
100Mbps download
10Mbps upload
fq_codel
ECN
no-squash
don't ignore

Here's a before run, with the stock firmware:
http://www.dslreports.com/speedtest/337392

So, unfortunately, it's still leaving 50Mbps on the table.

However, if I set the ingress limit higher (130Mbps), buffering is still
controlled.  Not as well, though.  from +5ms to +10ms, with lots of
jitter.  But it still looks great to the dslreports test:
http://www.dslreports.com/speedtest/538990

But the upside?  Load is practically nil.  The WRT1900AC, with its
dual-core processor, is more than enough to keep up with this (from a load
point of view), but it seems like the bottleneck isn't the raw CPU power
(cache?).

I'll get a writeup with graphs on the blog tomorrow (I hope).

-Aaron



Re: [Cerowrt-devel] better business bufferbloat monitoring tools?

2015-05-12 Thread David Lang

On Tue, 12 May 2015, Dave Taht wrote:


One thread bothering me on dslreports.com is that some folk seem to
think you only get bufferbloat if you stress test the network, whereas
transient bufferbloat is happening all the time, everywhere.

On one of my main sqm'd network gateways, day in, day out, it reports
about 6000 drops or ecn marks on ingress, and about 300 on egress.
Before I doubled the bandwidth that main box got, the drop rate used
to be much higher, and a great deal of the bloat, drops, etc, has now
moved into the wifi APs deeper into the network where I am not
monitoring it effectively.

I would love to see tools like mrtg, cacti, nagios and smokeping[1] be
more closely integrated, with bloat related plugins, and in
particular, as things like fq_codel and other ecn enabled aqms deploy,
start also tracking congestive events like loss and ecn CE markings on
the bandwidth tracking graphs.

This would counteract to some extent the classic 5 minute bandwidth
summaries everyone looks at, that hide real traffic bursts, latencies
and loss at sub 5 minute timescales.


The problem is that too many people don't realize that network utilization is 
never 50%; it's always 0% or 100% (if you look at a small enough timeslice). 
With a 5-minute average, 20% utilization could be 100% maxed out and buffering 
for 60 seconds, and then idle for the remainder of the time.


I always set my graphs for 1-minute samples (and am very tempted to go shorter) 
just because 5 minutes hides so much.

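A minimal illustration of that point (a sketch with made-up per-second samples): a link that is 100% busy for 60 seconds and idle for 240 shows up as a tame 20% in a 5-minute average, hiding a full minute of saturation and whatever queueing came with it.

```python
# Per-second utilization samples: one minute saturated, four minutes idle.
samples = [1.0] * 60 + [0.0] * 240

five_min_avg = sum(samples) / len(samples)
one_min_avgs = [sum(samples[i:i + 60]) / 60 for i in range(0, 300, 60)]

print(five_min_avg)   # 0.2 -> reported as "20% utilized"
print(one_min_avgs)   # [1.0, 0.0, 0.0, 0.0, 0.0] -> the burst is visible
```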

David Lang


mrtg and cacti rely on SNMP. While loss statistics are deeply part of
SNMP, I am not aware of there being a MIB for CE events, and a quick
google search was unrevealing. ?

There is also a need for more cross-network monitoring using tools
such as that done by this excellent paper.

http://www.caida.org/publications/papers/2014/measurement_analysis_internet_interconnection/measurement_analysis_internet_interconnection.pdf

[1] the network monitoring tools market is quite vast and has many
commercial applications, like intermapper, forks of nagios, vendor
specific producs from cisco, etc, etc. Far too many to list, and so
far as I know, none are reporting ECN related stats, nor combining
latency and loss with bandwidth graphs. I would love to know if any
products, commercial or open source, did

--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: [Cerowrt-devel] Fwd: [IP] To keep a Boeing Dreamliner flying, reboot once every 248 days A software bug again

2015-05-04 Thread David Lang

On Mon, 4 May 2015, Stephen Hemminger wrote:


On Mon, 4 May 2015 10:21:53 -0700 (PDT)
David Lang da...@lang.hm wrote:


The kernel starts the clock at a negative offset so that it hits a wrap-around 
not that long after startup.

David Lang


That went in during the 2.5 development cycle. I wouldn't have been surprised if 
Boeing was using a 2.4 kernel, given the length of development time for a flight 
chassis.


I'd actually be a bit surprised if they were using Linux for this at all.

I was referring to Dave Taht's desire to test embedded devices.

David Lang


Re: [Cerowrt-devel] Fwd: [IP] To keep a Boeing Dreamliner flying, reboot once every 248 days A software bug again

2015-05-04 Thread David Lang
The kernel starts the clock at a negative offset so that it hits a wrap-around 
not long after startup.

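For the curious: mainline Linux defines INITIAL_JIFFIES as `(unsigned long)(unsigned int)(-300*HZ)`, so the 32-bit jiffies counter wraps about five minutes after boot, and wraparound bugs surface in testing rather than weeks into an uptime. A toy Python model of the idea (HZ value chosen for illustration):

```python
# Model a 32-bit tick counter started just below the wrap point,
# the way Linux initializes jiffies.
HZ = 100                 # ticks per second (illustrative value)
MASK = 0xFFFFFFFF        # 32-bit wrap
initial_jiffies = (-300 * HZ) & MASK   # i.e. 2**32 - 300*HZ

def jiffies_at(seconds_after_boot: int) -> int:
    return (initial_jiffies + seconds_after_boot * HZ) & MASK

print(jiffies_at(0))    # a huge value just below 2**32
print(jiffies_at(300))  # 0 -- the counter wrapped five minutes after "boot"
```

Code that compares raw counter values breaks at that wrap, which is why the kernel provides the wraparound-safe time_after()/time_before() macros.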

David Lang

On Sat, 2 May 2015, Dave Taht wrote:


Date: Sat, 2 May 2015 09:42:07 -0700
From: Dave Taht dave.t...@gmail.com
To: cerowrt-devel@lists.bufferbloat.net
cerowrt-devel@lists.bufferbloat.net
Subject: [Cerowrt-devel] Fwd: [IP] To keep a Boeing Dreamliner flying,
reboot once every 248 days A software bug again

Somehow I really would like to be able to run a firmware's clock
forward WAY faster and transparently to catch bugs like this.


-- Forwarded message --
From: David Farber far...@gmail.com
Date: Sat, May 2, 2015 at 7:41 AM
Subject: [IP] To keep a Boeing Dreamliner flying, reboot once every
248 days A software bug again
To: ip i...@listbox.com




http://www.engadget.com/2015/05/01/boeing-787-dreamliner-software-bug/#continued 
http://www.engadget.com/2015/05/01/boeing-787-dreamliner-software-bug/#continued





--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: [Cerowrt-devel] [Bloat] capturing packets and applying qdiscs

2015-03-27 Thread David Lang

I gathered a bunch of stats from the Scale conference this year

http://lang.hm/scale/2015/stats/

this includes very frequent dumps of transmission speed data per MAC address per 
AP


David Lang

On Fri, 27 Mar 2015, Isaac Konikoff wrote:


Thanks for pointing out horst.

I've been trying wireshark io graphs such as:
retry comparison:  wlan.fc.retry==0 (line) to wlan.fc.retry==1 (impulse)
beacon delays:  wlan.fc.type_subtype==0x08 AVG frame.time_delta_displayed

I've uploaded my pcap files, netperf-wrapper results and lanforge script 
reports which have some aggregate graphs below all of the pie charts. The 
pcap files with 64sta in the name correspond to the script reports.


candelatech.com/downloads/wifi-reports/trial1

I'll upload more once I try the qdisc suggestions and I'll generate 
comparison plots.


Isaac

On 03/27/2015 10:21 AM, Aaron Wood wrote:



On Fri, Mar 27, 2015 at 8:08 AM, Richard Smith smithb...@gmail.com wrote:


Using horst I've discovered that the major reason our WiFi network
sucks is because 90% of the packets are sent at the 6mbit rate. 
Most of the rest show up in the 12 and 24mbit zone with a tiny
fraction of them using the higher MCS rates.

Trying to couple the radiotap info with the packet decryption to
discover the sources of those low-bit rate packets is where I've
been running into difficulty.  I can see the what but I haven't
had much luck on the why.

I totally agree with you that tools other than wireshark for
analyzing this seem to be non-existent.


Using the following filter in Wireshark should get you all that 6Mbps 
traffic:


radiotap.datarate == 6

Then it's pretty easy to dig into what those are (by wifi frame-type, at 
least).  At my network, that's mostly broadcast traffic (AP beacons and 
whatnot), as the corporate wifi has been set to use that rate as the 
broadcast rate.
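One way to automate the per-rate breakdown described above is to export just the relevant fields with tshark and tally them offline. A minimal sketch, with a few inline sample rows standing in for real export output (the capture filename and sample values are hypothetical):

```python
from collections import Counter

# Tally frame counts per radiotap datarate and frame subtype. In practice
# the rows would come from something like:
#   tshark -r capture.pcap -T fields -e radiotap.datarate -e wlan.fc.type_subtype
# (capture.pcap is a hypothetical filename); here, a few inline sample rows.
sample = """\
6\t0x08
6\t0x08
6\t0x05
24\t0x28
54\t0x28
"""

per_rate = Counter()
for line in sample.splitlines():
    rate, subtype = line.split("\t")
    per_rate[(float(rate), subtype)] += 1

for (rate, subtype), n in sorted(per_rate.items()):
    print(f"{rate:>5} Mbps  subtype {subtype}: {n} frames")
```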


without capturing the WPA exchange, the contents of the data frames can't 
be seen, of course.


-Aaron



___
Bloat mailing list
bl...@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build

2015-03-23 Thread David Lang

On Mon, 23 Mar 2015, Dave Taht wrote:


On Mon, Mar 23, 2015 at 9:17 AM, Sebastian Moeller moell...@gmx.de wrote:

Hi Dave,

I take it policing is still not cutting it then


I didn't think it would, but I was looking for an illustrative example
to use as a cluebat on people that think policing works. I have a
string of articles to write
about so many different technologies...

... and I'd felt that maybe if I merely added ecn to an existing
policer I'd get a good result, just haven't - like so many things -
got round to it. I do have reasonable hopes for bobbie, also...


, and the “hunt” for a wndr3[7|8]000 is still on?


Yep. I figure we're gonna find an x86 box to do the higher end stuff
in the near term, unless one of the new dual a9 boxen works out.


It looks like the archer c7v2 does roughly twice as well as the old cerowrt reference 
model, a decent improvement, but not yet present-safe let alone future-safe...


Well, the big part of the upgrade was from linux 3.10 to linux 3.18. I
got nearly 600mbit forwarding rates out of that (up from 340 or so) on
the wndr3800. I have not rebuilt those with the latest code, my goal
is to find *some* platform still being made to use, and the tplink has
the benefit of also doing ac...

IF you have a spare wndr3800 to reflash with what I built friday, goferit...


I have a few spare 3800s if some of you developers need one.

unfortunately I don't have a fast connection to test on.

David Lang


I think part of the bonus performance we are also getting out of cake
is in getting rid of a bunch of firewall and tc classification rules.

(New feature request for cake might be to do dscp squashing and get
rid of that rule...! I'd like cake to basically be a drop in
replacement for the sqm scripts.
I wouldn't mind if it ended up being called sqm, rather than cake, in
the long run, with what little branding we have being used. Google for
cake shaper
if you want to get a grip on how hard marketing cake would be...)

.



Best Regards
Sebastian

On Mar 23, 2015, at 01:24 , Dave Taht dave.t...@gmail.com wrote:


so I had discarded policing for inbound traffic management a long
while back due to it not
handling varying RTTs very well, the burst parameter being hard, maybe
even impossible to tune, etc.

And I'd been encouraging other people to try it for a while, with no
luck. So anyway...

1) A piece of good news is - using the current versions of cake and
cake2, that I can get on linux 3.18/chaos calmer, on the archer c7v2
shaping 115mbit download with 12mbit upload... on a cable modem...
with 5% cpu to spare. I haven't tried a wndr3800 yet...

htb + fq_codel ran out of cpu at 94mbit...

2) On the same test rig I went back to try policing. With a 10k burst
parameter, it cut download rates in half...

However, with a 100k burst parameter, on the rrul and tcp_download
tests, at a very short RTT (ethernet) I did get full throughput and
lower latency.

How to try it:

run sqm with whatever settings you want. Then plunk in the right rate
below for your downlink.

tc qdisc del dev eth0 handle ffff: ingress
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 match ip
src 0.0.0.0/0 police rate 115000kbit burst 100k drop flowid :1

I don't know how to have it match all traffic, including ipv6
traffic (anyone??), but that was encouraging.

However, the core problem with policing is that it doesn't handle
different RTTs very well, and the exact same settings on a 16ms
path cut download throughput by a factor of 10. - set to
115000kbit I got 16mbits on rrul.  :(

Still...

I have long maintained it was possible to build a better fq_codel-like
policer without doing htb rate shaping, (bobbie), and I am tempted
to give it a go in the coming months. However I tend to think
backporting the FIB patches and making cake run faster might be more
fruitful. (or finding faster hardware)

3) There may be some low hanging fruit in how hostapd operates. Right
now I see it chewing up cpu, and when running, costing 50mbit of
throughput at higher rates, doing something odd, over and over again.

clock_gettime(CLOCK_MONOTONIC, {1240, 843487389}) = 0
recvmsg(12, {msg_name(12)={sa_family=AF_NETLINK, pid=0,
groups=},
msg_iov(1)=[{\0\0\1\20\0\25\0\0\0\0\0\0\0\0\0\0;\1\0\0\0\10\0\1\0\0\0\1\0\10\0...,
16384}], msg_controllen=0, msg_flags=0}, 0) = 272
clock_gettime(CLOCK_MONOTONIC, {1240, 845060156}) = 0
clock_gettime(CLOCK_MONOTONIC, {1240, 845757477}) = 0
_newselect(19, [3 5 8 12 15 16 17 18], [], [], {3, 928211}) = 1 (in
[12], left {3, 920973})

I swear I'd poked into this and fixed it in cerowrt 3.10, but I guess
I'll have to go poking through the patch set. Something involving
random number obtaining, as best as I recall.

4) I got a huge improvement in p2p wifi tcp throughput between linux
3.18 and linux 3.18 + the minstrel-blues and andrew's minimum variance
patches - a jump of over 30% on the ubnt nanostation m5.

5) Aside from that, so

Re: [Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build

2015-03-23 Thread David Lang

On Mon, 23 Mar 2015, Jonathan Morton wrote:


On 23 Mar, 2015, at 03:45, David Lang da...@lang.hm wrote:

are we running into performance issues with fq_codel? I thought all the 
problems were with HTB or ingress shaping.


Cake is, in part, a response to the HTB problem; it is a few percent more 
efficient so far than an equivalent HTB+fq_codel combination.  It will have a 
few other novel features, too.

Bobbie is a response to the ingress-shaping problem.  A policer (with no queue) 
can be run without involving an IFB device, which we believe has a large 
overhead.


Thanks for the clarification, I hadn't put the pieces together to understand 
this.


David Lang


Re: [Cerowrt-devel] archer c7 v2, policing, hostapd, test openwrt build

2015-03-22 Thread David Lang

On Mon, 23 Mar 2015, Jonathan Morton wrote:


On 23 Mar, 2015, at 02:24, Dave Taht dave.t...@gmail.com wrote:

I have long maintained it was possible to build a better fq_codel-like
policer without doing htb rate shaping, (bobbie), and I am tempted
to give it a go in the coming months.


I have a hazy picture in my mind, now, of how it could be made to work.

A policer doesn’t actually maintain a queue, but it is possible to calculate 
when the currently-arriving packet would be scheduled for sending if a shaped 
FIFO was present, in much the same way that cake actually performs such 
scheduling at the head of a real queue.  The difference between that time and 
the current time is a virtual sojourn time which can be fed into the Codel 
algorithm.  Then, when Codel says to drop a packet, you do so.

Because there’s no queue management, timer interrupts nor flow segregation, the 
overhead should be significantly lower than an actual queue.  And there’s a 
reasonable hope that involving Codel will give better results than either a 
brick-wall or a token bucket.
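A toy model of that virtual-sojourn idea, with a heavily simplified stand-in for Codel's control law (the real algorithm's drop scheduling is more involved; the constants and traffic pattern here are illustrative only):

```python
# Minimal sketch of the queueless policer described above: keep only a
# virtual departure clock for an imaginary shaped FIFO, and feed the
# resulting virtual sojourn time into a crude Codel-style drop decision.
TARGET = 0.005      # 5 ms sojourn target
INTERVAL = 0.100    # 100 ms above target before dropping starts

class VirtualPolicer:
    def __init__(self, rate_bps):
        self.rate = rate_bps
        self.virtual_departure = 0.0   # when the imaginary FIFO next drains
        self.above_since = None        # when sojourn first exceeded TARGET

    def offer(self, now, size_bytes):
        """Return True to forward the packet, False to drop it."""
        # When would this packet depart if a real shaped FIFO existed?
        start = max(now, self.virtual_departure)
        departure = start + size_bytes * 8 / self.rate
        sojourn = departure - now      # virtual sojourn time

        drop = False
        if sojourn <= TARGET:
            self.above_since = None
        else:
            if self.above_since is None:
                self.above_since = now
            elif now - self.above_since >= INTERVAL:
                drop = True            # real Codel would schedule drops here

        if not drop:
            self.virtual_departure = departure  # dropped packets never "queue"
        return not drop

# Offer 1500-byte packets every 1 ms (12 Mbit/s) to a 1 Mbit/s virtual shaper.
p = VirtualPolicer(rate_bps=1_000_000)
sent = sum(p.offer(now=i / 1000, size_bytes=1500) for i in range(1000))
print(sent, "of 1000 packets forwarded")
```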


are we running into performance issues with fq_codel? I thought all the problems 
were with HTB or ingress shaping.


David Lang


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David Lang

On Fri, 20 Mar 2015, Michael Welzl wrote:


On 20. mar. 2015, at 17.31, Jonathan Morton chromati...@gmail.com wrote:



On 20 Mar, 2015, at 16:54, Michael Welzl mich...@ifi.uio.no wrote:

I'd like people to understand that packet loss often also comes with delay - 
for having to retransmit.


Or, turning it upside down, it’s always a win to drop packets (in the service 
of signalling congestion) if the induced delay exceeds the inherent RTT.


Actually, no: as I said, the delay caused by a dropped packet can be more than 
1 RTT - even much more under some circumstances. Consider this quote from the 
intro of https://tools.ietf.org/html/draft-dukkipati-tcpm-tcp-loss-probe-01  :


You are viewing this as a question to drop a packet or not drop a packet.

The problem is that isn't the actual question.

The question is to drop a packet early and have the sender slow down, or wait 
until the sender has filled the buffer to the point that all traffic (including 
acks) is experiencing multi-second latency and then drop a bunch of packets.
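The multi-second figure falls straight out of buffer depth divided by drain rate; a quick check with assumed, illustrative numbers (a 256 KB modem buffer on a 1 Mbit/s uplink):

```python
# How a full buffer becomes multi-second latency: queuing delay is simply
# buffer depth divided by drain rate. The buffer size and link rate are
# assumed, illustrative numbers, not measurements from this thread.
buffer_bytes = 256 * 1024          # 256 KB of buffering in the modem
uplink_bps = 1_000_000             # 1 Mbit/s upstream

delay_s = buffer_bytes * 8 / uplink_bps
print(f"a full buffer adds {delay_s:.2f} s of delay")
```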


In theory ECN would allow for feedback to the sender to have it slow down 
without any packet being dropped, but in the real world it doesn't work that 
well.


1. If you mark packets as congested if they have ECN and drop them if they 
don't, programmers will mark everything ECN (and not slow transmission) because 
doing so gives them an advantage over applications that don't mark their packets 
with ECN


  (i.e., marking packets with ECN gives them an advantage in mixed environments)

2. If you mark packets as congested at a lower level than where you drop them, 
no programmer is going to enable ECN because flows with ECN will be prioritized 
below flows without ECN


If everyone uses ECN you don't have a problem, but if only some 
users/applications do, there's no way to make it equal, so one or the other is 
going to have an advantage, and programmers will game the system to do whatever 
gives them the advantage


David Lang


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David Lang

On Sat, 21 Mar 2015, Jonathan Morton wrote:


On 21 Mar, 2015, at 02:25, David Lang da...@lang.hm wrote:

As I said, there are two possibilities

1. if you mark packets sooner than you would drop them, advantage non-ECN

2. if you mark packets and don't drop them until higher levels, advantage ECN, 
and big advantage to fake ECN


3: if you have flow isolation with drop-from-longest-queue-on-overflow, faking 
ECN doesn’t matter to other traffic - it just turns the faker’s allocation of 
queue into a dumb, non-AQM one.  No problem.


so if every flow is isolated so that what it generates has no effect on any 
other traffic, what value does ECN provide?


and how do you decide what the fair allocation of bandwidth is between all the 
threads?


David Lang


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David Lang

On Sat, 21 Mar 2015, Michael Welzl wrote:


On 21. mar. 2015, at 01.03, David Lang da...@lang.hm wrote:

On Fri, 20 Mar 2015, Michael Welzl wrote:


On 20. mar. 2015, at 17.31, Jonathan Morton chromati...@gmail.com wrote:

On 20 Mar, 2015, at 16:54, Michael Welzl mich...@ifi.uio.no wrote:
I'd like people to understand that packet loss often also comes with delay - 
for having to retransmit.

Or, turning it upside down, it’s always a win to drop packets (in the service 
of signalling congestion) if the induced delay exceeds the inherent RTT.


Actually, no: as I said, the delay caused by a dropped packet can be more than 
1 RTT - even much more under some circumstances. Consider this quote from the 
intro of https://tools.ietf.org/html/draft-dukkipati-tcpm-tcp-loss-probe-01  :


You are viewing this as a question to drop a packet or not drop a packet.

The problem is that isn't the actual question.

The question is to drop a packet early and have the sender slow down, or wait 
until the sender has filled the buffer to the point that all traffic 
(including acks) is experiencing multi-second latency and then drop a bunch 
of packets.


In theory ECN would allow for feedback to the sender to have it slow down 
without any packet being dropped, but in the real world it doesn't work that 
well.


I think it's about time we finally turn it on in the real world.


1. If you mark packets as congested if they have ECN and drop them if they 
don't, programmers will mark everything ECN (and not slow transmission) 
because doing so gives them an advantage over applications that don't mark 
their packets with ECN


I heard this before but don't buy this as being a significant problem (and 
haven't seen evidence thereof either). Getting more queue space and 
occasionally getting a packet through that others don't isn't that much of an 
advantage - it comes at the cost of latency for your own application too 
unless you react to congestion.


but the router will still be working to reduce traffic, so more non-ECN flows 
will get packets dropped to reduce the load




 marking packets with ECN gives an advantage to them in mixed environments

2. If you mark packets as congested at a lower level than where you drop 
them, no programmer is going to enable ECN because flows with ECN will be 
prioritized below flows without ECN


Well longer story. Let me just say that marking where you would otherwise 
drop would be fine as a starting point. You don't HAVE to mark lower than 
you'd drop.



If everyone uses ECN you don't have a problem, but if only some 
users/applications do, there's no way to make it equal, so one or the other 
is going to have an advantage, and programmers will game the system to do 
whatever gives them the advantage


I don't buy this at all. Game to gain what advantage? Anyway I can be more 
aggressive than everyone else if I want to, by backing off less, or not 
backing off at all, with or without ECN. Setting ECN-capable lets me do this 
with also getting a few more packets through without dropping - but packets 
get dropped at the hard queue limit anyway. So what's the big deal? What is 
the major gain that can be gained over others?


for gamers, even a small gain can be major. Don't forget that there's also the 
perceived advantage: "If I do this, everyone else's packets will be dropped and 
mine will get through, WIN!!!"


David Lang


Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?

2015-03-20 Thread David Lang

On Sat, 21 Mar 2015, Steinar H. Gunderson wrote:


On Fri, Mar 20, 2015 at 05:03:16PM -0700, David Lang wrote:

1. If you mark packets as congested if they have ECN and drop them
if they don't, programmers will mark everything ECN (and not slow
transmission) because doing so gives them an advantage over
applications that don't mark their packets with ECN


I'm not sure if this is actually true. Somehow TCP stacks appear to be tricky
enough to mess with that the people who are capable of gaming congestion
control algorithms are also wise enough not to do so. Granted, we are seeing
some mild IW escalation, but you could very well make a TCP that's
dramatically unfair to everything else and deploy that on your CDN, and
somehow we're not seeing that.


It doesn't take deep mucking with the TCP stack. A simple iptables rule to OR a 
bit on as it's leaving the box would make the router think that the system has 
ECN enabled (or do it on your local gateway if you think it gives you higher 
priority over the wider network)


If you start talking about ECN and UDP things are even simpler, there's no need 
to go through the OS stack at all, craft your own packets and send the raw 
packets
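Claiming ECN capability from a raw packet really is just two bits: ECN occupies the low two bits of the old IPv4 TOS byte (RFC 3168), with 10 meaning ECT(0) and 11 meaning CE. A minimal sketch building such a header by hand (documentation addresses; checksum left at zero for brevity):

```python
import struct

# ECN lives in the low two bits of the old IPv4 TOS byte (RFC 3168):
# 00 = Not-ECT, 01 = ECT(1), 10 = ECT(0), 11 = CE (congestion experienced).
ECT0 = 0b10

def ipv4_header(src, dst, payload_len, ecn=0, dscp=0):
    """Build a minimal 20-byte IPv4 header (checksum left at 0 for brevity)."""
    ver_ihl = (4 << 4) | 5                 # IPv4, 5 x 32-bit words
    tos = (dscp << 2) | ecn                # DSCP in the high 6 bits, ECN low 2
    total = 20 + payload_len
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, tos, total,
                       0, 0,               # id, flags/fragment offset
                       64, 17, 0,          # TTL, protocol=UDP, checksum
                       bytes(map(int, src.split("."))),
                       bytes(map(int, dst.split("."))))

hdr = ipv4_header("192.0.2.1", "192.0.2.2", payload_len=8, ecn=ECT0)
print("ECN bits:", hdr[1] & 0b11)          # packet now claims ECN capability
```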



(OK, concession #2, “download accelerators” are doing really bad things with
multiple connections to gain TCP unfairness, but that's on the client side
only, not the server side.)

Based on this, I'm not convinced that people would bulk-mark their packets as
ECN-capable just to get ahead in the queues.


Given the money they will spend and the cargo-cult steps that gamers will do in 
the hope of gaining even a slight advantage, I can easily see this happening



It _is_ hard to know when to
drop and when to ECN-mark, though; maybe you could imagine the benefits of
ECN (for the flow itself) to be big enough that you don't actually need to
lower the drop probability (just make the ECN probability a bit higher),
but this is pure unfounded speculation on my behalf.


As I said, there are two possibilities

1. if you mark packets sooner than you would drop them, advantage non-ECN

2. if you mark packets and don't drop them until higher levels, advantage ECN, 
and big advantage to fake ECN


David Lang


Re: [Cerowrt-devel] the 3 focus problems we have

2015-03-19 Thread David Lang

On Thu, 19 Mar 2015, Aaron Wood wrote:


Subject: Re: [Cerowrt-devel] the 3 focus problems we have

On Thu, Mar 19, 2015 at 9:01 AM, Richard Smith smithb...@gmail.com wrote:


On 03/16/2015 01:49 PM, David Lang wrote:


On Mon, 16 Mar 2015, Dave Taht wrote:

 On Mon, Mar 16, 2015 at 10:34 AM, David Lang da...@lang.hm wrote:


 On Mon, 16 Mar 2015, Dave Taht wrote:


 1) We need a new box that can do inbound shaping at up to 300mbit.
So far


that box has not appeared. We have not explored policing as an
alternative.



If you are using a x86 processor, how much cpu does this take?




diddly.



this isn't sold as an AP, it's a bit pricy, but not insanely expensive
($200-250 depending on options)

1GHz dual-core AMD CPU, 4GB 1066 RAM, 2 mini PCIe, 3x Gig-E




Has anyone rrul'd a WRT1900AC running ToT firmware?  It sounds like it's
pretty beastly at 2x 1.2GHz ARM.  If it has the cache bandwidth (one of the
suspected culprits for the low bandwidth capabilities of the various MIPS
chipsets), it certainly has the compute power...

It's $250 at BestBuy, $212 on Amazon.


the WRT1200ac is also due out soon, 2x2 instead of 3x3 but otherwise slightly 
better specs


1900: 2x1.2GHz ARM, 256M RAM, 128M flash, list $250
1200: 2x1.3GHz ARM, 512M RAM, 128M flash, list $180

watching the OpenWRT mailing list, it also looks like the 1200 will have better 
support than the 1900 did (I'm not sure if the 1900 now has full support or not)


David Lang


Re: [Cerowrt-devel] SQM and PPPoE, more questions than answers...

2015-03-18 Thread David Lang

On Wed, 18 Mar 2015, Alan Jenkins wrote:


Once SQM on ge00 actually dives into the PPPoE packets and
applies/tests u32 filters the LUL increases to be almost identical to
pppoe-ge00’s if both ingress and egress classification are active and
do work. So it looks like the u32 filters I naively set up are quite
costly. Maybe there is a better way to set these up...


Later you mentioned testing for coupling with egress rate.  But you didn't 
test coupling with classification!


I switched from simple.qos to simplest.qos, and that achieved the lower 
latency on pppoe-wan.  So I think your naive u32 filter setup wasn't the real 
problem.


I did think ECN wouldn't be applied on eth1, and that would be the cause of 
the latency.  But disabling ECN didn't affect it.  See files 3 to 6:


https://www.dropbox.com/sh/shwz0l7j4syp2ea/AAAxrhDkJ3TTy_Mq5KiFF3u2a?dl=0

I also admit surprise at fq_codel working within 20%/10ms on eth1.  I thought 
it'd really hurt, by breaking the FQ part.  Now I guess it doesn't.  I still 
wonder about ECN marking, though I didn't check my endpoint is using ECN.


ECN should never increase latency, if it has any effect it should improve 
latency because you slow down sending packets when some hop along the path is 
overloaded rather than sending the packets anyway and having them sit in a 
buffer for a while. This doesn't decrease actual throughput either (although if 
you are doing a test that doesn't actually wait for all the packets to arrive at 
the far end, it will look like it decreases throughput)




3) SQM on pppoe-ge00 has a rough 20% higher egress rate than SQM on
ge00 (with ingress more or less identical between the two). Also 2)
and 3) do not seem to be coupled, artificially reducing the egress
rate on pppoe-ge00 to yield the same egress rate as seen on ge00
does not reduce the LULI to the ge00 typical 10ms, but it stays at
20ms.

For this I also have no good hypothesis, any ideas?


With classification fixed the difference in egress rate shrinks to
~10% instead of 20, so this partly seems related to the
classification issue as well.


My tests look like simplest.qos gives a lower egress rate, but not as low as 
eth1.  (Like 20% vs 40%).  So that's also similar.



So the current choice is either to accept a noticeable increase in
LULI (but note some years ago even an average of 20ms most likely
was rare in the real life) or a equally noticeable decrease in
egress bandwidth…


I guess it is back to the drawing board to figure out how to speed up
the classification… and then revisit the PPPoE question again…


so maybe the question is actually classification v.s. not?

+ IMO slow asymmetric links don't want to lose more upload bandwidth than 
necessary.  And I'm losing a *lot* in this test.
+ As you say, having only 20ms excess would still be a big improvement.  We 
could ignore the bait of 10ms right now.


vs

- lowest latency I've seen testing my link. almost suspicious. looks close 
to 10ms average, when the dsl rate puts a lower bound of 7ms on the average.
- fq_codel honestly works miracles already. classification is the knob 
people had to use previously, who had enough time to twiddle it.
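The 7 ms floor mentioned above is just the serialization time of one MTU-sized packet at the uplink rate; a sanity check with an assumed ~1.7 Mbit/s DSL uplink (a hypothetical rate, chosen to show where such a floor comes from):

```python
# Serialization delay of one MTU-sized packet sets a hard floor on average
# induced latency. The uplink rate below is an assumed, illustrative value,
# not the actual sync rate on the link discussed in this thread.
mtu_bits = 1500 * 8
uplink_bps = 1_700_000
floor_ms = mtu_bits / uplink_bps * 1000
print(f"one-packet serialization floor: {floor_ms:.1f} ms")
```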


That's what most people find when they try it. Classification doesn't result in 
throughput vs latency tradeoffs as much as it gives absolute priority to some 
types of traffic. But unless you are really up against your bandwidth limit, 
this seldom matters in the real world. As long as latency is kept low, 
everything works so you don't need to give VoIP priority over other traffic or 
things like that.


David Lang


Re: [Cerowrt-devel] Fwd: Dave's wishlist [was: Source-specific routing merged]

2015-03-17 Thread David Lang

On Tue, 17 Mar 2015, Dave Taht wrote:


My quest is always for an extra 9 of reliability. Anyplace where you can
make something more robust (even if it is out at the .99) level, I
tend to like to do in order to have the highest MTBF possible in
combination with all the other moving parts on the spacecraft (spaceship
earth).


There are different ways to add reliability

one is to try and make sure nothing ever fails

the second is to have a way of recovering when things go wrong.


Bufferbloat came about because people got trapped into the first mode of 
thinking (packets should never get lost), when the right answer ended up being 
to realize that we have a recovery method and use it.


Sometimes trying to make sure nothing ever fails adds a lot of complexity to the 
code to handle all the corner cases. Overall reliability will often improve if you 
instead simplify the normal flow, even if that adds a small number of failures, 
provided it means you can have a common set of recovery code that's well exercised 
and tested.


As you are talking about losing packets with route changes, watch out that you 
don't fall into this trap.


David Lang


Re: [Cerowrt-devel] the 3 focus problems we have

2015-03-16 Thread David Lang

On Mon, 16 Mar 2015, Dave Taht wrote:


On Mon, Mar 16, 2015 at 10:34 AM, David Lang da...@lang.hm wrote:


On Mon, 16 Mar 2015, Dave Taht wrote:

 1) We need a new box that can do inbound shaping at up to 300mbit. So far

that box has not appeared. We have not explored policing as an
alternative.



If you are using a x86 processor, how much cpu does this take?



diddly.


this isn't sold as an AP, it's a bit pricy, but not insanely expensive ($200-250 
depending on options)


1GHz dual-core AMD CPU, 4GB 1066 RAM, 2 mini PCIe, 3x Gig-E

being that it's not a consumer AP, it's not going to be as cheap, but it's also 
not going to be phased out as quickly in favor of a new model.


using a mini PCIe wifi card makes it easier to switch to something with good 
support.


does it look like it would do the job?

David Lang


  1   2   >