Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


While the 802.11 ack doesn't need to do collision avoidance, it does need
to wait a SIFS, send a PHY header, and it's typically transmitted at a lower
PHY rate.   My estimate is 40 us for that overhead.  So yes, one would have
to get rid of that too, e.g. assume a transmit without a collision
succeeded - hopefully negating the need for the 802.11 ack.


don't forget that while there is the 802.11 ack, there is also the TCP ack that 
will show up later as well.



(It does seem the wired engineers have it much easier, given point-to-point,
full duplex and wave guides.)


yep. even wireless is far easier when you can do point-to-point with highly 
directional antennas. Even if you don't do full duplex as well.


It's the mobility and unpredictability of the stations that makes things hard. 
The fact that Wifi works as well as it does is impressive, given how much 
things have changed since it was designed, and the fact that backwards 
compatibility has been maintained.


David Lang


Bob

On Mon, Jun 27, 2016 at 2:09 PM, David Lang  wrote:


On Mon, 27 Jun 2016, Bob McMahon wrote:

packet size is smallest udp payload per a socket write(), which in turn
drives the smallest packet supported by "the wire."

Here is a back of the envelope calculation giving ~100 microseconds per BE
access.

# Overhead estimates (slot time is 9 us):
# o DIFS 50 us or *AIFS (3 * 9 us) = 27 us
# o *Backoff Slot * CWmin,  9 us * rand[0,0xf] (avg) = 7 * 9 = 63 us
# o 5G 20 us
# o Multimode header 20 us
# o PLCP (symbols) 2 * 4 us = 8 us
# o *SIFS 16 us
# o ACK 40 us



isn't the ack a separate transmission by the other end of the connection?
(subject to all the same overhead)

#

# Even if there is no collision and the CW stays at the aCWmin, the
average
# backoff time incurred by CSMA/CA is aDIFS + aCWmin/2 * aSlotTime = 16 µs
# +(2+7.5)*9 µs = 101.5 µs for OFDM PHY, while the data rate with OFDM PHY
# can reach 600 Mbps in 802.11n, leading to a transmission time of 20 µs
# for a 1500 byte packet.
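The two estimates above agree to within a few microseconds; a quick sketch, using only the thread's numbers (which are estimates, not normative 802.11 parameters):

```python
# Per-access overhead as quoted above: aDIFS + aCWmin/2 * aSlotTime,
# written there as 16 us + (2 + 7.5) * 9 us. All values are the
# thread's estimates, not normative 802.11 parameters.
SLOT_US = 9.0
SIFS_US = 16.0

avg_access_us = SIFS_US + (2 + 7.5) * SLOT_US   # 101.5 us per BE access

# Transmission time of a 1500-byte packet at the 600 Mbps 802.11n rate
tx_us = 1500 * 8 / 600.0                        # 20.0 us

# ~100 us of fixed overhead per access caps un-aggregated traffic at
# roughly 1e6 / 101.5 ~ 9850 accesses/sec -- the ~10K/sec figure
# discussed elsewhere in the thread.
accesses_per_sec = 1e6 / avg_access_us
print(avg_access_us, tx_us, accesses_per_sec)
```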



well, are you talking about a 64 byte packet or a 1500 byte packet?

But this is a good example of why good aggregation is desirable. It doesn't
have to add a lot of latency. you could send 6x as much data in 2x the time by
sending 9K per transmission instead of 1.5K per transmission (+100us/7.5K)

if the aggregation is done lazily (send whatever's pending, don't wait for
more data if you have an available transmit slot), this can be done with
virtually no impact on latency, you just have to set a reasonable maximum,
and adjust it based on your transmission rate.

The problem is that right now things don't set a reasonable max, and they
do greedy aggregation (wait until you have a lot of data to send before you
send anything)
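The 6x-data-in-2x-time claim checks out under the same assumptions (~100 us of access overhead, 600 Mbps payload rate, both the thread's figures); a sketch:

```python
# Amortizing per-access overhead over a larger aggregate. The 100 us
# overhead and 600 Mbps rate are the thread's assumed figures.
OVERHEAD_US = 100.0
RATE_MBPS = 600.0

def airtime_us(payload_bytes):
    """Airtime for one medium access carrying payload_bytes."""
    return OVERHEAD_US + payload_bytes * 8 / RATE_MBPS

single = airtime_us(1500)      # one 1.5K packet per access
aggregated = airtime_us(9000)  # six packets in one access

print(single, aggregated, aggregated / single)
```

So a 9K aggregate carries 6x the data in roughly 1.8x the airtime, which is where the "lazy aggregation costs almost nothing" argument comes from.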

All devices in a BSSID would have to agree that the second radio is to be
used for BSSID "carrier state" information and all energy will be sourced
by the AP serving that BSSID.  (A guess is doing this wouldn't improve the
100 us by enough to justify the cost and that a new MAC protocol is
required.  Just curious as to what such a protocol and phy subsystem would
look like assuming collision avoidance could be replaced with collision
detect.)



if the second radio is on a separate band, you have the problem that
propagation isn't going to be the same, so it's very possible to be able to
talk to the AP on the 'normal' channel, but not on the 'coordination' channel.

I'm also not sure what good it would do; once a transmission has been stepped
on, it will need to be re-sent (I guess you would be able to re-send
immediately)


David Lang

Bob




On Mon, Jun 27, 2016 at 1:09 PM, David Lang  wrote:

On Mon, 27 Jun 2016, Bob McMahon wrote:


The ~10K is coming from empirical measurements where all aggregation
technologies are disabled, i.e. only one small IP packet per medium
arbitration/access and where there is only one transmitter and one
receiver.  900Mb/sec is typically a peak-average throughput measurement
where max (or near max) aggregation occurs, amortizing the access overhead
across multiple packets.



so 10K is minimum size packets being transmitted? or around 200
transmissions/sec (plus 200 ack transmissions/sec)?

Yes, devices can be hidden from each other but not from the AP (hence the
use of RTS/CTS for hidden node mitigation.) Isn't it the AP's view of the
"carrier state" that matters (at least in infrastructure mode?)  If that's
the case, what about a different band (and different radio) such that the
strong signal carrying the data could be separated from the BSSID's
"carrier/energy state" signal?



how do you solve the interference problem on this other band/radio? When
you have other APs in the area operating, you will have the same problem
there.

David Lang


Bob



On Mon, Jun 27, 2016 at 12:40 PM, David Lang  wrote:

On Mon, 27 Jun 2016, Bob McMahon wrote:

Hi All,

This is a very interesting thread - thanks to all for taking the time to
respond.


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Bob McMahon wrote:


Hi All,

This is a very interesting thread - thanks to all for taking the time to
respond.   (Personally, I now have a better understanding of the difficulties
associated with a PHY subsystem that supports a 1GHz-wide band.)

Not to derail the current discussion, but I am curious about ideas for
addressing the overhead associated with media access per collision
avoidance.  This overhead seems to be limiting transmits to about 10K per
second (even when a link has no competition for access.)


I'm not sure where you're getting 10K/second from. We do need to limit the
amount of data transmitted in one session to give other stations a chance to
talk, but if the AP replies immediately to ack the traffic, and the airwaves
are idle, you can transmit again pretty quickly.


people using -ac equipment with a single station are getting 900Mb/sec today.


  Is there a way,
maybe using another dedicated radio, to achieve near instantaneous
collision detect (where the CD is driven by the receiver state) such that
mobile devices can sample RF energy to get these states and state changes
much more quickly?


This gets back to the same problems (hidden transmitter, and the simultaneous
reception of wildly different signal strengths)

When you are sending, you will hear yourself as a VERY strong signal; trying to
hear if someone else is transmitting at the same time is almost impossible (100
ft to 1 ft is 4 orders of magnitude, 1 ft to 1 inch is another 2 orders of
magnitude)


And it's very possible that the station that you are colliding with isn't one 
you can hear at all.


Any AP is going to have a better antenna than any phone (sometimes several
orders of magnitude better), so even if you were located at the same place as
the AP, the AP is going to hear signals that you don't.


Then consider the case where you and the other station are on opposite sides of 
the AP at max range.


and then add cases where there is a wall between you and the other station, but 
the AP can hear both of you.


David Lang
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, Jason Abele wrote:


The reason you can not just add bits to the ADC is the thermal noise
floor: 
https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise#Noise_power_in_decibels

If you assume a maximum transmit power of ~20dBm (100mW) and a 160MHz
channel bandwidth (with a consequent thermal noise floor of -92 dBm),
the total possible dynamic range is ~112dB, if your receiver and
transmitter are coupled with no loss.  At ~6dB/bit in the ADC, anything
beyond 19 bits is just quantizing noise and wasting power (which is
heat, which raises your local thermal noise floor, etc).  If your
channel bandwidth is 1GHz, the effective noise floor rises by another
~2 bits, so ~17 bits of dynamic range max, before accounting for path
loss and distortion.

Speaking of distortion, look at the intermod (IP3) or harmonic
distortion figures for those wideband ADCs sometime; if the signals of
interest are of widely varying amplitudes in narrower bandwidths, the
performance limit will usually be distortion from the strongest
signal, not the thermal noise floor.  This usually limits dynamic
range to less than 10 effective bits.

Also transmitters are usually only required to suppress their adjacent
channel noise to around -50dB below the transmit power, so a little
over 8 bits of dynamic range before the ADC is quantizing an interferer
rather than the signal of interest.
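Jason's numbers can be reproduced from the Johnson-Nyquist formula (thermal noise power is about -174 dBm/Hz at room temperature); a sketch, with the transmit power and ~6 dB/bit rule taken from the passage above:

```python
import math

# Thermal noise floor for a given bandwidth: -174 dBm/Hz at ~290 K.
def noise_floor_dbm(bw_hz):
    return -174 + 10 * math.log10(bw_hz)

TX_DBM = 20.0       # ~100 mW maximum transmit power (from the passage)
DB_PER_BIT = 6.0    # rough ADC dynamic-range rule of thumb

floor_160 = noise_floor_dbm(160e6)    # ~ -92 dBm for a 160 MHz channel
dyn_range = TX_DBM - floor_160        # ~ 112 dB total dynamic range
useful_bits = dyn_range / DB_PER_BIT  # ~ 18.7 bits; beyond ~19 is noise

print(floor_160, dyn_range, useful_bits)
```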


Thanks for the more detailed information.


I am surprised that 802.11 still uses the same spreading code for all
stations.  I am no expert on cellular CDMA deployments, but I think
they have been using different spreading codes for each station to
increase capacity and improve the ability to mathematically remove the
interference of other physically close stations for decades.


Cellular mostly works because they have hundreds/thousands of channels rather 
than tens.


As complex as the 802.11 MAC is becoming, I do not understand why an approach 
like MU-MIMO was chosen over negotiating a separate spreading code per 
station.


compatibility and the fact that stations with different spreading algorithms 
still interfere with each other. Also, coordinating the 'right' spreading 
algorithm for each station with each AP (including ones with hidden SSIDs)



My best guess is that it keeps the complexity (and therefore power) at
the AP rather than in the (increasingly mobile, power-constrained)
station.  Hopefully the rise of mesh / peer-to-peer networks in mobile
stations will apply the right engineering pressure to re-think the
idea of keeping all complexity in the AP.


Almost all the mesh work I see is using a mesh of APs, anything beyond that is 
wishful thinking.


Even mu-mimo requires some client support.

David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread David Lang

On Mon, 27 Jun 2016, moeller0 wrote:


Hi David,


On Jun 27, 2016, at 09:44 , David Lang  wrote:

On Mon, 27 Jun 2016, Sebastian Moeller wrote:


On a wireless network, with 'normal' omnidirectional antennas, the signal drops 
off with the square of the distance. So if you want to service clients from 1 
ft to 100 ft away, your signal strength varies by 1000 (4 orders of magnitude), 
this is before you include effects of shielding, bounces, bad antenna 
alignment, etc (which can add several more orders of magnitude of variation)

The receiver first normalizes the strongest part of the signal to a constant 
value, and then digitizes the result (usually with a 12-14 bit AD converter). 
Since 1000x is ~10 bits, the result of overlapping transmissions can be one 
signal at 14 bits, and another at <4 bits. This is why digital processing isn't 
able to receive multiple stations at the same time.
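The quantization argument above can be put in numbers, using the passage's own figures (a 14-bit converter and a ~1000x strength spread):

```python
import math

# How many ADC bits remain for the weaker of two overlapping signals
# when the stronger one is ~1000x larger? Figures are the passage's.
ADC_BITS = 14
strength_ratio = 1000.0

bits_for_strong = math.log2(strength_ratio)  # ~10 bits of headroom
bits_for_weak = ADC_BITS - bits_for_strong   # ~4 bits left over

print(bits_for_strong, bits_for_weak)
```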


But, if you add 10 bits to your AD converter you've basically solved this. 
Now, most likely this also needs to be of higher quality and of low internal 
noise, so probably expensive... Add to this the wide-band requirement of the 
sample-the-full-band approach and we are looking at a pricey AD converter. On 
the bright side, mass-producing that might lower the price for nice oscilloscopes...


well, TI only manufactures AD converters up to 16 bit at these speeds, so 24 bit 
converters are hardly something to just buy. They do make 24 and 32 bit ADCs, but 
only ones that could be used for signals <5MHz wide (and we are pushing to 160 
MHz wide channels on wifi)


	But David’s idea was to sample the full 5GHz band simultaneously, so we 
would need something like a down-mixer and an ADC system with around 2GHz 
bandwidth (due to Nyquist). I believe multiplexing multiple slower ADCs as 
done in better oscilloscopes might work, but that will not help reduce the 
price nor solve the bit resolution question.


losing track of the Davids here :-)

it's not just the super high-speed, high precision ADCs needed, it's also the 
filters to block out the other stuff that you don't want.


If you want to filter a 1 GHz chunk of bandwidth, you need to try and filter 
out signals outside of that 1GHz range. The wider the range that you receive, 
the harder it is to end up with filters that block the stuff outside of it. A 
strong signal outside of the band that you are trying to receive, but that 
partially makes it through the filter, is as harmful to your range as a strong 
signal in band.


also note my comment about walls/etc providing shielding that can add a few 
more orders of magnitude on the signals.


	Well, yes, but in the end the normalizing amplifier really can be 
considered a range adjustor that makes up for the ADC’s lack of dynamic 
resolution. I would venture the guess that not having to normalize might allow 
speeding up the “wifi preamble”, since there is one amplifier less to stabilize…


not really, you are still going to have to amplify the signal a LOT before you 
can process it at all, and legacy compatibility wouldn't let you trim the 
beginning of the signal anyway.


And then when you start being able to detect signals at that level, the first 
ones you are going to hit are bounces from your strongest signal off of all 
sorts of things.


	But that is independent of whether you sample the whole 5GHz range in 
one go or not? I would guess as long as the ADC/amplifier does not go into 
saturation both should perform similarly.


if you currently require 8 bits of clean data to handle the data rate (out of 14 
bits sampled) and you move to needing 16 bits of clean data to handle the 
improved data rate out of 24 bits sampled, you haven't gained much ability to 
handle secondary, weak signals


You will also find that noise and distortion in the legitimate strong signal 
is going to be at strengths close to the strength of the weak signal you are 
trying to hear.


	But if that noise and distortion appear in the weak signal’s frequency 
band, we have issues already today?


no, because we aren't trying to decode the weak signal at the same time the 
strong signal is there. We only try to decode the weak signal in the absence of 
the strong signal.


As I said, I see things getting better, but it’s going to be a very hard 
thing to do, and I'd expect to see reverse mu-mimo (similarly strong signals 
from several directions) long before the ability to detect wildly weaker 
signals.


You are probably right.



I also expect that as the ability to more accurately digitize the signal 
grows, we will first take advantage of it for higher speeds.


	Yes, but higher speed currently means mostly wider bands, and the full 
4-5GHz range is sort of the logical end-point ;).


not at all. There is nothing magical about round decimal numbers :-)

And there are other users nearby. As systems become able to handle faster 
signals, we will move up in frequency (say the 10GHz band where police radar 
guns operate) or higher.




Re: [Cerowrt-devel] [Make-wifi-fast] more well funded attempts showing market demand for better wifi

2016-06-27 Thread Sebastian Moeller
Hi Dave, 

On June 27, 2016 2:00:55 AM GMT+02:00, David Lang  wrote:
>I don't think anyone is trying to do simultaneous receive of different
>stations. 
>That is an incredibly difficult thing to do right.
>
>MU-MIMO is aimed at having the AP transmit to multiple stations at the
>same 
>time. For the typical browser/streaming use, this traffic is FAR larger
>than the 
>traffic from the stations to the AP. As such, it is worth focusing on
>optimizing 
>this direction.
>
>While an ideal network may resemble a wired network without guides, I
>don't 
>think it's a good idea to think about wifi networks that way.
>
>The reality is that no matter how good you get, a wireless network is
>going to 
>have lots of things that are just not going to happen with wired
>networks.
>
>1. drastic variations in signal strength.
>
>On a wired network with a shared bus, the signal strength from all
>stations 
>on the network is going to be very close to the same (a difference of
>2x would 
>be extreme)
>
>On a wireless network, with 'normal' omnidirectional antennas, the
>signal drops 
>off with the square of the distance. So if you want to service clients
>from 1 ft 
>to 100 ft away, your signal strength varies by 1000 (4 orders of
>magnitude), 
>this is before you include effects of shielding, bounces, bad antenna
>alignment, 
>etc (which can add several more orders of magnitude of variation)
>
>The receiver first normalizes the strongest part of the signal to a
>constant 
>value, and then digitizes the result (usually with a 12-14 bit AD
>converter). 
>Since 1000x is ~10 bits, the result of overlapping transmissions can be
>one 
>signal at 14 bits, and another at <4 bits. This is why digital
>processing isn't 
>able to receive multiple stations at the same time.

  But, if you add 10 bits to your AD converter you've basically solved this. 
Now, most likely this also needs to be of higher quality and of low internal 
noise, so probably expensive... Add to this the wide-band requirement of the 
sample-the-full-band approach and we are looking at a pricey AD converter. On 
the bright side, mass-producing that might lower the price for nice 
oscilloscopes...


Best Regards
   Sebastian 


>
>2. 'hidden transmitters'
>
>On modern wired networks, every link has exactly two stations on it,
>and both 
>can transmit at the same time.
>
>On wireless networks, it's drastically different. You have an unknown
>number 
>of stations (which can come and go without notice).
>
>Not every station can hear every other station. This means that they
>can't 
>avoid colliding with each other. In theory you can work around this by
>having 
>some central system coordinate all the clients (either by them being
>told when 
>to transmit, or by being given a schedule and having very precise
>clocks). But 
>in practice the central system doesn't know when the clients have
>something to 
>say and so in practice this doesn't work as well (except for special
>cases like 
>voice where there is a constant amount of data to transmit)
>
>3. variable transmit rates and aggregation
>
>Depending on how strong the signal is between two stations, you have
>different 
>limits to how fast you can transmit data. There are many different
>standard 
>modulations that you can use, but if you use one that's too fast for
>the signal 
>conditions, the receiver isn't going to be able to decode it. If you
>use one 
>that's too slow, you increase the probability that another station will
>step on 
>your signal, scrambling it as far as the receiver is concerned. We now
>have 
>stations on the network that can vary in speed by 100x, and are nearing
>1000x 
>(1Mb/sec to 1Gb/sec)
>
>Because there is so much variation in transmit rates, and older
>stations will 
>not be able to understand the newest rates, each transmission starts
>off with 
>some data being transmitted at the slowest available rate, telling any
>stations 
>that are listening that there is data being transmitted for X amount of
>time, 
>even if they can't tell what's going on as the data is being
>transmitted.
>
>The combination of this header being transmitted inefficiently, and the
>fact 
>that stations are waiting for a clear window to transmit, means that
>when you do 
>get a chance to transmit, you should send more than one packet at a
>time. This 
>is something Linux is currently not doing well, qdiscs tend to
>round-robin 
>packets without regard to where they are headed. The current work being
>done 
>here with the queues is improving both throughput and latency by fixing
>this 
>problem.
>
>
>You really need to think differently when dealing with wireless
>networks. The 
>early wifi drivers tried to make them look just like a wired network,
>and we 
>have found that we just needed too much other stuff to be successful
>with that 
>mindset.
>
>The Analog/Radio side of things really is important, and can't just be 
>abstracted away.
>
>David Lang
>
>On Sun, 26 Jun 2016, Bob McMahon wrote:
>
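The preamble-amortization point in the message above can be sketched with a rough airtime model (the 40 us slow-rate header and 600 Mbps payload rate are illustrative assumptions, not 802.11 constants):

```python
# Every transmission starts with a header at the slowest rate, so
# batching packets amortizes that fixed cost. Numbers are illustrative
# assumptions, not 802.11 constants.
SLOW_HEADER_US = 40.0    # legacy-rate preamble/header (assumed)
FAST_RATE_MBPS = 600.0   # payload rate for a nearby fast station

def efficiency(n_packets, pkt_bytes=1500):
    """Fraction of airtime spent on payload for an n-packet burst."""
    payload_us = n_packets * pkt_bytes * 8 / FAST_RATE_MBPS
    return payload_us / (SLOW_HEADER_US + payload_us)

print(efficiency(1), efficiency(16))
```

One packet per access spends only about a third of its airtime on payload; a 16-packet aggregate pushes that to roughly 89%.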