Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-09 Thread Jonathan Morton
The question of whether to aggregate under congested conditions is
controversial, probably because it depends on complex conditions.  There
are arguments both for and against.

It may be worth considering it as a risk/reward tradeoff.  Given N packets
(which for brevity I'll assume are equal MTU sized), the reward is
obviously proportional to N.  Risk however is calculated as probability *
consequence.

Assuming all packets in the aggregate are lost on collision, the risk of
collision scales with L*N, where L is N plus the overhead of the TXOP.
Under that argument, usually you should not aggregate if the probability of
collision is high.

However, if only one packet is lost due to collision with, for example, a
small RTS probe which is not answered, the risk scales with L, which is
sublinear compared to the reward relative to the amount of aggregation
(especially at high data rates where the TXOP overhead is substantial).
Under this assumption, aggregation is usually profitable even with a high
collision probability, and results in overall higher efficiency whether or
not collisions are likely.

This is the difference between the typical 802.11n situation (one checksum
per aggregate) and the mandatory 802.11ac capability of a checksum per
packet.  As long as you also employ RTS/CTS when appropriate, the
possibility of collisions is no longer a reason to avoid aggregating.
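To make the tradeoff concrete, here is a small numerical sketch of the model above. It is my own illustration, not anything from the 802.11 specs: airtime is counted in packet-transmission times, and the per-unit-airtime collision probability p is an invented parameter.

```python
# Sketch of the risk/reward model: reward scales with N, risk with
# (collision probability) * (packets lost). All parameters below are
# illustrative assumptions, not measured 802.11 values.

def risk_reward(n_packets, txop_overhead=2.0, p=0.05, per_packet_fec=False):
    """Return (reward, risk) for an aggregate of n_packets.

    Airtime is measured in packet-transmission times; p is the assumed
    collision probability per unit of airtime occupied.
    """
    reward = n_packets                          # throughput gain ~ N
    airtime = n_packets + txop_overhead         # L = N + TXOP overhead
    collision_prob = min(1.0, p * airtime)      # exposure grows with L
    # Consequence: whole aggregate lost (single checksum, 802.11n-style)
    # versus roughly one packet lost (per-packet checksums, 802.11ac).
    lost = 1 if per_packet_fec else n_packets
    return reward, collision_prob * lost

# Single checksum: risk ~ L*N (superlinear in N).
# Per-packet checksums: risk ~ L (sublinear relative to the reward).
for n in (1, 8, 32):
    _, whole = risk_reward(n, per_packet_fec=False)
    _, single = risk_reward(n, per_packet_fec=True)
    print(n, whole, single)
```

With these invented numbers the whole-aggregate risk grows roughly quadratically with N while the per-packet risk stays bounded, which is the asymmetry the argument turns on.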

- Jonathan Morton
___
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-09 Thread David Lang

On Sun, 9 Aug 2015, Jonathan Morton wrote:


The question of whether to aggregate under congested conditions is
controversial, probably because it depends on complex conditions.  There
are arguments both for and against.

It may be worth considering it as a risk/reward tradeoff.  Given N packets
(which for brevity I'll assume are equal MTU sized), the reward is
obviously proportional to N.  Risk however is calculated as probability *
consequence.

Assuming all packets in the aggregate are lost on collision, the risk of
collision scales with L*N, where L is N plus the overhead of the TXOP.
Under that argument, usually you should not aggregate if the probability of
collision is high.

However, if only one packet is lost due to collision with, for example, a
small RTS probe which is not answered, the risk scales with L, which is
sublinear compared to the reward relative to the amount of aggregation
(especially at high data rates where the TXOP overhead is substantial).
Under this assumption, aggregation is usually profitable even with a high
collision probability, and results in overall higher efficiency whether or
not collisions are likely.

This is the difference between the typical 802.11n situation (one checksum
per aggregate) and the mandatory 802.11ac capability of a checksum per
packet.  As long as you also employ RTS/CTS when appropriate, the
possibility of collisions is no longer a reason to avoid aggregating.


remember that there are stations out there that aren't going to hear your 
RTS/CTS, especially in dense layouts.


Just like wired networks benefit greatly from time-based queues rather than 
packet-count-based queues, I think that wifi aggregation should not be based on 
packet count (or even aggregate size) but rather on the amount of airtime that's 
going to be used (aggregate size / bit rate + overhead)
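As a sketch of that airtime-budget idea (the 60 µs per-TXOP overhead and the 1 ms budget are assumptions I made up for illustration):

```python
# Size an aggregate by airtime rather than packet count or bytes.
# overhead_us stands in for preamble + SIFS + block-ack; illustrative only.

def aggregate_airtime_us(aggregate_bytes, phy_rate_mbps, overhead_us=60.0):
    """Estimated airtime in microseconds: payload time plus fixed overhead."""
    # bytes * 8 bits / (Mbit/s) comes out directly in microseconds,
    # since 1 Mbit/s = 1 bit/us.
    return aggregate_bytes * 8 / phy_rate_mbps + overhead_us

def max_aggregate_bytes(airtime_budget_us, phy_rate_mbps, overhead_us=60.0):
    """Largest aggregate that fits in a fixed airtime budget at a given rate."""
    return int((airtime_budget_us - overhead_us) * phy_rate_mbps / 8)

# The same 1 ms budget holds ~8 KB at 72 Mb/s but ~70 KB at 600 Mb/s,
# which is why a fixed byte or packet cap penalizes fast stations and
# lets slow stations hog the air.
print(max_aggregate_bytes(1000, 72), max_aggregate_bytes(1000, 600))
```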


If the AP can keep track of how many collisions it's had/seen over the last X 
time, that can factor in as well. I agree that the 802.11ac ability to only 
lose a packet instead of the entire transmission is a big step forward; 
unfortunately there's not that much equipment out there yet that will take 
advantage of it. But it does mean that it's probably worth having two different 
algorithms for the -ac and non-ac endpoints.
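A minimal sketch of what that collision tracking could look like; the EWMA weight and the 20% threshold are arbitrary assumptions, and per_packet_checksums marks the -ac vs non-ac split:

```python
# Track the recent collision rate with an exponentially weighted moving
# average and use it to gate large aggregates. All constants are guesses.

class CollisionTracker:
    def __init__(self, weight=0.1):
        self.weight = weight
        self.rate = 0.0  # estimated fraction of recent TX attempts that collided

    def record(self, collided):
        """Fold one transmission outcome into the running estimate."""
        sample = 1.0 if collided else 0.0
        self.rate += self.weight * (sample - self.rate)

    def allow_large_aggregates(self, per_packet_checksums, threshold=0.2):
        # With per-packet checksums (802.11ac) a collision costs roughly one
        # packet, so aggregation stays profitable even when collisions are
        # common; otherwise back off when the recent collision rate is high.
        return per_packet_checksums or self.rate < threshold
```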


Which makes it even more important that the queue logic get information about 
the particular endpoints when deciding what data should be transmitted next.


David Lang


Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-09 Thread David Lang

On Sat, 8 Aug 2015, dpr...@reed.com wrote:

There's a lot of "folklore" out there about radio systems and WiFi that is 
quite wrong, and you seem to be quoting some of it - e.g. the idea that the 1 
Mb/s waveform of 802.11b DSSS is somehow more reliable than the lowest-rate 
OFDM modulations, which is often false.


I agree with you, but my understanding is that the current algorithms always 
assume that slower == more robust transmissions. My point was that in a 
weak-signal environment where you have trouble decoding individual bits this is 
true (or close enough to true for "failed transmission" -> "retransmit at a 
slower rate" to be a very useful algorithm), but in a congested environment 
where your biggest problem is being stepped on by other transmissions, this is 
close to suicide instead.


The 20 MHz-wide MCS0 modulation with 800ns GI gives 6.5 Mb/s and is typically 
much more reliable than the 802.11b standard 1 Mb/sec DSSS signals in normal 
environments, with typical receiver designs.


Interesting and good to know.

It's not the case that beacon frames are transmitted at 1 Mb/sec. - 
that is only true when there are 802.11b stations *associated* with the access 
point (which cannot happen at 5 GHz).


Also interesting. I wish I knew of a way to disable the 802.11b modes on the 
wndr3800 or wrt1200 series APs. I've seen some documentation online talking 
about it, but it's never worked when I've tried it.


Dave Taht did some experimentation with cerowrt in increasing the broadcast 
rate, but my understanding is that he had to back out those changes because they 
didn't work well in the real world.


Nor is it true that the preamble for ERP 
frames is wastefully long. The preamble for an ERP (OFDM operation) frame is 
about 6 microseconds long, except in the odd case on 2.4GHz of 
compatibility-mode (OFDM-DSSS) operation, where the DSSS preamble is used. 
The DSSS preamble is 72 usec. long, because 72 bits at 1 Mb/sec takes that 
long, but the ERP frame's preamble is much shorter.


Is compatibility mode needed for 802.11g or 802.11b compatibility?

In any case, my main points were about the fact that "channel estimation" is 
the key issue in deciding on a modulation to use (and MIMO settings to use), 
and the problem with that is that channels change characteristics quite 
quickly indoors! A spinning fan blade can create significant variation in the 
impulse response over a period of a couple milliseconds.  To do well on 
channel estimation to pick a high data rate, you need to avoid a backlog in 
the collection of outbound packets on all stations - which means minimizing 
queue buildup (even if that means sending shorter packets, getting a higher 
data rate will minimize channel occupancy).


Long frames make congested networks work badly - ideally there would only be 
one frame ready to go when the current frame is transmitted, but the longer 
the frame, the more likely more than one station will be ready, and the longer 
the frames will be (if they are being combined).  That means that the penalty 
due to, and frequency of, collisions where more than one frame are being sent 
at the same time grows, wasting airtime with collisions.  That's why CTS/RTS 
is often a good approach (the CTS/RTS frames are short, so a collision will be 
less wasteful of airtime).


I run the wireless network for the Scale conference where we get a couple 
thousand people showing up with their equipment. I'm gearing up for next year's 
conference (deciding what I'm going to try, what equipment I'm going to need, 
etc). I would love to get any help you can offer on this, and I'm willing to do 
a fair bit of experimentation and a lot of measurements to see what's happening 
in the real world. I haven't been setting anything to specifically enable 
RTS/CTS in the past.


But due to preamble size, etc., CTS/RTS can't be 
very short, so an alternative hybrid approach is useful (assume that all 
stations transmit CTS frames at the same time, you can use the synchronization 
acquired during the CTS to mitigate the need for a preamble on the packet sent 
after the RTS).  (One of the papers I did with my student Aggelos Bletsas on 
Cooperative Diversity uses CTS/RTS in this clever way - to measure the channel 
while acquiring it).


how do you get the stations synchronized?

David Lang





On Friday, August 7, 2015 6:31pm, "David Lang"  said:




On Fri, 7 Aug 2015, dpr...@reed.com wrote:

> On Friday, August 7, 2015 4:03pm, "David Lang"  said:
>>
>
>> Wifi is the only place I know of where the transmit bit rate is going to
vary
>> depending on the next hop address.
>
>
> This is an interesting core issue. The question is whether additional
> queueing helps or hurts this, and whether the MAC protocol of WiFi deals well
> or poorly with this issue. It is clear that this is a peculiarly WiFi'ish
> issue.
>
> It's not clear that the best transmit rate remains stable for very long, or
> even how to predict the "best rate" f

Re: [Cerowrt-devel] [Make-wifi-fast] [tsvwg] Comments on draft-szigeti-tsvwg-ieee-802-11e

2015-08-09 Thread Mikael Abrahamsson

On Sun, 9 Aug 2015, David Lang wrote:

Just like wired networks benefit greatly from time-based queues rather 
than packet-count-based queues, I think that wifi aggregation should not 
be based on packet count (or even aggregate size) but rather on the amount 
of airtime that's going to be used (aggregate size / bit rate + 
overhead)


I have been involved in 3GPP networking. In LTE, for instance, you can tune 
the scheduler to allocate resources in multiple ways: so that each user gets 
a similar amount of transferred data per second, or so that each gets access 
to an equal amount of "airtime resources". The unit of allocation is the TTI 
(https://en.wikipedia.org/wiki/Transmission_Time_Interval), which in LTE is 
divided in both time and frequency (LTE has many OFDM subcarriers, and each 
subcarrier has 1ms TTIs).


Personally I favor "airtime resource fairness", because that means a 
station with bad connectivity doesn't harm a station with good 
connectivity. I think it's also intuitive to people that if they have bad 
radio conditions, their network performance goes down. If you give 
everybody the same speed even though some need a lot more airtime 
resource to attain that speed, those people will never know they're hogging 
resources and will never try to improve the situation.
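That airtime-resource fairness can be sketched as a deficit round-robin where the per-station deficit is kept in microseconds of airtime rather than bytes (the rates, packet sizes, and quantum below are all made up for illustration):

```python
# Deficit round-robin over stations, with the deficit denominated in
# microseconds of airtime. A slow station gets the same airtime budget
# per round as a fast one, so it moves fewer bytes.

from collections import deque

def airtime_drr(stations, quantum_us=1000.0, rounds=3):
    """stations: list of (name, phy_rate_mbps, deque_of_packet_sizes_bytes).

    Returns the transmission order as (name, packet_bytes) pairs.
    """
    deficit = {name: 0.0 for name, _, _ in stations}
    schedule = []
    for _ in range(rounds):
        for name, rate, queue in stations:
            deficit[name] += quantum_us
            # Send packets while their airtime fits in the remaining deficit.
            while queue and queue[0] * 8 / rate <= deficit[name]:
                pkt = queue.popleft()
                deficit[name] -= pkt * 8 / rate
                schedule.append((name, pkt))
    return schedule

# 1500 B costs 40 us at 300 Mb/s but 2000 us at 6 Mb/s, so over three
# 1 ms rounds the fast station sends 75 packets and the slow one just 1:
# equal airtime, very unequal byte counts.
fast = ("fast", 300, deque([1500] * 100))
slow = ("slow", 6, deque([1500] * 100))
order = airtime_drr([fast, slow])
```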


So if I understood you correctly above, my opinion is in agreement with 
what you wrote.


--
Mikael Abrahamsson    email: swm...@swm.pp.se