On 24 Aug, 2014, at 4:33 am, David Lang wrote:

> On Sun, 24 Aug 2014, Jonathan Morton wrote:
> 
>> My general impression is that 802.11ac makes a serious effort to improve 
>> matters in heavily-congested, many-clients scenarios, which was where 
>> earlier variants had the most trouble.  If you're planning to set up or go 
>> to a major conference, the best easy thing you can do is get 'ac' equipment 
>> all round - if nothing else, it's guaranteed to support the 5GHz band.  Of 
>> course, we're not just considering the easy solutions.
> 
> If ac had reasonable drivers available I would agree, but when you are 
> limited to factory firmware, it's not good.

Hmm.  What are the current limitations, compared to 'n' equipment?

>> Wider bandwidth channels can be used to shorten the time taken for each 
>> transmission.  However, this effect is not linear, because the RTS/CTS 
>> handshake and preamble are fixed overheads (since they must be transmitted 
>> at a low speed to ensure that all clients can hear them), taking the same 
>> length of time regardless of any other enhancements.  This implies that in 
>> seriously geographically-congested scenarios, 20MHz channels (and lots of 
>> APs to use them all) are still the most efficient.  MIMO can still be used 
>> to beneficial effect in these situations.
> 
> Another good reason for sticking to 20MHz channels is that it gives you more 
> channels available, so you can deploy more APs without them interfering with 
> each other's footprints. This can significantly reduce the distance between 
> the mobile user and the closest AP.

No, that is *the* good reason.  If you don't have a lot of APs, you might as 
well use 40MHz or 80MHz channels to increase the throughput per AP.
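To put rough numbers on that (the fixed-overhead and per-width PHY rates below
are assumptions for illustration, not measured values), here is a quick Python
back-of-the-envelope for a single 1500-byte frame:

FIXED_OVERHEAD_US = 150.0   # assumed RTS + CTS + preamble + ACK, sent at a low legacy rate
PHY_RATE_MBPS = {20: 72, 40: 150, 80: 325}   # assumed single-stream rate per channel width
FRAME_BITS = 1500 * 8

for width, rate in PHY_RATE_MBPS.items():
    payload_us = FRAME_BITS / rate            # 1 Mbit/s == 1 bit/us
    total_us = FIXED_OVERHEAD_US + payload_us
    goodput = FRAME_BITS / total_us           # effective rate including the fixed overhead
    print(f"{width} MHz: {total_us:5.1f} us per frame, effective {goodput:4.1f} Mbit/s")

With those assumed figures, quadrupling the channel width raises the PHY rate
by about 4.5x but the effective per-frame rate by well under 2x, because the
fixed overhead dominates; only aggregation amortises it away.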

>> Multi-target MIMO allows an AP to transmit to several clients 
>> simultaneously, without requiring the clients to support MIMO themselves.  
>> This requires the AP's antennas and radios to be dynamically reconfigured 
>> for beamforming - giving each client a clear version of its own signal and a 
>> null for the other signals - which is a tricky procedure.  APs that do 
>> implement this well are highly valuable in congested situations.
> 
> how many different targets can such APs handle? if it's only a small number, 
> I'm not sure it helps much.

The diagram I saw on Cisco's website demonstrated the process for three 
clients, so I assume that's their present target.  I think four targets is 
plausible at the high end, once implementations mature, though the standard 
permits 8-way in theory.  The RF hardware requirements are similar to 'n'-style 
single-target MIMO.

> Also, is this a transmit-only feature? or can it help decipher multiple 
> mobile devices transmitting at the same time?

I think it *could* be used for receive as well.  The AP could hear several RTS 
packets, configure itself for multi-client receive, then send a CTS to each of 
them to signal that.  The tricky part, I think, is hearing the multiple RTSes 
in the first place.

>> Single-target MIMO allows higher bandwidth between one client at a time and 
>> the AP.  Both the AP and the client must support MIMO for this to work. 
>> There are physical constraints which limit the ability of handheld devices 
>> to support MIMO.  In general, this form of MIMO improves throughput in the 
>> home, but is not very useful in congested situations.  High individual 
>> throughput is not what's needed in a crowded arena; rather, reliable if slow 
>> individual throughput, reasonable latency, and high aggregate throughput.
> 
> well, if the higher bandwidth to an individual user ended up reducing the 
> airtime that user takes up, it could help. but I suspect that the devices 
> that do this couldn't keep track of a few dozen endpoints.

I think multi-target MIMO is more useful than single-target MIMO for the 
congested case.  It certainly helps that the client doesn't need to explicitly 
support MIMO for it to work.

> This needs to be tweakable. In low-congestion, high throughput situations, 
> you want to do a lot of aggregation, in high-congestion situations, you want 
> to limit this.

Yes, that makes sense.  Once latency climbs past some threshold, spurious 
retransmits (by TCP, or by applications running over UDP) become increasingly 
likely, which makes the congestion worse and cripples goodput.  So latency 
trumps throughput in the congested case.

However, I think there is a sliding scale on this.  With the modern modulation 
schemes (and especially with wide channels), the handshake and preamble really 
are a lot of overhead.  If you have the chance to triple your throughput for a 
20% increase in channel occupation, you need a *really* good reason not to take 
it.
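For example (assumed figures again): if the per-exchange overhead is around
150 us and each 1500-byte frame costs about 20 us of payload airtime at a high
VHT rate, then sending three frames in one aggregate instead of one triples
the data delivered for roughly a quarter more airtime:

FIXED_OVERHEAD_US = 150.0   # assumed RTS/CTS + preamble + block-ACK time
PER_FRAME_US = 20.0         # assumed payload airtime per 1500-byte frame at ~600 Mbit/s

base = FIXED_OVERHEAD_US + PER_FRAME_US
for n in (1, 2, 3, 4):
    airtime = FIXED_OVERHEAD_US + n * PER_FRAME_US
    print(f"{n} frame(s): {airtime:5.1f} us airtime, "
          f"{airtime / base:4.2f}x occupation for {n}x the data")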

>> There should be enough buffering to allow effective aggregation, but as 
>> little as possible on top of that.  I don't know how much aggregation can be 
>> done, but I assume that there is a limit, and that it's not especially high 
>> in terms of full-length packets.  After all, tying up the channel for long 
>> periods of time is unfair to other clients - a typical latency/throughput 
>> tradeoff.
> 
> Aggregation is not necessarily worth pursuing.
> 
>> Equally clearly, in a heavily congested scenario the AP benefits from having 
>> a lot of buffer divided among a large number of clients, but each client 
>> should have only a small buffer.
> 
> the key thing is how long the data sits in the buffer. If it sits too long, 
> it doesn't matter that it's the only packet for this client, it still is too 
> much buffering.

Even if it's a Fair Queue, so *every* client has only a single packet waiting?
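To be concrete about what I mean, here's a toy sketch of that sort of queue
(nothing to do with any real driver; the depth and sojourn limits are made-up
numbers): each client gets a very short FIFO, clients are serviced round-robin,
and anything that has sat in the buffer too long is dropped rather than sent.

import time
from collections import deque, defaultdict

MAX_PER_CLIENT = 2       # assumed per-client queue depth
MAX_SOJOURN_S = 0.010    # assumed 10 ms cap on how long a packet may wait

class PerClientQueues:
    """Toy sketch: one short FIFO per client, serviced round-robin,
    with packets dropped once they have waited too long."""

    def __init__(self):
        self.queues = defaultdict(deque)
        self.active = deque()            # round-robin order of clients with queued data

    def enqueue(self, client, packet):
        q = self.queues[client]
        if len(q) >= MAX_PER_CLIENT:
            return False                 # this client already has enough buffered
        if not q:
            self.active.append(client)
        q.append((time.monotonic(), packet))
        return True

    def dequeue(self):
        while self.active:
            client = self.active.popleft()
            q = self.queues[client]
            while q:
                queued_at, packet = q.popleft()
                if time.monotonic() - queued_at > MAX_SOJOURN_S:
                    continue             # sat too long in the buffer: drop it
                if q:
                    self.active.append(client)   # still has data, keep it in rotation
                return client, packet
        return None                      # nothing eligible to send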

>> Modern wifi variants use packet aggregation to improve efficiency.  This 
>> only works when there are multiple packets to send at a time from one place 
>> to a specific other place - which is more likely when the link is congested. 
>>  In the event of a retry, it makes sense to aggregate newly buffered packets 
>> with the original ones, to reduce the number of negotiation and retry cycles.
> 
> up to a point. It could easily be that the right thing to do is NOT to 
> aggregate the new packets because it will make it far more likely that they 
> will all fail (ac mitigates this in theory, but until there is really driver 
> support, the practice is questionable)

From what I read, I got the impression that 'ac' *forbids* the use of the 
fragile aggregation schemes.  Are the drivers really so awful that they are 
noncompliant?
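Either way, the arithmetic behind your concern is easy to illustrate (the
per-subframe error rate below is an assumption, purely for illustration):
without selective retransmission the chance that an aggregate needs a retry
grows quickly with its size, whereas with block-ACKs only the failed subframes
go again.

PER_SUBFRAME_ERROR = 0.10   # assumed 10% chance that any one subframe is lost

for n in (1, 4, 16, 64):
    p_any_fail = 1 - (1 - PER_SUBFRAME_ERROR) ** n   # whole aggregate must retry
    expected_resend = n * PER_SUBFRAME_ERROR          # with block-ACK, only losers retry
    print(f"{n:3d} subframes: P(some subframe fails) = {p_any_fail:.2f}, "
          f"expect {expected_resend:4.1f} subframes to resend with block-ACK")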

and from Steinar...

>> Yep... I remember a neat paper from colleagues at Trento University that 
>> piggybacked TCP's ACKs on link layer ACKs, thereby avoiding the collisions 
>> between TCP's ACKs and other data packets - really nice. Not sure if it 
>> wasn't just simulations, though.
> 
> that's a neat hack, but I don't see it working, except when one end of the 
> wireless link is also the endpoint of the TCP connection (and then only for 
> acks from that device)
> 
> so in a typical wifi environment, it would be one less transmission from the 
> laptop, no change to the AP.
> 
> But even with that, doesn't TCP try to piggyback the ack on the next packet 
> of data anyway? so unless it's a purely one-way dataflow, this still wouldn't 
> help.

Once established, an HTTP session looks exactly like that.  I also see no reason 
in theory why a TCP ack couldn't be piggybacked on the *next* available link 
ack, which would relax the latency requirements considerably.

If that were implemented and deployed successfully, it would mean that the 
majority of RTS/CTS handshakes initiated by clients would be to send DNS 
queries, TCP handshakes and HTTP request headers, all of which are actually 
important.  It would, I think, typically reduce contention by a large margin.

 - Jonathan Morton

