On Sun, 24 Aug 2014, Jonathan Morton wrote:

On 24 Aug, 2014, at 4:33 am, David Lang wrote:

On Sun, 24 Aug 2014, Jonathan Morton wrote:

My general impression is that 802.11ac makes a serious effort to improve 
matters in heavily-congested, many-clients scenarios, which was where earlier 
variants had the most trouble.  If you're planning to set up or go to a major 
conference, the best easy thing you can do is get 'ac' equipment all round - if 
nothing else, it's guaranteed to support the 5GHz band.  Of course, we're not 
just considering the easy solutions.

If ac had reasonable drivers available I would agree, but when you are limited to factory firmware, it's not good.

Hmm.  What are the current limitations, compared to 'n' equipment?

simply that when you ask the OpenWRT developers about ac equipment, they respond that they're working on it, but right now the best you can get is binary drivers that only work with specific kernel versions provided by the manufacturers. Any source-level drivers they have are "junk" and should be avoided.

The inability to create a custom system build is crippling for doing things like high-density networking.

Single-target MIMO allows higher bandwidth between the AP and one client at a time. Both the AP and the client must support MIMO for this to work. There are physical constraints which limit the ability of handheld devices to support MIMO. In general, this form of MIMO improves throughput in the home, but is not very useful in congested situations. High individual throughput is not what's needed in a crowded arena; what's needed is reliable (if slow) individual throughput, reasonable latency, and high aggregate throughput.

well, if the higher bandwidth to an individual user ended up reducing the airtime that user takes up, it could help. but I suspect that the devices that do this couldn't keep track of a few dozen endpoints.

I think multi-target MIMO is more useful than single-target MIMO for the congested case. It certainly helps that the client doesn't need to explicitly support MIMO for it to work.

better yes, but at what price difference? :-)

If the APs cost $1000 each instead of $100 each, you are better off with more of the cheaper APs. If they are $200 instead of $100, it may help more, but it's all a matter of how much traffic is going in each direction.

This needs to be tweakable. In low-congestion, high-throughput situations you want to do a lot of aggregation; in high-congestion situations you want to limit it.

Yes, that makes sense. The higher the latency, beyond some threshold, the more likely that spurious retransmits (TCP or UDP) will occur, making the congestion worse and crippling goodput. So latency trumps throughput in the congested case.

it's not latency, it's how long the radio transmission takes. sending all pending packets for that destination in one long transmission will minimize the latency for that destination, and provide the best overall throughput for the system, in a quiet RF environment. but if there is an n% chance every ms of someone else transmitting, then you really want to keep your transmissions as small as possible.

However, I think there is a sliding scale on this. With the modern modulation schemes (and especially with wide channels), the handshake and preamble really are a lot of overhead. If you have the chance to triple your throughput for a 20% increase in channel occupation, you need a *really* good reason not to take it.

if you can send 300% as much data in 120% the time, then the overhead of sending a single packet is huge (you spend _far_ more airtime on the overhead than on the packet itself)

now, this may be true for small packets, which is why this should be configurable, and configurable in terms of data size, not packet count.

by the way, the same effect happens on wired ethernet networks; see Jumbo Frames and the advantages of using them.

the advantages are probably not 300% data in 120% time, but more like 300% data in 270% time, and at that point, the fact that you are 2.7x as likely to lose the packet to another transmission very quickly makes it the wrong thing to do.
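
to make the tradeoff concrete, here's a quick back-of-the-envelope calculation. the collision probability, the airtime figures, and the all-or-nothing loss model are made-up illustrations, not measurements:

    # back-of-the-envelope sketch of the aggregation tradeoff argued above.
    # assumptions (illustrative only): a fixed per-ms chance that some other
    # station transmits, and a collision loses the whole aggregate.

    def expected_goodput(data_units, airtime_ms, p_collision_per_ms):
        """expected useful data delivered per ms of airtime for one transmission"""
        p_survive = (1 - p_collision_per_ms) ** airtime_ms
        return data_units * p_survive / airtime_ms

    for p in (0.05, 0.20):
        single   = expected_goodput(1, 1.0, p)   # one packet, baseline airtime
        agg_fast = expected_goodput(3, 1.2, p)   # 3x data in 1.2x airtime
        agg_slow = expected_goodput(3, 2.7, p)   # 3x data in 2.7x airtime
        print(p, round(single, 2), round(agg_fast, 2), round(agg_slow, 2))

with these made-up numbers, the 3x-data-in-1.2x-airtime case wins comfortably at both collision rates, while the 3x-in-2.7x case barely breaks even at a 5% per-ms collision rate and loses outright to the single packet at 20%, which is the point being argued above.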


Equally clearly, in a heavily congested scenario the AP benefits from having a lot of buffer divided among a large number of clients, but each client should have only a small buffer.

the key thing is how long the data sits in the buffer. If it sits too long, it doesn't matter that it's the only packet for this client, it still is too much buffering.

Even if it's a Fair Queue, so *every* client has only a single packet waiting?

yep, if there are too many clients, even one packet per client can end up with the overall delay being excessive.

you won't hit this with 20 clients, but you sure would with 1000 clients.

I don't know where the crossover point would be, but I know that you do want a limit on the total buffer size.
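
to put rough numbers on that (the per-packet airtime here is an assumed ballpark, not a measurement):

    # worst-case queue delay if every client has exactly one packet buffered.
    airtime_per_packet_ms = 0.5   # assumed average, including contention overhead

    for clients in (20, 200, 1000):
        worst_case_delay_ms = clients * airtime_per_packet_ms
        print(clients, "clients ->", worst_case_delay_ms, "ms of queueing delay")

with that assumption, 20 clients is about 10 ms of queueing, which is fine; 1000 clients is about 500 ms, which is the kind of delay that provokes the spurious retransmits mentioned earlier.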

Modern wifi variants use packet aggregation to improve efficiency. This only works when there are multiple packets to send at a time from one place to a specific other place - which is more likely when the link is congested. In the event of a retry, it makes sense to aggregate newly buffered packets with the original ones, to reduce the number of negotiation and retry cycles.

up to a point. It could easily be that the right thing to do is NOT to aggregate the new packets, because it will make it far more likely that they will all fail (ac mitigates this in theory, but until there is real driver support, the practice is questionable)

From what I read, I got the impression that 'ac' *forbids* the use of the fragile aggregation schemes. Are the drivers really so awful that they are noncompliant?

without having looked at any driver, I can tell you the answer is a strong YES :-)

besides, the other end of the connection may not be ac, it may only be n, and it would do what it wants.

Yep... I remember a neat paper from colleagues at Trento University that piggybacked TCP's ACKs on link layer ACKs, thereby avoiding the collisions between TCP's ACKs and other data packets - really nice. Not sure if it was ever more than simulations, though.

that's a neat hack, but I don't see it working, except when one end of the wireless link is also the endpoint of the TCP connection (and then only for acks from that device)

so in a typical wifi environment, it would be one less transmission from the laptop, no change to the AP.

But even with that, doesn't TCP try to piggyback the ack on the next packet of data anyway? so unless it's a purely one-way dataflow, this still wouldn't help.

Once established, an HTTP session looks exactly like that. I also see no reason in theory why a TCP ack couldn't be piggybacked on the *next* available link ack, which would relax the latency requirements considerably.

I don't understand (or we are talking past each other again)

laptop -- ap -- 50 hops -- server

packets from the server to the laptop could have an ack piggybacked by the driver on the wifi link ack, but for packets in the other direction, the AP can't possibly know that the server will ever respond, so it can't reply with a TCP-level ack when it does the link-level ack.

If the ack packets are already combined by the laptop and server with the next data packet, stand-alone ack packets should be pretty rare.

to be clear, what I'm thinking of is a TCP-offload type of operation on the laptop: when the driver receives a TCP packet destined for the laptop, the driver considers the ack sent (and updates the OS state accordingly). Meanwhile, at the AP, if the next hop is the final destination, then when it gets the link-level ack back from the wifi hop it could generate an ack back to the server, without the need for the TCP ack packet to ever go over the air.
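
as a rough sketch of the idea in code (all the names and structures here are hypothetical, and nothing like this exists in any real driver as far as I know):

    # sketch of the ack-suppression idea described above, purely to restate
    # the proposal; hypothetical names, toy frame representation.

    def is_tcp_data(frame):
        # stand-in for real frame parsing
        return frame.get("proto") == "tcp" and frame.get("payload_len", 0) > 0

    def laptop_driver_rx(frame, tcp_stack):
        """laptop wifi driver receives a TCP data segment over the air"""
        tcp_stack["delivered"].append(frame)
        if is_tcp_data(frame):
            # tell the local stack the ack is already handled, so no
            # stand-alone ack frame ever goes back over the air
            tcp_stack["acks_suppressed"] += 1

    def ap_on_link_ack(frame, wired_tx_queue):
        """AP sees the 802.11 link-level ack for a frame it just forwarded"""
        if is_tcp_data(frame) and frame.get("next_hop_is_destination"):
            # generate the TCP ack toward the server on the wired side,
            # on behalf of the laptop
            wired_tx_queue.append({"proto": "tcp", "ack_for": frame["seq"]})

    # toy usage
    stack, wired = {"delivered": [], "acks_suppressed": 0}, []
    pkt = {"proto": "tcp", "payload_len": 1448, "seq": 1000,
           "next_hop_is_destination": True}
    laptop_driver_rx(pkt, stack)
    ap_on_link_ack(pkt, wired)
    print(stack["acks_suppressed"], wired)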

If that were implemented and deployed successfully, it would mean that the majority of RTS/CTS handshakes initiated by clients would be to send DNS queries, TCP handshakes and HTTP request headers, all of which are actually important. It would, I think, typically reduce contention by a large margin.

only if stand-alone ack packets are really a significant portion of the network traffic.

David Lang
