On 24 Aug, 2014, at 8:12 am, David Lang wrote:

> On Sun, 24 Aug 2014, Jonathan Morton wrote:
>> I think multi-target MIMO is more useful than single-target MIMO for the
>> congested case. It certainly helps that the client doesn't need to
>> explicitly support MIMO for it to work.
>
> better yes, but at what price difference? :-)
>
> If the APs cost $1000 each instead of $100 each, you are better off with more
> of the cheaper APs.

...until you run out of channels to run them on. Then, if you still need more capacity, multi-target MIMO is probably still worth it. Hopefully, it won't be as much as a tenfold price difference.

>> However, I think there is a sliding scale on this. With the modern
>> modulation schemes (and especially with wide channels), the handshake and
>> preamble really are a lot of overhead. If you have the chance to triple
>> your throughput for a 20% increase in channel occupation, you need a
>> *really* good reason not to take it.
>
> if you can send 300% as much data in 120% the time, then the overhead of
> sending a single packet is huge (you spend _far_ more airtime on the overhead
> than the packet itself)
>
> now, this may be true for small packets, which is why this should be
> configurable, and configurable in terms of data size, not packet count.
>
> by the way, the same effect happens on wired ethernet networks, see Jumbo
> Frames and the advantages of using them.
>
> the advantages are probably not 300% data in 120% time, but more like 300%
> data in 270% time, and at that point, the fact that you are 2.7x as likely to
> lose the packet to another transmission very quickly makes it the wrong thing
> to do.

The conditions are probably different in each direction. The AP is more likely to be sending large packets (DNS response, HTTP payload) while the client is more likely to send small packets (DNS request, TCP SYN, HTTP GET). The AP is also likely to want to aggregate a TCP SYN/ACK with another packet. So yes, intelligence of some sort is needed. And I should probably look up just how big the handshake and preamble are in relative terms - but I do already know that under ideal conditions, recent wifi variants still get a remarkably small percentage of their theoretical data rate as actual throughput - and that's with big packets and aggregation.
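To put rough numbers on that trade-off, here's a back-of-the-envelope sketch in Python. The per-transmission overhead and per-frame airtime figures are illustrative assumptions (I haven't looked the real ones up), but they show how the fixed cost of contention, preamble and link-layer ack decides whether a three-frame aggregate lands nearer the 120% case or the 270% case:

    # Illustrative arithmetic only: assumed airtime costs, not measurements.
    FIXED_US = 100.0      # assumed per-transmission cost: contention + preamble + link ack
    PER_FRAME_US = 12.5   # assumed airtime for one 1500-byte frame at a fast MCS

    def airtime_us(frames_per_burst):
        # Total channel time for one transmission carrying N aggregated frames.
        return FIXED_US + frames_per_burst * PER_FRAME_US

    print(airtime_us(3) / airtime_us(1))  # ~1.2: 300% of the data in ~120% of the time

    PER_FRAME_US = 500.0  # assumed per-frame airtime at a slow legacy rate
    print(airtime_us(3) / airtime_us(1))  # ~2.7: 300% of the data in ~270% of the time

Which is really the same point: when the payload airtime dominates (slow rates), aggregation buys little extra data for the added collision exposure; when the fixed per-transmission overhead dominates (fast rates, wide channels), it is close to free.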
>>> But even with that, doesn't TCP try to piggyback the ack on the next packet
>>> of data anyway? so unless it's a purely one-way dataflow, this still
>>> wouldn't help.
>>
>> Once established, an HTTP session looks exactly like that. I also see no
>> reason in theory why a TCP ack couldn't be piggybacked on the *next*
>> available link ack, which would relax the latency requirements considerably.
>
> I don't understand (or we are talking past each other again)
>
> laptop -- ap -- 50 hops -- server
>
> packets from the server to the laptop could have an ack piggybacked by the
> driver on the wifi link ack, but for packets the other direction, the ap
> can't possibly know that the server will ever respond, so it can't reply with
> a TCP level ack when it does the link level ack.

Which is fine, because the bulk of the traffic will be from the AP to the client. Unless you're running servers wirelessly, which seems dumb, or you've got a bunch of journalists uploading copy and photos, which seems like a more reasonable use case.

But what I meant is that the TCP ack doesn't need to be piggybacked on the link-level ack for the same packet - it can go on a later one. Think VJ compression in PPP - there's a small lookup table which can be used to fill in some of the data.

>> If that were implemented and deployed successfully, it would mean that the
>> majority of RTS/CTS handshakes initiated by clients would be to send DNS
>> queries, TCP handshakes and HTTP request headers, all of which are actually
>> important. It would, I think, typically reduce contention by a large margin.
>
> only if stand-alone ack packets are really a significant portion of the
> network traffic.

I think they are significant, in terms of the number of uncoordinated contentions for the channel.

Remember, the AP occupies a privileged position in the network. It transmits the bulk of the data, and the bulk of the individual packets. It knows when it is already busy, so the backoff algorithm never kicks in for its own traffic; it can be a model citizen of the wireless spectrum. By contrast, clients send much less on an individual basis, but they have to negotiate with the AP *and* every other client for airtime to do so.

Every TCP ack disrupts the AP's flow of traffic. If the AP aggregates three HTTP payload packets into a single transmission, then it must expect to receive a TCP ack coming the other way - in other words, to be interrupted - for every such aggregate it has sent. The less often clients have to contend for the channel, the more time the AP can spend distributing its self-coordinated, useful traffic.

Let's suppose a typical HTTP payload is 45kB (including TCP/IP wrapping). That can be transmitted in 10 triples of 1500B packets. Associated with the transaction there would also be a DNS request and response, a TCP handshake (SYN, SYN/ACK), an HTTP request (ACK/GET), and a TCP close (FIN/ACK, ACK), which I'll assume can't be aggregated with other traffic. So the AP must transmit 13 times to complete this small request. As things currently stand, the client must *also* transmit - 14 times. The wireless channel is therefore contended for 27 times, of which 10 (37%) are pure TCP acks that could instead piggyback on a subsequent link-layer ack.

I'd say 37% is significant, wouldn't you?

 - Jonathan Morton

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat