On Sun, 24 Aug 2014, Jonathan Morton wrote:
On 24 Aug, 2014, at 8:12 am, David Lang wrote:
On Sun, 24 Aug 2014, Jonathan Morton wrote:
I think multi-target MIMO is more useful than single-target MIMO for the
congested case. It certainly helps that the client doesn't need to
explicitly support MIMO for it to work.
better yes, but at what price difference? :-)
If the APs cost $1000 each instead of $100 each, you are better off with more
of the cheaper APs.
...until you run out of channels to run them on. Then, if you still need more
capacity, multi-target MIMO is probably still worth it.
keep in mind that MIMO only increases your capacity in one direction.
also with 5GHz, you have quite a few channels available. By turning the power
down (especially if you can tell the clients to do so as well), you can pack a
LOT of APs into an area. Yes, you will eventually run out of channels, but by
then you are talking extreme density, not just high density
Hopefully, it won't be as much as a tenfold price difference.
Until we have open drivers that let the commodity APs be fully controlled, you
are comparing sub-$100 home APs to the high-end centralized AP systems from
Cisco and similar, so a tenfold price difference is actually pretty close to
accurate ($700+ per AP, plus the central system to run them, plus software
licenses, maintenance contracts, etc...)
It was only a couple of years ago that a building I could have set up for a
couple thousand dollars first had a $50K proprietary system purchased for it,
which was then replaced by an even more expensive system
However, I think there is a sliding scale on this. With the modern
modulation schemes (and especially with wide channels), the handshake and
preamble really are a lot of overhead. If you have the chance to triple
your throughput for a 20% increase in channel occupation, you need a
*really* good reason not to take it.
if you can send 300% as much data in 120% of the time, then the overhead of
sending a single packet is huge (you spend _far_ more airtime on the overhead
than on the packet itself)
now, this may be true for small packets, which is why this should be
configurable, and configurable in terms of data size, not packet count.
by the way, the same effect happens on wired ethernet networks, see Jumbo
Frames and the advantages of using them.
the advantages are probably not 300% of the data in 120% of the time, but more
like 300% of the data in 270% of the time, and at that point the fact that you
are 2.7x as likely to lose the packet to another transmission very quickly
makes it the wrong thing to do.
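The two scenarios above come down to how much of each transmission is fixed
per-transmission overhead versus payload airtime. A small sketch, using made-up
timing constants (not real 802.11 figures), shows how the ratio swings between
"300% data in ~120% time" and "300% data in ~270% time":

```python
# Illustrative airtime arithmetic for aggregation. OVERHEAD_US and
# PAYLOAD_US are assumptions for the sake of the example, not measured
# 802.11 timings.
OVERHEAD_US = 100.0   # assumed fixed cost per transmission: preamble, handshake, IFS
PAYLOAD_US = 25.0     # assumed airtime for one 1500-byte packet's payload

def airtime(n_packets):
    """Total airtime for one transmission aggregating n_packets."""
    return OVERHEAD_US + n_packets * PAYLOAD_US

single = airtime(1)             # 125 us for 100% of the data
triple = airtime(3)             # 175 us for 300% of the data
print(triple / single)          # 1.4 -> 300% of the data in 140% of the airtime

# If payload time dominates instead (e.g. a low PHY rate), the ratio
# approaches 3.0 and aggregation buys much less:
PAYLOAD_US = 200.0
print(airtime(3) / airtime(1))  # ~2.33 -> closer to "300% data in 270% time"
```

The crossover depends entirely on the overhead-to-payload ratio, which is why
the argument cuts both ways depending on packet size and PHY rate.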
The conditions are probably different in each direction. The AP is more
likely to be sending large packets (DNS response, HTTP payload) while the
client is more likely to send small packets (DNS request, TCP SYN, HTTP GET).
The AP is also likely to want to aggregate a TCP SYN/ACK with another packet.
If your use case is web browsing or streaming video, yes. If it's gaming or
other interactive use, much less so.
So yes, intelligence of some sort is needed. And I should probably look up
just how big the handshake and preamble are in relative terms - but I do
already know that under ideal conditions, recent wifi variants still get a
remarkably small percentage of their theoretical data rate as actual
throughput - and that's with big packets and aggregation.
That is very true and not something I'm disagreeing with
802.11ac-type aggregation, where individual packets are acked so that only the
ones that get clobbered need to be re-sent, makes this a lot less painful.
but even there, if the second and tenth packets get clobbered, is it smart
enough to only resend those two? or will it resend 2-10?
But even with that, doesn't TCP try to piggyback the ack on the next packet
of data anyway? So unless it's a purely one-way dataflow, this still
wouldn't help.
Once established, a HTTP session looks exactly like that. I also see no
reason in theory why a TCP ack couldn't be piggybacked on the *next*
available link ack, which would relax the latency requirements considerably.
I don't understand (or we are talking past each other again)
laptop -- ap -- 50 hops -- server
packets from the server to the laptop could have an ack piggybacked by the
driver on the wifi link ack, but for packets in the other direction, the AP
can't possibly know whether the server will ever respond, so it can't reply
with a TCP-level ack when it does the link-level ack.
Which is fine, because the bulk of the traffic will be from the AP to the
client. Unless you're running servers wirelessly, which seems dumb, or you've
got a bunch of journalists uploading copy and photos, which seems like a more
reasonable use case.
But what I meant is that the TCP ack doesn't need to be piggybacked on the
link-level ack for the same packet - it can go on a later one. Think VJ
compression in PPP - there's a small lookup table which can be used to fill in
some of the data.
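The idea of deferring a pure TCP ack until the client has a link-level ack to
carry it could be sketched roughly as below. This is purely hypothetical: the
class and method names are invented, and nothing like this exists in current
drivers.

```python
# Hypothetical sketch: hold pure TCP acks instead of contending for the
# channel, and release them on the next outgoing link-level ack.
class DeferredAcks:
    def __init__(self):
        self.pending = []          # pure TCP acks waiting for a carrier frame

    def queue_tcp_ack(self, ack):
        self.pending.append(ack)   # don't contend for the channel yet

    def on_link_ack(self, link_ack):
        """Attach all pending TCP acks to an outgoing link-layer ack."""
        piggybacked, self.pending = self.pending, []
        return (link_ack, piggybacked)

d = DeferredAcks()
d.queue_tcp_ack("ACK seq=1001")
d.queue_tcp_ack("ACK seq=2501")
print(d.on_link_ack("BA#42"))  # ('BA#42', ['ACK seq=1001', 'ACK seq=2501'])
```

The win is that the deferred acks ride on a transmission the client was going
to make anyway, instead of each one triggering its own contention.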
If that were implemented and deployed successfully, it would mean that the
majority of RTS/CTS handshakes initiated by clients would be to send DNS
queries, TCP handshakes and HTTP request headers, all of which are actually
important. It would, I think, typically reduce contention by a large
margin.
only if stand-alone ack packets are really a significant portion of the
network traffic.
I think they are significant, in terms of the number of uncoordinated
contentions for the channel.
Remember, the AP occupies a privileged position in the network. It transmits
the bulk of the data, and the bulk of the number of individual packets. It
knows when it's already busy itself, so the backoff algorithm never kicks in
for the noise it makes itself.
not from the noise it makes itself, no - but the noise from all the other APs,
and from the large mass of clients talking to those other APs, is a problem
It can be a model citizen of the wireless spectrum.
agreed, it's also the one part that we (as network admins) have a hope of
controlling
By contrast, clients send much less on an individual basis, but they have to
negotiate with the AP *and* every other client for airtime to do so. Every
TCP ack disrupts the AP's flow of traffic. If the AP aggregates three HTTP
payload packets into a single transmission, then it must expect to receive a
TCP ack coming the other way - in other words, to be interrupted - for every
such aggregate packet it has sent. The less often clients have to contend for
the channel, the more time the AP can spend distributing its self-coordinated,
useful traffic.
Let's suppose a typical HTTP payload is 45kB (including TCP/IP wrapping).
That can be transmitted in 10 triples of 1500B packets. There would also be a
DNS request and response, a TCP handshake (SYN, SYN/ACK), a HTTP request
(ACK/GET), and a TCP close (FIN/ACK, ACK), which I'll assume can't be
aggregated with other traffic, associated with the transaction.
So the AP must transmit 13 times to complete this small request. As things
currently stand, the client must *also* transmit - 14 times. The wireless
channel is therefore contended for 27 times, of which 10 (37%) are pure TCP
acks that could piggyback on a subsequent link-layer ack.
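The counting above can be checked directly (packet sizes and per-message counts
as assumed in the text):

```python
# Reproducing the transaction count: 45 kB payload in 1500-byte packets,
# aggregated in triples, plus the per-transaction control messages.
data_packets = 45_000 // 1500          # 30 packets
ap_aggregates = data_packets // 3      # 10 transmissions of 3 packets each
ap_other = 3                           # DNS response, SYN/ACK, FIN/ACK
client_other = 4                       # DNS request, SYN, ACK/GET, final ACK
client_acks = ap_aggregates            # one pure TCP ack per aggregate

ap_tx = ap_aggregates + ap_other       # 13 AP transmissions
client_tx = client_acks + client_other # 14 client transmissions
total = ap_tx + client_tx              # 27 contentions for the channel
print(client_acks / total)             # ~0.37 -> the "37% pure acks" figure
```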
not all transmissions are equal.
a transmission of 3x1500-byte packets takes a LOT longer than one of a single
64-byte ack packet (and remember that ack packets can be aggregated as well, so
it's not one TCP ack transmission per aggregate sent - but it is one RF ack per
aggregate sent)
so the overhead is a lot less than 37%
but it is a lot larger than simple packet size would indicate, because the
encapsulation per transmission is a fixed size (and no, I don't know how large
it is)
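Weighting the same 27 transmissions by airtime rather than counting them
illustrates both points at once. The timing constants below are assumptions
(real 802.11 overhead varies with PHY rate and preamble type), but the shape of
the result holds:

```python
# Airtime-weighted version of the 37% figure. OVERHEAD_US and US_PER_BYTE
# are assumed values, not measured 802.11 timings.
OVERHEAD_US = 100.0        # assumed fixed per-transmission encapsulation cost
US_PER_BYTE = 0.02         # assumed payload airtime per byte at a fast PHY rate

def tx_time(payload_bytes):
    return OVERHEAD_US + payload_bytes * US_PER_BYTE

ap_time = 10 * tx_time(3 * 1500) + 3 * tx_time(64)  # 10 aggregates + 3 small frames
ack_time = 10 * tx_time(64)                         # the 10 pure TCP acks
client_time = ack_time + 4 * tx_time(64)            # plus 4 small client frames
total = ap_time + client_time
print(ack_time / total)    # ~0.28: below the 37% contention share, but far
                           # above the ~1.4% the acks' raw byte count suggests
```

The fixed per-transmission encapsulation is exactly why tiny acks cost much
more airtime than their byte count implies.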
David Lang
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat