On Wed, 7 Oct 2015, dpr...@reed.com wrote:

On Wednesday, October 7, 2015 4:18pm, "Mikael Abrahamsson" <swm...@swm.pp.se> said:

On Wed, 7 Oct 2015, dpr...@reed.com wrote:

> not built up! The purpose of queueing is to absorb bursts that can't be
> anticipated, not to build up congestion in order to have enough data to
> perform a dubious optimization that can best be done at the source of
> traffic in cooperation with the destination.

There is no congestion when the ACK suppression kicks in and is useful.

We may be defining congestion *differently*. If a backlog of packets from one endpoint to another builds up in an outbound queue, that is congestion. That it might be caused by scheduling of the "channel" is not really relevant - at least not in my definition of congestion.

So let's look at what TCP (or any other end-to-end protocol) can do to avoid such congestion:

1) Knowing that the packets will be "hung up" in the middle (which it can observe by noticing that acks get hung up), it can delay sending acks, since a single cumulative ack carries the same information as multiple acks. That assumes the link in question is the "bottleneck" link on the path.

2) The ideal TCP behavior here would be to bundle packets and to bundle ACKs, given that it can see that the bottleneck link has intermittent traffic openings with long gaps between open packet slots (a rough sketch of this follows).
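To make that concrete, here is a minimal sketch of a receiver-side cumulative-ACK pacer in Python; the class, the send_ack callback, and the 20 ms hold_time are hypothetical illustrations, not taken from any real TCP stack:

    class CumulativeAckPacer:
        """Hold per-segment ACKs back and emit one cumulative ACK per
        estimated channel opening, instead of one ACK per segment."""

        def __init__(self, send_ack, hold_time=0.02):
            self.send_ack = send_ack     # callback: send_ack(ack_point)
            self.hold_time = hold_time   # estimated gap between open slots
            self.ack_point = 0           # highest in-order byte received
            self.deadline = None         # when the held ACK must go out

        def on_segment(self, seq_end, now):
            # Advance the cumulative ACK point (in-order arrivals only,
            # to keep the sketch simple) and arm the hold timer.
            self.ack_point = max(self.ack_point, seq_end)
            if self.deadline is None:
                self.deadline = now + self.hold_time

        def on_tick(self, now):
            # One cumulative ACK replaces every per-segment ACK that
            # arrived during the hold interval.
            if self.deadline is not None and now >= self.deadline:
                self.send_ack(self.ack_point)
                self.deadline = None

The sender side of (2) has the same shape: accumulate segments until the estimated opening, then hand the whole burst down at once.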

the endpoints may be sufficiently isolated from the bottleneck that they can't tell the timing of packets precisely enough to detect this traffic pattern.

Let's look at what the MAC layer of the shared link can do:

1) It can bundle IP packets being sent from a station (from an arbitrary number of sources) into a cumulative bundle, concatenating them (see the sketch after this list). This is far more general than ACK consolidation, and covers protocols that have small packets, no ACKs at all, etc. Obviously this depends on the specific MAC protocol, and the MAC protocol should maintain some degree of "fair sharing": in essence there is one queue waiting for airtime, it is just distributed across many transmitters, so the amount of bundling needs to be bounded. Since arbitration time for channel access is pure overhead, the critical parameter is the arbitration overhead as a percentage of airtime - this sets the desired bundle size.

2) To the extent that packets are bundled, the data gains a certain degree of compressibility. For example, simple adaptive compression algorithms like Lempel-Ziv encoding are protocol independent. If the frequent case is a series of ACKs, for example, Lempel-Ziv encoding of the bundle will reduce the air time it occupies.

3) Note that the critical parameter here is the arbitration overhead. For the normal mode of the WiFi MAC, that is huge. The preambles, etc. needed to deal with unequal received signal strength are relatively large (multiple microseconds), and on top of that comes the cost of a collision (RTS/CTS, which would limit that cost, is typically not used). Reducing the arbitration overhead is much more effective than ACK thinning. On the principle of working on the big payoffs first, ACK thinning just doesn't make sense outside a hypothetical that disallows much more significant improvements - improvements that don't require specialization to a particular version of TCP at a specific point in time.
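A rough sketch of (1) and (2) together, in Python; the length-prefix framing, the 54 Mbit/s rate, the 10% overhead target, and the 120 us arbitration cost are all illustrative assumptions, with zlib standing in for "some Lempel-Ziv-style coder":

    import zlib

    ARBITRATION_US = 120.0   # assumed fixed cost per channel access
    OVERHEAD_CAP = 0.10      # arbitration <= 10% of the burst's airtime
    US_PER_BYTE = 8 / 54.0   # ~54 Mbit/s PHY -> microseconds per byte

    def target_bundle_bytes():
        # Smallest bundle for which the fixed arbitration cost stays
        # under OVERHEAD_CAP of the total airtime (~7300 bytes here).
        return ARBITRATION_US * (1 - OVERHEAD_CAP) / (OVERHEAD_CAP * US_PER_BYTE)

    def make_bundle(queue, limit_bytes=8000):
        # Concatenate queued IP packets (from any number of sources) into
        # one length-prefixed burst, bounded to preserve fair sharing.
        parts, size = [], 0
        while queue and size + len(queue[0]) <= limit_bytes:
            pkt = queue.pop(0)
            parts.append(len(pkt).to_bytes(2, "big") + pkt)
            size += len(pkt)
        payload = b"".join(parts)
        # Protocol-independent LZ compression; a run of near-identical
        # ACK headers compresses very well. Fall back if it doesn't help.
        squeezed = zlib.compress(payload)
        if len(squeezed) < len(payload):
            return b"\x01" + squeezed   # flag byte: compressed
        return b"\x00" + payload        # flag byte: raw

    # Twenty near-identical 40-byte ACK-like packets: 840 framed bytes
    # shrink to a small fraction of that on the air.
    queue = [bytes([0x45, 0, 0, 40, i]) + bytes(35) for i in range(20)]
    print(target_bundle_bytes(), len(make_bundle(queue)))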

packets that don't have to be sent beat packets that are bundled and/or compressed by a large margin :-)

compression takes a lot of cpu, and in many cases the devices have enough trouble just shoving packets around; doing compression on them is going to end up as a net loss in throughput.
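for a sense of scale on the arbitration point, textbook 802.11a/g numbers (illustrative, and ignoring the link-layer ACK exchange and retries) put the fixed per-access cost roughly an order of magnitude above a lone TCP ACK's payload airtime:

    DIFS_US     = 34.0     # SIFS 16 us + 2 x 9 us slots (802.11a/g OFDM)
    BACKOFF_US  = 67.5     # average backoff: CWmin(15)/2 x 9 us slot
    PREAMBLE_US = 20.0     # OFDM preamble + PLCP header
    ACK_BYTES   = 40 + 34  # TCP ACK plus (roughly) an 802.11 MAC header

    fixed_us   = DIFS_US + BACKOFF_US + PREAMBLE_US   # ~121.5 us per access
    payload_us = ACK_BYTES * 8 / 54.0                 # ~11 us at 54 Mbit/s
    print(fixed_us / payload_us)                      # roughly 11x

so an ACK that is never sent, or that rides inside an existing burst, saves an order of magnitude more airtime than its own bytes suggest.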

David Lang
_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm