David,

On 10/11/2015 3:49 PM, David Lang wrote:
>> One of the reasons we use packets is to provide more timely,
>> fine-grained feedback between endpoints. Aggregating them at the source
>> or in the network (rather than at the receiver) can amplify reaction to
>> packet loss (when the aggregate ACK is lost)
> 
> the issue of amplified reaction to packet loss is a valid one, but
> packet loss is still pretty rare,

But you claim to need to drop the ACKs to avoid dropping other packets.
What's to say that 1) the ACKs won't end up getting dropped in your
system or 2) the ACKs won't get dropped further downstream?

Again, *you* don't know.

...
>> and increases
>> discretization effects in the TCP algorithms.
> 
> This is where I disagree with you. The ACK packets are already collapsed
> time-wise.

Time is only one element of the discretization. The other is fate
sharing - dropping one aggregate ACK has the effect of dropping the
entire set of un-aggregated ones. The stream of unaggregated ACKs gives
TCP a robustness to drops, but you're now forcing a large burst of drops
for a specific packet type. That's the problem.
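To make the fate-sharing point concrete, here's a toy sketch (purely illustrative, not modeling any real stack; segment sizes and counts are made up): with per-segment cumulative ACKs, losing one ACK still leaves the sender with feedback covering most of the window, while losing a single aggregated ACK leaves it with nothing.

```python
# Toy comparison of feedback surviving one lost ACK:
# per-segment cumulative ACKs vs. one aggregated ACK.

def acked_bytes(ack_numbers, lost_index):
    """Highest cumulative ACK the sender still sees after one ACK is dropped."""
    delivered = [a for i, a in enumerate(ack_numbers) if i != lost_index]
    return max(delivered, default=0)

SEGMENTS = 10  # hypothetical window: 10 segments of 1000 bytes each

# Per-segment ACKs: cumulative ACKs 1000, 2000, ..., 10000.
per_segment = [1000 * (i + 1) for i in range(SEGMENTS)]

# Aggregation collapses the window into a single ACK for 10000.
aggregated = [1000 * SEGMENTS]

# Drop one ACK in each scheme.
print(acked_bytes(per_segment, lost_index=SEGMENTS - 1))  # 9000: most feedback survives
print(acked_bytes(aggregated, lost_index=0))              # 0: all feedback lost
```

The sender in the aggregated case is left waiting on a retransmission timeout; in the per-segment case a later cumulative ACK still covers nearly the whole window.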

We've gone around this issue over many messages now, but the key point is
that this isn't a new idea, nor are the considerations and concerns it
raises new.

Ultimately, if you have too many packets from a protocol that reacts to
congestion, the correct solution is to provide that feedback and let the
ends figure out how to respond. That's the only solution that will work
when (not if) new messages or encrypted headers enter the picture.
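As a sketch of the "signal, don't drop" principle (a simplified AIMD caricature, not any specific TCP implementation; the function name and constants are made up): the network marks packets ECN-style, and the endpoint decides how to back off.

```python
# Hedged sketch: the network signals congestion (an ECN-style mark on the
# ACK) and the endpoint responds, rather than the network discarding ACKs.

def sender_react(cwnd, ack_marked):
    """Simplistic AIMD: additive increase per ACK, halve on a congestion mark."""
    return max(1, cwnd // 2) if ack_marked else cwnd + 1

cwnd = 10
cwnd = sender_react(cwnd, ack_marked=False)  # unmarked ACK: increase to 11
cwnd = sender_react(cwnd, ack_marked=True)   # mark seen: halve to 5
print(cwnd)  # 5
```

The point is that the response lives at the ends, so it keeps working when the network can no longer parse (or safely rewrite) the packets it carries.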

These kinds of in-network hacks are what's really killing the ability of
the Internet to evolve.

Joe

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
