Well, ACK filtering/thinning is a simple trade-off: redundancy versus 
bandwidth. Since the RFCs say a receiver should acknowledge every second full 
MSS, I think the decision whether to filter or not should be left to the end 
user and not to some misguided middlebox; if a DOCSIS ISP wants to save 
precious upstream bandwidth, they should at least re-synthesize the filtered 
ACKs after passing their upstream bottleneck, IMHO. This is not reasonable 
network management, in my irrelevant opinion, unless the user actively opts 
in. Or, put differently, the real fix for DOCSIS ISPs is simply not to sell 
internet connections with asymmetries that make it impossible to saturate the 
link with TCP traffic without heroic measures like ACK filtering.
So I am all for cake learning to do this, but I am 100% against recommending 
its use unless one is "blessed" with a clueless ISP that has trouble 
calculating the maximal permissible Up/Down asymmetry for TCP...
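To make "maximal permissible asymmetry" concrete, here is a back-of-the-envelope sketch (my numbers, not from this thread): with one ACK per two full-MSS segments, a 1460-byte MSS, and an assumed ~64 bytes per ACK on the wire, you can compute the upstream rate that pure ACK traffic consumes at a given downstream rate:

```python
# Back-of-the-envelope check (assumed sizes): upstream bandwidth consumed
# by one ACK per two full-MSS downstream segments.
MSS = 1460           # TCP payload bytes per downstream segment
ACK_WIRE_BYTES = 64  # rough on-the-wire size of an empty ACK (assumption)

def min_upstream_bps(downstream_bps: float) -> float:
    """Upstream rate consumed by ACKs alone at the given downstream rate."""
    segments_per_s = downstream_bps / 8 / MSS  # full-MSS segments per second
    acks_per_s = segments_per_s / 2            # one ACK per two segments
    return acks_per_s * ACK_WIRE_BYTES * 8     # back to bits per second

print(round(min_upstream_bps(50e6)))  # → 1095890 (≈1.1 Mbit/s of ACKs)
```

Under these assumptions the asymmetry ceiling works out to about 46:1, which is why a 50x1 link cannot quite carry the ACK stream for a saturated downstream without filtering.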
BTW, I believe older TCPs used the reception of an ACK, and not the 
acknowledged byte increment, to widen their send/congestion windows; ACK 
filtering should make slow start behave more sluggishly for such hosts. As far 
as I can tell, Linux recently learned to deal with this, since GRO in essence 
also makes the receiver ACK more rarely (once every 2 super-packets), so Linux, 
I think, now evaluates the number of acknowledged bytes. But I have no idea 
about the Windows or BSD TCP implementations.
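A toy model of that difference (this is a sketch of the general idea, not Linux's actual code; byte counting here stands in for Appropriate Byte Counting, RFC 3465, without its per-ACK cap): if only every Nth ACK survives filtering, a stack that increments cwnd once per ACK grows far more slowly than one that credits the bytes each ACK covers:

```python
# Toy slow-start comparison under ACK filtering (assumed parameters, not
# any real stack's implementation). Only every Nth ACK survives the filter.
MSS = 1460

def slow_start(bytes_to_ack: int, ack_every_n: int, byte_counting: bool) -> int:
    """Return cwnd (in MSS) after ACKing bytes_to_ack, starting from 10 MSS."""
    cwnd = 10
    segs = bytes_to_ack // MSS
    for seg in range(1, segs + 1):
        if seg % ack_every_n == 0:    # this ACK survived filtering
            if byte_counting:
                cwnd += ack_every_n   # credit all segments the ACK covers
            else:
                cwnd += 1             # classic one-increment-per-ACK
    return cwnd

one_mb = 1_000_000
print(slow_start(one_mb, 2, False))   # → 352  unfiltered, per-ACK counting
print(slow_start(one_mb, 16, False))  # → 52   heavy filtering starves growth
print(slow_start(one_mb, 16, True))   # → 682  byte counting barely notices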

Best Regards



> On Nov 29, 2017, at 07:09, Mikael Abrahamsson <swm...@swm.pp.se> wrote:
> 
> On Tue, 28 Nov 2017, Dave Taht wrote:
> 
>> Recently Ryan Mounce added ack filtering capabilities to the cake qdisc.
>> 
>> The benefits were pretty impressive at a 50x1 Down/Up ratio:
>> 
>> http://blog.cerowrt.org/post/ack_filtering/
>> 
>> And quite noticeable at 16x1 ratios as well.
>> 
>> I'd rather like to have a compelling list of reasons why not to do
>> this! And ways to do it better, if not. The relevant code is hovering
>> at:
>> 
>> https://github.com/dtaht/sch_cake/blob/cobalt/sch_cake.c#L902
> 
> Your post is already quite comprehensive when it comes to downsides.
> 
> The better solution would of course be to have the TCP peeps change the way 
> TCP works so that it sends fewer ACKs. I don't want middle boxes making 
> "smart" decisions when the proper solution is for both end TCP speakers to do 
> less work by sending fewer ACKs. In the TCP implementations I tcpdump 
> regularly, it seems they send one ACK per 2 downstream packets.
> 
> At 1 gigabit/s that's in the order of 35k pps of ACKs (100 megabyte/s divided 
> by 1440 divided by 2). That's, in my opinion, a completely ludicrous rate of 
> ACKs for no good reason.
> 
> I don't know what the formula should be, but it sounds like the ACK sending 
> ratio should be influenced by how many in-flight ACKs there might be. Is 
> there any reason to have more than 100 ACKs in flight at any given time? 500? 
> 1000?
> 
> My DOCSIS connection (inferred through observation) seems to run on 1ms 
> upstream time slots, and my modem will delete contiguous ACKs at 16 or 32 ACK 
> intervals, ending up running at typically 1-2 ACKs per 1ms time slot. This 
> cuts down the ACK rate when I do 250 megabit/s downloads from 5-8 megabit/s 
> to 400 kilobit/s of used upstream bw.
> 
> Since this ACK reduction is done on probably hundreds of millions of 
> fixed-line subscriber lines today, what arguments do designers of TCP have to 
> keep sending one ACK per 2 received TCP packets?
> 
> -- 
> Mikael Abrahamsson    email: swm...@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
