The attached patch attempts to deprioritize bulk ack flows in cake. Once a flow accumulates enough acks for the ack filter to start eliminating them, cake's "sparse flow optimization" will begin prioritizing that now-shorter queue. This patch attempts to stop that: keeping the flow in the bulk set should give the filter more time to eliminate more acks, and slightly deprioritizes bulk ack flows in favor of other traffic that needs lower latency.
Interesting stats from testing would be the change in ack_drops between the two versions, and changes in the rate of increase of tcp. The 50x1 settings would be the most dramatic test...

--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
From dd5bb48a4744cbd5e3081379e1e0954a31fbca02 Mon Sep 17 00:00:00 2001
From: Dave Taht <dave.t...@gmail.com>
Date: Tue, 5 Dec 2017 13:52:30 -0800
Subject: [PATCH] set filtered ack flows to bulk mode

Once we start accumulating enough acks to filter out, and we start
filtering them out, the "sparse flow optimization" in cake will start
prioritizing the shorter queues. This patch attempts to stop that,
which should give more time for more acks to be eliminated, and
deprioritize the ack flows slightly in favor of other traffic that
needs less latency.
---
 sch_cake.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sch_cake.c b/sch_cake.c
index 4010388..72ab02b 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -1492,6 +1492,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	ack = cake_ack_filter(q, flow);
 	if (ack) {
+		flow->set = CAKE_SET_BULK;
 		b->ack_drops++;
 		sch->qstats.drops++;
 		b->bytes += ack->len;
@@ -1532,6 +1533,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	ack = cake_ack_filter(q, flow);
 	if (ack) {
+		flow->set = CAKE_SET_BULK;
 		b->ack_drops++;
 		sch->qstats.drops++;
 		b->bytes += qdisc_pkt_len(ack);
--
2.7.4
_______________________________________________ Cake mailing list Cake@lists.bufferbloat.net https://lists.bufferbloat.net/listinfo/cake