I totally understand what you are saying. However, I believe cake's egress and ingress modes currently behave as two extremes, and one could argue that neither is the golden mean.

With the patch below applied in ingress mode, a single host downloading over 32 flows saw throughput increase from ~7 Mbps to ~10 Mbps (configured limit 12200 kbps), while latency increased from ~10 ms to ~50 ms, which is still acceptable. For comparison, egress mode in the same setup gives me ~11.5 Mbps throughput at ~500 ms latency.

I would like to hear your thoughts about this idea: the patch increments q->time_next_packet differently for dropped packets than for packets that are actually delivered. Please focus on the idea, not the actual implementation :) (also pasted at https://pastebin.com/SZ14WiYw)

=============8<=============

diff --git a/sch_cake.c b/sch_cake.c
index 82f264f..a3a4a88 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -769,6 +769,7 @@ static void cake_heapify_up(struct cake_sched_data *q, u16 i)
 }

 static void cake_advance_shaper(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now);
+static void cake_advance_shaper2(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now);

 #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 8, 0)
 static unsigned int cake_drop(struct Qdisc *sch)
@@ -1274,7 +1275,7 @@ retry:
                /* drop this packet, get another one */
                if(q->rate_flags & CAKE_FLAG_INGRESS) {
                        len = cake_overhead(q, qdisc_pkt_len(skb));
-                       cake_advance_shaper(q, b, len, now);
+                       cake_advance_shaper2(q, b, len, now);
                        flow->deficit -= len;
                        b->tin_deficit -= len;
                }
@@ -1286,8 +1287,6 @@ retry:
                qdisc_qstats_drop(sch);
                kfree_skb(skb);
 #endif
-               if(q->rate_flags & CAKE_FLAG_INGRESS)
-                       goto retry;
        }

        b->tin_ecn_mark += !!flow->cvars.ecn_marked;
@@ -1351,6 +1350,24 @@ static void cake_advance_shaper(struct cake_sched_data *q, struct cake_tin_data
        }
 }

+static void cake_advance_shaper2(struct cake_sched_data *q, struct cake_tin_data *b, u32 len, u64 now)
+{
+       /* charge packet bandwidth to this tin, lower tins,
+        * and to the global shaper.
+        */
+       if(q->rate_ns) {
+               s64 tdiff1 = b->tin_time_next_packet - now;
+               s64 tdiff2 = (len * (u64)b->tin_rate_ns) >> b->tin_rate_shft;
+               s64 tdiff3 = (len * (u64)q->rate_ns) >> q->rate_shft;
+
+               if(tdiff1 < 0)
+                       b->tin_time_next_packet += tdiff2;
+               else if(tdiff1 < tdiff2)
+                       b->tin_time_next_packet = now + tdiff2;
+
+               q->time_next_packet += (tdiff3*27)>>5;
+       }
+}
 static void cake_reset(struct Qdisc *sch)
 {
        u32 c;

=============8<=============

On 11/10/2017 4:50 PM, Jonathan Morton wrote:

> In fact, that's why I put a failsafe into ingress mode, so that it would never stall completely.  It can happen, however, that throughput is significantly reduced when the drop rate is high.
>
> If throughput is more important to you than induced latency, switch to egress mode.
>
> Unfortunately it's not possible to guarantee both low latency and high throughput when operating downstream of the bottleneck link.  ECN gives you better results, though.
>
> - Jonathan Morton



_______________________________________________
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake
