Re: [Cake] Hopefully fixed ACK filter for v6

2018-05-04 Thread Jonathan Morton
>> I certainly am thankful for your work, and believe you deserve $CAKE
>> and $BEVERAGE, I am also led to believe 'the cake is a lie'
>> https://m.youtube.com/watch?v=qdrs3gr_GAs ;)
> 
> Haha, yes, of course I am aware that the cake really is a lie. Which
> makes us in league with GLaDOS, I suppose, since we're promising
> everyone CAKE ;)

In lieu of sending a home-baked cake to Denmark, I have obtained an allegedly 
Danish apple cake from the local supermarket.  Though with the inherent 
ambiguity of the English language, it's unclear whether it is the cake itself 
that's supposed to be Danish, or the apples used to make it.  (According to the 
ingredients list, only 2% of the cake is actually apples.)

Similarly vague and imprecise are the serving instructions, which talk about a 
pre-heated oven but say nothing about the required temperature!  Luckily, I do 
enough baking to know that 175°C works well in this oven.

 - Jonathan Morton



Re: [Cake] [PATCH net-next v8 1/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-04 Thread Toke Høiland-Jørgensen
Thank you for the review! A few comments below, I'll fix the rest.

> [...]
> 
> So sch_cake doesn't accept normal tc filters? Is this intentional?
> If so, why?

For two reasons:

- The two-level scheduling used in CAKE (tins / diffserv classes, and
  flow hashing) does not map in an obvious way to the classification
  index of tc filters.

- No one has asked for it. We have done our best to accommodate the
  features people want in a home router qdisc directly in CAKE, and the
  ability to integrate tc filters has never been requested.

>> +static u16 quantum_div[CAKE_QUEUES + 1] = {0};
>> +
>> +#define REC_INV_SQRT_CACHE (16)
>> +static u32 cobalt_rec_inv_sqrt_cache[REC_INV_SQRT_CACHE] = {0};
>> +
>> +/* http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
>> + * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
>> + *
>> + * Here, invsqrt is a fixed point number (< 1.0), 32bit mantissa, aka Q0.32
>> + */
>> +
>> +static void cobalt_newton_step(struct cobalt_vars *vars)
>> +{
>> +   u32 invsqrt = vars->rec_inv_sqrt;
>> +   u32 invsqrt2 = ((u64)invsqrt * invsqrt) >> 32;
>> +   u64 val = (3LL << 32) - ((u64)vars->count * invsqrt2);
>> +
>> +   val >>= 2; /* avoid overflow in following multiply */
>> +   val = (val * invsqrt) >> (32 - 2 + 1);
>> +
>> +   vars->rec_inv_sqrt = val;
>> +}
>> +
>> +static void cobalt_invsqrt(struct cobalt_vars *vars)
>> +{
>> +   if (vars->count < REC_INV_SQRT_CACHE)
>> +   vars->rec_inv_sqrt = cobalt_rec_inv_sqrt_cache[vars->count];
>> +   else
>> +   cobalt_newton_step(vars);
>> +}
>
> Looks pretty much duplicated with codel...

Cobalt is derived from CoDel, and so naturally shares some features with
it. However, it is quite different in other respects, so we can't just
use the existing CoDel code for the parts that are similar. We don't
feel quite confident enough in Cobalt (yet) to propose that it replace
CoDel everywhere else in the kernel, so we have elected to keep it
internal to CAKE instead.
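
For anyone who wants to play with the numbers, here is a minimal
userspace sketch of the same Q0.32 Newton step (the test harness and
names are mine, not from the patch). It primes the estimate the same
way cobalt_cache_init() does, running four steps per count value:

        #include <stdint.h>
        #include <stdio.h>

        /* One Newton step for 1/sqrt(count) in Q0.32 fixed point:
         * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
         */
        static uint32_t newton_step(uint32_t invsqrt, uint32_t count)
        {
                uint32_t invsqrt2 = ((uint64_t)invsqrt * invsqrt) >> 32;
                uint64_t val = (3ULL << 32) - ((uint64_t)count * invsqrt2);

                val >>= 2; /* avoid overflow in the following multiply */
                return (val * invsqrt) >> (32 - 2 + 1);
        }

        int main(void)
        {
                uint32_t inv = ~0U; /* 1/sqrt(1) ~= 1.0, saturated in Q0.32 */
                uint32_t count;

                for (count = 1; count < 16; count++) {
                        int i;

                        for (i = 0; i < 4; i++)
                                inv = newton_step(inv, count);
                        printf("count=%2u  1/sqrt ~= %.6f\n",
                               count, inv / 4294967296.0);
                }
                return 0;
        }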

>> [...]
>>
>> +static int cake_init(struct Qdisc *sch, struct nlattr *opt,
>> +struct netlink_ext_ack *extack)
>> +{
>> +   struct cake_sched_data *q = qdisc_priv(sch);
>> +   int i, j;
>> +
>> +   sch->limit = 10240;
>> +   q->tin_mode = CAKE_DIFFSERV_BESTEFFORT;
>> +   q->flow_mode  = CAKE_FLOW_TRIPLE;
>> +
>> +   q->rate_bps = 0; /* unlimited by default */
>> +
>> +   q->interval = 100000; /* 100ms default */
>> +   q->target   =   5000; /* 5ms: codel RFC argues
>> +  * for 5 to 10% of interval
>> +  */
>> +
>> +   q->cur_tin = 0;
>> +   q->cur_flow  = 0;
>> +
>> +   if (opt) {
>> +   int err = cake_change(sch, opt, extack);
>> +
>> +   if (err)
>> +   return err;
>
>
> Not sure if you really want to reallocate q->tins below for this
> case.

I'm not sure what you mean here? If there's an error, we return it and
the qdisc is not created. If there isn't, we allocate, and on subsequent
changes cake_change() will be called directly, right? Can the init
function ever be called again during the lifetime of the qdisc?

-Toke


Re: [Cake] [PATCH net-next v8 1/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-04 Thread Cong Wang
On Fri, May 4, 2018 at 7:02 AM, Toke Høiland-Jørgensen  wrote:
> +struct cake_sched_data {
> +   struct cake_tin_data *tins;
> +
> +   struct cake_heap_entry overflow_heap[CAKE_QUEUES * CAKE_MAX_TINS];
> +   u16 overflow_timeout;
> +
> +   u16 tin_cnt;
> +   u8  tin_mode;
> +   u8  flow_mode;
> +   u8  ack_filter;
> +   u8  atm_mode;
> +
> +   /* time_next = time_this + ((len * rate_ns) >> rate_shft) */
> +   u16 rate_shft;
> +   u64 time_next_packet;
> +   u64 failsafe_next_packet;
> +   u32 rate_ns;
> +   u32 rate_bps;
> +   u16 rate_flags;
> +   s16 rate_overhead;
> +   u16 rate_mpu;
> +   u32 interval;
> +   u32 target;
> +
> +   /* resource tracking */
> +   u32 buffer_used;
> +   u32 buffer_max_used;
> +   u32 buffer_limit;
> +   u32 buffer_config_limit;
> +
> +   /* indices for dequeue */
> +   u16 cur_tin;
> +   u16 cur_flow;
> +
> +   struct qdisc_watchdog watchdog;
> +   const u8*tin_index;
> +   const u8*tin_order;
> +
> +   /* bandwidth capacity estimate */
> +   u64 last_packet_time;
> +   u64 avg_packet_interval;
> +   u64 avg_window_begin;
> +   u32 avg_window_bytes;
> +   u32 avg_peak_bandwidth;
> +   u64 last_reconfig_time;
> +
> +   /* packet length stats */
> +   u32 avg_netoff;
> +   u16 max_netlen;
> +   u16 max_adjlen;
> +   u16 min_netlen;
> +   u16 min_adjlen;
> +};


So sch_cake doesn't accept normal tc filters? Is this intentional?
If so, why?


> +static u16 quantum_div[CAKE_QUEUES + 1] = {0};
> +
> +#define REC_INV_SQRT_CACHE (16)
> +static u32 cobalt_rec_inv_sqrt_cache[REC_INV_SQRT_CACHE] = {0};
> +
> +/* http://en.wikipedia.org/wiki/Methods_of_computing_square_roots
> + * new_invsqrt = (invsqrt / 2) * (3 - count * invsqrt^2)
> + *
> + * Here, invsqrt is a fixed point number (< 1.0), 32bit mantissa, aka Q0.32
> + */
> +
> +static void cobalt_newton_step(struct cobalt_vars *vars)
> +{
> +   u32 invsqrt = vars->rec_inv_sqrt;
> +   u32 invsqrt2 = ((u64)invsqrt * invsqrt) >> 32;
> +   u64 val = (3LL << 32) - ((u64)vars->count * invsqrt2);
> +
> +   val >>= 2; /* avoid overflow in following multiply */
> +   val = (val * invsqrt) >> (32 - 2 + 1);
> +
> +   vars->rec_inv_sqrt = val;
> +}
> +
> +static void cobalt_invsqrt(struct cobalt_vars *vars)
> +{
> +   if (vars->count < REC_INV_SQRT_CACHE)
> +   vars->rec_inv_sqrt = cobalt_rec_inv_sqrt_cache[vars->count];
> +   else
> +   cobalt_newton_step(vars);
> +}


Looks pretty much duplicated with codel...


> +
> +/* There is a big difference in timing between the accurate values placed in
> + * the cache and the approximations given by a single Newton step for small
> + * count values, particularly when stepping from count 1 to 2 or vice versa.
> + * Above 16, a single Newton step gives sufficient accuracy in either
> + * direction, given the precision stored.
> + *
> + * The magnitude of the error when stepping up to count 2 is such as to give
> + * the value that *should* have been produced at count 4.
> + */
> +
> +static void cobalt_cache_init(void)
> +{
> +   struct cobalt_vars v;
> +
> +   memset(&v, 0, sizeof(v));
> +   v.rec_inv_sqrt = ~0U;
> +   cobalt_rec_inv_sqrt_cache[0] = v.rec_inv_sqrt;
> +
> +   for (v.count = 1; v.count < REC_INV_SQRT_CACHE; v.count++) {
> +   cobalt_newton_step(&v);
> +   cobalt_newton_step(&v);
> +   cobalt_newton_step(&v);
> +   cobalt_newton_step(&v);
> +
> +   cobalt_rec_inv_sqrt_cache[v.count] = v.rec_inv_sqrt;
> +   }
> +}
> +
> +static void cobalt_vars_init(struct cobalt_vars *vars)
> +{
> +   memset(vars, 0, sizeof(*vars));
> +
> +   if (!cobalt_rec_inv_sqrt_cache[0]) {
> +   cobalt_cache_init();
> +   cobalt_rec_inv_sqrt_cache[0] = ~0;
> +   }
> +}
> +
> +/* CoDel control_law is t + interval/sqrt(count)
> + * We maintain in rec_inv_sqrt the reciprocal value of sqrt(count) to avoid
> + * both sqrt() and divide operation.
> + */
> +static cobalt_time_t cobalt_control(cobalt_time_t t,
> +   cobalt_time_t interval,
> +   u32 rec_inv_sqrt)
> +{
> +   return t + reciprocal_scale(interval, rec_inv_sqrt);
> +}
> +
> +/* Call this when a packet had to be dropped due to queue overflow.  Returns
> + * true if the BLUE state was quiescent before but active after this call.
> + */
> +static bool cobalt_queue_full(struct cobalt_vars *vars,
> +

[Cake] [PATCH net-next v8 3/7] sch_cake: Add optional ACK filter

2018-05-04 Thread Toke Høiland-Jørgensen
The ACK filter is an optional feature of CAKE which is designed to improve
performance on links with very asymmetrical rate limits. On such links
(which are unfortunately quite prevalent, especially for DSL and cable
subscribers), the downstream throughput can be limited by the number of
ACKs capable of being transmitted in the *upstream* direction.

Filtering ACKs can, in general, have adverse effects on TCP performance
because it interferes with ACK clocking (especially in slow start), and it
reduces the flow's resiliency to ACKs being dropped further along the path.
To alleviate these drawbacks, the ACK filter in CAKE tries its best to
always keep enough ACKs queued to ensure forward progress in the TCP flow
being filtered. It does this by only filtering redundant ACKs. In its
default 'conservative' mode, the filter will always keep at least two
redundant ACKs in the queue, while in 'aggressive' mode, it will filter
down to a single ACK.

The ACK filter works by inspecting the per-flow queue on every packet
enqueue. Starting at the head of the queue, the filter looks for another
eligible packet to drop (so the ACK being dropped is always closer to the
head of the queue than the packet being enqueued). An ACK is eligible only
if it ACKs *fewer* cumulative bytes than the new packet being enqueued.
This prevents duplicate ACKs from being filtered (unless SACK options are
also present), to avoid interfering with retransmission logic. In
aggressive mode, an eligible packet is always dropped, while in
conservative mode, at least two ACKs are kept in the queue. Only pure ACKs
(with no data segments) are considered eligible for dropping, but when an
ACK with data segments is enqueued, this can cause another pure ACK to
become eligible for dropping.
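
To make the eligibility check concrete, here is a much-simplified
userspace sketch (the names are mine; the real filter in sch_cake also
parses TCP options, handles 6in4 encapsulation, and walks the actual
per-flow queue):

        #include <stdbool.h>
        #include <stdint.h>

        /* Wraparound-safe 'a < b' in 32-bit TCP sequence space,
         * equivalent to the kernel's before().
         */
        static bool seq_before(uint32_t a, uint32_t b)
        {
                return (int32_t)(a - b) < 0;
        }

        /* A queued ACK may be dropped only if it is a pure ACK (no data
         * segment, no SACK blocks) and acknowledges strictly fewer
         * cumulative bytes than the ACK being enqueued.
         */
        static bool ack_eligible(uint32_t queued_ack, uint32_t new_ack,
                                 bool queued_has_data,
                                 bool queued_has_sack)
        {
                if (queued_has_data || queued_has_sack)
                        return false;
                return seq_before(queued_ack, new_ack);
        }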

The approach described above ensures that this ACK filter avoids most of
the drawbacks of a naive filtering mechanism that only keeps flow state but
does not inspect the queue. This is the rationale for including the ACK
filter in CAKE itself rather than as separate module (as the TC filter, for
instance).

Our performance evaluation has shown that on a 30/1 Mbps link with a
bidirectional traffic test (RRUL), turning on the ACK filter on the
upstream link improves downstream throughput by ~20% (both modes) and
upstream throughput by ~12% in conservative mode and ~40% in aggressive
mode, at the cost of ~5ms of inter-flow latency due to the increased
congestion.

In *really* pathological cases, the effect can be a lot more; for instance,
the ACK filter increases the achievable downstream throughput on a link
with 100 Kbps in the upstream direction by an order of magnitude (from ~2.5
Mbps to ~25 Mbps).

Finally, even though we consider the ACK filter to be safer than most, we
do not recommend turning it on everywhere: on more symmetrical link
bandwidths the effect is negligible at best.

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |  261 +-
 1 file changed, 255 insertions(+), 6 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 7ca86e3ed14c..9a70e99afe7e 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -127,7 +127,6 @@ struct cake_flow {
/* this stuff is all needed per-flow at dequeue time */
struct sk_buff*head;
struct sk_buff*tail;
-   struct sk_buff*ackcheck;
struct list_head  flowchain;
s32   deficit;
struct cobalt_vars cvars;
@@ -712,9 +711,6 @@ static struct sk_buff *dequeue_head(struct cake_flow *flow)
if (skb) {
flow->head = skb->next;
skb->next = NULL;
-
-   if (skb == flow->ackcheck)
-   flow->ackcheck = NULL;
}
 
return skb;
@@ -732,6 +728,239 @@ static void flow_queue_add(struct cake_flow *flow, struct 
sk_buff *skb)
skb->next = NULL;
 }
 
+static struct iphdr *cake_get_iphdr(const struct sk_buff *skb,
+   struct ipv6hdr *buf)
+{
+   unsigned int offset = skb_network_offset(skb);
+   struct iphdr *iph;
+
+   iph = skb_header_pointer(skb, offset, sizeof(struct iphdr), buf);
+
+   if (!iph)
+   return NULL;
+
+   if (iph->version == 4 && iph->protocol == IPPROTO_IPV6)
+   return skb_header_pointer(skb, offset + iph->ihl * 4,
+ sizeof(struct ipv6hdr), buf);
+
+   else if (iph->version == 4)
+   return iph;
+
+   else if (iph->version == 6)
+   return skb_header_pointer(skb, offset, sizeof(struct ipv6hdr),
+ buf);
+
+   return NULL;
+}
+
+static struct tcphdr *cake_get_tcphdr(const struct sk_buff *skb,
+ void *buf, unsigned int bufsize)
+{
+   unsigned int offset = skb_network_offset(skb);
+   const struct ipv6hdr *ipv6h;
+   const struct tcphd

[Cake] [PATCH net-next v8 4/7] sch_cake: Add NAT awareness to packet classifier

2018-05-04 Thread Toke Høiland-Jørgensen
When CAKE is deployed on a gateway that also performs NAT (which is a
common deployment mode), the host fairness mechanism cannot distinguish
internal hosts from each other, and so fails to work correctly.

To fix this, we add an optional NAT awareness mode, which will query the
kernel conntrack mechanism to obtain the pre-NAT addresses for each packet
and use that in the flow and host hashing.

When the shaper is enabled and the host is already performing NAT, the cost
of this lookup is negligible. However, in unlimited mode with no NAT being
performed, there is a significant CPU cost at higher bandwidths. For this
reason, the feature is turned off by default.
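
With the matching iproute2 support this is a single keyword (device and
rate below are just placeholders):

  tc qdisc replace dev eth0 root cake bandwidth 20Mbit nat
  tc qdisc change dev eth0 root cake nonat    # and off again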

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |   70 ++
 1 file changed, 70 insertions(+)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 9a70e99afe7e..cc45a56d35d6 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -70,6 +70,12 @@
 #include 
 #include 
 
+#if IS_REACHABLE(CONFIG_NF_CONNTRACK)
+#include 
+#include 
+#include 
+#endif
+
 #define CAKE_SET_WAYS (8)
 #define CAKE_MAX_TINS (8)
 #define CAKE_QUEUES (1024)
@@ -519,6 +525,61 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,
return drop;
 }
 
+#if IS_REACHABLE(CONFIG_NF_CONNTRACK)
+
+static void cake_update_flowkeys(struct flow_keys *keys,
+const struct sk_buff *skb)
+{
+   enum ip_conntrack_info ctinfo;
+   bool rev = false;
+
+   struct nf_conn *ct;
+   const struct nf_conntrack_tuple *tuple;
+
+   if (tc_skb_protocol(skb) != htons(ETH_P_IP))
+   return;
+
+   ct = nf_ct_get(skb, &ctinfo);
+   if (ct) {
+   tuple = nf_ct_tuple(ct, CTINFO2DIR(ctinfo));
+   } else {
+   const struct nf_conntrack_tuple_hash *hash;
+   struct nf_conntrack_tuple srctuple;
+
+   if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb),
+  NFPROTO_IPV4, dev_net(skb->dev),
+  &srctuple))
+   return;
+
+   hash = nf_conntrack_find_get(dev_net(skb->dev),
+&nf_ct_zone_dflt,
+&srctuple);
+   if (!hash)
+   return;
+
+   rev = true;
+   ct = nf_ct_tuplehash_to_ctrack(hash);
+   tuple = nf_ct_tuple(ct, !hash->tuple.dst.dir);
+   }
+
+   keys->addrs.v4addrs.src = rev ? tuple->dst.u3.ip : tuple->src.u3.ip;
+   keys->addrs.v4addrs.dst = rev ? tuple->src.u3.ip : tuple->dst.u3.ip;
+
+   if (keys->ports.ports) {
+   keys->ports.src = rev ? tuple->dst.u.all : tuple->src.u.all;
+   keys->ports.dst = rev ? tuple->src.u.all : tuple->dst.u.all;
+   }
+   if (rev)
+   nf_ct_put(ct);
+}
+#else
+static void cake_update_flowkeys(struct flow_keys *keys,
+const struct sk_buff *skb)
+{
+   /* There is nothing we can do here without CONNTRACK */
+}
+#endif
+
 /* Cake has several subtle multiple bit settings. In these cases you
  *  would be matching triple isolate mode as well.
  */
@@ -546,6 +607,9 @@ static u32 cake_hash(struct cake_tin_data *q, const struct 
sk_buff *skb,
skb_flow_dissect_flow_keys(skb, &keys,
   FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL);
 
+   if (flow_mode & CAKE_FLOW_NAT_FLAG)
+   cake_update_flowkeys(&keys, skb);
+
/* flow_hash_from_keys() sorts the addresses by value, so we have
 * to preserve their order in a separate data structure to treat
 * src and dst host addresses as independently selectable.
@@ -1673,6 +1737,12 @@ static int cake_change(struct Qdisc *sch, struct nlattr 
*opt,
q->flow_mode = (nla_get_u32(tb[TCA_CAKE_FLOW_MODE]) &
CAKE_FLOW_MASK);
 
+   if (tb[TCA_CAKE_NAT]) {
+   q->flow_mode &= ~CAKE_FLOW_NAT_FLAG;
+   q->flow_mode |= CAKE_FLOW_NAT_FLAG *
+   !!nla_get_u32(tb[TCA_CAKE_NAT]);
+   }
+
if (tb[TCA_CAKE_RTT]) {
q->interval = nla_get_u32(tb[TCA_CAKE_RTT]);
 



[Cake] [PATCH net-next v8 5/7] sch_cake: Add DiffServ handling

2018-05-04 Thread Toke Høiland-Jørgensen
This adds support for DiffServ-based priority queueing to CAKE. If the
shaper is in use, each priority tier gets its own virtual clock, which
limits that tier's rate to a fraction of the overall shaped rate, to
discourage trying to game the priority mechanism.
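
As a quick sanity check of the virtual-clock arithmetic (using the
time_next = time_this + ((len * rate_ns) >> rate_shft) relation from the
scheduler state in patch 1): at 100 Mbit/s each byte takes 80 ns to
serialise, so a 1514-byte packet advances a tier's clock by

  1514 bytes * 80 ns/byte ~= 121 us

and a bulk tier capped at 1/16 of the shaped rate advances its own clock
sixteen times as fast per byte, which is what enforces the fraction.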

CAKE defaults to a simple, three-tier mode that interprets most code points
as "best effort", but places CS1 traffic into a low-priority "bulk" tier
which is assigned 1/16 of the total rate, and a few code points indicating
latency-sensitive or control traffic (specifically TOS4, VA, EF, CS6, CS7)
into a "latency sensitive" high-priority tier, which is assigned 1/4 rate.
The other supported DiffServ modes are a 4-tier mode matching the 802.11e
precedence rules, as well as two 8-tier modes, one of which implements
strict precedence of the eight priority levels.

This commit also adds an optional DiffServ 'wash' mode, which will zero out
the DSCP fields of any packet passing through CAKE. While this can
technically be done with other mechanisms in the kernel, having the feature
available in CAKE significantly decreases configuration complexity; and the
implementation cost is low on top of the other DiffServ-handling code.
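
For example, with the matching iproute2 keywords, selecting a priority
table and washing on egress looks like:

  tc qdisc replace dev eth0 root cake bandwidth 100Mbit diffserv3 wash

where diffserv4, diffserv8 and precedence select the other tables
described above, and besteffort disables the tiering entirely.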

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |  394 +-
 1 file changed, 387 insertions(+), 7 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index cc45a56d35d6..1e5951d26ed2 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -305,6 +305,68 @@ static void cobalt_set_enqueue_time(struct sk_buff *skb,
 
 static u16 quantum_div[CAKE_QUEUES + 1] = {0};
 
+/* Diffserv lookup tables */
+
+static const u8 precedence[] = {
+   0, 0, 0, 0, 0, 0, 0, 0,
+   1, 1, 1, 1, 1, 1, 1, 1,
+   2, 2, 2, 2, 2, 2, 2, 2,
+   3, 3, 3, 3, 3, 3, 3, 3,
+   4, 4, 4, 4, 4, 4, 4, 4,
+   5, 5, 5, 5, 5, 5, 5, 5,
+   6, 6, 6, 6, 6, 6, 6, 6,
+   7, 7, 7, 7, 7, 7, 7, 7,
+};
+
+static const u8 diffserv8[] = {
+   2, 5, 1, 2, 4, 2, 2, 2,
+   0, 2, 1, 2, 1, 2, 1, 2,
+   5, 2, 4, 2, 4, 2, 4, 2,
+   3, 2, 3, 2, 3, 2, 3, 2,
+   6, 2, 3, 2, 3, 2, 3, 2,
+   6, 2, 2, 2, 6, 2, 6, 2,
+   7, 2, 2, 2, 2, 2, 2, 2,
+   7, 2, 2, 2, 2, 2, 2, 2,
+};
+
+static const u8 diffserv4[] = {
+   0, 2, 0, 0, 2, 0, 0, 0,
+   1, 0, 0, 0, 0, 0, 0, 0,
+   2, 0, 2, 0, 2, 0, 2, 0,
+   2, 0, 2, 0, 2, 0, 2, 0,
+   3, 0, 2, 0, 2, 0, 2, 0,
+   3, 0, 0, 0, 3, 0, 3, 0,
+   3, 0, 0, 0, 0, 0, 0, 0,
+   3, 0, 0, 0, 0, 0, 0, 0,
+};
+
+static const u8 diffserv3[] = {
+   0, 0, 0, 0, 2, 0, 0, 0,
+   1, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 2, 0, 2, 0,
+   2, 0, 0, 0, 0, 0, 0, 0,
+   2, 0, 0, 0, 0, 0, 0, 0,
+};
+
+static const u8 besteffort[] = {
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+   0, 0, 0, 0, 0, 0, 0, 0,
+};
+
+/* tin priority order for stats dumping */
+
+static const u8 normal_order[] = {0, 1, 2, 3, 4, 5, 6, 7};
+static const u8 bulk_order[] = {1, 0, 2, 3};
+
 #define REC_INV_SQRT_CACHE (16)
 static u32 cobalt_rec_inv_sqrt_cache[REC_INV_SQRT_CACHE] = {0};
 
@@ -1189,6 +1251,46 @@ static unsigned int cake_drop(struct Qdisc *sch, struct 
sk_buff **to_free)
return idx + (tin << 16);
 }
 
+static void cake_wash_diffserv(struct sk_buff *skb)
+{
+   switch (skb->protocol) {
+   case htons(ETH_P_IP):
+   ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+   break;
+   case htons(ETH_P_IPV6):
+   ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+   break;
+   default:
+   break;
+   };
+}
+
+static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash)
+{
+   u8 dscp;
+
+   switch (skb->protocol) {
+   case htons(ETH_P_IP):
+   dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2;
+   if (wash && dscp)
+   ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+   return dscp;
+
+   case htons(ETH_P_IPV6):
+   dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> 2;
+   if (wash && dscp)
+   ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+   return dscp;
+
+   case htons(ETH_P_ARP):
+   return 0x38;  /* CS7 - Net Control */
+
+   default:
+   /* If there is no Diffserv field, treat as best-effort */
+   return 0;
+   };
+}
+
 static void cake_reconfigure(struct Qdisc *sch);
 
 static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
@@ -1203,7 +1305,19 @@ static s32 cake_enqueue(struct sk_buff *skb, struct 
Qdisc *sch,
u64 now = cobalt_get_time();

[Cake] [PATCH net-next v8 0/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-04 Thread Toke Høiland-Jørgensen
This patch series adds the CAKE qdisc, and has been split up to ease
review.

I have attempted to split out each configurable feature into its own patch.
The first commit adds the base shaper and packet scheduler, while
subsequent commits add the optional features. The full userspace API and
most data structures are included in this commit, but options not
understood in the base version will be ignored.

The result of applying the entire series is identical to the out of tree
version that has seen extensive testing in previous deployments, most
notably as an out of tree patch to OpenWrt. However, note that I have only
compile-tested the individual patches, so the whole series should be
considered as a unit.

---
Changelog

v8:
  - Remove inline keyword from function definitions
  - Simplify ACK filter; remove the complex state handling to make the
logic easier to follow. This will potentially be a bit less efficient,
but I have not been able to measure a difference.

v7:
  - Split up patch into a series to ease review.
  - Constify the ACK filter.

v6:
  - Fix 6in4 encapsulation checks in ACK filter code
  - Checkpatch fixes

v5:
  - Refactor ACK filter code and hopefully fix the safety issues
properly this time.

v4:
  - Only split GSO packets if shaping at speeds <= 1Gbps
  - Fix overhead calculation code to also work for GSO packets
  - Don't re-implement kvzalloc()
  - Remove local header include from out-of-tree build (fixes kbuild-bot
complaint).
  - Several fixes to the ACK filter:
- Check pskb_may_pull() before deref of transport headers.
- Don't run ACK filter logic on split GSO packets
- Fix TCP sequence number compare to deal with wraparounds

v3:
  - Use IS_REACHABLE() macro to fix compilation when sch_cake is
built-in and conntrack is a module.
  - Switch the stats output to use nested netlink attributes instead
of a versioned struct.
  - Remove GPL boilerplate.
  - Fix array initialisation style.

v2:
  - Fix kbuild test bot complaint
  - Clean up the netlink ABI
  - Fix checkpatch complaints
  - A few tweaks to the behaviour of cake based on testing carried out
while writing the paper.

---

Toke Høiland-Jørgensen (7):
  sched: Add Common Applications Kept Enhanced (cake) qdisc
  sch_cake: Add ingress mode
  sch_cake: Add optional ACK filter
  sch_cake: Add NAT awareness to packet classifier
  sch_cake: Add DiffServ handling
  sch_cake: Add overhead compensation support to the rate shaper
  sch_cake: Conditionally split GSO segments


 include/uapi/linux/pkt_sched.h |  105 ++
 net/sched/Kconfig  |   11 
 net/sched/Makefile |1 
 net/sched/sch_cake.c   | 2595 
 4 files changed, 2712 insertions(+)
 create mode 100644 net/sched/sch_cake.c



[Cake] [PATCH net-next v8 1/7] sched: Add Common Applications Kept Enhanced (cake) qdisc

2018-05-04 Thread Toke Høiland-Jørgensen
sch_cake targets the home router use case and is intended to squeeze the
most bandwidth and latency out of even the slowest ISP links and routers,
while presenting an API simple enough that even an ISP can configure it.

Example of use on a cable ISP uplink:

tc qdisc add dev eth0 cake bandwidth 20Mbit nat docsis ack-filter

To shape a cable download link (ifb and tc-mirred setup elided)

tc qdisc add dev ifb0 cake bandwidth 200mbit nat docsis ingress wash

CAKE is filled with:

* A hybrid Codel/Blue AQM algorithm, "Cobalt", tied to an FQ_Codel
  derived Flow Queuing system, which autoconfigures based on the bandwidth.
* A novel "triple-isolate" mode (the default) which balances per-host
  and per-flow FQ even through NAT.
* A deficit-based shaper that can also be used in an unlimited mode.
* 8 way set associative hashing to reduce flow collisions to a minimum
  (a small sketch follows this list).
* A reasonable interpretation of various diffserv latency/loss tradeoffs.
* Support for zeroing diffserv markings for entering and exiting traffic.
* Support for interacting well with Docsis 3.0 shaper framing.
* Extensive support for DSL framing types.
* Support for ack filtering.
* Extensive statistics for measuring loss, ECN markings, and latency
  variation.
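
As promised above, here is a minimal sketch of the 8-way set-associative
idea (illustrative only -- the real cake_hash() also tracks per-host
references and decaying flows):

        #include <stdbool.h>
        #include <stdint.h>

        #define WAYS 8 /* CAKE_SET_WAYS */

        struct qset {
                uint32_t tag[WAYS]; /* full flow hash stored per way */
                bool busy[WAYS];
        };

        /* A flow hashing to this set may occupy any of its 8 ways, so
         * two flows colliding on the set index still get distinct
         * queues unless all eight ways are already taken.
         */
        static int set_lookup(struct qset *set, uint32_t flow_hash)
        {
                int i;

                for (i = 0; i < WAYS; i++) /* hit: already placed */
                        if (set->busy[i] && set->tag[i] == flow_hash)
                                return i;

                for (i = 0; i < WAYS; i++) /* miss: claim a free way */
                        if (!set->busy[i]) {
                                set->busy[i] = true;
                                set->tag[i] = flow_hash;
                                return i;
                        }

                return 0; /* set full: accept a collision on way 0 */
        }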

A paper describing the design of CAKE is available at
https://arxiv.org/abs/1804.07617

This patch adds the base shaper and packet scheduler, while subsequent
commits add the optional (configurable) features. The full userspace API
and most data structures are included in this commit, but options not
understood in the base version will be ignored.

Various versions of CAKE have been baking as an out of tree build for
kernel versions going back to 3.10, as the embedded router world has been
running a few years behind mainline Linux. A stable version has been
generally available on lede-17.01 and later.

sch_cake replaces a combination of iptables, tc filter, htb and fq_codel
in the sqm-scripts, with sane defaults and vastly simpler configuration.

CAKE's principal author is Jonathan Morton, with contributions from
Kevin Darbyshire-Bryant, Toke Høiland-Jørgensen, Sebastian Moeller,
Ryan Mounce, Guido Sarducci, Dean Scarff, Nils Andreas Svee, Dave Täht,
and Loganaden Velvindron.

Testing from Pete Heist, Georgios Amanakis, and the many other members of
the cake@lists.bufferbloat.net mailing list.

tc -s qdisc show dev eth2
qdisc cake 1: root refcnt 2 bandwidth 100Mbit diffserv3 triple-isolate rtt 
100.0ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 500b
 capacity estimate: 100Mbit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0

                   Bulk  Best Effort        Voice
  thresh       6250Kbit      100Mbit       25Mbit
  target          5.0ms        5.0ms        5.0ms
  interval      100.0ms      100.0ms      100.0ms
  pk_delay          0us          0us          0us
  av_delay          0us          0us          0us
  sp_delay          0us          0us          0us
  pkts                0            0            0
  bytes               0            0            0
  way_inds            0            0            0
  way_miss            0            0            0
  way_cols            0            0            0
  drops               0            0            0
  marks               0            0            0
  ack_drop            0            0            0
  sp_flows            0            0            0
  bk_flows            0            0            0
  un_flows            0            0            0
  max_len             0            0            0
  quantum           300         1514          762

Tested-by: Pete Heist 
Tested-by: Georgios Amanakis 
Signed-off-by: Dave Taht 
Signed-off-by: Toke Høiland-Jørgensen 
---
 include/uapi/linux/pkt_sched.h |  105 ++
 net/sched/Kconfig  |   11 
 net/sched/Makefile |1 
 net/sched/sch_cake.c   | 1683 
 4 files changed, 1800 insertions(+)
 create mode 100644 net/sched/sch_cake.c

diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 37b5096ae97b..bc581473c0b0 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -934,4 +934,109 @@ enum {
 
 #define TCA_CBS_MAX (__TCA_CBS_MAX - 1)
 
+/* CAKE */
+enum {
+   TCA_CAKE_UNSPEC,
+   TCA_CAKE_BASE_RATE,
+   TCA_CAKE_DIFFSERV_MODE,
+   TCA_CAKE_ATM,
+   TCA_CAKE_FLOW_MODE,
+   TCA_CAKE_OVERHEAD,
+   TCA_CAKE_RTT,
+   TCA_CAKE_TARGET,
+   TCA_CAKE_AUTORATE,
+   TCA_CAKE_MEMORY,
+   TCA_CAKE_NAT,
+   TCA_CAKE_RAW,
+   TCA_CAKE_WASH,
+   TCA_CAKE_MPU,
+   TCA_CAKE_INGRESS,
+   TCA_CAKE_ACK_FILTER,
+   TCA_CAKE_SPLIT_GSO,
+   __TCA_CAKE_MAX
+};
+#define TCA_CAKE_MAX   (__TCA_CAKE_MAX - 1)
+
+enum {
+   __TCA_CAKE_STATS_INVAL

[Cake] [PATCH net-next v8 7/7] sch_cake: Conditionally split GSO segments

2018-05-04 Thread Toke Høiland-Jørgensen
At lower bandwidths, the transmission time of a single GSO segment can add
an unacceptable amount of latency due to HOL blocking. Furthermore, with a
software shaper, any tuning mechanism employed by the kernel to control the
maximum size of GSO segments is thrown off by the artificial limit on
bandwidth. For this reason, we split GSO segments into their individual
packets iff the shaper is active and configured to a bandwidth <= 1 Gbps.
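
As a rough worked example: a full 64 KB GSO super-packet is ~524,000
bits, which occupies a 10 Mbit/s link for ~52 ms -- an order of
magnitude above CAKE's 5 ms default target -- while at 1 Gbit/s the same
burst lasts only ~0.5 ms, so splitting there would mostly just add CPU
cost.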

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |   95 --
 1 file changed, 69 insertions(+), 26 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index cb978a0f8969..6b67a5c1418e 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -81,6 +81,7 @@
 #define CAKE_QUEUES (1024)
 #define CAKE_FLOW_MASK 63
 #define CAKE_FLOW_NAT_FLAG 64
+#define CAKE_SPLIT_GSO_THRESHOLD (125000000) /* 1Gbps */
 #define US2TIME(a) (a * (u64)NSEC_PER_USEC)
 
 typedef u64 cobalt_time_t;
@@ -1428,36 +1429,73 @@ static s32 cake_enqueue(struct sk_buff *skb, struct 
Qdisc *sch,
if (unlikely(len > b->max_skblen))
b->max_skblen = len;
 
-   cobalt_set_enqueue_time(skb, now);
-   get_cobalt_cb(skb)->adjusted_len = cake_overhead(q, skb);
-   flow_queue_add(flow, skb);
-
-   if (q->ack_filter)
-   ack = cake_ack_filter(q, flow);
+   if (skb_is_gso(skb) && q->rate_flags & CAKE_FLAG_SPLIT_GSO) {
+   struct sk_buff *segs, *nskb;
+   netdev_features_t features = netif_skb_features(skb);
+   unsigned int slen = 0;
+
+   segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+   if (IS_ERR_OR_NULL(segs))
+   return qdisc_drop(skb, sch, to_free);
+
+   while (segs) {
+   nskb = segs->next;
+   segs->next = NULL;
+   qdisc_skb_cb(segs)->pkt_len = segs->len;
+   cobalt_set_enqueue_time(segs, now);
+   get_cobalt_cb(segs)->adjusted_len = cake_overhead(q,
+ segs);
+   flow_queue_add(flow, segs);
+
+   sch->q.qlen++;
+   slen += segs->len;
+   q->buffer_used += segs->truesize;
+   b->packets++;
+   segs = nskb;
+   }
 
-   if (ack) {
-   b->ack_drops++;
-   sch->qstats.drops++;
-   b->bytes += qdisc_pkt_len(ack);
-   len -= qdisc_pkt_len(ack);
-   q->buffer_used += skb->truesize - ack->truesize;
-   if (q->rate_flags & CAKE_FLAG_INGRESS)
-   cake_advance_shaper(q, b, ack, now, true);
+   /* stats */
+   b->bytes+= slen;
+   b->backlogs[idx]+= slen;
+   b->tin_backlog  += slen;
+   sch->qstats.backlog += slen;
+   q->avg_window_bytes += slen;
 
-   qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(ack));
-   consume_skb(ack);
+   qdisc_tree_reduce_backlog(sch, 1, len);
+   consume_skb(skb);
} else {
-   sch->q.qlen++;
-   q->buffer_used  += skb->truesize;
-   }
+   /* not splitting */
+   cobalt_set_enqueue_time(skb, now);
+   get_cobalt_cb(skb)->adjusted_len = cake_overhead(q, skb);
+   flow_queue_add(flow, skb);
+
+   if (q->ack_filter)
+   ack = cake_ack_filter(q, flow);
+
+   if (ack) {
+   b->ack_drops++;
+   sch->qstats.drops++;
+   b->bytes += qdisc_pkt_len(ack);
+   len -= qdisc_pkt_len(ack);
+   q->buffer_used += skb->truesize - ack->truesize;
+   if (q->rate_flags & CAKE_FLAG_INGRESS)
+   cake_advance_shaper(q, b, ack, now, true);
+
+   qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(ack));
+   consume_skb(ack);
+   } else {
+   sch->q.qlen++;
+   q->buffer_used  += skb->truesize;
+   }
 
-   /* stats */
-   b->packets++;
-   b->bytes+= len;
-   b->backlogs[idx]+= len;
-   b->tin_backlog  += len;
-   sch->qstats.backlog += len;
-   q->avg_window_bytes += len;
+   /* stats */
+   b->packets++;
+   b->bytes+= len;
+   b->backlogs[idx]+= len;
+   b->tin_backlog  += len;
+   sch->qstats.backlog += len;
+   q->avg_window_bytes += len;
+   }
 
if (q->overflow_timeout)
cake_heapify_up(q,

[Cake] [PATCH net-next v8 2/7] sch_cake: Add ingress mode

2018-05-04 Thread Toke Høiland-Jørgensen
The ingress mode is meant to be enabled when CAKE runs downlink of the
actual bottleneck (such as on an IFB device). The mode changes the shaper
to also account dropped packets to the shaped rate, as these have already
traversed the bottleneck.

Enabling ingress mode will also tune the AQM to always keep at least two
packets queued *for each flow*. This is done by scaling the minimum queue
occupancy level that will disable the AQM by the number of active bulk
flows. The rationale for this is that retransmits are more expensive in
ingress mode, since dropped packets have to traverse the bottleneck again
when they are retransmitted; thus, being more lenient and keeping a minimum
number of packets queued will improve throughput in cases where the number
of active flows is so large that they saturate the bottleneck even at
their minimum window size.

This commit also adds a separate switch to enable ingress mode rate
autoscaling. If enabled, the autoscaling code will observe the actual
traffic rate and adjust the shaper rate to match it. This can help avoid
latency increases in the case where the actual bottleneck rate decreases
below the shaped rate. The scaling smooths out short-term spikes with an
EWMA filter.
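
The EWMA itself is the usual shift-based construction; here is a sketch
mirroring the out-of-tree cake_ewma() (the call sites pick a smaller
shift -- i.e. a heavier sample weight -- when the sample exceeds the
average, so upward jumps are tracked quickly):

        #include <stdint.h>

        /* new_avg = avg + (sample - avg) / 2^shift */
        static uint64_t cake_ewma(uint64_t avg, uint64_t sample,
                                  uint32_t shift)
        {
                avg -= avg >> shift;
                avg += sample >> shift;
                return avg;
        }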

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |   70 +++---
 1 file changed, 66 insertions(+), 4 deletions(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 8e2f2ba2ed5d..7ca86e3ed14c 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -438,7 +438,8 @@ static bool cobalt_queue_empty(struct cobalt_vars *vars,
 static bool cobalt_should_drop(struct cobalt_vars *vars,
   struct cobalt_params *p,
   cobalt_time_t now,
-  struct sk_buff *skb)
+  struct sk_buff *skb,
+  u32 bulk_flows)
 {
bool drop = false;
 
@@ -463,6 +464,7 @@ static bool cobalt_should_drop(struct cobalt_vars *vars,
cobalt_tdiff_t schedule = now - vars->drop_next;
 
bool over_target = sojourn > p->target &&
+  sojourn > p->mtu_time * bulk_flows * 2 &&
   sojourn > p->mtu_time * 4;
bool next_due= vars->count && schedule >= 0;
 
@@ -883,6 +885,9 @@ static unsigned int cake_drop(struct Qdisc *sch, struct 
sk_buff **to_free)
b->tin_dropped++;
sch->qstats.drops++;
 
+   if (q->rate_flags & CAKE_FLAG_INGRESS)
+   cake_advance_shaper(q, b, skb, now, true);
+
__qdisc_drop(skb, to_free);
sch->q.qlen--;
 
@@ -951,8 +956,39 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc 
*sch,
cake_heapify_up(q, b->overflow_idx[idx]);
 
/* incoming bandwidth capacity estimate */
-   q->avg_window_bytes = 0;
-   q->last_packet_time = now;
+   if (q->rate_flags & CAKE_FLAG_AUTORATE_INGRESS) {
+   u64 packet_interval = now - q->last_packet_time;
+
+   if (packet_interval > NSEC_PER_SEC)
+   packet_interval = NSEC_PER_SEC;
+
+   /* filter out short-term bursts, eg. wifi aggregation */
+   q->avg_packet_interval = cake_ewma(q->avg_packet_interval,
+  packet_interval,
+   packet_interval > q->avg_packet_interval ? 2 : 8);
+
+   q->last_packet_time = now;
+
+   if (packet_interval > q->avg_packet_interval) {
+   u64 window_interval = now - q->avg_window_begin;
+   u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
+
+   do_div(b, window_interval);
+   q->avg_peak_bandwidth =
+   cake_ewma(q->avg_peak_bandwidth, b,
+ b > q->avg_peak_bandwidth ? 2 : 8);
+   q->avg_window_bytes = 0;
+   q->avg_window_begin = now;
+
+   if (now - q->last_reconfig_time > (NSEC_PER_SEC / 4)) {
+   q->rate_bps = (q->avg_peak_bandwidth * 15) >> 4;
+   cake_reconfigure(sch);
+   }
+   }
+   } else {
+   q->avg_window_bytes = 0;
+   q->last_packet_time = now;
+   }
 
/* flowchain */
if (!flow->set || flow->set == CAKE_SET_DECAYING) {
@@ -1207,14 +1243,26 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
}
 
/* Last packet in queue may be marked, shouldn't be dropped */
-   if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb) ||
+   if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb,
+   (b->bulk_flow_count *
+!!(q->rate_flags &
+ 

[Cake] [PATCH net-next v8 6/7] sch_cake: Add overhead compensation support to the rate shaper

2018-05-04 Thread Toke Høiland-Jørgensen
This commit adds configurable overhead compensation support to the rate
shaper. With this feature, userspace can configure the actual bottleneck
link overhead and encapsulation mode used, which will be used by the shaper
to calculate the precise duration of each packet on the wire.

This feature is needed because CAKE is often deployed one or two hops
upstream of the actual bottleneck (which can be, e.g., inside a DSL or
cable modem). In this case, the link layer characteristics and overhead
reported by the kernel do not match the actual bottleneck. Being able to
set the actual values in use makes it possible to configure the shaper rate
much closer to the actual bottleneck rate (our experience shows it is
possible to get within 0.1% of the actual physical bottleneck rate), thus
keeping latency low without sacrificing bandwidth.

The overhead compensation has three tunables: A fixed per-packet overhead
size (which, if set, will be accounted from the IP packet header), a
minimum packet size (MPU) and a framing mode supporting either ATM or PTM
framing. We include a set of common keywords in TC to help users configure
the right parameters. If no overhead value is set, the value reported by
the kernel is used.
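
As a worked example of the ATM framing compensation (each 53-byte cell
carries 48 payload bytes, hence the len += 47; len /= 48; len *= 53 in
the patch below): a 1500-byte IP packet with the common 10-byte PPPoA
(VC-Mux) overhead becomes 1510 bytes, needing ceil(1510/48) = 32 cells,
i.e. 32 * 53 = 1696 bytes on the wire -- roughly 13% more than the raw
packet. With the matching iproute2 keywords that configuration would be
e.g.:

  tc qdisc replace dev eth0 root cake bandwidth 1Mbit overhead 10 atm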

Signed-off-by: Toke Høiland-Jørgensen 
---
 net/sched/sch_cake.c |  110 ++
 1 file changed, 109 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 1e5951d26ed2..cb978a0f8969 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -273,6 +273,7 @@ enum {
 
 struct cobalt_skb_cb {
cobalt_time_t enqueue_time;
+   u32   adjusted_len;
 };
 
 static cobalt_time_t cobalt_get_time(void)
@@ -1095,6 +1096,87 @@ static cobalt_time_t cake_ewma(cobalt_time_t avg, 
cobalt_time_t sample,
return avg;
 }
 
+static u32 cake_overhead(struct cake_sched_data *q, struct sk_buff *skb)
+{
+   const struct skb_shared_info *shinfo = skb_shinfo(skb);
+   u32 off = skb_network_offset(skb);
+   u32 len = qdisc_pkt_len(skb);
+   u16 segs = 1;
+
+   if (unlikely(shinfo->gso_size)) {
+   /* borrowed from qdisc_pkt_len_init() */
+   unsigned int hdr_len;
+
+   hdr_len = skb_transport_header(skb) - skb_mac_header(skb);
+
+   /* + transport layer */
+   if (likely(shinfo->gso_type & (SKB_GSO_TCPV4 |
+  SKB_GSO_TCPV6))) {
+   const struct tcphdr *th;
+   struct tcphdr _tcphdr;
+
+   th = skb_header_pointer(skb, skb_transport_offset(skb),
+   sizeof(_tcphdr), &_tcphdr);
+   if (likely(th))
+   hdr_len += __tcp_hdrlen(th);
+   } else {
+   struct udphdr _udphdr;
+
+   if (skb_header_pointer(skb, skb_transport_offset(skb),
+  sizeof(_udphdr), &_udphdr))
+   hdr_len += sizeof(struct udphdr);
+   }
+
+   if (unlikely(shinfo->gso_type & SKB_GSO_DODGY))
+   segs = DIV_ROUND_UP(skb->len - hdr_len,
+   shinfo->gso_size);
+   else
+   segs = shinfo->gso_segs;
+
+   /* The last segment may be shorter; we ignore this, which means
+* that we will over-estimate the size of the whole GSO segment
+* by the difference in size. This is conservative, so we live
+* with that to avoid the complexity of dealing with it.
+*/
+   len = shinfo->gso_size + hdr_len;
+   }
+
+   q->avg_netoff = cake_ewma(q->avg_netoff, off << 16, 8);
+
+   if (q->rate_flags & CAKE_FLAG_OVERHEAD)
+   len -= off;
+
+   if (q->max_netlen < len)
+   q->max_netlen = len;
+   if (q->min_netlen > len)
+   q->min_netlen = len;
+
+   len += q->rate_overhead;
+
+   if (len < q->rate_mpu)
+   len = q->rate_mpu;
+
+   if (q->atm_mode == CAKE_ATM_ATM) {
+   len += 47;
+   len /= 48;
+   len *= 53;
+   } else if (q->atm_mode == CAKE_ATM_PTM) {
+   /* Add one byte per 64 bytes or part thereof.
+* This is conservative and easier to calculate than the
+* precise value.
+*/
+   len += (len + 63) / 64;
+   }
+
+   if (q->max_adjlen < len)
+   q->max_adjlen = len;
+   if (q->min_adjlen > len)
+   q->min_adjlen = len;
+
+   get_cobalt_cb(skb)->adjusted_len = len * segs;
+   return len;
+}
+
 static void cake_heap_swap(struct cake_sched_data *q, u16 i, u16 j)
 {
struct cake_heap_entry ii = q->overflow_heap[i];
@@ -1172,7 +1254,7 @@ static int cake_

Re: [Cake] Merged the simplified ACK filter...

2018-05-04 Thread Toke Høiland-Jørgensen
Georgios Amanakis  writes:

> There is a plain "n" (probably leftover) at line 1026 and it doesn't
> compile.

Ah, right, didn't realise that had made it into the github version as
well; fixed! :)

> Thank you for all the effort you are putting in!

You're welcome; testing much appreciated! :)

-Toke


Re: [Cake] Merged the simplified ACK filter...

2018-05-04 Thread Georgios Amanakis
There is a plain "n" (probably leftover) at line 1026 and it doesn't
compile.

Thank you for all the effort you are putting in!

George

On Fri, 2018-05-04 at 11:45 +0200, Toke Høiland-Jørgensen wrote:
> ...since no one complained. If no one continues to complain, I'll
> resubmit to net-next with that version later today :)
> 
> -Toke


[Cake] Merged the simplified ACK filter...

2018-05-04 Thread Toke Høiland-Jørgensen
...since no one complained. If no one continues to complain, I'll
resubmit to net-next with that version later today :)

-Toke