4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet <[email protected]>

[ Upstream commit 6c0d54f1897d229748d4f41ef919078db6db2123 ]

When the qdisc is full, we drop a packet at the head of the queue,
queue the current skb and return NET_XMIT_CN.

Now that we track backlog on upper qdiscs, we need to call
qdisc_tree_reduce_backlog(), even if the qlen did not change.

Fixes: 2ccccf5fb43f ("net_sched: update hierarchical backlog too")
Signed-off-by: Eric Dumazet <[email protected]>
Cc: WANG Cong <[email protected]>
Cc: Jamal Hadi Salim <[email protected]>
Acked-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 net/sched/sch_fifo.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/net/sched/sch_fifo.c
+++ b/net/sched/sch_fifo.c
@@ -37,14 +37,18 @@ static int pfifo_enqueue(struct sk_buff
 
 static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
+       unsigned int prev_backlog;
+
        if (likely(skb_queue_len(&sch->q) < sch->limit))
                return qdisc_enqueue_tail(skb, sch);
 
+       prev_backlog = sch->qstats.backlog;
        /* queue full, remove one skb to fulfill the limit */
        __qdisc_queue_drop_head(sch, &sch->q);
        qdisc_qstats_drop(sch);
        qdisc_enqueue_tail(skb, sch);
 
+       qdisc_tree_reduce_backlog(sch, 0, prev_backlog - sch->qstats.backlog);
        return NET_XMIT_CN;
 }
 


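For reviewers who want to see the arithmetic in isolation, here is a minimal
user-space sketch (not part of the patch, and purely illustrative) of the
bookkeeping the one-line fix performs.  The struct and helpers below
(fake_qdisc, tree_reduce_backlog, tail_enqueue_full) are hypothetical
stand-ins for sch_fifo's pfifo_tail_enqueue() and the real
qdisc_tree_reduce_backlog(); the point is only that when a head drop and a
tail enqueue cancel out in qlen, the ancestors must still be told about the
byte difference.

	/* Illustrative sketch, not kernel code. */
	#include <stdio.h>

	struct fake_qdisc {
		unsigned int qlen;		/* packets queued at this level */
		unsigned int backlog;		/* bytes queued at this level   */
		struct fake_qdisc *parent;
	};

	/* Propagate a (packets, bytes) decrease to every ancestor qdisc,
	 * mimicking what qdisc_tree_reduce_backlog() does for real qdiscs. */
	static void tree_reduce_backlog(struct fake_qdisc *q,
					unsigned int pkts, unsigned int bytes)
	{
		for (q = q->parent; q; q = q->parent) {
			q->qlen -= pkts;
			q->backlog -= bytes;
		}
	}

	/* Queue-full path: drop the head packet, enqueue the new one. */
	static void tail_enqueue_full(struct fake_qdisc *q,
				      unsigned int head_len, unsigned int new_len)
	{
		unsigned int prev_backlog = q->backlog;

		q->backlog -= head_len;	/* like __qdisc_queue_drop_head() */
		q->backlog += new_len;	/* like qdisc_enqueue_tail()      */
		/* qlen is unchanged: one packet out, one packet in. */

		/* The fix: parents still lost (head_len - new_len) bytes. */
		tree_reduce_backlog(q, 0, prev_backlog - q->backlog);
	}

	int main(void)
	{
		struct fake_qdisc root = { .qlen = 1, .backlog = 1500 };
		struct fake_qdisc leaf = { .qlen = 1, .backlog = 1500,
					   .parent = &root };

		/* Drop a 1500-byte head packet, queue a 100-byte skb. */
		tail_enqueue_full(&leaf, 1500, 100);

		printf("leaf backlog=%u root backlog=%u\n",
		       leaf.backlog, root.backlog);
		return 0;
	}

Compiled and run, this prints matching leaf and root backlogs (100 bytes
each); without the tree_reduce_backlog() call the root would still claim
1500 bytes, which is the stale-backlog problem the patch addresses.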