On Thu, 2020-07-02 at 11:08 -0700, Josh Hunt wrote:
> On 7/2/20 2:45 AM, Paolo Abeni wrote:
> > Hi all,
> > 
> > On Thu, 2020-07-02 at 08:14 +0200, Jonas Bonn wrote:
> > > Hi Cong,
> > > 
> > > On 01/07/2020 21:58, Cong Wang wrote:
> > > > On Wed, Jul 1, 2020 at 9:05 AM Cong Wang <xiyou.wangc...@gmail.com> wrote:
> > > > > On Tue, Jun 30, 2020 at 2:08 PM Josh Hunt <joh...@akamai.com> wrote:
> > > > > > Do either of you know if there's been any development on a fix
> > > > > > for this issue? If not we can propose something.
> > > > > 
> > > > > If you have a reproducer, I can look into this.
> > > > 
> > > > Does the attached patch fix this bug completely?
> > > 
> > > It's easier to comment if you inline the patch, but after taking a
> > > quick look it seems too simplistic.
> > > 
> > > i) Are you sure you haven't got the return values on qdisc_run reversed?
> > 
> > qdisc_run() returns true if it was able to acquire the seq lock. We
> > need to take special action in the opposite case, so Cong's patch LGTM
> > from a functional PoV.
> > 
> > > ii) There's a "bypass" path that skips the enqueue/dequeue operation
> > > if the queue is empty; that needs a similar treatment: after
> > > releasing seqlock it needs to ensure that another packet hasn't been
> > > enqueued since it last checked.
> > 
> > That has been reverted with
> > commit 379349e9bc3b42b8b2f8f7a03f64a97623fff323
> > 
> > > ---
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index 90b59fc50dc9..c7e48356132a 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -3744,7 +3744,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
> > > 
> > >  	if (q->flags & TCQ_F_NOLOCK) {
> > >  		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
> > > -		qdisc_run(q);
> > > +		if (!qdisc_run(q) && rc == NET_XMIT_SUCCESS)
> > > +			__netif_schedule(q);
> > 
> > I fear the __netif_schedule() call may cause performance regression to
> > the point of making a revert of TCQ_F_NOLOCK preferable. I'll try to
> > collect some data.
> 
> Initial results with Cong's patch look promising, so far no stalls. We
> will let it run over the long weekend and report back on Tuesday.
> 
> Paolo - I have concerns about possible performance regression with the
> change as well. If you can gather some data that would be great.
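To recap the semantics the patch above relies on: qdisc_run() must
report whether the caller won the qdisc seqlock. A minimal sketch,
assuming it is built on the existing qdisc_run_begin()/qdisc_run_end()
pair (the actual patch may differ):

---
/* Sketch only: make qdisc_run() report whether this CPU acquired the
 * qdisc seqlock and ran the qdisc itself. If it returns false, another
 * CPU is inside __qdisc_run() and may have already missed the packet we
 * just enqueued, hence the __netif_schedule() fallback in the hunk above.
 */
static inline bool qdisc_run(struct Qdisc *q)
{
	if (qdisc_run_begin(q)) {
		__qdisc_run(q);
		qdisc_run_end(q);
		return true;
	}
	return false;
}
---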
I finally had the time to run some performance tests vs the above, with
mixed results.

Using several netperf threads over a single pfifo_fast queue with small
UDP packets, the perf differences vs vanilla are just above the noise
range (1-1.5%).

Using pktgen in 'queue_xmit' mode on a dummy device (this should
maximise the pkt-rate and thus the contention), I see:

pktgen threads		vanilla		patched		delta
nr			kpps		kpps		%
1			3240		3240		0
2			3910		2710		-30.5
4			5140		4920		-4

A relevant source of the measured overhead is the contention on
q->state in __netif_schedule(), so the following helps a bit:

---
diff --git a/net/core/dev.c b/net/core/dev.c
index b8e8286a0a34..3cad6e086fac 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3750,7 +3750,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 
 	if (q->flags & TCQ_F_NOLOCK) {
 		rc = q->enqueue(skb, q, NULL, &to_free) & NET_XMIT_MASK;
-		if (!qdisc_run(q) && rc == NET_XMIT_SUCCESS)
+		if (!qdisc_run(q) && rc == NET_XMIT_SUCCESS &&
+		    !test_bit(__QDISC_STATE_SCHED, &q->state))
 			__netif_schedule(q);
 
 		if (unlikely(to_free))
---

With the above incremental patch applied I see:

pktgen threads		vanilla		patched[II]	delta
nr			kpps		kpps		%
1			3240		3240		0
2			3910		2830		-27
4			5140		5140		0

So the regression with 2 pktgen threads is still relevant. 'perf' shows
significant time spent in net_tx_action() and __netif_schedule().

Cheers,

Paolo.
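P.S.: for reference, the q->state contention comes from the
unconditional atomic RMW in __netif_schedule(), which is roughly the
following (simplified from net/core/dev.c, so details may vary by
kernel version):

---
/* Even when __QDISC_STATE_SCHED is already set, test_and_set_bit() is
 * an atomic read-modify-write that takes the cache line exclusively,
 * so several CPUs calling this concurrently bounce q->state between
 * them. The test_bit() check in the incremental patch above skips the
 * call entirely when the qdisc is already scheduled.
 */
void __netif_schedule(struct Qdisc *q)
{
	if (!test_and_set_bit(__QDISC_STATE_SCHED, &q->state))
		__netif_reschedule(q);	/* raise NET_TX_SOFTIRQ on this CPU */
}
---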