On Tue, Sep 11, 2018 at 11:35 PM Jason Wang <jasow...@redhat.com> wrote:
>
>
>
> > On Sep 11, 2018 09:14, Willem de Bruijn wrote:
> >>>> I cooked up a fixup, and it appears to work in my setup:
> >>>>
> >>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>> index b320b6b14749..9181c3f2f832 100644
> >>>> --- a/drivers/net/virtio_net.c
> >>>> +++ b/drivers/net/virtio_net.c
> >>>> @@ -2204,10 +2204,17 @@ static int virtnet_set_coalesce(struct net_device *dev,
> >>>>                  return -EINVAL;
> >>>>
> >>>>          if (napi_weight ^ vi->sq[0].napi.weight) {
> >>>> -                if (dev->flags & IFF_UP)
> >>>> -                        return -EBUSY;
> >>>> -                for (i = 0; i < vi->max_queue_pairs; i++)
> >>>> +                for (i = 0; i < vi->max_queue_pairs; i++) {
> >>>> +                        struct netdev_queue *txq =
> >>>> +                                netdev_get_tx_queue(vi->dev, i);
> >>>> +
> >>>> +                        virtnet_napi_tx_disable(&vi->sq[i].napi);
> >>>> +                        __netif_tx_lock_bh(txq);
> >>>>                          vi->sq[i].napi.weight = napi_weight;
> >>>> +                        __netif_tx_unlock_bh(txq);
> >>>> +                        virtnet_napi_tx_enable(vi, vi->sq[i].vq,
> >>>> +                                               &vi->sq[i].napi);
> >>>> +                }
> >>>>          }
> >>>>
> >>>>          return 0;
> >>> Thanks! It passes my simple stress test, too, which consists of two
> >>> concurrent loops: one toggling the ethtool option, the other running
> >>> TCP_RR.
> >>>
> >>>> The only remaining case is the speculative tx polling from the RX NAPI
> >>>> handler. I think we don't need to care about it, since it is not
> >>>> required for correctness.
> >>> As long as the txq lock is held, that will be a noop anyway. The other
> >>> concurrent action is skb_xmit_done. It looks correct to me, but I need
> >>> to think about it a bit. The tricky transition is coming out of napi
> >>> without having >= 2 + MAX_SKB_FRAGS clean descriptors. If the queue is
> >>> stopped, that may deadlock transmission in no-napi mode.
> >> Yes, maybe we can enable the tx queue when the napi weight is zero in
> >> virtnet_poll_tx().
> > Yes, that precaution should resolve that edge case.
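
For the record, a rough sketch of what that precaution could look like,
only to illustrate the idea rather than the actual patch. Helper names
(vq2txq(), free_old_xmit_skbs(), virtqueue_napi_complete()) are taken
from the current virtio_net.c; the weight check is the new piece:

/* Sketch only: wake the tx queue from virtnet_poll_tx() when the napi
 * weight has been cleared, so a queue stopped in napi mode cannot stay
 * stopped after the switch to no-napi mode.
 */
static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
        struct send_queue *sq = container_of(napi, struct send_queue, napi);
        struct virtnet_info *vi = sq->vq->vdev->priv;
        struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));

        __netif_tx_lock(txq, raw_smp_processor_id());
        free_old_xmit_skbs(sq);

        /* New: the weight was set to zero while this poll was pending. */
        if (!sq->napi.weight)
                netif_tx_wake_queue(txq);

        __netif_tx_unlock(txq);

        virtqueue_napi_complete(napi, sq->vq, 0);

        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
                netif_tx_wake_queue(txq);

        return 0;
}

The weight check sits under the txq lock, so it cannot race with the
locked weight update in virtnet_set_coalesce() above.
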
> >
>
> I've done a stress test and it passes. The test consists of:
>
> - a VM with 2 queues
> - a bash script that toggles tx napi on and off
> - two netperf UDP_STREAM sessions sending small packets

Great. That matches my results. Do you want to send the v2?
