We currently always poll tx for the socket. This is suboptimal: it
slightly increases the waitqueue traversal time and, more importantly,
it keeps vhost from benefiting from commit 9e641bdcfa4e ("net-tun:
restructure tun_do_read for better sleep/wakeup efficiency"). Even
though we stop rx polling during handle_rx(), the tx poll entry is
still left in the socket's waitqueue.
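
For reference (not part of this patch), the enable/disable helpers used
below amount to adding/removing the vhost poll entry on the socket's
waitqueue. A simplified sketch, roughly following the existing layout in
drivers/vhost/net.c (locking and annotations omitted):

  /* Sketch only: the real helpers in drivers/vhost/net.c differ in
   * detail, but the waitqueue effect is what matters for this patch.
   */
  static void vhost_net_disable_vq(struct vhost_net *n,
                                   struct vhost_virtqueue *vq)
  {
          struct vhost_net_virtqueue *nvq =
                  container_of(vq, struct vhost_net_virtqueue, vq);
          struct vhost_poll *poll = n->poll + (nvq - n->vqs);

          /* Drop our poll entry from the socket's waitqueue so a
           * wakeup on the socket no longer has to walk past it.
           */
          vhost_poll_stop(poll);
  }

  static int vhost_net_enable_vq(struct vhost_net *n,
                                 struct vhost_virtqueue *vq)
  {
          struct vhost_net_virtqueue *nvq =
                  container_of(vq, struct vhost_net_virtqueue, vq);
          struct vhost_poll *poll = n->poll + (nvq - n->vqs);
          struct socket *sock = vq->private_data;

          if (!sock)
                  return 0;

          /* Re-arm polling so the socket wakes us up again. */
          return vhost_poll_start(poll, sock->file);
  }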

Pktgen from a remote host to a VM over mlx4 on two 2.00GHz Xeon
E5-2650 machines shows an 11.7% improvement in rx PPS (from 1.28Mpps
to 1.44Mpps).

Cc: Wei Xu <w...@redhat.com>
Cc: Matthew Rosato <mjros...@linux.vnet.ibm.com>
Signed-off-by: Jason Wang <jasow...@redhat.com>
---
Changes from V1:
- don't try to disable tx polling during start
- poll tx on error unconditionally
---
 drivers/vhost/net.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 68677d9..8d626d7 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -471,6 +471,7 @@ static void handle_tx(struct vhost_net *net)
                goto out;
 
        vhost_disable_notify(&net->dev, vq);
+       vhost_net_disable_vq(net, vq);
 
        hdr_size = nvq->vhost_hlen;
        zcopy = nvq->ubufs;
@@ -556,6 +557,7 @@ static void handle_tx(struct vhost_net *net)
                                        % UIO_MAXIOV;
                        }
                        vhost_discard_vq_desc(vq, 1);
+                       vhost_net_enable_vq(net, vq);
                        break;
                }
                if (err != len)
-- 
2.7.4
