Performance tests with the OVS DPDK datapath have shown that the achievable tx 
throughput over a vhostuser port into a VM with an interrupt-based virtio 
driver is limited by the overhead of virtio interrupts: the OVS PMD spends up 
to 30% of its cycles in system calls kicking the eventfd, and the core running 
the vCPU is heavily loaded both with generating the virtio interrupts in KVM 
on the host and with handling these interrupts in the virtio-net driver in the 
guest. This limits the throughput to about 500-700 Kpps with a single vCPU.

For some time now, OVS has been addressing this issue by batching packets sent 
to a vhostuser port in order to limit the virtio interrupt frequency. With a 
50 us batching period we have measured a 15% increase in iperf3 throughput and 
a decrease in PMD utilization from 45% to 30%.

Guests using DPDK virtio PMDs, on the other hand, do not benefit from 
time-based tx batching. Instead they suffer a 2-3% performance penalty and an 
average latency increase of 30-40 us. OVS therefore intends to apply 
time-based tx batching only to vhostuser tx queues that need to trigger 
virtio interrupts.

Today this information is hidden inside the rte_vhost library and not 
accessible to users of the API. This patch adds an API function to query it.

Signed-off-by: Jan Scheurich <jan.scheur...@ericsson.com>

---

 lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
 lib/librte_vhost/vhost.c     | 22 ++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 8c974eb..d62338b 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);

+/**
+ * Does the virtio driver request interrupts for a vhost tx queue?
+ *
+ * @param vid
+ *  vhost device ID
+ * @param qid
+ *  virtio queue index in mq case
+ * @return
+ *  1 if true, 0 if false
+ */
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0b6aa1c..bd1ebf9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -503,3 +503,25 @@ struct virtio_net *

        return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
+{
+    struct virtio_net *dev;
+    struct vhost_virtqueue *vq;
+
+    dev = get_device(vid);
+    if (dev == NULL)
+        return 0;
+
+    if (qid >= VHOST_MAX_VRING)
+        return 0;
+
+    vq = dev->virtqueue[qid];
+    if (vq == NULL)
+        return 0;
+
+    if (unlikely(vq->enabled == 0 || vq->avail == NULL))
+        return 0;
+
+    return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
+}

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
