[PATCH v1] vsock: don't send messages when vsock device is not started
Added a check in vhost_vsock_common_send_transport_reset() so that
messages are only sent on the event queue when the vsock host device is
started.

Suggested-by: Stefano Garzarella
Signed-off-by: Jiang Wang
---
 hw/virtio/vhost-vsock-common.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..64425719a2 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -138,8 +138,10 @@ static void vhost_vsock_common_send_transport_reset(VHostVSockCommon *vvc)
         goto out;
     }
 
-    virtqueue_push(vq, elem, sizeof(event));
-    virtio_notify(VIRTIO_DEVICE(vvc), vq);
+    if (vvc->vhost_dev.started) {
+        virtqueue_push(vq, elem, sizeof(event));
+        virtio_notify(VIRTIO_DEVICE(vvc), vq);
+    }
 
 out:
     g_free(elem);
-- 
2.20.1
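For readers outside the QEMU tree, the behavior of the fix can be sketched with stand-in types. `VsockDev`, `vq_push`, and `vq_notify` below are made-up simplifications of `VHostVSockCommon`, `virtqueue_push()`, and `virtio_notify()` — not the real QEMU API — that only show the guard on `vhost_dev.started`:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for QEMU's VHostVSockCommon and virtqueue helpers. */
typedef struct {
    bool started;   /* mirrors vvc->vhost_dev.started */
    int  pushed;    /* counts buffers pushed to the event queue */
    int  notified;  /* counts guest notifications */
} VsockDev;

static void vq_push(VsockDev *d)   { d->pushed++; }
static void vq_notify(VsockDev *d) { d->notified++; }

/* Mirrors the patched vhost_vsock_common_send_transport_reset(): the event
 * is pushed and the guest notified only when the device is started. */
static void send_transport_reset(VsockDev *d)
{
    if (d->started) {
        vq_push(d);
        vq_notify(d);
    }
    /* the element is freed unconditionally in the real code (g_free(elem)) */
}
```

With the guard in place, a stopped device drops the event silently instead of touching a virtqueue the guest has not set up.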
Re: [External] Re: [RFC v7] virtio/vsock: add two more queues for datagram types
On Thu, Sep 23, 2021 at 2:18 AM Stefano Garzarella wrote: > > On Wed, Sep 22, 2021 at 10:36:24AM -0700, Jiang Wang . wrote: > >On Wed, Sep 22, 2021 at 2:23 AM Stefano Garzarella > >wrote: > >> > >> On Wed, Sep 22, 2021 at 12:00:24AM +, Jiang Wang wrote: > >> >Datagram sockets are connectionless and unreliable. > >> >The sender does not know the capacity of the receiver > >> >and may send more packets than the receiver can handle. > >> > > >> >Add two more dedicate virtqueues for datagram sockets, > >> >so that it will not unfairly steal resources from > >> >stream and future connection-oriented sockets. > >> > > >> >The two new virtqueues are enabled by default and will > >> >be removed if the guest does not support. This will help > >> >migration work. > >> > > >> >btw: enable_dgram argument in vhost_vsock_common_realize > >> >is redundant for now, but will be used later when we > >> >want to disable DGRAM feature bit for old versions. > >> > > >> >Signed-off-by: Jiang Wang > >> >--- > >> >v1 -> v2: use qemu cmd option to control number of queues, > >> >removed configuration settings for dgram. > >> >v2 -> v3: use ioctl to get features and decide number of > >> >virt queues, instead of qemu cmd option. > >> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument > >> >in vhost_vsock_common_realize to indicate dgram is supported or > >> > not. > >> >v4 -> v5: don't open dev to get vhostfd. Removed leftover definition of > >> >enable_dgram > >> >v5 -> v6: fix style errors. Imporve error handling of > >> >vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM and > >> > another one. > >> >v6 -> v7: Always enable dgram for vhost-user and vhost kernel. > >> >Delete unused virtqueues at the beginning of > >> >vhost_vsock_common_start for migration. Otherwise, migration will > >> > fail. 
> >> > > >> > hw/virtio/vhost-user-vsock.c | 2 +- > >> > hw/virtio/vhost-vsock-common.c| 32 +-- > >> > hw/virtio/vhost-vsock.c | 6 +++- > >> > include/hw/virtio/vhost-vsock-common.h| 6 ++-- > >> > include/hw/virtio/vhost-vsock.h | 3 ++ > >> > include/standard-headers/linux/virtio_vsock.h | 1 + > >> > 6 files changed, 43 insertions(+), 7 deletions(-) > >> > > >> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > >> >index 6095ed7349..9823a2f3bd 100644 > >> >--- a/hw/virtio/vhost-user-vsock.c > >> >+++ b/hw/virtio/vhost-user-vsock.c > >> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, > >> >Error **errp) > >> > return; > >> > } > >> > > >> >-vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > >> >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", true); > >> > > >> > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > >> > > >> >diff --git a/hw/virtio/vhost-vsock-common.c > >> >b/hw/virtio/vhost-vsock-common.c > >> >index 4ad6e234ad..7d89b4d242 100644 > >> >--- a/hw/virtio/vhost-vsock-common.c > >> >+++ b/hw/virtio/vhost-vsock-common.c > >> >@@ -26,6 +26,18 @@ int vhost_vsock_common_start(VirtIODevice *vdev) > >> > int ret; > >> > int i; > >> > > >> >+if (!virtio_has_feature(vdev->guest_features, VIRTIO_VSOCK_F_DGRAM)) > >> >{ > >> >+struct vhost_virtqueue *vqs; > >> >+virtio_delete_queue(vvc->dgram_recv_vq); > >> >+virtio_delete_queue(vvc->dgram_trans_vq); > >> >+ > >> > >> Are you sure it works removing queues here? > >> The event_queue would always be #4, but the guest will use #2 which > >> we're removing. > >> > >Yes, this works. In fact, I should include this in v6 and my tests done > >in my previous emails have these changes. But before I submitted the patch, > >I thought this code was redundant and removed it without further testing. > > Just tried on an host that doesn't support F_DGRAM and I have the &
Re: Re: [RFC v7] virtio/vsock: add two more queues for datagram types
On Wed, Sep 22, 2021 at 2:23 AM Stefano Garzarella wrote: > > On Wed, Sep 22, 2021 at 12:00:24AM +, Jiang Wang wrote: > >Datagram sockets are connectionless and unreliable. > >The sender does not know the capacity of the receiver > >and may send more packets than the receiver can handle. > > > >Add two more dedicate virtqueues for datagram sockets, > >so that it will not unfairly steal resources from > >stream and future connection-oriented sockets. > > > >The two new virtqueues are enabled by default and will > >be removed if the guest does not support. This will help > >migration work. > > > >btw: enable_dgram argument in vhost_vsock_common_realize > >is redundant for now, but will be used later when we > >want to disable DGRAM feature bit for old versions. > > > >Signed-off-by: Jiang Wang > >--- > >v1 -> v2: use qemu cmd option to control number of queues, > >removed configuration settings for dgram. > >v2 -> v3: use ioctl to get features and decide number of > >virt queues, instead of qemu cmd option. > >v3 -> v4: change DGRAM feature bit value to 2. Add an argument > >in vhost_vsock_common_realize to indicate dgram is supported or not. > >v4 -> v5: don't open dev to get vhostfd. Removed leftover definition of > >enable_dgram > >v5 -> v6: fix style errors. Imporve error handling of > >vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM and another > > one. > >v6 -> v7: Always enable dgram for vhost-user and vhost kernel. > >Delete unused virtqueues at the beginning of > >vhost_vsock_common_start for migration. Otherwise, migration will > > fail. 
> > > > hw/virtio/vhost-user-vsock.c | 2 +- > > hw/virtio/vhost-vsock-common.c| 32 +-- > > hw/virtio/vhost-vsock.c | 6 +++- > > include/hw/virtio/vhost-vsock-common.h| 6 ++-- > > include/hw/virtio/vhost-vsock.h | 3 ++ > > include/standard-headers/linux/virtio_vsock.h | 1 + > > 6 files changed, 43 insertions(+), 7 deletions(-) > > > >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > >index 6095ed7349..9823a2f3bd 100644 > >--- a/hw/virtio/vhost-user-vsock.c > >+++ b/hw/virtio/vhost-user-vsock.c > >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error > >**errp) > > return; > > } > > > >-vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", true); > > > > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > > > >diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c > >index 4ad6e234ad..7d89b4d242 100644 > >--- a/hw/virtio/vhost-vsock-common.c > >+++ b/hw/virtio/vhost-vsock-common.c > >@@ -26,6 +26,18 @@ int vhost_vsock_common_start(VirtIODevice *vdev) > > int ret; > > int i; > > > >+if (!virtio_has_feature(vdev->guest_features, VIRTIO_VSOCK_F_DGRAM)) { > >+struct vhost_virtqueue *vqs; > >+virtio_delete_queue(vvc->dgram_recv_vq); > >+virtio_delete_queue(vvc->dgram_trans_vq); > >+ > > Are you sure it works removing queues here? > The event_queue would always be #4, but the guest will use #2 which > we're removing. > Yes, this works. In fact, I should include this in v6 and my tests done in my previous emails have these changes. But before I submitted the patch, I thought this code was redundant and removed it without further testing. To explain it, I think the event queue number does not matter for the vhost and qemu. The vhost-vsock kernel module does not allocate any data structure for the event queue. Qemu also only allocates an array of size 2 or 4 for the tx, rx queues. The event queue is handled separately. 
The event queue number only shows up in the spec, but in the real code I
don't see anywhere that the event queue number is used explicitly or
implicitly. The event queue looks independent of the tx and rx queues.

> >+    vqs = vvc->vhost_dev.vqs;
> >+    vvc->vhost_dev.nvqs = MAX_VQS_WITHOUT_DGRAM;
> >+    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue,
> >+                                vvc->vhost_dev.nvqs);
> >+    g_free(vqs);
> >+}
> >+
> >     if (!k->set_guest_notifiers) {
> >         error_report("binding does not support guest notifiers");
> >         return -ENOSYS;
> >@@ -196,9 +2
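The queue-trimming step under discussion can be modeled outside QEMU. The sketch below is an assumption-laden simplification: `struct vq` and `struct dev` stand in for `struct vhost_virtqueue` and the vhost device state, and plain `calloc`/`free` replace `g_new0()`/`g_free()`. It only demonstrates shrinking the vhost virtqueue array from 4 to 2 when the guest did not ack the DGRAM feature (bit 2, per the patch changelog):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_VQS_WITHOUT_DGRAM 2
#define MAX_VQS_WITH_DGRAM    4
#define VIRTIO_VSOCK_F_DGRAM  2   /* feature bit value 2, per the changelog */

struct vq { int dummy; };          /* stand-in for struct vhost_virtqueue */

struct dev {
    int nvqs;
    struct vq *vqs;
};

/* Mirrors the hunk at the top of vhost_vsock_common_start(): if the guest
 * did not ack F_DGRAM, fall back to 2 vhost queues before starting. */
static void trim_dgram_queues(struct dev *d, uint64_t guest_features)
{
    if (!(guest_features & (1ULL << VIRTIO_VSOCK_F_DGRAM))) {
        struct vq *old = d->vqs;
        d->nvqs = MAX_VQS_WITHOUT_DGRAM;
        d->vqs = calloc(d->nvqs, sizeof(struct vq));  /* g_new0() in QEMU */
        free(old);                                    /* g_free() in QEMU */
    }
}
```

The point raised in the thread is that the event queue is handled separately from this array in both QEMU and the vhost-vsock kernel module, so its index never enters this resize.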
[RFC v7] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not
know the capacity of the receiver and may send more packets than the
receiver can handle.

Add two more dedicated virtqueues for datagram sockets, so that they do
not unfairly steal resources from stream and future connection-oriented
sockets.

The two new virtqueues are enabled by default and will be removed if the
guest does not support them. This will help migration work.

btw: the enable_dgram argument in vhost_vsock_common_realize is
redundant for now, but will be used later when we want to disable the
DGRAM feature bit for old versions.

Signed-off-by: Jiang Wang
---
v1 -> v2: use qemu cmd option to control number of queues,
          removed configuration settings for dgram.
v2 -> v3: use ioctl to get features and decide number of
          virt queues, instead of qemu cmd option.
v3 -> v4: change DGRAM feature bit value to 2. Add an argument
          in vhost_vsock_common_realize to indicate whether dgram
          is supported.
v4 -> v5: don't open dev to get vhostfd. Removed leftover
          definition of enable_dgram.
v5 -> v6: fix style errors. Improve error handling of
          vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM
          and another one.
v6 -> v7: Always enable dgram for vhost-user and vhost kernel.
          Delete unused virtqueues at the beginning of
          vhost_vsock_common_start for migration. Otherwise,
          migration will fail.

 hw/virtio/vhost-user-vsock.c                  | 2 +-
 hw/virtio/vhost-vsock-common.c                | 32 +--
 hw/virtio/vhost-vsock.c                       | 6 +++-
 include/hw/virtio/vhost-vsock-common.h        | 6 ++--
 include/hw/virtio/vhost-vsock.h               | 3 ++
 include/standard-headers/linux/virtio_vsock.h | 1 +
 6 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
index 6095ed7349..9823a2f3bd 100644
--- a/hw/virtio/vhost-user-vsock.c
+++ b/hw/virtio/vhost-user-vsock.c
@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
+    vhost_vsock_common_realize(vdev, "vhost-user-vsock", true);
 
     vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
 
diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..7d89b4d242 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -26,6 +26,18 @@ int vhost_vsock_common_start(VirtIODevice *vdev)
     int ret;
     int i;
 
+    if (!virtio_has_feature(vdev->guest_features, VIRTIO_VSOCK_F_DGRAM)) {
+        struct vhost_virtqueue *vqs;
+        virtio_delete_queue(vvc->dgram_recv_vq);
+        virtio_delete_queue(vvc->dgram_trans_vq);
+
+        vqs = vvc->vhost_dev.vqs;
+        vvc->vhost_dev.nvqs = MAX_VQS_WITHOUT_DGRAM;
+        vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue,
+                                    vvc->vhost_dev.nvqs);
+        g_free(vqs);
+    }
+
     if (!k->set_guest_notifiers) {
         error_report("binding does not support guest notifiers");
         return -ENOSYS;
@@ -196,9 +208,11 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
     return 0;
 }
 
-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name,
+                                bool enable_dgram)
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
+    int nvqs = MAX_VQS_WITH_DGRAM;
 
     virtio_init(vdev, name, VIRTIO_ID_VSOCK,
                 sizeof(struct virtio_vsock_config));
@@ -209,12 +223,17 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
+    vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                          vhost_vsock_common_handle_output);
+    vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                           vhost_vsock_common_handle_output);
+
     /* The event queue belongs to QEMU */
     vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
-    vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
-    vvc->vhost_dev.vqs = vvc->vhost_vqs;
+    vvc->vhost_dev.nvqs = nvqs;
+    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);
 
     vvc->post_load_timer = NULL;
 }
@@ -227,6 +246,13 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
     virtio_delete_queue(vvc->recv_vq);
     virtio_delete_queue(vvc->trans_vq);
 
+    if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) {
+
Re: Re: [RFC v6] virtio/vsock: add two more queues for datagram types
On Tue, Sep 14, 2021 at 5:46 AM Stefan Hajnoczi wrote: > > On Mon, Sep 13, 2021 at 10:18:43PM +, Jiang Wang wrote: > > Datagram sockets are connectionless and unreliable. > > The sender does not know the capacity of the receiver > > and may send more packets than the receiver can handle. > > > > Add two more dedicate virtqueues for datagram sockets, > > so that it will not unfairly steal resources from > > stream and future connection-oriented sockets. > > > > Signed-off-by: Jiang Wang > > --- > > v1 -> v2: use qemu cmd option to control number of queues, > > removed configuration settings for dgram. > > v2 -> v3: use ioctl to get features and decide number of > > virt queues, instead of qemu cmd option. > > v3 -> v4: change DGRAM feature bit value to 2. Add an argument > > in vhost_vsock_common_realize to indicate dgram is supported or not. > > v4 -> v5: don't open dev to get vhostfd. Removed leftover definition of > > enable_dgram > > v5 -> v6: fix style errors. Imporve error handling of > > vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM and another > > one. > > > > hw/virtio/vhost-user-vsock.c | 2 +- > > hw/virtio/vhost-vsock-common.c| 25 -- > > hw/virtio/vhost-vsock.c | 34 ++- > > include/hw/virtio/vhost-vsock-common.h| 6 ++-- > > include/hw/virtio/vhost-vsock.h | 3 ++ > > include/standard-headers/linux/virtio_vsock.h | 1 + > > 6 files changed, 64 insertions(+), 7 deletions(-) > > > > diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > > index 6095ed7349..e9ec0e1c00 100644 > > --- a/hw/virtio/vhost-user-vsock.c > > +++ b/hw/virtio/vhost-user-vsock.c > > @@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error > > **errp) > > return; > > } > > > > -vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > > +vhost_vsock_common_realize(vdev, "vhost-user-vsock", false); > > VIRTIO_VSOCK_F_DGRAM support should work equally well for > vhost-user-vsock. I don't think there is a need to disable it here. 
> Stefano mentioned in previous email ( V3 ) that I can disable dgram for vhost-user-vsock for now. I think we need some extra changes to fully support vhost-vsock-user, like feature discovery? > > > > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > > > > diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c > > index 4ad6e234ad..d94636e04e 100644 > > --- a/hw/virtio/vhost-vsock-common.c > > +++ b/hw/virtio/vhost-vsock-common.c > > @@ -17,6 +17,8 @@ > > #include "hw/virtio/vhost-vsock.h" > > #include "qemu/iov.h" > > #include "monitor/monitor.h" > > +#include > > +#include > > > > int vhost_vsock_common_start(VirtIODevice *vdev) > > { > > @@ -196,9 +198,11 @@ int vhost_vsock_common_post_load(void *opaque, int > > version_id) > > return 0; > > } > > > > -void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > > +void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, > > + bool enable_dgram) > > { > > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); > > +int nvqs = VHOST_VSOCK_NVQS; > > > > virtio_init(vdev, name, VIRTIO_ID_VSOCK, > > sizeof(struct virtio_vsock_config)); > > @@ -209,12 +213,20 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, > > const char *name) > > vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > > vhost_vsock_common_handle_output); > > > > +if (enable_dgram) { > > +nvqs = VHOST_VSOCK_NVQS_DGRAM; > > +vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > > + > > vhost_vsock_common_handle_output); > > +vvc->dgram_trans_vq = virtio_add_queue(vdev, > > VHOST_VSOCK_QUEUE_SIZE, > > + > > vhost_vsock_common_handle_output); > > +} > > I think the virtqueues can be added unconditionally. > OK. > > + > > /* The event queue belongs to QEMU */ > > vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > >
[RFC v6] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not
know the capacity of the receiver and may send more packets than the
receiver can handle.

Add two more dedicated virtqueues for datagram sockets, so that they do
not unfairly steal resources from stream and future connection-oriented
sockets.

Signed-off-by: Jiang Wang
---
v1 -> v2: use qemu cmd option to control number of queues,
          removed configuration settings for dgram.
v2 -> v3: use ioctl to get features and decide number of
          virt queues, instead of qemu cmd option.
v3 -> v4: change DGRAM feature bit value to 2. Add an argument
          in vhost_vsock_common_realize to indicate whether dgram
          is supported.
v4 -> v5: don't open dev to get vhostfd. Removed leftover
          definition of enable_dgram.
v5 -> v6: fix style errors. Improve error handling of
          vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM
          and another one.

 hw/virtio/vhost-user-vsock.c                  | 2 +-
 hw/virtio/vhost-vsock-common.c                | 25 --
 hw/virtio/vhost-vsock.c                       | 34 ++-
 include/hw/virtio/vhost-vsock-common.h        | 6 ++--
 include/hw/virtio/vhost-vsock.h               | 3 ++
 include/standard-headers/linux/virtio_vsock.h | 1 +
 6 files changed, 64 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
index 6095ed7349..e9ec0e1c00 100644
--- a/hw/virtio/vhost-user-vsock.c
+++ b/hw/virtio/vhost-user-vsock.c
@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
+    vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
 
     vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
 
diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..d94636e04e 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -17,6 +17,8 @@
 #include "hw/virtio/vhost-vsock.h"
 #include "qemu/iov.h"
 #include "monitor/monitor.h"
+#include
+#include
 
 int vhost_vsock_common_start(VirtIODevice *vdev)
 {
@@ -196,9 +198,11 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
     return 0;
 }
 
-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name,
+                                bool enable_dgram)
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
+    int nvqs = VHOST_VSOCK_NVQS;
 
     virtio_init(vdev, name, VIRTIO_ID_VSOCK,
                 sizeof(struct virtio_vsock_config));
@@ -209,12 +213,20 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
+    if (enable_dgram) {
+        nvqs = VHOST_VSOCK_NVQS_DGRAM;
+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                              vhost_vsock_common_handle_output);
+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                               vhost_vsock_common_handle_output);
+    }
+
     /* The event queue belongs to QEMU */
     vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
-    vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
-    vvc->vhost_dev.vqs = vvc->vhost_vqs;
+    vvc->vhost_dev.nvqs = nvqs;
+    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);
 
     vvc->post_load_timer = NULL;
 }
@@ -227,6 +239,13 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
     virtio_delete_queue(vvc->recv_vq);
     virtio_delete_queue(vvc->trans_vq);
 
+    if (vvc->vhost_dev.nvqs == VHOST_VSOCK_NVQS_DGRAM) {
+        virtio_delete_queue(vvc->dgram_recv_vq);
+        virtio_delete_queue(vvc->dgram_trans_vq);
+    }
+
+    g_free(vvc->vhost_dev.vqs);
+
     virtio_delete_queue(vvc->event_vq);
     virtio_cleanup(vdev);
 }
diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c
index 1b1a5c70ed..891d38e226 100644
--- a/hw/virtio/vhost-vsock.c
+++ b/hw/virtio/vhost-vsock.c
@@ -20,9 +20,12 @@
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-vsock.h"
 #include "monitor/monitor.h"
+#include
+#include
 
 const int feature_bits[] = {
     VIRTIO_VSOCK_F_SEQPACKET,
+    VIRTIO_VSOCK_F_DGRAM,
     VHOST_INVALID_FEATURE_BIT
 };
 
@@ -116,6 +119,9 @@ static uint64_t vhost_vsock_get_features(VirtIODevice *vdev,
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
 
     virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_SEQPACKET);
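The reason the v6/v7 patches can add the dgram queues unconditionally and trim them later is standard virtio feature negotiation: the device offers a feature set, the guest acks a subset, and only the intersection is in effect. The sketch below uses made-up helper names (`feature_add`/`feature_has`) modeled loosely on QEMU's `virtio_add_feature()`/`virtio_has_feature()`; the bit values follow the virtio-vsock spec (SEQPACKET = 1) and this patch series (DGRAM = 2):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Feature bit positions: SEQPACKET from the virtio spec, DGRAM per the
 * patch changelog ("change DGRAM feature bit value to 2"). */
#define VIRTIO_VSOCK_F_SEQPACKET 1
#define VIRTIO_VSOCK_F_DGRAM     2

/* Simplified stand-ins for virtio_add_feature()/virtio_has_feature(). */
static void feature_add(uint64_t *f, unsigned int bit) { *f |= 1ULL << bit; }
static bool feature_has(uint64_t f, unsigned int bit)
{
    return (f & (1ULL << bit)) != 0;
}
```

An old guest that never sets the DGRAM bit leaves it out of the negotiated set, which is exactly the condition `vhost_vsock_common_start()` checks before deleting the extra queues.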
[RFC v5] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not
know the capacity of the receiver and may send more packets than the
receiver can handle.

Add two more dedicated virtqueues for datagram sockets, so that they do
not unfairly steal resources from stream and future connection-oriented
sockets.

Signed-off-by: Jiang Wang
---
v1 -> v2: use qemu cmd option to control number of queues,
          removed configuration settings for dgram.
v2 -> v3: use ioctl to get features and decide number of
          virt queues, instead of qemu cmd option.
v3 -> v4: change DGRAM feature bit value to 2. Add an argument
          in vhost_vsock_common_realize to indicate whether dgram
          is supported.
v4 -> v5: don't open dev to get vhostfd. Removed leftover
          definition of enable_dgram.

 hw/virtio/vhost-user-vsock.c                  | 2 +-
 hw/virtio/vhost-vsock-common.c                | 27 ---
 hw/virtio/vhost-vsock.c                       | 27 ++-
 include/hw/virtio/vhost-vsock-common.h        | 6 +++--
 include/hw/virtio/vhost-vsock.h               | 3 +++
 include/standard-headers/linux/virtio_vsock.h | 1 +
 6 files changed, 59 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
index 6095ed7349..e9ec0e1c00 100644
--- a/hw/virtio/vhost-user-vsock.c
+++ b/hw/virtio/vhost-user-vsock.c
@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
+    vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
 
     vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
 
diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..f48b5a69df 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -17,6 +17,8 @@
 #include "hw/virtio/vhost-vsock.h"
 #include "qemu/iov.h"
 #include "monitor/monitor.h"
+#include
+#include
 
 int vhost_vsock_common_start(VirtIODevice *vdev)
 {
@@ -196,9 +198,10 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
     return 0;
 }
 
-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool enable_dgram)
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
+    int nvqs = MAX_VQS_WITHOUT_DGRAM;
 
     virtio_init(vdev, name, VIRTIO_ID_VSOCK,
                 sizeof(struct virtio_vsock_config));
@@ -209,12 +212,22 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
+    if (!enable_dgram)
+        nvqs = MAX_VQS_WITHOUT_DGRAM;
+    else {
+        nvqs = MAX_VQS_WITH_DGRAM;
+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                              vhost_vsock_common_handle_output);
+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                               vhost_vsock_common_handle_output);
+    }
+
     /* The event queue belongs to QEMU */
     vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
 
-    vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
-    vvc->vhost_dev.vqs = vvc->vhost_vqs;
+    vvc->vhost_dev.nvqs = nvqs;
+    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);
 
     vvc->post_load_timer = NULL;
 }
@@ -227,6 +240,14 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
     virtio_delete_queue(vvc->recv_vq);
     virtio_delete_queue(vvc->trans_vq);
 
+    if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) {
+        virtio_delete_queue(vvc->dgram_recv_vq);
+        virtio_delete_queue(vvc->dgram_trans_vq);
+    }
+
+    if (vvc->vhost_dev.vqs)
+        g_free(vvc->vhost_dev.vqs);
+
     virtio_delete_queue(vvc->event_vq);
     virtio_cleanup(vdev);
 }
diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c
index 1b1a5c70ed..1fee25f144 100644
--- a/hw/virtio/vhost-vsock.c
+++ b/hw/virtio/vhost-vsock.c
@@ -20,9 +20,12 @@
 #include "hw/qdev-properties.h"
 #include "hw/virtio/vhost-vsock.h"
 #include "monitor/monitor.h"
+#include
+#include
 
 const int feature_bits[] = {
     VIRTIO_VSOCK_F_SEQPACKET,
+    VIRTIO_VSOCK_F_DGRAM,
     VHOST_INVALID_FEATURE_BIT
 };
 
@@ -116,6 +119,8 @@ static uint64_t vhost_vsock_get_features(VirtIODevice *vdev,
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
 
     virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_SEQPACKET);
+    if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM)
+        v
Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types
On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella wrote: > > On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote: > >On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella > >wrote: > >> > >> On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote: > >> >Datagram sockets are connectionless and unreliable. > >> >The sender does not know the capacity of the receiver > >> >and may send more packets than the receiver can handle. > >> > > >> >Add two more dedicate virtqueues for datagram sockets, > >> >so that it will not unfairly steal resources from > >> >stream and future connection-oriented sockets. > >> > > >> >Signed-off-by: Jiang Wang > >> >--- > >> >v1 -> v2: use qemu cmd option to control number of queues, > >> > removed configuration settings for dgram. > >> >v2 -> v3: use ioctl to get features and decide number of > >> >virt queues, instead of qemu cmd option. > >> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument > >> > in vhost_vsock_common_realize to indicate dgram is supported or > >> > not. 
> >> > > >> > hw/virtio/vhost-user-vsock.c | 2 +- > >> > hw/virtio/vhost-vsock-common.c| 58 ++- > >> > hw/virtio/vhost-vsock.c | 5 +- > >> > include/hw/virtio/vhost-vsock-common.h| 6 +- > >> > include/hw/virtio/vhost-vsock.h | 4 ++ > >> > include/standard-headers/linux/virtio_vsock.h | 1 + > >> > 6 files changed, 69 insertions(+), 7 deletions(-) > >> > > >> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > >> >index 6095ed7349..e9ec0e1c00 100644 > >> >--- a/hw/virtio/vhost-user-vsock.c > >> >+++ b/hw/virtio/vhost-user-vsock.c > >> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, > >> >Error **errp) > >> > return; > >> > } > >> > > >> >-vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > >> >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false); > >> > > >> > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > >> > > >> >diff --git a/hw/virtio/vhost-vsock-common.c > >> >b/hw/virtio/vhost-vsock-common.c > >> >index 4ad6e234ad..c78536911a 100644 > >> >--- a/hw/virtio/vhost-vsock-common.c > >> >+++ b/hw/virtio/vhost-vsock-common.c > >> >@@ -17,6 +17,8 @@ > >> > #include "hw/virtio/vhost-vsock.h" > >> > #include "qemu/iov.h" > >> > #include "monitor/monitor.h" > >> >+#include > >> >+#include > >> > > >> > int vhost_vsock_common_start(VirtIODevice *vdev) > >> > { > >> >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int > >> >version_id) > >> > return 0; > >> > } > >> > > >> >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > >> >+static int vhost_vsock_get_max_qps(bool enable_dgram) > >> >+{ > >> >+uint64_t features; > >> >+int ret; > >> >+int fd = -1; > >> >+ > >> >+if (!enable_dgram) > >> >+return MAX_VQS_WITHOUT_DGRAM; > >> >+ > >> >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); > >> > >> > >> As I said in the previous version, we cannot directly open > >> /dev/vhost-vsock, for two reasons: > >> > >>1. 
this code is common with vhost-user-vsock which does not use > >>/dev/vhost-vsock. > >> > >>2. the fd may have been passed from the management layer and qemu may > >>not be able to directly open /dev/vhost-vsock. > >> > >> I think is better to move this function in hw/virtio/vhost-vsock.c, > >> using the `vhostfd`, returning true or false if dgram is supported, then > >> you can use it for `enable_dgram` param ... > >> > > > >Yes, you are right. Now I remember you said that before but I forgot about > >that > >when I changed the code. I will fix it. Sorry about that. > > No problem :-) > > > > >> >+if (fd == -1) { > >> >+error_report
Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types
On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella wrote: > > On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote: > >On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella > >wrote: > >> > >> On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote: > >> >Datagram sockets are connectionless and unreliable. > >> >The sender does not know the capacity of the receiver > >> >and may send more packets than the receiver can handle. > >> > > >> >Add two more dedicate virtqueues for datagram sockets, > >> >so that it will not unfairly steal resources from > >> >stream and future connection-oriented sockets. > >> > > >> >Signed-off-by: Jiang Wang > >> >--- > >> >v1 -> v2: use qemu cmd option to control number of queues, > >> > removed configuration settings for dgram. > >> >v2 -> v3: use ioctl to get features and decide number of > >> >virt queues, instead of qemu cmd option. > >> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument > >> > in vhost_vsock_common_realize to indicate dgram is supported or > >> > not. 
> >> > > >> > hw/virtio/vhost-user-vsock.c | 2 +- > >> > hw/virtio/vhost-vsock-common.c| 58 ++- > >> > hw/virtio/vhost-vsock.c | 5 +- > >> > include/hw/virtio/vhost-vsock-common.h| 6 +- > >> > include/hw/virtio/vhost-vsock.h | 4 ++ > >> > include/standard-headers/linux/virtio_vsock.h | 1 + > >> > 6 files changed, 69 insertions(+), 7 deletions(-) > >> > > >> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > >> >index 6095ed7349..e9ec0e1c00 100644 > >> >--- a/hw/virtio/vhost-user-vsock.c > >> >+++ b/hw/virtio/vhost-user-vsock.c > >> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, > >> >Error **errp) > >> > return; > >> > } > >> > > >> >-vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > >> >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false); > >> > > >> > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > >> > > >> >diff --git a/hw/virtio/vhost-vsock-common.c > >> >b/hw/virtio/vhost-vsock-common.c > >> >index 4ad6e234ad..c78536911a 100644 > >> >--- a/hw/virtio/vhost-vsock-common.c > >> >+++ b/hw/virtio/vhost-vsock-common.c > >> >@@ -17,6 +17,8 @@ > >> > #include "hw/virtio/vhost-vsock.h" > >> > #include "qemu/iov.h" > >> > #include "monitor/monitor.h" > >> >+#include > >> >+#include > >> > > >> > int vhost_vsock_common_start(VirtIODevice *vdev) > >> > { > >> >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int > >> >version_id) > >> > return 0; > >> > } > >> > > >> >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > >> >+static int vhost_vsock_get_max_qps(bool enable_dgram) > >> >+{ > >> >+uint64_t features; > >> >+int ret; > >> >+int fd = -1; > >> >+ > >> >+if (!enable_dgram) > >> >+return MAX_VQS_WITHOUT_DGRAM; > >> >+ > >> >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); > >> > >> > >> As I said in the previous version, we cannot directly open > >> /dev/vhost-vsock, for two reasons: > >> > >>1. 
this code is common with vhost-user-vsock which does not use > >>/dev/vhost-vsock. > >> > >>2. the fd may have been passed from the management layer and qemu may > >>not be able to directly open /dev/vhost-vsock. > >> > >> I think is better to move this function in hw/virtio/vhost-vsock.c, > >> using the `vhostfd`, returning true or false if dgram is supported, then > >> you can use it for `enable_dgram` param ... > >> > > > >Yes, you are right. Now I remember you said that before but I forgot about > >that > >when I changed the code. I will fix it. Sorry about that. > > No problem :-) > > > > >> >+if (fd == -1) { > >> >+error_report
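The fix agreed on above is to query the feature bits through the `vhostfd` that was already opened (or passed in by the management layer) and derive the queue count from them. The core decision, separated from the ioctl plumbing, can be sketched as below; the constant names mirror the patch and the values (2 queues without dgram, 4 with, feature bit 2) are assumptions taken from this thread, not authoritative:

```c
#include <stdint.h>

/* Sketch: decide how many vhost virtqueues to allocate once the host's
 * feature bits are known.  In QEMU this check would run against the
 * features returned by VHOST_GET_FEATURES on the existing vhostfd,
 * not on a raw bitmask as here. */
#define VIRTIO_VSOCK_F_DGRAM  2   /* feature bit value from the patch */
#define MAX_VQS_WITHOUT_DGRAM 2   /* rx + tx (event queue belongs to QEMU) */
#define MAX_VQS_WITH_DGRAM    4   /* rx + tx + dgram rx + dgram tx */

int vsock_nvqs_for_features(uint64_t features)
{
    if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) {
        return MAX_VQS_WITH_DGRAM;
    }
    return MAX_VQS_WITHOUT_DGRAM;
}
```

This keeps the common code path in vhost-vsock-common.c free of any `/dev/vhost-vsock` dependency: vhost-user-vsock can pass `false` (or a features value without the bit) and get the legacy queue count.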
Re: Re: [RFC v3] virtio/vsock: add two more queues for datagram types
On Mon, Aug 9, 2021 at 3:49 AM Stefano Garzarella wrote: > > On Thu, Aug 05, 2021 at 12:00:05PM -0700, Jiang Wang . wrote: > >On Tue, Aug 3, 2021 at 11:49 PM Stefano Garzarella > >wrote: > >> > >> On Wed, Aug 4, 2021 at 8:41 AM Stefano Garzarella > >> wrote: > >> > > >> > On Tue, Aug 03, 2021 at 11:58:27AM -0700, Jiang Wang . wrote: > >> > >On Wed, Jul 7, 2021 at 10:27 AM Stefano Garzarella > >> > > wrote: > >> > >> On Wed, Jul 07, 2021 at 09:52:46AM -0700, Jiang Wang . wrote: > >> > >> >On Wed, Jul 7, 2021 at 1:33 AM Stefano Garzarella > >> > >> > wrote: > >> > >> >> On Tue, Jul 06, 2021 at 10:26:07PM +, Jiang Wang wrote: > >> > > >> > [...] > >> > > >> > >> >> >+ > >> > >> >> >+if (nvqs < 0) > >> > >> >> >+nvqs = MAX_VQS_WITHOUT_DGRAM; > >> > >> >> >+ > >> > >> >> >+if (nvqs == MAX_VQS_WITH_DGRAM) { > >> > >> >> >+vvc->dgram_recv_vq = virtio_add_queue(vdev, > >> > >> >> >VHOST_VSOCK_QUEUE_SIZE, > >> > >> >> >+ > >> > >> >> >vhost_vsock_common_handle_output); > >> > >> >> >+vvc->dgram_trans_vq = virtio_add_queue(vdev, > >> > >> >> >VHOST_VSOCK_QUEUE_SIZE, > >> > >> >> >+ > >> > >> >> >vhost_vsock_common_handle_output); > >> > >> >> >+} > >> > >> >> >+ > >> > >> >> > /* The event queue belongs to QEMU */ > >> > >> >> > vvc->event_vq = virtio_add_queue(vdev, > >> > >> >> > VHOST_VSOCK_QUEUE_SIZE, > >> > >> >> > > >> > >> >> > vhost_vsock_common_handle_output); > >> > >> >> > >> > >> >> Did you do a test with a guest that doesn't support datagram with > >> > >> >> QEMU > >> > >> >> and hosts that do? > >> > >> >> > >> > >> >Yes, and it works. > >> > >> > > >> > >> >> I repost my thoughts that I had on v2: > >> > >> >> > >> > >> >> What happen if the guest doesn't support dgram? > >> > >> >> > >> > >> >> I think we should dynamically use the 3rd queue or the 5th > >> > >> >> queue for > >> > >> >> the events at runtime after the guest acked the features. > >> > >> >> > >> > >> >> Maybe better to switch to an array of VirtQueue. 
> >> > >> >> > >> > >> >I think in current V3, it already dynamically use 3rd or 5th queue > >> > >> >depending > >> > >> >on the feature bit. > >> > >> > >> > >> I'm not sure. IIUC when vhost_vsock_common_realize() is called, we > >> > >> don't > >> > >> know the features acked by the guest, so how can it be dynamic? > >> > >> > >> > >> Here we should know only if the host kernel supports it. > >> > >> > >> > >> Maybe it works, because in QEMU we use the event queue only after a > >> > >> migration to send a reset event, so you can try to migrate a guest to > >> > >> check this path. > >> > >> > >> > >I tried VM migration and didn't see any problems. The migration looks > >> > >fine > >> > >and vsock dgram still works after migration. Is there any more specific > >> > >test > >> > >you want to run to check for this code path? > >> > > > >> > > >> > I meant a migration of a guest from QEMU without this patch to a QEMU > >> > with this patch. Of course in that case testing a socket stream. > >> > > >> > >> Sorry, I meant the opposite. > >> > >> You should try to migrate a guest that does not support dgrams starting > >> from a QEMU with this patch (and kernel th
Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types
On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella wrote: > > On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote: > >Datagram sockets are connectionless and unreliable. > >The sender does not know the capacity of the receiver > >and may send more packets than the receiver can handle. > > > >Add two more dedicate virtqueues for datagram sockets, > >so that it will not unfairly steal resources from > >stream and future connection-oriented sockets. > > > >Signed-off-by: Jiang Wang > >--- > >v1 -> v2: use qemu cmd option to control number of queues, > > removed configuration settings for dgram. > >v2 -> v3: use ioctl to get features and decide number of > >virt queues, instead of qemu cmd option. > >v3 -> v4: change DGRAM feature bit value to 2. Add an argument > > in vhost_vsock_common_realize to indicate dgram is supported or not. > > > > hw/virtio/vhost-user-vsock.c | 2 +- > > hw/virtio/vhost-vsock-common.c| 58 ++- > > hw/virtio/vhost-vsock.c | 5 +- > > include/hw/virtio/vhost-vsock-common.h| 6 +- > > include/hw/virtio/vhost-vsock.h | 4 ++ > > include/standard-headers/linux/virtio_vsock.h | 1 + > > 6 files changed, 69 insertions(+), 7 deletions(-) > > > >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c > >index 6095ed7349..e9ec0e1c00 100644 > >--- a/hw/virtio/vhost-user-vsock.c > >+++ b/hw/virtio/vhost-user-vsock.c > >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error > >**errp) > > return; > > } > > > >-vhost_vsock_common_realize(vdev, "vhost-user-vsock"); > >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false); > > > > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); > > > >diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c > >index 4ad6e234ad..c78536911a 100644 > >--- a/hw/virtio/vhost-vsock-common.c > >+++ b/hw/virtio/vhost-vsock-common.c > >@@ -17,6 +17,8 @@ > > #include "hw/virtio/vhost-vsock.h" > > #include "qemu/iov.h" > > #include "monitor/monitor.h" > >+#include 
> >+#include > > > > int vhost_vsock_common_start(VirtIODevice *vdev) > > { > >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int > >version_id) > > return 0; > > } > > > >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > >+static int vhost_vsock_get_max_qps(bool enable_dgram) > >+{ > >+uint64_t features; > >+int ret; > >+int fd = -1; > >+ > >+if (!enable_dgram) > >+return MAX_VQS_WITHOUT_DGRAM; > >+ > >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); > > > As I said in the previous version, we cannot directly open > /dev/vhost-vsock, for two reasons: > >1. this code is common with vhost-user-vsock which does not use >/dev/vhost-vsock. > >2. the fd may have been passed from the management layer and qemu may >not be able to directly open /dev/vhost-vsock. > > I think is better to move this function in hw/virtio/vhost-vsock.c, > using the `vhostfd`, returning true or false if dgram is supported, then > you can use it for `enable_dgram` param ... > Yes, you are right. Now I remember you said that before but I forgot about that when I changed the code. I will fix it. Sorry about that. > >+if (fd == -1) { > >+error_report("vhost-vsock: failed to open device. %s", > >strerror(errno)); > >+return -1; > >+} > >+ > >+ret = ioctl(fd, VHOST_GET_FEATURES, &features); > >+if (ret) { > >+error_report("vhost-vsock: failed to read device. %s", > >strerror(errno)); > >+qemu_close(fd); > >+return ret; > >+} > >+ > >+qemu_close(fd); > >+if (features & (1 << VIRTIO_VSOCK_F_DGRAM)) > >+return MAX_VQS_WITH_DGRAM; > >+ > >+return MAX_VQS_WITHOUT_DGRAM; > >+} > >+ > >+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool > >enable_dgram) > > { > > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); > >+int nvqs = MAX_VQS_WITHOUT_DGRAM; > > > > virtio_init(vdev, name, VIRTIO_ID_VSOCK, > > sizeof(struct virtio_vsock_config)); > >@@ -209,12
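The patch's caller treats a negative return from `vhost_vsock_get_max_qps()` as "no dgram support" and falls back to the pre-dgram queue count rather than failing device realize. A minimal sketch of that error path is below; the ioctl request code is a placeholder (`0`), not the real `VHOST_GET_FEATURES` value, and the fallback also triggers for a closed or invalid fd:

```c
#include <stdint.h>
#include <sys/ioctl.h>

#define VIRTIO_VSOCK_F_DGRAM  2
#define MAX_VQS_WITHOUT_DGRAM 2
#define MAX_VQS_WITH_DGRAM    4

/* Sketch of the query-with-fallback pattern: if the feature query fails
 * (bad fd, old kernel, permission problem), keep the legacy queue count
 * instead of propagating the error. */
int vsock_query_nvqs(int fd)
{
    uint64_t features = 0;

    if (ioctl(fd, 0 /* placeholder for VHOST_GET_FEATURES */, &features) < 0) {
        return MAX_VQS_WITHOUT_DGRAM;   /* graceful fallback */
    }
    return (features & (1ULL << VIRTIO_VSOCK_F_DGRAM))
               ? MAX_VQS_WITH_DGRAM
               : MAX_VQS_WITHOUT_DGRAM;
}
```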
Re: Re: [RFC v3] virtio/vsock: add two more queues for datagram types
On Tue, Aug 3, 2021 at 11:49 PM Stefano Garzarella wrote: > > On Wed, Aug 4, 2021 at 8:41 AM Stefano Garzarella > wrote: > > > > On Tue, Aug 03, 2021 at 11:58:27AM -0700, Jiang Wang . wrote: > > >On Wed, Jul 7, 2021 at 10:27 AM Stefano Garzarella > > >wrote: > > >> On Wed, Jul 07, 2021 at 09:52:46AM -0700, Jiang Wang . wrote: > > >> >On Wed, Jul 7, 2021 at 1:33 AM Stefano Garzarella > > >> >wrote: > > >> >> On Tue, Jul 06, 2021 at 10:26:07PM +, Jiang Wang wrote: > > > > [...] > > > > >> >> >+ > > >> >> >+if (nvqs < 0) > > >> >> >+nvqs = MAX_VQS_WITHOUT_DGRAM; > > >> >> >+ > > >> >> >+if (nvqs == MAX_VQS_WITH_DGRAM) { > > >> >> >+vvc->dgram_recv_vq = virtio_add_queue(vdev, > > >> >> >VHOST_VSOCK_QUEUE_SIZE, > > >> >> >+ > > >> >> >vhost_vsock_common_handle_output); > > >> >> >+vvc->dgram_trans_vq = virtio_add_queue(vdev, > > >> >> >VHOST_VSOCK_QUEUE_SIZE, > > >> >> >+ > > >> >> >vhost_vsock_common_handle_output); > > >> >> >+} > > >> >> >+ > > >> >> > /* The event queue belongs to QEMU */ > > >> >> > vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > > >> >> > > > >> >> > vhost_vsock_common_handle_output); > > >> >> > > >> >> Did you do a test with a guest that doesn't support datagram with QEMU > > >> >> and hosts that do? > > >> >> > > >> >Yes, and it works. > > >> > > > >> >> I repost my thoughts that I had on v2: > > >> >> > > >> >> What happen if the guest doesn't support dgram? > > >> >> > > >> >> I think we should dynamically use the 3rd queue or the 5th queue > > >> >> for > > >> >> the events at runtime after the guest acked the features. > > >> >> > > >> >> Maybe better to switch to an array of VirtQueue. > > >> >> > > >> >I think in current V3, it already dynamically use 3rd or 5th queue > > >> >depending > > >> >on the feature bit. > > >> > > >> I'm not sure. IIUC when vhost_vsock_common_realize() is called, we don't > > >> know the features acked by the guest, so how can it be dynamic? 
> > >> > > >> Here we should know only if the host kernel supports it. > > >> > > >> Maybe it works, because in QEMU we use the event queue only after a > > >> migration to send a reset event, so you can try to migrate a guest to > > >> check this path. > > >> > > >I tried VM migration and didn't see any problems. The migration looks fine > > >and vsock dgram still works after migration. Is there any more specific > > >test > > >you want to run to check for this code path? > > > > > > > I meant a migration of a guest from QEMU without this patch to a QEMU > > with this patch. Of course in that case testing a socket stream. > > > > Sorry, I meant the opposite. > > You should try to migrate a guest that does not support dgrams starting > from a QEMU with this patch (and kernel that supports dgram, so qemu > uses the 5th queue for event), to a QEMU without this patch. > Got it. I tried what you said and saw errors on the destination qemu. Then I moved event queue up to be number 3 (before adding dgram vqs), as the following: +/* The event queue belongs to QEMU */ +vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, + vhost_vsock_common_handle_output); + nvqs = vhost_vsock_get_max_qps(enable_dgram); @@ -253,10 +257,6 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool enabl vhost_vsock_common_handle_output); } -/* The event queue belongs to QEMU */ -vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, - vhost_vsock_common_handle_output); - But I still see following errors on the destination qemu: qemu-system-x86_64: Error starting vhost vsock: 14 Any idea if my
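The reordering attempted above gives the event queue a fixed index that does not depend on whether the datagram queues exist, which is what cross-version migration needs (a guest migrated between a QEMU with and without the dgram queues must find the event queue in the same slot). A layout sketch follows; the index values are assumptions for illustration, and note that per the thread this change alone was not sufficient (the later v7 instead deletes unused queues in vhost_vsock_common_start):

```c
/* Sketch of a migration-friendly queue layout: the event queue is
 * registered before the optional datagram queues, so its index is the
 * same whether the device exposes 3 or 5 virtqueues. */
enum vsock_vq_index {
    VSOCK_VQ_RX       = 0,
    VSOCK_VQ_TX       = 1,
    VSOCK_VQ_EVENT    = 2,  /* fixed slot, independent of dgram support */
    VSOCK_VQ_DGRAM_RX = 3,  /* present only when F_DGRAM is offered */
    VSOCK_VQ_DGRAM_TX = 4,
};

int vsock_event_vq_index(int has_dgram)
{
    (void)has_dgram;        /* the whole point: index does not vary */
    return VSOCK_VQ_EVENT;
}
```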
[PATCH v4] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not know the capacity of the receiver and may send more packets than the receiver can handle. Add two more dedicate virtqueues for datagram sockets, so that it will not unfairly steal resources from stream and future connection-oriented sockets. Signed-off-by: Jiang Wang --- v1 -> v2: use qemu cmd option to control number of queues, removed configuration settings for dgram. v2 -> v3: use ioctl to get features and decide number of virt queues, instead of qemu cmd option. v3 -> v4: change DGRAM feature bit value to 2. Add an argument in vhost_vsock_common_realize to indicate dgram is supported or not. hw/virtio/vhost-user-vsock.c | 2 +- hw/virtio/vhost-vsock-common.c| 58 ++- hw/virtio/vhost-vsock.c | 5 +- include/hw/virtio/vhost-vsock-common.h| 6 +- include/hw/virtio/vhost-vsock.h | 4 ++ include/standard-headers/linux/virtio_vsock.h | 1 + 6 files changed, 69 insertions(+), 7 deletions(-) diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c index 6095ed7349..e9ec0e1c00 100644 --- a/hw/virtio/vhost-user-vsock.c +++ b/hw/virtio/vhost-user-vsock.c @@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp) return; } -vhost_vsock_common_realize(vdev, "vhost-user-vsock"); +vhost_vsock_common_realize(vdev, "vhost-user-vsock", false); vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops); diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c index 4ad6e234ad..c78536911a 100644 --- a/hw/virtio/vhost-vsock-common.c +++ b/hw/virtio/vhost-vsock-common.c @@ -17,6 +17,8 @@ #include "hw/virtio/vhost-vsock.h" #include "qemu/iov.h" #include "monitor/monitor.h" +#include +#include int vhost_vsock_common_start(VirtIODevice *vdev) { @@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int version_id) return 0; } -void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) +static int vhost_vsock_get_max_qps(bool enable_dgram) 
+{ +uint64_t features; +int ret; +int fd = -1; + +if (!enable_dgram) +return MAX_VQS_WITHOUT_DGRAM; + +fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); +if (fd == -1) { +error_report("vhost-vsock: failed to open device. %s", strerror(errno)); +return -1; +} + +ret = ioctl(fd, VHOST_GET_FEATURES, &features); +if (ret) { +error_report("vhost-vsock: failed to read device. %s", strerror(errno)); +qemu_close(fd); +return ret; +} + +qemu_close(fd); +if (features & (1 << VIRTIO_VSOCK_F_DGRAM)) +return MAX_VQS_WITH_DGRAM; + +return MAX_VQS_WITHOUT_DGRAM; +} + +void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool enable_dgram) { VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); +int nvqs = MAX_VQS_WITHOUT_DGRAM; virtio_init(vdev, name, VIRTIO_ID_VSOCK, sizeof(struct virtio_vsock_config)); @@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, vhost_vsock_common_handle_output); +nvqs = vhost_vsock_get_max_qps(enable_dgram); + +if (nvqs < 0) +nvqs = MAX_VQS_WITHOUT_DGRAM; + +if (nvqs == MAX_VQS_WITH_DGRAM) { +vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, + vhost_vsock_common_handle_output); +vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, + vhost_vsock_common_handle_output); +} + /* The event queue belongs to QEMU */ vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, vhost_vsock_common_handle_output); -vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs); -vvc->vhost_dev.vqs = vvc->vhost_vqs; +vvc->vhost_dev.nvqs = nvqs; +vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs); vvc->post_load_timer = NULL; } @@ -227,6 +271,14 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev) virtio_delete_queue(vvc->recv_vq); virtio_delete_queue(vvc->trans_vq); +if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) { +virtio_delete_queue(vvc->dgram_recv_vq); +virtio_delete_queue(vvc->dgram_trans_vq); +} + 
+if (vvc->vhost_dev.vqs) +g_free(vvc->vhost_dev.vqs); + virtio_delete_queue(vvc->event_vq); virtio_cleanup(vdev); } diff --git a/hw/virtio/vh
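The v4 patch above replaces the fixed `vhost_vqs[]` member with a heap array sized by the negotiated queue count (`g_new0`) and frees it in unrealize. A stand-in sketch using `calloc` to model `g_new0` is below; the struct fields are placeholders. As an aside, `g_free()` (like `free()`) is NULL-safe, so the patch's `if (vvc->vhost_dev.vqs)` guard before `g_free()` is redundant:

```c
#include <stdlib.h>

/* Placeholder for struct vhost_virtqueue; real fields don't matter for
 * the allocation pattern being illustrated. */
struct fake_vhost_virtqueue {
    int kick_fd;
    int call_fd;
};

/* Sketch: size the vq array at realize time from the negotiated queue
 * count, zero-initialized as g_new0 would do. */
struct fake_vhost_virtqueue *alloc_vqs(int nvqs)
{
    return calloc((size_t)nvqs, sizeof(struct fake_vhost_virtqueue));
}
```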
Re: Re: [RFC v3] virtio/vsock: add two more queues for datagram types
On Wed, Jul 7, 2021 at 10:27 AM Stefano Garzarella wrote: > > On Wed, Jul 07, 2021 at 09:52:46AM -0700, Jiang Wang . wrote: > >On Wed, Jul 7, 2021 at 1:33 AM Stefano Garzarella > >wrote: > >> > >> On Tue, Jul 06, 2021 at 10:26:07PM +, Jiang Wang wrote: > >> >Datagram sockets are connectionless and unreliable. > >> >The sender does not know the capacity of the receiver > >> >and may send more packets than the receiver can handle. > >> > > >> >Add two more dedicate virtqueues for datagram sockets, > >> >so that it will not unfairly steal resources from > >> >stream and future connection-oriented sockets. > >> >--- > >> >v1 -> v2: use qemu cmd option to control number of queues, > >> >removed configuration settings for dgram. > >> >v2 -> v3: use ioctl to get features and decie numbr of > >> > virt queues, instead of qemu cmd option. > >> > > >> >btw: this patch is based on Arseny's SEQPACKET patch. > >> > > >> > hw/virtio/vhost-vsock-common.c| 53 > >> > ++- > >> > hw/virtio/vhost-vsock.c | 3 ++ > >> > include/hw/virtio/vhost-vsock-common.h| 3 +- > >> > include/hw/virtio/vhost-vsock.h | 4 ++ > >> > include/standard-headers/linux/virtio_vsock.h | 3 ++ > >> > 5 files changed, 63 insertions(+), 3 deletions(-) > >> > > >> >diff --git a/hw/virtio/vhost-vsock-common.c > >> >b/hw/virtio/vhost-vsock-common.c > >> >index 4ad6e234ad..8164e09445 100644 > >> >--- a/hw/virtio/vhost-vsock-common.c > >> >+++ b/hw/virtio/vhost-vsock-common.c > >> >@@ -17,6 +17,8 @@ > >> > #include "hw/virtio/vhost-vsock.h" > >> > #include "qemu/iov.h" > >> > #include "monitor/monitor.h" > >> >+#include > >> >+#include > >> > > >> > int vhost_vsock_common_start(VirtIODevice *vdev) > >> > { > >> >@@ -196,9 +198,36 @@ int vhost_vsock_common_post_load(void *opaque, int > >> >version_id) > >> > return 0; > >> > } > >> > > >> >+static int vhost_vsock_get_max_qps(void) > >> >+{ > >> >+uint64_t features; > >> >+int ret; > >> >+int fd = -1; > >> >+ > >> >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); > >> 
>+if (fd == -1) { > >> >+error_report("vhost-vsock: failed to open device. %s", > >> >strerror(errno)); > >> >+return -1; > >> >+} > >> > >> You should use the `vhostfd` already opened in > >> vhost_vsock_device_realize(), since QEMU may not have permission to > >> access to the device, and the file descriptor can be passed from the > >> management layer. > >> > >Sure. Will do. > > > >> >+ > >> >+ret = ioctl(fd, VHOST_GET_FEATURES, &features); > >> >+if (ret) { > >> >+error_report("vhost-vsock: failed to read device. %s", > >> >strerror(errno)); > >> >+qemu_close(fd); > >> >+return ret; > >> >+} > >> >+ > >> >+qemu_close(fd); > >> >+if (features & (1 << VIRTIO_VSOCK_F_DGRAM)) > >> >+return MAX_VQS_WITH_DGRAM; > >> >+ > >> >+return MAX_VQS_WITHOUT_DGRAM; > >> >+} > >> >+ > >> > void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > >> > { > >> > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); > >> >+int nvqs = MAX_VQS_WITHOUT_DGRAM; > >> > > >> > virtio_init(vdev, name, VIRTIO_ID_VSOCK, > >> > sizeof(struct virtio_vsock_config)); > >> >@@ -209,12 +238,24 @@ void vhost_vsock_common_realize(VirtIODevice > >> >*vdev, const char *name) > >> > vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > >> >vhost_vsock_common_handle_output); > >> > > >> >+nvqs = vhost_vsock_get_max_qps(); > >> > >> You can't do this here, since the vhost-vsock-common.c functions are > >> used also by vhost-user-vsock, that doesn't use the /dev/vhost-vsock &g
Re: Re: [RFC v3] virtio/vsock: add two more queues for datagram types
On Wed, Jul 7, 2021 at 1:33 AM Stefano Garzarella wrote: > > On Tue, Jul 06, 2021 at 10:26:07PM +, Jiang Wang wrote: > >Datagram sockets are connectionless and unreliable. > >The sender does not know the capacity of the receiver > >and may send more packets than the receiver can handle. > > > >Add two more dedicate virtqueues for datagram sockets, > >so that it will not unfairly steal resources from > >stream and future connection-oriented sockets. > >--- > >v1 -> v2: use qemu cmd option to control number of queues, > >removed configuration settings for dgram. > >v2 -> v3: use ioctl to get features and decie numbr of > > virt queues, instead of qemu cmd option. > > > >btw: this patch is based on Arseny's SEQPACKET patch. > > > > hw/virtio/vhost-vsock-common.c| 53 > > ++- > > hw/virtio/vhost-vsock.c | 3 ++ > > include/hw/virtio/vhost-vsock-common.h| 3 +- > > include/hw/virtio/vhost-vsock.h | 4 ++ > > include/standard-headers/linux/virtio_vsock.h | 3 ++ > > 5 files changed, 63 insertions(+), 3 deletions(-) > > > >diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c > >index 4ad6e234ad..8164e09445 100644 > >--- a/hw/virtio/vhost-vsock-common.c > >+++ b/hw/virtio/vhost-vsock-common.c > >@@ -17,6 +17,8 @@ > > #include "hw/virtio/vhost-vsock.h" > > #include "qemu/iov.h" > > #include "monitor/monitor.h" > >+#include > >+#include > > > > int vhost_vsock_common_start(VirtIODevice *vdev) > > { > >@@ -196,9 +198,36 @@ int vhost_vsock_common_post_load(void *opaque, int > >version_id) > > return 0; > > } > > > >+static int vhost_vsock_get_max_qps(void) > >+{ > >+uint64_t features; > >+int ret; > >+int fd = -1; > >+ > >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); > >+if (fd == -1) { > >+error_report("vhost-vsock: failed to open device. 
%s", > >strerror(errno)); > >+return -1; > >+} > > You should use the `vhostfd` already opened in > vhost_vsock_device_realize(), since QEMU may not have permission to > access to the device, and the file descriptor can be passed from the > management layer. > Sure. Will do. > >+ > >+ret = ioctl(fd, VHOST_GET_FEATURES, &features); > >+if (ret) { > >+error_report("vhost-vsock: failed to read device. %s", > >strerror(errno)); > >+qemu_close(fd); > >+return ret; > >+} > >+ > >+qemu_close(fd); > >+if (features & (1 << VIRTIO_VSOCK_F_DGRAM)) > >+return MAX_VQS_WITH_DGRAM; > >+ > >+return MAX_VQS_WITHOUT_DGRAM; > >+} > >+ > > void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) > > { > > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); > >+int nvqs = MAX_VQS_WITHOUT_DGRAM; > > > > virtio_init(vdev, name, VIRTIO_ID_VSOCK, > > sizeof(struct virtio_vsock_config)); > >@@ -209,12 +238,24 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, > >const char *name) > > vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > >vhost_vsock_common_handle_output); > > > >+nvqs = vhost_vsock_get_max_qps(); > > You can't do this here, since the vhost-vsock-common.c functions are > used also by vhost-user-vsock, that doesn't use the /dev/vhost-vsock > device since the device is emulated in a separate user process. > > I think you can use something similar to what you did in v2, where you > passed a parameter to vhost_vsock_common_realize() to enable or not the > datagram queues. > Just to make sure, the qemu parameter will only be used for vhost-user-vsock, right? I think for the vhost-vsock kernel module, we will use ioctl and ignore the qemu parameter? 
> >+ > >+if (nvqs < 0) > >+nvqs = MAX_VQS_WITHOUT_DGRAM; > >+ > >+if (nvqs == MAX_VQS_WITH_DGRAM) { > >+vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > >+ > >vhost_vsock_common_handle_output); > >+vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, > >+ > >vhost_vsock_common_handle_output); > >+} > >+ > > /* The event queue
[RFC v3] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not know the capacity of the receiver and may send more packets than the receiver can handle. Add two more dedicate virtqueues for datagram sockets, so that it will not unfairly steal resources from stream and future connection-oriented sockets. --- v1 -> v2: use qemu cmd option to control number of queues, removed configuration settings for dgram. v2 -> v3: use ioctl to get features and decie numbr of virt queues, instead of qemu cmd option. btw: this patch is based on Arseny's SEQPACKET patch. hw/virtio/vhost-vsock-common.c| 53 ++- hw/virtio/vhost-vsock.c | 3 ++ include/hw/virtio/vhost-vsock-common.h| 3 +- include/hw/virtio/vhost-vsock.h | 4 ++ include/standard-headers/linux/virtio_vsock.h | 3 ++ 5 files changed, 63 insertions(+), 3 deletions(-) diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c index 4ad6e234ad..8164e09445 100644 --- a/hw/virtio/vhost-vsock-common.c +++ b/hw/virtio/vhost-vsock-common.c @@ -17,6 +17,8 @@ #include "hw/virtio/vhost-vsock.h" #include "qemu/iov.h" #include "monitor/monitor.h" +#include +#include int vhost_vsock_common_start(VirtIODevice *vdev) { @@ -196,9 +198,36 @@ int vhost_vsock_common_post_load(void *opaque, int version_id) return 0; } +static int vhost_vsock_get_max_qps(void) +{ +uint64_t features; +int ret; +int fd = -1; + +fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY); +if (fd == -1) { +error_report("vhost-vsock: failed to open device. %s", strerror(errno)); +return -1; +} + +ret = ioctl(fd, VHOST_GET_FEATURES, &features); +if (ret) { +error_report("vhost-vsock: failed to read device. 
%s", strerror(errno)); +qemu_close(fd); +return ret; +} + +qemu_close(fd); +if (features & (1 << VIRTIO_VSOCK_F_DGRAM)) +return MAX_VQS_WITH_DGRAM; + +return MAX_VQS_WITHOUT_DGRAM; +} + void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) { VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); +int nvqs = MAX_VQS_WITHOUT_DGRAM; virtio_init(vdev, name, VIRTIO_ID_VSOCK, sizeof(struct virtio_vsock_config)); @@ -209,12 +238,24 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name) vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, vhost_vsock_common_handle_output); +nvqs = vhost_vsock_get_max_qps(); + +if (nvqs < 0) +nvqs = MAX_VQS_WITHOUT_DGRAM; + +if (nvqs == MAX_VQS_WITH_DGRAM) { +vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, + vhost_vsock_common_handle_output); +vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, + vhost_vsock_common_handle_output); +} + /* The event queue belongs to QEMU */ vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE, vhost_vsock_common_handle_output); -vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs); -vvc->vhost_dev.vqs = vvc->vhost_vqs; +vvc->vhost_dev.nvqs = nvqs; +vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs); vvc->post_load_timer = NULL; } @@ -227,6 +268,14 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev) virtio_delete_queue(vvc->recv_vq); virtio_delete_queue(vvc->trans_vq); +if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) { +virtio_delete_queue(vvc->dgram_recv_vq); +virtio_delete_queue(vvc->dgram_trans_vq); +} + +if (vvc->vhost_dev.vqs) +g_free(vvc->vhost_dev.vqs); + virtio_delete_queue(vvc->event_vq); virtio_cleanup(vdev); } diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c index 1b1a5c70ed..33bbe16983 100644 --- a/hw/virtio/vhost-vsock.c +++ b/hw/virtio/vhost-vsock.c @@ -23,6 +23,7 @@ const int feature_bits[] = { VIRTIO_VSOCK_F_SEQPACKET, +VIRTIO_VSOCK_F_DGRAM, VHOST_INVALID_FEATURE_BIT }; @@ 
-116,6 +117,8 @@ static uint64_t vhost_vsock_get_features(VirtIODevice *vdev, VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev); virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_SEQPACKET); +if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) +virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_DGRAM); return vhost_get_features(&vvc->vhost_dev, feature_bits, requested_features); } diff --git a/include/hw/virtio/vhost-vsock-common.h b/include/hw/virtio/vhost-vsock-common.h index e412b5ee98..798715241f 100644 --- a/include/hw/virtio/vhost-vsock-common.h +++ b/include/hw/virtio/vhost-vsock-common.h @@ -27,12 +27,13 @@ enum { struct VHostVSockCommon { VirtIODevice parent; -
Re: [External] Re: [RFC v1] virtio/vsock: add two more queues for datagram types
On Wed, Jun 30, 2021 at 11:55 PM Stefano Garzarella wrote:
>
> On Wed, Jun 30, 2021 at 03:44:17PM -0700, Jiang Wang . wrote:
> >On Thu, Jun 24, 2021 at 7:31 AM Stefano Garzarella wrote:
> >>
> >> On Wed, Jun 23, 2021 at 11:50:33PM -0700, Jiang Wang . wrote:
> >> >Hi Stefano,
> >> >
> >> >I checked virtio_net_set_multiqueue(), which will help with the
> >> >following changes in my patch:
> >> >
> >> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >> >vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >                                      vhost_vsock_common_handle_output);
> >> >vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >                                       vhost_vsock_common_handle_output);
> >> >#endif
> >> >
> >> >But I think there is still an issue with the following lines, right?
> >>
> >> Yep, I think so.
> >>
> >> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >> >struct vhost_virtqueue vhost_vqs[4];
> >> >#else
> >> >struct vhost_virtqueue vhost_vqs[2];
> >> >#endif
> >> >
> >> >I think the problem with feature bits is that they are set and get
> >> >after vhost_vsock_common_realize() and after vhost_dev_init() in
> >> >drivers/vhost/vsock.c, but those virtqueues need to be set up
> >> >correctly beforehand.
> >>
> >> I think we can follow the net and scsi vhost devices, so we can set a
> >> VHOST_VSOCK_VQ_MAX (5), allocate all the queues in any case, and then
> >> use only the queues acked by the guest.
> >>
> >Thanks for the advice. I checked both net and scsi, and scsi is more
> >helpful.
>
> Yeah :-)
>
> >> >I tried to test with the host kernel allocating 4 vqs, but qemu only
> >> >allocated 2 vqs, and the guest kernel was not able to send even vsock
> >> >stream packets. I think the host kernel and qemu have to agree on the
> >> >number of vhost_vqs. Do you agree? Did I miss something?
> >>
> >> Mmm, I need to check, but for example vhost-net calls vhost_dev_init()
> >> with VHOST_NET_VQ_MAX, but then the guest can decide to use only one
> >> pair of TX and RX queues.
> >>
> >> I'm not sure about the qemu point of view, but I expected that QEMU
> >> can set fewer queues than the kernel allocates. `vhost_dev.nvqs`
> >> should be set to the number of queues that QEMU can handle.
> >>
> >I checked that vhost_dev.nvqs is still the maximum number of queues (4
> >queues), but I found a way to work around it. More details in the
> >following text.
> >
> >> >Another idea to make the setting at runtime instead of compile time
> >> >is to use qemu cmd-line options; then qemu can allocate 2 or 4 queues
> >> >depending on the cmd line. This will solve the issue when the host
> >> >kernel is an old one (no dgram support) and the qemu is a new one.
> >>
> >> I don't think this is a good idea; at most we can add an ioctl that
> >> qemu can use to query the kernel about allocated queues, but I still
> >> need to understand better if we really need this.
> >>
> >Hmm. Both net and scsi use a qemu cmd line option to configure the
> >number of queues. The qemu cmdline is a runtime setting and flexible.
> >I think the qemu cmdline is better than an ioctl. I also make the qemu
> >cmd line option default to allocating only two queues, to be compatible
> >with old versions.
>
> Can we avoid both and allocate the maximum number of queues that QEMU
> can handle?
>
I did not find any way to do that. When the host kernel is new, QEMU can
handle 4 queues. When the host kernel is an old one, QEMU can only handle
2 queues.

> I'm not sure that adding a parameter to QEMU is a good idea. If possible
> it should be automatic.
>
OK. I will keep that in mind and dig more to see if there is a way to do
that. Thanks.

> >> >But there is still an issue when the host kernel is a new one, while
> >> >the qemu is an old one. I am not sure how to make the number of
> >> >virtqueues change at run-time for the host kernel. In another email
> >> >thread, you mentioned removing the kconfig in the linux kernel; I
> >> >believe that is related to this qemu patch,
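The approach suggested in the thread above (allocate the maximum number of virtqueues up front, then let the negotiated features decide how many are active) can be sketched with a small stand-alone model. This is an illustrative sketch only: the helper name and the `MAX_VQS_*` constants are assumptions, not QEMU's actual API; the DGRAM feature bit value (2) follows the patch series.

```c
/* Toy model of the negotiation discussed above: the virtqueue array is
 * always sized for the maximum, and the number of active queues is
 * derived at runtime from the feature bits instead of a build-time
 * #ifdef. Names are illustrative, not QEMU's real API. */
#include <stdint.h>

#define VIRTIO_VSOCK_F_DGRAM  2   /* proposed dgram feature bit */
#define MAX_VQS_WITHOUT_DGRAM 2   /* stream tx + rx */
#define MAX_VQS_WITH_DGRAM    4   /* stream tx/rx + dgram tx/rx */

/* Pick the number of vhost virtqueues from the advertised features. */
static int vsock_active_nvqs(uint64_t host_features)
{
    if (host_features & (1ULL << VIRTIO_VSOCK_F_DGRAM))
        return MAX_VQS_WITH_DGRAM;
    return MAX_VQS_WITHOUT_DGRAM;
}
```

With an old host (no DGRAM bit) this yields 2 queues; with a new host it yields 4, without any compile-time switch.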
[RFC v2] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not
know the capacity of the receiver and may send more packets than the
receiver can handle.

Add two more dedicate virtqueues for datagram sockets, so that it will
not unfairly steal resources from stream and future connection-oriented
sockets.

Signed-off-by: Jiang Wang
---
v1 -> v2: use qemu cmd option to control number of queues, removed
configuration settings for dgram.

 hw/virtio/vhost-user-vsock.c                  |  6 +++---
 hw/virtio/vhost-vsock-common.c                | 23 ++-
 hw/virtio/vhost-vsock.c                       | 11 ---
 include/hw/virtio/vhost-vsock-common.h        |  8 +---
 include/hw/virtio/vhost-vsock.h               |  1 +
 include/standard-headers/linux/virtio_vsock.h |  3 +++
 6 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
index a6f08c26b9..409286aa8d 100644
--- a/hw/virtio/vhost-user-vsock.c
+++ b/hw/virtio/vhost-user-vsock.c
@@ -103,7 +103,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
         return;
     }

-    vhost_vsock_common_realize(vdev, "vhost-user-vsock");
+    vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);

     vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
@@ -126,7 +126,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
 err_vhost_dev:
     vhost_dev_cleanup(&vvc->vhost_dev);
 err_virtio:
-    vhost_vsock_common_unrealize(vdev);
+    vhost_vsock_common_unrealize(vdev, false);
     vhost_user_cleanup(&vsock->vhost_user);
     return;
 }
@@ -142,7 +142,7 @@ static void vuv_device_unrealize(DeviceState *dev)

     vhost_dev_cleanup(&vvc->vhost_dev);

-    vhost_vsock_common_unrealize(vdev);
+    vhost_vsock_common_unrealize(vdev, false);

     vhost_user_cleanup(&vsock->vhost_user);

diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..a36cb36496 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -196,9 +196,10 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
     return 0;
 }

-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool enable_dgram)
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
+    int nvq = 2;

     virtio_init(vdev, name, VIRTIO_ID_VSOCK,
                 sizeof(struct virtio_vsock_config));
@@ -209,17 +210,24 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);

-    /* The event queue belongs to QEMU */
+    if (enable_dgram) {
+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                              vhost_vsock_common_handle_output);
+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                               vhost_vsock_common_handle_output);
+        nvq = 4;
+    }
+
+    /* The event queue belongs to QEMU */
     vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);

-    vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
-    vvc->vhost_dev.vqs = vvc->vhost_vqs;
+    vvc->vhost_dev.nvqs = nvq;
+    vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);

     vvc->post_load_timer = NULL;
 }

-void vhost_vsock_common_unrealize(VirtIODevice *vdev)
+void vhost_vsock_common_unrealize(VirtIODevice *vdev, bool enable_dgram)
 {
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);

@@ -227,6 +235,11 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
     virtio_delete_queue(vvc->recv_vq);
     virtio_delete_queue(vvc->trans_vq);

+    if (enable_dgram) {
+        virtio_delete_queue(vvc->dgram_recv_vq);
+        virtio_delete_queue(vvc->dgram_trans_vq);
+    }
+
     virtio_delete_queue(vvc->event_vq);
     virtio_cleanup(vdev);
 }
diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c
index 5eaa207504..d15c672c38 100644
--- a/hw/virtio/vhost-vsock.c
+++ b/hw/virtio/vhost-vsock.c
@@ -23,6 +23,7 @@

 const int feature_bits[] = {
     VIRTIO_VSOCK_F_SEQPACKET,
+    VIRTIO_VSOCK_F_DGRAM,
     VHOST_INVALID_FEATURE_BIT
 };

@@ -116,6 +117,8 @@ static uint64_t vhost_vsock_get_features(VirtIODevice *vdev,
     VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);

     virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_SEQPACKET);
+    if (vvc->vhost_dev.nvqs == 4) /* 4 means has dgram support */
+        virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_DGRAM);
     return vhost_get
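The `nvqs == 4` check in the get_features hunk above can be modeled in isolation. This is a hedged sketch: `add_feature()` and `vsock_requested_features()` are simplified stand-ins for QEMU's `virtio_add_feature()` and `vhost_vsock_get_features()`, and the bit values (SEQPACKET = 1, DGRAM = 2) follow the patch.

```c
/* Simplified stand-in for virtio_add_feature(): set one feature bit. */
#include <stdint.h>

#define VIRTIO_VSOCK_F_SEQPACKET 1
#define VIRTIO_VSOCK_F_DGRAM     2

static uint64_t add_feature(uint64_t features, unsigned bit)
{
    return features | (1ULL << bit);
}

/* Mirror of the patched vhost_vsock_get_features() logic: seqpacket is
 * always offered; dgram only when the four-queue layout was allocated. */
static uint64_t vsock_requested_features(int nvqs, uint64_t requested)
{
    requested = add_feature(requested, VIRTIO_VSOCK_F_SEQPACKET);
    if (nvqs == 4) /* dgram virtqueue pair allocated */
        requested = add_feature(requested, VIRTIO_VSOCK_F_DGRAM);
    return requested;
}
```

So a two-queue device advertises only SEQPACKET (mask 0x2), while a four-queue device advertises SEQPACKET and DGRAM (mask 0x6).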
Re: [External] Re: [RFC v1] virtio/vsock: add two more queues for datagram types
On Thu, Jun 24, 2021 at 7:31 AM Stefano Garzarella wrote:
>
> On Wed, Jun 23, 2021 at 11:50:33PM -0700, Jiang Wang . wrote:
> >Hi Stefano,
> >
> >I checked virtio_net_set_multiqueue(), which will help with the
> >following changes in my patch:
> >
> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >                                      vhost_vsock_common_handle_output);
> >vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >                                       vhost_vsock_common_handle_output);
> >#endif
> >
> >But I think there is still an issue with the following lines, right?
>
> Yep, I think so.
>
> >#ifdef CONFIG_VHOST_VSOCK_DGRAM
> >struct vhost_virtqueue vhost_vqs[4];
> >#else
> >struct vhost_virtqueue vhost_vqs[2];
> >#endif
> >
> >I think the problem with feature bits is that they are set and get
> >after vhost_vsock_common_realize() and after vhost_dev_init() in
> >drivers/vhost/vsock.c, but those virtqueues need to be set up correctly
> >beforehand.
>
> I think we can follow the net and scsi vhost devices, so we can set a
> VHOST_VSOCK_VQ_MAX (5), allocate all the queues in any case, and then
> use only the queues acked by the guest.
>
Thanks for the advice. I checked both net and scsi, and scsi is more
helpful.

> >I tried to test with the host kernel allocating 4 vqs, but qemu only
> >allocated 2 vqs, and the guest kernel was not able to send even vsock
> >stream packets. I think the host kernel and qemu have to agree on the
> >number of vhost_vqs. Do you agree? Did I miss something?
>
> Mmm, I need to check, but for example vhost-net calls vhost_dev_init()
> with VHOST_NET_VQ_MAX, but then the guest can decide to use only one
> pair of TX and RX queues.
>
> I'm not sure about the qemu point of view, but I expected that QEMU can
> set fewer queues than the kernel allocates. `vhost_dev.nvqs` should be
> set to the number of queues that QEMU can handle.
>
I checked that vhost_dev.nvqs is still the maximum number of queues (4
queues), but I found a way to work around it. More details in the
following text.

> >Another idea to make the setting at runtime instead of compile time is
> >to use qemu cmd-line options; then qemu can allocate 2 or 4 queues
> >depending on the cmd line. This will solve the issue when the host
> >kernel is an old one (no dgram support) and the qemu is a new one.
>
> I don't think this is a good idea; at most we can add an ioctl that qemu
> can use to query the kernel about allocated queues, but I still need to
> understand better if we really need this.
>
Hmm. Both net and scsi use a qemu cmd line option to configure the number
of queues. The qemu cmdline is a runtime setting and flexible. I think
the qemu cmdline is better than an ioctl. I also make the qemu cmd line
option default to allocating only two queues, to be compatible with old
versions.

> >But there is still an issue when the host kernel is a new one, while
> >the qemu is an old one. I am not sure how to make the number of
> >virtqueues change at run-time for the host kernel. In another email
> >thread, you mentioned removing the kconfig in the linux kernel; I
> >believe that is related to this qemu patch, right?
>
> It was related to both; I don't think we should build QEMU and Linux
> with or without dgram support.
>
> >If so, any ideas how I can make the host kernel change the number of
> >vqs at run-time or when starting up vsock? The only way I can think of
> >is a kernel module parameter for the vhost_vsock module. Any other
> >ideas? Thanks.
>
> I need to check better, but we should be able to do it all at run time,
> looking at the features field. As I said, both QEMU and the kernel can
> allocate the maximum number of queues that they can handle, then enable
> only the queues allocated by the guest (e.g. during
> vhost_vsock_common_start()).
>
Yes. I checked the code and found there is an implementation bug (or
limitation) in drivers/vhost/vsock.c. In vhost_vsock_start(), if a queue
fails to init, the code will clean up all previously successfully
allocated queues. That is why the V1 code does not work when the host
kernel is new but qemu and the guest kernel are old. I made a change
there and it works now. I will clean up the patch a little and send V2
soon.

> >btw, I searched the Linux kernel code but did not find any examples.
>
> I'm a bit busy this week; I'll try to write some PoC next week if you
> can't find a working solution. (without any #ifdef :-)
>
> Thanks,
> Stefano
>
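The unwind behavior described above (a failed queue init rolls back only the queues that already started) can be illustrated with a toy version of the start path. This is not the actual drivers/vhost/vsock.c code, just a model of the control flow that makes the V1 failure mode visible: queues a new host allocated but an old guest never acked must not be counted in nvqs, or the whole start fails.

```c
/* Toy model of a vhost start path with roll-back on error. */
#include <stdbool.h>

/* Simulated per-queue init: optionally fail at queue index 2,
 * mimicking a dgram queue an old guest never set up. */
static bool init_vq(int i, bool fail_at_third)
{
    return !(fail_at_third && i == 2);
}

static void cleanup_vq(int i)
{
    (void)i; /* undo init for queue i (no-op in this model) */
}

/* Start nvqs queues; on failure, unwind the ones already started
 * and report 0 started queues, like the all-or-nothing V1 behavior. */
static int vsock_start(int nvqs, bool fail_at_third)
{
    int i;
    for (i = 0; i < nvqs; i++) {
        if (!init_vq(i, fail_at_third)) {
            while (--i >= 0)
                cleanup_vq(i);  /* roll back queues 0..i-1 */
            return 0;
        }
    }
    return nvqs;
}
```

Starting only the two stream queues always succeeds; asking for four when the third queue cannot init tears everything down, which is why the queue count has to match what the guest acked.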
Re: Re: [RFC v1] virtio/vsock: add two more queues for datagram types
Hi Stefano,

I checked virtio_net_set_multiqueue(), which will help with the following
changes in my patch:

#ifdef CONFIG_VHOST_VSOCK_DGRAM
vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                      vhost_vsock_common_handle_output);
vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                       vhost_vsock_common_handle_output);
#endif

But I think there is still an issue with the following lines, right?

#ifdef CONFIG_VHOST_VSOCK_DGRAM
struct vhost_virtqueue vhost_vqs[4];
#else
struct vhost_virtqueue vhost_vqs[2];
#endif

I think the problem with feature bits is that they are set and get after
vhost_vsock_common_realize() and after vhost_dev_init() in
drivers/vhost/vsock.c, but those virtqueues need to be set up correctly
beforehand.

I tried to test with the host kernel allocating 4 vqs, but qemu only
allocated 2 vqs, and the guest kernel was not able to send even vsock
stream packets. I think the host kernel and qemu have to agree on the
number of vhost_vqs. Do you agree? Did I miss something?

Another idea to make the setting at runtime instead of compile time is to
use qemu cmd-line options; then qemu can allocate 2 or 4 queues depending
on the cmd line. This will solve the issue when the host kernel is an old
one (no dgram support) and the qemu is a new one.

But there is still an issue when the host kernel is a new one, while the
qemu is an old one. I am not sure how to make the number of virtqueues
change at run-time for the host kernel. In another email thread, you
mentioned removing the kconfig in the linux kernel; I believe that is
related to this qemu patch, right? If so, any ideas how I can make the
host kernel change the number of vqs at run-time or when starting up
vsock? The only way I can think of is a kernel module parameter for the
vhost_vsock module. Any other ideas? Thanks.

btw, I searched the Linux kernel code but did not find any examples.

Regards,

Jiang

On Thu, Jun 10, 2021 at 10:29 AM Jiang Wang . wrote:
>
> On Thu, Jun 10, 2021 at 2:40 AM Stefano Garzarella wrote:
> >
> > On Thu, Jun 10, 2021 at 12:14:24AM +, Jiang Wang wrote:
> > >Datagram sockets are connectionless and unreliable.
> > >The sender does not know the capacity of the receiver
> > >and may send more packets than the receiver can handle.
> > >
> > >Add two more dedicate virtqueues for datagram sockets,
> > >so that it will not unfairly steal resources from
> > >stream and future connection-oriented sockets.
> > >
> > >The virtio spec patch is here:
> > >https://www.spinics.net/lists/linux-virtualization/msg50027.html
> > >
> > >Here is the link for the linux kernel git repo with patches
> > >to support dgram sockets:
> > >https://github.com/Jiang1155/linux/tree/vsock-dgram-v1
> > >
> > >Signed-off-by: Jiang Wang
> > >---
> > > configure                                     | 13 +
> > > hw/virtio/vhost-vsock-common.c                | 11 ++-
> > > hw/virtio/vhost-vsock.c                       |  8 +---
> > > include/hw/virtio/vhost-vsock-common.h        | 10 +-
> > > include/standard-headers/linux/virtio_vsock.h |  3 +++
> > > meson.build                                   |  1 +
> > > 6 files changed, 41 insertions(+), 5 deletions(-)
> > >
> > >diff --git a/configure b/configure
> > >index 9f016b06b5..6455b283a5 100755
> > >--- a/configure
> > >+++ b/configure
> > >@@ -343,6 +343,7 @@ vhost_net="$default_feature"
> > > vhost_crypto="$default_feature"
> > > vhost_scsi="$default_feature"
> > > vhost_vsock="$default_feature"
> > >+vhost_vsock_dgram="no"
> > > vhost_user="no"
> > > vhost_user_blk_server="auto"
> > > vhost_user_fs="$default_feature"
> > >@@ -1272,6 +1273,10 @@ for opt do
> > >   ;;
> > >   --enable-vhost-vsock) vhost_vsock="yes"
> > >   ;;
> > >+  --disable-vhost-vsock-dgram) vhost_vsock_dgram="no"
> > >+  ;;
> > >+  --enable-vhost-vsock-dgram) vhost_vsock_dgram="yes"
> > >+  ;;
> >
> > I don't think we should add a configuration option to enable/disable
> > the dgram support at build time.
> >
> > I think we should do it at runtime, looking at the features negotiated.
> >
> > Take a look at virtio_net_set_multiqueue().
>
> Got it. Will check. Thanks.
>
> > Thanks,
> > Stefano
> >
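The virtio_net_set_multiqueue() pattern Stefano points to can be sketched as: create every queue at realize time, then at runtime mark only the negotiated ones active. All names here are illustrative assumptions; this is a model of the idea, not QEMU's virtio-net code.

```c
/* Toy model of runtime multiqueue configuration. */
#include <stdbool.h>

#define VQ_MAX 4

struct toy_vq { bool created; bool active; };

struct toy_dev { struct toy_vq vqs[VQ_MAX]; };

/* Create every queue up front, regardless of what the guest supports. */
static void toy_realize(struct toy_dev *d)
{
    int i;
    for (i = 0; i < VQ_MAX; i++) {
        d->vqs[i].created = true;
        d->vqs[i].active = false;
    }
}

/* Then activate only the pairs the guest actually negotiated. */
static int toy_set_queues(struct toy_dev *d, bool dgram_negotiated)
{
    int n = dgram_negotiated ? 4 : 2;
    int i;
    for (i = 0; i < VQ_MAX; i++)
        d->vqs[i].active = (i < n);
    return n;
}

/* Convenience wrapper: realize a device and configure its queues. */
static int toy_demo(bool dgram_negotiated)
{
    struct toy_dev d;
    toy_realize(&d);
    return toy_set_queues(&d, dgram_negotiated);
}
```

The key property is that queue creation never depends on a build-time flag: the same binary serves both a dgram-capable and a stream-only guest.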
Re: Re: [RFC v1] virtio/vsock: add two more queues for datagram types
On Thu, Jun 10, 2021 at 2:40 AM Stefano Garzarella wrote:
>
> On Thu, Jun 10, 2021 at 12:14:24AM +, Jiang Wang wrote:
> >Datagram sockets are connectionless and unreliable.
> >The sender does not know the capacity of the receiver
> >and may send more packets than the receiver can handle.
> >
> >Add two more dedicate virtqueues for datagram sockets,
> >so that it will not unfairly steal resources from
> >stream and future connection-oriented sockets.
> >
> >The virtio spec patch is here:
> >https://www.spinics.net/lists/linux-virtualization/msg50027.html
> >
> >Here is the link for the linux kernel git repo with patches
> >to support dgram sockets:
> >https://github.com/Jiang1155/linux/tree/vsock-dgram-v1
> >
> >Signed-off-by: Jiang Wang
> >---
> > configure                                     | 13 +
> > hw/virtio/vhost-vsock-common.c                | 11 ++-
> > hw/virtio/vhost-vsock.c                       |  8 +---
> > include/hw/virtio/vhost-vsock-common.h        | 10 +-
> > include/standard-headers/linux/virtio_vsock.h |  3 +++
> > meson.build                                   |  1 +
> > 6 files changed, 41 insertions(+), 5 deletions(-)
> >
> >diff --git a/configure b/configure
> >index 9f016b06b5..6455b283a5 100755
> >--- a/configure
> >+++ b/configure
> >@@ -343,6 +343,7 @@ vhost_net="$default_feature"
> > vhost_crypto="$default_feature"
> > vhost_scsi="$default_feature"
> > vhost_vsock="$default_feature"
> >+vhost_vsock_dgram="no"
> > vhost_user="no"
> > vhost_user_blk_server="auto"
> > vhost_user_fs="$default_feature"
> >@@ -1272,6 +1273,10 @@ for opt do
> >   ;;
> >   --enable-vhost-vsock) vhost_vsock="yes"
> >   ;;
> >+  --disable-vhost-vsock-dgram) vhost_vsock_dgram="no"
> >+  ;;
> >+  --enable-vhost-vsock-dgram) vhost_vsock_dgram="yes"
> >+  ;;
>
> I don't think we should add a configuration option to enable/disable the
> dgram support at build time.
>
> I think we should do it at runtime, looking at the features negotiated.
>
> Take a look at virtio_net_set_multiqueue().

Got it. Will check. Thanks.

> Thanks,
> Stefano
>
[RFC v1] virtio/vsock: add two more queues for datagram types
Datagram sockets are connectionless and unreliable. The sender does not
know the capacity of the receiver and may send more packets than the
receiver can handle.

Add two more dedicate virtqueues for datagram sockets, so that it will
not unfairly steal resources from stream and future connection-oriented
sockets.

The virtio spec patch is here:
https://www.spinics.net/lists/linux-virtualization/msg50027.html

Here is the link for the linux kernel git repo with patches to support
dgram sockets:
https://github.com/Jiang1155/linux/tree/vsock-dgram-v1

Signed-off-by: Jiang Wang
---
 configure                                     | 13 +
 hw/virtio/vhost-vsock-common.c                | 11 ++-
 hw/virtio/vhost-vsock.c                       |  8 +---
 include/hw/virtio/vhost-vsock-common.h        | 10 +-
 include/standard-headers/linux/virtio_vsock.h |  3 +++
 meson.build                                   |  1 +
 6 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/configure b/configure
index 9f016b06b5..6455b283a5 100755
--- a/configure
+++ b/configure
@@ -343,6 +343,7 @@ vhost_net="$default_feature"
 vhost_crypto="$default_feature"
 vhost_scsi="$default_feature"
 vhost_vsock="$default_feature"
+vhost_vsock_dgram="no"
 vhost_user="no"
 vhost_user_blk_server="auto"
 vhost_user_fs="$default_feature"
@@ -1272,6 +1273,10 @@ for opt do
   ;;
   --enable-vhost-vsock) vhost_vsock="yes"
   ;;
+  --disable-vhost-vsock-dgram) vhost_vsock_dgram="no"
+  ;;
+  --enable-vhost-vsock-dgram) vhost_vsock_dgram="yes"
+  ;;
   --disable-vhost-user-blk-server) vhost_user_blk_server="disabled"
   ;;
   --enable-vhost-user-blk-server) vhost_user_blk_server="enabled"
@@ -1839,6 +1844,7 @@ disabled with --disable-FEATURE, default is enabled if available
   attr            attr and xattr support
   vhost-net       vhost-net kernel acceleration support
   vhost-vsock     virtio sockets device support
+  vhost-vsock-dgram virtio sockets datagram type support
   vhost-scsi      vhost-scsi kernel target support
   vhost-crypto    vhost-user-crypto backend support
   vhost-kernel    vhost kernel backend support
@@ -2389,6 +2395,10 @@ test "$vhost_vsock" = "" && vhost_vsock=$vhost_kernel
 if test "$vhost_vsock" = "yes" && test "$vhost_kernel" != "yes"; then
   error_exit "--enable-vhost-vsock requires --enable-vhost-kernel"
 fi
+test "$vhost_vsock_dgram" = "" && vhost_vsock_dgram=$vhost_vsock
+if test "$vhost_vsock_dgram" = "yes" && test "$vhost_vsock" != "yes"; then
+  error_exit "--enable-vhost-vsock-dgram requires --enable-vhost-vsock"
+fi

 # vhost-user backends
 test "$vhost_net_user" = "" && vhost_net_user=$vhost_user
@@ -5810,6 +5820,9 @@ if test "$vhost_vsock" = "yes" ; then
   if test "$vhost_user" = "yes" ; then
     echo "CONFIG_VHOST_USER_VSOCK=y" >> $config_host_mak
   fi
+  if test "$vhost_vsock_dgram" = "yes" ; then
+    echo "CONFIG_VHOST_VSOCK_DGRAM=y" >> $config_host_mak
+  fi
 fi
 if test "$vhost_kernel" = "yes" ; then
   echo "CONFIG_VHOST_KERNEL=y" >> $config_host_mak
diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..fff8d12d91 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -208,7 +208,12 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
                                     vhost_vsock_common_handle_output);
     vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                     vhost_vsock_common_handle_output);
-
+#ifdef CONFIG_VHOST_VSOCK_DGRAM
+    vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                          vhost_vsock_common_handle_output);
+    vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                           vhost_vsock_common_handle_output);
+#endif
     /* The event queue belongs to QEMU */
     vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                     vhost_vsock_common_handle_output);
@@ -227,6 +232,10 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
     virtio_delete_queue(vvc->recv_vq);
     virtio_delete_queue(vvc->trans_vq);
+#ifdef CONFIG_VHOST_VSOCK_DGRAM
+    virtio_delete_queue(vvc->dgram_recv_vq);
+    virtio_delete_queue(vvc->dgram_trans_vq);
+#endif
     virtio_delete_queue(vvc->event_vq);
     virtio_cleanup(vdev);
 }
diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c
index 8ddf