Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-09-07 Thread Stefano Garzarella

On Tue, Sep 07, 2021 at 12:15:30PM +0200, Stefano Garzarella wrote:

On Sun, Sep 05, 2021 at 11:08:34AM -0700, Jiang Wang . wrote:

On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella  wrote:

On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote:

On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella  wrote:

On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:


[...]


>+
>+    if (nvqs == MAX_VQS_WITH_DGRAM) {
>+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+                                              vhost_vsock_common_handle_output);
>+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+                                               vhost_vsock_common_handle_output);
>+    }
>+

I'm still concerned about compatibility with guests that don't support
dgram, as I mentioned in the previous email.

I need to do some testing, but my guess is it won't work if the host
(QEMU and vhost-vsock) supports it and the guest doesn't.

I still think that we should allocate an array of queues and then decide
at runtime which one to use for events (third or fifth) after the guest
has acked the features.


Agree. I will check where the guest acks the feature. If you have any


I'm not sure we should delete them, I think we can allocate 5 queues and
then use queue 3 or 5 for events in vhost_vsock_common_start(), when the
guest already acked the features.
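
Roughly something like this (just a rough sketch, not tested; it assumes the
DGRAM feature bit from this patch and the usual virtio helpers):

/*
 * Sketch: always add all 5 virtqueues in vhost_vsock_common_realize(),
 * then pick the event queue in vhost_vsock_common_start() once the
 * guest has acked its features. The guest sees the event queue as the
 * 3rd vq when DGRAM is off and as the 5th vq when DGRAM is on.
 */
int vhost_vsock_common_start(VirtIODevice *vdev)
{
    VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
    int event_idx;

    if (virtio_vdev_has_feature(vdev, VIRTIO_VSOCK_F_DGRAM)) {
        event_idx = 4;  /* fifth queue */
    } else {
        event_idx = 2;  /* third queue */
    }
    vvc->event_vq = virtio_get_queue(vdev, event_idx);

    /* ... rest of the existing start code ... */
}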



I think I just solved most compatibility issues during migration. The
previous error I saw was due to a bug in the vhost-vsock kernel module.


Please post the fix upstream.


After fixing that, I did not change anything for QEMU (i.e., still the same
version 4; btw, I will fix the fd issue in v5) and did a few migration tests.
Most of them are good.

There are two test cases where migration failed with a "Features 0x13002
unsupported" error, which is due to the SEQPACKET QEMU patch (my dgram patch
is based on the seqpacket patch). Not sure if we need to fix it or not.
I think the compatibility is good as of now. Let me


I'm a bit lost. Do you mean because one of the QEMUs doesn't support
SEQPACKET?




know if you have other concerns or more test cases to test.
Otherwise, I will submit V5 soon.

Test results:
The left three columns are the source set-up, the right ones are the
destination set-up. Host and Guest refer to the host and guest kernel
respectively. These tests are not complete, and I kept the src and dest
kernels mostly the same version. But I did test one case where the source
kernel has dgram support while the dest kernel does not, and it is good.
Btw, if the src kernel and dest kernel differ a lot (like 5.14 vs 4.19),
then QEMU will show some MSR errors, which I guess is kind of expected.

Host      QEMU      Guest      -->  Host      QEMU      Result
dgram     no-dgram  no-dgram        dgram     no-dgram  Good
dgram     no-dgram  dgram           dgram     no-dgram  Good
dgram     dgram     no-dgram        dgram     dgram     Good
dgram     dgram     no-dgram        dgram     no-dgram  Good
dgram     dgram     dgram           dgram     no-dgram  load feature error *1
no-dgram  no-dgram  dgram           no-dgram  no-dgram  Good
no-dgram  dgram     dgram           no-dgram  dgram     Good
no-dgram  dgram     no-dgram        no-dgram  dgram     Good
no-dgram  dgram     no-dgram        no-dgram  no-dgram  Good
no-dgram  dgram     dgram           no-dgram  no-dgram  load feature error *1
dgram     dgram     no-dgram        no-dgram  no-dgram  Good

*1 QEMU shows the following error messages:
qemu-system-x86_64: Features 0x13002 unsupported. Allowed
features: 0x17900
qemu-system-x86_64: Failed to load virtio-vhost_vsock:virtio
qemu-system-x86_64: error while loading state for instance 0x0 of
device ':00:05.0/virtio-vhost_vsock'
qemu-system-x86_64: load of migration failed: Operation not permitted

This is due to the SEQPACKET feature bit.


I think I found the issue, I'm sending the patch to the mailing list.

If you want to try it, it is here:
https://gitlab.com/sgarzarella/qemu/-/commits/vsock-fix-seqpacket-migration


Maybe we can reuse the `features` field also for dgram.

Stefano




Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-09-07 Thread Stefano Garzarella

On Sun, Sep 05, 2021 at 11:08:34AM -0700, Jiang Wang . wrote:

On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella  wrote:

On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote:
>On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella  wrote:
>> On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:


[...]


>> >+
>> >+    if (nvqs == MAX_VQS_WITH_DGRAM) {
>> >+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>> >+                                              vhost_vsock_common_handle_output);
>> >+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>> >+                                               vhost_vsock_common_handle_output);
>> >+    }
>> >+
>>
>> I'm still concerned about compatibility with guests that don't support
>> dgram, as I mentioned in the previous email.
>>
>> I need to do some testing, but my guess is it won't work if the host
>> (QEMU and vhost-vsock) supports it and the guest doesn't.
>>
>> I still think that we should allocate an array of queues and then decide
>> at runtime which one to use for events (third or fifth) after the guest
>> has acked the features.
>>
>Agree. I will check where the guest acks the feature. If you have any

I'm not sure we should delete them, I think we can allocate 5 queues and
then use queue 3 or 5 for events in vhost_vsock_common_start(), when the
guest already acked the features.



I think I just solved most compatibility issues during migration. The
previous error I saw was due to a bug in the vhost-vsock kernel module.


Please post the fix upstream.


After fixing that, I did not change anything for QEMU (i.e., still the same
version 4; btw, I will fix the fd issue in v5) and did a few migration tests.
Most of them are good.

There are two test cases where migration failed with a "Features 0x13002
unsupported" error, which is due to the SEQPACKET QEMU patch (my dgram patch
is based on the seqpacket patch). Not sure if we need to fix it or not.
I think the compatibility is good as of now. Let me


I'm a bit lost. Do you mean because one of the QEMUs doesn't support
SEQPACKET?




know if you have other concerns or more test cases to test.
Otherwise, I will submit V5 soon.

Test results:
The left three columns are the source set-up, the right ones are the
destination set-up. Host and Guest refer to the host and guest kernel
respectively. These tests are not complete, and I kept the src and dest
kernels mostly the same version. But I did test one case where the source
kernel has dgram support while the dest kernel does not, and it is good.
Btw, if the src kernel and dest kernel differ a lot (like 5.14 vs 4.19),
then QEMU will show some MSR errors, which I guess is kind of expected.

Host      QEMU      Guest      -->  Host      QEMU      Result
dgram     no-dgram  no-dgram        dgram     no-dgram  Good
dgram     no-dgram  dgram           dgram     no-dgram  Good
dgram     dgram     no-dgram        dgram     dgram     Good
dgram     dgram     no-dgram        dgram     no-dgram  Good
dgram     dgram     dgram           dgram     no-dgram  load feature error *1
no-dgram  no-dgram  dgram           no-dgram  no-dgram  Good
no-dgram  dgram     dgram           no-dgram  dgram     Good
no-dgram  dgram     no-dgram        no-dgram  dgram     Good
no-dgram  dgram     no-dgram        no-dgram  no-dgram  Good
no-dgram  dgram     dgram           no-dgram  no-dgram  load feature error *1
dgram     dgram     no-dgram        no-dgram  no-dgram  Good

*1 QEMU shows the following error messages:
qemu-system-x86_64: Features 0x13002 unsupported. Allowed
features: 0x17900
qemu-system-x86_64: Failed to load virtio-vhost_vsock:virtio
qemu-system-x86_64: error while loading state for instance 0x0 of
device ':00:05.0/virtio-vhost_vsock'
qemu-system-x86_64: load of migration failed: Operation not permitted

This is due to the SEQPACKET feature bit.


Can you test with both source and destination supporting SEQPACKET (or both
not supporting it)?


I mean, it is better if the only variable is the dgram support.

Anyway the tests seem ok to me :-)




Stepping back and rethinking whether the event vq number should be 3 or 5,
now I think it does not matter. The tx and rx queues (whether 2 or 4 queues)
belong to vhost, but the event queue belongs to QEMU. The QEMU code
allocates an array for vhost_dev.vqs only for the tx and rx queues, so the
event queue is never in that array. That means we don't need to worry about
whether the event queue number is 3 or 5, and my tests confirmed that.
I think for the virtio spec we need to put the event queue somewhere, and it
has a position relative to the tx/rx queues. But for the vhost kernel
implementation, the event queue is a special case and not related to the
tx or rx queues.
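
For reference, this is roughly how I read the realize code (a simplified
sketch based on the existing vhost-vsock code, not the exact patch):

/* Only the tx/rx virtqueues are handed to the vhost backend; the event
 * queue is added separately and stays in QEMU, so whether it ends up as
 * the 3rd or the 5th virtqueue does not change vhost_dev.vqs. */
vvc->vhost_dev.nvqs = nvqs;   /* 2 without dgram, 4 with dgram */
vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);

/* The event queue belongs to QEMU */
vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
                                 vhost_vsock_common_handle_output);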


Yep, the important thing is that QEMU uses the right queue depending on 
whether DGRAM support has been negotiated or not.


Thanks,
Stefano




Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-09-05 Thread Jiang Wang .
On Mon, Aug 9, 2021 at 3:58 AM Stefano Garzarella  wrote:
>
> On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote:
> >On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella  
> >wrote:
> >>
> >> On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:
> >> >Datagram sockets are connectionless and unreliable.
> >> >The sender does not know the capacity of the receiver
> >> >and may send more packets than the receiver can handle.
> >> >
> >> >Add two more dedicate virtqueues for datagram sockets,
> >> >so that it will not unfairly steal resources from
> >> >stream and future connection-oriented sockets.
> >> >
> >> >Signed-off-by: Jiang Wang 
> >> >---
> >> >v1 -> v2: use qemu cmd option to control number of queues,
> >> >   removed configuration settings for dgram.
> >> >v2 -> v3: use ioctl to get features and decide number of
> >> >virt queues, instead of qemu cmd option.
> >> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument
> >> >   in vhost_vsock_common_realize to indicate dgram is supported or 
> >> > not.
> >> >
> >> > hw/virtio/vhost-user-vsock.c  |  2 +-
> >> > hw/virtio/vhost-vsock-common.c| 58 ++-
> >> > hw/virtio/vhost-vsock.c   |  5 +-
> >> > include/hw/virtio/vhost-vsock-common.h|  6 +-
> >> > include/hw/virtio/vhost-vsock.h   |  4 ++
> >> > include/standard-headers/linux/virtio_vsock.h |  1 +
> >> > 6 files changed, 69 insertions(+), 7 deletions(-)
> >> >
> >> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
> >> >index 6095ed7349..e9ec0e1c00 100644
> >> >--- a/hw/virtio/vhost-user-vsock.c
> >> >+++ b/hw/virtio/vhost-user-vsock.c
> >> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, 
> >> >Error **errp)
> >> > return;
> >> > }
> >> >
> >> >-vhost_vsock_common_realize(vdev, "vhost-user-vsock");
> >> >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
> >> >
> >> > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
> >> >
> >> >diff --git a/hw/virtio/vhost-vsock-common.c 
> >> >b/hw/virtio/vhost-vsock-common.c
> >> >index 4ad6e234ad..c78536911a 100644
> >> >--- a/hw/virtio/vhost-vsock-common.c
> >> >+++ b/hw/virtio/vhost-vsock-common.c
> >> >@@ -17,6 +17,8 @@
> >> > #include "hw/virtio/vhost-vsock.h"
> >> > #include "qemu/iov.h"
> >> > #include "monitor/monitor.h"
> >> >+#include <sys/ioctl.h>
> >> >+#include <linux/vhost.h>
> >> >
> >> > int vhost_vsock_common_start(VirtIODevice *vdev)
> >> > {
> >> >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int 
> >> >version_id)
> >> > return 0;
> >> > }
> >> >
> >> >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
> >> >+static int vhost_vsock_get_max_qps(bool enable_dgram)
> >> >+{
> >> >+uint64_t features;
> >> >+int ret;
> >> >+int fd = -1;
> >> >+
> >> >+if (!enable_dgram)
> >> >+return MAX_VQS_WITHOUT_DGRAM;
> >> >+
> >> >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY);
> >>
> >>
> >> As I said in the previous version, we cannot directly open
> >> /dev/vhost-vsock, for two reasons:
> >>
> >>1. this code is common with vhost-user-vsock which does not use
> >>/dev/vhost-vsock.
> >>
> >>2. the fd may have been passed from the management layer and qemu may
> >>not be able to directly open /dev/vhost-vsock.
> >>
> >> I think is better to move this function in hw/virtio/vhost-vsock.c,
> >> using the `vhostfd`, returning true or false if dgram is supported, then
> >> you can use it for `enable_dgram` param ...
> >>
> >
> >Yes, you are right. Now I remember you said that before but I forgot about 
> >that
> >when I changed the code. I will fix it. Sorry about that.
>
> No problem :-)
>
> >
> >> >+if (fd == -1) {
> >> >+error_report("vhost-vsock: failed to open device. %s", 
> >> >strerror(errno));
> >> >+return -1;
> >> >+}
> >> >+
> >> >+ret = ioctl(fd, VHOST_GET_FEATURES, &features);
> >> >+if (ret) {
> >> >+error_report("vhost-vsock: failed to read  device. %s", 
> >> >strerror(errno));
> >> >+qemu_close(fd);
> >> >+return ret;
> >> >+}
> >> >+
> >> >+qemu_close(fd);
> >> >+if (features & (1 << VIRTIO_VSOCK_F_DGRAM))
> >> >+return MAX_VQS_WITH_DGRAM;
> >> >+
> >> >+return MAX_VQS_WITHOUT_DGRAM;
> >> >+}
> >> >+
> >> >+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, 
> >> >bool enable_dgram)
> >> > {
> >> > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
> >> >+int nvqs = MAX_VQS_WITHOUT_DGRAM;
> >> >
> >> > virtio_init(vdev, name, VIRTIO_ID_VSOCK,
> >> > sizeof(struct virtio_vsock_config));
> >> >@@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice
> >> >*vdev, const char *name)
> >> > vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >> >vhost_vsock_common_handle_output);
> >> >
> >> >+nvqs 

Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-08-09 Thread Stefano Garzarella

On Thu, Aug 05, 2021 at 12:07:02PM -0700, Jiang Wang . wrote:

On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella  wrote:


On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:
>Datagram sockets are connectionless and unreliable.
>The sender does not know the capacity of the receiver
>and may send more packets than the receiver can handle.
>
>Add two more dedicate virtqueues for datagram sockets,
>so that it will not unfairly steal resources from
>stream and future connection-oriented sockets.
>
>Signed-off-by: Jiang Wang 
>---
>v1 -> v2: use qemu cmd option to control number of queues,
>   removed configuration settings for dgram.
>v2 -> v3: use ioctl to get features and decide number of
>virt queues, instead of qemu cmd option.
>v3 -> v4: change DGRAM feature bit value to 2. Add an argument
>   in vhost_vsock_common_realize to indicate dgram is supported or not.
>
> hw/virtio/vhost-user-vsock.c  |  2 +-
> hw/virtio/vhost-vsock-common.c| 58 ++-
> hw/virtio/vhost-vsock.c   |  5 +-
> include/hw/virtio/vhost-vsock-common.h|  6 +-
> include/hw/virtio/vhost-vsock.h   |  4 ++
> include/standard-headers/linux/virtio_vsock.h |  1 +
> 6 files changed, 69 insertions(+), 7 deletions(-)
>
>diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
>index 6095ed7349..e9ec0e1c00 100644
>--- a/hw/virtio/vhost-user-vsock.c
>+++ b/hw/virtio/vhost-user-vsock.c
>@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error 
**errp)
> return;
> }
>
>-vhost_vsock_common_realize(vdev, "vhost-user-vsock");
>+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
>
> vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
>
>diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
>index 4ad6e234ad..c78536911a 100644
>--- a/hw/virtio/vhost-vsock-common.c
>+++ b/hw/virtio/vhost-vsock-common.c
>@@ -17,6 +17,8 @@
> #include "hw/virtio/vhost-vsock.h"
> #include "qemu/iov.h"
> #include "monitor/monitor.h"
>+#include <sys/ioctl.h>
>+#include <linux/vhost.h>
>
> int vhost_vsock_common_start(VirtIODevice *vdev)
> {
>@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int 
version_id)
> return 0;
> }
>
>-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
>+static int vhost_vsock_get_max_qps(bool enable_dgram)
>+{
>+uint64_t features;
>+int ret;
>+int fd = -1;
>+
>+if (!enable_dgram)
>+return MAX_VQS_WITHOUT_DGRAM;
>+
>+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY);


As I said in the previous version, we cannot directly open
/dev/vhost-vsock, for two reasons:

   1. this code is common with vhost-user-vsock which does not use
   /dev/vhost-vsock.

   2. the fd may have been passed from the management layer and qemu may
   not be able to directly open /dev/vhost-vsock.

I think it is better to move this function into hw/virtio/vhost-vsock.c,
using the `vhostfd`, returning true or false if dgram is supported; then
you can use it for the `enable_dgram` param ...



Yes, you are right. Now I remember you said that before but I forgot about that
when I changed the code. I will fix it. Sorry about that.


No problem :-)




>+if (fd == -1) {
>+error_report("vhost-vsock: failed to open device. %s", 
strerror(errno));
>+return -1;
>+}
>+
>+ret = ioctl(fd, VHOST_GET_FEATURES, &features);
>+if (ret) {
>+error_report("vhost-vsock: failed to read  device. %s", 
strerror(errno));
>+qemu_close(fd);
>+return ret;
>+}
>+
>+qemu_close(fd);
>+if (features & (1 << VIRTIO_VSOCK_F_DGRAM))
>+return MAX_VQS_WITH_DGRAM;
>+
>+return MAX_VQS_WITHOUT_DGRAM;
>+}
>+
>+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool 
enable_dgram)
> {
> VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
>+int nvqs = MAX_VQS_WITHOUT_DGRAM;
>
> virtio_init(vdev, name, VIRTIO_ID_VSOCK,
> sizeof(struct virtio_vsock_config));
>@@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice
>*vdev, const char *name)
> vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>vhost_vsock_common_handle_output);
>
>+nvqs = vhost_vsock_get_max_qps(enable_dgram);
>+
>+if (nvqs < 0)
>+nvqs = MAX_VQS_WITHOUT_DGRAM;

... and here, if `enable_dgram` is true, you can set `nvqs = MAX_VQS_WITH_DGRAM`


sure.


>+
>+    if (nvqs == MAX_VQS_WITH_DGRAM) {
>+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+                                              vhost_vsock_common_handle_output);
>+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+                                               vhost_vsock_common_handle_output);
>+    }
>+

I'm still concerned about compatibility with guests that don't support
dgram, as I mentioned in the previous email.

I need to do some testing, but my guess is 

Re: Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-08-05 Thread Jiang Wang .
On Wed, Aug 4, 2021 at 1:13 AM Stefano Garzarella  wrote:
>
> On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:
> >Datagram sockets are connectionless and unreliable.
> >The sender does not know the capacity of the receiver
> >and may send more packets than the receiver can handle.
> >
> >Add two more dedicate virtqueues for datagram sockets,
> >so that it will not unfairly steal resources from
> >stream and future connection-oriented sockets.
> >
> >Signed-off-by: Jiang Wang 
> >---
> >v1 -> v2: use qemu cmd option to control number of queues,
> >   removed configuration settings for dgram.
> >v2 -> v3: use ioctl to get features and decide number of
> >virt queues, instead of qemu cmd option.
> >v3 -> v4: change DGRAM feature bit value to 2. Add an argument
> >   in vhost_vsock_common_realize to indicate dgram is supported or not.
> >
> > hw/virtio/vhost-user-vsock.c  |  2 +-
> > hw/virtio/vhost-vsock-common.c| 58 ++-
> > hw/virtio/vhost-vsock.c   |  5 +-
> > include/hw/virtio/vhost-vsock-common.h|  6 +-
> > include/hw/virtio/vhost-vsock.h   |  4 ++
> > include/standard-headers/linux/virtio_vsock.h |  1 +
> > 6 files changed, 69 insertions(+), 7 deletions(-)
> >
> >diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
> >index 6095ed7349..e9ec0e1c00 100644
> >--- a/hw/virtio/vhost-user-vsock.c
> >+++ b/hw/virtio/vhost-user-vsock.c
> >@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error 
> >**errp)
> > return;
> > }
> >
> >-vhost_vsock_common_realize(vdev, "vhost-user-vsock");
> >+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);
> >
> > vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
> >
> >diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
> >index 4ad6e234ad..c78536911a 100644
> >--- a/hw/virtio/vhost-vsock-common.c
> >+++ b/hw/virtio/vhost-vsock-common.c
> >@@ -17,6 +17,8 @@
> > #include "hw/virtio/vhost-vsock.h"
> > #include "qemu/iov.h"
> > #include "monitor/monitor.h"
> >+#include <sys/ioctl.h>
> >+#include <linux/vhost.h>
> >
> > int vhost_vsock_common_start(VirtIODevice *vdev)
> > {
> >@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int 
> >version_id)
> > return 0;
> > }
> >
> >-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
> >+static int vhost_vsock_get_max_qps(bool enable_dgram)
> >+{
> >+uint64_t features;
> >+int ret;
> >+int fd = -1;
> >+
> >+if (!enable_dgram)
> >+return MAX_VQS_WITHOUT_DGRAM;
> >+
> >+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY);
>
>
> As I said in the previous version, we cannot directly open
> /dev/vhost-vsock, for two reasons:
>
>1. this code is common with vhost-user-vsock which does not use
>/dev/vhost-vsock.
>
>2. the fd may have been passed from the management layer and qemu may
>not be able to directly open /dev/vhost-vsock.
>
> I think is better to move this function in hw/virtio/vhost-vsock.c,
> using the `vhostfd`, returning true or false if dgram is supported, then
> you can use it for `enable_dgram` param ...
>

Yes, you are right. Now I remember you said that before but I forgot about that
when I changed the code. I will fix it. Sorry about that.

> >+if (fd == -1) {
> >+error_report("vhost-vsock: failed to open device. %s", 
> >strerror(errno));
> >+return -1;
> >+}
> >+
> >+ret = ioctl(fd, VHOST_GET_FEATURES, &features);
> >+if (ret) {
> >+error_report("vhost-vsock: failed to read  device. %s", 
> >strerror(errno));
> >+qemu_close(fd);
> >+return ret;
> >+}
> >+
> >+qemu_close(fd);
> >+if (features & (1 << VIRTIO_VSOCK_F_DGRAM))
> >+return MAX_VQS_WITH_DGRAM;
> >+
> >+return MAX_VQS_WITHOUT_DGRAM;
> >+}
> >+
> >+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool 
> >enable_dgram)
> > {
> > VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
> >+int nvqs = MAX_VQS_WITHOUT_DGRAM;
> >
> > virtio_init(vdev, name, VIRTIO_ID_VSOCK,
> > sizeof(struct virtio_vsock_config));
> >@@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice
> >*vdev, const char *name)
> > vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >vhost_vsock_common_handle_output);
> >
> >+nvqs = vhost_vsock_get_max_qps(enable_dgram);
> >+
> >+if (nvqs < 0)
> >+nvqs = MAX_VQS_WITHOUT_DGRAM;
>
> ... and here, if `enable_dgram` is true, you can set `nvqs = MAX_VQS_WITH_DGRAM`
>
sure.

> >+
> >+if (nvqs == MAX_VQS_WITH_DGRAM) {
> >+vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >+  
> >vhost_vsock_common_handle_output);
> >+vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> >+
> 

Re: [PATCH v4] virtio/vsock: add two more queues for datagram types

2021-08-04 Thread Stefano Garzarella

On Tue, Aug 03, 2021 at 11:41:32PM +, Jiang Wang wrote:

Datagram sockets are connectionless and unreliable.
The sender does not know the capacity of the receiver
and may send more packets than the receiver can handle.

Add two more dedicated virtqueues for datagram sockets,
so that it will not unfairly steal resources from
stream and future connection-oriented sockets.

Signed-off-by: Jiang Wang 
---
v1 -> v2: use qemu cmd option to control number of queues,
removed configuration settings for dgram.
v2 -> v3: use ioctl to get features and decide number of
   virt queues, instead of qemu cmd option.
v3 -> v4: change DGRAM feature bit value to 2. Add an argument
in vhost_vsock_common_realize to indicate dgram is supported or not.

hw/virtio/vhost-user-vsock.c  |  2 +-
hw/virtio/vhost-vsock-common.c| 58 ++-
hw/virtio/vhost-vsock.c   |  5 +-
include/hw/virtio/vhost-vsock-common.h|  6 +-
include/hw/virtio/vhost-vsock.h   |  4 ++
include/standard-headers/linux/virtio_vsock.h |  1 +
6 files changed, 69 insertions(+), 7 deletions(-)

diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
index 6095ed7349..e9ec0e1c00 100644
--- a/hw/virtio/vhost-user-vsock.c
+++ b/hw/virtio/vhost-user-vsock.c
@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error 
**errp)
return;
}

-vhost_vsock_common_realize(vdev, "vhost-user-vsock");
+vhost_vsock_common_realize(vdev, "vhost-user-vsock", false);

vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);

diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
index 4ad6e234ad..c78536911a 100644
--- a/hw/virtio/vhost-vsock-common.c
+++ b/hw/virtio/vhost-vsock-common.c
@@ -17,6 +17,8 @@
#include "hw/virtio/vhost-vsock.h"
#include "qemu/iov.h"
#include "monitor/monitor.h"
+#include <sys/ioctl.h>
+#include <linux/vhost.h>

int vhost_vsock_common_start(VirtIODevice *vdev)
{
@@ -196,9 +198,39 @@ int vhost_vsock_common_post_load(void *opaque, int 
version_id)
return 0;
}

-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
+static int vhost_vsock_get_max_qps(bool enable_dgram)
+{
+uint64_t features;
+int ret;
+int fd = -1;
+
+if (!enable_dgram)
+return MAX_VQS_WITHOUT_DGRAM;
+
+fd = qemu_open_old("/dev/vhost-vsock", O_RDONLY);



As I said in the previous version, we cannot directly open 
/dev/vhost-vsock, for two reasons:


  1. this code is common with vhost-user-vsock which does not use 
  /dev/vhost-vsock.


  2. the fd may have been passed from the management layer and qemu may 
  not be able to directly open /dev/vhost-vsock.


I think it is better to move this function into hw/virtio/vhost-vsock.c,
using the `vhostfd`, returning true or false if dgram is supported; then
you can use it for the `enable_dgram` param ...
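
Something along these lines (just a sketch, the helper name is made up):

static bool vhost_vsock_dgram_supported(int vhostfd)
{
    uint64_t features;

    /* Probe the vhost device fd we already have instead of opening
     * /dev/vhost-vsock again. */
    if (ioctl(vhostfd, VHOST_GET_FEATURES, &features)) {
        error_report("vhost-vsock: failed to read device features. %s",
                     strerror(errno));
        return false;
    }

    return features & (1ULL << VIRTIO_VSOCK_F_DGRAM);
}

Then vhost_vsock_device_realize() could pass the result as the `enable_dgram`
argument to vhost_vsock_common_realize().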



+if (fd == -1) {
+error_report("vhost-vsock: failed to open device. %s", 
strerror(errno));
+return -1;
+}
+
+ret = ioctl(fd, VHOST_GET_FEATURES, &features);
+if (ret) {
+error_report("vhost-vsock: failed to read  device. %s", 
strerror(errno));
+qemu_close(fd);
+return ret;
+}
+
+qemu_close(fd);
+if (features & (1 << VIRTIO_VSOCK_F_DGRAM))
+return MAX_VQS_WITH_DGRAM;
+
+return MAX_VQS_WITHOUT_DGRAM;
+}
+
+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name, bool 
enable_dgram)
{
VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
+int nvqs = MAX_VQS_WITHOUT_DGRAM;

virtio_init(vdev, name, VIRTIO_ID_VSOCK,
sizeof(struct virtio_vsock_config));
@@ -209,12 +241,24 @@ void vhost_vsock_common_realize(VirtIODevice 
*vdev, const char *name)

vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
   vhost_vsock_common_handle_output);

+nvqs = vhost_vsock_get_max_qps(enable_dgram);
+
+if (nvqs < 0)
+nvqs = MAX_VQS_WITHOUT_DGRAM;


... and here, if `enable_dgram` is true, you can set `nvqs = MAX_VQS_WITH_DGRAM`
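
I.e. something like this (sketch):

    if (nvqs < 0) {
        /* enable_dgram already tells us what the caller supports,
         * so no extra probing is needed here */
        nvqs = enable_dgram ? MAX_VQS_WITH_DGRAM : MAX_VQS_WITHOUT_DGRAM;
    }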



+
+    if (nvqs == MAX_VQS_WITH_DGRAM) {
+        vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                              vhost_vsock_common_handle_output);
+        vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
+                                               vhost_vsock_common_handle_output);
+    }
+


I'm still concerned about compatibility with guests that don't support 
dgram, as I mentioned in the previous email.


I need to do some testing, but my guess is it won't work if the host 
(QEMU and vhost-vsock) supports it and the guest doesn't.


I still think that we should allocate an array of queues and then decide 
at runtime which one to use for events (third or fifth) after the guest 
has acked the features.



/* The event queue belongs to QEMU */
vvc->event_vq = virtio_add_queue(vdev,