Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Michael S. Tsirkin
On Wed, Jul 10, 2024 at 03:54:22PM -0700, Daniel Verkamp wrote:
> On Wed, Jul 10, 2024 at 1:39 PM Michael S. Tsirkin  wrote:
> >
> > On Wed, Jul 10, 2024 at 12:58:11PM -0700, Daniel Verkamp wrote:
> > > On Wed, Jul 10, 2024 at 11:39 AM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> > > > > On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  
> > > > > wrote:
> > > > > >
> > > > > > virtio balloon communicates to the core that in some
> > > > > > configurations vq #s are non-contiguous by setting name
> > > > > > pointer to NULL.
> > > > > >
> > > > > > Unfortunately, core then turned around and just made them
> > > > > > contiguous again. Result is that driver is out of spec.
> > > > >
> > > > > Thanks for fixing this - I think the overall approach of the patch 
> > > > > looks good.
> > > > >
> > > > > > Implement what the API was supposed to do
> > > > > > in the 1st place. Compatibility with buggy hypervisors
> > > > > > is handled inside virtio-balloon, which is the only driver
> > > > > > making use of this facility, so far.
> > > > >
> > > > > In addition to virtio-balloon, I believe the same problem also affects
> > > > > the virtio-fs device, since queue 1 is only supposed to be present if
> > > > > VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> > > > > meant to be queue indexes 2 and up. From a look at the Linux driver
> > > > > (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> > > > > and assumes that request queues start at index 1 rather than 2, which
> > > > > looks out of spec to me, but the current device implementations (that
> > > > > I am aware of, anyway) are also broken in the same way, so it ends up
> > > > > working today. Queue numbering in a spec-compliant device and the
> > > > > current Linux driver would mismatch; what the driver considers to be
> > > > > the first request queue (index 1) would be ignored by the device since
> > > > > queue index 1 has no function if F_NOTIFICATION isn't negotiated.
> > > >
> > > >
> > > > Oh, thanks a lot for pointing this out!
> > > >
> > > > I see so this patch is no good as is, we need to add a workaround for
> > > > virtio-fs first.
> > > >
> > > > QEMU workaround is simple - just add an extra queue. But I did not
> > > > research how this would interact with vhost-user.
> > > >
> > > > From driver POV, I guess we could just ignore queue # 1 - would that be
> > > > ok or does it have performance implications?
> > >
> > > As a driver workaround for non-compliant devices, I think ignoring the
> > > first request queue would be a reasonable approach if the device's
> > > config advertises num_request_queues > 1. Unfortunately, both
> > > virtiofsd and crosvm's virtio-fs device have hard-coded
> > > num_request_queues = 1, so this won't help with those existing devices.
> >
> > Do they care what the vq # is though?
> > We could do some magic to translate VQ #s in qemu.
> >
> >
> > > Maybe there are other devices that we would need to consider as well;
> > > commit 529395d2ae64 ("virtio-fs: add multi-queue support") quotes
> > > benchmarks that seem to be from a different virtio-fs implementation
> > > that does support multiple request queues, so the workaround could
> > > possibly be used there.
> > >
> > > > Or do what I did for balloon here: try with spec compliant #s first,
> > > > if that fails then assume it's the spec issue and shift by 1.
> > >
> > > If there is a way to "guess and check" without breaking spec-compliant
> > > devices, that sounds reasonable too; however, I'm not sure how this
> > > would work out in practice: an existing non-compliant device may fail
> > > to start if the driver tries to enable queue index 2 when it only
> > > supports one request queue,
> >
> > You don't try to enable queue - driver starts by checking queue size.
> > The way my patch works is that it assumes a non existing queue has
> > size 0 if not available.
> >
> > This was actually a documented way to check for PCI and MMIO:
> > Read the virtqueue size from queue_size. This controls how big the 
> > virtqueue is (see 2.6 Virtqueues).
> > If this field is 0, the virtqueue does not exist.
> > MMIO:
> > If the returned value is zero (0x0) the queue is not available.
> >
> > unfortunately not for CCW, but I guess CCW implementations outside
> > of QEMU are uncommon enough that we can assume it's the same?
> >
> >
> > To me the above is also a big hint that drivers are allowed to
> > query size for queues that do not exist.
> 
> Ah, that makes total sense - detecting queue presence by non-zero
> queue size sounds good to me, and it should work in the normal virtio
> device case.
> 
> I am not sure about vhost-user, since there is no way for the
> front-end to ask the back-end for a queue's size; the confusingly
> named VHOST_USER_SET_VRING_NUM allows the front-end to configure the
> size of a queue, but there's no corresponding GET message.

Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Daniel Verkamp
On Wed, Jul 10, 2024 at 1:39 PM Michael S. Tsirkin  wrote:
>
> On Wed, Jul 10, 2024 at 12:58:11PM -0700, Daniel Verkamp wrote:
> > On Wed, Jul 10, 2024 at 11:39 AM Michael S. Tsirkin  wrote:
> > >
> > > On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> > > > On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  
> > > > wrote:
> > > > >
> > > > > virtio balloon communicates to the core that in some
> > > > > configurations vq #s are non-contiguous by setting name
> > > > > pointer to NULL.
> > > > >
> > > > > Unfortunately, core then turned around and just made them
> > > > > contiguous again. Result is that driver is out of spec.
> > > >
> > > > Thanks for fixing this - I think the overall approach of the patch 
> > > > looks good.
> > > >
> > > > > Implement what the API was supposed to do
> > > > > in the 1st place. Compatibility with buggy hypervisors
> > > > > is handled inside virtio-balloon, which is the only driver
> > > > > making use of this facility, so far.
> > > >
> > > > In addition to virtio-balloon, I believe the same problem also affects
> > > > the virtio-fs device, since queue 1 is only supposed to be present if
> > > > VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> > > > meant to be queue indexes 2 and up. From a look at the Linux driver
> > > > (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> > > > and assumes that request queues start at index 1 rather than 2, which
> > > > looks out of spec to me, but the current device implementations (that
> > > > I am aware of, anyway) are also broken in the same way, so it ends up
> > > > working today. Queue numbering in a spec-compliant device and the
> > > > current Linux driver would mismatch; what the driver considers to be
> > > > the first request queue (index 1) would be ignored by the device since
> > > > queue index 1 has no function if F_NOTIFICATION isn't negotiated.
> > >
> > >
> > > Oh, thanks a lot for pointing this out!
> > >
> > > I see so this patch is no good as is, we need to add a workaround for
> > > virtio-fs first.
> > >
> > > QEMU workaround is simple - just add an extra queue. But I did not
> > > research how this would interact with vhost-user.
> > >
> > > From driver POV, I guess we could just ignore queue # 1 - would that be
> > > ok or does it have performance implications?
> >
> > As a driver workaround for non-compliant devices, I think ignoring the
> > first request queue would be a reasonable approach if the device's
> > config advertises num_request_queues > 1. Unfortunately, both
> > virtiofsd and crosvm's virtio-fs device have hard-coded
> > num_request_queues = 1, so this won't help with those existing devices.
>
> Do they care what the vq # is though?
> We could do some magic to translate VQ #s in qemu.
>
>
> > Maybe there are other devices that we would need to consider as well;
> > commit 529395d2ae64 ("virtio-fs: add multi-queue support") quotes
> > benchmarks that seem to be from a different virtio-fs implementation
> > that does support multiple request queues, so the workaround could
> > possibly be used there.
> >
> > > Or do what I did for balloon here: try with spec compliant #s first,
> > > if that fails then assume it's the spec issue and shift by 1.
> >
> > If there is a way to "guess and check" without breaking spec-compliant
> > devices, that sounds reasonable too; however, I'm not sure how this
> > would work out in practice: an existing non-compliant device may fail
> > to start if the driver tries to enable queue index 2 when it only
> > supports one request queue,
>
> You don't try to enable queue - driver starts by checking queue size.
> The way my patch works is that it assumes a non existing queue has
> size 0 if not available.
>
> This was actually a documented way to check for PCI and MMIO:
> Read the virtqueue size from queue_size. This controls how big the 
> virtqueue is (see 2.6 Virtqueues).
> If this field is 0, the virtqueue does not exist.
> MMIO:
> If the returned value is zero (0x0) the queue is not available.
>
> unfortunately not for CCW, but I guess CCW implementations outside
> of QEMU are uncommon enough that we can assume it's the same?
>
>
> To me the above is also a big hint that drivers are allowed to
> query size for queues that do not exist.

Ah, that makes total sense - detecting queue presence by non-zero
queue size sounds good to me, and it should work in the normal virtio
device case.

I am not sure about vhost-user, since there is no way for the
front-end to ask the back-end for a queue's size; the confusingly
named VHOST_USER_SET_VRING_NUM allows the front-end to configure the
size of a queue, but there's no corresponding GET message.
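
For reference, a sketch of the payload involved: this is the existing
struct vhost_vring_state from <linux/vhost_types.h>, which is what
VHOST_USER_SET_VRING_NUM carries; the GET message does not exist in the
vhost-user spec today, so any reply format would be purely hypothetical.

/*
 * Payload of VHOST_USER_SET_VRING_NUM: the front-end tells the
 * back-end the size of a given vring. There is no GET counterpart,
 * so the front-end cannot ask what size the back-end supports.
 */
struct vhost_vring_state {
	unsigned int index;	/* which virtqueue */
	unsigned int num;	/* virtqueue size in descriptors */
};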

> > and a spec-compliant device would probably
> > balk if the driver tries to enable queue 1 but does not negotiate
> > VIRTIO_FS_F_NOTIFICATION. If there's a way to reset and retry the
> > whole virtio device initialization process if a device fails like
> > this, then maybe it's feasible. (Or can the driver tweak the virtqueue
> > configuration and try to set DRIVER_OK repeatedly until it works? It's
> > not clear to me if this is allowed by the spec, or what device
> > implementations actually do in practice in this scenario.)

Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Michael S. Tsirkin
On Wed, Jul 10, 2024 at 12:58:11PM -0700, Daniel Verkamp wrote:
> On Wed, Jul 10, 2024 at 11:39 AM Michael S. Tsirkin  wrote:
> >
> > On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> > > On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  
> > > wrote:
> > > >
> > > > virtio balloon communicates to the core that in some
> > > > configurations vq #s are non-contiguous by setting name
> > > > pointer to NULL.
> > > >
> > > > Unfortunately, core then turned around and just made them
> > > > contiguous again. Result is that driver is out of spec.
> > >
> > > Thanks for fixing this - I think the overall approach of the patch looks 
> > > good.
> > >
> > > > Implement what the API was supposed to do
> > > > in the 1st place. Compatibility with buggy hypervisors
> > > > is handled inside virtio-balloon, which is the only driver
> > > > making use of this facility, so far.
> > >
> > > In addition to virtio-balloon, I believe the same problem also affects
> > > the virtio-fs device, since queue 1 is only supposed to be present if
> > > VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> > > meant to be queue indexes 2 and up. From a look at the Linux driver
> > > (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> > > and assumes that request queues start at index 1 rather than 2, which
> > > looks out of spec to me, but the current device implementations (that
> > > I am aware of, anyway) are also broken in the same way, so it ends up
> > > working today. Queue numbering in a spec-compliant device and the
> > > current Linux driver would mismatch; what the driver considers to be
> > > the first request queue (index 1) would be ignored by the device since
> > > queue index 1 has no function if F_NOTIFICATION isn't negotiated.
> >
> >
> > Oh, thanks a lot for pointing this out!
> >
> > I see so this patch is no good as is, we need to add a workaround for
> > virtio-fs first.
> >
> > QEMU workaround is simple - just add an extra queue. But I did not
> > research how this would interact with vhost-user.
> >
> > From driver POV, I guess we could just ignore queue # 1 - would that be
> > ok or does it have performance implications?
> 
> As a driver workaround for non-compliant devices, I think ignoring the
> first request queue would be a reasonable approach if the device's
> config advertises num_request_queues > 1. Unfortunately, both
> virtiofsd and crosvm's virtio-fs device have hard-coded
> num_request_queues = 1, so this won't help with those existing devices.

Do they care what the vq # is though?
We could do some magic to translate VQ #s in qemu.


> Maybe there are other devices that we would need to consider as well;
> commit 529395d2ae64 ("virtio-fs: add multi-queue support") quotes
> benchmarks that seem to be from a different virtio-fs implementation
> that does support multiple request queues, so the workaround could
> possibly be used there.
> 
> > Or do what I did for balloon here: try with spec compliant #s first,
> > if that fails then assume it's the spec issue and shift by 1.
> 
> If there is a way to "guess and check" without breaking spec-compliant
> devices, that sounds reasonable too; however, I'm not sure how this
> would work out in practice: an existing non-compliant device may fail
> to start if the driver tries to enable queue index 2 when it only
> supports one request queue,

You don't try to enable queue - driver starts by checking queue size.
The way my patch works is that it assumes a non existing queue has
size 0 if not available.

This was actually a documented way to check for PCI and MMIO:
Read the virtqueue size from queue_size. This controls how big the 
virtqueue is (see 2.6 Virtqueues).
If this field is 0, the virtqueue does not exist.
MMIO:
If the returned value is zero (0x0) the queue is not available.

unfortunately not for CCW, but I guess CCW implementations outside
of QEMU are uncommon enough that we can assume it's the same?


To me the above is also a big hint that drivers are allowed to
query size for queues that do not exist.
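
As an illustration, a minimal sketch of that probe on the PCI modern
transport; vp_modern_get_queue_size() is the existing accessor, the
wrapper around it is invented here:

#include <linux/virtio_pci_modern.h>

/*
 * Illustrative only: per the PCI transport, queue_size reads as 0 for
 * a virtqueue that does not exist, so a driver can detect holes in the
 * vq numbering without ever enabling a queue.
 */
static bool vp_vq_present(struct virtio_pci_modern_device *mdev, u16 index)
{
	return vp_modern_get_queue_size(mdev, index) != 0;
}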



> and a spec-compliant device would probably
> balk if the driver tries to enable queue 1 but does not negotiate
> VIRTIO_FS_F_NOTIFICATION. If there's a way to reset and retry the
> whole virtio device initialization process if a device fails like
> this, then maybe it's feasible. (Or can the driver tweak the virtqueue
> configuration and try to set DRIVER_OK repeatedly until it works? It's
> not clear to me if this is allowed by the spec, or what device
> implementations actually do in practice in this scenario.)
> 
> Thanks,
> -- Daniel

My patch starts with a spec compliant behaviour. If that fails,
try non-compliant one as a fallback.
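
The shape of that fallback, as a sketch (both helpers are hypothetical
stand-ins, not the actual balloon code):

/*
 * Hypothetical outline of the probe-then-fallback logic: set up vqs
 * at their spec-compliant indexes first; only if that fails, assume a
 * buggy hypervisor that compacted the vq numbering and retry shifted.
 */
static int balloon_find_vqs_with_fallback(struct virtio_balloon *vb)
{
	int err;

	err = balloon_find_vqs_compliant(vb);	/* hypothetical helper */
	if (!err)
		return 0;
	return balloon_find_vqs_compacted(vb);	/* hypothetical helper */
}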

-- 
MST




Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Daniel Verkamp
On Wed, Jul 10, 2024 at 11:39 AM Michael S. Tsirkin  wrote:
>
> On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> > On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  wrote:
> > >
> > > virtio balloon communicates to the core that in some
> > > configurations vq #s are non-contiguous by setting name
> > > pointer to NULL.
> > >
> > > Unfortunately, core then turned around and just made them
> > > contiguous again. Result is that driver is out of spec.
> >
> > Thanks for fixing this - I think the overall approach of the patch looks 
> > good.
> >
> > > Implement what the API was supposed to do
> > > in the 1st place. Compatibility with buggy hypervisors
> > > is handled inside virtio-balloon, which is the only driver
> > > making use of this facility, so far.
> >
> > In addition to virtio-balloon, I believe the same problem also affects
> > the virtio-fs device, since queue 1 is only supposed to be present if
> > VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> > meant to be queue indexes 2 and up. From a look at the Linux driver
> > (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> > and assumes that request queues start at index 1 rather than 2, which
> > looks out of spec to me, but the current device implementations (that
> > I am aware of, anyway) are also broken in the same way, so it ends up
> > working today. Queue numbering in a spec-compliant device and the
> > current Linux driver would mismatch; what the driver considers to be
> > the first request queue (index 1) would be ignored by the device since
> > queue index 1 has no function if F_NOTIFICATION isn't negotiated.
>
>
> Oh, thanks a lot for pointing this out!
>
> I see so this patch is no good as is, we need to add a workaround for
> virtio-fs first.
>
> QEMU workaround is simple - just add an extra queue. But I did not
> research how this would interact with vhost-user.
>
> From driver POV, I guess we could just ignore queue # 1 - would that be
> ok or does it have performance implications?

As a driver workaround for non-compliant devices, I think ignoring the
first request queue would be a reasonable approach if the device's
config advertises num_request_queues > 1. Unfortunately, both
virtiofsd and crosvm's virtio-fs device have hard-coded
num_request_queues = 1, so this won't help with those existing devices.
Maybe there are other devices that we would need to consider as well;
commit 529395d2ae64 ("virtio-fs: add multi-queue support") quotes
benchmarks that seem to be from a different virtio-fs implementation
that does support multiple request queues, so the workaround could
possibly be used there.
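
A sketch of the config-space check that would gate such a workaround
(illustrative; virtio_cread() and struct virtio_fs_config are the
existing kernel interfaces, the helper itself is invented):

#include <linux/virtio_config.h>
#include <uapi/linux/virtio_fs.h>

/*
 * Illustrative only: skipping the first request queue is viable only
 * when the device advertises more than one request queue.
 */
static bool virtio_fs_can_skip_first_queue(struct virtio_device *vdev)
{
	u32 num_request_queues;

	virtio_cread(vdev, struct virtio_fs_config, num_request_queues,
		     &num_request_queues);
	return num_request_queues > 1;
}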

> Or do what I did for balloon here: try with spec compliant #s first,
> if that fails then assume it's the spec issue and shift by 1.

If there is a way to "guess and check" without breaking spec-compliant
devices, that sounds reasonable too; however, I'm not sure how this
would work out in practice: an existing non-compliant device may fail
to start if the driver tries to enable queue index 2 when it only
supports one request queue, and a spec-compliant device would probably
balk if the driver tries to enable queue 1 but does not negotiate
VIRTIO_FS_F_NOTIFICATION. If there's a way to reset and retry the
whole virtio device initialization process if a device fails like
this, then maybe it's feasible. (Or can the driver tweak the virtqueue
configuration and try to set DRIVER_OK repeatedly until it works? It's
not clear to me if this is allowed by the spec, or what device
implementations actually do in practice in this scenario.)

Thanks,
-- Daniel



Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Michael S. Tsirkin
On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  wrote:
> >
> > virtio balloon communicates to the core that in some
> > configurations vq #s are non-contiguous by setting name
> > pointer to NULL.
> >
> > Unfortunately, core then turned around and just made them
> > contiguous again. Result is that driver is out of spec.
> 
> Thanks for fixing this - I think the overall approach of the patch looks good.
> 
> > Implement what the API was supposed to do
> > in the 1st place. Compatibility with buggy hypervisors
> > is handled inside virtio-balloon, which is the only driver
> > making use of this facility, so far.
> 
> In addition to virtio-balloon, I believe the same problem also affects
> the virtio-fs device, since queue 1 is only supposed to be present if
> VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> meant to be queue indexes 2 and up. From a look at the Linux driver
> (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> and assumes that request queues start at index 1 rather than 2, which
> looks out of spec to me, but the current device implementations (that
> I am aware of, anyway) are also broken in the same way, so it ends up
> working today. Queue numbering in a spec-compliant device and the
> current Linux driver would mismatch; what the driver considers to be
> the first request queue (index 1) would be ignored by the device since
> queue index 1 has no function if F_NOTIFICATION isn't negotiated.
> 
> [...]
> > diff --git a/drivers/virtio/virtio_pci_common.c 
> > b/drivers/virtio/virtio_pci_common.c
> > index 7d82facafd75..fa606e7321ad 100644
> > --- a/drivers/virtio/virtio_pci_common.c
> > +++ b/drivers/virtio/virtio_pci_common.c
> > @@ -293,7 +293,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> > struct virtqueue_info *vqi;
> > u16 msix_vec;
> > -   int i, err, nvectors, allocated_vectors, queue_idx = 0;
> > +   int i, err, nvectors, allocated_vectors;
> >
> > vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> > if (!vp_dev->vqs)
> > @@ -332,7 +332,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > msix_vec = allocated_vectors++;
> > else
> > msix_vec = VP_MSIX_VQ_VECTOR;
> > -   vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
> > +   vqs[i] = vp_setup_vq(vdev, i, vqi->callback,
> >  vqi->name, vqi->ctx, msix_vec);
> > if (IS_ERR(vqs[i])) {
> > err = PTR_ERR(vqs[i]);
> > @@ -368,7 +368,7 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > struct virtqueue_info vqs_info[])
> >  {
> > struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> > -   int i, err, queue_idx = 0;
> > +   int i, err;
> >
> > vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> > if (!vp_dev->vqs)
> > @@ -388,8 +388,13 @@ static int vp_find_vqs_intx(struct virtio_device 
> > *vdev, unsigned int nvqs,
> > vqs[i] = NULL;
> > continue;
> > }
> > +<<<<<<< HEAD
> > vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
> >  vqi->name, vqi->ctx,
> > +=======
> > +   vqs[i] = vp_setup_vq(vdev, i, callbacks[i], names[i],
> > +ctx ? ctx[i] : false,
> > +>>>>>>> f814759f80b7... virtio: fix vq # for balloon
> 
> This still has merge markers in it.
> 
> Thanks,
> -- Daniel

ouch forgot to commit ;)




Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Michael S. Tsirkin
On Wed, Jul 10, 2024 at 11:12:34AM -0700, Daniel Verkamp wrote:
> On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  wrote:
> >
> > virtio balloon communicates to the core that in some
> > configurations vq #s are non-contiguous by setting name
> > pointer to NULL.
> >
> > Unfortunately, core then turned around and just made them
> > contiguous again. Result is that driver is out of spec.
> 
> Thanks for fixing this - I think the overall approach of the patch looks good.
> 
> > Implement what the API was supposed to do
> > in the 1st place. Compatibility with buggy hypervisors
> > is handled inside virtio-balloon, which is the only driver
> > making use of this facility, so far.
> 
> In addition to virtio-balloon, I believe the same problem also affects
> the virtio-fs device, since queue 1 is only supposed to be present if
> VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
> meant to be queue indexes 2 and up. From a look at the Linux driver
> (virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
> and assumes that request queues start at index 1 rather than 2, which
> looks out of spec to me, but the current device implementations (that
> I am aware of, anyway) are also broken in the same way, so it ends up
> working today. Queue numbering in a spec-compliant device and the
> current Linux driver would mismatch; what the driver considers to be
> the first request queue (index 1) would be ignored by the device since
> queue index 1 has no function if F_NOTIFICATION isn't negotiated.


Oh, thanks a lot for pointing this out!

I see so this patch is no good as is, we need to add a workaround for
virtio-fs first.

QEMU workaround is simple - just add an extra queue. But I did not
research how this would interact with vhost-user.

From driver POV, I guess we could just ignore queue # 1 - would that be
ok or does it have performance implications?
Or do what I did for balloon here: try with spec compliant #s first,
if that fails then assume it's the spec issue and shift by 1.


> [...]
> > diff --git a/drivers/virtio/virtio_pci_common.c 
> > b/drivers/virtio/virtio_pci_common.c
> > index 7d82facafd75..fa606e7321ad 100644
> > --- a/drivers/virtio/virtio_pci_common.c
> > +++ b/drivers/virtio/virtio_pci_common.c
> > @@ -293,7 +293,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> > struct virtqueue_info *vqi;
> > u16 msix_vec;
> > -   int i, err, nvectors, allocated_vectors, queue_idx = 0;
> > +   int i, err, nvectors, allocated_vectors;
> >
> > vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> > if (!vp_dev->vqs)
> > @@ -332,7 +332,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > msix_vec = allocated_vectors++;
> > else
> > msix_vec = VP_MSIX_VQ_VECTOR;
> > -   vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
> > +   vqs[i] = vp_setup_vq(vdev, i, vqi->callback,
> >  vqi->name, vqi->ctx, msix_vec);
> > if (IS_ERR(vqs[i])) {
> > err = PTR_ERR(vqs[i]);
> > @@ -368,7 +368,7 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, 
> > unsigned int nvqs,
> > struct virtqueue_info vqs_info[])
> >  {
> > struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> > -   int i, err, queue_idx = 0;
> > +   int i, err;
> >
> > vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> > if (!vp_dev->vqs)
> > @@ -388,8 +388,13 @@ static int vp_find_vqs_intx(struct virtio_device 
> > *vdev, unsigned int nvqs,
> > vqs[i] = NULL;
> > continue;
> > }
> > +<<<<<<< HEAD
> > vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
> >  vqi->name, vqi->ctx,
> > +=======
> > +   vqs[i] = vp_setup_vq(vdev, i, callbacks[i], names[i],
> > +ctx ? ctx[i] : false,
> > +>>>>>>> f814759f80b7... virtio: fix vq # for balloon
> 
> This still has merge markers in it.
> 
> Thanks,
> -- Daniel




Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Daniel Verkamp
On Wed, Jul 10, 2024 at 4:43 AM Michael S. Tsirkin  wrote:
>
> virtio balloon communicates to the core that in some
> configurations vq #s are non-contiguous by setting name
> pointer to NULL.
>
> Unfortunately, core then turned around and just made them
> contiguous again. Result is that driver is out of spec.

Thanks for fixing this - I think the overall approach of the patch looks good.

> Implement what the API was supposed to do
> in the 1st place. Compatibility with buggy hypervisors
> is handled inside virtio-balloon, which is the only driver
> making use of this facility, so far.

In addition to virtio-balloon, I believe the same problem also affects
the virtio-fs device, since queue 1 is only supposed to be present if
VIRTIO_FS_F_NOTIFICATION is negotiated, and the request queues are
meant to be queue indexes 2 and up. From a look at the Linux driver
(virtio_fs.c), it appears like it never acks VIRTIO_FS_F_NOTIFICATION
and assumes that request queues start at index 1 rather than 2, which
looks out of spec to me, but the current device implementations (that
I am aware of, anyway) are also broken in the same way, so it ends up
working today. Queue numbering in a spec-compliant device and the
current Linux driver would mismatch; what the driver considers to be
the first request queue (index 1) would be ignored by the device since
queue index 1 has no function if F_NOTIFICATION isn't negotiated.
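
To make the numbering mismatch concrete, a sketch of the spec's layout
(enum names invented for illustration; the Linux driver's own enum only
defines VQ_HIPRIO and VQ_REQUEST):

/* Queue indexes as the virtio-fs spec defines them: */
enum virtio_fs_spec_vq {
	VFS_VQ_HIPRIO  = 0,	/* high-priority queue */
	VFS_VQ_NOTIFY  = 1,	/* exists only if VIRTIO_FS_F_NOTIFICATION
				 * is negotiated */
	VFS_VQ_REQUEST = 2,	/* request queues are 2..n per spec */
};
/* The current Linux driver instead assumes request queues start at 1. */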

[...]
> diff --git a/drivers/virtio/virtio_pci_common.c 
> b/drivers/virtio/virtio_pci_common.c
> index 7d82facafd75..fa606e7321ad 100644
> --- a/drivers/virtio/virtio_pci_common.c
> +++ b/drivers/virtio/virtio_pci_common.c
> @@ -293,7 +293,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> unsigned int nvqs,
> struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> struct virtqueue_info *vqi;
> u16 msix_vec;
> -   int i, err, nvectors, allocated_vectors, queue_idx = 0;
> +   int i, err, nvectors, allocated_vectors;
>
> vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> if (!vp_dev->vqs)
> @@ -332,7 +332,7 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, 
> unsigned int nvqs,
> msix_vec = allocated_vectors++;
> else
> msix_vec = VP_MSIX_VQ_VECTOR;
> -   vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
> +   vqs[i] = vp_setup_vq(vdev, i, vqi->callback,
>  vqi->name, vqi->ctx, msix_vec);
> if (IS_ERR(vqs[i])) {
> err = PTR_ERR(vqs[i]);
> @@ -368,7 +368,7 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, 
> unsigned int nvqs,
> struct virtqueue_info vqs_info[])
>  {
> struct virtio_pci_device *vp_dev = to_vp_device(vdev);
> -   int i, err, queue_idx = 0;
> +   int i, err;
>
> vp_dev->vqs = kcalloc(nvqs, sizeof(*vp_dev->vqs), GFP_KERNEL);
> if (!vp_dev->vqs)
> @@ -388,8 +388,13 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, 
> unsigned int nvqs,
> vqs[i] = NULL;
> continue;
> }
> +<<<<<<< HEAD
> vqs[i] = vp_setup_vq(vdev, queue_idx++, vqi->callback,
>  vqi->name, vqi->ctx,
> +=======
> +   vqs[i] = vp_setup_vq(vdev, i, callbacks[i], names[i],
> +ctx ? ctx[i] : false,
> +>>>>>>> f814759f80b7... virtio: fix vq # for balloon

This still has merge markers in it.

Thanks,
-- Daniel



Re: [PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Mathieu Poirier
On Wed, 10 Jul 2024 at 05:43, Michael S. Tsirkin  wrote:
>
> virtio balloon communicates to the core that in some
> configurations vq #s are non-contiguous by setting name
> pointer to NULL.
>
> Unfortunately, core then turned around and just made them
> contiguous again. Result is that driver is out of spec.
>
> Implement what the API was supposed to do
> in the 1st place. Compatibility with buggy hypervisors
> is handled inside virtio-balloon, which is the only driver
> making use of this facility, so far.
>
> Message-ID: 
> Fixes: b0c504f15471 ("virtio-balloon: add support for providing free page 
> reports to host")
> Cc: "Alexander Duyck" 
> Signed-off-by: Michael S. Tsirkin 
> ---
>  arch/um/drivers/virtio_uml.c   |  4 ++--
>  drivers/remoteproc/remoteproc_virtio.c |  4 ++--

Reviewed-by: Mathieu Poirier 

>  drivers/s390/virtio/virtio_ccw.c   |  4 ++--
>  drivers/virtio/virtio_mmio.c   |  4 ++--
>  drivers/virtio/virtio_pci_common.c | 11 ++++++++---
>  drivers/virtio/virtio_vdpa.c   |  4 ++--
>  6 files changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
> index 2b6e701776b6..c903e4959f51 100644
> --- a/arch/um/drivers/virtio_uml.c
> +++ b/arch/um/drivers/virtio_uml.c
> @@ -1019,7 +1019,7 @@ static int vu_find_vqs(struct virtio_device *vdev, 
> unsigned nvqs,
>struct irq_affinity *desc)
>  {
> struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev);
> -   int i, queue_idx = 0, rc;
> +   int i, rc;
> struct virtqueue *vq;
>
> /* not supported for now */
> @@ -1038,7 +1038,7 @@ static int vu_find_vqs(struct virtio_device *vdev, 
> unsigned nvqs,
> continue;
> }
>
> -   vqs[i] = vu_setup_vq(vdev, queue_idx++, vqi->callback,
> +   vqs[i] = vu_setup_vq(vdev, i, vqi->callback,
>  vqi->name, vqi->ctx);
> if (IS_ERR(vqs[i])) {
> rc = PTR_ERR(vqs[i]);
> diff --git a/drivers/remoteproc/remoteproc_virtio.c 
> b/drivers/remoteproc/remoteproc_virtio.c
> index d3f39009b28e..1019b2825c26 100644
> --- a/drivers/remoteproc/remoteproc_virtio.c
> +++ b/drivers/remoteproc/remoteproc_virtio.c
> @@ -185,7 +185,7 @@ static int rproc_virtio_find_vqs(struct virtio_device 
> *vdev, unsigned int nvqs,
>  struct virtqueue_info vqs_info[],
>  struct irq_affinity *desc)
>  {
> -   int i, ret, queue_idx = 0;
> +   int i, ret;
>
> for (i = 0; i < nvqs; ++i) {
> struct virtqueue_info *vqi = &vqs_info[i];
> @@ -195,7 +195,7 @@ static int rproc_virtio_find_vqs(struct virtio_device 
> *vdev, unsigned int nvqs,
> continue;
> }
>
> -   vqs[i] = rp_find_vq(vdev, queue_idx++, vqi->callback,
> +   vqs[i] = rp_find_vq(vdev, i, vqi->callback,
> vqi->name, vqi->ctx);
> if (IS_ERR(vqs[i])) {
> ret = PTR_ERR(vqs[i]);
> diff --git a/drivers/s390/virtio/virtio_ccw.c 
> b/drivers/s390/virtio/virtio_ccw.c
> index 62eca9419ad7..82a3440bbabb 100644
> --- a/drivers/s390/virtio/virtio_ccw.c
> +++ b/drivers/s390/virtio/virtio_ccw.c
> @@ -694,7 +694,7 @@ static int virtio_ccw_find_vqs(struct virtio_device 
> *vdev, unsigned nvqs,
>  {
> struct virtio_ccw_device *vcdev = to_vc_device(vdev);
> dma64_t *indicatorp = NULL;
> -   int ret, i, queue_idx = 0;
> +   int ret, i;
> struct ccw1 *ccw;
> dma32_t indicatorp_dma = 0;
>
> @@ -710,7 +710,7 @@ static int virtio_ccw_find_vqs(struct virtio_device 
> *vdev, unsigned nvqs,
> continue;
> }
>
> -   vqs[i] = virtio_ccw_setup_vq(vdev, queue_idx++, vqi->callback,
> +   vqs[i] = virtio_ccw_setup_vq(vdev, i, vqi->callback,
>  vqi->name, vqi->ctx, ccw);
> if (IS_ERR(vqs[i])) {
> ret = PTR_ERR(vqs[i]);
> diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
> index 90e784e7b721..db6a0366f082 100644
> --- a/drivers/virtio/virtio_mmio.c
> +++ b/drivers/virtio/virtio_mmio.c
> @@ -494,7 +494,7 @@ static int vm_find_vqs(struct virtio_device *vdev, 
> unsigned int nvqs,
>  {
> struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
> int irq = platform_get_irq(vm_dev->pdev, 0);
> -   int i, err, queue_idx = 0;
> +   int i, err;
>
> if (irq < 0)
> return irq;
> @@ -515,7 +515,7 @@ static int vm_find_vqs(struct virtio_device *vdev, 
> unsigned int nvqs,
> continue;
> }
>
> -   vqs[i] = vm_setup_vq(vdev, queue_idx++, vqi->callback,
> +   vqs[i] = vm_setup_vq(vdev, i, vqi->callback,

[PATCH v2 2/2] virtio: fix vq # for balloon

2024-07-10 Thread Michael S. Tsirkin
virtio balloon communicates to the core that in some
configurations vq #s are non-contiguous by setting name
pointer to NULL.

Unfortunately, core then turned around and just made them
contiguous again. Result is that driver is out of spec.

Implement what the API was supposed to do
in the 1st place. Compatibility with buggy hypervisors
is handled inside virtio-balloon, which is the only driver
making use of this facility, so far.
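
As an illustration of the calling convention this refers to, a
hypothetical vqs_info array for a balloon with the free page hint vq
absent but the reporting vq present (the real driver builds this
dynamically; callback names follow virtio_balloon.c):

/*
 * Sketch only: a NULL name marks a hole, and with this fix the core
 * keeps each vq at its array index instead of compacting the numbering.
 */
static const struct virtqueue_info balloon_vqs_info[] = {
	[0] = { "inflate", balloon_ack },
	[1] = { "deflate", balloon_ack },
	[2] = { "stats", stats_request },
	[3] = { NULL },	/* free page hint vq not negotiated: keep the hole */
	[4] = { "reporting", balloon_ack },
};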

Message-ID: 
Fixes: b0c504f15471 ("virtio-balloon: add support for providing free page 
reports to host")
Cc: "Alexander Duyck" 
Signed-off-by: Michael S. Tsirkin 
---
 arch/um/drivers/virtio_uml.c   |  4 ++--
 drivers/remoteproc/remoteproc_virtio.c |  4 ++--
 drivers/s390/virtio/virtio_ccw.c   |  4 ++--
 drivers/virtio/virtio_mmio.c   |  4 ++--
 drivers/virtio/virtio_pci_common.c | 11 ++++++++---
 drivers/virtio/virtio_vdpa.c   |  4 ++--
 6 files changed, 18 insertions(+), 13 deletions(-)

diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 2b6e701776b6..c903e4959f51 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -1019,7 +1019,7 @@ static int vu_find_vqs(struct virtio_device *vdev, 
unsigned nvqs,
   struct irq_affinity *desc)
 {
struct virtio_uml_device *vu_dev = to_virtio_uml_device(vdev);
-   int i, queue_idx = 0, rc;
+   int i, rc;
struct virtqueue *vq;
 
/* not supported for now */
@@ -1038,7 +1038,7 @@ static int vu_find_vqs(struct virtio_device *vdev, 
unsigned nvqs,
continue;
}
 
-   vqs[i] = vu_setup_vq(vdev, queue_idx++, vqi->callback,
+   vqs[i] = vu_setup_vq(vdev, i, vqi->callback,
 vqi->name, vqi->ctx);
if (IS_ERR(vqs[i])) {
rc = PTR_ERR(vqs[i]);
diff --git a/drivers/remoteproc/remoteproc_virtio.c 
b/drivers/remoteproc/remoteproc_virtio.c
index d3f39009b28e..1019b2825c26 100644
--- a/drivers/remoteproc/remoteproc_virtio.c
+++ b/drivers/remoteproc/remoteproc_virtio.c
@@ -185,7 +185,7 @@ static int rproc_virtio_find_vqs(struct virtio_device 
*vdev, unsigned int nvqs,
 struct virtqueue_info vqs_info[],
 struct irq_affinity *desc)
 {
-   int i, ret, queue_idx = 0;
+   int i, ret;
 
for (i = 0; i < nvqs; ++i) {
struct virtqueue_info *vqi = &vqs_info[i];
@@ -195,7 +195,7 @@ static int rproc_virtio_find_vqs(struct virtio_device 
*vdev, unsigned int nvqs,
continue;
}
 
-   vqs[i] = rp_find_vq(vdev, queue_idx++, vqi->callback,
+   vqs[i] = rp_find_vq(vdev, i, vqi->callback,
vqi->name, vqi->ctx);
if (IS_ERR(vqs[i])) {
ret = PTR_ERR(vqs[i]);
diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 62eca9419ad7..82a3440bbabb 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -694,7 +694,7 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, 
unsigned nvqs,
 {
struct virtio_ccw_device *vcdev = to_vc_device(vdev);
dma64_t *indicatorp = NULL;
-   int ret, i, queue_idx = 0;
+   int ret, i;
struct ccw1 *ccw;
dma32_t indicatorp_dma = 0;
 
@@ -710,7 +710,7 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, 
unsigned nvqs,
continue;
}
 
-   vqs[i] = virtio_ccw_setup_vq(vdev, queue_idx++, vqi->callback,
+   vqs[i] = virtio_ccw_setup_vq(vdev, i, vqi->callback,
 vqi->name, vqi->ctx, ccw);
if (IS_ERR(vqs[i])) {
ret = PTR_ERR(vqs[i]);
diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 90e784e7b721..db6a0366f082 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -494,7 +494,7 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned 
int nvqs,
 {
struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
int irq = platform_get_irq(vm_dev->pdev, 0);
-   int i, err, queue_idx = 0;
+   int i, err;
 
if (irq < 0)
return irq;
@@ -515,7 +515,7 @@ static int vm_find_vqs(struct virtio_device *vdev, unsigned 
int nvqs,
continue;
}
 
-   vqs[i] = vm_setup_vq(vdev, queue_idx++, vqi->callback,
+   vqs[i] = vm_setup_vq(vdev, i, vqi->callback,
 vqi->name, vqi->ctx);
if (IS_ERR(vqs[i])) {
vm_del_vqs(vdev);
diff --git a/drivers/virtio/virtio_pci_common.c 
b/drivers/virtio/virtio_pci_common.c
index 7d82facafd75..fa606e7321ad 100644
--- a/drivers/virtio/virtio_pci_common.c
+++