On 14.12.23 at 20:53, Stefan Hajnoczi wrote:
> 
> I will still try the other approach that Hanna and Paolo have suggested.
> It seems more palatable. I will send a v2.
> 

FYI, what I already tried downstream (for VirtIO SCSI):

> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 9c751bf296..a6449b04d0 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -1166,6 +1166,8 @@ static void virtio_scsi_drained_end(SCSIBus *bus)
>  
>      for (uint32_t i = 0; i < total_queues; i++) {
>          VirtQueue *vq = virtio_get_queue(vdev, i);
> +        virtio_queue_set_notification(vq, 1);
> +        virtio_queue_notify(vdev, i);
>          virtio_queue_aio_attach_host_notifier(vq, s->ctx);
>      }
>  }
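
For context on the two added calls, as far as I understand it: while
drained, guest notifications are suppressed, so requests can pile up
in the virtqueue without the host notifier ever firing. Re-attaching
the notifier alone is then not enough; notification has to be
re-enabled and the queue kicked once manually. Below is a tiny
stand-alone sketch of that pattern with a plain Linux eventfd; this is
a toy illustration, not QEMU code:

/* Toy illustration (plain Linux eventfd, not QEMU code): pending
 * work without a pending notification needs a manual kick. */
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void)
{
    int efd = eventfd(0, EFD_NONBLOCK);
    uint64_t val = 1;
    struct pollfd pfd = { .fd = efd, .events = POLLIN };

    if (efd < 0) {
        perror("eventfd");
        return 1;
    }

    /* A request was queued while "drained": no kick was sent because
     * notifications were suppressed, so the notifier stays silent. */
    int queued_requests = 1;

    /* Re-attaching alone sees nothing pending... */
    printf("pending after re-attach: %d\n", poll(&pfd, 1, 0)); /* 0 */

    /* ...so kick manually, like virtio_queue_notify() in the diff. */
    if (queued_requests > 0 && write(efd, &val, sizeof(val)) < 0) {
        perror("write");
    }
    printf("pending after manual kick: %d\n", poll(&pfd, 1, 0)); /* 1 */

    close(efd);
    return 0;
}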

But this introduces an issue where e.g. a 'backup' QMP command puts
the iothread into a bad state: after the command finishes, whenever
the guest issues I/O, the thread temporarily spikes to 100% CPU usage.
Issuing QMP stop+cont is a way to make it go back to normal.

I think it's because of nested drains: when I additionally check that
the drain count is zero and only execute the loop in that case, the
issue doesn't seem to manifest, i.e.:

> diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> index 9c751bf296..d22c586b38 100644
> --- a/hw/scsi/virtio-scsi.c
> +++ b/hw/scsi/virtio-scsi.c
> @@ -1164,9 +1164,13 @@ static void virtio_scsi_drained_end(SCSIBus *bus)
>          return;
>      }
>  
> -    for (uint32_t i = 0; i < total_queues; i++) {
> -        VirtQueue *vq = virtio_get_queue(vdev, i);
> -        virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> +    if (s->bus.drain_count == 0) {
> +        for (uint32_t i = 0; i < total_queues; i++) {
> +            VirtQueue *vq = virtio_get_queue(vdev, i);
> +            virtio_queue_set_notification(vq, 1);
> +            virtio_queue_notify(vdev, i);
> +            virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> +        }
>      }
>  }
>  
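
To spell out the nested-drain theory with a toy model (the names
drained_begin/drained_end/drain_count below are hypothetical and only
mirror, not reproduce, the actual hw/scsi/scsi-bus.c logic): if the
drained_end hook re-attaches the notifier and kicks the queue while an
outer drain section is still active, the queue is woken up in a state
it should not run in. Guarding with drain_count == 0 delays the
re-attach until the outermost section ends:

/* Toy model of nested drained sections (hypothetical names, not the
 * actual QEMU drain implementation). */
#include <assert.h>
#include <stdio.h>

static int drain_count;

static void drained_begin(void)
{
    if (drain_count++ == 0) {
        printf("detach host notifier\n");
    }
}

static void drained_end(void)
{
    assert(drain_count > 0);
    drain_count--;
    /* An unguarded variant would re-attach here even when an outer
     * drain section is still active. */
    if (drain_count == 0) {
        printf("re-enable notification, kick, attach host notifier\n");
    } else {
        printf("still drained (count=%d), do nothing yet\n",
               drain_count);
    }
}

int main(void)
{
    drained_begin();  /* outer drain, e.g. from the backup job */
    drained_begin();  /* nested drain */
    drained_end();    /* inner end: count is still 1, stay detached */
    drained_end();    /* outer end: now safe to re-attach and kick */
    return 0;
}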

Best Regards,
Fiona

