On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> 
> 
> > From: Michael S. Tsirkin <m...@redhat.com>
> > Sent: 25 June 2025 12:37 AM
> > 
> > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > >
> > >
> > > > From: Stefan Hajnoczi <stefa...@redhat.com>
> > > > Sent: 25 June 2025 12:26 AM
> > > >
> > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > When the PCI device is surprise removed, requests may not be
> > > > > completed by the device as the VQ is marked as broken. Due to this,
> > > > > the disk deletion hangs.
> > > >
> > > > There are loops in the core virtio driver code that expect device
> > > > register reads to eventually return 0:
> > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > >
> > > > Is there a hang if these loops are hit when a device has been
> > > > surprise removed? I'm trying to understand whether surprise removal
> > > > is fully supported or whether this patch is one step in that direction.
> > > >
> > > I answered this in one of my previous replies to Michael, but I don't
> > > have the link handy.
> > > It is not fully supported by this patch. It will hang.
> > >
> > > This patch restores the driver to the same state it was in before the
> > > patch in the Fixes tag.
> > > Virtio stack level work is needed to fully support surprise removal,
> > > including the reset flow you rightly pointed out.
> > 
> > Have plans to do that?
> >
> I haven't given it enough thought yet.

This one is kind of pointless then? It just fixes the specific race window
that your test harness happens to hit?

Maybe it's better to wait until someone does a comprehensive fix...
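
For context, the hang Stefan refers to is the status poll in the core virtio
reset path. A simplified sketch of that pattern (not the exact mainline code;
the point is that a surprise-removed PCI device returns all-ones on config
reads, so the status never reads back as 0):

	static void vp_reset_sketch(struct virtio_pci_modern_device *mdev)
	{
		/* Writing 0 to device_status requests a device reset. */
		vp_modern_set_status(mdev, 0);

		/*
		 * Poll until the device reports the reset as complete. After
		 * surprise removal every config read returns 0xff, so this
		 * never becomes 0 and the loop spins forever.
		 */
		while (vp_modern_get_status(mdev))
			msleep(1);
	}

vp_modern_set_queue_reset() has the same shape, polling queue_reset /
queue_enable until they read back 0, so a comprehensive fix would have to
bound or break out of these polls once the device is gone.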


> > > > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > > > Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
> > > >
> > > Thanks.
> > >
> > > > >
> > > > > Fix it by aborting the requests when the VQ is broken.
> > > > >
> > > > > With this fix, fio now completes swiftly.
> > > > > An alternative of relying on the I/O timeout was considered; however,
> > > > > when the driver knows the block device is unresponsive, swiftly
> > > > > clearing the requests enables users and upper layers to react quickly.
> > > > >
> > > > > Verified with multiple device unplug iterations, with requests pending
> > > > > in the virtio used ring and some pending with the device.
> > > > >
> > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > > > > Cc: sta...@vger.kernel.org
> > > > > Reported-by: Li RongQing <lirongq...@baidu.com>
> > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741...@baidu.com/
> > > > > Reviewed-by: Max Gurtovoy <mgurto...@nvidia.com>
> > > > > Reviewed-by: Israel Rukshin <isra...@nvidia.com>
> > > > > Signed-off-by: Parav Pandit <pa...@nvidia.com>
> > > > >
> > > > > ---
> > > > > v4->v5:
> > > > > - fixed comment style so that comments start with an empty first line
> > > > > - Addressed comments from Alok
> > > > > - fixed typo in broken vq check
> > > > > v3->v4:
> > > > > - Addressed comments from Michael
> > > > > - renamed virtblk_request_cancel() to
> > > > >   virtblk_complete_request_with_ioerr()
> > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > >   virtblk_cleanup_broken_device()
> > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > - Moved the broken vq check in virtblk_remove()
> > > > > - Fixed comment style to have first empty line
> > > > > - replaced 'freezed' with 'frozen'
> > > > > - rephrased some comments
> > > > >
> > > > > v2->v3:
> > > > > - Addressed comments from Michael
> > > > > - updated comment for synchronizing with callbacks
> > > > >
> > > > > v1->v2:
> > > > > - Addressed comments from Stefan
> > > > > - fixed spelling to 'waiting'
> > > > > - Addressed comments from Michael
> > > > > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > >   because it is checked in lower layer routines in virtio core
> > > > >
> > > > > v0->v1:
> > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > - Improved logic for handling any outstanding requests
> > > > >   in bio layer
> > > > > - improved cancel callback to sync with ongoing done()
> > > > > ---
> > > > >  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 95 insertions(+)
> > > > >
> > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > index 7cffea01d868..c5e383c0ac48 100644
> > > > > --- a/drivers/block/virtio_blk.c
> > > > > +++ b/drivers/block/virtio_blk.c
> > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > >       return err;
> > > > >  }
> > > > >
> > > > > +/*
> > > > > + * If the vq is broken, the device will not complete requests.
> > > > > + * So we do it for the device.
> > > > > + */
> > > > > +static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
> > > > > +{
> > > > > +     struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > +     struct virtio_blk *vblk = data;
> > > > > +     struct virtio_blk_vq *vq;
> > > > > +     unsigned long flags;
> > > > > +
> > > > > +     vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > +
> > > > > +     spin_lock_irqsave(&vq->lock, flags);
> > > > > +
> > > > > +     vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > +     if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > > > > +             blk_mq_complete_request(rq);
> > > > > +
> > > > > +     spin_unlock_irqrestore(&vq->lock, flags);
> > > > > +     return true;
> > > > > +}
> > > > > +
> > > > > +/*
> > > > > + * If the device is broken, it will not use any buffers and waiting
> > > > > + * for that to happen is pointless. We'll do the cleanup in the driver,
> > > > > + * completing all requests for the device.
> > > > > + */
> > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
> > > > > +{
> > > > > +     struct request_queue *q = vblk->disk->queue;
> > > > > +
> > > > > +     /*
> > > > > +      * Start freezing the queue, so that new requests keep waiting at
> > > > > +      * the door of bio_queue_enter(). We cannot fully freeze the queue
> > > > > +      * because a frozen queue is an empty queue and there are pending
> > > > > +      * requests, so only start freezing it.
> > > > > +      */
> > > > > +     blk_freeze_queue_start(q);
> > > > > +
> > > > > +     /*
> > > > > +      * When quiescing completes, all ongoing dispatches have completed
> > > > > +      * and no new dispatch will happen towards the driver.
> > > > > +      */
> > > > > +     blk_mq_quiesce_queue(q);
> > > > > +
> > > > > +     /*
> > > > > +      * Synchronize with any ongoing VQ callbacks that may have started
> > > > > +      * before the VQs were marked as broken. Any outstanding requests
> > > > > +      * will be completed by virtblk_complete_request_with_ioerr().
> > > > > +      */
> > > > > +     virtio_synchronize_cbs(vblk->vdev);
> > > > > +
> > > > > +     /*
> > > > > +      * At this point, no new requests can enter queue_rq() and the
> > > > > +      * completion routine will not complete any new requests either for
> > > > > +      * the broken vq. Hence, it is safe to cancel all requests which are
> > > > > +      * started.
> > > > > +      */
> > > > > +     blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > +                             virtblk_complete_request_with_ioerr, vblk);
> > > > > +     blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > +
> > > > > +     /*
> > > > > +      * All pending requests are cleaned up. Time to resume so that disk
> > > > > +      * deletion can be smooth. Start the HW queues so that when the
> > > > > +      * queue is unquiesced, requests can again enter the driver.
> > > > > +      */
> > > > > +     blk_mq_start_stopped_hw_queues(q, true);
> > > > > +
> > > > > +     /*
> > > > > +      * Unquiescing will trigger dispatching of any pending requests that
> > > > > +      * have already crossed bio_queue_enter() to the driver.
> > > > > +      */
> > > > > +     blk_mq_unquiesce_queue(q);
> > > > > +
> > > > > +     /*
> > > > > +      * Wait for all pending dispatches to terminate which may have been
> > > > > +      * initiated after unquiescing.
> > > > > +      */
> > > > > +     blk_mq_freeze_queue_wait(q);
> > > > > +
> > > > > +     /*
> > > > > +      * Mark the disk dead so that once we unfreeze the queue, requests
> > > > > +      * waiting at the door of bio_queue_enter() can be aborted right away.
> > > > > +      */
> > > > > +     blk_mark_disk_dead(vblk->disk);
> > > > > +
> > > > > +     /* Unfreeze the queue so that any waiting requests will be aborted. */
> > > > > +     blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > +}
> > > > > +
> > > > >  static void virtblk_remove(struct virtio_device *vdev)
> > > > >  {
> > > > >       struct virtio_blk *vblk = vdev->priv;
> > > > > @@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
> > > > >       /* Make sure no work handler is accessing the device. */
> > > > >       flush_work(&vblk->config_work);
> > > > >
> > > > > +     if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > +             virtblk_cleanup_broken_device(vblk);
> > > > > +
> > > > >       del_gendisk(vblk->disk);
> > > > >       blk_mq_free_tag_set(&vblk->tag_set);
> > > > >
> > > > > --
> > > > > 2.34.1
> > > > >

