> From: Michael S. Tsirkin <m...@redhat.com>
> Sent: Wednesday, May 21, 2025 4:13 PM
> 
> On Wed, May 21, 2025 at 10:34:46AM +0000, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin <m...@redhat.com>
> > > Sent: Wednesday, May 21, 2025 3:46 PM
> > >
> > > On Wed, May 21, 2025 at 09:32:30AM +0000, Parav Pandit wrote:
> > > > > From: Michael S. Tsirkin <m...@redhat.com>
> > > > > Sent: Wednesday, May 21, 2025 2:49 PM
> > > > > To: Parav Pandit <pa...@nvidia.com>
> > > > > Cc: stefa...@redhat.com; ax...@kernel.dk;
> > > > > virtualizat...@lists.linux.dev; linux-bl...@vger.kernel.org;
> > > > > sta...@vger.kernel.org; NBU-Contact-Li Rongqing
> > > > > (EXTERNAL) <lirongq...@baidu.com>; Chaitanya Kulkarni
> > > > > <chaitan...@nvidia.com>; xuanz...@linux.alibaba.com;
> > > > > pbonz...@redhat.com; jasow...@redhat.com; Max Gurtovoy
> > > > > <mgurto...@nvidia.com>; Israel Rukshin <isra...@nvidia.com>
> > > > > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on
> > > > > device surprise removal
> > > > >
> > > > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote:
> > > > > > > From: Michael S. Tsirkin <m...@redhat.com>
> > > > > > > Sent: Wednesday, May 21, 2025 1:48 PM
> > > > > > >
> > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
> > > > > > > > When the PCI device is surprise removed, requests may not be
> > > > > > > > completed by the device as the VQ is marked as broken. Due to
> > > > > > > > this, the disk deletion hangs.
> > > > > > > >
> > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > >
> > > > > > > > With this fix, fio now completes swiftly.
> > > > > > > > An alternative of relying on the IO timeout was considered;
> > > > > > > > however, when the driver knows about an unresponsive block
> > > > > > > > device, swiftly clearing the requests enables users and upper
> > > > > > > > layers to react quickly.
> > > > > > > >
> > > > > > > > Verified with multiple device unplug iterations with pending
> > > > > > > > requests in the virtio used ring and some pending with the
> > > > > > > > device.
> > > > > > > >
> > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > > > > > > > Cc: sta...@vger.kernel.org
> > > > > > > > Reported-by: lirongq...@baidu.com
> > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741...@baidu.com/
> > > > > > > > Reviewed-by: Max Gurtovoy <mgurto...@nvidia.com>
> > > > > > > > Reviewed-by: Israel Rukshin <isra...@nvidia.com>
> > > > > > > > Signed-off-by: Parav Pandit <pa...@nvidia.com>
> > > > > > > > ---
> > > > > > > > changelog:
> > > > > > > > v0->v1:
> > > > > > > > - Addressed comments from Stefan to rename a cleanup function
> > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > >   in the bio layer
> > > > > > > > - Improved the cancel callback to sync with ongoing done()
> > > > > > >
> > > > > > > thanks for the patch!
> > > > > > > questions:
> > > > > > >
> > > > > > >
> > > > > > > > ---
> > > > > > > >  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > > > > > > >  1 file changed, 95 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > > > > index 7cffea01d868..5212afdbd3c7 100644
> > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > > > > > >         blk_status_t status;
> > > > > > > >         int err;
> > > > > > > >
> > > > > > > > +       /* Immediately fail all incoming requests if the vq is broken.
> > > > > > > > +        * Once the queue is unquiesced, upper block layer flushes any pending
> > > > > > > > +        * queued requests; fail them right away.
> > > > > > > > +        */
> > > > > > > > +       if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
> > > > > > > > +               return BLK_STS_IOERR;
> > > > > > > > +
> > > > > > > >         status = virtblk_prep_rq(hctx, vblk, req, vbr);
> > > > > > > >         if (unlikely(status))
> > > > > > > >                 return status;
> > > > > > >
> > > > > > > just below this:
> > > > > > >         spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
> > > > > > >         err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
> > > > > > >         if (err) {
> > > > > > >
> > > > > > >
> > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq.
> > > > > > >
> > > > > > > Why do we need to check it one extra time here?
> > > > > > >
> > > > > > It may work, but if for some reason the hw queue is stopped in
> > > > > > this flow, it can hang the flushing of IOs.
> > > > >
> > > > > > I considered it risky to rely on the error code ENOSPC returned
> > > > > > by code outside the virtio-blk driver.
> > > > > > In other words, if the lower layer changed for some reason, we may
> > > > > > end up stopping the hw queue when the VQ is broken, and requests
> > > > > > would hang.
> > > > > >
> > > > > > Compared to that, a one-time entry check seems more robust.
> > > > >
> > > > > I don't get it.
> > > > > Checking twice in a row is more robust?
> > > > No. I am not confident about relying on the error code -ENOSPC from
> > > > layers outside of the virtio-blk driver.
> > >
> > > You can rely on virtio core to return an error on a broken vq.
> > > The error won't be -ENOSPC though, why would it?
> > >
> > Presently that is not the API contract between virtio core and the driver.
> > When the VQ is broken the error code is EIO; this is from code inspection.
> 
> yes
> 
> > If you prefer to rely on code inspection of the lower layer to define the
> > virtio-blk behavior, I am fine to remove the two checks.
> > I just find it fragile, but if you prefer this way, I am fine.
> 
> I think it's better, yes.
> 
Ok. Will adapt this in v2.
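
For reference, the error path below virtblk_add_req() that v2 will rely on
looks roughly like this (a simplified sketch from code inspection of
drivers/block/virtio_blk.c; the helper structure and exact BLK_STS mapping
vary across kernel versions, so treat it as illustrative, not a contract):

	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
	err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
	if (err) {
		virtqueue_kick(vblk->vqs[qid].vq);
		/* only a full-but-working ring stops the hw queue */
		if (err == -ENOSPC)
			blk_mq_stop_hw_queue(hctx);
		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
		virtblk_unmap_data(req, vbr);
		virtblk_cleanup_cmd(req);
		/* -EIO from a broken VQ fails the request right here */
		return err == -ENOSPC ? BLK_STS_DEV_RESOURCE : BLK_STS_IOERR;
	}
	spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);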

> > > > If, for a broken VQ, ENOSPC arrives, then the hw queue is stopped
> > > > and requests could be stuck.
> > >
> > > Can you describe the scenario in more detail pls?
> > > where does ENOSPC arrive from? when does the vq get broken ...
> > >
> > ENOSPC arrives when enqueuing the request fails, in the present form of
> > the code.
> > EIO arrives when the VQ is broken.
> >
> > If, in the future, ENOSPC arrives for a broken VQ, the following flow
> > can trigger a hang.
> >
> > cpu_0:
> > virtblk_broken_device_cleanup()
> > ...
> >     blk_mq_unquiesce_queue();
> >     ...
> > stage_1:
> >     blk_mq_freeze_queue_wait().
> >
> >
> > cpu_1:
> > queue_rq()
> >   virtio_queue_rq()
> >      virtblk_add_req()
> >         -ENOSPC
> >             blk_mq_stop_hw_queue()
> >                 At this point, new requests in the block layer may get
> >                 stuck and may not be enqueued to queue_rq().
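
To make the concern concrete: the hang only appears under the hypothetical
(not current) behavior where a broken VQ surfaces -ENOSPC. In that case the
existing branch would do:

	/* hypothetical: err == -ENOSPC even though the VQ is broken */
	if (err == -ENOSPC)
		blk_mq_stop_hw_queue(hctx);

With the device gone, done() never runs, so nothing restarts the hw queue,
and cpu_0's blk_mq_freeze_queue_wait() waits forever.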
> >
> > >
> > > > > What am I missing?
> > > > > Can you describe the scenario in more detail?
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
> > > > > > > >         while ((req = rq_list_pop(rqlist))) {
> > > > > > > >                 struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
> > > > > > > >mq_hctx);
> > > > > > > >
> > > > > > > > +               if (unlikely(virtqueue_is_broken(this_vq->vq))) {
> > > > > > > > +                       rq_list_add_tail(&requeue_list, req);
> > > > > > > > +                       continue;
> > > > > > > > +               }
> > > > > > > > +
> > > > > > > >                 if (vq && vq != this_vq)
> > > > > > > >                         virtblk_add_req_batch(vq, &submit_list);
> > > > > > > >                 vq = this_vq;
> > > > > > >
> > > > > > > similarly
> > > > > > >
> > > > > > The error code does not surface up here from virtblk_add_req().
> > > > >
> > > > >
> > > > > but wait a sec:
> > > > >
> > > > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
> > > > >                 struct rq_list *rqlist)
> > > > > {
> > > > >         struct request *req;
> > > > >         unsigned long flags;
> > > > >         bool kick;
> > > > >
> > > > >         spin_lock_irqsave(&vq->lock, flags);
> > > > >
> > > > >         while ((req = rq_list_pop(rqlist))) {
> > > > >                 struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
> > > > >                 int err;
> > > > >
> > > > >                 err = virtblk_add_req(vq->vq, vbr);
> > > > >                 if (err) {
> > > > >                         virtblk_unmap_data(req, vbr);
> > > > >                         virtblk_cleanup_cmd(req);
> > > > >                         blk_mq_requeue_request(req, true);
> > > > >                 }
> > > > >         }
> > > > >
> > > > >         kick = virtqueue_kick_prepare(vq->vq);
> > > > >         spin_unlock_irqrestore(&vq->lock, flags);
> > > > >
> > > > >         if (kick)
> > > > >                 virtqueue_notify(vq->vq);
> > > > > }
> > > > >
> > > > >
> > > > > it actually handles the error internally?
> > > > >
> > > > For all the errors, it requeues the request here.
> > >
> > > ok and they will not prevent removal will they?
> > >
> > It should not prevent removal.
> > One must be careful every single time this is changed to make sure that
> > hw queues are not stopped in the lower layer, but maybe this is ok.
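
For completeness, the lifecycle on a broken VQ in the batched path, assuming
the current -EIO behavior, is roughly:

	virtio_queue_rqs()
	  virtblk_add_req_batch()
	    virtblk_add_req() -> -EIO          /* virtqueue_add_sgs on broken VQ */
	    blk_mq_requeue_request(req, true)  /* requeued, hw queue not stopped */
	...
	/* on re-dispatch the request takes the non-batched path: */
	virtio_queue_rq()
	  virtblk_add_req() -> -EIO
	  return BLK_STS_IOERR                 /* request fails, removal proceeds */

so nothing in this path should block removal.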
> >
> > >
> > > > >
> > > > >
> > > > >
> > > > > > It would end up adding a check for a special error code here as
> > > > > > well, translating broken VQ -> EIO, to abort and break the loop
> > > > > > in virtblk_add_req_batch().
> > > > > >
> > > > > > Weighing a data path based on specific error codes, which may
> > > > > > require an audit of lower layers now and in the future, an
> > > > > > explicit check for a broken VQ in this layer could be better.
> > > > > >
> > > > > > [..]
> > > > >
> > > > >
> > > > > Checking that the add was successful is preferred because it has to
> > > > > be done *anyway* - the device can get broken after you check but
> > > > > before the add.
> > > > >
> > > > > So I would like to understand why we are also checking explicitly,
> > > > > and I do not get it so far.
> > > >
> > > > Checking explicitly so as not to depend on specific error-code-based logic.
> > >
> > >
> > > I do not understand. You must handle vq add errors anyway.
> >
> > I believe removal of the two vq broken checks should also be fine.
> > I would probably add a comment in the code indicating that the virtio-blk
> > driver assumes ENOSPC is not returned for a broken VQ.
> 
> You can include this in the series if you like. Tweak to taste:
> 
Thanks for the patch. This virtio core comment update can be done at a later
point without a Fixes tag.
Will park it for later.

> -->
> 
> virtio: document ENOSPC
> 
> Drivers handle ENOSPC specially since it's an error one can get from a
> working VQ. Document the semantics.
> 
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> 
> ---
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index b784aab66867..97ab0cce527d 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -2296,6 +2296,10 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>   * at the same time (except where noted).
>   *
>   * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
> + *
> + * NB: ENOSPC is a special code that is only returned on an attempt to add a
> + * buffer to a full VQ. It indicates that some buffers are outstanding and that
> + * the operation can be retried after some buffers have been used.
>   */
>  int virtqueue_add_sgs(struct virtqueue *_vq,
>                     struct scatterlist *sgs[],
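
As an aside, the driver-side pattern this comment implies would look roughly
like the following (an illustrative sketch only; the retry and failure
policies are driver-specific):

	err = virtqueue_add_sgs(vq, sgs, out_sgs, in_sgs, data, GFP_ATOMIC);
	switch (err) {
	case 0:
		break;
	case -ENOSPC:
		/* Ring full but functional: buffers are outstanding, so
		 * retry after some are used, e.g. stop the hw queue and
		 * restart it from the done() callback.
		 */
		break;
	default:
		/* e.g. -EIO on a broken VQ: fail the request permanently,
		 * do not wait for completions that will never arrive.
		 */
		break;
	}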

