Ming Lei <[email protected]> writes:
> When there aren't enough vring descriptors to add a request to
> the vq, blk-mq puts the hw queue into the stopped state until
> some of the pending descriptors are completed and freed.
>
> Unfortunately, the vq's interrupt may arrive just before
> blk-mq's BLK_MQ_S_STOPPED flag is set, so the queue is left
> stopped even though many descriptors were completed and freed
> in the interrupt handler. In the worst case, all pending
> descriptors are freed in the interrupt handler, and the queue
> stays stopped forever.
>
> This patch fixes the problem by starting/stopping the blk-mq
> queue while holding vq_lock.

OK, but why the flag?  Isn't moving the
        blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);

inside the lock sufficient?

Cheers,
Rusty.
>
> Cc: Jens Axboe <[email protected]>
> Cc: Rusty Russell <[email protected]>
> Signed-off-by: Ming Lei <[email protected]>
> ---
>  drivers/block/virtio_blk.c |   13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 7a51f06..97f53ac 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -42,6 +42,9 @@ struct virtio_blk
>       /* enable config space updates */
>       bool config_enable;
>  
> +     /* if the request queue is stopped, protected by vq_lock */
> +     bool queue_stopped;
> +
>       /* What host tells us, plus 2 for header & tailer. */
>       unsigned int sg_elems;
>  
> @@ -147,11 +150,13 @@ static void virtblk_done(struct virtqueue *vq)
>               if (unlikely(virtqueue_is_broken(vq)))
>                       break;
>       } while (!virtqueue_enable_cb(vq));
> -     spin_unlock_irqrestore(&vblk->vq_lock, flags);
>  
>       /* In case queue is stopped waiting for more buffers. */
> -     if (req_done)
> +     if (req_done && vblk->queue_stopped) {
>               blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
> +             vblk->queue_stopped = false;
> +     }
> +     spin_unlock_irqrestore(&vblk->vq_lock, flags);
>  }
>  
>  static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
> @@ -205,8 +210,9 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, 
> struct request *req)
>       err = __virtblk_add_req(vblk->vq, vbr, vbr->sg, num);
>       if (err) {
>               virtqueue_kick(vblk->vq);
> -             spin_unlock_irqrestore(&vblk->vq_lock, flags);
>               blk_mq_stop_hw_queue(hctx);
> +             vblk->queue_stopped = true;
> +             spin_unlock_irqrestore(&vblk->vq_lock, flags);
>               /* Out of mem doesn't actually happen, since we fall back
>                * to direct descriptors */
>               if (err == -ENOMEM || err == -ENOSPC)
> @@ -598,6 +604,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>       vblk->disk->fops = &virtblk_fops;
>       vblk->disk->driverfs_dev = &vdev->dev;
>       vblk->index = index;
> +     vblk->queue_stopped = false;
>  
>       /* configure queue flush support */
>       virtblk_update_cache_mode(vdev);
> -- 
> 1.7.9.5