On Fri, May 25, 2012 at 10:34:48AM +0800, Asias He wrote:
> blk_cleanup_queue() calls blk_drain_queue() to drain all requests before
> marking the queue DEAD. If we reset the device before blk_cleanup_queue(),
> the drain would fail.
> 
> 1) If the queue is stopped in do_virtblk_request() because the device is
> full, q->request_fn() will not be called, so the drain loop below never
> finishes:
> 
> blk_drain_queue() {
>    while (true) {
>       ...
>       if (!list_empty(&q->queue_head))
>         __blk_run_queue(q) {
>           if (queue is not stopped)
>               q->request_fn()
>         }
>       ...
>    }
> }
> 
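For reference, here is roughly where that stop happens; a simplified
sketch of the current do_virtblk_request() (structure assumed from the
driver as of this patch, details and error handling elided):

static void do_virtblk_request(struct request_queue *q)
{
	struct virtio_blk *vblk = q->queuedata;
	struct request *req;
	unsigned int issued = 0;

	while ((req = blk_peek_request(q)) != NULL) {
		/*
		 * Out of ring descriptors: stop the queue and rely on a
		 * completion interrupt (blk_done()) to restart it.
		 */
		if (!do_req(q, vblk, req)) {
			blk_stop_queue(q);
			break;
		}
		blk_start_request(req);
		issued++;
	}

	if (issued)
		virtqueue_kick(vblk->vq);
}

Once the queue is stopped here, only blk_done() will start it again.
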
> Not resetting the device before blk_cleanup_queue() gives the interrupt
> handler blk_done() a chance to restart the queue, so the drain can make
> progress.
> 
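And the restart side, for completeness; a simplified sketch of blk_done()
(the status-to-errno mapping below is a simplification, not the exact
driver code):

static void blk_done(struct virtqueue *vq)
{
	struct virtio_blk *vblk = vq->vdev->priv;
	struct virtblk_req *vbr;
	unsigned int len;
	unsigned long flags;

	spin_lock_irqsave(&vblk->lock, flags);
	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
		/* Simplified: the driver maps vbr->status per request type. */
		int error = vbr->status ? -EIO : 0;

		__blk_end_request_all(vbr->req, error);
		mempool_free(vbr, vblk->pool);
	}
	/* In case the queue was stopped waiting for more buffers. */
	blk_start_queue(vblk->disk->queue);
	spin_unlock_irqrestore(&vblk->lock, flags);
}

As long as the device has not been reset, this path keeps running during
the drain: it completes in-flight requests and restarts a stopped queue,
which is what blk_cleanup_queue() needs to finish.
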
> 2) In commit b79d866c8b7014a51f611a64c40546109beaf24a, we abort requests
> dispatched to the driver before blk_cleanup_queue(). There is a race if
> requests are dispatched to the driver after the abort but before the
> queue is marked DEAD. To fix this, instead of aborting the requests
> explicitly, we can simply reset the device after blk_cleanup_queue(), so
> that the device completes all outstanding requests during the drain,
> before the queue is marked DEAD.
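To make the race in 2) concrete, the pre-patch ordering looks roughly
like this (taken from the code this patch removes, comments added):

	/* pre-patch virtblk_remove(), simplified */
	vdev->config->reset(vdev);	/* device stops completing requests */

	/* Abort requests already dispatched to the driver. */
	spin_lock_irqsave(&vblk->lock, flags);
	while ((vbr = virtqueue_detach_unused_buf(vblk->vq))) {
		__blk_end_request_all(vbr->req, -EIO);
		mempool_free(vbr, vblk->pool);
	}
	spin_unlock_irqrestore(&vblk->lock, flags);

	/*
	 * Window: the queue is not DEAD yet, so do_virtblk_request() can
	 * still dispatch new requests here.  The device has already been
	 * reset, so nothing will ever complete them.
	 */
	blk_cleanup_queue(vblk->disk->queue);

Moving the reset after blk_cleanup_queue() closes this window: anything
dispatched during the drain is still completed by the live device.
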
> 
> Cc: Rusty Russell <ru...@rustcorp.com.au>
> Cc: "Michael S. Tsirkin" <m...@redhat.com>
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: kvm@vger.kernel.org
> Signed-off-by: Asias He <as...@redhat.com>
> ---
>  drivers/block/virtio_blk.c |   12 +-----------
>  1 file changed, 1 insertion(+), 11 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 1bed517..b4fa2d7 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -576,8 +576,6 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>  {
>       struct virtio_blk *vblk = vdev->priv;
>       int index = vblk->index;
> -     struct virtblk_req *vbr;
> -     unsigned long flags;
>  
>       /* Prevent config work handler from accessing the device. */
>       mutex_lock(&vblk->config_lock);
> @@ -585,21 +583,13 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>       mutex_unlock(&vblk->config_lock);
>  
>       del_gendisk(vblk->disk);
> +     blk_cleanup_queue(vblk->disk->queue);

The device has not been reset yet at this point, so it will keep
processing requests in parallel with blk_cleanup_queue().
Is this a problem? Why not?


>  
>       /* Stop all the virtqueues. */
>       vdev->config->reset(vdev);
>  
>       flush_work(&vblk->config_work);
>  
> -     /* Abort requests dispatched to driver. */
> -     spin_lock_irqsave(&vblk->lock, flags);
> -     while ((vbr = virtqueue_detach_unused_buf(vblk->vq))) {
> -             __blk_end_request_all(vbr->req, -EIO);
> -             mempool_free(vbr, vblk->pool);
> -     }
> -     spin_unlock_irqrestore(&vblk->lock, flags);
> -
> -     blk_cleanup_queue(vblk->disk->queue);
>       put_disk(vblk->disk);
>       mempool_destroy(vblk->pool);
>       vdev->config->del_vqs(vdev);
> -- 
> 1.7.10.2