On Fri, 2017-03-17 at 17:57 +0800, Ming Lei wrote:
> Given that blk_set_queue_dying() is always called in the remove path
> of a block device, and the queue will be cleaned up later, we don't
> need to worry about undoing the counter.
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index d772c221cc17..62d4967c369f 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
>       queue_flag_set(QUEUE_FLAG_DYING, q);
>       spin_unlock_irq(q->queue_lock);
>  
> -     if (q->mq_ops)
> +     if (q->mq_ops) {
>               blk_mq_wake_waiters(q);
> -     else {
> +
> +             /* block new I/O coming */
> +             blk_mq_freeze_queue_start(q);
> +     } else {
>               struct request_list *rl;
>  
>               spin_lock_irq(q->queue_lock);

Hello Ming,

The blk_freeze_queue() call in blk_cleanup_queue() waits until q_usage_counter
drops to zero. Since the above blk_mq_freeze_queue_start() call increases that
counter by one, how is blk_freeze_queue() ever expected to finish?
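
For reference, here is a rough sketch of the freeze machinery as I remember it
from block/blk-mq.c around v4.10 (paraphrased from memory, not verbatim; treat
the details as approximate):

void blk_mq_freeze_queue_start(struct request_queue *q)
{
	int freeze_depth;

	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		/*
		 * Kill q_usage_counter so that percpu_ref_tryget_live()
		 * in blk_queue_enter() fails and new I/O is blocked.
		 */
		percpu_ref_kill(&q->q_usage_counter);
		blk_mq_run_hw_queues(q, false);
	}
}

void blk_mq_freeze_queue_wait(struct request_queue *q)
{
	/* Sleep until every blk_queue_enter() reference is dropped. */
	wait_event(q->mq_freeze_wq,
		   percpu_ref_is_zero(&q->q_usage_counter));
}

blk_freeze_queue() is essentially blk_mq_freeze_queue_start() followed by
blk_mq_freeze_queue_wait(), so the wait can only complete once all
references to q_usage_counter have been dropped.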

Bart.
