As long as the driver holds a reference on the queue's kobject, it is
safe to schedule a requeue. However, because of concurrent requeue
activity, blk_mq_kick_requeue_list() may be called after
blk_sync_queue() has completed, so the requeue work may still be
pending when the queue is freed, which triggers a kernel oops.

Move the cancellation of requeue_work into blk_mq_release() to avoid
the race between requeue and freeing the queue.
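
Roughly, the problematic ordering is the following (a simplified
sketch; it assumes blk_mq_release() is reached from the queue kobject
release path, after the last reference has been dropped):

  blk_cleanup_queue()                  driver (holds a queue reference)
    blk_sync_queue()
      cancel_delayed_work_sync(&q->requeue_work)
                                       blk_mq_kick_requeue_list(q)
                                         -> re-arms q->requeue_work
  last queue reference dropped
    blk_mq_release(q)   /* did not cancel requeue_work before this patch */
  queue memory freed
  requeue_work runs on the freed queue -> use-after-free, oops

With the cancel done in blk_mq_release(), nothing can re-arm the work
afterwards, since scheduling a requeue requires holding a queue
reference.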

Cc: Dongli Zhang <dongli.zh...@oracle.com>
Cc: James Smart <james.sm...@broadcom.com>
Cc: Bart Van Assche <bart.vanass...@wdc.com>
Cc: linux-scsi@vger.kernel.org
Cc: Martin K. Petersen <martin.peter...@oracle.com>
Cc: Christoph Hellwig <h...@lst.de>
Cc: James E. J. Bottomley <j...@linux.vnet.ibm.com>
Reviewed-by: Bart Van Assche <bvanass...@acm.org>
Reviewed-by: Johannes Thumshirn <jthumsh...@suse.de>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Reviewed-by: Christoph Hellwig <h...@lst.de>
Tested-by: James Smart <james.sm...@broadcom.com>
Signed-off-by: Ming Lei <ming....@redhat.com>
---
 block/blk-core.c | 1 -
 block/blk-mq.c   | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index a55389ba8779..93dc588fabe2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -237,7 +237,6 @@ void blk_sync_queue(struct request_queue *q)
                struct blk_mq_hw_ctx *hctx;
                int i;
 
-               cancel_delayed_work_sync(&q->requeue_work);
                queue_for_each_hw_ctx(q, hctx, i)
                        cancel_delayed_work_sync(&hctx->run_work);
        }
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fc60ed7e940e..89781309a108 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2634,6 +2634,8 @@ void blk_mq_release(struct request_queue *q)
        struct blk_mq_hw_ctx *hctx;
        unsigned int i;
 
+       cancel_delayed_work_sync(&q->requeue_work);
+
        /* hctx kobj stays in hctx */
        queue_for_each_hw_ctx(q, hctx, i) {
                if (!hctx)
-- 
2.9.5
