Normally, if the driver is too busy to dispatch a request, the logic is
as below:

block layer:                            driver:
        __blk_mq_run_hw_queue
a.                                      blk_mq_stop_hw_queue
b.      rq added to hctx->dispatch
later:
1.      blk_mq_start_hw_queue
2.      __blk_mq_run_hw_queue

But it's possible that steps 1-2 run between a and b. And since the rq
isn't in hctx->dispatch yet, step 2 will not run the rq. The rq might
get lost if no subsequent requests kick in.

Signed-off-by: Shaohua Li <s...@fb.com>
---
 block/blk-mq.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ade8a2d..e6822a2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -772,6 +772,7 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 
 	WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask));
 
+again:
 	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
 		return;
 
@@ -853,8 +854,16 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 	 */
	if (!list_empty(&rq_list)) {
 		spin_lock(&hctx->lock);
-		list_splice(&rq_list, &hctx->dispatch);
+		list_splice_init(&rq_list, &hctx->dispatch);
 		spin_unlock(&hctx->lock);
+		/*
+		 * The queue is expected to be stopped with
+		 * BLK_MQ_RQ_QUEUE_BUSY, but it's possible the queue was
+		 * stopped and restarted again before this point. A queue
+		 * restart will dispatch requests, and since the requests
+		 * in rq_list aren't in hctx->dispatch yet, they might
+		 * get lost.
+		 */
+		goto again;
 	}
 }
 
-- 
1.8.1
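
For illustration, below is a minimal userspace model of the race. It is
a sketch, not blk-mq code: struct fake_hctx, restart() and the flow in
main() are made-up stand-ins for the hctx, for blk_mq_start_hw_queue()
plus __blk_mq_run_hw_queue(), and for the dispatch path. main() plays
steps a-b, the thread plays steps 1-2 in the window between them, and
the final re-check models the "goto again" added by the patch.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Made-up miniature of a hardware queue context. */
struct fake_hctx {
	pthread_mutex_t lock;
	bool stopped;          /* models BLK_MQ_S_STOPPED */
	int dispatch_len;      /* requests parked on hctx->dispatch */
};

static struct fake_hctx hctx = { PTHREAD_MUTEX_INITIALIZER, false, 0 };

/* Steps 1-2: restart the queue and run it. */
static void *restart(void *arg)
{
	pthread_mutex_lock(&hctx.lock);
	hctx.stopped = false;                 /* 1. blk_mq_start_hw_queue */
	if (hctx.dispatch_len)                /* 2. __blk_mq_run_hw_queue */
		printf("restart: reran parked request\n");
	else
		printf("restart: dispatch list empty, nothing to run\n");
	pthread_mutex_unlock(&hctx.lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	/* a. the driver is busy, so the queue gets stopped */
	pthread_mutex_lock(&hctx.lock);
	hctx.stopped = true;
	pthread_mutex_unlock(&hctx.lock);

	/* The race window: the restart runs before step b. */
	pthread_create(&t, NULL, restart, NULL);
	pthread_join(t, NULL);

	/* b. only now is the request parked on the dispatch list */
	pthread_mutex_lock(&hctx.lock);
	hctx.dispatch_len = 1;
	/*
	 * The fix: after parking, re-check whether the queue was
	 * restarted under us; if so, run it again instead of leaving
	 * the request stranded.
	 */
	if (!hctx.stopped)
		printf("main: queue restarted under us, run again (the fix)\n");
	pthread_mutex_unlock(&hctx.lock);
	return 0;
}

Built with e.g. "gcc -pthread race.c", this forces the bad interleaving
deterministically: the restart always sees an empty dispatch list, and
only the final re-check (the analogue of the goto) recovers the request.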