On 05/02/2015 06:31 PM, Shaohua Li wrote:
Normally, if the driver is too busy to dispatch a request, the logic is as follows:
block layer:                                    driver:
        __blk_mq_run_hw_queue
a.                                              blk_mq_stop_hw_queue
b.      rq add to ctx->dispatch

later:
1.                                              blk_mq_start_hw_queue
2.      __blk_mq_run_hw_queue

But it's possible that steps 1-2 run between a and b. Since rq isn't on
ctx->dispatch yet, step 2 will not run rq, and the rq may be lost if no
subsequent requests arrive to kick the queue.

Good catch! But the patch introduces a potentially never-ending loop in __blk_mq_run_hw_queue(). I'm not sure how we can fully close it, but it might be better to punt the re-run to the worker after adding the requests back. That would at least turn a potential busy loop (spinning until requests complete) into something with nicer behavior. Ala:

if (!test_bit(BLK_MQ_S_STOPPED, &hctx->state))
     kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
                                        &hctx->run_work, 0);


--
Jens Axboe

