3.16.42-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Bart Van Assche <[email protected]>

commit d15bb3a6467e102e60d954aadda5fb19ce6fd8ec upstream.

The queue lock must be held when calling blk_run_queue_async(), to avoid
triggering a race between blk_run_queue_async() and blk_cleanup_queue().

Signed-off-by: Bart Van Assche <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
[bwh: Backported to 3.16: adjust filename]
Signed-off-by: Ben Hutchings <[email protected]>
---
 drivers/md/dm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -868,6 +868,9 @@ static void end_clone_bio(struct bio *cl
  */
 static void rq_completed(struct mapped_device *md, int rw, int run_queue)
 {
+       struct request_queue *q = md->queue;
+       unsigned long flags;
+
        atomic_dec(&md->pending[rw]);
 
        /* nudge anyone waiting on suspend queue */
@@ -880,8 +883,11 @@ static void rq_completed(struct mapped_d
         * back into ->request_fn() could deadlock attempting to grab the
         * queue lock again.
         */
-       if (run_queue)
-               blk_run_queue_async(md->queue);
+       if (run_queue) {
+               spin_lock_irqsave(q->queue_lock, flags);
+               blk_run_queue_async(q);
+               spin_unlock_irqrestore(q->queue_lock, flags);
+       }
 
        /*
         * dm_put() must be at the end of this function. See the comment above
