With the issue/complete and timeout paths now using the generation
number and state-based synchronization, blk_abort_request() is the
only path left that depends on REQ_ATOM_COMPLETE for arbitrating
completion.

There's no reason for blk_abort_request() to be a completely separate
path.  This patch makes blk_abort_request() piggyback on the timeout
path instead of trying to terminate the request directly.
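
Concretely, on blk-mq the abort path now just nudges the timeout
machinery, roughly as in the blk-timeout.c hunk below:

	/* blk-mq: let the regular timeout path terminate the request */
	req->deadline = jiffies;		/* expire immediately */
	mod_timer(&req->q->timeout, 0);		/* kick the timeout scan now */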

This removes the last dependency on REQ_ATOM_COMPLETE in blk-mq.

Note that this makes blk_abort_request() asynchronous - it initiates
abortion but the actual termination will happen after a short while,
even when the caller owns the request.  AFAICS, SCSI and ATA should be
fine with that, and I think mtip32xx and dasd should be safe too, but
I'm not completely sure.  It'd be great if people who know those
drivers could take a look.
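
For illustration only, a caller which used to assume that the request
was terminated on return would now have to wait for the termination to
actually happen, along these lines (hypothetical driver code, names
made up, not part of this patch):

	static void mydrv_cancel_io(struct mydrv_cmd *cmd)
	{
		/* initiates abortion; termination comes from the timeout path */
		blk_abort_request(cmd->rq);

		/*
		 * The request is not terminated yet at this point.  Wait
		 * for the driver's timeout/completion handling to signal
		 * that it has actually finished.
		 */
		wait_for_completion(&cmd->abort_done);
	}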

v2: - Add comment explaining the lack of synchronization around
      ->deadline update as requested by Bart.

Signed-off-by: Tejun Heo <t...@kernel.org>
Cc: Asai Thambi SP <asamymuth...@micron.com>
Cc: Stefan Haberland <s...@linux.vnet.ibm.com>
Cc: Jan Hoeppner <hoepp...@linux.vnet.ibm.com>
Cc: Bart Van Assche <bart.vanass...@wdc.com>
---
 block/blk-mq.c      |  2 +-
 block/blk-mq.h      |  2 --
 block/blk-timeout.c | 13 +++++++++----
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 51e9704..b419746 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -820,7 +820,7 @@ struct blk_mq_timeout_data {
        unsigned int nr_expired;
 };
 
-void blk_mq_rq_timed_out(struct request *req, bool reserved)
+static void blk_mq_rq_timed_out(struct request *req, bool reserved)
 {
        const struct blk_mq_ops *ops = req->q->mq_ops;
        enum blk_eh_timer_return ret = BLK_EH_RESET_TIMER;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index cf01f6f..6b2d616 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -94,8 +94,6 @@ extern int blk_mq_sysfs_register(struct request_queue *q);
 extern void blk_mq_sysfs_unregister(struct request_queue *q);
 extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
 
-extern void blk_mq_rq_timed_out(struct request *req, bool reserved);
-
 void blk_mq_release(struct request_queue *q);
 
 /**
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index 6427be7..4f04cd1 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -156,12 +156,17 @@ void blk_timeout_work(struct work_struct *work)
  */
 void blk_abort_request(struct request *req)
 {
-       if (blk_mark_rq_complete(req))
-               return;
-
        if (req->q->mq_ops) {
-               blk_mq_rq_timed_out(req, false);
+               /*
+                * All we need to ensure is that timeout scan takes place
+                * immediately and that scan sees the new timeout value.
+                * No need for fancy synchronizations.
+                */
+               req->deadline = jiffies;
+               mod_timer(&req->q->timeout, 0);
        } else {
+               if (blk_mark_rq_complete(req))
+                       return;
                blk_delete_timer(req);
                blk_rq_timed_out(req);
        }
-- 
2.9.5
