3.16.51-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Tang Junhui <[email protected]>

commit c81ffa32a214c84b08900fbc9d432187bd948eba upstream.

Sequential write IOs were tested with bs=1M by FIO in writeback cache
mode. These IOs were expected to be bypassed, but actually they were not.
We debugged the code and found this in check_should_bypass():
    if (!congested &&
        mode == CACHE_MODE_WRITEBACK &&
        op_is_write(bio_op(bio)) &&
        (bio->bi_opf & REQ_SYNC))
        goto rescale;
This means that in writeback mode a write IO with the REQ_SYNC flag will
not be bypassed even though it is a large sequential IO. That is not the
correct thing to do, so this patch removes that code.
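
To make the effect concrete, here is a minimal user-space C sketch of the
bypass decision; it is not the real bcache code. The SEQ_CUTOFF value, the
struct stream bookkeeping and the should_bypass()/sync_exempt names are
simplified, hypothetical stand-ins for dc->sequential_cutoff and the logic
in check_should_bypass(), with sync_exempt modelling the REQ_SYNC special
case removed by this patch.

/*
 * Minimal, self-contained user-space sketch; NOT the real bcache code.
 * It models a simplified sequential-cutoff decision: once a stream of
 * contiguous writes has accumulated more than SEQ_CUTOFF bytes, further
 * IOs in that stream are bypassed.  "sync_exempt" stands in for the
 * REQ_SYNC special case removed by this patch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SEQ_CUTOFF (4ULL * 1024 * 1024) /* hypothetical 4 MiB cutoff */

struct stream {
        uint64_t next_sector;   /* where we expect the next IO to start */
        uint64_t seq_bytes;     /* bytes accumulated in the current run */
};

static bool should_bypass(struct stream *s, uint64_t sector, uint32_t bytes,
                          bool sync, bool sync_exempt)
{
        /* The check this patch deletes: sync writes were never bypassed. */
        if (sync_exempt && sync)
                return false;

        if (sector == s->next_sector)
                s->seq_bytes += bytes;  /* IO continues the sequential run */
        else
                s->seq_bytes = bytes;   /* new run, restart the count */

        s->next_sector = sector + bytes / 512;

        return s->seq_bytes > SEQ_CUTOFF;
}

int main(void)
{
        struct stream before = { 0 }, after = { 0 };
        const uint32_t bs = 1024 * 1024;        /* 1M writes, as in the FIO test */
        uint64_t sector = 0;

        for (int i = 0; i < 8; i++) {
                printf("IO %d: bypass before patch=%d, after patch=%d\n", i,
                       should_bypass(&before, sector, bs, true, true),
                       should_bypass(&after, sector, bs, true, false));
                sector += bs / 512;
        }
        return 0;
}

With these assumptions the "before patch" column stays 0 for every sync
write, while the "after patch" column switches to 1 once the run exceeds
the cutoff, which is the bypass behaviour the FIO test expected.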

Signed-off-by: tang.junhui <[email protected]>
Reviewed-by: Kent Overstreet <[email protected]>
Reviewed-by: Eric Wheeler <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
[bwh: Backported to 3.16: deleted code is slightly different]
Signed-off-by: Ben Hutchings <[email protected]>
---
 drivers/md/bcache/request.c | 6 ------
 1 file changed, 6 deletions(-)

--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -395,12 +395,6 @@ static bool check_should_bypass(struct c
        if (!congested && !dc->sequential_cutoff)
                goto rescale;
 
-       if (!congested &&
-           mode == CACHE_MODE_WRITEBACK &&
-           (bio->bi_rw & REQ_WRITE) &&
-           (bio->bi_rw & REQ_SYNC))
-               goto rescale;
-
        spin_lock(&dc->io_lock);
 
        hlist_for_each_entry(i, iohash(dc, bio->bi_iter.bi_sector), hash)
