On 03/08/2017 06:11 AM, Minchan Kim wrote:
> And could you test this patch? It avoids split bio so no need new bio
> allocations and makes zram code simple.
> 
> From f778d7564d5cd772f25bb181329362c29548a257 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minc...@kernel.org>
> Date: Wed, 8 Mar 2017 13:35:29 +0900
> Subject: [PATCH] fix
> 
> Not-yet-Signed-off-by: Minchan Kim <minc...@kernel.org>
> ---

[...]

Yup, this works here.

I did:

mkfs.xfs /dev/nvme0n1
dd if=/dev/urandom of=/test.bin bs=1M count=128
sha256sum /test.bin
mount /dev/nvme0n1 /dir
mv /test.bin /dir/
sha256sum /dir/test.bin

No panics, and the sha256sum of the 128MB test file still matches.
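For repeated regression runs, the sequence above can be scripted. A minimal sketch follows; it uses scratch temp directories in place of the real mkfs/mount steps (and a smaller file) so it runs unprivileged — swap in the actual device and mount point for a real test:

```shell
#!/bin/sh
set -e

# Scratch locations standing in for / and the mounted /dir
src=$(mktemp -d)
dst=$(mktemp -d)

# Create a random test file (use count=128 for the 128MB case)
dd if=/dev/urandom of="$src/test.bin" bs=1M count=4 2>/dev/null

# Checksum before, move onto the target filesystem, checksum after
before=$(sha256sum "$src/test.bin" | cut -d' ' -f1)
mv "$src/test.bin" "$dst/"
after=$(sha256sum "$dst/test.bin" | cut -d' ' -f1)

if [ "$before" = "$after" ]; then
    echo "sha256 matches"
else
    echo "sha256 MISMATCH" >&2
    exit 1
fi
```

The move across directories here is only a stand-in; the interesting case is when the destination sits on the filesystem backed by the device under test, so the data actually passes through the patched I/O path.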

Tested-by: Johannes Thumshirn <jthumsh...@suse.de>
Reviewed-by: Johannes Thumshirn <jthumsh...@suse.de>

Now that you removed the one-page limit in zram_bvec_rw(), you can also
apply this hunk to remove the queue splitting:

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 85f4df8..27b168f6 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -868,8 +868,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
 {
        struct zram *zram = queue->queuedata;

-       blk_queue_split(queue, &bio, queue->bio_split);
-
        if (!valid_io_request(zram, bio->bi_iter.bi_sector,
                                        bio->bi_iter.bi_size)) {
                atomic64_inc(&zram->stats.invalid_io);

Byte,
        Johannes

-- 
Johannes Thumshirn                                          Storage
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
