On Tue, Feb 03, 2015 at 12:56:28PM +0900, Sergey Senozhatsky wrote:
> On (02/03/15 12:02), Minchan Kim wrote:
> > On Tue, Feb 03, 2015 at 10:54:33AM +0900, Sergey Senozhatsky wrote:
> > > On (02/02/15 16:06), Sergey Senozhatsky wrote:
> > > > So, guys, how about doing it differently, in fewer lines of code,
> > > > hopefully. Don't move reset_store()'s work to zram_reset_device().
> > > > Instead, move
> > > >
> > > >	set_capacity(zram->disk, 0);
> > > >	revalidate_disk(zram->disk);
> > > >
> > > > out from zram_reset_device() to reset_store(). These two functions are
> > > > executed only when called from reset_store() anyway. This also lets us
> > > > drop the `bool reset_capacity' param from zram_reset_device().
> > > >
> > > > So in reset_store() we will do:
> > > >
> > > >	mutex_lock(&bdev->bd_mutex);
> > > >
> > > >	fsync_bdev(bdev);
> > > >	zram_reset_device(zram);
> > > >	set_capacity(zram->disk, 0);
> > > >
> > > >	mutex_unlock(&bdev->bd_mutex);
> > > >
> > > >	revalidate_disk(zram->disk);
> > > >	bdput(bdev);
> > > >
> > > > and change the zram_reset_device(zram, false) call to simply
> > > > zram_reset_device(zram) in __exit zram_exit(void).
> > >
> > > Hello,
> > >
> > > Minchan, Ganesh, I sent a patch last night, with the above solution.
> > > Does it look ok to you?
> >
> > I just sent feedback.
> >
> > Thanks.
>
> yeah, !FMODE_EXCL mode.
>
> how do you want to handle it -- do you want to send a separate patch, or
> should I send an incremental one-liner and ask Andrew to squash them?

Send a new patch based on yours. Thanks.
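For reference, here is roughly how reset_store() ends up looking once
Sergey's rearrangement and the bd_openers check from the patch below are
combined. This is only a sketch of the intended flow, not the exact posted
patch; the input parsing and error paths follow the existing driver code:

	static ssize_t reset_store(struct device *dev,
			struct device_attribute *attr, const char *buf, size_t len)
	{
		int ret;
		unsigned short do_reset;
		struct zram *zram = dev_to_zram(dev);
		struct block_device *bdev = bdget_disk(zram->disk, 0);

		if (!bdev)
			return -ENOMEM;

		mutex_lock(&bdev->bd_mutex);

		/* Do not reset an active device! (bd_holders before this patch) */
		if (bdev->bd_openers) {
			ret = -EBUSY;
			goto out;
		}

		ret = kstrtou16(buf, 10, &do_reset);
		if (ret || !do_reset) {
			ret = ret ? ret : -EINVAL;
			goto out;
		}

		/* Flush pending writes, then tear down, all under bd_mutex. */
		fsync_bdev(bdev);
		zram_reset_device(zram);
		set_capacity(zram->disk, 0);

		mutex_unlock(&bdev->bd_mutex);

		/* Re-read the (now zero) capacity outside of bd_mutex. */
		revalidate_disk(zram->disk);
		bdput(bdev);

		return len;

	out:
		mutex_unlock(&bdev->bd_mutex);
		bdput(bdev);
		return ret;
	}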
From 9da15eb638ba74d8072a1e2451c5036e8473f03a Mon Sep 17 00:00:00 2001
From: Minchan Kim <minc...@kernel.org>
Date: Tue, 3 Feb 2015 13:42:35 +0900
Subject: [PATCH] zram: check bd_openers instead of bd_holders

bd_holders is increased only when a user opens the device file with
FMODE_EXCL, so if something opens zram0 as !FMODE_EXCL and requests I/O
while another user resets zram0, we can see the following warning:

[ 30.683449] zram0: detected capacity change from 0 to 64424509440
[ 33.736869] Buffer I/O error on dev zram0, logical block 180823, lost async page write
[ 33.738814] Buffer I/O error on dev zram0, logical block 180824, lost async page write
[ 33.740654] Buffer I/O error on dev zram0, logical block 180825, lost async page write
[ 33.742551] Buffer I/O error on dev zram0, logical block 180826, lost async page write
[ 33.744153] Buffer I/O error on dev zram0, logical block 180827, lost async page write
[ 33.745807] Buffer I/O error on dev zram0, logical block 180828, lost async page write
[ 33.747419] Buffer I/O error on dev zram0, logical block 180829, lost async page write
[ 33.749060] Buffer I/O error on dev zram0, logical block 180830, lost async page write
[ 33.750687] Buffer I/O error on dev zram0, logical block 180831, lost async page write
[ 33.752286] Buffer I/O error on dev zram0, logical block 180832, lost async page write
[ 33.811590] ------------[ cut here ]------------
[ 33.812038] WARNING: CPU: 11 PID: 1996 at fs/block_dev.c:57 __blkdev_put+0x1d7/0x210()
[ 33.812817] Modules linked in:
[ 33.813142] CPU: 11 PID: 1996 Comm: dd Not tainted 3.19.0-rc6-next-20150202+ #1125
[ 33.813837] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[ 33.814525]  ffffffff81801a2d ffff880061e77db8 ffffffff815b848e 0000000000000001
[ 33.815196]  0000000000000000 ffff880061e77df8 ffffffff8104de2a 0000000000000000
[ 33.815867]  ffff88005da287f0 ffff88005da28680 ffff88005da28770 ffff88005da28698
[ 33.816536] Call Trace:
[ 33.816817]  [<ffffffff815b848e>] dump_stack+0x45/0x57
[ 33.817304]  [<ffffffff8104de2a>] warn_slowpath_common+0x8a/0xc0
[ 33.817829]  [<ffffffff8104df1a>] warn_slowpath_null+0x1a/0x20
[ 33.818331]  [<ffffffff811b60b7>] __blkdev_put+0x1d7/0x210
[ 33.818797]  [<ffffffff811b69c0>] blkdev_put+0x50/0x130
[ 33.819244]  [<ffffffff811b6b55>] blkdev_close+0x25/0x30
[ 33.819723]  [<ffffffff8118079f>] __fput+0xdf/0x1e0
[ 33.820140]  [<ffffffff811808ee>] ____fput+0xe/0x10
[ 33.820576]  [<ffffffff81068e07>] task_work_run+0xa7/0xe0
[ 33.821151]  [<ffffffff81002b89>] do_notify_resume+0x49/0x60
[ 33.821721]  [<ffffffff815bf09d>] int_signal+0x12/0x17
[ 33.822228] ---[ end trace 274fbbc5664827d2 ]---

The warning comes from bdev_write_inode() in the blkdev_put() path:

static void bdev_write_inode(struct inode *inode)
{
	spin_lock(&inode->i_lock);
	while (inode->i_state & I_DIRTY) {
		spin_unlock(&inode->i_lock);
		WARN_ON_ONCE(write_inode_now(inode, true)); <========= here
		spin_lock(&inode->i_lock);
	}
	spin_unlock(&inode->i_lock);
}

The reason is that the dd process hits I/O failures because the block
device suddenly disappears, so filemap_check_errors(), called via
__writeback_single_inode(), returns -EIO.

If we check bd_openers instead of bd_holders, we can address the
problem. brd already uses bd_openers rather than bd_holders, so
although I'm not an expert on the block layer, it seems to be the
better choice.
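To illustrate the distinction (this demo is not part of the patch): a plain
open(2) of the device node, as dd does, increments bd_openers but leaves
bd_holders at zero, because only an exclusive open (blkdev_get() with a
holder, i.e. O_EXCL from userspace) takes a holder reference. A minimal
userspace sketch of such a non-exclusive opener:

	/* Illustration only: a non-exclusive opener that bd_holders cannot see. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* No O_EXCL: bd_openers++, bd_holders stays 0. */
		int fd = open("/dev/zram0", O_WRONLY);

		if (fd < 0) {
			perror("open /dev/zram0");
			return 1;
		}

		/*
		 * While this fd is open, `echo 1 > /sys/block/zram0/reset`
		 * used to succeed (bd_holders == 0) and yank the device away;
		 * with this patch it fails with -EBUSY (bd_openers != 0).
		 */
		sleep(30);

		close(fd);
		return 0;
	}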
I can reproduce the warning with the simple script below. In addition,
I added msleep(2000) below set_capacity(zram->disk, 0) after applying
your patch, to make the race window large (kudos to Ganesh!).

script:

echo $((60<<30)) > /sys/block/zram0/disksize
setsid dd if=/dev/zero of=/dev/zram0 &
sleep 1
setsid echo 1 > /sys/block/zram0/reset

Signed-off-by: Minchan Kim <minc...@kernel.org>
---
 drivers/block/zram/zram_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index a32069f98afa..cc0e6a3ddb4f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -811,7 +811,7 @@ static ssize_t reset_store(struct device *dev,
 
 	mutex_lock(&bdev->bd_mutex);
 	/* Do not reset an active device! */
-	if (bdev->bd_holders) {
+	if (bdev->bd_openers) {
 		ret = -EBUSY;
 		goto out;
 	}
-- 
1.9.1

-- 
Kind regards,
Minchan Kim