[PATCH] btrfs: qgroup: Finish rescan when hit the last leaf of extent tree

2018-05-03 Thread Qu Wenruo
Under the following case, qgroup rescan can double account cowed tree blocks: In this case, extent tree only has one tree block. - | transid=5 last committed=4 | btrfs_qgroup_rescan_worker() | |- btrfs_start_transaction() | | transid = 5 | |- qgroup_rescan_leaf() ||-
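A control-flow sketch of the idea in the subject line follows. All names are hypothetical and this is not the patch itself; it only illustrates finishing the rescan as soon as the leaf walk runs past the last leaf of the extent tree, instead of leaving the work to run again under a newer transaction and double-account blocks COWed in the meantime.

#include <stdio.h>

enum { RESCAN_MORE, RESCAN_DONE };

/* Hypothetical stand-in for the tree search: > 0 means "past the last leaf". */
static int search_next_leaf(int *leaf_index, int nr_leaves)
{
	if (*leaf_index >= nr_leaves)
		return 1;
	(*leaf_index)++;
	return 0;
}

static int rescan_one_step(int *leaf_index, int nr_leaves)
{
	int ret = search_next_leaf(leaf_index, nr_leaves);

	if (ret > 0)
		return RESCAN_DONE;	/* hit the end: finish the rescan now */
	/* ... account all extent items in this leaf ... */
	return RESCAN_MORE;
}

int main(void)
{
	int leaf = 0, nr_leaves = 1;	/* the case in the preview: one leaf only */

	while (rescan_one_step(&leaf, nr_leaves) == RESCAN_MORE)
		printf("accounted leaf %d\n", leaf);
	printf("rescan finished\n");
	return 0;
}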

Re: [PATCH] btrfs-progs: Use exclude_super_stripes instead of account_super_bytes

2018-05-03 Thread Su Yue
On Wed, May 2, 2018 at 7:55 PM Nikolay Borisov wrote: > Originally commit 2681e00f00fe ("btrfs-progs: check for matching > free space in cache") added the account_super_bytes function to prevent > false negatives when running btrfs check. Turns out this function is > really

Re: [PATCH] btrfs-progs: Use exclude_super_stripes instead of account_super_bytes

2018-05-03 Thread Su Yue
On Wed, May 2, 2018 at 9:15 PM Qu Wenruo wrote: > On 2018-05-02 20:49, Nikolay Borisov wrote: > > > > > > On 2.05.2018 15:29, Qu Wenruo wrote: > >> > >> > >> On 2018-05-02 19:52, Nikolay Borisov wrote: > >>> Originally commit 2681e00f00fe ("btrfs-progs: check for

[RFC PATCH] raid6_pq: Add module options to prefer algorithm

2018-05-03 Thread Timofey Titovets
Skip testing unnecessary algorithms to speed up module initialization. For my systems: Before: 1.510s (initrd) After: 977ms (initrd) # I set prefer to the fastest algorithm Dmesg after patch: [1.190042] raid6: avx2x4 gen() 28153 MB/s [1.246683] raid6: avx2x4 xor() 19440 MB/s [
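For readers unfamiliar with how such an option is usually exposed, here is a minimal sketch of a kernel module parameter. The parameter name prefer_algo is hypothetical; the preview does not show the option name the RFC patch actually adds, so this only shows the general mechanism.

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>

/* Hypothetical name: which gen()/xor() implementation to prefer. */
static char *prefer_algo;
module_param(prefer_algo, charp, 0444);
MODULE_PARM_DESC(prefer_algo, "Prefer this algorithm and skip benchmarking the rest");

static int __init prefer_demo_init(void)
{
	if (prefer_algo)
		pr_info("raid6 demo: preferring %s, skipping other benchmarks\n",
			prefer_algo);
	return 0;
}

static void __exit prefer_demo_exit(void)
{
}

module_init(prefer_demo_init);
module_exit(prefer_demo_exit);
MODULE_LICENSE("GPL");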

Re: [PATCH] btrfs: qgroup: Fix root item corruption when multiple same source snapshots are created with quota enabled

2018-05-03 Thread Qu Wenruo
On 2018-05-04 00:43, David Sterba wrote: > On Tue, Dec 19, 2017 at 03:44:54PM +0800, Qu Wenruo wrote: >> When multiple pending snapshots referring to the same source subvolume are >> executed, enabled quota will cause root item corruption, where root >> items are using old bytenr (no backref in

Re: [PATCH v3 0/3] btrfs: qgroup rescan races (part 1)

2018-05-03 Thread Jeff Mahoney
On 5/3/18 2:23 AM, Nikolay Borisov wrote: > > > On 3.05.2018 00:11, je...@suse.com wrote: >> From: Jeff Mahoney >> >> Hi Dave - >> >> Here's the updated patchset for the rescan races. This fixes the issue >> where we'd try to start multiple workers. It introduces a new

Re: [PATCH 00/14 RFC] Btrfs: Add journal for raid5/6 writes

2018-05-03 Thread Goffredo Baroncelli
On 08/02/2017 08:47 PM, Chris Mason wrote: >> I agree, MD pretty much needs a separate device simply because they can't >> allocate arbitrary space on the other array members.  BTRFS can do that >> though, and I would actually think that that would be _easier_ to implement >> than having a

Re: RAID56 - 6 parity raid

2018-05-03 Thread Goffredo Baroncelli
On 05/03/2018 02:47 PM, Alberto Bursi wrote: > > > On 01/05/2018 23:57, Gandalf Corvotempesta wrote: >> Hi to all >> I've found some patches from Andrea Mazzoleni that add support for up to 6 >> parity raid. >> Why weren't these merged? >> With modern disk sizes, having something greater than 2

Re: RAID56 - 6 parity raid

2018-05-03 Thread Goffredo Baroncelli
On 05/03/2018 01:26 PM, Austin S. Hemmelgarn wrote: >> My intention was to highlight that the parity-checksum is not related to the >> reliability and safety of raid5/6. > It may not be related to the safety, but it is arguably indirectly related to > the reliability, dependent on your

Re: [PATCH] btrfs: qgroup: Fix root item corruption when multiple same source snapshots are created with quota enabled

2018-05-03 Thread David Sterba
On Tue, Dec 19, 2017 at 03:44:54PM +0800, Qu Wenruo wrote: > When multiple pending snapshots referring to the same source subvolume are > executed, enabled quota will cause root item corruption, where root > items are using old bytenr (no backref in extent tree). > > This can be triggered by fstests

Re: [PATCH 1/3] btrfs: qgroups, fix rescan worker running races

2018-05-03 Thread Jeff Mahoney
On 5/3/18 11:52 AM, Nikolay Borisov wrote: > > > On 3.05.2018 16:39, Jeff Mahoney wrote: >> On 5/3/18 3:24 AM, Nikolay Borisov wrote: >>> >>> >>> On 3.05.2018 00:11, je...@suse.com wrote: From: Jeff Mahoney Commit 8d9eddad194 (Btrfs: fix qgroup rescan worker

Re: [PATCH 1/3] btrfs: qgroups, fix rescan worker running races

2018-05-03 Thread Nikolay Borisov
On 3.05.2018 16:39, Jeff Mahoney wrote: > On 5/3/18 3:24 AM, Nikolay Borisov wrote: >> >> >> On 3.05.2018 00:11, je...@suse.com wrote: >>> From: Jeff Mahoney >>> >>> Commit 8d9eddad194 (Btrfs: fix qgroup rescan worker initialization) >>> fixed the issue with

Re: [PATCH 1/3] btrfs: qgroups, fix rescan worker running races

2018-05-03 Thread Jeff Mahoney
On 5/3/18 3:24 AM, Nikolay Borisov wrote: > > > On 3.05.2018 00:11, je...@suse.com wrote: >> From: Jeff Mahoney >> >> Commit 8d9eddad194 (Btrfs: fix qgroup rescan worker initialization) >> fixed the issue with BTRFS_IOC_QUOTA_RESCAN_WAIT being racy, but >> ended up

Re: RAID56 - 6 parity raid

2018-05-03 Thread Alberto Bursi
On 01/05/2018 23:57, Gandalf Corvotempesta wrote: > Hi to all > I've found some patches from Andrea Mazzoleni that add support for up to 6 > parity raid. > Why weren't these merged? > With modern disk sizes, having something greater than 2 parity would be > great.

Re: RAID56 - 6 parity raid

2018-05-03 Thread Austin S. Hemmelgarn
On 2018-05-03 04:11, Andrei Borzenkov wrote: On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn wrote: ... Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives you 40TB of usable space). You're storing roughly 20TB of data on it, using a 16kB

Re: RAID56 - 6 parity raid

2018-05-03 Thread Austin S. Hemmelgarn
On 2018-05-02 16:40, Goffredo Baroncelli wrote: On 05/02/2018 09:29 PM, Austin S. Hemmelgarn wrote: On 2018-05-02 13:25, Goffredo Baroncelli wrote: On 05/02/2018 06:55 PM, waxhead wrote: So again, which problem would be solved by having the parity checksummed? To the best of my knowledge, nothing.

Re: [btrfs_put_block_group] WARNING: CPU: 1 PID: 14674 at fs/btrfs/disk-io.c:3675 free_fs_root+0xc2/0xd0 [btrfs]

2018-05-03 Thread Nikolay Borisov
On 3.05.2018 11:07, Anand Jain wrote: > > > On 04/19/2018 03:25 PM, Nikolay Borisov wrote: >> >> >> On 19.04.2018 08:32, Fengguang Wu wrote: >>> Hello, >>> >>> FYI this happens in the mainline kernel and at least dates back to v4.16. >>> >>> It's a rather rare error and happens when running

Re: RAID56 - 6 parity raid

2018-05-03 Thread Andrei Borzenkov
On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn wrote: ... > > Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives > you 40TB of usable space). You're storing roughly 20TB of data on it, using > a 16kB block size, and it sees about 1GB of
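To make the scale of that scenario concrete, here is a rough back-of-the-envelope calculation (my own illustration, assuming data is spread evenly across the array, not figures from the thread beyond those quoted above): with ~20TB of data on a 6-disk raid5, rebuilding one failed disk means reading roughly 20TB from the survivors, on the order of 1.2 billion 16kB blocks that all have to come back intact.

#include <stdio.h>

int main(void)
{
	const double TB    = 1e12;		/* decimal terabyte */
	const int    disks = 6;
	const double data  = 20 * TB;		/* user data stored */
	const double block = 16 * 1024;		/* 16 kB filesystem blocks */

	/* raid5: one parity strip per stripe, so data occupies n/(n-1) on disk */
	double on_disk      = data * disks / (disks - 1);
	double per_disk     = on_disk / disks;
	double rebuild_read = per_disk * (disks - 1);	/* read all survivors */

	printf("allocated per disk : %.1f TB\n", per_disk / TB);
	printf("read for rebuild   : %.1f TB (%.2e blocks of 16 kB)\n",
	       rebuild_read / TB, rebuild_read / block);
	return 0;
}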

Re: [btrfs_put_block_group] WARNING: CPU: 1 PID: 14674 at fs/btrfs/disk-io.c:3675 free_fs_root+0xc2/0xd0 [btrfs]

2018-05-03 Thread Anand Jain
On 04/19/2018 03:25 PM, Nikolay Borisov wrote: On 19.04.2018 08:32, Fengguang Wu wrote: Hello, FYI this happens in the mainline kernel and at least dates back to v4.16. It's a rather rare error and happens when running xfstests. Yeah, so this is something which only recently was

Re: [PATCH 1/3] btrfs: qgroups, fix rescan worker running races

2018-05-03 Thread Nikolay Borisov
On 3.05.2018 00:11, je...@suse.com wrote: > From: Jeff Mahoney > > Commit 8d9eddad194 (Btrfs: fix qgroup rescan worker initialization) > fixed the issue with BTRFS_IOC_QUOTA_RESCAN_WAIT being racy, but > ended up reintroducing the hang-on-unmount bug that the commit it >

[PATCH] btrfs: qgroup: Search commit root for rescan to avoid missing extent

2018-05-03 Thread Qu Wenruo
When doing qgroup rescan using the following script (modified from btrfs/017 test case), we can sometimes hit qgroup corruption. -- umount $dev &> /dev/null umount $mnt &> /dev/null mkfs.btrfs -f -n 64k $dev mount $dev $mnt extent_size=8192 xfs_io -f -d -c "pwrite 0 $extent_size" $mnt/foo

Corrupt leaf, name hash mismatch with key

2018-05-03 Thread Michał Węgrzynek
Hi! I'm running btrfs on a bcache device; until now it has performed flawlessly. Yesterday, however, during a backup, I found the following errors in the journal: BTRFS critical (device bcache0): corrupt leaf: root=1 block=367591424 slot=1 ino=10085264, name hash mismatch with key, have
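For context on what that message means: btrfs stores a hash of each directory entry's name in the item key, and the check fails when the hash recomputed from the name stored in the leaf no longer matches the key, i.e. the leaf contents are inconsistent. A standalone sketch of that check follows; the use of CRC-32C with a seed of (u32)~1 reflects my understanding of btrfs_name_hash() and should be treated as an assumption, not a reference implementation.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78, no final xor. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
	}
	return crc;
}

int main(void)
{
	const char *name = "foo";
	/* Assumed seed: btrfs name hashes appear to start from (u32)~1. */
	uint32_t hash = crc32c((uint32_t)~1, name, strlen(name));

	/*
	 * The dir-item key carries this hash; "name hash mismatch with key"
	 * means the hash recomputed from the stored name differs from the
	 * value recorded in the key.
	 */
	printf("name hash of \"%s\" = 0x%08x\n", name, hash);
	return 0;
}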

Re: [PATCH v3 0/3] btrfs: qgroup rescan races (part 1)

2018-05-03 Thread Nikolay Borisov
On 3.05.2018 00:11, je...@suse.com wrote: > From: Jeff Mahoney > > Hi Dave - > > Here's the updated patchset for the rescan races. This fixes the issue > where we'd try to start multiple workers. It introduces a new "ready" > bool that we set during initialization and clear
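A minimal userspace sketch of the "ready" flag pattern described in that cover letter follows. The structure and function names are hypothetical (this is not the kernel code); it only shows why setting the flag at the end of initialization and checking it under the same lock as the "running" state keeps two racing callers from starting two workers.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct rescan_ctx {			/* hypothetical type */
	pthread_mutex_t lock;
	bool ready;			/* initialization finished */
	bool running;			/* a worker is in flight   */
};

static void rescan_init(struct rescan_ctx *ctx)
{
	pthread_mutex_lock(&ctx->lock);
	/* ... set up progress markers, etc. ... */
	ctx->ready = true;
	pthread_mutex_unlock(&ctx->lock);
}

static int rescan_start_worker(struct rescan_ctx *ctx)
{
	int ret = 0;

	pthread_mutex_lock(&ctx->lock);
	if (!ctx->ready || ctx->running)
		ret = -1;		/* not initialized, or already running */
	else
		ctx->running = true;	/* claim the worker slot under the lock */
	pthread_mutex_unlock(&ctx->lock);
	return ret;
}

int main(void)
{
	static struct rescan_ctx ctx = { .lock = PTHREAD_MUTEX_INITIALIZER };

	printf("before init: %d\n", rescan_start_worker(&ctx));	/* -1 */
	rescan_init(&ctx);
	printf("after init : %d\n", rescan_start_worker(&ctx));	/*  0 */
	printf("second try : %d\n", rescan_start_worker(&ctx));	/* -1 */
	return 0;
}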