Re: Fwd: Confusion about snapshots containers

2017-03-29 Thread Duncan
Tim Cuthbertson posted on Wed, 29 Mar 2017 18:20:52 -0500 as excerpted: > So, another question... > > Do I then leave the top level mounted all the time for snapshots, or > should I create them, send them to external storage, and umount until > next time? Keep in mind that because snapshots

Re: [PATCH v3 2/5] btrfs: scrub: Fix RAID56 recovery race condition

2017-03-29 Thread Qu Wenruo
At 03/30/2017 08:51 AM, Liu Bo wrote: On Wed, Mar 29, 2017 at 09:33:19AM +0800, Qu Wenruo wrote: [...] Reported-by: Goffredo Baroncelli Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 14 ++ 1 file changed, 14 insertions(+) diff

Re: [PATCH v3 1/5] btrfs: scrub: Introduce full stripe lock for RAID56

2017-03-29 Thread Qu Wenruo
At 03/30/2017 08:34 AM, Liu Bo wrote: On Wed, Mar 29, 2017 at 09:33:18AM +0800, Qu Wenruo wrote: Unlike mirror-based profiles, RAID5/6 recovery needs to read out the whole full stripe. And if we don't have proper protection, it can easily cause a race condition. Introduce 2 new functions:
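As a purely hypothetical illustration of the per-full-stripe locking idea discussed in this thread (the identifiers and call signatures below are not the actual btrfs ones from the patch):

#include <linux/mutex.h>
#include <linux/types.h>

struct example_full_stripe_lock {
        u64 logical_start;      /* key: the full stripe's logical start */
        struct mutex mutex;
};

/* Hypothetical lookup of the lock object covering this full stripe. */
struct example_full_stripe_lock *example_get_full_stripe_lock(u64 logical_start);

static void example_recover_full_stripe(u64 logical_start)
{
        struct example_full_stripe_lock *l;

        l = example_get_full_stripe_lock(logical_start);

        /* Serialize every path that reads the data and parity stripes of
         * this full stripe and may rewrite the parity, so concurrent
         * recovery cannot compute parity from a half-updated stripe. */
        mutex_lock(&l->mutex);
        /* ... read all stripes, verify checksums, rebuild, write back ... */
        mutex_unlock(&l->mutex);
}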

Re: [PATCH v3 2/5] btrfs: scrub: Fix RAID56 recovery race condition

2017-03-29 Thread Liu Bo
On Wed, Mar 29, 2017 at 09:33:19AM +0800, Qu Wenruo wrote: [...] > > Reported-by: Goffredo Baroncelli > Signed-off-by: Qu Wenruo > --- > fs/btrfs/scrub.c | 14 ++ > 1 file changed, 14 insertions(+) > > diff --git a/fs/btrfs/scrub.c

Re: [PATCH v3 1/5] btrfs: scrub: Introduce full stripe lock for RAID56

2017-03-29 Thread Liu Bo
On Wed, Mar 29, 2017 at 09:33:18AM +0800, Qu Wenruo wrote: > Unlike mirror-based profiles, RAID5/6 recovery needs to read out the > whole full stripe. > > And if we don't have proper protection, it can easily cause a race condition. > > Introduce 2 new functions: lock_full_stripe() and

Re: [RFC PATCH v1 00/30] fs: inode->i_version rework and optimization

2017-03-29 Thread Dave Chinner
On Wed, Mar 29, 2017 at 01:54:31PM -0400, Jeff Layton wrote: > On Wed, 2017-03-29 at 13:15 +0200, Jan Kara wrote: > > On Tue 21-03-17 14:46:53, Jeff Layton wrote: > > > On Tue, 2017-03-21 at 14:30 -0400, J. Bruce Fields wrote: > > > > On Tue, Mar 21, 2017 at 01:23:24PM -0400, Jeff Layton wrote: >

Fwd: Confusion about snapshots containers

2017-03-29 Thread Tim Cuthbertson
-- Forwarded message -- From: Hugo Mills Date: Wed, Mar 29, 2017 at 4:55 PM Subject: Re: Confusion about snapshots containers To: Tim Cuthbertson Cc: "linux-btrfs@vger.kernel.org" On Wed, Mar 29, 2017 at

Re: Confusion about snapshots containers

2017-03-29 Thread Hugo Mills
On Wed, Mar 29, 2017 at 04:27:30PM -0500, Tim Cuthbertson wrote: > I have recently switched from multiple partitions with multiple > btrfs's to a flat layout. I will try to keep my question concise. > > I am confused as to whether a snapshots container should be a normal > directory or a

Confusion about snapshots containers

2017-03-29 Thread Tim Cuthbertson
I have recently switched from multiple partitions with multiple btrfs's to a flat layout. I will try to keep my question concise. I am confused as to whether a snapshots container should be a normal directory or a mountable subvolume. I do not understand how it can be a normal directory while

Re: [PATCH v2] Btrfs: set scrub page's io_error if failing to submit io

2017-03-29 Thread David Sterba
On Wed, Mar 29, 2017 at 10:55:16AM -0700, Liu Bo wrote: > Scrub repairs data by the unit called scrub_block, which may contain > several pages. Scrub always tries to look up a good copy of a whole > block, but if there's no such copy, it tries to do repair page by page. > > If we don't set

Re: [PATCH 1/2] btrfs: warn about RAID5/6 being experimental at mount time

2017-03-29 Thread Adam Borowski
On Wed, Mar 29, 2017 at 09:27:32PM +0200, Christoph Anton Mitterer wrote: > On Wed, 2017-03-29 at 06:39 +0200, Adam Borowski wrote: > > Too many people come complaining about losing their data -- and indeed, > > there's no warning outside a wiki and the mailing list tribal knowledge. > > Message

Re: [PATCH 1/2] btrfs: warn about RAID5/6 being experimental at mount time

2017-03-29 Thread Christoph Anton Mitterer
On Wed, 2017-03-29 at 06:39 +0200, Adam Borowski wrote: > Too many people come complaining about losing their data -- and > indeed, > there's no warning outside a wiki and the mailing list tribal > knowledge. > Message severity chosen for consistency with XFS -- "alert" makes > dmesg > produce

[PATCH] btrfs: use clear_page where appropriate

2017-03-29 Thread David Sterba
There's a helper to clear a whole page, with arch-specific optimized code. The replaced cases do not seem to be in performance-critical code, but we still might get some percent gain. Signed-off-by: David Sterba --- fs/btrfs/free-space-cache.c | 2 +- fs/btrfs/scrub.c
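As a hedged illustration (not the actual hunks from this patch), the replacement pattern amounts to swapping an open-coded memset of PAGE_SIZE for the arch-optimized helper:

#include <linux/mm.h>
#include <linux/highmem.h>

/* Illustrative only: zero the contents of one mapped page. */
static void example_zero_page_contents(struct page *page)
{
        void *addr = kmap(page);

        /* before: memset(addr, 0, PAGE_SIZE); */
        clear_page(addr);       /* arch-optimized whole-page clear */
        kunmap(page);
}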

Re: [PATCH v3 5/5] btrfs: Prevent scrub recheck from racing with dev replace

2017-03-29 Thread Liu Bo
On Wed, Mar 29, 2017 at 09:33:22AM +0800, Qu Wenruo wrote: > scrub_setup_recheck_block() calls btrfs_map_sblock() and then access > bbio without protection of bio_counter. s/access/accesses/ > This can leads to use-after-free if racing with dev replace cancel. s/leads/lead/ > Fix it by

Re: [PATCH v3 4/5] btrfs: Wait flighting bio before freeing target device for raid56

2017-03-29 Thread Liu Bo
On Wed, Mar 29, 2017 at 09:33:21AM +0800, Qu Wenruo wrote: > When raid56 dev replace is cancelled by running scrub, we will free the target > device without waiting for in-flight bios, causing the following NULL > pointer dereference or general protection. > > BUG: unable to handle kernel NULL pointer
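A minimal sketch of the idea being fixed here, assuming a hypothetical in-flight counter rather than the actual btrfs bio_counter machinery:

#include <linux/wait.h>
#include <linux/atomic.h>

struct example_dev_replace {
        atomic_t inflight_bios;                 /* bios still referencing the target */
        wait_queue_head_t inflight_wait;        /* woken as each bio completes */
};

/* Called before the replace target device is torn down. */
static void example_wait_for_inflight_bios(struct example_dev_replace *dr)
{
        /* Do not free the target device while any bio that may still
         * dereference it is in flight; wait for the counter to drain. */
        wait_event(dr->inflight_wait,
                   atomic_read(&dr->inflight_bios) == 0);
}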

Re: [PATCH v3 3/5] btrfs: scrub: Don't append on-disk pages for raid56 scrub

2017-03-29 Thread Liu Bo
On Wed, Mar 29, 2017 at 09:33:20AM +0800, Qu Wenruo wrote:
> In the following situation, scrub will calculate wrong parity to
> overwrite correct one:
>
> RAID5 full stripe:
>
> Before
> | Dev 1         | Dev 2         | Dev 3         |
> | Data stripe 1 | Data stripe 2 | Parity Stripe |
>

[PATCH v2] Btrfs: set scrub page's io_error if failing to submit io

2017-03-29 Thread Liu Bo
Scrub repairs data by the unit called scrub_block, which may contain several pages. Scrub always tries to look up a good copy of a whole block, but if there's no such copy, it tries to do repair page by page. If we don't set page's io_error when checking this bad copy, in the last step, we may
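To illustrate the intent with hypothetical structures (these are not btrfs's scrub types, only a sketch of the idea):

#include <linux/mm_types.h>

struct example_scrub_page {
        struct page *page;
        int io_error;
};

struct example_scrub_block {
        struct example_scrub_page *pages;
        int page_count;
};

/* If the read bio could not even be submitted, every page of this copy
 * is unusable; record that so the later page-by-page repair step does
 * not mistake the stale contents for good data. */
static void example_mark_block_io_error(struct example_scrub_block *sblock)
{
        int i;

        for (i = 0; i < sblock->page_count; i++)
                sblock->pages[i].io_error = 1;
}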

[PATCH v2] Btrfs: fix wrong failed mirror_num of read-repair on raid56

2017-03-29 Thread Liu Bo
In raid56 scenario, after trying parity recovery, we didn't set mirror_num for btrfs_bio with failed mirror_num, hence end_bio_extent_readpage() will report a random mirror_num in dmesg log. Cc: David Sterba Signed-off-by: Liu Bo --- v2: Set mirror_num

[PATCH v2] Btrfs: enable repair during read for raid56 profile

2017-03-29 Thread Liu Bo
Now that scrub can fix data errors with the help of parity for the raid56 profile, repair during read is able to do so as well. Although the mirror num in the raid56 scenario has different meanings (0 or 1: read data directly; > 1: do recovery with parity), it could be fit into how we repair bad block
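A tiny sketch of the mirror_num convention described above (hypothetical helper, not code from the patch):

#include <linux/types.h>

static bool example_raid56_needs_parity_rebuild(int mirror_num)
{
        /* 0 or 1: read the data stripe directly;
         * > 1   : rebuild the data from the remaining stripes + parity. */
        return mirror_num > 1;
}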

Re: [RFC PATCH v1 00/30] fs: inode->i_version rework and optimization

2017-03-29 Thread Jeff Layton
On Wed, 2017-03-29 at 13:15 +0200, Jan Kara wrote: > On Tue 21-03-17 14:46:53, Jeff Layton wrote: > > On Tue, 2017-03-21 at 14:30 -0400, J. Bruce Fields wrote: > > > On Tue, Mar 21, 2017 at 01:23:24PM -0400, Jeff Layton wrote: > > > > On Tue, 2017-03-21 at 12:30 -0400, J. Bruce Fields wrote: > > >

[PATCH v3] Btrfs: bring back repair during read

2017-03-29 Thread Liu Bo
Commit 20a7db8ab3f2 ("btrfs: add dummy callback for readpage_io_failed and drop checks") made a cleanup around readpage_io_failed_hook, and it was supposed to keep the original semantics, but it also unexpectedly disabled repair during read for dup, raid1 and raid10. This fixes the problem by

Re: [PATCH] btrfs: track exclusive filesystem operation in flags

2017-03-29 Thread David Sterba
On Wed, Mar 29, 2017 at 06:01:37PM +0800, Anand Jain wrote: > > > On 03/28/2017 08:44 PM, David Sterba wrote: > > There are several operations, usually started from ioctls, that cannot > > run concurrently. The status is tracked in > > mutually_exclusive_operation_running as an atomic_t. We can
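For context, a sketch of what tracking an exclusive operation as a per-filesystem flag bit (rather than a dedicated atomic_t) typically looks like; the identifiers are hypothetical, not the actual btrfs ones:

#include <linux/bitops.h>
#include <linux/types.h>

#define EXAMPLE_FS_EXCL_OP_RUNNING      0       /* bit index in fs->flags */

struct example_fs_info {
        unsigned long flags;
};

/* Returns true if we won the right to run the exclusive operation. */
static bool example_try_start_excl_op(struct example_fs_info *fs)
{
        return !test_and_set_bit(EXAMPLE_FS_EXCL_OP_RUNNING, &fs->flags);
}

static void example_end_excl_op(struct example_fs_info *fs)
{
        clear_bit(EXAMPLE_FS_EXCL_OP_RUNNING, &fs->flags);
}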

Re: send snapshot from snapshot incremental

2017-03-29 Thread Giuseppe Della Bianca
Hi. >Jakob Schürz Tue, 28 Mar 2017 15:16:28 -0700 >Thanks for that explanation. >I'm sure I didn't understand the -c option... and my English is good enough for most of the Linux things I need to know... but not for this. :-( ]zac[ In one of my scripts I use this method:

Re: send snapshot from snapshot incremental

2017-03-29 Thread Andrei Borzenkov
On Wed, Mar 29, 2017 at 1:01 AM, Jakob Schürz wrote: > > There is Subvolume A on the send- and the receive-side. > There is also Subvolume AA on the send-side from A. > The parent-ID from send-AA is the ID from A. > The received-ID from A on received-side A is the ID

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-29 Thread Austin S. Hemmelgarn
On 2017-03-29 01:38, Duncan wrote: Austin S. Hemmelgarn posted on Tue, 28 Mar 2017 07:44:56 -0400 as excerpted: On 2017-03-27 21:49, Qu Wenruo wrote: The problem is how we should treat subvolumes. A btrfs subvolume sits somewhere between a directory and the (logical) volume used in traditional

Re: [RFC PATCH v1 00/30] fs: inode->i_version rework and optimization

2017-03-29 Thread Jan Kara
On Tue 21-03-17 14:46:53, Jeff Layton wrote: > On Tue, 2017-03-21 at 14:30 -0400, J. Bruce Fields wrote: > > On Tue, Mar 21, 2017 at 01:23:24PM -0400, Jeff Layton wrote: > > > On Tue, 2017-03-21 at 12:30 -0400, J. Bruce Fields wrote: > > > > - It's durable; the above comparison still works if

[PATCH 08/25] btrfs: Convert to separately allocated bdi

2017-03-29 Thread Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of the bdi among its users. CC: Chris Mason CC: Josef Bacik CC: David Sterba CC: linux-btrfs@vger.kernel.org Reviewed-by: Liu Bo

[PATCH 04/25] fs: Provide infrastructure for dynamic BDIs in filesystems

2017-03-29 Thread Jan Kara
Provide helper functions for setting up dynamically allocated backing_dev_info structures for filesystems and cleaning them up on superblock destruction. CC: linux-...@lists.infradead.org CC: linux-...@vger.kernel.org CC: Petr Vandrovec CC: linux-ni...@vger.kernel.org CC:
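A minimal sketch of how a filesystem's fill_super might adopt such helpers; super_setup_bdi() is my recollection of the helper name this series introduces, so treat the exact API and signature as an assumption:

#include <linux/fs.h>
#include <linux/backing-dev.h>

static int examplefs_fill_super(struct super_block *sb, void *data, int silent)
{
        int err;

        /* Allocate and register a per-superblock bdi instead of relying
         * on an embedded backing_dev_info. */
        err = super_setup_bdi(sb);
        if (err)
                return err;

        /* ... the rest of fill_super proceeds as before ... */
        return 0;
}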

[PATCH 0/25 v2] fs: Convert all embedded bdis into separate ones

2017-03-29 Thread Jan Kara
Hello, this is the second revision of the patch series which converts all embedded occurrences of struct backing_dev_info to use standalone dynamically allocated structures. This makes bdi handling unified across all bdi users and generally removes some boilerplate code from filesystems setting up

Re: [PATCH] btrfs: track exclusive filesystem operation in flags

2017-03-29 Thread Anand Jain
On 03/28/2017 08:44 PM, David Sterba wrote: There are several operations, usually started from ioctls, that cannot run concurrently. The status is tracked in mutually_exclusive_operation_running as an atomic_t. We can easily track the status as one of the per-filesystem flag bits with same

Re: [PATCH V2 4/4] btrfs: cleanup barrier_all_devices() to check dev stat flush error

2017-03-29 Thread Anand Jain
On 03/29/2017 12:19 AM, David Sterba wrote: On Tue, Mar 14, 2017 at 04:26:11PM +0800, Anand Jain wrote: The objective of this patch is to clean up barrier_all_devices() so that the error checking is in a separate loop independent of the loop which submits and waits on the device flush

Re: [PATCH 2/4] btrfs: Communicate back ENOMEM when it occurs

2017-03-29 Thread Anand Jain
On 03/28/2017 11:38 PM, David Sterba wrote: On Mon, Mar 13, 2017 at 03:42:12PM +0800, Anand Jain wrote: The only error that write dev flush (send) can fail with is ENOMEM; as it's not a device-specific error but rather a system-wide issue, we should stop further iterations
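A sketch of the behaviour being argued for, with hypothetical names standing in for the real btrfs device and flush helpers:

#include <linux/errno.h>

struct example_device;
int example_submit_dev_flush(struct example_device *dev);      /* hypothetical */

static int example_submit_flush_all(struct example_device **devs, int ndev)
{
        int i, ret;

        for (i = 0; i < ndev; i++) {
                ret = example_submit_dev_flush(devs[i]);
                /* ENOMEM is a system-wide problem, not a per-device one,
                 * so there is no point iterating over the other devices. */
                if (ret == -ENOMEM)
                        return ret;
                /* other errors remain per-device and are checked later */
        }
        return 0;
}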

Re: [PATCH 1/4] btrfs: REQ_PREFLUSH does not use btrfs_end_bio() completion callback

2017-03-29 Thread Anand Jain
On 03/28/2017 11:19 PM, David Sterba wrote: On Mon, Mar 13, 2017 at 03:42:11PM +0800, Anand Jain wrote: The REQ_PREFLUSH bio that flushes the dev cache uses only the btrfs_end_empty_barrier() completion callback, as of now, and there it accounts for the dev stat flush errors (BTRFS_DEV_STAT_FLUSH_ERRS), so remove

Re: send snapshot from snapshot incremental

2017-03-29 Thread Henk Slager
On Wed, Mar 29, 2017 at 12:01 AM, Jakob Schürz wrote: [...] > There is Subvolume A on the send- and the receive-side. > There is also Subvolume AA on the send-side from A. > The parent-ID from send-AA is the ID from A. > The received-ID from A on received-side A is the

Re: [LKP] [btrfs] "fio: pid=2214, got signal=7" error showed in fio test for btrfs

2017-03-29 Thread Ye Xiaolong
Attached are the kmsg and the reproduce script. Note: the kernel cmdline contained "memmap=104G!4G memmap=104G!132G" Thanks, Xiaolong On 03/29, kernel test robot wrote: >Hi, > >We detected the below error messages in a fio pmem test for btrfs. > >machine: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory

[LKP] [btrfs] "fio: pid=2214, got signal=7" error showed in fio test for btrfs

2017-03-29 Thread kernel test robot
Hi, We detected the below error messages in a fio pmem test for btrfs.

machine: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
kernel: v4.10

test parameters:
[global]
bs=2M
ioengine=mmap
iodepth=32
size=7669584457
direct=0
runtime=200
invalidate=1
fallocate=posix
group_reporting