Tim Cuthbertson posted on Wed, 29 Mar 2017 18:20:52 -0500 as excerpted:
> So, another question...
>
> Do I then leave the top level mounted all the time for snapshots, or
> should I create them, send them to external storage, and umount until
> next time?
Keep in mind that because snapshots
On Wed, Mar 29, 2017 at 09:33:19AM +0800, Qu Wenruo wrote:
[...]
>
> Reported-by: Goffredo Baroncelli
> Signed-off-by: Qu Wenruo
> ---
> fs/btrfs/scrub.c | 14 ++
> 1 file changed, 14 insertions(+)
>
> diff --git a/fs/btrfs/scrub.c
On Wed, Mar 29, 2017 at 09:33:18AM +0800, Qu Wenruo wrote:
> Unlike mirror-based profiles, RAID5/6 recovery needs to read out the
> whole full stripe.
>
> And without proper protection, this can easily cause race conditions.
>
> Introduce 2 new functions: lock_full_stripe() and
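The locking idea can be sketched as one lock record per full stripe, kept per block group, so that scrub and recovery serialize on the same full-stripe start offset. The struct layout and the two helpers named below are illustrative, not the patch's actual code:

struct full_stripe_lock {
	struct rb_node node;
	u64 logical;		/* start offset of the full stripe */
	u64 refs;
	struct mutex mutex;
};

static int lock_full_stripe(struct btrfs_fs_info *fs_info, u64 bytenr)
{
	struct btrfs_block_group_cache *bg;
	struct full_stripe_lock *lock;
	u64 fstripe_start;

	/* Block group reference handling elided in this sketch. */
	bg = btrfs_lookup_block_group(fs_info, bytenr);
	if (!bg)
		return -ENOENT;

	/* Round down to the start of the containing full stripe. */
	fstripe_start = get_full_stripe_logical(bg, bytenr);	/* hypothetical */

	/* Find or insert the record under the tree's own lock. */
	lock = insert_full_stripe_lock(bg, fstripe_start);	/* hypothetical */
	if (IS_ERR(lock))
		return PTR_ERR(lock);

	/* Serialize all recovery/RMW work on this full stripe. */
	mutex_lock(&lock->mutex);
	return 0;
}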
I have recently switched from multiple partitions with multiple
btrfs filesystems to a flat layout. I will try to keep my question concise.
I am confused as to whether a snapshot container should be a normal
directory or a mountable subvolume. I do not understand how it can be
a normal directory while
On Wed, 2017-03-29 at 06:39 +0200, Adam Borowski wrote:
> Too many people come complaining about losing their data -- and indeed,
> there's no warning outside a wiki and the mailing list tribal knowledge.
> Message severity chosen for consistency with XFS -- "alert" makes dmesg
> produce
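The proposed message amounts to a mount-time check; a minimal sketch, assuming the test is made where the allocation profile is already known (the condition and wording are assumptions, not the submitted patch):

/* Sketch only: warn at KERN_ALERT severity when a raid5/6 profile is
 * in use. btrfs_alert() is btrfs's existing printk helper; the
 * condition and the message text here are illustrative. */
if ((fs_info->avail_data_alloc_bits | fs_info->avail_metadata_alloc_bits) &
    (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6))
	btrfs_alert(fs_info,
		    "RAID5/6 support has known problems, see the btrfs wiki");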
There's a helper to clear a whole page, with arch-specific optimized
code. The replaced cases do not seem to be in performance-critical code,
but we still might gain a few percent.
Signed-off-by: David Sterba
---
fs/btrfs/free-space-cache.c | 2 +-
fs/btrfs/scrub.c
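The substitution is mechanical; one replaced case might look like this (the actual hunks in free-space-cache.c and scrub.c may differ):

/* Before: byte-wise zeroing of a mapped page */
memset(kmap(page), 0, PAGE_SIZE);
kunmap(page);

/* After: clear_page() picks up the arch-optimized full-page clear */
clear_page(kmap(page));
kunmap(page);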
On Wed, Mar 29, 2017 at 09:33:22AM +0800, Qu Wenruo wrote:
> scrub_setup_recheck_block() calls btrfs_map_sblock() and then access
> bbio without protection of bio_counter.
>
s/access/accesses/
> This can leads to use-after-free if racing with dev replace cancel.
>
s/leads/lead/
> Fix it by
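The shape of the fix being asked for is to hold the bio counter across both the mapping call and every use of bbio; a sketch (call arguments per the 4.11-era API, surrounding context abbreviated):

/* Sketch: the bio counter blocks dev-replace cancel from freeing the
 * target device while bbio is still in use. */
btrfs_bio_counter_inc_blocked(fs_info);
ret = btrfs_map_sblock(fs_info, BTRFS_MAP_GET_READ_MIRRORS,
		       logical, &mapped_length, &bbio, 0, 1);
if (ret || !bbio) {
	btrfs_bio_counter_dec(fs_info);
	goto out;
}
/* ... use bbio ... */
btrfs_put_bbio(bbio);
btrfs_bio_counter_dec(fs_info);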
On Wed, Mar 29, 2017 at 09:33:21AM +0800, Qu Wenruo wrote:
> When raid56 dev replace is cancelled by a running scrub, we will free the
> target device without waiting for in-flight bios, causing the following
> NULL pointer dereference or general protection fault.
>
> BUG: unable to handle kernel NULL pointer
On Wed, Mar 29, 2017 at 09:33:20AM +0800, Qu Wenruo wrote:
> In the following situation, scrub will calculate a wrong parity and
> overwrite the correct one:
>
> RAID5 full stripe:
>
> Before
> | Dev 1         | Dev 2         | Dev 3         |
> | Data stripe 1 | Data stripe 2 | Parity Stripe |
>
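For a three-device RAID5 full stripe, parity is the byte-wise XOR of the data stripes, which is why recomputing it from a still-corrupted data stripe clobbers the good parity; a toy illustration:

/* Toy illustration: P = D1 ^ D2. If Data stripe 1 is corrupted and
 * parity is recomputed from it before the data is repaired, the
 * correct parity on disk is overwritten with a wrong value. */
for (i = 0; i < stripe_len; i++)
	parity[i] = data1[i] ^ data2[i];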
Scrub repairs data in units called scrub_block, which may contain
several pages. Scrub always tries to look up a good copy of the whole
block, but if there's no such copy, it tries to repair it page by page.
If we don't set a page's io_error when checking this bad copy, in the last
step, we may
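A sketch of the marking step (names follow scrub.c loosely; the predicate is hypothetical):

/* While rechecking a bad copy, record per-page I/O errors so the
 * final page-by-page repair pass knows which pages to rewrite. */
for (i = 0; i < sblock->page_count; i++) {
	struct scrub_page *spage = sblock->pagev[i];

	if (page_had_io_error(spage))	/* hypothetical predicate */
		spage->io_error = 1;
}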
In the raid56 scenario, after trying parity recovery, we didn't set
the btrfs_bio's mirror_num to the failed mirror_num, hence
end_bio_extent_readpage() will report a random mirror_num in the dmesg
log.
Cc: David Sterba
Signed-off-by: Liu Bo
---
v2: Set mirror_num
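The fix itself is a one-liner in spirit; roughly (variable names illustrative):

/* Record which mirror actually failed in the repair bio so that
 * end_bio_extent_readpage() reports a meaningful mirror number. */
btrfs_io_bio(bio)->mirror_num = failed_mirror;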
Now that scrub can fix data errors with the help of parity for the raid56
profile, repair during read is able to do so as well.
Although the mirror num in the raid56 scenario has different meanings:
  0 or 1: read data directly
  > 1:    recover with parity
it can be fitted into how we repair bad blocks
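Interpreting that convention in the read-repair path might look like this sketch (the recovery call shape follows the 4.11-era raid56 API; the control flow is illustrative):

/* raid56 mirror_num convention:
 *   0 or 1 -> read the data stripe directly
 *   > 1    -> rebuild the data from parity
 */
if (mirror_num > 1)
	ret = raid56_parity_recover(fs_info, bio, bbio, map_length,
				    mirror_num, 1);
else
	ret = submit_stripe_read(bio);	/* hypothetical direct-read path */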
Commit 20a7db8ab3f2 ("btrfs: add dummy callback for readpage_io_failed
and drop checks") made a cleanup around readpage_io_failed_hook, and
it was supposed to keep the original semantics, but it also
unexpectedly disabled repair during read for dup, raid1 and raid10.
This fixes the problem by
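Roughly the shape of the affected code in end_bio_extent_readpage(), simplified: inodes without a readpage_io_failed_hook used to fall through to the repair path, and the dummy hook made the first branch unconditional:

if (tree->ops && tree->ops->readpage_io_failed_hook) {
	/* metadata: let the owner handle the failure */
	ret = tree->ops->readpage_io_failed_hook(page, mirror);
} else {
	/* data: attempt read-repair from another mirror or parity */
	ret = bio_readpage_error(bio, offset, page, start, end, mirror);
}
/* With a dummy hook installed for data, the else branch became
 * unreachable, so repair never ran for dup, raid1 and raid10. */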
Hi.
> Jakob Schürz Tue, 28 Mar 2017 15:16:28 -0700
> Thanks for that explanation.
> I'm sure I didn't understand the -c option... and my English is good
> enough for most of the things I need to know in Linux, but not for
> this. :-(
]zac[
In one of my scripts I use this method:
On Wed, Mar 29, 2017 at 1:01 AM, Jakob Schürz wrote:
>
> There is Subvolume A on the send- and the receive-side.
> There is also Subvolume AA on the send-side from A.
> The parent-ID from send-AA is the ID from A.
> The received-ID from A on received-side A is the ID
On 2017-03-29 01:38, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 28 Mar 2017 07:44:56 -0400 as excerpted:
On 2017-03-27 21:49, Qu Wenruo wrote:
The problem is how we should treat subvolumes.
A btrfs subvolume sits in the middle, between a directory and the
(logical) volume used in traditional
On Tue 21-03-17 14:46:53, Jeff Layton wrote:
> On Tue, 2017-03-21 at 14:30 -0400, J. Bruce Fields wrote:
> > On Tue, Mar 21, 2017 at 01:23:24PM -0400, Jeff Layton wrote:
> > > On Tue, 2017-03-21 at 12:30 -0400, J. Bruce Fields wrote:
> > > > - It's durable; the above comparison still works if
Allocate struct backing_dev_info separately instead of embedding it
inside the superblock. This unifies the handling of bdi among users.
CC: Chris Mason
CC: Josef Bacik
CC: David Sterba
CC: linux-btrfs@vger.kernel.org
Reviewed-by: Liu Bo
Provide helper functions for setting up dynamically allocated
backing_dev_info structures for filesystems and cleaning them up on
superblock destruction.
CC: linux-...@lists.infradead.org
CC: linux-...@vger.kernel.org
CC: Petr Vandrovec
CC: linux-ni...@vger.kernel.org
CC:
Hello,
this is the second revision of the patch series which converts all embedded
occurrences of struct backing_dev_info to use standalone, dynamically allocated
structures. This makes bdi handling uniform across all bdi users and generally
removes some boilerplate code from filesystems setting up
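With the series applied, a filesystem sets up its bdi through the new helper; a sketch of the intended usage (helper name per this series, the rest illustrative):

/* The VFS allocates sb->s_bdi in fill_super and drops it when the
 * superblock is destroyed, instead of each fs embedding its own. */
static int foo_fill_super(struct super_block *sb, void *data, int silent)
{
	int err;

	err = super_setup_bdi(sb);	/* dynamically allocates sb->s_bdi */
	if (err)
		return err;
	/* optional: tune sb->s_bdi->ra_pages and friends here */
	return 0;
}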
On 03/28/2017 08:44 PM, David Sterba wrote:
There are several operations, usually started from ioctls, that cannot
run concurrently. The status is tracked in
mutually_exclusive_operation_running as an atomic_t. We can easily track
the status as one of the per-filesystem flag bits with same
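A sketch of the flag-bit pattern being proposed (the bit name follows the series discussion; the surrounding code is illustrative):

/* Replace the standalone atomic_t with one bit in the existing
 * per-fs_info flags word; test_and_set_bit() keeps the same
 * try-lock semantics as atomic_xchg(..., 1). */
if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags))
	return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;

/* ... run the exclusive operation (balance, replace, resize) ... */

clear_bit(BTRFS_FS_EXCL_OP, &fs_info->flags);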
On 03/29/2017 12:19 AM, David Sterba wrote:
On Tue, Mar 14, 2017 at 04:26:11PM +0800, Anand Jain wrote:
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop, independent
of the loop which submits and waits on the device flush
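The cleaned-up structure would be independent passes over the device list; an illustrative shape (helper names are assumptions):

/* Submit all flushes first, then wait, then check errors in a
 * loop of its own, independent of submit/wait. */
list_for_each_entry(dev, head, dev_list)
	write_dev_flush(dev);		/* submit REQ_PREFLUSH */

list_for_each_entry(dev, head, dev_list)
	wait_dev_flush(dev);		/* wait for completion */

list_for_each_entry(dev, head, dev_list)
	if (dev_flush_error(dev))	/* hypothetical error check */
		errors_wait++;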
On 03/28/2017 11:38 PM, David Sterba wrote:
On Mon, Mar 13, 2017 at 03:42:12PM +0800, Anand Jain wrote:
The only error with which a write dev flush (send) can fail is ENOMEM;
as that is not a device-specific error but rather a system-wide issue,
we should stop further iterations
On 03/28/2017 11:19 PM, David Sterba wrote:
On Mon, Mar 13, 2017 at 03:42:11PM +0800, Anand Jain wrote:
The REQ_PREFLUSH bio used to flush the dev cache uses only the
btrfs_end_empty_barrier() completion callback as of now, and that is where
dev stat flush errors (BTRFS_DEV_STAT_FLUSH_ERRS) are accounted, so remove
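A sketch of the completion callback this refers to (simplified; how the device is recovered from the bio is an assumption here):

/* Flush errors are accounted in the completion callback, so the
 * submit path must not count them a second time. */
static void btrfs_end_empty_barrier(struct bio *bio)
{
	struct btrfs_device *device = bio->bi_private;	/* assumption */

	if (bio->bi_error)
		btrfs_dev_stat_inc_and_print(device,
					     BTRFS_DEV_STAT_FLUSH_ERRS);
	complete(&device->flush_wait);
}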
Attached are the kmsg log and a reproducer script.
Note: kernel cmdline contained "memmap=104G!4G memmap=104G!132G"
Thanks,
Xiaolong
Hi,
We detected the error messages below in a fio pmem test for btrfs.
machine: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
kernel: v4.10
test parameters:
[global]
bs=2M
ioengine=mmap
iodepth=32
size=7669584457
direct=0
runtime=200
invalidate=1
fallocate=posix
group_reporting