Just following up - the replace operation completed successfully and the
source device (/dev/sdb) was then removed, with all chunks moved to the target
(/dev/sdj). I'm putting the odd percentages down to RAID-level complexities, I
guess.
[root@array ~]# btrfs replace status -1 /export/archive/
Started on 15.Oct 19:15
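
A hedged sketch of how that end state can be double-checked (same mount point
as in the mail above; 'btrfs device usage' needs a reasonably new btrfs-progs):
# btrfs filesystem show /export/archive    # /dev/sdb should no longer be listed
# btrfs device usage /export/archive       # chunk allocation per device, including /dev/sdj
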
Thanks Henk,
That's encouraging and what I suspected - everything looks fine (some bitrot
was even caught and fixed via read-error corrections on checksum failures,
hurray!), and yes, I'll resize the new device once the replace completes.
[root@array ~]# btrfs replace status -1 /export/archive/
547.9% done, 0 wri
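
A hedged sketch of that post-replace resize, assuming (for illustration only)
that the new 8TB unit ends up as devid 7 - the real devid comes from
'btrfs filesystem show':
# btrfs filesystem show /export/archive          # note the devid of the new drive
# btrfs filesystem resize 7:max /export/archive  # grow that device to its full size
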
Hi All,
I am noticing some strange numbers when replacing a disk under RAID1:
[root@array ~]# btrfs replace status -1 /export/archive/
367.2% done, 0 write errs, 0 uncorr. read errs
The filesystem is currently 6x 6TB + 2x 5TB, and I am replacing the two 5TB
drives with 8TB units, starting with the first:
[root@array ~]# b
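
For completeness, a hedged sketch of how such a replace is typically started,
borrowing the device names that appear elsewhere in the thread (/dev/sdb as the
outgoing 5TB, /dev/sdj as the incoming 8TB):
# btrfs replace start /dev/sdb /dev/sdj /export/archive
# btrfs replace status -1 /export/archive    # print the progress line once
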
-Original message-
From: Hugo Mills
Sent: Fri 01-23-2015 08:48 pm
Subject: Re: Recovery Operation With Multiple Devices
Attachment: signature.asc
To: Brett King ;
CC: linux-btrfs@vger.kernel.org;
> On Fri, Jan 23, 2015 at 06:53:42PM +1100, Brett King wrote:
>
-Original message-
From: Brendan Hide
Sent: Fri 01-23-2015 08:18 pm
Subject: Re: Recovery Operation With Multiple Devices
To: Brett King ; linux-btrfs@vger.kernel.org;
> On 2015/01/23 09:53, Brett King wrote:
> > Hi All,
> > Just wondering how 'btrf
Hi All,
Just wondering how 'btrfs recovery' operates when the source device given is
one of many in an MD array - I can't find any documentation beyond a
single-device use case.
Does it automatically include all devices in the relevant MD array as occurs
when mounting, or does it only res
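
Assuming the command meant here is 'btrfs restore' (btrfs-progs has no
'recovery' subcommand; the '-o recovery' mount option is the other possible
reading), the invocation form is sketched below - /dev/sdX and /mnt/recovered
are purely illustrative, and whether naming one member device pulls in the rest
of a multi-device filesystem is exactly the open question:
# btrfs restore /dev/sdX /mnt/recovered    # /dev/sdX = one member of the multi-device filesystem
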
-Original message-
From: Brett King
Sent: Wed 01-21-2015 09:26 am
Subject: RE: Recovery options for FS forced readonly due to 3.17
snapshot bug
CC: linux-btrfs@vger.kernel.org;
To: fdman...@gmail.com;
> From: Filipe David Manana
> Sent: Tue 01-20-2015 11
From: Filipe David Manana
Sent: Tue 01-20-2015 11:40 pm
Subject: Re: Recovery options for FS forced readonly due to 3.17
snapshot bug
To: brett.k...@commandict.com.au;
CC: linux-btrfs@vger.kernel.org;
> On Tue, Jan 20, 2015 at 12:15 PM, wrote:
> > Hi,
> > My FS has been for
Hi,
My FS has been forced readonly by the early 3.17 snapshot bug. After much
reading, I'm looking for validation of some recovery scenarios:
1) btrfsck --repair under a later kernel.
2) replacing the devices one by one under a later kernel, effectively removing
the corruption.
3) Just copying
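
A hedged sketch of what scenario 1 usually looks like in practice, with
/dev/sdX standing in for one member device of the affected filesystem
(filesystem unmounted, newer btrfs-progs; --repair is normally run only after a
read-only pass and with backups at hand):
# btrfs check /dev/sdX             # read-only check first (btrfsck is the older name)
# btrfs check --repair /dev/sdX    # only if the read-only pass looks sane
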
-Original Message-
From: Hugo Mills
To: brett.k...@commandict.com.au
Cc: linux-btrfs@vger.kernel.org
Sent: Sun, 11 May 2014 7:25 PM
Subject: Re: RAID10 across different sized disks shows data layout as single
not RAID10
On Sun, May 11, 2014 at 05:53:40PM +1000, brett.k...@commandict.com.
Hi,
I created a RAID10 array of 4x 4TB disks and later added another 4x 3TB disks,
expecting the same level of fault tolerance, simply with more capacity.
Recently I noticed that the output of 'btrfs fi df' lists the Data layout as
'single' and not RAID10 per my initial mkfs.b
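
If the data chunks really are 'single' and the goal is RAID10 across all eight
disks, a hedged sketch of the usual remedy is a conversion balance on the
mounted filesystem (/mnt stands in for the real mount point; this rewrites
every chunk, so it can run for a long time):
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
# btrfs fi df /mnt                 # Data and Metadata should now report RAID10
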