RE: btrfs replace status >100%

2015-10-17 Thread Brett King
Just following up - the replace operation completed successfully and the source device (/dev/sdb) was then removed, with all chunks moved to the target (/dev/sdj). Putting the odd percentages down to RAID-level complexities, I guess. [root@array ~]# btrfs replace status -1 /export/archive/ Started on 15.Oct

btrfs replace status >100%

2015-10-16 Thread Brett King
Hi All, I am noticing some strange numbers when replacing a disk under RAID1: [root@array ~]# btrfs replace status -1 /export/archive/ 367.2% done, 0 write errs, 0 uncorr. read errs The filesystem is currently 6x 6TB + 2x 5TB, and I am replacing the first of the two 5TB disks with an 8TB unit: [root@array ~]#

Re: btrfs replace status >100%

2015-10-16 Thread Brett King
Thanks Henk, That's encouraging and what I suspected - everything looks fine (even some bitrot picked up through read-error corrections on checksum failures, hurray!) and yes, I'll resize the new device once it completes. [root@array ~]# btrfs replace status -1 /export/archive/ 547.9% done, 0
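The resize step mentioned above is easy to forget: after `btrfs replace` finishes onto a larger disk, the filesystem still only uses the size of the old device until the new one is grown. A minimal sketch of the post-replace sequence, assuming a hypothetical mount point /mnt/archive and that the new device received devid 2 (check `btrfs filesystem show` for the real devid):

```shell
# Watch the replace until it reports completion (prints status once and exits)
btrfs replace status -1 /mnt/archive

# Find the devid of the new (larger) device
btrfs filesystem show /mnt/archive

# Grow that device to its full capacity; "2" is an assumed devid
btrfs filesystem resize 2:max /mnt/archive

# Confirm the new sizes are in effect
btrfs filesystem df /mnt/archive
```

These commands require root and real block devices, so treat this as an illustrative transcript rather than a runnable script.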

RE: Recovery Operation With Multiple Devices

2015-01-23 Thread Brett King
-Original message- From: Hugo Mills h...@carfax.org.uk Sent: Fri 01-23-2015 08:48 pm Subject: Re: Recovery Operation With Multiple Devices Attachment: signature.asc To: Brett King brett.k...@commandict.com.au; CC: linux-btrfs@vger.kernel.org; On Fri, Jan 23, 2015

Recovery Operation With Multiple Devices

2015-01-22 Thread Brett King
Hi All, Just wondering how 'btrfs recovery' operates when the source device given is one of many in an MD array - I can't find any documentation beyond the single-device use case. Does it automatically include all devices in the relevant MD array, as occurs when mounting, or does it only
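For readers landing here with the same question: on a multi-device btrfs filesystem, recovery tooling generally needs all member devices to be visible to the kernel, and `btrfs device scan` is the usual way to register them before mounting or recovering. A minimal sketch, assuming a hypothetical member device /dev/sdb1 and output directory /tmp/restore (the exact behaviour of off-line recovery across members may differ by tool version, so verify against your btrfs-progs manual):

```shell
# Register all btrfs member devices with the kernel
btrfs device scan

# Inspect which devices belong to the filesystem containing /dev/sdb1
btrfs filesystem show /dev/sdb1

# Dry-run an offline restore from one member; btrfs restore reads the
# filesystem metadata to locate data, -D lists what would be recovered
btrfs restore -D /dev/sdb1 /tmp/restore
```

Note that `btrfs restore` (the offline file-extraction tool) is the closest match to the 'btrfs recovery' named in the post; if the author meant the `recovery` mount option instead, that is passed as `mount -o recovery` on older kernels.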

RE: Recovery options for FS forced readonly due to 3.17 snapshot bug

2015-01-21 Thread Brett King
-Original message- From: Brett King brett.k...@commandict.com.au Sent: Wed 01-21-2015 09:26 am Subject: RE: Recovery options for FS forced readonly due to 3.17 snapshot bug CC: linux-btrfs@vger.kernel.org; To: fdman...@gmail.com; From: Filipe David Manana fdman

RAID10 across different sized disks shows data layout as single not RAID10

2014-05-11 Thread brett . king
Hi, I created a RAID10 array of 4x 4TB disks and later added another 4x 3TB disks, expecting the same level of fault tolerance, just with more capacity. Recently I noticed that the output of 'btrfs fi df' lists the Data layout as 'single' and not RAID10 per my initial
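A likely cause of the symptom above is that some block groups were allocated with the 'single' profile (for example if the filesystem was created or mounted degraded at some point, or data was written before the profile was set); adding devices by itself never rewrites existing chunks. The standard fix is a balance with convert filters. A minimal sketch, assuming a hypothetical mount point /mnt/array:

```shell
# Show per-profile allocation; a mix of "Data, single" and
# "Data, RAID10" lines indicates inconsistent block groups
btrfs filesystem df /mnt/array

# Rewrite data and metadata chunks into the RAID10 profile
# (this reads and rewrites every affected chunk, so it can take hours)
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/array

# Verify all block groups now report RAID10
btrfs filesystem df /mnt/array
```

The `-dconvert`/`-mconvert` balance filters require a reasonably recent kernel and btrfs-progs; on very old versions a full unfiltered balance was the only option.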

Re: RAID10 across different sized disks shows data layout as single not RAID10

2014-05-11 Thread brett . king
-Original Message- From: Hugo Mills h...@carfax.org.uk To: brett.k...@commandict.com.au Cc: linux-btrfs@vger.kernel.org Sent: Sun, 11 May 2014 7:25 PM Subject: Re: RAID10 across different sized disks shows data layout as single not RAID10 On Sun, May 11, 2014 at 05:53:40PM +1000,