For completeness here's the summary of my replacement of all four 6 TB drives
(henceforth "6T") with 8 TB drives ("8T") in a btrfs raid1 volume.
I included transfer rates so others can get a rough idea of what to expect
when doing something similar. All capacity units are SI, not base 2.
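As a quick reference for that unit distinction (a minimal sketch; the sizes are the nominal drive capacities from this thread):

```shell
# Convert SI terabytes (10^12 bytes) to tebibytes (2^40 bytes):
awk 'BEGIN { printf "6 TB = %.2f TiB\n", 6e12 / 1024^4 }'   # 6 TB = 5.46 TiB
awk 'BEGIN { printf "8 TB = %.2f TiB\n", 8e12 / 1024^4 }'   # 8 TB = 7.28 TiB
```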
> a "replace" of the 3rd 6 TB drive onto a second 8 TB drive is currently in
> progress (at high speed).
This second replace is now finished, and it looks OK now:
# btrfs replace status /data
Started on 16.Jun 01:15:17, finished on 16.Jun 11:40:30, 0 write errs,
0 uncorr. read errs
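From that status line the replace took about 10 h 25 min. A rough effective rate, assuming roughly 5.45 TiB was copied (the per-drive data figure quoted elsewhere in this thread; the true amount copied may differ):

```shell
# Duration: 01:15:17 -> 11:40:30 on the same day
secs=$(( 11*3600 + 40*60 + 30 - (1*3600 + 15*60 + 17) ))
awk -v s="$secs" 'BEGIN { printf "%d s elapsed, ~%.0f MB/s\n", s, 5.45 * 1024^4 / s / 1e6 }'
# prints: 37513 s elapsed, ~160 MB/s
```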
boli posted on Tue, 14 Jun 2016 21:28:57 +0200 as excerpted:
> So I was back to a 4-drive raid1, with 3x 6 TB drives and 1x 8 TB drive
> (though that 8 TB drive had very little data on it). Then I tried to
> "remove" (without "-r" this time) the 6 TB drive with the least amount
> of data on it (one had 4.0 TiB, where the other two had 5.45 TiB each).
> Replace doesn't need to do a balance, it's largely just a block level copy of
> the device being replaced, but with some special handling so that the
> filesystem is consistent throughout the whole operation. This is most of why
> it's so much more efficient than add/delete.
Thanks for this.
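For reference, the replace-based workflow described above boils down to a few commands (a sketch only: the device paths, devid and mount point are assumptions, not taken from the thread):

```shell
# Swap a 6 TB member for an 8 TB one with a block-level copy:
btrfs replace start /dev/sdb /dev/sde /data
btrfs replace status /data             # poll progress
# Once finished, grow the new device so the extra 2 TB becomes usable
# ("2" stands for the devid of the replaced slot):
btrfs filesystem resize 2:max /data
```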
On 2016-06-12 06:35, boli wrote:
It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
These 90 hours seem like a rather long time, given that a rebalance/convert
from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
takes about 7 hours (4-disk-raid1).
Henk Slager posted on Sun, 12 Jun 2016 21:03:22 +0200 as excerpted:
> But now that you anyhow have all data on 3x 6TB drives, you could save
> balancing time by just doing btrfs-replace 6TB to 8TB 3x and then for
> the 4th 8TB just add it and let btrfs do the spreading/balancing over
> time by
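Spelled out, that suggestion might look like the sketch below (device names and devids are assumptions; -B keeps each replace in the foreground so the loop waits for it):

```shell
# Replace the three 6 TB drives one at a time, then add the 4th 8 TB drive:
for devid in 1 2 3; do
    btrfs replace start -B "$devid" /dev/disk/by-id/new-8tb-$devid /data
    btrfs filesystem resize "$devid":max /data   # expose the extra 2 TB
done
# Plain add, no full balance: new raid1 chunks are allocated on the
# devices with the most free space, so data spreads over time.
btrfs device add /dev/disk/by-id/new-8tb-4 /data
```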
On Sun, Jun 12, 2016 at 7:03 PM, boli wrote:
>> It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a
>> 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB
>> raid1 (9 TB capacity).
>
> Indeed, it's not clear why it takes 4 days for such an action. You
> indicated that you cannot add an online
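Simple arithmetic on those figures shows why the duration raised eyebrows; the implied average rate is far below what four spinning drives can sustain:

```shell
# 8.1 TB moved in about 99 hours:
awk 'BEGIN { printf "~%.1f MB/s average\n", 8.1e12 / (99 * 3600) / 1e6 }'
# prints: ~22.7 MB/s average
```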
On Sun, Jun 12, 2016 at 12:35 PM, boli wrote:
> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
>
> These 90 hours seem like a rather long time, given that a rebalance/convert
> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub
> takes about 7 hours (4-disk-raid1).
>
> OTOH the
Updates:
> So for this first replacement I mounted the volume degraded and ran "btrfs
> device delete missing /mnt", and that's where it's been stuck for the past
> ~23 hours. Only later did I figure out that this command will trigger a
> rebalance, and of course that will take a long time.
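As the rest of the thread works out, a "replace" of the missing device avoids that rebalance entirely. A sketch of the alternative (devid, device path and mount point are assumptions):

```shell
mount -o degraded /dev/sdb /mnt
btrfs filesystem show /mnt       # note the devid reported as missing
# Rebuild directly onto the new drive instead of "device delete missing";
# a missing source device can only be named by its devid:
btrfs replace start -B 4 /dev/sde /mnt
btrfs filesystem resize 4:max /mnt
```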
This is somewhat off topic but...
On 9.6.2016 at 18.20, Duncan wrote:
Are those the 8 TB SMR "archive" drives?
I haven't been following the issue very closely, but be aware that there
were serious issues with those drives a few kernels back, and that while
those issues are now fixed, the
On 09.06.2016, at 17:20, Duncan <1i5t5.dun...@cox.net> wrote:
> Are those the 8 TB SMR "archive" drives?
No, they are Western Digital Red drives.
Thanks for the detailed follow-up anyway. :)
Half a year ago, when I evaluated hard drives, in the 8 TB category there were
only the Hitachi 8 TB
boli posted on Wed, 08 Jun 2016 20:55:13 +0200 as excerpted:
> Recently I had the idea to replace the 6 TB HDDs with 8 TB ones ("WD
> Red"), because their price is now acceptable.
Are those the 8 TB SMR "archive" drives?
I haven't been following the issue very closely, but be aware that there
Dear list
I've had a 4 drive btrfs raid1 setup in my backup NAS for a few months now.
It's running Fedora 23 Server with kernel 4.5.5 and btrfs-progs v4.4.1.
Recently I had the idea to replace the 6 TB HDDs with 8 TB ones ("WD Red"),
because their price is now acceptable.
(More back story: