Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-19 Thread boli
For completeness here's the summary of my replacement of all four 6 TB drives 
(henceforth "6T") with 8 TB drives ("8T") in a btrfs raid1 volume.
I included transfer rates so maybe others can get a rough idea what to expect 
when doing similar things. All capacity units are SI, not base 2.
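(For conversion: at e.g. ~46 MB/s, one TB takes 10^6 MB / 46 MB/s ≈ 21,700 s ≈ 6 h, 
hence the h/TB figures. A rough command-level sketch of the whole sequence follows 
the step list.)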

Filesystem usage was ~17.84 of 24 TB when I started.

The first steps all happened while the machine was booted into emergency mode.

 1. Physically replaced 1st 6T with 1st 8T,
without having done a logical remove beforehand.
Should have done that to maintain redundancy.
 2. Mounted volume degraded and btrfs device remove missing.
Took over 4 days, and 1.4 TB were still missing after.
Also it was a close call: 17.84 TB of 18 TB used!
(Two of the drives were completely full after this)
Transfer rate of ~46 MB/s (~6 h/TB)
 3. Restored missing 1.4 TB onto the 1st 8T with btrfs replace -r
Would have been more efficient to try and complete step 2.
Transfer rate of ~159 MB/s (~1.75 h/TB)
 4. Resized to full size of 1st 8T
 5. btrfs device remove'd a 2nd 6T
 6. Physically replaced this 2nd 6T with 2nd 8T

At this point the machine was rebooted into normal mode.   

 7. Logically replaced 3rd 6T onto 2nd 8T with btrfs replace
Transfer rate of ~140 MB/s (~1.98 h/TB)
 8. Resized to full size of 2nd 8T
 9. Physically replaced 3rd 6T with 3rd 8T

Another reboot for kernel update to 4.5.6. Also the machine received a few of 
the backups that were previously held back so it could restore in peace.

10. Logically replaced 4th 6T onto 3rd 8T with btrfs replace
Transfer rate of ~151 MB/s (~1.84 h/TB)
11. Resized to full size of 3rd 8T
12. Physically replaced 4th 6T with 4th 8T (reboot)
13. Logically added 4th 8T to volume with btrfs device add
14. Ran a full balance (~18 TB used). Took about 2 days.
Transfer rate of ~104 MB/s (~2.67 h/TB)
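
For reference, the sequence above maps roughly onto the following commands. This is 
a sketch from memory, not a transcript: the mountpoint and device-mapper names are 
placeholders, and the devids have to be looked up with btrfs filesystem show.

# mount -o degraded /dev/mapper/disk2_enc /data            (step 2; any remaining member works)
# btrfs device remove missing /data                        (step 2)
# btrfs replace start -r <missing-devid> /dev/mapper/new1_enc /data    (step 3)
# btrfs filesystem resize <devid-of-new-drive>:max /data   (steps 4, 8, 11)
# btrfs device remove /dev/mapper/disk2_enc /data          (step 5)
# btrfs replace start /dev/mapper/disk3_enc /dev/mapper/new2_enc /data (steps 7, 10)
# btrfs device add /dev/mapper/new4_enc /data              (step 13)
# btrfs balance start /data                                (step 14)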



Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-16 Thread boli
> a "replace" of the 3rd 6 TB drive onto a second 8 TB drive is currently in 
> progress (at high speed).

This second replace is finished, and it looks OK now:

# btrfs replace status /data
Started on 16.Jun 01:15:17, finished on 16.Jun 11:40:30, 0 write errs, 
0 uncorr. read errs

Transfer rate of ~134 MiB/s, or ~2.2 hours per TiB.

# btrfs device usage  /data 
/dev/dm-2, ID: 3
   Device size: 5.46TiB
   Data,RAID1:  4.85TiB
   Metadata,RAID1:  3.00GiB
   Unallocated:   620.03GiB

/dev/mapper/_enc, ID: 1
   Device size: 7.28TiB
   Data,RAID1:  6.66TiB
   Metadata,RAID1: 12.69GiB
   System,RAID1:   64.00MiB
   Unallocated:   620.31GiB

/dev/mapper/_enc, ID: 2
   Device size: 7.28TiB
   Data,RAID1:  4.79TiB
   Metadata,RAID1:  9.69GiB
   System,RAID1:   64.00MiB
   Unallocated:   676.31GiB

However, while the replace was in progress, the status output showed odd values, like 
this completion percentage above 100% today at 9 am (~3 hours before it finished):

# btrfs replace status /data   
272.1% done, 0 write errs, 0 uncorr. read errs

Also, contrary to the first replace, the filesystem info was not updated while this 
replace was running, and looked like this (for example):

# btrfs device usage  /data 
/dev/dm-2, ID: 3
   Device size: 5.46TiB
   Data,RAID1:  4.85TiB
   Metadata,RAID1:  3.00GiB
   Unallocated:   620.03GiB

/dev/dm-3, ID: 2
   Device size: 5.46TiB
   Data,RAID1:  4.79TiB
   Metadata,RAID1:  9.69GiB
   System,RAID1:   64.00MiB
   Unallocated:   676.31GiB

/dev/mapper/_enc, ID: 1
   Device size: 7.28TiB
   Data,RAID1:  6.66TiB
   Metadata,RAID1: 12.69GiB
   System,RAID1:   64.00MiB
   Unallocated:   620.31GiB

/dev/mapper/_enc, ID: 0
   Device size: 7.28TiB
   Unallocated: 5.46TiB

I'm happy it worked, just wondering why it behaved weirdly this second time.

During the first replace, my Fedora 23 was booted in emergency mode, whereas 
for the second time it was booted normally.

I'm going to reboot now to update the kernel from 4.5.5 to 4.5.6 and then continue 
replacing drives.



Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-15 Thread boli
>> So I was back to a 4-drive raid1, with 3x 6 TB drives and 1x 8 TB drive
>> (though that 8 TB drive had very little data on it). Then I tried to
>> "remove" (without "-r" this time) the 6 TB drive with the least amount
>> of data on it (one had 4.0 TiB, where the other two had 5.45 TiB each).
>> This failed after a few minutes because of "no space left on device".
>> 
>> […]
>> 
>> For now I avoided that by removing one of the other two (rather full) 6
>> TB drives at random, and this has been going on for the last 20 hours or
>> so. Thanks to running it in a screen I can check the progress this time
>> around, and it's doing its thing at ~41 MiB/s, or ~7 hours per TiB, on
>> average.
> 
> The ENOSPC errors are likely due to the fact that the raid1 allocator 
> needs _two_ devices with free space.  If your 6T devices get too full, 
> even if the 8T device is nearly empty, you'll run into ENOSPC, because 
> you have just one device with unallocated space and the raid1 allocator 
> needs two.

I see, now this makes total sense. Two of the 6 TB drives were almost 
completely full, at 5.45 TiB used of 5.46 TiB capacity. Note to self: maybe I 
should start using the --si option to make such a condition more obvious, since I 
tend to compare mentally against the advertised capacity of 6 TB when 5.46 TiB 
would have been the correct reference.
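
Something along the lines of (assuming --si is accepted by the usage commands in 
this btrfs-progs version):

# btrfs device usage --si /data

should show the device size as 6.00TB rather than 5.46TiB, making a nearly full 
member obvious at a glance.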

"remove"-ing one of these almost-full-drives did finish successfully, and a 
"replace" of the 3rd 6 TB drive onto a second 8 TB drive is currently in 
progress (at high speed).

> btrfs device usage should help diagnose this condition, with btrfs 
> filesystem show also showing the individual device space allocation but 
> not as much other information as usage will.

I had mostly been using btrfs filesystem usage, thanks for the reminder about 
device usage, which is easier to read in this case.

> If you run into this, you may just have to do the hardware yank and 
> replace-missing thing again, yanking a 6T and replacing with an 8T.  
> Don't forget the resize.  That should leave you with two devices with 
> free space and thus hopefully allow normal raid1 reallocation with a 
> device remove again.

Good to know. For now this doesn't seem necessary: even the other drive that 
was almost completely full before looks much better now, at 4.8/5.46 TiB (or 
5.27/6.0 TB in --si :)), since some data was moved to the first 8 TB drive during 
the last "remove".

So far everything is looking good, thanks very much for the help everyone.



Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-14 Thread boli
> Replace doesn't need to do a balance, it's largely just a block level copy of 
> the device being replaced, but with some special handling so that the 
> filesystem is consistent throughout the whole operation.  This is most of why 
> it's so much more efficient than add/delete.

Thanks for this correction. In the meantime I experienced for myself that replace 
is pretty fast…

Last time I wrote, I thought the initial 4-day "remove missing" was 
successful/complete, but as it turned out, that device was still missing. Maybe 
that Ctrl+C I tried after a few days did work after all. I only checked/noticed 
this after the 8 TB drive had been zeroed and encrypted.

Luckily, most of the "missing" data was already rebuilt onto the remaining 2 
drives, and only 1.27 TiB were still "missing".

In hindsight I should probably have repeated "remove missing" here, but to 
completion. What I did instead was a "replace -r" onto the 8 TB drive. This did 
successfully rebuild the missing 1.27 TiB of data onto the 8 TB drive, at a 
speedy ~144 MiB/s no less!

So I was back to a 4-drive raid1, with 3x 6 TB drives and 1x 8 TB drive (though 
that 8 TB drive had very little data on it). Then I tried to "remove" (without 
"-r" this time) the 6 TB drive with the least amount of data on it (one had 4.0 
TiB, where the other two had 5.45 TiB each). This failed after a few minutes 
because of "no space left on device". 

Austin's mail reminded me to resize due to the larger disk, which I then did, 
but the device still couldn't be removed, with the same error message.
I then consulted the wiki, which mentions that space for metadata might be 
rather full (11.91 used of 12.66 GiB total here), and to try a "balance" with a 
low "dusage" in such cases.

For now I avoided that by removing one of the other two (rather full) 6 TB 
drives at random, and this has been going on for the last 20 hours or so. 
Thanks to running it in a screen I can check the progress this time around, and 
it's doing its thing at ~41 MiB/s, or ~7 hours per TiB, on average.

Maybe the "no data left on device" will sort itself out during this "remove"'s 
balance, otherwise I'll do it manually later.

> The most efficient way of converting the array online without adding any more 
> disks than you have to begin with is:
> 1. Delete one device from the array with device delete.
> 2. Physically switch the now unused device with one of the new devices.
> 3. Use btrfs replace to replace one of the devices in the array with the 
> newly connected device (and make sure to resize to the full size of the new 
> device).
> 4. Repeat from step 2 until you aren't using any of the old devices in the 
> array.
> 5. You should have one old device left unused, physically switch it for a new 
> device.
> 6. Use btrfs device add to add the new device to the array, then run a full 
> balance.
> 
> This will result in only two balances being needed (one implicit in the 
> device delete, and the explicit final one to restripe across the full array), 
> and will result in the absolute minimum possible data transfer.

Thank you for these very explicit/succinct instructions! Also thanks to Henk 
and Duncan! I will definitely do a full balance when all disks are replaced.
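
For my own reference, I believe these steps map onto roughly the following commands 
(device names and devids are placeholders):

# btrfs device delete /dev/mapper/old1_enc /data                       (step 1)
  <physically swap the now-unused old drive for a new one>             (step 2)
# btrfs replace start /dev/mapper/old2_enc /dev/mapper/new1_enc /data  (step 3)
# btrfs filesystem resize <devid-of-new1>:max /data
  <repeat replace + resize until one old drive is left, then swap it>  (steps 4-5)
# btrfs device add /dev/mapper/new4_enc /data                          (step 6)
# btrfs balance start /data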



Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-12 Thread boli
>> It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a 
>> 4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB 
>> raid1 (9 TB capacity).
> 
> Indeed, it's not clear why it takes 4 days for such an action. You
> indicated that you cannot add an online 5th drive, so an intermediate
> compaction of the fs onto fewer drives is a way to handle
> this issue. There are 2 ways however:
> 
> 1) Keeping the to-be-replaced drive online until a btrfs dev remove of
> it from the fs is finished, and only then replace a 6TB with an
> 8TB in the drivebay. So in this case, one needs enough free capacity
> on the fs (which you had) and full btrfs raid1 redundancy is there all
> the time.
> 
> 2) Take a 6TB out of the drivebay first and then do the btrfs dev
> remove, in this case on a really missing disk. This way, the fs is in
> degraded mode (or mounted as such) and the action of remove missing is
> also a sort of 'reconstruction'. I don't know the details of the code,
> but I can imagine that it has performance implications.

Thanks for reminding me about option 1). So in summary, without temporarily 
adding an additional drive, there are 3 ways to replace a drive:

1) Logically removing old drive (triggers 1st rebalance), physically removing 
it, then adding new drive physically and logically (triggers 2nd rebalance)

2) Physically removing old drive, mounting degraded, logically removing it 
(triggers 1st rebalance, while degraded), then adding new drive physically and 
logically (2nd rebalance)

3) Physically replacing old with new drive, mounting degraded, then logically 
replacing old with new drive (triggers rebalance while degraded)


I did option 2, which seems to be the worst of the three, as there was no 
redundancy for a couple of days, and 2 rebalances are needed, which potentially 
take a long time.

Option 1 also has 2 rebalances, but redundancy is always maintained.

Option 3 needs just 1 rebalance, but (like option 1) does not maintain 
redundancy at all times.
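
In command form, option 3 would presumably be something like (N being the devid of 
the now-missing old drive, device names being placeholders):

# mount -o degraded /dev/mapper/remaining_enc /data
# btrfs replace start N /dev/mapper/new_enc /data
# btrfs filesystem resize <devid-of-new-drive>:max /data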

That's where an extra drive bay would come in handy, allowing me to maintain 
redundancy while still needing just one "rebalance"? The question mark is because 
you mentioned "highspeed data transfer" rather than "rebalance" when doing a 
btrfs-replace, which sounds very efficient (with the -r option these reads would 
come from multiple drives).

The man page mentions that the replacement drive needs to be at least as large 
as the original, which makes me wonder if it's still a "highspeed data transfer" if 
the new drive is larger, or if it does a rebalance in that case. If it doesn't 
rebalance, then that'd be pretty much what I'm looking for. More on that below.

>> If the goal is to replace 4x 6TB drive (raid1) with 4x 8TB drive (still 
>> raid1), is there a way to remove one 6 TB drive at a time, recreate its 
>> exact contents from the other 3 drives onto a new 8 TB drive, without doing 
>> a full rebalance? That is: without writing any substantial amount of data 
>> onto the remaining 3 drives.
> 
> There isn't such a way. This goal has a violation in itself with
> respect to redundancy (btrfs raid1).

True, it would be "hack" to minimize the amount of data to rebalance (thus 
saving time), with the (significant) downside of not maintaining redundancy at 
all times.
Personally I'd probably be willing to take the risk, since I have a few other 
copies of this data.

> man btrfs-replace and option -r I would say. But still, having a 5th
> drive online available makes things much easier and faster and solid
> and is the way to do a drive replace. You can then do a normal replace
> and there is just highspeed data transfer for the old and the new disk
> and only for parts/blocks of the disk that contain filedata. So it is
> not a sector-by-sector copying also deleted blocks, but from end-user
> perspective is an exact copy. There are patches ('hot spare') that
> assume it to be this way, but they aren't in the mainline kernel yet.

Hmm, so maybe I should think about using a USB enclosure to temporarily add a 
5th drive.
Being a bit wary of an external USB enclosure, I'd probably try to minimize 
transfers from/to it.

Say by putting the old (to-be-replaced) drive into the USB enclosure, the new 
drive into the internal drive bay where the old drive used to be, and then doing a 
btrfs-replace with the -r option to minimize reads from USB.

Or by putting one of the *other* disks into the USB enclosure (neither the old nor 
its new replacement drive), and doing a btrfs-replace without the -r option.

> The btrfs-replace should work ok for btrfs raid1 fs (at least it
> worked ok for btrfs raid10 half a year ago I can confirm), if the fs
> is mostly idle during the replace (almost no new files added).

That's good to read. The fs will be idle during the replace.

> Still, you might want to have the replace related fixes added in kernel
> 4.7-rc2.

Hmm, since I'm on Fedora with kernel 4.5.5 (or 4.5.6 after most rece

Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-12 Thread boli
> It has now been doing "btrfs device delete missing /mnt" for about 90 hours.
> 
> These 90 hours seem like a rather long time, given that a rebalance/convert 
> from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub 
> takes about 7 hours (4-disk-raid1).
> 
> OTOH the filesystem will be rather full with only 3 of 4 disks available, so 
> I do expect it to take somewhat "longer than usual".
> 
> Would anyone venture a guess as to how long it might take?

It's done now, and took close to 99 hours to rebalance 8.1 TB of data from a 
4x6TB raid1 (12 TB capacity) with 1 drive missing onto the remaining 3x6TB 
raid1 (9 TB capacity).

Now I made sure quotas were off, then started a screen to fill the new 8 TB 
disk with zeros, detached it and checked iotop to get a rough estimate of 
how long it will take (I'm aware it will slow down over time).
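
The zeroing itself is nothing fancy; roughly, assuming plain dd (sdX being the new, 
still empty drive):

# screen -dmS zerofill dd if=/dev/zero of=/dev/sdX bs=1M
# iotop -o          (shows the current write rate of the dd process)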

After that I'll add this 8 TB disk to the btrfs raid1 (for yet another 
rebalance).

The next 3 disks will be replaced with "btrfs replace", so only one rebalance 
each is needed.

I assume each "btrfs replace" would do a full rebalance, and thus assign chunks 
according to the normal strategy of choosing the two drives with the most free 
space, which in this case would be a chunk to the new drive, and a mirrored 
chunk to that existing 3 drive with most free space.

What I'm wondering is this:
If the goal is to replace 4x 6TB drive (raid1) with 4x 8TB drive (still raid1), 
is there a way to remove one 6 TB drive at a time, recreate its exact contents 
from the other 3 drives onto a new 8 TB drive, without doing a full rebalance? 
That is: without writing any substantial amount of data onto the remaining 3 
drives.

It seems to me that would be a lot more efficient, but it would go against the 
normal chunk assignment strategy.

Cheers, boli



Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-11 Thread boli
Updates:

> So for this first replacement I mounted the volume degraded and ran "btrfs 
> device delete missing /mnt", and that's where it's been stuck for the past 
> ~23 hours. Only later did I figure out that this command will trigger a 
> rebalance, and of course that will take a long time.

It has now been doing "btrfs device delete missing /mnt" for about 90 hours.

These 90 hours seem like a rather long time, given that a rebalance/convert 
from 4-disk-raid5 to 4-disk-raid1 took about 20 hours months ago, and a scrub 
takes about 7 hours (4-disk-raid1).

OTOH the filesystem will be rather full with only 3 of 4 disks available, so I 
do expect it to take somewhat "longer than usual".

Would anyone venture a guess as to how long it might take?

> I assume I could probably just Ctrl+C that "btrfs device delete missing 
> /mnt", and the balance would continue as usual in the background, but I have 
> not done that yet, as I'd rather consult you guys first (a bit late, I know).

I've tried finding more info about "btrfs device delete missing", but the man 
page doesn't even mention the "missing" option, nor does it say anything about a 
rebalance starting automatically (or whether that rebalance runs in the background 
and whether Ctrl+C would work or not).

Given my assumption above I've just tried hitting Ctrl+C, but it didn't do 
anything. The (Java remote console) cursor is still happily blinking away, so I 
assume it's still doing its thing.
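
In the meantime, one way to get at least a rough sense of progress might be to run

# btrfs device usage /mnt

every now and then and watch the allocated space on the remaining drives grow as 
data is migrated off the missing one.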

Since it's the weekend I'd have more time to tinker, but I'm afraid to do anything 
drastic, such as force-rebooting the box, because in some other mails I read that 
one has just *one* chance to save a degraded array (though I'm not sure if this 
applies to my case here).

If you know any DOs/DON'Ts please share. :)

Of course I'll keep reporting any new developments. And should this replacement of 
the first of 4 drives end well, I'll replace the second drive with "btrfs replace" 
instead of delete/add and report back.

Cheers, boli




Re: Replacing drives with larger ones in a 4 drive raid1

2016-06-09 Thread bOli
On 09.06.2016, at 17:20, Duncan <1i5t5.dun...@cox.net> wrote:

> Are those the 8 TB SMR "archive" drives?

No, they are Western Digital Red drives.

Thanks for the detailed follow-up anyway. :)

Half a year ago, when I evaluated hard drives, in the 8 TB category there were 
only the Hitachi 8 TB Helium drives for 800 bucks, and the Seagate SMR for 250 
bucks.

I bought myself one of the Seagate SMR ones for testing, and figured out it 
wouldn't work for my use case (I now use it in a write-very-seldom context).

For my two NASes I went with 6 TB WD Red drives all around.

Nowadays there are more choices of 8 TB drives, such as the WD Reds I'm 
switching my backup NAS to.

> I haven't been following the issue very closely, but be aware that there 
> were serious issues with those drives a few kernels back, and that while 
> those issues are now fixed, the drives themselves operate rather 
> differently than normal drives, and simply don't work well in normal 
> usage.
> 
> The short version is that they really are designed for archiving and work 
> well when used for that purpose -- a mostly write once and leave it there 
> for archiving and retrieval but rarely if ever rewrite it, type usage.  
> However, they work rather poorly in normal usage where data is rewritten, 
> because they have to rewrite entire zones of data, and that takes much 
> longer than simply rewriting individual sectors on normal drives does.
> 
> With the kernel patches to fix the initial problems they do work well 
> enough, tho performance may not be what you expect, but the key to 
> keeping them working well is being aware that they continue to do 
> rewrites in the background for long after they are done with the initial 
> write, and shutting them down while they are doing them can be an issue.
> 
> Due to btrfs' data checksumming feature, small variances to data that 
> wouldn't normally be detected on non-checksumming filesystems were 
> detected far sooner on btrfs, making it far more sensitive to these small 
> errors.  However, if you use the drives for their intended nearly write-
> only purpose, and/or very seldom power down the drives at all or do so 
> only long after (give it half an hour, say) any writes have completed, as 
> long as you're running a current kernel with the initial issues patched, 
> you should be fine.  Just don't treat them like normal drives.
> 
> If OTOH you need more normal drive usage including lots of data rewrites, 
> especially if you frequently poweroff the devices, strongly consider 
> avoiding those 8 TB SMR drives, at least until the technology has a few 
> more years to mature.
> 
> There's more information on other threads on the list and on other lists, 
> if you need it and nobody posts more direct information (such as the 
> specific patches in question and what specific kernel versions they hit) 
> here.  I could find it but I'd have to do a search in my own list 
> archives, and now that you are aware of the problem, you can of course do 
> the search as well, if you need to. =:^)
> 
> -- 
> Duncan - List replies preferred.   No HTML msgs.
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master."  Richard Stallman
> 



Replacing drives with larger ones in a 4 drive raid1

2016-06-08 Thread boli
Dear list

I've had a 4 drive btrfs raid1 setup in my backup NAS for a few months now. 
It's running Fedora 23 Server with kernel 4.5.5 and btrfs-progs v4.4.1.

Recently I had the idea to replace the 6 TB HDDs with 8 TB ones ("WD Red"), 
because their price is now acceptable.
(More back story: That particular machine has only 4 HDD bays, which is why I 
originally dared to run it as raid5, but later converted to raid1 after having 
experienced very slow monthly btrfs scrubs and figuring that 12 TB total 
capacity would be enough for a while; my main NAS on the other hand has always 
had 6 x 6 TB raid1, which is how I knew that scrubs can be much faster).

Anyway, so I physically replaced one of the 6 TB drives with an 8 TB one. 
Fedora didn't boot properly, but went into emergency mode, apparently because 
it couldn't mount the filesystem.

Because I have to use a finicky Java console when it's booted in emergency 
mode, I figured I should probably get it to boot normally again as quickly as 
possible, so I can connect properly with SSH instead.

I guessed the way to do that would be to remove the missing drive from 
/etc/crypttab (all drives use encryption) and from the btrfs raid1, then reboot 
and add the new drive to the btrfs volume (also I'd like to completely zero the 
new drive first, to weed out bad sectors).

In the wiki I read about replace as well as delete/add and figured since I will 
eventually have to replace all 4 drives one-by-one, I might as well try out 
different methods and gain insight while doing it. :)

So for this first replacement I mounted the volume degraded and ran "btrfs 
device delete missing /mnt", and that's where it's been stuck for the past ~23 
hours. Only later did I figure out that this command will trigger a rebalance, 
and of course that will take a long time.

I'm not entirely sure that this rebalance has a chance to work, as a 3x6 TB 
raid1 would only have 9 TB of space, which may just be enough (but not by 
much). I can't currently check how much space is actually used, but it must be 
at least 8.1 TB (that's how much data is on my main NAS), though probably not much 
more than that (most if not all of the snapshots from my main NAS may still be 
synced to the backup NAS too, for now).

Regarding a few gotchas: I use btrbk to copy and thin snapshots, so there are 
fewer than 100 snapshots. I might still have quotas active though, because that 
allows determining the diff size between 2 snapshots. In practice I don't use this 
often, so I will turn quotas off once things are stable, because I read in other 
list mails that they make things slow.
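
Turning them off later should just be a matter of:

# btrfs quota disable /mnt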

I assume I could probably just Ctrl+C that "btrfs device delete missing /mnt", 
and the balance would continue as usual in the background, but I have not done 
that yet, as I'd rather consult you guys first (a bit late, I know).

Anyway, if you have any tips, I'm glad to read them.

For now my plan is to keep waiting and see what happens. Since it's just my 
personal backup NAS, the downtime is not that bad; it just won't get the usual 
nightly backups from my main NAS for some time.

Losing data and having to start from scratch would be an inconvenience (though not 
a disaster), particularly because the backup NAS is at a friend's house and my 
upstream to re-seed it is only 50 Mbit/s.

Also thanks to Hugo and Duncan for their awesome/insightful replies to my first 
question a few months ago (didn't want to spam the list just to say thanks).

Best regards, boli


"layout" of a six drive raid10

2016-02-08 Thread boli
Hi

I'm trying to figure out what a six drive btrfs raid10 would look like. The 
example at 
<https://btrfs.wiki.kernel.org/index.php/FAQ#What_are_the_differences_among_MD-RAID_.2F_device_mapper_.2F_btrfs_raid.3F>
 seems ambiguous to me.

It could mean that stripes are split over two raid1 sets of three devices each. 
The sentence "Every stripe is split across to exactly 2 RAID-1 sets" would lead 
me to believe this.

However, earlier it says for raid0 that "stripe[s are] split across as many 
devices as possible". Which for six drives would be: stripes are split over 
three raid1 sets of two devices each.

Can anyone enlighten me as to which is correct?


The reason I'm asking is that I'm deciding on a suitable raid level for a new DIY 
NAS box. I'd rather not use btrfs raid6 (for now). The first alternative I 
thought of was raid10. Later I learned how btrfs raid1 works and figured it 
might be better suited for my use case: striping the data over multiple raid1 
sets doesn't really help, as transfers from/to my box will be limited by gigabit 
ethernet (~125 MB/s) anyway, and a single drive can saturate that.

Thoughts on this would also be appreciated.


As a bonus I was wondering how btrfs raid1 is laid out in general, in 
particular with even and odd numbers of drives. A pair is trivial. For three 
drives I imagine a "ring setup", with each drive sharing half of its data with 
another drive. But how is it with four drives – are they organized as two 
pairs, or four-way, or …

Cheers, boli