Re: raid6 + hot spare question

2015-09-09 Thread Brendan Hide

Things can be a little more nuanced.

First off, I'm not even sure btrfs supports a hot spare currently. I 
haven't seen anything along those lines on the list recently - and don't 
recall anything before that either. The mention of it on the Project 
Ideas page on the wiki implies it hasn't been looked at yet.


Also, depending on your experience with btrfs, some of the tasks 
involved in fixing up a missing/dead disk might be daunting.


See further (queries for btrfs-devs too) inline below:

On 2015-09-08 14:12, Hugo Mills wrote:

On Tue, Sep 08, 2015 at 01:59:19PM +0200, Peter Keše wrote:


However I'd like to be prepared for a disk failure. Because my
server is not easily accessible and disk replacement times can be
long, I'm considering the idea of making a 5-drive raid6, thus
getting 12TB usable space + parity. In this case, the extra 4TB
drive would serve as some sort of a hot spare.

From the above I'm reading one of two situations:
a) 6 drives, raid6 across 5 drives and 1 unused/hot spare
b) 5 drives, raid6 across 5 drives and zero unused/hot spare


My assumption is that if one hard drive fails before the volume is
more than 8TB full, I can just rebalance and resize the volume from
12 TB back to 8 TB (essentially going from 5-drive raid6 to 4-drive
raid6).

Can anyone confirm my assumption? Can I indeed rebalance from
5-drive raid6 to 4-drive raid6 if the volume is not too big?

Yes, you can, provided, as you say, the data is small enough to fit
into the reduced filesystem.

Hugo.

This is true - however, I'd be hesitant to build this up, as the 
current recovery process can be far from "smooth" depending on how 
unlucky you are. In scenario b above, will the filesystem still be 
read/write post-reboot, or read-only? Will it "just work", with the 
only requirement being free space on the four working disks?


RAID6 is intended to tolerate two disk failures. With a double failure 
and only 5 disks, how easily the user can balance/convert to a 3-disk 
raid5 is also important.
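
In that double-failure case, I'd expect the recovery to look roughly 
like the following - a sketch only, with hypothetical device names and 
mountpoint, and I'm not certain of the ideal ordering of the convert 
vs. the delete:

    # mount degraded using any surviving member
    mount -o degraded /dev/sda /mnt
    # rewrite everything as raid5 across the three remaining disks
    btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt
    # then drop the dead devices from the fs (run once per missing device)
    btrfs device delete missing /mnt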


Please shoot down my concerns. :)

--
__
Brendan Hide
http://swiftspirit.co.za/
http://www.webafrica.co.za/?AFF1E97



Re: raid6 + hot spare question

2015-09-09 Thread Chris Murphy
On Wed, Sep 9, 2015 at 9:48 AM, Brendan Hide  wrote:
> Things can be a little more nuanced.
>
> First off, I'm not even sure btrfs supports a hot spare currently. I haven't
> seen anything along those lines on the list recently - and don't recall
> anything before that either. The mention of it on the Project Ideas page
> on the wiki implies it hasn't been looked at yet.
>
> Also, depending on your experience with btrfs, some of the tasks involved in
> fixing up a missing/dead disk might be daunting.
>
> See further (queries for btrfs-devs too) inline below:
>
> On 2015-09-08 14:12, Hugo Mills wrote:
>>
>> On Tue, Sep 08, 2015 at 01:59:19PM +0200, Peter Keše wrote:
>>>
>>> 
>>> However I'd like to be prepared for a disk failure. Because my
>>> server is not easily accessible and disk replacement times can be
>>> long, I'm considering the idea of making a 5-drive raid6, thus
>>> getting 12TB usable space + parity. In this case, the extra 4TB
>>> drive would serve as some sort of a hot spare.
>
> From the above I'm reading one of two situations:
> a) 6 drives, raid6 across 5 drives and 1 unused/hot spare
> b) 5 drives, raid6 across 5 drives and zero unused/hot spare
>>>
>>>
>>> My assumption is that if one hard drive fails before the volume is
>>> more than 8TB full, I can just rebalance and resize the volume from
>>> 12 TB back to 8 TB (essentially going from 5-drive raid6 to 4-drive
>>> raid6).
>>>
>>> Can anyone confirm my assumption? Can I indeed rebalance from
>>> 5-drive raid6 to 4-drive raid6 if the volume is not too big?
>>
>> Yes, you can, provided, as you say, the data is small enough to fit
>> into the reduced filesystem.
>>
>> Hugo.
>>
> This is true - however, I'd be hesitant to build this up, as the current
> recovery process can be far from "smooth" depending on how unlucky you are.
> In scenario b above, will the filesystem still be read/write post-reboot,
> or read-only? Will it "just work", with the only requirement being free
> space on the four working disks?


There isn't even a need to rebalance separately: dev delete will
shrink the fs and balance it. At least that's what I'm seeing here,
though I found a failure in a really simple (I think) case, which I
just made a new post about:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg46296.html

This should work whether on a failed/missing disk or a normally
operating volume, so long as (a) the removal doesn't go below the
minimum device count and (b) there's enough space for the data after
the volume shrink.
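
For the single-failure case the whole thing is just this (a sketch;
device names and mountpoint are hypothetical):

    # mount degraded using any surviving member, then drop the dead
    # disk; 'device delete missing' relocates its chunks onto the
    # remaining four devices as it goes
    mount -o degraded /dev/sdb /mnt
    btrfs device delete missing /mnt
    btrfs filesystem usage /mnt    # confirm the result and free space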



-- 
Chris Murphy


Re: raid6 + hot spare question

2015-09-09 Thread Duncan
Brendan Hide posted on Wed, 09 Sep 2015 17:48:11 +0200 as excerpted:

> Things can be a little more nuanced.
> 
> First off, I'm not even sure btrfs supports a hot spare currently. I
> haven't seen anything along those lines on the list recently - and don't
> recall anything before that either. The mention of it on the Project
> Ideas page on the wiki implies it hasn't been looked at yet.

Btrfs doesn't support hot spares... yet.  As mentioned, it's on the 
project-ideas list and, given its practicality, is likely to be 
implemented at some point; but at the current pace of btrfs 
development, that's likely to be some years away.

The best that can be done is a "warm spare": connected up but 
(presumably) spun down and not part of the raid, so it can (remotely if 
necessary) be brought online and added to the raid as needed.  That's 
certainly possible, but not as a btrfs-specific feature; rather, it's a 
general part of the Linux infrastructure.
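
Bringing the warm spare into service might look like this - a sketch 
only, with hypothetical device names, devid and mountpoint.  Note that 
'btrfs replace' on raid56 may not work on older kernels; if it 
doesn't, 'btrfs device add' followed by 'btrfs device delete missing' 
achieves the same result:

    hdparm -y /dev/sdf                   # keep the idle spare spun down
    # after a failure: mount degraded, rebuild onto the spare
    mount -o degraded /dev/sda /mnt
    btrfs replace start 3 /dev/sdf /mnt  # 3 = devid of the missing disk
    btrfs replace status /mnt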

> Also, depending on your experience with btrfs, some of the tasks
> involved in fixing up a missing/dead disk might be daunting.

Yes...

> On 2015-09-08 14:12, Hugo Mills wrote:
>> On Tue, Sep 08, 2015 at 01:59:19PM +0200, Peter Keše wrote:
>>> 
>>> My assumption is that if one hard drive fails before the volume is
>>> more than 8TB full, I can just rebalance and resize the volume from 12
>>> TB back to 8 TB essentially going from 5-drive raid6 to 4-drive
>>> raid6).
>>>
>>> Can anyone confirm my assumption? Can I indeed rebalance from 5-drive
>>> raid6 to 4-drive raid6 if the volume is not too big?
>> 
>> Yes, you can, provided, as you say, the data is small enough to fit
>> into the reduced filesystem.
>>
> This is true - however, I'd be hesitant to build this up, as the
> current recovery process can be far from "smooth" depending on how
> unlucky you are.  [W]ill the filesystem still be read/write post-
> reboot, or read-only? Will it "just work", with the only requirement
> being free space on the four working disks?

As long as there are four working devices and chunk-unallocated[1] space 
on them, yes, reducing to a 4-device raid6 should be fine.  What happens 
is that raid6 normally requires writing across at least four devices[2]: 
a two-way data stripe plus two parities.  If devices drop out and 
existing chunks with free space are no longer available across four 
devices, btrfs will leave them be and try to allocate additional chunks 
across the remaining devices, down to four[2].  If it can do so, writing 
can continue in the now reduced-stripe-width raid6.  If not, there's a 
chance of going read-only, as it can no longer satisfy the raid6 
requirements.[3]
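
Whether that space exists is easy to check (mountpoint hypothetical):

    # look at "Device unallocated" and the per-device unallocated
    # figures: new raid6 chunks need unallocated space on >= 4 devices
    btrfs filesystem usage /mnt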
 
> RAID6 is intended to tolerate two disk failures. With a double failure
> and only 5 disks, how easily the user can balance/convert to a 3-disk
> raid5 is also important.

Again, see footnotes [2] and [3] below.

---
[1] Btrfs allocates space in two stages: first in largish chunks 
dedicated to either data or metadata (nominal chunk size 1 GiB for 
data, 256 MiB for metadata), then by actually using space from a chunk 
until it's gone and a new one needs to be allocated.  It's quite 
possible to have normal df, etc, report space left, but have it all 
locked up in pre-allocated chunks (typically data), with no unallocated 
space left from which to allocate new chunks (typically metadata) when 
needed.  That used to be a big issue, as btrfs could automatically 
allocate chunks but it took a balance to deallocate them.  Now btrfs 
deallocates entirely empty chunks on its own.  The problem can still 
occur, especially over time as existing chunks get fragmented and more 
chunks end up only partially used, but it's not the /huge/ problem it 
once was, because entirely empty chunks are automatically deallocated 
and their space returned to the unallocated pool, to be chunk-allocated 
as necessary.
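
When partially used chunks do pile up, a filtered balance compacts 
them so their space returns to the unallocated pool (mountpoint 
hypothetical; usage=5 rewrites only chunks less than 5% full):

    btrfs balance start -dusage=5 /mnt
    btrfs balance start -musage=5 /mnt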

[2] While traditional raid6 requires a minimum of four devices (two-way 
data stripe plus two parities), and raid5 a minimum of three (two-way 
data stripe plus single parity), btrfs raid5, at least, degrades to 
single data plus single parity, which is in effect raid1, thus allowing 
a two-device "raid5".  I am not actually sure whether btrfs raid6 
similarly allows degrading to single data plus double parity, thus 
three devices, or not.  Of course, to do the full filesystem this way 
would require that the data and metadata fit on a single device, since 
the others are parity; but as a temporary fallback, where existing 
chunks are simply left as-is with data/metadata reconstructed from 
parity where necessary and only new writes use the single-data/metadata 
mode, it can keep the filesystem writable.
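
If someone wants to settle that raid6 question without risking real 
disks, loop devices make a cheap test bed - a throwaway sketch, all 
paths hypothetical, root required:

    for i in 1 2 3 4 5; do
        truncate -s 4G /tmp/d$i
        losetup /dev/loop$i /tmp/d$i
    done
    mkfs.btrfs -d raid6 -m raid6 /dev/loop[1-5]
    mount /dev/loop1 /mnt
    # ...write some test data, then unmount and detach two members...
    umount /mnt
    losetup -d /dev/loop4 /dev/loop5
    mount -o degraded /dev/loop1 /mnt   # may need 'btrfs device scan' first
    # now try writing: does a 3-device degraded raid6 stay read/write?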

[3] In actuality, given a device dropout situation, as long as the 
filesystem isn't unmounted, btrfs will continue to try to write to the 
failed/dropped device, writing to the other devices and buffering writes 
for the failed device in case it reappears, until memory is exhausted, at 
which point 

Re: raid6 + hot spare question

2015-09-08 Thread Hugo Mills
On Tue, Sep 08, 2015 at 01:59:19PM +0200, Peter Keše wrote:
> 
> I'm planning to set up a raid6 array with 4 x 4TB drives.
> Presumably that would result in 8TB of usable space + parity, which
> is about enough for my data (my data is currently 5TB in raid1,
> slowly growing at about 1 TB per year, but I often keep some
> additional backups if space permits).
> 
> However I'd like to be prepared for a disk failure. Because my
> server is not easily accessible and disk replacement times can be
> long, I'm considering the idea of making a 5-drive raid6, thus
> getting 12TB usable space + parity. In this case, the extra 4TB
> drive would serve as some sort of a hot spare.
> 
> My assumption is that if one hard drive fails before the volume is
> more than 8TB full, I can just rebalance and resize the volume from
> 12 TB back to 8 TB (essentially going from 5-drive raid6 to 4-drive
> raid6).
> 
> Can anyone confirm my assumption? Can I indeed rebalance from
> 5-drive raid6 to 4-drive raid6 if the volume is not too big?

   Yes, you can, provided, as you say, the data is small enough to fit
into the reduced filesystem.
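
(For reference, creating the 5-device raid6 in the first place is a 
one-liner - device names hypothetical:

    mkfs.btrfs -m raid6 -d raid6 /dev/sd[a-e]

The later 5-to-4 reduction is then a device delete, as covered 
elsewhere in the thread.)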

   Hugo.

-- 
Hugo Mills | "What's so bad about being drunk?"
hugo@... carfax.org.uk | "You ask a glass of water"
http://carfax.org.uk/  | Arthur & Ford
PGP: E2AB1DE4  | The Hitch-Hiker's Guide to the Galaxy

