Hello group.

I am confused: can somebody please clarify which RAID subsystem is
actually affected? BTRFS's own RAID5/6, or mdadm (Linux kernel RAID) RAID5/6?

Are there any gotchas (in terms of broken reliability) when using the
kernel one?

The web is full of legends; this confusion seems to be quite common...

On 06/21/2017 12:57 AM, waxhead wrote:
> I am trying to piece together the actual status of the RAID5/6 bit of
> BTRFS.
> The wiki refers to kernel 3.19, which was released in February 2015,
> so I assume that the information there is a tad outdated (the last
> update on the wiki page was July 2016):
> https://btrfs.wiki.kernel.org/index.php/RAID56
>
> Now there are four problems listed:
>
> 1. Parity may be inconsistent after a crash (the "write hole")
> Is this still true? If yes, would this not apply to RAID1 / RAID10 as
> well? How was it solved there, and why can't the same be done for
> RAID5/6?
>
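For what it's worth, this is my rough mental model of the write hole,
sketched below in Python. It assumes plain XOR parity over a 3-device
stripe and a made-up xor() helper -- my own simplification, not btrfs'
actual layout or code -- so please correct me if it is off:

    # Toy model of the RAID5 "write hole": a stripe update is interrupted
    # after the new data block hits disk but before the matching parity does.
    # Purely illustrative; not btrfs internals.

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    # A 3-device stripe: two data blocks plus one parity block.
    d0 = b"\x11" * 4
    d1 = b"\x22" * 4
    parity = xor(d0, d1)        # consistent stripe: parity = d0 ^ d1

    # d0 is rewritten in place, but we "crash" before the new parity lands.
    d0_new = b"\x33" * 4        # on disk now, while parity still matches old d0

    # Later the device holding d1 dies and d1 is rebuilt from d0_new ^ parity.
    d1_rebuilt = xor(d0_new, parity)
    print(d1_rebuilt == d1)     # False: a block that was never touched comes
                                # back corrupted

If that is roughly right, I can see why RAID1/RAID10 would be different:
a torn mirror write only leaves the two copies of the same block out of
sync, and the checksums can pick the good copy; it does not damage an
unrelated block.
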
> 2. Parity data is not checksummed
> Why is this a problem? Does it have to do with the design of BTRFS
> somehow?
> Parity is, after all, just data, and BTRFS does checksum data, so what
> is the reason this is a problem?
>
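Similarly, my (possibly wrong) understanding of why unchecksummed parity
matters: a silently corrupted parity block stays invisible to checksum
checks until the moment it is needed for a rebuild. Another toy sketch,
again assuming simple XOR parity and using crc32 as a stand-in checksum,
not btrfs' real code:

    # Toy model of unchecksummed parity: data blocks carry checksums, the
    # parity block does not. Purely illustrative, not btrfs internals.
    import zlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"\xaa" * 4, b"\xbb" * 4
    csums = {"d0": zlib.crc32(d0), "d1": zlib.crc32(d1)}  # data is checksummed
    parity = xor(d0, d1)                                  # ...parity is not

    # A bit rots in the parity block. Checking the data checksums sees nothing.
    parity = bytes([parity[0] ^ 0x01]) + parity[1:]
    print(zlib.crc32(d0) == csums["d0"],
          zlib.crc32(d1) == csums["d1"])                  # True True

    # Only when d1's device is lost and it is rebuilt from d0 ^ parity does
    # the damage surface -- and by then the good copy of d1 is gone.
    d1_rebuilt = xor(d0, parity)
    print(zlib.crc32(d1_rebuilt) == csums["d1"])          # False: caught too late

So, if I read it right, the data checksums still catch the bad
reconstruction (no silent corruption is returned), but there is nothing
left to repair from, and without a checksum on the parity itself I do
not see how it could have been flagged earlier.
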
> 3. No support for discard? (possibly -- needs confirmation with cmason)
> Does this really matter that much? Is there an update on this?
>
> 4. The algorithm uses as many devices as are available: No support for
> a fixed-width stripe.
> What is the plan for this one? There were patches on the mailing list
> by the SnapRAID author to support up to 6 parity devices. Will the
> (re)design of btrfs raid5/6 support a scheme that allows for multiple
> parity devices?
>
> I do have a few other questions as well...
>
> 5. BTRFS still (as of kernel 4.9) does not seem to use the device ID
> to communicate with devices.
>
> If you yank a device out of a multi-device filesystem, for example
> /dev/sdg, and it reappears as, say, /dev/sdx, btrfs will still happily
> try to write to /dev/sdg even if btrfs fi sh /mnt shows the correct
> device ID. What is the status of getting BTRFS to properly understand
> that a device is missing?
>
> 6. RAID1 always needs to be able to make two copies. E.g. if you have
> three disks, you can lose one and it should still work. What about
> RAID10? If you have, for example, a 6-disk RAID10 array, lose one disk,
> and reboot (due to #5 above), will RAID10 recognize that the array is
> now a 5-disk array and stripe+mirror over 2 disks (or possibly 2.5
> disks?) instead of 3? In other words, will it work as long as it can
> create a RAID10 profile, which requires a minimum of four disks?
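On question 6, the "2.5 disks" arithmetic above is what I would naively
expect if RAID10 simply keeps two copies of each stripe element and
spreads them over however many devices currently have space, with a
minimum of four. A back-of-the-envelope sketch of that assumption (the
raid10_layout() helper and the allocation rule are my guess, not
verified against the actual chunk allocator):

    # Naive model: a RAID10 chunk stores 2 copies of everything and stripes
    # across as many devices as it can pair up. Assumption only.

    def raid10_layout(num_devices: int, copies: int = 2, min_devices: int = 4):
        if num_devices < min_devices:
            raise ValueError("not enough devices for a RAID10 chunk")
        stripe_width = num_devices // copies   # devices' worth of unique data
        return {"devices_used": stripe_width * copies,
                "stripe_width": stripe_width}

    print(raid10_layout(6))   # {'devices_used': 6, 'stripe_width': 3}
    print(raid10_layout(5))   # {'devices_used': 4, 'stripe_width': 2} per chunk
    print(raid10_layout(4))   # {'devices_used': 4, 'stripe_width': 2}

If that model is anywhere near reality, a degraded 5-disk array would
still satisfy the 4-device minimum, which is exactly the point I would
like someone to confirm.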
