David Christensen <dpchr...@holgerdanske.com> wrote:
> On 12/31/17 09:44, Sven Hartge wrote:
>> David Christensen <dpchr...@holgerdanske.com> wrote:
>>> On 12/30/17 14:38, Matthew Crews wrote:

>>>> The main issue I see with using BTRFS with MDADM is that you lose
>>>> the benefit of bit-rot repair. MDADM can't correct bit rot, but
>>>> BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
>>>> configurations.
>> 
>>> AFAIK:
>> 
>>> 1.  mdadm RAID1 can fix bit rot, so long as one drive has a good
>>> block to fix the others.
 
>> Yes, but it can't fix silent bit-rot, where incorrect bytes are read
>> from the drive without the drive noticing. In that case the Kernel
>> has no way of knowing which bytes are the correct ones, you need some
>> sort of checksum for that.

> My bad -- the only way for md to detect bit-rot is via scrubbing:

>     $ man 4 md

>     SCRUBBING AND MISMATCHES
>     ...
>        If check was used, then no action is taken to handle the mismatch,  it
>        is  simply  recorded.   If  repair  was  used,  then a mismatch will be
>        repaired in the same way that resync repairs arrays.   For RAID5/RAID6
>        new parity blocks are written.  For RAID1/RAID10, all but one block are
>        overwritten with the content of that one block.


> I wonder how md picks "that one block"?

Only if one drive reports an error; then the data from the good block is
used to overwrite the bad one, in the hope that the drive remaps the
sector and everything is fine again.

If neither device reports an error but the data read from the two copies
differs, MD-RAID1 has no way of knowing which block is the good one.
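
A small illustration (just a Python sketch, nothing to do with the
actual md code): a checksum stored next to the data, the way btrfs and
ZFS do it, is exactly the extra information that lets you pick the good
copy when two mirrors disagree.

    # Sketch only: a stored checksum resolves a mirror mismatch,
    # plain RAID1 cannot.
    import zlib

    original = b"correct data"
    stored_crc = zlib.crc32(original)   # checksum recorded at write time

    block_a = original                  # copy read from drive A
    block_b = b"cOrrect data"           # copy from drive B, silently bit-flipped

    def pick_good_copy(copies, crc):
        """Return the first copy whose CRC32 matches the stored checksum."""
        for data in copies:
            if zlib.crc32(data) == crc:
                return data
        return None                     # no copy matches: data is lost

    print(pick_good_copy([block_a, block_b], stored_crc))  # b'correct data'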

MD-RAID5/6 could calculate all parity combinations and use the data a
majority agrees upon. (I don't know whether it actually does that, though.)
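
As a toy example (again only a Python sketch, not what the kernel's
raid5/raid6 code actually does): with a single XOR parity block you can
detect that a stripe is inconsistent, but you cannot tell which chunk is
the wrong one; locating it needs extra information, such as RAID6's
second (Q) parity.

    # Sketch only: a single-parity RAID5-style XOR check detects a
    # mismatch in a stripe but cannot locate the corrupted chunk.
    def xor_blocks(blocks):
        """Byte-wise XOR of equally sized blocks."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data_chunks = [b"AAAA", b"BBBB", b"CCCC"]  # data chunks of one stripe
    parity = xor_blocks(data_chunks)           # parity written with the stripe

    read_chunks = [b"AAAA", b"BxBB", b"CCCC"]  # one chunk silently corrupted

    print(xor_blocks(read_chunks) == parity)   # False: detected, not located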

I tried looking at the Kernel RAID code, but I must admit it is all
Esperanto to me; the code is far too low-level for me to understand.

S°

-- 
Sigmentation fault. Core dumped.
