Hi Qu,

On 2017-06-21 10:45, Qu Wenruo wrote:
> At 06/21/2017 06:57 AM, waxhead wrote:
>> I am trying to piece together the actual status of the RAID5/6 bit of BTRFS.
>> The wiki refers to kernel 3.19, which was released in February 2015, so I
>> assume the information there is a tad outdated (the last update on the
>> wiki page was July 2016):
>> https://btrfs.wiki.kernel.org/index.php/RAID56
>>
>> Now there are four problems listed
>>
>> 1. Parity may be inconsistent after a crash (the "write hole")
>> Is this still true? If yes, would this not apply to RAID1/RAID10 as
>> well? How was it solved there, and why can't the same be done for
>> RAID5/6?
> 
> Unlike pure striping, a fully functional RAID5/6 should write at full
> stripe granularity: a full stripe is made up of N data stripes plus the
> matching P/Q parity.
> 
> Here is an example of how the write sequence affects the recoverability
> of RAID5/6.
> 
> Existing full stripe:
> X = used space (extent allocated)
> O = unused space
> W/Z = parity covering the used/unused regions respectively
> 
> Data 1   |XXXXXX|OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO|
> Data 2   |OOOOOO|OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO|
> Parity   |WWWWWW|ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ|
> 
> When a new extent is allocated in the data 1 stripe, if we write the
> data directly into that region and then crash before updating the
> parity, the result will be:
> 
> Data 1   |XXXXXX|XXXXXX|OOOOOOOOOOOOOOOOOOOOOOOOOOOOO|
> Data 2   |OOOOOO|OOOOOO|OOOOOOOOOOOOOOOOOOOOOOOOOOOOO|
> Parity   |WWWWWW|ZZZZZZ|ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ|
> 
> The parity stripe is not updated. That is fine as long as all devices
> survive, since the data itself is still correct, but it reduces
> recoverability: if we then lose the device containing the data 2
> stripe, we cannot reconstruct the correct contents of data 2 from
> data 1 and the stale parity.
> 
> Personally, though, I don't think it's a big problem yet.
> 
> Someone has an idea to modify the extent allocator to handle this, but
> I don't consider it worthwhile.
> 
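
To make Qu's failure mode concrete, here is a minimal sketch in plain
Python (toy 6-byte stripes standing in for the diagram above; an
illustration only, not btrfs code):

def xor(a: bytes, b: bytes) -> bytes:
    # RAID5 parity is the XOR of the data stripes
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"AAAAAA"
data2 = b"BBBBBB"
parity = xor(data1, data2)    # consistent full stripe on disk

data1 = b"CCCCCC"             # new extent written in place...
# ...and we crash before the parity is rewritten: it is now stale.

rebuilt = xor(data1, parity)  # degraded read after losing data 2's disk
assert rebuilt != data2       # stale parity reconstructs garbage
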
>>
>> 2. Parity data is not checksummed
>> Why is this a problem? Does it have to do with the design of BTRFS
>> somehow? Parity is, after all, just data, and BTRFS does checksum
>> data, so why is this a problem?
> 
> Because that is one proposed solution to the problem above.

How could that be a solution for the write hole? If the parity is wrong
AND you lose a disk, even with a checksum of the parity you are in no
position to rebuild the missing data. And if you rebuild wrong data, the
data checksum will flag it anyway. So adding a checksum to the parity
would not solve any issue.
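
The sketch above extends directly to show this (zlib.crc32 stands in
for the crc32c that btrfs uses; the parity checksum itself is
hypothetical, since btrfs stores none):

import zlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data1, data2 = b"AAAAAA", b"BBBBBB"
parity = xor(data1, data2)
parity_csum = zlib.crc32(parity)  # the hypothetical parity checksum
data2_csum = zlib.crc32(data2)    # the ordinary data checksum

data1 = b"CCCCCC"                 # partial stripe write, then crash

# The stale parity was written intact, so its checksum still matches;
# a parity checksum would not even detect the problem:
assert zlib.crc32(parity) == parity_csum

# Rebuilding data 2 in degraded mode yields garbage; the data checksum
# detects that, but detection is not repair:
rebuilt = xor(data1, parity)
assert zlib.crc32(rebuilt) != data2_csum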

A possible "mitigation" is to track, in an "intent log", all the
non-full-stripe writes during a transaction. If a power failure aborts
a transaction, on the next mount a scrub process is started to correct
the parities of only the stripes tracked before.
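
Roughly like this (all helper names here - persist, read_data_stripes,
recompute_parity, write_parity - are hypothetical, not btrfs
interfaces):

# Track stripes touched by non-full-stripe writes in a transaction.
class IntentLog:
    def __init__(self):
        self.dirty = set()

    def record(self, stripe_id):
        self.dirty.add(stripe_id)
        persist(self.dirty)    # must be durable before the RMW write

    def commit(self):
        self.dirty.clear()
        persist(self.dirty)    # transaction closed cleanly

# After an unclean shutdown only the tracked stripes can have stale
# parity, so scrub just those instead of the whole array:
def mount_recovery(log):
    for stripe_id in log.dirty:
        data = read_data_stripes(stripe_id)
        write_parity(stripe_id, recompute_parity(data))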

A full solution is to journal all the non-full-stripe writes, as MD
does.
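
In outline (again with hypothetical helpers; MD's raid5 write journal
is the real-world reference):

# Stage the whole update on a journal device before touching the array.
def journaled_rmw_write(journal, stripe_id, data, parity):
    entry = journal.append(stripe_id, data, parity)
    journal.flush()                  # durable before any in-place write
    write_in_place(stripe_id, data, parity)
    journal.retire(entry)            # entry may now be reclaimed

# Recovery replays unretired entries, so a stripe ends up either fully
# old (the entry never became durable) or fully new (replayed):
def replay(journal):
    for entry in journal.pending():
        write_in_place(entry.stripe_id, entry.data, entry.parity)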


BR
G.Baroncelli

-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5