Going back to my original email, would the BTRFS wiki admins consider updating the RAID56 status page so that it better reflects the current state?

It still states "multiple serious data-loss bugs", which, as Qu Wenruo has already clarified, is not the case.  The only "bug" left is the write-hole edge case.


On 30/1/19 6:47 am, Goffredo Baroncelli wrote:
On 29/01/2019 20.02, Chris Murphy wrote:
On Mon, Jan 28, 2019 at 3:52 PM Remi Gauvin <r...@georgianit.com> wrote:
On 2019-01-28 5:07 p.m., DanglingPointer wrote:

 From Qu's statement and perspective, there's no difference from other
non-BTRFS software RAID56 implementations out there that are marked as stable (except
ZFS).
Also, there are no "multiple serious data-loss bugs".
Please do consider my proposal, as it will decrease the amount of
unwarranted paranoia that exists in the community.
That is, as long as the wiki properly mentions the current state along with the options
for mitigation, like backup power, perhaps RAID1 for metadata, or
anything else you believe is appropriate.
Should implement some way to automatically scrub on unclean shutdown.
BTRFS is the only (to my knowledge) RAID implementation that will not
automatically detect an unclean shutdown and fix the affected parity
blocks (either by some form of write journal/write-intent map, or a full
resync).
There's no dirty bit set on mount, and thus no dirty bit to clear on
clean unmount, from which to infer an unclean shutdown if it's still
present at the next mount.
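
To make the dirty-bit idea concrete, here is a minimal illustrative sketch (in Python, with invented names and a made-up on-disk layout; it is not btrfs or md code): set a dirty flag before accepting writes, clear it only on clean unmount, and treat a flag that is already set at mount time as evidence of an unclean shutdown that needs a resync/scrub.

# Illustrative sketch only: the classic "dirty bit" protocol used to
# detect an unclean shutdown.  Names and layout are invented for clarity.
import json
import os

SUPERBLOCK = "superblock.json"   # hypothetical stand-in for an on-disk superblock

def _load(path):
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def _store(path, sb):
    with open(path, "w") as f:
        json.dump(sb, f)

def mount(path=SUPERBLOCK):
    """Mark the array dirty before accepting writes; report whether the
    previous shutdown was unclean (dirty bit still set)."""
    sb = _load(path)
    needs_resync = sb.get("dirty", False)
    sb["dirty"] = True
    _store(path, sb)
    return needs_resync

def clean_unmount(path=SUPERBLOCK):
    """Clear the dirty bit only after all writes are on stable storage."""
    sb = _load(path)
    sb["dirty"] = False
    _store(path, sb)

if __name__ == "__main__":
    if mount():
        print("unclean shutdown detected -> trigger parity resync/scrub")
    else:
        print("clean previous shutdown -> nothing to do")
    clean_unmount()
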
It would be sufficient to use the log, which BTRFS already has. During each
transaction, when an area is touched by an RMW cycle, it has to be tracked in the
log.
In case of an unclean shutdown, a way to replay the log is already implemented. So it
would be sufficient to track a scrub of these areas as part of the "log replay".

Of course, I am not speaking as a BTRFS developer, so the reality could be more
complex: e.g. I don't know how easy it would be to start a scrub process on a per-area
basis.
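
As a rough illustration of the idea (again only a sketch in Python with invented names, not how the btrfs log tree actually works): record the stripes touched by RMW writes in an intent log during a transaction, truncate the log on a clean commit, and on log replay after an unclean shutdown scrub only the recorded stripes.

# Illustrative sketch only of the idea above: remember which stripes an
# RMW write touches, and on recovery scrub just those stripes.
# Nothing here is real btrfs code; names are invented for clarity.

INTENT_LOG = "intent.log"        # hypothetical per-transaction intent log

def log_rmw(stripe_id, log=INTENT_LOG):
    """Called before an RMW write: remember which stripe may become inconsistent."""
    with open(log, "a") as f:
        f.write(f"{stripe_id}\n")

def commit_transaction(log=INTENT_LOG):
    """On a clean commit the intent log can simply be truncated."""
    open(log, "w").close()

def replay_after_unclean_shutdown(scrub_stripe, log=INTENT_LOG):
    """During log replay, scrub only the stripes that were in flight."""
    try:
        with open(log) as f:
            stripes = {int(line) for line in f if line.strip()}
    except FileNotFoundError:
        return
    for stripe in sorted(stripes):
        scrub_stripe(stripe)     # verify/rebuild parity for this stripe only
    commit_transaction(log)

if __name__ == "__main__":
    log_rmw(17)
    log_rmw(42)
    # ... crash happens here, before commit_transaction() ...
    replay_after_unclean_shutdown(lambda s: print(f"scrubbing stripe {s}"))
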

BR
G.Baroncelli


