On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
> I have some additional info.
> 
> I found the reason the FS got corrupted. It was a single failing drive,
> which caused the entire cabinet (containing 7 drives) to reset. So the
> FS suddenly lost 7 drives.
> 

This remains a mystery to me. btrfs is marketed as always consistent
on disk: you either have the previous full transaction or the current
full transaction. If the current transaction is interrupted, the promise
is that you are left with the previous valid, consistent transaction.

Obviously this is not what happens in practice, which nullifies the
main selling point of btrfs.

Unless this is expected behavior, it sounds like some write barriers are
missing, and summary data is updated before (and without waiting for)
the subordinate data it references. And if it is expected behavior ...

> I have removed the failed drive, so the RAID is now degraded. I hope
> the data is still recoverable... ☹
> 
