On Mon, Sep 19, 2016 at 11:38 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
>> ReiserFS had no working fsck for all of the 8 years I used it (and still
>> didn't last year when I tried to use it on an old disk). "Not working"
>> here means "much less data is readable from the filesystem after running
>> fsck than before." It's not that much of an inconvenience if you have
>> backups.
>
> For a small array, this may be the case. Once you start looking into double
> digit TB scale arrays though, restoring backups becomes a very expensive
> operation. If you had a multi-PB array with a single dentry which had no
> inode, would you rather be spending multiple days restoring files and
> possibly losing recent changes, or spend a few hours to check the filesystem
> and fix it with minimal data loss?
Yep, restoring backups, even fully re-replicating data in a cluster, is so expensive as to be untenable. But even offline fsck is sufficiently non-scalable that at a certain volume size it stops being tenable too: 100TB takes a long time to fsck offline, and is it even possible to fsck a 1PB Btrfs at all?

Seems to me it's another case where, if it were possible to isolate which tree limbs are sick, you'd just cut them off and report the data loss, rather than consider the whole fs unusable. That's what we do with living things.

--
Chris Murphy