On Mon, Sep 19, 2016 at 01:38:36PM -0400, Austin S. Hemmelgarn wrote:
> >>I'm not sure if btrfsck is really all that helpful to users as much
> >>as it is for developers to better learn about the failure vectors of
> >>the file system.
> >
> >ReiserFS had no working fsck for all of the 8 years I used it (and still
> >didn't last year when I tried to use it on an old disk).  "Not working"
> >here means "much less data is readable from the filesystem after running
> >fsck than before."  It's not that much of an inconvenience if you have
> >backups.
> For a small array, this may be the case.  Once you start looking into double
> digit TB scale arrays though, restoring backups becomes a very expensive
> operation.  If you had a multi-PB array with a single dentry which had no
> inode, would you rather be spending multiple days restoring files and
> possibly losing recent changes, or spend a few hours to check the filesystem
> and fix it with minimal data loss?

I'd really prefer to be able to delete the dead dentry with 'rm' as root,
or failing that, with a ZDB-like tool or ioctl, if it's the only known
instance of such a bad metadata object and I already know where it's
located.

Usually the ultimate failure mode of a btrfs filesystem is a read-only
filesystem from which you can read most or all of your data, but you
can't ever make it writable again because of fsck limitations.

The one thing I miss on every filesystem other than ext2/ext3 is an
automated fsck that prioritizes availability, making the filesystem
safely writable again even when it can't recover lost data.  On the
other hand, fixing an ext[23] filesystem is utterly trivial compared to
any btree-based filesystem.
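For illustration, a minimal sketch of that availability-first behavior
on a scratch ext2 image (the image path is just an example; this is
what boot-time fsck effectively does with preen mode):

```shell
# Build a throwaway 4 MiB ext2 image, then run e2fsck in preen mode (-p),
# which silently fixes only problems it can repair safely and without
# asking; anything ambiguous makes it bail out and request a manual run.
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 status=none
mke2fs -q -F -t ext2 /tmp/scratch.img
e2fsck -p /tmp/scratch.img
```

On a clean or trivially-repairable filesystem this exits 0 or 1 and the
volume can be mounted read-write immediately, which is exactly the
"available again even if some data is lost" trade-off described above.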
