On Fri, Feb 1, 2019 at 9:28 PM Alan Hardman <al...@fastmail.com> wrote:
>
> I have a Btrfs filesystem using 6 partitionless disks in RAID1 that's
> failing to mount. I've tried the common recommended safe check options,
> but I haven't gotten the disk to mount at all, even with -o ro,recovery.
Try '-o ro,degraded,nologreplay'. If that works, update your backups
before you do anything else. Then you can report the following, which
are all read-only commands and should still work whether or not the
file system is mounted:

btrfs fi show
btrfs insp dump-s -f <pick any btrfs device>
btrfs rescue super-recover -v <pick any btrfs device>
smartctl -x <each drive>
smartctl -l scterc <each drive>
cat /sys/block/sdX/device/timeout   # also for each drive X

Also search all the system logs you have with 'grep -A 15 exception'
so we can see if there are any nasty libata messages.

> [ 534.519437] BTRFS warning (device sdd): 'recovery' is deprecated, use
> 'usebackuproot' instead
> [ 534.519441] BTRFS info (device sdd): trying to use backup root at mount
> time
> [ 534.519443] BTRFS info (device sdd): disk space caching is enabled
> [ 534.519446] BTRFS info (device sdd): has skinny extents
> [ 536.306194] BTRFS info (device sdd): bdev /dev/sdc errs: wr 23038942, rd
> 22208378, flush 1, corrupt 29486730, gen 2933

That's a lot of errors. These statistics are kept for the life of the
file system until reset with 'btrfs dev stats -z', so it's possible
they're all from a previous problem you've since recovered from. But
now that you have a new problem, it's not clear to what degree these
read, write, corruption, and generation errors are related to it.

> [ 556.126928] BTRFS critical (device sdd): corrupt leaf: root=2
> block=25540634836992 slot=45, unexpected item end, have 13882 expect 13898
> [ 556.134767] BTRFS critical (device sdd): corrupt leaf: root=2
> block=25540634836992 slot=45, unexpected item end, have 13882 expect 13898
> [ 556.150278] BTRFS critical (device sdd): corrupt leaf: root=2
> block=25540634836992 slot=45, unexpected item end, have 13882 expect 13898

The fact that this is a raid1 volume, and there are no messages about
fixups, tells me this is bad news: either both copies of that leaf are
bad, or the good copy can't be found (one device missing, or more than
one). In any case, the less you modify the file system with repair
attempts or by trying to mount it read-write, the better the chance of
recovery. Right now there isn't enough information to tell you what to
do other than do as little as possible.

--
Chris Murphy
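
P.S. If it's easier to collect the per-drive info in one pass, a
minimal shell loop like this should do it (assuming the six member
drives are sda through sdf; adjust the list to match what 'btrfs fi
show' reports, and run it as root):

for d in sda sdb sdc sdd sde sdf; do
    echo "=== /dev/$d ==="
    smartctl -x /dev/$d                # full SMART attributes and error history
    smartctl -l scterc /dev/$d         # SCT error recovery control (TLER) setting
    cat /sys/block/$d/device/timeout   # kernel command timer for this drive, in seconds
done

Every command in the loop is read-only, so it's safe to run even with
the file system in its current state.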