On Oct 7, 2013, at 8:56 AM, Martin <m_bt...@ml1.co.uk> wrote:
>
> Or try "mount -o recovery,noatime" again?

Because of this:

    free space inode generation (0) did not match free space cache generation (1607)

Try the clear_cache mount option. You could then use iotop to make sure
the btrfs-freespace process becomes inactive before unmounting the file
system; I don't think you need to wait in order to use the file system,
nor do you need to unmount and then remount without the option. But if
it works, it should only be needed once, not as a persistent mount option.
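As a rough sketch of that sequence, assuming the file system is /dev/sdb1
and the mount point is /mnt/backup (both placeholders; substitute your
actual device and mount point):

    mount -o clear_cache,noatime /dev/sdb1 /mnt/backup
    iotop -o    # --only: show just tasks doing I/O; wait for btrfs-freespace to go quiet

    # clear_cache is a one-time cache rebuild, not a persistent option,
    # so later mounts can go back to plain noatime:
    mount -o noatime /dev/sdb1 /mnt/backup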
> Or is it dead?
>
> (The 1.5TB of backup data is replicated elsewhere but it would be good
> to rescue this version rather than completely redo from scratch.
> Especially so for the sake of just a few MBytes of one corrupt directory
> tree.)

Right. If you snapshot the subvolume containing the corrupt portion of
the file system, the snapshot probably inherits that corruption. But if
you then write to only one of them, any writes that make the problem
worse should be isolated to the one you write to. Honestly, I might
avoid writing to it at all. To save time, get increasingly aggressive
about pulling data out of this directory, and once you succeed, blow
away the file system and start from scratch.

You could also then try kernel 3.12 rc4, as there are some btrfs bug
fixes in there as well, though I don't know whether any of them will
help your case. If you try it, mount normally, then try to get your
data. If that doesn't work, try the recovery option. Maybe you'll get
different results.

Chris Murphy