On Thu, Oct 13, 2011 at 02:57:01PM +0200, Francesco Riosa wrote:
> 2011/10/12 Josef Bacik <jo...@redhat.com>:
> > On Tue, Oct 11, 2011 at 11:21:45PM +0200, Francesco Riosa wrote:
> >> 2011/10/7 Josef Bacik <jo...@redhat.com>:
> >> > On 10/06/2011 04:56 PM, Francesco Riosa wrote:
> >> >> 2011/10/6 Andi Kleen <a...@firstfloor.org>:
> >> >>> Jeff Putney <jeffrey.put...@gmail.com> writes:
> >> >>>>
> >> >>>> http://en.wikipedia.org/wiki/Release_early,_release_often
> >> >>>
> >> >>> Well, the other principle in free software you're forgetting is:
> >> >>>
> >> >>> "It will be released when it's ready"
> >> >>>
> >> >>> If you don't like Chris' way of doing releases, you're free to write
> >> >>> something of your own or pay someone to do so. Otherwise you just
> >> >>> have to deal with his time frames, as shifty as they may be.
> >> >>
> >> >> I did a different thing: I offered Chris money to help rescue a hosed
> >> >> btrfs, or to point me to someone who could. We ended up doing some
> >> >> tests (for free), but nothing else materialized.
> >> >> While the time that has passed has diminished the value of the data to
> >> >> be rescued, I'm more on the "show us some code we can start from"
> >> >> wagon than the "it will be released when ready" one.
> >> >>
> >> >
> >> > If you still need that data, clone this repo
> >> >
> >> > git://github.com/josefbacik/btrfs-progs.git
> >> >
> >> > run make, and then run
> >> >
> >> > ./restore /dev/whatever /some/dir
> >> >
> >> > and it will try and suck all of your data off the disk and dump it in
> >> > that directory. If you have snapshots it will skip them by default, so
> >> > if you have snapshots with useful data in them you'll want to use the
> >> > -s option. If you run into random errors that you think are
> >> > recoverable, or if you don't care about the file that's being
> >> > recovered, you can run with -i, which will ignore errors and keep
> >> > trying to recover your files. Thanks,
> >> >
> >> > Josef
> >> >
> >>
> >> I've tried, w/o luck.
> >>
> >> The explanation comes from the 2011-06-21 thread:
> >> http://thread.gmane.org/gmane.comp.file-systems.btrfs/11435
> >> The following refers to a copy of that system:
> >>
> >> Label: space02  uuid: f752def1-1abc-48c7-8ebb-47ba37b8ffa6
> >>         Total devices 7 FS bytes used 173.12GB
> >>         devid    6 size 488.94GB used 60.25GB path /dev/sdd7
> >>         devid    2 size 487.65GB used 58.76GB path /dev/sdd8
> >>         devid    7 size 487.65GB used 0.00 path /dev/sdf7
> >>         devid    3 size 487.65GB used 60.26GB path /dev/sdf8
> >>         devid    7 size 487.65GB used 1.50GB path /dev/sdg7
> >>         devid    5 size 488.94GB used 58.75GB path /dev/sdb7
> >>         devid    4 size 487.65GB used 60.26GB path /dev/sdb8
> >>
> >> # ./restore /dev/sdd7 /tmp/restore
> >> failed to read /dev/sr0
> >> failed to read /dev/sr0
> >> restore: volumes.c:1367: btrfs_read_sys_array: Assertion `!(ret)' failed.
> >> Aborted
> >>
> >
> > So this is kind of a problem, since you have multiple disks. We could
> > maybe get away with ignoring a failure, but the problem is that if you
> > have data on a disk where we couldn't read the chunk, then chances are
> > we won't be able to map that file and read the data off. That being
> > said, there's no harm in trying ;). Can you make btrfs_read_sys_array
> > complain about failing, but not actually BUG? See if that helps you.
> > Thanks,
> >
> > Josef
> >
>
> I've tried replacing "BUG_ON(ret);" with printk("FAILED!!! %d\n", ret);
> the diff of the result is reported at the bottom.
>
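(The diff itself was trimmed from this excerpt. A change of that shape would
look roughly like the fragment below; this is an assumed sketch against the
btrfs_read_sys_array() loop in btrfs-progs' volumes.c, not Francesco's actual
patch, and the read_one_chunk() call shown for context may not match that
tree exactly.)

	/* In btrfs_read_sys_array(), volumes.c: report a chunk item that
	 * cannot be read instead of asserting, so restore can keep going.
	 * Sketch only; the real context around the BUG_ON() may differ. */
	ret = read_one_chunk(root, &key, sb, chunk);
	/* was: BUG_ON(ret); */
	if (ret)
		printk("FAILED to read a chunk from the sys array: %d\n", ret);
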
Ok, so this is a little trickier: your chunk tree is screwed up. We need that
to be intact so we can translate logical block addresses to physical
addresses; without it we're stuck, because we have no way of knowing where
anything is.

I'm working on a tool to try and find root items, but currently it also
requires a working chunk tree. Once I'm finished making it work on a file
system with an intact chunk tree, I'll try and figure out something for
rebuilding a broken one. Thanks,

Josef
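To make concrete what the chunk tree provides: every other tree in btrfs is
addressed by logical bytenr, and chunk items are what map a logical range onto
a device id and a physical offset. The toy C sketch below illustrates that
lookup with made-up structures and numbers; it is not the real btrfs code,
just the shape of the translation that becomes impossible once the map is
gone.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for a chunk item: one contiguous logical range mapped onto
 * one device at one physical offset (real chunks can stripe or mirror
 * across several devices). */
struct chunk_map {
	uint64_t logical;   /* start of the logical range */
	uint64_t length;    /* length of the range in bytes */
	uint64_t devid;     /* device the range lives on */
	uint64_t physical;  /* byte offset of the range on that device */
};

/* A pretend "chunk tree" with two entries; the numbers are invented. */
static const struct chunk_map chunks[] = {
	{ 12582912, 8388608, 6, 1048576 },
	{ 20971520, 8388608, 2, 9437184 },
};

/* Translate a logical address to (devid, physical), the way the chunk tree
 * lets btrfs do.  Without the map this function has nothing to walk. */
static int map_logical(uint64_t logical, uint64_t *devid, uint64_t *physical)
{
	for (size_t i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++) {
		if (logical >= chunks[i].logical &&
		    logical - chunks[i].logical < chunks[i].length) {
			*devid = chunks[i].devid;
			*physical = chunks[i].physical +
				    (logical - chunks[i].logical);
			return 0;
		}
	}
	return -1; /* unmapped: no way to know where the bytes live */
}

int main(void)
{
	uint64_t devid, physical;

	if (map_logical(13631488, &devid, &physical) == 0)
		printf("logical 13631488 -> devid %llu, offset %llu\n",
		       (unsigned long long)devid,
		       (unsigned long long)physical);
	else
		printf("logical 13631488 is unmapped\n");
	return 0;
}

Built with a C99 compiler, this prints the device and offset for an address
inside the first range and reports anything outside the two ranges as
unmapped, which is exactly the situation restore is in once the chunk tree
cannot be read.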