Re: [zfs-discuss] recovering data from a detached mirrored vdev
I was wondering if this ever made it into ZFS as a fix for bad labels?

On Wed, 7 May 2008, Jeff Bonwick wrote:
> Yes, I think that would be useful. Something like 'zpool revive' or
> 'zpool undead'. It would not be completely general-purpose -- in a pool
> with multiple mirror devices, it could only work if all replicas were
> detached in the same txg -- but for the simple case of a single
> top-level mirror vdev, or a clean 'zpool split', it's actually pretty
> straightforward.
>
> Jeff
>
> On Tue, May 06, 2008 at 11:16:25AM +0100, Darren J Moffat wrote:
> > Great tool, any chance we can have it integrated into zpool(1M) so
> > that it can find and fix up on import detached vdevs as new pools ?
> >
> > I'd think it would be reasonable to extend the meaning of
> > 'zpool import -D' to list detached vdevs as well as destroyed pools.
> >
> > -- Darren J Moffat

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] recovering data from a detached mirrored vdev
I'm wondering if this bug is fixed, and if not, what the bug number is:

> If your entire pool consisted of a single mirror of two disks, A and B,
> and you detached B at some point in the past, you *should* be able to
> recover the pool as it existed when you detached B. However, I just
> tried that experiment on a test pool and it didn't work.

PS: Thanks for helping that guy (just a fellow user) out :)

-- This message posted from opensolaris.org
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Jeff,

Sorry this is so late. Thanks for the labelfix binary. I would like to
have one compiled for SPARC. I tried compiling your source code, but it
threw up many errors. I'm not a programmer, and reading the source code
means absolutely nothing to me. One error was:

    cc labelfix.c
    "labelfix.c", line 1: #include directive missing file name

Many more of those, plus others. Which compiler did you use? I tried gcc
and SUNWspro with the same results.

This tool would really be handy at work, as almost all of our Solaris 10
machines have mirrored zpools for data. Hope you can help.

--ron
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Hello Darren,

Tuesday, May 6, 2008, 11:16:25 AM, you wrote:

DJM> Great tool, any chance we can have it integrated into zpool(1M) so
DJM> that it can find and fix up on import detached vdevs as new pools ?

I remember some posts a long time ago about 'zpool split', so one could
split a pool in two (assuming the pool is mirrored).

--
Best regards,
Robert Milkowski
mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and fixup on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list detached vdevs as well as destroyed pools.

+inf :-)

-- Jesus Cea Avion - http://www.jcea.es/
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Yes, I think that would be useful. Something like 'zpool revive' or
'zpool undead'.

It would not be completely general-purpose -- in a pool with multiple
mirror devices, it could only work if all replicas were detached in the
same txg -- but for the simple case of a single top-level mirror vdev,
or a clean 'zpool split', it's actually pretty straightforward.

Jeff

On Tue, May 06, 2008 at 11:16:25AM +0100, Darren J Moffat wrote:
> Great tool, any chance we can have it integrated into zpool(1M) so that
> it can find and fix up on import detached vdevs as new pools ?
>
> I'd think it would be reasonable to extend the meaning of
> 'zpool import -D' to list detached vdevs as well as destroyed pools.
>
> -- Darren J Moffat
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Jeff Bonwick wrote:
> Yes, I think that would be useful. Something like 'zpool revive' or
> 'zpool undead'.

Why a new subcommand, when 'zpool import' already has '-D' to revive
destroyed pools?

> It would not be completely general-purpose -- in a pool with multiple
> mirror devices, it could only work if all replicas were detached in
> the same txg -- but for the simple case of a single top-level mirror
> vdev, or a clean 'zpool split', it's actually pretty straightforward.

zpool split is the functionality needed: take one side of a mirror and
make a new, unmirrored pool from it. However, I think many people are
likely to attempt 'zpool detach' instead, because of experience with
volume managers such as SVM (ODS, LVM, whatever you want to call it this
week), where you would type 'metadetach'. Though of course that won't
work in the case where there is actually a stripe of mirrors, so 'zpool
split' is needed to deal with the non-trivial case anyway.

-- Darren J Moffat
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Hello Cyril,

Sunday, May 4, 2008, 11:34:28 AM, you wrote:

CP> On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
CP> > Oh, and here's the source code, for the curious:
CP> [snipped]
CP> >         label_write(fd, offsetof(vdev_label_t, vl_uberblock),
CP> >             1ULL << UBERBLOCK_SHIFT, ub);
CP> >
CP> >         label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
CP> >             VDEV_PHYS_SIZE, &vl.vl_vdev_phys);
CP>
CP> Jeff,
CP> is it enough to overwrite only one label ? Isn't there four of them ?

If the checksum is OK, IIRC the one with the most recent timestamp is
going to be used.

--
Best regards,
Robert Milkowski
mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
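[For reference, there are indeed four copies of the label, two at the front of the device and two at the back. A small sketch of where they live, with constants taken from the on-disk format (vdev_impl.h); this is simplified in that it ignores the alignment ZFS applies to the usable device size, so treat it as an approximation:]

```python
VDEV_LABEL_SIZE = 256 * 1024   # sizeof (vdev_label_t)
VDEV_LABELS = 4

def label_offsets(psize: int):
    """Byte offsets of the four vdev labels: L0/L1 at the front of the
    device, L2/L3 in the last 512 KiB (alignment of psize ignored)."""
    back = psize - VDEV_LABELS * VDEV_LABEL_SIZE
    return [l * VDEV_LABEL_SIZE + (0 if l < VDEV_LABELS // 2 else back)
            for l in range(VDEV_LABELS)]

# e.g. for a 1 GiB device:
# label_offsets(1 << 30) -> [0, 262144, 1073217536, 1073479680]
```

Overwriting only label 0 works because import will take any label whose checksum verifies; the redundant copies exist so that damage at either end of the device is survivable.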
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Great tool, any chance we can have it integrated into zpool(1M) so that
it can find and fix up on import detached vdevs as new pools ?

I'd think it would be reasonable to extend the meaning of
'zpool import -D' to list detached vdevs as well as destroyed pools.

-- Darren J Moffat
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Oh, you're right! Well, that will simplify things! All we have to do is
convince a few bits of code to ignore ub_txg == 0. I'll try a couple of
things and get back to you in a few hours...

Jeff

On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
> Hi,
>
> while diving deeply into ZFS in order to recover data, I found that
> every uberblock in label 0 has the same ub_rootbp and a zeroed ub_txg.
> Does it mean only ub_txg was touched while detaching? Hoping that is
> the case, I modified ub_txg in one uberblock to match the txg from the
> label, and now I am trying to calculate the new SHA256 checksum, but I
> failed. Can someone explain what I did wrong? And of course how to do
> it correctly?
>
> bbr
>
> The example is from a valid uberblock which belongs to another pool.
> Dumping the active uberblock in Label 0:
>
> # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x
> 1024+0 records in
> 1024+0 records out
> 000 b10c 00ba 0009
> 020 8bf2 8eef f6db c46f 4dcc
> 040 bba8 481a 0001
> 060 05e6 0003 0001
> 100 05e6 005b 0001
> 120 44e9 00b2 0001 0703 800b
> 140
> 160 8bf2
> 200 0018 a981 2f65 0008
> 220 e734 adf2 037a cedc d398 c063
> 240 da03 8a6e 26fc 001c
> 260
> *
> 0001720 7a11 b10c da7a 0210
> 0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
> 0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
> 0002000
>
> The checksum is at pos 01740 and 01760.
>
> I tried to calculate it assuming only the uberblock is relevant:
>
> # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
> 168+0 records in
> 168+0 records out
> 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
>
> Alas, not matching :-(
Re: [zfs-discuss] recovering data from a detached mirrored vdev
OK, here you go. I've successfully recovered a pool from a detached
device using the attached binary. You can verify its integrity against
the following MD5 hash:

    # md5sum labelfix
    ab4f33d99fdb48d9d20ee62b49f11e20  labelfix

It takes just one argument -- the disk to repair:

    # ./labelfix /dev/rdsk/c0d1s4

If all goes according to plan, your old pool should be importable. If
you do a 'zpool status -v', it will complain that the old mirrors are no
longer there. You can clean that up by detaching them:

    # zpool detach mypool <guid>

where <guid> is the long integer that 'zpool status -v' reports as the
name of the missing device.

Good luck, and please let us know how it goes!

Jeff

On Sat, May 03, 2008 at 10:48:34PM -0700, Jeff Bonwick wrote:
> Oh, you're right! Well, that will simplify things! All we have to do is
> convince a few bits of code to ignore ub_txg == 0. I'll try a couple of
> things and get back to you in a few hours...
>
> Jeff
>
> On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
> > Hi,
> >
> > while diving deeply into ZFS in order to recover data, I found that
> > every uberblock in label 0 has the same ub_rootbp and a zeroed
> > ub_txg. Does it mean only ub_txg was touched while detaching? Hoping
> > that is the case, I modified ub_txg in one uberblock to match the
> > txg from the label, and now I am trying to calculate the new SHA256
> > checksum, but I failed. Can someone explain what I did wrong? And of
> > course how to do it correctly?
> >
> > bbr
> >
> > The example is from a valid uberblock which belongs to another pool.
> > Dumping the active uberblock in Label 0:
> >
> > # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x
> > 1024+0 records in
> > 1024+0 records out
> > 000 b10c 00ba 0009
> > 020 8bf2 8eef f6db c46f 4dcc
> > 040 bba8 481a 0001
> > 060 05e6 0003 0001
> > 100 05e6 005b 0001
> > 120 44e9 00b2 0001 0703 800b
> > 140
> > 160 8bf2
> > 200 0018 a981 2f65 0008
> > 220 e734 adf2 037a cedc d398 c063
> > 240 da03 8a6e 26fc 001c
> > 260
> > *
> > 0001720 7a11 b10c da7a 0210
> > 0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
> > 0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
> > 0002000
> >
> > The checksum is at pos 01740 and 01760.
> >
> > I tried to calculate it assuming only the uberblock is relevant:
> >
> > # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
> > 168+0 records in
> > 168+0 records out
> > 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
> >
> > Alas, not matching :-(

[attachment: labelfix -- binary data]
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Oh, and here's the source code, for the curious:

#include <devid.h>
#include <dirent.h>
#include <errno.h>
#include <libintl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>

#include <sys/vdev_impl.h>

/*
 * Write a label block with a ZBT checksum.
 */
static void
label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
        zio_block_tail_t *zbt, zbt_orig;
        zio_cksum_t zc;

        zbt = (zio_block_tail_t *)((char *)buf + size) - 1;
        zbt_orig = *zbt;

        ZIO_SET_CHECKSUM(&zbt->zbt_cksum, offset, 0, 0, 0);

        zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size);

        VERIFY(pwrite64(fd, buf, size, offset) == size);

        *zbt = zbt_orig;
}

int
main(int argc, char **argv)
{
        int fd;
        vdev_label_t vl;
        nvlist_t *config;
        uberblock_t *ub = (uberblock_t *)vl.vl_uberblock;
        uint64_t txg;
        char *buf;
        size_t buflen;

        VERIFY(argc == 2);
        VERIFY((fd = open(argv[1], O_RDWR)) != -1);
        VERIFY(pread64(fd, &vl, sizeof (vdev_label_t), 0) ==
            sizeof (vdev_label_t));
        VERIFY(nvlist_unpack(vl.vl_vdev_phys.vp_nvlist,
            sizeof (vl.vl_vdev_phys.vp_nvlist), &config, 0) == 0);
        VERIFY(nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_TXG,
            &txg) == 0);
        VERIFY(txg == 0);
        VERIFY(ub->ub_txg == 0);
        VERIFY(ub->ub_rootbp.blk_birth != 0);

        txg = ub->ub_rootbp.blk_birth;
        ub->ub_txg = txg;

        VERIFY(nvlist_remove_all(config, ZPOOL_CONFIG_POOL_TXG) == 0);
        VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_POOL_TXG, txg) == 0);
        buf = vl.vl_vdev_phys.vp_nvlist;
        buflen = sizeof (vl.vl_vdev_phys.vp_nvlist);
        VERIFY(nvlist_pack(config, &buf, &buflen, NV_ENCODE_XDR, 0) == 0);

        label_write(fd, offsetof(vdev_label_t, vl_uberblock),
            1ULL << UBERBLOCK_SHIFT, ub);

        label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
            VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

        fsync(fd);

        return (0);
}

Jeff

On Sun, May 04, 2008 at 01:21:27AM -0700, Jeff Bonwick wrote:
> OK, here you go. I've successfully recovered a pool from a detached
> device using the attached binary. You can verify its integrity against
> the following MD5 hash:
>
>     # md5sum labelfix
>     ab4f33d99fdb48d9d20ee62b49f11e20  labelfix
>
> It takes just one argument -- the disk to repair:
>
>     # ./labelfix /dev/rdsk/c0d1s4
>
> If all goes according to plan, your old pool should be importable. If
> you do a 'zpool status -v', it will complain that the old mirrors are
> no longer there. You can clean that up by detaching them:
>
>     # zpool detach mypool <guid>
>
> where <guid> is the long integer that 'zpool status -v' reports as the
> name of the missing device.
>
> Good luck, and please let us know how it goes!
>
> Jeff
>
> On Sat, May 03, 2008 at 10:48:34PM -0700, Jeff Bonwick wrote:
> > Oh, you're right! Well, that will simplify things! All we have to do
> > is convince a few bits of code to ignore ub_txg == 0. I'll try a
> > couple of things and get back to you in a few hours...
> >
> > Jeff
> >
> > On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
> > > Hi,
> > >
> > > while diving deeply into ZFS in order to recover data, I found
> > > that every uberblock in label 0 has the same ub_rootbp and a
> > > zeroed ub_txg. Does it mean only ub_txg was touched while
> > > detaching? Hoping that is the case, I modified ub_txg in one
> > > uberblock to match the txg from the label, and now I am trying to
> > > calculate the new SHA256 checksum, but I failed. Can someone
> > > explain what I did wrong? And of course how to do it correctly?
> > >
> > > bbr
> > >
> > > The example is from a valid uberblock which belongs to another
> > > pool. Dumping the active uberblock in Label 0:
> > >
> > > # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x
> > > 1024+0 records in
> > > 1024+0 records out
> > > 000 b10c 00ba 0009
> > > 020 8bf2 8eef f6db c46f 4dcc
> > > 040 bba8 481a 0001
> > > 060 05e6 0003 0001
> > > 100 05e6 005b 0001
> > > 120 44e9 00b2 0001 0703 800b
> > > 140
> > > 160 8bf2
> > > 200 0018 a981 2f65 0008
> > > 220 e734 adf2 037a cedc d398 c063
> > > 240 da03 8a6e 26fc 001c
> > > 260
> > > *
> > > 0001720 7a11 b10c da7a 0210
> > > 0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
> > > 0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
> > > 0002000
> > >
> > > The checksum is at pos 01740 and 01760.
> > >
> > > I tried to calculate it assuming only the uberblock is relevant:
> > >
> > > # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
> > > 168+0 records in
> > > 168+0 records out
> > > 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3
Re: [zfs-discuss] recovering data from a detached mirrored vdev
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
> Oh, and here's the source code, for the curious:

[snipped]

>         label_write(fd, offsetof(vdev_label_t, vl_uberblock),
>             1ULL << UBERBLOCK_SHIFT, ub);
>
>         label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
>             VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

Jeff,

is it enough to overwrite only one label? Aren't there four of them?

>         fsync(fd);
>
>         return (0);
> }

--
Regards,
Cyril
Re: [zfs-discuss] recovering data from a detached mirrored vdev
> Oh, and here's the source code, for the curious:

The forensics project will be all over this, I hope, and wrap it up in a
nice command-line tool.

-mg
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Well, thanks to your program, I could recover the data on the detached
disk. Now I'm copying the data to other disks and will resilver it
inside the pool.

Warm words aren't enough to express how I feel. This community is great.
Thank you very much.

bbr
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Hi,

while diving deeply into ZFS in order to recover data, I found that
every uberblock in label 0 has the same ub_rootbp and a zeroed ub_txg.
Does it mean only ub_txg was touched while detaching? Hoping that is the
case, I modified ub_txg in one uberblock to match the txg from the
label, and now I am trying to calculate the new SHA256 checksum, but I
failed. Can someone explain what I did wrong? And of course how to do it
correctly?

bbr

The example is from a valid uberblock which belongs to another pool.
Dumping the active uberblock in Label 0:

# dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x
1024+0 records in
1024+0 records out
000 b10c 00ba 0009
020 8bf2 8eef f6db c46f 4dcc
040 bba8 481a 0001
060 05e6 0003 0001
100 05e6 005b 0001
120 44e9 00b2 0001 0703 800b
140
160 8bf2
200 0018 a981 2f65 0008
220 e734 adf2 037a cedc d398 c063
240 da03 8a6e 26fc 001c
260
*
0001720 7a11 b10c da7a 0210
0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
0002000

The checksum is at pos 01740 and 01760.

I tried to calculate it assuming only the uberblock is relevant:

# dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
168+0 records in
168+0 records out
710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Alas, not matching :-(
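[One reason hashing the first 168 bytes cannot match: ZIO_CHECKSUM_LABEL is an embedded checksum. The uberblock sits in a 1 KiB buffer whose tail is a zio_block_tail_t; as the labelfix source in this thread shows (ZIO_SET_CHECKSUM seeding), the tail's checksum words are first seeded with (byte offset, 0, 0, 0), and SHA-256 is then taken over the whole 1 KiB buffer, tail included. A sketch of that recomputation; the little-endian seeding and the exact offset value used as the seed are assumptions here, not confirmed by the thread:]

```python
import hashlib
import struct

UBERBLOCK_SIZE = 1024               # 1 << UBERBLOCK_SHIFT
CKSUM_OFFSET = UBERBLOCK_SIZE - 32  # zio_cksum_t at the end of the tail

def label_block_sha256(block: bytes, dev_offset: int) -> bytes:
    """Recompute the embedded label checksum for a 1 KiB uberblock
    buffer: seed the tail's four checksum words with
    (dev_offset, 0, 0, 0), then hash the entire buffer, trailing
    zio_block_tail_t included."""
    assert len(block) == UBERBLOCK_SIZE
    buf = bytearray(block)
    # native byte order of the machine that wrote the label (x86 here)
    struct.pack_into('<4Q', buf, CKSUM_OFFSET, dev_offset, 0, 0, 0)
    return hashlib.sha256(bytes(buf)).digest()
```

So a digest over only the leading 168 bytes, with the on-disk checksum still in place, will never reproduce the stored value: both the hashed range and the seeded tail differ.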
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Benjamin Brumaire wrote:
> I tried to calculate it assuming only the uberblock is relevant:
>
> # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
> 168+0 records in
> 168+0 records out
> 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Is this on SPARC or x86?

ZFS stores the SHA256 checksum in 4 words in big-endian format; see
http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/fs/zfs/sha256.c

-- Darren J Moffat
Re: [zfs-discuss] recovering data from a detached mirrored vdev
It is on x86. Does that mean I have to split the output from digest into
4 words (each 8 bytes) and reverse the bytes of each before comparing
with the stored value?

bbr
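[Essentially yes. Per the sha256.c Darren linked, each zc_word[i] is (H[2i] << 32) | H[2i+1], which is just the 32-byte digest read as four big-endian 64-bit integers; an x86 machine then writes those words to disk little-endian, so a raw dump shows each word byte-swapped. A small sketch of the comparison:]

```python
import hashlib
import struct

def sha256_zfs_words(data: bytes):
    """SHA-256 digest as the four 64-bit words of a zio_cksum_t:
    zc_word[i] = (H[2i] << 32) | H[2i+1], i.e. the digest read as
    four big-endian uint64s."""
    return struct.unpack('>4Q', hashlib.sha256(data).digest())

def words_from_le_dump(raw32: bytes):
    """Interpret 32 raw bytes of an on-disk zio_cksum_t written by a
    little-endian (x86) machine as four uint64 words."""
    return struct.unpack('<4Q', raw32)
```

To compare, feed the raw bytes at positions 01740-01760 of the dump to `words_from_le_dump` and check them word by word against `sha256_zfs_words` of the candidate input, rather than comparing hex strings directly.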
Re: [zfs-discuss] recovering data from a detached mirrored vdev
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B. However, I just
tried that experiment on a test pool and it didn't work. I will
investigate further and get back to you. I suspect it's perfectly
doable, just currently disallowed due to some sort of error check that's
a little more conservative than necessary.

Keep that disk!

Jeff

On Mon, Apr 28, 2008 at 10:33:32PM -0700, Benjamin Brumaire wrote:
> Hi,
>
> my system (Solaris b77) was physically destroyed and I lost the data
> saved in a zpool mirror. The only thing left is a detached vdev from
> the pool. I'm aware that the uberblock is gone and that I can't import
> the pool. But I still hope there is a way, or a tool (like TCT,
> http://www.porcupine.org/forensics/), to recover at least partially
> some data.
>
> Thanks in advance for any hints.
>
> bbr
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Jeff, thank you very much for taking the time to look at this. My entire
pool consisted of a single mirror of two slices on different disks, A
and B. I attached a third slice on disk C, waited for the resilver, and
then detached it. Now disks A and B have burned, and I have only disk C
at hand.

bbr
Re: [zfs-discuss] recovering data from a detached mirrored vdev
Urgh. This is going to be harder than I thought -- not impossible, just
hard.

When we detach a disk from a mirror, we write a new label to indicate
that the disk is no longer in use. As a side effect, this zeroes out all
the old uberblocks. That's the bad news -- you have no uberblocks.

The good news is that the uberblock only contains one field that's hard
to reconstruct: ub_rootbp, which points to the root of the block tree.
The root block *itself* is still there -- we just have to find it.

The root block has a known format: it's a compressed objset_phys_t,
almost certainly one sector in size (could be two, but very unlikely
because the root objset_phys_t is highly compressible). It should be
possible to write a program that scans the disk, reading each sector and
attempting to decompress it. If it decompresses into exactly 1K (the
size of an uncompressed objset_phys_t), then we can look at all the
fields to see if they look plausible. Among all candidates we find, the
one whose embedded meta-dnode has the highest birth time in its
dn_blkptr is the one we want.

I need to get some sleep now, but I'll code this up in a couple of days
and we can take it from there. If this is time-sensitive, let me know
and I'll see if I can find someone else to drive it. [ I've got a bunch
of commitments tomorrow, plus I'm supposed to be on vacation...
typical... ;-) ]

Jeff

On Tue, Apr 29, 2008 at 12:15:21AM -0700, Benjamin Brumaire wrote:
> Jeff, thank you very much for taking the time to look at this. My
> entire pool consisted of a single mirror of two slices on different
> disks, A and B. I attached a third slice on disk C, waited for the
> resilver, and then detached it. Now disks A and B have burned, and I
> have only disk C at hand.
>
> bbr
Re: [zfs-discuss] recovering data from a detached mirrored vdev
If I understand you correctly, the steps to follow are:

 - read each sector (is dd bs=512 count=1 skip=n enough?)
 - decompress it (are there any tools implementing the LZJB algorithm?)
 - check that the decompressed size is 1024
 - check that the structure might be an objset_phys_t
 - take the candidate with the highest birth time as the root block
 - reconstruct the uberblocks

Unfortunately I can't help with a C program, but I will be happy to
support you in any other way.

Don't consider it time-sensitive; the data is very important, but I can
continue my business without it.

Again, thank you very much for your help. I really appreciate it.

bbr
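[The scan Jeff describes can be sketched roughly as below. The LZJB decompressor is my transcription of the published OpenSolaris algorithm and `scan_for_root_block` is a hypothetical driver; the plausibility checks on each candidate objset_phys_t are left as comments, so treat this as an outline, not the finished tool:]

```python
NBBY = 8
MATCH_BITS = 6
MATCH_MIN = 3
OFFSET_MASK = (1 << (2 * NBBY - MATCH_BITS)) - 1  # 10-bit back-offset

def lzjb_decompress(src: bytes, d_len: int) -> bytes:
    """LZJB decompression: a control byte precedes each group of eight
    items; a set bit means a 2-byte copy (6-bit length, 10-bit offset),
    a clear bit means one literal byte."""
    dst = bytearray()
    i = 0
    copymap, copymask = 0, 1 << (NBBY - 1)
    while len(dst) < d_len:
        copymask <<= 1
        if copymask == (1 << NBBY):
            copymask = 1
            copymap = src[i]
            i += 1
        if copymap & copymask:
            mlen = (src[i] >> (NBBY - MATCH_BITS)) + MATCH_MIN
            offset = ((src[i] << NBBY) | src[i + 1]) & OFFSET_MASK
            i += 2
            pos = len(dst) - offset
            if pos < 0:
                raise ValueError("bad back-reference")
            for _ in range(mlen):
                if len(dst) >= d_len:
                    break
                dst.append(dst[pos])
                pos += 1
        else:
            dst.append(src[i])
            i += 1
    return bytes(dst)

SECTOR = 512
OBJSET_PHYS_SIZE = 1024  # size of an uncompressed objset_phys_t

def scan_for_root_block(path: str):
    """Read the device sector by sector; every sector that decompresses
    to 1 KiB is a candidate root objset_phys_t. Real plausibility checks
    (dnode type, levels, dn_blkptr birth time) would go where noted; the
    candidate with the highest birth time is the one to rebuild
    ub_rootbp from."""
    candidates = []
    with open(path, 'rb') as dev:
        offset = 0
        while True:
            sector = dev.read(SECTOR)
            if len(sector) < SECTOR:
                break
            try:
                data = lzjb_decompress(sector, OBJSET_PHYS_SIZE)
                # TODO: sanity-check fields of the decoded objset_phys_t
                candidates.append((offset, data))
            except (ValueError, IndexError):
                pass  # not a valid LZJB stream; skip this sector
            offset += SECTOR
    return candidates
```

Note that many random sectors will "decompress" without error, which is why the field-level plausibility checks Jeff mentions are what actually narrows the candidate list.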
[zfs-discuss] recovering data from a detached mirrored vdev
Hi,

my system (Solaris b77) was physically destroyed and I lost the data
saved in a zpool mirror. The only thing left is a detached vdev from the
pool. I'm aware that the uberblock is gone and that I can't import the
pool. But I still hope there is a way, or a tool (like TCT,
http://www.porcupine.org/forensics/), to recover at least partially some
data.

Thanks in advance for any hints.

bbr