Hello all,

  When I started with my old test box (the 6-disk raidz2 pool), I had
first created the pool on partitions (e.g. c7t1d0p0, or physical paths
like /pci@0,0/pci1043,81ec@1f,2/disk@1,0:q), but I soon destroyed it
and recreated it (with the same name "pool") on slices (e.g. c7t0d0s0
or /pci@0,0/pci1043,81ec@1f,2/disk@1,0:a) with a trailing 8 MB slice
(the whole-disk ZFS layout). The disks now carry EFI labels, and the
zpool command finds the correct pool by name.
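
  For clarity, the sequence was roughly the following (only c7t0d0 and
c7t1d0 appear above, so the other member names here are placeholders,
and the exact commands are reconstructed from memory):

    # first attempt: pool built on fdisk partitions (p0 = whole disk)
    zpool create pool raidz2 c7t0d0p0 c7t1d0p0 c7t2d0p0 \
                             c7t3d0p0 c7t4d0p0 c7t5d0p0
    zpool destroy pool

    # second attempt: same name, whole disks handed to zpool, which put
    # an EFI label on each (slice 0 for data plus the small reserved
    # slice at the end)
    zpool create pool raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0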

  However, whenever I use zdb, it finds leftovers of my original test
as labels 2 and 3 (labels 0 and 1 "failed to unpack"), so zdb refuses
to open my "pool" by name and I have to provide the GUID instead. Is
there an easy way to find out at which locations zdb sees these stale
labels, so I could zero them out and let zdb use the correct pool by
name?
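
  (For context: zdb -l <device> dumps the four vdev labels of a single
device; ZFS keeps two 256 KB labels at the front of a vdev and two in
its last 512 KB. So, if I understand things right, something like the
following should show where the stale labels live -- c7t0d0 is just
the first disk as an example:)

    # labels zdb sees on the whole physical disk (p0)
    zdb -l /dev/rdsk/c7t0d0p0
    # labels on the slice the live pool actually uses
    zdb -l /dev/rdsk/c7t0d0s0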

  Should I assume that "p0" addresses the whole disk and wipe the last
512 KB of it (that region now falls within the reserved 8 MB slice)?
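
  Something like this is what I have in mind -- untested and obviously
destructive, and the sector count is a placeholder I would take from
prtvtoc or format for each disk, so treat it as a sketch:

    # stale labels 2 and 3 should sit in the last 512 KB (1024 sectors)
    # of the raw disk; roughly the last 34 sectors hold the backup EFI
    # label, so this stops just short of them (990 = 1024 - 34)
    disk=/dev/rdsk/c7t0d0p0        # p0 = whole physical disk
    sectors=1953525168             # PLACEHOLDER: total sectors of the disk
    dd if=/dev/zero of=$disk bs=512 count=990 seek=`expr $sectors - 1024`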

  BTW, what role does this 8 MB piece play? My guess is that it helps
when replacing disks with new ones of similar (but not identical)
sizes: the slice on the new disk would shrink or grow to absorb the
difference in HDD capacity. But I haven't done any replacements so far
that would prove or disprove this  ;)
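
  (If it helps to picture it, this is how I look at the layout of one
of the disks -- slice 0 is the big data slice given to ZFS and the
small reserved slice sits at the end:)

    # print the EFI partition table of the first disk
    prtvtoc /dev/rdsk/c7t0d0s0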

Thanks,
//Jim

