Since you did not export the pool before moving the disks, the import
may be looking for the old device paths.  Try this:
   zpool export vault
   zpool import vault

which will clear the old entries out of the zpool.cache file and search
for the devices at their new paths.
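
If a plain export/import still fails, you can also point the import at
a device directory explicitly, or import by the pool's numeric id (the
id shown in your zpool import output below).  Just a sketch, adjust the
directory for your system:
   zpool import -d /dev/dsk vault
   zpool import -d /dev/dsk 196786381623412270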

More below...

Brian Leonard wrote:
I had a machine die the other day and take one of its ZFS pools with it. I booted the new machine,
with the same disks but a different SATA controller, and the rpool was mounted but another pool,
"vault", was not.  If I try to import it, I get "invalid vdev configuration".
fmdump shows zfs.vdev.bad_label, and checking the labels with zdb I find that labels 2 and 3 are missing.
How can I get my pool back?  Thanks.

snv_98

zpool import
  pool: vault
    id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        vault       UNAVAIL  insufficient replicas
          mirror    UNAVAIL  corrupted data
            c6d1p0  ONLINE
            c7d1p0  ONLINE


fmdump -eV
Jun 04 2009 07:43:47.165169453 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
        class = ereport.fs.zfs.vdev.bad_label
        ena = 0x8ebd8837ae00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x2bb202be54c462e
                vdev = 0xaa3f2fd35788620b
        (end detector)

        pool = vault
        pool_guid = 0x2bb202be54c462e
        pool_context = 2
        pool_failmode = wait
        vdev_guid = 0xaa3f2fd35788620b
        vdev_type = mirror
        parent_guid = 0x2bb202be54c462e
        parent_type = root
        prev_state = 0x7
        __ttl = 0x1
        __tod = 0x4a27c183 0x9d8492d

Jun 04 2009 07:43:47.165169794 ereport.fs.zfs.zpool
nvlist version: 0
        class = ereport.fs.zfs.zpool
        ena = 0x8ebd8837ae00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x2bb202be54c462e
        (end detector)

        pool = vault
        pool_guid = 0x2bb202be54c462e
        pool_context = 2
        pool_failmode = wait
        __ttl = 0x1
        __tod = 0x4a27c183 0x9d84a82


zdb -l /dev/rdsk/c6d1p0

It is unusual to have a vdev on a partition (c6d1p0).  It is
more common to have a vdev on a slice inside the partition
(eg. c6d1s0).  The partition and the slice map onto overlapping,
but not identical, regions of the device.  For example,
on one of my machines:
   c0t0d0p0 is physical blocks 0-976735935
   c0t0d0s0 is physical blocks 16065-308512259

If c6d1p0 and c6d1s0 start at the same block but have different
sizes, then ZFS may not be able to find the two labels stored at
the end of the vdev (labels 2 and 3).

Above, I used slice 0 as an example; your system may use a
different slice.  But you can run zdb -l on all of them to find
the proper, complete slice, for example with the loop sketched below.
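
Just a sketch, assuming the disks show up as c6d1 and c7d1 on the
new controller as in your zpool import output; substitute your own
device names:

   # compare the slice map against the fdisk partition geometry
   prtvtoc /dev/rdsk/c6d1s2

   # look for the device on which all four labels unpack cleanly
   for dev in /dev/rdsk/c6d1s? /dev/rdsk/c6d1p0; do
       echo "== $dev =="
       zdb -l $dev 2>&1 | egrep 'LABEL|failed to unpack'
   done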
-- richard

--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='vault'
    state=0
    txg=42243
    pool_guid=196786381623412270
    hostid=997759551
    hostname='philo'
    top_guid=12267576494733681163
    guid=16901406274466991796
    vdev_tree
        type='mirror'
        id=0
        guid=12267576494733681163
        whole_disk=0
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1000199946240
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=16901406274466991796
                path='/dev/dsk/c1t1d0p0'
                devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
                phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
                whole_disk=0
                DTL=77
        children[1]
                type='disk'
                id=1
                guid=6231056817092537765
                path='/dev/dsk/c1t0d0p0'
                devid='id1,s...@f3b789a3f48e44b86000263f90000/q'
                phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
                whole_disk=0
                DTL=76
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='vault'
    state=0
    txg=42243
    pool_guid=196786381623412270
    hostid=997759551
    hostname='philo'
    top_guid=12267576494733681163
    guid=16901406274466991796
    vdev_tree
        type='mirror'
        id=0
        guid=12267576494733681163
        whole_disk=0
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1000199946240
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=16901406274466991796
                path='/dev/dsk/c1t1d0p0'
                devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
                phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
                whole_disk=0
                DTL=77
        children[1]
                type='disk'
                id=1
                guid=6231056817092537765
                path='/dev/dsk/c1t0d0p0'
                devid='id1,s...@f3b789a3f48e44b86000263f90000/q'
                phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
                whole_disk=0
                DTL=76
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
