Terry Heatlie wrote:
> Folks,
>
> I have a zpool with a raidz2 configuration which I've been switching
> between two machines - an old one with a hardware problem and a new
> one, which doesn't have hardware issues but has a different
> configuration.  I've been trying to import the pool on the new
> machine so I can back up the data, because the old (broken) machine
> resets (I don't think it's panicking, because there are no logged
> messages) every time I try to tar off the data from the ZFS pool.
>
> Unfortunately, the first time I tried to import the pool on the new
> machine, I didn't have the right five drives in it, so it didn't work.
> After I figured out that I was confused about which was the boot
> drive, I did get the five drives into the new machine and asked it to
> import the pool.  It said that the pool could not be imported due to
> damaged devices or data, which is slightly odd, since it had been
> mounting the pool fine on the broken machine before.
>
> I then moved the drives back into the old machine, figuring I'd at
> least copy some small stuff onto a USB stick (it only dies reading
> large files, apparently), but now the old machine can't mount the pool
> either, and asking it to import gives the same message.  It shows all
> five drives online, but says the pool is UNAVAIL due to insufficient
> replicas, and the raidz2 is UNAVAIL due to corrupted data.
>
> Must I resign myself to having lost this pool due to the hardware 
> problems I've had, and restore such backups as I have on the new 
> machine, or is there something that can be done to get the pool back 
> online at least in degraded mode?

Note: we're also working on a troubleshooting wiki... need more days in the
hour...

You should try to read the labels from each device.
    zdb -l /dev/rdsk/...

You should see four labels (0 through 3) on each healthy device.
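
For example, something along these lines will check all five drives in
one shot (a sketch, untested - the c1t*d0s0 names are placeholders for
your actual devices, and it assumes the LABEL banners that zdb -l
prints):

    # a healthy device should report LABEL 0 through LABEL 3,
    # so each disk should count 4 here
    for d in c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0; do
        printf '%s: ' $d
        zdb -l /dev/rdsk/$d | grep -c '^LABEL'
    done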

Here is my hypothesis:

If you see a device which has only labels 0 and 1, then it may
be the case that the disk has overlapping partitions.  Why does
this matter?  Because under normal circumstances, the actual
devices used for creating or importing the pool are stored in the
/etc/zfs/zpool.cache file.  When the system boots, it looks there
first and will import the pools listed therein.

When you export the pool, the zpool.cache entries for the pool
are removed.
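
You can see what the cache thinks it knows - as I recall, zdb run
with no arguments dumps the cached configuration (treat that as an
assumption for your build), and after a clean export the pool should
drop out of it:

    zdb                      # dumps /etc/zfs/zpool.cache contents
    zpool export tank        # "tank" is a placeholder pool name
    zdb | grep -i tank       # should come back empty now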

If the pool is not in zpool.cache, then zpool import scans all of
the devices found in /dev/dsk for valid pools.  If you have overlapping
partitions or slices, then a partially exposed vdev may be found.
But since that vdev won't be complete - the slice may not reach the
end of the device, which is where labels 2 & 3 are located - it will
be marked as bad.  The solution would be to reconcile the
partitions/slices using format.
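
A sketch of how you might check for the overlap (prtvtoc is easier
to eyeball than format; the device name is a placeholder):

    # look for slices whose first sector + sector count run into
    # another slice, or a slice that stops short of the end of the
    # disk (which would hide labels 2 & 3)
    prtvtoc /dev/rdsk/c1t0d0s2

    # once the slices are fixed up in format(1M), rescan and retry
    zpool import             # scans /dev/dsk for importable pools
    zpool import tank        # "tank" again a placeholder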
 -- richard
