Hi Alex,

More scary than interesting to me.

What kind of hardware and which Solaris release?

Do you know what steps lead up to this problem? Any recent hardware
changes?

The following command should tell you which disks were in this pool
originally:

# zpool history tank
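Its output should include the original 'zpool create' line, something
like this (illustrative only; your dates and device names will differ):

History for 'tank':
2010-11-02.10:15:21 zpool create tank mirror c5t0d0 c5t1d0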

If the history identifies tank's actual disks, maybe you can determine
which disk is masquerading as c5t1d0.
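One way to check that (a sketch; adjust the device pattern to match
whatever format reports) is to dump the label on every disk and compare
the path and guid fields:

# for d in /dev/dsk/c5t*d0s0; do echo $d; zdb -l $d | egrep 'path|guid'; done

A disk whose label claims a path of c5t1d0s0 but sits at a different
device node would be your masquerader.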

If that doesn't work, accessing the individual disk entries in format
should tell you which one is the problem, if it's only one.
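You might also check the per-device error counters, which don't depend
on the pool being responsive:

# iostat -En

The failing disk usually stands out with nonzero hard or transport
errors.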

I would like to see the output of this command:

# zdb -l /dev/dsk/c5t1d0s0
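On a disk with a good label, you should see four identical labels
listing the pool name, guids, and both sides of the mirror, roughly
like this (abridged and illustrative; your values will differ):

--------------------------------------------
LABEL 0
--------------------------------------------
    version: 22
    name: 'tank'
    ...
    children[0]:
        path: '/dev/dsk/c5t0d0s0'
    children[1]:
        path: '/dev/dsk/c5t1d0s0'

On the bad disk, zdb typically reports 'failed to unpack label 0'
(through 3) instead.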

Make sure you have a good backup of your data. If you need to pull a
disk to check cabling or rule out controller issues, you should
probably export this pool first.
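Assuming nothing on the pool is in use, the export is just:

# zpool export tank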

Others have resolved minor device issues by exporting and importing the
pool, but with format and zpool commands hanging on your system, I'm
not confident that operation will work for you.
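If you do try it after checking the hardware, the import side is just:

# zpool import tank

Running 'zpool import' with no pool name first is also safe; it only
scans and reports what could be imported, which may show which devices
the system can actually see.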

Thanks,

Cindy

On 05/19/11 12:17, Alex wrote:
I thought this was interesting - it looks like we have a failing drive in our 
mirror, but the two device nodes in the mirror are the same:

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 1h9m with 0 errors on Sat May 14 03:09:45 2011
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c5t1d0  ONLINE       0     0     0
            c5t1d0  FAULTED      0     0     0  corrupted data

c5t1d0 does indeed only appear once in the "format" list. I wonder how to go 
about correcting this if I can't uniquely identify the failing drive.

"format" takes forever to spill its guts, and the zpool commands all hang...... 
clearly there is hardware error here, probably causing that, but not sure how to identify 
which disk to pull.