Hello

I found myself in a curious situation regarding the state of a zpool inside
a VMware guest. I've run into CKSUM errors on the infrastructure stack
below.

> Hitachi (HDS) 9570V SAN, FC Disks
>> SUN X4600 M2 (16 Core, 32GB Memory)
>>> VMware ESXi 3.5 U3
>>>> Single Extended Datastore, 4x 35GB FC LUNs
>>>>> Solaris 10 u6 x86 Guest OS


A striped zpool on the Solaris guest is starting to show some CKSUM
errors. This is surprising all by itself, given the enterprise hardware
we're dealing with, but assuming we can ignore why these errors are
happening for the time being: how do I diagnose the state of the 'apps'
zpool? Specifically (a sketch of what I was planning to try follows the
two questions):

1.  Why is ZFS showing <0x0> instead of an actual file name (or names)?
2.  How do I see which files these CKSUM errors are affecting?
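
For reference, the only next step I know to try is a scrub followed by
another status check. A minimal sketch, assuming the pool can tolerate
the extra I/O while the scrub runs:

# zpool scrub apps
# zpool status -v apps

My understanding is that 'zpool status -v' can only print path names it
can resolve at that moment, so a fresh scrub may (or may not) turn
<0x0> into a real path.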

I'm not seeing *any* errors or warnings in /var/adm/messages.
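
If it matters, my guess (and it is only a guess) is that the checksum
ereports land in FMA's error telemetry rather than in syslog, so
something like this might show more detail:

# fmdump -e
# fmdump -eV | grep -i zfs

The verbose form should include the ereport.fs.zfs.checksum events, if
any were logged.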

Any thoughts?


# zpool status -v
  pool: apps
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        apps        ONLINE       0     0    28
          c1t1d0    ONLINE       0     0    14
          c1t2d0    ONLINE       0     0     0
          c1t3d0    ONLINE       0     0    14

errors: Permanent errors have been detected in the following files:

        apps:<0x0>

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
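
One more data point for question 1, in case someone can confirm: my
(possibly wrong) understanding is that <0x0> means the damaged object
can no longer be mapped back to a path name (e.g. dataset metadata, or
a file that has since been deleted), and that zdb can dump the raw
object. A sketch, with the caveat that I'm not sure how safe or
meaningful zdb is against a live, imported pool:

# zdb -dddd apps 0

(Here 'apps' is the dataset and '0' is the object number taken from
the <0x0> entry above.)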