I have a pool that looks like this:
  pool: preplica-1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        preplica-1  ONLINE       2     0     2
          c2t0d0    ONLINE       0     0     0
          c2t1d0    ONLINE       0     0     0
          c2t2d0    ONLINE       2     0     2
          c2t3d0    ONLINE       0     0     0

errors: The following persistent errors have been detected:

          DATASET  OBJECT  RANGE
          36       3a2939  lvl=0 blkid=0

% uname -a
SunOS preplica01 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Fire-V210

% zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
preplica-1            9.06T   8.78T    291G    96%  ONLINE     -


This is a replicated filesystem that is kept up to date with zfs send/recv and
is never even mounted locally.  Originally the error was in a regular file's
inode, so I did the find -inum thing and found the filename.  I cp'ed the file,
deleted the old copy on the original filesystem, and did some incremental
zfs send|recv's to propagate the fix here.  I expected the problem to go away.
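In case the details matter, this is roughly the sequence I used on the source
side (every dataset and path name below is made up, not the real one):

  # map the bad inode to a filename (inode number came from the error report)
  find /tank/data -inum <inode-number> -print
  # make a fresh copy of the data and replace the damaged original
  cp /tank/data/path/to/file /tank/data/path/to/file.copy
  rm /tank/data/path/to/file
  mv /tank/data/path/to/file.copy /tank/data/path/to/file
  # push the change to the replica with an incremental send/recv
  zfs snapshot tank/data@fix
  zfs send -i tank/data@prev tank/data@fix | ssh preplica01 zfs recv preplica-1/data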

But instead it started looking like the output above.

I tried the zdb trick listed here, but
  zdb preplica-1 | grep "ID 36,"
is taking forever to complete, and none of the filesystems listed near the
front of the output has ID 36.
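For reference, that is the full-pool dump; I am wondering whether something
like the following would skip straight to the dataset summaries (I have not
confirmed that -d behaves this way in this build of zdb):

  # current attempt: walks every object in the pool, hence the long runtime
  zdb preplica-1 | grep "ID 36,"
  # possible shortcut: only print the per-dataset (objset) summaries
  zdb -d preplica-1 | grep "ID 36,"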

So I tried zdb -vvv of 0x3a2939 on each of the filesystems that I have, and
none of them was ID 36!  Not even the one the bad inode had originally been
reported in.
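In case my invocation is part of the problem, this is the shape of the
per-filesystem lookup I was attempting (filesystem name made up, and I am
assuming zdb wants the object number in decimal, so 0x3a2939 would be 3811641):

  # dump the dnode for one object; at -dddd zdb prints its path if it is a file
  zdb -dddd preplica-1/somefs 3811641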

Any suggestions?

I know that it's a relatively old version of Solaris 10, with a fairly old 
patchset.

Should I be concerned about this error?  I do know what caused it: a bad disk
in the underlying hardware RAID-5 storage (yes... I know... I know... :-),
which has since been removed.  So I'm not concerned about ongoing corruption
from this specific problem.  I just want to know what file is impacted by it.
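If a scrub would help zpool status map the object back to a path, this is what
I would try next (just a guess on my part):

  zpool scrub preplica-1
  # once the scrub completes, -v should list affected files by path when it can
  zpool status -v preplica-1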

Thanks!
Davin.
 
 