So I rebuilt my production mail server on Solaris 10 06/06 with ZFS. It ran
for three months with no hardware errors, but now my ZFS file system seems
to have died a quiet death. Sun engineering's response was to point to the
FMA knowledge article (ZFS-8000-8A, linked below), which says to throw out
the ZFS pool and start over. I'm really reluctant to do that, since a tape
restore will take hours and we still don't know what's wrong. I'm seriously
wondering if I should just toss ZFS. Again, this is Solaris 10 06/06, not
some beta version. It's an older server, a 280R with an older SCSI RaidKing
array.

The Sun engineer is escalating, but thanks in the meantime for any clues or
thoughts. I don't want to erase the evidence until we have some idea what's
wrong.
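
Before anything gets wiped, these standard Solaris 10 commands should
capture the evidence without writing to the pool (the pool name mailstore
is taken from the status output below):

# Dump the FMA fault log, then the raw error reports (ereports) behind
# it; both persist across reboots and record what ZFS actually saw.
fmdump
fmdump -eV > /var/tmp/ereports.txt

# Per-device soft/hard/transport error counters from the SCSI driver,
# to help rule the 280R's HBA and the RaidKing in or out.
iostat -En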


zpool status -v
 pool: mailstore
state: ONLINE
status: One or more devices has experienced an error resulting in data
       corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
       entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scrub: scrub completed with 1 errors on Tue Nov 28 12:29:18 2006
config:

       NAME        STATE     READ WRITE CKSUM
       mailstore   ONLINE       0     0   250
         c1t4d0    ONLINE       0     0   250

errors: The following persistent errors have been detected:

         DATASET         OBJECT   RANGE
         mailstore/imap  4539004  0-8192
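
Since ZFS at this rev only reports the damaged object as a dataset/object
number, zdb ought to be able to map object 4539004 back to a pathname, so
at worst only that one file would need restoring from tape. A minimal
sketch, assuming the zdb that ships with 06/06 (it only reads the pool):

# Print the dnode for object 4539004 in mailstore/imap at full
# verbosity; the "path" field, when present, names the affected file.
zdb -dddd mailstore/imap 4539004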