I have a Solaris 10u3 x86 box patched with the important kernel/ZFS/FS 
patches (now running kernel patch 120012-14).

After executing a 'zpool scrub' on one of my pools, I see I/O read errors:

# zpool status  | grep ONLINE | grep -v '0     0     0'
  state: ONLINE
             c2t1d0  ONLINE       9     0     0
             c2t4d0  ONLINE      32     0     0
             c2t5d0  ONLINE       7     0     0
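
To see whether any files were actually affected, I can also run a verbose 
status against the pool (the pool name 'tank' below is just a placeholder 
for mine):

# zpool status -v tank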


Are these errors significant enough that I should replace the disks?

If not: I've read that when these counts cross some magic threshold, ZFS 
takes the disk offline and declares it faulted.
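
Is there a way to tell whether the fault manager is getting close to doing 
that? I'm checking whether it has already diagnosed anything with:

# fmadm faulty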

If I use 'zpool clear', will only these administrative statistics be 
reset, or will the internal state that keeps track of the errors be 
cleared as well?
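
For reference, what I'd run is per-device, something like the following 
(again with 'tank' as a placeholder pool name):

# zpool clear tank c2t4d0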

I do see bad blocks on the offending disks -- but why would ZFS see any 
errors at all, assuming the disks re-mapped the bad blocks?
# smartctl -a /dev/rdsk/c2t1d0 | grep defect
Elements in grown defect list: 3
# smartctl -a /dev/rdsk/c2t4d0 | grep defect
Elements in grown defect list: 3
# smartctl -a /dev/rdsk/c2t5d0 | grep defect
Elements in grown defect list: 2
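
To correlate these with what the driver and FMA recorded, I'm also looking 
at the per-device error counters and the error telemetry, e.g. for one of 
the disks:

# iostat -En c2t4d0
# fmdump -eV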



-- 

Jeremy Kister
http://jeremy.kister.net./




