I have two separate systems where "zpool remove <pool> <disk>" failed to
remove a spare disk from the pool. In both cases the command returns
without any error (success). DTracing the ZFS_IOC_VDEV_REMOVE ioctl also
shows no error being returned. I searched SunSolve for matching bugs but
found none.
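For reference, this is roughly the sequence on the first system. The
DTrace one-liner is only a sketch; it assumes this build's kernel exposes
the zfs_ioc_vdev_remove handler through the fbt provider (arg1 is the
handler's return value):

  # zpool remove sybdump_pool c7t0d0
  # echo $?
  0
  # dtrace -n 'fbt::zfs_ioc_vdev_remove:return { printf("rc = %d\n", arg1); }'

The probe fires with rc = 0 when the remove is issued, yet the spare
stays in the pool configuration.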
# zpool status sybdump_pool
  pool: sybdump_pool
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        sybdump_pool    ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            c0t2d0      ONLINE       0     0     0
            c1t2d0      ONLINE       0     0     0
            c4t2d0      ONLINE       0     0     0
            c5t2d0      ONLINE       0     0     0
        spares
          c7t0d0        FAULTED   corrupted data
          c7t4d0        FAULTED   corrupted data
          c7t0d0        FAULTED   corrupted data
          c7t4d0        AVAIL
The other pool (on a different system) shows the spare as UNAVAIL after
the disk somehow lost its label and the customer applied a default label
to it.
  pool: epool
 state: ONLINE
 scrub: scrub completed after 1h10m with 0 errors on Tue Feb 17 21:10:06 2009
config:

        NAME            STATE     READ WRITE CKSUM
        epool           ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            c2t0d0      ONLINE       0     0     0
            c2t0d1      ONLINE       0     0     0
            c2t0d2      ONLINE       0     0     0
            c2t0d3      ONLINE       0     0     0
            c2t0d4      ONLINE       0     0     0
        spares
          c2t0d5        UNAVAIL   cannot open    <<<

errors: No known data errors