Hello,

 In OpenSolaris b111, with autoreplace=on and a pool without spares,
ZFS does not kick off a resilver after a faulty disk is replaced and
shows up under the same device name, even after waiting several
minutes. The workaround is a manual `zpool replace`, which returns
the following:

# zpool replace tank c3t17d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3t17d0s0 is part of active ZFS pool tank. Please see zpool(1M).

 ... and resilvering starts immediately. It looks as if the manual
`zpool replace` triggered the autoreplace logic.
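
 For reference, the forced version of that command plus a status
check looks like this (same pool and device name as above; `-f` is
just the override the error message itself suggests):

# zpool replace -f tank c3t17d0
# zpool status tank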

 Since b111 is a little old, there is a chance this has already
been reported and fixed. Does anyone know anything about it?

 Also, if autoreplace is on and the pool has spares, when a disk fails
the spare is automatically used (this works fine), but when the faulty
disk is replaced, nothing really happens. Was the autoreplace code
supposed to replace the faulty disk and release the spare once the
resilver is done?
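
 For the record, the manual sequence I would expect to need here,
assuming a hypothetical spare named c3t20d0, is to resilver onto the
replacement disk and then release the spare (zpool(1M) says a hot
spare is returned to the spare set with `zpool detach`):

# zpool replace tank c3t17d0
# zpool detach tank c3t20d0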

Thank you,

-- 
Giovanni Tirloni
gtirl...@sysdroid.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss