Hi all, I'm trying to simulate a drive failure and recovery on a raidz array. I can do this using 'zpool replace', but that requires an extra disk that was not part of the array. How do you manage when you don't have (or don't yet need) a spare disk?
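For reference, the spare-disk workflow I'd like to avoid looks roughly like this (a sketch only; 'pool' and 'ad6' are from my setup below, and 'ad8' is a hypothetical spare that was never part of the pool):

```shell
# Sketch: recover from a "failed" raidz member using a spare disk.
# 'ad8' is a hypothetical spare; adjust names for your pool.
zpool offline pool ad6        # take the failed disk out of service
zpool replace pool ad6 ad8    # resilver the data onto the spare
zpool status pool             # watch resilver progress / final state
```

What I'm after is a way to get the same "rebuild from scratch" effect on ad6 itself, without the extra disk.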
For example, when I 'dd if=/dev/zero of=/dev/ad6', or physically remove the drive for a while, then 'online' the disk, after it resilvers I'm typically left with the following after scrubbing:

r...@file:~# zpool status
  pool: pool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h0m with 0 errors on Fri Dec 10 23:45:56 2010
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad13    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad6     ONLINE       0     0     7

errors: No known data errors

http://www.sun.com/msg/ZFS-8000-9P lists my above actions as a cause for this state and rightfully doesn't consider them serious. When I 'clear' the errors, though, then offline/fault another drive and reboot, the array faults. That tells me ad6 was never fully integrated back in. Can I tell the array to re-add ad6 from scratch? 'detach' and 'remove' don't work for raidz, so otherwise I need to use 'replace' to get out of this situation.

My system:

r...@file:~# uname -a
FreeBSD file 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sun Nov 28 13:36:08 SAST 2010 r...@file:/usr/obj/usr/src/sys/COWNEL amd64
r...@file:~# dmesg | grep ZFS
ZFS filesystem version 4
ZFS storage pool version 15

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss