Re: [zfs-discuss] Corrupt Array

2011-12-22 Thread Gareth de Vaux
On Thu 2011-12-22 (09:13), Richard Elling wrote:
> Be happy. Dance a jig. Buy a lottery ticket.
> Notice: scrub repaired 85.5K in 1h21m with 0 errors on Mon Dec 19 06:24:25 2011
> ZFS found corruption and fixed it.
lol, will do next time.
> oops... tempting the fates?
> Transient errors do occ…

Re: [zfs-discuss] Corrupt Array

2011-12-22 Thread Gareth de Vaux
On Thu 2011-12-22 (10:09), Bob Friesenhahn wrote:
> One of your disks failed to return a sector. Due to redundancy, the
> original data was recreated from the remaining disks. This is normal
> good behavior (other than the disk failing to read the sector).
So those checksum counts were histori…

[zfs-discuss] Corrupt Array

2011-12-21 Thread Gareth de Vaux
Hi guys, after a scrub my raidz array status showed:

# zpool status
  pool: pool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced,…
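For reference, a minimal sketch of the follow-up steps for this situation (the pool name "pool" is taken from the status output above; these are standard zpool subcommands, not commands quoted from the thread):

```shell
# Show per-device READ/WRITE/CKSUM counters and any files affected
# by the unrecoverable error (pool name "pool" as in the output above).
zpool status -v pool

# After a scrub completes with 0 errors, reset the counters so any
# newly occurring errors stand out from the historical ones.
zpool clear pool
```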

Re: [zfs-discuss] ZFS not starting

2011-12-02 Thread Gareth de Vaux
On Thu 2011-12-01 (14:19), Freddie Cash wrote:
> You will need to find a lot of extra RAM to stuff into that machine in
> order for it to boot correctly, load the dedupe tables into ARC, process
> the intent log, and then import the pool.
Thanks guys, managed to get 24GB together and it made it (…
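The RAM requirement Freddie describes is dominated by the dedup table (DDT). As a back-of-envelope illustration (the ~320 bytes of ARC per unique block is a commonly cited rule of thumb, and the 10 TiB pool size here is hypothetical, not a figure from the thread):

```shell
# Rough DDT sizing: ~320 bytes of ARC per unique block (rule of thumb).
# Assumes 10 TiB of deduped data at the default 128K recordsize,
# purely for illustration.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))   # 10 TiB of data
blocks=$((pool_bytes / (128 * 1024)))            # unique 128K blocks
ddt_bytes=$((blocks * 320))
echo "approx $((ddt_bytes / 1024 / 1024 / 1024)) GiB of RAM for the DDT"
```

At that scale the table alone dwarfs the 24GB mentioned above, which is consistent with the pool only importing once extra RAM was fitted.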

[zfs-discuss] ZFS not starting

2011-12-01 Thread Gareth de Vaux
Hi guys, when ZFS starts it ends up hanging the system. We have a raidz over 5 x 2TB disks with 5 ZFS filesystems. (The root filesystem is on separate disks.)

# uname -a
FreeBSD fortinbras.XXX 8.2-STABLE FreeBSD 8.2-STABLE #0: Wed Oct 19 09:20:04 SAST 2011 r...@storage.xxx:/usr/obj/usr/src/s…

Re: [zfs-discuss] raidz recovery

2010-12-21 Thread Gareth de Vaux
Hi, I'm copying the list - assume you meant to send it there.

On Sun 2010-12-19 (15:52), Miles Nordin wrote:
> If 'zpool replace /dev/ad6' will not accept that the disk is a
> replacement, then You can unplug the disk, erase the label in a
> different machine using
>
> dd if=/dev/zero of=/dev/the…
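Miles' label-erasing suggestion, sketched end to end (device name ad6 is from the thread; the sizes and the use of FreeBSD's diskinfo are assumptions — ZFS writes four 256K labels, two at the front and two at the end of the device, so zeroing only the start of the disk leaves the trailing copies intact):

```shell
# DANGEROUS: wipes the ZFS labels on ad6. Run only on the pulled disk.
# Front two labels live in the first 512K; 4 MiB is a comfortable margin.
dd if=/dev/zero of=/dev/ad6 bs=1m count=4

# Tail two labels: zero the last 4 MiB as well.
# diskinfo's third field is the media size in bytes (FreeBSD).
size=$(diskinfo ad6 | awk '{print $3}')
dd if=/dev/zero of=/dev/ad6 bs=1m seek=$(( size / 1048576 - 4 ))
```

With all four labels gone, ZFS should treat the disk as new and accept it for a replace.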

Re: [zfs-discuss] raidz recovery

2010-12-18 Thread Gareth de Vaux
On Sat 2010-12-18 (14:55), Tuomas Leikola wrote:
> have you tried zpool replace? like remove ad6, fill with zeroes,
> replace, command "zpool replace tank ad6". That should simulate drive
> failure and replacement with a new disk.
'replace' requires a different disk to replace with. How do you "r…

Re: [zfs-discuss] raidz recovery

2010-12-15 Thread Gareth de Vaux
On Mon 2010-12-13 (16:41), Marion Hakanson wrote:
> After you "clear" the errors, do another "scrub" before trying anything
> else. Once you get a complete scrub with no new errors (and no checksum
> errors), you should be confident that the damaged drive has been fully
> re-integrated into the po…

[zfs-discuss] raidz recovery

2010-12-11 Thread Gareth de Vaux
Hi all, I'm trying to simulate a drive failure and recovery on a raidz array. I'm able to do so using 'replace', but this requires an extra disk that was not part of the array. How do you manage when you don't have or need an extra disk yet? For example when I 'dd if=/dev/zero of=/dev/ad6', or phy…
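One way to exercise failure and recovery without a spare disk is to take the member offline, optionally damage or swap it, then let ZFS resilver it back — a sketch using the pool/device names that appear in the thread ("tank", ad6); replacing a disk with itself in the same slot is standard zpool usage but is my suggestion, not something confirmed in the thread:

```shell
# Take ad6 out of the raidz; the pool goes DEGRADED but keeps serving I/O.
zpool offline tank ad6

# ...zero the disk, pull it, or otherwise simulate the failure here...

# Bring it back; ZFS resilvers only the blocks written while it was out.
zpool online tank ad6

# If the disk was wiped and its labels destroyed, replace it with itself:
zpool replace tank ad6 ad6
zpool status tank   # watch resilver progress
```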