I had a very similar problem: 8 external USB drives on a machine running
OpenSolaris natively. When I moved the machine into a different room and
powered it back up (there were a couple of reboots, a couple of broken USB
cables, and some drive shutdowns in between), I got the same error. Losing
that much data is definitely a shock.

I'm running raidz2, and I would have assumed that two levels of redundancy
should be enough to let the pool tolerate a lot of roughness.

After panicking a little, stressing my family out, and some playing with zdb
that led nowhere, I did a
zpool export mypool
zpool import mypool
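
(A side note in case the import is not that smooth for others: as far as I
know, a bare

zpool import

lists the pools ZFS can find on the attached devices, and

zpool import -f mypool

forces the import as a last resort, though I'd be careful with -f on a pool
that might still be imported on another host. In my case the plain import
itself went through.)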

It complained that it could not mount the filesystem because the mount point
was not empty, so I did
umount /mypool/mypool
zfs mount mypool/mypool
zpool status mypool

and to my great relief everything seems fine.
ls /mypool/mypool

does show data.
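
(If you want more reassurance than an ls, something like

zpool status -v mypool
zfs list -r mypool

should show the pool state plus any errors, and list all the datasets. The
mypool/mypool dataset layout here is just how my pool happens to be set up.)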

Scrub is running right now to be on the safe side.
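
(For anyone following along, starting and watching the scrub is just

zpool scrub mypool
zpool status mypool

and the status output shows the scrub's progress along with any checksum
errors it finds and repairs.)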

Thought this might help some folks out there.

Cheers!

Andy