> Oh, and regarding the original post -- as several
> readers correctly
> surmised, we weren't faking anything, we just didn't
> want to wait
> for all the device timeouts.  Because the disks were
> on USB, which
> is a hotplug-capable bus, unplugging the dead disk
> generated an
> interrupt that bypassed the timeout.  We could have
> waited it out,
> but 60 seconds is an eternity on stage.

I'm sorry, I didn't mean to sound offensive. Anyway, I think people should 
know that their drives can stall the system for minutes, "despite" ZFS. I mean: 
a lot has been written about how great ZFS is at recovering when a drive 
fails, but there's nothing about this problem. I know now it's not ZFS's 
fault, but I wonder how many people set up their drives with ZFS assuming that 
"as soon as something goes bad, ZFS will fix it". 
Is there any way to test these cases other than smashing the drive with a 
hammer? Having a failover policy where the failover can't be tested sounds 
scary...
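
The closest thing I've found to a hammer-free test is a scratch pool on 
file-backed vdevs, where you can deliberately corrupt one side of a mirror 
and watch ZFS repair it. A rough sketch (pool name and paths are just 
placeholders I made up):

  # scratch pool on two file-backed vdevs
  mkfile 128m /var/tmp/vdev0 /var/tmp/vdev1
  zpool create testpool mirror /var/tmp/vdev0 /var/tmp/vdev1

  # put some data on it, then scribble zeros over one half of the
  # mirror (seek past the front vdev labels so the device stays valid)
  cp /usr/dict/words /testpool/words
  dd if=/dev/zero of=/var/tmp/vdev0 bs=1024k seek=4 count=64 conv=notrunc

  # scrub and check: ZFS should detect and repair the checksum errors
  zpool scrub testpool
  zpool status -v testpool
  zpool destroy testpool

But that only exercises the checksum/self-healing path. It never goes 
through the driver timeout you described, so as far as I can tell the 
"dead disk that won't answer" case still can't be rehearsed without real 
hardware misbehaving.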