I think this falls under the bug (I don't have the number handy at the
moment) where ZFS needs to fail more gracefully in a situation like this.
Yes, he probably broke his zpool, but it really shouldn't have panicked
the machine.
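
For reference, here is a rough sketch of how the test below could be
adjusted along the lines Mario suggests. This is untested; the dd into a
made-up /data/junk file, the zpool replace, and the zpool status check
are just my guess at one way to force I/O and wait out the resilver
before taking out the second side:

#mkdir /disk
#mkfile 128m /disk/disk1
#mkfile 128m /disk/disk2
#zpool create data mirror /disk/disk1 /disk/disk2
#mkfile 128m /disk/disk1
(force some I/O so ZFS actually notices the damaged side)
#dd if=/dev/urandom of=/data/junk bs=128k count=256
#sync
(replace the clobbered file and let the resilver run to completion)
#zpool replace data /disk/disk1
#zpool status data
(only once the resilver is done, take out the second side)
#mkfile 128m /disk/disk2
#zpool scrub data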

-brian

On Mon, Jun 11, 2007 at 03:05:19PM -0100, Mario Goebbels wrote:
> I think in your test you have to force some I/O on the pool for ZFS to
> recognize that your simulated disk has gone faulty, and you need to do
> that right after the first mkfile. Immediately overwriting both files
> after pool creation leaves ZFS with the impression that both disks went
> missing. And even if ZFS did notice by itself, you would still need to
> let it perform and complete a resilver after issuing the first mkfile.
> 
> -mg
> 
> > Panic on snv_65&64 when:
> > #mkdir /disk
> > #mkfile 128m /disk/disk1
> > #mkfile 128m /disk/disk2
> > #zpool create data mirror /disk/disk1 /disk/disk2
> > #mkfile 128m /disk/disk1
> > #mkfile 128m /disk/disk2
> > #zpool scrub data
> > 
> > panic[cpu0]/thread=2a100e33ca0: ZFS: I/O failure (write on <unknown> off 0: 
> > zio 30002925770 [L0 bplist] 4000L/4000P DVA[0]=<0:10000:4000> 
> > DVA[1]=<0:1810000:4000> DVA[2]=<0:3010000:4000> fletcher4 uncompressed BE 
> > contiguous birth=15 fill=1 
> > cksum=1000000:1000000000:800800000000:2ab2ab000000000): error 6
> 


-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke