> On 9-Nov-07, at 2:45 AM, can you guess? wrote:

...

> > This suggests that in a ZFS-style installation without a hardware
> > RAID controller they would have experienced at worst a bit error
> > about every 10^14 bits or 12 TB
> 
> 
> And how about FAULTS?
> hw/firmware/cable/controller/ram/...

If you had read either the CERN study or what I already said about it, you 
would have realized that it included the effects of such faults.
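For anyone wondering where the "10^14 bits or 12 TB" figure quoted above comes from, it's just unit conversion - a quick sanity check on my part, not a number taken from the study:

```python
# One bit error per 10^14 bits, expressed in decimal terabytes
# (the units disk vendors use: 1 TB = 10^12 bytes).
bits_between_errors = 10**14
bytes_between_errors = bits_between_errors / 8   # 8 bits per byte
terabytes = bytes_between_errors / 10**12
print(terabytes)  # 12.5
```

So "about 12 TB" is the expected interval between errors, slightly rounded down.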

...

> >> ... but I had a box that was randomly corrupting blocks during
> >> DMA.  The errors showed up when doing a ZFS scrub and I caught
> >> the problem in time.
> >
> > Yup - that's exactly the kind of error that ZFS and WAFL do a
> > perhaps uniquely good job of catching.
> 
> WAFL can't catch it all: it's distantly isolated from the CPU end.

WAFL will catch everything that ZFS catches, including the kind of DMA error 
described above:  it contains validating information outside the data blocks 
just as ZFS does.
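The mechanism both systems rely on can be sketched in a few lines - a toy illustration of the general technique, not actual ZFS or WAFL code.  The point is that each block's checksum lives in the pointer that references the block, not alongside the data itself, so corruption anywhere on the path to disk (including a bad DMA transfer) fails verification on read:

```python
import hashlib

def write_block(storage, data: bytes):
    """Store a data block; return a 'block pointer' carrying its checksum.

    Keeping the checksum in the pointer, outside the block it describes,
    means a block silently corrupted in flight or at rest cannot also
    corrupt its own checksum to match.
    """
    addr = len(storage)
    storage.append(bytearray(data))
    return {"addr": addr, "checksum": hashlib.sha256(data).digest()}

def read_block(storage, ptr):
    """Read a block and verify it against the checksum in its pointer."""
    data = bytes(storage[ptr["addr"]])
    if hashlib.sha256(data).digest() != ptr["checksum"]:
        raise IOError("checksum mismatch: block %d is corrupt" % ptr["addr"])
    return data

storage = []
ptr = write_block(storage, b"important data")
assert read_block(storage, ptr) == b"important data"

# Simulate the DMA corruption described above: flip one bit in place.
storage[ptr["addr"]][0] ^= 0x01
try:
    read_block(storage, ptr)
except IOError as e:
    print("caught:", e)
```

A scrub is then just this read-and-verify pass applied to every block in the pool.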

...

> > CERN was using relatively cheap disks
> 
> Don't forget every other component in the chain.

I didn't, and they didn't:  read the study.

...

> > Your position is similar to that of an audiophile enthused about a
> > measurable but marginal increase in music quality and trying to
> > convince the hoi polloi that no other system will do:  while other
> > audiophiles may agree with you, most people just won't consider it
> > important - and in fact won't even be able to distinguish it at all.
> 
> Data integrity *is* important.

You clearly need to spend a lot more time trying to understand what you've read 
before responding to it.

- bill
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
