Gary Mills wrote:
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote:
Gary Mills wrote:
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote:
"gm" == Gary Mills <mi...@cc.umanitoba.ca> writes:
   gm> I suppose my RFE for two-level ZFS should be included,
It's simply a consequence of ZFS's end-to-end error detection.
There are many different components that could contribute to such
errors.  Since only the lower ZFS has data redundancy, only it can
correct the error.  Of course, if something in the data path
consistently corrupts the data regardless of its origin, it won't be
able to correct the error.  The same thing can happen in the simple
case, with one ZFS over physical disks.
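To make the layering concrete, here is roughly the shape of a two-level
setup (pool, volume, and device names below are only placeholders, and
this assumes the old shareiscsi property rather than COMSTAR):

  # storage server: the lower ZFS, which holds the redundancy
  zpool create lowerpool raidz c1t0d0 c1t1d0 c1t2d0
  zfs create -V 500G lowerpool/vol0
  zfs set shareiscsi=on lowerpool/vol0

  # file server: the upper ZFS, a plain pool on the imported iSCSI LUN
  zpool create upperpool c2t1d0

A checksum error detected by the upper pool can't be repaired there,
because only the lower pool has redundant copies to reconstruct from.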
I would argue against building this into ZFS. Any corruption happening on the wire should not be the responsibility of ZFS. If you want to make sure your data is not corrupted over the wire, use IPsec. If you want to prevent corruption in RAM, use ECC sticks, etc.

But what if the `wire' is a SCSI bus?  Would you want ZFS to do error
correction in that case?  There are many possible wires.  Every
component does its own error checking of some sort, but in its own
domain.  This brings us back to end-to-end error checking again. Since
we are designing a filesystem, that's where the reliability should
reside.


ZFS can't eliminate or prevent all errors. If you're concerned about this at the local-component level, you should have a split backplane or multiple controllers, and at minimum a 2-way mirror.
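For example (controller and disk names are made up), a pool whose mirror
halves sit behind different controllers survives a failed path on either
side:

  zpool create tank mirror c1t0d0 c2t0d0

ZFS then repairs a block that fails its checksum on one side from the
good copy on the other.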

Same with iSCSI. I run a minimum 2-way mirror from my ZFS server over 2 different NICs, across 2 gigabit switches w/trunking, to two different disk shelves for this reason. I do not stack ZFS layers, since it degrades performance and really doesn't provide any benefit.
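Roughly, that setup looks like this (the addresses and device names are
made up for illustration):

  # point the initiator at both shelves, one per NIC/switch path
  iscsiadm add discovery-address 192.168.10.10
  iscsiadm add discovery-address 192.168.20.10
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi

  # then pair a LUN from each shelf in every mirror vdev
  zpool create tank mirror c3t0d0 c4t0d0

That way a dead NIC, switch, or shelf only degrades the mirror instead
of taking the pool down.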

What's your reason for stacking zpools? I can't recall the original argument for this.

--
Dave