On Tue, Sep 30, 2008 at 21:48, Tim <[EMAIL PROTECTED]> wrote:
>> why ZFS can do this and hardware solutions can't (being several
>> unreliable subsystems away from the data).
> So how is a server running Solaris with a QLogic HBA connected to an FC JBOD
> any different from a NetApp filer running ONTAP with a QLogic HBA directly
> connected to an FC JBOD?  How is it "several unreliable subsystems away from
> the data"?
>
> That's a great talking point but it's far from accurate.
Do your applications run on the NetApp filer?  The idea of ZFS, as I
see it, is to checksum the data from the moment the application puts
it into memory until the moment it reads it back out again.  A
separate filer can checksum data from when it is written into the
filer's buffers until the read request for that data arrives, but to
get from the filer to the machine running the application, the data
must cross an unreliable medium.  If the data is corrupted between
the filer and the host, that corruption cannot be detected.  The
filer could perhaps use a special protocol that carries a checksum
with each block, but then the host would have to verify that checksum
for it to be of any use.
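
To make that concrete, here is a rough sketch (not ZFS or ONTAP code,
just an illustration using a toy Fletcher-style sum) of why a block
checksum sent with the data only helps if the *host* recomputes and
compares it:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified Fletcher-style checksum, purely for illustration. */
static uint64_t
cksum(const uint8_t *buf, size_t len)
{
        uint64_t a = 0, b = 0;
        for (size_t i = 0; i < len; i++) {
                a += buf[i];
                b += a;
        }
        return ((b << 32) | (a & 0xffffffff));
}

int
main(void)
{
        uint8_t block[512] = "important application data";

        /* The filer verifies the block while it is still in its buffers. */
        uint64_t filer_cksum = cksum(block, sizeof (block));

        /* A bit flips on the wire, *after* the filer's check. */
        block[3] ^= 0x40;

        /*
         * A host that trusts the filer's verification never notices.
         * A host that recomputes the checksum itself catches it.
         */
        if (cksum(block, sizeof (block)) != filer_cksum)
                (void) printf("host-side check: corruption detected\n");
        return (0);
}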

Contrast this with ZFS.  It takes the application data, checksums it,
and writes both the data and the checksum out across the (unreliable)
wire to the (unreliable) disk.  When a read request comes in, it reads
the data and the checksum back across the (unreliable) wire and
verifies the checksum on the *host* side of the wire.  Any corruption
that happens between the checksum being calculated on the host and
being checked on the host can therefore be detected.  That covers
several more unreliable layers of the path than filer-based checksums
do.
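
Again, only a sketch and not actual ZFS code (for one thing, ZFS
stores the checksum in the parent block pointer rather than next to
the data), but the shape of that write/read path is roughly:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLKSZ   512

/* Stands in for everything on the far side of the wire: HBA, disk, etc. */
struct stored_block {
        uint8_t         data[BLKSZ];
        uint64_t        cksum;
};

/* Simplified Fletcher-style checksum, purely for illustration. */
static uint64_t
cksum(const uint8_t *buf, size_t len)
{
        uint64_t a = 0, b = 0;
        for (size_t i = 0; i < len; i++) {
                a += buf[i];
                b += a;
        }
        return ((b << 32) | (a & 0xffffffff));
}

static void
write_block(struct stored_block *sb, const uint8_t *data)
{
        /* Checksum is computed on the host while the data is in memory. */
        sb->cksum = cksum(data, BLKSZ);
        (void) memcpy(sb->data, data, BLKSZ);
}

static int
read_block(const struct stored_block *sb, uint8_t *out)
{
        (void) memcpy(out, sb->data, BLKSZ);
        /* Verified on the host side of the wire, not the storage side. */
        return (cksum(out, BLKSZ) == sb->cksum ? 0 : -1);
}

int
main(void)
{
        struct stored_block sb;
        uint8_t buf[BLKSZ] = "application data";

        write_block(&sb, buf);
        sb.data[7] ^= 0x01;     /* corruption anywhere along the path */

        if (read_block(&sb, buf) != 0)
                (void) printf("checksum mismatch detected on the host; "
                    "ZFS would report an error, or repair from a good "
                    "copy if redundancy is available\n");
        return (0);
}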

Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
