On Tue, 18 May 2010, Edward Ned Harvey wrote:

> Either I'm crazy, or I completely miss what you're asking.  You want to have
> one side of a mirror attached locally, and the other side of the mirror
> attached ... via iscsi or something ... across the WAN?  Even if you have a
> really fast WAN (1Gb or so) your performance is going to be terrible, and I
> would be very concerned about reliability.  What happens if a switch reboots
> or crashes?  Then suddenly half of the mirror isn't available anymore
> (redundancy is degraded on all pairs) and ... will it be a degraded mirror?
> Or will the system just hang, waiting for iscsi IO to time out?  When it
> comes back online, will it intelligently resilver only the parts which have
> changed since?  Since the mirror is now broken, and local operations can
> happen faster than the WAN can carry them across, will the resilver ever
> complete?  I don't know.

This has been accomplished successfully before.  There used to be a fellow posting here (from New Zealand, I think) who used distributed storage just like that.  If the WAN goes away, zfs writes will likely hang for the iSCSI timeout period (typically around 3 minutes) and then continue normally once iSCSI/zfs decides that the mirror device is not available.  When the WAN returns, zfs will resilver only the blocks written while the remote device was offline.
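As a rough sketch of such a split mirror on Solaris (the iSCSI target name, IP address, and all device names below are hypothetical placeholders, not anything from the original thread):

```shell
# Make the remote LUN visible locally (hypothetical target and address).
iscsiadm add static-config iqn.2010-05.org.example:remote-disk,192.0.2.10:3260
devfsadm -i iscsi

# Mirror a local disk against the iSCSI-backed device
# (both device names are placeholders).
zpool create tank mirror c0t0d0 c0t600A0B80001234560000000012345678d0

# After a WAN outage and reconnect, watch the resilver progress.
zpool status tank
```

Because ZFS tracks which transaction groups a missing vdev has not seen, the resilver after an outage copies only the data written in the interim rather than the whole device, which is why the "will the resilver ever complete" concern usually does not bite in practice.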

> The whole point of a log device is to accelerate sync writes, by providing
> nonvolatile storage which is faster than the primary storage.  You're not
> going to get this if any part of the log device is at the other side of a
> WAN.  So either add a mirror of log devices locally and not across the WAN,
> or don't do it at all.

This depends on the nature of the WAN.  The WAN latency may still be low compared with drive latency.
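For the local-log-device case, the quoted advice amounts to something like the following (pool name and SSD device names are placeholders):

```shell
# Attach a mirrored log device using two *local* SSDs; the pool's data
# vdevs can still be mirrored across the WAN, but sync-write latency
# stays on the local side.
zpool add tank log mirror c0t1d0 c0t2d0

# Verify that the log vdev is present and mirrored.
zpool status tank
```

Mirroring the slog itself guards against losing recently committed sync writes if one SSD dies, without putting any part of the log path behind the WAN.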

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
