ZFS no longer has the issue where loss of a single device (even intermittently) causes pool corruption. That's been fixed.

That is, there used to be an issue in this scenario:

(1) zpool constructed from a single LUN on a SAN device
(2) SAN experiences temporary outage, while ZFS host remains running.
(3) zpool is permanently corrupted, even if no I/O occurred during the outage

This was fixed (around build 101, IIRC).

However, ZFS remains much more sensitive than UFS to loss of the underlying LUN, and tends to mark such a LUN as defective during any SAN outage. It's much more recoverable nowadays, though. To be clear, this occasionally happens when something like a SAN switch dies, or there's a temporary hiccup in the SAN infrastructure, causing a short (i.e. < 1 minute) loss of connectivity to the underlying LUN.
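For anyone hitting this, recovery is usually just a matter of clearing the error state once connectivity returns. A rough sketch (pool name "tank" and the device name are placeholders, not from any real system):

```shell
# After a brief SAN blip, the device may show as FAULTED or UNAVAIL
# in the pool status even though the on-disk data is fine.
zpool status tank

# Clear the accumulated error counters and let ZFS reopen the device.
zpool clear tank

# If the device doesn't rejoin on its own, online it explicitly
# (device name here is purely illustrative).
zpool online tank c2t0d0

# A scrub afterwards verifies checksums across the pool.
zpool scrub tank
```

These are standard zpool subcommands; whether `zpool clear` alone suffices depends on how the outage was reported up the stack.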

RAIDZ and mirrored zpools are still the preferred way of arranging things in ZFS, even with hardware RAID backing the underlying LUNs (whether the LUNs come from a SAN or a local HBA doesn't matter).
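The reason is that ZFS can only self-heal from checksum errors when it has its own redundant copy to read from. Illustrative layouts (pool name and device names are placeholders):

```shell
# Two-way mirror across two LUNs: ZFS keeps two full copies and can
# repair a bad block on one side from the other.
zpool create tank mirror c2t0d0 c3t0d0

# Single-parity RAIDZ across three LUNs: survives loss of any one
# device and can reconstruct corrupted blocks from parity.
zpool create tank raidz c2t0d0 c3t0d0 c4t0d0
```

With a single LUN (even one backed by hardware RAID), ZFS can detect corruption via checksums but has nothing to repair it from.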

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss