Just to make sure you know ... if you disable the ZIL altogether, and you
have a power interruption, failed CPU, or kernel halt, then you're likely
to have a corrupt unusable zpool, or at least data corruption.  If that is
indeed acceptable to you, go nuts.  ;-)
I believe that the above is wrong, as long as the devices involved flush
their caches when requested to.  ZFS still writes data in order (at the
TXG level) and advances to the next transaction group only when the
devices written to affirm that they have flushed their caches.  Without
the ZIL, data claimed to have been synchronously written since the
previous transaction group commit may be entirely lost.

If the devices don't flush their caches appropriately, the ZIL is
irrelevant: the pool can be corrupted either way.
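
To make the distinction concrete, here is a minimal sketch (Python, with a
made-up path) of what a synchronous write looks like from the application's
side; the ZIL is what lets ZFS honour the fsync() before the next
transaction group commits, so with it disabled the call can return while
the data still only lives in the open TXG:

    import os

    # Hypothetical file on a ZFS dataset; the path is invented for illustration.
    fd = os.open("/tank/home/alice/record.log",
                 os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    os.write(fd, b"payload the application now believes is durable\n")
    os.fsync(fd)   # with the ZIL: on stable storage when this returns;
                   # without it: durable only once the open TXG commits
    os.close(fd)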
I stand corrected.  You don't lose your pool.  You don't have a corrupted
filesystem.  But you lose whatever writes had not yet been committed, so if
those writes happen to be things like database transactions, you could end
up with corrupted databases or files, or missing files if you were creating
them at the time, and so on.  AKA, data corruption.

But not pool corruption, and not filesystem corruption.
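
As a hedged illustration of that kind of application-level damage (the file
names are invented), consider the usual journal pattern, which assumes that
an fsync()ed record survives any crash:

    import os

    def durable_append(path: str, entry: bytes) -> None:
        """Append a record and treat it as committed once fsync() returns."""
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        try:
            os.write(fd, entry)
            os.fsync(fd)   # application-level commit point
        finally:
            os.close(fd)

    durable_append("journal.log", b"COMMIT txn 42\n")
    # The application now tells its client "committed".  With the ZIL
    # disabled, that record can still vanish in a power cut before the next
    # TXG commit: the filesystem stays consistent, but the application's
    # promise is broken, which is exactly the database/file corruption
    # described above.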


Which is expected behavior when you break NFS requirements, as Linux does out of the box.  Disabling the ZIL on an NFS server makes it no worse than the standard Linux behaviour - you get decent performance at the cost of some data possibly being corrupted from the NFS client's point of view.  But there are environments where that is perfectly acceptable: you are not running critical databases there, but rather user home directories, and ZFS currently commits a transaction group after at most 30 seconds, so a user cannot lose more than the last ~30 seconds of writes if the NFS server suddenly loses power.
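
To put a rough number on that window, a back-of-the-envelope sketch (Python;
the client write rate is an invented example figure, the 30s is the default
TXG commit interval mentioned above):

    TXG_TIMEOUT_S = 30            # default commit interval cited above
    client_write_rate_mb_s = 5    # hypothetical NFS client write rate

    worst_case_loss_mb = TXG_TIMEOUT_S * client_write_rate_mb_s
    print(f"At most ~{TXG_TIMEOUT_S}s of acknowledged writes "
          f"(~{worst_case_loss_mb} MB at this rate) are exposed if the "
          f"server loses power with the ZIL disabled.")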

To clarify - if the ZIL is disabled, it makes no difference at all to pool- or filesystem-level consistency.

--
Robert Milkowski
http://milek.blogspot.com

