> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave Vrona
> 
> 1) Mirroring.  Leaving cost out of it, should ZIL and/or L2ARC SSDs be
> mirrored ?

IMHO, the best answer to this question is the one from the ZFS Best
Practices guide.  (I wrote part of it.)
In short:

You don't need to mirror your L2ARC cache device, and ZFS won't let you even
if you try: cache devices cannot be mirrored.  If a cache device fails, reads
simply fall back to the pool.
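For reference, a cache device is just added (and removable) as its own vdev; a minimal sketch, assuming a pool named tank and a hypothetical SSD at c1t5d0:

```shell
# Add an SSD as an L2ARC cache device (pool and device names are hypothetical).
zpool add tank cache c1t5d0

# Cache devices can be removed again at any time without harming the pool.
zpool remove tank c1t5d0
```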

For zpool versions < 19, which includes all current releases of Solaris 10
and OpenSolaris 2009.06, it is critical to mirror your ZIL log device.  A
failed unmirrored log device means the permanent loss of the pool.

For zpool versions >= 19, available in the developer builds downloadable from
genunix, you need to make your own decision.  If an unmirrored log device
fails, *or* the system crashes ungracefully, there is no problem.  But if
both happen together, you lose the latest writes leading up to the crash,
though not the whole pool.  Be aware that in some scenarios a failing log
device can go undetected until after the ungraceful reboot, in which case you
still lose the latest data, but again not the whole pool.

Personally, I recommend running the latest build from genunix, and I
recommend not mirroring log devices, except in the most critical situations,
such as a machine that processes credit card transactions or the like.
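To see which behavior applies to your pool, and to set up a mirrored log where it is warranted, something like the following works; a sketch assuming a pool named tank and hypothetical device names:

```shell
# Check the pool's on-disk version (>= 19 survives a log-device failure).
zpool get version tank
zpool upgrade -v            # lists the versions this build supports

# Add a mirrored pair of log devices (device names are hypothetical).
zpool add tank log mirror c1t2d0 c1t3d0

# Or attach a second device to an existing unmirrored log device,
# turning it into a mirror.
zpool attach tank c1t2d0 c1t3d0
```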


> 2) ZIL write cache.  It appears some have disabled the write cache on
> the X-25E.  This results in a 5 fold performance hit but it eliminates
> a potential mechanism for data loss.  Is this valid?  If I can mirror
> ZIL, I imagine this is no longer a concern?

This disagrees with my measurements.  With a dedicated log device, I found
the best performance by disabling the write cache on all the devices (disk
and HBA).  This is because ZFS has knowledge of both the filesystem and the
block-level devices, while the HBA knows only about the block-level devices
and nothing about the filesystem.  Long story short, ZFS does a better job of
write buffering and of utilizing the available devices.
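On Solaris, the per-disk write cache can be toggled from the expert mode of format(1M); a sketch of the menu navigation, assuming a hypothetical disk c1t0d0 (the exact menus can vary by driver and firmware):

```shell
# Enter format's expert mode, select the disk, then walk the cache menus:
format -e
#   (select the disk from the list, e.g. c1t0d0 -- hypothetical name)
#   format> cache
#   cache> write_cache
#   write_cache> disable
#   write_cache> quit
#   cache> quit
#   format> quit
```

Note this is a per-device setting; you would repeat it for each disk in the pool.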

Details are in the ZFS Best Practices guide.


> 3) SATA devices on a SAS backplane.  Assuming the main drives are SAS,
> what impact do the SATA SSDs have?  Any performance impact?  I realize
> I could use an onboard SATA controller for the SSDs however this
> complicates things in terms of the mounting of these drives.

Putting the SATA SSDs on the SAS backplane is precisely what you should do.
It works perfectly, and it is the configuration I used when I produced the
measurements described above.
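Once the SSDs are in place behind the SAS HBA, they show up like any other device; a quick sanity check (pool name is hypothetical):

```shell
# Log and cache devices appear as their own sections in the status output.
zpool status tank

# Per-vdev I/O statistics, refreshed every 5 seconds, to confirm the
# SSDs are actually absorbing the log/cache traffic.
zpool iostat -v tank 5
```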

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss