> From: Richard Elling [mailto:richard.ell...@gmail.com]
> 
> On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
> 
> > For zpool < 19, which includes all present releases of Solaris 10 and
> > Opensolaris 2009.06, it is critical to mirror your ZIL log device.  A
> > failed unmirrored log device would be the permanent death of the pool.
> 
> I do not believe this is a true statement. In large part it will depend
> on the nature of the failure -- all failures are not created equal. It
> has also been shown that such pools are recoverable, albeit with
> tedious, manual procedures required.  Rather than saying this is a
> "critical" issue, I could say it is "preferred."  Indeed, there are
> *many* SPOFs in the typical system (any x86 system) which can be
> considered similarly "critical."

Could you please describe a type of failure of an unmirrored log device in
zpool < 19 which does not result in the pool being faulted and unable to
import?  I don't know of any.

If you have a zpool < 19 that is faulted due to a failed unmirrored log
device, could you describe how it's possible to recover that pool?  I know I
tried and couldn't do it, but then again, it was only a test pool, and I only
dedicated an hour of labor to trying.
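
For anyone who wants to repeat the experiment on a scratch pool, the attempts
pretty much boil down to the sketch below (pool and device names are
placeholders; the -m option for importing with a missing log device only
exists on newer builds, alongside pool version 19 and log device removal, so
it's no help on the releases we're talking about):

    # Does the pool show up as importable at all, and in what state?
    zpool import

    # The straightforward attempts.  With the unmirrored log device gone
    # on zpool < 19, this is exactly what fails: the pool stays faulted.
    zpool import tank
    zpool import -f tank

    # Newer builds grew an option to import with a missing log device,
    # but it arrived together with pool version 19 (log device removal).
    zpool import -m tank

As far as I understand it, the "tedious, manual procedures" Richard refers to
involve fabricating a stand-in log device carrying the original device's GUID
so the import code is satisfied, which is not something I'd want to depend on
for production data.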


> Finally, you have choices -- you can use an HBA with nonvolatile write
> cache and avoid the need for separate log device.

The HBA with nonvolatile cache gains a lot over just plain disks: by my
measurement, 2x-3x faster for sync writes, but no improvement for async
writes or any reads.

But it's not as effective as using a dedicated SSD as the log device.

By my measurement, using an SSD as the log device (with all the HBA write
cache disabled) was about 3x-4x faster than plain disks for sync writes, but
no different for async writes or any reads.
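
If anyone wants to run a similar comparison on their own hardware, something
along these lines works with iozone, assuming it's installed (the -o flag
requests O_SYNC writes, which is what exercises the ZIL and the log device;
file sizes, record sizes, and paths below are arbitrary):

    # async writes: no O_SYNC, so the ZIL / log device is not involved
    iozone -i 0 -s 2g -r 128k -f /tank/async-test

    # sync writes: every write is O_SYNC, which is where the HBA cache
    # or the SSD log device earns its keep
    iozone -i 0 -o -s 2g -r 128k -f /tank/sync-test

Any other load that opens files O_SYNC or calls fsync on every write will
show the same split.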

I agree with you: HBA nonvolatile write cache is an option.  It's cheaper
than buying an SSD, and it doesn't consume a slot.  Better than nothing.  It
depends on what your design requirements are, and how much you care about
sync write performance.
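
If the design does call for the SSD, adding it is a one-liner.  Given the
zpool < 19 caveat that started this thread (no log device removal, and an
unmirrored log failure can take the pool with it), I'd add it as a mirror.
Device names below are just placeholders:

    # add a mirrored pair of SSDs as the dedicated intent log
    zpool add tank log mirror c4t0d0 c4t1d0

    # the pool layout should now show a separate "logs" section
    zpool status tank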

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
