On 18 Apr 2010, at 06:43, Richard Elling wrote:

> On Apr 17, 2010, at 11:51 AM, Edward Ned Harvey wrote:
> 
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Dave Vrona
>>> 
>>> 1) Mirroring.  Leaving cost out of it, should ZIL and/or L2ARC SSDs be
>>> mirrored ?
>> 
>> IMHO, the best answer to this question is the one from the ZFS Best
>> Practices guide.  (I wrote part of it.)
>> In short:
>> 
>> You have no need to mirror your L2ARC cache device, and it's impossible even
>> if you want to for some bizarre reason.
>> 
>> For zpool < 19, which includes all present releases of Solaris 10 and
>> Opensolaris 2009.06, it is critical to mirror your ZIL log device.  A failed
>> unmirrored log device would be the permanent death of the pool.
> 
> I do not believe this is a true statement. In large part it will depend on
> the nature of the failure -- all failures are not created equal. It has also
> been shown that such pools are recoverable, albeit with tedious, manual 
> procedures required.  Rather than saying this is a "critical" issue, I could
> say it is "preferred."  Indeed, there are *many* SPOFs in the typical system
> (any x86 system) which can be considered similarly "critical."

Yes, there are. The thing is that, in a common situation, the most
valuable asset is the data itself -- often more valuable than
0.999[0-9]* uptime, and certainly more than the machine itself.

If so, you want very good redundancy on your data, and don't care
much about (live) redundancy on the machine. You just take the
disks and slam them into another machine -- physically, by means of
FC or SAS, virtually, or whatever. (You may want to have a spare
machine on standby to save time, though.)

It is often not very expensive to get quite a bit of redundancy in
your data; running parallel systems is often both much more
complicated and much more expensive.

That the data could possibly be recovered through tedious
procedures, with experts doing it by hand, is not a good enough
crash-recovery plan for many of us. In a crash situation you want
your data to be there and be safe; you just have to figure out how
to access it, and you are probably interested in making that happen
as quickly as possible. Hopefully you have planned the procedure
already. That said, it is good that the manual option is there if
you get into deep trouble.

At least this is our reasoning when we set up our server machines...
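For reference, mirroring a log device (or adding an unmirrored cache
device) is a one-liner with zpool; the pool name "tank" and the
device names below are illustrative, not from any real setup:

```shell
# Attach a second device to an existing, unmirrored log device,
# turning it into a mirror (c1t2d0 is the current log device):
zpool attach tank c1t2d0 c1t3d0

# Or create the pool with a mirrored log from the start:
zpool create tank mirror c1t0d0 c1t1d0 log mirror c1t2d0 c1t3d0

# L2ARC cache devices, by contrast, are always added plain --
# they cannot be mirrored, and losing one only costs cached reads:
zpool add tank cache c1t4d0
```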

> Finally, you have choices -- you can use an HBA with nonvolatile write
> cache and avoid the need for separate log device.

Except that then the HBA is a non-redundant component, a SPOF, where
you store your data, and a place where you could lose data. As long
as you know that and know that you can accept it, everything is fine.

Again, it all depends on the application, I guess, and giving general
advice is nearly impossible.

/ragge

>  -- richard
> 
> ZFS storage and performance consulting at http://www.RichardElling.com
> ZFS training on deduplication, NexentaStor, and NAS performance
> Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
