I would advise getting familiar with the basic terminology and vocabulary of
ZFS first. Start with the Solaris 10 ZFS Administration Guide; it's a bit
more complete for a newbie:
http://docs.sun.com/app/docs/doc/819-5461?l=en

You can then move on to the Best Practices Guide, Configuration Guide,
Troubleshooting Guide and Evil Tuning Guide on solarisinternals.com:

http://www.solarisinternals.com//wiki/index.php?title=Category:ZFS

All of the features in ZFS on Solaris 10 appear in OpenSolaris; the inverse
does not necessarily hold true, since active development occurs on the
OpenSolaris trunk and updates take about a year to filter back down into
Solaris 10 due to integration concerns, testing, etc.

A Separate Log (SLOG) device can be used for the ZIL, but they are not the
same thing. The ZIL always exists; if you have not defined a SLOG device, it
is kept within the pool itself.

The zpool.cache file does not reside in the pool. It lives in /etc/zfs in
the root file system of your OpenSolaris system. Thus it does not reside "on
the ZIL device" either, since there may not be a SLOG (what you term a "ZIL
device") at all. (There is always a ZIL, though; see the remarks above.)

Hopefully that clears up some of the misconceptions and misunderstandings.
Cheers!

On Mon, Apr 19, 2010 at 06:52, Michael DeMan <sola...@deman.com> wrote:
> Also, pardon my typos, and my lack of re-titling my subject to note that
> it is a fork from the original topic. Corrections in text that I noticed
> after finally sorting out getting on the mailing list are below...
>
> On Apr 19, 2010, at 3:26 AM, Michael DeMan wrote:
>
> > By the way,
> >
> > I would like to chip in about how informative this thread has been, at
> > least for me, despite (and actually because of) the strong opinions on
> > some of the posts about the issues involved.
> >
> > From what I gather, there is still an interesting failure possibility
> > with ZFS, although probably rare. In the case where a zil (aka slog)
> > device fails, AND the zpool.cache information is not available,
> > basically folks are toast?
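Not entirely toast, though an unmirrored SLOG is certainly a risk. You can
mirror the log device from the outset, and builds with pool version 19 or
later can import a pool whose log device has gone missing. Something along
these lines, where the pool and device names are purely illustrative, and
you should verify your build actually supports "zpool import -m" before
relying on it:

```
# zpool add tank log mirror c4t0d0 c4t1d0   (mirrored SLOG from the start)
# zpool import -m tank                      (import despite a missing log device)
```

With a mirrored SLOG, losing a single log device degrades the mirror rather
than jeopardizing the pool.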
> >
> > In addition, the zpool.cache itself exhibits the following behaviors
> > (and I could be totally wrong, this is why I ask):
> >
> > A. It is not written to frequently, i.e., it is not a performance
> > impact unless new zfs file systems (pardon me if I have the incorrect
> > terminology) are not being fabricated and supplied to the underlying
> > operating system.
>
> The above 'are not being fabricated' should be 'are regularly being
> fabricated'
>
> > B. The current implementation stores that cache file on the zil device,
> > so if for some reason, that device is totally lost (along with said
> > .cache file), it is nigh impossible to recover the entire pool it
> > correlates with.
>
> The above, 'on the zil device', should say 'on the fundamental zfs file
> system itself, or a zil device if one is provisioned'
>
> >
> > possible solutions:
> >
> > 1. Why not have an option to mirror that darn cache file (like to the
> > root file system of the boot device at least as an initial
> > implementation) no matter what intent log devices are present?
> > Presuming that most folks at least want enough redundancy that their
> > machine will boot, and if it boots - then they have a shot at recovery
> > of the balance of the associated (zfs) directly attached storage, and
> > with my other presumptions above, there is little reason not to offer
> > a feature like this?
>
> Missing final sentence: The vast amount of problems with computer and
> network reliability is typically related to human error. The more '9s'
> that can be intrinsically provided by the systems themselves helps
> mitigate this.
>
> >
> > Respectfully,
> > - mike
> >
> >
> > On Apr 18, 2010, at 10:10 PM, Richard Elling wrote:
> >
> >> On Apr 18, 2010, at 7:02 PM, Don wrote:
> >>
> >>> If you have a pair of heads talking to shared disks with ZFS - what
> >>> can you do to ensure the second head always has a current copy of
> >>> the zpool.cache file?
> >>
> >> By definition, the zpool.cache file is always up to date.
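In the meantime, nothing stops you from copying the cache file somewhere
safe yourself, which gets you most of what suggestion 1 above asks for.
A minimal sketch you could run from cron; the destination path is purely
illustrative, and /etc/zfs/zpool.cache is where OpenSolaris keeps the file:

```shell
#!/bin/sh
# Copy zpool.cache to a location that survives loss of the pool
# and its root file system (another disk, a USB stick, an NFS share).
backup_cache() {
    src="$1"    # normally /etc/zfs/zpool.cache
    dst="$2"    # e.g. a path on a different disk
    [ -f "$src" ] && cp -p "$src" "$dst"
}

# Example invocation (run as root, paths illustrative):
# backup_cache /etc/zfs/zpool.cache /var/backups/zpool.cache.copy
```

It is crude compared to a real mirrored-cache feature, but it means a dead
SLOG plus a dead boot disk no longer costs you the pool configuration.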
> >>
> >>> I'd prefer not to lose the ZIL, fail over, and then suddenly find
> >>> out I can't import the pool on my second head.
> >>
> >> I'd rather not have multiple failures, either. But the information
> >> needed in the zpool.cache file for reconstructing a missing (as in
> >> destroyed) top-level vdev is easily recovered from a backup or
> >> snapshot.
> >> -- richard
> >>
> >> ZFS storage and performance consulting at http://www.RichardElling.com
> >> ZFS training on deduplication, NexentaStor, and NAS performance
> >> Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
> >>
> >> _______________________________________________
> >> zfs-discuss mailing list
> >> zfs-discuss@opensolaris.org
> >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
"You can choose your friends, you can choose the deals." - Equity Private
"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva