> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>
> My Google is very strong and I have the Best Practices Guide committed
> to bookmark as well as most of it to memory.
>
> While it explains how to implement these, there is no information
> regarding failure of a device in a striped L2ARC set of SSD's. I have
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Separate_Cache_Devices

It is not possible to mirror or use raidz on cache devices, nor is it necessary. If a cache device fails, the data will simply be read from the main pool storage devices instead.

I guess I didn't write this part, but: if you have multiple cache devices, they are all independent of each other. Failure of one does not negate the functionality of the others.

> I'm running 2009.11 which is the latest OpenSolaris.

Quoi?? 2009.06 is the latest available from opensolaris.com and opensolaris.org. If you want something newer, AFAIK, you have to go to a developer build, such as osol-dev-134. Sure you didn't accidentally get 2008.11?

> I am also well aware of the effect that losing a ZIL device will cause
> loss of the entire pool. Which is why I would never have a ZIL device
> unless it was mirrored and on different controllers.

Um ... the log device is not special. If you lose *any* unmirrored device, you lose the pool. Except for cache devices, or log devices on zpool version >= 19.

> From the information I've been reading about the loss of a ZIL device,
> it will be relocated to the storage pool it is assigned to. I'm not
> sure which version this is in, but it would be nice if someone could
> provide the release number it is included in (and actually works).

What the heck? Didn't I just answer that question? I know I said this is answered in the ZFS Best Practices Guide.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Separate_Log_Devices

Prior to pool version 19, if you have an unmirrored log device that fails, your whole pool is permanently lost. Prior to pool version 19, mirroring the log device is highly recommended.
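For concreteness, the two configurations discussed above look like this on the command line. This is just a sketch: the pool name "tank" and the cNtNdN device names are placeholders for whatever your system actually has.

```shell
# Cache (L2ARC) devices are added individually -- they can't be mirrored
# or put in a raidz. Each is independent; if one fails, reads for its
# contents simply fall back to the main pool devices.
zpool add tank cache c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Log (ZIL) device: on pool versions < 19, mirror it across controllers,
# since losing an unmirrored log device there loses the whole pool.
zpool add tank log mirror c3t0d0 c4t0d0

# Check which pool version you're running (19+ survives log failure):
zpool get version tank
zpool upgrade -v
```

On pool version 19 or later you can also remove a log device gracefully with "zpool remove", which is exactly the behavior the pool falls back to if the device dies.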
In pool version 19 or greater, if an unmirrored log device fails during operation, the system reverts to the default behavior, using blocks from the main storage pool for the ZIL, just as if the log device had been gracefully removed via the "zpool remove" command.

> Also, will this functionality be included in the
> mythical 2010.03 release?

Zpool version 19 was released in build 125, on Oct 16, 2009. You can rest assured it will be included in 2010.03, or .04, or whenever that thing comes out.

> So what you are saying is that if a single device fails in a striped
> L2ARC VDEV, then the entire VDEV is taken offline and the fallback is
> to simply use the regular ARC and fetch from the pool whenever there is
> a cache miss.

It sounds like you're only going to believe it if you test it. Go for it. That's what I did before I wrote that section of the ZFS Best Practices Guide.

In ZFS, there is no such thing as striping, although the term is commonly used. Adding multiple devices gives you all the benefit of striping plus all the benefit of concatenation, but colloquially people think concatenation is weird or unused, so they naturally gravitated to calling it a stripe in ZFS too, even though that's not technically correct by the traditional RAID definition. Nobody bothered to coin a new term ("stripecat" or whatever) for ZFS.

> Or, does what you are saying here mean that if I have 4 SSD's in a
> stripe for my L2ARC, and one device fails, the L2ARC will be
> reconfigured dynamically using the remaining SSD's for L2ARC.

No reconfiguration necessary, because it's not a stripe. It's 4 separate devices, which ZFS can use simultaneously if it wants to.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss