> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
> Sent: Saturday, July 09, 2011 3:44 PM
>
> > > Could you test with some SSD SLOGs and see how well or bad the
> > > system performs?
> >
> > These are all async writes, so slog won't be used. Async writes that
> > have a single fflush() and fsync() at the end to ensure system
> > buffering is not skewing the results.
>
> Sorry, my bad, I meant L2ARC to help buffer the DDT
Oh - it just so happens I don't have one available, but that doesn't mean I can't talk about it. ;-)

For most of these tests, all the data resides in the ARC, period. The L2ARC would only have an effect beyond that region... When I'm pushing the limits of the ARC, there may be some benefit from an L2ARC.

So... it is distinctly possible the L2ARC might help soften the "brick wall." When reaching arc_meta_limit, some of the metadata might have been pushed out to the L2ARC, leaving a (slightly) smaller footprint in the ARC... I doubt it, but maybe there could be some gain there.

It is also possible the L2ARC might help test #2 approach the performance of test #3 (test #2 had primarycache=all and suffered roughly 10x write performance degradation, while test #3 had primarycache=metadata and suffered roughly 6x). But there's positively no way the L2ARC would come into play in test #3: in that situation all the metadata, the complete DDT, resides in RAM. So with or without a cache device, the best case we're currently looking at is roughly 6x write performance degradation.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
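For anyone wanting to repeat the experiment with a cache device, a minimal sketch of the relevant commands follows. The pool name "tank" and device name "c1t2d0" are placeholders, not from the tests above:

```shell
# Attach an SSD as an L2ARC (cache) device to an existing pool.
# "tank" and c1t2d0 are hypothetical names; substitute your own.
zpool add tank cache c1t2d0

# Cache only metadata (which includes the DDT) rather than all data,
# as in test #3 above:
zfs set primarycache=metadata tank

# Watch ARC metadata usage approach arc_meta_limit via kstat:
kstat -p zfs:0:arcstats | grep arc_meta
```

Note that `primarycache` controls the ARC; the separate `secondarycache` property controls what is eligible for the L2ARC, so both may need adjusting depending on what you want cached where.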