mattba...@gmail.com said:
> We're looking at buying some additional SSDs for L2ARC (as well as
> additional RAM to support the increased L2ARC size) and I'm wondering if we
> NEED to plan for them to be large enough to hold the entire file or if ZFS
> can cache the most heavily used parts of a single file.
> 
> After watching arcstat (Mike Harsch's updated version) and arc_summary, I'm
> still not sure what to make of it. It's rare that the L2ARC (14GB) hits
> double digits in %hit, whereas the ARC (3GB) is frequently >80% hit.

I'm not sure of the answer to your initial question (file-based vs.
block-based caching),
but I may have an explanation for the stats you're seeing.  We have a system
here with 96GB of RAM and also the Sun F20 flash accelerator card (96GB),
most of which is used for L2ARC.

Note that data is not written into the L2ARC until it is evicted from the
ARC (e.g. when something newer or more frequently used needs ARC space).
So, my interpretation of the high hit rates on the in-RAM ARC, and low hit
rates on the L2ARC, is that the working set of data fits mostly in RAM,
and the system seldom needs to go to the L2ARC for more.
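One quick way to sanity-check that interpretation is to compute the two hit
ratios directly from the arcstats kstat, which is the same source arcstat and
arc_summary read. A rough, untested sketch in Python follows; the counter
names (hits, misses, l2_hits, l2_misses) are my assumption from memory, so
verify them against your own `kstat -p zfs:0:arcstats` output before relying
on it:

    import subprocess

    # Minimal sketch: parse `kstat -p zfs:0:arcstats` and report ARC and
    # L2ARC hit ratios. Counter names assumed, not verified here.
    def arcstats():
        out = subprocess.check_output(["kstat", "-p", "zfs:0:arcstats"]).decode()
        stats = {}
        for line in out.splitlines():
            key, value = line.split(None, 1)   # "zfs:0:arcstats:hits  <value>"
            stats[key.split(":")[-1]] = value.strip()
        return stats

    s = arcstats()
    arc_hits, arc_misses = int(s["hits"]), int(s["misses"])
    l2_hits, l2_misses = int(s["l2_hits"]), int(s["l2_misses"])

    print("ARC   hit%%: %.1f" % (100.0 * arc_hits / (arc_hits + arc_misses)))
    # The L2ARC is only consulted on ARC misses, so its hit ratio is relative
    # to those misses, not to total accesses.
    print("L2ARC hit%%: %.1f" % (100.0 * l2_hits / (l2_hits + l2_misses)))

Keep in mind the L2ARC ratio is measured against ARC misses only, so a low
L2ARC hit percentage alongside a high ARC hit percentage usually just means
the ARC is already absorbing most of the working set.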

Regards,

Marion

