[ 
https://issues.apache.org/jira/browse/ACCUMULO-4626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997203#comment-15997203
 ] 

Josh Elser commented on ACCUMULO-4626:
--------------------------------------

bq. In either case, I'm wondering if we should be looking at off-heap caching 
in general.

Given the practical limits on JVM heap sizes and the ever-larger amounts of 
memory that are the norm on new hardware, it's not a bad idea.

HBase's hybrid on-heap + off-heap approach to block caching has worked out 
pretty well (there are some good benchmarks from Nick Dimiduk, albeit a few 
years old by now). They keep an "L1" block cache on heap (LruBlockCache, 
TinyLFU) and use BucketCache (with the file backend) as an "L2" to soak up 
the extra memory on the machine that would otherwise go unused. It's a nice 
approach: the really hot blocks stay on the JVM heap, while the rest of the 
available memory still gets used without incurring the pain of very large 
JVM heaps.
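For reference, a minimal sketch of that two-tier idea (the class and method 
names here are made up for illustration, not HBase's actual BucketCache API): 
a small strongly-referenced on-heap LRU acts as L1, blocks evicted from it are 
demoted into off-heap direct ByteBuffers as L2, and an L2 hit promotes the 
block back on heap.

{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative two-tier block cache: hot blocks on heap (L1), colder blocks
// parked off heap in direct ByteBuffers (L2) instead of being dropped.
public class TieredBlockCache {

  // L1: on-heap, access-ordered LinkedHashMap acting as a simple LRU.
  private final LinkedHashMap<String,byte[]> l1;

  // L2: off-heap copies of blocks demoted from L1.
  private final Map<String,ByteBuffer> l2 = new HashMap<>();

  public TieredBlockCache(final int l1MaxBlocks) {
    this.l1 = new LinkedHashMap<String,byte[]>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String,byte[]> eldest) {
        if (size() > l1MaxBlocks) {
          // Demote instead of dropping: copy the block into a direct buffer.
          ByteBuffer off = ByteBuffer.allocateDirect(eldest.getValue().length);
          off.put(eldest.getValue()).flip();
          l2.put(eldest.getKey(), off);
          return true;
        }
        return false;
      }
    };
  }

  public synchronized void cacheBlock(String blockId, byte[] block) {
    l1.put(blockId, block);
  }

  public synchronized byte[] getBlock(String blockId) {
    byte[] onHeap = l1.get(blockId);
    if (onHeap != null) {
      return onHeap;                     // L1 hit
    }
    ByteBuffer off = l2.remove(blockId);
    if (off != null) {                   // L2 hit: promote back to L1
      byte[] copy = new byte[off.remaining()];
      off.duplicate().get(copy);
      l1.put(blockId, copy);
      return copy;
    }
    return null;                         // miss: caller reads the RFile block
  }
}
{code}

A real L2 would be sized in the gigabytes and backed by a file or a slab 
allocator rather than one direct buffer per block, but the lookup/demote/
promote flow is the same.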

> improve cache hit rate via weak reference map
> ---------------------------------------------
>
>                 Key: ACCUMULO-4626
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-4626
>             Project: Accumulo
>          Issue Type: Improvement
>          Components: tserver
>            Reporter: Adam Fuchs
>              Labels: performance, stability
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> When a single iterator tree references the same RFile blocks in different 
> branches we sometimes get cache misses for one iterator even though the 
> requested block is held in memory by another iterator. This is particularly 
> important when using something like the IntersectingIterator to intersect 
> many deep copies. Instead of evicting completely, keeping evicted blocks in 
> a WeakReference value map can avoid re-reading blocks that are still 
> referenced by another deep-copied source iterator.
> We've seen this in the field for some of Sqrrl's queries against very large 
> tablets. The total memory usage for these queries can be equal to the size of 
> all the iterator block reads times the number of readahead threads times the 
> number of files times the number of IntersectingIterator children when cache 
> miss rates are high. This might work out to something like:
> {code}
> 16 readahead threads * 200 deep copied children * 99% cache miss rate * 20 files * 252KB per reader = ~16GB of memory
> {code}
> In most cases, evicting to a weak reference value map changes the cache miss 
> rate from very high to very low and has a dramatic effect on total memory 
> usage.
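
For what it's worth, a minimal sketch of the weak-reference victim map the 
issue describes (names made up for illustration; this isn't the actual change 
on this ticket): blocks evicted from the strongly-referenced LRU are 
remembered via WeakReferences, so a block still held by another deep-copied 
source iterator can be re-admitted instead of being re-read from the RFile.

{code}
import java.lang.ref.WeakReference;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative block cache that remembers evicted blocks through weak
// references. If some other iterator still strongly references an evicted
// block, the GC keeps it alive and we can recover it without another read.
public class WeakEvictionBlockCache {

  // Main cache: strong references, simple access-ordered LRU.
  private final LinkedHashMap<String,byte[]> cache;

  // Victim map: weak references to evicted blocks; entries vanish once no
  // iterator holds the block any more.
  private final Map<String,WeakReference<byte[]>> evicted = new ConcurrentHashMap<>();

  public WeakEvictionBlockCache(final int maxBlocks) {
    this.cache = new LinkedHashMap<String,byte[]>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String,byte[]> eldest) {
        if (size() > maxBlocks) {
          // Remember the evicted block weakly instead of forgetting it.
          evicted.put(eldest.getKey(), new WeakReference<>(eldest.getValue()));
          return true;
        }
        return false;
      }
    };
  }

  public synchronized void cacheBlock(String blockId, byte[] block) {
    cache.put(blockId, block);
  }

  public synchronized byte[] getBlock(String blockId) {
    byte[] block = cache.get(blockId);
    if (block != null) {
      return block;                      // normal cache hit
    }
    WeakReference<byte[]> ref = evicted.remove(blockId);
    if (ref != null) {
      block = ref.get();
      if (block != null) {               // still alive via another iterator
        cache.put(blockId, block);       // re-admit instead of re-reading
        return block;
      }
    }
    return null;                         // true miss: read the block again
  }
}
{code}

Once nothing else references an evicted block, the GC clears the 
WeakReference, so the victim map doesn't pin extra memory; it only rescues 
blocks that some deep-copied iterator was going to keep alive anyway.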



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
