Thanks Anoop/Stack.
With L1+L2, there is an overhead during getBlocks when the block needs to be
cached in L1. Since the evictions are done in a separate thread when
thresholds are reached, the overhead during evictions can be discounted.
Based on a quick test, the overhead looks to be in the 100s of nanoseconds.
Yes, I would also say the current way is better. Especially after the
off-heap read path improvements, we are at almost the same performance from
the bucket cache compared to the LRU cache. So it would be better to work
with a small Java heap and a small L1 cache size. (This is not strictly L1
vs L2; still, I call it that way.)
Some more info, when COMBINED=false, this is what happens:
// L1 and L2 are not 'combined'. They are connected via the LruBlockCache
// victimhandler mechanism. It is a little ugly but works according to the
// following: when the background eviction thread runs, blocks evicted from
// L1 will go to L2.
Currently BUCKET_CACHE_COMBINED_KEY is set to "true" by default [1], which
makes the L2 cache not strictly an L2 cache. From the usability perspective,
it is better to set BUCKET_CACHE_COMBINED_KEY to "false" so that the L2
cache would behave strictly as L2 and also use the L1 cache to store data
blocks, improving performance.