Yes, I would also say the current way is better.  Especially after the
off-heap read-path improvements, we get almost the same performance
from the bucket cache as from the LRU cache.  So it is better to work
with a small Java heap and a small L1 cache size (this is not strictly
L1 vs. L2, but I still call it that), and we get a large off-heap L2
cache.  Data blocks do not move between L1 and L2 but go strictly into
and out of L2; only the index and bloom blocks go into L1.  I believe
we should make this the default for 2.0.
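[Editor's note: for reference, a minimal hbase-site.xml sketch of the setup
described above -- combined mode left at its default, a small on-heap L1, and
a large off-heap bucket cache for data blocks. The sizes below are
illustrative assumptions only, not tuning recommendations:]

```xml
<!-- Keep L1 (LruBlockCache) small: fraction of heap for index/bloom blocks.
     0.2 here is an illustrative value, not a recommendation. -->
<property>
  <name>hfile.block.cache.size</name>
  <value>0.2</value>
</property>

<!-- Put the bucket cache off heap so data blocks live outside the Java heap. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>

<!-- Off-heap bucket cache size in MB (illustrative). The off-heap allocation
     itself is sized via HBASE_OFFHEAPSIZE in hbase-env.sh. -->
<property>
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>

<!-- hbase.bucketcache.combinedcache.enabled defaults to true, which gives the
     behavior described above (data in L2, index/bloom in L1), so it does not
     need to be set explicitly. -->
```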

-Anoop-

On Fri, Aug 18, 2017 at 1:12 AM, Stack <st...@duboce.net> wrote:
> Some more info, when COMBINED=false, this is what happens:
>
> // L1 and L2 are not 'combined'. They are connected via the LruBlockCache
> victimhandler
> // mechanism. It is a little ugly but works according to the following:
> when the
> // background eviction thread runs, blocks evicted from L1 will go to L2
> AND when we get
> // a block from the L1 cache, if not in L1, we will search L2.
>
> For me, I'd be interested in seeing a perf comparison. IIRC, when NOT
> combined, data blocks coming up into L1 and then being 'victim handled' --
> evicted == copied -- out to L2 was costly.
>
> St.Ack
>
>
>
> On Thu, Aug 17, 2017 at 11:23 AM, Biju N <bijuatapa...@gmail.com> wrote:
>
>> Currently BUCKET_CACHE_COMBINED_KEY is set to "true" by default [1], which
>> makes the L2 cache not strictly an L2 cache. From a usability perspective, it
>> would be better to set BUCKET_CACHE_COMBINED_KEY to "false" so that the L2
>> cache behaves strictly as L2 and the L1 cache is also used to store data
>> blocks, improving memory use. Thoughts?
>>
>> Thanks,
>> Biju
>>
>> [1]
>> https://github.com/apache/hbase/blob/84d7318f86305f34102502a70d718223320590d5/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java#L112
>>
