Viraj Jasani created HBASE-26018:
------------------------------------

             Summary: Perf improvement in L1 cache
                 Key: HBASE-26018
                 URL: https://issues.apache.org/jira/browse/HBASE-26018
             Project: HBase
          Issue Type: Improvement
    Affects Versions: 2.4.4, 2.3.5, 3.0.0-alpha-1
            Reporter: Viraj Jasani
            Assignee: Viraj Jasani
             Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5


After HBASE-25698 is in, all L1 caching strategies perform buffer.retain() in 
order to maintain the refCount atomically while retrieving cached blocks 
(CHM#computeIfPresent). Retaining the refCount is turning out to be somewhat 
expensive. With the computeIfPresent API, CHM takes a coarse-grained segment 
lock, and even though our computation is trivial (we only call the block's 
retain API), it blocks other update APIs for the same key. computeIfPresent 
also keeps showing up on flame graphs (one is attached). Especially when we 
see aggressive cache hits for meta blocks (with the majority of blocks in 
cache), we would like to move away from coarse-grained locking.
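To illustrate the current read path, here is a minimal sketch (with a hypothetical simplified Block class, not the HBase one) of retaining the refCount inside computeIfPresent: CHM runs the remapping function while holding the lock for the key's bin, so concurrent updates to that key wait until retain() returns.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RetainUnderLock {
    // Simplified stand-in for a cached block with a reference count.
    static class Block {
        final AtomicInteger refCount = new AtomicInteger(1);
        Block retain() { refCount.incrementAndGet(); return this; }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Block> cache = new ConcurrentHashMap<>();
        cache.put("meta", new Block());
        // Read path: refCount is bumped atomically, but only because CHM
        // holds the bin lock for "meta" while the lambda runs, blocking
        // other updates to the same key for the duration.
        Block b = cache.computeIfPresent("meta", (k, v) -> v.retain());
        System.out.println(b.refCount.get()); // 2
    }
}
```

The atomicity here comes entirely from CHM's internal locking, which is the cost the issue wants to avoid.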

One of the suggestions that came up while reviewing PR#3215 is to treat the 
cache read API as an optimistic read: handle the retain-related refCount 
issues by catching the corresponding exception and treating it as a cache 
miss. This should allow us to move to a lockless get API.
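The optimistic-read idea above could look roughly like the following sketch. The Block class, its CAS-based retain(), and the exception name are simplified stand-ins (HBase actually relies on Netty-style reference counting), not the real implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticGet {
    // Stand-in for the exception thrown when retaining a released buffer.
    static class IllegalReferenceCountException extends RuntimeException {}

    static class Block {
        final AtomicInteger refCount = new AtomicInteger(1);
        Block retain() {
            // CAS loop: fail if the block was already fully released.
            int c;
            do {
                c = refCount.get();
                if (c == 0) throw new IllegalReferenceCountException();
            } while (!refCount.compareAndSet(c, c + 1));
            return this;
        }
        void release() { refCount.decrementAndGet(); }
    }

    static Block getBlock(ConcurrentHashMap<String, Block> cache, String key) {
        Block b = cache.get(key);   // lockless read, no bin lock held
        if (b == null) return null; // ordinary miss
        try {
            return b.retain();      // atomic via CAS, not via CHM locking
        } catch (IllegalReferenceCountException e) {
            return null;            // raced with eviction: treat as a miss
        }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, Block> cache = new ConcurrentHashMap<>();
        cache.put("meta", new Block());
        System.out.println(getBlock(cache, "meta") != null); // true: normal hit
        cache.get("meta").release();                         // refCount 2 -> 1
        cache.get("meta").release();                         // refCount 1 -> 0, "evicted"
        System.out.println(getBlock(cache, "meta") == null); // true: miss, not a crash
    }
}
```

The trade-off is that a reader racing with eviction occasionally sees a spurious miss and re-reads the block from disk, in exchange for the hot read path never taking a CHM lock.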



--
This message was sent by Atlassian Jira
(v8.3.4#803005)