[ https://issues.apache.org/jira/browse/HBASE-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385682#comment-14385682 ]

zhangduo commented on HBASE-13301:
----------------------------------

{quote}
In between, due to a context switch, t1 completed the caching, then evicted 
and cached the same block again. This seems like the rarest of rare cases.
{quote}
Agreed. But HBase is a long-running service; if we keep it running long 
enough, even low-probability events will eventually occur...
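
To make the interleaving concrete, here is a minimal standalone sketch of the 
race (not the real BucketCache code; the Entry class and the map below are 
simplified stand-ins for BucketEntry and backingMap, and equals() is plain 
identity in this sketch):

{code:title=EvictRaceSketch.java (simplified sketch)}
import java.util.concurrent.ConcurrentHashMap;

public class EvictRaceSketch {
  // Stand-in for BucketEntry; no equals() override, so comparison is identity.
  static class Entry {
    final long offset;
    Entry(long offset) { this.offset = offset; }
  }

  public static void main(String[] args) {
    ConcurrentHashMap<String, Entry> backingMap = new ConcurrentHashMap<>();
    String key = "hfile-block-0";

    Entry a = new Entry(0L);
    backingMap.put(key, a);

    // t2 enters evictBlock(key) and resolves the current entry...
    Entry seenByT2 = backingMap.get(key);   // t2 sees A

    // ...then t1 gets scheduled: it evicts A, frees A's space,
    // and caches the same block again as a new entry B.
    backingMap.remove(key);
    Entry b = new Entry(1024L);
    backingMap.put(key, b);

    // t2 resumes with its stale reference to A.
    Entry removed = backingMap.remove(key); // this removes B, not A
    if (!seenByT2.equals(removed)) {
      // evictBlock() returns false here, but B is already gone from the map
      // and nothing ever frees B's space in the allocator, so it is leaked.
      System.out.println("removed another thread's entry at offset " + removed.offset);
    }
  }
}
{code}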

Let me first revisit the whole read/write path in the regionserver that 
touches the BlockCache and lay out a clear locking scheme. Then it will be 
easier to say whether the situation in this testcase can actually happen.

Will come back later. Thanks.



> Possible memory leak in BucketCache
> -----------------------------------
>
>                 Key: HBASE-13301
>                 URL: https://issues.apache.org/jira/browse/HBASE-13301
>             Project: HBase
>          Issue Type: Bug
>          Components: BlockCache
>            Reporter: zhangduo
>            Assignee: zhangduo
>         Attachments: HBASE-13301-testcase.patch
>
>
> {code:title=BucketCache.java}
> public boolean evictBlock(BlockCacheKey cacheKey) {
>       ...
>       if (bucketEntry.equals(backingMap.remove(cacheKey))) {
>         bucketAllocator.freeBlock(bucketEntry.offset());
>         realCacheSize.addAndGet(-1 * bucketEntry.getLength());
>         blocksByHFile.remove(cacheKey.getHfileName(), cacheKey);
>         if (removedBlock == null) {
>           this.blockNumber.decrementAndGet();
>         }
>       } else {
>         return false;
>       }
>       ...
> {code}
> I think the problem is here. When the equals() check fails, we have removed 
> a BucketEntry that should not have been removed by us, but we neither put it 
> back nor do any other cleanup, so its space in the cache is leaked.
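
One possible direction (just a sketch, assuming backingMap is a 
ConcurrentHashMap and that entry comparison is by identity; not necessarily 
the fix that should land here): only drop the mapping if it still refers to 
the entry we resolved earlier, using the two-argument remove, so an entry 
re-cached by another thread is left alone:

{code:title=conditional removal (sketch only)}
// Remove the mapping only if it still points at the bucketEntry we are
// evicting; a concurrently re-cached entry stays in the map and is not leaked.
if (backingMap.remove(cacheKey, bucketEntry)) {
  bucketAllocator.freeBlock(bucketEntry.offset());
  realCacheSize.addAndGet(-1 * bucketEntry.getLength());
  blocksByHFile.remove(cacheKey.getHfileName(), cacheKey);
  if (removedBlock == null) {
    this.blockNumber.decrementAndGet();
  }
} else {
  return false;
}
{code}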



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
