[ https://issues.apache.org/jira/browse/HBASE-29667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wellington Chevreuil resolved HBASE-29667.
------------------------------------------
    Resolution: Fixed

Merged it to master, branch-3, branch-2 and branch-2.6. Thanks for the 
contribution, [~huginn]!

> The block priority is initialized as MULTI when the data block is first 
> written into the BucketCache
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-29667
>                 URL: https://issues.apache.org/jira/browse/HBASE-29667
>             Project: HBase
>          Issue Type: Bug
>          Components: BucketCache
>    Affects Versions: 3.0.0-beta-1, 2.7.0, 2.6.4
>            Reporter: huginn
>            Assignee: huginn
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.0.0, 2.7.0, 2.6.5
>
>
> When a data block is first written into the BucketCache, the BucketCache 
> allocates a bucket for it and creates the corresponding BucketEntry, which 
> is later placed into the backingMap. I noticed that BucketEntry sets the 
> block priority to MULTI during initialization. Isn't this a bug that causes 
> the BucketCache to contain only blocks with priorities MULTI and MEMORY, 
> thereby conflating single-access and multi-access blocks? After all, the 
> BucketCache has logic to upgrade a block from SINGLE to MULTI when it is 
> accessed again (see the sketch after the snippets below), as well as 
> different eviction logic for blocks of different priorities.
> {code:java}
> public BucketEntry writeToCache(final IOEngine ioEngine, final BucketAllocator alloc,
>     final LongAdder realCacheSize, Function<BucketEntry, Recycler> createRecycler,
>     ByteBuffer metaBuff, final Long acceptableSize) throws IOException {
>   int len = data.getSerializedLength();
>   if (len == 0) {
>     return null;
>   }
>   if (isCachePersistent && data instanceof HFileBlock) {
>     len += Long.BYTES;
>   }
>   long offset = alloc.allocateBlock(len);
>   if (isPrefetch() && alloc.getUsedSize() > acceptableSize) {
>     alloc.freeBlock(offset, len);
>     return null;
>   }
>   boolean succ = false;
>   BucketEntry bucketEntry = null;
>   try {
>     int diskSizeWithHeader = (data instanceof HFileBlock)
>       ? ((HFileBlock) data).getOnDiskSizeWithHeader()
>       : data.getSerializedLength();
>     bucketEntry = new BucketEntry(offset, len, diskSizeWithHeader, accessCounter, inMemory,
>       createRecycler, getByteBuffAllocator());
>     bucketEntry.setDeserializerReference(data.getDeserializer());
> ...
> }
> {code}
> {code:java}
>   BucketEntry(long offset, int length, int onDiskSizeWithHeader, long accessCounter,
>     long cachedTime, boolean inMemory, Function<BucketEntry, Recycler> createRecycler,
>     ByteBuffAllocator allocator) {
>     if (createRecycler == null) {
>       throw new IllegalArgumentException("createRecycler could not be null!");
>     }
>     setOffset(offset);
>     this.length = length;
>     this.onDiskSizeWithHeader = onDiskSizeWithHeader;
>     this.accessCounter = accessCounter;
>     this.cachedTime = cachedTime;
>     // new entries that are not in-memory start as MULTI rather than SINGLE
>     this.priority = inMemory ? BlockPriority.MEMORY : BlockPriority.MULTI;
>     this.refCnt = RefCnt.create(createRecycler.apply(this));
>     this.markedAsEvicted = new AtomicBoolean(false);
>     this.allocator = allocator;
>   }
> {code}
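> For reference, the SINGLE-to-MULTI upgrade mentioned above lives in 
> BucketEntry; the sketch below is my reading of the current code, so the 
> exact shape may differ slightly across branches:
> {code:java}
> /**
>  * Called on a cache hit: refresh the access counter and, if this is a
>  * repeat access, promote the block from SINGLE to MULTI.
>  */
> long access(long accessCounter) {
>   this.accessCounter = accessCounter;
>   if (this.priority == BlockPriority.SINGLE) {
>     this.priority = BlockPriority.MULTI;
>   }
>   return this.accessCounter;
> }
> {code}
> Because every non-in-memory entry already starts as MULTI, this promotion 
> is a no-op and eviction can no longer prefer single-access blocks. The fix 
> would presumably be a one-line change in the constructor (sketch only; see 
> the merged pull request for the actual change):
> {code:java}
> // presumed fix: non-in-memory blocks should start life as SINGLE
> this.priority = inMemory ? BlockPriority.MEMORY : BlockPriority.SINGLE;
> {code}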



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
