[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14627654#comment-14627654
 ] 

ramkrishna.s.vasudevan commented on HBASE-12213:
------------------------------------------------

Consolidating the changes done in this patch as per the discussions/comments 
over in RB
-> This patch now allows the read path to work with ByteBuff, a new abstract 
class we added (since we cannot subclass ByteBuffer).  The name ByteBuff was 
selected to avoid a conflict with netty's ByteBuf and because ByteBuffer is 
already taken by nio.
-> This abstract class has a SingleByteBuff impl and a MultiByteBuff impl.  
Blocks coming out of HDFS and the L1 cache will always be a SingleByteBuff, 
which simply wraps the incoming BB from HDFS or the L1 cache.  (A sketch of 
this abstraction follows after this list.)
-> In the case of BucketCache, we will create a MultiByteBuff (backed by an 
array of BBs) and the read path will work on it through the APIs in the 
ByteBuff abstraction.  For now, even from the BucketCache we copy the buckets 
to a single onheap BB; this can change only after HBASE-12295 goes in.  Once 
HBASE-12295 is in, we will not copy the buckets and will instead serve them 
directly via the ByteBuff APIs, thus ensuring that an offheap bucket cache 
serves reads from offheap.  (See the multi-buffer sketch after this list.)
-> After this change and HBASE-12295 go in, we need to ensure that we use the 
BufferBacked cells in the read path for both the non-DBE and the DBE case.
-> There are some changes in HFileReaderImpl#blockSeek that use the ByteBuff 
APIs in a more optimized, performance-oriented way, such as getIntStrictlyFwd() 
and getLongStrictlyFwd() (the naming of these APIs is under discussion, and we 
are also considering passing a delta position from the current position).  The 
point is that these APIs use the position-based BBUtils Unsafe access to the 
ByteBuffers, bypassing the bookkeeping that ByteBuffer does in its read APIs.  
(A sketch of this idea follows below.)
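
A minimal sketch of the abstraction described above, assuming illustrative 
method names (get, getInt, position, etc.) rather than the exact API in the 
patch:

import java.nio.ByteBuffer;

// Abstract buffer facade the read path codes against, so the same read code
// can run over one ByteBuffer or over several (names here are illustrative).
abstract class ByteBuff {
  public abstract int position();
  public abstract ByteBuff position(int pos);
  public abstract int limit();
  public abstract byte get();           // relative read
  public abstract byte get(int index);  // absolute read
  public abstract int getInt();
  public abstract long getLong();
}

// Single-buffer case: blocks coming straight from HDFS or the L1 cache are one
// contiguous ByteBuffer, so this impl simply delegates to it.
class SingleByteBuff extends ByteBuff {
  private final ByteBuffer buf;

  SingleByteBuff(ByteBuffer buf) {
    this.buf = buf;
  }

  @Override public int position() { return buf.position(); }
  @Override public ByteBuff position(int pos) { buf.position(pos); return this; }
  @Override public int limit() { return buf.limit(); }
  @Override public byte get() { return buf.get(); }
  @Override public byte get(int index) { return buf.get(index); }
  @Override public int getInt() { return buf.getInt(); }
  @Override public long getLong() { return buf.getLong(); }
}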
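
For the multi-buffer case the interesting part is locating the right fragment 
and handling reads that straddle a fragment boundary.  The following is a 
simplified sketch of that idea, not the patch's actual MultiByteBuff code:

import java.nio.ByteBuffer;

// An HFile block cached in the BucketCache may span several ByteBuffer
// fragments; absolute reads must first find the fragment that owns the index.
class MultiByteBuffSketch {
  private final ByteBuffer[] items;   // underlying fragments
  private final int[] itemBeginPos;   // global offset at which each fragment starts

  MultiByteBuffSketch(ByteBuffer[] items) {
    this.items = items;
    this.itemBeginPos = new int[items.length + 1];
    int off = 0;
    for (int i = 0; i < items.length; i++) {
      itemBeginPos[i] = off;
      off += items[i].remaining();
    }
    itemBeginPos[items.length] = off;  // total capacity
  }

  /** Absolute single-byte read at a global index. */
  byte get(int globalIndex) {
    int item = findItem(globalIndex);
    ByteBuffer bb = items[item];
    return bb.get(bb.position() + (globalIndex - itemBeginPos[item]));
  }

  /** Absolute int read that may straddle a fragment boundary. */
  int getInt(int globalIndex) {
    int item = findItem(globalIndex);
    if (globalIndex + Integer.BYTES <= itemBeginPos[item + 1]) {
      // Fast path: all four bytes live in one fragment.
      ByteBuffer bb = items[item];
      return bb.getInt(bb.position() + (globalIndex - itemBeginPos[item]));
    }
    // Slow path: assemble the int byte by byte across fragments (big-endian,
    // matching ByteBuffer's default order).
    int result = 0;
    for (int i = 0; i < Integer.BYTES; i++) {
      result = (result << 8) | (get(globalIndex + i) & 0xFF);
    }
    return result;
  }

  private int findItem(int globalIndex) {
    for (int i = 0; i < items.length; i++) {
      if (globalIndex < itemBeginPos[i + 1]) {
        return i;
      }
    }
    throw new IndexOutOfBoundsException("index " + globalIndex);
  }
}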
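
And a rough illustration of what "bypassing the ByteBuffer's bookkeeping" 
means: given an absolute offset into a direct ByteBuffer, read the value 
straight from its native address.  The helper name getIntAt() and the 
reflection plumbing below are assumptions for the sketch (Java 8 era); the 
patch itself goes through the BBUtils-style Unsafe utilities, and the 
getIntStrictlyFwd()/getLongStrictlyFwd() names are still under discussion.

import java.lang.reflect.Field;
import java.nio.Buffer;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import sun.misc.Unsafe;

// Reads an int at an absolute offset of a direct ByteBuffer via Unsafe,
// skipping the position/bounds bookkeeping of ByteBuffer.getInt().
final class UnsafePositionalRead {
  private static final Unsafe UNSAFE = loadUnsafe();

  private static Unsafe loadUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  /** Absolute int read from a direct ByteBuffer, returned in big-endian order. */
  static int getIntAt(ByteBuffer directBuf, int offset) {
    long addr = directAddress(directBuf) + offset;
    int value = UNSAFE.getInt(addr);              // reads in native byte order
    return ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN
        ? value
        : Integer.reverseBytes(value);            // normalize to big-endian like ByteBuffer
  }

  private static long directAddress(ByteBuffer buf) {
    try {
      Field addrField = Buffer.class.getDeclaredField("address");
      addrField.setAccessible(true);
      return addrField.getLong(buf);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException(e);
    }
  }
}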



> HFileBlock backed by Array of ByteBuffers
> -----------------------------------------
>
>                 Key: HBASE-12213
>                 URL: https://issues.apache.org/jira/browse/HBASE-12213
>             Project: HBase
>          Issue Type: Sub-task
>          Components: regionserver, Scanners
>            Reporter: Anoop Sam John
>            Assignee: ramkrishna.s.vasudevan
>         Attachments: HBASE-12213_1.patch, HBASE-12213_10_withBBI.patch, 
> HBASE-12213_11_withBBI.patch, HBASE-12213_12_withBBI.patch, 
> HBASE-12213_12_withBBI.patch, HBASE-12213_13_withBBI.patch, 
> HBASE-12213_2.patch, HBASE-12213_4.patch, HBASE-12213_8_withBBI.patch, 
> HBASE-12213_9_withBBI.patch, HBASE-12213_jmh.zip
>
>
> In the L2 cache (offheap) an HFile block might have been cached into multiple 
> chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
> a bigger BB and copying. Instead we can make HFileBlock serve data from an 
> array of BBs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
