[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16807754#comment-16807754 ]
Zheng Hu commented on HBASE-21937:
----------------------------------

Uploaded the initial patch v1. I found that some UTs were broken by this issue, so I raised the priority. FYI [~anoop.hbase], once HBASE-22127 gets merged, could you help review this patch? Thanks.

> Make the Compression#decompress can accept ByteBuff as input
> -------------------------------------------------------------
>
>                 Key: HBASE-21937
>                 URL: https://issues.apache.org/jira/browse/HBASE-21937
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>         Attachments: HBASE-21937.HBASE-21879.v1.patch
>
>
> When decompressing a compressed block, we currently also allocate a HeapByteBuffer for the unpacked block. We should instead allocate a ByteBuff from the global ByteBuffAllocator. Skimming the code, the key point is that we need a decompress interface that accepts a ByteBuff, not the following:
> {code}
> # Compression.java
> public static void decompress(byte[] dest, int destOffset,
>     InputStream bufferedBoundedStream, int compressedSize,
>     int uncompressedSize, Compression.Algorithm compressAlgo)
>     throws IOException {
>   //...
> }
> {code}
> This is not very high priority; let me first make the blocks without compression off-heap.
> In HBASE-22005, I ignored the following unit tests:
> 1. TestLoadAndSwitchEncodeOnDisk
> 2. TestHFileBlock#testPreviousOffset
> We need to resolve this issue so that those UTs pass again.
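For discussion, a rough sketch of what a ByteBuff-accepting decompress could look like. This is only an illustration of the idea, not the signature in the attached patch; the parameter list, buffer size, and copy loop are assumptions, and it only relies on the existing Compression.Algorithm decompression-stream API and ByteBuff#put:

{code}
// Hypothetical sketch only -- not taken from HBASE-21937.HBASE-21879.v1.patch.
// The caller passes a ByteBuff (possibly off-heap, e.g. obtained from the
// global ByteBuffAllocator) as the destination instead of a byte[].
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.nio.ByteBuff;
import org.apache.hadoop.io.compress.Decompressor;

public static void decompress(ByteBuff dest, InputStream bufferedBoundedStream,
    int uncompressedSize, Compression.Algorithm compressAlgo) throws IOException {
  Decompressor decompressor = null;
  try {
    decompressor = compressAlgo.getDecompressor();
    InputStream is =
        compressAlgo.createDecompressionStream(bufferedBoundedStream, decompressor, 0);
    // Drain the decompression stream into the supplied ByteBuff. With an
    // off-heap ByteBuff this avoids allocating a HeapByteBuffer for the
    // whole unpacked block; only a small on-heap staging buffer is used here.
    byte[] staging = new byte[4096];
    int remaining = uncompressedSize;
    while (remaining > 0) {
      int read = is.read(staging, 0, Math.min(staging.length, remaining));
      if (read < 0) {
        throw new IOException("Premature EOF, still expecting " + remaining + " bytes");
      }
      dest.put(staging, 0, read);
      remaining -= read;
    }
  } finally {
    if (decompressor != null) {
      compressAlgo.returnDecompressor(decompressor);
    }
  }
}
{code}

With such an interface, the block-unpacking path could hand in a ByteBuff allocated from ByteBuffAllocator, so the decompressed block can stay off-heap instead of being forced onto the heap by the byte[]-based signature above.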