[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874102#comment-16874102 ] Hudson commented on HBASE-21937:
Results for branch branch-2 [build #2029 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029/]: (x) *{color:red}-1 overall{color}*
details (if available):
(x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//General_Nightly_Build_Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop2)/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2029//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Make the Compression#decompress can accept ByteBuff as input
> ------------------------------------------------------------
>
>          Key: HBASE-21937
>          URL: https://issues.apache.org/jira/browse/HBASE-21937
>      Project: HBase
>   Issue Type: Sub-task
>     Reporter: Zheng Hu
>     Assignee: Zheng Hu
>     Priority: Major
>  Attachments: HBASE-21937.HBASE-21879.v1.patch, HBASE-21937.HBASE-21879.v2.patch, HBASE-21937.HBASE-21879.v3.patch
>
> When decompressing a compressed block, we currently also allocate a HeapByteBuffer for the unpacked block. We should instead allocate a ByteBuff from the global ByteBuffAllocator. Skimming the code, the key point is that we need a decompress interface that accepts a ByteBuff, not the following:
> {code}
> // Compression.java
> public static void decompress(byte[] dest, int destOffset,
>     InputStream bufferedBoundedStream, int compressedSize,
>     int uncompressedSize, Compression.Algorithm compressAlgo)
>     throws IOException {
>   //...
> }
> {code}
> Not very high priority; let me make the uncompressed blocks off-heap first.
> In HBASE-22005, I ignored these unit tests:
> 1. TestLoadAndSwitchEncodeOnDisk;
> 2. TestHFileBlock#testPreviousOffset;
> We need to resolve this issue and make those UTs pass.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
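The description asks for a decompress overload whose destination is a ByteBuff drawn from the ByteBuffAllocator rather than a byte[] plus offset. Below is a minimal sketch of what that interface shape could look like, not the actual HBase implementation: java.nio.ByteBuffer stands in for HBase's ByteBuff, GZIP stands in for the pluggable Compression.Algorithm, and the class and method names are illustrative only.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ByteBuffDecompress {

  // Hypothetical shape of the proposed interface: the destination is a
  // ByteBuffer (standing in for HBase's ByteBuff, which may wrap off-heap
  // memory handed out by the global ByteBuffAllocator) instead of a
  // byte[] + destOffset pair as in the current Compression#decompress.
  static void decompress(ByteBuffer dest, InputStream compressedStream,
      int uncompressedSize) throws IOException {
    byte[] chunk = new byte[4096];
    int remaining = uncompressedSize;
    while (remaining > 0) {
      int n = compressedStream.read(chunk, 0, Math.min(chunk.length, remaining));
      if (n < 0) {
        throw new IOException("Premature end of compressed stream");
      }
      dest.put(chunk, 0, n); // works for both heap and direct (off-heap) buffers
      remaining -= n;
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] block = "hello hbase block".getBytes(StandardCharsets.UTF_8);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write(block);
    }
    // A direct buffer mimics the off-heap destination the issue asks for.
    ByteBuffer dest = ByteBuffer.allocateDirect(block.length);
    try (InputStream in =
        new GZIPInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
      decompress(dest, in, block.length);
    }
    dest.flip();
    byte[] out = new byte[dest.remaining()];
    dest.get(out);
    if (!new String(out, StandardCharsets.UTF_8).equals("hello hbase block")) {
      throw new AssertionError("round-trip mismatch");
    }
    System.out.println("OK");
  }
}
```

The point of the shape change is that a ByteBuffer-style destination carries its own position and limit, so the caller no longer threads a raw offset through, and the same call path can fill pooled off-heap memory without an intermediate on-heap copy.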
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16870928#comment-16870928 ] Hudson commented on HBASE-21937:
Results for branch master [build #1168 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1168/]: (x) *{color:red}-1 overall{color}*
details (if available):
(x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//General_Nightly_Build_Report/]
(x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//JDK8_Nightly_Build_Report_(Hadoop2)/]
(x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1168//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825809#comment-16825809 ] Zheng Hu commented on HBASE-21937:
Pushed to branch HBASE-21879. Thanks [~Apache9] for reviewing.
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821641#comment-16821641 ] Zheng Hu commented on HBASE-21937:
Ping [~anoop.hbase], [~ram_krish], [~Apache9], any concerns?
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819739#comment-16819739 ] HBase QA commented on HBASE-21937: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HBASE-21879 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 26s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 38s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 37s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 53s{color} | {color:blue} hbase-server in HBASE-21879 has 11 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} HBASE-21879 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} hbase-common: The patch generated 0 new + 8 unchanged - 1 fixed = 8 total (was 9) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s{color} | {color:green} hbase-server: The patch generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 33s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 9m 5s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}142m 38s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}189m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HBASE-Build/105/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-21937 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966172/HBASE-21937.HBASE-21879.v3.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812724#comment-16812724 ] HBase QA commented on HBASE-21937: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HBASE-21879 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 17s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 26s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 57s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} HBASE-21879 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s{color} | {color:red} hbase-common: The patch generated 1 new + 8 unchanged - 1 fixed = 9 total (was 9) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} hbase-server: The patch generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 22s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 50s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}282m 33s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 1s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}330m 46s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.quotas.TestSpaceQuotas | | | hadoop.hbase.master.procedure.TestSCPWithReplicasWithoutZKCoordinated | | | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas | | | hadoop.hbase.regionserver.TestSplitTransactionOnCluster | | | hadoop.hbase.master.procedure.TestSCPWithReplicas | | | hadoop.hbase.master.TestRestartCluster | | | hadoop.hbase.TestSplitMerge | | | hadoop.hbase.master.TestSplitWALManager | | |
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812260#comment-16812260 ] HBase QA commented on HBASE-21937: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HBASE-21879 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 20s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 33s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 43s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s{color} | {color:green} HBASE-21879 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} HBASE-21879 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} hbase-common: The patch generated 1 new + 8 unchanged - 1 fixed = 9 total (was 9) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} hbase-server: The patch generated 0 new + 26 unchanged - 1 fixed = 26 total (was 27) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 26s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s{color} | {color:green} hbase-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 9s{color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 73m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.io.hfile.TestHFileEncryption | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HBASE-Build/21/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-21937 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12964564/HBASE-21937.HBASE-21879.v1.patch | | Optional Tests | dupname asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle
[jira] [Commented] (HBASE-21937) Make the Compression#decompress can accept ByteBuff as input
[ https://issues.apache.org/jira/browse/HBASE-21937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807754#comment-16807754 ] Zheng Hu commented on HBASE-21937:
Uploaded the initial patch v1. I found that some UTs were broken by this issue, so I raised the priority. FYI [~anoop.hbase], once HBASE-22127 gets merged, please help review this patch. Thanks.