[ https://issues.apache.org/jira/browse/HBASE-22532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16854604#comment-16854604 ]

Zheng Hu commented on HBASE-22532:
----------------------------------

I added a debug log like the following:

{code}

commit f9b563e29d02204a5653eff6235a82511e9a6c08 (HEAD -> HBASE-21879)
Author: huzheng <open...@gmail.com>
Date: Mon Jun 3 21:07:26 2019 +0800

    Add debug

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
index dc007f726a..2c94acf383 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
@@ -106,6 +106,10 @@ public class ChecksumUtil {
       } catch (ChecksumException e) {
         return false;
       }
+    } else {
+      LOG.info("--> data.class: {}, checksums.class: {}, data.capacity: {}, checksums.capacity: {}",
+          data.getClass(), checksums.getClass(), data.capacity(), checksums.capacity());
     }

     // If the block is a MultiByteBuff, we use a small byte[] to update the checksum many times for

{code}

 

It seems many blocks have a buffer capacity of 133120 bytes (130 KB), because many log lines read:

{code}

2019-06-03,21:39:47,691 INFO org.apache.hadoop.hbase.io.hfile.ChecksumUtil: --> data.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, checksums.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, data.capacity: 133120, checksums.capacity: 133120
2019-06-03,21:39:47,691 INFO org.apache.hadoop.hbase.io.hfile.ChecksumUtil: --> data.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, checksums.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, data.capacity: 133120, checksums.capacity: 133120
2019-06-03,21:39:47,691 INFO org.apache.hadoop.hbase.io.hfile.ChecksumUtil: --> data.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, checksums.class: class org.apache.hadoop.hbase.nio.MultiByteBuff, data.capacity: 133120, checksums.capacity: 133120

{code}
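For context, the arithmetic behind the capacities in the log above can be sketched as follows. This is a hypothetical illustration (the {{BufferMath}} class is not HBase code): a block whose capacity exceeds the configured allocator buffer size must span multiple allocator buffers, so it is wrapped as a MultiByteBuff rather than a SingleByteBuff.

```java
// Hypothetical illustration (not HBase code): why a block of capacity 133120
// bytes cannot fit in one 66560-byte allocator buffer, and therefore is
// wrapped as a MultiByteBuff instead of a SingleByteBuff.
public class BufferMath {
    public static void main(String[] args) {
        int allocatorBufferSize = 66560;  // hbase.ipc.server.allocator.buffer.size
        int blockCapacity = 133120;       // data.capacity reported in the log above
        // Ceiling division: number of allocator buffers the block spans.
        int buffersNeeded = (blockCapacity + allocatorBufferSize - 1) / allocatorBufferSize;
        System.out.println(buffersNeeded + " buffer(s) -> "
            + (buffersNeeded > 1 ? "MultiByteBuff" : "SingleByteBuff"));
        // prints "2 buffer(s) -> MultiByteBuff"
    }
}
```

Since the observed capacity (133120) is exactly twice the configured allocator buffer size (66560), a two-segment MultiByteBuff is expected, and checksum validation falls back to the slower pure-Java path.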

> There's still too much cpu wasting on validating checksum even if 
> buffer.size=65KB
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-22532
>                 URL: https://issues.apache.org/jira/browse/HBASE-22532
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>         Attachments: async-prof-pid-27827-cpu-3.svg
>
>
> After disabling the block cache, with the following config:
> {code}
>     # Disable the block cache
>     hfile.block.cache.size=0
>     hbase.ipc.server.allocator.buffer.size=66560
>     hbase.ipc.server.reservoir.minimal.allocating.size=0
> {code}
> The ByteBuff for a block is expected to be a SingleByteBuff, which uses the 
> hadoop native lib to validate the checksum. However, in the cpu flame graph 
> [async-prof-pid-27827-cpu-3.svg|https://issues.apache.org/jira/secure/attachment/12970683/async-prof-pid-27827-cpu-3.svg], 
> we can still see about 32% of CPU wasted in PureJavaCrc32#update, which 
> means it's not using the faster hadoop native lib.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
