[ https://issues.apache.org/jira/browse/HBASE-27224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601133#comment-17601133 ]
Hudson commented on HBASE-27224:
--------------------------------

Results for branch branch-2
	[build #638 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/]: (x) *{color:red}-1 overall{color}*
----
details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/General_20Nightly_20Build_20Report/]

(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]

(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]

(/) {color:green}+1 source release artifact{color}
-- See build output for details.

(/) {color:green}+1 client integration test{color}


> HFile tool statistic sampling produces misleading results
> ---------------------------------------------------------
>
>                 Key: HBASE-27224
>                 URL: https://issues.apache.org/jira/browse/HBASE-27224
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Bryan Beaudreault
>            Assignee: Bryan Beaudreault
>            Priority: Major
>              Labels: patch-available
>             Fix For: 2.5.1, 3.0.0-alpha-4
>
>
> The HFile tool uses codahale metrics to collect statistics about the key/values
> in an HFile. We recently had a case where the statistics reported a max row size
> of only 25k. This was confusing because I was seeing bucket cache allocation
> failures for blocks as large as 1.5mb.
> Digging in, I was able to find the large row using the "-p" argument (which was
> obviously very verbose).
> Once I found the row, I saw the vlen was listed as ~1.5mb, which made much
> more sense.
> The first thing I notice here is that the default codahale metrics histogram
> uses an ExponentiallyDecayingReservoir. This probably makes sense for a
> long-lived histogram, but the HFile tool runs at a single point in time, so it
> might be better to use a UniformReservoir instead.
> Secondly, we do not need sampling for min/max. Let's supplement the histogram
> with our own calculation, which is guaranteed to be accurate for the entirety
> of the file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
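The two proposed fixes can be sketched in plain Java. This is a hypothetical illustration, not the actual HBase patch: `KeyValueStats` is an invented name, and the real tool would keep using codahale's Histogram. Here the equal-probability sampling that a UniformReservoir provides is simulated directly with reservoir sampling (Vitter's Algorithm R), while min/max are tracked exactly, outside the sample, so an outlier like the ~1.5mb value can never be sampled away:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: pair a uniform reservoir sample of value lengths
// with exact min/max tracked over every value seen.
public class KeyValueStats {
    private final long[] reservoir;   // uniform sample of value lengths
    private long count = 0;           // total values observed
    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;

    public KeyValueStats(int reservoirSize) {
        this.reservoir = new long[reservoirSize];
    }

    public void update(long valueLen) {
        // Exact bounds: every value is considered, none are sampled away.
        if (valueLen < min) min = valueLen;
        if (valueLen > max) max = valueLen;

        // Algorithm R: keep the first k values, then replace a random slot
        // with probability k / (count + 1), so every value in the file has
        // an equal chance of ending up in the final sample -- unlike an
        // exponentially decaying reservoir, which biases toward recent data.
        if (count < reservoir.length) {
            reservoir[(int) count] = valueLen;
        } else {
            long j = ThreadLocalRandom.current().nextLong(count + 1);
            if (j < reservoir.length) {
                reservoir[(int) j] = valueLen;
            }
        }
        count++;
    }

    public long getMin() { return count == 0 ? 0 : min; }
    public long getMax() { return count == 0 ? 0 : max; }

    public static void main(String[] args) {
        KeyValueStats stats = new KeyValueStats(1024);
        for (long v = 100; v < 10_000; v++) {
            stats.update(v);
        }
        stats.update(1_500_000L); // the outlier row
        System.out.println("min=" + stats.getMin() + " max=" + stats.getMax());
    }
}
```

With this split, the histogram percentiles remain estimates from the sample, but min/max are guaranteed correct for the whole file, which is exactly the property the issue asks for.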