[ https://issues.apache.org/jira/browse/HBASE-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594279#comment-13594279 ]
Hadoop QA commented on HBASE-7842:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12572226/HBASE-7842-4.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 4 new or modified tests.

{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.

{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.

{color:red}-1 core tests{color}. The patch failed these unit tests:
  org.apache.hadoop.hbase.replication.TestReplicationQueueFailoverCompressed
  org.apache.hadoop.hbase.master.TestMasterFailover
  org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster

{color:red}-1 core zombie tests{color}. There are 1 zombie test(s):

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4688//console

This message is automatically generated.

> Add compaction policy that explores more storefile groups
> ---------------------------------------------------------
>
>                 Key: HBASE-7842
>                 URL: https://issues.apache.org/jira/browse/HBASE-7842
>             Project: HBase
>          Issue Type: New Feature
>          Components: Compaction
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>         Attachments: HBASE-7842-0.patch, HBASE-7842-2.patch, HBASE-7842-3.patch, HBASE-7842-4.patch
>
>
> Some less stable workloads can end up with compactions that are too large or too small under the current storefile selection algorithm.
>
> Currently:
> * Find the first file fi such that FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> * Ensure that there are at least the minimum number of files (if there aren't, bail out)
> * If there are too many files, keep the larger ones.
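For reference, here is a minimal sketch of the current selection rule as quoted above, assuming plain file sizes stand in for StoreFile objects, the candidates are already in the usual oldest-first (typically largest-first) order, and MIN_FILES/MAX_FILES are hypothetical stand-ins for the hbase.hstore.compaction.min/max settings; this is an illustration of the rule as written, not the policy code in HBase itself.

{code:java}
import java.util.Collections;
import java.util.List;

/**
 * Illustrative sketch of the current selection rule, over plain file sizes.
 * MIN_FILES/MAX_FILES are hypothetical stand-ins for the
 * hbase.hstore.compaction.min/max settings; candidates are assumed to be in
 * the usual oldest-first (typically largest-first) order.  Not the actual
 * HBase policy code.
 */
public class CurrentSelectionSketch {
  static final int MIN_FILES = 3;
  static final int MAX_FILES = 10;

  static List<Long> select(List<Long> sizes) {
    // Find the first file fi with FileSize(fi) <= Sum(0, i-1, FileSize(fx)).
    long sumBefore = 0;
    int start = -1;
    for (int i = 0; i < sizes.size(); i++) {
      if (i > 0 && sizes.get(i) <= sumBefore) {
        start = i;
        break;
      }
      sumBefore += sizes.get(i);
    }
    if (start < 0) {
      return Collections.emptyList();   // no file satisfies the condition
    }
    // Select from that file to the end of the candidate list.
    List<Long> chosen = sizes.subList(start, sizes.size());
    // Ensure the minimum number of files; otherwise bail out.
    if (chosen.size() < MIN_FILES) {
      return Collections.emptyList();
    }
    // If there are too many files, keep the larger ones (the front of the list).
    if (chosen.size() > MAX_FILES) {
      chosen = chosen.subList(0, MAX_FILES);
    }
    return chosen;
  }
}
{code}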
> I would propose something like:
> * Find all sets of storefiles where every file satisfies
> ** FileSize(fi) <= Sum(0, i-1, FileSize(fx))
> ** Num files in set <= max
> ** Num files in set >= min
> * Then pick the set of files that maximizes ((# storefiles in set) / Sum(FileSize(fx)))
>
> The thinking is that the above algorithm is pretty easy to reason about: every file in the chosen set satisfies the ratio, and the selection should rewrite the least amount of data for the biggest impact on seeks.
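And a hedged sketch of the proposed selection, under the assumptions that candidate sets are contiguous windows of the already-ordered file list and that the first file of each window is exempt from the size condition (it has no preceding files to sum); MIN_FILES/MAX_FILES are again hypothetical stand-ins for the min/max file-count settings. This is one reading of the proposal, not the patch attached to this issue.

{code:java}
import java.util.Collections;
import java.util.List;

/**
 * Illustrative sketch of the proposed selection, over plain file sizes.
 * Enumerate contiguous candidate windows, keep only those within the min/max
 * file-count bounds whose files all satisfy the size condition, then pick the
 * window with the best (file count / total bytes rewritten) score.
 */
public class ExploringSelectionSketch {
  static final int MIN_FILES = 3;
  static final int MAX_FILES = 10;

  static List<Long> select(List<Long> sizes) {
    List<Long> best = Collections.emptyList();
    double bestScore = -1.0;
    for (int start = 0; start < sizes.size(); start++) {
      int maxEnd = Math.min(sizes.size(), start + MAX_FILES);
      for (int end = start + MIN_FILES; end <= maxEnd; end++) {
        List<Long> window = sizes.subList(start, end);
        if (!everyFileSatisfiesCondition(window)) {
          continue;
        }
        long totalSize = 0;
        for (long s : window) {
          totalSize += s;
        }
        // Score = (# storefiles in set) / Sum(FileSize(fx)): files removed per
        // byte rewritten, so higher is better.
        double score = (double) window.size() / totalSize;
        if (score > bestScore) {
          bestScore = score;
          best = window;
        }
      }
    }
    return best;
  }

  /** True when FileSize(fi) <= Sum(0, i-1, FileSize(fx)) for every i > 0 in the window. */
  static boolean everyFileSatisfiesCondition(List<Long> window) {
    long sumBefore = window.get(0);
    for (int i = 1; i < window.size(); i++) {
      if (window.get(i) > sumBefore) {
        return false;
      }
      sumBefore += window.get(i);
    }
    return true;
  }
}
{code}

The count-per-byte score is what encodes the stated goal: a window that removes more files per byte rewritten beats a larger but more expensive window, i.e. the least data rewritten for the biggest reduction in seeks.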