[ https://issues.apache.org/jira/browse/HBASE-9158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13734197#comment-13734197 ]
Hadoop QA commented on HBASE-9158:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12596946/9158-trunk-v4.txt
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.

    {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.

    {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.

    {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 18 warning messages.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.

    {color:green}+1 site{color}. The mvn site goal succeeds with this patch.

    {color:red}-1 core tests{color}.
The patch failed these unit tests:

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6662//console

This message is automatically generated.
> Serious bug in cyclic replication
> ---------------------------------
>
>                 Key: HBASE-9158
>                 URL: https://issues.apache.org/jira/browse/HBASE-9158
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.0, 0.95.1, 0.94.10
>            Reporter: Lars Hofhansl
>            Assignee: Lars Hofhansl
>            Priority: Critical
>             Fix For: 0.98.0, 0.95.2, 0.94.11
>
>         Attachments: 9158-0.94.txt, 9158-0.94-v2.txt, 9158-0.94-v3.txt, 9158-0.94-v4.txt, 9158-trunk-v1.txt, 9158-trunk-v2.txt, 9158-trunk-v3.txt, 9158-trunk-v4.txt
>
>
> While studying the code for HBASE-7709, I found a serious bug in the current cyclic replication code. The problem is here, in HRegion.doMiniBatchMutation:
> {code}
> Mutation first = batchOp.operations[firstIndex].getFirst();
> txid = this.log.appendNoSync(regionInfo, this.htableDescriptor.getName(),
>     walEdit, first.getClusterId(), now, this.htableDescriptor);
> {code}
> Note that edits replicated from a remote cluster and local edits may interleave in the WAL, and we may also receive edits from multiple remote clusters. Hence that {{walEdit}} may contain edits from many clusters, yet all of them are labeled with the clusterId of the first Mutation only.
> Fixing this in doMiniBatchMutation seems tricky to do efficiently (imagine we get a batch with cluster1, cluster2, cluster1, cluster2, ...; in that case each edit would have to be its own batch). The coprocessor handling would also be difficult.
> The other option is to create batches of Puts grouped by cluster id in ReplicationSink.replicateEntries(...). This is not as general, but it is equally correct, and it is the approach I would favor.
> Lastly, this is very hard to verify in a unit test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
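The favored fix quoted above, grouping replicated Puts into per-cluster batches inside ReplicationSink.replicateEntries(...), can be sketched roughly as follows. This is a minimal illustration only: the {{Edit}} class and the {{groupByClusterId}} method are hypothetical stand-ins for HBase's WALEdit/Mutation machinery, not the actual ReplicationSink API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Sketch only: Edit stands in for a replicated mutation carrying its
// originating cluster's UUID; not the real HBase types.
public class ClusterIdGrouping {
    static final class Edit {
        final UUID clusterId;
        final String row;
        Edit(UUID clusterId, String row) {
            this.clusterId = clusterId;
            this.row = row;
        }
    }

    // Split an interleaved stream of edits into per-cluster batches,
    // preserving arrival order within each batch. Each batch can then be
    // applied (and appended to the WAL) under a single, correct cluster id,
    // instead of stamping everything with the first mutation's id.
    static Map<UUID, List<Edit>> groupByClusterId(List<Edit> edits) {
        Map<UUID, List<Edit>> batches = new LinkedHashMap<>();
        for (Edit e : edits) {
            batches.computeIfAbsent(e.clusterId, k -> new ArrayList<>()).add(e);
        }
        return batches;
    }

    public static void main(String[] args) {
        UUID c1 = UUID.randomUUID();
        UUID c2 = UUID.randomUUID();
        // The pathological interleaving from the description:
        // cluster1, cluster2, cluster1, cluster2, ...
        List<Edit> interleaved = List.of(
            new Edit(c1, "row1"), new Edit(c2, "row2"),
            new Edit(c1, "row3"), new Edit(c2, "row4"));
        Map<UUID, List<Edit>> batches = groupByClusterId(interleaved);
        // Two batches, one per source cluster, two edits each.
        System.out.println(batches.size());
        System.out.println(batches.get(c1).size());
    }
}
```

Under this sketch, the interleaved case that would force one-edit-per-batch in doMiniBatchMutation collapses into just one batch per source cluster, which is why the grouping was argued to be the more efficient place for the fix.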