[ https://issues.apache.org/jira/browse/HDFS-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556403#comment-14556403 ]
Hadoop QA commented on HDFS-8461:
---------------------------------

(x) *{color:red}-1 overall{color}*

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 45s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 3 new or modified test files. |
| {color:green}+1{color} | javac | 7m 29s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 38s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 0m 14s | The applied patch generated 1 release audit warning. |
| {color:green}+1{color} | checkstyle | 0m 37s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 1s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 3m 13s | The patch appears to introduce 1 new Findbugs (version 3.0.0) warning. |
| {color:green}+1{color} | native | 3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 171m 9s | Tests failed in hadoop-hdfs. |
| | | | 212m 34s | |

|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 88% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
| | hadoop.hdfs.server.namenode.TestAuditLogs |
| | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12734816/HDFS-8461-HDFS-7285.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 6313225 |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11104/artifact/patchprocess/patchReleaseAuditProblems.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/11104/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11104/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11104/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11104/console |

This message was automatically generated.
> Erasure coding: fix priority level of UnderReplicatedBlocks for striped block
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-8461
>                 URL: https://issues.apache.org/jira/browse/HDFS-8461
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Walter Su
>            Assignee: Walter Su
>         Attachments: HDFS-8461-HDFS-7285.001.patch
>
>
> {code:title=UnderReplicatedBlocks.java}
> private int getPriority(int curReplicas,
> ...
> } else if (curReplicas == 1) {
>   // only one replica - risk of loss
>   // highest priority
>   return QUEUE_HIGHEST_PRIORITY;
> ...
> {code}
> For striped blocks, we should return QUEUE_HIGHEST_PRIORITY when curReplicas == 6 (suppose a 6+3 schema).
> That's important, because:
> {code:title=BlockManager.java}
> DatanodeDescriptor[] chooseSourceDatanodes(BlockInfo block,
> ...
> if (priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
>     && !node.isDecommissionInProgress()
>     && node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams) {
>   continue; // already reached replication limit
> }
> ...
> {code}
> It may return too few source DNs (maybe 5) and fail to recover.
> A busy node should not be skipped if a block is at the highest risk/priority. The issue is that striped blocks are never assigned the highest priority.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
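The fix described above can be sketched as follows. This is a minimal illustration, not the actual UnderReplicatedBlocks code: the `isStriped` flag, the `dataBlockNum` parameter, and the queue constants' values are hypothetical simplifications. The point is that for a striped block with a 6+3 schema, dropping to 6 live internal blocks (no surviving parity) is the analogue of a replicated block with a single replica: one more loss is unrecoverable, so it must get QUEUE_HIGHEST_PRIORITY.

```java
// Hypothetical sketch of the adjusted priority logic for striped blocks.
// Constants and signature are illustrative, not the real HDFS API.
public class StripedBlockPriority {
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_VERY_UNDER_REPLICATED = 1;
    static final int QUEUE_UNDER_REPLICATED = 2;

    static int getPriority(boolean isStriped, int dataBlockNum,
                           int curReplicas, int expectedReplicas) {
        if (isStriped) {
            // With a 6+3 schema, dataBlockNum == 6: once only the data
            // blocks survive, any further loss makes the block unreadable,
            // so this is the highest-risk state.
            if (curReplicas <= dataBlockNum) {
                return QUEUE_HIGHEST_PRIORITY;
            }
        } else if (curReplicas == 1) {
            // only one replica - risk of loss (replicated-block case)
            return QUEUE_HIGHEST_PRIORITY;
        }
        // Remaining cases: less urgent under-replication.
        return curReplicas * 3 < expectedReplicas
            ? QUEUE_VERY_UNDER_REPLICATED
            : QUEUE_UNDER_REPLICATED;
    }
}
```

With this shape, a 6+3 striped block at 6 live blocks lands in the highest-priority queue, so chooseSourceDatanodes no longer skips busy source nodes for it.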