[ https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702839#comment-14702839 ]
Hadoop QA commented on HDFS-8884:
---------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch | 16m 1s | Findbugs (version ) appears to be broken on trunk. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac | 8m 7s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 10m 8s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 0m 34s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install | 1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 2m 34s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native | 3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 175m 21s | Tests failed in hadoop-hdfs. |
| | | | 218m 37s | |

|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
| | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |

|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12751189/HDFS-8884.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22dc5fc |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12040/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12040/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12040/console |

This message was automatically generated.


> Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
> -----------------------------------------------------------
>
>                 Key: HDFS-8884
>                 URL: https://issues.apache.org/jira/browse/HDFS-8884
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>         Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch
>
> In the current BlockPlacementPolicyDefault, when choosing a datanode storage to place a block, we have the following logic:
> {code}
> final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
>     chosenNode.getStorageInfos());
> int i = 0;
> boolean search = true;
> for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>     .entrySet().iterator(); search && iter.hasNext(); ) {
>   Map.Entry<StorageType, Integer> entry = iter.next();
>   for (i = 0; i < storages.length; i++) {
>     StorageType type = entry.getKey();
>     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
> {code}
> We iterate over all storages of the candidate datanode (two nested {{for}} loops, although the counts are usually small) even if the datanode itself is not a good target (e.g. decommissioned, stale, too busy, ...), since currently all the checks are done in {{addIfIsGoodTarget}}.
> We can fail fast: check the datanode-related conditions first; if the datanode is not good, there is no need to shuffle and iterate its storages. This is more efficient.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
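The fail-fast idea described in the issue can be sketched in a few lines. This is a simplified standalone sketch, not the actual HDFS patch: {{Node}}, {{isGoodDatanode}}, and {{chooseStorage}} are hypothetical stand-ins for the real {{DatanodeDescriptor}}-level checks and {{chooseTarget}} logic; the point is only that the node-level test happens before any shuffle or storage iteration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FailFastSketch {
  // Hypothetical stand-in for DatanodeDescriptor state.
  static class Node {
    final boolean decommissioned;
    final boolean stale;
    final String[] storages;
    Node(boolean decommissioned, boolean stale, String... storages) {
      this.decommissioned = decommissioned;
      this.stale = stale;
      this.storages = storages;
    }
  }

  // Counts how many storages were actually examined (for demonstration).
  static int storagesExamined = 0;

  // Fail-fast check: datanode-level conditions only, no storage access.
  static boolean isGoodDatanode(Node n) {
    return !n.decommissioned && !n.stale;
  }

  static String chooseStorage(Node n, String wantedType) {
    if (!isGoodDatanode(n)) {
      // Bad datanode: return early without shuffling or iterating storages.
      return null;
    }
    // Only good datanodes pay the cost of shuffling and scanning storages.
    List<String> shuffled = new ArrayList<>(Arrays.asList(n.storages));
    Collections.shuffle(shuffled);
    for (String s : shuffled) {
      storagesExamined++;
      if (s.equals(wantedType)) {
        return s;
      }
    }
    return null;
  }

  public static void main(String[] args) {
    Node bad = new Node(true, false, "DISK", "SSD");
    System.out.println(chooseStorage(bad, "SSD"));  // rejected before iteration
    System.out.println(storagesExamined);           // no storages were touched
    Node good = new Node(false, false, "DISK", "SSD");
    System.out.println(chooseStorage(good, "SSD"));
  }
}
```

With the early return in place, a decommissioned or stale node costs a couple of boolean checks instead of a shuffle plus two nested loops over its storages, which is exactly the saving the issue describes.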