[ https://issues.apache.org/jira/browse/HDFS-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740935#comment-13740935 ]
Hudson commented on HDFS-4898:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #1492 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1492/])
HDFS-4898. BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() fails to properly fallback to local rack. (szetszwo: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1514156)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java

> BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() fails to properly fallback to local rack
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-4898
>                 URL: https://issues.apache.org/jira/browse/HDFS-4898
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.2.0, 2.0.4-alpha
>            Reporter: Eric Sirianni
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Minor
>             Fix For: 1.3.0, 2.1.1-beta
>
>         Attachments: h4898_20130809_b-1.patch, h4898_20130809.patch
>
>
> As currently implemented, {{BlockPlacementPolicyWithNodeGroup}} does not properly fall back to the local rack when no nodes are available in remote racks, resulting in a spurious {{NotEnoughReplicasException}}.
> {code:title=BlockPlacementPolicyWithNodeGroup.java}
>   @Override
>   protected void chooseRemoteRack(int numOfReplicas,
>       DatanodeDescriptor localMachine, HashMap<Node, Node> excludedNodes,
>       long blocksize, int maxReplicasPerRack, List<DatanodeDescriptor> results,
>       boolean avoidStaleNodes) throws NotEnoughReplicasException {
>     int oldNumOfReplicas = results.size();
>     // randomly choose one node from remote racks
>     try {
>       chooseRandom(
>           numOfReplicas,
>           "~" + NetworkTopology.getFirstHalf(localMachine.getNetworkLocation()),
>           excludedNodes, blocksize, maxReplicasPerRack, results,
>           avoidStaleNodes);
>     } catch (NotEnoughReplicasException e) {
>       chooseRandom(numOfReplicas - (results.size() - oldNumOfReplicas),
>           localMachine.getNetworkLocation(), excludedNodes, blocksize,
>           maxReplicasPerRack, results, avoidStaleNodes);
>     }
>   }
> {code}
> As currently coded, the {{chooseRandom()}} call in the {{catch}} block can never succeed: the set of nodes under the node path passed in (e.g. {{/rack1/nodegroup1}}) is entirely contained within the set of excluded nodes (both are the set of nodes in the same nodegroup as the node chosen for the first replica).
> The bug is that the fallback {{chooseRandom()}} call in the {{catch}} block should instead be passed the _complement_ of the node path used in the initial {{chooseRandom()}} call in the {{try}} block (e.g. {{/rack1}}), namely:
> {code}
> NetworkTopology.getFirstHalf(localMachine.getNetworkLocation())
> {code}
> This yields the proper fallback behavior of choosing a random node from _within the same rack_, while still excluding those nodes _in the same nodegroup_.
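For reference, a minimal sketch of the corrected method, following the fix described in the issue body: the {{catch}} block falls back to the rack-level location returned by {{NetworkTopology.getFirstHalf()}} rather than the nodegroup-level location. This is an illustration reconstructed from the snippet quoted above, not the committed patch; signatures are taken as shown there.

{code:title=chooseRemoteRack() -- sketch of the corrected fallback}
  @Override
  protected void chooseRemoteRack(int numOfReplicas,
      DatanodeDescriptor localMachine, HashMap<Node, Node> excludedNodes,
      long blocksize, int maxReplicasPerRack, List<DatanodeDescriptor> results,
      boolean avoidStaleNodes) throws NotEnoughReplicasException {
    int oldNumOfReplicas = results.size();
    // Rack-level location, e.g. "/rack1" for a node at "/rack1/nodegroup1".
    final String rackLocation =
        NetworkTopology.getFirstHalf(localMachine.getNetworkLocation());
    try {
      // First choice: a random node outside the local rack ("~" = complement).
      chooseRandom(numOfReplicas, "~" + rackLocation, excludedNodes, blocksize,
          maxReplicasPerRack, results, avoidStaleNodes);
    } catch (NotEnoughReplicasException e) {
      // Fallback: choose within the local rack. excludedNodes still contains
      // the nodes of the local nodegroup, so they remain excluded here.
      chooseRandom(numOfReplicas - (results.size() - oldNumOfReplicas),
          rackLocation, excludedNodes, blocksize,
          maxReplicasPerRack, results, avoidStaleNodes);
    }
  }
{code}

The only behavioral change from the quoted code is the second argument of the fallback {{chooseRandom()}} call.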