[ https://issues.apache.org/jira/browse/HDFS-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eric Sirianni updated HDFS-4898:
--------------------------------

    Status: Patch Available  (was: Open)

{noformat}
index 8cb072b..302981f 100644
--- a/src/hdfs/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyWithNodeGroup.java
+++ b/src/hdfs/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyWithNodeGroup.java
@@ -178,7 +178,7 @@ public class BlockPlacementPolicyWithNodeGroup extends BlockPlacementPolicyDefau
           avoidStaleNodes);
     } catch (NotEnoughReplicasException e) {
       chooseRandom(numOfReplicas - (results.size() - oldNumOfReplicas),
-          localMachine.getNetworkLocation(), excludedNodes, blocksize,
+          NetworkTopology.getFirstHalf(localMachine.getNetworkLocation()), excludedNodes, blocksize,
           maxReplicasPerRack, results, avoidStaleNodes);
     }
   }
{noformat}

> BlockPlacementPolicyWithNodeGroup.chooseRemoteRack() fails to properly
> fallback to local rack
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-4898
>                 URL: https://issues.apache.org/jira/browse/HDFS-4898
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.4-alpha, 1.2.0
>            Reporter: Eric Sirianni
>            Priority: Minor
>
> As currently implemented, {{BlockPlacementPolicyWithNodeGroup}} does not
> properly fall back to the local rack when no nodes are available in remote
> racks, resulting in an improper {{NotEnoughReplicasException}}.
> {code:title=BlockPlacementPolicyWithNodeGroup.java}
>   @Override
>   protected void chooseRemoteRack(int numOfReplicas,
>       DatanodeDescriptor localMachine, HashMap<Node, Node> excludedNodes,
>       long blocksize, int maxReplicasPerRack, List<DatanodeDescriptor> results,
>       boolean avoidStaleNodes) throws NotEnoughReplicasException {
>     int oldNumOfReplicas = results.size();
>     // randomly choose one node from remote racks
>     try {
>       chooseRandom(
>           numOfReplicas,
>           "~" + NetworkTopology.getFirstHalf(localMachine.getNetworkLocation()),
>           excludedNodes, blocksize, maxReplicasPerRack, results,
>           avoidStaleNodes);
>     } catch (NotEnoughReplicasException e) {
>       chooseRandom(numOfReplicas - (results.size() - oldNumOfReplicas),
>           localMachine.getNetworkLocation(), excludedNodes, blocksize,
>           maxReplicasPerRack, results, avoidStaleNodes);
>     }
>   }
> {code}
> As currently coded, the {{chooseRandom()}} call in the {{catch}} block can
> never succeed, because the set of nodes under the passed-in node path (e.g.
> {{/rack1/nodegroup1}}) is entirely contained within the set of excluded nodes
> (both are the set of nodes in the same nodegroup as the node chosen for the
> first replica).
> The bug is that the fallback {{chooseRandom()}} call in the catch block
> should be passing in the _complement_ of the node path used in the initial
> {{chooseRandom()}} call in the try block (e.g. {{/rack1}}) - namely:
> {code}
> NetworkTopology.getFirstHalf(localMachine.getNetworkLocation())
> {code}
> This yields the proper fallback behavior of choosing a random node from
> _within the same rack_, while still excluding nodes _in the same nodegroup_.
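To make the scope strings concrete, below is a minimal, self-contained sketch (not the actual Hadoop code: the class name and the {{firstHalf()}} helper are hypothetical, and {{firstHalf()}} only approximates what {{NetworkTopology.getFirstHalf()}} is expected to return for a two-level {{/rack/nodegroup}} location). It prints the scope the try block searches, the scope the buggy fallback passes, and the scope the patch passes instead.

{code:java}
// Hypothetical illustration only; not part of the Hadoop source tree.
public class ChooseRemoteRackFallbackSketch {

  /** Approximation of NetworkTopology.getFirstHalf(): return the rack
   *  portion of a two-level "/rack/nodegroup" location, e.g. "/rack1". */
  static String firstHalf(String networkLocation) {
    int lastSlash = networkLocation.lastIndexOf('/');
    return lastSlash > 0 ? networkLocation.substring(0, lastSlash) : networkLocation;
  }

  public static void main(String[] args) {
    // Hypothetical location of the node that holds the first replica.
    String localLocation = "/rack1/nodegroup1";

    // Scope used by the try block: everything *outside* the local rack.
    String remoteScope = "~" + firstHalf(localLocation);      // "~/rack1"

    // Scope the buggy catch block passes: the local nodegroup, whose nodes
    // are already all present in excludedNodes, so no target can be found.
    String buggyFallbackScope = localLocation;                 // "/rack1/nodegroup1"

    // Scope the patch passes: the local rack, so nodes in *other* nodegroups
    // of the same rack stay eligible while the local nodegroup stays excluded.
    String fixedFallbackScope = firstHalf(localLocation);      // "/rack1"

    System.out.println("remote scope         : " + remoteScope);
    System.out.println("buggy fallback scope : " + buggyFallbackScope);
    System.out.println("fixed fallback scope : " + fixedFallbackScope);
  }
}
{code}

Because every node under {{/rack1/nodegroup1}} is already in {{excludedNodes}}, only the {{/rack1}} scope leaves any eligible targets for the fallback call.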