[ https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Allen Wittenauer updated HDFS-4861:
-----------------------------------
    Labels: BB2015-05-TBR  (was: )

> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas per rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this does not consider the racks that are being decommissioned, and the
> decommissioning state is only checked later in isGoodTarget(), certain blocks
> are not replicated even when there are many racks and nodes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
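The effect of the quoted formula can be sketched as a standalone computation. This is a minimal illustration, not the actual BlockPlacementPolicyDefault code; the helper method and the scenario numbers are assumptions used only to show how counting decommissioning racks inflates the denominator and lowers the per-rack cap.

```java
// Sketch of the maxNodesPerRack formula quoted in the issue description.
// Hypothetical standalone version; the real code lives in
// BlockPlacementPolicyDefault.getMaxNodesPerRack() and reads the rack
// count from clusterMap.getNumOfRacks().
public class MaxNodesPerRackDemo {

    // Same arithmetic as the quoted line:
    // (totalNumOfReplicas - 1) / numOfRacks + 2
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        // 3 replicas, 10 racks reported by the cluster map: cap is 2 per rack.
        System.out.println(maxNodesPerRack(3, 10));

        // If 8 of those 10 racks are draining (decommissioning), only 2 racks
        // can actually accept replicas, so the cap arguably should be computed
        // over 2 racks (giving 3), not 10 (giving 2). The mismatch is what
        // leaves some blocks under-replicated.
        System.out.println(maxNodesPerRack(3, 2));
    }
}
```

Because decommissioning is only rejected later in isGoodTarget(), the chooser can exhaust its candidates while still honoring the too-small cap, so the block stays under-replicated even though healthy racks remain.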