[ 
https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4861:
-----------------------------

    Summary: BlockPlacementPolicyDefault does not consider decommissioning 
racks  (was: BlockPlacementPolicyDefault does not consider decommissioning 
nodes)
    
> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.0.5-beta
>            Reporter: Kihwal Lee
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Because this calculation does not exclude racks whose nodes are being 
> decommissioned (the decommissioning state is checked only later, in 
> isGoodTarget()), the per-rack limit can be too low, and certain blocks 
> are left under-replicated even when plenty of racks and nodes exist.
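A minimal sketch of the arithmetic (hypothetical class and method names, not the actual BlockPlacementPolicyDefault code) shows how the limit goes wrong. With 3 replicas and 10 racks the formula yields a limit of 2 replicas per rack; if 9 of those racks are decommissioning, all 3 replicas must land on the one remaining rack, but the limit of 2 blocks the third replica, so the block stays under-replicated:

```java
// Sketch of the max-replicas-per-rack math from the description.
// Class and method names are illustrative, not the real HDFS code.
public class MaxNodesPerRackSketch {

    // Mirrors: (totalNumOfReplicas-1)/clusterMap.getNumOfRacks() + 2,
    // computed over ALL racks, including decommissioning ones.
    static int getMaxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        // 3 replicas across a 10-rack cluster map: limit = (3-1)/10 + 2 = 2.
        int limit = getMaxNodesPerRack(3, 10);
        System.out.println("maxNodesPerRack = " + limit); // prints 2

        // If 9 of the 10 racks are decommissioning, only 1 rack is usable,
        // yet the limit is still computed from 10 racks. Only 2 of the 3
        // replicas can be placed, and the block remains under-replicated.
    }
}
```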

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
