[ https://issues.apache.org/jira/browse/HDFS-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14118394 ]

Zhe Zhang commented on HDFS-5114:
---------------------------------

[~kihwal] Since this was created a year ago, do you happen to know if it has 
been resolved in the latest code? If not, I'm happy to work on it. Thanks!

> getMaxNodesPerRack() in BlockPlacementPolicyDefault does not take 
> decommissioning nodes into account.
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5114
>                 URL: https://issues.apache.org/jira/browse/HDFS-5114
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.0.0, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Zhe Zhang
>
> If a large proportion of data nodes are being decommissioned, one or more 
> racks may not be writable. However, this is not taken into account when the 
> default block placement policy module invokes getMaxNodesPerRack(). Some 
> blocks, especially those with a high replication factor, may not be fully 
> replicated until those nodes are taken out of dfs.include. This can 
> actually block decommissioning itself.
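
For illustration only, here is a minimal Java sketch of the idea, not the
actual BlockPlacementPolicyDefault code: the method name
computeMaxNodesPerRack, its parameters, and the formula below are assumptions.
The point is that the per-rack replica cap should be bounded by the number of
nodes that are still writable (i.e. excluding decommissioning nodes), so the
policy does not ask for a placement the remaining racks cannot satisfy.

    // Illustrative sketch only -- not the actual HDFS implementation.
    public class MaxNodesPerRackSketch {

      /**
       * Hypothetical helper: caps how many replicas may land on one rack.
       *
       * @param numReplicas      desired replication factor
       * @param numRacks         total racks in the cluster
       * @param numWritableNodes datanodes that are neither decommissioned nor
       *                         decommissioning (assumed to be the only valid
       *                         write targets)
       */
      static int computeMaxNodesPerRack(int numReplicas, int numRacks,
                                        int numWritableNodes) {
        // Cannot place more replicas than there are writable nodes.
        int totalReplicas = Math.min(numReplicas, numWritableNodes);

        // Spread roughly evenly across racks, with a little slack per rack.
        int maxPerRack = (totalReplicas - 1) / Math.max(numRacks, 1) + 2;

        // Without the min() above, a cluster where most nodes are
        // decommissioning could compute a cap no rack can actually satisfy,
        // stalling both replication and the decommission itself.
        return maxPerRack;
      }

      public static void main(String[] args) {
        // 10 replicas requested, 4 racks, but only 6 nodes still writable.
        System.out.println(computeMaxNodesPerRack(10, 4, 6)); // prints 3
      }
    }

In this sketch, with 10 requested replicas, 4 racks, and only 6 writable
nodes, the cap works out to 3 per rack; ignoring decommissioning nodes would
instead yield 4, a target the shrunken cluster may never reach.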



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
