[ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13851408#comment-13851408 ]

Arpit Agarwal commented on HDFS-5662:
-------------------------------------

Minor comment - the contents of the {{if}} block in BlockManager.java have an 
extra level of indentation.

+1 otherwise.


> Can't decommission a DataNode due to file's replication factor larger than 
> the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>         Attachments: HDFS-5662.001.patch
>
>
> A datanode can't be decommissioned if it has a replica that belongs to a file 
> whose replication factor is larger than the rest of the cluster size.
> One way to fix this is to introduce some kind of minimum replication factor 
> setting, so that any datanode can be decommissioned regardless of the largest 
> replication factor among the files it holds replicas for.
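
For illustration, a minimal sketch of the kind of check the description 
suggests; the names here ({{isSufficientlyReplicated}}, {{effectiveTarget}}) 
are hypothetical and not taken from the attached patch. The idea is to cap the 
expected replication at the number of live DataNodes, so a file whose 
replication factor exceeds the remaining cluster size no longer blocks 
decommissioning:

{code:java}
/**
 * Sketch only: decide whether a block counts as "sufficiently replicated"
 * for decommissioning purposes. Illustrative, not the HDFS-5662 patch.
 */
public class DecommissionCheck {
  /**
   * @param liveReplicas       replicas on live, non-decommissioning nodes
   * @param replicationFactor  the file's configured replication factor
   * @param liveDataNodes      live DataNodes remaining in the cluster
   */
  static boolean isSufficientlyReplicated(int liveReplicas,
                                          int replicationFactor,
                                          int liveDataNodes) {
    // Cap the expected replication at the cluster size: the cluster can
    // never hold more replicas than it has live DataNodes.
    int effectiveTarget = Math.min(replicationFactor, liveDataNodes);
    return liveReplicas >= effectiveTarget;
  }

  public static void main(String[] args) {
    // A file with replication factor 10 on a cluster that has 3 live
    // DataNodes left: 3 live replicas are enough, so the decommission
    // can proceed.
    System.out.println(isSufficientlyReplicated(3, 10, 3)); // true
    // Normal case: 2 of 3 required replicas on a 5-node cluster is
    // still under-replicated.
    System.out.println(isSufficientlyReplicated(2, 3, 5));  // false
  }
}
{code}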


