[ 
https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5662:
-----------------------------

    Summary: Can't decommission a DataNode due to file's replication factor 
larger than the rest of the cluster size  (was: Can't decommit a DataNode due 
to file's replication factor larger than the rest of the cluster size)

> Can't decommission a DataNode due to file's replication factor larger than 
> the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>
> A datanode can't be decommissioned if it has a replica that belongs to a 
> file with a replication factor larger than the size of the rest of the 
> cluster.
> One way to fix this is to introduce a minimum replication factor setting, 
> so that any datanode can be decommissioned regardless of the largest 
> replication factor among the files its replicas belong to. 
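
A minimal sketch of the proposed behavior, in Java. The class, method, and the
constant standing in for the minimum-replication setting are illustrative names
only, not taken from the HDFS code base; the actual fix would live in the
NameNode's block/decommission management code. The idea: a block on a
decommissioning node stops blocking decommission once its live replica count
reaches the smaller of the file's replication factor and the number of
remaining live datanodes, floored at a configured minimum.

    // Sketch only: all names and the minimum setting are assumptions, not HDFS APIs.
    public class DecommissionCheckSketch {

        /** Hypothetical knob mirroring the proposed "minimum replication factor" setting. */
        static final int MIN_DECOMMISSION_REPLICATION = 1;

        /**
         * A block is treated as sufficiently replicated for decommission once its
         * live replica count reaches the smaller of the file's replication factor
         * and the number of live datanodes left after decommission, but never less
         * than the configured minimum.
         */
        static boolean isSufficientlyReplicatedForDecommission(int liveReplicas,
                                                               int fileReplication,
                                                               int liveNodesAfterDecommission) {
            int effectiveMin = Math.max(MIN_DECOMMISSION_REPLICATION,
                    Math.min(fileReplication, liveNodesAfterDecommission));
            return liveReplicas >= effectiveMin;
        }

        public static void main(String[] args) {
            // File created with replication 10 on a small cluster; 3 live nodes remain
            // after decommission. Today the node never finishes because 10 replicas can
            // never exist; with the capped minimum, 3 live replicas are enough.
            System.out.println(isSufficientlyReplicatedForDecommission(3, 10, 3));  // true
            // Normal case: replication 3, plenty of nodes, only 2 live replicas -> not done.
            System.out.println(isSufficientlyReplicatedForDecommission(2, 3, 20));  // false
        }
    }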



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
