[ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HDFS-5662:
----------------------------------

    Description: 
A datanode can't be decommissioned if it has a replica that belongs to a file 
with a replication factor larger than the rest of the cluster size.

One way to fix this is to introduce some kind of minimum replication factor 
setting, so that any datanode can be decommissioned regardless of the largest 
replication factor among the files it stores. 



  was:
A datanode can't be decommitted if it has replica belongs to a file with a 
replication factor larger than the rest of the cluster size.

One way to fix this is to have some kind of minimum replication factor setting 
and thus any datanode can be decommitted regardless of the largest replication 
factor it's related to. 




> Can't decommission a DataNode due to a file's replication factor larger than 
> the rest of the cluster size
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>
> A datanode can't be decommissioned if it has a replica that belongs to a file 
> with a replication factor larger than the rest of the cluster size.
> One way to fix this is to introduce some kind of minimum replication factor 
> setting, so that any datanode can be decommissioned regardless of the largest 
> replication factor among the files it stores. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
