[ https://issues.apache.org/jira/browse/HADOOP-3433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lohit Vijayarenu resolved HADOOP-3433.
--------------------------------------
Resolution: Duplicate
This was resolved as part of HADOOP-3720. Closing this as a duplicate. Now, if
you add hostnames to the exclude file and run 'dfsadmin -refreshNodes', the
namenode should decommission those nodes.
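For anyone arriving here on a release with the fix, a minimal sketch of the
workflow (the file path and hostname below are illustrative, not defaults):

    <!-- hadoop-site.xml: point the namenode at an exclude file;
         the path here is an example, there is no fixed default location -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/home/hadoop/conf/dfs.exclude</value>
    </property>

    # Append one hostname per line to the exclude file, then tell the
    # namenode to re-read its include/exclude lists:
    echo "datanode17.example.com" >> /home/hadoop/conf/dfs.exclude
    bin/hadoop dfsadmin -refreshNodes

Note that the namenode reads the dfs.hosts.exclude property itself from its
configuration at startup; -refreshNodes only re-reads the contents of the
file it points to.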
> dfs.hosts.exclude not working as expected
> -----------------------------------------
>
> Key: HADOOP-3433
> URL: https://issues.apache.org/jira/browse/HADOOP-3433
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.3
> Reporter: Christian Kunz
>
> We had to decommission a lot of hosts.
> Therefore, we added them to the file referenced by dfs.hosts.exclude and
> ran 'dfsadmin -refreshNodes'.
> The excluded nodes appeared in the list of dead nodes (while still remaining
> in the list of live nodes), but no replication took place for more than 20
> hours (no NameSystem.addStoredBlock messages in the namenode log).
> A few hours ago we stopped one datanode from that list. After it moved from
> the live node list to the dead node list (a double entry), replication
> started immediately and completed in about 1 hour (~10,000 blocks
> replicated).
> Somehow, mere exclusion does not trigger replication as it should.
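
For completeness: one way to tell decommissioning apart from a node simply
going dead, as described in the report above, is 'dfsadmin -report', which
prints a per-datanode decommission status. A sketch (the address and exact
output format below are illustrative and vary by release):

    bin/hadoop dfsadmin -report
    # ...
    # Name: 10.0.0.17:50010
    # Decommission Status : Decommission in progress
    # ...

A node that was merely stopped shows up as dead without any in-progress
status, which matches the behavior described in this issue.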