[ https://issues.apache.org/jira/browse/HDFS-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978882#action_12978882 ]

Suresh Srinivas commented on HDFS-1547:
---------------------------------------

Scott, if you follow the entire discussion, the idea of adding a third file has 
been dropped. We will retain only two files: 
include - cluster configuration file which lists the datanodes in the cluster
exclude (rename it to decom file?) - file to decommission nodes. These nodes 
are listed (thanks Dhruba) as the last location for a block, to satisfy the 
intent of HADOOP-442.

The reason I think this is better: the include file should rarely change (only 
when new datanodes are added). Compared to that, the exclude file will change 
more frequently.

Current documentation alludes to the relationship between include and exclude, 
describing the behavior when a datanode is in include but not in exclude, in 
exclude but not in include, and in both files. This is no longer necessary: 
the include file lists the datanodes that make up the cluster, and the exclude 
(or decom) file is for decommissioning. I will update the documentation to 
reflect this.
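For reference, the two files are wired up through hdfs-site.xml. A minimal sketch, assuming the standard dfs.hosts / dfs.hosts.exclude property names; the file paths are illustrative placeholders:

```xml
<!-- hdfs-site.xml: the two node-list files discussed above.
     Paths are placeholders; use whatever locations your deployment keeps them in. -->
<property>
  <name>dfs.hosts</name>
  <!-- include: datanodes that make up the cluster -->
  <value>/etc/hadoop/conf/include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- exclude (decom): datanodes to decommission -->
  <value>/etc/hadoop/conf/exclude</value>
</property>
```

After editing either file, the namenode is told to re-read them with `hdfs dfsadmin -refreshNodes`.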

> Improve decommission mechanism
> ------------------------------
>
>                 Key: HDFS-1547
>                 URL: https://issues.apache.org/jira/browse/HDFS-1547
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.23.0
>
>
> Current decommission mechanism driven using exclude file has several issues. 
> This bug proposes some changes in the mechanism for better manageability. See 
> the proposal in the next comment for more details.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
