[ https://issues.apache.org/jira/browse/HDFS-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975869#action_12975869 ]

dhruba borthakur commented on HDFS-1547:
----------------------------------------

Thanks Suresh for addressing this longstanding problem.

1. Does the dfsadmin -refreshNodes command load the excludes file, the includes 
file, and the decom file (all three?) into the namenode's in-memory state? (A 
rough sketch of that reload follows after point 4.)

2. When a node is in the being-decommissioned state, why do you propose that it 
reduce the frequency of block reports and heartbeats? Is this really needed? 
This looks like an optimization to me. A decommissioned node can continue to 
serve read requests, so shouldn't we continue to handle it just like any other 
live node as far as heartbeat and block-report periodicity is concerned? (The 
second sketch below contrasts the two behaviors.)

3. I am fine if you propose that we stop supporting decommissioning via the 
excludes file. It is an incompatible change, but not a catastrophic one, and 
the release notes can clearly spell this out.

4. I really like the fact that you are proposing that decommissioned nodes are 
not automatically shut down.
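
For point 1, here is a minimal sketch of the reload behavior I am asking 
about. It is an illustration only, not the actual namenode code: the class 
name and file-reading details are made up, and the separate decom file is an 
assumption taken from the proposal under discussion.

{code}
// Illustrative sketch only -- not the actual namenode code. Shows the
// behavior point 1 asks about: -refreshNodes re-reading all three host
// files into in-memory sets. Names and file format are assumptions.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class HostListRefresher {
  private Set<String> includes = new HashSet<String>();
  private Set<String> excludes = new HashSet<String>();
  private Set<String> decommissioning = new HashSet<String>();

  // Invoked from the dfsadmin -refreshNodes handler; synchronized so a
  // concurrent reader never sees a half-reloaded state.
  public synchronized void refresh(String includesFile, String excludesFile,
      String decomFile) throws IOException {
    includes = readHostFile(includesFile);
    excludes = readHostFile(excludesFile);
    decommissioning = readHostFile(decomFile);
  }

  // One hostname per line; blank lines and '#' comments are skipped.
  private static Set<String> readHostFile(String path) throws IOException {
    Set<String> hosts = new HashSet<String>();
    BufferedReader in = new BufferedReader(new FileReader(path));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        line = line.trim();
        if (line.length() > 0 && !line.startsWith("#")) {
          hosts.add(line);
        }
      }
    } finally {
      in.close();
    }
    return hosts;
  }
}
{code}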
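
And for point 2, a small sketch contrasting the two heartbeat policies. The 
reduced interval is a made-up value for illustration; only the 3-second 
default heartbeat interval reflects the stock behavior.

{code}
// Illustrative only -- contrasts the proposal as I read it with the
// alternative suggested in point 2. The reduced interval is made up;
// the stock default heartbeat interval is 3 seconds.
public class HeartbeatPolicy {
  static final long NORMAL_INTERVAL_MS = 3 * 1000L;
  static final long REDUCED_INTERVAL_MS = 30 * 1000L; // assumption

  // Proposal as I read it: slow heartbeats down while decommissioning.
  static long proposedInterval(boolean decomInProgress) {
    return decomInProgress ? REDUCED_INTERVAL_MS : NORMAL_INTERVAL_MS;
  }

  // Alternative in point 2: the node still serves reads, so keep the
  // same periodicity as any other live node.
  static long alternativeInterval(boolean decomInProgress) {
    return NORMAL_INTERVAL_MS;
  }
}
{code}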



> Improve decommission mechanism
> ------------------------------
>
>                 Key: HDFS-1547
>                 URL: https://issues.apache.org/jira/browse/HDFS-1547
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.23.0
>
>
> The current decommission mechanism, driven by the exclude file, has several 
> issues. This bug proposes some changes to the mechanism for better 
> manageability. See the proposal in the next comment for more details.
