[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236549#comment-14236549 ]

Ming Ma commented on HDFS-7411:
-------------------------------

Andrew, nice work. It appears I don't need to continue the work on 
https://issues.apache.org/jira/browse/HDFS-7442. Some initial comments.

1. NN memory impact of the additional decomNodeBlocks map. It shouldn't be an 
issue, given that admins won't decommission lots of nodes at the same time, but 
it might be worth calling out a limit here. 100 nodes * 400k blocks per node * 
8 bytes per BlockInfo reference is roughly 320MB extra at the start of the 
decommission process?
2. It appears "dfs.namenode.decommission.blocks.per.node" description should 
refer to "dfs.namenode.decommission.nodes.per.interval" instead.
3. It appears this patch also fixed 
https://issues.apache.org/jira/browse/HDFS-5757 by calling decomNodeBlocks.put 
during refreshNodes.
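As a sanity check on the 320MB figure in comment 1, here is a quick back-of-the-envelope calculation. All inputs (node count, blocks per node, 8 bytes per reference on a 64-bit JVM) are the assumptions from the comment, not measured values:

```java
public class DecomMemoryEstimate {
    public static void main(String[] args) {
        // Assumed worst case from comment 1: 100 nodes decommissioning at once,
        // each with 400k blocks, one 8-byte BlockInfo reference per block.
        long nodes = 100;
        long blocksPerNode = 400_000;
        long bytesPerRef = 8;

        long totalBytes = nodes * blocksPerNode * bytesPerRef;
        System.out.println(totalBytes / (1000 * 1000) + " MB"); // 320 MB
    }
}
```

Note this counts only the references held by decomNodeBlocks; the BlockInfo objects themselves are already resident in the blocks map, so the extra footprint is just the reference arrays.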

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
