[ https://issues.apache.org/jira/browse/HDFS-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15610068#comment-15610068 ]
Chris Nauroth commented on HDFS-11034:
--------------------------------------

Hello [~GergelyNovak].

If the decommissioned host is removed from the {{dfs.hosts.exclude}} file and {{hdfs dfsadmin -refreshNodes}} is then run, the host is no longer considered excluded. If the DataNode process is still running, or is accidentally restarted, that DataNode will re-register with the NameNode, come back into service, and become a candidate for writing new blocks.

I was imagining a new workflow in which the host remains decommissioned, but the administrator has a way to clear the in-memory state tracked for that node.

It's interesting that you brought up the exclude file. Since it is already the existing mechanism for including and excluding hosts, I wonder whether it could be enhanced to cover this use case, so that administrators wouldn't need to learn a new command. I'll think about it more (and comments are welcome from others who have ideas too).

> Provide a command line tool to clear decommissioned DataNode information from
> the NameNode without restarting.
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11034
>                 URL: https://issues.apache.org/jira/browse/HDFS-11034
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Gergely Novák
>
> Information about decommissioned DataNodes remains tracked in the NameNode
> for the entire NameNode process lifetime. Currently, the only way to clear
> this information is to restart the NameNode. This issue proposes to add a
> way to clear this information online, without requiring a process restart.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
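For context, the existing exclude-file decommission mechanism that the comment refers to can be sketched as a short shell sequence. This is a minimal illustration, not part of the issue: the exclude-file path and the hostname `dn1.example.com` are assumptions, and the {{hdfs}} invocations are guarded so the sketch runs even without a Hadoop install.

```shell
#!/bin/sh
# Illustrative sketch of the dfs.hosts.exclude workflow; path and host are assumptions.
EXCLUDE_FILE="${EXCLUDE_FILE:-./dfs.exclude}"   # normally the file named by dfs.hosts.exclude
HOST="dn1.example.com"                          # hypothetical DataNode host

# 1. List the host in the exclude file to begin decommissioning it.
echo "$HOST" >> "$EXCLUDE_FILE"

# 2. Ask the NameNode to re-read its include/exclude files.
command -v hdfs >/dev/null && hdfs dfsadmin -refreshNodes || true

# 3. Removing the entry and refreshing again clears the exclusion -- which is
#    exactly why a still-running DataNode can then re-register and return to
#    service, as described in the comment above.
sed -i "/$HOST/d" "$EXCLUDE_FILE"
command -v hdfs >/dev/null && hdfs dfsadmin -refreshNodes || true
```

The proposal in this issue would instead leave the host excluded while clearing only the NameNode's in-memory record of the decommissioned node.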