The 3 steps you mentioned, were they done while namenode was still running?
I think (I might be wrong as well) that the config is read only once, when the 
namenode is started. So, you should have defined the dfs.hosts.exclude file 
beforehand. 
When you want to refresh, you just update the file already defined in the 
config and call refresh. And this relates to the namenode config. Is it possible to 
test it by defining it in the config, restarting the namenode, and then trying to 
decommission the nodes?
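
A sketch of that procedure might look like the following. The property name 
dfs.hosts.exclude and the dfsadmin command come from this thread; the file path 
and hostname are assumptions for illustration:

```xml
<!-- hadoop-site.xml: set BEFORE the namenode starts, since the
     config appears to be read only once at startup.
     The path below is an assumed example. -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/home/hadoop/conf/excludes</value>
</property>
```

Then, to decommission a node later, add its hostname to that same excludes file 
and run `bin/hadoop dfsadmin -refreshNodes` — no namenode restart should be 
needed at that point, only for the initial definition of the property.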
If you are still seeing issues, would it be possible to open a JIRA 
(https://issues.apache.org/jira/secure/CreateIssue!default.jspa) describing 
the steps to reproduce it?

PS: Even if it works, feel free to open a JIRA for better documentation. I could 
not find any in the HDFS User Guide. 
Thanks,
Lohit

----- Original Message ----
From: Xiangna Li <[EMAIL PROTECTED]>
To: [email protected]
Sent: Wednesday, June 4, 2008 8:54:07 AM
Subject: confusing about decommission in HDFS

hi,

    I tried to decommission a node by following these steps:
      (1) write the hostname of the node to decommission in a file (the exclude file).
      (2) specify that exclude file via the configuration
parameter dfs.hosts.exclude.
      (3) run "bin/hadoop dfsadmin -refreshNodes".
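
     For reference, the steps above amount to something like this (a sketch; the
hostname and file path are made-up examples, and step (2) assumes the property
is added to hadoop-site.xml before a namenode restart):

```shell
# step (1): list the node to decommission in the exclude file
echo "datanode1.example.com" > /home/hadoop/conf/excludes

# step (2): hadoop-site.xml must define dfs.hosts.exclude pointing
#           at /home/hadoop/conf/excludes (done in the namenode config)

# step (3): ask the namenode to re-read the include/exclude lists
bin/hadoop dfsadmin -refreshNodes
```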

     It is surprising that the node is found both in the "Live
Datanodes" list with "In Service" status and in the "Dead
Datanodes" list of the DFS namenode web UI. I copied GB-sized files to
HDFS to confirm whether it is still in service, and its used size is
increasing just like the others'. So could we say the decommission
feature doesn't work?

     The stranger thing: I put some nodes in the include file,
added the configuration, and then ran "refreshNodes", but these nodes and
the excluded node all appear only in the "Dead Datanodes" list. Is this a
bug?


    Thank you in advance for your reply!
