Thanks for the link.
I followed that guide, and now I see rather strange behavior. If I
have dfs.hosts set (I didn't when I wrote my last email) to an empty
file when I start the cluster, nothing happens when I -refreshNodes; I
take it that's expected. If I set it to the hosts I want to keep, ...
Just for reference, these links:
http://wiki.apache.org/hadoop/FAQ#17
http://hadoop.apache.org/core/docs/r0.19.0/hdfs_user_guide.html#DFSAdmin+Command
Decommissioning does not happen all at once.
-refreshNodes just starts the process, but does not complete it.
There could be a lot of blocks on th...
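For anyone following along, the whole workflow sketched in this thread looks roughly like the commands below. This is only a sketch: the hostname and file paths are assumptions, and the exclude-file path must match whatever your hadoop-site.xml's dfs.hosts.exclude actually points at.

```shell
# Sketch of the decommission workflow (hostname and paths are assumptions).

# 1. List the node(s) to retire in the file that dfs.hosts.exclude points to.
echo "datanode3.example.com" >> /etc/hadoop/conf/excludes

# 2. Tell the NameNode to re-read the hosts/exclude files.
#    This returns immediately -- it only *starts* decommissioning.
hadoop dfsadmin -refreshNodes

# 3. Poll until the node's "Decommission Status" in the report flips
#    from "Decommission in progress" to "Decommissioned"; only then is
#    it safe to shut the node down.
hadoop dfsadmin -report
```

The same status is visible in the NameNode web UI (dfshealth.jsp) under the "Admin State" column, as mentioned below in the thread.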
Hey David,
Look at the web interface. Here's mine:
http://dcache-head.unl.edu:8088/dfshealth.jsp
The "admin state" column says "in service" for normal nodes, and
"decommissioning in progress" for the rest. When the decommissioning
is done, the nodes will migrate to the list of "dead nodes".
I'm starting to think I'm doing things wrong.
I have dfs.hosts.exclude set to the absolute path of a file listing what I
want decommissioned, and a dfs.hosts listing those I want to remain
commissioned (it points to the slaves file).
Nothing seems to do anything...
What am I missing?
-- David
Hi,
I'm trying to decommission some nodes. The process I tried to follow is:
1) add them to conf/excluding (hadoop-site points there)
2) invoke hadoop dfsadmin -refreshNodes
This returns immediately, so I thought it was done; I killed off
the cluster and rebooted without those nodes, but th...
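For reference, the hadoop-site.xml wiring discussed in this thread would look something like the fragment below. This is a sketch only: the property names are the standard HDFS keys of that era, but the file paths are assumptions, and both values must be absolute paths readable by the NameNode (an empty or missing dfs.hosts file means all nodes are permitted).

```xml
<!-- Sketch; paths are assumptions, adjust to your deployment. -->
<property>
  <name>dfs.hosts</name>
  <!-- Nodes allowed to connect; here reused from the slaves file. -->
  <value>/etc/hadoop/conf/slaves</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- Nodes to be decommissioned. -->
  <value>/etc/hadoop/conf/excludes</value>
</property>
```

After editing either file, `hadoop dfsadmin -refreshNodes` must be run for the NameNode to pick up the change.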