Jeremy Chow wrote:
Hi list,

 I added a property dfs.hosts.exclude to my conf/hadoop-site.xml. Then
refreshed my cluster with command
                     bin/hadoop dfsadmin -refreshNodes
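For reference, the property in my conf/hadoop-site.xml looks roughly like the sketch below (the path to the excludes file is just an example), and the excludes file lists one slave hostname per line:

                     <property>
                       <name>dfs.hosts.exclude</name>
                       <value>/path/to/conf/excludes</value>
                     </property>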
It turned out that this only shuts down the DataNode process, not the TaskTracker process, on each slave listed in the excludes file.
Presently, decommissioning a TaskTracker on the fly is not available.
The JobTracker web UI still shows that I have not shut down these nodes.
How can I completely decommission these slave nodes on the fly? Can it be
done only by operations on the master node?

I think one way to shut down a TaskTracker is to kill it.
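For instance, on each slave you could run the daemon script, which reads the TaskTracker's pid file and kills the process, or kill the pid yourself (the pid file path below assumes the default HADOOP_PID_DIR of /tmp):

                     bin/hadoop-daemon.sh stop tasktracker
                     # or, equivalently:
                     kill `cat /tmp/hadoop-$USER-tasktracker.pid`

Once the TaskTracker process is gone, the JobTracker should mark the node as lost after its heartbeat expires.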

Thanks
Amareshwari
Thanks,
Jeremy