That should also work if you have set HADOOP_CONF_DIR in your environment. The best way is to
follow the shell script ./bin/start-all.sh, which invokes
./bin/start-dfs.sh, which starts the datanode like this:
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
Yes, you need to start tasktracker as well.
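
On the new slave itself, a minimal sketch (assuming the node already has the same Hadoop build and a config dir pointing at your running namenode/jobtracker; the install path below is just an example) is to run the per-node daemon script:

  cd /path/to/hadoop                                              # hypothetical install location
  bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start datanode
  bin/hadoop-daemon.sh --config $HADOOP_CONF_DIR start tasktracker

(hadoop-daemon.sh starts a daemon on the local node only; hadoop-daemons.sh is the wrapper that ssh's to every host listed in the slaves file.)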
Thanks,
Lohit

----- Original Message ----
From: Keliang Zhao <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Friday, July 11, 2008 4:31:05 PM
Subject: Re: How to add/remove slave nodes on run time

May I ask what is the right command to start a datanode on a slave?

I used a simple one "bin/hadoop datanode &", but I am not sure.

Also, should I start the tasktracker manually as well?

-Kevin


On Fri, Jul 11, 2008 at 3:56 PM, lohit <[EMAIL PROTECTED]> wrote:
> To add new datanodes, use the same Hadoop version already running on your 
> cluster and the right config, and start the datanode on any node. The datanode 
> will read the configs, connect to the namenode, and join the cluster. To remove 
> datanode(s), you can decommission them and, once decommissioning completes, just 
> kill the DataNode process. This is described here: 
> http://wiki.apache.org/hadoop/FAQ#17
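>
> A minimal sketch of that decommission flow (assuming dfs.hosts.exclude in 
> hadoop-site.xml points at an exclude file, here called conf/excludes, and the 
> hostname below is just a placeholder):
>
>   echo "slave-to-remove.example.com" >> conf/excludes   # hypothetical hostname
>   bin/hadoop dfsadmin -refreshNodes                     # namenode starts decommissioning
>   bin/hadoop dfsadmin -report                           # check the node has finished decommissioning
>
> Once the report shows the node as decommissioned, it is safe to kill the 
> DataNode process on that host.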
>
> Thanks,
> Lohit
>
> ----- Original Message ----
> From: Kevin <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Friday, July 11, 2008 3:43:41 PM
> Subject: How to add/remove slave nodes on run time
>
> Hi,
>
> I searched a bit but could not find the answer. What is the right way
> to add (and remove) new slave nodes on run time? Thank you.
>
> -Kevin
>
>
