Hi Adarsh,

Please check start-dfs.sh

You will find

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start 
secondarynamenode

By default, the secondary namenode is started on the hosts listed in the "masters" file.

You can edit this script; for example, change the last line to:

"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode 
start secondarynamenode


Create a file "conf/secondarynamenode"  & list machine name in it. 


Best,


Xiujin Yang.

> Date: Wed, 18 Aug 2010 13:08:03 +0530
> From: adarsh.sha...@orkash.com
> To: core-u...@hadoop.apache.org
> Subject: Configure Secondary Namenode
> 
> I am not able to find any command or parameter in core-default.xml to 
> configure secondary namenode on separate machine.
> I have a 4-node cluster with jobtracker,master,secondary namenode on one 
> machine
> and remaining 3 are slaves.
> Can anyone please tell me.
> 
> Thanks in Advance