The cleanest way to do this repeatably is to use the --config option of the 
launch scripts.
Copy the whole $HADOOP_CONF_DIR to a second directory, and edit hdfs-site.xml 
(and any other config files desired) as you please.
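For example, a minimal sketch of that copy step (the second directory path is just an illustration, and which properties you end up editing will depend on your setup):

cp -r $HADOOP_CONF_DIR /path/to/conf_dir_2
# In /path/to/conf_dir_2/hdfs-site.xml, point the Backup NN at its own
# storage directory (dfs.name.dir) and its own ports so it does not
# collide with the primary NN running on the same host.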
Pass one to the NN, and one to the Backup NN:

$HADOOP_COMMON_HOME/bin/hadoop-daemon.sh --config <conf_dir_1> \
    --script $HADOOP_HDFS_HOME/bin/hdfs start namenode

$HADOOP_COMMON_HOME/bin/hadoop-daemon.sh --config <conf_dir_2> \
    --script $HADOOP_HDFS_HOME/bin/hdfs start namenode -backup

The $HADOOP_HDFS_HOME/bin/hdfs script will also accept the --config option, if 
you prefer not to launch as a daemon.
Invoke bin/hdfs without any command line arguments, and it will show the usage.
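For instance, a foreground launch of the backup instance might look like this (a sketch only; I'm assuming the option goes before the command, as with the other Hadoop scripts):

$HADOOP_HDFS_HOME/bin/hdfs --config <conf_dir_2> namenode -backup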

Note that running two NameNodes on a single server is usually not advised 
because of memory contention.  But if you have a modest-sized namespace and a 
lot of RAM, it may be okay.
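If you do try it, one knob worth setting per instance is the heap size in each copy of hadoop-env.sh, so the two daemons don't fight over RAM; the values below are purely illustrative:

# in <conf_dir_1>/hadoop-env.sh
export HADOOP_HEAPSIZE=2000   # MB for the primary NN (example value)
# in <conf_dir_2>/hadoop-env.sh
export HADOOP_HEAPSIZE=1000   # MB for the Backup NN (example value)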

--Matt


On May 11, 2011, at 1:06 PM, Thanh Do wrote:

Thanks! This works, but we need to play an ugly trick.

Is there any way to run the BackupNode
with a different config directory?

The current way to start the backup node is:
bin/hdfs namenode -backup

But this doesn't provide a way to specify
the config directory.

On Mon, May 9, 2011 at 8:33 PM, Ozcan ILIKHAN <ilik...@cs.wisc.edu> wrote:
Since they are using different ports, I do not think there will be a problem. 
The only problem could arise from the namespace directory. I have not tested 
it, but it should work with the following steps:
1- Start the NameNode.
2- Change dfs.name.dir in hdfs-site.xml (see the snippet below).
3- Start the BackupNode.
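For step 2, the edit would be something along these lines (the path is just a placeholder, and dfs.name.dir is the property name as it appears in 0.21-era configs):

<property>
  <name>dfs.name.dir</name>
  <value>/path/to/backupnode/name/dir</value>  <!-- placeholder path -->
</property>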


From: Thanh Do <than...@cs.wisc.edu>
Sent: Monday, May 09, 2011 8:06 PM
To: hdfs-user@hadoop.apache.org
Subject: running backup node in the same host as namenode

Hi all,

I am using HDFS 0.21.0 and want to run the
Backup Node on the same host as the NameNode
for experimental purposes.

Can anyone shed some light on how to do that?

Many thanks,
Thanh

