I am running Hadoop on a single server. The issue I am running into is that
the start-all.sh script is not starting up the NameNode.
The only way I can start the NameNode is by formatting it, and I end up
losing the data in HDFS.
Does anyone have a solution to this issue?
Kaushal
Is there any error output from start-all.sh?
On 11/9/09 11:10 PM, Kaushal Amin wrote:
I'm going to take a guess at your issue here and say that you used
/tmp as a path for some of your Hadoop conf settings and you have rebooted
lately. The /tmp dir is wiped out on reboot.
Kind regards
Steve Watt
From:
"Kaushal Amin"
To:
Date:
11/10/2009 08:47 AM
Subject:
Hadoop NameNode not starting up
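(If /tmp is indeed the culprit, the usual fix is to point hadoop.tmp.dir at a permanent directory. A minimal sketch of the override in conf/core-site.xml — the path /var/hadoop_tmp is an example, not from this thread:

```xml
<configuration>
  <!-- Base for Hadoop's working directories; by default it lives under /tmp -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop_tmp</value>
  </property>
</configuration>
```
)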
I am seeing the following error in my NameNode log file.
2009-11-11 10:59:59,407 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
2009-11-11 10:59:59,449 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
org.apache.hadoop.hdfs.server.common.Inconsiste
Are you starting Hadoop as a different user?
Maybe the first time you started it as user hadoop, and this time you
are starting it as user root.
Or, as stated above, something is cleaning out your /tmp. Use your
configuration files to have the NameNode write to a permanent place.
Edward
Which configuration file?
On Wed, Nov 11, 2009 at 1:50 PM, Edward Capriolo wrote:
The property you are going to need to set is:

<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table. If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy.</description>
</property>
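(To override the default shown above, an entry would go in conf/hdfs-site.xml. A sketch, assuming a permanent local path — /var/hadoop/dfs/name here is illustrative, not from the thread:

```xml
<configuration>
  <!-- Store the NameNode's metadata outside /tmp so it survives reboots -->
  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/dfs/name</value>
  </property>
</configuration>
```
)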
Actually, you can point hadoop.tmp.dir to some other place, e.g. /opt/hadoop_tmp
or /var/hadoop_tmp. First create the folder there and assign the correct
mode to the hadoop_tmp folder (chmod 777 so all users can use Hadoop),
then change the conf XML file accordingly and run "hadoop namenode
-format".
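(The steps above can be sketched as shell commands. A relative path is used here so the sketch runs anywhere; substitute /opt/hadoop_tmp or /var/hadoop_tmp in practice, and note that -format destroys any existing HDFS metadata:

```shell
# Create a permanent directory for hadoop.tmp.dir (path is an example)
mkdir -p ./hadoop_tmp
# Open it up so every user that runs Hadoop daemons can write here
chmod 777 ./hadoop_tmp
# After pointing hadoop.tmp.dir at this folder in the conf XML,
# reformat the NameNode -- this wipes existing HDFS metadata:
#   bin/hadoop namenode -format
ls -ld ./hadoop_tmp
```
)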