Actually, you can put hadoop.tmp.dir somewhere else, e.g. /opt/hadoop_tmp
or /var/hadoop_tmp. First create the directory there and give it the right
permissions (chmod 777 if every user should be able to run Hadoop), then
change the conf XML file accordingly, run "hadoop namenode -format", and
start it up. Hopefully it will work.

My experience is that leaving hadoop.tmp.dir under /tmp makes Hadoop
unstable, especially for long-running jobs, because many systems
periodically clean out /tmp.
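If you want the namenode metadata pinned down independently of
hadoop.tmp.dir, the dfs.name.dir property Edward mentions below can also be
pointed at permanent directories directly, e.g. (again, the paths are just
examples):

  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop_data/dfs/name,/var/hadoop_data/dfs/name</value>
  </property>

With a comma-delimited list like that, the name table is replicated into
each directory for redundancy.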

Best regards,
Starry

/* Tomorrow is another day. So is today. */


On Thu, Nov 12, 2009 at 04:04, Edward Capriolo <edlinuxg...@gmail.com> wrote:

> The property you are going to need to set is
>
> <property>
>  <name>dfs.name.dir</name>
>  <value>${hadoop.tmp.dir}/dfs/name</value>
>  <description>Determines where on the local filesystem the DFS name node
>      should store the name table.  If this is a comma-delimited list
>      of directories then the name table is replicated in all of the
>      directories, for redundancy. </description>
> </property>
>
>
> If you are running 0.20 or later, the information about the critical
> variables you need to set up to get running is here (give these a good
> read through):
>
> http://hadoop.apache.org/common/docs/current/quickstart.html
> http://hadoop.apache.org/common/docs/current/cluster_setup.html
>
> If you are running a version older than 0.20, you can look up the
> defaults in hadoop-default.xml and make your changes in hadoop-site.xml.
>
> Edward
>
> On Wed, Nov 11, 2009 at 2:55 PM, Kaushal Amin <kaushala...@gmail.com>
> wrote:
> > Which configuration file?
> >
> > On Wed, Nov 11, 2009 at 1:50 PM, Edward Capriolo <edlinuxg...@gmail.com>
> > wrote:
> >
> >> Are you starting Hadoop as a different user?
> >> Maybe the first time you started it as user hadoop, and this time you
> >> are starting it as user root.
> >>
> >> Or, as stated above, something is cleaning out your /tmp. Use your
> >> configuration files to have the namenode write to a permanent place.
> >>
> >> Edward
> >>
> >> On Wed, Nov 11, 2009 at 2:36 PM, Kaushal Amin <kaushala...@gmail.com>
> >> wrote:
> >> > I am seeing following error in my NameNode log file.
> >> >
> >> > 2009-11-11 10:59:59,407 ERROR
> >> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> >> > initialization failed.
> >> > 2009-11-11 10:59:59,449 ERROR
> >> > org.apache.hadoop.hdfs.server.namenode.NameNode:
> >> > org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> >> Directory
> >> > /tmp/hadoop-root/dfs/name is in an inconsistent state: storage
> directory
> >> > does not exist or is not accessible.
> >> >
> >> > Any idea?
> >> >
> >> >
> >> > On Mon, Nov 9, 2009 at 10:10 PM, Kaushal Amin <kaushala...@gmail.com>
> >> wrote:
> >> >
> >> >>  I am running Hadoop on a single server. The issue I am running
> >> >> into is that the start-all.sh script is not starting up the
> >> >> NameNode.
> >> >>
> >> >> The only way I can start the NameNode is by formatting it, and then
> >> >> I end up losing the data in HDFS.
> >> >>
> >> >>
> >> >>
> >> >> Does anyone have a solution to this issue?
> >> >>
> >> >>
> >> >>
> >> >> Kaushal
