Are you starting Hadoop as a different user? Maybe the first time you
started it as user hadoop, and this time you are starting it as root.
The default storage path includes the user name
(/tmp/hadoop-${user.name}/dfs/name), so switching users points the
namenode at a different, empty directory.
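
A quick check (assuming you are on the default hadoop.tmp.dir of
/tmp/hadoop-${user.name}) is to see which per-user directories actually
exist and who owns them:

  # list every per-user hadoop temp dir that still has a namenode dir
  ls -ld /tmp/hadoop-*/dfs/name

  # look at the one the namenode is complaining about
  ls -ld /tmp/hadoop-root/dfs/name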

Or, as stated above, something is cleaning out your /tmp. Use your
configuration files to have the namenode write to a permanent place.
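
Something like the following, inside the <configuration> block of
conf/hdfs-site.xml (conf/hadoop-site.xml on older releases), should do
it. The /var/lib/hadoop paths are just examples; point them at any
directory that survives reboots and is writable by the user running the
daemons:

  <!-- where the namenode keeps the fsimage and edits log -->
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop/dfs/name</value>
  </property>

  <!-- where the datanodes store their blocks -->
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop/dfs/data</value>
  </property>

Stop the daemons, copy whatever is left of /tmp/hadoop-root/dfs into the
new location, and start again -- do not reformat, or you will lose
whatever is still in HDFS.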

Edward

On Wed, Nov 11, 2009 at 2:36 PM, Kaushal Amin <kaushala...@gmail.com> wrote:
> I am seeing the following error in my NameNode log file.
>
> 2009-11-11 10:59:59,407 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> 2009-11-11 10:59:59,449 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory
> /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory
> does not exist or is not accessible.
>
> Any idea?
>
>
> On Mon, Nov 9, 2009 at 10:10 PM, Kaushal Amin <kaushala...@gmail.com> wrote:
>
>>  I am running Hadoop on a single server. The issue I am running into is
>> that the start-all.sh script is not starting up the NameNode.
>>
>> The only way I can start the NameNode is by formatting it, and I end up
>> losing the data in HDFS.
>>
>> Does anyone have a solution to this issue?
>>
>> Kaushal
>>
>
