I found a suggestion to reformat the namenode. In order to do so, I found
it necessary to set the directory permissions to 777. After that:


$ sudo chmod 777 /var/lib/hadoop-0.20/cache/hadoop/dfs/name

$ ./hadoop namenode -format

(successful)

$ ./hadoop-daemon.sh --config $HADOOP/conf start namenode

(success!)
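
As an aside, 777 is probably heavier-handed than necessary. A less
permissive fix (assuming the stock CDH3 layout, where the HDFS daemons
run as the hdfs user) would presumably be to hand the directory back to
hdfs and do the format and start as that user, something like:

$ sudo chown -R hdfs:hdfs /var/lib/hadoop-0.20/cache/hadoop/dfs/name
$ sudo -u hdfs hadoop namenode -format
$ sudo service hadoop-0.20-namenode start

I haven't verified that last service name on CDH3u1, so treat it as a
guess rather than gospel.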


So this leads to a related question: what gives with these permissions?
Maybe this is Cloudera-specific. I am logged in as the cloudera user, but
these directories have owners/groups with a mix of hadoop, mapred, hbase,
hdfs, etc. When I look in /etc/passwd and /etc/group there is no clear
indication that cloudera should be able to access files owned by members
of those groups.
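
Here is roughly what I am checking (purely illustrative; I am guessing at
which groups actually matter here):

$ id cloudera
$ ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs
$ getent group hdfs hadoop mapred hbase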

Where can I find more info about getting the file permissions right when
running the various Hadoop services as the cloudera user?

I am on CDH3u1.

thx


2011/10/25 Stephen Boesch <java...@gmail.com>

>
> I am relatively new here and am starting up CDH3u1 (on VMware).  The
> namenode is not coming up due to the following error:
>
>
> 2011-10-25 22:47:00,547 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Cannot access storage directory /var/lib/hadoop-0.20/cache/hadoop/dfs/name
> 2011-10-25 22:47:00,549 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException:
> Directory /var/lib/hadoop-0.20/cache/hadoop/dfs/name is in an inconsistent
> state: storage directory does not exist or is not accessible.
>         at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:305)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:358)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1224)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
>
> Now, I first noticed there was a "lock" file.  So I sudo rm'ed it and
> retried, but got the same error.  Then, not knowing what files are required
> (if any) to restart, I moved the entire dir and created a new empty one.
> Here are both the new and the 'sav' dirs:
>
>
> cloudera@cloudera-demo:/usr/lib/hadoop/logs$ ll
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name
> total 8
> drwxr-xr-x 2 hdfs hdfs   4096 2011-10-25 23:11 .
> drwxr-xr-x 4 hdfs hadoop 4096 2011-10-25 23:11 ..
> cloudera@cloudera-demo:/usr/lib/hadoop/logs$ ll
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name.sav
> total 20
> drwxr-xr-x 2 hdfs hdfs   4096 2011-01-24 15:24 image
> drwxr-xr-x 2 hdfs hdfs   4096 2011-09-25 11:49 previous.checkpoint
> drwxr-xr-x 2 hdfs hdfs   4096 2011-10-25 21:01 current
> drwxr-xr-x 5 hdfs hdfs   4096 2011-10-25 23:02 .
> drwxr-xr-x 4 hdfs hadoop 4096 2011-10-25 23:11 ..
>
>
> So then, any recommendations on how to proceed?
>
>
> thanks
>
>
>
