Hi Shuja,

Can you paste the output of "ls -lR" for each of your dfs.name.dir directories?
(Hopefully you have more than one configured, with one on an external machine
via NFS, right?)
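
If it helps, here is a rough way to locate the configured directories and list
them. The config path and example directory below are only assumptions based on
the usual CDH layout, so adjust them to wherever your hdfs-site.xml and name
directories actually live:

  # find the configured name directories (config path is an assumption)
  grep -A1 'dfs.name.dir' /etc/hadoop/conf/hdfs-site.xml

  # then run ls -lR on each directory named in the <value> element, e.g.
  ls -lR /data/1/dfs/nn    # hypothetical example path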

Thanks
-Todd

On Fri, Jan 7, 2011 at 4:39 AM, Shuja Rehman <shujamug...@gmail.com> wrote:

> Hi,
>
> After a power failure, the NameNode is not starting and gives the following
> error. Kindly let me know how to resolve it.
> Thanks
>
>
>
> 2011-01-07 04:14:49,666 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = ubuntu/192.168.1.2
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2+737
> STARTUP_MSG:   build = -r 98c55c28258aa6f42250569bd7fa431ac657bdbd; compiled by 'root' on Mon Oct 11 17:21:30 UTC 2010
> ************************************************************/
> 2011-01-07 04:14:50,610 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2011-01-07 04:14:50,670 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2011-01-07 04:14:50,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdfs
> 2011-01-07 04:14:50,908 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2011-01-07 04:14:50,908 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false
> 2011-01-07 04:14:50,931 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 2011-01-07 04:14:52,378 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 2011-01-07 04:14:52,392 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
> 2011-01-07 04:14:52,651 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
> java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readLong(DataInputStream.java:399)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.readCheckpointTime(FSImage.java:571)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.getFields(FSImage.java:562)
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:237)
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:226)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:316)
>        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:343)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:317)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:214)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:394)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1148)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1157)
> 2011-01-07 04:14:52,662 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.EOFException
>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>        at java.io.DataInputStream.readLong(DataInputStream.java:399)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.readCheckpointTime(FSImage.java:571)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.getFields(FSImage.java:562)
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:237)
>        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:226)
>        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:316)
>        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:99)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:343)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:317)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:214)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:394)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1148)
>        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1157)
>
> 2011-01-07 04:14:52,673 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>
>
> --
> Regards
> Shuja-ur-Rehman Baig
> <http://pk.linkedin.com/in/shujamughal>
>



-- 
Todd Lipcon
Software Engineer, Cloudera
