dfs startup error, 0 datanodes in the cluster
---------------------------------------------

                 Key: HADOOP-4912
                 URL: https://issues.apache.org/jira/browse/HADOOP-4912
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.17.2
         Environment: hadoop-site.xml settings: 
fs.default.name hdfs://master.cloud:9000 
mapred.job.tracker hdfs://master.cloud:9001 
hadoop.tmp.dir /home/user/hadoop/tmp/ 
mapred.child.java.opts Xmls512M 
            Reporter: Focus
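
For reference, the hadoop-site.xml entries listed under Environment would look roughly like the sketch below. The child-JVM value -Xmx512M is only my guess at the reported "Xmls512M", and mapred.job.tracker is written as plain host:port, which is what 0.17 normally expects there rather than an hdfs:// URI; treat both as assumptions, not the reporter's exact file.

# Illustrative sketch only, run on the master: writes a conf/hadoop-site.xml
# equivalent to the settings reported above (guessed values flagged inline).
cat > conf/hadoop-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master.cloud:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <!-- 0.17 normally expects host:port here, not an hdfs:// URI -->
    <value>master.cloud:9001</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/user/hadoop/tmp/</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <!-- guess at the reported "Xmls512M" -->
    <value>-Xmx512M</value>
  </property>
</configuration>
EOF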


The NameNode web UI shows:
NameNode 'master.cloud:9000'
Started:  Thu Dec 18 17:10:35 CST 2008  
Version:  0.17.2.1, r684969  
Compiled:  Wed Aug 20 22:29:32 UTC 2008 by oom  
Upgrades:  There are no upgrades in progress.  


Browse the filesystem 
--------------------------------------------------------------------------------

Cluster Summary
Safe mode is ON. The ratio of reported blocks 0.0000 has not reached the 
threshold 0.9990. Safe mode will be turned off automatically.
21 files and directories, 6 blocks = 27 total. Heap Size is 4.94 MB / 992.31 MB 
(0%) 

Capacity : 0 KB 
DFS Remaining : 0 KB 
DFS Used : 0 KB 
DFS Used% : 0 % 
Live Nodes  : 0 
Dead Nodes  : 0 




--------------------------------------------------------------------------------
There are no datanodes in the cluster 
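
The same numbers can be confirmed from the command line on the namenode host with the stock 0.17 dfsadmin tool; with no datanodes registered, none of the 6 blocks can be reported, so the 0.9990 threshold is never reached and the namenode stays in safe mode.

# Run from the Hadoop install directory on master.cloud.
bin/hadoop dfsadmin -report        # capacity, DFS used, and the live/dead datanode list
bin/hadoop dfsadmin -safemode get  # prints whether the namenode is still in safe mode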

The namenode's log shows:
2008-12-18 17:10:35,204 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master.cloud/10.100.4.226
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.17.2.1
STARTUP_MSG:   build = 
https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; 
compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
************************************************************/
2008-12-18 17:10:35,337 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
Initializing RPC Metrics with hostName=NameNode, port=9000
2008-12-18 17:10:35,344 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: 
master.cloud/10.100.4.226:9000
2008-12-18 17:10:35,348 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
Initializing JVM Metrics with processName=NameNode, sessionId=null
2008-12-18 17:10:35,351 INFO org.apache.hadoop.dfs.NameNodeMetrics: 
Initializing NameNodeMeterics using context 
object:org.apache.hadoop.metrics.spi.NullContext
2008-12-18 17:10:35,436 INFO org.apache.hadoop.fs.FSNamesystem: 
fsOwner=user,users,ftp,sshd
2008-12-18 17:10:35,437 INFO org.apache.hadoop.fs.FSNamesystem: 
supergroup=supergroup
2008-12-18 17:10:35,437 INFO org.apache.hadoop.fs.FSNamesystem: 
isPermissionEnabled=true
2008-12-18 17:10:35,576 INFO org.apache.hadoop.fs.FSNamesystem: Finished 
loading FSImage in 181 msecs
2008-12-18 17:10:35,585 INFO org.apache.hadoop.dfs.StateChange: STATE* Safe 
mode ON. 
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe 
mode will be turned off automatically.
2008-12-18 17:10:35,595 INFO org.apache.hadoop.fs.FSNamesystem: Registered 
FSNamesystemStatusMBean
2008-12-18 17:10:35,727 INFO org.mortbay.util.Credential: Checking Resource 
aliases
2008-12-18 17:10:35,870 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2008-12-18 17:10:35,871 INFO org.mortbay.util.Container: Started 
HttpContext[/static,/static]
2008-12-18 17:10:35,871 INFO org.mortbay.util.Container: Started 
HttpContext[/logs,/logs]
2008-12-18 17:10:36,260 INFO org.mortbay.util.Container: Started 
org.mortbay.jetty.servlet.WebApplicationHandler@b60b93
2008-12-18 17:10:36,307 INFO org.mortbay.util.Container: Started 
WebApplicationContext[/,/]
2008-12-18 17:10:36,309 INFO org.mortbay.http.SocketListener: Started 
SocketListener on 0.0.0.0:50070
2008-12-18 17:10:36,310 INFO org.mortbay.util.Container: Started 
org.mortbay.jetty.Server@1bd7848
2008-12-18 17:10:36,310 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up 
at: 0.0.0.0:50070
2008-12-18 17:10:36,310 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2008-12-18 17:10:36,312 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 9000: starting
2008-12-18 17:10:36,316 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 
on 9000: starting
2008-12-18 17:10:36,317 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 
on 9000: starting
2008-12-18 17:10:36,317 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 
on 9000: starting
2008-12-18 17:10:36,320 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 
on 9000: starting
2008-12-18 17:10:36,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 
on 9000: starting
2008-12-18 17:10:36,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 
on 9000: starting
2008-12-18 17:10:36,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 
on 9000: starting
2008-12-18 17:10:36,321 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 
on 9000: starting
2008-12-18 17:10:36,322 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 
on 9000: starting
2008-12-18 17:10:36,374 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 
on 9000: starting


In the slave's (datanode) log, I found something strange:

2008-12-18 17:11:47,627 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = slave3.cloud/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.17.2.1
STARTUP_MSG:   build = 
https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; 
compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
************************************************************/
2008-12-18 17:11:48,267 ERROR org.apache.hadoop.dfs.DataNode: 
java.io.IOException: Incompatible namespaceIDs in 
/home/user/hadoop/tmp/dfs/data: namenode namespaceID = 1098832880; datanode 
namespaceID = 464592288
        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:298)
        at 
org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:142)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:258)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:176)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2795)
        at 
org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2750)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2758)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:2880)

2008-12-18 17:11:48,269 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at slave3.cloud/127.0.0.1
************************************************************/
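
The "Incompatible namespaceIDs" error normally means the namenode was reformatted after this datanode had already initialised its storage: each side records the namespaceID in a small current/VERSION file under its storage directory, and the two no longer match. A minimal way to confirm, and to recover if the blocks on this node are expendable (paths assume the default dfs.name.dir / dfs.data.dir layout under the hadoop.tmp.dir above):

# Compare the recorded namespaceIDs; on a healthy cluster they are identical.
grep namespaceID /home/user/hadoop/tmp/dfs/name/current/VERSION   # on master.cloud (namenode)
grep namespaceID /home/user/hadoop/tmp/dfs/data/current/VERSION   # on slave3.cloud (datanode)

# If the datanode's blocks can be discarded (e.g. the namenode was freshly
# formatted), wiping its storage directory lets it re-register under the
# namenode's current namespaceID:
bin/hadoop-daemon.sh stop datanode
rm -rf /home/user/hadoop/tmp/dfs/data
bin/hadoop-daemon.sh start datanode

If the blocks must be kept, the commonly suggested alternative is to edit the namespaceID line in the datanode's VERSION file to match the namenode's value and restart the datanode.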


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
