Hi! I've followed the Hadoop cluster tutorial on the Hadoop site (Hadoop 1.1.1 on 64-bit machines with OpenJDK 1.6). I've set up 1 namenode, 1 jobtracker, and 3 slaves acting as datanodes and tasktrackers.
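In case it matters, core-site.xml on every node looks roughly like this (an illustrative sketch, not a verbatim copy of my files; host and port taken from the namenode log below). I've tried to make sure the slaves resolve this hostname correctly and don't fall back to localhost:

```xml
<!-- core-site.xml (illustrative; actual file may differ) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ncepspa119:8020</value>
  </property>
</configuration>
```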
I have a problem setting up HDFS on the cluster: the DFS daemons start fine on the namenode and the datanodes, but when I go to http://namenode:50070/ I see this:

Configured Capacity : 0 KB
DFS Used : 0 KB
Non DFS Used : 0 KB
DFS Remaining : 0 KB
DFS Used% : 100 %
DFS Remaining% : 0 %
Live Nodes : 0
Dead Nodes : 0
Decommissioning Nodes : 0
Number of Under-Replicated Blocks : 0

I've read that disk space could be a problem, but I've checked: there are 10 GB of free space on each datanode. Here are the logs of one of the datanodes:

/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = ncepspa117/172.16.140.117
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-02-08 16:36:49,881 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-08 16:36:49,892 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-08 16:36:49,892 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-08 16:36:49,893 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-02-08 16:36:49,985 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-08 16:36:50,328 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-02-08 16:36:50,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer server at 50010
2013-02-08 16:36:50,343 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-02-08 16:36:50,388 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-08 16:36:50,448 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-08 16:36:50,459 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-02-08 16:36:50,460 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-02-08 16:36:50,460 INFO org.mortbay.log: jetty-6.1.26
2013-02-08 16:36:50,729 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2013-02-08 16:36:50,735 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-08 16:36:50,735 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
2013-02-08 16:36:50,756 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-08 16:36:50,758 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
2013-02-08 16:36:50,758 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
2013-02-08 16:36:50,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(ncepspa117.nce.amadeus.net:50010, storageID=, infoPort=50075, ipcPort=50020)

Here are the logs of the namenode:

/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = ncepspa119/172.16.140.119
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:48:11 UTC 2012
************************************************************/
2013-02-08 16:36:48,124 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-08 16:36:48,136 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-08 16:36:48,137 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-08 16:36:48,137 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-02-08 16:36:48,287 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-08 16:36:48,297 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-02-08 16:36:48,298 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-02-08 16:36:48,323 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-02-08 16:36:48,359 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=psporacle
2013-02-08 16:36:48,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-02-08 16:36:48,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-02-08 16:36:48,366 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-02-08 16:36:48,366 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-02-08 16:36:48,390 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-02-08 16:36:48,409 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-02-08 16:36:48,439 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2013-02-08 16:36:48,444 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-02-08 16:36:48,444 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 115 loaded in 0 seconds.
2013-02-08 16:36:48,445 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /mnt/nfs/farequote/hadoop/namenode/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-02-08 16:36:48,452 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 115 saved in 0 seconds.
2013-02-08 16:36:48,523 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits
2013-02-08 16:36:48,524 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits
2013-02-08 16:36:48,590 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-02-08 16:36:48,591 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 248 msecs
2013-02-08 16:36:48,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct = 0.9990000128746033
2013-02-08 16:36:48,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-02-08 16:36:48,592 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension = 30000
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 20 msec
2013-02-08 16:36:48,612 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2013-02-08 16:36:48,613 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2013-02-08 16:36:48,613 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2013-02-08 16:36:48,619 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-08 16:36:48,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2013-02-08 16:36:48,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec processing time, 1 msec clock time, 1 cycles
2013-02-08 16:36:48,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-02-08 16:36:48,620 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-02-08 16:36:48,625 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-02-08 16:36:48,641 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-02-08 16:36:48,643 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort8020 registered.
2013-02-08 16:36:48,644 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort8020 registered.
2013-02-08 16:36:48,649 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: ncepspa119.nce.amadeus.net/172.16.140.119:8020
2013-02-08 16:36:48,712 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-02-08 16:36:48,775 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-02-08 16:36:48,785 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-02-08 16:36:48,798 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-02-08 16:36:48,799 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-02-08 16:36:48,799 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-02-08 16:36:48,799 INFO org.mortbay.log: jetty-6.1.26
2013-02-08 16:36:49,264 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-02-08 16:36:49,265 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-02-08 16:36:49,275 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-02-08 16:36:49,287 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2013-02-08 16:36:49,288 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 8020: starting
2013-02-08 16:36:49,289 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 8020: starting
2013-02-08 16:36:49,289 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020: starting
2013-02-08 16:36:49,289 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 8020: starting
2013-02-08 16:36:49,289 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 8020: starting
2013-02-08 16:36:49,289 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020: starting
2013-02-08 16:36:49,290 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 8020: starting
2013-02-08 16:36:49,290 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 8020: starting
2013-02-08 16:36:49,290 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 8020: starting
2013-02-08 16:36:49,290 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 8020: starting
2013-02-08 16:41:51,831 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 172.16.140.119
2013-02-08 16:41:51,834 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-02-08 16:41:51,834 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits
2013-02-08 16:41:51,835 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits
2013-02-08 16:41:52,316 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2013-02-08 16:41:52,395 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: cd934cec6b693969a84be19a4d9b0d26
2013-02-08 16:41:52,396 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 172.16.140.119
2013-02-08 16:41:52,396 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 27
2013-02-08 16:41:52,397 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits.new
2013-02-08 16:41:52,398 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/mnt/nfs/farequote/hadoop/namenode/current/edits.new
2013-02-08 16:47:30,877 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
2013-02-08 16:47:30,878 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
2013-02-08 16:47:30,878 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-02-08 16:48:19,904 INFO org.apache.hadoop.hdfs.server.common.Storage: Directory /mnt/nfs/farequote/hadoop/namenode/previous does not exist.
2013-02-08 16:48:19,905 INFO org.apache.hadoop.hdfs.server.common.Storage: Finalize upgrade for /mnt/nfs/farequote/hadoop/namenode is not required.

I cannot see where the problem is. Any help would be appreciated.

Thanks,
Riccardo
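One detail that stands out to me: the datanode log ends right at the dnRegistration line with an empty storageID= (which I understand is normal before the first handshake with the namenode), and then nothing follows, as if registration never completes while the namenode keeps reporting 0 datanodes. A quick sanity check I ran on that exact log line (the sed pattern is my own, not part of Hadoop):

```shell
# Pull the storageID field out of the last datanode log line (copied from above)
line='DatanodeRegistration(ncepspa117.nce.amadeus.net:50010, storageID=, infoPort=50075, ipcPort=50020)'
storage_id=$(printf '%s\n' "$line" | sed -n 's/.*storageID=\([^,]*\),.*/\1/p')
if [ -z "$storage_id" ]; then
  # an empty storageID means the namenode never assigned one,
  # i.e. the datanode never finished registering
  echo "storageID is empty - datanode never finished registering"
fi
```

Given that, my current guess is a connectivity or hostname-resolution issue between the datanodes and ncepspa119:8020 rather than a disk-space problem, but I'd appreciate other ideas.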