Check the datanode logs on every system. Most likely the datanodes are not able to connect to the namenode.
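For example, on each datanode host you could run something like the following (this is only a sketch; the log path, config path, and the namenode host/port "master:9000" are assumptions for a default Hadoop 1.x-style install, so adjust them to your setup):

    # look for connection errors in the datanode log
    grep -i "error\|retrying connect" $HADOOP_HOME/logs/hadoop-*-datanode-*.log | tail -20

    # confirm the datanode is pointed at the right namenode
    grep -A1 "fs.default.name" $HADOOP_HOME/conf/core-site.xml

    # check that the namenode RPC port is reachable from this host
    telnet master 9000

    # re-run the report from the namenode to see if more datanodes show up
    hadoop dfsadmin -report

If the log shows the datanode repeatedly retrying its connection to the namenode, it is usually a hostname, firewall, or fs.default.name mismatch.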
-P

On Mon, Jan 2, 2012 at 8:53 AM, Martinus Martinus <martinus...@gmail.com> wrote:

> Hi,
>
> I have set up a Hadoop cluster with 4 nodes. I ran start-all.sh and
> checked that a tasktracker and a datanode are running on every node,
> but when I run hadoop dfsadmin -report it says:
>
> Configured Capacity: 30352158720 (28.27 GB)
> Present Capacity: 3756392448 (3.5 GB)
> DFS Remaining: 3756355584 (3.5 GB)
> DFS Used: 36864 (36 KB)
> DFS Used%: 0%
> Under replicated blocks: 1
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
>
> Name: 192.168.1.1:50010
> Decommission Status : Normal
> Configured Capacity: 30352158720 (28.27 GB)
> DFS Used: 36864 (36 KB)
> Non DFS Used: 26595766272 (24.77 GB)
> DFS Remaining: 3756355584 (3.5 GB)
> DFS Used%: 0%
> DFS Remaining%: 12.38%
> Last contact: Mon Jan 02 11:19:44 CST 2012
>
> Why is only 1 node available in total? How can I fix this problem?
>
> Thanks.