Hi Bharath,

Thanks for your answer. I remember Hadoop has a single point of failure, which is its NameNode. Is there a way to make my Hadoop cluster fault tolerant, even when the master node (NameNode) fails?
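For what it's worth, the Hadoop 1.x line has no built-in NameNode failover; the usual mitigation is to have the NameNode write its metadata to more than one directory (for example a local disk plus an NFS mount) and to run the SecondaryNameNode for periodic checkpoints, so the namespace can be recovered on another machine if the NameNode host dies. A minimal hdfs-site.xml sketch, where the paths below are only example mount points:

<!-- hdfs-site.xml: the NameNode writes its fsimage/edits to every directory listed -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/hadoop/name,/mnt/nfs/hadoop/name</value>
</property>

<!-- directory used by the SecondaryNameNode for periodic checkpoints -->
<property>
  <name>fs.checkpoint.dir</name>
  <value>/var/hadoop/namesecondary</value>
</property>

Note the SecondaryNameNode is only a checkpointing helper, not a hot standby, so recovery after a NameNode crash is still a manual step.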
Thanks and Happy New Year 2012.

On Tue, Jan 3, 2012 at 2:20 AM, Bharath Mundlapudi <mundlap...@gmail.com> wrote:

> You might want to check the datanode logs. Go to the 3 remaining nodes
> which didn't start and restart the datanode.
>
> -Bharath
>
> On Sun, Jan 1, 2012 at 7:23 PM, Martinus Martinus <martinus...@gmail.com> wrote:
>
>> Hi,
>>
>> I have setup a hadoop clusters with 4 nodes and I have start-all.sh and
>> checked in every node, there are tasktracker and datanode run, but when I
>> run hadoop dfsadmin -report it's said like this :
>>
>> Configured Capacity: 30352158720 (28.27 GB)
>> Present Capacity: 3756392448 (3.5 GB)
>> DFS Remaining: 3756355584 (3.5 GB)
>> DFS Used: 36864 (36 KB)
>> DFS Used%: 0%
>> Under replicated blocks: 1
>> Blocks with corrupt replicas: 0
>> Missing blocks: 0
>>
>> -------------------------------------------------
>> Datanodes available: 1 (1 total, 0 dead)
>>
>> Name: 192.168.1.1:50010
>> Decommission Status : Normal
>> Configured Capacity: 30352158720 (28.27 GB)
>> DFS Used: 36864 (36 KB)
>> Non DFS Used: 26595766272 (24.77 GB)
>> DFS Remaining: 3756355584 (3.5 GB)
>> DFS Used%: 0%
>> DFS Remaining%: 12.38%
>> Last contact: Mon Jan 02 11:19:44 CST 2012
>>
>> Why is there only total 1 node available? How to fix this problem?
>>
>> Thanks.
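As a rough sketch of the troubleshooting Bharath suggests (paths assume a plain tarball install under $HADOOP_HOME, so adjust to your layout):

# On each of the 3 nodes missing from "hadoop dfsadmin -report",
# check the tail of the datanode log for errors; a common one is
# "Incompatible namespaceIDs", seen after reformatting the NameNode.
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log

# then restart the datanode on that node
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode

Once the datanodes register, they should show up under "Datanodes available" in dfsadmin -report.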