Hello! I'm not sure I'm on the right mailing list; I had a hard time subscribing to any of the lists that are published, so apologies in advance.
I've just done a pretty standard install on a 4-node Hadoop cluster and redirected all the tmp/fsimage directories to a volume with plenty of space, but I am getting zero capacity and cannot put any kind of file into HDFS. Is this correct behavior? If not, what am I missing?

Version: Hadoop 0.20, Java 1.6 Update 13.

I see no tracebacks, errors, or warnings in any of the logs. The directories are all writable by the hadoop user, and there is no problem with space. All daemons are running and everything looks OK so far: just a zero-capacity filesystem that is already 100% full. Thanks!

$ hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 27648 (27 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 27648 (27 KB)
DFS Used%: 100%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 192.168.1.242:50010
Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 9216 (9 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jul 02 02:03:06 JST 2009

Name: 192.168.1.241:50010
Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 9216 (9 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jul 02 02:03:04 JST 2009

Name: 192.168.1.243:50010
Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 9216 (9 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jul 02 02:03:04 JST 2009

--
Kind regards, BM
Things that are stupid at the beginning rarely end wisely.
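P.S. In case it helps, the writability/space checks I mentioned were along these lines; the path below is only a placeholder for whatever dfs.data.dir is set to on each datanode, not my actual configuration:

```shell
# Placeholder for the directory configured as dfs.data.dir on a datanode
DN_DIR=/tmp/hdfs-dn-check

# Make sure the directory exists and is writable by the current (hadoop) user
mkdir -p "$DN_DIR"
touch "$DN_DIR/.write_test" && echo "writable"

# Confirm the underlying volume actually has free space
df -k "$DN_DIR"
```

On every node the touch succeeds and df shows plenty of free kilobytes, which is why the 0 KB "Configured Capacity" above surprises me.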
