I am taking my first steps learning Hadoop, on an AIX machine. I have followed the installation guide at http://hadoop.apache.org/core/docs/r0.19.0/quickstart.html
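For reference, my conf/hadoop-site.xml is essentially the pseudo-distributed example from that quickstart (these are the stock values from the guide, nothing host-specific):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>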
The Stand-Alone Mode worked just fine. However, I am failing when trying to execute the Pseudo-Distributed Mode. I have carried out the following steps:

1. Update conf/hadoop-site.xml (shown above)
2. Execute bin/hadoop namenode -format
3. Execute bin/start-all.sh
4. Execute bin/hadoop fs -put conf input

The execution of step 2 (formatting the NameNode) was successful, corresponding to the expected result also shown at http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)

The execution of step 3 (starting the single-node daemons) seems to be OK, although the output is not similar to the one produced on Ubuntu Linux; it looks as if the localhost shell exits:

    starting namenode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-namenode-rcc-hrl-lpar-020.haifa.ibm.com.out
    localhost: starting datanode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-datanode-rcc-hrl-lpar-020.haifa.ibm.com.out
    localhost: Hasta la vista, baby          <<== it seems the localhost shell exits here
    localhost: starting secondarynamenode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-secondarynamenode-rcc-hrl-lpar-020.haifa.ibm.com.out
    localhost: Hasta la vista, baby          <<== it seems the localhost shell exits here
    starting jobtracker, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-jobtracker-rcc-hrl-lpar-020.haifa.ibm.com.out
    localhost: starting tasktracker, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-tasktracker-rcc-hrl-lpar-020.haifa.ibm.com.out
    localhost: Hasta la vista, baby          <<== it seems the localhost shell exits here
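A side question about those "Hasta la vista, baby" lines: start-all.sh starts each daemon over ssh to localhost, so I suspect the banner is printed by the remote shell's profile or logout script rather than by Hadoop. A quick check I plan to run (a diagnostic sketch, not part of the guide):

    # If the banner also shows up here, it comes from the shell
    # profile/logout of every ssh session, not from Hadoop:
    ssh localhost date

    # List Java processes to confirm which daemons actually stayed up
    # (expecting NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker):
    ps -ef | grep '[j]ava'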
The execution of step 4 fails: no data is copied to the DFS input directory, and I receive this exception:

    09/02/18 12:14:24 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hdpuser/input/masters could only be replicated to 0 nodes, instead of 1
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1270)
            at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)
            at org.apache.hadoop.ipc.Client.call(Client.java:696)
            at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
            at $Proxy0.addBlock(Unknown Source)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
            at java.lang.reflect.Method.invoke(Method.java:599)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
            at $Proxy0.addBlock(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)
    09/02/18 12:14:24 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/hdpuser/input/masters retries left 4
    .....
    .....
    09/02/18 12:14:30 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
    09/02/18 12:14:30 WARN hdfs.DFSClient: Could not get block locations. Aborting...
    put: java.io.IOException: File /user/hdpuser/input/masters could only be replicated to 0 nodes, instead of 1
    Exception closing file /user/hdpuser/input/masters
    java.io.IOException: Filesystem closed
            at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
            at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
            at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
            at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
            at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
            at org.apache.hadoop.fs.FsShell.close(FsShell.java:1842)
            at org.apache.hadoop.fs.FsShell.main(FsShell.java:1856)

The NameNode web UI shows:

    11 files and directories, 0 blocks = 11 total. Heap Size is 13.56 MB / 1000 MB (1%)
    Configured Capacity : 960 MB
    DFS Used            : 8 KB
    Non DFS Used        : 959.96 MB
    DFS Remaining       : 35 KB
    DFS Used%           : 0 %
    DFS Remaining%      : 0 %
    Live Nodes          : 1
    Dead Nodes          : 0

Notice that the DFS is deployed under the /tmp directory. Executing df -m shows that 35% of /tmp is used, i.e. only about 626 MB are free, which means the DFS has been configured onto a filesystem without enough available storage!
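Assuming the undersized /tmp really is the problem, is relocating Hadoop's working directory to a larger filesystem and reformatting the right fix? A sketch of what I intend to try; the target path /home/hdpuser/hadoop-data is hypothetical, any filesystem with enough free space should do:

    <!-- Addition to conf/hadoop-site.xml. hadoop.tmp.dir defaults to
         /tmp/hadoop-${user.name}, and dfs.name.dir / dfs.data.dir are
         derived from it, so moving it relocates the whole DFS. -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hdpuser/hadoop-data</value>
    </property>

and then rebuild the (still empty) DFS from scratch:

    bin/stop-all.sh                  # stop all daemons
    bin/hadoop namenode -format      # reformat the NameNode at the new location
    bin/start-all.sh                 # restart the daemons
    bin/hadoop fs -put conf input    # retry copying the files into DFS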