Refer to the following fix: Hadoop will not work under AIX without it.  

https://issues.apache.org/jira/browse/HADOOP-4546

Bill

-----Original Message-----
From: work.av...@gmail.com [mailto:work.av...@gmail.com] On Behalf Of
Aviad sela
Sent: Wednesday, February 18, 2009 12:14 PM
To: Hadoop Users Support
Subject: Getting Started with AIX machines

I am attempting my first steps learning Hadoop on an AIX machine.

I have followed the installation description:
http://hadoop.apache.org/core/docs/r0.19.0/quickstart.html

The Stand-Alone Mode worked just fine.

However, I am failing when trying to execute the Pseudo-Distributed Mode.
I have carried out the following steps:


   1. update conf/hadoop-site.xml (a sample configuration is sketched
      below, after this list)
   2. exec bin/hadoop namenode -format
   3. exec bin/start-all.sh
   4. exec bin/hadoop fs -put conf input
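
For reference, this is the minimal conf/hadoop-site.xml I understand the
r0.19.0 quickstart to be describing for pseudo-distributed mode (the
localhost ports are the quickstart's examples; adjust to your environment):

        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:9000</value>
          </property>
          <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>
          </property>
          <property>
            <name>dfs.replication</name>
            <value>1</value>
          </property>
        </configuration>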


   - The execution of step 2 (formatting the NameNode) was successful,
     matching the expected result also shown in
     http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)



   - The execution of step 3 (starting the single-node servers) seems to
     be OK, although the output is not similar to the one produced on
     Ubuntu Linux. It seems that the localhost shell is exited (a quick
     log check is sketched after this output):

           starting namenode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-namenode-rcc-hrl-lpar-020.haifa.ibm.com.out
           localhost: starting datanode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-datanode-rcc-hrl-lpar-020.haifa.ibm.com.out
           localhost: Hasta la vista, baby        <<==== it seems that the localhost shell exits
           localhost: starting secondarynamenode, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-secondarynamenode-rcc-hrl-lpar-020.haifa.ibm.com.out
           localhost: Hasta la vista, baby        <<==== it seems that the localhost shell exits
           starting jobtracker, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-jobtracker-rcc-hrl-lpar-020.haifa.ibm.com.out
           localhost: starting tasktracker, logging to /usr/hadoop/hadoop-0.19.0/bin/../logs/hadoop-hdpuser-tasktracker-rcc-hrl-lpar-020.haifa.ibm.com.out
           localhost: Hasta la vista, baby        <<==== it seems that the localhost shell exits
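
A quick way to confirm whether the daemons actually came up despite those
messages (plain ps, since the IBM JDK on AIX may not ship jps; the daemons
also write a .log file next to each .out shown above):

           # list the running Hadoop Java processes
           ps -ef | grep [j]ava
           # inspect the datanode log for startup errors
           tail -50 /usr/hadoop/hadoop-0.19.0/logs/hadoop-hdpuser-datanode-rcc-hrl-lpar-020.haifa.ibm.com.log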

   - The execution of step 4 fails: no data is copied to the DFS input
     directory, and I receive the exception below (a datanode status
     check is sketched after the stack trace):

        09/02/18 12:14:24 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hdpuser/input/masters could only be replicated to 0 nodes, instead of 1
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1270)
                at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:351)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
                at java.lang.reflect.Method.invoke(Method.java:599)
                at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:452)
                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:892)

                at org.apache.hadoop.ipc.Client.call(Client.java:696)
                at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
                at $Proxy0.addBlock(Unknown Source)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:45)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
                at java.lang.reflect.Method.invoke(Method.java:599)
                at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
                at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
                at $Proxy0.addBlock(Unknown Source)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2815)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2697)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1997)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2183)

        09/02/18 12:14:24 WARN hdfs.DFSClient: NotReplicatedYetException sleeping /user/hdpuser/input/masters retries left 4
        .....
        .....
        09/02/18 12:14:30 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
        09/02/18 12:14:30 WARN hdfs.DFSClient: Could not get block locations. Aborting...
        put: java.io.IOException: File /user/hdpuser/input/masters could only be replicated to 0 nodes, instead of 1
        Exception closing file /user/hdpuser/input/masters
        java.io.IOException: Filesystem closed
                at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
                at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
                at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
                at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
                at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
                at org.apache.hadoop.fs.FsShell.close(FsShell.java:1842)
                at org.apache.hadoop.fs.FsShell.main(FsShell.java:1856)
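
As I understand it, "could only be replicated to 0 nodes" means the
namenode sees no live datanode with usable space. A way to check this from
the shell (dfsadmin -report is a stock command in 0.19 and prints
per-datanode capacity, which matters given the /tmp figures below):

        bin/hadoop dfsadmin -report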



   - The NameNode web UI shows:

        11 files and directories, 0 blocks = 11 total.
        Heap Size is 13.56 MB / 1000 MB (1%)
        Configured Capacity : 960 MB
        DFS Used            : 8 KB
        Non DFS Used        : 959.96 MB
        DFS Remaining       : 35 KB
        DFS Used%           : 0 %
        DFS Remaining%      : 0 %
        Live Nodes          : 1
        Dead Nodes          : 0

Notice that the DFS is deployed under the /tmp directory.
Executing: > df -m
shows that 35% of /tmp is already used (i.e. free space is only 626 MB),
which means the DFS is configured onto storage that is not actually
available! (A sketch of moving the DFS off /tmp follows.)
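
One likely workaround (my assumption, not yet verified on AIX; the path
/usr/hadoop/tmp is hypothetical, any filesystem with real free space will
do): point hadoop.tmp.dir, which the DFS directories default under, away
from /tmp in conf/hadoop-site.xml, then reformat and restart:

        <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/hadoop/tmp</value>  <!-- hypothetical location with free space -->
        </property>

        bin/stop-all.sh
        bin/hadoop namenode -format
        bin/start-all.sh

Reformatting wipes HDFS content, but since nothing has been copied in yet
that should be harmless here.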
