Hi Neo,

See this bug:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=560044

as well as the discussion here:

http://issues.apache.org/jira/browse/HADOOP-6056
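
If it is the same issue (Debian's net.ipv6.bindv6only=1 setting breaking
Java connections to 127.0.0.1), the workarounds discussed there are to
revert the sysctl or to make the JVM prefer IPv4. A minimal sketch,
assuming a stock Debian squeeze box and the standard conf/hadoop-env.sh
hook:

# check whether the problematic setting is active (1 = affected)
sysctl net.ipv6.bindv6only

# either turn it off system-wide...
sudo sysctl -w net.ipv6.bindv6only=0

# ...or make the JVM prefer IPv4 sockets, e.g. in conf/hadoop-env.sh
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"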

Thanks
-Todd

On Wed, Feb 24, 2010 at 9:16 AM, neo anderson
<javadeveloper...@yahoo.co.uk> wrote:
>
> While running the example program ('hadoop jar *example*jar pi 2 2'), I
> encounter a 'Network is unreachable' problem (at
> $HADOOP_HOME/logs/userlogs/.../stderr), as below:
>
> Exception in thread "main" java.io.IOException: Call to /127.0.0.1:<port> failed on local exception: java.net.SocketException: Network is unreachable
>        at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
>        at org.apache.hadoop.ipc.Client.call(Client.java:742)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>        at org.apache.hadoop.mapred.$Proxy0.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
>        at org.apache.hadoop.mapred.Child.main(Child.java:64)
> Caused by: java.net.SocketException: Network is unreachable
>        at sun.nio.ch.Net.connect(Native Method)
>        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:507)
>        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>        at org.apache.hadoop.ipc.Client.getConnection(Client.java:859)
>        at org.apache.hadoop.ipc.Client.call(Client.java:719)
>        ... 6 more
>
> Initially it seemed to me to be a firewall issue, but after disabling
> iptables the example program still does not run correctly.
>
> Commands used to disable iptables:
> iptables -P INPUT ACCEPT
> iptables -P FORWARD ACCEPT
> iptables -P OUTPUT ACCEPT
> iptables -X
> iptables -F
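>
> To confirm the rules are actually flushed, they can be listed (standard
> iptables listing command, shown here only as a sketch):
>
> iptables -L -n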
>
> When starting up the hadoop cluster (start-dfs.sh and start-mapred.sh),
> it looks like the namenode started correctly, because the namenode log
> contains:
>
> ... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.5:10010
> ... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.4:10010
> ... org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/111.222.333.3:10010
>
> Also, in the datanode log:
> ...
> INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /111.222.333.4:34539, dest: /111.222.333.5:50010, bytes: 4, op: HDFS_WRITE, ...
> INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /111.222.333.4:51610, dest: /111.222.333.3:50010, bytes: 118, op: HDFS_WRITE, cliID: ...
> ...
>
> The command 'hadoop fs -ls' lists the data uploaded to HDFS without a
> problem, and jps shows the necessary processes are running (a couple of
> quick checks are sketched after the process lists below).
>
> name node:
> 7710 SecondaryNameNode
> 7594 NameNode
> 8038 JobTracker
>
> data nodes:
> 3181 TaskTracker
> 3000 DataNode
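>
> Quick checks one could run on a task node (a sketch only; <port> stands
> for whatever port appears in the stderr trace above):
>
> ip addr show lo              # is the loopback up with 127.0.0.1?
> netstat -tln | grep <port>   # is anything actually listening there?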
>
> Environment: Debian squeeze, hadoop 0.20.1, jdk 1.6.x
>
> I searched online and couldn't find the root cause. Is there anything
> that could cause such an issue, or any place where I could check for
> more detailed information?
>
> Thanks for the help.
>
