Have you ensured your firewall is off on all instances, or
appropriately configured if you need it?

$ service iptables stop

It is turned on by default on most distributions. CentOS 6, for
example, enables it by default with a set of rules.
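For reference, here is one way to check and temporarily disable the firewall on each node, and then verify that the NameNode port is reachable from the slaves. The host and port below are taken from your core-site.xml; the ufw commands apply only if you are on Ubuntu with ufw enabled, and all of these need root:

```shell
# Inspect the current iptables rules on each node
iptables -L -n

# CentOS/RHEL: stop the firewall for testing, and keep it off across reboots
service iptables stop
chkconfig iptables off

# Ubuntu: check/disable ufw instead, if it is in use
ufw status
ufw disable

# Then verify the NameNode RPC port is reachable from each slave
# (fs.default.name in core-site.xml is hdfs://192.168.164.136:9100)
telnet 192.168.164.136 9100
```

If you do need the firewall on, re-enable it with rules that open the Hadoop ports in use (9100 and 9101 here, plus the default web/data ports such as 50010, 50060, and 50070) rather than leaving it off permanently.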

On Thu, Feb 23, 2012 at 2:33 PM, tgh <guanhua.t...@ia.ac.cn> wrote:
> Hi
>
>         I setup hadoop with hadoop 0.20.2
>
>
>
>         I use three virtual machines on vmware,
>
>         The three virtual machine could ssh with each other,
>
> An ERROR is raised: the tasktrackers on slaves 192.168.164.137 and 192.168.164.138
> could not connect to the master, while the tasktracker on 192.168.164.136 shows
> no error.
>
>
>
> Could you help me?
>
>
>
> The conf files are set as follows:
>
> root@ubuntu:/home/hadoop-0.20.2/conf# cat masters
>
> 192.168.164.136
>
> root@ubuntu:/home/hadoop-0.20.2/conf# cat slaves
>
> 192.168.164.136
>
> 192.168.164.137
>
> 192.168.164.138
>
> root@ubuntu:/home/hadoop-0.20.2/conf#
>
> root@ubuntu:/home/hadoop-0.20.2/conf# cat core-site.xml
>
> <?xml version="1.0"?>
>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
>
> <!-- Put site-specific property overrides in this file. -->
>
>
>
> <configuration>
>
>  <property>
>
>    <name>fs.default.name</name>
>
>    <value>hdfs://192.168.164.136:9100</value>
>
>  </property>
>
>  <property>
>
>    <name>hadoop.tmp.dir</name>
>
>    <value>/home/hadoop-0.20.2/tmp/</value>
>
>  </property>
>
>  <property>
>
>    <name>dfs.replication</name>
>
>    <value>1</value>
>
>  </property>
>
>  <!-- property>
>
>    <name>mapred.child.java.opts</name>
>
>    <value>-Xmx128m</value>
>
>  </property>
>
>  <property>
>
>    <name>dfs.block.size</name>
>
>    <value>5120000</value>
>
>    <description>The default block size for new files.</description>
>
>  </property -->
>
> </configuration>
>
>
>
> root@ubuntu:/home/hadoop-0.20.2/conf# cat mapred-site.xml
>
> <?xml version="1.0"?>
>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
>
>
> <!-- Put site-specific property overrides in this file. -->
>
>
>
> <configuration>
>
>  <property>
>
>    <name>mapred.job.tracker</name>
>
>    <value>192.168.164.136:9101</value>
>
>  </property>
>
> </configuration>
>
>
>
>
>
>
>
> Now the ERROR is raised: the tasktrackers on slaves 192.168.164.137 and
> 192.168.164.138 could not connect to the master, while the tasktracker on
> 192.168.164.136 shows no error.
>
>
>
> This is the log on 192.168.164.138:
>
> root@ubuntu:/home/hadoop-0.20.2/logs#
>
> root@ubuntu:/home/hadoop-0.20.2/logs# cat hadoop-root-tasktracker-ubuntu.log
>
>
> 2012-02-23 00:44:10,851 INFO org.apache.hadoop.mapred.TaskTracker:
> STARTUP_MSG:
>
> /************************************************************
>
> STARTUP_MSG: Starting TaskTracker
>
> STARTUP_MSG:   host = ubuntu/127.0.1.1
>
> STARTUP_MSG:   args = []
>
> STARTUP_MSG:   version = 0.20.2
>
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>
> ************************************************************/
>
> 2012-02-23 00:44:16,080 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
>
> 2012-02-23 00:44:16,199 INFO org.apache.hadoop.http.HttpServer: Port
> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 50060
>
> 2012-02-23 00:44:16,205 INFO org.apache.hadoop.http.HttpServer:
> listener.getLocalPort() returned 50060
> webServer.getConnectors()[0].getLocalPort() returned 50060
>
> 2012-02-23 00:44:16,205 INFO org.apache.hadoop.http.HttpServer: Jetty bound
> to port 50060
>
> 2012-02-23 00:44:16,205 INFO org.mortbay.log: jetty-6.1.14
>
> 2012-02-23 00:45:08,741 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50060
>
> 2012-02-23 00:45:08,808 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=TaskTracker, sessionId=
>
> 2012-02-23 00:45:08,848 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=TaskTracker, port=49689
>
> 2012-02-23 00:45:08,909 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
>
> 2012-02-23 00:45:08,912 INFO org.apache.hadoop.mapred.TaskTracker:
> TaskTracker up at: localhost/127.0.0.1:49689
>
> 2012-02-23 00:45:08,912 INFO org.apache.hadoop.mapred.TaskTracker: Starting
> tracker tracker_ubuntu:localhost/127.0.0.1:49689
>
> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 49689: starting
>
> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 49689: starting
>
> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 49689: starting
>
> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 49689: starting
>
> 2012-02-23 00:45:08,919 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 49689: starting
>
> 2012-02-23 00:47:53,638 INFO org.apache.hadoop.mapred.TaskTracker:  Using
> MemoryCalculatorPlugin :
> org.apache.hadoop.util.LinuxMemoryCalculatorPlugin@cafb56
>
> 2012-02-23 00:47:53,641 INFO org.apache.hadoop.mapred.TaskTracker: Starting
> thread: Map-events fetcher for all reduce tasks on
> tracker_ubuntu:localhost/127.0.0.1:49689
>
> 2012-02-23 00:47:53,646 WARN org.apache.hadoop.mapred.TaskTracker:
> TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is
> disabled.
>
> 2012-02-23 00:47:53,647 INFO org.apache.hadoop.mapred.IndexCache: IndexCache
> created with max memory = 10485760
>
> 2012-02-23 00:47:55,110 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 0 time(s).
>
> 2012-02-23 00:47:56,112 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 1 time(s).
>
> 2012-02-23 00:47:57,114 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 2 time(s).
>
> 2012-02-23 00:47:58,116 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 3 time(s).
>
> 2012-02-23 00:47:59,118 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 4 time(s).
>
> 2012-02-23 00:48:00,120 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 5 time(s).
>
> 2012-02-23 00:48:01,122 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 6 time(s).
>
> 2012-02-23 00:48:02,124 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 7 time(s).
>
> 2012-02-23 00:48:03,126 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 8 time(s).
>
> 2012-02-23 00:48:04,130 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: ubuntu.local/192.168.164.138:9100. Already tried 9 time(s).
>
> 2012-02-23 00:48:04,132 ERROR org.apache.hadoop.mapred.TaskTracker: Caught
> exception: java.net.ConnectException: Call to
> ubuntu.local/192.168.164.138:9100 failed on connection exception:
> java.net.ConnectException: Connection refused
>
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>
>         at $Proxy5.getProtocolVersion(Unknown Source)
>
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>
>         at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>
>         at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>
>         at
> org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1033)
>
>         at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1720)
>
>         at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
>
> Caused by: java.net.ConnectException: Connection refused
>
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>
>         at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
>
>         at
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>
>         at
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>
>         at
> org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:720)
>
>         ... 15 more
>



-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
