You should be able to connect via ssh from any node to any other node in your cluster 
without typing a password.
Just type: ssh <host name/host ip>. If you are asked to enter a password, 
there is a problem in the configuration. Check this for all nodes.
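
If passwordless login does not work yet, a minimal sketch of setting it up with OpenSSH (the <user> and <host name/host ip> placeholders are just examples, adjust to your environment):

$ ssh-keygen -t rsa                        # on the source node: create a key pair (default path, empty passphrase)
$ ssh-copy-id <user>@<host name/host ip>   # append the public key to the target node's authorized_keys
$ ssh <host name/host ip>                  # should now log in without asking for a password

Repeat the check for every pair of nodes, in both directions.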
Vladi

Sent from my iPad

On 24 Feb 2012, at 03:30, "tgh" <guanhua.t...@ia.ac.cn> wrote:

> Hi
>    Thank you. I have set up passwordless SSH between the virtual machines,
>    but I am still confused by this error.
>    Could you help me?
> 
> 
> 
> 
> -----Original Message-----
> From: common-user-return-32880-guanhua.tian=ia.ac...@hadoop.apache.org 
> [mailto:common-user-return-32880-guanhua.tian=ia.ac...@hadoop.apache.org] On Behalf Of 
> Vladislav Feigin
> Sent: 23 February 2012 19:58
> To: common-user@hadoop.apache.org
> Subject: Re: Reply: TaskTracker Error
> 
> Hi
> Also check that passwordless SSH is configured properly between the nodes.
> Vladi
> 
> Sent from my iPad
> 
> On 23 Feb 2012, at 12:10, "tgh" <guanhua.t...@ia.ac.cn> wrote:
> 
>> Hi
>>   I use Ubuntu, and the firewall seems to be off on all three virtual machines.
>>   How do I solve this ERROR? Could you help me?
>> 
>> root@ubuntu:/home/hadoop-0.20.2# service iptables status
>> iptables: unrecognized service
>> 
>> root@ubuntu:/home/hadoop-0.20.2# ufw disable
>> Firewall stopped and disabled on system startup
>> root@ubuntu:/home/hadoop-0.20.2# ufw status
>> Status: inactive
>> root@ubuntu:/home/hadoop-0.20.2#
>> 
>> 
>> These are the ports opened by Java processes on the master 192.168.164.136:
>> root@ubuntu:~# 
>> root@ubuntu:~# netstat -nap|grep java
>> tcp6       0      0 :::41095                :::*                    LISTEN   
>>    4000/java       
>> tcp6       0      0 :::50090                :::*                    LISTEN   
>>    4222/java       
>> tcp6       0      0 :::50060                :::*                    LISTEN   
>>    4492/java       
>> tcp6       0      0 :::42316                :::*                    LISTEN   
>>    4297/java       
>> tcp6       0      0 192.168.164.136:9100    :::*                    LISTEN   
>>    3800/java       
>> tcp6       0      0 192.168.164.136:9101    :::*                    LISTEN   
>>    4297/java       
>> tcp6       0      0 :::50030                :::*                    LISTEN   
>>    4297/java       
>> tcp6       0      0 :::33297                :::*                    LISTEN   
>>    3800/java       
>> tcp6       0      0 127.0.0.1:60722         :::*                    LISTEN   
>>    4492/java       
>> tcp6       0      0 :::50070                :::*                    LISTEN   
>>    3800/java       
>> tcp6       0      0 :::50010                :::*                    LISTEN   
>>    4000/java       
>> tcp6       0      0 :::50075                :::*                    LISTEN   
>>    4000/java       
>> tcp6       0      0 :::35262                :::*                    LISTEN   
>>    4222/java       
>> tcp6       0      0 :::50020                :::*                    LISTEN   
>>    4000/java       
>> tcp6       0      0 192.168.164.136:58531   192.168.164.136:9101    
>> ESTABLISHED 4492/java       
>> tcp6       0      0 192.168.164.136:9100    192.168.164.136:37493   
>> ESTABLISHED 3800/java       
>> tcp6       0      0 192.168.164.136:37490   192.168.164.136:9100    
>> ESTABLISHED 4297/java       
>> tcp6       0      0 192.168.164.136:9100    192.168.164.137:53796   
>> ESTABLISHED 3800/java       
>> tcp6       0      0 192.168.164.136:9100    192.168.164.136:37490   
>> ESTABLISHED 3800/java       
>> tcp6       0      0 192.168.164.136:9100    192.168.164.138:40077   
>> ESTABLISHED 3800/java       
>> tcp6       0      0 192.168.164.136:37493   192.168.164.136:9100    
>> ESTABLISHED 4000/java       
>> unix  2      [ ]         STREAM     CONNECTED     21015    4492/java         
>>   
>> unix  2      [ ]         STREAM     CONNECTED     20907    4297/java         
>>   
>> unix  2      [ ]         STREAM     CONNECTED     20204    4222/java         
>>   
>> unix  2      [ ]         STREAM     CONNECTED     19574    4000/java         
>>   
>> unix  2      [ ]         STREAM     CONNECTED     19293    3800/java         
>>   
>> root@ubuntu:~#
>> 
>> This is on the slave 192.168.164.137:
>> root@ubuntu:/home/hadoop-0.20.2#
>> root@ubuntu:/home/hadoop-0.20.2# netstat -nap|grep java
>> tcp6       0      0 :::50060                :::*                    LISTEN   
>>    13130/java      
>> tcp6       0      0 127.0.0.1:40112         :::*                    LISTEN   
>>    13130/java      
>> tcp6       0      0 :::35703                :::*                    LISTEN   
>>    12949/java      
>> tcp6       0      0 :::50010                :::*                    LISTEN   
>>    12949/java      
>> tcp6       0      0 :::50075                :::*                    LISTEN   
>>    12949/java      
>> tcp6       0      0 :::50020                :::*                    LISTEN   
>>    12949/java      
>> tcp6       0      0 192.168.164.137:53796   192.168.164.136:9100    
>> ESTABLISHED 12949/java      
>> tcp6       0      0 192.168.164.137:43216   192.168.164.136:9101    
>> ESTABLISHED 13130/java      
>> unix  2      [ ]         STREAM     CONNECTED     51464    13130/java        
>>   
>> unix  2      [ ]         STREAM     CONNECTED     49229    12949/java        
>>   
>> root@ubuntu:/home/hadoop-0.20.2#
>> 
>> 
>> 
>> 
>> 
>> -----Original Message-----
>> From: common-user-return-32874-guanhua.tian=ia.ac...@hadoop.apache.org 
>> [mailto:common-user-return-32874-guanhua.tian=ia.ac.cn@hadoop.apache.org] On Behalf Of Harsh J
>> Sent: 23 February 2012 17:31
>> To: common-user@hadoop.apache.org
>> Subject: Re: TaskTracker Error
>> 
>> Have you ensured your firewall is off on all instances, or appropriately 
>> configured if you need them?
>> 
>> $ service iptables stop
>> 
>> It is turned on by default on most distributions. I know CentOS6 turns it on 
>> by default, with some rules.
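>> 
>> On Ubuntu the firewall frontend is usually ufw rather than the iptables service; a rough equivalent check (just a sketch, adjust to your distribution) is:
>> 
>> $ sudo ufw status verbose   # should report "inactive" if no firewall is running
>> $ sudo iptables -L -n       # list the raw rules; empty chains with policy ACCEPT block nothing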
>> 
>> On Thu, Feb 23, 2012 at 2:33 PM, tgh <guanhua.t...@ia.ac.cn> wrote:
>>> Hi
>>> 
>>>       I set up Hadoop 0.20.2.
>>> 
>>> 
>>> 
>>>       I use three virtual machines on VMware.
>>> 
>>>       The three virtual machines can ssh to each other.
>>> 
>>> An ERROR is raised: the TaskTracker on slaves 192.168.164.137 and
>>> 192.168.164.138 could not connect to the master, while the TaskTracker on
>>> 192.168.164.136 shows no error.
>>> 
>>> 
>>> 
>>> Could you help me?
>>> 
>>> 
>>> 
>>> The conf files are set as follows:
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/conf# cat masters
>>> 
>>> 192.168.164.136
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/conf# cat slaves
>>> 
>>> 192.168.164.136
>>> 
>>> 192.168.164.137
>>> 
>>> 192.168.164.138
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/conf#
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/conf# cat core-site.xml
>>> 
>>> <?xml version="1.0"?>
>>> 
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>> 
>>> 
>>> 
>>> <!-- Put site-specific property overrides in this file. -->
>>> 
>>> 
>>> 
>>> <configuration>
>>> 
>>> <property>
>>> 
>>>  <name>fs.default.name</name>
>>> 
>>>  <value>hdfs://192.168.164.136:9100</value>
>>> 
>>> </property>
>>> 
>>> <property>
>>> 
>>>  <name>hadoop.tmp.dir</name>
>>> 
>>>  <value>/home/hadoop-0.20.2/tmp/</value>
>>> 
>>> </property>
>>> 
>>> <property>
>>> 
>>>  <name>dfs.replication</name>
>>> 
>>>  <value>1</value>
>>> 
>>> </property>
>>> 
>>> <!-- property>
>>> 
>>>  <name>mapred.child.java.opts</name>
>>> 
>>>  <value>-Xmx128m</value>
>>> 
>>> </property>
>>> 
>>> <property>
>>> 
>>>  <name>dfs.block.size</name>
>>> 
>>>  <value>5120000</value>
>>> 
>>>  <description>The default block size for new files.</description>
>>> 
>>> </property -->
>>> 
>>> </configuration>
>>> 
>>> 
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/conf# cat mapred-site.xml
>>> 
>>> <?xml version="1.0"?>
>>> 
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>> 
>>> 
>>> 
>>> <!-- Put site-specific property overrides in this file. -->
>>> 
>>> 
>>> 
>>> <configuration>
>>> 
>>> <property>
>>> 
>>>  <name>mapred.job.tracker</name>
>>> 
>>>  <value>192.168.164.136:9101</value>
>>> 
>>> </property>
>>> 
>>> </configuration>
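>>> 
>>> (To verify that these addresses are reachable from each slave, something like the following could be run from 192.168.164.137 and 192.168.164.138, assuming netcat is installed; this is only a sketch:
>>> 
>>> $ nc -zv 192.168.164.136 9100   # NameNode RPC port from fs.default.name
>>> $ nc -zv 192.168.164.136 9101   # JobTracker port from mapred.job.tracker
>>> 
>>> Both should report that the connection succeeded.)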
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Now the ERROR is raised: the TaskTracker on slaves 192.168.164.137 and
>>> 192.168.164.138 could not connect to the master, while the TaskTracker on
>>> 192.168.164.136 shows no error.
>>> 
>>> 
>>> 
>>> This is the log on 192.168.164.138:
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/logs#
>>> 
>>> root@ubuntu:/home/hadoop-0.20.2/logs# cat 
>>> hadoop-root-tasktracker-ubuntu.log
>>> 
>>> 
>>> 2012-02-23 00:44:10,851 INFO org.apache.hadoop.mapred.TaskTracker:
>>> STARTUP_MSG:
>>> 
>>> /************************************************************
>>> 
>>> STARTUP_MSG: Starting TaskTracker
>>> 
>>> STARTUP_MSG:   host = ubuntu/127.0.1.1
>>> 
>>> STARTUP_MSG:   args = []
>>> 
>>> STARTUP_MSG:   version = 0.20.2
>>> 
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 
>>> -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>> 
>>> ************************************************************/
>>> 
>>> 2012-02-23 00:44:16,080 INFO org.mortbay.log: Logging to
>>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
>>> org.mortbay.log.Slf4jLog
>>> 
>>> 2012-02-23 00:44:16,199 INFO org.apache.hadoop.http.HttpServer: Port 
>>> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
>>> Opening the listener on 50060
>>> 
>>> 2012-02-23 00:44:16,205 INFO org.apache.hadoop.http.HttpServer:
>>> listener.getLocalPort() returned 50060
>>> webServer.getConnectors()[0].getLocalPort() returned 50060
>>> 
>>> 2012-02-23 00:44:16,205 INFO org.apache.hadoop.http.HttpServer: Jetty 
>>> bound to port 50060
>>> 
>>> 2012-02-23 00:44:16,205 INFO org.mortbay.log: jetty-6.1.14
>>> 
>>> 2012-02-23 00:45:08,741 INFO org.mortbay.log: Started
>>> SelectChannelConnector@0.0.0.0:50060
>>> 
>>> 2012-02-23 00:45:08,808 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=TaskTracker, sessionId=
>>> 
>>> 2012-02-23 00:45:08,848 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>> Initializing RPC Metrics with hostName=TaskTracker, port=49689
>>> 
>>> 2012-02-23 00:45:08,909 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> Responder: starting
>>> 
>>> 2012-02-23 00:45:08,912 INFO org.apache.hadoop.mapred.TaskTracker:
>>> TaskTracker up at: localhost/127.0.0.1:49689
>>> 
>>> 2012-02-23 00:45:08,912 INFO org.apache.hadoop.mapred.TaskTracker: 
>>> Starting tracker tracker_ubuntu:localhost/127.0.0.1:49689
>>> 
>>> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server 
>>> listener on 49689: starting
>>> 
>>> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server 
>>> handler 0 on 49689: starting
>>> 
>>> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server 
>>> handler 1 on 49689: starting
>>> 
>>> 2012-02-23 00:45:08,911 INFO org.apache.hadoop.ipc.Server: IPC Server 
>>> handler 2 on 49689: starting
>>> 
>>> 2012-02-23 00:45:08,919 INFO org.apache.hadoop.ipc.Server: IPC Server 
>>> handler 3 on 49689: starting
>>> 
>>> 2012-02-23 00:47:53,638 INFO org.apache.hadoop.mapred.TaskTracker:  
>>> Using MemoryCalculatorPlugin :
>>> org.apache.hadoop.util.LinuxMemoryCalculatorPlugin@cafb56
>>> 
>>> 2012-02-23 00:47:53,641 INFO org.apache.hadoop.mapred.TaskTracker: 
>>> Starting
>>> thread: Map-events fetcher for all reduce tasks on
>>> tracker_ubuntu:localhost/127.0.0.1:49689
>>> 
>>> 2012-02-23 00:47:53,646 WARN org.apache.hadoop.mapred.TaskTracker:
>>> TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is 
>>> disabled.
>>> 
>>> 2012-02-23 00:47:53,647 INFO org.apache.hadoop.mapred.IndexCache: 
>>> IndexCache created with max memory = 10485760
>>> 
>>> 2012-02-23 00:47:55,110 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 0 
>>> time(s).
>>> 
>>> 2012-02-23 00:47:56,112 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 1 
>>> time(s).
>>> 
>>> 2012-02-23 00:47:57,114 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 2 
>>> time(s).
>>> 
>>> 2012-02-23 00:47:58,116 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 3 
>>> time(s).
>>> 
>>> 2012-02-23 00:47:59,118 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 4 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:00,120 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 5 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:01,122 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 6 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:02,124 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 7 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:03,126 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 8 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:04,130 INFO org.apache.hadoop.ipc.Client: Retrying 
>>> connect to server: ubuntu.local/192.168.164.138:9100. Already tried 9 
>>> time(s).
>>> 
>>> 2012-02-23 00:48:04,132 ERROR org.apache.hadoop.mapred.TaskTracker: Caught
>>> exception: java.net.ConnectException: Call to ubuntu.local/192.168.164.138:9100
>>> failed on connection exception: java.net.ConnectException: Connection refused
>>>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>       at $Proxy5.getProtocolVersion(Unknown Source)
>>>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>       at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>>>       at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1033)
>>>       at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1720)
>>>       at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
>>> Caused by: java.net.ConnectException: Connection refused
>>>       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
>>>       at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>       at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>       at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>       at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>       at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>       ... 15 more
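>>> 
>>> (The log above tries to reach the NameNode at ubuntu.local/192.168.164.138:9100, which is this slave's own address rather than the master 192.168.164.136. A quick way to check what ubuntu.local resolves to on this slave, as a rough sketch, is:
>>> 
>>> $ getent hosts ubuntu.local    # what the resolver returns for this name
>>> $ cat /etc/hosts               # look at the entries for the "ubuntu" host names
>>> )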
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> --
>> Harsh J
>> Customer Ops. Engineer
>> Cloudera | http://tiny.cloudera.com/about
>> 
>> 
> 
> 
