Re: hadoop cares about /etc/hosts ?

2013-09-11 Thread Jitendra Yadav
Hi, So what were you expecting while pinging master? As per my understanding it is working fine. There is no sense in mapping both localhost and the hostname to the same IP; for localhost it is always preferred to use the loopback address, i.e. 127.0.0.1. Hope this helps. Regards, Jitendra On Wed, Sep 11,

Re: hadoop cares about /etc/hosts ?

2013-09-11 Thread Cipher Chen
Hi, all Thanks for all your replies and guidance, although I haven't figured out why. :) On Wed, Sep 11, 2013 at 4:03 PM, Jitendra Yadav jeetuyadav200...@gmail.com wrote: Hi, So what you were expecting while pinging master? As per my understanding it is working fine.Well there is no

Re: hadoop cares about /etc/hosts ?

2013-09-10 Thread Cipher Chen
So for the first *wrong* /etc/hosts file, the sequence would be:

  find hdfs://master:54310
  find master -> 192.168.6.10 (*but it already got the IP here*)
  find 192.168.6.10 -> localhost
  find localhost -> 127.0.0.1

The other thing: when I 'ping master', I would get a reply from '192.168.6.10' instead of
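The lookup chain described above can be sketched with a small simulation of first-match /etc/hosts semantics. This is a simplified model, not glibc's actual resolver, and the hosts content below just mirrors the "wrong" file discussed in this thread:

```python
# Sketch: first-match semantics of /etc/hosts lookups (simplified model).
# Content mirrors the problematic hosts file from this thread.
HOSTS = """\
127.0.0.1    localhost
192.168.6.10 localhost
192.168.6.10 tulip master
192.168.6.5  violet slave
"""

def parse(hosts_text):
    """Split hosts text into (ip, [names]) entries, dropping comments."""
    entries = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        ip, *names = line.split()
        entries.append((ip, names))
    return entries

def forward(name, entries):
    """Name -> IP: the first line whose name list contains `name` wins."""
    for ip, names in entries:
        if name in names:
            return ip
    return None

def reverse(ip, entries):
    """IP -> canonical name: the first line with that IP wins."""
    for eip, names in entries:
        if eip == ip:
            return names[0]
    return None

entries = parse(HOSTS)
print(forward("master", entries))        # 192.168.6.10
print(reverse("192.168.6.10", entries))  # localhost -- the surprising mapping
```

Under this first-match rule, 'master' resolves to 192.168.6.10, but the reverse mapping of 192.168.6.10 hits the stray 'localhost' line first, which is the confusion reported in the thread.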

hadoop cares about /etc/hosts ?

2013-09-09 Thread Cipher Chen
Hi everyone, I have solved a configuration problem due to myself in hadoop cluster mode. I have the configuration below:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>

and the hosts file, /etc/hosts:

  127.0.0.1    localhost
  192.168.6.10
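For context, that property would live in core-site.xml; a minimal file using the same value (assuming the Hadoop 1.x property name used in this thread) would presumably look like:

```
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```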

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Olivier Renault
Could you confirm that you put the hash in front of 192.168.6.10 localhost? It should look like: # 192.168.6.10 localhost Thanks, Olivier On 9 Sep 2013 12:31, Cipher Chen cipher.chen2...@gmail.com wrote: Hi everyone, I have solved a configuration problem due to myself in hadoop cluster

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
Also, can you please check your masters file content in the hadoop conf directory? Regards, Jitendra On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault orena...@hortonworks.com wrote: Could you confirm that you put the hash in front of 192.168.6.10 localhost It should look like # 192.168.6.10

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jay Vyas
Jitendra: When you say check your masters file content, what are you referring to? On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav jeetuyadav200...@gmail.com wrote: Also can you please check your masters file content in hadoop conf directory? Regards Jitendra On Mon, Sep 9, 2013 at 5:11

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Shahab Yunus
I think he means the 'masters' file found only at the master node(s) at conf/masters. Details here: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#masters-vs-slaves Regards, Shahab On Mon, Sep 9, 2013 at 10:22 AM, Jay Vyas jayunit...@gmail.com wrote:

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
I mean your $HADOOP_HOME/conf/masters file content. On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas jayunit...@gmail.com wrote: Jitendra: When you say check your masters file content what are you referring to? On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav jeetuyadav200...@gmail.com wrote:

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Cipher Chen
Sorry, I didn't express it well.

conf/masters:
  master
conf/slaves:
  master
  slave

The /etc/hosts file which caused the problem (start-dfs.sh failed):

  127.0.0.1    localhost
  192.168.6.10 localhost   ###
  192.168.6.10 tulip master
  192.168.6.5  violet slave

But when I commented the
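Following Olivier's earlier suggestion in this thread, the fix is to comment out the second localhost mapping; the working /etc/hosts would presumably look like:

```
127.0.0.1      localhost
# 192.168.6.10 localhost
192.168.6.10   tulip master
192.168.6.5    violet slave
```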

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Chris Embree
This sounds entirely like an OS-level problem and is slightly outside the scope of this list. However, I'd suggest you look at your /etc/nsswitch.conf file and ensure that the hosts: line says: hosts: files dns This will ensure that names are resolved first by /etc/hosts, then by DNS. Please
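A minimal /etc/nsswitch.conf hosts entry matching this advice (other service lines omitted for brevity):

```
# Resolve host names via /etc/hosts first, then fall back to DNS
hosts: files dns
```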