Hi,
So what you were expecting while pinging master?
As per my understanding it is working fine. There is no sense in mapping both
localhost and the hostname to the same IP; for localhost it's always preferred
to use the loopback address, i.e. 127.0.0.1.
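For example, a layout along these lines (using the hostnames that appear later in this thread) keeps the two separate:

127.0.0.1 localhost
192.168.6.10 tulip master
192.168.6.5 violet slave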
Hope this will help you.
Regards
Jitendra
Hi all,
Thanks for all your replies and guidance.
I still haven't figured out why, though. :)
On Wed, Sep 11, 2013 at 4:03 PM, Jitendra Yadav jeetuyadav200...@gmail.com wrote:
Hi,
So what you were expecting while pinging master?
As per my understanding it is working fine. ...
So
for the first *wrong* /etc/hosts file, the lookup sequence would be:
find hdfs://master:54310
find master -> 192.168.6.10 (*but it already got the IP here*)
find 192.168.6.10 -> localhost
find localhost -> 127.0.0.1
The other thing: when I 'ping master', I would get a reply from '192.168.6.10'
instead of '127.0.0.1'.
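A minimal way to check the single-step behaviour is with Python's socket module (a sketch, assuming it runs on the master host with the *wrong* /etc/hosts in place; the commented outputs are what that file would produce, not verified here):

import socket

# Forward lookup is a single step: 'master' maps straight to its IP.
# The resolver does not chain onward through the '192.168.6.10 localhost' entry.
print(socket.gethostbyname('master'))           # 192.168.6.10

# Reverse lookup is a separate query; with the duplicate entry it can
# return 'localhost' for 192.168.6.10, which is what confuses Hadoop.
print(socket.gethostbyaddr('192.168.6.10')[0])  # 'localhost'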
Hi everyone,
I have solved a configuration problem in Hadoop cluster mode that I had caused
myself.
I have the configuration below:
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310</value>
</property>
and the hosts file:
/etc/hosts:
127.0.0.1 localhost
192.168.6.10 localhost
...
Could you confirm that you put the hash in front of 192.168.6.10 localhost?
It should look like:
# 192.168.6.10 localhost
Thanks
Olivier
On 9 Sep 2013 12:31, Cipher Chen cipher.chen2...@gmail.com wrote:
Hi everyone,
I have solved a configuration problem in Hadoop cluster mode that I had caused myself. ...
Also, can you please check your masters file content in the hadoop conf
directory?
Regards
Jitendra
On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault orena...@hortonworks.com wrote:
Could you confirm that you put the hash in front of 192.168.6.10 localhost?
It should look like:
# 192.168.6.10 localhost
Jitendra: When you say 'check your masters file content', what are you
referring to?
On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav jeetuyadav200...@gmail.com wrote:
Also, can you please check your masters file content in the hadoop conf
directory?
Regards
Jitendra
I think he means the 'masters' file, found only on the master node(s), at
conf/masters.
Details here:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#masters-vs-slaves
Regards,
Shahab
On Mon, Sep 9, 2013 at 10:22 AM, Jay Vyas jayunit...@gmail.com wrote:
Means your $HADOOP_HOME/conf/masters file content.
On Mon, Sep 9, 2013 at 7:52 PM, Jay Vyas jayunit...@gmail.com wrote:
Jitendra: When you say 'check your masters file content', what are you
referring to?
On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav jeetuyadav200...@gmail.com
wrote:
Sorry, I didn't express it well.
conf/masters:
master
conf/slaves:
master
slave
The /etc/hosts file which caused the problem (start-dfs.sh failed):
127.0.0.1 localhost
192.168.6.10 localhost
###
192.168.6.10 tulip master
192.168.6.5 violet slave
But when I commented out the '192.168.6.10 localhost' line, start-dfs.sh worked.
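With that change, the working file looks like:

127.0.0.1 localhost
# 192.168.6.10 localhost
###
192.168.6.10 tulip master
192.168.6.5 violet slave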
This sounds entirely like an OS-level problem, and is slightly outside the
scope of this list, however. I'd suggest you look at your
/etc/nsswitch.conf file and ensure that the hosts: line says:
hosts: files dns
This will ensure that names are resolved first via /etc/hosts, then by DNS.
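A quick way to verify which answer that order actually gives is getent, which resolves through nsswitch.conf (the output shown assumes the corrected hosts file above):

$ getent hosts master
192.168.6.10    tulip master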