Re: hadoop cares about /etc/hosts ?

2013-09-11 Thread Jitendra Yadav
On Wed, Sep 11, 2013 at 7:05 AM, Cipher Chen cipher.chen2...@gmail.com wrote: So for the first *wrong* /etc/hosts file, the sequence would be: find hdfs://master:54310; find master -> 192.168.6.10 (*but it already got the IP here*); find 192.168.6.10 -> localhost; find localhost -> 127.0.0.1. The other thing, when

Re: hadoop cares about /etc/hosts ?

2013-09-11 Thread Cipher Chen
There is no sense in using localhost and a hostname on the same IP; for localhost it is always preferable to use the loopback address, i.e. 127.0.0.1. Hope this will help you. Regards, Jitendra On Wed, Sep 11, 2013 at 7:05 AM, Cipher Chen cipher.chen2...@gmail.com wrote: So for the first *wrong* /etc/hosts file

Re: hadoop cares about /etc/hosts ?

2013-09-10 Thread Cipher Chen
So for the first *wrong* /etc/hosts file, the sequence would be: find hdfs://master:54310; find master -> 192.168.6.10 (*but it already got the IP here*); find 192.168.6.10 -> localhost; find localhost -> 127.0.0.1. The other thing: when I 'ping master', I get a reply from '192.168.6.10' instead

hadoop cares about /etc/hosts ?

2013-09-09 Thread Cipher Chen
Hi everyone, I have solved a configuration problem (of my own making) in Hadoop cluster mode. I have the configuration below: <property> <name>fs.default.name</name> <value>hdfs://master:54310</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10
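
Reassembled from the fragments quoted across this thread, the configuration property and the problematic hosts file were roughly the following (core-site.xml is the usual home of fs.default.name; whitespace is a best guess from the garbled archive):

  <!-- conf/core-site.xml -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>

  # /etc/hosts as originally posted
  127.0.0.1    localhost
  192.168.6.10 localhost   ###
  192.168.6.10 tulip  master
  192.168.6.5  violet slave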

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Olivier Renault
mode. I have the configuration below: <property> <name>fs.default.name</name> <value>hdfs://master:54310</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10 localhost ### 192.168.6.10 tulip master 192.168.6.5 violet slave and

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10 localhost ### 192.168.6.10 tulip master 192.168.6.5 violet slave and when I was trying to start-dfs.sh, the namenode failed to start. The namenode log hinted that: 13/09/09 17:09:02 INFO

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jay Vyas
solved a configuration problem (of my own making) in Hadoop cluster mode. I have the configuration below: <property> <name>fs.default.name</name> <value>hdfs://master:54310</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10 localhost

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Shahab Yunus
solved a configuration problem (of my own making) in Hadoop cluster mode. I have the configuration below: <property> <name>fs.default.name</name> <value>hdfs://master:54310</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10 localhost

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Jitendra Yadav
</value> </property> and the hosts file, /etc/hosts: 127.0.0.1 localhost 192.168.6.10 localhost ### 192.168.6.10 tulip master 192.168.6.5 violet slave and when I was trying to start-dfs.sh, the namenode failed to start. The namenode log hinted that: 13/09/09 17:09:02

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Cipher Chen
Sorry, I didn't express it well. conf/masters: master conf/slaves: master slave The /etc/hosts file which caused the problem (start-dfs.sh failed): 127.0.0.1 localhost 192.168.6.10 localhost ### 192.168.6.10 tulip master 192.168.6.5 violet slave But when I commented
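
Spelled out, the fix the thread converges on is commenting out the second localhost mapping, so the working file would be something like this (a sketch assembled from the quoted fragments):

  # /etc/hosts after the fix
  127.0.0.1    localhost
  # 192.168.6.10 localhost   <- commented out: master must not alias to localhost
  192.168.6.10 tulip  master
  192.168.6.5  violet slave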

Re: hadoop cares about /etc/hosts ?

2013-09-09 Thread Chris Embree
This sounds entirely like an OS-level problem, and slightly outside the scope of this list; however, I'd suggest you look at your /etc/nsswitch.conf file and ensure that the hosts: line says hosts: files dns This will ensure that names are resolved first by /etc/hosts, then by DNS. Please
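
For reference, the relevant nsswitch line as suggested (the rest of the file is omitted):

  # /etc/nsswitch.conf -- resolve names from /etc/hosts first, then DNS
  hosts: files dns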

Re: /etc/hosts

2013-01-14 Thread Colin McCabe
Hadoop on Linux nodes with /etc/hosts like this: 127.0.0.1 localhost 127.0.1.1 hostname.domainname hostname My administrator says he needs the second line because of Kerberos. I tried to LD_PRELOAD a modified version of getaddrinfo, but it works only for some of Hadoop

/etc/hosts

2012-12-08 Thread Pavel Hančar
Hello, I would like to run Hadoop on Linux nodes with /etc/hosts like this: 127.0.0.1 localhost 127.0.1.1 hostname.domainname hostname My administrator says he needs the second line because of Kerberos. I tried to LD_PRELOAD a modified version of getaddrinfo, but it works
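
A quick way to check what such a node's name resolves to through NSS, which is the same path Hadoop's lookups take, is getent (hostname.domainname stands in for the real FQDN):

  getent hosts hostname.domainname
  # with the 127.0.1.1 line above in place, this prints:
  #   127.0.1.1   hostname.domainname hostname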

Re: network configuration (etc/hosts) ?

2011-12-21 Thread Joey Echeverria
have properly configured the ssh keys, and the /etc/hosts files are: master: 127.0.0.1 localhost6.localdomain6 localhost 127.0.1.1 localhost4.localdomain4 master-pc 192.168.7.110 master 192.168.7.157 slave slave: 127.0.1.1 localhost5.localdomain5 lab-pc 127.0.0.1 localhost3

Re: network configuration (etc/hosts) ?

2011-12-21 Thread ArunKumar
MirrorX, try adding the hostnames of your master and slave systems to /etc/hosts as well. That fixed the same error for me. master: 127.0.0.1 localhost6.localdomain6 localhost 127.0.1.1 localhost4.localdomain4 master-pc 192.168.7.110 master master-pc 192.168.7.157 slave lab-pc slave: 127.0.1.1
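
Laid out as files, the suggestion looks roughly like this; the slave file is truncated in the archive, so its tail is an assumption by symmetry with the master:

  # master /etc/hosts
  127.0.0.1     localhost6.localdomain6 localhost
  127.0.1.1     localhost4.localdomain4 master-pc
  192.168.7.110 master master-pc
  192.168.7.157 slave  lab-pc

  # slave /etc/hosts (tail assumed)
  127.0.1.1     localhost5.localdomain5 lab-pc
  192.168.7.110 master master-pc
  192.168.7.157 slave  lab-pc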

network configuration (etc/hosts) ?

2011-12-20 Thread MirrorX
Dear all, I have been trying for many days to get a simple Hadoop cluster (with 2 nodes) to work, but I am having trouble configuring the network parameters. I have properly configured the ssh keys, and the /etc/hosts files are: master: 127.0.0.1 localhost6.localdomain6 localhost 127.0.1.1 localhost4

/etc/hosts related error?

2011-06-08 Thread bikash sharma
/job_201106081500_0018/job.xml Looking in the forums, it seems it has something to do with /etc/hosts settings, because I also cannot access the jobtracker web interface via the hostname, but can access it via the actual IP address. I set /etc/hosts in all the VMs as per ip-address hostname

Re: editing etc hosts files of a cluster

2009-10-21 Thread Steve Loughran
Allen Wittenauer wrote: A bit more specific: at Yahoo!, we had every server act as either a DNS slave or a DNS caching server. In the case of LinkedIn, we're running Solaris, so nscd is significantly better than its Linux counterpart. However, we still seem to be blowing out the cache too much.

Re: editing etc hosts files of a cluster

2009-10-20 Thread David Ritch
I also prefer to avoid custom software, and follow standards. We use Puppet to manage our node configuration (including hadoop config files), and adding one more file to the configuration is trivial. I prefer not to run additional daemons on all my nodes when I can avoid it. Replicating our
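
As a sketch, the hosts file can simply be one more managed resource in such a Puppet setup (module name, source path, and mode are illustrative assumptions, not from the post):

  file { '/etc/hosts':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => 'puppet:///modules/hosts/hosts',
  }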

Re: editing etc hosts files of a cluster

2009-10-20 Thread Allen Wittenauer
Everything can be made to work at a small scale. As the grid grows, well... On 10/20/09 10:32 AM, David Ritch david.ri...@gmail.com wrote: I also prefer to avoid custom software, and follow standards. We use Puppet to manage our node configuration (including hadoop config files), and

editing etc hosts files of a cluster

2009-10-19 Thread Ramesh.Ramasamy
Hi, I have a cluster setup with 3 nodes, and I'm adding hostname details (in /etc/hosts) manually on each node. This does not seem to be an effective approach. How is this scenario handled in big clusters? Is there any simple way to add the hostname details on all the nodes by editing a single entry

Re: editing etc hosts files of a cluster

2009-10-19 Thread Last-chance Architect
DNS ;) Ramesh.Ramasamy wrote: Hi, I have a cluster setup with 3 nodes, and I'm adding hostname details (in /etc/hosts) manually on each node. This does not seem to be an effective approach. How is this scenario handled in big clusters? Is there any simple way to add the hostname details in all

Re: editing etc hosts files of a cluster

2009-10-19 Thread Allen Wittenauer
to DNS caching servers here as well. On 10/19/09 6:45 AM, Last-chance Architect archit...@galatea.com wrote: DNS ;) Ramesh.Ramasamy wrote: Hi, I have a cluster setup with 3 nodes, and I'm adding hostname details (in /etc/hosts) manually on each node. This does not seem to be an effective approach

Re: editing etc hosts files of a cluster

2009-10-19 Thread Allen Wittenauer
On 10/19/09 11:46 AM, Edward Capriolo edlinuxg...@gmail.com wrote: I am interested in your post. What has caused you to run caching DNS servers on each of your nodes? Is this a hadoop specific problem or a problem specific to your implementation? Hadoop does a -tremendous- amount of
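
A local caching daemon is one common answer to that lookup volume; a minimal sketch of the nscd approach mentioned elsewhere in this thread (the service commands assume a RHEL-era init system):

  chkconfig nscd on      # enable the name service cache daemon at boot
  service nscd start
  nscd -g                # print statistics, including the hosts cache hit rate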

Re: editing etc hosts files of a cluster

2009-10-19 Thread David B. Ritch
Most of the communication and name lookups within a cluster refer to other nodes within that same cluster. It is usually not a big deal to put all the systems from a cluster in a single hosts file, and rsync it around the cluster. (Consider using prsync, which comes with pssh,
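
A sketch of that approach (nodes.txt, a file listing one hostname per line, is a placeholder):

  # push the canonical hosts file to every node in parallel (prsync ships with pssh)
  prsync -h nodes.txt -l root /etc/hosts /etc/hosts

  # or, without pssh, a plain loop
  for h in $(cat nodes.txt); do rsync -a /etc/hosts root@$h:/etc/hosts; done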

HMaster and /etc/hosts

2009-06-15 Thread Fredrik Möllerstrand
Hello list! I've spent the better part of the afternoon upgrading from 0.19.3 to trunk, and I did fall into a hole or two. Specifically, it turns out that we rely on DNS lookups to find out what address HMaster binds to, which caused me some grief. The documentation is also weak on what part

Re: HMaster and /etc/hosts

2009-06-15 Thread Jean-Daniel Cryans
Fredrik, First, thanks for trying out trunk. wrt your problem, have you tried setting the following configs? hbase.master.dns.interface hbase.master.dns.nameserver This works just like in Hadoop. The reason we removed the master address is that the master can now failover to any other waiting
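
As a sketch, those settings would go in hbase-site.xml like so (the interface and nameserver values are illustrative assumptions, not from the thread):

  <property>
    <name>hbase.master.dns.interface</name>
    <value>eth0</value>
  </property>
  <property>
    <name>hbase.master.dns.nameserver</name>
    <value>192.0.2.53</value>
  </property>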

Re: HMaster and /etc/hosts

2009-06-15 Thread Lars George
Hi Fredrik, Stack suggested it could be that your servers are set in nsswitch.conf to use files before dns. Could you try switching that, revert the entry in /etc/hosts, and then see if the options J-D suggested work for this problem? Then we can document

Re: HMaster and /etc/hosts

2009-06-15 Thread Fredrik Möllerstrand
wrt your problem, have you tried setting the following configs? hbase.master.dns.interface hbase.master.dns.nameserver Indeed I did. Any such lookup was overridden by /etc/hosts as per /etc/nsswitch.conf. Now, if only I could get hold of the person who put that hosts entry there in the first

Re: HMaster and /etc/hosts

2009-06-15 Thread Fredrik Möllerstrand
On Mon, Jun 15, 2009 at 6:18 PM, Lars George l...@worldlingo.com wrote: Hi Fredrik, Stack suggested it could be that your servers are set in nsswitch.conf to use files before dns. Could you try switching that, revert the entry in /etc/hosts, and then see if the options J-D suggest

Setting /etc/hosts entry for namenode causes job submission failure

2009-04-05 Thread Foss User
I have a Linux machine where I do not run a namenode or tasktracker, but I have Hadoop installed. I use this machine to submit jobs to the cluster. I see that the moment I put an /etc/hosts entry in for my-namenode, I get the following error: foss...@cave:~/mcr-wordcount$ hadoop jar dist/mcr-wordcount-0.1