On Wed, Sep 11, 2013 at 7:05 AM, Cipher Chen cipher.chen2...@gmail.com wrote:
So for the first *wrong* /etc/hosts file, the sequence would be:
  find hdfs://master:54310
  find master -> 192.168.6.10 (*but it already got the ip here*)
  find 192.168.6.10 -> localhost
  find localhost -> 127.0.0.1
The other thing: when I 'ping master', I would get a reply from '192.168.6.10'
instead.

There is no sense in using localhost and a hostname for the same ip; for
localhost it is always preferred to use the loopback address, i.e. 127.0.0.1.
Hope this will help you.
Regards
Jitendra
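For anyone hitting the same thing, a quick way to see which mapping actually
wins is to ask the resolver directly (just a sketch; getent reads /etc/hosts
and DNS in the order given by nsswitch.conf):

  getent hosts master          # forward lookup: should print the real ip, 192.168.6.10
  getent hosts 192.168.6.10    # reverse lookup: with the *wrong* file this maps back to localhost
  ping -c 1 master             # the reply address shows which entry won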
Hi everyone,
I have solved a configuration problem of my own making in hadoop cluster
mode. I have the configuration below:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>

and the hosts file, /etc/hosts:

  127.0.0.1     localhost
  192.168.6.10  localhost
  ###
  192.168.6.10  tulip master
  192.168.6.5   violet slave

and when I tried start-dfs.sh, the namenode failed to start.
The namenode log hinted that:
13/09/09 17:09:02 INFO
Sorry I didn't express it well.
conf/masters:
  master
conf/slaves:
  master
  slave
The /etc/hosts file which caused the problem (start-dfs.sh failed):
  127.0.0.1     localhost
  192.168.6.10  localhost
  ###
  192.168.6.10  tulip master
  192.168.6.5   violet slave
But when I commented
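For what it's worth, a working file along the lines the thread converges on
(assuming the line that had to be commented out was the second localhost
mapping) would be:

  127.0.0.1     localhost
  # 192.168.6.10  localhost   (the duplicate mapping that broke start-dfs.sh)
  192.168.6.10  tulip master
  192.168.6.5   violet slave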
This sounds entirely like an OS-level problem and is slightly outside the
scope of this list, however. I'd suggest you look at your
/etc/nsswitch.conf file and ensure that the hosts: line says
  hosts: files dns
This will ensure that names are resolved first by /etc/hosts, then by DNS.
Please
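A quick way to confirm the lookup order (a sketch; getent resolves names the
same way the rest of the system does, honouring nsswitch.conf):

  grep '^hosts:' /etc/nsswitch.conf    # expect: hosts: files dns
  getent hosts master                  # should now be answered from /etc/hosts first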
Hello,
I would like to run Hadoop on linux nodes with /etc/hosts like this:
  127.0.0.1 localhost
  127.0.1.1 hostname.domainname hostname
My administrator says he needs the second line because of Kerberos. I
tried to LD_PRELOAD a modified version of getaddrinfo, but it works only
for some of Hadoop
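For comparison, the shape of hosts file Hadoop tends to be happiest with maps
the hostname to the node's real interface address rather than to 127.0.1.1
(the address below is a placeholder, not a value from this thread):

  127.0.0.1      localhost
  192.168.10.21  hostname.domainname hostname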
Dear all,
I have been trying for many days to get a simple hadoop cluster (with 2
nodes) to work, but I have trouble configuring the network parameters. I
have properly configured the ssh keys, and the /etc/hosts files are:
master -
  127.0.0.1 localhost6.localdomain6 localhost
  127.0.1.1 localhost4.localdomain4 master-pc
  192.168.7.110 master
  192.168.7.157 slave
slave -
  127.0.1.1 localhost5.localdomain5 lab-pc
  127.0.0.1 localhost3
MirrorX,
Try adding the hostnames of your master and slave systems to /etc/hosts as
well. That fixed the same error for me.
master -
  127.0.0.1 localhost6.localdomain6 localhost
  127.0.1.1 localhost4.localdomain4 master-pc
  192.168.7.110 master master-pc
  192.168.7.157 slave lab-pc
slave -
  127.0.1.1
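A quick sanity check after editing both files (hostnames as used in this
thread; the commands are only a sketch): each node should resolve and reach
the other by name before start-dfs.sh is run.

  ping -c 1 master
  ping -c 1 slave
  ssh slave hostname    # from the master, should print the slave's hostname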
/job_201106081500_0018/job.xml
Looking in the forums, it seems it has something to do with /etc/hosts
settings, because I also cannot access the jobtracker web interface via the
hostname, but I can access it via the actual IP address.
I set the /etc/hosts in all the VMs as per:
  ip address   hostname
Allen Wittenauer wrote:
A bit more specific:
At Yahoo!, we had either every server as a DNS slave or a DNS caching
server.
In the case of LinkedIn, we're running Solaris so nscd is significantly
better than its Linux counterpart. However, we still seem to be blowing out
the cache too much.
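On Linux the equivalent knobs live in /etc/nscd.conf; a minimal sketch of the
hosts-cache settings (the values are illustrative, not recommendations from
this thread):

  enable-cache            hosts  yes
  positive-time-to-live   hosts  3600
  suggested-size          hosts  211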
I also prefer to avoid custom software, and follow standards. We use Puppet
to manage our node configuration (including hadoop config files), and adding
one more file to the configuration is trivial.
I prefer not to run additional daemons on all my nodes when I can avoid it.
Replicating our
Everything can get made to work at a small scale. As the grid grows, well...

On 10/20/09 10:32 AM, David Ritch david.ri...@gmail.com wrote:
I also prefer to avoid custom software, and follow standards.
Hi,
I have a cluster setup with 3 nodes, and I'm adding hostname details (in
/etc/hosts) manually on each node. It seems this is not an effective approach.
How is this scenario handled in big clusters?
Is there any simple way to add the hostname details on all the nodes by
editing a single entry
DNS ;)

Ramesh.Ramasamy wrote:
I have a cluster setup with 3 nodes, and I'm adding hostname details (in
/etc/hosts) manually on each node.
to DNS caching servers here as well.

On 10/19/09 6:45 AM, Last-chance Architect archit...@galatea.com wrote:
DNS ;)
On 10/19/09 11:46 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
I am interested in your post. What has caused you to run caching DNS
servers on each of your nodes? Is this a hadoop-specific problem or a
problem specific to your implementation?
Hadoop does a -tremendous- amount of
Most of the communication and name lookups within a cluster refer to
other nodes within that same cluster. It is usually not a big deal to
put all the systems from a cluster in a single hosts file, and rsync it
around the cluster. (Consider using prsync, which comes with pssh,
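As a sketch of that approach (the node-list file and login user here are
placeholders), prsync from the pssh package pushes the same file to every
host listed:

  prsync -h cluster-nodes.txt -l root /etc/hosts /etc/hosts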
Hello list!
I've spent the better part of the afternoon upgrading from 0.19.3 to
trunk, and I did fall into a hole or two. Specifically, it turns out
that we rely on DNS lookups to find out what address HMaster binds to,
which caused me some grief. The documentation is also weak on what
part
Fredrik,
First, thanks for trying out trunk.
wrt your problem, have you tried setting the following configs?
hbase.master.dns.interface
hbase.master.dns.nameserver
This works just like in Hadoop.
The reason we removed the master address is that the master can now
failover to any other waiting
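Those two properties would normally go in hbase-site.xml; a minimal sketch,
where the interface name and nameserver address are placeholders rather than
values from this thread:

  <property>
    <name>hbase.master.dns.interface</name>
    <value>eth0</value>
  </property>
  <property>
    <name>hbase.master.dns.nameserver</name>
    <value>192.168.0.1</value>
  </property>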
Hi Fredrik,
Stack suggested it could be that your servers are set in nsswitch.conf to
use files before dns? Could you try switching that for us, revert the entry
in /etc/hosts, and then see whether the options J-D suggested work for this
problem? Then we can document
wrt your problem, have you tried setting the following configs?
hbase.master.dns.interface
hbase.master.dns.nameserver

Indeed I did. Any such lookup was overridden by /etc/hosts as per
/etc/nsswitch.conf. Now, if only I could get hold of the person who
put that hosts entry there in the first
I have a Linux machine where I do not run a namenode or tasktracker, but
I have hadoop installed. I use this machine to submit jobs to the
cluster. I see that the moment I put an /etc/hosts entry for my-namenode,
I get the following error:
foss...@cave:~/mcr-wordcount$ hadoop jar dist/mcr-wordcount-0.1