The address of the JobTracker (NameNode) is specified using *mapred.job.tracker* (*fs.default.name*) in the configuration. When the JobTracker (NameNode) starts, it listens on the address specified by *mapred.job.tracker* (*fs.default.name*); and when a TaskTracker (DataNode) starts, it will
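In Hadoop 0.19 both addresses live in hadoop-site.xml. As an illustrative sketch (the host and ports below are placeholders, not values from this thread):

```xml
<!-- hadoop-site.xml — placeholder host/ports for illustration -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.102:9000</value>  <!-- NameNode listen address -->
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.1.102:9001</value>         <!-- JobTracker listen address -->
</property>
```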
So it turns out the reason I was getting duey.local. was that it was left over in the reverse DNS on the nameserver from a previous test. That is fixed, and the machine now reports
duey.local.xxx.com.
The only remaining issue is the trailing "." (period) that is required by D
Matt,
Thanks for the suggestion.
I had actually forgotten about local DNS caching. I am using a Mac, so I used
dscacheutil -flushcache
to clear the cache, and also investigated the lookup ordering. Everything seems to be in order, except I still get a bogus result.
it is using the old nam
If you look at the documentation for the getCanonicalHostName()
function (thanks, Steve)...
http://java.sun.com/javase/6/docs/api/java/net/InetAddress.html#getCanonicalHostName()
you'll see two Java security properties (networkaddress.cache.ttl and
networkaddress.cache.negative.ttl).
You m
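Those two are Java *security* properties, not plain system properties, so they are read from $JAVA_HOME/lib/security/java.security or set via java.security.Security before the JVM's first lookup. A minimal sketch, with arbitrary example TTLs:

```java
import java.security.Security;

public class DnsCacheConfig {
    // Shorten the JVM's DNS caches so a renamed host gets re-resolved
    // instead of being served from a stale cache entry.
    public static void applyShortTtl() {
        // seconds to cache successful lookups (example value)
        Security.setProperty("networkaddress.cache.ttl", "30");
        // seconds to cache failed lookups (example value)
        Security.setProperty("networkaddress.cache.negative.ttl", "5");
    }

    public static void main(String[] args) {
        applyShortTtl();
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```

Note that these take effect only for lookups performed after they are set; entries already cached under the old policy are not evicted.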
That is what I thought too: it needs to keep that information somewhere, because it needs to be able to communicate with all of the servers.
So I deleted the /tmp/had* and /tmp/hs* directories, removed the log files, and grepped for the duey name in all of the files in the config directory. And the
John Martyniak wrote:
Does hadoop "cache" the server names anywhere? Because I changed to
using DNS for name resolution, but when I go to the nodes view, it is
still trying to use the old name. And I changed the hadoop-site.xml
file so that it no longer has any of those values.
in SVN hea
Does hadoop "cache" the server names anywhere? Because I changed to
using DNS for name resolution, but when I go to the nodes view, it is
still trying to use the old name. And I changed the hadoop-site.xml
file so that it no longer has any of those values.
Any help would be appreciated.
So I set up a DNS server for the internal network, changed all of the names to duey.local, and created a master zone for .local on the DNS server. I put that server first in the /etc/resolv.conf file and added it to the interface. I changed the hostname of the machine that it
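For reference, "first in /etc/resolv.conf" means roughly this (all addresses here are placeholders, not the poster's real ones):

```
# /etc/resolv.conf — internal DNS first, external resolver as fallback
search local
nameserver 192.168.1.1     # internal server hosting the .local master zone
nameserver 203.0.113.69    # external/ISP resolver
```

One caveat worth hedging: on Mac OS X the .local suffix is also claimed by Bonjour/mDNS, so a unicast .local zone can behave surprisingly there.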
Raghu,
Thanks for the suggestions.
So I made those changes, and on both the Map/Reduce and NameNode web UIs the machines are still listed using the external IP address.
So I don't think that worked. I am going to clear out everything in the /tmp directory and try again.
-John
I still need to go through the whole thread, but we feel your pain.
First, please try setting fs.default.name to the NameNode's internal IP on the datanodes. This should make the NN attach the internal IP for the datanodes (assuming your routing is correct). The NameNode web UI should list internal IPs for da
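Raghu's first suggestion, sketched as a hadoop-site.xml fragment on the datanodes (the internal IP is illustrative):

```xml
<!-- hadoop-site.xml on each datanode — point at the NameNode's internal IP -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.102:9000</value>
</property>
```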
So I changed all of the 0.0.0.0 entries on one machine to point to the 192.168.1.102 address.
And it still picks up the hostname and IP address of the external network.
I am kind of at my wits' end with this, as I am not seeing a solution yet, except to take the machines off of the external networ
I haven't even looked into that at all.
I am just trying to get a simple 2-node cluster working with two NICs.
-John
On Jun 9, 2009, at 1:41 PM, Edward Capriolo wrote:
Also, if you are using a topology rack map, make sure your script responds correctly to every possible hostname or IP address as well.
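A minimal topology script of the kind Edward describes might look like this (rack names and host mappings are hypothetical). Hadoop invokes the script with one or more hostnames or IPs as arguments and expects one rack path per line on stdout:

```shell
#!/bin/sh
# Hypothetical rack map. It must answer for every form a node might be
# reported under: internal IP, short hostname, and full DNS name.
rack_for() {
  case "$1" in
    192.168.1.102|duey|duey.local) echo "/rack1" ;;
    192.168.1.103|huey|huey.local) echo "/rack1" ;;
    *)                             echo "/default-rack" ;;
  esac
}

for arg in "$@"; do
  rack_for "$arg"
done
```

Unrecognized names fall through to /default-rack, which is exactly the silent misplacement Edward is warning about, so it pays to test the script against every alias a node might use.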
On Tue, Jun 9, 2009 at 1:19 PM, John Martyniak wrote:
> It seems that this is the issue, as there are several posts related to the same
> topic but with no resolution.
>
>
It seems that this is the issue, as there are several posts related to the same topic but with no resolution.
I guess the thing is that it shouldn't use the hostname of the machine at all. If I tell it the master is x and it has an IP address of x.x.x.102, that should be good enough.
And if
Steve,
I missed this part of the email.
So if I change the 0.0.0.0 to either the 192.168.1.102 or the .103 address, depending on what is necessary, will that solve the problem?
It looks like it lives in 10 places.
-John
On Jun 9, 2009, at 10:17 AM, Steve Loughran wrote:
> I have some other applications
John Martyniak wrote:
When I run either of those on either of the two machines, it is trying
to resolve against the DNS servers configured for the external addresses
for the box.
Here is the result:
Server:  xxx.xxx.xxx.69
Address: xxx.xxx.xxx.69#53
OK. In an ideal world, each NIC ha
When I run either of those on either of the two machines, it is trying
to resolve against the DNS servers configured for the external
addresses for the box.
Here is the result:
Server:  xxx.xxx.xxx.69
Address: xxx.xxx.xxx.69#53
** server can't find 102.1.168.192.in-addr.arpa.: N
I am running Mac OS X.
So en0 points to the external address and en1 points to the internal
address on both machines.
Here is the internal results from duey:
en1: flags=8963 mtu 1500
	inet6 fe80::21e:52ff:fef4:65%en1 prefixlen 64 scopeid 0x5
	inet 192.168.1.102 netmask 0x
John Martyniak wrote:
My original names were huey-direct and duey-direct, with both names in the
/etc/hosts file on both machines.
Are nn.internal and jt.internal special names?
No, just examples on a multihost network where your external names
could be something completely different.
What does
My original names were huey-direct and duey-direct, with both names in
the /etc/hosts file on both machines.
Are nn.internal and jt.internal special names?
-John
On Jun 9, 2009, at 9:26 AM, Steve Loughran wrote:
David,
For Option #1: I just changed the names to the IP addresses, and it still comes up as the external name and IP address in the log files and on the job tracker screen.
So option 1 is a no-go.
When I change the "dfs.datanode.dns.interface" value it doesn't seem to do anything
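For reference, the property he is changing is meant to pin which NIC the DataNode reports itself on; an illustrative fragment (the interface name en1 comes from the ifconfig output earlier in the thread, and the nameserver address is hypothetical):

```xml
<!-- hadoop-site.xml — pin DataNode name reporting to the internal NIC -->
<property>
  <name>dfs.datanode.dns.interface</name>
  <value>en1</value>
</property>
<property>
  <name>dfs.datanode.dns.nameserver</name>
  <value>192.168.1.1</value>
</property>
```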
David,
Thanks for the suggestions.
1) This seems like a good possibility; it makes the config a little less readable, but that is OK.
2) They are in the local /etc/hosts file, and they do resolve locally, meaning that I can ssh to either machine by using the internal name.
3) This is another possibility, but wo
I see several possibilities:
1) Use IP addresses instead of machine names in the config files.
2) Make sure that the names are in local /etc/hosts files, and that they
resolve to the internal IP addresses.
3) Set up an internal name server, and make sure that it serves your
internal addresses and
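A sketch of option 2 for this particular pair of machines (internal addresses and aliases as they appear elsewhere in the thread):

```
# /etc/hosts on both machines — names resolve to internal addresses
127.0.0.1       localhost
192.168.1.102   duey duey-direct
192.168.1.103   huey huey-direct
```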
Hi,
I am creating a small Hadoop (0.19.1) cluster (2 nodes to start); each of the machines has 2 NIC cards (1 external-facing, 1 internal-facing). It is important that Hadoop run and communicate on the internal-facing NIC (because the external-facing NIC costs me money); also the interna