I made a small mistake describing my problem. There is no 192.168.1.8.
There is only one machine, 192.168.101.8. I'll describe my problem
again.

1. I have set up a single-node cluster on 192.168.101.8. It is an Ubuntu server.

2. There is no entry for 192.168.101.8 in the DNS server. However, the
hostname of this server is set to "hadoop", and that name resolves only
locally. If I ping hadoop on the server itself, it works. But if I ping
hadoop or hadoop.domain.example.com from another system, it doesn't
work; from another system I have to ping 192.168.101.8. So, to be
clear, hadoop.domain.example.com does not exist in our DNS server.
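For reference, this local-only resolution almost certainly comes from an
entry in the server's /etc/hosts rather than from DNS. I am guessing at
the exact line, but it would look something like:

```
# /etc/hosts on 192.168.101.8 (hypothetical; the actual line may differ)
127.0.1.1   hadoop   hadoop.domain.example.com
```

Only processes on the server itself consult this file, which is why the
name works locally but not from any other machine.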

3. domain.example.com is only a dummy example. Of course the actual
name is the domain name of our organization.

4. I started hadoop on this server with:

bin/hadoop namenode -format
bin/start-all.sh

5. jps showed that all the processes had started successfully.

6. Here is my hadoop-site.xml:

<configuration>

<property>
  <name>fs.default.name</name>
  <value>192.168.101.8:9000</value>
  <description></description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>192.168.101.8:9001</value>
  <description></description>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description></description>
</property>

</configuration>

7. I am running a few of the ready-made examples in
hadoop-0.15.3-examples.jar, especially the wordcount one. I am also
putting some files into the DFS from remote systems such as
192.168.101.100, 192.168.101.101, etc. These remote systems are not
slaves.
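For concreteness, this is roughly what I am doing from the remote
machines (the paths and file names here are made up for illustration):

```shell
# Run on 192.168.101.100 / 192.168.101.101, whose hadoop-site.xml
# also points fs.default.name at 192.168.101.8:9000
bin/hadoop dfs -put /tmp/input.txt input/input.txt

# The wordcount example is then run on the master, e.g.:
bin/hadoop jar hadoop-0.15.3-examples.jar wordcount input output
```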

8. From a remote system, I access
http://192.168.101.8:50030/machines.jsp

It shows:

Name  Host    # running tasks Failures        Seconds since heartbeat
tracker_hadoop.domain.example.com:/127.0.0.1:4545
hadoop.domain.example.com       0       0       9

Now, when I click on the
tracker_hadoop.domain.example.com:/127.0.0.1:4545 link, it takes me to
http://hadoop.domain.example.com:50060/. But the browser gives an error
for the reason mentioned in point 2. I don't want Hadoop to use the
hostname to form those links; I want it to use the IP address,
192.168.101.8. Is that possible?
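For what it's worth, the closest knobs I can see are the DNS-related
properties Amar points at below. A sketch of overriding them in my
hadoop-site.xml (the values are my guesses for this network, and I have
not confirmed that they make the links use the IP):

```xml
<!-- Sketch only: both values are assumptions for this network. -->
<property>
  <name>mapred.tasktracker.dns.interface</name>
  <value>eth0</value>
</property>

<property>
  <name>mapred.tasktracker.dns.nameserver</name>
  <value>192.168.101.8</value>
</property>
```

A cruder workaround, of course, would be adding a "hadoop" entry to the
hosts file of every machine that needs to browse the web UI, but that
does not scale.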

On Feb 9, 2008 7:49 PM, Amar Kamat <[EMAIL PROTECTED]> wrote:
> Ben Kucinich wrote:
> > I have a Hadoop running on a master node 192.168.1.8. fs.default.name
> > is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.
> >
> >
> Actually the masters are the nodes where the JobTracker and the NameNode
> are running, i.e. 192.168.101.8 in your case.
> 192.168.1.8 would be your client node, the node from which the jobs are
> submitted.
> > I am accessing it's web pages on port 50030 from another machine. I
> > visited http://192.168.101.8:50030/machines.jsp. It showed:-
> >
> > Name  Host    # running tasks Failures        Seconds since heartbeat
> > tracker_hadoop.domain.example.com:/127.0.0.1:4545     
> > hadoop.domain.example.com       0       0       9
> >
> The tracker-name is tracker_<tracker-hostname:port>, where the hostname is
> obtained from the DNS nameserver specified by
> 'mapred.tasktracker.dns.nameserver' in conf/hadoop-default.xml. So I
> guess in your case "hadoop.domain.example.com"
> is the name obtained from the DNS nameserver for that node. Can you
> provide more details on the XML parameters you have
> changed in the conf directory? Also, can you provide more details on how
> you are starting your Hadoop?
> Amar
>
> > Now, when I click on
> > tracker_hadoop..domain.example.com:/127.0.0.1:4545 link it takes me to
> > http://hadoop.domain.example.com:50060/. But there is no DNS entry for
> > hadoop in our DNS server. So, I get error in browser. "hadoop" is just
> > the locally set name in the master node. From my machine I can't
> > access the master node as "hadoop". I have to access it as IP address
> > 192.168.101.8. So, this link fails. Is there a way I can set it so
> > that, it doesn't use names but only IP address in forming this link?
> >
>
>
