Re: URLs contain non-existent domain names in machines.jsp

2008-02-10 Thread Tim Wintle
I agree, this is a really annoying problem - most of the job appears to
work, but unfortunately the reduce stage doesn't normally work
(presumably because the reducers can't resolve the tracker hostnames
when fetching map output).

Interestingly, when Hadoop runs on OS X it seems to set the hostname to
the IP (or sets a hostname through zeroconf). It would be useful if we
could just use IP addresses, though (especially for dynamic clusters
where machines are being added / removed fairly often).
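
One workaround (outside Hadoop itself) is to map the advertised
hostname to the right IP in /etc/hosts on every machine that needs to
reach the web UI. The names and addresses below are taken from Ben's
setup and are only illustrative:

    # /etc/hosts on each machine that browses the Hadoop web UI
    # (editing it needs root); maps the name the tracker advertises
    # to the address that is actually reachable:
    192.168.101.8   hadoop.domain.example.com   hadoop

    # quick sanity check that the name now resolves:
    ping -c 1 hadoop.domain.example.com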


On Sat, 2008-02-09 at 21:11 +0530, Ben Kucinich wrote:
 I made a small mistake describing my problem. There is no 192.168.1.8.
 There is only one machine, 192.168.101.8. I'll describe my problem
 again.
 
 1. I have set up a single-node cluster on 192.168.101.8. It is an Ubuntu 
 server.
 
 2. There is no entry for 192.168.101.8 in the DNS server. However, the
 hostname of this server is set to hadoop, but only locally. If I ping
 hadoop locally, it works, but if I ping hadoop or
 hadoop.domain.example.com from another system, it doesn't work; from
 another system I have to ping 192.168.101.8. So, I hope I have made it
 clear that hadoop.domain.example.com does not exist in our DNS server.
 
 3. domain.example.com is only a dummy example. Of course the actual
 name is the domain name of our organization.
 
 4. I started Hadoop on this server with the commands:
 
    bin/hadoop namenode -format
    bin/start-all.sh
 
 5. jps showed all the processes started successfully.
 
 6. Here is my hadoop-site.xml
 
 <configuration>
 
   <property>
     <name>fs.default.name</name>
     <value>192.168.101.8:9000</value>
     <description></description>
   </property>
 
   <property>
     <name>mapred.job.tracker</name>
     <value>192.168.101.8:9001</value>
     <description></description>
   </property>
 
   <property>
     <name>dfs.replication</name>
     <value>1</value>
     <description></description>
   </property>
 
 </configuration>
 
 7. I am running a few of the ready-made examples in
 hadoop-0.15.3-examples.jar, especially the wordcount one. I am also
 putting some files into the DFS from remote systems, such as
 192.168.101.100, 192.168.101.101, etc., but these remote systems are
 not slaves.
 
 8. From a remote system, I try to access:-
 http://192.168.101.8:50030/machines.jsp
 
 It showed:-
 
 Name                                               Host                       # running tasks  Failures  Seconds since heartbeat
 tracker_hadoop.domain.example.com:/127.0.0.1:4545  hadoop.domain.example.com  0                0         9
 
 Now, when I click on the
 tracker_hadoop.domain.example.com:/127.0.0.1:4545 link, it takes me to
 http://hadoop.domain.example.com:50060/. But it gives an error in the
 browser for the reason mentioned in point 2. I don't want it to use
 the hostname to form those links; I want it to use the IP address,
 192.168.101.8, to form the links. Is it possible?
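 
 For reference, the name in those links seems to come from what the
 server resolves its own hostname to. A quick way to check this on the
 server itself (the expected outputs in the comments are inferred from
 what the web UI shows, not captured from the machine):
 
    # on the server (192.168.101.8):
    hostname                  # prints: hadoop
    hostname -f               # prints: hadoop.domain.example.com
    grep hadoop /etc/hosts    # shows the local-only mapping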
 
 On Feb 9, 2008 7:49 PM, Amar Kamat [EMAIL PROTECTED] wrote:
  Ben Kucinich wrote:
   I have Hadoop running on a master node, 192.168.1.8. fs.default.name
   is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.
  
  
  Actually the masters are the nodes where the JobTracker and the NameNode
  are running, i.e. 192.168.101.8 in your case.
  192.168.1.8 would be your client node, the node from which the jobs are
  submitted.
   I am accessing its web pages on port 50030 from another machine. I
   visited http://192.168.101.8:50030/machines.jsp. It showed:-
  
   Name                                               Host                       # running tasks  Failures  Seconds since heartbeat
   tracker_hadoop.domain.example.com:/127.0.0.1:4545  hadoop.domain.example.com  0                0         9
  
  The tracker name is tracker_<tracker-hostname>:<port>, where the hostname
  is obtained from the DNS nameserver set by
  'mapred.tasktracker.dns.nameserver' in conf/hadoop-default.xml. So I
  guess in your case hadoop.domain.example.com
  is the name obtained from the DNS nameserver for that node. Can you
  provide more details on the xml parameters you have changed in the conf
  directory? Also, can you provide more details on how you are starting
  your Hadoop?
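  In case it helps, overriding those lookup parameters in your
  conf/hadoop-site.xml would look roughly like the sketch below. The
  interface name and nameserver address are placeholders for your
  setup; whether the tracker then reports a name or a raw IP depends
  on what the reverse lookup returns.
 
    <property>
      <name>mapred.tasktracker.dns.interface</name>
      <!-- placeholder: use the real interface on your node -->
      <value>eth0</value>
    </property>
 
    <property>
      <name>mapred.tasktracker.dns.nameserver</name>
      <!-- placeholder: use the nameserver you want consulted -->
      <value>192.168.101.1</value>
    </property>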
  Amar
 
   Now, when I click on the
   tracker_hadoop.domain.example.com:/127.0.0.1:4545 link, it takes me to
   http://hadoop.domain.example.com:50060/. But there is no DNS entry for
   hadoop in our DNS server, so I get an error in the browser. hadoop is
   just the locally set name on the master node. From my machine I can't
   access the master node as hadoop; I have to access it by the IP address
   192.168.101.8. So, this link fails. Is there a way I can set it so
   that it doesn't use names but only IP addresses when forming this link?
  
 
 



URLs contain non-existent domain names in machines.jsp

2008-02-08 Thread Ben Kucinich
I have Hadoop running on a master node, 192.168.1.8. fs.default.name
is 192.168.101.8:9000 and mapred.job.tracker is 192.168.101.8:9001.

I am accessing its web pages on port 50030 from another machine. I
visited http://192.168.101.8:50030/machines.jsp. It showed:-

Name                                               Host                       # running tasks  Failures  Seconds since heartbeat
tracker_hadoop.domain.example.com:/127.0.0.1:4545  hadoop.domain.example.com  0                0         9

Now, when I click on the
tracker_hadoop.domain.example.com:/127.0.0.1:4545 link, it takes me to
http://hadoop.domain.example.com:50060/. But there is no DNS entry for
hadoop in our DNS server, so I get an error in the browser. hadoop is
just the locally set name on the master node. From my machine I can't
access the master node as hadoop; I have to access it by the IP address
192.168.101.8. So, this link fails. Is there a way I can set it so
that it doesn't use names but only IP addresses when forming this link?