Hi Humayun,

 Let's assume you have JT, TT1, TT2, and TT3 (one JobTracker and three TaskTrackers).

  Now you should configure /etc/hosts as in the example below:

      10.18.xx.1 JT

      10.18.xx.2 TT1

      10.18.xx.3 TT2

      10.18.xx.4 TT3

   Configure the same set of entries on all the machines, so that all the
TaskTrackers can talk to each other by hostname correctly. Also, please remove
the following entries from your /etc/hosts files:

   127.0.0.1 localhost.localdomain localhost

   127.0.1.1 humayun
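
   For your two-node setup, for example, that would leave both machines with
entries like the following (IPs taken from your mail below; adjust if yours
differ). The key point is that each node's own hostname must resolve to its
LAN address, not to a loopback one:

      192.168.60.1 master
      192.168.60.2 slave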



I have seen that others have already suggested several links for the regular
configuration items, so hopefully those are clear to you now.
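
   Also double-check that those files agree with the hostnames above. As a rough
sketch only (assuming the classic 0.20-style property names; ports 9000 and 9001
are just commonly used choices, not required values):

      <!-- core-site.xml -->
      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://master:9000</value>
        </property>
      </configuration>

      <!-- mapred-site.xml -->
      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>master:9001</value>
        </property>
      </configuration>

   If these point at a name that resolves to 127.0.x.x on some node, the reducers
may fail to fetch map outputs from the other node.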

Hope it helps...

Regards,

Uma

________________________________

From: Humayun kabir [humayun0...@gmail.com]
Sent: Thursday, December 22, 2011 10:34 PM
To: common-user@hadoop.apache.org; Uma Maheswara Rao G
Subject: Re: Hadoop configuration

Hello Uma,

Thanks for your cordial and quick reply. It would be great if you could explain
what you suggested in a bit more detail. Right now we are running with the
following configuration.

We are using Hadoop on VirtualBox. As a single node it works fine, even for
datasets larger than the default block size, but in a multinode cluster (2
nodes) we are facing some problems. We are able to ping both "Master->Slave"
and "Slave->Master".
When the input dataset is smaller than the default block size (64 MB),
everything works fine, but when the input dataset is larger than the default
block size, it shows 'too many fetch failures' in the reduce phase.
Here is the output link:
http://paste.ubuntu.com/707517/

This is our /etc/hosts file:

192.168.60.147 humayun # Added by NetworkManager
127.0.0.1 localhost.localdomain localhost
::1 humayun localhost6.localdomain6 localhost6
127.0.1.1 humayun

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

192.168.60.1 master
192.168.60.2 slave


Regards,

-Humayun.


On 22 December 2011 15:47, Uma Maheswara Rao G <mahesw...@huawei.com> wrote:
Hey Humayun,

 To solve the 'too many fetch failures' problem, you should configure the host
mapping correctly.
Each TaskTracker should be able to ping every other node by hostname.
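
For a quick check, you could run something like this from each node (the
hostname below is just a placeholder for whatever you put in /etc/hosts):

   ping -c 1 <hostname-of-other-node>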

Regards,
Uma
________________________________________
From: Humayun kabir [humayun0...@gmail.com]
Sent: Thursday, December 22, 2011 2:54 PM
To: common-user@hadoop.apache.org
Subject: Hadoop configuration

Could someone please help me configure Hadoop (core-site.xml, hdfs-site.xml,
mapred-site.xml, etc.)? Please provide some examples; it is badly needed,
because I run a 2-node cluster, and when I run the wordcount example it fails
with 'too many fetch failures'.
