Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Yes, the two nodes work as tasktrackers.

On 16 October 2011 22:20, Uma Maheswara Rao G 72686 wrote:

> I mean, the two nodes here are tasktrackers.
>
> - Original Message -
> From: Humayun gmail 
> Date: Sunday, October 16, 2011 7:38 pm
> Subject: Re: Too much fetch failure
> To: common-user@hadoop.apache.org
>
> > Yes, we can ping every node (both master and slave).
> >
> > On 16 October 2011 19:52, Uma Maheswara Rao G 72686
> > wrote:
> > > Are you able to ping the other node with the configured hostnames?
> > >
> > > Make sure that you are able to ping the other machine with the
> > > hostname configured in the /etc/hosts files.
> > >
> > > Regards,
> > > Uma
> > > - Original Message -
> > > From: praveenesh kumar 
> > > Date: Sunday, October 16, 2011 6:46 pm
> > > Subject: Re: Too much fetch failure
> > > To: common-user@hadoop.apache.org
> > >
> > > > Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> > > > restart the cluster and try again.
> > > >
> > > > Thanks,
> > > > Praveenesh
> > > >
> > > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > > > wrote:
> > > > > We are using Hadoop on VirtualBox. When it is a single node it works
> > > > > fine for big datasets larger than the default block size, but in the
> > > > > case of a multinode cluster (2 nodes) we are facing some problems.
> > > > > When the input dataset is smaller than the default block size (64 MB)
> > > > > it works fine, but when the input dataset is larger than the default
> > > > > block size it shows ‘too much fetch failure’ in the reduce phase.
> > > > > Here is the output link:
> > > > > http://paste.ubuntu.com/707517/
> > > > >
> > > > > From the above comments, many users have faced this problem. Different
> > > > > users suggested modifying the /etc/hosts file in different ways to fix
> > > > > the problem, but there is no definitive solution. We need the actual
> > > > > solution, which is why we are writing here.
> > > > >
> > > > > This is our /etc/hosts file:
> > > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > > 127.0.0.1 localhost.localdomain localhost
> > > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > > 127.0.1.1 humayun
> > > > >
> > > > > # The following lines are desirable for IPv6 capable hosts
> > > > > ::1 localhost ip6-localhost ip6-loopback
> > > > > fe00::0 ip6-localnet
> > > > > ff00::0 ip6-mcastprefix
> > > > > ff02::1 ip6-allnodes
> > > > > ff02::2 ip6-allrouters
> > > > > ff02::3 ip6-allhosts
> > > > >
> > > > > 192.168.60.1 master
> > > > > 192.168.60.2 slave
> > > > >
> > > >
> > >
> >
>
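
A quick way to double-check that both tasktrackers really registered with the
jobtracker (only a sketch, assuming a Hadoop 0.20/1.x-style setup like the one
described in this thread):

jps                                  # on each node: TaskTracker (and DataNode) should be listed
hadoop job -list-active-trackers     # on the master: trackers the jobtracker currently sees

If only one tracker shows up, or a tracker appears under a loopback address,
that usually points back to the /etc/hosts entries discussed further down the
thread.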


Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
I mean, the two nodes here are tasktrackers.

- Original Message -
From: Humayun gmail 
Date: Sunday, October 16, 2011 7:38 pm
Subject: Re: Too much fetch failure
To: common-user@hadoop.apache.org

> Yes, we can ping every node (both master and slave).
> 
> On 16 October 2011 19:52, Uma Maheswara Rao G 72686 
> wrote:
> > Are you able to ping the other node with the configured hostnames?
> >
> > Make sure that you are able to ping the other machine with the
> > hostname configured in the /etc/hosts files.
> >
> > Regards,
> > Uma
> > - Original Message -
> > From: praveenesh kumar 
> > Date: Sunday, October 16, 2011 6:46 pm
> > Subject: Re: Too much fetch failure
> > To: common-user@hadoop.apache.org
> >
> > > Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> > > restart the cluster and try again.
> > >
> > > Thanks,
> > > Praveenesh
> > >
> > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > > wrote:
> > > > We are using Hadoop on VirtualBox. When it is a single node it works
> > > > fine for big datasets larger than the default block size, but in the
> > > > case of a multinode cluster (2 nodes) we are facing some problems.
> > > > When the input dataset is smaller than the default block size (64 MB)
> > > > it works fine, but when the input dataset is larger than the default
> > > > block size it shows ‘too much fetch failure’ in the reduce phase.
> > > > Here is the output link:
> > > > http://paste.ubuntu.com/707517/
> > > >
> > > > From the above comments, many users have faced this problem. Different
> > > > users suggested modifying the /etc/hosts file in different ways to fix
> > > > the problem, but there is no definitive solution. We need the actual
> > > > solution, which is why we are writing here.
> > > >
> > > > This is our /etc/hosts file:
> > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > 127.0.0.1 localhost.localdomain localhost
> > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > 127.0.1.1 humayun
> > > >
> > > > # The following lines are desirable for IPv6 capable hosts
> > > > ::1 localhost ip6-localhost ip6-loopback
> > > > fe00::0 ip6-localnet
> > > > ff00::0 ip6-mcastprefix
> > > > ff02::1 ip6-allnodes
> > > > ff02::2 ip6-allrouters
> > > > ff02::3 ip6-allhosts
> > > >
> > > > 192.168.60.1 master
> > > > 192.168.60.2 slave
> > > >
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
No. In my config files I mention it as master.

On 16 October 2011 20:20, praveenesh kumar  wrote:

> Why are you formatting the namenode again?
> 1. Just stop the cluster.
> 2. Just comment out the 127.0.0.1 localhost line.
> 3. Restart the cluster.
>
> How have you defined your Hadoop config files?
> Have you mentioned localhost there?
>
> Thanks,
> Praveenesh
>
> On Sun, Oct 16, 2011 at 7:42 PM, Humayun gmail wrote:
>
> > Commenting out the 127.0.0.1 line in /etc/hosts is not working. If I format
> > the namenode, this line is automatically added back.
> > Any other solution?
> >
> > On 16 October 2011 19:13, praveenesh kumar  wrote:
> >
> > > Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> > > restart the cluster and try again.
> > >
> > > Thanks,
> > > Praveenesh
> > >
> > > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:
> > >
> > > > We are using Hadoop on VirtualBox. When it is a single node it works
> > > > fine for big datasets larger than the default block size, but in the
> > > > case of a multinode cluster (2 nodes) we are facing some problems.
> > > > When the input dataset is smaller than the default block size (64 MB)
> > > > it works fine, but when the input dataset is larger than the default
> > > > block size it shows ‘too much fetch failure’ in the reduce phase.
> > > > Here is the output link:
> > > > http://paste.ubuntu.com/707517/
> > > >
> > > > From the above comments, many users have faced this problem. Different
> > > > users suggested modifying the /etc/hosts file in different ways to fix
> > > > the problem, but there is no definitive solution. We need the actual
> > > > solution, which is why we are writing here.
> > > >
> > > > This is our /etc/hosts file:
> > > > 192.168.60.147 humayun # Added by NetworkManager
> > > > 127.0.0.1 localhost.localdomain localhost
> > > > ::1 humayun localhost6.localdomain6 localhost6
> > > > 127.0.1.1 humayun
> > > >
> > > > # The following lines are desirable for IPv6 capable hosts
> > > > ::1 localhost ip6-localhost ip6-loopback
> > > > fe00::0 ip6-localnet
> > > > ff00::0 ip6-mcastprefix
> > > > ff02::1 ip6-allnodes
> > > > ff02::2 ip6-allrouters
> > > > ff02::3 ip6-allhosts
> > > >
> > > > 192.168.60.1 master
> > > > 192.168.60.2 slave
> > > >
> > >
> >
>
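
For illustration, "mentioning master in the config files" for a 2-node setup of
this vintage usually means entries along these lines (the port numbers here are
only common example values, not taken from this thread):

core-site.xml (same on both nodes):
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>

mapred-site.xml (same on both nodes):
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>

conf/slaves (on the master, if the master also runs a datanode/tasktracker):
  master
  slave

If localhost still appears in these values on a multinode cluster, each node's
daemons try to reach services on themselves instead of on the master.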


Re: Too much fetch failure

2011-10-16 Thread praveenesh kumar
Why are you formatting the namenode again?
1. Just stop the cluster.
2. Just comment out the 127.0.0.1 localhost line.
3. Restart the cluster.

How have you defined your Hadoop config files?
Have you mentioned localhost there?

Thanks,
Praveenesh

On Sun, Oct 16, 2011 at 7:42 PM, Humayun gmail wrote:

> Commenting out the 127.0.0.1 line in /etc/hosts is not working. If I format
> the namenode, this line is automatically added back.
> Any other solution?
>
> On 16 October 2011 19:13, praveenesh kumar  wrote:
>
> > Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> > restart the cluster and try again.
> >
> > Thanks,
> > Praveenesh
> >
> > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:
> >
> > > We are using Hadoop on VirtualBox. When it is a single node it works
> > > fine for big datasets larger than the default block size, but in the
> > > case of a multinode cluster (2 nodes) we are facing some problems.
> > > When the input dataset is smaller than the default block size (64 MB)
> > > it works fine, but when the input dataset is larger than the default
> > > block size it shows ‘too much fetch failure’ in the reduce phase.
> > > Here is the output link:
> > > http://paste.ubuntu.com/707517/
> > >
> > > From the above comments, many users have faced this problem. Different
> > > users suggested modifying the /etc/hosts file in different ways to fix
> > > the problem, but there is no definitive solution. We need the actual
> > > solution, which is why we are writing here.
> > >
> > > This is our /etc/hosts file:
> > > 192.168.60.147 humayun # Added by NetworkManager
> > > 127.0.0.1 localhost.localdomain localhost
> > > ::1 humayun localhost6.localdomain6 localhost6
> > > 127.0.1.1 humayun
> > >
> > > # The following lines are desirable for IPv6 capable hosts
> > > ::1 localhost ip6-localhost ip6-loopback
> > > fe00::0 ip6-localnet
> > > ff00::0 ip6-mcastprefix
> > > ff02::1 ip6-allnodes
> > > ff02::2 ip6-allrouters
> > > ff02::3 ip6-allhosts
> > >
> > > 192.168.60.1 master
> > > 192.168.60.2 slave
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Commenting out the 127.0.0.1 line in /etc/hosts is not working. If I format
the namenode, this line is automatically added back.
Any other solution?

On 16 October 2011 19:13, praveenesh kumar  wrote:

> Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> restart the cluster and try again.
>
> Thanks,
> Praveenesh
>
> On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:
>
> > We are using Hadoop on VirtualBox. When it is a single node it works
> > fine for big datasets larger than the default block size, but in the
> > case of a multinode cluster (2 nodes) we are facing some problems.
> > When the input dataset is smaller than the default block size (64 MB)
> > it works fine, but when the input dataset is larger than the default
> > block size it shows ‘too much fetch failure’ in the reduce phase.
> > Here is the output link:
> > http://paste.ubuntu.com/707517/
> >
> > From the above comments, many users have faced this problem. Different
> > users suggested modifying the /etc/hosts file in different ways to fix
> > the problem, but there is no definitive solution. We need the actual
> > solution, which is why we are writing here.
> >
> > This is our /etc/hosts file:
> > 192.168.60.147 humayun # Added by NetworkManager
> > 127.0.0.1 localhost.localdomain localhost
> > ::1 humayun localhost6.localdomain6 localhost6
> > 127.0.1.1 humayun
> >
> > # The following lines are desirable for IPv6 capable hosts
> > ::1 localhost ip6-localhost ip6-loopback
> > fe00::0 ip6-localnet
> > ff00::0 ip6-mcastprefix
> > ff02::1 ip6-allnodes
> > ff02::2 ip6-allrouters
> > ff02::3 ip6-allhosts
> >
> > 192.168.60.1 master
> > 192.168.60.2 slave
> >
>


Re: Too much fetch failure

2011-10-16 Thread Humayun gmail
Yes, we can ping every node (both master and slave).

On 16 October 2011 19:52, Uma Maheswara Rao G 72686 wrote:

> Are you able to ping the other node with the configured hostnames?
>
> Make sure that you are able to ping the other machine with the
> hostname configured in the /etc/hosts files.
>
> Regards,
> Uma
> - Original Message -
> From: praveenesh kumar 
> Date: Sunday, October 16, 2011 6:46 pm
> Subject: Re: Too much fetch failure
> To: common-user@hadoop.apache.org
>
> > Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> > restart the cluster and try again.
> >
> > Thanks,
> > Praveenesh
> >
> > On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail
> > wrote:
> > > We are using Hadoop on VirtualBox. When it is a single node it works
> > > fine for big datasets larger than the default block size, but in the
> > > case of a multinode cluster (2 nodes) we are facing some problems.
> > > When the input dataset is smaller than the default block size (64 MB)
> > > it works fine, but when the input dataset is larger than the default
> > > block size it shows ‘too much fetch failure’ in the reduce phase.
> > > Here is the output link:
> > > http://paste.ubuntu.com/707517/
> > >
> > > From the above comments, many users have faced this problem. Different
> > > users suggested modifying the /etc/hosts file in different ways to fix
> > > the problem, but there is no definitive solution. We need the actual
> > > solution, which is why we are writing here.
> > >
> > > This is our /etc/hosts file:
> > > 192.168.60.147 humayun # Added by NetworkManager
> > > 127.0.0.1 localhost.localdomain localhost
> > > ::1 humayun localhost6.localdomain6 localhost6
> > > 127.0.1.1 humayun
> > >
> > > # The following lines are desirable for IPv6 capable hosts
> > > ::1 localhost ip6-localhost ip6-loopback
> > > fe00::0 ip6-localnet
> > > ff00::0 ip6-mcastprefix
> > > ff02::1 ip6-allnodes
> > > ff02::2 ip6-allrouters
> > > ff02::3 ip6-allhosts
> > >
> > > 192.168.60.1 master
> > > 192.168.60.2 slave
> > >
> >
>


Re: Too much fetch failure

2011-10-16 Thread Uma Maheswara Rao G 72686
Are you able to ping the other node with the configured hostnames?

Make sure that you are able to ping the other machine with the
hostname configured in the /etc/hosts files.

Regards,
Uma
- Original Message -
From: praveenesh kumar 
Date: Sunday, October 16, 2011 6:46 pm
Subject: Re: Too much fetch failure
To: common-user@hadoop.apache.org

> Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
> restart the cluster and try again.
> 
> Thanks,
> Praveenesh
> 
> On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail 
> wrote:
> > We are using Hadoop on VirtualBox. When it is a single node it works
> > fine for big datasets larger than the default block size, but in the
> > case of a multinode cluster (2 nodes) we are facing some problems.
> > When the input dataset is smaller than the default block size (64 MB)
> > it works fine, but when the input dataset is larger than the default
> > block size it shows ‘too much fetch failure’ in the reduce phase.
> > Here is the output link:
> > http://paste.ubuntu.com/707517/
> >
> > From the above comments, many users have faced this problem. Different
> > users suggested modifying the /etc/hosts file in different ways to fix
> > the problem, but there is no definitive solution. We need the actual
> > solution, which is why we are writing here.
> >
> > This is our /etc/hosts file:
> > 192.168.60.147 humayun # Added by NetworkManager
> > 127.0.0.1 localhost.localdomain localhost
> > ::1 humayun localhost6.localdomain6 localhost6
> > 127.0.1.1 humayun
> >
> > # The following lines are desirable for IPv6 capable hosts
> > ::1 localhost ip6-localhost ip6-loopback
> > fe00::0 ip6-localnet
> > ff00::0 ip6-mcastprefix
> > ff02::1 ip6-allnodes
> > ff02::2 ip6-allrouters
> > ff02::3 ip6-allhosts
> >
> > 192.168.60.1 master
> > 192.168.60.2 slave
> >
>
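
A plain ping can succeed even when a hostname quietly resolves to a loopback
address, so it is also worth checking what each name actually resolves to on
every node (standard Linux commands, nothing Hadoop-specific):

getent hosts master      # should show 192.168.60.1, not 127.x
getent hosts slave       # should show 192.168.60.2, not 127.x
hostname -i              # what this node's own hostname resolves to

If hostname -i prints 127.0.0.1 or 127.0.1.1 on either machine, reducers on
the other node end up trying to fetch map output from a loopback address,
which is a classic cause of the fetch failures described in this thread.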


Re: Too much fetch failure

2011-10-16 Thread praveenesh kumar
Try commenting out the 127.0.0.1 localhost line in your /etc/hosts and then
restart the cluster and try again.

Thanks,
Praveenesh

On Sun, Oct 16, 2011 at 2:00 PM, Humayun gmail wrote:

> We are using Hadoop on VirtualBox. When it is a single node it works fine
> for big datasets larger than the default block size, but in the case of a
> multinode cluster (2 nodes) we are facing some problems.
> When the input dataset is smaller than the default block size (64 MB) it
> works fine, but when the input dataset is larger than the default block
> size it shows ‘too much fetch failure’ in the reduce phase.
> Here is the output link:
> http://paste.ubuntu.com/707517/
>
> From the above comments, many users have faced this problem. Different
> users suggested modifying the /etc/hosts file in different ways to fix the
> problem, but there is no definitive solution. We need the actual solution,
> which is why we are writing here.
>
> This is our /etc/hosts file:
> 192.168.60.147 humayun # Added by NetworkManager
> 127.0.0.1 localhost.localdomain localhost
> ::1 humayun localhost6.localdomain6 localhost6
> 127.0.1.1 humayun
>
> # The following lines are desirable for IPv6 capable hosts
> ::1 localhost ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> ff02::3 ip6-allhosts
>
> 192.168.60.1 master
> 192.168.60.2 slave
>
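
As a sketch (not a guaranteed fix), the commonly recommended shape of
/etc/hosts for a setup like this keeps every real hostname off the loopback
addresses and uses the same LAN mappings on both machines:

127.0.0.1      localhost
# no 127.0.1.1 <this-machine's-hostname> line, and the machine's own hostname
# (here humayun) should not be listed against ::1 or 127.x either
192.168.60.147 humayun
192.168.60.1   master
192.168.60.2   slave
# the stock IPv6 lines (ip6-localhost, ip6-localnet, ...) can stay as they are

The key point running through the thread is that whatever hostname each node
reports for itself must resolve, on every node, to that node's 192.168.60.x
address rather than to a loopback address.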