> On 4 Nov 2011, at 16:06, Uma Maheswara Rao G 72686 wrote:
>
> > - Original Message -
> > From: Russell Brown
> > Date: Friday, November 4, 2011 9:18 pm
> > Subject: Re: Never ending reduce jobs, error Error reading task outputConnection refused
> > To: mapreduce-user@hadoop.apache.org
>>
>> On 4 Nov 2011, at 15:44, Uma Maheswara Rao G 72686 wrote:
>>
>>> - Original Message -
>>> From: Russell Brown
>>> Date: Friday, November 4, 2011 9:11 pm
>>> Subject: Re: Never ending reduce jobs, error Error reading task outputConnection refused
- Original Message -
From: Russell Brown
Date: Friday, November 4, 2011 9:18 pm
Subject: Re: Never ending reduce jobs, error Error reading task outputConnection refused
To: mapreduce-user@hadoop.apache.org
Hi Robert,
Thanks for the reply. Version of hadoop is hadoop-0.20.203.0.
It is weird how this is only a problem when the amount of data goes up.
My setup might be to blame, this is all a learning process for me so I have 5
VMs running. 1 VM is the JobTracker/Namenode, the other 4 are data/task nodes.
On 4 Nov 2011, at 15:44, Uma Maheswara Rao G 72686 wrote:
> - Original Message -
> From: Russell Brown
> Date: Friday, November 4, 2011 9:11 pm
> Subject: Re: Never ending reduce jobs, error Error reading task outputConnection refused
> To: mapreduce-user@hadoop.apache.org
- Original Message -
From: Russell Brown
Date: Friday, November 4, 2011 9:11 pm
Subject: Re: Never ending reduce jobs, error Error reading task outputConnection refused
To: mapreduce-user@hadoop.apache.org
>
> On 4 Nov 2011, at 15:35, Uma Maheswara Rao G 72686 wrote:
>
I use IP addresses in the slaves config file, and via IP addresses
everyone can ping everyone else, do I need to set up hostnames too?
Cheers
Russell
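For reference, name resolution for a small cluster like the one being discussed is usually handled by pairing the conf/slaves file with matching /etc/hosts entries on every node. A minimal sketch, with made-up addresses and hostnames (none of these values come from this thread):

```
# /etc/hosts on every VM (example entries only)
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
192.168.1.13  slave3
192.168.1.14  slave4

# conf/slaves on the JobTracker/Namenode VM
slave1
slave2
slave3
slave4
```

With entries like these, daemons can refer to each other by hostname or IP consistently on all nodes.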
>
> Regards,
> Uma
> - Original Message -
> From: Russell Brown
> Date: Friday, November 4, 2011 9:00 pm
> Subject: Never ending reduce jobs, error Error reading task outputConnection refused
> To: mapreduce-user@hadoop.apache.org
I am not sure what is causing this, but yes they are related. In hadoop the
map output is served to the reducers through jetty, which is an embedded web
server. If the reducers are not able to fetch the map outputs, then they
assume that the mapper is bad and a new mapper is relaunched to compute the
map output again.
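The retry-then-relaunch behaviour described above can be sketched roughly as follows. This is purely an illustration of the idea; the class and method names are invented for this sketch and are not Hadoop's actual shuffle code:

```java
// Illustrative sketch of the shuffle failure handling described above.
// All names here are invented; this is not Hadoop's real implementation.
public class ShuffleFetchSketch {
    // After this many failed fetches, the reducer reports the map as lost.
    static final int MAX_FETCH_FAILURES = 3;

    // Stand-in for an HTTP fetch of a map output from the TaskTracker's
    // jetty server; "Connection refused" corresponds to jettyReachable=false.
    static boolean tryFetch(boolean jettyReachable) {
        return jettyReachable;
    }

    // Returns true when repeated fetch failures would cause the JobTracker
    // to re-launch the map task on another node.
    static boolean shouldRelaunchMap(boolean jettyReachable) {
        int failures = 0;
        while (failures < MAX_FETCH_FAILURES) {
            if (tryFetch(jettyReachable)) {
                return false; // map output copied successfully
            }
            failures++;
        }
        return true; // give up on this copy; relaunch the mapper
    }

    public static void main(String[] args) {
        System.out.println(shouldRelaunchMap(true));  // prints false
        System.out.println(shouldRelaunchMap(false)); // prints true
    }
}
```

This mirrors why a reduce job never ends when jetty is unreachable: the map keeps getting relaunched and the fetch keeps failing.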
> Hi,
> I have a cluster of 4 tasktracker/datanodes and 1
> JobTracker/Namenode. I can run small jobs on this cluster fine
> (like up to a few thousand keys) but more than that and I start
> seeing errors like this:
Hi,
I have a cluster of 4 tasktracker/datanodes and 1 JobTracker/Namenode. I can
run small jobs on this cluster fine (like up to a few thousand keys) but more
than that and I start seeing errors like this:
11/11/04 08:16:08 INFO mapred.JobClient: Task Id :
attempt_20040342_0006_m_05_0,