We ran into this recently. Solution was to bump up the value of the
dfs.datanode.max.xcievers setting.
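For reference, this is a datanode-side property; as a minimal sketch, assuming
the 0.20-style hdfs-site.xml layout (the property name really is spelled
"xcievers", and the value below is only illustrative; tune it to your load):

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>512</value>
  </property>

The datanodes need a restart to pick up the change.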
HTH,
DR
On 10/12/2010 03:53 PM, rakesh kothari wrote:
Hi,
My MR job is processing gzipped files, each around 450 MB, and there are 24 of
them. The file block size is 512 MB.
This job is failing in the reducers.
No. It just runs this job. It's a 7-node cluster with 3 mapper and 2 reducer
slots per node.
Date: Tue, 12 Oct 2010 13:23:23 -0700
Subject: Re: Failures in the reducers
From: shrij...@rocketfuel.com
To: mapreduce-user@hadoop.apache.org
Is your cluster busy doing other things (while this job is running)?
> I wonder why this happens in the reduce stage since I just have 10 reducers,
> and I don't see how those 256 connections are being opened.
>
> -Rakesh
>
> --
> Date: Tue, 12 Oct 2010 13:02:16 -0700
> Subject: Re: Failures in the reducers
> From: shrij...@rocketfuel.com
> To: mapreduce-user@hadoop.apache.org
Date: Tue, 12 Oct 2010 13:02:16 -0700
Subject: Re: Failures in the reducers
From: shrij...@rocketfuel.com
To: mapreduce-user@hadoop.apache.org
Rakesh,
That error log looks like it came from a DataNode, not the NameNode. Anyway,
try pumping the parameter named *dfs.datanode.max.xcievers* up (shoot for
512). This param belongs in hdfs-site.xml.
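As a sketch, assuming the 0.20-style split configuration, the stanza would sit
in conf/hdfs-site.xml on each datanode (restart the datanodes afterwards):

  <configuration>
    <!-- existing properties ... -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>512</value>
    </property>
  </configuration>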
-Shrijeet
On Tue, Oct 12, 2010 at 12:53 PM, rakesh kothari
wrote:
> Hi,
>
> My MR job is processing gzipped files, each around 450 MB, and there are 24 of
> them. The file block size is 512 MB.
> This job is failing in the reducers.