Re: Failures in the reducers

2010-10-13 Thread David Rosenstrauch
We ran into this recently. The solution was to bump up the value of the dfs.datanode.max.xcievers setting. HTH, DR

On 10/12/2010 03:53 PM, rakesh kothari wrote:
> Hi, My MR Job is processing gzipped files, each around 450 MB, and there are 24 of them. File block size is 512 MB. This job is failing in the reducers ...
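For reference, a minimal sketch of the change being suggested here, assuming a 0.20-era Hadoop deployment (in those releases the default for this setting was 256, and the property name really is spelled "xcievers"):

    <!-- Raises the DataNode's cap on concurrent block readers/writers.
         512 is the value suggested later in this thread; size it to
         your workload. -->
    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>512</value>
    </property>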

RE: Failures in the reducers

2010-10-12 Thread rakesh kothari
No, it just runs this job. It's a 7-node cluster with 3 mapper and 2 reducer slots per node.

On Tue, 12 Oct 2010 13:23:23 -0700, shrij...@rocketfuel.com wrote to mapreduce-user@hadoop.apache.org:
> Is your cluster busy doing other things? (while this job ...
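Those slot counts imply at most 7 × 3 = 21 concurrent map tasks and 7 × 2 = 14 concurrent reduce tasks cluster-wide. As an illustration, this is how such slots were typically declared in mapred-site.xml on 0.20-era TaskTrackers; the property names are the stock ones, and the values mirror the cluster described above:

    <!-- Per-TaskTracker slot limits; illustrative values matching the
         3 map / 2 reduce slots per node described above. -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>3</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>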

Re: Failures in the reducers

2010-10-12 Thread Shrijeet Paliwal
> I wonder why this happens in the reduce stage, since I just have 10 reducers and I don't see how those 256 connections are being opened.
>
> -Rakesh
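A plausible explanation for the 256 figure: in Hadoop releases of this vintage, 256 was the default value of dfs.datanode.max.xcievers, and the limit counts every concurrent block reader and writer on a single DataNode across all jobs and clients, not just one job's reducer connections. Ten reducers each writing replicated output blocks, on top of map tasks still reading input, can push an individual DataNode past that cap.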

RE: Failures in the reducers

2010-10-12 Thread rakesh kothari
-Rakesh

On Tue, 12 Oct 2010 13:02:16 -0700, shrij...@rocketfuel.com wrote to mapreduce-user@hadoop.apache.org:
> Rakesh, that error log looks like it belonged to the DataNode and not the NameNode. Anyway, try pumping the parameter named dfs.datanode.max.xcievers up (shoot for 512) ...

Re: Failures in the reducers

2010-10-12 Thread Shrijeet Paliwal
Rakesh, that error log looks like it belonged to the DataNode and not the NameNode. Anyway, try pumping the parameter named *dfs.datanode.max.xcievers* up (shoot for 512). This param belongs to core-site.xml. -Shrijeet

On Tue, Oct 12, 2010 at 12:53 PM, rakesh kothari wrote:
> Hi,
>
> My MR Job is processing gzipped files, each around 450 MB, and there are 24 of them. File block size is 512 MB ...
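A note for anyone applying this advice: in stock 0.20-era configurations the dfs.* DataNode properties, including dfs.datanode.max.xcievers, are usually read from hdfs-site.xml rather than core-site.xml, and the DataNodes need to be restarted before a new value takes effect.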