You can set the amount of memory used by the reducer using the
mapreduce.reduce.java.opts property. Set it in mapred-site.xml or
override it in your job. You can set it to something like -Xmx512m to
increase the amount of memory used by the JVM spawned for the reducer task.
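For reference, a sketch of what that setting looks like as a mapred-site.xml fragment (the property goes inside the file's <configuration> element; the value here is the -Xmx512m suggested above, and on older 0.20-era releases the equivalent knob is mapred.child.java.opts):

```xml
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx512m</value>
</property>
```

If the property is not marked final in the cluster config, it can also be overridden per job, e.g. with -Dmapreduce.reduce.java.opts=-Xmx512m on the command line of a driver that goes through GenericOptionsParser/ToolRunner.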
Thank you for the hint. I'm fairly new to this so nothing is well
known to me at this time ;-)
-K
On Wed, Feb 16, 2011 at 1:58 PM, Rahul Jain wrote:
> If you google for such memory failures, you'll find the mapreduce tunable
> that'll help you:
>
> mapred.job.shuffle.input.buffer.percent
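For context on that tunable: mapred.job.shuffle.input.buffer.percent is the fraction of the reduce task's heap used to buffer map outputs during the shuffle (it commonly defaults to 0.70). Lowering it leaves more heap headroom at some cost in shuffle speed. A sketch of an override, with 0.50 as a purely illustrative value:

```xml
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.50</value>
</property>
```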
-Original Message-
From: Kelly Burkhart [mailto:kelly.burkh...@gmail.com]
Sent: Wednesday, February 16, 2011 9:12 AM
To: common-user@hadoop.apache.org
Subject: Re: Reduce java.lang.OutOfMemoryError
I have had it fail with a single reducer and with 100 reducers.
Ultimately it needs to be funneled to a single reducer though.
...oh sorry I didn't scroll below the exception the first time. Try part 2
James
Sent from my mobile. Please excuse the typos.
On 2011-02-16, at 8:00 AM, Kelly Burkhart wrote:
> Hello, I'm seeing frequent fails in reduce jobs with errors similar to this:
Another possibility could be increasing the memory allocated to the
JVM... not sure how to do it though.
On Wed, Feb 16, 2011 at 8:46 PM, James Seigel wrote:
> Well the first thing I'd ask to see (if we can) is the code or a
> description of what your reducer is doing.
>
> If it is holding on to objects too long or accumulating lists well
> then with the right amount of data you will run OOM.
Well the first thing I'd ask to see (if we can) is the code or a
description of what your reducer is doing.
If it is holding on to objects too long or accumulating lists well
then with the right amount of data you will run OOM.
Another thought is that you've just not allocated enough mem for the reducer's JVM.
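James's point about holding on to objects can be illustrated outside Hadoop. A minimal plain-Java sketch (hypothetical methods, not code from this thread): the buffered version's memory grows with the number of values per key, while the streaming version stays constant, which with enough data is the difference between finishing and running OOM.

```java
import java.util.ArrayList;
import java.util.List;

public class ReduceMemory {
    // Risky pattern: materializes every value for a key before reducing,
    // so memory use is O(n) in the number of values.
    static long sumBuffered(Iterable<Long> values) {
        List<Long> all = new ArrayList<>();
        for (Long v : values) all.add(v);   // the whole list lives on the heap
        long sum = 0;
        for (Long v : all) sum += v;
        return sum;
    }

    // Safer pattern: a running aggregate over the iterator, O(1) memory.
    static long sumStreaming(Iterable<Long> values) {
        long sum = 0;
        for (Long v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        List<Long> values = List.of(1L, 2L, 3L, 4L);
        System.out.println(sumBuffered(values));
        System.out.println(sumStreaming(values));
    }
}
```

Both produce the same sum; only the peak memory differs, which is why a reducer that merely needs an aggregate should avoid collecting its values into a list first.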
I have had it fail with a single reducer and with 100 reducers.
Ultimately it needs to be funneled to a single reducer though.
-K
On Wed, Feb 16, 2011 at 9:02 AM, real great.. wrote:
> Hi,
> How many reducers are you using currently?
> Try increasing the number or reducers.
> Let me know if it helps.
Hi,
How many reducers are you using currently?
Try increasing the number or reducers.
Let me know if it helps.
On Wed, Feb 16, 2011 at 8:30 PM, Kelly Burkhart wrote:
> Hello, I'm seeing frequent fails in reduce jobs with errors similar to
> this:
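For completeness, the reducer count is set per job. A hedged sketch of the two usual ways (jar and class names here are hypothetical; the old-style property name matches the 0.20-era properties used elsewhere in this thread, and the -D form assumes the driver goes through ToolRunner/GenericOptionsParser):

```shell
# Per-job override on the command line:
hadoop jar myjob.jar MyDriver -D mapred.reduce.tasks=100 input output
```

Equivalently, in the driver itself, JobConf.setNumReduceTasks(100) (old API) or Job.setNumReduceTasks(100) (new API) sets the same thing.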
Hello, I'm seeing frequent fails in reduce jobs with errors similar to this:
2011-02-15 15:21:10,163 INFO org.apache.hadoop.mapred.ReduceTask:
header: attempt_201102081823_0175_m_002153_0, compressed len: 172492,
decompressed len: 172488
2011-02-15 15:21:10,163 FATAL org.apache.hadoop.mapred.Task