Thank you for the hint.  I'm fairly new to this, so nothing is well
known to me at this time ;-)

-K

On Wed, Feb 16, 2011 at 1:58 PM, Rahul Jain <rja...@gmail.com> wrote:
> If you google for such memory failures, you'll find the mapreduce tunable
> that'll help you:
>
> mapred.job.shuffle.input.buffer.percent; it is well known that the default
> values in the Hadoop config don't work well for large data systems.
>
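For anyone else hitting this: the property Rahul mentions is usually set in mapred-site.xml. The 0.2 value below is only an illustrative lower setting (the classic default is around 0.70, i.e. 70% of the reducer heap reserved for shuffle buffers); the right value depends on your heap size and job.

```xml
<!-- mapred-site.xml (sketch, not a recommended value from this thread) -->
<configuration>
  <property>
    <name>mapred.job.shuffle.input.buffer.percent</name>
    <!-- Fraction of reduce-task heap used to buffer map outputs
         during the shuffle; lowering it leaves more heap for the
         reduce logic and can avoid shuffle OutOfMemoryErrors. -->
    <value>0.2</value>
  </property>
</configuration>
```

The same property can also be passed per job on the command line with `-D mapred.job.shuffle.input.buffer.percent=0.2`.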
