While I think work should be done to bring the two numbers closer together,
we should ideally raise the JVM heap value rather than lower the memory.mb
resource request of MR tasks. Otherwise, with YARN, users will start seeing
more containers per node than before.
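To make the heap-vs-container relationship concrete, here is a sketch of the two knobs being discussed, for the map side. The property names are the standard MR2 ones; the 1024/820 split is only an illustrative ~20% headroom, not a value recommended anywhere in this thread:

```xml
<!-- Sketch: keep the YARN container request above the JVM heap.
     The 1024/820 split is illustrative headroom for JVM/native overhead. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>     <!-- container size requested from YARN per map task -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx820m</value> <!-- JVM heap; must fit inside the container request -->
</property>
```

The same pair exists for reduces (mapreduce.reduce.memory.mb / mapreduce.reduce.java.opts); if the heap is set too close to memory.mb, the NodeManager may kill the container for exceeding its limit.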
Also good to raise heap plus the sort buffer memory of
I retested it, and the OOM error also occurs in 2.0.3, but not in 1.x.
How much memory does the reducer need during shuffle and sort? Is peak
usage equal to memoryLimit, or is additional memory required for the
mapred.reduce.parallel.copies fetch buffers?
MergerManager: memoryLimit=326264416, maxSingleShuffleL
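As I read the 2.x MergerManager code, memoryLimit is derived from the reduce JVM's max heap scaled by mapreduce.reduce.shuffle.input.buffer.percent (default 0.70), and maxSingleShuffleLimit is that times mapreduce.reduce.shuffle.memory.limit.percent (default 0.25). A rough arithmetic sketch, assuming those defaults and a hypothetical heap chosen to reproduce the quoted memoryLimit (double-check the constants against your Hadoop version):

```java
// Sketch of how MergerManager derives its shuffle memory numbers (2.x),
// assuming default buffer percentages; the heap value is hypothetical.
public class ShuffleMemorySketch {
    public static void main(String[] args) {
        long maxHeap = 466_092_023L;       // hypothetical Runtime.maxMemory() of the reduce JVM
        double inputBufferPercent = 0.70;  // mapreduce.reduce.shuffle.input.buffer.percent (default)
        double singleLimitPercent = 0.25;  // mapreduce.reduce.shuffle.memory.limit.percent (default)

        // Fraction of the heap usable for in-memory map output segments.
        long memoryLimit = (long) (maxHeap * inputBufferPercent);
        // Largest single map output that may be shuffled into memory;
        // anything bigger goes straight to disk.
        long maxSingleShuffleLimit = (long) (memoryLimit * singleLimitPercent);

        System.out.println("memoryLimit=" + memoryLimit
            + ", maxSingleShuffleLimit=" + maxSingleShuffleLimit);
    }
}
```

Under those assumptions the quoted memoryLimit=326264416 corresponds to a heap of roughly 466 MB, and in-flight fetches on top of that are bounded by the in-memory merge thresholds rather than sized per parallel copy.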
Radim Kolar created MAPREDUCE-5209:
--
Summary: ShuffleScheduler log message incorrect
Key: MAPREDUCE-5209
URL: https://issues.apache.org/jira/browse/MAPREDUCE-5209
Project: Hadoop Map/Reduce
Radim,
Those are already present; the discussion is about the heap vs. the
requested memory resource.
On Sun, May 5, 2013 at 3:22 AM, Radim Kolar wrote:
>
> While looking into MAPREDUCE-5207 (adding defaults for
> mapreduce.{map|reduce}.memory.mb), I was wondering how much headroom
> should be left