Hi Gurus,

The parameter spark.yarn.executor.memoryOverhead is documented as follows:

    spark.yarn.executor.memoryOverhead
    Default: executorMemory * 0.10, with a minimum of 384
    The amount of off-heap memory (in megabytes) to be allocated per executor.
    This is memory that accounts for things like VM overheads.

From my understanding, this memory overhead should also cover
"spark.memory.offHeap.size", which means the off-heap memory size should not
be larger than the overhead size when running on YARN.
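
If that reading is right, the headroom available for spark.memory.offHeap.size
is whatever the default formula yields. A minimal Scala sketch of that
arithmetic (the 10% factor and 384 MB floor come from the documentation quoted
above; the 8 GB executor is only an example):

    object OverheadEstimate {
      val MemoryOverheadFactor = 0.10
      val MemoryOverheadMinMb  = 384

      // Default overhead in MB for a given executor memory (MB), per the docs above.
      def defaultOverheadMb(executorMemoryMb: Int): Int =
        math.max((MemoryOverheadFactor * executorMemoryMb).toInt, MemoryOverheadMinMb)

      def main(args: Array[String]): Unit = {
        val executorMemoryMb = 8192                            // e.g. --executor-memory 8g
        val overheadMb  = defaultOverheadMb(executorMemoryMb)  // 819 MB
        val containerMb = executorMemoryMb + overheadMb        // YARN container request: 9011 MB
        println(s"overhead=$overheadMb MB, container=$containerMb MB")
        // If spark.memory.offHeap.size has to fit inside that overhead, anything
        // much above ~819 MB of off-heap here risks the container being killed.
      }
    }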

On Thu, Nov 24, 2016 at 3:01 AM, Koert Kuipers wrote:
> in YarnAllocator i see that memoryOverhead is by default set to
>
>     math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN)
>
> this does not take into account spark.memory.offHeap.size i think. should it?
> something like:
>
>     math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHeapSize).toInt, MEMORY_OVERHEAD_MIN)
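
For readers following along, a rough side-by-side of the default quoted above
and the suggested change, in Scala (a sketch only: offHeapSizeMb is a
placeholder name, not the actual YarnAllocator code):

    val MEMORY_OVERHEAD_FACTOR = 0.10
    val MEMORY_OVERHEAD_MIN    = 384

    // Current default: a fraction of executor memory, with a floor.
    def currentOverhead(executorMemoryMb: Int): Int =
      math.max((MEMORY_OVERHEAD_FACTOR * executorMemoryMb).toInt, MEMORY_OVERHEAD_MIN)

    // Proposed: also reserve room for spark.memory.offHeap.size (in MB), so a
    // large off-heap allocation doesn't silently eat the factor-based headroom.
    def proposedOverhead(executorMemoryMb: Int, offHeapSizeMb: Int): Int =
      math.max((MEMORY_OVERHEAD_FACTOR * executorMemoryMb + offHeapSizeMb).toInt, MEMORY_OVERHEAD_MIN)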

Hello Experts,

For one of our streaming applications, we intermittently saw:

    WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory
    limits. 12.0 GB of 12 GB physical memory used. Consider boosting
    spark.yarn.executor.memoryOverhead.

Based on what I found on the internet and the error message, I increased the
memoryOverhead to 768. This is actually slowing the application down.
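
For anyone hitting the same warning, the knob the message names can be set
either on spark-submit or in code; a minimal sketch (the 2048 MB value and the
app name are placeholders, not a recommendation for this workload):

    import org.apache.spark.SparkConf

    // Placeholder values; tune to the actual workload rather than copying.
    // The same setting can be passed on spark-submit as
    //   --conf spark.yarn.executor.memoryOverhead=2048
    val conf = new SparkConf()
      .setAppName("streaming-overhead-demo")              // hypothetical app name
      .set("spark.yarn.executor.memoryOverhead", "2048")  // MB of per-executor headroom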

Previously I was getting a failure which included the message:

    Container killed by YARN for exceeding memory limits. 2.1 GB of 2 GB
    physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

So I attempted the following:

    spark-submit --jars examples.jar ...
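
The "2.1 GB of 2 GB" pattern falls out of how the container is sized: YARN
grants executor memory plus the overhead, and kills the process when its
physical footprint exceeds that sum. A rough worked example with made-up
numbers (the post does not say what --executor-memory was used):

    // Illustrative figures only.
    val executorMemoryMb = 1664                                            // e.g. --executor-memory 1664m
    val overheadMb       = math.max((0.10 * executorMemoryMb).toInt, 384)  // floor wins: 384
    val containerMb      = executorMemoryMb + overheadMb                   // 2048 MB, i.e. the "2 GB" limit
    // A JVM that also maps native buffers (shuffle, netty, off-heap cache) can
    // push the process footprint past 2048 MB, at which point YARN kills it.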

My team wasted a lot of time on this failure mode as well and has settled in
to passing "--conf spark.yarn.executor.memoryOverhead=1024" to most jobs
(that works out to 10-20% of --executor-memory, depending on the job).

I agree that learning about this the hard way is a weak part of the ...
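
To make the 10-20% figure concrete, a quick check (the executor sizes are
examples, not taken from the thread):

    // 1024 MB of overhead relative to two example executor sizes.
    val overheadMb = 1024
    for (executorMemoryGb <- Seq(5, 10)) {
      val pct = 100.0 * overheadMb / (executorMemoryGb * 1024)
      println(f"--executor-memory ${executorMemoryGb}g -> overhead is $pct%.0f%%")  // 20% and 10%
    }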

... overhead value; it may still have to be increased in some cases, but a
fatter default may make this kind of surprise less frequent.

I'd support increasing the default; any other thoughts?
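
As a sketch of what a "fatter default" could mean in the formula discussed
earlier (the 0.15 factor and 512 MB floor are purely hypothetical, not numbers
proposed in this thread):

    // Hypothetical values for illustration; the thread does not settle on numbers.
    val CurrentFactor = 0.10; val CurrentMinMb = 384
    val FatterFactor  = 0.15; val FatterMinMb  = 512

    def overhead(factor: Double, minMb: Int)(executorMemoryMb: Int): Int =
      math.max((factor * executorMemoryMb).toInt, minMb)

    println(overhead(CurrentFactor, CurrentMinMb)(8192))  // 819 MB today for an 8 GB executor
    println(overhead(FatterFactor, FatterMinMb)(8192))    // 1228 MB with the fatter default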

On Sat, Feb 28, 2015 at 3:34 PM, Koert Kuipers wrote:
> hey,
> running my first map-red like (meaning disk-to-disk, avoiding in-memory
> RDDs) computation in spark on yarn i immediately got bitten by a too low
> spark.yarn.executor.memoryOverhead. however it took me about an hour to
> find out this was the cause. at first i observed failing shuffles leading
> to restarting of tasks, then i realized this was because executors could
> not be reached, ...