The parameter spark.yarn.executor.memoryOverhead

2017-10-30 Thread Ashok Kumar
Hi Gurus, the parameter spark.yarn.executor.memoryOverhead is explained as follows: spark.yarn.executor.memoryOverhead (default: executorMemory * 0.10, with minimum of 384): The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things like VM overheads
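
The default described above is just a max of a percentage and a floor. A minimal sketch of that formula, assuming executorMemory is expressed in megabytes (the object and method names are illustrative, not Spark internals):

    // Sketch of the documented default for spark.yarn.executor.memoryOverhead:
    // 10% of executor memory, but never less than 384 MB.
    object OverheadDefault {
      val OverheadFactor = 0.10 // fraction of executor heap reserved as overhead
      val OverheadMinMb  = 384L // documented floor, in MB

      def defaultOverheadMb(executorMemoryMb: Long): Long =
        math.max((OverheadFactor * executorMemoryMb).toLong, OverheadMinMb)

      def main(args: Array[String]): Unit = {
        println(defaultOverheadMb(2 * 1024))  // 2 GB executor  -> 384 (floor applies)
        println(defaultOverheadMb(12 * 1024)) // 12 GB executor -> 1228
      }
    }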

Re: spark.yarn.executor.memoryOverhead

2016-11-23 Thread Saisai Shao
From my understanding, this memory overhead should include "spark.memory.offHeap.size", which means off-heap memory size should not be larger than the overhead memory size when running in YARN. On Thu, Nov 24, 2016 at 3:01 AM, Koert Kuipers wrote: > In YarnAllocator I see that memoryOverhead is

spark.yarn.executor.memoryOverhead

2016-11-23 Thread Koert Kuipers
In YarnAllocator I see that memoryOverhead is by default set to math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN). This does not take into account spark.memory.offHeap.size, I think. Should it? Something like: math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHea
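
One possible reading of the truncated suggestion above, with guessed variable names rather than the actual YarnAllocator code: fold spark.memory.offHeap.size (converted to MB) into the first argument of the max, so the requested container covers heap, overhead, and off-heap memory.

    object ProposedOverhead {
      val MemoryOverheadFactor = 0.10
      val MemoryOverheadMin    = 384 // MB

      // Current behaviour as described above: off-heap size is ignored.
      def current(executorMemoryMb: Int): Int =
        math.max((MemoryOverheadFactor * executorMemoryMb).toInt, MemoryOverheadMin)

      // Suggested change: also request overhead for the off-heap region.
      def proposed(executorMemoryMb: Int, offHeapSizeMb: Int): Int =
        math.max((MemoryOverheadFactor * executorMemoryMb + offHeapSizeMb).toInt,
                 MemoryOverheadMin)

      def main(args: Array[String]): Unit = {
        // 4 GB heap with 2 GB off-heap: 409 MB today vs. 2457 MB under the suggestion.
        println(current(4 * 1024))        // 409
        println(proposed(4 * 1024, 2048)) // 2457
      }
    }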

Re: Increasing spark.yarn.executor.memoryOverhead degrades performance

2016-07-18 Thread Sean Owen
> Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. Based on what I found on the internet and the error message, I increased the memoryOverhead to 768. This is actually slowing the applic

Increasing spark.yarn.executor.memoryOverhead degrades performance

2016-07-18 Thread Sunita Arvind
Hello Experts, for one of our streaming applications, we intermittently saw: WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory limits. 12.0 GB of 12 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. Based on what I found on the internet and the error
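
For reference, the setting being adjusted in this thread can be passed as "--conf spark.yarn.executor.memoryOverhead=768" on spark-submit or set programmatically. A sketch of the programmatic form, using the 12 GB executor and the 768 MB value from the report above (the app name is a placeholder, and 768 MB may or may not be enough for a given job):

    import org.apache.spark.SparkConf

    object SubmitConf {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("streaming-app")                      // placeholder name
          .set("spark.executor.memory", "12g")              // executor heap, as in the report
          .set("spark.yarn.executor.memoryOverhead", "768") // extra off-heap headroom, in MB
        // build the SparkContext / StreamingContext from `conf` as usual
        conf.getAll.foreach { case (k, v) => println(s"$k=$v") }
      }
    }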

Re: Boosting spark.yarn.executor.memoryOverhead

2015-08-11 Thread Sandy Ryza
> Previously I was getting a failure which included the message Container killed by YARN for exceeding memory limits. 2.1 GB of 2 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. So I attempted the following - spark-submit -

Boosting spark.yarn.executor.memoryOverhead

2015-08-11 Thread Eric Bless
Previously I was getting a failure which included the message Container killed by YARN for exceeding memory limits. 2.1 GB of 2 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. So I attempted the following - spark-submit --jars examples.jar

Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Sean Owen
> My team wasted a lot of time on this failure mode as well and has settled in to passing "--conf spark.yarn.executor.memoryOverhead=1024" to most jobs (that works out to 10-20% of --executor-memory, depending on the job). I agree that learning about this the hard way is a weak part of the

Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Ted Yu
> My team wasted a lot of time on this failure mode as well and has settled in to passing "--conf spark.yarn.executor.memoryOverhead=1024" to most jobs (that works out to 10-20% of --executor-memory, depending on the job). I agree that learning about this the hard way is

Re: bitten by spark.yarn.executor.memoryOverhead

2015-03-02 Thread Ryan Williams
"won't fix". My team wasted a lot of time on this failure mode as well and has settled in to passing "--conf spark.yarn.executor.memoryOverhead=1024" to most jobs (that works out to 10-20% of --executor-memory, depending on the job). I agree that learning about this the ha
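
A quick arithmetic check of the "10-20% of --executor-memory" remark: a fixed 1024 MB overhead is 20% of a 5 GB executor and 10% of a 10 GB one, so the quoted range corresponds to executor heaps of roughly 5 to 10 GB.

    object OverheadShare {
      // Fraction of the executor heap that a fixed overhead represents.
      def share(overheadMb: Int, executorMemoryGb: Int): Double =
        overheadMb.toDouble / (executorMemoryGb * 1024)

      def main(args: Array[String]): Unit = {
        println(f"${share(1024, 5) * 100}%.0f%%")  // 20%
        println(f"${share(1024, 10) * 100}%.0f%%") // 10%
      }
    }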

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Corey Nolet
> overhead value; it may still have to be increased in some cases, but a fatter default may make this kind of surprise less frequent. I'd support increasing the default; any other thoughts? On Sat,

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Ted Yu
> I'd support increasing the default; any other thoughts? On Sat, Feb 28, 2015 at 3:34 PM, Koert Kuipers wrote: > Hey, running my first map-red like (meaning disk-to-disk, avoiding in-memory RDDs) computation in sp

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Corey Nolet
> kind of surprise less frequent. I'd support increasing the default; any other thoughts? On Sat, Feb 28, 2015 at 3:34 PM, Koert Kuipers wrote: > Hey, running my first map-red like (meaning disk-to-disk, avoiding in-memory RDDs) computation in s

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Ted Yu
> On Sat, Feb 28, 2015 at 3:34 PM, Koert Kuipers wrote: > Hey, running my first map-red like (meaning disk-to-disk, avoiding in-memory RDDs) computation in Spark on YARN, I immediately got bitten by a too-low spark.yarn.executor.memoryOverhead. However it

Re: bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Sean Owen
> Spark on YARN, I immediately got bitten by a too-low spark.yarn.executor.memoryOverhead. However, it took me about an hour to find out this was the cause. At first I observed failing shuffles leading to restarting of tasks, then I realized this was because executors could not be reached, th

bitten by spark.yarn.executor.memoryOverhead

2015-02-28 Thread Koert Kuipers
Hey, running my first map-red like (meaning disk-to-disk, avoiding in-memory RDDs) computation in Spark on YARN, I immediately got bitten by a too-low spark.yarn.executor.memoryOverhead. However, it took me about an hour to find out this was the cause. At first I observed failing shuffles leading to