From my understanding, this memory overhead should include
"spark.memory.offHeap.size", which means the off-heap memory size should not be
larger than the overhead memory size when running on YARN.

On Thu, Nov 24, 2016 at 3:01 AM, Koert Kuipers <ko...@tresata.com> wrote:

> in YarnAllocator i see that memoryOverhead is by default set to
> math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt,
> MEMORY_OVERHEAD_MIN)
>
> this does not take into account spark.memory.offHeap.size i think. should
> it?
>
> something like:
>
> math.max((MEMORY_OVERHEAD_FACTOR * executorMemory + offHeapMemory).toInt,
> MEMORY_OVERHEAD_MIN)
>
