Possibilities:

- You are using more memory now (and not getting killed by YARN), but
you are exceeding the machine's physical memory and swapping
- Your heap sizes / config aren't quite right, and instead of failing
earlier because YARN killed the job, you're running normally but losing
a lot of time to GC thrashing (a quick way to check this is sketched
below)

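If you want to rule out the second case, turning on GC logging for the
executors makes long pauses visible in the executor logs. A minimal
sketch using Spark 1.x config names (the JVM flags below are just one
reasonable choice, not a tuned set):

  import org.apache.spark.SparkConf

  // Verbose GC on each executor; long pauses then show up in the
  // executor stderr logs collected by YARN.
  val conf = new SparkConf()
    .set("spark.executor.extraJavaOptions",
      "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps")

The same string can also be passed with --conf on spark-submit.
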
Based on your description, I suspect the first one. In general, disable
swap on cluster machines.
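
On the heap-size side, the thing to keep in mind is that YARN kills the
executor once its physical memory use goes past spark.executor.memory
plus spark.yarn.executor.memoryOverhead, so the overhead has to be big
enough to cover off-heap allocations. A rough sketch with Spark 1.x
property names; the numbers are placeholders, not a recommendation:

  import org.apache.spark.SparkConf

  // Heap plus overhead is what YARN enforces per executor (12 GB in the
  // report below); overhead values are in MB in Spark 1.x.
  val conf = new SparkConf()
    .set("spark.executor.memory", "10g")
    .set("spark.yarn.executor.memoryOverhead", "1024")
    .set("spark.yarn.driver.memoryOverhead", "512") // if the driver is in YARN too

Pass the conf to your SparkContext, or set the same properties with
--conf on spark-submit.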

On Mon, Jul 18, 2016 at 4:47 PM, Sunita Arvind <sunitarv...@gmail.com> wrote:
> Hello Experts,
>
> For one of our streaming applications, we intermittently saw:
>
> WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory
> limits. 12.0 GB of 12 GB physical memory used. Consider boosting
> spark.yarn.executor.memoryOverhead.
>
> Based on what I found on the internet and the error message, I increased
> the memoryOverhead to 768. This is actually slowing the application down.
> We are on Spark 1.3, so I am not sure if it's due to GC pauses. Just to do
> some intelligent trials, I wanted to understand what could be causing the
> degradation. Should I increase the driver memoryOverhead as well? Another
> interesting observation is that bringing the executor memory down to 5 GB
> with executor memoryOverhead at 768 showed significant performance gains.
> What are the other associated settings?
>
> regards
> Sunita
>
>
