Hello Experts,

For one of our streaming applications, we intermittently see the following warning:

WARN yarn.YarnAllocator: Container killed by YARN for exceeding memory
limits. 12.0 GB of 12 GB physical memory used. Consider boosting
spark.yarn.executor.memoryOverhead.

Based on what I found online and on the error message, I increased
spark.yarn.executor.memoryOverhead to 768 MB. This has actually slowed the
application down. We are on Spark 1.3, so I am not sure whether the slowdown
is due to GC pauses. To run some more informed trials, I would like to
understand what could be causing the degradation. Should I also increase the
driver memoryOverhead? Another interesting observation: bringing the executor
memory down to 5 GB while keeping the executor memoryOverhead at 768 MB showed
significant performance gains. What other settings are associated with this?
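
For context, here is roughly how these settings are applied in the runs
described above (a simplified sketch; the app name and batch interval are
placeholders, and the actual streaming logic is omitted):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Sketch of the configuration for the 5 GB executor run mentioned above.
    val conf = new SparkConf()
      .setAppName("streaming-app")                        // placeholder name
      .set("spark.executor.memory", "5g")                 // executor heap
      .set("spark.yarn.executor.memoryOverhead", "768")   // off-heap overhead, in MB
      // .set("spark.yarn.driver.memoryOverhead", "768")  // the driver-side setting I am asking about

    val ssc = new StreamingContext(conf, Seconds(10))     // placeholder batch interval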

regards
Sunita
