Greetings!
My executors apparently are being terminated because they are running beyond
physical memory limits according to the yarn-hadoop-nodemanager logs on the
worker nodes (/mnt/var/log/hadoop on AWS EMR). I'm setting the driver-memory
to 8G. However, looking at stdout in userlogs, I can
Short answer: yes.
Take a look at: http://spark.apache.org/docs/latest/running-on-yarn.html
Look for memoryOverhead.
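As a sketch of what that looks like in practice (assuming Spark 1.x on YARN, where the property is named spark.yarn.executor.memoryOverhead; the job file name is made up for illustration), the overhead can be raised at submit time:

```shell
# Hypothetical submit command: raise the off-heap overhead YARN accounts
# for on top of each executor's heap. The value is in MiB; the default is
# max(384, 7% of executor memory), so a container can be killed even when
# the JVM heap itself stays within --executor-memory.
spark-submit \
  --master yarn-cluster \
  --executor-memory 8G \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my_job.py   # placeholder application, not from the original thread
```

If the executors are still killed for exceeding physical memory, keep increasing the overhead rather than the heap, since it is the container total (heap + overhead) that YARN enforces.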
On Mon, Jan 12, 2015 at 2:06 PM, Michael Albert
m_albert...@yahoo.com.invalid wrote: