Previously I was getting a failure that included the message:

    Container killed by YARN for exceeding memory limits. 2.1 GB of 2 GB
    physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

So I attempted the following:

    spark-submit --jars examples.jar latest_msmtdt_by_gridid_and_source.py \
        --conf spark.yarn.executor.memoryOverhead=1024 host table
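For comparison, here is a sketch of the ordering I assumed spark-submit expects, with --conf placed before the application file (host and table are my own script's arguments); I have not verified that this is the correct form, which is part of what I am asking:

    spark-submit \
        --jars examples.jar \
        --conf spark.yarn.executor.memoryOverhead=1024 \
        latest_msmtdt_by_gridid_and_source.py host table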

This resulted in:

    Application application_1438983806434_24070 failed 2 times due to AM Container
    for appattempt_1438983806434_24070_000002 exited with exitCode: -1000

Am I specifying spark.yarn.executor.memoryOverhead incorrectly?
