Hi Takashi,

Thanks for your help. After further investigation, I figured out that the
killed container was the driver process. After setting
spark.yarn.driver.memoryOverhead instead of spark.yarn.executor.memoryOverhead,
the error was gone and the application now runs without errors. Maybe it will
help you as well.
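
In case it helps, the setting can be passed on the command line roughly like
this (memory values and the jar name below are only placeholders, not the ones
from our cluster):

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 2g \
    --conf spark.yarn.driver.memoryOverhead=1024 \
    my-application.jar

The overhead value is given in MB. As far as I know, the default is 10% of the
driver memory with a minimum of 384 MB, and spark.yarn.driver.memoryOverhead
only applies to the driver in cluster mode; in client mode the corresponding
key would be spark.yarn.am.memoryOverhead. The same key can also be put into
spark-defaults.conf instead of the command line.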

Regards,

Pascal 




> Am 17.07.2017 um 22:59 schrieb Takashi Sasaki <tsasaki...@gmail.com>:
> 
> Hi Pascal,
> 
> The error also occurred frequently in our project.
> 
> As a solution, it was effective to specify the memory size directly
> with spark-submit command.
> 
> e.g. spark-submit --executor-memory 2g
> 
> 
> Regards,
> 
> Takashi
> 
>> 2017-07-18 5:18 GMT+09:00 Pascal Stammer <stam...@deichbrise.de>:
>>> Hi,
>>> 
>>> I am running a Spark 2.1.x application on AWS EMR with YARN and I get the
>>> following error, which kills my application:
>>> 
>>> AM Container for appattempt_1500320286695_0001_000001 exited with exitCode:
>>> -104
>>> For more detailed output, check application tracking page:
>>> http://ip-172-31-35-192.eu-central-1.compute.internal:8088/cluster/app/application_1500320286695_0001
>>> Then, click on links to logs of each attempt.
>>> Diagnostics: Container
>>> [pid=9216,containerID=container_1500320286695_0001_01_000001] is running
>>> beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical
>>> memory used; 3.3 GB of 6.9 GB virtual memory used. Killing container.
>>> 
>>> 
>>> I already changed spark.yarn.executor.memoryOverhead, but the error still
>>> occurs. Does anybody have a hint about which parameter or configuration I
>>> need to adapt?
>>> 
>>> Thank you very much.
>>> 
>>> Regards,
>>> 
>>> Pascal Stammer
>>> 
>>> 
> 
