Hi there,

I have a Spark batch job running on CDH 5.4 + Spark 1.3.0, submitted in 
“yarn-client” mode. The job itself failed because YARN killed several executor 
containers for exceeding the memory limit imposed by YARN. However, when I 
went to the YARN resource manager UI, it displayed the job as successful. I 
found an issue reported in JIRA 
(https://issues.apache.org/jira/browse/SPARK-3627), but it says it was fixed 
in Spark 1.2. On the Spark history server, the job shows as “Incomplete”.

Is this still a bug, or is there something I need to do in my Spark 
application to report the correct job status to YARN?
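
For example, would I need to explicitly fail the driver myself? Below is only 
a rough sketch of what I have in mind (the object and app names are made up, 
and I am not sure this is the right approach): catch the failure, stop the 
SparkContext, and exit with a non-zero code in the hope that YARN picks up 
the failure.

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver skeleton, only to illustrate the question above.
object MyBatchJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("my-batch-job")
    val sc = new SparkContext(conf)
    try {
      // actual batch logic goes here; this is where the lost executors surface
      sc.textFile(args(0)).count()
      sc.stop()
    } catch {
      case e: Exception =>
        sc.stop()
        // exit non-zero, hoping the failure is reported back to YARN
        sys.exit(1)
    }
  }
}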

Lan
