I'm running a Python script with spark-submit on YARN in an EMR cluster.
When a job fails due to ExecutorLostFailure, or when I kill the job, it
still shows up in the web UI with a FinalStatus of SUCCEEDED. Is this a
PySpark issue, or is there some other problem with the job's failure
status not propagating to the logs?
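
For what it's worth, one workaround I've been considering (a rough sketch
only, assuming the final status is derived from the driver process's exit
code, which I haven't confirmed) is to catch failures in the driver script
and exit with a nonzero status explicitly:

    import sys
    from pyspark import SparkContext

    def main():
        sc = SparkContext(appName="my-job")  # app name is just a placeholder
        try:
            # ... actual job logic would go here ...
            sc.parallelize(range(10)).count()
        finally:
            sc.stop()

    if __name__ == "__main__":
        try:
            main()
        except Exception as exc:
            # Make the failure explicit: if the Python process exits 0,
            # YARN has no reason to mark the application as FAILED.
            sys.stderr.write("Job failed: %s\n" % exc)
            sys.exit(1)

I'm not sure whether the application master actually picks that exit code
up in this setup, though, which is part of why I'm asking.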

