Thank you, that looks promising as well.
Marshall
From: Yinan Li
Sent: Sunday, April 5, 2020 3:49 PM
To: Marshall Markham
Cc: user
Subject: Re: spark-submit exit status on k8s
Not sure if you are aware of this new feature in Airflow
https://issues.apache.org/jira/browse/AIRFLOW-6542
Sent: 11:25 AM
To: Marshall Markham; user
Subject: Re: spark-submit exit status on k8s
Another, simpler solution that I just thought of: just add an operation at the
end of your Spark program to write an empty file somewhere, with filename
SUCCESS for example. Add a stage to your AirFlow graph that checks for the
existence of that file to decide whether the job succeeded.
> Is there any discussion of picking up this work in the near future?
>
> Thanks,
>
> Marshall
>
> *From:* Masood Krohy
> *Sent:* Friday, April 3, 2020 9:34 PM
> *To:* Marshall Markham; user
> *Subject:* Re: spark-submit exit status on k8s
> While you wait for a fix on that JIRA ticket, you may be able to add an
> intermediary step in your AirFlow graph, calling Spark's REST API after
> submitting the job, and dig into the actual status of the application,
> and make a success/fail decision accordingly. You can make repeated
> calls in a loop.
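That polling step might be sketched as below. The terminal-state names and the HTTP call itself are assumptions (they depend on whether you poll the driver UI's monitoring endpoints or the operator's status field), so the fetch is injected as a callable and only the repeated-call/decision logic is shown:

```python
# Sketch: after submitting the job, poll an endpoint that reports the
# Spark application's state and turn it into a pass/fail decision.
# fetch_state is injected so the HTTP details stay out of the loop logic.
import time
from typing import Callable


def wait_for_outcome(fetch_state: Callable[[], str],
                     poll_seconds: float = 30.0,
                     max_polls: int = 120) -> bool:
    """Poll until a terminal state: True for COMPLETED, False for a failure
    state, TimeoutError if nothing terminal is seen.  The state strings are
    assumptions about what the chosen API reports."""
    for _ in range(max_polls):
        state = fetch_state()
        if state == "COMPLETED":
            return True
        if state in ("FAILED", "SUBMISSION_FAILED", "UNKNOWN"):
            return False
        time.sleep(poll_seconds)
    raise TimeoutError("application did not reach a terminal state")
```

The intermediary Airflow task would call this and raise on False, which gives the DAG the real outcome regardless of what spark-submit returned.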
Hi Team,

My team recently conducted a POC of Kubernetes/Airflow/Spark with great
success. The major concern we have about this system, after the completion of
our POC, is a behavior of spark-submit: when called with a Kubernetes API
endpoint as master, spark-submit seems to always return exit code 0, even
when the Spark application itself fails.
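For reference, one common workaround for that exit-status behavior, sketched under the assumption that the Airflow worker has kubectl access and knows the driver pod's name (for instance by setting spark.kubernetes.driver.pod.name at submit time), is to decide from the driver pod's terminal phase rather than from spark-submit's exit code:

```python
# Sketch: since spark-submit's exit code can't be trusted in cluster mode
# on k8s, query the driver pod's phase from Kubernetes and decide on that.
import subprocess


def driver_pod_phase(pod_name: str, namespace: str = "default") -> str:
    """Ask kubectl for the pod phase ("Succeeded", "Failed", "Running", ...)."""
    result = subprocess.run(
        ["kubectl", "get", "pod", pod_name, "-n", namespace,
         "-o", "jsonpath={.status.phase}"],
        check=True, capture_output=True, text=True)
    return result.stdout.strip()


def exit_code_for_phase(phase: str) -> int:
    """Map the pod phase to the exit code spark-submit should have returned:
    0 only when the driver actually succeeded, 1 otherwise."""
    return 0 if phase == "Succeeded" else 1
```

A wrapper script around spark-submit can then exit with `exit_code_for_phase(...)`, which Airflow's BashOperator would pick up correctly.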