You can look into the SparkListener interface to get notified of some of those
events. Losing the master, though, is pretty fatal to all apps.
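For example, something along these lines (a rough sketch assuming Spark 2.x,
where SparkListener is an abstract class you can extend from Java; the
handleContextStopped method is just a placeholder for whatever you want to do):

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerApplicationEnd;

public class ContextStopWatcher {

    // Register a listener on the underlying SparkContext so the driver hears
    // about the application ending (e.g. the master marking it FAILED and the
    // context being stopped).
    public static void watch(JavaSparkContext jsc) {
        jsc.sc().addSparkListener(new SparkListener() {
            @Override
            public void onApplicationEnd(SparkListenerApplicationEnd end) {
                // Called when the application (and hence the context) shuts
                // down; stop submitting jobs here instead of waiting to hit
                // "Cannot call methods on a stopped SparkContext".
                handleContextStopped(end.time());
            }
        });
    }

    // Placeholder: flip a flag, alert, or trigger re-acquiring a context.
    private static void handleContextStopped(long stopTimeMillis) {
        System.err.println("SparkContext stopped at " + stopTimeMillis);
    }
}

As a cheaper guard you can also check jsc.sc().isStopped() before submitting
each request, but the listener gives you the notification up front.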

On Mon, Sep 5, 2016 at 7:30 AM, Hough, Stephen C <stephenc.ho...@sc.com> wrote:
> I have a long-running application, configured to be HA, whereby only the
> designated leader acquires a JavaSparkContext, listens for requests, and
> pushes jobs onto this context.
>
>
>
> The problem I have is that whenever my AWS instances running workers die
> (either a time-to-live expires or I cancel those instances), Spark seems to
> blame my driver; I see the following in the logs.
>
>
>
> org.apache.spark.SparkException: Exiting due to error from cluster
> scheduler: Master removed our application: FAILED
>
>
>
> However, my application doesn’t get a notification, so it thinks everything
> is okay until it receives another request, tries to submit to the context,
> and gets a
>
>
>
> java.lang.IllegalStateException: Cannot call methods on a stopped
> SparkContext.
>
>
>
> Is there a way I can observe when the JavaSparkContext I own is stopped?
>
>
>
> Thanks
> Stephen
>
>
