Hello All,

I have several Spark Streaming applications running in Standalone mode on
Spark 1.5, with Spark set up for dynamic resource allocation.  I typically
have about 12 Spark Streaming jobs running concurrently.  Occasionally
more than half of them fail with "Stage cancelled because SparkContext was
shut down"; they then restart automatically, since the jobs run in
supervised mode.  Attached is a screenshot of one of the jobs that failed.
Does anyone have any insight into what is going on?
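
For reference, here is roughly how each job is set up.  This is a minimal
sketch; the app name, batch interval, and processing logic are
placeholders, not my actual values:

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object StreamingJob {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf()
        .setAppName("streaming-job")  // placeholder name
        // Dynamic allocation, as mentioned above; on Standalone mode this
        // also requires the external shuffle service on each worker.
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true")

      val ssc = new StreamingContext(conf, Seconds(10))  // placeholder interval
      // ... input DStreams and transformations go here ...
      ssc.start()
      ssc.awaitTermination()
    }
  }

The automatic restart comes from submitting each driver in Standalone
cluster deploy mode with the --supervise flag on spark-submit.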