I had a similar question in the past and worked around it by having my
spark-submit application register with my master application in order to
coordinate the kill and/or the progress of execution. This is a bit kludgy,
I suppose, in comparison to the REST-like API available in the Spark
standalone cluster.
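For reference, a minimal sketch of the REST route mentioned above: the standalone master exposes a submission REST server (by default on port 6066 when `spark.master.rest.enabled` is true), and a driver can be killed with a POST to its kill endpoint. The host name and driver ID below are hypothetical placeholders.

```python
# Sketch: killing a driver via the standalone master's REST submission API,
# as an alternative to coordinating the kill through a controller app.
# Assumes the master's REST server is enabled (default port 6066) and that
# you know the submission/driver ID (e.g. from the master web UI).
from urllib.request import Request, urlopen

def kill_submission_url(master_host, submission_id, port=6066):
    # Endpoint shape used by `spark-submit --kill` against a standalone master.
    return f"http://{master_host}:{port}/v1/submissions/kill/{submission_id}"

def kill_driver(master_host, submission_id):
    req = Request(kill_submission_url(master_host, submission_id), method="POST")
    with urlopen(req) as resp:  # master replies with a JSON status response
        return resp.read().decode()

# Example (hypothetical host and driver ID):
# kill_driver("spark-master.example.com", "driver-20170101000000-0001")
```

This only works against a standalone master; on YARN the equivalent would be `yarn application -kill <appId>`.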



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Kill-Spark-Application-programmatically-in-Spark-standalone-cluster-tp29113p29116.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
