So, I tried to use SparkAppHandle.Listener with SparkLauncher as you suggested, but the launcher's behavior is not what I expected.
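For reference, here is a minimal sketch of how I wired up the listener (the Spark home, master URL, jar path and main class are placeholders for my actual values):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LauncherStateDemo {
        public static void main(String[] args) throws Exception {
            SparkAppHandle handle = new SparkLauncher()
                .setSparkHome("/opt/spark")               // placeholder
                .setMaster("spark://master-host:7077")    // standalone master
                .setDeployMode("cluster")
                .setAppResource("/path/to/my-app.jar")    // placeholder
                .setMainClass("com.example.MyApp")        // placeholder
                .startApplication(new SparkAppHandle.Listener() {
                    @Override
                    public void stateChanged(SparkAppHandle h) {
                        System.out.println("state -> " + h.getState());
                    }
                    @Override
                    public void infoChanged(SparkAppHandle h) {
                        System.out.println("appId -> " + h.getAppId());
                    }
                });

            // Block until the application reaches a terminal state.
            while (!handle.getState().isFinal()) {
                Thread.sleep(1000);
            }
        }
    }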
1- If I start the job (using SparkLauncher) and my Spark cluster has enough cores available, I receive events in my class extending SparkAppHandle.Listener and I see the state change UNKNOWN -> CONNECTED -> SUBMITTED -> RUNNING. All good here.

2- If my Spark cluster has cores only for the driver process (running in cluster mode) but none for my executors, I still receive the RUNNING event. I expected something else: since the executors have no cores and the Master UI shows them as WAITING, the listener should report SUBMITTED instead of RUNNING.

3- If my Spark cluster has no cores even for the driver process, SparkLauncher fires no events at all and the state stays UNKNOWN. I would have expected it to reach at least SUBMITTED.

*Is there any way I can reliably get the WAITING state of the job?* What I would expect is:

Driver=RUNNING, executor=RUNNING -> overall state RUNNING
Driver=RUNNING, executor=WAITING -> overall state SUBMITTED/WAITING
Driver=WAITING, executor=WAITING -> overall state CONNECTED/SUBMITTED/WAITING
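The only workaround I can think of is to poll the standalone master's status page myself and cross-check the application state there. A rough sketch (this assumes the master web UI serves a JSON view at /json/, which it does on the versions I have looked at; the exact field layout is an assumption, and real code should use a proper JSON parser instead of a string search):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MasterStatePoller {
        // Fetch the standalone master's JSON status page.
        static String fetchMasterJson(String masterUiUrl) throws Exception {
            URL url = new URL(masterUiUrl + "/json/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000);
            StringBuilder sb = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) sb.append(line);
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // Crude check: is my app id listed with state WAITING?
            // The app id would come from handle.getAppId() via infoChanged().
            String json = fetchMasterJson("http://master-host:8080"); // placeholder URL
            String appId = "app-20180101000000-0001";                 // placeholder id
            int idx = json.indexOf(appId);
            if (idx >= 0 && json.indexOf("\"WAITING\"", idx) >= 0) {
                System.out.println(appId + " appears to be WAITING on the master");
            }
        }
    }

That still doesn't cover scenario 2 (driver RUNNING, executors starved), though; for that I suppose I would have to query the driver's monitoring REST API for the executor list once the application is up.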