Mariusz Dubielecki created SPARK-22235:
------------------------------------------

             Summary: Can not kill job gracefully in spark standalone cluster
                 Key: SPARK-22235
                 URL: https://issues.apache.org/jira/browse/SPARK-22235
             Project: Spark
          Issue Type: Bug
          Components: Spark Submit
    Affects Versions: 2.1.0
         Environment: Spark standalone cluster
            Reporter: Mariusz Dubielecki


There is a problem with killing streaming jobs gracefully in Spark 2.1.0 with 
spark.streaming.stopGracefullyOnShutdown enabled. I have tested killing Spark 
jobs in several ways; my conclusions follow after the submission sketch below.
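For context, a minimal sketch of the kind of submission being discussed, with the 
property enabled (the master host, class name and jar are placeholders, not taken 
from the actual job):

{code}
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --supervise \
  --conf spark.streaming.stopGracefullyOnShutdown=true \
  --class com.example.StreamingJob \
  streaming-job.jar
{code}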

# With the command spark-submit --master spark:// --kill driver-id
       It kills the job almost immediately, not gracefully (see the command 
sketch after this list).
# With the REST API: curl -X POST http://localhost:6066/v1/submissions/kill/driverId
       Same result as in 1. (I looked at the spark-submit code, and it appears 
the tool simply calls this REST endpoint.)
# With a Unix kill of the driver process
       It does not kill the job at all (the driver is immediately restarted).
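Concretely, methods 1 and 2 were invoked roughly as follows (master hostname and 
driver id are placeholders):

{code}
DRIVER_ID=driver-20171009000000-0000   # placeholder driver id
MASTER_HOST=master-host                # placeholder standalone master hostname

# Method 1: kill via spark-submit (goes through the master's REST submission server)
spark-submit --master spark://$MASTER_HOST:6066 --kill $DRIVER_ID

# Method 2: call the standalone master's REST API directly (default REST port 6066)
curl -X POST http://$MASTER_HOST:6066/v1/submissions/kill/$DRIVER_ID
{code}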

Then I noticed that I had used the --supervise flag, so I repeated all of these 
tests without it. It turned out that methods 1 and 2 behaved the same as before, 
but method 3 worked as I expected: after the driver process is killed, Spark 
drains all remaining messages from Kafka and then shuts the job down gracefully. 
This is a workaround, but it is quite inconvenient, since I have to track down 
the machine running the driver instead of simply using the Spark REST endpoint. 
The second downside is that I cannot use the --supervise flag, so whenever the 
node running the Spark driver fails, the job stops.
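For completeness, the workaround from method 3 amounts to something like the 
following on the node that runs the driver (the DriverWrapper process-name 
pattern is an assumption about how standalone cluster mode launches drivers):

{code}
# On the worker node hosting the driver: find the driver JVM and send SIGTERM,
# which lets the stopGracefullyOnShutdown shutdown hook drain the remaining batches.
driver_pid=$(jps -lm | grep org.apache.spark.deploy.worker.DriverWrapper | awk '{print $1}')
kill -TERM "$driver_pid"
{code}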

I also noticed that killing a streaming job with spark-submit does not mark the 
app as completed in the Spark History Server.



