Nong Li created SPARK-12486:
-------------------------------

             Summary: Executors are not always terminated successfully by the worker.
                 Key: SPARK-12486
                 URL: https://issues.apache.org/jira/browse/SPARK-12486
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
            Reporter: Nong Li


There are cases where the worker fails to kill the executor. One way this can 
happen: the executor gets into a bad state, fails to heartbeat, and the master 
tells the worker to kill it, but the executor is in such a bad state that the 
kill request is ignored. This seems to happen when the executor is in heavy GC.

The cause is that the Process.destroy() API is not forceful enough. Java 8 added 
a new API, Process.destroyForcibly(); we should use it when it is available.
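As an illustration only, here is a minimal sketch (the helper name and the 
reflection-based lookup are assumptions, not the actual patch) of how the worker 
could prefer destroyForcibly() on Java 8 while still compiling and running on 
Java 7:

{code:scala}
object ForcefulKill {
  // Sketch: prefer the Java 8 Process.destroyForcibly(), falling back to the
  // ordinary destroy() on older JVMs where that method does not exist.
  def kill(process: Process): Unit = {
    try {
      // destroyForcibly() only exists on java.lang.Process starting with Java 8,
      // so look it up reflectively to stay source/binary compatible with Java 7.
      val destroyForcibly = classOf[Process].getMethod("destroyForcibly")
      destroyForcibly.invoke(process)
    } catch {
      case _: NoSuchMethodException =>
        // Running on Java 7 or earlier: destroy() is the best we can do,
        // even though it may be ignored by a wedged executor.
        process.destroy()
    }
  }
}
{code}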



