[ https://issues.apache.org/jira/browse/SPARK-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034776#comment-15034776 ]
Marcelo Vanzin commented on SPARK-10911:
----------------------------------------

So, one thing I still haven't understood: why isn't YARN killing the executors? When the AM goes away and there are no retries left, it stops the application, so YARN should clean up all of its containers, no?

I ran a simple test by doing this in the spark shell:

{code}
sc.parallelize(1 to 4).map { i =>
  val sleeper = new Runnable() {
    override def run(): Unit = while (true) {
      Thread.sleep(Integer.MAX_VALUE)
    }
  }
  val t = new Thread(sleeper)
  t.setDaemon(false)
  t.start()
  i
}.collect()
{code}

I verified that the executor was up and running the non-daemon threads. When I exited the shell, the executors went away.

Or is this about executors started by, e.g., attempt 1 of the AM, which then fails, while attempt 2 is running and those old executors are still alive?

I don't think users should rely on anything being shut down correctly in executors (especially since they have no way to do that!), but I want to understand what's causing the underlying issue here.

> Executors should System.exit on clean shutdown
> ----------------------------------------------
>
>                 Key: SPARK-10911
>                 URL: https://issues.apache.org/jira/browse/SPARK-10911
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.5.1
>            Reporter: Thomas Graves
>            Assignee: Zhuo Liu
>            Priority: Minor
>
> Executors should call System.exit on clean shutdown to make sure all user
> threads exit and the JVM shuts down.
>
> We ran into a case where an executor was left around for days trying to
> shut down because the user code was using a non-daemon thread pool and one
> of those threads wasn't exiting. We should force the JVM to go away with
> System.exit.
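For anyone following along, here is a minimal standalone sketch (not from the ticket; the object name NonDaemonHang is made up for illustration) of the JVM behavior under discussion: a non-daemon thread keeps the JVM alive after the main thread returns, which is why an executor that merely finishes its work can hang around, and why an explicit System.exit would force it down regardless.

{code}
object NonDaemonHang {
  def main(args: Array[String]): Unit = {
    val t = new Thread(new Runnable {
      // This thread never finishes on its own.
      override def run(): Unit = Thread.sleep(Long.MaxValue)
    })
    t.setDaemon(false) // non-daemon: the JVM waits for this thread to die
    t.start()
    println("main() returning; the JVM stays alive because of the non-daemon thread")
    // Uncommenting the next line would terminate the JVM immediately,
    // even with live non-daemon threads -- the fix this ticket proposes:
    // System.exit(0)
  }
}
{code}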