Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14926#discussion_r77426133

    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -392,10 +397,36 @@ private[spark] class ExecutorAllocationManager(
       }

       /**
    -   * Request the cluster manager to remove the given executor.
    +   * Request the cluster manager to remove the given executors.
        * Return whether the request is received.
        */
    -  private def removeExecutor(executorId: String): Boolean = synchronized {
    +  private def removeExecutors(executorIds: Seq[String]): Boolean = synchronized {
    +
    +    val executorIdsToBeRemoved = executorIds.filter(canBeKilled)
    +
    +    // Send a request to the backend to kill this executor
    +    val removeRequestAcknowledged = testing || client.killExecutors(executorIdsToBeRemoved)
    --- End diff --

    Ah, this is something I've already asked for in the past... `killExecutors` should really return something more interesting than a boolean, because `CoarseGrainedSchedulerBackend` *will* ignore executors it doesn't want to kill.

    The problem is that, through an unfortunate inheritance chain, `SparkContext` actually extends `ExecutorAllocationClient`, and thus `killExecutors` is a public API. I think we should fix this so that:

    - `SparkContext` still exposes the existing `killExecutors`, but doesn't override `ExecutorAllocationClient`
    - `ExecutorAllocationClient` defines the proper interface, which ideally would return the list of executors actually removed, so that the rest of the code here can do the right thing

    Otherwise, there's no way to do what this patch proposes without breaking all the accounting, unfortunately.
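    For illustration only, a rough sketch of the interface shape being suggested above. This is not actual Spark code: the class name, the `canBeKilled` stand-in, and the `executorsPendingToRemove` bookkeeping are simplified placeholders. The point is that `killExecutors` reports which executors the backend actually agreed to remove, so the caller's accounting tracks reality rather than a bare Boolean.

        // Hypothetical sketch -- names and details are simplified, not the real Spark API.
        import scala.collection.mutable

        trait ExecutorAllocationClient {
          /** Request the cluster manager to kill the given executors.
            * @return the ids of the executors the backend actually agreed to remove
            *         (possibly a subset of the request). */
          def killExecutors(executorIds: Seq[String]): Seq[String]
        }

        class AllocationManagerSketch(client: ExecutorAllocationClient) {
          private val executorsPendingToRemove = mutable.Set[String]()

          // Stand-in for the real eligibility check (idle, not already pending, etc.).
          private def canBeKilled(executorId: String): Boolean =
            !executorsPendingToRemove.contains(executorId)

          /** Returns the executors that were actually scheduled for removal. */
          def removeExecutors(executorIds: Seq[String]): Seq[String] = synchronized {
            val toRemove = executorIds.filter(canBeKilled)
            // The backend may silently drop some of these; trust only what it reports back.
            val actuallyRemoved = client.killExecutors(toRemove)
            executorsPendingToRemove ++= actuallyRemoved
            actuallyRemoved
          }
        }

    With this shape, `SparkContext` could keep exposing its existing Boolean-returning `killExecutors` for users without overriding the trait method, as suggested in the first bullet above.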