Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19041#discussion_r138760634

    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala ---
    @@ -88,6 +89,12 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
       @GuardedBy("CoarseGrainedSchedulerBackend.this")
       private val executorsPendingToRemove = new HashMap[String, Boolean]

    +  // Mark executors that we will request to kill in the near future.
    +  // This is different from executors in executorsPendingToRemove, which have already been
    +  // asked to be killed.
    +  @GuardedBy("CoarseGrainedSchedulerBackend.this")
    +  private val executorsToBeKilled = mutable.Set.empty[String]
    --- End diff --

Instead of introducing yet another state for executors, would it be possible to re-use the `disableExecutor` logic for this? That sounds like it mostly maps to what you want: an executor that is no longer available for scheduling but has not yet been fully decommissioned.
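To make the state-tracking concern concrete, here is a minimal, self-contained sketch of the pattern under discussion. This is not Spark's actual implementation: only `executorsPendingToRemove` and `executorsToBeKilled` come from the diff; `ToyBackend`, `markForKill`, `requestKill`, and `isSchedulable` are hypothetical names invented for illustration, and the `disabled` flag stands in for what a `disableExecutor`-style approach might track instead of a separate set.

```scala
import scala.collection.mutable

// Toy model of a scheduler backend tracking executor lifecycle state.
// All names except executorsPendingToRemove / executorsToBeKilled are
// hypothetical; this only illustrates the two-state vs. one-flag tradeoff.
class ToyBackend {
  // Executors we have already asked the cluster manager to kill
  // (value: whether a replacement was requested), as in the diff.
  private val executorsPendingToRemove = mutable.HashMap.empty[String, Boolean]

  // Extra state introduced by the diff: executors we *intend* to kill soon.
  private val executorsToBeKilled = mutable.Set.empty[String]

  // Alternative suggested in the review: a single "disabled" flag, so
  // scheduling checks consult one piece of state instead of two.
  private val disabled = mutable.Set.empty[String]

  def markForKill(id: String): Unit = synchronized {
    executorsToBeKilled += id
    disabled += id // stop scheduling on it immediately
  }

  def requestKill(id: String, replace: Boolean): Unit = synchronized {
    executorsToBeKilled -= id
    executorsPendingToRemove(id) = replace
  }

  def isSchedulable(id: String): Boolean = synchronized {
    !disabled.contains(id) && !executorsPendingToRemove.contains(id)
  }
}
```

The point of the sketch: once an executor is marked, every scheduling decision must consult the new set as well as the existing ones, which is why folding "about to be killed" into an already-consulted "disabled" state can simplify the invariants.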