Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20604#discussion_r170103841

    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala ---
    @@ -55,18 +55,18 @@ private[spark] trait ExecutorAllocationClient {
       /**
        * Request that the cluster manager kill the specified executors.
        *
    -   * When asking the executor to be replaced, the executor loss is considered a failure, and
    -   * killed tasks that are running on the executor will count towards the failure limits. If no
    -   * replacement is being requested, then the tasks will not count towards the limit.
    -   *
        * @param executorIds identifiers of executors to kill
    -   * @param replace whether to replace the killed executors with new ones, default false
    +   * @param adjustTargetNumExecutors whether the target number of executors will be adjusted down
    +   *                                 after these executors have been killed
    +   * @param countFailures if there are tasks running on the executors when they are killed, whether
    --- End diff --

I'm still a little confused about this parameter. If `force = false`, it's a no-op, and all the call sites I've seen set it to `false`. Is there something I'm missing?
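To spell out the interaction the comment is pointing at, here is a minimal, self-contained sketch. The object name, the helpers, and the in-memory state are all invented for illustration; this is not Spark's actual `CoarseGrainedSchedulerBackend` logic. The point it demonstrates: if `force = false`, executors with running tasks are filtered out of the kill set up front, so the `countFailures` branch can never fire.

```scala
// Hypothetical sketch, not Spark's implementation: why `countFailures`
// only matters when `force = true`.
object KillExecutorsSketch {
  // Stub state: number of tasks currently running on each executor.
  private val runningTasks: Map[String, Int] = Map("exec-1" -> 3, "exec-2" -> 0)

  private def hasRunningTasks(id: String): Boolean =
    runningTasks.getOrElse(id, 0) > 0

  def killExecutors(
      executorIds: Seq[String],
      adjustTargetNumExecutors: Boolean,
      countFailures: Boolean,
      force: Boolean): Seq[String] = {
    // With force = false, busy executors never make it into the kill set,
    // so the countFailures check below is unreachable for them.
    val toKill = if (force) executorIds else executorIds.filterNot(hasRunningTasks)
    toKill.foreach { id =>
      if (countFailures && hasRunningTasks(id)) {
        // Only reachable for forced kills of busy executors.
        println(s"counting ${runningTasks(id)} interrupted tasks on $id toward the failure limit")
      }
      println(s"killing $id (adjustTargetNumExecutors=$adjustTargetNumExecutors)")
    }
    toKill
  }

  def main(args: Array[String]): Unit = {
    // force = false: exec-1 (busy) survives, so countFailures = true is moot.
    killExecutors(Seq("exec-1", "exec-2"),
      adjustTargetNumExecutors = true, countFailures = true, force = false)
  }
}
```

Under that reading, `countFailures` would only be meaningful at call sites that also pass `force = true`, which matches the observation that every call site passing `false` makes it a no-op.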