Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22288#discussion_r216726755

    --- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
    @@ -623,8 +623,9 @@ private[spark] class TaskSetManager(
        *
        * It is possible that this taskset has become impossible to schedule *anywhere* due to the
        * blacklist. The most common scenario would be if there are fewer executors than
    -   * spark.task.maxFailures. We need to detect this so we can fail the task set, otherwise the job
    -   * will hang.
    +   * spark.task.maxFailures. We need to detect this so we can avoid the job from being hung.
    +   * If dynamic allocation is enabled we try to acquire new executor/s by killing the existing one.
    +   * In case of static allocation we abort the taskSet immediately to fail the job.
    --- End diff --

    why do you want something different with static allocation?  If you kill an executor, static allocation will also request a replacement.
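For readers following the thread, here is a minimal, self-contained Scala sketch of the branching the updated scaladoc describes and the point squito is questioning. Every name below (`Scheduler`, `Executor`, `handleCompletelyBlacklistedTaskSet`, etc.) is an illustrative assumption for this sketch, not Spark's internal API or the actual PR code.

```scala
// Sketch of the behavior the updated scaladoc describes. All names here
// are illustrative stand-ins, not Spark's internal API.
object BlacklistHandlingSketch {

  final case class Executor(id: String)

  // Stand-ins for the scheduler-side actions the doc comment refers to.
  trait Scheduler {
    def killExecutor(e: Executor): Unit      // frees a slot; the cluster manager requests a replacement
    def abortTaskSet(reason: String): Unit   // fails the whole job immediately
  }

  // The proposed branching: under dynamic allocation, kill an executor in
  // the hope that its replacement is not blacklisted; under static
  // allocation, abort the task set right away. The review question is
  // whether the static branch is needed at all, since killing an executor
  // triggers a replacement request under static allocation too.
  def handleCompletelyBlacklistedTaskSet(
      dynamicAllocationEnabled: Boolean,
      executors: Seq[Executor],
      scheduler: Scheduler): Unit = {
    if (dynamicAllocationEnabled) {
      executors.headOption.foreach(scheduler.killExecutor)
    } else {
      scheduler.abortTaskSet(
        "TaskSet cannot run anywhere due to blacklisting")
    }
  }
}
```

If the reviewer's observation holds, the `else` branch is redundant and both allocation modes could share the kill-and-replace path.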