Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3082#discussion_r19779445

--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -124,6 +126,22 @@ private[spark] class ExecutorAllocationManager(sc: SparkContext) extends Logging
       throw new SparkException(s"spark.dynamicAllocation.minExecutors ($minNumExecutors) must " +
         s"be less than or equal to spark.dynamicAllocation.maxExecutors ($maxNumExecutors)!")
     }
+    // Verify that timeouts are positive
+    if (schedulerBacklogTimeout <= 0) {
+      throw new SparkException(s"spark.dynamicAllocation.schedulerBacklogTimeout must be > 0!")
+    }
+    if (sustainedSchedulerBacklogTimeout <= 0) {
+      throw new SparkException(
+        s"spark.dynamicAllocation.sustainedSchedulerBacklogTimeout must be > 0!")
+    }
+    if (executorIdleTimeout <= 0) {
+      throw new SparkException(s"spark.dynamicAllocation.executorIdleTimeout must be > 0!")
+    }
+    // Verify that external shuffle service is enabled
+    if (!conf.getBoolean("spark.shuffle.service.enabled", false)) {
+      throw new SparkException(s"Dynamic allocation of executors requires the external " +
+        s"shuffle service. You may enable this through spark.shuffle.service.enabled.")
--- End diff --

At this point, is there any sense in having two separate settings that need to be set in tandem? Couldn't we just base everything on whether dynamic executor allocation is enabled or not?
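A rough sketch of what that suggestion could look like, treating spark.dynamicAllocation.enabled as the single switch and implying the shuffle service from it rather than validating a second flag. The helper below is hypothetical, not Spark's actual behavior at the time of this review:

    import org.apache.spark.SparkConf

    // Hypothetical sketch of the suggestion above: if dynamic allocation
    // is the single source of truth, the external shuffle service can be
    // enabled implicitly instead of asking users to set two flags in tandem.
    object DynamicAllocationSettings {
      def resolve(conf: SparkConf): Unit = {
        if (conf.getBoolean("spark.dynamicAllocation.enabled", false)) {
          // Dynamic allocation requires the external shuffle service,
          // so turn it on here rather than throwing a SparkException.
          conf.set("spark.shuffle.service.enabled", "true")
        }
      }
    }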