[ https://issues.apache.org/jira/browse/SPARK-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sandy Ryza updated SPARK-4585:
------------------------------
    Summary: Spark dynamic executor allocation shouldn't use maxExecutors as initial number  (was: Spark dynamic executor allocation maxExecutors as initial number)

> Spark dynamic executor allocation shouldn't use maxExecutors as initial number
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-4585
>                 URL: https://issues.apache.org/jira/browse/SPARK-4585
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Chengxiang Li
>
> With SPARK-3174, one can configure a minimum and maximum number of executors
> for a Spark application on YARN. However, the application always starts with
> the maximum. It seems more reasonable, at least for Hive on Spark, to start
> from the minimum and scale up as needed up to the maximum.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
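For context, the dynamic allocation settings introduced by SPARK-3174 are ordinary Spark configuration properties, set in spark-defaults.conf or via --conf on spark-submit. A minimal sketch of the relevant configuration is below; spark.dynamicAllocation.enabled, minExecutors, and maxExecutors are the documented properties, while spark.dynamicAllocation.initialExecutors is shown only as an illustration of the starting-size knob this ticket asks for and may not exist in the Spark version named above:

```properties
# Enable dynamic executor allocation on YARN (requires the external
# shuffle service to be enabled as well in later releases).
spark.dynamicAllocation.enabled         true

# Bounds on the executor count the application may scale between.
spark.dynamicAllocation.minExecutors    2
spark.dynamicAllocation.maxExecutors    50

# Hypothetical starting size proposed by this issue: begin near the
# minimum and grow toward the maximum as load demands, rather than
# launching all maxExecutors up front.
spark.dynamicAllocation.initialExecutors 2
```

With a configuration like this, an idle application holds only a couple of executors, and the allocation manager requests more from YARN as tasks queue up, rather than reserving the full maximum at startup.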