[ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171024#comment-14171024 ]
Sandy Ryza commented on SPARK-3174:
-----------------------------------

bq. If I understand correctly, your concern with requesting executors in rounds is that we will end up making many requests if we have many executors, when we could instead just batch them?

My original claim was that a policy aware of the number of tasks needed by a stage would be necessary to get enough executors quickly when going from idle to running a job. Your claim was that the exponential policy can be tuned to give behavior similar enough to the type of policy I'm describing. My concern was that, even if that is true, tuning the exponential policy in this way would be detrimental at other points in the application lifecycle: e.g., starting the next stage within a job could lead to over-allocation.

> Provide elastic scaling within a Spark application
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf, SparkElasticScalingDesignB.pdf, dynamic-scaling-executors-10-6-14.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that applications have a fixed allocation that doesn't grow and shrink with their resource needs. We're blocked on YARN-1197 for dynamically changing the resources within executors, but we can still allocate and discard whole executors.
> It would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Discard executors when they are idle
> See the latest design doc for more information.
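To make the trade-off above concrete, here is a minimal Scala sketch. It is not taken from Spark or from any of the attached design docs; all names, parameters, and numbers (e.g. {{tasksPerExecutor}}, the 100-executor job) are illustrative assumptions. It contrasts a pure exponential ramp-up with a demand-aware cap, showing both the slow-start concern when going from idle to a large job and the over-allocation concern when a small follow-on stage starts.

{code:scala}
// Hypothetical sketch only; not the actual Spark dynamic-allocation code.
object ExecutorRequestPolicies {

  // Exponential ramp-up: each round, ask for double the previous request,
  // regardless of how many tasks are actually pending.
  def exponentialRequest(previousRequest: Int): Int =
    math.max(1, previousRequest * 2)

  // Demand-aware request: cap the ask by the number of executors the
  // pending tasks could actually use.
  def demandAwareRequest(previousRequest: Int, pendingTasks: Int, tasksPerExecutor: Int): Int = {
    val needed = (pendingTasks + tasksPerExecutor - 1) / tasksPerExecutor  // ceiling division
    math.min(math.max(1, previousRequest * 2), needed)
  }

  def main(args: Array[String]): Unit = {
    // Going from idle to a large job: the exponential policy takes several
    // rounds to reach the needed count -- the slow-start concern.
    val neededExecutors = 100
    var current = 0
    var rounds = 0
    while (current < neededExecutors) {
      current = exponentialRequest(current)
      rounds += 1
    }
    println(s"Exponential policy reached $current executors in $rounds rounds")

    // A small follow-on stage: an aggressively tuned exponential policy keeps
    // doubling past what the stage can use, while the demand-aware cap stops
    // at the stage's actual need -- the over-allocation concern.
    val ask = demandAwareRequest(previousRequest = 32, pendingTasks = 12, tasksPerExecutor = 4)
    println(s"Demand-aware ask for a 12-task stage: $ask (vs. 64 from pure doubling)")
  }
}
{code}

Tuning the doubling to close the slow-start gap faster only widens the gap on the small-stage side, which is the lifecycle tension described above.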