[ https://issues.apache.org/jira/browse/SPARK-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14161378#comment-14161378 ]

Andrew Or commented on SPARK-3174:
----------------------------------

@[~tgraves] Replying inline:

bq. Just to make sure, the things in blue with the star in the doc are the 
approaches you are proposing?

Yes.

bq. So how well is the Spark scheduler going to handle adding and waiting for 
executors? Is SPARK-3795 going to address making that better? Meaning, if you 
start a job with a small number of executors, it will be scheduled 
non-optimally and in many cases will cause failures. Hence why we added the 
configs to wait for executors before starting.

Not sure I understand what you mean by this. I would assume that the Spark 
application will start with a reasonable number of executors. This scaling 
feature is mainly concerned with scaling down from the executors you started 
with, but because we may need them again later, we also need a mechanism and a 
heuristic for adding them back. I did not intend for this feature to be used, 
for instance, to bootstrap from a single executor at the beginning.
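
For concreteness, the kind of heuristic I have in mind looks roughly like the 
sketch below. This is illustration only: the class, method names, and 
thresholds are made up here, not the interface proposed in the design doc. The 
idea is simply to request executors when pending tasks build up and to release 
executors that have been idle for a while.

{code:scala}
// Illustration only: names and thresholds are hypothetical, not the proposed API.
class ExecutorAllocator(
    requestExecutors: Int => Unit,    // ask the cluster manager for N more executors
    removeExecutor: String => Unit) { // release a specific executor by ID

  private val addThreshold = 10            // pending tasks before scaling up (arbitrary)
  private val idleTimeoutMs = 60 * 1000L   // idle time before scaling down (arbitrary)
  private val lastTaskFinished = scala.collection.mutable.Map.empty[String, Long]

  /** Called periodically by the scheduler with the current task backlog. */
  def onSchedulerTick(pendingTasks: Int, now: Long): Unit = {
    if (pendingTasks > addThreshold) {
      requestExecutors(pendingTasks / addThreshold)  // grow with the backlog
    }
    lastTaskFinished.foreach { case (execId, finishedAt) =>
      if (now - finishedAt > idleTimeoutMs) {
        removeExecutor(execId)                       // reclaim idle executors
      }
    }
  }

  /** Called when an executor finishes its last running task. */
  def onExecutorIdle(execId: String, now: Long): Unit = {
    lastTaskFinished(execId) = now
  }
}
{code}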

bq. Will the config(s) be allowed to be changed partway through an 
application? For instance, let's say I do some ETL stuff where I want it to be 
dynamic, but then I need to run an ML algorithm or do some heavy caching where 
I want to shut it off.

Yeah, especially for a REPL it would be good to change this across jobs. This 
proposal is a first-cut design and does not incorporate that. I think this is 
a more general issue, though: Spark configurations are intended to be set for 
the entire duration of the application, but many are somewhat specific to each 
job within that application. It would be worthwhile to eventually be able to 
configure this dynamically, but I don't have a great idea of how to expose that 
at the moment.
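
To make the application-level vs. job-level distinction concrete: as proposed, 
a feature like this would be switched on once in the SparkConf at submission 
time, along the lines of the sketch below. The spark.dynamicAllocation.* 
property names are illustrative, not settled, and there is no per-job override.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Set once at application startup; under the current proposal this cannot be
// flipped between jobs within the same application. The property names below
// are illustrative, not final.
val conf = new SparkConf()
  .setAppName("etl-then-ml")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "2")
  .set("spark.dynamicAllocation.maxExecutors", "50")

val sc = new SparkContext(conf)
// The ETL jobs here can scale up and down, but a later ML or caching-heavy
// stage in the same application runs under the same setting, since there is
// no per-job override.
{code}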

bq. Is it also safe to assume this doesn't handle changing the resource 
requirements of the containers? I.e., the executors it starts and stops will 
also be the same size.

Yes, we do not resize the executor's JVM or container (I don't think YARN 
supports this yet). We operate at the granularity of whole, fixed-size 
executors.
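
In other words, only the number of executors changes; the size requested at 
launch stays fixed for the lifetime of the application, e.g. via the usual 
executor sizing properties:

{code:scala}
import org.apache.spark.SparkConf

// Executor size is fixed for the lifetime of the application; elastic scaling
// only varies how many executors of this size are held at any given time.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g")  // every executor container gets 4 GB
  .set("spark.executor.cores", "4")    // and 4 cores; neither can be resized later
{code}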


> Provide elastic scaling within a Spark application
> --------------------------------------------------
>
>                 Key: SPARK-3174
>                 URL: https://issues.apache.org/jira/browse/SPARK-3174
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.2
>            Reporter: Sandy Ryza
>            Assignee: Andrew Or
>         Attachments: SPARK-3174design.pdf, 
> dynamic-scaling-executors-10-6-14.pdf
>
>
> A common complaint with Spark in a multi-tenant environment is that 
> applications have a fixed allocation that doesn't grow and shrink with their 
> resource needs.  We're blocked on YARN-1197 for dynamically changing the 
> resources within executors, but we can still allocate and discard whole 
> executors.
> It would be useful to have some heuristics that
> * Request more executors when many pending tasks are building up
> * Discard executors when they are idle
> See the latest design doc for more information.


