Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/4051#issuecomment-71401367
  
    @sryza Is the point of not requiring these configs that users don't 
really know how many executors they actually want? I'm OK with not requiring a 
min (and defaulting it to at least 1, but not 0). On YARN, however, users already 
have to specify the number of executors even without dynamic allocation, so I 
think it's fine to require the user to set the max. Also, defaulting the max to 
`Int.MaxValue` doesn't make much sense in my opinion, because the whole point of 
doing this is to share resources in the cluster.
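
    For concreteness, here's a rough sketch of the kind of explicit min/max setup 
being discussed (assuming the `spark.dynamicAllocation.*` property names; the 
values are illustrative only):

        import org.apache.spark.{SparkConf, SparkContext}

        // Illustrative values only: a min of at least 1 (not 0) and a
        // user-chosen max instead of defaulting to Int.MaxValue, so the
        // application doesn't hog the whole cluster.
        val conf = new SparkConf()
          .setAppName("dynamic-allocation-example")
          .set("spark.dynamicAllocation.enabled", "true")
          .set("spark.dynamicAllocation.minExecutors", "1")
          .set("spark.dynamicAllocation.maxExecutors", "20")
        val sc = new SparkContext(conf)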
    
    @lianhuiwang @ksakellis The initial scale-up delay is why I originally 
set the starting number of executors to the max. I understand that introducing an 
additional config may be confusing since we already have `--num-executors` and 
`spark.executor.instances`. We have two options here: (1) introduce something 
like `spark.dynamicAllocation.initialNumExecutors`, or (2) use the existing 
`--num-executors` / `spark.executor.instances` setting as the starting point. 
I personally prefer (2), even though it might confuse existing users, because in 
Spark 1.2.0 we actually *disallow* setting `--num-executors` when dynamic 
allocation is enabled. I still think (2) is the best option here, though.
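
    To make the two options concrete, a rough sketch of how the allocation 
manager could pick its starting point (`initialNumExecutors` is only the name 
proposed above, not an existing setting; falling back to the max matches the 
current starting behavior):

        import org.apache.spark.SparkConf

        // Sketch only: choose the initial executor count for dynamic allocation.
        def initialExecutors(conf: SparkConf, maxExecutors: Int): Int = {
          // Option (1): a dedicated config, e.g. the proposed
          // spark.dynamicAllocation.initialNumExecutors (hypothetical name):
          // conf.getInt("spark.dynamicAllocation.initialNumExecutors", maxExecutors)

          // Option (2): reuse the existing --num-executors /
          // spark.executor.instances value, falling back to the max as before.
          conf.getInt("spark.executor.instances", maxExecutors)
        }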

