[ https://issues.apache.org/jira/browse/SPARK-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644809#comment-14644809 ]
Andrew Or commented on SPARK-9092:
----------------------------------

To add to [~sandyryza]'s comment, I think there's actually an argument both ways:

- Dynamic allocation is not fundamental to Spark and is really an opt-in feature. If the user explicitly set `spark.dynamicAllocation.enabled`, it is likely that they actually intended to use it. If they accidentally set `--num-executors` as well and we don't even log a warning, then they may mistakenly believe that they're sharing the cluster efficiently when they are not.

- On the flip side, there is otherwise no way to enable dynamic allocation by default in a cluster-wide setting. In fact, this precludes us from ever making dynamic allocation the default in Spark itself, even in the distant future (though I don't personally see this happening any time soon).

Because of the latter, I do agree we should have `--num-executors` override dynamic allocation, but not because it's fundamentally more correct. No strong opinion one way or the other.

> Make --num-executors compatible with dynamic allocation
> --------------------------------------------------------
>
>                 Key: SPARK-9092
>                 URL: https://issues.apache.org/jira/browse/SPARK-9092
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.2.0
>            Reporter: Niranjan Padmanabhan
>
> Currently, when you enable dynamic allocation, you can't use --num-executors
> or the property spark.executor.instances. If we are to enable dynamic
> allocation by default, we should make these work so that existing workloads
> don't fail.
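To make the proposed precedence concrete, here is a minimal sketch. The helper name `dynamicAllocationEnabled` and the warning text are hypothetical illustrations, not Spark's actual code path; it only assumes the two configuration keys discussed above:

```scala
import org.apache.spark.SparkConf

// Minimal sketch (not Spark's actual implementation) of the precedence
// discussed above: an explicit executor count wins over dynamic allocation,
// but we log a warning instead of silently ignoring the user's intent.
object AllocationPrecedence {
  def dynamicAllocationEnabled(conf: SparkConf): Boolean = {
    val requested = conf.getBoolean("spark.dynamicAllocation.enabled", defaultValue = false)
    val explicitCount = conf.contains("spark.executor.instances")
    if (requested && explicitCount) {
      // Both were set: let --num-executors / spark.executor.instances win,
      // and warn rather than failing the application.
      System.err.println(
        "WARN: spark.executor.instances is set; ignoring spark.dynamicAllocation.enabled.")
      false
    } else {
      requested
    }
  }
}
```

Under this sketch, submitting with both `--num-executors 10` and `spark.dynamicAllocation.enabled=true` would run with a fixed 10 executors plus a warning, rather than failing as the issue describes.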