Regarding the 'spark.executor.cores' config option in a standalone Spark
environment, I'm curious whether there's a way to enforce the
following logic:

- Max cores per executor = 4
- Max executors PER application PER worker = 1

To force better balance across all workers, I want to ensure that a
single Spark job can only ever use a fixed upper limit on the number of
cores per executor, but I don't want a situation where it can spawn three
executors on one worker and only one or two on the others. Some Spark
jobs end up using much more memory during aggregation tasks (joins /
groupBys), and that memory pressure is heavily affected by the number of
cores per executor for that job.
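
For reference, the closest I can get with the existing standalone configs
is something like the following (values are illustrative, and as I
understand it spark.deploy.spreadOut is set on the master, not per
application). This caps cores per executor and per application, but still
doesn't guarantee at most one executor per application per worker:

  # spark-defaults.conf -- illustrative values
  spark.executor.cores    4
  spark.cores.max         16
  # master-side setting (default true): spread executors across workers
  spark.deploy.spreadOut  true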

If this kind of configuration doesn't already exist in Spark, and others
see the benefit of what I'm describing, where would be the best place to
insert this logic?
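
My guess is it would belong in the standalone master's executor
allocation, but I may be wrong about that. To make the idea concrete,
here is a rough Scala sketch of the worker-filtering rule I have in mind;
all the names below (WorkerSlot, usableWorkers, the limits) are made up
for illustration and are not actual Master internals:

  // Filter out workers where this application already has its maximum
  // number of executors, or which lack enough free cores for one more
  // executor at the per-executor core cap.
  case class WorkerSlot(workerId: String, freeCores: Int)

  def usableWorkers(
      workers: Seq[WorkerSlot],
      executorsPerWorker: Map[String, Int],  // workerId -> executors this app already has there
      maxExecutorsPerAppPerWorker: Int = 1,
      maxCoresPerExecutor: Int = 4): Seq[WorkerSlot] = {
    workers.filter { w =>
      executorsPerWorker.getOrElse(w.workerId, 0) < maxExecutorsPerAppPerWorker &&
        w.freeCores >= maxCoresPerExecutor
    }
  }

  // Example: with one executor already on worker-1, only worker-2 is still usable.
  // usableWorkers(Seq(WorkerSlot("worker-1", 8), WorkerSlot("worker-2", 8)),
  //               Map("worker-1" -> 1))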

Mark.


