[ https://issues.apache.org/jira/browse/SPARK-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14265796#comment-14265796 ]

Timothy Chen commented on SPARK-5095:
-------------------------------------

I think instead of configuring the number of executors to launch per slave, 
it's more ideal to configure the amount of cpu/mem per executor.
My current thought for the implementation is to introduce two more configs:
spark.mesos.coarse.executors.max <-- the maximum number of executors launched 
per slave, applies to coarse grained mode
spark.mesos.coarse.cores.max <-- the maximum number of cpus to use per executor

Memory is already configurable through spark.executor.memory.

With these, you can choose to launch two executors per slave by setting the max 
executors to two and capping the max cpus per executor to half the slave's cores.
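For illustration, a rough sketch of how this might look on the command line, 
assuming the proposed config names above are adopted as-is (they are only 
proposals in this comment, not existing Spark settings); the master URL, memory 
value, and application jar are placeholders:

  # Hypothetical example: a slave offering 16 cores and 16gb,
  # split into two 8-core / 8gb coarse grained executors.
  spark-submit \
    --master mesos://zk://zk1:2181,zk2:2181/mesos \
    --conf spark.mesos.coarse.executors.max=2 \
    --conf spark.mesos.coarse.cores.max=8 \
    --conf spark.executor.memory=8g \
    --class com.example.MyApp my-app.jar

Memory per executor would still come from the existing spark.executor.memory 
setting, so only the two new configs would need to be added.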

These configurations can also fix SPARK-4940.

> Support launching multiple mesos executors in coarse grained mesos mode
> -----------------------------------------------------------------------
>
>                 Key: SPARK-5095
>                 URL: https://issues.apache.org/jira/browse/SPARK-5095
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>
> Currently in coarse grained mesos mode, it's expected that we only launch one 
> Mesos executor per slave, which launches one JVM process serving as a single 
> Spark executor.
> However, this becomes a problem when the launched JVM process is larger than 
> the ideal size (30gb is the recommended value from Databricks), which causes 
> the GC problems reported on the mailing list.
> We should support launching multiple executors when large enough resources 
> are available for Spark to use, and those resources are still under the 
> configured limit.
> This is also applicable when users want to specify the number of executors to 
> be launched on each node.



