[ https://issues.apache.org/jira/browse/SPARK-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14269331#comment-14269331 ]

Josh Devins commented on SPARK-5095:
------------------------------------

Do you really need `spark.mesos.coarse.executors.max`? I suppose it helps with 
SPARK-4940 by balancing executors around the cluster. Otherwise, 
`spark.mesos.coarse.cores.max` alone is sufficient to calculate the number of 
executors to launch.

e.g.
num executors = spark.cores.max / spark.mesos.coarse.cores.max
total memory = num executors * spark.executor.memory
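
As a rough sketch of that arithmetic (the values below are hypothetical, and 
`spark.mesos.coarse.cores.max` is the proposed setting, not an existing one):

```scala
// Hypothetical config values, just to illustrate the calculation above.
val sparkCoresMax    = 32  // spark.cores.max
val coarseCoresMax   = 8   // proposed spark.mesos.coarse.cores.max (cores per executor)
val executorMemoryGb = 30  // spark.executor.memory, the ~30gb ceiling mentioned below

val numExecutors  = sparkCoresMax / coarseCoresMax  // 32 / 8  = 4 executors
val totalMemoryGb = numExecutors * executorMemoryGb // 4 * 30  = 120 GB across the cluster
```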


> Support launching multiple mesos executors in coarse grained mesos mode
> -----------------------------------------------------------------------
>
>                 Key: SPARK-5095
>                 URL: https://issues.apache.org/jira/browse/SPARK-5095
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>
> Currently in coarse grained mesos mode, it's expected that we only launch one 
> Mesos executor that launches one JVM process to run multiple Spark executors.
> However, this becomes a problem when the launched JVM process is larger than 
> an ideal size (30gb is the value recommended by Databricks), which causes the 
> GC problems reported on the mailing list.
> We should support launching multiple executors when large enough resources 
> are available for Spark to use and those resources are still under the 
> configured limit.
> This is also applicable when users want to specify the number of executors to 
> be launched on each node.


