[ 
https://issues.apache.org/jira/browse/SPARK-8726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefano Parmesan closed SPARK-8726.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

> Wrong spark.executor.memory when using different EC2 master and worker 
> machine types
> ------------------------------------------------------------------------------------
>
>                 Key: SPARK-8726
>                 URL: https://issues.apache.org/jira/browse/SPARK-8726
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.4.0
>            Reporter: Stefano Parmesan
>             Fix For: 1.4.0
>
>
> _(this is a mirror of 
> [MESOS-2985|https://issues.apache.org/jira/browse/MESOS-2985])_
> By default, {{spark.executor.memory}} is set to [min(slave_ram_kb, 
> master_ram_kb)|https://github.com/mesos/spark-ec2/blob/e642aa362338e01efed62948ec0f063d5fce3242/deploy_templates.py#L32].
> When the master and the workers use the same instance type this goes 
> unnoticed, but when they use different types (which is reasonable, since the 
> master cannot be a spot instance and a large master machine would be a waste 
> of resources) the default memory given to each worker is capped at the 
> amount of RAM available on the master. For example, with an m1.small master 
> (1.7GB RAM) and one m1.large worker (7.5GB RAM), {{spark.executor.memory}} 
> ends up set to 512MB.
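A minimal sketch of the capping logic described above, assuming a simplified stand-in for deploy_templates.py (the instance-RAM table, function name, and overhead figure below are illustrative, not the actual script):

{code:python}
# Illustrative sketch only -- not the real deploy_templates.py.
# Shows how min(master_ram, slave_ram) caps executor memory at the master's RAM.

# Hypothetical per-instance-type RAM table, in MB (values approximate).
INSTANCE_RAM_MB = {
    "m1.small": 1700,   # ~1.7GB
    "m1.large": 7680,   # ~7.5GB
}

def default_executor_memory_mb(master_type, slave_type):
    """Mimic the problematic default: take the smaller of the two RAM sizes."""
    master_ram = INSTANCE_RAM_MB[master_type]
    slave_ram = INSTANCE_RAM_MB[slave_type]
    # The bug: the worker's executor memory is bounded by the *master's* RAM,
    # even though executors only ever run on the workers.
    usable = min(master_ram, slave_ram)
    # Reserve some headroom for the OS and daemons (overhead value is illustrative).
    return max(512, usable - 1300)

if __name__ == "__main__":
    # m1.small master (1.7GB) + m1.large worker (7.5GB): capped to 512MB.
    print(default_executor_memory_mb("m1.small", "m1.large"))   # 512
    # Same instance type for master and workers: the cap is invisible.
    print(default_executor_memory_mb("m1.large", "m1.large"))   # 6380
{code}

With different master and worker types, the min() picks the master's smaller RAM, which reproduces the reported symptom of a 512MB default on a 7.5GB worker.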


