[ https://issues.apache.org/jira/browse/SPARK-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971228#comment-14971228 ]
Jerry Lam edited comment on SPARK-4940 at 10/23/15 4:15 PM:
------------------------------------------------------------

I just want to weigh in on the importance of this issue. My observation is that in coarse-grained mode, if I configure the total core max to 20, I can end up with ONE executor holding all 20 cores. This is not ideal when I have 5 slaves with 32 cores each; it would make more sense to have one executor per slave, each with 4 cores.

This makes sizing very difficult: an executor configured with 10GB of RAM could have anywhere from 1 to 20 tasks allocated to it (assuming 1 CPU per task). Say each task can use up to 2GB of RAM; with 20 tasks the executor would OOM (40GB required), while with 1 task it would be underutilized (2GB required).

Is there a workaround at this moment, using Spark 1.5.1, to make the load more evenly distributed on Mesos? How do people actually use Spark on Mesos when resources are not distributed evenly? Also, I notice that there are much better features in Spark on YARN. Does that mean it is better to run Spark on YARN than on Mesos? Thanks!
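For context, a submission matching the scenario described above might look like the sketch below. The properties `spark.mesos.coarse`, `spark.cores.max`, `spark.executor.memory`, and `spark.task.cpus` are real Spark 1.5 settings; the master URL, values, and application jar are hypothetical, taken from the numbers in the comment:

```shell
# Hypothetical spark-submit for the scenario above:
# cluster of 5 slaves x 32 cores, capped at 20 cores total.
# In Spark 1.5 coarse-grained Mesos mode, spark.cores.max caps the
# TOTAL cores across the app, not cores per executor, so a single
# executor on one slave may end up claiming all 20 cores.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=20 \
  --conf spark.executor.memory=10g \
  --conf spark.task.cpus=1 \
  my-app.jar
```

With `spark.task.cpus=1`, that single 20-core executor runs up to 20 concurrent tasks against its 10GB heap, which is the OOM risk the comment describes.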
> Support more evenly distributing cores for Mesos mode
> -----------------------------------------------------
>
>                 Key: SPARK-4940
>                 URL: https://issues.apache.org/jira/browse/SPARK-4940
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>         Attachments: mesos-config-difference-3nodes-vs-2nodes.png
>
> Currently, in coarse-grained mode, the Spark scheduler simply takes all the resources it can on each node, which can cause uneven distribution depending on the resources available on each slave.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)