[ https://issues.apache.org/jira/browse/SPARK-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971228#comment-14971228 ]
Jerry Lam edited comment on SPARK-4940 at 10/23/15 4:01 PM:
------------------------------------------------------------

I just want to weigh in on the importance of this issue. My observation is that in coarse-grained mode, if I configure the total core max to 20, I can end up with ONE executor holding all 20 cores. This is not ideal when I have 5 slaves with 32 cores each; it would make more sense to have ONE executor per slave, each with 4 cores. Is there a workaround in Spark 1.5.1 to distribute the load more evenly on Mesos? How do people actually use Spark on Mesos when resources are not distributed evenly? Thanks!

> Support more evenly distributing cores for Mesos mode
> -----------------------------------------------------
>
>                 Key: SPARK-4940
>                 URL: https://issues.apache.org/jira/browse/SPARK-4940
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>         Attachments: mesos-config-difference-3nodes-vs-2nodes.png
>
> Currently in coarse-grained mode the Spark scheduler simply takes all the
> resources it can on each node, which can cause uneven distribution depending
> on the resources available on each slave.
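
For illustration, here is a minimal sketch of the configuration being discussed. The Mesos master URL and app name are hypothetical, and note that a per-executor core cap via spark.executor.cores is only honored by the coarse-grained Mesos scheduler in later Spark releases, so this sketches the behavior SPARK-4940 asks for rather than a confirmed 1.5.1 workaround:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the scenario above: 5 slaves with 32 cores each. In coarse-grained
// mode the scheduler may satisfy spark.cores.max from a single large offer, so
// cores.max = 20 can land as ONE 20-core executor on one slave.
val conf = new SparkConf()
  .setMaster("mesos://zk://master:2181/mesos") // hypothetical Mesos master URL
  .setAppName("even-distribution-sketch")      // hypothetical app name
  .set("spark.mesos.coarse", "true")           // coarse-grained mode, as in the comment
  .set("spark.cores.max", "20")                // total cap across the cluster
  // Capping cores per executor would yield 20 / 4 = 5 executors of 4 cores,
  // one per slave -- the distribution the comment asks for. This setting is
  // respected on coarse-grained Mesos only in later Spark versions.
  .set("spark.executor.cores", "4")

val sc = new SparkContext(conf)
{code}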