[ https://issues.apache.org/jira/browse/SPARK-6707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15326297#comment-15326297 ]
Fan Du commented on SPARK-6707:
-------------------------------

Hi [~ankurcha], I have a question about the example {{tachyon=true;us-east-1=false}} in the ticket description. Tachyon is the memory-centric distributed storage system that has since been renamed Alluxio, and Alluxio can run on Mesos too. So my question is: does {{tachyon=true;us-east-1=false}} indicate that the application prefers nodes that are already running Tachyon? In other words, do applications have affinity for each other? I'm asking because I'm working on a similar feature that would give Mesos frameworks the ability to update attributes ([MESOS-5545|https://issues.apache.org/jira/browse/MESOS-5545#]); I think your comments could help justify that requirement.

> Mesos Scheduler should allow the user to specify constraints based on slave
> attributes
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-6707
>                 URL: https://issues.apache.org/jira/browse/SPARK-6707
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos, Scheduler
>    Affects Versions: 1.3.0
>            Reporter: Ankur Chauhan
>            Assignee: Ankur Chauhan
>              Labels: mesos, scheduler
>             Fix For: 1.5.0
>
>
> Currently, the Mesos scheduler only looks at the 'cpu' and 'mem' resources when deciding whether a resource offer from a Mesos slave node is usable. It may be preferable to let the user ensure that Spark jobs are started only on a certain set of nodes (based on attributes). For example, if the user sets the property {code}spark.mesos.constraints{code} to {code}tachyon=true;us-east-1=false{code}, then each resource offer will be checked against both constraints, and only offers that satisfy them will be accepted to start new executors.
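The constraint matching described in the ticket could be sketched as follows. This is a minimal illustration, not the actual Spark implementation; the object name {{ConstraintCheck}} and its methods are hypothetical. It parses a {{key=value;key=value}} constraint string and accepts an offer only if every constrained attribute is present with the required value:

```scala
// Hypothetical sketch of attribute-based offer filtering -- not Spark's real code.
object ConstraintCheck {
  // Parse "tachyon=true;us-east-1=false" into a map of attribute -> required value.
  def parseConstraints(spec: String): Map[String, String] =
    spec.split(";").filter(_.nonEmpty).map { kv =>
      val Array(k, v) = kv.split("=", 2)
      k -> v
    }.toMap

  // An offer is usable only if every constrained attribute is present
  // on the slave with exactly the required value.
  def offerSatisfies(offerAttrs: Map[String, String],
                     constraints: Map[String, String]): Boolean =
    constraints.forall { case (k, v) => offerAttrs.get(k).contains(v) }
}
```

For example, an offer from a slave advertising {{tachyon=true}} and {{us-east-1=false}} would satisfy the constraint string above, while an offer missing either attribute, or carrying a different value, would be declined.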
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org