[ 
https://issues.apache.org/jira/browse/SPARK-26867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16782270#comment-16782270
 ] 

Prabhu Joseph commented on SPARK-26867:
---------------------------------------

Spark could allow users to configure YARN placement constraints, giving them 
more control over where executors are placed. For example:

1. A Spark job wants to run on machines where the Python version is x or the 
Java version is y (node attributes).
2. A Spark job needs / does not need its executors placed on machines where an 
HBase RegionServer, ZooKeeper, or any other service is running (affinity / 
anti-affinity).
3. A Spark job wants no more than 2 of its executors on the same node 
(cardinality).
4. Spark job A's executors want / do not want to run where the containers of 
Spark job B, or any other job B, run (application-tag namespace).
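For reference, the first three constraint types above can be sketched with 
YARN's PlacementConstraints DSL (Hadoop 3.1+; node attributes in 3.2+). This 
is an illustrative sketch only, not part of the proposal -- the tag and 
attribute names used here ("python", "hbase-rs", "spark") are assumptions:

```
// Sketch only: requires Hadoop YARN 3.1+/3.2+ on the classpath.
// Tag/attribute names below are hypothetical, chosen to mirror the examples.
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.*;

public class PlacementSketch {
  public static void main(String[] args) {
    // 1. Node attributes: only nodes whose "python" attribute equals "3".
    PlacementConstraint nodeAttr =
        build(targetIn(NODE, nodeAttribute("python", "3")));

    // 2. Anti-affinity: avoid nodes whose containers carry the tag
    //    "hbase-rs" (assumes RegionServers are tagged that way).
    PlacementConstraint antiAffinity =
        build(targetNotIn(NODE, allocationTag("hbase-rs")));

    // 3. Cardinality: at most 2 containers tagged "spark" per node.
    PlacementConstraint atMostTwoPerNode =
        build(cardinality(NODE, 0, 2, "spark"));
  }
}
```

Spark's YARN allocator would attach such constraints to its container 
requests; the inter-application tag namespace in example 4 would follow the 
same pattern with namespaced allocation tags.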



> Spark Support of YARN Placement Constraint
> ------------------------------------------
>
>                 Key: SPARK-26867
>                 URL: https://issues.apache.org/jira/browse/SPARK-26867
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core, YARN
>    Affects Versions: 3.0.0
>            Reporter: Prabhu Joseph
>            Priority: Major
>
> YARN provides placement constraint features, where an application can request 
> containers based on affinity / anti-affinity / cardinality with respect to 
> services, other application containers, or node attributes. This would be a 
> useful feature for Spark jobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
