[jira] [Commented] (SPARK-35956) Support auto-assigning labels to less important pods (e.g. decommissioning pods)

2022-03-08 Thread Apache Spark (Jira)


[ https://issues.apache.org/jira/browse/SPARK-35956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502882#comment-17502882 ]

Apache Spark commented on SPARK-35956:
--

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/35767

> Support auto-assigning labels to less important pods (e.g. decommissioning 
> pods)
> 
>
> Key: SPARK-35956
> URL: https://issues.apache.org/jira/browse/SPARK-35956
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.2.0
>Reporter: Holden Karau
>Assignee: Holden Karau
>Priority: Major
> Fix For: 3.3.0
>
>
> To allow folks to use pod disruption budgets (PDBs) or ReplicaSets, we should 
> indicate which pods Spark cares about "the least": those are the pods that 
> are already exiting soon.
>  
> With PDBs, the user would create a PDB selecting the label applied to 
> decommissioning executors, and that PDB could allow a higher number of 
> unavailable pods than the PDB covering the "regular" executors. For people 
> using ReplicaSets on Kubernetes 1.21+, we could also set the 
> "controller.kubernetes.io/pod-deletion-cost" annotation (see 
> [https://github.com/kubernetes/kubernetes/pull/99163]) to hint to the 
> controller that a pod is less important to us.
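
The two mechanisms described above can be sketched with the official Kubernetes
Python client. This is a minimal, illustrative sketch only: the label key
"spark-exec-decommissioning", the namespace "spark", the executor pod name, and
the deletion-cost value are assumptions made for the example, not names or
values that Spark itself defines.

from kubernetes import client, config

# Requires the "kubernetes" Python package and cluster credentials.
config.load_kube_config()  # use config.load_incluster_config() inside a pod

# 1) A PDB that selects only decommissioning executors and tolerates losing all
#    of them; a separate, stricter PDB would guard the "regular" executors.
pdb = client.V1PodDisruptionBudget(
    api_version="policy/v1",
    kind="PodDisruptionBudget",
    metadata=client.V1ObjectMeta(name="spark-decommissioning-execs"),
    spec=client.V1PodDisruptionBudgetSpec(
        max_unavailable="100%",  # more permissive than the PDB for regular execs
        selector=client.V1LabelSelector(
            match_labels={"spark-exec-decommissioning": "true"}  # assumed label key
        ),
    ),
)
client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="spark", body=pdb
)

# 2) On Kubernetes 1.21+, lower the pod-deletion-cost annotation on a
#    decommissioning executor so the ReplicaSet controller prefers to delete it
#    first when scaling down.
client.CoreV1Api().patch_namespaced_pod(
    name="spark-exec-42",  # hypothetical executor pod name
    namespace="spark",
    body={
        "metadata": {
            "annotations": {"controller.kubernetes.io/pod-deletion-cost": "-100"}
        }
    },
)

As the issue proposes, Spark itself would auto-assign the decommissioning label
when an executor starts exiting, and a user-supplied PDB like the one above
would then target it.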



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-35956) Support auto-assigning labels to less important pods (e.g. decommissioning pods)

2021-07-08 Thread Apache Spark (Jira)


[ https://issues.apache.org/jira/browse/SPARK-35956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377528#comment-17377528 ]

Apache Spark commented on SPARK-35956:
--

User 'holdenk' has created a pull request for this issue:
https://github.com/apache/spark/pull/33270

> Support auto-assigning labels to less important pods (e.g. decommissioning 
> pods)
> 
>
> Key: SPARK-35956
> URL: https://issues.apache.org/jira/browse/SPARK-35956
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.2.0
>Reporter: Holden Karau
>Assignee: Holden Karau
>Priority: Major
>
> To allow folks to use pod disruption budgets (PDBs) or ReplicaSets, we should 
> indicate which pods Spark cares about "the least": those are the pods that 
> are already exiting soon.
>  
> With PDBs, the user would create a PDB selecting the label applied to 
> decommissioning executors, and that PDB could allow a higher number of 
> unavailable pods than the PDB covering the "regular" executors. For people 
> using ReplicaSets on Kubernetes 1.21+, we could also set the 
> "controller.kubernetes.io/pod-deletion-cost" annotation (see 
> [https://github.com/kubernetes/kubernetes/pull/99163]) to hint to the 
> controller that a pod is less important to us.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


