[ https://issues.apache.org/jira/browse/SPARK-24105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcelo Vanzin resolved SPARK-24105.
------------------------------------
    Resolution: Won't Fix

Most probably covered by pod templates. Also, the bug summary doesn't explain the issue.

> Spark 2.3.0 on kubernetes
> -------------------------
>
>                 Key: SPARK-24105
>                 URL: https://issues.apache.org/jira/browse/SPARK-24105
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.3.0
>            Reporter: Lenin
>            Priority: Major
>
> Right now it is only possible to define node selector configuration through
> spark.kubernetes.node.selector.[labelKey], which is applied to both driver
> and executor pods. Without the ability to isolate driver pods from executor
> pods, the cluster can run into a livelock scenario: if there are many spark
> submits, the driver pods can fill up the cluster capacity, leaving no room
> for executor pods to do any work.
>
> To avoid this deadlock, node selector (and, in the future,
> affinity/anti-affinity) configuration needs to be supported separately for
> the driver and the executors.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
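[Editor's note: the resolution points to pod templates as the likely fix. As a sketch of how that addresses the request, Spark 3.0+ lets the driver and executors use separate pod template files, each of which can carry its own nodeSelector. The node label `workload` and its values below are hypothetical examples, not anything from this issue:]

```shell
# Hypothetical driver template: schedule driver pods only onto nodes
# labeled workload=spark-driver, keeping them off executor capacity.
cat > driver-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    workload: spark-driver
EOF

# Hypothetical executor template: executors get their own node pool.
cat > executor-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    workload: spark-executor
EOF

# Point Spark at the two templates (these properties exist in Spark 3.0+),
# replacing the global spark.kubernetes.node.selector.[labelKey] setting:
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.podTemplateFile=driver-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=executor-template.yaml \
  ...
```

[With node capacity partitioned this way, a burst of submissions can exhaust only the driver pool, so executors of already-running jobs keep making progress and the livelock described above is avoided.]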