
Marcelo Vanzin resolved SPARK-25148.
------------------------------------
    Resolution: Cannot Reproduce

This seems to work for me locally. Executor pods are prefixed with a unique 
identifier based on the app name, unless overridden with 
{{spark.kubernetes.executor.podNamePrefix}}.
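
For reference, overriding the prefix per job looks roughly like this (a sketch
only: the master URL, app name, and jar path below are placeholders):

  # Sketch: <apiserver-host>, my-app, and the jar path are placeholders.
  spark-submit \
    --master k8s://https://<apiserver-host>:6443 \
    --deploy-mode client \
    --name my-app \
    --conf spark.kubernetes.executor.podNamePrefix=my-app \
    --class org.apache.spark.examples.SparkPi \
    local:///path/to/spark-examples.jar

With that override, executor pods should come up as my-app-exec-1,
my-app-exec-2, and so on, which makes them easy to attribute to the job.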

> Executors launched with Spark on K8s client mode should prefix name with 
> spark.app.name
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-25148
>                 URL: https://issues.apache.org/jira/browse/SPARK-25148
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.4.0
>            Reporter: Timothy Chen
>            Priority: Major
>
> With the newly added client mode for Spark on K8s, executors launched by 
> default are all named "spark-exec-#". This means that when multiple jobs are 
> launched in the same cluster, they often have to retry to find unused pod 
> names, and it's hard to correlate which executors were launched for which 
> Spark app. The workaround is to manually set the executor prefix 
> configuration for each job launched.
> Ideally the experience should match cluster mode, where each executor name 
> is prefixed with spark.app.name by default.


