ulysses-you commented on a change in pull request #33550:
URL: https://github.com/apache/spark/pull/33550#discussion_r678787932



##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
##########
@@ -70,8 +70,16 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
     // If/when feature steps are executed in client mode, they should instead take care of this,
     // and this code should be removed.
     if (!sc.conf.contains(KUBERNETES_EXECUTOR_POD_NAME_PREFIX)) {
-      sc.conf.set(KUBERNETES_EXECUTOR_POD_NAME_PREFIX,
-        KubernetesConf.getResourceNamePrefix(sc.conf.get("spark.app.name")))
+      val podNamePrefix = KubernetesConf.getResourceNamePrefix(sc.conf.get("spark.app.name"))
+      if (org.apache.spark.deploy.k8s.Config.isValidExecutorPodNamePrefix(podNamePrefix)) {

Review comment:
       Ah, this is because the file already imports `io.fabric8.kubernetes.client.Config`, so I used the fully qualified name for the Spark `Config` here. But once we move this into `KubernetesUtils`, that name clash is no longer an issue.
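
       For reference, there are two ways around that kind of clash (neither is necessarily what this PR ends up with): rename on import, e.g. `import org.apache.spark.deploy.k8s.{Config => K8sConfig}`, or host the check in `KubernetesUtils` as suggested above so call sites never touch the `Config` object. A minimal, self-contained sketch of what such a helper might look like is below; the length limit and regex are assumptions for illustration, not Spark's actual validation rule.

       ```scala
       // Hypothetical sketch of a validator hosted in a neutral helper object,
       // so KubernetesClusterManager does not need the Spark Config object at all.
       object KubernetesUtilsSketch {

         // Assumed bound for illustration; the real limit must also leave room
         // for the executor-id suffix appended to the prefix.
         private val MaxPrefixLength = 63

         // Assumed DNS-1035-style label rule: lowercase alphanumerics and '-',
         // starting with a letter and not ending with '-'.
         private val PrefixPattern = "[a-z]([a-z0-9-]*[a-z0-9])?".r

         def isValidExecutorPodNamePrefix(prefix: String): Boolean =
           prefix.nonEmpty &&
             prefix.length <= MaxPrefixLength &&
             PrefixPattern.pattern.matcher(prefix).matches()
       }
       ```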



