attilapiros commented on a change in pull request #33550:
URL: https://github.com/apache/spark/pull/33550#discussion_r679117614



##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala
##########
@@ -70,8 +70,16 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
     // If/when feature steps are executed in client mode, they should instead take care of this,
     // and this code should be removed.
     if (!sc.conf.contains(KUBERNETES_EXECUTOR_POD_NAME_PREFIX)) {
-      sc.conf.set(KUBERNETES_EXECUTOR_POD_NAME_PREFIX,
-        KubernetesConf.getResourceNamePrefix(sc.conf.get("spark.app.name")))
+      val podNamePrefix = KubernetesConf.getResourceNamePrefix(sc.conf.get("spark.app.name"))
+      if (KubernetesUtils.isValidExecutorPodNamePrefix(podNamePrefix)) {
+        sc.conf.set(KUBERNETES_EXECUTOR_POD_NAME_PREFIX, podNamePrefix)
+      } else {
+        val shortPrefix = "spark-" + KubernetesUtils.uniqueID()

Review comment:
       I do not think we should lose the whole app name when the prefix is too long.
   
   `getResourceNamePrefix` returns a concatenation of <app name> "-" <uniqueID>. The uniqueID is 16 chars, so of the 47 chars we have 30 remaining for the app name (the "-" is counted as well).
   
   My idea is to slice the app name to 30 chars. This way a [1-30] char long name stays as it is, and a 31 char long name is truncated to 30 and not to 0.
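   The suggested slicing could be sketched like this. This is an illustrative standalone sketch, not the actual Spark helpers: `uniqueID` here is a stand-in for `KubernetesUtils.uniqueID()`, and the 47-char prefix limit and 16-char ID length are taken from the numbers above.
   
   ```scala
   // Hypothetical sketch of the proposed behavior: truncate the app name
   // to 30 chars so the full prefix (<app name> "-" <uniqueID>) never
   // exceeds 47 chars, instead of dropping the app name entirely.
   object PodNamePrefixSketch {
     val MaxPrefixLength = 47
     val UniqueIdLength = 16
     // "-" separator is counted, leaving 30 chars for the app name.
     val MaxAppNameLength = MaxPrefixLength - UniqueIdLength - 1
   
     // Stand-in for KubernetesUtils.uniqueID(): a fixed-length 16-char id.
     def uniqueID(): String =
       java.util.UUID.randomUUID().toString.replace("-", "").take(UniqueIdLength)
   
     def resourceNamePrefix(appName: String): String = {
       // take(n) is a no-op for strings shorter than n, so a [1-30] char
       // name is kept as-is and longer names are cut to 30, not to 0.
       val sliced = appName.take(MaxAppNameLength)
       s"$sliced-${uniqueID()}"
     }
   }
   ```
   
   With this, `resourceNamePrefix("my-app")` keeps the full name (23 chars total), while a 40-char app name yields exactly the 47-char maximum (30 + 1 + 16).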
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


