Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21067#discussion_r194862246
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala ---
    @@ -67,12 +68,19 @@ private[spark] class BasicExecutorFeatureStep(
         }
      private val executorLimitCores = kubernetesConf.get(KUBERNETES_EXECUTOR_LIMIT_CORES)
     
    -  override def configurePod(pod: SparkPod): SparkPod = {
    -    val name = s"$executorPodNamePrefix-exec-${kubernetesConf.roleSpecificConf.executorId}"
    +  // If the driver pod is killed, the replacement driver pod will try to
    +  // create executors with the same names, but those creations fail and
    +  // hang indefinitely because a terminating executor blocks the creation
    +  // of a new one with the same name; apply a salt to avoid that.
    +  private val executorNameSalt = Random.alphanumeric.take(4).mkString("").toLowerCase
    --- End diff --
    
    We should just have the executor pod name be `s"$applicationId-exec-$executorId"` 
    then. I don't think the pod name prefix has to be strictly tied to the 
    application name. The application name should instead be applied as a label so 
    the executor pod can be located with it when using the dashboard, kubectl, etc.
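    A minimal sketch of the naming scheme suggested above: since the application 
    ID is unique per submission, a replacement driver pod gets a fresh ID and its 
    executor pod names never collide with terminating executors from the old 
    driver, so no random salt is needed. The object and values below are 
    illustrative, not the actual fields of `BasicExecutorFeatureStep`:
    
    ```scala
    // Hypothetical helper, assuming pod names are derived from the unique
    // application ID rather than a salted application-name prefix.
    object ExecutorPodNaming {
      def executorPodName(applicationId: String, executorId: String): String =
        s"$applicationId-exec-$executorId"
    
      def main(args: Array[String]): Unit = {
        // A new driver submission would carry a different applicationId,
        // so its executor pod names cannot clash with the old ones.
        println(executorPodName("spark-app-1234", "7"))
      }
    }
    ```
    
    The application name would then move into a label (e.g. via the pod's 
    metadata) purely for discovery with `kubectl` or the dashboard.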


---
