Github user erikerlandson commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22146#discussion_r212668742
  
    --- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
 ---
    @@ -225,6 +225,18 @@ private[spark] object Config extends Logging {
             "Ensure that major Python version is either Python2 or Python3")
           .createWithDefault("2")
     
    +  val KUBERNETES_DRIVER_CONTAINER_NAME =
    +    ConfigBuilder("spark.kubernetes.driver.containerName")
    --- End diff --
    
    I'm sensing some confusion over the purpose of 
`spark.kubernetes.driver.containerName` - is it to *select* a container out of 
the `containers` array in the template, or is it to *set* that container's name? 
Since this PR is creating a `containerName` Spark conf, I'm assuming the name is 
no longer hard-coded, and we're exposing it to user control.
    
    IIUC, the current concept is that the user *must* declare a container name 
in the template, *and* also must specify it via 
`spark.kubernetes.driver.containerName`; if either is missing, I'm not sure 
what the default behavior is.
    
    I agree that leaning on convention (the Spark container is the first one in 
the array) can cause problems, although the same problems would arise if the 
user specified the wrong container name in the config. Clear documentation 
would be important regardless. In general, user-supplied templates are a 
"power-user" feature, and some increased opportunity for error is inherent here.
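    To make the two interpretations concrete, here is a minimal sketch (not the 
PR's actual code; the case classes and `selectSparkContainer` helper are 
hypothetical stand-ins for the real pod-template model) of the *select* 
semantics, with a convention-based fallback to the first container when no name 
is configured:

    ```scala
    // Hypothetical model of a pod template's containers array.
    case class Container(name: String)
    case class PodSpec(containers: List[Container])

    // Sketch: pick the Spark container by the configured name if one is set;
    // otherwise fall back to the convention that the first container is Spark's.
    def selectSparkContainer(spec: PodSpec, configuredName: Option[String]): Container =
      configuredName match {
        case Some(n) =>
          spec.containers
            .find(_.name == n)
            .getOrElse(throw new IllegalArgumentException(
              s"No container named '$n' found in the pod template"))
        case None =>
          // Convention-based default; this is exactly the behavior under discussion.
          spec.containers.head
      }
    ```

    Under the *set* interpretation, by contrast, the conf value would simply be 
written into the chosen container's `name` field, and no lookup would be needed.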


---
