Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21092#discussion_r183161726
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala ---
    @@ -88,15 +94,22 @@ private[spark] class BasicDriverFeatureStep(
             .addToRequests("memory", driverMemoryQuantity)
             .addToLimits("memory", driverMemoryQuantity)
             .endResources()
    -      .addToArgs("driver")
    +      .addToArgs(driverDockerContainer)
           .addToArgs("--properties-file", SPARK_CONF_PATH)
           .addToArgs("--class", conf.roleSpecificConf.mainClass)
    -      // The user application jar is merged into the spark.jars list and managed through that
    -      // property, so there is no need to reference it explicitly here.
    -      .addToArgs(SparkLauncher.NO_RESOURCE)
    -      .addToArgs(conf.roleSpecificConf.appArgs: _*)
    -      .build()
     
    +    val driverContainer =
    +      if (driverDockerContainer == "driver-py") {
    --- End diff --
    
    > So what about applications which need Python support (e.g. have Python UDFs) but don't use a Python driver process?
    
    I think that's up to the user to make it work - I don't see this being specifically handled by the other cluster managers.
    
    The goal of this PR should be to bring Kubernetes up to par with the other cluster managers with respect to what they provide. Do the other cluster managers provide any specific support for this?
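    To make the branch under discussion concrete, here is a minimal, self-contained sketch of the argument-selection logic the diff introduces: when the container is the Python driver, the primary resource is handled by the Python entrypoint, whereas the JVM driver passes `SparkLauncher.NO_RESOURCE` (the literal `"spark-internal"`) because the app jar is already carried in `spark.jars`. The object name, the `SPARK_CONF_PATH` value, and the exact argument ordering are illustrative assumptions, not the PR's final implementation.
    
    ```scala
    // Hypothetical sketch of the driver-args branching discussed above.
    // All names and paths here are illustrative assumptions.
    object DriverArgsSketch {
      // Assumed properties-file location inside the driver container.
      val SPARK_CONF_PATH = "/opt/spark/conf/spark.properties"
    
      // SparkLauncher.NO_RESOURCE is the literal "spark-internal".
      val NO_RESOURCE = "spark-internal"
    
      def driverArgs(
          driverDockerContainer: String,
          mainClass: String,
          appArgs: Seq[String]): Seq[String] = {
        val base = Seq(
          driverDockerContainer,
          "--properties-file", SPARK_CONF_PATH,
          "--class", mainClass)
        if (driverDockerContainer == "driver-py") {
          // Python driver: the primary resource is resolved by the
          // Python entrypoint, so no NO_RESOURCE marker is inserted.
          base ++ appArgs
        } else {
          // JVM driver: the app jar travels via spark.jars, so pass the
          // no-resource placeholder in the primary-resource position.
          base ++ Seq(NO_RESOURCE) ++ appArgs
        }
      }
    }
    ```
    
    Keeping the selection in one pure function makes the JVM/Python divergence easy to unit-test without building fabric8 `Container` objects.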


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
