Github user aditanase commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22146#discussion_r223491915
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -799,4 +815,168 @@ specific to Spark on Kubernetes.
    This sets the major Python version of the docker image used to run the driver and executor containers. Can be either 2 or 3. 
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.kubernetes.driver.podTemplateFile</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Specify the local file that contains the driver [pod template](#pod-template). For example
    +   <code>spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.podTemplateFile</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Specify the local file that contains the executor [pod template](#pod-template). For example
    +   <code>spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml</code>
    +  </td>
    +</tr>
    +</table>
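    +
    +For example, a minimal driver pod template might look like the following sketch (the label, volume name, and paths are illustrative placeholders, not required values; fields listed in the tables below will still be overwritten by Spark):
    +
    +```yaml
    +apiVersion: v1
    +kind: Pod
    +metadata:
    +  labels:
    +    team: data-eng              # example label; merged with Spark-managed labels
    +spec:
    +  containers:
    +  - name: spark-kubernetes-driver
    +    volumeMounts:
    +    - name: cache               # example extra mount, kept alongside Spark's own mounts
    +      mountPath: /tmp/cache
    +  volumes:
    +  - name: cache
    +    emptyDir: {}
    +```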
    +
    +### Pod template properties
    +
    +See the tables below for the full list of pod specification fields that will be overwritten by Spark.
    +
    +#### Pod Metadata
    +
    +<table class="table">
    +<tr><th>Pod metadata key</th><th>Modified value</th><th>Description</th></tr>
    +<tr>
    +  <td>name</td>
    +  <td>Value of <code>spark.kubernetes.driver.pod.name</code></td>
    +  <td>
    +    The driver pod name will be overwritten with either the configured or default value of
    +    <code>spark.kubernetes.driver.pod.name</code>. The executor pod names will be unaffected.
    +  </td>
    +</tr>
    +<tr>
    +  <td>namespace</td>
    +  <td>Value of <code>spark.kubernetes.namespace</code></td>
    +  <td>
    +    Spark makes strong assumptions about the driver and executor namespaces. Both driver and executor namespaces will
    +    be replaced by either the configured or default Spark configuration value.
    +  </td>
    +</tr>
    +<tr>
    +  <td>labels</td>
    +  <td>Adds the labels from <code>spark.kubernetes.{driver,executor}.label.*</code></td>
    +  <td>
    +    Spark will add additional labels specified by the Spark configuration.
    +  </td>
    +</tr>
    +<tr>
    +  <td>annotations</td>
    +  <td>Adds the annotations from <code>spark.kubernetes.{driver,executor}.annotation.*</code></td>
    +  <td>
    +    Spark will add additional annotations specified by the Spark configuration.
    +  </td>
    +</tr>
    +</table>
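    +
    +As a sketch of the merge behaviour, configuration entries such as the following (hypothetical keys and values) would be appended to whatever labels and annotations the template already declares:
    +
    +```
    +spark.kubernetes.driver.label.team=data-eng
    +spark.kubernetes.executor.annotation.example.com/owner=etl-jobs
    +```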
    +
    +#### Pod Spec
    +
    +<table class="table">
    +<tr><th>Pod spec key</th><th>Modified value</th><th>Description</th></tr>
    +<tr>
    +  <td>imagePullSecrets</td>
    +  <td>Adds image pull secrets from <code>spark.kubernetes.container.image.pullSecrets</code></td>
    +  <td>
    +    Additional pull secrets will be added from the Spark configuration to both driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td>nodeSelector</td>
    +  <td>Adds node selectors from <code>spark.kubernetes.node.selector.*</code></td>
    +  <td>
    +    Additional node selectors will be added from the Spark configuration to both driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td>restartPolicy</td>
    +  <td><code>"Never"</code></td>
    +  <td>
    +    Spark assumes that both drivers and executors never restart.
    +  </td>
    +</tr>
    +<tr>
    +  <td>serviceAccount</td>
    +  <td>Value of <code>spark.kubernetes.authenticate.driver.serviceAccountName</code></td>
    +  <td>
    +    Spark will override <code>serviceAccount</code> with the value of the Spark configuration, for driver pods only,
    +    and only if the Spark configuration is specified. Executor pods will remain unaffected.
    +  </td>
    +</tr>
    +<tr>
    +  <td>serviceAccountName</td>
    +  <td>Value of <code>spark.kubernetes.authenticate.driver.serviceAccountName</code></td>
    +  <td>
    +    Spark will override <code>serviceAccountName</code> with the value of the Spark configuration, for driver pods only,
    +    and only if the Spark configuration is specified. Executor pods will remain unaffected.
    +  </td>
    +</tr>
    +<tr>
    +  <td>volumes</td>
    +  <td>Adds volumes from <code>spark.kubernetes.{driver,executor}.volumes.[VolumeType].[VolumeName].mount.path</code></td>
    +  <td>
    +    Spark will add volumes as specified by the Spark configuration, as well as additional volumes necessary for passing
    +    the Spark configuration and pod template files.
    +  </td>
    +</tr>
    +</table>
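    +
    +For instance, a hostPath volume declared only in the Spark configuration (the volume name and paths below are placeholders) would be appended to the <code>volumes</code> list from the template:
    +
    +```
    +spark.kubernetes.driver.volumes.hostPath.scratch.mount.path=/scratch
    +spark.kubernetes.driver.volumes.hostPath.scratch.options.path=/mnt/scratch
    +```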
    +
    +#### Container spec
    +
    +The following affect the driver and executor containers. All other containers in the pod spec will be unaffected.
    +
    +<table class="table">
    +<tr><th>Container spec key</th><th>Modified value</th><th>Description</th></tr>
    +<tr>
    +  <td>env</td>
    +  <td>Adds env variables from <code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code> (driver) and <code>spark.executorEnv.[EnvironmentVariableName]</code> (executors)</td>
    +  <td>
    +    Spark will add driver env variables from <code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code>, and
    +    executor env variables from <code>spark.executorEnv.[EnvironmentVariableName]</code>.
    +  </td>
    +</tr>
    +<tr>
    +  <td>image</td>
    +  <td>Value of <code>spark.kubernetes.{driver,executor}.container.image</code></td>
    +  <td>
    +    The image will be defined by the Spark configuration.
    +  </td>
    +</tr>
    +<tr>
    +  <td>imagePullPolicy</td>
    +  <td>Value of <code>spark.kubernetes.container.image.pullPolicy</code></td>
    +  <td>
    +    Spark will override the pull policy for both drivers and executors.
    +  </td>
    +</tr>
    +<tr>
    +  <td>name</td>
    +  <td>See description</td>
    +  <td>
    +    The container name will be assigned by Spark ("spark-kubernetes-driver" for the driver container, and
    +    "executor" for each executor container) if not defined by the pod template. If the container is defined by the
    +    template, the template's name will be used.
    +  </td>
    +</tr>
    +<tr>
    +  <td>resources</td>
    +  <td>See description</td>
    +  <td>
    +    The cpu limits are set by <code>spark.kubernetes.{driver,executor}.limit.cores</code>. The cpu request is set by
    +    <code>spark.{driver,executor}.cores</code>. The memory request and limit are set by summing the values of
    +    <code>spark.{driver,executor}.memory</code> and <code>spark.{driver,executor}.memoryOverhead</code>.
    +  </td>
    +</tr>
    +<tr>
    +  <td>volumeMounts</td>
    +  <td>Adds volume mounts from <code>spark.kubernetes.driver.volumes.[VolumeType].[VolumeName].mount.{path,readOnly}</code></td>
    --- End diff --
    
    Just checking, is it add or replace? I'm hoping one could use this to mount unsupported volume types, like ConfigMaps or Secrets, in addition to those managed by Spark.


---
