Github user foxish commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20059#discussion_r158564505
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -528,51 +545,94 @@ specific to Spark on Kubernetes.
       </td>
     </tr>
     <tr>
    -   <td><code>spark.kubernetes.driver.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.executor.limit.cores</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    -   </td>
    - </tr>
    - <tr>
    -   <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    -     configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    -     will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    -      <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    -    </td>
    -  </tr>
    - <tr>
    -   <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    -   <td>(none)</td>
    -   <td>
    -     Add the environment variable specified by <code>EnvironmentVariableName</code> to
    -     the Driver process. The user can specify multiple of these to set multiple environment variables.
    -   </td>
    - </tr>
    -  <tr>
    -    <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    -    <td><code>/var/spark-data/spark-jars</code></td>
    -    <td>
    -      Location to download jars to in the driver and executors.
    -      This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -    </td>
    -  </tr>
    -   <tr>
    -     <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    -     <td><code>/var/spark-data/spark-files</code></td>
    -     <td>
    -       Location to download jars to in the driver and executors.
    -       This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    -     </td>
    -   </tr>
    +  <td><code>spark.kubernetes.driver.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.limit.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for each executor pod launched for the Spark Application.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.node.selector.[labelKey]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Adds to the node selector of the driver pod and executor pods, with key <code>labelKey</code> and the value as the
    +    configuration's value. For example, setting <code>spark.kubernetes.node.selector.identifier</code> to <code>myIdentifier</code>
    +    will result in the driver pod and executors having a node selector with key <code>identifier</code> and value
    +     <code>myIdentifier</code>. Multiple node selector keys can be added by setting multiple configurations with this prefix.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driverEnv.[EnvironmentVariableName]</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Add the environment variable specified by <code>EnvironmentVariableName</code> to
    +    the Driver process. The user can specify multiple of these to set multiple environment variables.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.jarsDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-jars</code></td>
    +  <td>
    +    Location to download jars to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.filesDownloadDir</code></td>
    +  <td><code>/var/spark-data/spark-files</code></td>
    +  <td>
    +    Location to download files to in the driver and executors.
    +    This directory must be empty and will be mounted as an empty directory volume on the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.mountDependencies.mountTimeout</code></td>
    +  <td>5 minutes</td>
    +  <td>
    +   Timeout before aborting the attempt to download and unpack dependencies from remote locations when initializing
    +   the driver and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.initContainer.image</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Container image for the init-container of the driver and executors for downloading dependencies.
    +   This is usually of the form <code>example.com/repo/spark-init:v1.0.0</code>.
    +   This configuration is optional and must be provided by the user if the application uses any dependency that is
    +   not local to the container and must be downloaded remotely.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.initContainer.maxThreadPoolSize</code></td>
    +  <td>5</td>
    +  <td>
    +   Maximum size of the thread pool in the init-container for downloading remote dependencies.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driver.secrets.[SecretName]</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Add the secret named <code>SecretName</code> to the driver pod on the path specified in the value. For example,
    --- End diff ---
    
    secret -> Kubernetes Secret
    Please also link to the Kubernetes secrets docs page.
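    
    For context, a minimal sketch of how these templated properties would be passed at submission time. It reuses the SparkPi invocation style from elsewhere in this docs page, and the node selector value is the diff's own example; the env var name, Secret name (`spark-secret`), and mount path (`/etc/secrets`) are hypothetical placeholders, not from this PR:
    
    ```bash
    # Sketch only: node selector key/value come from the docs' own example;
    # the env var, Secret name, and mount path are hypothetical placeholders.
    bin/spark-submit \
      --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
      --deploy-mode cluster \
      --name spark-pi \
      --class org.apache.spark.examples.SparkPi \
      --conf spark.kubernetes.node.selector.identifier=myIdentifier \
      --conf spark.kubernetes.driverEnv.MY_ENV_VAR=myValue \
      --conf spark.kubernetes.driver.secrets.spark-secret=/etc/secrets \
      local:///path/to/examples.jar
    ```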

