Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20553#discussion_r178371345
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -576,14 +576,21 @@ specific to Spark on Kubernetes.
       <td><code>spark.kubernetes.driver.limit.cores</code></td>
       <td>(none)</td>
       <td>
    -    Specify the hard CPU [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    +    Specify a hard cpu [limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container) for the driver pod.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.cores</code></td>
    +  <td>(none)</td>
    +  <td>
    +    Specify the cpu request for each executor pod. Values conform to the Kubernetes [convention](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu). Takes precedence over <code>spark.executor.cores</code> if set.
    --- End diff --
    
    Precedence in what sense? This won't override `spark.executor.cores` when it comes to the number of concurrently running tasks. I think it's better to say that it's distinct from `spark.executor.cores`: one governs task parallelism in the Spark scheduler, the other governs the CPU request on the executor pod.
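    
    Just to illustrate the point (not part of the doc change): a minimal sketch, assuming the property name lands as proposed in this diff. The master URL, app name, and values here are placeholders.
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // Sketch only: "k8s://https://example-cluster:6443" is a placeholder master URL.
    val spark = SparkSession.builder()
      .master("k8s://https://example-cluster:6443")
      .appName("executor-cpu-example")
      // Spark scheduler: each executor runs up to 4 tasks concurrently.
      .config("spark.executor.cores", "4")
      // Kubernetes: the executor pod requests 3.5 CPUs, independent of the task slots above.
      .config("spark.kubernetes.executor.cores", "3.5")
      .getOrCreate()
    ```
    
    Setting `spark.kubernetes.executor.cores` does not change how many tasks run at once; it only changes what the pod asks the cluster for, which is why "takes precedence" reads misleadingly to me.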


---
