Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19881#discussion_r174125381
  
    --- Diff: docs/configuration.md ---
    @@ -1795,6 +1796,19 @@ Apart from these, the following properties are also available, and may be useful
         Lower bound for the number of executors if dynamic allocation is enabled.
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.dynamicAllocation.fullParallelismDivisor</code></td>
    +  <td>1</td>
    +  <td>
    +    By default, the dynamic allocation will request enough executors to maximize the
    +    parallelism according to the number of tasks to process. While this minimizes the
    +    latency of the job, with small tasks this setting wastes a lot of resources due to
    --- End diff ---
    
    nit: "wastes a lot of resources" should read "can waste a lot of resources".
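
    For context, here is a minimal sketch (illustrative only, not Spark's actual implementation) of the arithmetic this divisor implies, assuming it simply scales down the full-parallelism executor target; the names `targetExecutors` and `tasksPerExecutor` are hypothetical:

    ```scala
    // Hypothetical sketch of how a fullParallelismDivisor could scale down the
    // dynamic-allocation executor target; not Spark's actual internals.
    object DivisorSketch {
      def targetExecutors(pendingTasks: Int, tasksPerExecutor: Int, divisor: Double): Int = {
        // divisor = 1 (the default): enough executors to run every pending task at once.
        val fullParallelism = math.ceil(pendingTasks.toDouble / tasksPerExecutor).toInt
        // divisor > 1 trades some latency for proportionally fewer executors.
        math.max(1, math.ceil(fullParallelism / divisor).toInt)
      }

      def main(args: Array[String]): Unit = {
        println(targetExecutors(1000, 4, 1.0)) // 250 executors at full parallelism
        println(targetExecutors(1000, 4, 2.0)) // 125 executors with a divisor of 2
      }
    }
    ```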
    


