Github user jinxing64 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16989#discussion_r116907401
  
    --- Diff: docs/configuration.md ---
    @@ -954,16 +970,16 @@ Apart from these, the following properties are also 
available, and may be useful
       <td><code>spark.memory.offHeap.enabled</code></td>
       <td>false</td>
       <td>
    -    If true, Spark will attempt to use off-heap memory for certain 
operations. If off-heap memory use is enabled, then 
<code>spark.memory.offHeap.size</code> must be positive.
    +    If true, Spark will attempt to use off-heap memory for certain 
operations (e.g., memory allocated by Tungsten/Unsafe code). If off-heap memory 
use is enabled, then <code>spark.memory.offHeap.size</code> must be positive.
       </td>
     </tr>
     <tr>
       <td><code>spark.memory.offHeap.size</code></td>
    -  <td>0</td>
    +  <td>384 * 1024 * 1024</td>
       <td>
         The absolute amount of memory in bytes which can be used for off-heap 
allocation.
         This setting has no impact on heap memory usage, so if your executors' 
total memory consumption must fit within some hard limit then be sure to shrink 
your JVM heap size accordingly.
    -    This must be set to a positive value when 
<code>spark.memory.offHeap.enabled=true</code>.
    +    This must be set to a positive value when 
<code>spark.memory.offHeap.enabled=true</code>. Note that blocks will be 
shuffled to off-heap memory by default.
    --- End diff --
    
    Yes, I will fix this. I was thinking that Netty will use off-heap memory 
for fetching remote blocks when `spark.shuffle.io.preferDirectBufs` is true; 
that's why I put `by default` here.
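    For reference, a minimal `spark-defaults.conf` sketch of the settings 
discussed above (the byte value mirrors the 384 MiB default proposed in the 
diff; treat the exact numbers as illustrative):

    ```properties
    # Enable off-heap memory for Tungsten/Unsafe allocations
    spark.memory.offHeap.enabled         true
    # Absolute off-heap budget in bytes (384 * 1024 * 1024 = 402653184);
    # must be positive whenever spark.memory.offHeap.enabled=true
    spark.memory.offHeap.size            402653184
    # Let Netty prefer direct (off-heap) buffers when fetching shuffle blocks
    spark.shuffle.io.preferDirectBufs    true
    ```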

