Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16989#discussion_r116926673
  
    --- Diff: 
core/src/main/scala/org/apache/spark/internal/config/package.scala ---
    @@ -278,4 +278,39 @@ package object config {
             "spark.io.compression.codec.")
           .booleanConf
           .createWithDefault(false)
    +
    +  private[spark] val SHUFFLE_ACCURATE_BLOCK_THRESHOLD =
    +    ConfigBuilder("spark.shuffle.accurateBlkThreshold")
    +      .doc("When we compress the size of shuffle blocks in 
HighlyCompressedMapStatus, we will" +
    +        "record the size accurately if it's above the threshold specified 
by this config. This " +
    +        "helps to prevent OOM by avoiding underestimating shuffle block 
size when fetch shuffle " +
    +        "blocks.")
    +      .longConf
    +      .createWithDefault(100 * 1024 * 1024)
    +
    +  private[spark] val MEMORY_OFF_HEAP_ENABLED =
    +    ConfigBuilder("spark.memory.offHeap.enabled")
    +      .doc("If true, Spark will attempt to use off-heap memory for certain 
operations(e.g. sort, " +
    +        "aggregate, etc. However, the buffer used for fetching shuffle 
blocks is always " +
    +        "off-heap). If off-heap memory use is enabled, then 
spark.memory.offHeap.size must be " +
    +        "positive.")
    +      .booleanConf
    +      .createWithDefault(false)
    +
    +  private[spark] val MEMORY_OFF_HEAP_SIZE =
    +    ConfigBuilder("spark.memory.offHeap.size")
    +      .doc("The absolute amount of memory in bytes which can be used for 
off-heap allocation." +
    +        " This setting has no impact on heap memory usage, so if your 
executors' total memory" +
    +        " consumption must fit within some hard limit then be sure to 
shrink your JVM heap size" +
    +        " accordingly. This must be set to a positive value when " +
    +        "spark.memory.offHeap.enabled=true. Note that Blocks will be 
shuffled to off-heap.")
    --- End diff --
    
    `Note that Blocks will be shuffled to off-heap.`  this is not needed as we 
already mentioned it in `MEMORY_OFF_HEAP_ENABLED`
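
    For reference, a sketch of how `MEMORY_OFF_HEAP_SIZE` would read with that
    sentence dropped (assuming the rest of the definition stays as in the PR;
    the trailing builder calls are elided here):

        private[spark] val MEMORY_OFF_HEAP_SIZE =
          ConfigBuilder("spark.memory.offHeap.size")
            .doc("The absolute amount of memory in bytes which can be used for off-heap " +
              "allocation. This setting has no impact on heap memory usage, so if your " +
              "executors' total memory consumption must fit within some hard limit then " +
              "be sure to shrink your JVM heap size accordingly. This must be set to a " +
              "positive value when spark.memory.offHeap.enabled=true.")
            // remaining builder calls unchanged from the PR (elided)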

