GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19763#discussion_r151962620
  
    --- Diff: core/src/main/scala/org/apache/spark/internal/config/package.scala ---
    @@ -485,4 +485,13 @@ package object config {
             "array in the sorter.")
           .intConf
           .createWithDefault(Integer.MAX_VALUE)
    +
    +  private[spark] val SHUFFLE_MAP_OUTPUT_STATISTICS_PARALLEL_AGGREGATION_THRESHOLD =
    +    ConfigBuilder("spark.shuffle.mapOutputStatistics.parallelAggregationThreshold")
    +      .internal()
    +      .doc("Multi-thread is used when the number of mappers * shuffle 
partitions exceeds this " +
    +        "threshold")
    +      .intConf
    +      .createWithDefault(100000000)
    --- End diff ---
    
    Wow, 100 million is a really large threshold. How did you pick this number?
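
    For context, here is a minimal sketch (not the PR's actual code) of the kind
    of aggregation such a threshold would gate: below the threshold a plain serial
    loop sums per-partition sizes, and above it the partition range is split across
    a fixed thread pool. The names (`mapSizes`, `aggregate`) and the chunking
    scheme are illustrative assumptions, not Spark internals.

        import java.util.concurrent.Executors
        import scala.concurrent.{Await, ExecutionContext, Future}
        import scala.concurrent.duration.Duration

        object MapStatsAggregationSketch {
          // Mirrors the config default: only parallelize past this size.
          val parallelAggregationThreshold: Long = 100000000L

          // mapSizes(i)(p) = bytes written by mapper i for shuffle partition p.
          def aggregate(mapSizes: Array[Array[Long]], numPartitions: Int): Array[Long] = {
            val totalSizes = new Array[Long](numPartitions)
            if (mapSizes.length.toLong * numPartitions < parallelAggregationThreshold) {
              // Small inputs: a serial double loop is cheapest.
              for (sizes <- mapSizes; p <- 0 until numPartitions) {
                totalSizes(p) += sizes(p)
              }
            } else {
              // Large inputs: split the partition range across a fixed pool.
              // Each task owns a disjoint slice of totalSizes, so no locking is needed.
              val threads = Runtime.getRuntime.availableProcessors()
              val pool = Executors.newFixedThreadPool(threads)
              implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
              val chunk = math.max(1, (numPartitions + threads - 1) / threads)
              val futures = (0 until numPartitions by chunk).map { start =>
                Future {
                  var p = start
                  while (p < math.min(start + chunk, numPartitions)) {
                    var i = 0
                    while (i < mapSizes.length) {
                      totalSizes(p) += mapSizes(i)(p)
                      i += 1
                    }
                    p += 1
                  }
                }
              }
              Await.result(Future.sequence(futures), Duration.Inf)
              pool.shutdown()
            }
            totalSizes
          }
        }

    At the default of 100000000, it takes roughly 10000 mappers times 10000
    shuffle partitions before the parallel path kicks in, which may explain why
    the number looks surprisingly large at first glance.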

