GitHub user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18487#discussion_r128257328
  
    --- Diff: core/src/main/scala/org/apache/spark/internal/config/package.scala ---
    @@ -321,6 +321,17 @@ package object config {
           .intConf
           .createWithDefault(3)
     
    +  private[spark] val REDUCER_MAX_BLOCKS_IN_FLIGHT_PER_ADDRESS =
    +    ConfigBuilder("spark.reducer.maxBlocksInFlightPerAddress")
    +      .doc("This configuration limits the number of remote blocks being 
fetched per reduce task" +
    +        " from a given host port. When a large number of blocks are being 
requested from a given" +
    +        " address in a single fetch or simultaneously, this could crash 
the serving executor or" +
    +        " Node Manager. This is especially useful to reduce the load on 
the Node Manager when" +
    +        " external shuffle is enabled. You can mitigate the issue by 
setting it to a lower value.")
    +      .intConf
    +      .checkValue(_ > 0, "The max no. of blocks in flight cannot be 
non-positive.")
    +      .createWithDefault(Int.MaxValue)
    --- End diff --
    
I'm fine leaving it at Int.MaxValue for now so that we don't change current behavior, just as we have done with some of the other related configs. I would like to get more runtime on this in production, and then we can tune the default later, perhaps in 2.3. It would also be nice to pull this back into branch-2.2 as well as master.
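
In the meantime, for anyone who wants to experiment with this before we settle on a tuned default, here is a minimal sketch of setting the new config explicitly on the driver side (the app name and the value 100 are purely illustrative choices, not recommendations):

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Cap the number of remote blocks fetched concurrently from any single
    // host:port, instead of the effectively unlimited Int.MaxValue default,
    // to reduce pressure on the serving executor / Node Manager.
    val conf = new SparkConf()
      .setAppName("shuffle-fetch-throttling-demo") // hypothetical app name
      .set("spark.reducer.maxBlocksInFlightPerAddress", "100") // illustrative value

    val spark = SparkSession.builder().config(conf).getOrCreate()

The same can be done at submit time with --conf spark.reducer.maxBlocksInFlightPerAddress=100.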

