Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16677#discussion_r197613004
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -204,6 +204,13 @@ object SQLConf {
         .intConf
         .createWithDefault(4)
     
    +  val LIMIT_FLAT_GLOBAL_LIMIT = buildConf("spark.sql.limit.flatGlobalLimit")
    +    .internal()
    +    .doc("During a global limit, try to evenly distribute the limited rows across data " +
    +      "partitions. If disabled, scan data partitions sequentially until the limit is reached.")
    +    .booleanConf
    +    .createWithDefault(true)
    --- End diff --
    
    I set this to true. One reason is to see whether the existing tests still pass. If we aren't confident, or are worried about the behavior change, we can set it to false before merging.
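    
    To make the difference concrete, here is a hedged, Spark-free sketch in Python of the two global-limit strategies this flag selects between (the function names and the quota logic are illustrative simplifications, not the actual Spark implementation):
    
    ```python
    # Illustrative sketch only: models partitions as plain lists of rows.
    
    def sequential_limit(partitions, n):
        """Flag disabled: scan partitions in order until n rows are collected."""
        out = []
        for part in partitions:
            for row in part:
                if len(out) == n:
                    return out
                out.append(row)
        return out
    
    def flat_global_limit(partitions, n):
        """Flag enabled: take an even per-partition quota of the n rows.
        (Simplified: does not top up when a partition is short of its quota.)"""
        quota, rem = divmod(n, len(partitions))
        out = []
        for i, part in enumerate(partitions):
            # spread the remainder over the first `rem` partitions
            out.extend(part[:quota + (1 if i < rem else 0)])
        return out
    
    parts = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
    print(sequential_limit(parts, 6))   # rows come from the first partitions only
    print(flat_global_limit(parts, 6))  # rows are drawn evenly from all partitions
    ```
    
    The sequential strategy can leave most tasks idle while the first partitions are drained; the flat strategy spreads the work, at the cost of changing which rows survive the limit.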

