GitHub user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20361#discussion_r164650445
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -377,6 +377,12 @@ object SQLConf {
           .booleanConf
           .createWithDefault(true)
     
    +  val PARQUET_VECTORIZED_READER_BATCH_SIZE = buildConf("spark.sql.parquet.batchSize")
    --- End diff ---
    
    Still a question: is it possible to use an estimated memory size instead of the number of rows?
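    
    To sketch the idea (purely illustrative, not Spark code; every name below is hypothetical): a memory-based config could still be mapped to a row count internally by dividing a per-batch memory budget by an estimated decoded row size, e.g.:
    
        // Hypothetical sketch: derive a row-count batch size from a
        // memory budget instead of configuring the row count directly.
        object BatchSizeEstimator {
          // memoryBudgetBytes: target memory per batch (e.g. 64 MB)
          // estimatedRowBytes: estimated size of one decoded row
          // minRows/maxRows:   clamp to keep batch sizes reasonable
          def rowsPerBatch(
              memoryBudgetBytes: Long,
              estimatedRowBytes: Long,
              minRows: Int = 128,
              maxRows: Int = 65536): Int = {
            require(estimatedRowBytes > 0, "row size estimate must be positive")
            val raw = memoryBudgetBytes / estimatedRowBytes
            math.max(minRows, math.min(maxRows.toLong, raw)).toInt
          }
        }
    
        // Example: a 64 MB budget with ~2 KB rows gives 32768 rows per batch.
        // BatchSizeEstimator.rowsPerBatch(64L << 20, 2048)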

