Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21868#discussion_r210779310
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -459,6 +458,29 @@ object SQLConf {
         .intConf
         .createWithDefault(4096)
     
    +  val IS_PARQUET_PARTITION_ADAPTIVE_ENABLED = buildConf("spark.sql.parquet.adaptiveFileSplit")
    +    .doc("For a columnar file format (e.g., Parquet), it's possible that only a few (not all) " +
    +      "columns are needed. So, it's better to make sure that the total size of the selected " +
    +      "columns is about 128 MB "
    +    )
    +    .booleanConf
    +    .createWithDefault(false)
    +
    +  val PARQUET_STRUCT_LENGTH = buildConf("spark.sql.parquet.struct.length")
    +    .doc("Set the default size of struct column")
    +    .intConf
    +    .createWithDefault(StringType.defaultSize)
    +
    +  val PARQUET_MAP_LENGTH = buildConf("spark.sql.parquet.map.length")
    +    .doc("Set the default size of map column")
    +    .intConf
    +    .createWithDefault(StringType.defaultSize)
    +
    +  val PARQUET_ARRAY_LENGTH = buildConf("spark.sql.parquet.array.length")
    +    .doc("Set the default size of array column")
    +    .intConf
    +    .createWithDefault(StringType.defaultSize)
    --- End diff --
    
    This feature introduces quite a few configs; my concern is that it will be hard for end users to set them all correctly.
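    
    As a rough illustration of the concern, here is a minimal, hypothetical sketch (the config keys are the ones added in this diff; the values are arbitrary placeholders, not recommendations) of what an end user would have to set just to tune this one feature:
    
        import org.apache.spark.sql.SparkSession
    
        object AdaptiveSplitTuningSketch {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("adaptive-split-tuning-sketch")
              .master("local[*]")
              .getOrCreate()
    
            // Turn the feature on ...
            spark.conf.set("spark.sql.parquet.adaptiveFileSplit", "true")
            // ... and then hand-tune a per-type default size for each nested type.
            spark.conf.set("spark.sql.parquet.struct.length", "24")
            spark.conf.set("spark.sql.parquet.map.length", "24")
            spark.conf.set("spark.sql.parquet.array.length", "24")
    
            spark.stop()
          }
        }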

