Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21868#discussion_r205278202
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -381,6 +381,26 @@ object SQLConf {
           .booleanConf
           .createWithDefault(true)
     
    +  val IS_PARQUET_PARTITION_ADAPTIVE_ENABLED = buildConf("spark.sql.parquet.adaptiveFileSplit")
    +    .doc("For columnar file format (e.g., Parquet), it's possible that only few (not all) " +
    +      "columns are needed. So, it's better to make sure that the total size of the selected " +
    +      "columns is about 128 MB "
    +    )
    +    .booleanConf
    +    .createWithDefault(false)
    +
    +  val PARQUET_STRUCT_LENGTH = buildConf("spark.sql.parquet.struct.length")
    +    .intConf
    +    .createWithDefault(StructType.defaultConcreteType.defaultSize)
    +
    +  val PARQUET_MAP_LENGTH = buildConf("spark.sql.parquet.map.length")
    +    .intConf
    +    .createWithDefault(MapType.defaultConcreteType.defaultSize)
    +
    +  val PARQUET_ARRAY_LENGTH = buildConf("spark.sql.parquet.array.length")
    +    .intConf
    +    .createWithDefault(ArrayType.defaultConcreteType.defaultSize)
    --- End diff ---
    
    `ArrayType.defaultConcreteType` is `ArrayType(NullType, containsNull = true)`. I don't think you'll get a reasonable default size from it.
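
    For reference, this is roughly what those `defaultConcreteType` fallbacks evaluate to (a quick spark-shell check; the exact byte counts depend on the Spark version's `defaultSize` estimates):

    ```scala
    import org.apache.spark.sql.types._

    // ArrayType.defaultConcreteType is ArrayType(NullType, containsNull = true),
    // MapType.defaultConcreteType is MapType(NullType, NullType), and
    // StructType.defaultConcreteType is an empty StructType, so the resulting
    // defaults are tiny rather than realistic per-value sizes:
    ArrayType(NullType, containsNull = true).defaultSize      // 1 -- just a NullType element
    MapType(NullType, NullType).defaultSize                   // 2 -- NullType key + NullType value
    new StructType().defaultSize                              // 0 -- no fields at all

    // Compare with a concrete element type:
    ArrayType(IntegerType, containsNull = false).defaultSize  // 4
    ```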


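    For context on the doc string above, a rough, hypothetical sketch of the intent behind `spark.sql.parquet.adaptiveFileSplit` (this is not code from the PR; `selectedFraction` and `adaptiveMaxSplitBytes` are invented names for illustration): per-column size estimates give the fraction of each row that the selected columns cover, and the file-split size is scaled up so each task still reads about 128 MB of selected data.

    ```scala
    import org.apache.spark.sql.types.StructType

    // Hypothetical: estimate what fraction of a row's bytes the selected columns
    // cover, using the same per-type defaultSize estimates the configs above feed into.
    def selectedFraction(fullSchema: StructType, selectedSchema: StructType): Double = {
      val fullSize = math.max(fullSchema.map(_.dataType.defaultSize).sum, 1)
      val selectedSize = selectedSchema.map(_.dataType.defaultSize).sum
      selectedSize.toDouble / fullSize
    }

    // If only a fraction of each row is needed, the on-disk split can be grown so
    // that a task still produces roughly the target amount of selected column data.
    def adaptiveMaxSplitBytes(targetBytes: Long, fraction: Double): Long =
      (targetBytes / math.max(fraction, 0.01)).toLong

    // e.g. selecting ~1/8 of the row width -> ~1 GB splits on disk, which still
    // decode to ~128 MB of needed columns.
    adaptiveMaxSplitBytes(128L * 1024 * 1024, 0.125)  // 1073741824
    ```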