[ https://issues.apache.org/jira/browse/SPARK-26188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Damien Doucet-Girard updated SPARK-26188:
-----------------------------------------
Description:
My team uses Spark to partition and output Parquet files to Amazon S3. We typically use 256 partitions, from 00 to ff. We've observed that Spark 2.3.2 and earlier read the partition values as strings by default. In Spark 2.4.0 and later, however, the type of each partition value is inferred by default, so partition values such as 00 become 0 and 4d becomes 4.0.

After some investigation, we've isolated the issue to [https://github.com/apache/spark/blob/02b510728c31b70e6035ad541bfcdc2b59dcd79a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala#L132-L136]

In the inferPartitioning method, 2.3.2 hard-codes type inference to false (lines 132-136):
```
val spec = PartitioningUtils.parsePartitions(
  leafDirs,
  typeInference = false,
  basePaths = basePaths,
  timeZoneId = timeZoneId)
```
In 2.4.0, however, the typeInference flag has been replaced with a config flag [https://github.com/apache/spark/blob/075447b3965489ffba4e6afb2b120880bc307505/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala#L129-L133]:
```
val inferredPartitionSpec = PartitioningUtils.parsePartitions(
  leafDirs,
  typeInference = sparkSession.sessionState.conf.partitionColumnTypeInferenceEnabled,
  basePaths = basePaths,
  timeZoneId = timeZoneId)
```
And this conf's default value is true:
```
val PARTITION_COLUMN_TYPE_INFERENCE =
  buildConf("spark.sql.sources.partitionColumnTypeInference.enabled")
    .doc("When true, automatically infer the data types for partitioned columns.")
    .booleanConf
    .createWithDefault(true)
```
[https://github.com/apache/spark/blob/075447b3965489ffba4e6afb2b120880bc307505/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L636-L640]

I was wondering whether a bug report would be appropriate here, to preserve backwards compatibility by changing the default conf value to false.

> Spark 2.4.0 behavior breaks backwards compatibility
> ---------------------------------------------------
>
>                 Key: SPARK-26188
>                 URL: https://issues.apache.org/jira/browse/SPARK-26188
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Damien Doucet-Girard
>            Priority: Minor
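For anyone affected, the 2.3.x behavior can presumably be restored per-application by setting the conf named in the SQLConf snippet to false. This is a sketch, not an official workaround; the app name and S3 path below are hypothetical placeholders:

```scala
// Restore pre-2.4.0 behavior: read partition column values as strings.
// The conf key comes from SQLConf; everything else here is a generic example.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("partition-string-read")  // hypothetical app name
  .config("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
  .getOrCreate()

// With inference disabled, partition directories such as part=00 ... part=ff
// should come back as StringType columns, as in 2.3.2.
val df = spark.read.parquet("s3://example-bucket/example-table/")  // hypothetical path
```

The same key can also be passed on the command line, e.g. via `--conf` to spark-submit, since it is an ordinary SQL conf.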
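The specific conversions reported (00 becoming 0, 4d becoming 4.0) are consistent with an int-then-double fallback, since java.lang.Double.parseDouble accepts a trailing 'd' suffix. The following is an illustrative standalone sketch of that fallback, not Spark's actual parsePartitions code:

```scala
import scala.util.Try

// Illustrative sketch (NOT Spark's real inference code): try Int, then
// Double, else keep the raw string. "4d" parses as a Double because
// Double.parseDouble accepts a trailing 'd'/'D' type suffix.
def inferValue(raw: String): Any =
  Try(raw.toInt).orElse(Try(raw.toDouble)).getOrElse(raw)

// inferValue("00") yields the Int 0
// inferValue("4d") yields the Double 4.0
// inferValue("ff") fails both parses and stays the String "ff"
```

This also shows why only some hex-looking partition names are mangled: names like ff fail both numeric parses and survive as strings, while all-digit names and names ending in 'd' do not.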
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org