[ https://issues.apache.org/jira/browse/SPARK-26188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Damien Doucet-Girard updated SPARK-26188:
-----------------------------------------
    Description: 
My team uses Spark to partition and write Parquet files to Amazon S3. We
typically use 256 partitions, named 00 through ff.

We've observed that Spark 2.3.2 and prior read the partition values as
strings by default. However, in Spark 2.4.0 and later, the type of each
partition value is inferred by default, so a partition such as 00 becomes 0
and 4d becomes 4.0.
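To illustrate (a hypothetical layout; the bucket and column names here are made up for the example, not taken from our actual job):
{code:java}
// Directories written by the partitioning job, using hex partition names:
//   s3://bucket/table/part=00/...
//   s3://bucket/table/part=4d/...
val df = spark.read.parquet("s3://bucket/table")
df.printSchema()
// Spark 2.3.2: the "part" column is a StringType, values "00", "4d"
// Spark 2.4.0: type inference kicks in, so values like "00" can be read
// as the number 0 and "4d" as the double 4.0{code}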

After some investigation, we've isolated the issue to
[https://github.com/apache/spark/blob/02b510728c31b70e6035ad541bfcdc2b59dcd79a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala#L132-L136]

In the inferPartitioning method, 2.3.2 hardcodes type inference to false
(lines 132-136):
{code:java}
val spec = PartitioningUtils.parsePartitions(
  leafDirs,
  typeInference = false,
  basePaths = basePaths,
  timeZoneId = timeZoneId){code}
However, in version 2.4.0, the typeInference flag has been replaced with a
config flag:

[https://github.com/apache/spark/blob/075447b3965489ffba4e6afb2b120880bc307505/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala#L129-L133]
{code:java}
val inferredPartitionSpec = PartitioningUtils.parsePartitions(
  leafDirs,
  typeInference = sparkSession.sessionState.conf.partitionColumnTypeInferenceEnabled,
  basePaths = basePaths,
  timeZoneId = timeZoneId){code}
And this conf's default value is true:
{code:java}
val PARTITION_COLUMN_TYPE_INFERENCE =
  buildConf("spark.sql.sources.partitionColumnTypeInference.enabled")
    .doc("When true, automatically infer the data types for partitioned columns.")
    .booleanConf
    .createWithDefault(true){code}
[https://github.com/apache/spark/blob/075447b3965489ffba4e6afb2b120880bc307505/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L636-L640]
  

I was wondering whether it would be appropriate to change the default value
of this conf to false in order to preserve backwards compatibility.
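In the meantime, the 2.3.2 behavior can be restored explicitly on 2.4.0 by disabling the conf before reading (a sketch using the conf key quoted above; set it at session build time or at runtime):
{code:java}
// Option 1: set it when building the session
val spark = SparkSession.builder()
  .config("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
  .getOrCreate()

// Option 2: set it on an existing session, before the read
spark.conf.set("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
// Partition columns are now read as strings, as in 2.3.2{code}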

 
  

> Spark 2.4.0 behavior breaks backwards compatibility
> ---------------------------------------------------
>
>                 Key: SPARK-26188
>                 URL: https://issues.apache.org/jira/browse/SPARK-26188
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Damien Doucet-Girard
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
