GitHub user seancxmao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22184#discussion_r213020789
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1895,6 +1895,10 @@ working with timestamps in `pandas_udf`s to get the best performance, see
       - Since Spark 2.4, File listing for compute statistics is done in parallel by default. This can be disabled by setting `spark.sql.parallelFileListingInStatsComputation.enabled` to `False`.
       - Since Spark 2.4, Metadata files (e.g. Parquet summary files) and temporary files are not counted as data files when calculating table size during Statistics computation.
     
    +## Upgrading From Spark SQL 2.3.1 to 2.3.2 and above
    +
    +  - In version 2.3.1 and earlier, when reading from a Parquet table, Spark always returns null for any column whose name differs in letter case between the Hive metastore schema and the Parquet schema, regardless of whether `spark.sql.caseSensitive` is set to true or false. Since 2.3.2, when `spark.sql.caseSensitive` is set to false, Spark does case-insensitive column name resolution between the Hive metastore schema and the Parquet schema, so even if column names are in different letter cases, Spark returns the corresponding column values. An exception is thrown if there is ambiguity, i.e. more than one Parquet column is matched.
    --- End diff --
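    
    To make the documented change concrete, here is a minimal sketch of the scenario the note describes (the path, table name and column names are made up for illustration, and it assumes a Hive-enabled session):
    
    ```
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .getOrCreate()
    
    // The physical Parquet files carry a mixed-case column name ("ID").
    spark.range(5).selectExpr("id AS ID")
      .write.mode("overwrite").parquet("/tmp/mixed_case_parquet")
    
    // The Hive metastore schema declares the column in lower case ("id").
    spark.sql("""
      CREATE EXTERNAL TABLE mixed_case_test (id BIGINT)
      STORED AS PARQUET
      LOCATION '/tmp/mixed_case_parquet'
    """)
    
    // With spark.sql.caseSensitive=false:
    //   2.3.1 and earlier -> `id` comes back as NULL
    //   2.3.2 and later   -> `id` is resolved against `ID` and the values are returned
    spark.conf.set("spark.sql.caseSensitive", "false")
    spark.sql("SELECT id FROM mixed_case_test").show()
    ```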
    
    As a follow-up to cloud-fan's point, I did a deep dive into the read path of Parquet Hive SerDe tables. The following is a rough invocation chain:
    
    ```
    org.apache.spark.sql.hive.execution.HiveTableScanExec
    org.apache.spark.sql.hive.HadoopTableReader (extends org.apache.spark.sql.hive.TableReader)
    org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat (extends org.apache.hadoop.mapred.FileInputFormat)
    org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper (extends org.apache.hadoop.mapred.RecordReader)
    parquet.hadoop.ParquetRecordReader
    parquet.hadoop.InternalParquetRecordReader
    org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport (extends parquet.hadoop.api.ReadSupport)
    ```
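    
    If I read the code correctly, this chain is only exercised when the Parquet Hive table is read through the Hive SerDe path (`HiveTableScanExec`) rather than Spark's native Parquet reader, e.g. with metastore conversion turned off (reusing the hypothetical table from the sketch above):
    
    ```
    // Disable conversion of Hive Parquet tables to Spark's native data source,
    // so the scan goes through HiveTableScanExec and the chain above.
    spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
    spark.sql("SELECT id FROM mixed_case_test").show()
    ```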
    
    Finally, `DataWritableReadSupport#getFieldTypeIgnoreCase` is invoked. 
    
    
https://github.com/JoshRosen/hive/blob/release-1.2.1-spark2/ql/src/java/org/apache/hadoop/hive/ql/io/parquet/read/DataWritableReadSupport.java#L79-L95
    
    This is why Parquet Hive SerDe tables always do case-insensitive field resolution. However, this class lives inside `org.spark-project.hive:hive-exec:1.2.1.spark2`.
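    
    My rough paraphrase of what the linked lines do (a simplified Scala sketch, not the actual Hive source, which is Java): for each column name in the Hive metastore schema, take the first Parquet field whose name matches ignoring case.
    
    ```
    // Simplified paraphrase of DataWritableReadSupport#getFieldTypeIgnoreCase:
    // return the first Parquet field whose name equals the requested Hive
    // column name when case is ignored.
    def resolveIgnoreCase(parquetFieldNames: Seq[String], hiveColumnName: String): Option[String] =
      parquetFieldNames.find(_.equalsIgnoreCase(hiveColumnName))
    
    resolveIgnoreCase(Seq("ID", "name"), "id")  // Some("ID"): metastore `id` maps to physical `ID`
    ```
    
    As far as I can tell, the Hive helper simply takes the first case-insensitive match, so it does not throw on ambiguous (duplicate) names the way the new behavior described in the diff does.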
    
    I also found the related Hive JIRA ticket:
    [HIVE-7554: Parquet Hive should resolve column names in case insensitive manner](https://issues.apache.org/jira/browse/HIVE-7554)
    
    BTW:
    * `org.apache.hadoop.hive.ql` = `org.spark-project.hive:hive-exec:1.2.1.spark2`
    * `parquet.hadoop` = `com.twitter:parquet-hadoop-bundle:1.6.0`

