Repository: spark
Updated Branches:
  refs/heads/master bdf32847b -> 55276d3a2


[SPARK-25132][SQL][FOLLOWUP][DOC] Add migration doc for case-insensitive field 
resolution when reading from Parquet

## What changes were proposed in this pull request?
#22148 introduced a behavior change. Following the discussion at #22184, this PR 
updates the migration guide for upgrading from Spark 2.3 to 2.4.

## How was this patch tested?
N/A

Closes #23238 from seancxmao/SPARK-25132-doc-2.4.

Authored-by: seancxmao <seancx...@gmail.com>
Signed-off-by: Dongjoon Hyun <dongj...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/55276d3a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/55276d3a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/55276d3a

Branch: refs/heads/master
Commit: 55276d3a26474e7479941db3e9c065d86344885f
Parents: bdf3284
Author: seancxmao <seancx...@gmail.com>
Authored: Sat Dec 8 17:53:12 2018 -0800
Committer: Dongjoon Hyun <dongj...@apache.org>
Committed: Sat Dec 8 17:53:12 2018 -0800

----------------------------------------------------------------------
 docs/sql-migration-guide-upgrade.md | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/55276d3a/docs/sql-migration-guide-upgrade.md
----------------------------------------------------------------------
diff --git a/docs/sql-migration-guide-upgrade.md 
b/docs/sql-migration-guide-upgrade.md
index 67c30fb..f6458a9 100644
--- a/docs/sql-migration-guide-upgrade.md
+++ b/docs/sql-migration-guide-upgrade.md
@@ -145,6 +145,8 @@ displayTitle: Spark SQL Upgrading Guide
 
  - In Spark version 2.3 and earlier, HAVING without GROUP BY is treated as 
WHERE. This means `SELECT 1 FROM range(10) HAVING true` is executed as `SELECT 
1 FROM range(10) WHERE true` and returns 10 rows. This violates the SQL 
standard, and has been fixed in Spark 2.4. Since Spark 2.4, HAVING without 
GROUP BY is treated as a global aggregate, which means `SELECT 1 FROM 
range(10) HAVING true` will return only one row. To restore the previous 
behavior, set `spark.sql.legacy.parser.havingWithoutGroupByAsWhere` to `true`.
 
+  - In Spark version 2.3 and earlier, when reading from a Parquet data source 
table, Spark always returns null for any column whose name in the Hive 
metastore schema and in the Parquet schema differs in letter case, no matter 
whether `spark.sql.caseSensitive` is set to `true` or `false`. Since Spark 
2.4, when `spark.sql.caseSensitive` is set to `false`, Spark performs 
case-insensitive column name resolution between the Hive metastore schema and 
the Parquet schema, so even if column names are in different letter cases, 
Spark returns the corresponding column values. An exception is thrown if there 
is ambiguity, i.e., more than one Parquet column matches. This change also 
applies to Parquet Hive tables when `spark.sql.hive.convertMetastoreParquet` 
is set to `true`.
+
 ## Upgrading From Spark SQL 2.3.0 to 2.3.1 and above
 
   - As of version 2.3.1 Arrow functionality, including `pandas_udf` and 
`toPandas()`/`createDataFrame()` with `spark.sql.execution.arrow.enabled` set 
to `True`, has been marked as experimental. These are still evolving and not 
currently recommended for use in production.
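
The HAVING-without-GROUP-BY change mentioned in the diff context above can be 
illustrated with a plain-Python sketch (not Spark code; the list `rows` stands 
in for the output of `range(10)`):

```python
# Plain-Python sketch of the semantic difference; `rows` stands in for
# the result of `SELECT ... FROM range(10)`.
rows = [{"id": i} for i in range(10)]

# Spark <= 2.3: HAVING without GROUP BY was treated as WHERE, so the
# predicate filtered individual rows and `SELECT 1` emitted one row per
# surviving input row.
legacy_result = [1 for _ in rows if True]   # 10 rows

# Spark 2.4: HAVING without GROUP BY is a global aggregate, so the whole
# input collapses to a single group and the predicate gates that one row.
new_result = [1] if True else []            # 1 row

print(len(legacy_result), len(new_result))  # 10 1
```

Setting `spark.sql.legacy.parser.havingWithoutGroupByAsWhere` to `true` 
restores the first (WHERE-like) interpretation.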
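
The resolution rule this patch documents can likewise be sketched in plain 
Python (a hypothetical helper, not Spark's actual implementation; 
`resolve_parquet_field` and its arguments are illustrative names):

```python
# Hypothetical sketch of case-insensitive field resolution between a Hive
# metastore column name and the Parquet file schema, raising on ambiguity,
# as described in the migration note above.
def resolve_parquet_field(metastore_name, parquet_fields, case_sensitive=False):
    if case_sensitive:
        # Pre-2.4-style exact matching: a case mismatch yields no match
        # (Spark then returns null for the column).
        return metastore_name if metastore_name in parquet_fields else None
    # Case-insensitive matching, as in Spark 2.4 with caseSensitive=false.
    matches = [f for f in parquet_fields if f.lower() == metastore_name.lower()]
    if len(matches) > 1:
        # More than one Parquet column matches: ambiguous, so fail loudly.
        raise RuntimeError(
            f"Ambiguous resolution for '{metastore_name}': {matches}")
    return matches[0] if matches else None

# Metastore stores a lower-case name; the Parquet file uses mixed case.
print(resolve_parquet_field("userid", ["userId", "name"]))        # userId
print(resolve_parquet_field("userid", ["userId", "name"], True))  # None
```

Calling `resolve_parquet_field("id", ["id", "ID"])` raises, mirroring the 
exception Spark throws when more than one Parquet column matches.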


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
