This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.5
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.5 by this push:
     new e57a7d068839 [SPARK-47462][SQL][FOLLOWUP][3.5] Add migration guide for TINYINT mapping changes
e57a7d068839 is described below

commit e57a7d068839d549afe08b4a79e82d027b56a5f5
Author: Kent Yao <y...@apache.org>
AuthorDate: Thu Mar 21 23:06:03 2024 -0700

    [SPARK-47462][SQL][FOLLOWUP][3.5] Add migration guide for TINYINT mapping changes
    
    ### What changes were proposed in this pull request?
    
    Add a migration guide entry for the TINYINT type mapping changes.

    ### Why are the changes needed?

    To document a behavior change.
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?

    Doc build.
    
    ### Was this patch authored or co-authored using generative AI tooling?

    No.
    
    Closes #45658 from yaooqinn/SPARK-47462-FB.
    
    Authored-by: Kent Yao <y...@apache.org>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 docs/sql-migration-guide.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index f788d89c4999..3bb83750ef92 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -22,6 +22,14 @@ license: |
 * Table of contents
 {:toc}
 
+## Upgrading from Spark SQL 3.5.1 to 3.5.2
+
+- Since 3.5.2, MySQL JDBC datasource will read TINYINT UNSIGNED as ShortType, while in 3.5.1, it was wrongly read as ByteType.
+
+## Upgrading from Spark SQL 3.5.0 to 3.5.1
+
+- Since Spark 3.5.1, MySQL JDBC datasource will read TINYINT(n > 1) and TINYINT UNSIGNED as ByteType, while in Spark 3.5.0 and below, they were read as IntegerType. To restore the previous behavior, you can cast the column to the old type.
+
 ## Upgrading from Spark SQL 3.4 to 3.5
 
 - Since Spark 3.5, the JDBC options related to DS V2 pushdown are `true` by default. These options include: `pushDownAggregate`, `pushDownLimit`, `pushDownOffset` and `pushDownTableSample`. To restore the legacy behavior, please set them to `false`. e.g. set `spark.sql.catalog.your_catalog_name.pushDownAggregate` to `false`.
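
For list readers, a minimal sketch of the TINYINT mapping change described in the new guide entries, assuming a reachable MySQL instance; the JDBC URL, credentials, table name `t`, and column name `c` are placeholders for illustration, not part of the commit:

```scala
// Sketch only: shows how a TINYINT UNSIGNED column surfaces across versions.
// The URL, credentials, table name `t`, and column name `c` are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("tinyint-mapping").getOrCreate()

val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/testdb")
  .option("dbtable", "t")
  .option("user", "user")
  .option("password", "password")
  .load()

df.printSchema()
// 3.5.0 and below: c is int   (IntegerType)
// 3.5.1:           c is byte  (ByteType, wrong for values > 127)
// 3.5.2:           c is short (ShortType)

// Per the guide, cast the column back to the old type to restore the
// pre-3.5.1 behavior downstream:
val restored = df.withColumn("c", col("c").cast("int"))
```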


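And a hedged sketch of restoring the legacy (pre-3.5) pushdown behavior for a DS V2 JDBC catalog, keeping the guide's `your_catalog_name` placeholder; the JDBC URL and driver class are assumptions for illustration:

```scala
// Sketch only: disables the DS V2 pushdown options that default to true
// since Spark 3.5. Catalog name, URL, and driver below are placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("legacy-pushdown")
  .config("spark.sql.catalog.your_catalog_name",
    "org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog")
  .config("spark.sql.catalog.your_catalog_name.url",
    "jdbc:mysql://localhost:3306/testdb")
  .config("spark.sql.catalog.your_catalog_name.driver",
    "com.mysql.cj.jdbc.Driver")
  // Set the four pushdown options to false to restore the legacy behavior.
  .config("spark.sql.catalog.your_catalog_name.pushDownAggregate", "false")
  .config("spark.sql.catalog.your_catalog_name.pushDownLimit", "false")
  .config("spark.sql.catalog.your_catalog_name.pushDownOffset", "false")
  .config("spark.sql.catalog.your_catalog_name.pushDownTableSample", "false")
  .getOrCreate()
```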