[ https://issues.apache.org/jira/browse/SPARK-36227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17405231#comment-17405231 ]

Apache Spark commented on SPARK-36227:
--------------------------------------

User 'gengliangwang' has created a pull request for this issue:
https://github.com/apache/spark/pull/33851

> Remove TimestampNTZ type support in Spark 3.2
> ---------------------------------------------
>
>                 Key: SPARK-36227
>                 URL: https://issues.apache.org/jira/browse/SPARK-36227
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 3.3.0
>            Reporter: Gengliang Wang
>            Assignee: Gengliang Wang
>            Priority: Major
>             Fix For: 3.3.0
>
>
> As of now, there are some blockers for delivering the TimestampNTZ project in 
> Spark 3.2:
> # In the Hive Thrift server, both TimestampType and TimestampNTZType are 
> mapped to the same timestamp type, which can cause confusion for users. 
> # For the Parquet data source, newly written TimestampNTZType columns will 
> be read back as TimestampType by older Spark releases. We also need to 
> decide the merged schema for files that mix TimestampType and 
> TimestampNTZType.
> # The type coercion rules for TimestampNTZType are incomplete. For example, 
> what should the data type of the IN list IN(Timestamp'2020-01-01 
> 00:00:00', TimestampNtz'2020-01-01 00:00:00') be? (See the sketch after 
> this list.)
> # It is tricky to support TimestampNTZType in the JSON/CSV data readers. We 
> need to avoid regressions as much as possible.
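> As an illustration of the coercion question in item 3, here is a minimal 
> sketch, assuming a Spark build where the TIMESTAMP_NTZ literal syntax is 
> available; the session setup and query are illustrative only, not part of 
> this issue:
> {code:scala}
> import org.apache.spark.sql.SparkSession
> 
> // Illustrative local session; any existing SparkSession works as well.
> val spark = SparkSession.builder()
>   .master("local[*]")
>   .appName("ntz-coercion-sketch")
>   .getOrCreate()
> 
> // The analyzer must pick a common type for the mixed IN list below:
> // widen the TIMESTAMP_NTZ literal to TIMESTAMP, or the other way around?
> spark.sql(
>   """SELECT TIMESTAMP'2020-01-01 00:00:00' IN (
>     |  TIMESTAMP'2020-01-01 00:00:00',
>     |  TIMESTAMP_NTZ'2020-01-01 00:00:00'
>     |) AS mixed_in""".stripMargin
> ).show()
> {code}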
> There are 10 days left before the expected 3.2 RC date. So, I propose to 
> release the TimestampNTZ type in Spark 3.3 instead of Spark 3.2, so that we 
> have enough time to produce well-considered designs for these issues.


