[
https://issues.apache.org/jira/browse/SPARK-51162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Max Gekk updated SPARK-51162:
-----------------------------
Description:
*Q1. What are you trying to do? Articulate your objectives using absolutely no
jargon.*
Add a new data type *TIME* to Spark SQL that represents a time of day with the
fields hour, minute, and second, with precision up to microseconds. All
operations over the type are performed without taking any time zone into
account. The new data type should conform to the type *TIME\(n\) WITHOUT TIME ZONE* defined by the SQL standard.
*Q3. How is it done today, and what are the limits of current practice?*
The TIME type can be emulated via the TIMESTAMP_NTZ data type by setting the
date part to some constant value such as 1970-01-01, 0001-01-01, or 0000-00-00
(though the last is outside the supported range of dates).
Although the type can be emulated via TIMESTAMP_NTZ, Spark SQL cannot
recognize it as a TIME type in data sources and, for instance, cannot load
TIME values from Parquet files.
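The emulation described above can be sketched outside of Spark with plain Python datetimes: a timezone-naive timestamp whose date part is pinned to 1970-01-01 carries exactly the hour/minute/second/microsecond fields, much like TIMESTAMP_NTZ does. The helper names below (time_to_ntz, ntz_to_time) are hypothetical and for illustration only; they are not part of any Spark API.

```python
from datetime import datetime, time, timedelta

# Constant date part used to anchor the emulated time-of-day value.
EPOCH_DATE = datetime(1970, 1, 1)

def time_to_ntz(t: time) -> datetime:
    """Encode a time of day as a timezone-naive timestamp on 1970-01-01
    (hypothetical helper illustrating the TIMESTAMP_NTZ emulation)."""
    return datetime.combine(EPOCH_DATE.date(), t)

def ntz_to_time(ts: datetime) -> time:
    """Recover the time of day, discarding the constant date part."""
    return ts.time()

# Microsecond precision survives the round trip.
t = time(23, 59, 59, 999999)
ts = time_to_ntz(t)
assert ntz_to_time(ts) == t

# A limit of the emulation: arithmetic can silently spill into the next
# date, so the constant date part is no longer 1970-01-01.
later = ts + timedelta(seconds=1)
assert later.date() != EPOCH_DATE.date()
```

The spill-over at midnight is one reason a dedicated TIME type is preferable to the emulation: a true TIME value would wrap around within the 00:00:00..23:59:59.999999 range instead of drifting into a different date.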
was:
*Q1. What are you trying to do? Articulate your objectives using absolutely no
jargon.*
Add a new data type *TIME* to Spark SQL that represents a time of day with the
fields hour, minute, and second, with precision up to microseconds. All
operations over the type are performed without taking any time zone into
account. The new data type should conform to the type *TIME\(n\) WITHOUT TIME ZONE* defined by the SQL standard.
*Q3. How is it done today, and what are the limits of current practice?*
The TIME type can be emulated via the TIMESTAMP_NTZ data type by setting the
date part to some constant value such as 1970-01-01, 0001-01-01, or 0000-00-00
(though the last is outside the supported range of dates).
> [WIP] SPIP: Add the TIME data type
> ----------------------------------
>
> Key: SPARK-51162
> URL: https://issues.apache.org/jira/browse/SPARK-51162
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Affects Versions: 4.0.0
> Reporter: Max Gekk
> Assignee: Max Gekk
> Priority: Major
> Labels: SPIP
>
> *Q1. What are you trying to do? Articulate your objectives using absolutely
> no jargon.*
> Add a new data type *TIME* to Spark SQL that represents a time of day with
> the fields hour, minute, and second, with precision up to microseconds. All
> operations over the type are performed without taking any time zone into
> account. The new data type should conform to the type *TIME\(n\) WITHOUT TIME ZONE* defined by the SQL standard.
> *Q3. How is it done today, and what are the limits of current practice?*
> The TIME type can be emulated via the TIMESTAMP_NTZ data type by setting the
> date part to some constant value such as 1970-01-01, 0001-01-01, or
> 0000-00-00 (though the last is outside the supported range of dates).
> Although the type can be emulated via TIMESTAMP_NTZ, Spark SQL cannot
> recognize it as a TIME type in data sources and, for instance, cannot load
> TIME values from Parquet files.
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]