Moving forward with the timestamp proposal

2019-02-20 Thread Zoltan Ivanfi
Hi,

Last December we shared a timestamp harmonization proposal with the Hive,
Spark and Impala communities. This was followed by an extensive discussion
in January that led to various updates and improvements to the proposal, as
well as the creation of a new document for the file format components.
February has been quiet regarding this topic and the latest revision of the
proposal has been stable in recent weeks.

In short, the following is being proposed (please see the document for
details):

   - The TIMESTAMP WITHOUT TIME ZONE type should have LocalDateTime
   semantics.
   - The TIMESTAMP WITH LOCAL TIME ZONE type should have Instant semantics.
   - The TIMESTAMP WITH TIME ZONE type should have OffsetDateTime semantics.

This proposal is in accordance with the SQL standard and many major DB
engines.
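
To make the three proposed semantics concrete, here is a rough,
non-normative analogy in terms of Python's datetime module (the semantics
names above refer to the java.time classes used in the proposal; the zone
and values below are made up for illustration):

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    # TIMESTAMP WITHOUT TIME ZONE ~ LocalDateTime: a wall-clock reading with
    # no zone attached; it does not denote any particular instant.
    local_dt = datetime(2019, 2, 20, 9, 30, 0)

    # TIMESTAMP WITH LOCAL TIME ZONE ~ Instant: a point on the UTC timeline,
    # rendered in the session/local zone only when displayed.
    instant = datetime(2019, 2, 20, 8, 30, 0, tzinfo=timezone.utc)
    print(instant.astimezone(ZoneInfo("Europe/Budapest")))  # 2019-02-20 09:30:00+01:00

    # TIMESTAMP WITH TIME ZONE ~ OffsetDateTime: an instant together with the
    # specific offset it was recorded in, which is preserved on display.
    offset_dt = datetime(2019, 2, 20, 9, 30, 0, tzinfo=timezone(timedelta(hours=1)))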

Based on the feedback we received, I believe that the latest revision of the
proposal addresses the needs of all affected components. I would therefore
like to move forward and create JIRAs and/or roadmap documentation pages
describing the desired semantics of the different SQL types according to the
proposal.

Please let me know if you have any remaining concerns about the proposal or
about the course of action outlined above.

Thanks,

Zoltan


Adding more timestamp types to on-disk storage formats

2019-01-17 Thread Zoltan Ivanfi
Hi,

One of the pieces of feedback I received for the SQL timestamp type
harmonization proposal was that I should reach out to the file format
communities as well. For this purpose I created a separate document written
from their perspective and sent it to the Avro, ORC, Parquet, Arrow, Kudu
and Iceberg developer lists. Please let me know about any other communities
you think I should involve.

The document can be found here:
https://docs.google.com/document/d/1E-7miCh4qK6Mg54b-Dh5VOyhGX8V4xdMXKIHJL36a9U/edit

Br,

Zoltan




Re: proposal for expanded & consistent timestamp types

2019-01-08 Thread Zoltan Ivanfi
Hi,

> ORC has long had a timestamp format. If extra attributes are needed on a 
> timestamp, as long as the default "no metadata" value isn't changed, then at 
> the file level things should be OK.
>
> more problematic is: what would happen to an existing app reading in 
> timestamps and ignoring any extra attributes. That way lies trouble

Maybe it would be best if the freshly introduced, more explicit types were
not forwards-compatible. To be more precise, it would be enough if only the
"new" semantics were not forwards-compatible; it is fine if older readers
can read the "already existing" semantics, since that is what they expect.
Of course, this finer-grained control is only possible if there is a single
"already existing" semantics, which may or may not be the case depending on
the file format.

> Talk to the format groups sooner rather than later

Thanks for the suggestion, I will write a small summary from that
perspective soon and contact the file format groups. I have Avro,
Parquet and ORC in mind. Any other file format group I should contact?
I plan to reach out to Arrow and Kudu as well. (Although strictly speaking
these are not file formats, they have their own type systems too.)

> What does Arrow do in this world, incidentally?

Arrow offers more options than just UTC-normalized or timezone-agnostic: it
supports arbitrary time zones as well:

/// The time zone is a string indicating the name of a time zone [...]
///
/// * If the time zone is null or equal to an empty string, the data is "time
/// zone naive" and shall be displayed *as is* to the user, not localized
/// to the locale of the user. [...]
///
/// * If the time zone is set to a valid value, values can be displayed as
/// "localized" to that time zone, even though the underlying 64-bit
/// integers are identical to the same data stored in UTC. [...]

https://github.com/apache/arrow/blob/master/format/Schema.fbs#L162
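
As a quick illustration of the above, here is a minimal sketch assuming
pyarrow is installed (not part of the original mail; the zone names are
arbitrary examples):

    import pyarrow as pa

    # Time-zone-naive type: no zone attached, values are displayed as is.
    naive = pa.timestamp("us")

    # UTC-normalized (instant-like) type: the stored 64-bit values are UTC,
    # the zone is only used for display.
    utc = pa.timestamp("us", tz="UTC")

    # Arrow also accepts arbitrary zone names, not just UTC.
    budapest = pa.timestamp("us", tz="Europe/Budapest")

    print(naive, utc, budapest)
    # timestamp[us] timestamp[us, tz=UTC] timestamp[us, tz=Europe/Budapest]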

Br,

Zoltan



On Wed, Jan 2, 2019 at 5:36 PM Steve Loughran  wrote:
>
> OK, I've seen the document now. Probably the best summary of timestamps out 
> there I've ever seen.
>
> Irrespective of what historical stuff has done, the goal should be "make 
> everything consistent enough that cut and paste SQL queries over the same 
> data works" and "you shouldn't have to care about the persistence format *or 
> which app created the data*
>
> What does Arrow do in this world, incidentally?
>
>
> On 2 Jan 2019, at 11:48, Steve Loughran  wrote:
>
>
>
> On 17 Dec 2018, at 17:44, Zoltan Ivanfi  wrote:
>
> Hi,
>
> On Sun, Dec 16, 2018 at 4:43 AM Wenchen Fan  wrote:
>
> Shall we include Parquet and ORC? If they don't support it, it's hard for 
> general query engines like Spark to support it.
>
>
> For each of the more explicit timestamp types we propose a single
> semantics regardless of the file format. Query engines and other
> applications must explicitly support the new semantics, but it is not
> strictly necessary to extend or modify the file formats themselves,
> since users can declare the desired semantics directly in the end-user
> applications:
>
> - In SQL they would do so by using the more explicit timestamp types
> as detailed in the proposal. And since the SQL engines in question
> share the same metastore, users only have to define/update the SQL
> schema once to achieve interoperability in SQL.
>
> - Other applications will have to add support for the different
> semantics, but due to the large number of such applications, we can
> not coordinate all of that effort. Hopefully though, if we add support
> in the three major Hadoop SQL engines, other applications will follow
> suit.
>
> - Spark, specifically, falls into both of the categories mentioned
> above. It supports SQL queries, where it gets the benefit of the SQL
> schemas shared via the metastore. It also supports reading data files
> directly, where the correct timestamp semantics to use would have to
> be declared programmatically by the user/consumer of the API.
>
> That being said, although not strictly necessary, it is beneficial to
> store the semantics in some file-level metadata as well. This allows
> writers to record the intended semantics of timestamps and readers to
> recognize it, so no input is needed from the user when data is
> ingested from or exported to other tools. It will still require
> explicit support from the applications though. Parquet does have such
> metadata about the timestamp semantics: the isAdjustedToUTC field is
> part of the new parametric timestamp logical type. True means Instant
> semantics, while false means LocalDateTime semantics.
>
>
> I support the idea of adding similar metadata to other file formats 
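
To make the isAdjustedToUTC metadata mentioned above concrete, here is a
minimal sketch using pyarrow (an assumption of this illustration, not
something prescribed by the proposal; the exact schema rendering and
defaults depend on the pyarrow and parquet-format versions):

    import datetime
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        # tz-aware column: typically written with isAdjustedToUTC = true
        # (Instant semantics)
        "instant": pa.array(
            [datetime.datetime(2019, 1, 17, 12, 0, tzinfo=datetime.timezone.utc)],
            type=pa.timestamp("us", tz="UTC")),
        # naive column: typically written with isAdjustedToUTC = false
        # (LocalDateTime semantics)
        "local": pa.array(
            [datetime.datetime(2019, 1, 17, 12, 0)],
            type=pa.timestamp("us")),
    })
    pq.write_table(table, "/tmp/timestamps.parquet")

    # The Parquet schema records the intended semantics in the timestamp
    # logical type, e.g. Timestamp(isAdjustedToUTC=true, timeUnit=microseconds).
    print(pq.ParquetFile("/tmp/timestamps.parquet").schema)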

Updated proposal: Consistent timestamp types in Hadoop SQL engines

2018-12-19 Thread Zoltan Ivanfi
Dear All,

I would like to thank every reviewer of the consistent timestamps
proposal[1] for their time and valuable comments. Based on your
feedback, I have updated the proposal. The changes include
clarifications, fixes and other improvements as summarized at the end
of the document, in the Changelog section[2].

Another category of changes is declaring some topics as out of scope in
order to keep the proposal focused. While these topics are worth discussing,
I suggest doing so in follow-up efforts. I think it is easier to reach
decisions in bite-sized chunks, and the proposal in its current form is
already near the limit of what can be comfortably read in a single sitting.

Please take a look at the updated proposal. I'm looking forward to
further feedback and suggestions.

Thanks,

Zoltan

[1] 
https://docs.google.com/document/d/1gNRww9mZJcHvUDCXklzjFEQGpefsuR_akCDfWsdE35Q/edit
[2] 
https://docs.google.com/document/d/1gNRww9mZJcHvUDCXklzjFEQGpefsuR_akCDfWsdE35Q/edit#heading=h.b90toonzuv1y




Timestamp interoperability design doc available for review

2017-08-16 Thread Zoltan Ivanfi
Dear Spark Community,

Based on earlier feedback from the Spark community, we would like to
suggest a short-term fix for the timestamp interoperability problem[1]
between different SQL-on-Hadoop engines. I created a design document[2] and
would like to ask you to review it and let me know of any concerns and/or
suggestions you may have.

[1] https://issues.apache.org/jira/browse/SPARK-12297
[2]
https://docs.google.com/document/d/1XmyVjr3eOJiNFjVeSnmjIU60Hq-XiZB03pgi3r1razM/edit

Thanks,

Zoltan


Re: SQL TIMESTAMP semantics vs. SPARK-18350

2017-06-06 Thread Zoltan Ivanfi
Hi Michael,

To answer this I think we should distinguish between the long-term fix and
the short-term fix.

If I understand the replies correctly, everyone agrees that the desired
long-term fix is to have two separate SQL types (TIMESTAMP [WITH|WITHOUT]
TIME ZONE). Because of having separate types, mixing them as you described
can not happen (unless a new feature intentionally allows that). Of course,
conversions are still needed, but there are many examples from different
database systems that we can follow.

Since having two separate types is a huge effort, for a short-term solution
I would suggest allowing the single existing TIMESTAMP type to support both
semantics, configurable per table. The implementation of timezone-agnostic
semantics could be similar to Hive. In Hive, just like in Spark, a
timestamp is UTC-normalized internally but it is shown as a local time when
it gets displayed. To achieve timezone-agnostic behavior, Hive still uses
UTC-based timestamps in memory and adjusts on-disk data to/from this
internal representation if needed. When the on-disk data is UTC-normalized
as well, it matches this internal representation, so the on-disk value
directly corresponds to the UTC instant of the in-memory representation.

When the on-disk data is supposed to have timezone-agnostic semantics, the
on-disk value is made to match the local time value of the in-memory
timestamp, so the value that ultimately gets displayed to the user has
timezone-agnostic semantics (although the corresponding UTC value will be
different depending on the local time zone). So instead of implementing a
separate in-memory representation for timezone-agnostic timestamps, the
desired on-disk semantics are simulated on top of the existing
representation. Timestamps are adjusted during reading/writing as needed.
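
A small Python sketch of the adjustment described above (illustrative only;
the zone name stands in for the local/session time zone and the helper names
are made up):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    LOCAL = ZoneInfo("Europe/Budapest")  # stand-in for the local/session zone

    def write_tz_agnostic(in_memory_utc):
        # Render the UTC-normalized in-memory value as local wall-clock time
        # and store that wall clock (without any zone) on disk.
        return in_memory_utc.astimezone(LOCAL).replace(tzinfo=None)

    def read_tz_agnostic(on_disk_wall_clock):
        # Interpret the stored wall clock as local time and normalize it back
        # to UTC for the engine's internal representation.
        return on_disk_wall_clock.replace(tzinfo=LOCAL).astimezone(timezone.utc)

    stored = datetime(2017, 6, 6, 12, 0, 0)       # "2017-06-06 12:00:00" on disk
    internal = read_tz_agnostic(stored)           # UTC instant in memory
    displayed = internal.astimezone(LOCAL).replace(tzinfo=None)
    assert displayed == stored                    # user sees the on-disk wall clock
    assert write_tz_agnostic(internal) == stored  # and it round-trips on write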

Implementing this workaround takes a lot less effort and simplifies some
scenarios as well. For example, the situation that you described (union of
two queries returning timestamps of different semantics) does not have to
be handled explicitly, since the in-memory representations are the same,
including their interpretation. Semantics only matter when reading/writing
timestamps from/to disk.

A disadvantage of this workaround is that it is not perfect. In most time
zones, there is an hour skipped by the DST change every year.
Timezone-agnostic timestamps from that single hour can not be emulated this
way, because they are invalid in the local timezone, so there is no UTC
instant that would ultimately get displayed as the desired timestamp. But
that only affects ~0.01% of all timestamps, and adopting this workaround
would allow interoperability with 99.99% of timezone-agnostic timestamps
written by Impala and Hive instead of the current situation in which 0% of
these timestamps are interpreted correctly.
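
For example (a hedged sketch; America/Los_Angeles is used purely for
illustration), the wall-clock time 2017-03-12 02:30 does not exist in that
zone because clocks jump from 02:00 to 03:00, so no UTC instant can ever be
displayed as that value:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    LOCAL = ZoneInfo("America/Los_Angeles")

    skipped = datetime(2017, 3, 12, 2, 30)  # nonexistent local wall-clock time
    as_utc = skipped.replace(tzinfo=LOCAL).astimezone(timezone.utc)
    print(as_utc.astimezone(LOCAL))  # 2017-03-12 03:30:00-07:00, not 02:30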

Please let me know if some parts of my description were unclear and I will
gladly elaborate on them.

Thanks,

Zoltan

On Fri, Jun 2, 2017 at 9:41 PM Michael Allman <mich...@videoamp.com> wrote:

> Hi Zoltan,
>
> I don't fully understand your proposal for table-specific timestamp type
> semantics. I think it will be helpful to everyone in this conversation if
> you can identify the expected behavior for a few concrete scenarios.
>
> Suppose we have a Hive metastore table hivelogs with a column named ts
> with the hive timestamp type as described here:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-timestamp.
> This table was created by Hive and is usually accessed through Hive or
> Presto.
>
> Suppose again we have a Hive metastore table sparklogs with a column named
> ts with the Spark SQL timestamp type as described here:
> http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.types.TimestampType$.
> This table was created by Spark SQL and is usually accessed through Spark
> SQL.
>
> Let's say Spark SQL sets and reads a table property called
> timestamp_interp to determine timestamp type semantics for that table.
> Consider a dataframe df defined by sql("SELECT sts as ts FROM sparklogs
> UNION ALL SELECT hts as ts FROM hivelogs"). Suppose the timestamp_interp
> table property is absent from hivelogs. For each possible value of
> timestamp_interp set on the table sparklogs,
>
> 1. does df successfully pass analysis (i.e. is it a valid query)?
> 2. if it's a valid dataframe, what is the type of the ts column?
> 3. if it's a valid dataframe, what are the semantics of the type of the ts
> column?
>
> Suppose further that Spark SQL sets the timestamp_interp on hivelogs. Can
> you answer the same three questions for each combination of
> timestamp_interp on hivelogs and sparklogs?
>
> Thank you.
>
> Michael
>
>
> On Jun 2, 2017, at 8:33 AM, Zoltan Ivanfi <z...@cloudera.com> wrote:
>
> Hi,
>
> We would like

Re: SQL TIMESTAMP semantics vs. SPARK-18350

2017-06-02 Thread Zoltan Ivanfi
Hi,

We would like to solve the problem of interoperability of existing data,
and that is the main use case for having table-level control. Spark should
be able to read timestamps written by Impala or Hive and at the same time
read back its own data. These have different semantics, so having a single
flag is not enough.

Two separate types will solve this problem indeed, but only once every
component involved supports them. Unfortunately, adding these separate SQL
types is a larger effort that is only feasible in the long term and we
would like to provide a short-term solution for interoperability in the
meantime.

Br,

Zoltan

On Fri, Jun 2, 2017 at 1:32 AM Reynold Xin <r...@databricks.com> wrote:

> Yea I don't see why this needs to be per table config. If the user wants
> to configure it per table, can't they just declare the data type on a per
> table basis, once we have separate types for timestamp w/ tz and w/o tz?
>
> On Thu, Jun 1, 2017 at 4:14 PM, Michael Allman <mich...@videoamp.com>
> wrote:
>
>> I would suggest that making timestamp type behavior configurable and
>> persisted per-table could introduce some real confusion, e.g. in queries
>> involving tables with different timestamp type semantics.
>>
>> I suggest starting with the assumption that timestamp type behavior is a
>> per-session flag that can be set in a global `spark-defaults.conf` and
>> consider more granular levels of configuration as people identify solid use
>> cases.
>>
>> Cheers,
>>
>> Michael
>>
>>
>>
>> On May 30, 2017, at 7:41 AM, Zoltan Ivanfi <z...@cloudera.com> wrote:
>>
>> Hi,
>>
>> If I remember correctly, the TIMESTAMP type had UTC-normalized local time
>> semantics even before Spark 2, so I can understand that Spark considers it
>> to be the "established" behavior that must not be broken. Unfortunately,
>> this behavior does not provide interoperability with other SQL engines of
>> the Hadoop stack.
>>
>> Let me summarize the findings of this e-mail thread so far:
>>
>>- Timezone-agnostic TIMESTAMP semantics would be beneficial for
>>interoperability and SQL compliance.
>>- Spark can not make a breaking change. For backward-compatibility
>>with existing data, timestamp semantics should be user-configurable on a
>>per-table level.
>>
>> Before going into the specifics of a possible solution, do we all agree
>> on these points?
>>
>> Thanks,
>>
>> Zoltan
>>
>> On Sat, May 27, 2017 at 8:57 PM Imran Rashid <iras...@cloudera.com>
>> wrote:
>>
>>> I had asked zoltan to bring this discussion to the dev list because I
>>> think it's a question that extends beyond a single jira (we can't figure
>>> out the semantics of timestamp in parquet if we don't know the overall goal
>>> of the timestamp type) and since its a design question the entire community
>>> should be involved.
>>>
>>> I think that a lot of the confusion comes because we're talking about
>>> different ways time zones affect behavior: (1) parsing and (2) behavior when
>>> changing time zones for processing data.
>>>
>>> It seems we agree that spark should eventually provide a timestamp type
>>> which does conform to the standard.   The question is, how do we get
>>> there?  Has spark already broken compliance so much that it's impossible to
>>> go back without breaking user behavior?  Or perhaps spark already has
>>> inconsistent behavior / broken compatibility within the 2.x line, so its
>>> not unthinkable to have another breaking change?
>>>
>>> (Another part of the confusion is on me -- I believed the behavior
>>> change was in 2.2, but actually it looks like its in 2.0.1.  That changes
>>> how we think about this in context of what goes into a 2.2
>>> release.  SPARK-18350 isn't the origin of the difference in behavior.)
>>>
>>> First: consider processing data that is already stored in tables, and
>>> then accessing it from machines in different time zones.  The standard is
>>> clear that "timestamp" should be just like "timestamp without time zone":
>>> it does not represent one instant in time, rather it's always displayed the
>>> same, regardless of time zone.  This was the behavior in spark 2.0.0 (and
>>> 1.6),  for hive tables stored as text files, and for spark's json formats.
>>>
>>> Spark 2.0.1  changed the behavior of the json format (I believe
>>> with SPARK-16216), so that it behaves more like timestamp *with* time
>>>

Re: SQL TIMESTAMP semantics vs. SPARK-18350

2017-05-30 Thread Zoltan Ivanfi
and correctly, the existing implementation is similar to
>>> TIMESTAMP WITH LOCAL TIMEZONE data type in Oracle..
>>> In addition, there are the standard TIMESTAMP and TIMESTAMP WITH
>>> TIMEZONE data types which are missing from Spark.
>>> So, it is better (for me) if instead of extending the existing types,
>>> Spark would just implement the additional well-defined types properly.
>>> Just trying to copy-paste CREATE TABLE between SQL engines should not be
>>> an exercise of flags and incompatibilities.
>>>
>>> Regarding the current behaviour, if I remember correctly I had to force
>>> our spark O/S user into UTC so Spark won't change my timestamps.
>>>
>>> Ofir Manor
>>>
>>> Co-Founder & CTO | Equalum
>>>
>>> Mobile: +972-54-7801286 | Email: ofir.ma...@equalum.io
>>>
>>> On Thu, May 25, 2017 at 1:33 PM, Reynold Xin <r...@databricks.com>
>>> wrote:
>>>
>>>> Zoltan,
>>>>
>>>> Thanks for raising this again, although I'm a bit confused since I've
>>>> communicated with you a few times on JIRA and on private emails to explain
>>>> that you have some misunderstanding of the timestamp type in Spark and some
>>>> of your statements are wrong (e.g. the except text file part). Not sure why
>>>> you didn't get any of those.
>>>>
>>>>
>>>> Here's another try:
>>>>
>>>>
>>>> 1. I think you guys misunderstood the semantics of timestamp in Spark
>>>> before session local timezone change. IIUC, Spark has always assumed
>>>> timestamps to be with timezone, since it parses timestamps with timezone
>>>> and does all the datetime conversions with timezone in mind (it doesn't
>>>> ignore timezone if a timestamp string has timezone specified). The session
>>>> local timezone change further pushes Spark to that direction, but the
>>>> semantics has been with timezone before that change. Just run Spark on
>>>> machines with different timezone and you will know what I'm talking about.
>>>>
>>>> 2. CSV/Text is not different. The data type has always been "with
>>>> timezone". If you put a timezone in the timestamp string, it parses the
>>>> timezone.
>>>>
>>>> 3. We can't change semantics now, because it'd break all existing Spark
>>>> apps.
>>>>
>>>> 4. We can however introduce a new timestamp without timezone type, and
>>>> have a config flag to specify which one (with tz or without tz) is the
>>>> default behavior.
>>>>
>>>>
>>>>
>>>> On Wed, May 24, 2017 at 5:46 PM, Zoltan Ivanfi <z...@cloudera.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Sorry if you receive this mail twice, it seems that my first attempt
>>>>> did not make it to the list for some reason.
>>>>>
>>>>> I would like to start a discussion about SPARK-18350
>>>>> <https://issues.apache.org/jira/browse/SPARK-18350> before it gets
>>>>> released because it seems to be going in a different direction than what
>>>>> other SQL engines of the Hadoop stack do.
>>>>>
>>>>> ANSI SQL defines the TIMESTAMP type (also known as TIMESTAMP WITHOUT
>>>>> TIME ZONE) to have timezone-agnostic semantics - basically a type that
>>>>> expresses readings from calendars and clocks and is unaffected by time
>>>>> zone. In the Hadoop stack, Impala has always worked like this and recently
>>>>> Presto also took steps
>>>>> <https://github.com/prestodb/presto/issues/7122> to become standards
>>>>> compliant. (Presto's design doc
>>>>> <https://docs.google.com/document/d/1UUDktZDx8fGwHZV4VyaEDQURorFbbg6ioeZ5KMHwoCk/edit>
>>>>> also contains a great summary of the different semantics.) Hive has a
>>>>> timezone-agnostic TIMESTAMP type as well (except for Parquet, a major
>>>>> source of incompatibility that is already being addressed
>>>>> <https://issues.apache.org/jira/browse/HIVE-12767>). A TIMESTAMP in
>>>>> SparkSQL, however, has UTC-normalized local time semantics (except for
>>>>> textfile), which is generally the semantics of the TIMESTAMP WITH TIME 
>>>>> ZONE
>>>>> type.
>>>>>
>>>>> Given that timezone-agnostic TIMESTAMP se

Re: SQL TIMESTAMP semantics vs. SPARK-18350

2017-05-25 Thread Zoltan Ivanfi
Hi,

Ofir, thanks for your support. My understanding is that many users have the
same problem as you do.

Reynold, thanks for your reply and sorry for the confusion. My personal
e-mail was specifically about your concerns regarding SPARK-12297 and I
started this separate thread because this is about the general vision
regarding the TIMESTAMP type which may be of interest to the whole
community. My initial e-mail did not address your concerns because I wrote
it before you answered on the other thread.

Regarding your specific concerns:

1. I realize that the TIMESTAMP type in Spark already has UTC-normalized
local time semantics, but I believe that this is problematic for
consistency and interoperability with other SQL engines. In my opinion a
standard-compliant behavior would be the best and since SPARK-18350 takes
SparkSQL even further away from it, I am worried that it makes fixing this
incompatibility even harder.

2. If a timezone is present in a textfile, SparkSQL can indeed parse it.
However, if there is no specific timezone mentioned, it will parse the
TIMESTAMP as a local time, and when the result is displayed to the user
(without the timezone), it will be identical regardless of the current
timezone. This actually matches the way Hive approximates timezone-agnostic
TIMESTAMP behavior. Since Hive's in-memory timestamp
representation is UTC-normalized local time (similar to Spark), reading
timestamps in different timezones will result in a different UTC value in
the in-memory representation. However, when they are rendered, they will
look the same, so the apparent behavior will match the desired
timezone-agnostic semantics. (The reason why this is only an approximation
is that timestamps skipped due to DST changes can not be represented this
way.)

But even if we do not consider textfile an exception, it is still not
SQL-compliant for the TIMESTAMP type to have TIMESTAMP WITH TIME ZONE
semantics.

3. I agree that Spark must not break compatibility in the interpretation of
already existing data, but I don't think that means we can't change
semantics now. It just means that we have to make it configurable, as I
suggested in the initial mail of this thread.

Actually, the requirement of never breaking compatibility is the exact
reason why I'm worried about SPARK-18350, since if people start using that
feature, it will be even harder to change semantics while keeping
compatibility at the same time. (On the other hand, SPARK-18350 would be an
essential feature for a separate TIMESTAMP WITH TIME ZONE type.)

4. The ability to choose the desired behavior of a TIMESTAMP as you suggest
actually solves the problem of breaking compatibility. However, I don't
think that a central configuration flag is enough. Since users who already
have timestamp data may also want to have standard-compliant behavior for
new tables, I think there needs to be a table-specific override for the
global configuration flag. In fact, that is what we wanted to achieve in
SPARK-12297, although our effort was limited to the Parquet format.

Zoltan

On Thu, May 25, 2017 at 12:33 PM Reynold Xin <r...@databricks.com> wrote:

> Zoltan,
>
> Thanks for raising this again, although I'm a bit confused since I've
> communicated with you a few times on JIRA and on private emails to explain
> that you have some misunderstanding of the timestamp type in Spark and some
> of your statements are wrong (e.g. the except text file part). Not sure why
> you didn't get any of those.
>
>
> Here's another try:
>
>
> 1. I think you guys misunderstood the semantics of timestamp in Spark
> before session local timezone change. IIUC, Spark has always assumed
> timestamps to be with timezone, since it parses timestamps with timezone
> and does all the datetime conversions with timezone in mind (it doesn't
> ignore timezone if a timestamp string has timezone specified). The session
> local timezone change further pushes Spark to that direction, but the
> semantics has been with timezone before that change. Just run Spark on
> machines with different timezone and you will know what I'm talking about.
>
> 2. CSV/Text is not different. The data type has always been "with
> timezone". If you put a timezone in the timestamp string, it parses the
> timezone.
>
> 3. We can't change semantics now, because it'd break all existing Spark
> apps.
>
> 4. We can however introduce a new timestamp without timezone type, and
> have a config flag to specify which one (with tz or without tz) is the
> default behavior.
>
>
>
> On Wed, May 24, 2017 at 5:46 PM, Zoltan Ivanfi <z...@cloudera.com> wrote:
>
>> Hi,
>>
>> Sorry if you receive this mail twice, it seems that my first attempt did
>> not make it to the list for some reason.
>>
>> I would like to start a discussion about SPARK-18350
>> <https://is

SQL TIMESTAMP semantics vs. SPARK-18350

2017-05-24 Thread Zoltan Ivanfi
Hi,

Sorry if you receive this mail twice, it seems that my first attempt did
not make it to the list for some reason.

I would like to start a discussion about SPARK-18350 before it gets released
because it seems to be going in a different direction than what other SQL
engines of the Hadoop stack do.

ANSI SQL defines the TIMESTAMP type (also known as TIMESTAMP WITHOUT TIME
ZONE) to have timezone-agnostic semantics - basically a type that expresses
readings from calendars and clocks and is unaffected by time zone. In the
Hadoop stack, Impala has always worked like this and recently Presto also
took steps to become standards compliant. (Presto's design doc also
contains a great summary of the different semantics.) Hive has a
timezone-agnostic TIMESTAMP type as well (except for Parquet, a major
source of incompatibility that is already being addressed). A TIMESTAMP in
SparkSQL, however, has UTC-normalized local time semantics (except for
textfile), which is generally the semantics of the TIMESTAMP WITH TIME ZONE
type.
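
To illustrate this behavior, here is a minimal PySpark sketch (illustrative
only; it assumes Spark 2.2 or later, where the spark.sql.session.timeZone
setting associated with SPARK-18350 is available, and uses a made-up /tmp
path):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Parse and store a timestamp while the session time zone is UTC.
    spark.conf.set("spark.sql.session.timeZone", "UTC")
    spark.sql("SELECT CAST('2017-05-24 12:00:00' AS TIMESTAMP) AS ts") \
         .write.mode("overwrite").parquet("/tmp/ts_demo")

    spark.read.parquet("/tmp/ts_demo") \
         .selectExpr("CAST(ts AS STRING) AS rendered").show()
    # rendered: 2017-05-24 12:00:00

    # The same stored instant renders as a different wall-clock time under
    # another session zone, i.e. the stored value is UTC-normalized.
    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
    spark.read.parquet("/tmp/ts_demo") \
         .selectExpr("CAST(ts AS STRING) AS rendered").show()
    # rendered: 2017-05-24 05:00:00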

Given that timezone-agnostic TIMESTAMP semantics provide standards
compliance and consistency with most SQL engines, I was wondering whether
SparkSQL should also consider it in order to become ANSI SQL compliant and
interoperable with other SQL engines of the Hadoop stack. Should SparkSQL
adopt this semantics in the future, SPARK-18350 may turn out to be a
source of problems. Please correct me if I'm wrong, but this change seems
to explicitly assign TIMESTAMP WITH TIME ZONE semantics to the TIMESTAMP
type. I think SPARK-18350 would be a great feature for a separate TIMESTAMP
WITH TIME ZONE type, but the plain unqualified TIMESTAMP type would be
better off becoming timezone-agnostic instead of gaining further timezone-aware
capabilities. (Of course becoming timezone-agnostic would be a behavior
change, so it must be optional and configurable by the user, as in Presto.)

I would like to hear your opinions about this concern and about TIMESTAMP
semantics in general. Does the community agree that a standards-compliant
and interoperable TIMESTAMP type is desired? Do you perceive SPARK-18350 as
a potential problem in achieving this or do I misunderstand the effects of
this change?

Thanks,

Zoltan

---

List of links in case in-line links do not work:

   - SPARK-18350: https://issues.apache.org/jira/browse/SPARK-18350
   - Presto's change: https://github.com/prestodb/presto/issues/7122
   - Presto's design doc:
     https://docs.google.com/document/d/1UUDktZDx8fGwHZV4VyaEDQURorFbbg6ioeZ5KMHwoCk/edit