vrozov commented on code in PR #5868:
URL: https://github.com/apache/hive/pull/5868#discussion_r2167554480
##########
ql/src/java/org/apache/hadoop/hive/ql/io/BatchToRowReader.java:
##########
@@ -518,7 +521,8 @@ public static TimestampWritableV2 nextTimestamp(ColumnVector vector,
       result = (TimestampWritableV2) previous;
     }
     TimestampColumnVector tcv = (TimestampColumnVector) vector;
-    result.setInternal(tcv.time[row], tcv.nanos[row]);
+    result.set(Timestamp.ofEpochSecond(Math.floorDiv(tcv.time[row], 1000L), tcv.nanos[row],
+        tcv.isUTC() ? ZoneOffset.UTC : ZoneId.systemDefault()));
Review Comment:
1. As I mentioned, the local time zone handling was tested using Spark unit tests. I don't
see how this can be done inside Hive (which hardcodes the UTC time zone), but if you
have a suggestion, I am open to it.
2. I am not open to implementing changes that I consider incorrect and
unmaintainable in the long run. They are your changes; if you consider them
correct, why not open a PR?
3. It is up to a Hive PMC member to proceed with or without the changes. I'll cast
my (non-binding) vote once an RC is available. It is also likely that Spark
committers will request a fix for the Spark regressions caused by the Hive
behavior change between 2.3.10 and 4.x if the fix is not implemented one way or
another in Hive 4.1.x.
4. I never said that `TimestampTreeReader` uses the same approach as what is
implemented in `RecordReaderImpl`. I said that it works with (and actually
without) those changes. That change is required for the
`TimestampFromXXXTreeReader`s, not for `TimestampTreeReader`. And once
https://github.com/apache/orc/pull/2300 is fixed, the `TimestampFromXXXTreeReader`s
will work both with and without the changes in `RecordReaderImpl`.
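
As a side note on the diff above: the `Math.floorDiv` call matters for pre-epoch timestamps, where the stored millisecond value is negative. A minimal standalone sketch (this class and its values are illustrative, not part of the PR) of why truncating division would pick the wrong epoch second:

```java
// Illustrative sketch: splitting epoch milliseconds into epoch seconds.
// Java's `/` truncates toward zero, while Timestamp.ofEpochSecond expects
// the second to be the floor of millis/1000, so negative (pre-1970) values
// need Math.floorDiv.
public class FloorDivDemo {
    public static void main(String[] args) {
        long millis = -1500L; // 1.5 seconds before the epoch

        long truncated = millis / 1000L;             // rounds toward zero
        long floored = Math.floorDiv(millis, 1000L); // rounds toward -infinity

        // prints "truncated=-1 floored=-2"
        System.out.println("truncated=" + truncated + " floored=" + floored);
    }
}
```

With truncating division the sub-second part would have to be negative to compensate, which does not fit the non-negative `nanos` field of `TimestampColumnVector`; flooring keeps the seconds/nanos split consistent.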
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]