rdblue commented on code in PR #12470:
URL: https://github.com/apache/iceberg/pull/12470#discussion_r1984039899
##########
flink/v1.20/flink/src/main/java/org/apache/iceberg/flink/data/FlinkParquetReaders.java:
##########
@@ -274,17 +272,11 @@ public Optional<ParquetValueReader<?>> visit(
public Optional<ParquetValueReader<?>> visit(
LogicalTypeAnnotation.TimestampLogicalTypeAnnotation
timestampLogicalType) {
if (timestampLogicalType.getUnit() ==
LogicalTypeAnnotation.TimeUnit.MILLIS) {
- if (timestampLogicalType.isAdjustedToUTC()) {
- return Optional.of(new MillisToTimestampTzReader(desc));
- } else {
- return Optional.of(new MillisToTimestampReader(desc));
- }
+ return Optional.of(new MillisToTimestampReader(desc));
} else if (timestampLogicalType.getUnit() ==
LogicalTypeAnnotation.TimeUnit.MICROS) {
- if (timestampLogicalType.isAdjustedToUTC()) {
- return Optional.of(new MicrosToTimestampTzReader(desc));
- } else {
- return Optional.of(new MicrosToTimestampReader(desc));
- }
+ return Optional.of(new MicrosToTimestampReader(desc));
Review Comment:
Previously, the readers converted values to `LocalDateTime` or
`OffsetDateTime`, and Flink then converted those values back to a (`millis`,
`nanosOfMilli`) pair. That required a lot of unnecessary date/time logic in
both Iceberg and Flink, plus separate reader classes to produce the two types.
Now the conversion to Flink's representation is direct and doesn't go through
the Java date/time classes, which avoids all time zone calculations and should
be faster.
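As an illustration of the direct path (a sketch, not the actual Iceberg reader code): a Parquet `INT64` timestamp in microseconds can be split into the `(epochMillis, nanosOfMilli)` pair that Flink's `TimestampData.fromEpochMillis(long, int)` expects using plain integer arithmetic, with no `java.time` objects or zone math involved. The class and method names below are hypothetical.

```java
public class MicrosToMillisNanos {
  // Sketch: split a microsecond timestamp into the (epochMillis,
  // nanosOfMilli) pair Flink's TimestampData.fromEpochMillis expects.
  // floorDiv/floorMod keep negative (pre-1970) values correct, which
  // plain / and % would not.
  static long[] split(long micros) {
    long millis = Math.floorDiv(micros, 1000L);
    long nanosOfMilli = Math.floorMod(micros, 1000L) * 1000L;
    return new long[] {millis, nanosOfMilli};
  }

  public static void main(String[] args) {
    long[] r = split(-1L); // 1 microsecond before the epoch
    System.out.println(r[0] + "," + r[1]); // -1,999000
  }
}
```

Because both fields come straight from integer division of the raw Parquet value, the adjusted-to-UTC and local variants no longer need distinct readers, which is why the `TimestampTz` reader branches are removed in this diff.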
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]