amogh-jahagirdar commented on code in PR #11775:
URL: https://github.com/apache/iceberg/pull/11775#discussion_r2113087795
##########
api/src/main/java/org/apache/iceberg/expressions/Literals.java:
##########
@@ -300,8 +300,7 @@ public <T> Literal<T> to(Type type) {
case TIMESTAMP:
return (Literal<T>) new TimestampLiteral(value());
case TIMESTAMP_NANO:
-        // assume micros and convert to nanos to match the behavior in the timestamp case above
-        return new TimestampLiteral(value()).to(type);
+        return (Literal<T>) new TimestampNanoLiteral(value());
Review Comment:
   @stevenzwu imo the fix is good as is. I'm not entirely sure why we need
   to keep the behavior of interpreting the value as microseconds; the only
   case with a chance of a correctness issue is Spark microsecond values
   being interpreted as nanoseconds for a timestamp_nano type. However, Spark
   doesn't even support this data type yet, so that situation can't happen
   today, and it feels like the right thing to do is to drop this assumption.
   > Since long literal value can't express precision explicitly, it is more
   intuitive to assume the same precision as the timestamp field type.
   Exactly my thinking as well. The alternative would mean restructuring a
   lot of the existing expression APIs, which is far too wide a change for
   this narrow issue.
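
   To make the precision question concrete, here's a minimal sketch (plain
   Java arithmetic, not the Iceberg `Literals` API) of the two ways the same
   long literal can be read when the target column is timestamp_ns. The value
   and variable names are made up for illustration.

   ```java
   public class TimestampLiteralPrecision {
     public static void main(String[] args) {
       // A long value carried by a predicate literal (hypothetical example value).
       long value = 1_700_000_000_123_456L;

       // Old behavior (removed by this PR): assume the long is in microseconds
       // and widen it to nanoseconds when binding to a timestamp_ns column.
       long microsWidenedToNanos = Math.multiplyExact(value, 1_000L);

       // New behavior: assume the long already has the same precision as the
       // field type, i.e. treat it as nanoseconds directly.
       long takenAsNanos = value;

       System.out.println("interpreted as micros, widened: " + microsWidenedToNanos);
       System.out.println("interpreted as nanos directly:  " + takenAsNanos);
     }
   }
   ```

   The two readings differ by a factor of 1000, which is exactly the kind of
   silent correctness issue discussed above if the wrong assumption is baked in.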
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]