yiyutian1 commented on code in PR #25763:
URL: https://github.com/apache/flink/pull/25763#discussion_r1889825476


##########
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/DateTimeUtils.java:
##########
@@ -365,6 +368,38 @@ public static TimestampData toTimestampData(double v, int precision) {
         }
     }
 
+    public static TimestampData toTimestampData(int v, int precision) {
+        switch (precision) {
+            case 0:
+                if (MIN_EPOCH_SECONDS <= v && v <= MAX_EPOCH_SECONDS) {
+                    return timestampDataFromEpochMills((v * MILLIS_PER_SECOND));
+                } else {
+                    return null;
+                }
+            case 3:
+                return timestampDataFromEpochMills(v);
+            default:
+                throw new TableException(
+                        "The precision value '"
+                                + precision
+                                + "' for function "
+                                + "TO_TIMESTAMP_LTZ(numeric, precision) is 
unsupported,"
+                                + " the supported value is '0' for second or 
'3' for millisecond.");
+        }
+    }
+
+    public static TimestampData toTimestampData(long epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }
+
+    public static TimestampData toTimestampData(double epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }
+
+    public static TimestampData toTimestampData(DecimalData epoch) {
+        return toTimestampData(epoch, DEFAULT_PRECISION);
+    }
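
For readers following along, here is a minimal, self-contained sketch of the precision semantics the new overload implements. It is illustrative only: `java.time.Instant` stands in for Flink's `TimestampData`, the `toInstant` helper and class name are hypothetical, and the `MIN_EPOCH_SECONDS`/`MAX_EPOCH_SECONDS` range check from the diff is omitted.

```java
import java.time.Instant;

// Sketch of the TO_TIMESTAMP_LTZ(numeric, precision) conversion in the diff above.
// Instant is used here in place of Flink's TimestampData.
public class ToTimestampLtzSketch {

    private static final long MILLIS_PER_SECOND = 1000L;

    // Precision 0 interprets the value as epoch seconds; precision 3 as epoch milliseconds.
    // The diff's MIN_EPOCH_SECONDS/MAX_EPOCH_SECONDS bounds check is left out for brevity.
    static Instant toInstant(long v, int precision) {
        switch (precision) {
            case 0:
                return Instant.ofEpochMilli(v * MILLIS_PER_SECOND);
            case 3:
                return Instant.ofEpochMilli(v);
            default:
                throw new IllegalArgumentException(
                        "Unsupported precision: " + precision + " (expected 0 or 3)");
        }
    }

    public static void main(String[] args) {
        System.out.println(toInstant(1_700_000_000L, 0));     // 2023-11-14T22:13:20Z (seconds)
        System.out.println(toInstant(1_700_000_000_000L, 3)); // same instant (milliseconds)
    }
}
```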

Review Comment:
   Hi @snuyanzin, you asked a great question. I spent quite some time on this, and I think I figured it out.
   
   In this ticket, we aim to keep the existing `Scala` tests passing, to confirm that the function's existing behavior remains unchanged; that is why we still depend on some Scala-generated code. Ideally, we would have `Scala` logic only for the existing Scala tests, while the new Java function supports all the function behaviors for our `Java` tests. In reality, however, we can't have both stacks running at the same time.
   
   If I don't modify these `scala` folders, my new Java tests fail because they can't pick up the new function behaviors.
   If I get rid of the `Scala` tech stack, the existing `Scala` tests fail.
   
   Once the new functionality is out and stable, we should complete the migration by removing the `Scala` tests and fully transitioning to `Java`. For now, though, I think we should keep things as they are, so that we can be confident we are not breaking the existing tests. What do you think?


