slinkydeveloper commented on a change in pull request #18611:
URL: https://github.com/apache/flink/pull/18611#discussion_r807008892



##########
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/CastRuleProvider.java
##########
@@ -107,6 +108,16 @@ public static boolean exists(LogicalType inputType, LogicalType targetType) {
         return resolve(inputType, targetType) != null;
     }
 
+    /**
+     * Resolves the rule and returns the result of {@link CastRule#canFail(LogicalType,
+     * LogicalType)}. Fails with {@link NullPointerException} if the rule cannot be resolved.
+     */
+    public static boolean canFail(LogicalType inputType, LogicalType targetType) {
+        return Preconditions.checkNotNull(
+                        resolve(inputType, targetType), "Cast rule cannot be resolved")
+                .canFail(inputType, targetType);

Review comment:
       Rather than making `CastRule`s mutable, which I don't really like as a solution, I think we could make `matches` a three-valued logic function: the result of a match is either `SUPPORTED`, `FALLIBLE`, or `UNSUPPORTED`. WDYT?
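
       A minimal sketch of what such a three-valued result could look like (the enum and its name are illustrative, not from the PR):

       ```java
       /** Hypothetical three-valued result for CastRule#matches. */
       public enum CastRuleMatchResult {
           /** The rule matches and the cast can never fail at runtime. */
           SUPPORTED,
           /** The rule matches, but the cast may fail at runtime (e.g. STRING to INT). */
           FALLIBLE,
           /** The rule does not match this type pair at all. */
           UNSUPPORTED
       }
       ```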

##########
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/casting/RowToRowCastRule.java
##########
@@ -224,4 +224,13 @@ protected String generateCodeBlockInternal(
         writer.stmt(methodCall(writerTerm, "complete")).assignStmt(returnVariable, rowTerm);
         return writer.toString();
     }
+
+    @Override
+    public boolean canFail(LogicalType inputLogicalType, LogicalType targetLogicalType) {

Review comment:
       No, because the logic is not the same: the row-to-row cast rule allows the target type to have fewer fields than the source type, so we cannot assert that the children type lists have the same size across the two types. Look a couple of lines below where I use `Math.min`, and also look at the `matches` method implementation.
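
       For context, a sketch of the shape of that `canFail` logic under the fewer-fields rule (the loop body is illustrative, not copied from the PR; it assumes `LogicalType#getChildren` for the row fields):

       ```java
       // Only the common field prefix is inspected: extra source fields are
       // dropped by the row-to-row cast, so they cannot make the cast fail.
       List<LogicalType> inputFields = inputLogicalType.getChildren();
       List<LogicalType> targetFields = targetLogicalType.getChildren();
       int commonArity = Math.min(inputFields.size(), targetFields.size());
       for (int i = 0; i < commonArity; i++) {
           if (CastRuleProvider.canFail(inputFields.get(i), targetFields.get(i))) {
               return true;
           }
       }
       return false;
       ```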

##########
File path: 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/core/testutils/FlinkAssertions.java
##########
@@ -117,7 +117,7 @@ private FlinkAssertions() {}
      *                  .hasMessageContaining(containsMessage));
      * }</pre>
      */
-    public static ThrowingConsumer<? extends Throwable> anyCauseMatches(String containsMessage) {
+    public static ThrowingConsumer<? super Throwable> anyCauseMatches(String containsMessage) {

Review comment:
       +1. Just as an FYI, no one is using this method (also because, without this change, it's unusable :smile:).
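
       For readers less familiar with Java variance, a self-contained illustration of why a consumer needs the `? super` bound (using plain `java.util.function.Consumer` rather than Flink's `ThrowingConsumer`):

       ```java
       import java.util.function.Consumer;

       public class VarianceDemo {
           public static void main(String[] args) {
               Consumer<RuntimeException> specific = e -> {};
               Consumer<? extends Throwable> covariant = specific; // legal widening
               // covariant.accept(new RuntimeException());        // does NOT compile:
               // the compiler only knows it consumes *some* unknown subtype of Throwable.

               Consumer<Object> general = o -> System.out.println(o);
               Consumer<? super Throwable> contravariant = general; // also legal
               contravariant.accept(new Throwable("boom"));         // compiles and runs
           }
       }
       ```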

##########
File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java
##########
@@ -1529,6 +1529,20 @@ public void testSetReset() {
         sql("RESET 'test-key'").ok("RESET 'test-key'");
     }
 
+    @Test
+    public void testTryCast() {
+        // Note that is expected that the unparsed value has the comma rather than AS, because we
+        // don't use a custom SqlNode for TryCast, but we rely on SqlBasicCall
+
+        // Simple types
+        expr("try_cast(a as timestamp)").ok("TRY_CAST(`A` AS TIMESTAMP)");
+        expr("try_cast('abc' as timestamp)").ok("TRY_CAST('abc' AS 
TIMESTAMP)");
+
+        // Complex types
+        expr("try_cast(a as row(f0 int, f1 varchar))")

Review comment:
       Added this specific case

##########
File path: 
flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java
##########
@@ -1529,6 +1529,20 @@ public void testSetReset() {
         sql("RESET 'test-key'").ok("RESET 'test-key'");
     }
 
+    @Test
+    public void testTryCast() {
+        // Note that is expected that the unparsed value has the comma rather than AS, because we

Review comment:
       Removed it; it's a leftover.

##########
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/sql/SqlTryCastFunction.java
##########
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.sql;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.functions.casting.CastRuleProvider;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlCallBinding;
+import org.apache.calcite.sql.SqlFunctionCategory;
+import org.apache.calcite.sql.SqlIntervalQualifier;
+import org.apache.calcite.sql.SqlKind;
+import org.apache.calcite.sql.SqlOperandCountRange;
+import org.apache.calcite.sql.SqlOperatorBinding;
+import org.apache.calcite.sql.SqlSyntax;
+import org.apache.calcite.sql.SqlWriter;
+import org.apache.calcite.sql.fun.SqlCastFunction;
+import org.apache.calcite.sql.fun.SqlStdOperatorTable;
+
+import static org.apache.flink.table.functions.BuiltInFunctionDefinition.DEFAULT_VERSION;
+
+/**
+ * This class implements the {@code TRY_CAST} built-in, essentially delegating all the method
+ * invocations, whenever is possible, to Calcite's {@link SqlCastFunction}.
+ */
+@Internal
+public class SqlTryCastFunction extends BuiltInSqlFunction {
+
+    /**
+     * Note that this constructor is mimicking as much as possible the constructor of Calcite's
+     * {@link SqlCastFunction}.
+     */
+    SqlTryCastFunction() {
+        super(
+                "TRY_CAST",
+                DEFAULT_VERSION,
+                SqlKind.OTHER_FUNCTION,
+                null,
+                SqlStdOperatorTable.CAST
+                        .getOperandTypeInference(), // From Calcite's SqlCastFunction
+                null,

Review comment:
       And in fact we're adopting it: without copying the actual implementation, we simply use the getters to obtain the monotonicity and operand type inference. It's essentially no different from directly using `InferTypes.FIRST_KNOWN` here, with the difference that this is potentially one less pain point to fix when upgrading to the next Calcite version.
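
       In code terms, the trade-off described here is roughly the following (a sketch; whether CAST's inference strategy is literally `InferTypes.FIRST_KNOWN` in a given Calcite version is exactly the detail being delegated away):

       ```java
       import org.apache.calcite.sql.fun.SqlStdOperatorTable;
       import org.apache.calcite.sql.type.InferTypes;
       import org.apache.calcite.sql.type.SqlOperandTypeInference;

       class DelegationSketch {
           // Option 1: hard-code the strategy; silently diverges if a Calcite
           // upgrade changes what CAST uses.
           SqlOperandTypeInference direct = InferTypes.FIRST_KNOWN;

           // Option 2 (the approach taken here): delegate to Calcite's CAST via
           // its getter, so a Calcite upgrade is picked up automatically.
           SqlOperandTypeInference delegated =
                   SqlStdOperatorTable.CAST.getOperandTypeInference();
       }
       ```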

##########
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/expressions/converter/converters/TryCastConverter.java
##########
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.expressions.converter.converters;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.expressions.CallExpression;
+import org.apache.flink.table.expressions.TypeLiteralExpression;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.expressions.converter.CallExpressionConvertRule;
+import org.apache.flink.table.planner.expressions.converter.FunctionDefinitionConvertRule;
+import org.apache.flink.table.planner.functions.casting.CastRuleProvider;
+import org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexNode;
+
+import java.util.Collections;
+
+/**
+ * Conversion for {@link BuiltInFunctionDefinitions#TRY_CAST}.
+ *
+ * <p>We need this custom converter as {@link FunctionDefinitionConvertRule} doesn't support type
+ * literal arguments.
+ */
+@Internal
+class TryCastConverter extends CustomizedConverter {
+
+    @Override
+    public RexNode convert(CallExpression call, CallExpressionConvertRule.ConvertContext context) {
+        checkArgumentNumber(call, 2);
+
+        final RexNode child = context.toRexNode(call.getChildren().get(0));
+        final TypeLiteralExpression targetType = (TypeLiteralExpression) call.getChildren().get(1);
+
+        final LogicalType fromType = FlinkTypeFactory.toLogicalType(child.getType());
+        final LogicalType toType = targetType.getOutputDataType().getLogicalType();
+
+        // We need to adjust the type nullability here, as in table-common we cannot implement it
+        // correctly because we cannot access CastRuleProvider#canFail
+        RelDataType targetRelDataType =
+                context.getTypeFactory().createFieldTypeFromLogicalType(toType);
+        if (CastRuleProvider.canFail(fromType, toType)) {

Review comment:
       After removing the code that simplified the non-fallible case, not really: this code now only differs for very specific SQL or Table API calls when inferring the return type.
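
       Since the quoted diff is truncated at the review marker, here is a plausible continuation of that `if` branch (an assumption based on the surrounding code, using the standard `RelDataTypeFactory#createTypeWithNullability`; not verbatim from the PR):

       ```java
       // Assumed continuation: a fallible TRY_CAST must produce a nullable type,
       // because a failed cast surfaces as NULL instead of an exception.
       if (CastRuleProvider.canFail(fromType, toType)) {
           targetRelDataType =
                   context.getTypeFactory().createTypeWithNullability(targetRelDataType, true);
       }
       ```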

##########
File path: docs/data/sql_functions.yml
##########
@@ -561,7 +561,10 @@ conditional:
 conversion:
   - sql: CAST(value AS type)
     table: ANY.cast(TYPE)
-    description: Returns a new value being cast to type type. E.g., CAST('42' AS INT) returns 42; CAST(NULL AS VARCHAR) returns NULL of type VARCHAR.
+    description: Returns a new value being cast to type type. A CAST error throws an exception and fails the job. If you're performing a cast operation that may fail, like INT to STRING, you should rather use TRY_CAST, in order to handle errors. E.g., CAST('42' AS INT) returns 42; CAST(NULL AS VARCHAR) returns NULL of type VARCHAR; TRY_CAST('non-number' AS INT) throws an exception and fails the job.

Review comment:
       This has been fixed already in one of the last commits of this PR



