[ https://issues.apache.org/jira/browse/FLINK-8215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281537#comment-16281537 ]

Timo Walther commented on FLINK-8215:
-------------------------------------

I think the Table API should not support automatic type widening. It is not 
supported in other places either. But if the SQL standard supports it, we need 
to adapt the code generation accordingly.

> Collections codegen exception when constructing Array or Map via SQL API
> ------------------------------------------------------------------------
>
>                 Key: FLINK-8215
>                 URL: https://issues.apache.org/jira/browse/FLINK-8215
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API & SQL
>            Reporter: Rong Rong
>            Assignee: Rong Rong
>
> The Table API goes through `LogicalNode.validate()`, which performs the 
> collection validation and rejects inconsistent element types; this throws a 
> `ValidationException` for something like `array(1.0, 2.0f)`.
> The SQL API uses `FlinkPlannerImpl.validate(SqlNode)`, which relies on 
> Calcite's SqlNode validation and resolves a common element type via 
> `leastRestrictive`, so `ARRAY[CAST(1 AS DOUBLE), CAST(2 AS FLOAT)]` passes 
> validation but then throws a codegen exception.
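>
> A minimal repro sketch of the two paths (hypothetical setup: `tEnv`, `ds`, 
> and the registered table `T` are assumed here, using the Scala Table API of 
> this period; method names may differ slightly between versions):
>
>     // Table API path: rejected early, LogicalNode.validate() throws
>     // ValidationException because the DOUBLE and FLOAT elements disagree
>     val t = tEnv.fromDataSet(ds, 'a).select(array(1.0, 2.0f))
>
>     // SQL path: Calcite resolves leastRestrictive = DOUBLE, validation
>     // passes, but code generation fails later on the FLOAT element
>     val r = tEnv.sqlQuery(
>       "SELECT ARRAY[CAST(1 AS DOUBLE), CAST(2 AS FLOAT)] FROM T")
>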
> The root cause is that the code generation for these collection value 
> constructors does not cast to or resolve the least restrictive type 
> correctly. I see 2 options:
> 1. Strengthen validation so that the SQL path does not resolve a least 
> restrictive type either.
> 2. Make codegen support the least restrictive type cast, e.g. by using 
> `generateCast` instead of a direct cast like `(ClassType) element` (see the 
> sketch below).
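>
> To make option 2 concrete: the generated code operates on boxed values, and 
> a plain reference cast cannot widen a boxed FLOAT to a DOUBLE. A minimal 
> Scala sketch of the failure mode, and of the numeric conversion a generated 
> cast (e.g. via `generateCast`) would have to emit instead:
>
>     import scala.util.Try
>
>     val element: Any = java.lang.Float.valueOf(2.0f)
>
>     // What a direct cast like `(ClassType) element` amounts to: a reference
>     // cast, which fails because java.lang.Float is not java.lang.Double.
>     println(Try(element.asInstanceOf[java.lang.Double]))
>     // prints Failure(java.lang.ClassCastException: ...)
>
>     // A generated cast must emit a numeric conversion instead:
>     val widened: Double = element.asInstanceOf[java.lang.Number].doubleValue()
>     println(widened) // 2.0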



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
