[ https://issues.apache.org/jira/browse/FLINK-6886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16051451#comment-16051451 ]

sunjincheng commented on FLINK-6886:
------------------------------------

But I don't think this is an optimization issue. {{FlinkPlannerImpl#rel}} is 
followed by {{StreamTableEnvironment#translate}}; in that method we run 
{{optimize}}, and after {{optimize}} the {{TimeIndicatorRelDataType}} is indeed 
translated to {{TIMESTAMP}}. That part is correct. The core problem occurs in 
{{translate(dataStreamPlan, relNode.getRowType, queryConfig, withChangeFlag)}}: 
the second parameter, {{relNode.getRowType}}, is the row type of the 
non-optimized node, which still contains {{TimeIndicatorRelDataType}}. All of 
the subsequent operations are based on the type of this non-optimized node, 
which is why the problem appears.
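
For context, here is a minimal reproduction sketch of the path described above, written against the 1.3/1.4-era Scala Table API (the table name {{T1}} and query match the SQL quoted below; {{Repro}}, {{Result}} and the sample data are illustrative, a case class stands in for any non-{{Row}} target type, and exact method names may differ slightly between versions):

{code}
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

// Illustrative non-Row target type; the issue mentions PojoType, and a case
// class also goes through the non-Row conversion path.
case class Result(name: String, myMax: Int,
                  winStart: java.sql.Timestamp, winEnd: java.sql.Timestamp)

object Repro {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val input = env
      .fromElements(("a", 1, 1000L), ("a", 2, 2000L), ("b", 3, 3000L))
      .assignAscendingTimestamps(_._3)

    // 'rowtime.rowtime declares the event-time attribute, i.e. the field that
    // carries TimeIndicatorRelDataType in the logical row type.
    tEnv.registerDataStream("T1", input, 'name, 'num, 'rowtime.rowtime)

    val windowSql =
      """SELECT name, MAX(num) AS myMax,
        |  TUMBLE_START(rowtime, INTERVAL '5' SECOND) AS winStart,
        |  TUMBLE_END(rowtime, INTERVAL '5' SECOND) AS winEnd
        |FROM T1
        |GROUP BY name, TUMBLE(rowtime, INTERVAL '5' SECOND)""".stripMargin

    // Converting to a non-Row type is where the reported TableException surfaces.
    tEnv.sql(windowSql).toAppendStream[Result].print()

    env.execute()
  }
}
{code}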


> Fix: Timestamp field cannot be selected in the event-time case when 
> toDataStream[T] is used and `T` is not a `Row` type.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-6886
>                 URL: https://issues.apache.org/jira/browse/FLINK-6886
>             Project: Flink
>          Issue Type: Bug
>          Components: Table API & SQL
>    Affects Versions: 1.4.0
>            Reporter: sunjincheng
>            Assignee: sunjincheng
>
> Currently, for an event-time window (group/over), when the `SELECT` clause 
> contains a `Timestamp`-typed field and we call toDataStream[T] with `T` not a 
> `Row` type (such as a `PojoType`), an exception is thrown. This JIRA will fix 
> that bug. For example:
> Group Window on SQL:
> {code}
> SELECT name, max(num) as myMax, TUMBLE_START(rowtime, INTERVAL '5' SECOND) as 
> winStart,TUMBLE_END(rowtime, INTERVAL '5' SECOND) as winEnd FROM T1 GROUP BY 
> name, TUMBLE(rowtime, INTERVAL '5' SECOND)
> {code}
> Thrown exception:
> {code}
> org.apache.flink.table.api.TableException: The field types of physical and 
> logical row types do not match.This is a bug and should not happen. Please 
> file an issue.
>       at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
>       at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:721)
>       at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:247)
>       at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:647)
> {code}
> In fact, even after this exception is resolved, further exceptions are thrown. 
> The real cause is a bug in the {{TableEnvironment#generateRowConverterFunction}} 
> method, so this JIRA will fix it there.
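
A possible interim workaround, purely an assumption on my part rather than anything from the issue: convert to {{Row}} first (the path that, per the issue title, is not affected) and map to the target type by hand. This reuses {{tEnv}}, {{windowSql}} and {{Result}} from the sketch above, with the same imports in scope.

{code}
import org.apache.flink.types.Row

// Workaround sketch: take the Row conversion path, then build the case class manually.
val rows = tEnv.sql(windowSql).toAppendStream[Row]

val results = rows.map { r =>
  Result(
    r.getField(0).asInstanceOf[String],
    r.getField(1).asInstanceOf[Int],
    r.getField(2).asInstanceOf[java.sql.Timestamp],
    r.getField(3).asInstanceOf[java.sql.Timestamp])
}

results.print()
{code}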



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
