[ https://issues.apache.org/jira/browse/FLINK-16860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nikola updated FLINK-16860:
---------------------------
    Description: 
We have a batch job that currently runs fine on a Flink 1.8.2 cluster. We wanted to upgrade to Flink 1.10, but that yielded errors, so we downgraded step by step until we found that the issue was introduced in Flink 1.9.2.

The job on 1.9.2 fails with:
{code:java}
Caused by: org.apache.flink.table.api.TableException: Failed to push filter 
into table source! table source with pushdown capability must override and 
change explainSource() API to explain the pushdown applied!{code}
This does not happen on Flink 1.8.2. You can compare the logs for the exact same job running on the two cluster versions: [^flink-1.8.2.txt] [^flink-1.9.2.txt]

 

I tried to narrow it down, and it seems this exception was added in FLINK-12399; there was a brief discussion about it on the PR: 
[https://github.com/apache/flink/pull/8468#discussion_r329876088]
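Judging by the exception message, the planner appears to compare the source's explainSource() output before and after predicate pushdown, and throws if the two strings are identical. A minimal self-contained model of that check (plain Java, no Flink dependencies; all class and method names below are hypothetical stand-ins, not Flink's actual API):
{code:java}
import java.util.List;

// Hypothetical stand-in for a filterable table source: after predicates are
// pushed down, explainSource() must reflect them so the plan digest changes.
class SketchSource {
    private final List<String> pushedPredicates;

    SketchSource(List<String> pushedPredicates) {
        this.pushedPredicates = pushedPredicates;
    }

    // Mirrors the pushdown step: returns a NEW source carrying the filters.
    SketchSource applyPredicate(List<String> predicates) {
        return new SketchSource(predicates);
    }

    // The override the exception asks for: include the pushed filters
    // in the explanation, so the two digests differ.
    String explainSource() {
        return "SketchSource(filter=" + pushedPredicates + ")";
    }
}

public class Main {
    public static void main(String[] args) {
        SketchSource original = new SketchSource(List.of());
        SketchSource filtered = original.applyPredicate(List.of("id IN (1, 2, 3)"));
        // The planner's sanity check: the digest must change after pushdown.
        boolean digestChanged = !original.explainSource().equals(filtered.explainSource());
        System.out.println(digestChanged);  // true
    }
}
{code}
If a source applied the predicates but kept explainSource() unchanged, digestChanged would be false, which is exactly the situation the TableException complains about.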

Our code looks something like this:

 
{code:java}
String tempTableName = "tempTable";
String sql = SqlBuilder.buildSql(tempTableName);
BatchTableEnvironment tableEnv = BatchTableEnvironment.create(env);
OrcTableSource orcTableSource = OrcTableSource.builder()
    .path(hdfsFolder, true)
    .forOrcSchema(ORC.getSchema())
    .withConfiguration(config)
    .build();
tableEnv.registerTableSource(tempTableName, orcTableSource);
Table tempTable = tableEnv.sqlQuery(sql);
return tableEnv.toDataSet(tempTable, Row.class);
{code}
where the generated SQL is nothing more than
{code:java}
SELECT * FROM table WHERE id IN (1, 2, 3) AND mid IN (4, 5, 6){code}
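For reference, the two IN predicates the planner tries to push into the OrcTableSource are plain set-membership tests; row for row, the WHERE clause is equivalent to this check (a plain-Java sketch, with field names taken from the query and types assumed to be integral):
{code:java}
import java.util.Set;

public class Main {
    // Equivalent of: WHERE id IN (1, 2, 3) AND mid IN (4, 5, 6)
    static boolean matches(long id, long mid) {
        return Set.of(1L, 2L, 3L).contains(id) && Set.of(4L, 5L, 6L).contains(mid);
    }

    public static void main(String[] args) {
        System.out.println(matches(1L, 5L));  // true: both predicates hold
        System.out.println(matches(1L, 7L));  // false: mid not in (4, 5, 6)
    }
}
{code}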
 

> TableException: Failed to push filter into table source! when upgrading flink 
> to 1.9.2
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-16860
>                 URL: https://issues.apache.org/jira/browse/FLINK-16860
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / ORC
>    Affects Versions: 1.9.2, 1.10.0
>         Environment: flink 1.8.2
> flink 1.9.2
>            Reporter: Nikola
>            Priority: Major
>         Attachments: flink-1.8.2.txt, flink-1.9.2.txt
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
