[jira] [Created] (FLINK-32235) Translate CrateDB Docs to Chinese

2023-06-01 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-32235:
---

 Summary: Translate CrateDB Docs to Chinese
 Key: FLINK-32235
 URL: https://issues.apache.org/jira/browse/FLINK-32235
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Marios Trivyzas


Translate the newly added docs for CrateDB with 
[https://github.com/apache/flink-connector-jdbc/pull/29] to Chinese.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31699) JDBC nightly CI failure

2023-04-03 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707996#comment-17707996
 ] 

Marios Trivyzas commented on FLINK-31699:
-

[~ruanhang1993] FYI, I had such failures when using JDK > 8 to build locally.

Also, it may be worth checking whether 
[https://github.com/apache/flink-connector-jdbc/pull/30] and 
[https://github.com/apache/flink-connector-jdbc/pull/31] 
address the issue.

> JDBC nightly CI failure
> ---
>
> Key: FLINK-31699
> URL: https://issues.apache.org/jira/browse/FLINK-31699
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Assignee: Hang Ruan
>Priority: Major
>
> Investigate and fix the nightly CI failure. Example 
> [https://github.com/apache/flink-connector-jdbc/actions/runs/4585903259]
>  
> {code:java}
> Error:  Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test (default-test) 
> on project flink-connector-jdbc: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M5:test failed: 
> org.junit.platform.commons.JUnitException: TestEngine with ID 'archunit' 
> failed to discover tests: 
> com.tngtech.archunit.lang.syntax.elements.MethodsThat.areAnnotatedWith(Ljava/lang/Class;)Ljava/lang/Object;
>  -> [Help 1]{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31553) Choose Catalog, non-url based

2023-04-03 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707805#comment-17707805
 ] 

Marios Trivyzas commented on FLINK-31553:
-

IMHO, I don't think this requires a FLIP; it's a simple change/addition.
I created a discussion thread on the mailing list instead.

> Choose Catalog, non-url based
> -
>
> Key: FLINK-31553
> URL: https://issues.apache.org/jira/browse/FLINK-31553
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, the *Catalog*/*Dialect* (etc.) is chosen automatically 
> based on the URL provided. This takes place in 
> *JdbcDialectFactory#load*, using 
> *JdbcDialectFactory#acceptsURL*, so for a URL 
> *jdbc:postgresql://...* the PostgresCatalog/Dialect is used. *CrateDB* uses 
> the same driver and URL but needs its own Catalog/Dialect (etc.), as it has 
> its own stack of internal tables, type system, etc. (see 
> https://issues.apache.org/jira/browse/FLINK-31551). So if a user wants to use 
> *CrateDB*, they currently need to use the legacy CrateDB driver, which uses 
> the jdbc:crate://... URL.
> Ideally, there should be another way, not only URL-based, that allows the 
> user to keep a given URL but choose the dialect manually. It could be a 
> parameter on the catalog definition or even a special URL parameter like 
> *?dialect=CrateDB*.
>  
> Other DBs that implement the PostgreSQL wire protocol and are compatible 
> with the PostgreSQL JDBC driver, but need their own Catalog/Dialect etc., 
> would benefit from this (e.g., CockroachDB).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31552) Cannot use schema other than public with TableEnvironment

2023-03-29 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas closed FLINK-31552.
---
Resolution: Invalid

> Cannot use schema other than public with TableEnvironment
> -
>
> Key: FLINK-31552
> URL: https://issues.apache.org/jira/browse/FLINK-31552
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Priority: Major
>
> Cannot use a schema other than *public* in 
> TableEnvironment+flink-jdbc-connector.
> Postgres:
>  
> {noformat}
> psql (15.1)
> Type "help" for help.
> matriv=> create schema myschema;
> CREATE SCHEMA
> matriv=> create table myschema.t1(a int);
> CREATE TABLE
> matriv=> insert into myschema.t1(a) values (1), (2);
> INSERT 0 2
> matriv=> select * from myschema.t1;
>  a  
> ---
>  1
>  2
> (2 rows)
> {noformat}
> {noformat}
> EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().inStreamingMode().build();
> TableEnvironment tableEnv = TableEnvironment.create(settings);
> String name= "my_catalog";
> String defaultDatabase = "matriv";
> String username= "matriv";
> String password= "matriv";
> String baseUrl = "jdbc:postgresql://localhost:5432";
> JdbcCatalog catalog = new JdbcCatalog(name, defaultDatabase, username, 
> password, baseUrl);
> tableEnv.registerCatalog("my_catalog", catalog);
> // set the JdbcCatalog as the current catalog of the session
> tableEnv.listTables();
> tableEnv.executeSql("select * from myschema.t1").print();{noformat}
> Exception:
> {noformat}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 15 to line 1, column 25: Object 
> 'myschema' not found
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
>     at 
> io.crate.streaming.TaxiRidesStreamingJob.main(TaxiRidesStreamingJob.java:109)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 15 to line 1, column 25: Object 'myschema' not found
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:179)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:184)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3085)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3070)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3335)
>     at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>     at 
> 

[jira] [Assigned] (FLINK-31552) Cannot use schema other than public with TableEnvironment

2023-03-29 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-31552:
---

Assignee: Marios Trivyzas

> Cannot use schema other than public with TableEnvironment
> -
>
> Key: FLINK-31552
> URL: https://issues.apache.org/jira/browse/FLINK-31552
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Cannot use a schema other than *public* in 
> TableEnvironment+flink-jdbc-connector.
> Postgres:
>  
> {noformat}
> psql (15.1)
> Type "help" for help.
> matriv=> create schema myschema;
> CREATE SCHEMA
> matriv=> create table myschema.t1(a int);
> CREATE TABLE
> matriv=> insert into myschema.t1(a) values (1), (2);
> INSERT 0 2
> matriv=> select * from myschema.t1;
>  a  
> ---
>  1
>  2
> (2 rows)
> {noformat}
> {noformat}
> EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().inStreamingMode().build();
> TableEnvironment tableEnv = TableEnvironment.create(settings);
> String name= "my_catalog";
> String defaultDatabase = "matriv";
> String username= "matriv";
> String password= "matriv";
> String baseUrl = "jdbc:postgresql://localhost:5432";
> JdbcCatalog catalog = new JdbcCatalog(name, defaultDatabase, username, 
> password, baseUrl);
> tableEnv.registerCatalog("my_catalog", catalog);
> // set the JdbcCatalog as the current catalog of the session
> tableEnv.listTables();
> tableEnv.executeSql("select * from myschema.t1").print();{noformat}
> Exception:
> {noformat}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 15 to line 1, column 25: Object 
> 'myschema' not found
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
>     at 
> io.crate.streaming.TaxiRidesStreamingJob.main(TaxiRidesStreamingJob.java:109)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 15 to line 1, column 25: Object 'myschema' not found
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:179)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:184)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3085)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3070)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3335)
>     at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>     at 
> 

[jira] [Commented] (FLINK-31552) Cannot use schema other than public with TableEnvironment

2023-03-29 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706378#comment-17706378
 ] 

Marios Trivyzas commented on FLINK-31552:
-

This is invalid, the schema.table part must be escaped, and then it works:
{noformat}
tableEnv.executeSql("select * from `myschema.t1`").print();{noformat}

> Cannot use schema other than public with TableEnvironment
> -
>
> Key: FLINK-31552
> URL: https://issues.apache.org/jira/browse/FLINK-31552
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Priority: Major
>
> Cannot use a schema other than *public* in 
> TableEnvironment+flink-jdbc-connector.
> Postgres:
>  
> {noformat}
> psql (15.1)
> Type "help" for help.
> matriv=> create schema myschema;
> CREATE SCHEMA
> matriv=> create table myschema.t1(a int);
> CREATE TABLE
> matriv=> insert into myschema.t1(a) values (1), (2);
> INSERT 0 2
> matriv=> select * from myschema.t1;
>  a  
> ---
>  1
>  2
> (2 rows)
> {noformat}
> {noformat}
> EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().inStreamingMode().build();
> TableEnvironment tableEnv = TableEnvironment.create(settings);
> String name= "my_catalog";
> String defaultDatabase = "matriv";
> String username= "matriv";
> String password= "matriv";
> String baseUrl = "jdbc:postgresql://localhost:5432";
> JdbcCatalog catalog = new JdbcCatalog(name, defaultDatabase, username, 
> password, baseUrl);
> tableEnv.registerCatalog("my_catalog", catalog);
> // set the JdbcCatalog as the current catalog of the session
> tableEnv.listTables();
> tableEnv.executeSql("select * from myschema.t1").print();{noformat}
> Exception:
> {noformat}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 15 to line 1, column 25: Object 
> 'myschema' not found
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
>     at 
> io.crate.streaming.TaxiRidesStreamingJob.main(TaxiRidesStreamingJob.java:109)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 15 to line 1, column 25: Object 'myschema' not found
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
>     at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:179)
>     at 
> org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:184)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3085)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3070)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3335)
>     at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> 

[jira] [Commented] (FLINK-31553) Choose Catalog, non-url based

2023-03-29 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706276#comment-17706276
 ] 

Marios Trivyzas commented on FLINK-31553:
-

Yep, I agree, we need to keep the URL-based discovery for a Dialect -> Catalog 
-> TypeMapping, etc., not only for compatibility reasons but also for ease of 
use, since with `jdbc:postgresql` a user would expect to get the PGDialect by 
default. On top of that, we can add a mechanism like the config param you've 
mentioned, to be able to use the same driver but a different 
Dialect -> Catalog -> etc. (see the sketch below).
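To make that concrete, a hedged sketch of what such a catalog definition could 
look like. The {{'dialect'}} option is hypothetical (it is the proposal here, 
not an existing option), and the remaining option names are only meant to 
mirror the existing JDBC catalog options, so treat them as illustrative:
{code:java}
// Hypothetical catalog DDL: everything except 'dialect' is intended to
// mirror the existing JDBC catalog options; 'dialect' is the proposed
// manual override for the URL-based discovery.
tableEnv.executeSql(
        "CREATE CATALOG crate_catalog WITH (\n"
                + "  'type' = 'jdbc',\n"
                + "  'base-url' = 'jdbc:postgresql://localhost:5432',\n"
                + "  'default-database' = 'doc',\n"
                + "  'username' = 'crate',\n"
                + "  'password' = '',\n"
                + "  'dialect' = 'CrateDB'\n" // hypothetical override
                + ")");
{code}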

 

> Choose Catalog, non-url based
> -
>
> Key: FLINK-31553
> URL: https://issues.apache.org/jira/browse/FLINK-31553
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, the *Catalog*/*Dialect* (etc.) is chosen automatically 
> based on the URL provided. This takes place in 
> *JdbcDialectFactory#load*, using 
> *JdbcDialectFactory#acceptsURL*, so for a URL 
> *jdbc:postgresql://...* the PostgresCatalog/Dialect is used. *CrateDB* uses 
> the same driver and URL but needs its own Catalog/Dialect (etc.), as it has 
> its own stack of internal tables, type system, etc. (see 
> https://issues.apache.org/jira/browse/FLINK-31551). So if a user wants to use 
> *CrateDB*, they currently need to use the legacy CrateDB driver, which uses 
> the jdbc:crate://... URL.
> Ideally, there should be another way, not only URL-based, that allows the 
> user to keep a given URL but choose the dialect manually. It could be a 
> parameter on the catalog definition or even a special URL parameter like 
> *?dialect=CrateDB*.
>  
> Other DBs that implement the PostgreSQL wire protocol and are compatible 
> with the PostgreSQL JDBC driver, but need their own Catalog/Dialect etc., 
> would benefit from this (e.g., CockroachDB).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31553) Choose Catalog, non-url based

2023-03-27 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705407#comment-17705407
 ] 

Marios Trivyzas commented on FLINK-31553:
-

[~libenchao]  [~eskabetxe] Please take a look at this one, and provide any 
suggestions.

I would be happy to work on it.

> Choose Catalog, non-url based
> -
>
> Key: FLINK-31553
> URL: https://issues.apache.org/jira/browse/FLINK-31553
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, the *Catalog*/*Dialect* (etc.) is chosen automatically 
> based on the URL provided. This takes place in 
> *JdbcDialectFactory#load*, using 
> *JdbcDialectFactory#acceptsURL*, so for a URL 
> *jdbc:postgresql://...* the PostgresCatalog/Dialect is used. *CrateDB* uses 
> the same driver and URL but needs its own Catalog/Dialect (etc.), as it has 
> its own stack of internal tables, type system, etc. (see 
> https://issues.apache.org/jira/browse/FLINK-31551). So if a user wants to use 
> *CrateDB*, they currently need to use the legacy CrateDB driver, which uses 
> the jdbc:crate://... URL.
> Ideally, there should be another way, not only URL-based, that allows the 
> user to keep a given URL but choose the dialect manually. It could be a 
> parameter on the catalog definition or even a special URL parameter like 
> *?dialect=CrateDB*.
>  
> Other DBs that implement the PostgreSQL wire protocol and are compatible 
> with the PostgreSQL JDBC driver, but need their own Catalog/Dialect etc., 
> would benefit from this (e.g., CockroachDB).
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-26 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17705130#comment-17705130
 ] 

Marios Trivyzas commented on FLINK-31551:
-

Thank you [~libenchao] !

> Support CrateDB in JDBC Connector
> -
>
> Key: FLINK-31551
> URL: https://issues.apache.org/jira/browse/FLINK-31551
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Currently PostgreSQL is supported, but PostgresCatalog, along with all the 
> relevant classes, doesn't support CrateDB, so a new stack must be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-24 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704602#comment-17704602
 ] 

Marios Trivyzas commented on FLINK-31551:
-

[~libenchao] could you please take a look?

> Support CrateDB in JDBC Connector
> -
>
> Key: FLINK-31551
> URL: https://issues.apache.org/jira/browse/FLINK-31551
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Currently PostgreSQL is supported, but PostgresCatalog, along with all the 
> relevant classes, doesn't support CrateDB, so a new stack must be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31553) Choose Catalog, non-url based

2023-03-21 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-31553:
---

 Summary: Choose Catalog, non-url based
 Key: FLINK-31553
 URL: https://issues.apache.org/jira/browse/FLINK-31553
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: Marios Trivyzas


Currently, the *Catalog*/*Dialect* (etc.) is chosen automatically based 
on the URL provided. This takes place in *JdbcDialectFactory#load*, using 
*JdbcDialectFactory#acceptsURL*, so for a URL *jdbc:postgresql://...* the 
PostgresCatalog/Dialect is used. *CrateDB* uses the same driver and URL but 
needs its own Catalog/Dialect (etc.), as it has its own stack of internal 
tables, type system, etc. (see 
https://issues.apache.org/jira/browse/FLINK-31551). So if a user wants to use 
*CrateDB*, they currently need to use the legacy CrateDB driver, which uses 
the jdbc:crate://... URL.

Ideally, there should be another way, not only URL-based, that allows the user 
to keep a given URL but choose the dialect manually. It could be a parameter 
on the catalog definition or even a special URL parameter like 
*?dialect=CrateDB*.

Other DBs that implement the PostgreSQL wire protocol and are compatible 
with the PostgreSQL JDBC driver, but need their own Catalog/Dialect etc., 
would benefit from this (e.g., CockroachDB).
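To illustrate the proposal, here is a minimal sketch of how an explicit 
dialect choice could override the URL-based discovery. The {{acceptsURL}} 
check mirrors the existing *JdbcDialectFactory* contract described above; 
{{dialectName()}} and the override parameter are the hypothetical addition:
{code:java}
import java.util.List;
import java.util.Optional;

// Sketch only: acceptsURL mirrors the existing JdbcDialectFactory contract,
// while dialectName() and the override parameter are the proposed addition.
interface DialectFactory {
    boolean acceptsURL(String url); // today: the URL prefix decides
    String dialectName();           // hypothetical: a stable name, e.g. "CrateDB"
    Object createDialect();         // simplified stand-in for JdbcDialect
}

class DialectLoader {
    static Object load(String url, Optional<String> override, List<DialectFactory> factories) {
        for (DialectFactory factory : factories) {
            boolean matches = override
                    // explicit choice (catalog option or ?dialect=... URL parameter) wins
                    .map(name -> name.equalsIgnoreCase(factory.dialectName()))
                    // otherwise fall back to today's URL-based discovery
                    .orElseGet(() -> factory.acceptsURL(url));
            if (matches) {
                return factory.createDialect();
            }
        }
        throw new IllegalStateException("No dialect factory matches " + url);
    }
}
{code}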

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31552) Cannot use schema other than public with TableEnvironment

2023-03-21 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-31552:
---

 Summary: Cannot use schema other than public with TableEnvironment
 Key: FLINK-31552
 URL: https://issues.apache.org/jira/browse/FLINK-31552
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / JDBC
Reporter: Marios Trivyzas


Cannot use a schema other than *public* in 
TableEnvironment+flink-jdbc-connector.

Postgres:

 
{noformat}
psql (15.1)
Type "help" for help.


matriv=> create schema myschema;
CREATE SCHEMA

matriv=> create table myschema.t1(a int);
CREATE TABLE

matriv=> insert into myschema.t1(a) values (1), (2);
INSERT 0 2

matriv=> select * from myschema.t1;
 a  
---
 1
 2
(2 rows)

{noformat}
{noformat}
EnvironmentSettings settings = 
EnvironmentSettings.newInstance().inStreamingMode().build();
TableEnvironment tableEnv = TableEnvironment.create(settings);
String name= "my_catalog";
String defaultDatabase = "matriv";
String username= "matriv";
String password= "matriv";
String baseUrl = "jdbc:postgresql://localhost:5432";

JdbcCatalog catalog = new JdbcCatalog(name, defaultDatabase, username, 
password, baseUrl);
tableEnv.registerCatalog("my_catalog", catalog);

// set the JdbcCatalog as the current catalog of the session
tableEnv.listTables();
tableEnv.executeSql("select * from myschema.t1").print();{noformat}
Exception:
{noformat}
Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL 
validation failed. From line 1, column 15 to line 1, column 25: Object 
'myschema' not found
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:186)
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:113)
    at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:261)
    at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:723)
    at 
io.crate.streaming.TaxiRidesStreamingJob.main(TaxiRidesStreamingJob.java:109)
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 15 to line 1, column 25: Object 'myschema' not found
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
    at 
org.apache.calcite.sql.validate.IdentifierNamespace.resolveImpl(IdentifierNamespace.java:179)
    at 
org.apache.calcite.sql.validate.IdentifierNamespace.validateImpl(IdentifierNamespace.java:184)
    at 
org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3085)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3070)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3335)
    at 
org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
    at 
org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
    at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:182)
    ... 5 more
{noformat}
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-21 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17703301#comment-17703301
 ] 

Marios Trivyzas commented on FLINK-31551:
-

FYI: [~martijnvisser] 

> Support CrateDB in JDBC Connector
> -
>
> Key: FLINK-31551
> URL: https://issues.apache.org/jira/browse/FLINK-31551
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Currently PostgreSQL is supported, but PostgresCatalog, along with all the 
> relevant classes, doesn't support CrateDB, so a new stack must be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-21 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-31551:
---

Assignee: Marios Trivyzas

> Support CrateDB in JDBC Connector
> -
>
> Key: FLINK-31551
> URL: https://issues.apache.org/jira/browse/FLINK-31551
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Currently PostgreSQL is supported, but PostgresCatalog, along with all the 
> relevant classes, doesn't support CrateDB, so a new stack must be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31551) Support CrateDB in JDBC Connector

2023-03-21 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-31551:
---

 Summary: Support CrateDB in JDBC Connector
 Key: FLINK-31551
 URL: https://issues.apache.org/jira/browse/FLINK-31551
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Reporter: Marios Trivyzas


Currently PostgreSQL is supported, but PostgresCatalog, along with all the 
relevant classes, doesn't support CrateDB, so a new stack must be implemented.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27438) SQL validation failed when constructing a map array

2022-05-09 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17533672#comment-17533672
 ] 

Marios Trivyzas commented on FLINK-27438:
-

I'm not a committer myself, so we'd need to wait for Martijn or Timo.

> SQL validation failed when constructing a map array
> ---
>
> Key: FLINK-27438
> URL: https://issues.apache.org/jira/browse/FLINK-27438
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Wei Zhong
>Assignee: Roman Boyko
>Priority: Major
>  Labels: pull-request-available
>
> Exception: 
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. Unsupported type when convertTypeToSpec: MAP
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:185)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:110)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:237)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:695)
>     at 
> org.apache.flink.table.planner.plan.stream.sql.LegacyTableFactoryTest.<init>(LegacyTableFactoryTest.java:35)
>     at 
> org.apache.flink.table.planner.plan.stream.sql.LegacyTableFactoryTest.main(LegacyTableFactoryTest.java:49)
> Caused by: java.lang.UnsupportedOperationException: Unsupported type when 
> convertTypeToSpec: MAP
>     at 
> org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1059)
>     at 
> org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1081)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
>     at 
> org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
>     at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:449)
>     at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:531)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5716)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5703)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1736)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1727)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:421)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4061)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3347)
>     at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:180)
>     ... 6 more {code}
> How to reproduce:
> {code:java}
> tableEnv.executeSql("select array[map['A', 'AA'], map['B', 'BB'], map['C', 
> CAST(NULL AS STRING)]] from (VALUES ('a'))").print(); {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27438) SQL validation failed when constructing a map array

2022-05-06 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17532829#comment-17532829
 ] 

Marios Trivyzas commented on FLINK-27438:
-

Reviewed!

> SQL validation failed when constructing a map array
> ---
>
> Key: FLINK-27438
> URL: https://issues.apache.org/jira/browse/FLINK-27438
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Wei Zhong
>Assignee: Roman Boyko
>Priority: Major
>  Labels: pull-request-available
>
> Exception: 
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. Unsupported type when convertTypeToSpec: MAP
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:185)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:110)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:237)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
>     at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:695)
>     at 
> org.apache.flink.table.planner.plan.stream.sql.LegacyTableFactoryTest.<init>(LegacyTableFactoryTest.java:35)
>     at 
> org.apache.flink.table.planner.plan.stream.sql.LegacyTableFactoryTest.main(LegacyTableFactoryTest.java:49)
> Caused by: java.lang.UnsupportedOperationException: Unsupported type when 
> convertTypeToSpec: MAP
>     at 
> org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1059)
>     at 
> org.apache.calcite.sql.type.SqlTypeUtil.convertTypeToSpec(SqlTypeUtil.java:1081)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.castTo(SqlValidatorUtils.java:82)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForMultisetConstructor(SqlValidatorUtils.java:74)
>     at 
> org.apache.flink.table.planner.functions.utils.SqlValidatorUtils.adjustTypeForArrayConstructor(SqlValidatorUtils.java:39)
>     at 
> org.apache.flink.table.planner.functions.sql.SqlArrayConstructor.inferReturnType(SqlArrayConstructor.java:44)
>     at 
> org.apache.calcite.sql.SqlOperator.validateOperands(SqlOperator.java:449)
>     at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:531)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5716)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5703)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1736)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1727)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:421)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4061)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3347)
>     at 
> org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
>     at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
>     at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:180)
>     ... 6 more {code}
> How to reproduce:
> {code:java}
> tableEnv.executeSql("select array[map['A', 'AA'], map['B', 'BB'], map['C', 
> CAST(NULL AS STRING)]] from (VALUES ('a'))").print(); {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27465) AvroRowDeserializationSchema.convertToTimestamp fails with negative nano seconds

2022-05-02 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17530698#comment-17530698
 ] 

Marios Trivyzas commented on FLINK-27465:
-

Nevermind, I saw the thread in the mailing list.

> AvroRowDeserializationSchema.convertToTimestamp fails with negative nano 
> seconds
> 
>
> Key: FLINK-27465
> URL: https://issues.apache.org/jira/browse/FLINK-27465
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> The issue is exposed due to time zone dependency in 
> AvroRowDeSerializationSchemaTest.
>  
> The root cause is that convertToTimestamp attempts to set a negative value 
> with java.sql.Timestamp.setNanos



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27465) AvroRowDeserializationSchema.convertToTimestamp fails with negative nano seconds

2022-05-02 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17530685#comment-17530685
 ] 

Marios Trivyzas commented on FLINK-27465:
-

Do you have an example?

As far as I know Avro doesn't currently support nanos (not even micros), only 
millis.

> AvroRowDeserializationSchema.convertToTimestamp fails with negative nano 
> seconds
> 
>
> Key: FLINK-27465
> URL: https://issues.apache.org/jira/browse/FLINK-27465
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.15.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> The issue is exposed due to time zone dependency in 
> AvroRowDeSerializationSchemaTest.
>  
> The root cause is that convertToTimestamp attempts to set a negative value 
> with java.sql.Timestamp.setNanos
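A minimal standalone sketch of the failure mode described above: 
{{java.sql.Timestamp#setNanos}} rejects values outside the [0, 999999999] 
range, so a conversion that computes a negative nano component throws:
{code:java}
import java.sql.Timestamp;

// Minimal reproduction of the root cause: setNanos rejects negative values
// with an IllegalArgumentException.
public class SetNanosDemo {
    public static void main(String[] args) {
        Timestamp ts = new Timestamp(-1L); // an instant just before the epoch
        try {
            ts.setNanos(-500_000); // a negative nano component, as in the conversion
        } catch (IllegalArgumentException e) {
            System.out.println("setNanos failed: " + e.getMessage());
        }
    }
}
{code}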



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27331) Support Avro microsecond precision for TIME

2022-04-20 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-27331:

Description: Avro spec: 
https://avro.apache.org/docs/1.8.0/spec.html#Time+%28microsecond+precision%29

> Support Avro microsecond precision for TIME
> ---
>
> Key: FLINK-27331
> URL: https://issues.apache.org/jira/browse/FLINK-27331
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Marios Trivyzas
>Priority: Major
>
> Avro spec: 
> https://avro.apache.org/docs/1.8.0/spec.html#Time+%28microsecond+precision%29



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-23589) Support Avro Microsecond precision

2022-04-20 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17525060#comment-17525060
 ] 

Marios Trivyzas commented on FLINK-23589:
-

Opened https://issues.apache.org/jira/browse/FLINK-27331 to add support for 
microseconds for TIME
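For reference, a small sketch (using the standard Avro Java API) of the 
{{time-micros}} logical type that the spec defines for microsecond-precision 
TIME; it annotates a {{long}} counting microseconds after midnight:
{code:java}
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

// Builds the Avro schema {"type":"long","logicalType":"time-micros"}.
public class TimeMicrosSchema {
    public static void main(String[] args) {
        Schema timeMicros =
                LogicalTypes.timeMicros().addToSchema(Schema.create(Schema.Type.LONG));
        System.out.println(timeMicros);
    }
}
{code}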

> Support Avro Microsecond precision
> --
>
> Key: FLINK-23589
> URL: https://issues.apache.org/jira/browse/FLINK-23589
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Robert Metzger
>Assignee: Marios Trivyzas
>Priority: Major
> Fix For: 1.16.0
>
>
> This was raised by a user: 
> https://lists.apache.org/thread.html/r463f748358202d207e4bf9c7fdcb77e609f35bbd670dbc5278dd7615%40%3Cuser.flink.apache.org%3E
> Here's the Avro spec: 
> https://avro.apache.org/docs/1.8.0/spec.html#Timestamp+%28microsecond+precision%29



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27331) Support Avro microsecond precision for TIME

2022-04-20 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-27331:
---

 Summary: Support Avro microsecond precision for TIME
 Key: FLINK-27331
 URL: https://issues.apache.org/jira/browse/FLINK-27331
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Marios Trivyzas






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-23589) Support Avro Microsecond precision

2022-04-20 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17525059#comment-17525059
 ] 

Marios Trivyzas commented on FLINK-23589:
-

Support for millis/micros for TIME (not TIMESTAMP) doesn't work correctly 
because of https://issues.apache.org/jira/browse/FLINK-17224

> Support Avro Microsecond precision
> --
>
> Key: FLINK-23589
> URL: https://issues.apache.org/jira/browse/FLINK-23589
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Robert Metzger
>Assignee: Marios Trivyzas
>Priority: Major
> Fix For: 1.16.0
>
>
> This was raised by a user: 
> https://lists.apache.org/thread.html/r463f748358202d207e4bf9c7fdcb77e609f35bbd670dbc5278dd7615%40%3Cuser.flink.apache.org%3E
> Here's the Avro spec: 
> https://avro.apache.org/docs/1.8.0/spec.html#Timestamp+%28microsecond+precision%29



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27312) Implicit casting should be introduced with Rules during planning

2022-04-19 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-27312:
---

 Summary: Implicit casting should be introduced with Rules during 
planning
 Key: FLINK-27312
 URL: https://issues.apache.org/jira/browse/FLINK-27312
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Marios Trivyzas


Currently we do implicit casting directly in the code generation, and the plan 
is not in sync with what is happening under the hood (in the generated code).

Ideally, implicit casting should be introduced where necessary at an earlier 
stage, during planning, with a Rule, so that one can see exactly from the 
produced plan which operations are introduced automatically (implicit casts) 
to make the SQL executable.

See for example: https://issues.apache.org/jira/browse/FLINK-27247



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27247) ScalarOperatorGens.numericCasting is not compatible with legacy behavior

2022-04-19 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17524251#comment-17524251
 ] 

Marios Trivyzas commented on FLINK-27247:
-

We can fix the issue with the quick fix in your PR.

We have to open an issue, so that this kind of implicit casting is introduced 
with a Rule during the analysis/planning phase, and not in the code generation.

This way, it will be visible from the plan what kind of implicit cast is 
introduced, instead of just generating code for it under the hood.

> ScalarOperatorGens.numericCasting is not compatible with legacy behavior
> 
>
> Key: FLINK-27247
> URL: https://issues.apache.org/jira/browse/FLINK-27247
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Priority: Minor
>  Labels: pull-request-available
>
> Add the following test cases in ScalarFunctionsTest:
> {code:java}
> // code placeholder
> @Test
> def test(): Unit ={
>   testSqlApi("rand(1) + 1","")
> } {code}
> it will throw the following exception:
> {code:java}
> // code placeholder
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported casting 
> from DOUBLE to DOUBLE NOT NULL.
>     at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.numericCasting(ScalarOperatorGens.scala:1734)
>     at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateBinaryArithmeticOperator(ScalarOperatorGens.scala:85)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:507)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:481)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$1(ExprCodeGenerator.scala:478)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:469)
> ... {code}
> This is because, in ScalarOperatorGens#numericCasting, FLINK-24779 lost the 
> logic that in some cases there is no need to cast the left and right types.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-19 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-27212:

Component/s: (was: Table SQL / Runtime)

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Please add a test in CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-19 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-27212:

Component/s: Table SQL / Planner

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Please add a test in CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-19 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524190#comment-17524190
 ] 

Marios Trivyzas commented on FLINK-27212:
-

Here is the commit for this printing change: 
https://github.com/apache/flink/pull/19507/commits/58f60562fb2745437c5938332698967550c4ee1c

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-19 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524179#comment-17524179
 ] 

Marios Trivyzas commented on FLINK-27212:
-

I propose, when we do {{CAST(<binary> AS STRING)}}, to also use UTF-8 
decoding, so we can have a seamless roundtrip: {{CAST(CAST('<string>' AS 
BYTES) AS STRING)}} would return the original '<string>'.

But when we do a {{SELECT CAST('<string>' AS BYTES)}} and we print the 
result, we will print it in a {{x'ABC324Fd32'}} format, which can then be easily 
copy-pasted as a binary literal in another SQL query:
{noformat}
tableEnv.executeSql("SELECT CAST('Marios Timo' AS BYTES)").print();{noformat}
will print:
{noformat}
+----------------------------+
|                     EXPR$0 |
+----------------------------+
| x'4d6172696f732054696d6f' |
+----------------------------+{noformat}
and then:
{noformat}
tableEnv.executeSql("SELECT x'4d6172696f732054696d6f'"){noformat}
which creates the desired binary column.
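
For reference, a minimal plain-Java sketch of the proposed hex formatting (an 
illustration of the intended output, not the actual runtime code):
{code:java}
// Hex-encode a byte[] into the x'...' literal format proposed above.
static String toHexLiteral(byte[] bytes) {
    StringBuilder sb = new StringBuilder("x'");
    for (byte b : bytes) {
        sb.append(String.format("%02x", b));
    }
    return sb.append('\'').toString();
}

// toHexLiteral("Marios Timo".getBytes(java.nio.charset.StandardCharsets.UTF_8))
// returns x'4d6172696f732054696d6f'
{code}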

 

[~jark] [~wenlong.lwl] what do you think?

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.16.0
>
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-18 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17523639#comment-17523639
 ] 

Marios Trivyzas commented on FLINK-27212:
-

https://github.com/apache/flink/pull/19507

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-18 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17523606#comment-17523606
 ] 

Marios Trivyzas commented on FLINK-27212:
-

Ok, I agree, I can revert the change and just produce the UTF-8 byte[] from any 
string when casting a CHAR/VARCHAR/STRING to BYTES/BINARY/VARBINARY.

We can introduce a dedicated function if we want a hex cast similar to an 
{{x'AB0cDF28'}} literal.
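
A minimal illustration of the UTF-8 encode/decode pair in plain Java (an 
assumption of the intended semantics, not the actual cast rule code):
{code:java}
import java.nio.charset.StandardCharsets;

// CAST(<string> AS BYTES): UTF-8 encode; CAST(<bytes> AS STRING): UTF-8 decode.
byte[] encoded = "abcde".getBytes(StandardCharsets.UTF_8);    // {97, 98, 99, 100, 101}
String decoded = new String(encoded, StandardCharsets.UTF_8); // "abcde"
{code}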

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
> Fix For: 1.16.0
>
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27232) .scalafmt.conf cant be located when running in sub-directory

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522372#comment-17522372
 ] 

Marios Trivyzas commented on FLINK-27232:
-

I get your points; I'm not insisting, we can leave it as is.

> .scalafmt.conf cant be located when running in sub-directory
> 
>
> Key: FLINK-27232
> URL: https://issues.apache.org/jira/browse/FLINK-27232
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> cd flink-scala
> mvn validate
> Error:  Failed to execute goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check (spotless-check) on 
> project flink-scala_2.12: Execution spotless-check of goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check failed: Unable to 
> locate file with path: .scalafmt.conf: Could not find resource 
> '.scalafmt.conf'. -> [Help 1]
> Currently breaks the docs build.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27247) ScalarOperatorGens.numericCasting is not compatible with legacy behavior

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522367#comment-17522367
 ] 

Marios Trivyzas commented on FLINK-27247:
-

Commented on the PR as well. I think we need to fix the root of the issue, which 
is the implicit cast introduced to cast from a nullable to a non-nullable type, 
and not just allow this in the code gen.
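
For context, the nullability distinction in question, expressed with the 
{{DataTypes}} API (a small illustration, not the planner code):
{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

// The planner distinguishes DOUBLE (nullable) from DOUBLE NOT NULL; the
// failing plan asks code gen for a cast between exactly these two types.
DataType nullable = DataTypes.DOUBLE();           // DOUBLE
DataType notNull = DataTypes.DOUBLE().notNull();  // DOUBLE NOT NULL
{code}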

> ScalarOperatorGens.numericCasting is not compatible with legacy behavior
> 
>
> Key: FLINK-27247
> URL: https://issues.apache.org/jira/browse/FLINK-27247
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyang
>Priority: Minor
>  Labels: pull-request-available
>
> Add the following test cases in ScalarFunctionsTest:
> {code:java}
> // code placeholder
> @Test
> def test(): Unit = {
>   testSqlApi("rand(1) + 1", "")
> } {code}
> it will throw the following exception:
> {code:java}
> // code placeholder
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported casting 
> from DOUBLE to DOUBLE NOT NULL.
>     at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.numericCasting(ScalarOperatorGens.scala:1734)
>     at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateBinaryArithmeticOperator(ScalarOperatorGens.scala:85)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:507)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:481)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:57)
>     at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$1(ExprCodeGenerator.scala:478)
>     at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
>     at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>     at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:233)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:469)
> ... {code}
> This is because in ScalarOperatorGens#numericCasting, FLINK-24779 lost the 
> logic that in some cases there is no need to cast the left and right types.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-27232) .scalafmt.conf cant be located when running in sub-directory

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522310#comment-17522310
 ] 

Marios Trivyzas edited comment on FLINK-27232 at 4/14/22 1:48 PM:
--

[https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html#default-lifecycle]

I think that conceptually, checking the source files for checkstyle should be 
done in the {{process-sources}} phase and not in the {{validate}} or 
{{initialize}} phases, which are for setting things up.

Furthermore, the {{directory-maven-plugin}} should maybe be bound to 
{{validate}}, which is the 1st phase, to make sure that the {{rootDir}} property 
is available from the {{initialize}} phase on.


was (Author: matriv):
[https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html#default-lifecycle]

I think that conceptually, checking the source files for checkstyle should be 
done in the {{process-sources}} phase and not in the {{validate}} or 
{{initialize}} phases, which are for setting things up.

Furthermore, the {{directory-maven-plugin}} should maybe be bound to 
{{validate}}, which is the 1st phase, to make sure that it's available from 
the {{initialize}} phase on.

> .scalafmt.conf cant be located when running in sub-directory
> 
>
> Key: FLINK-27232
> URL: https://issues.apache.org/jira/browse/FLINK-27232
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> cd flink-scala
> mvn validate
> Error:  Failed to execute goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check (spotless-check) on 
> project flink-scala_2.12: Execution spotless-check of goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check failed: Unable to 
> locate file with path: .scalafmt.conf: Could not find resource 
> '.scalafmt.conf'. -> [Help 1]
> Currently breaks the docs build.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522314#comment-17522314
 ] 

Marios Trivyzas commented on FLINK-27212:
-

We chose, for the new behavior (Legacy=DISABLED), to accept only valid hex 
strings, to be consistent with literals like {{x'ABC0df2E'}}:
{noformat}
tableEnv.executeSql("SELECT x'abcde'").print();{noformat}
throws:
{noformat}
Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL 
validation failed. From line 1, column 8 to line 1, column 15: Binary literal 
string must contain an even number of hexits
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:184)
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:109)
    at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:237)
    at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
    at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:695)
    at 
org.apache.flink.table.examples.java.basics.WordCountSQLExample.main(WordCountSQLExample.java:43)
Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
column 8 to line 1, column 15: Binary literal string must contain an even 
number of hexits
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:883)
    at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:868)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4867)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateLiteral(SqlValidatorImpl.java:2960)
    at 
org.apache.flink.table.planner.calcite.FlinkCalciteSqlValidator.validateLiteral(FlinkCalciteSqlValidator.java:86)
    at org.apache.calcite.sql.SqlLiteral.validate(SqlLiteral.java:554)
    at org.apache.calcite.sql.SqlNode.validateExpr(SqlNode.java:273)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateExpr(SqlValidatorImpl.java:4112)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4088)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3347)
    at 
org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
    at 
org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:975)
    at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:232)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:952)
    at 
org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:704)
    at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:180)
    ... 5 more
Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Binary 
literal string must contain an even number of hexits
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at 
org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:467)
    at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:560)
    ... 23 more{noformat}
which comes from Calcite.

In my opinion, having consistency between literals and columns is very important.

I would vote to keep the new behaviour of CAST(<string> AS 
BYTES/BINARY/VARBINARY) as is currently in master and 1.15, and introduce a 
special function that converts a string to a byte[] with an optional arg that 
defines the desired charset (if omitted, defaults to UTF-8), i.e.:
{noformat}
to_binary(<string>, [<charset>]){noformat}
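
A minimal sketch of such a function in plain Java ({{to_binary}} is only the 
proposal here, not an existing Flink function; name and signature are 
assumptions):
{code:java}
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// to_binary(<string>, [<charset>]): encode a string with the given charset,
// defaulting to UTF-8 when the optional charset argument is omitted.
static byte[] toBinary(String value, String charsetName) {
    Charset charset =
            charsetName == null ? StandardCharsets.UTF_8 : Charset.forName(charsetName);
    return value.getBytes(charset);
}
{code}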
 

 

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: 

[jira] [Commented] (FLINK-27232) .scalafmt.conf cant be located when running in sub-directory

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522310#comment-17522310
 ] 

Marios Trivyzas commented on FLINK-27232:
-

[https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html#default-lifecycle]

I think that conceptually, checking the source files for checkstyle should be 
done in the {{process-sources}} phase and not in the {{validate}} or 
{{initialize}} phases, which are for setting things up.

Furthermore, the {{directory-maven-plugin}} should maybe be bound to 
{{validate}}, which is the 1st phase, to make sure that it's available from 
the {{initialize}} phase on.

> .scalafmt.conf cant be located when running in sub-directory
> 
>
> Key: FLINK-27232
> URL: https://issues.apache.org/jira/browse/FLINK-27232
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> cd flink-scala
> mvn validate
> Error:  Failed to execute goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check (spotless-check) on 
> project flink-scala_2.12: Execution spotless-check of goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check failed: Unable to 
> locate file with path: .scalafmt.conf: Could not find resource 
> '.scalafmt.conf'. -> [Help 1]
> Currently breaks the docs build.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27232) .scalafmt.conf cant be located when running in sub-directory

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522286#comment-17522286
 ] 

Marios Trivyzas commented on FLINK-27232:
-

Even if we come up with some solution that takes us back to the same workflow 
("autonomous" build inside any module), I think we should reconsider the phase 
binding, according to my findings here: 
https://github.com/apache/flink/pull/19472

> .scalafmt.conf cant be located when running in sub-directory
> 
>
> Key: FLINK-27232
> URL: https://issues.apache.org/jira/browse/FLINK-27232
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Build System
>Affects Versions: 1.16.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> cd flink-scala
> mvn validate
> Error:  Failed to execute goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check (spotless-check) on 
> project flink-scala_2.12: Execution spotless-check of goal 
> com.diffplug.spotless:spotless-maven-plugin:2.13.0:check failed: Unable to 
> locate file with path: .scalafmt.conf: Could not find resource 
> '.scalafmt.conf'. -> [Help 1]
> Currently breaks the docs build.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-14 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522195#comment-17522195
 ] 

Marios Trivyzas commented on FLINK-27212:
-

If we use the legacy cast behaviour flag and set it to ENABLED, then we support 
the previous behaviour, where a hex string is not required and the UTF-8 char 
encoding is simply used.
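
For example, a minimal sketch (assuming the 1.15 option key 
{{table.exec.legacy-cast-behaviour}} and an already created {{tableEnv}}):
{code:java}
// With the legacy behaviour enabled, CAST('abcde' AS VARBINARY(6))
// produces the UTF-8 bytes of 'abcde' instead of requiring a hex string.
tableEnv.getConfig()
        .getConfiguration()
        .setString("table.exec.legacy-cast-behaviour", "ENABLED");
tableEnv.executeSql("SELECT CAST('abcde' AS VARBINARY(6))").print();
{code}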

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Blocker
> Fix For: 1.16.0
>
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-13 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17521533#comment-17521533
 ] 

Marios Trivyzas commented on FLINK-27212:
-

You cannot CAST a string with an odd number of chars to a binary type 
(BYTES/BINARY/VARBINARY), as you need pairs of hex chars, each pair 
representing one byte.
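
A minimal sketch of the pairwise decoding (an illustration, not the actual 
{{EncodingUtils}} code):
{code:java}
// Every two hex chars form one byte, so an odd-length input like "abcde"
// cannot be decoded and fails with "Odd number of characters.".
static byte[] decodeHex(String hex) {
    if (hex.length() % 2 != 0) {
        throw new IllegalArgumentException("Odd number of characters.");
    }
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}
{code}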

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Major
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-27212) Failed to CAST('abcde', VARBINARY)

2022-04-13 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-27212:
---

Assignee: Marios Trivyzas

> Failed to CAST('abcde', VARBINARY)
> --
>
> Key: FLINK-27212
> URL: https://issues.apache.org/jira/browse/FLINK-27212
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Shengkai Fang
>Assignee: Marios Trivyzas
>Priority: Major
>
> Please add test in the CalcITCase
> {code:scala}
> @Test
>   def testCalc(): Unit = {
> val sql =
>   """
> |SELECT CAST('abcde' AS VARBINARY(6))
> |""".stripMargin
> val result = tEnv.executeSql(sql)
> print(result.getResolvedSchema)
> result.print()
>   }
> {code}
> The exception is 
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Odd number of 
> characters.
>   at 
> org.apache.flink.table.utils.EncodingUtils.decodeHex(EncodingUtils.java:203)
>   at StreamExecCalc$33.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:99)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:80)
>   at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:418)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:513)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collect(StreamSourceContexts.java:103)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:332)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-23589) Support Avro Microsecond precision

2022-04-12 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-23589:
---

Assignee: Marios Trivyzas

> Support Avro Microsecond precision
> --
>
> Key: FLINK-23589
> URL: https://issues.apache.org/jira/browse/FLINK-23589
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Robert Metzger
>Assignee: Marios Trivyzas
>Priority: Major
> Fix For: 1.15.0
>
>
> This was raised by a user: 
> https://lists.apache.org/thread.html/r463f748358202d207e4bf9c7fdcb77e609f35bbd670dbc5278dd7615%40%3Cuser.flink.apache.org%3E
> Here's the Avro spec: 
> https://avro.apache.org/docs/1.8.0/spec.html#Timestamp+%28microsecond+precision%29



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27121) Translate "Configuration#overview" paragraph and the code example in "Application Development > Table API & SQL" to Chinese

2022-04-12 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17521057#comment-17521057
 ] 

Marios Trivyzas commented on FLINK-27121:
-

[~MartijnVisser] Could you please take care of this?

> Translate "Configuration#overview" paragraph and the code example in 
> "Application Development > Table API & SQL" to Chinese
> ---
>
> Key: FLINK-27121
> URL: https://issues.apache.org/jira/browse/FLINK-27121
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Marios Trivyzas
>Assignee: LEI ZHOU
>Priority: Major
>  Labels: chinese-translation, pull-request-available
>
> After [https://github.com/apache/flink/pull/19387] is merged, we need to 
> update the translation for 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/table/config/#overview]
> The markdown file is located in 
> {noformat}
> flink/docs/content.zh/docs/dev/table/config.md{noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-16835) Replace TableConfig with Configuration

2022-04-11 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17520383#comment-17520383
 ] 

Marios Trivyzas commented on FLINK-16835:
-

Thank you for your precious help and guidance [~twalthr] !

> Replace TableConfig with Configuration
> --
>
> Key: FLINK-16835
> URL: https://issues.apache.org/jira/browse/FLINK-16835
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Marios Trivyzas
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.16.0
>
>
> In order to allow reading and writing of configuration from a file or 
> string-based properties, we should consider removing {{TableConfig}} and 
> fully rely on a Configuration-based object with {{ConfigOptions}}.
> This effort was partially already started, which is why 
> {{TableConfig.getConfiguration}} exists.
> However, we should clarify if we would like to have control and traceability 
> over layered configurations such as {{flink-conf.yaml < 
> StreamExecutionEnvironment < TableEnvironment < Query}}. Maybe the 
> {{Configuration}} class is not the right abstraction for this. 
> [~jark], [~twalthr], and [~fhueske] discussed the configuration options (see 
> comments below) and concluded with the following design:
> {code:java}
>   public static final ConfigOption<Duration> IDLE_STATE_RETENTION =
>       key("table.exec.state.ttl")
>           .durationType()
>           .defaultValue(Duration.ofMillis(0));
>   public static final ConfigOption<Integer> MAX_LENGTH_GENERATED_CODE =
>       key("table.generated-code.max-length")
>           .intType()
>           .defaultValue(64000);
>   public static final ConfigOption<String> LOCAL_TIME_ZONE =
>       key("table.local-time-zone")
>           .stringType()
>           .defaultValue(ZoneId.systemDefault().toString());
> {code}
> *Note*: The following {{TableConfig}} options are not preserved:
> * {{nullCheck}}: Flink will automatically enable null checks based on the 
> table schema ({{NOT NULL}} property)
> * {{decimalContext}}: this configuration is only used by the legacy planner 
> which will be removed in one of the next releases
> * {{maxIdleStateRetention}}: is automatically derived as 1.5* 
> {{idleStateRetention}} until StateTtlConfig is fully supported (at which 
> point only a single parameter is required).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-16835) Replace TableConfig with Configuration

2022-04-08 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519754#comment-17519754
 ] 

Marios Trivyzas commented on FLINK-16835:
-

[~twalthr] Can we close this issue now, as all of the subtasks have been 
completed?

> Replace TableConfig with Configuration
> --
>
> Key: FLINK-16835
> URL: https://issues.apache.org/jira/browse/FLINK-16835
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Marios Trivyzas
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> In order to allow reading and writing of configuration from a file or 
> string-based properties, we should consider removing {{TableConfig}} and 
> fully rely on a Configuration-based object with {{ConfigOptions}}.
> This effort was partially already started, which is why 
> {{TableConfig.getConfiguration}} exists.
> However, we should clarify if we would like to have control and traceability 
> over layered configurations such as {{flink-conf.yaml < 
> StreamExecutionEnvironment < TableEnvironment < Query}}. Maybe the 
> {{Configuration}} class is not the right abstraction for this. 
> [~jark], [~twalthr], and [~fhueske] discussed the configuration options (see 
> comments below) and concluded with the following design:
> {code:java}
>   public static final ConfigOption<Duration> IDLE_STATE_RETENTION =
>       key("table.exec.state.ttl")
>           .durationType()
>           .defaultValue(Duration.ofMillis(0));
>   public static final ConfigOption<Integer> MAX_LENGTH_GENERATED_CODE =
>       key("table.generated-code.max-length")
>           .intType()
>           .defaultValue(64000);
>   public static final ConfigOption<String> LOCAL_TIME_ZONE =
>       key("table.local-time-zone")
>           .stringType()
>           .defaultValue(ZoneId.systemDefault().toString());
> {code}
> *Note*: The following {{TableConfig}} options are not preserved:
> * {{nullCheck}}: Flink will automatically enable null checks based on the 
> table schema ({{NOT NULL}} property)
> * {{decimalContext}}: this configuration is only used by the legacy planner 
> which will be removed in one of the next releases
> * {{maxIdleStateRetention}}: is automatically derived as 1.5* 
> {{idleStateRetention}} until StateTtlConfig is fully supported (at which 
> point only a single parameter is required).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27054) Elasticsearch SQL connector SSL issue

2022-04-08 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519471#comment-17519471
 ] 

Marios Trivyzas commented on FLINK-27054:
-

https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/7.17/_encrypted_communication.html
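
For reference, a minimal sketch of the SSL setup on the underlying low-level 
REST client, following the linked docs (paths, host, and password are 
placeholders):
{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import org.apache.http.HttpHost;
import org.apache.http.ssl.SSLContexts;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;

// Trust the cluster's CA so the client can talk to an SSL-protected cluster.
KeyStore truststore = KeyStore.getInstance("pkcs12");
try (InputStream in = Files.newInputStream(Paths.get("/path/to/truststore.p12"))) {
    truststore.load(in, "changeit".toCharArray());
}
SSLContext sslContext =
        SSLContexts.custom().loadTrustMaterial(truststore, null).build();
RestClientBuilder builder =
        RestClient.builder(new HttpHost("localhost", 9200, "https"))
                .setHttpClientConfigCallback(hc -> hc.setSSLContext(sslContext));
{code}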

> Elasticsearch SQL connector SSL issue
> -
>
> Key: FLINK-27054
> URL: https://issues.apache.org/jira/browse/FLINK-27054
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: ricardo
>Priority: Major
>
> The current Flink ElasticSearch SQL connector 
> https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/
>  is missing SSL options, so it can't connect to ES clusters which require an 
> SSL certificate.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-27121) Translate "Configuration#overview" paragraph and the code example in "Application Development > Table API & SQL" to Chinese

2022-04-08 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-27121:
---

Assignee: LEI ZHOU

> Translate "Configuration#overview" paragraph and the code example in 
> "Application Development > Table API & SQL" to Chinese
> ---
>
> Key: FLINK-27121
> URL: https://issues.apache.org/jira/browse/FLINK-27121
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Marios Trivyzas
>Assignee: LEI ZHOU
>Priority: Major
>  Labels: chinese-translation
>
> After [https://github.com/apache/flink/pull/19387] is merged, we need to 
> update the translation for 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/table/config/#overview]
> The markdown file is located in 
> {noformat}
> flink/docs/content.zh/docs/dev/table/config.md{noformat}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-18556) Drop the unused options in TableConfig

2022-04-08 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-18556:
---

Assignee: Marios Trivyzas

> Drop the unused options in TableConfig
> --
>
> Key: FLINK-18556
> URL: https://issues.apache.org/jira/browse/FLINK-18556
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Marios Trivyzas
>Priority: Major
>
> As discussed in FLINK-16835, the following {{TableConfig}} options are not 
> preserved:
> * {{nullCheck}}: Flink will automatically enable null checks based on the 
> table schema ({{NOT NULL}} property)
> * {{decimalContext}}: this configuration is only used by the legacy planner 
> which will be removed in one of the next releases
> * {{maxIdleStateRetention}}: is automatically derived as 1.5* 
> {{idleStateRetention}} until StateTtlConfig is fully supported (at which 
> point only a single parameter is required).
> The blink planner should remove the dependencies on {{nullCheck}} and 
> {{maxIdleStateRetention}} first. Besides, this may be blocked by when to drop 
> the old planner, because the old planner is still using them. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26190) Remove getTableConfig from ExecNodeConfiguration

2022-04-08 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17519403#comment-17519403
 ] 

Marios Trivyzas commented on FLINK-26190:
-

Thank you [~dianfu], indeed {{ExecNodeConfig#getTableConfig()}} cannot be 
removed in 1.15: there are other refactorings that I did on master to allow 
this, which we didn't backport to 1.15 since it's already in feature freeze.

> Remove getTableConfig from ExecNodeConfiguration
> 
>
> Key: FLINK-26190
> URL: https://issues.apache.org/jira/browse/FLINK-26190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Currently, *ExecNodeConfig* holds *TableConfig* instead of *ReadableConfig* 
> for the configuration coming from the planner, because it's used by
> *CommonPythonUtil#getMergedConfig.* This should be fixed, so that 
> *CommonPythonUtil#getMergedConfig* can use a *ReadableConfig* instead, and 
> then we can pass the *ExecNodeConfig* which holds the complete view of 
> {*}Planner{*}'s *TableConfig* + the {*}ExecNode{*}'s {*}persistedConfig{*}.
>  
> To achieve that the *getMergedConfig* methods of *PythonConfigUtil* must be 
> changed, and also the temp solution in 
> *PythonFunctionFactory#getPythonFunction* must be changed as well:
> {noformat}
> if (config instanceof TableConfig) {
>     PythonDependencyUtils.merge(mergedConfig, ((TableConfig) config).getConfiguration());
> } else {
>     PythonDependencyUtils.merge(mergedConfig, (Configuration) config);
> }{noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-16835) Replace TableConfig with Configuration

2022-04-07 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-16835:
---

Assignee: Marios Trivyzas

> Replace TableConfig with Configuration
> --
>
> Key: FLINK-16835
> URL: https://issues.apache.org/jira/browse/FLINK-16835
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Marios Trivyzas
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> In order to allow reading and writing of configuration from a file or 
> string-based properties, we should consider removing {{TableConfig}} and 
> fully rely on a Configuration-based object with {{ConfigOptions}}.
> This effort was partially already started, which is why 
> {{TableConfig.getConfiguration}} exists.
> However, we should clarify if we would like to have control and traceability 
> over layered configurations such as {{flink-conf.yaml < 
> StreamExecutionEnvironment < TableEnvironment < Query}}. Maybe the 
> {{Configuration}} class is not the right abstraction for this. 
> [~jark], [~twalthr], and [~fhueske] discussed the configuration options (see 
> comments below) and concluded with the following design:
> {code:java}
>   public static final ConfigOption<Duration> IDLE_STATE_RETENTION =
>       key("table.exec.state.ttl")
>           .durationType()
>           .defaultValue(Duration.ofMillis(0));
>   public static final ConfigOption<Integer> MAX_LENGTH_GENERATED_CODE =
>       key("table.generated-code.max-length")
>           .intType()
>           .defaultValue(64000);
>   public static final ConfigOption<String> LOCAL_TIME_ZONE =
>       key("table.local-time-zone")
>           .stringType()
>           .defaultValue(ZoneId.systemDefault().toString());
> {code}
> *Note*: The following {{TableConfig}} options are not preserved:
> * {{nullCheck}}: Flink will automatically enable null checks based on the 
> table schema ({{NOT NULL}} property)
> * {{decimalContext}}: this configuration is only used by the legacy planner 
> which will be removed in one of the next releases
> * {{maxIdleStateRetention}}: is automatically derived as 1.5* 
> {{idleStateRetention}} until StateTtlConfig is fully supported (at which 
> point only a single parameter is required).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518956#comment-17518956
 ] 

Marios Trivyzas edited comment on FLINK-26098 at 4/7/22 3:35 PM:
-

For the {{DataStream}} API, with 
[FLIP-27|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface],
there is the {{WatermarkStrategy}} interface
and {{WatermarkStrategyWithIdleness}} to configure the idleness, which is only 
in the context of the {{DataStream}} API.

For Table API/SQL we have {{PushWatermarkIntoTableSourceScanRuleBase}}, which 
uses {{table.exec.source.idle-timeout}} and in turn creates a 
{{WatermarkStrategyWithIdleness}} to be used.
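
For comparison, a minimal sketch of both sides (the element type and 
{{tableEnv}} are placeholders):
{code:java}
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

// DataStream API: idleness is configured on the WatermarkStrategy itself.
WatermarkStrategy<String> strategy =
        WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withIdleness(Duration.ofMinutes(1));

// Table API/SQL: the planner derives the idleness from this option instead.
tableEnv.getConfig()
        .getConfiguration()
        .setString("table.exec.source.idle-timeout", "1 min");
{code}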

 

So I think that this issue can now be closed, since it has been addressed by 
the work in [FLINK-16835|https://issues.apache.org/jira/browse/FLINK-16835] 
and is already available for {{Flink 1.15}}.

 


was (Author: matriv):
For the {{DataStream}} API, with 
[FLIP-27|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface],
there is the {{WatermarkStrategy}} interface
and {{WatermarkStrategyWithIdleness}} to configure the idleness, which is only 
in the context of the {{DataStream}} API.

For Table API/SQL we have {{PushWatermarkIntoTableSourceScanRuleBase}}, which 
uses {{table.exec.source.idle-timeout}} and in turn creates a 
{{WatermarkStrategyWithIdleness}} to be used.

 

So I think that this issue can now be closed, since it has been addressed by 
the work in https://issues.apache.org/jira/browse/FLINK-16835.

 

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png, 
> Screenshot_20220407_151012.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518956#comment-17518956
 ] 

Marios Trivyzas commented on FLINK-26098:
-

For the {{DataStream}} API, with 
[FLIP-27|https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface],
there is the {{WatermarkStrategy}} interface
and {{WatermarkStrategyWithIdleness}} to configure the idleness, which is only 
in the context of the {{DataStream}} API.

For Table API/SQL we have {{PushWatermarkIntoTableSourceScanRuleBase}}, which 
uses {{table.exec.source.idle-timeout}} and in turn creates a 
{{WatermarkStrategyWithIdleness}} to be used.

 

So I think that this issue can now be closed, since it has been addressed by 
the work in https://issues.apache.org/jira/browse/FLINK-16835.

 

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png, 
> Screenshot_20220407_151012.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518897#comment-17518897
 ] 

Marios Trivyzas commented on FLINK-26098:
-

It also works with the new approach of creating a 
{{StreamTableEnvironment}}:
{noformat}
StreamExecutionEnvironment streamExecutionEnvironment =
        StreamExecutionEnvironment.getExecutionEnvironment();
Configuration configuration = new Configuration();
configuration.set(TABLE_EXEC_SOURCE_IDLE_TIMEOUT, Duration.ofSeconds(100));
EnvironmentSettings settings =
        EnvironmentSettings.newInstance()
                .inStreamingMode()
                .withConfiguration(configuration)
                .build();
StreamTableEnvironment streamTableEnvironment =
        StreamTableEnvironment.create(streamExecutionEnvironment, settings);{noformat}

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png, 
> Screenshot_20220407_151012.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518841#comment-17518841
 ] 

Marios Trivyzas commented on FLINK-26098:
-

With the following code, where the {{WatermarkAssignerOperator}} is also used:
{noformat}
Configuration configuration = new Configuration();
configuration.set(TABLE_EXEC_SOURCE_IDLE_TIMEOUT, Duration.ofSeconds(100));
EnvironmentSettings settings =
EnvironmentSettings.newInstance()
.inStreamingMode()
.withConfiguration(configuration)
.build();
TableEnvironment tEnv = TableEnvironment.create(settings);

tEnv.executeSql(
"CREATE TABLE Orders (\n"
+ " amount INT,\n"
+ " currency STRING,\n"
+ " rowtime TIMESTAMP(3),\n"
+ " proctime AS PROCTIME(),\n"
+ " WATERMARK FOR rowtime AS rowtime\n"
+ ") WITH (\n"
+ " 'connector' = 'datagen',\n"
+ " 'number-of-rows' = '1'"
+ ")");
tEnv.executeSql(
"CREATE TABLE RatesHistory (\n"
+ " currency STRING,\n"
+ " rate INT,\n"
+ " rowtime TIMESTAMP(3),\n"
+ " WATERMARK FOR rowtime AS rowtime,\n"
+ " PRIMARY KEY(currency) NOT ENFORCED\n"
+ ") WITH (\n"
+ " 'connector' = 'datagen',\n"
+ " 'number-of-rows' = '1'"
+ ")");
TemporalTableFunction ratesHistory =
tEnv.from("RatesHistory").createTemporalTableFunction($("rowtime"), 
$("currency"));
tEnv.createTemporarySystemFunction("Rates", ratesHistory);

String sinkTableDdl =
"CREATE TABLE MySink (\n"
+ "  a int\n"
+ ") with (\n"
+ "  'connector' = 'blackhole')";
tEnv.executeSql(sinkTableDdl);
tEnv.executeSql(
"INSERT INTO MySink "
+ "SELECT amount * r.rate "
+ "FROM Orders AS o  "
+ "JOIN RatesHistory  FOR SYSTEM_TIME AS OF o.rowtime AS r "
+ "ON o.currency = r.currency ");{noformat}
We can see it in {{processElement}} as well:

!Screenshot_20220407_151012.png!

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png, 
> Screenshot_20220407_151012.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26098:

Attachment: Screenshot_20220407_151012.png

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png, 
> Screenshot_20220407_151012.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518838#comment-17518838
 ] 

Marios Trivyzas commented on FLINK-26098:
-

Using the following code:

 
{noformat}
Configuration configuration = new Configuration();
configuration.set(TABLE_EXEC_SOURCE_IDLE_TIMEOUT, Duration.ofSeconds(100));
EnvironmentSettings settings =
        EnvironmentSettings.newInstance().withConfiguration(configuration).build();
TableEnvironment tEnv = TableEnvironment.create(settings);

String srcTableDdl =
"CREATE TABLE MyTable (\n"
+ " a INT,\n"
+ " b BIGINT,\n"
+ " c VARCHAR,\n"
+ " `rowtime` AS TO_TIMESTAMP(c),\n"
+ " proctime as PROCTIME(),\n"
+ " WATERMARK for `rowtime` AS `rowtime` - INTERVAL '1' 
SECOND\n"
+ ") WITH (\n"
+ " 'connector' = 'datagen',\n"
+ " 'number-of-rows' = '10'"
+ ")\n";
tEnv.executeSql(srcTableDdl);
String sinkTableDdl =
"CREATE TABLE MySink (\n"
+ " b BIGINT,\n"
+ " sum_a INT\n"
+ ") WITH (\n"
+ " 'connector' = 'blackhole')\n";
tEnv.executeSql(sinkTableDdl);

{noformat}
and, with debugging, we can see that {{table.exec.source.idle-timeout}} is 
present (100 seconds) and used correctly in the pipeline:

!Screenshot_20220407_150020.png!
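
As a quick sanity check (a sketch, assuming {{TableConfig}} exposes the merged 
view via {{get(ConfigOption)}}), the option can also be inspected 
programmatically:
{noformat}
// expected: PT1M40S, i.e. the configured 100 seconds
tEnv.getConfig().get(TABLE_EXEC_SOURCE_IDLE_TIMEOUT);{noformat}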

 

[~twalthr] Do you think we can close this issue?

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26098) TableAPI does not forward idleness configuration from DataStream

2022-04-07 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26098:

Attachment: Screenshot_20220407_150020.png

> TableAPI does not forward idleness configuration from DataStream
> 
>
> Key: FLINK-26098
> URL: https://issues.apache.org/jira/browse/FLINK-26098
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Till Rohrmann
>Assignee: Marios Trivyzas
>Priority: Major
> Attachments: Screenshot_20220407_150020.png
>
>
> The TableAPI does not forward the idleness configuration from a DataStream 
> source. That can lead to the halt of processing if all sources are idle 
> because {{WatermarkAssignerOperator}} [1] will never set a channel to active 
> again. The only way to mitigate the problem is to explicitly configure the 
> idleness for table sources via {{table.exec.source.idle-timeout}}. 
> Configuring this value is actually not easy because creating a 
> {{StreamExecutionEnvironment}} via {{create(StreamExecutionEnvironment, 
> TableConfig)}} is deprecated.
> [1] 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/wmassigners/WatermarkAssignerOperator.java#L103



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27121) Translate "Configuration#overview" paragraph and the code example in "Application Development > Table API & SQL" to Chinese

2022-04-07 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-27121:
---

 Summary: Translate "Configuration#overview" paragraph and the code 
example in "Application Development > Table API & SQL" to Chinese
 Key: FLINK-27121
 URL: https://issues.apache.org/jira/browse/FLINK-27121
 Project: Flink
  Issue Type: Improvement
Reporter: Marios Trivyzas


After [https://github.com/apache/flink/pull/19387] is merged, we need to update 
the translation for 
[https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/table/config/#overview]

The markdown file is located in 
{noformat}
flink/docs/content.zh/docs/dev/table/config.md{noformat}
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27120) Translate "Gradle" tab of "Application Development > Project Configuration > Overview" to Chinese

2022-04-07 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-27120:

Description: 
Translate the changes of [https://github.com/apache/flink/pull/18609]; we need 
to update the translation for 
[https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]

The markdown file is located in 
{noformat}
flink/docs/content.zh/docs/dev/configuration/overview.md{noformat}

  was:
Translate the changes of [https://github.com/apache/flink/pull/18609]; we need 
to update the translation for 
[https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]

The markdown file is located in 
flink/docs/content.zh/docs/dev/configuration/overview.md


> Translate "Gradle" tab of "Application Development > Project Configuration > 
> Overview" to Chinese
> -
>
> Key: FLINK-27120
> URL: https://issues.apache.org/jira/browse/FLINK-27120
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Priority: Major
>
> Translate the changes of [https://github.com/apache/flink/pull/18609]; we 
> need to update the translation for 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]
> The markdown file is located in 
> {noformat}
> flink/docs/content.zh/docs/dev/configuration/overview.md{noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-27120) Translate "Gradle" tab of "Application Development > Project Configuration > Overview" to Chinese

2022-04-07 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-27120:

Summary: Translate "Gradle" tab of "Application Development > Project 
Configuration > Overview" to Chinese  (was: Translate "configuration" page of 
"Application Development > Project Configuration > Overview > Gradle (tab)" to 
Chinese)

> Translate "Gradle" tab of "Application Development > Project Configuration > 
> Overview" to Chinese
> -
>
> Key: FLINK-27120
> URL: https://issues.apache.org/jira/browse/FLINK-27120
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Priority: Major
>
> Translate the changes of [https://github.com/apache/flink/pull/18609]; we 
> need to update the translation for 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]
> The markdown file is located in 
> flink/docs/content.zh/docs/dev/configuration/overview.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27120) Translate "configuration" page of "Application Development > Project Configuration > Overview > Gradle (tab)" to Chinese

2022-04-07 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518769#comment-17518769
 ] 

Marios Trivyzas commented on FLINK-27120:
-

Needs to be done for `master` and `release-1.15`

> Translate "configuration" page of "Application Development > Project 
> Configuration > Overview > Gradle (tab)" to Chinese
> 
>
> Key: FLINK-27120
> URL: https://issues.apache.org/jira/browse/FLINK-27120
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Priority: Major
>
> Translate the changes of [https://github.com/apache/flink/pull/18609]; we 
> need to update the translation for 
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]
> The markdown file is located in 
> flink/docs/content.zh/docs/dev/configuration/overview.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27120) Translate "configuration" page of "Application Development > Project Configuration > Overview > Gradle (tab)" to Chinese

2022-04-07 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-27120:
---

 Summary: Translate "configuration" page of "Application 
Development > Project Configuration > Overview > Gradle (tab)" to Chinese
 Key: FLINK-27120
 URL: https://issues.apache.org/jira/browse/FLINK-27120
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.15.0
Reporter: Marios Trivyzas


Translate the changes of [https://github.com/apache/flink/pull/18609]; we need 
to update the translation for 
[https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/configuration/overview/]

The markdown file is located in 
flink/docs/content.zh/docs/dev/configuration/overview.md



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27111) Update docs regarding EnvironmentSettings / TableConfig

2022-04-07 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-27111:
---

 Summary: Update docs regarding EnvironmentSettings / TableConfig
 Key: FLINK-27111
 URL: https://issues.apache.org/jira/browse/FLINK-27111
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.15.0
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-27089) Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.

2022-04-06 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518186#comment-17518186
 ] 

Marios Trivyzas commented on FLINK-27089:
-

With the help of [~twalthr] we tried out a couple of workarounds:

creating a call with a dummy second operand in *FlinkConvertletTable* where we 
make the call, which solves the issue but changes our *EXPLAIN* output (and 
also the json plan output),

or changing *TRY_CAST* to be of kind *CAST* instead of {*}OTHER_FUNCTION{*}, 
which creates other issues, since Calcite will then apply rules and 
optimizations and end up with *CAST* instead of {*}TRY_CAST{*}.

So finally, we decided to simply not implement *getMonotonicity()* for 
{*}TRY_CAST{*}, which is currently the only way to "cleanly" solve the bug, 
with the drawback of losing some optimizations in batch mode.
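
For reference, a minimal sketch (hypothetical placement on the TRY_CAST 
function definition, not the actual committed change) of what skipping the 
monotonicity derivation amounts to:
{code:java}
import org.apache.calcite.sql.SqlOperatorBinding;
import org.apache.calcite.sql.validate.SqlMonotonicity;

// Sketch: return the default, operand-independent answer instead of delegating
// to SqlCastFunction#getMonotonicity, which binds the missing second operand.
@Override
public SqlMonotonicity getMonotonicity(SqlOperatorBinding call) {
    return SqlMonotonicity.NOT_MONOTONIC;
}
{code}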

> Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.
> -
>
> Key: FLINK-27089
> URL: https://issues.apache.org/jira/browse/FLINK-27089
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Add the following test to 
> {org.apache.flink.table.planner.runtime.batch.sql.CalcITCase} to reproduce 
> this issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   checkResult("SELECT TRY_CAST('invalid' AS INT)", Seq(row(null)))
> }
> {code}
> The exception stack is
> {code}
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
>   at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
>   at 
> org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
>   at 
> org.apache.calcite.sql.fun.SqlCastFunction.getMonotonicity(SqlCastFunction.java:205)
>   at 
> org.apache.flink.table.planner.functions.sql.BuiltInSqlFunction.getMonotonicity(BuiltInSqlFunction.java:141)
>   at 
> org.apache.calcite.rel.metadata.RelMdCollation.project(RelMdCollation.java:291)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.lambda$create$0(LogicalProject.java:122)
>   at org.apache.calcite.plan.RelTraitSet.replaceIfs(RelTraitSet.java:242)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:121)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:111)
>   at 
> org.apache.calcite.rel.core.RelFactories$ProjectFactoryImpl.createProject(RelFactories.java:177)
>   at org.apache.calcite.tools.RelBuilder.project_(RelBuilder.java:1516)
>   at org.apache.calcite.tools.RelBuilder.project(RelBuilder.java:1311)
>   at 
> org.apache.calcite.tools.RelBuilder.projectNamed(RelBuilder.java:1565)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectList(SqlToRelConverter.java:4222)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:687)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:198)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:190)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1240)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1188)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertValidatedSqlNode(SqlToOperationConverter.java:345)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:675)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.parseQuery(BatchTestBase.scala:297)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:139)
>   at 
> 

[jira] [Assigned] (FLINK-27089) Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.

2022-04-06 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-27089:
---

Assignee: Marios Trivyzas

> Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.
> -
>
> Key: FLINK-27089
> URL: https://issues.apache.org/jira/browse/FLINK-27089
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Assignee: Marios Trivyzas
>Priority: Major
>
> Add the following test to 
> {org.apache.flink.table.planner.runtime.batch.sql.CalcITCase} to reproduce 
> this issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   checkResult("SELECT TRY_CAST('invalid' AS INT)", Seq(row(null)))
> }
> {code}
> The exception stack is
> {code}
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
>   at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
>   at 
> org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
>   at 
> org.apache.calcite.sql.fun.SqlCastFunction.getMonotonicity(SqlCastFunction.java:205)
>   at 
> org.apache.flink.table.planner.functions.sql.BuiltInSqlFunction.getMonotonicity(BuiltInSqlFunction.java:141)
>   at 
> org.apache.calcite.rel.metadata.RelMdCollation.project(RelMdCollation.java:291)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.lambda$create$0(LogicalProject.java:122)
>   at org.apache.calcite.plan.RelTraitSet.replaceIfs(RelTraitSet.java:242)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:121)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:111)
>   at 
> org.apache.calcite.rel.core.RelFactories$ProjectFactoryImpl.createProject(RelFactories.java:177)
>   at org.apache.calcite.tools.RelBuilder.project_(RelBuilder.java:1516)
>   at org.apache.calcite.tools.RelBuilder.project(RelBuilder.java:1311)
>   at 
> org.apache.calcite.tools.RelBuilder.projectNamed(RelBuilder.java:1565)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectList(SqlToRelConverter.java:4222)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:687)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:198)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:190)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1240)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1188)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertValidatedSqlNode(SqlToOperationConverter.java:345)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:675)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.parseQuery(BatchTestBase.scala:297)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:139)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.CalcITCase.myTest(CalcITCase.scala:75)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 

[jira] [Commented] (FLINK-27089) Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.

2022-04-06 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17518037#comment-17518037
 ] 

Marios Trivyzas commented on FLINK-27089:
-

The issue arises because *SqlTryCastFunction* is of kind 
{*}SqlKind.OTHER_FUNCTION{*}, so in

*RexCallBinding*
{noformat}
public static RexCallBinding create(RelDataTypeFactory typeFactory,
    RexCall call,
    List<RelCollation> inputCollations) {
  switch (call.getKind()) {
  case CAST:
    return new RexCastCallBinding(typeFactory, call.getOperator(),
        call.getOperands(), call.getType(), inputCollations);
  }
  return new RexCallBinding(typeFactory, call.getOperator(),
      call.getOperands(), inputCollations);
}{noformat}
we don't get a {*}RexCastCallBinding{*} but the regular one, so the overridden 
method:
{noformat}
@Override public RelDataType getOperandType(int ordinal) {
  if (ordinal == 1) {
    return type;
  }
  return super.getOperandType(ordinal);
}{noformat}
is never used; instead the one from the regular *RexCallBinding* (the 
superclass) is called:
{noformat}
public RelDataType getOperandType(int ordinal) {
  return operands.get(ordinal).getType();
}{noformat}
which hits the element-index precondition of *SingletonImmutableList* and fails 
(since there is indeed only one operand, given the special syntax of CAST).
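
A minimal, self-contained illustration (plain Guava, hypothetical values) of 
the failing lookup:
{code:java}
import com.google.common.collect.ImmutableList;
import java.util.List;

public class SingletonLookup {
    public static void main(String[] args) {
        // TRY_CAST('invalid' AS INT) carries a single operand, so asking for
        // ordinal 1 trips the element-index precondition of the singleton list:
        List<String> operands = ImmutableList.of("'invalid'");
        operands.get(1); // IndexOutOfBoundsException: index (1) must be less than size (1)
    }
}
{code}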

 

> Calling TRY_CAST with an invalid value throws IndexOutOfBounds Exception.
> -
>
> Key: FLINK-27089
> URL: https://issues.apache.org/jira/browse/FLINK-27089
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Major
>
> Add the following test to 
> {org.apache.flink.table.planner.runtime.batch.sql.CalcITCase} to reproduce 
> this issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   checkResult("SELECT TRY_CAST('invalid' AS INT)", Seq(row(null)))
> }
> {code}
> The exception stack is
> {code}
> java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1345)
>   at 
> com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:1327)
>   at 
> com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:43)
>   at 
> org.apache.calcite.rex.RexCallBinding.getOperandType(RexCallBinding.java:136)
>   at 
> org.apache.calcite.sql.fun.SqlCastFunction.getMonotonicity(SqlCastFunction.java:205)
>   at 
> org.apache.flink.table.planner.functions.sql.BuiltInSqlFunction.getMonotonicity(BuiltInSqlFunction.java:141)
>   at 
> org.apache.calcite.rel.metadata.RelMdCollation.project(RelMdCollation.java:291)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.lambda$create$0(LogicalProject.java:122)
>   at org.apache.calcite.plan.RelTraitSet.replaceIfs(RelTraitSet.java:242)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:121)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:111)
>   at 
> org.apache.calcite.rel.core.RelFactories$ProjectFactoryImpl.createProject(RelFactories.java:177)
>   at org.apache.calcite.tools.RelBuilder.project_(RelBuilder.java:1516)
>   at org.apache.calcite.tools.RelBuilder.project(RelBuilder.java:1311)
>   at 
> org.apache.calcite.tools.RelBuilder.projectNamed(RelBuilder.java:1565)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectList(SqlToRelConverter.java:4222)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:687)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:198)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:190)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:1240)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:1188)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertValidatedSqlNode(SqlToOperationConverter.java:345)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:105)
>   at 
> 

[jira] [Assigned] (FLINK-26686) Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory

2022-04-05 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-26686:
---

Assignee: (was: Marios Trivyzas)

> Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory
> -
>
> Key: FLINK-26686
> URL: https://issues.apache.org/jira/browse/FLINK-26686
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Marios Trivyzas
>Priority: Major
>
> Following FLINK-26418, maybe it's a good idea to not expose a constructor for 
> {{TestingTaskManagerRuntimeInfo}} without the *tmpWorkingDirectory*, as this 
> ends up creating the RocksDB folders inside */tmp* (which at least is an 
> improvement over having them created in the given {*}CWD{*}, ending up in the 
> root folder of each given module, e.g. {*}flink-table/flink-table-planner{*}).
> If the *tmpWorkingDirectory* is a mandatory argument then the consumers (IT 
> tests) would have to provide this directory, and they can use the standard 
> JUnit *TemporaryFolder.newFolder()* so that the directories are automatically 
> deleted after the tests are run, and devs can find those directories under 
> the build/test folder of each module and not in an unorganized structure in 
> */tmp* (or the Windows user's tmp folder).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26686) Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory

2022-04-05 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517287#comment-17517287
 ] 

Marios Trivyzas commented on FLINK-26686:
-

It seems that implementing this globally, and "forcing" callers to pass a 
`tmpWorkingDir`, needs lots of changes in many modules. I have pushed a WIP 
branch: [https://github.com/matriv/flink/tree/FLINK-26686] but I am not 
planning to continue with this soon.
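
For reference, a sketch of the intended consumer pattern; the exact 
constructor signature taking the working directory is an assumption here:
{code:java}
// Hypothetical usage inside an IT case: the folder is created by the JUnit
// rule and deleted automatically after the test run.
@Rule public final TemporaryFolder tmp = new TemporaryFolder();

TaskManagerRuntimeInfo runtimeInfo() throws IOException {
    File tmpWorkingDirectory = tmp.newFolder();
    return new TestingTaskManagerRuntimeInfo(new Configuration(), tmpWorkingDirectory);
}
{code}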

> Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory
> -
>
> Key: FLINK-26686
> URL: https://issues.apache.org/jira/browse/FLINK-26686
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Following FLINK-26418, maybe it's a good idea to not expose a constructor for 
> {{TestingTaskManagerRuntimeInfo}} without the *tmpWorkingDirectory*, as this 
> ends up creating the RocksDB folders inside */tmp* (which at least is an 
> improvement over having them created in the given {*}CWD{*}, ending up in the 
> root folder of each given module, e.g. {*}flink-table/flink-table-planner{*}).
> If the *tmpWorkingDirectory* is a mandatory argument then the consumers (IT 
> tests) would have to provide this directory, and they can use the standard 
> JUnit *TemporaryFolder.newFolder()* so that the directories are automatically 
> deleted after the tests are run, and devs can find those directories under 
> the build/test folder of each module and not in an unorganized structure in 
> */tmp* (or the Windows user's tmp folder).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25227) Comparing the equality of the same (boxed) numeric values returns false

2022-03-29 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-25227:

Fix Version/s: 1.15.0
   1.16.0

> Comparing the equality of the same (boxed) numeric values returns false
> ---
>
> Key: FLINK-25227
> URL: https://issues.apache.org/jira/browse/FLINK-25227
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0, 1.12.5, 1.13.3
>Reporter: Caizhi Weng
>Assignee: Marios Trivyzas
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.15.0, 1.16.0, 1.13.7, 1.14.5
>
>
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   val data = Seq(
> Row.of(
>   java.lang.Integer.valueOf(1000),
>   java.lang.Integer.valueOf(2000),
>   java.lang.Integer.valueOf(1000),
>   java.lang.Integer.valueOf(2000))
>   )
>   tEnv.executeSql(
> s"""
>|create table T (
>|  a int,
>|  b int,
>|  c int,
>|  d int
>|) with (
>|  'connector' = 'values',
>|  'bounded' = 'true',
>|  'data-id' = '${TestValuesTableFactory.registerData(data)}'
>|)
>|""".stripMargin)
>   tEnv.executeSql("select greatest(a, b) = greatest(c, d) from T").print()
> }
> {code}
> The result is false, which is obviously incorrect.
> This is caused by the generated java code:
> {code:java}
> public class StreamExecCalc$8 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator {
> private final Object[] references;
> org.apache.flink.table.data.BoxedWrapperRowData out =
> new org.apache.flink.table.data.BoxedWrapperRowData(1);
> private final 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement =
> new 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);
> public StreamExecCalc$8(
> Object[] references,
> org.apache.flink.streaming.runtime.tasks.StreamTask task,
> org.apache.flink.streaming.api.graph.StreamConfig config,
> org.apache.flink.streaming.api.operators.Output output,
> org.apache.flink.streaming.runtime.tasks.ProcessingTimeService 
> processingTimeService)
> throws Exception {
> this.references = references;
> this.setup(task, config, output);
> if (this instanceof 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator) {
> 
> ((org.apache.flink.streaming.api.operators.AbstractStreamOperator) this)
> .setProcessingTimeService(processingTimeService);
> }
> }
> @Override
> public void open() throws Exception {
> super.open();
> }
> @Override
> public void 
> processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
> element)
> throws Exception {
> org.apache.flink.table.data.RowData in1 =
> (org.apache.flink.table.data.RowData) element.getValue();
> int field$0;
> boolean isNull$0;
> int field$1;
> boolean isNull$1;
> int field$3;
> boolean isNull$3;
> int field$4;
> boolean isNull$4;
> boolean isNull$6;
> boolean result$7;
> isNull$3 = in1.isNullAt(2);
> field$3 = -1;
> if (!isNull$3) {
> field$3 = in1.getInt(2);
> }
> isNull$0 = in1.isNullAt(0);
> field$0 = -1;
> if (!isNull$0) {
> field$0 = in1.getInt(0);
> }
> isNull$1 = in1.isNullAt(1);
> field$1 = -1;
> if (!isNull$1) {
> field$1 = in1.getInt(1);
> }
> isNull$4 = in1.isNullAt(3);
> field$4 = -1;
> if (!isNull$4) {
> field$4 = in1.getInt(3);
> }
> out.setRowKind(in1.getRowKind());
> java.lang.Integer result$2 = field$0;
> boolean nullTerm$2 = false;
> if (!nullTerm$2) {
> java.lang.Integer cur$2 = field$0;
> if (isNull$0) {
> nullTerm$2 = true;
> } else {
> int compareResult = result$2.compareTo(cur$2);
> if ((true && compareResult < 0) || (compareResult > 0 && 
> !true)) {
> result$2 = cur$2;
> }
> }
> }
> if (!nullTerm$2) {
> java.lang.Integer cur$2 = field$1;
> if (isNull$1) {
> nullTerm$2 = true;
> } else {
>   

[jira] [Assigned] (FLINK-25227) Comparing the equality of the same (boxed) numeric values returns false

2022-03-29 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-25227:
---

Assignee: Marios Trivyzas  (was: Caizhi Weng)

> Comparing the equality of the same (boxed) numeric values returns false
> ---
>
> Key: FLINK-25227
> URL: https://issues.apache.org/jira/browse/FLINK-25227
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0, 1.12.5, 1.13.3
>Reporter: Caizhi Weng
>Assignee: Marios Trivyzas
>Priority: Critical
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.13.7, 1.14.5
>
>
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   val data = Seq(
> Row.of(
>   java.lang.Integer.valueOf(1000),
>   java.lang.Integer.valueOf(2000),
>   java.lang.Integer.valueOf(1000),
>   java.lang.Integer.valueOf(2000))
>   )
>   tEnv.executeSql(
> s"""
>|create table T (
>|  a int,
>|  b int,
>|  c int,
>|  d int
>|) with (
>|  'connector' = 'values',
>|  'bounded' = 'true',
>|  'data-id' = '${TestValuesTableFactory.registerData(data)}'
>|)
>|""".stripMargin)
>   tEnv.executeSql("select greatest(a, b) = greatest(c, d) from T").print()
> }
> {code}
> The result is false, which is obviously incorrect.
> This is caused by the generated java code:
> {code:java}
> public class StreamExecCalc$8 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator {
> private final Object[] references;
> org.apache.flink.table.data.BoxedWrapperRowData out =
> new org.apache.flink.table.data.BoxedWrapperRowData(1);
> private final 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement =
> new 
> org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);
> public StreamExecCalc$8(
> Object[] references,
> org.apache.flink.streaming.runtime.tasks.StreamTask task,
> org.apache.flink.streaming.api.graph.StreamConfig config,
> org.apache.flink.streaming.api.operators.Output output,
> org.apache.flink.streaming.runtime.tasks.ProcessingTimeService 
> processingTimeService)
> throws Exception {
> this.references = references;
> this.setup(task, config, output);
> if (this instanceof 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator) {
> 
> ((org.apache.flink.streaming.api.operators.AbstractStreamOperator) this)
> .setProcessingTimeService(processingTimeService);
> }
> }
> @Override
> public void open() throws Exception {
> super.open();
> }
> @Override
> public void 
> processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
> element)
> throws Exception {
> org.apache.flink.table.data.RowData in1 =
> (org.apache.flink.table.data.RowData) element.getValue();
> int field$0;
> boolean isNull$0;
> int field$1;
> boolean isNull$1;
> int field$3;
> boolean isNull$3;
> int field$4;
> boolean isNull$4;
> boolean isNull$6;
> boolean result$7;
> isNull$3 = in1.isNullAt(2);
> field$3 = -1;
> if (!isNull$3) {
> field$3 = in1.getInt(2);
> }
> isNull$0 = in1.isNullAt(0);
> field$0 = -1;
> if (!isNull$0) {
> field$0 = in1.getInt(0);
> }
> isNull$1 = in1.isNullAt(1);
> field$1 = -1;
> if (!isNull$1) {
> field$1 = in1.getInt(1);
> }
> isNull$4 = in1.isNullAt(3);
> field$4 = -1;
> if (!isNull$4) {
> field$4 = in1.getInt(3);
> }
> out.setRowKind(in1.getRowKind());
> java.lang.Integer result$2 = field$0;
> boolean nullTerm$2 = false;
> if (!nullTerm$2) {
> java.lang.Integer cur$2 = field$0;
> if (isNull$0) {
> nullTerm$2 = true;
> } else {
> int compareResult = result$2.compareTo(cur$2);
> if ((true && compareResult < 0) || (compareResult > 0 && 
> !true)) {
> result$2 = cur$2;
> }
> }
> }
> if (!nullTerm$2) {
> java.lang.Integer cur$2 = field$1;
> if (isNull$1) {
> nullTerm$2 = true;
> } else {
>  

[jira] [Commented] (FLINK-26092) JsonAggregationFunctionsITCase fails with NPE when using RocksDB

2022-03-27 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17512862#comment-17512862
 ] 

Marios Trivyzas commented on FLINK-26092:
-

[~roman] I opened a PR to address this: 
[https://github.com/apache/flink/pull/19249]

but I wanted to ask: StringDataSerializer doesn't seem to support nullability. 
Is that something we would like to address?

> JsonAggregationFunctionsITCase fails with NPE when using RocksDB
> 
>
> Key: FLINK-26092
> URL: https://issues.apache.org/jira/browse/FLINK-26092
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Tests
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Roman Khachatryan
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> With the RocksDB backend chosen manually (instead of Heap; e.g. by altering 
> the mini-cluster configuration in BuiltInAggregateFunctionTestBase), the test 
> [0: JSON_OBJECTAGG_NULL_ON_NULL (Basic Aggregation)] fails with NPE (below).
>  
> Not sure whether it's a RocksDB issue, a test issue, or not an issue at all.
> The current Changelog backend behavior mimics RocksDB, and therefore enabling 
> it with materialization fails the test too (Changelog + Heap).
>  
> {code:java}
> java.lang.RuntimeException: Could not collect results
>     at 
> org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.materializeResult(BuiltInAggregateFunctionTestBase.java:169)
>     at 
> org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.assertRows(BuiltInAggregateFunctionTestBase.java:133)
>     at 
> org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase$SuccessItem.execute(BuiltInAggregateFunctionTestBase.java:279)
>     at 
> org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.testFunction(BuiltInAggregateFunctionTestBase.java:93)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runners.Suite.runChild(Suite.java:128)
>     at org.junit.runners.Suite.runChild(Suite.java:27)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>     at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>     at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
>     at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
> Caused by: java.lang.RuntimeException: Failed to fetch next result
>     at 
> 

[jira] [Commented] (FLINK-26190) Remove getTableConfig from ExecNodeConfiguration

2022-03-24 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17511897#comment-17511897
 ] 

Marios Trivyzas commented on FLINK-26190:
-

[~dianfu] Please take a look at this when you have some time.

> Remove getTableConfig from ExecNodeConfiguration
> 
>
> Key: FLINK-26190
> URL: https://issues.apache.org/jira/browse/FLINK-26190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, *ExecNodeConfig* holds *TableConfig* instead of *ReadableConfig* 
> for the configuration coming from the planner, because it's used by
> *CommonPythonUtil#getMergedConfig.* This should be fixed, so that 
> *CommonPythonUtil#getMergedConfig* can use a *ReadableConfig* instead, and 
> then we can pass the *ExecNodeConfig* which holds the complete view of 
> {*}Planner{*}'s *TableConfig* + the {*}ExecNode{*}'s {*}persistedConfig{*}.
>  
> To achieve that the *getMergedConfig* methods of *PythonConfigUtil* must be 
> changed, and also the temp solution in 
> *PythonFunctionFactory#getPythonFunction* must be changed as well:
> {noformat}
> if (config instanceof TableConfig) {
> PythonDependencyUtils.merge(mergedConfig, ((TableConfig) 
> config).getConfiguration());
> } else {
> PythonDependencyUtils.merge(mergedConfig, (Configuration) config);
> }{noformat}
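
A sketch of the direction described above, assuming the read-only view can be 
materialized into a {{Configuration}} (the {{toMap()}} call on 
{{ReadableConfig}} is an assumption):
{code:java}
// Hypothetical helper: once the parameter is a ReadableConfig, no instanceof
// branching on TableConfig is needed; the view is copied into a Configuration.
static Configuration asConfiguration(ReadableConfig config) {
    return Configuration.fromMap(config.toMap());
}
{code}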



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26190) Remove getTableConfig from ExecNodeConfiguration

2022-03-24 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26190:

Description: 
Currently, *ExecNodeConfig* holds *TableConfig* instead of *ReadableConfig* for 
the configuration coming from the planner, because it's used by

*CommonPythonUtil#getMergedConfig.* This should be fixed, so that 
*CommonPythonUtil#getMergedConfig* can use a *ReadableConfig* instead, and then 
we can pass the *ExecNodeConfig* which holds the complete view of 
{*}Planner{*}'s *TableConfig* + the {*}ExecNode{*}'s {*}persistedConfig{*}.

 

To achieve that the *getMergedConfig* methods of *PythonConfigUtil* must be 
changed, and also the temp solution in 

*PythonFunctionFactory#getPythonFunction* must be changed as well:
{noformat}
if (config instanceof TableConfig) {
PythonDependencyUtils.merge(mergedConfig, ((TableConfig) 
config).getConfiguration());
} else {
PythonDependencyUtils.merge(mergedConfig, (Configuration) config);
}{noformat}

  was:
Currently, *ExecNodeConfiguration* holds *TableConfig* on top of the 
*plannerConfig* and the *persistedConfig,* since it's needed by the 
*CodeGeneratorContext* basically for the {*}nullCheck{*}.

The *nullCheck* should be deprecated and removed, which will facilitate the 
usage of the simple interface *ReadableConfig* in *CodeGeneratorContext* 
instead of *TableConfig*, and then *TableConfig* could be removed.

 


> Remove getTableConfig from ExecNodeConfiguration
> 
>
> Key: FLINK-26190
> URL: https://issues.apache.org/jira/browse/FLINK-26190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, *ExecNodeConfig* holds *TableConfig* instead of *ReadableConfig* 
> for the configuration coming from the planner, because it's used by
> *CommonPythonUtil#getMergedConfig.* This should be fixed, so that 
> *CommonPythonUtil#getMergedConfig* can use a *ReadableConfig* instead, and 
> then we can pass the *ExecNodeConfig* which holds the complete view of 
> {*}Planner{*}'s *TableConfig* + the {*}ExecNode{*}'s {*}persistedConfig{*}.
>  
> To achieve that the *getMergedConfig* methods of *PythonConfigUtil* must be 
> changed, and also the temp solution in 
> *PythonFunctionFactory#getPythonFunction* must be changed as well:
> {noformat}
> if (config instanceof TableConfig) {
> PythonDependencyUtils.merge(mergedConfig, ((TableConfig) 
> config).getConfiguration());
> } else {
> PythonDependencyUtils.merge(mergedConfig, (Configuration) config);
> }{noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] (FLINK-26190) Remove getTableConfig from ExecNodeConfiguration

2022-03-24 Thread Marios Trivyzas (Jira)


[ https://issues.apache.org/jira/browse/FLINK-26190 ]


Marios Trivyzas deleted comment on FLINK-26190:
-

was (Author: matriv):
Also consider making *PlannerConfiguration* more generic, like 
{*}MergedConfiguration{*}, and use this new common class for both cases.

> Remove getTableConfig from ExecNodeConfiguration
> 
>
> Key: FLINK-26190
> URL: https://issues.apache.org/jira/browse/FLINK-26190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, *ExecNodeConfiguration* holds *TableConfig* on top of the 
> *plannerConfig* and the *persistedConfig,* since it's needed by the 
> *CodeGeneratorContext* basically for the {*}nullCheck{*}.
> The *nullCheck* should be deprecated and removed, which will facilitate the 
> usage of the simple interface *ReadableConfig* in *CodeGeneratorContext* 
> instead of *TableConfig*, and then *TableConfig* could be removed.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26190) Remove getTableConfig from ExecNodeConfiguration

2022-03-24 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26190:

Summary: Remove getTableConfig from ExecNodeConfiguration  (was: Remove 
TableConfig from ExecNodeConfiguration)

> Remove getTableConfig from ExecNodeConfiguration
> 
>
> Key: FLINK-26190
> URL: https://issues.apache.org/jira/browse/FLINK-26190
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently, *ExecNodeConfiguration* holds *TableConfig* on top of the 
> *plannerConfig* and the *persistedConfig,* since it's needed by the 
> *CodeGeneratorContext* basically for the {*}nullCheck{*}.
> The *nullCheck* should be deprecated and removed, which will facilitate the 
> usage of the simple interface *ReadableConfig* in *CodeGeneratorContext* 
> instead of *TableConfig*, and then *TableConfig* could be removed.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26092) JsonAggregationFunctionsITCase fails with NPE when using RocksDB

2022-03-22 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-26092:
---

Assignee: Marios Trivyzas

> JsonAggregationFunctionsITCase fails with NPE when using RocksDB
> 
>
> Key: FLINK-26092
> URL: https://issues.apache.org/jira/browse/FLINK-26092
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Tests
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Roman Khachatryan
>Assignee: Marios Trivyzas
>Priority: Major
>
> With the RocksDB backend chosen manually (instead of Heap; e.g. by altering 
> the mini-cluster configuration in BuiltInAggregateFunctionTestBase), the test 
> [0: JSON_OBJECTAGG_NULL_ON_NULL (Basic Aggregation)] fails with NPE (below).
>  
> Not sure whether it's a RocksDB issue, a test issue, or not an issue at all.
> The current Changelog backend behavior mimics RocksDB, and therefore enabling 
> it with materialization fails the test too (Changelog + Heap).
>  
> {code:java}
> java.lang.RuntimeException: Could not collect results
>     at 
> org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.materializeResult(BuiltInAggregateFunctionTestBase.java:169)
>     at org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.assertRows(BuiltInAggregateFunctionTestBase.java:133)
>     at org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase$SuccessItem.execute(BuiltInAggregateFunctionTestBase.java:279)
>     at org.apache.flink.table.planner.functions.BuiltInAggregateFunctionTestBase.testFunction(BuiltInAggregateFunctionTestBase.java:93)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runners.Suite.runChild(Suite.java:128)
>     at org.junit.runners.Suite.runChild(Suite.java:27)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>     at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>     at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:235)
>     at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:54)
> Caused by: java.lang.RuntimeException: Failed to fetch next result
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>     at 
> 

[jira] [Closed] (FLINK-26376) Replace TableConfig with PlannerConfig in PlannerContext

2022-03-22 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas closed FLINK-26376.
---
Resolution: Invalid

This issue is not valid anymore, as it's superseded by the work in 
https://issues.apache.org/jira/browse/FLINK-16835

*TableConfig* is now the one that holds both the application-specific and the 
environment-specific configuration, providing a complete view.

> Replace TableConfig with PlannerConfig in PlannerContext
> 
>
> Key: FLINK-26376
> URL: https://issues.apache.org/jira/browse/FLINK-26376
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> PlannerConfig should be used instead of TableConfig in PlannerContext, since 
> it contains the merged configuration coming from the executor together with 
> the tableConfig.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26777) Remove PlannerConfig

2022-03-21 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26777:
---

 Summary: Remove PlannerConfig
 Key: FLINK-26777
 URL: https://issues.apache.org/jira/browse/FLINK-26777
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


Remove the `PlannerConfig` delegation class, since `TableConfig` is now the one 
that holds the complete view of `rootConfiguration` (environment config) + 
`configuration` (the application-specific configuration).
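
For illustration, a minimal sketch (assuming the 1.15 TableConfig API) of how 
the complete view behaves once the delegation class is gone; the option used 
is only an example:
{noformat}
// Application-specific setting; rootConfiguration would come from the
// environment (flink-conf.yaml + CLI options).
TableConfig tableConfig = TableConfig.getDefault();
tableConfig.set(
        ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER,
        ExecutionConfigOptions.NotNullEnforcer.DROP);

// get() consults `configuration` first and falls back to `rootConfiguration`.
ExecutionConfigOptions.NotNullEnforcer enforcer =
        tableConfig.get(ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER);{noformat}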



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26767) Replace TableConfig.getConfiguration().get(Option) with TableConfig.get(Option) (python)

2022-03-21 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26767:
---

 Summary: Replace TableConfig.getConfiguration().get(Option) with 
TableConfig.get(Option) (python)
 Key: FLINK-26767
 URL: https://issues.apache.org/jira/browse/FLINK-26767
 Project: Flink
  Issue Type: Sub-task
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


Replace TableConfig.getConfiguration().get(Option) with TableConfig.get(Option) 
and use the complete view of rootConfiguration + configuration (environment 
config + app-specific config).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-18556) Drop the unused options in TableConfig

2022-03-21 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509676#comment-17509676
 ] 

Marios Trivyzas commented on FLINK-18556:
-

Resolved with [https://github.com/apache/flink/pull/19147]

> Drop the unused options in TableConfig
> --
>
> Key: FLINK-18556
> URL: https://issues.apache.org/jira/browse/FLINK-18556
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.15.0
>
>
> As discussed in FLINK-16835, the following {{TableConfig}} options are not 
> preserved:
> * {{nullCheck}}: Flink will automatically enable null checks based on the 
> table schema ({{NOT NULL}} property)
> * {{decimalContext}}: this configuration is only used by the legacy planner 
> which will be removed in one of the next releases
> * {{maxIdleStateRetention}}: is automatically derived as 1.5 * 
> {{idleStateRetention}} until StateTtlConfig is fully supported (at which 
> point only a single parameter is required; see the sketch after this list).
> The blink planner should remove the dependencies on {{nullCheck}} and 
> {{maxIdleStateRetention}} first.  Besides, this maybe blocked by when to drop 
> old planner, because old planner is still using them. 
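
For reference, a tiny sketch of the 1.5x derivation mentioned in the list 
above (the values are made up for illustration):
{noformat}
import java.time.Duration;

// 1h of idleStateRetention yields 90min of maxIdleStateRetention.
long idleStateRetentionMs = Duration.ofHours(1).toMillis();
long maxIdleStateRetentionMs = (long) (1.5 * idleStateRetentionMs);{noformat}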



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26733) Deprecate TableConfig ctor and use getDefault() instead

2022-03-18 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26733:
---

 Summary: Deprecate TableConfig ctor and use getDefault() instead
 Key: FLINK-26733
 URL: https://issues.apache.org/jira/browse/FLINK-26733
 Project: Flink
  Issue Type: Sub-task
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas
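
A before/after sketch of what this implies for callers (assuming getDefault() 
keeps the semantics of the current constructor):
{noformat}
// Before (constructor to be deprecated):
// TableConfig config = new TableConfig();

// After:
TableConfig config = TableConfig.getDefault();{noformat}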






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26709) Replace TableConfig.getConfiguration().get(Option) with TableConfig.get(Option)

2022-03-17 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26709:
---

 Summary: Replace TableConfig.getConfiguration().get(Option) with 
TableConfig.get(Option)
 Key: FLINK-26709
 URL: https://issues.apache.org/jira/browse/FLINK-26709
 Project: Flink
  Issue Type: Sub-task
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


Replace TableConfig.getConfiguration().get(Option) with TableConfig.get(Option) 
and use the complete view of rootConfiguration + configuration (environment 
config + app-specific config).
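
A before/after sketch of the replacement; the option is only an example:
{noformat}
// Before: reads only the application-specific configuration.
Duration ttlBefore =
        tableConfig.getConfiguration().get(ExecutionConfigOptions.IDLE_STATE_RETENTION);

// After: reads the complete view (rootConfiguration + configuration).
Duration ttlAfter = tableConfig.get(ExecutionConfigOptions.IDLE_STATE_RETENTION);{noformat}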



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26689) Replace TableConfig with ReadableConfig where possible

2022-03-16 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26689:
---

 Summary: Replace TableConfig with ReadableConfig where possible
 Key: FLINK-26689
 URL: https://issues.apache.org/jira/browse/FLINK-26689
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


Following the removal of `nullCheck` from the code generation, we can replace 
TableConfig with ReadableConfig in various places, taking advantage of the 
new `TableConfig implements ReadableConfig` approach.
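
A minimal sketch of the pattern, using a hypothetical planner helper (the 
method and the option are only illustrative):
{noformat}
// The helper only reads options, so the narrower ReadableConfig suffices.
static boolean isMiniBatchEnabled(ReadableConfig config) {
    return config.get(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED);
}

// TableConfig implements ReadableConfig, so call sites stay unchanged:
boolean enabled = isMiniBatchEnabled(tableConfig);{noformat}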



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26688) Remove usages of deprecated TableConfig options

2022-03-16 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26688:

Parent: FLINK-16835
Issue Type: Sub-task  (was: Improvement)

> Remove usages of deprecated TableConfig options
> ---
>
> Key: FLINK-26688
> URL: https://issues.apache.org/jira/browse/FLINK-26688
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Remove usages of *nullCheck* and *decimalContext*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26688) Remove usages of deprecated TableConfig options

2022-03-16 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26688:
---

 Summary: Remove usages of deprecated TableConfig options
 Key: FLINK-26688
 URL: https://issues.apache.org/jira/browse/FLINK-26688
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


Remove usages of *nullCheck* and *decimalContext*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26686) Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory

2022-03-16 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-26686:
---

Assignee: Marios Trivyzas

> Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory
> -
>
> Key: FLINK-26686
> URL: https://issues.apache.org/jira/browse/FLINK-26686
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Following FLINK-26418, maybe it's a good idea not to expose a constructor for 
> {{TestingTaskManagerRuntimeInfo}} without the {*}tmpWorkingDirectory{*}, as 
> this ends up creating the RocksDB folders inside */tmp* (which at least is an 
> improvement over having them created in the given {*}CWD{*}, ending up in the 
> root folder of each module, e.g. {*}flink-table/flink-table-planner{*}).
> If the *tmpWorkingDirectory* is a mandatory argument, then the consumers (IT 
> tests) would have to provide this directory and can use the standard junit 
> *TemporaryFolder.newFolder()* so that the directories are automatically 
> deleted after the tests are run, and devs can find those directories under 
> the build/test folder of each module and not in an unorganized structure in 
> */tmp* (or the Windows user's tmp folder).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26686) Use a junit temp folder for TestingTaskManagerRuntimeInfo tmpWorkingDirectory

2022-03-16 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26686:
---

 Summary: Use a junit temp folder for TestingTaskManagerRuntimeInfo 
tmpWorkingDirectory
 Key: FLINK-26686
 URL: https://issues.apache.org/jira/browse/FLINK-26686
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Reporter: Marios Trivyzas


Following FLINK-26418, maybe it's a good idea not to expose a constructor 
without the {*}tmpWorkingDirectory{*}, as this ends up creating the RocksDB 
folders inside */tmp* (which at least is an improvement over having them 
created in the given {*}CWD{*}, ending up in the root folder of each module, 
e.g. {*}flink-table/flink-table-planner{*}).

If the *tmpWorkingDirectory* is a mandatory argument, then the consumers (IT 
tests) would have to provide this directory and can use the standard junit 
*TemporaryFolder.newFolder()* so that the directories are automatically deleted 
after the tests are run, and devs can find those directories under the 
build/test folder of each module and not in an unorganized structure in */tmp* 
(or the Windows user's tmp folder).
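
A sketch of what the consumer side could look like; the two-argument 
constructor shape is an assumption of this proposal, not the current API:
{noformat}
import java.io.File;

import org.apache.flink.configuration.Configuration;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class SomeOperatorITCase {

    @Rule public final TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void testWithIsolatedWorkingDirectory() throws Exception {
        // Lives under the module's build/test dirs and is deleted by JUnit.
        File workingDir = tmp.newFolder();
        // Hypothetical mandatory-argument constructor:
        TestingTaskManagerRuntimeInfo info =
                new TestingTaskManagerRuntimeInfo(new Configuration(), workingDir);
        // ... exercise the code under test with `info` ...
    }
}{noformat}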



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26587) Replace Configuration arg of RichFunction#open with ReadableConfig

2022-03-10 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26587:

Description: 
It would be nice to replace the more concrete *Configuration* with 
*ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.
{noformat}
 {noformat}

  was:
It would be nice to replace the more concrete *Configuration* with 
*ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.

 

Since *RichFunction* is an old public interface, maybe we can introduce the new 
*open(ReadableConfig parameters),* deprecate the old *open(Configuration 
parameters)* and make it call the new one:
{noformat}
default void open(Configuration parameters) throws Exception {
    open((ReadableConfig) parameters);
}

void open(ReadableConfig parameters) throws Exception;{noformat}


> Replace Configuration arg of RichFunction#open with ReadableConfig
> --
>
> Key: FLINK-26587
> URL: https://issues.apache.org/jira/browse/FLINK-26587
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Marios Trivyzas
>Priority: Major
>
> It would be nice to replace the more concrete *Configuration* with 
> *ReadableConfig* since, according to the javadoc description of the method 
> *RichFunction#open*, the Configuration is a read-only object from which the 
> function can retrieve and use config options.
> {noformat}
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26587) Replace Configuration arg of RichFunction#open with ReadableConfig

2022-03-10 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26587:

Description: It would be nice to replace the more concrete *Configuration* 
with *ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.  (was: It would be nice to 
replace the more concrete *Configuration* with *ReadableConfig* since, 
according to the javadoc description of the method *RichFunction#open*, the 
Configuration is a read-only object from which the function can retrieve and 
use config options.
{noformat}
 {noformat})

> Replace Configuration arg of RichFunction#open with ReadableConfig
> --
>
> Key: FLINK-26587
> URL: https://issues.apache.org/jira/browse/FLINK-26587
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Marios Trivyzas
>Priority: Major
>
> It would be nice to replace the more concrete *Configuration* with 
> *ReadableConfig* since, according to the javadoc description of the method 
> *RichFunction#open*, the Configuration is a read-only object from which the 
> function can retrieve and use config options.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26587) Replace Configuration arg of RichFunction#open with ReadableConfig

2022-03-10 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26587:

Description: 
It would be nice to replace the more concrete *Configuration* with 
*ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.

 

Since *RichFunction* is an old public interface, maybe we can introduce the new 
*open(ReadableConfig parameters),* deprecate the old *open(Configuration 
parameters)* and make it call the new one:
{noformat}
default void open(Configuration parameters) throws Exception {
    open((ReadableConfig) parameters);
}

void open(ReadableConfig parameters) throws Exception;{noformat}

  was:It would be nice to replace the more concrete *Configuration* with 
*ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.


> Replace Configuration arg of RichFunction#open with ReadableConfig
> --
>
> Key: FLINK-26587
> URL: https://issues.apache.org/jira/browse/FLINK-26587
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Marios Trivyzas
>Priority: Major
>
> It would be nice to replace the more concrete *Configuration* with 
> *ReadableConfig* since, according to the javadoc description of the method 
> *RichFunction#open*, the Configuration is a read-only object from which the 
> function can retrieve and use config options.
>  
> Since *RichFunction* is an old public interface, maybe we can introduce the 
> new *open(ReadableConfig parameters),* deprecate the old *open(Configuration 
> parameters)* and make it call the new one:
> {noformat}
> default void open(Configuration parameters) throws Exception {
>     open((ReadableConfig) parameters);
> }
> void open(ReadableConfig parameters) throws Exception;{noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26587) Replace Configuration arg of RichFunction#open with ReadableConfig

2022-03-10 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26587:
---

 Summary: Replace Configuration arg of RichFunction#open with 
ReadableConfig
 Key: FLINK-26587
 URL: https://issues.apache.org/jira/browse/FLINK-26587
 Project: Flink
  Issue Type: Improvement
  Components: API / Core
Reporter: Marios Trivyzas


It would be nice to replace the more concrete *Configuration* with 
*ReadableConfig* since, according to the javadoc description of the method 
*RichFunction#open*, the Configuration is a read-only object from which the 
function can retrieve and use config options.
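
To illustrate the read-only contract, a minimal user-function sketch (the 
function and the option are made up for the example):
{noformat}
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

public class PrefixMapper extends RichMapFunction<String, String> {

    private static final ConfigOption<String> PREFIX =
            ConfigOptions.key("prefix").stringType().defaultValue("> ");

    private transient String prefix;

    @Override
    public void open(Configuration parameters) {
        // Only reads from the passed object; ReadableConfig would suffice.
        this.prefix = parameters.get(PREFIX);
    }

    @Override
    public String map(String value) {
        return prefix + value;
    }
}{noformat}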



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26194) Deprecate unused options in TableConfig

2022-03-10 Thread Marios Trivyzas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504136#comment-17504136
 ] 

Marios Trivyzas commented on FLINK-26194:
-

*getMaxIdleStateRetentionTime* is already deprecated.

 

> Deprecate unused options in TableConfig
> ---
>
> Key: FLINK-26194
> URL: https://issues.apache.org/jira/browse/FLINK-26194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Deprecate *nullCheck* in *TableConfig.*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26194) Deprecate unused options in TableConfig

2022-03-10 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-26194:
---

Assignee: Marios Trivyzas

> Deprecate unused options in TableConfig
> ---
>
> Key: FLINK-26194
> URL: https://issues.apache.org/jira/browse/FLINK-26194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Deprecate *nullCheck* in *TableConfig.*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26418) Tests on flink-table-planner produce tmp_XXX dirs which are not cleaned up

2022-03-10 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26418:

Component/s: Runtime / Configuration
 (was: Table SQL / Planner)

> Tests on flink-table-planner produce tmp_XXX dirs which are not cleaned up
> --
>
> Key: FLINK-26418
> URL: https://issues.apache.org/jira/browse/FLINK-26418
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Configuration
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>  Labels: pull-request-available
>
> Running tests in *flink-table-planner* produces a bunch of *tmp_XX* 
> directories in the *flink-table-planner* root module dir (not inside the 
> build dirs). As a result, unless you change the global {*}gitignore{*}, they 
> show up as new dirs/files to commit, and they are not cleaned up when one 
> runs {*}mvn clean{*}. On top of that, if you try to build the whole *flink* 
> project you get:
>  
> {noformat}
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check 
> (default) on project flink-parent: Too many files with unapproved license: 6 
> See RAT report in: /home/matriv/ververica/flink/target/rat.txt -> [Help 
> 1]{noformat}
> and you need to manually remove those dirs.
> It would be great to keep these directories under the *build* dir and maybe 
> also automatically remove each one once the test producing it finishes 
> successfully.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26418) Tests on flink-table-planner produce tmp_XXX dirs which are not cleaned up

2022-03-09 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas reassigned FLINK-26418:
---

Assignee: Marios Trivyzas

> Tests on flink-table-planner produce tmp_XXX dirs which are not cleaned up
> --
>
> Key: FLINK-26418
> URL: https://issues.apache.org/jira/browse/FLINK-26418
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Assignee: Marios Trivyzas
>Priority: Major
>
> Running tests in *flink-table-planner* produces a bunch of *tmp_XX* 
> directories in the *flink-table-planner* root module dir (not inside the 
> build dirs). As a result, unless you change the global {*}gitignore{*}, they 
> show up as new dirs/files to commit, and they are not cleaned up when one 
> runs {*}mvn clean{*}. On top of that, if you try to build the whole *flink* 
> project you get:
>  
> {noformat}
> [ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.12:check 
> (default) on project flink-parent: Too many files with unapproved license: 6 
> See RAT report in: /home/matriv/ververica/flink/target/rat.txt -> [Help 
> 1]{noformat}
> and you need to manually remove those dirs.
> It would be great to keep these directories under the *build* dir and maybe 
> also automatically remove each one once the test producing it finishes 
> successfully.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26453) execution.allow-client-job-configurations not checked for executeAsync

2022-03-03 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26453:

Description: 
* *checkNotAllowedConfigurations()* should be called by  
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be more clear, and it's not 
only checked by application mode.
 * When using a config option which is the same as the one in the environment 
*(flink-conf.yaml + CLI options)*  we still throw an exception, and we also 
throwing the exception even if the option is not in the environment, but it's 
the default value of the option anywa. Should we check for those cases, or 
should we at least document them and say explicitly that no config option is 
allowed to be set, if the setAsContext

 

 

 

  was:
* *checkNotAllowedConfigurations()* should be called by  
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be more clear, and it's not 
only checked by application mode.
 * When using a configuration which is the same as the one in flink-conf.yaml 

Modified code of StreamSQLExample to pass extra config
{noformat}
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

Configuration conf = new Configuration();
conf.set(
        ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER,
        ExecutionConfigOptions.NotNullEnforcer.DROP);

// set up the Java Table API
final StreamTableEnvironment tableEnv =
        StreamTableEnvironment.create(env,
                EnvironmentSettings.fromConfiguration(conf));

final DataStream<Order> orderA =
        env.fromCollection(
                Arrays.asList(
                        new Order(1L, "beer", 3),
                        new Order(1L, "diaper", 4),
                        new Order(3L, "rubber", 2)));

final DataStream<Order> orderB =
        env.fromCollection(
                Arrays.asList(
                        new Order(2L, "pen", 3),
                        new Order(2L, "rubber", 3),
                        new Order(4L, "beer", 1)));

// convert the first DataStream to a Table object
// it will be used "inline" and is not registered in a catalog
final Table tableA = tableEnv.fromDataStream(orderA);

// convert the second DataStream and register it as a view
// it will be accessible under a name
tableEnv.createTemporaryView("TableB", orderB);

// union the two tables
final Table result =
        tableEnv.sqlQuery(
                "SELECT * FROM "
                        + tableA
                        + " WHERE amount > 2 UNION ALL "
                        + "SELECT * FROM TableB WHERE amount < 2");

// convert the Table back to an insert-only DataStream of type `Order`
tableEnv.toDataStream(result, Order.class).print();

// after the table program is converted to a DataStream program,
// we must use `env.execute()` to submit the job
env.execute();
{noformat}
ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER is set to ERROR in 
flink-conf.yaml and yet no exception is thrown. That is because in 
StreamTableEnvironmentImpl:
{noformat}
public static StreamTableEnvironment create(
        StreamExecutionEnvironment executionEnvironment,
        EnvironmentSettings settings,
        TableConfig tableConfig) {{noformat}
we use the
{noformat}
public static Executor lookupExecutor(
        ClassLoader classLoader,
        String executorIdentifier,
        StreamExecutionEnvironment executionEnvironment) {{noformat}
so we don't follow the same path to call 
StreamContextEnvironment#setAsContext, which checks for overriding options 
depending on the new flag.

 

 

 


> execution.allow-client-job-configurations not checked for executeAsync
> --
>
> Key: FLINK-26453
> URL: https://issues.apache.org/jira/browse/FLINK-26453
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Assignee: Fabian Paul
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> * *checkNotAllowedConfigurations()* should be called by  
> *StreamContextEnvironment#executeAsync()*
>  * Description of the *DeploymentOption* should be clearer, and it's not 
> only checked by application mode.
>  * When using a config option which is the same as the one in the environment 
> *(flink-conf.yaml + CLI options)* we still throw an exception, and we are also 
> throwing the exception even if the option is not in the environment but it's 
> the default value of the option anyway. Should we check for those cases, or 
> should we at least document them and say explicitly that no config option is 
> allowed to be set, if the setAsContext
>  
>  
>  



--
This message was sent by Atlassian Jira

[jira] [Updated] (FLINK-26453) execution.allow-client-job-configurations not checked for executeAsync

2022-03-03 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26453:

Description: 
* *checkNotAllowedConfigurations()* should be called by  
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be more clear, and it's not 
only checked by application mode.
 * When using a config option which is the same as the one in the environment 
*(flink-conf.yaml + CLI options)*  we still throw an exception, and we also 
throwing the exception even if the option is not in the environment, but it's 
the default value of the option anyway. Should we check for those cases, or 
should we at least document them and say explicitly that no config option is 
allowed to be set, if the *execution.allow-client-job-configurations* is set to 
{*}false{*}?

 

 

 

  was:
* *checkNotAllowedConfigurations()* should be called by 
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be clearer, and it's not 
only checked by application mode.
 * When using a config option which is the same as the one in the environment 
*(flink-conf.yaml + CLI options)* we still throw an exception, and we are also 
throwing the exception even if the option is not in the environment but it's 
the default value of the option anyway. Should we check for those cases, or 
should we at least document them and say explicitly that no config option is 
allowed to be set, if the setAsContext

 

 

 


> execution.allow-client-job-configurations not checked for executeAsync
> --
>
> Key: FLINK-26453
> URL: https://issues.apache.org/jira/browse/FLINK-26453
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.15.0
>Reporter: Marios Trivyzas
>Assignee: Fabian Paul
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> * *checkNotAllowedConfigurations()* should be called by 
> *StreamContextEnvironment#executeAsync()*
>  * Description of the *DeploymentOption* should be clearer, and it's not 
> only checked by application mode.
>  * When using a config option which is the same as the one in the environment 
> *(flink-conf.yaml + CLI options)* we still throw an exception, and we are also 
> throwing the exception even if the option is not in the environment but it's 
> the default value of the option anyway. Should we check for those cases, or 
> should we at least document them and say explicitly that no config option is 
> allowed to be set, if the *execution.allow-client-job-configurations* is set 
> to {*}false{*}?
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26453) execution.allow-client-job-configurations not checked for executeAsync

2022-03-03 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26453:

Description: 
* *checkNotAllowedConfigurations()* should be called by 
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be clearer, and it's not 
only checked by application mode.
 * When using a configuration which is the same as the one in flink-conf.yaml 

Modified code of StreamSQLExample to pass extra config
{noformat}
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

Configuration conf = new Configuration();
conf.set(
        ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER,
        ExecutionConfigOptions.NotNullEnforcer.DROP);

// set up the Java Table API
final StreamTableEnvironment tableEnv =
        StreamTableEnvironment.create(env,
                EnvironmentSettings.fromConfiguration(conf));

final DataStream<Order> orderA =
        env.fromCollection(
                Arrays.asList(
                        new Order(1L, "beer", 3),
                        new Order(1L, "diaper", 4),
                        new Order(3L, "rubber", 2)));

final DataStream<Order> orderB =
        env.fromCollection(
                Arrays.asList(
                        new Order(2L, "pen", 3),
                        new Order(2L, "rubber", 3),
                        new Order(4L, "beer", 1)));

// convert the first DataStream to a Table object
// it will be used "inline" and is not registered in a catalog
final Table tableA = tableEnv.fromDataStream(orderA);

// convert the second DataStream and register it as a view
// it will be accessible under a name
tableEnv.createTemporaryView("TableB", orderB);

// union the two tables
final Table result =
        tableEnv.sqlQuery(
                "SELECT * FROM "
                        + tableA
                        + " WHERE amount > 2 UNION ALL "
                        + "SELECT * FROM TableB WHERE amount < 2");

// convert the Table back to an insert-only DataStream of type `Order`
tableEnv.toDataStream(result, Order.class).print();

// after the table program is converted to a DataStream program,
// we must use `env.execute()` to submit the job
env.execute();
{noformat}
ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER is set to ERROR in 
flink-conf.yaml and yet no exception is thrown. That is because in 
StreamTableEnvironmentImpl:
{noformat}
public static StreamTableEnvironment create(
        StreamExecutionEnvironment executionEnvironment,
        EnvironmentSettings settings,
        TableConfig tableConfig) {{noformat}
we use the
{noformat}
public static Executor lookupExecutor(
        ClassLoader classLoader,
        String executorIdentifier,
        StreamExecutionEnvironment executionEnvironment) {{noformat}
so we don't follow the same path to call 
StreamContextEnvironment#setAsContext, which checks for overriding options 
depending on the new flag.

 

 

 

  was:
* *checkNotAllowedConfigurations()* should be called by 
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be clearer, and it's not 
only checked by application mode.
 * When using a combination of TableAPI and DataStreamApi, the check for 
overriding config options is not applied:

Modified code of StreamSQLExample to pass extra config
{noformat}
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

Configuration conf = new Configuration();
conf.set(
        ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER,
        ExecutionConfigOptions.NotNullEnforcer.DROP);

// set up the Java Table API
final StreamTableEnvironment tableEnv =
        StreamTableEnvironment.create(env,
                EnvironmentSettings.fromConfiguration(conf));

final DataStream<Order> orderA =
        env.fromCollection(
                Arrays.asList(
                        new Order(1L, "beer", 3),
                        new Order(1L, "diaper", 4),
                        new Order(3L, "rubber", 2)));

final DataStream<Order> orderB =
        env.fromCollection(
                Arrays.asList(
                        new Order(2L, "pen", 3),
                        new Order(2L, "rubber", 3),
                        new Order(4L, "beer", 1)));

// convert the first DataStream to a Table object
// it will be used "inline" and is not registered in a catalog
final Table tableA = tableEnv.fromDataStream(orderA);

// convert the second DataStream and register it as a view
// it will be accessible under a name
tableEnv.createTemporaryView("TableB", orderB);

// union the two tables
final Table result =
        tableEnv.sqlQuery(
                "SELECT * FROM "
                        + tableA
                        + " WHERE amount > 2 UNION ALL "
                        + "SELECT * FROM TableB WHERE amount < 2");

// convert the Table back to an 

[jira] [Created] (FLINK-26453) execution.allow-client-job-configurations not checked for executeAsync

2022-03-02 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26453:
---

 Summary: execution.allow-client-job-configurations not checked for 
executeAsync
 Key: FLINK-26453
 URL: https://issues.apache.org/jira/browse/FLINK-26453
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.15.0
Reporter: Marios Trivyzas
Assignee: Fabian Paul


* *checkNotAllowedConfigurations()* should be called by 
*StreamContextEnvironment#executeAsync()*
 * Description of the *DeploymentOption* should be clearer, and it's not 
only checked by application mode.
 * When using a combination of TableAPI and DataStreamApi, the check for 
overriding config options is not applied:

Modified code of StreamSQLExample to pass extra config
{noformat}
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

Configuration conf = new Configuration();
conf.set(
        ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER,
        ExecutionConfigOptions.NotNullEnforcer.DROP);

// set up the Java Table API
final StreamTableEnvironment tableEnv =
        StreamTableEnvironment.create(env,
                EnvironmentSettings.fromConfiguration(conf));

final DataStream<Order> orderA =
        env.fromCollection(
                Arrays.asList(
                        new Order(1L, "beer", 3),
                        new Order(1L, "diaper", 4),
                        new Order(3L, "rubber", 2)));

final DataStream<Order> orderB =
        env.fromCollection(
                Arrays.asList(
                        new Order(2L, "pen", 3),
                        new Order(2L, "rubber", 3),
                        new Order(4L, "beer", 1)));

// convert the first DataStream to a Table object
// it will be used "inline" and is not registered in a catalog
final Table tableA = tableEnv.fromDataStream(orderA);

// convert the second DataStream and register it as a view
// it will be accessible under a name
tableEnv.createTemporaryView("TableB", orderB);

// union the two tables
final Table result =
        tableEnv.sqlQuery(
                "SELECT * FROM "
                        + tableA
                        + " WHERE amount > 2 UNION ALL "
                        + "SELECT * FROM TableB WHERE amount < 2");

// convert the Table back to an insert-only DataStream of type `Order`
tableEnv.toDataStream(result, Order.class).print();

// after the table program is converted to a DataStream program,
// we must use `env.execute()` to submit the job
env.execute();
{noformat}
ExecutionConfigOptions.TABLE_EXEC_SINK_NOT_NULL_ENFORCER is set to ERROR in 
flink-conf.yaml and yet no exception is thrown. That is because in 
StreamTableEnvironmentImpl:
{noformat}
public static StreamTableEnvironment create(
        StreamExecutionEnvironment executionEnvironment,
        EnvironmentSettings settings,
        TableConfig tableConfig) {{noformat}
we use the
{noformat}
public static Executor lookupExecutor(
        ClassLoader classLoader,
        String executorIdentifier,
        StreamExecutionEnvironment executionEnvironment) {{noformat}
so we don't follow the same path to call 
StreamContextEnvironment#setAsContext, which checks for overriding options 
depending on the new flag.
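
A sketch of where the missing guard could live, reusing the check named in 
this ticket (the placement is an assumption, not the implemented fix):
{noformat}
// Inside StreamContextEnvironment:
@Override
public JobClient executeAsync(StreamGraph streamGraph) throws Exception {
    // the same guard that execute() applies today
    checkNotAllowedConfigurations();
    return super.executeAsync(streamGraph);
}{noformat}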

 

 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26421) Cleanup EnvironmentSettings and only use ConfigOptions

2022-03-01 Thread Marios Trivyzas (Jira)
Marios Trivyzas created FLINK-26421:
---

 Summary: Cleanup EnvironmentSettings and only use ConfigOptions
 Key: FLINK-26421
 URL: https://issues.apache.org/jira/browse/FLINK-26421
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Marios Trivyzas
Assignee: Marios Trivyzas


* integrate Configuration into EnvironmentSettings (see the sketch below)
 * EnvironmentSettings should only contain a Configuration, not other members 
(create config options for all members)
 * Remove `TableEnvironmentImpl.create(EnvSettings, Configuration)` -> 
create(EnvSettings)
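
A sketch of the configuration-only style this points to, using only APIs that 
already appear in the snippets above (the option key is illustrative):
{noformat}
// All settings carried by Configuration; EnvironmentSettings derived from it.
Configuration conf = new Configuration();
conf.setString("execution.runtime-mode", "streaming");
EnvironmentSettings settings = EnvironmentSettings.fromConfiguration(conf);
TableEnvironment tableEnv = TableEnvironment.create(settings);{noformat}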



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26194) Deprecate unused options in TableConfig

2022-03-01 Thread Marios Trivyzas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marios Trivyzas updated FLINK-26194:

Summary: Deprecate unused options in TableConfig  (was: Deprecate nullCheck 
in TableConfig)

> Deprecate unused options in TableConfig
> ---
>
> Key: FLINK-26194
> URL: https://issues.apache.org/jira/browse/FLINK-26194
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Marios Trivyzas
>Priority: Major
>
> Deprecate *nullCheck* in *TableConfig.*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

