[jira] [Reopened] (FLINK-33407) Port time functions to the new type inference stack

2023-10-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reopened FLINK-33407:
--
  Assignee: Dawid Wysakowicz

> Port time functions to the new type inference stack
> ---
>
> Key: FLINK-33407
> URL: https://issues.apache.org/jira/browse/FLINK-33407
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> Remove leftovers from https://issues.apache.org/jira/browse/FLINK-13785
> There are some par





[jira] [Closed] (FLINK-33407) Port time functions to the new type inference stack

2023-10-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33407.

Resolution: Duplicate

> Port time functions to the new type inference stack
> ---
>
> Key: FLINK-33407
> URL: https://issues.apache.org/jira/browse/FLINK-33407
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> The end goal for this task is to remove 
> https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/time.scala
> For that to happen we need to port:
> * EXTRACT
> * CURRENT_DATE
> * CURRENT_TIME
> * CURRENT_TIMESTAMP
> * LOCAL_TIME
> * LOCAL_TIMESTAMP
> * TEMPORAL_OVERLAPS
> * DATE_FORMAT
> * TIMESTAMP_DIFF
> * TO_TIMESTAMP_LTZ
> functions to the new type inference stack.





[jira] [Created] (FLINK-33407) Port time functions to the new type inference stack

2023-10-31 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33407:


 Summary: Port time functions to the new type inference stack
 Key: FLINK-33407
 URL: https://issues.apache.org/jira/browse/FLINK-33407
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


The end goal for this task is to remove 
https://github.com/apache/flink/blob/91d81c427aa6312841ca868d54e8ce6ea721cd60/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/expressions/time.scala

For that to happen we need to port:
* EXTRACT
* CURRENT_DATE
* CURRENT_TIME
* CURRENT_TIMESTAMP
* LOCAL_TIME
* LOCAL_TIMESTAMP
* TEMPORAL_OVERLAPS
* DATE_FORMAT
* TIMESTAMP_DIFF
* TO_TIMESTAMP_LTZ
functions to the new type inference stack.
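
For context: on the new stack, a function declares its input and output type 
strategies on a {{BuiltInFunctionDefinition}} instead of relying on the legacy 
planner expressions in time.scala. A minimal sketch for one of the functions 
above (the concrete strategies are illustrative assumptions, not the final 
definitions):

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.functions.BuiltInFunctionDefinition;
import org.apache.flink.table.functions.FunctionKind;
import org.apache.flink.table.types.inference.InputTypeStrategies;
import org.apache.flink.table.types.inference.TypeStrategies;

// Sketch only: CURRENT_TIMESTAMP declared on the new type inference stack,
// as it could appear in BuiltInFunctionDefinitions.
public static final BuiltInFunctionDefinition CURRENT_TIMESTAMP =
        BuiltInFunctionDefinition.newBuilder()
                .name("CURRENT_TIMESTAMP")
                .kind(FunctionKind.SCALAR)
                .inputTypeStrategy(InputTypeStrategies.NO_ARGS)
                // assumed output type, mirroring the SQL standard semantics
                .outputTypeStrategy(
                        TypeStrategies.explicit(DataTypes.TIMESTAMP_LTZ(3).notNull()))
                .build();
{code}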





[jira] [Assigned] (FLINK-18286) Implement type inference for functions on composite types

2023-10-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-18286:


Assignee: Dawid Wysakowicz  (was: Francesco Guardiani)

> Implement type inference for functions on composite types
> -
>
> Key: FLINK-18286
> URL: https://issues.apache.org/jira/browse/FLINK-18286
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> We should implement type inference for functions such as AT/ITEM/ELEMENT/GET.
> Additionally, we should make sure they are consistent between the Table API & SQL.
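
For illustration, these are the Table API entry points of those functions (a 
sketch; {{t}} is an assumed table with an ARRAY<INT> column {{f0}} and a 
ROW<name STRING> column {{f1}}; the SQL equivalents are {{f0[1]}}, 
{{ELEMENT(f0)}} and {{f1.name}}):

{code:java}
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.Table;

// AT / ITEM: index or key access, result type = element type
Table item = t.select($("f0").at(1));
// ELEMENT: sole element of a single-element array
Table element = t.select($("f0").element());
// GET: field access on a composite (ROW) type
Table field = t.select($("f1").get("name"));
{code}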





[jira] [Resolved] (FLINK-33375) Add a RestoreTestBase

2023-10-28 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz resolved FLINK-33375.
--
Resolution: Fixed

Implemented in 
dcce3764a4500b2006cd260677169d14c553a3eb..d863ff38c0671255df2452c79dad88fa47e2bc0c

> Add a RestoreTestBase
> -
>
> Key: FLINK-33375
> URL: https://issues.apache.org/jira/browse/FLINK-33375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Add a test base class for writing restore tests.





[jira] [Closed] (FLINK-33371) Make TestValues sinks return results as Rows

2023-10-27 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33371.

Resolution: Fixed

Implemented in e914eb7fc3f31286ed7e33cc93e7ffbca785b731

> Make TestValues sinks return results as Rows
> 
>
> Key: FLINK-33371
> URL: https://issues.apache.org/jira/browse/FLINK-33371
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner, Tests
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> If we want to use the predicates from 
> https://github.com/apache/flink/pull/23584 in restore tests, we need to make 
> the testing sinks return Rows instead of Strings.





[jira] [Created] (FLINK-33375) Add a RestoreTestBase

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33375:


 Summary: Add a RestoreTestBase
 Key: FLINK-33375
 URL: https://issues.apache.org/jira/browse/FLINK-33375
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Add a test base class for writing restore tests.
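
The intended usage might look roughly as follows (purely illustrative; the 
hooks and helper names below are assumptions about the eventual API, not its 
actual contract):

{code:java}
import java.util.Collections;
import java.util.List;

// Hypothetical restore test: programs(), StreamExecCalc and
// CalcTestPrograms.SIMPLE_CALC are illustrative assumptions.
public class CalcRestoreTest extends RestoreTestBase {

    public CalcRestoreTest() {
        super(StreamExecCalc.class); // assumed: declares the ExecNode under test
    }

    @Override
    public List<TableTestProgram> programs() {
        // assumed hook: each program defines a pipeline, a savepoint phase
        // and the expected results after restoring from the compiled plan
        return Collections.singletonList(CalcTestPrograms.SIMPLE_CALC);
    }
}
{code}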





[jira] [Created] (FLINK-33372) Cryptic exception for a sub query in a CompiledPlan

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33372:


 Summary: Cryptic exception for a sub query in a CompiledPlan
 Key: FLINK-33372
 URL: https://issues.apache.org/jira/browse/FLINK-33372
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Dawid Wysakowicz


SQL statements with a subquery can be compiled to a plan, but such plans 
cannot be executed; they fail with a cryptic exception.

Example:

{code}
final CompiledPlan compiledPlan = tEnv.compilePlanSql(
        "insert into MySink SELECT * FROM LATERAL TABLE(func1(select c from MyTable))");

tEnv.loadPlan(PlanReference.fromJsonString(compiledPlan.asJsonString())).execute();
{code}

fails with:
{code}
org.apache.flink.table.planner.codegen.CodeGenException: Unsupported call: 
$SCALAR_QUERY() 
If you think this function should be supported, you can create an issue and 
start a discussion for it.
{code}





[jira] [Created] (FLINK-33371) Make TestValues sinks return results as Rows

2023-10-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33371:


 Summary: Make TestValues sinks return results as Rows
 Key: FLINK-33371
 URL: https://issues.apache.org/jira/browse/FLINK-33371
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner, Tests
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


If we want to use the predicates from 
https://github.com/apache/flink/pull/23584 in restore tests, we need to make 
the testing sinks return Rows instead of Strings.
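
The effect on assertions might look like this (a sketch; {{getResultsAsRows}} 
is an assumed accessor, not the factory's actual method name):

{code:java}
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;
import org.apache.flink.types.Row;

// Sketch: with sinks collecting Rows, restore tests can match on structured
// values instead of parsing formatted strings.
List<Row> results = TestValuesTableFactory.getResultsAsRows("MySink"); // hypothetical
assertThat(results).contains(Row.of("Alice", 42L));
{code}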





[jira] [Closed] (FLINK-33255) Validate argument count during type inference

2023-10-25 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33255.

Resolution: Fixed

Improved in b1bbafddc98c76653a5733ec345f79b1ee4eee71

> Validate argument count during type inference
> -
>
> Key: FLINK-33255
> URL: https://issues.apache.org/jira/browse/FLINK-33255
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently we do not validate the argument count in 
> {{TypeInferenceOperandInference}}, which results in bugs such as 
> [FLINK-33248]. We already run the check in {{TypeInferenceUtil}} when 
> running inference for the Table API, so we should do the same in the 
> {{TypeInferenceOperandInference}} case.
> We could expose {{TypeInferenceUtil#validateArgumentCount}} and call it. If 
> the check fails, we should not adapt {{operandTypes}}.





[jira] [Closed] (FLINK-33327) Window TVF column expansion does not work with an INSERT INTO

2023-10-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33327.

Resolution: Fixed

Fixed in c7beda0da81ffc4bbb01befafd2eed08b7b35854

> Window TVF column expansion does not work with an INSERT INTO
> -
>
> Key: FLINK-33327
> URL: https://issues.apache.org/jira/browse/FLINK-33327
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> If we have an {{INSERT INTO}} with an explicit column list and a {{TUMBLE}} 
> function, the explicit column expansion fails with {{NullPointerException}}.
> Test to reproduce:
> {code}
> @Test
> public void 
> testExplicitTableWithinTableFunctionWithInsertIntoNamedColumns() {
> tableEnv.getConfig()
> .set(
> TABLE_COLUMN_EXPANSION_STRATEGY,
> 
> Collections.singletonList(EXCLUDE_DEFAULT_VIRTUAL_METADATA_COLUMNS));
> tableEnv.executeSql(
> "CREATE TABLE sink (\n"
> + "  a STRING,\n"
> + "  c BIGINT\n"
> + ") WITH (\n"
> + " 'connector' = 'values',"
> + " 'sink-insert-only' = 'false'"
> + ")");
> tableEnv.explainSql(
> "INSERT INTO sink(a, c) "
> + "SELECT t3_s, COUNT(t3_i) FROM "
> + " TABLE(TUMBLE(TABLE t3, DESCRIPTOR(t3_m_virtual), 
> INTERVAL '1' MINUTE)) "
> + "GROUP BY t3_s;");
> }
> {code}
> Exception:
> {code}
> org.apache.flink.table.api.ValidationException: SQL validation failed. SQL 
> validation failed. null
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:189)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:115)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:282)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.explainSql(TableEnvironmentImpl.java:696)
>   at 
> org.apache.flink.table.api.TableEnvironment.explainSql(TableEnvironment.java:976)
>   at 
> org.apache.flink.table.planner.plan.stream.sql.ColumnExpansionTest.testExplicitTableWithinTableFunctionWithInsertIntoNamedColumns(ColumnExpansionTest.java:279)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
>   at 
> 

[jira] [Created] (FLINK-33327) Window TVF column expansion does not work with an INSERT INTO

2023-10-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33327:


 Summary: Window TVF column expansion does not work with an INSERT 
INTO
 Key: FLINK-33327
 URL: https://issues.apache.org/jira/browse/FLINK-33327
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
 Fix For: 1.19.0


If we have an {{INSERT INTO}} with an explicit column list and a {{TUMBLE}} 
function, the explicit column expansion fails with {{NullPointerException}}.





[jira] [Updated] (FLINK-33327) Window TVF column expansion does not work with an INSERT INTO

2023-10-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33327:
-
Description: 
If we have an {{INSERT INTO}} with an explicit column list and a {{TUMBLE}} 
function, the explicit column expansion fails with {{NullPointerException}}.

Test to reproduce:

{code}
@Test
public void 
testExplicitTableWithinTableFunctionWithInsertIntoNamedColumns() {
tableEnv.getConfig()
.set(
TABLE_COLUMN_EXPANSION_STRATEGY,

Collections.singletonList(EXCLUDE_DEFAULT_VIRTUAL_METADATA_COLUMNS));

tableEnv.executeSql(
"CREATE TABLE sink (\n"
+ "  a STRING,\n"
+ "  c BIGINT\n"
+ ") WITH (\n"
+ " 'connector' = 'values',"
+ " 'sink-insert-only' = 'false'"
+ ")");

tableEnv.explainSql(
"INSERT INTO sink(a, c) "
+ "SELECT t3_s, COUNT(t3_i) FROM "
+ " TABLE(TUMBLE(TABLE t3, DESCRIPTOR(t3_m_virtual), 
INTERVAL '1' MINUTE)) "
+ "GROUP BY t3_s;");
}
{code}

Exception:

{code}
org.apache.flink.table.api.ValidationException: SQL validation failed. SQL 
validation failed. null

at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:189)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:115)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:282)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.explainSql(TableEnvironmentImpl.java:696)
at 
org.apache.flink.table.api.TableEnvironment.explainSql(TableEnvironment.java:976)
at 
org.apache.flink.table.planner.plan.stream.sql.ColumnExpansionTest.testExplicitTableWithinTableFunctionWithInsertIntoNamedColumns(ColumnExpansionTest.java:279)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
at 
com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:232)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:55)
Caused by: org.apache.flink.table.api.ValidationException: SQL validation 
failed. null
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:189)
at 

[jira] [Assigned] (FLINK-33327) Window TVF column expansion does not work with an INSERT INTO

2023-10-20 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33327:


Assignee: Dawid Wysakowicz

> Window TVF column expansion does not work with an INSERT INTO
> -
>
> Key: FLINK-33327
> URL: https://issues.apache.org/jira/browse/FLINK-33327
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.19.0
>
>
> If we have an {{INSERT INTO}} with an explicit column list and a {{TUMBLE}} 
> function, the explicit column expansion fails with {{NullPointerException}}.





[jira] [Created] (FLINK-33255) Validate argument count during type inference

2023-10-12 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33255:


 Summary: Validate argument count during type inference
 Key: FLINK-33255
 URL: https://issues.apache.org/jira/browse/FLINK-33255
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


Currently we do not validate the argument count in 
{{TypeInferenceOperandInference}}, which results in bugs such as 
[FLINK-33248]. We already run the check in {{TypeInferenceUtil}} when 
running inference for the Table API, so we should do the same in the 
{{TypeInferenceOperandInference}} case.

We could expose {{TypeInferenceUtil#validateArgumentCount}} and call it. If the 
check fails, we should not adapt {{operandTypes}}.
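
A sketch of the proposed check (assuming {{validateArgumentCount}} is exposed 
with a boolean {{throwOnFailure}} flag; the surrounding names mirror 
{{TypeInferenceOperandInference#inferOperandTypes}}):

{code:java}
// Inside TypeInferenceOperandInference#inferOperandTypes (sketch):
final ArgumentCount argumentCount =
        typeInference.getInputTypeStrategy().getArgumentCount();
final boolean valid =
        TypeInferenceUtil.validateArgumentCount(
                argumentCount, callBinding.getOperandCount(), /* throwOnFailure */ false);
if (!valid) {
    // leave operandTypes untouched; later validation surfaces the error
    return;
}
// ... existing logic that adapts operandTypes ...
{code}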






[jira] [Closed] (FLINK-33248) Calling CURRENT_WATERMARK without parameters gives IndexOutOfBoundsException

2023-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33248.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master: 35a2257af5d0cf9e671460b9770f1363b7a3d60c

> Calling CURRENT_WATERMARK without parameters gives IndexOutOfBoundsException
> 
>
> Key: FLINK-33248
> URL: https://issues.apache.org/jira/browse/FLINK-33248
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Create a table
> {code:java}
> Flink SQL> CREATE TABLE T (ts TIMESTAMP(3)) WITH ('connector'='values', 
> 'bounded'='true');
> [INFO] Execute statement succeed. {code}
> Select CURRENT_WATERMARK without parameters
> {code:java}
> Flink SQL> SELECT ts, CURRENT_WATERMARK() FROM T;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0 {code}
> Logs from sql-client:
> {code:java}
> 2023-10-11 14:58:20,576 ERROR 
> org.apache.flink.table.gateway.service.operation.OperationManager [] - Failed 
> to execute the operation cc8d29d3-e6ad-428a-92ea-b0a2a72c25f9.
> org.apache.flink.table.api.ValidationException: SQL validation failed. 
> Unexpected error in type inference logic of function 'CURRENT_WATERMARK'. 
> This is a bug.
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:200)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:117)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convert(SqlNodeToOperationConversion.java:261)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>  ~[?:?]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:187)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>     at java.lang.Thread.run(Thread.java:829) [?:?] {code}
>  
> *Expected Behavior*:
> It should return a SqlValidatorException: No match found for function 
> signature CURRENT_WATERMARK()
> This is in line with other functions which expect a parameter.
> Example:
> {code:java}
> Flink SQL> SELECT ARRAY_JOIN();
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.calcite.sql.validate.SqlValidatorException: No match found for 
> function signature ARRAY_JOIN(){code}
>  





[jira] [Assigned] (FLINK-33248) Calling CURRENT_WATERMARK without parameters gives IndexOutOfBoundsException

2023-10-12 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33248:


Assignee: Bonnie Varghese

> Calling CURRENT_WATERMARK without parameters gives IndexOutOfBoundsException
> 
>
> Key: FLINK-33248
> URL: https://issues.apache.org/jira/browse/FLINK-33248
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Minor
>  Labels: pull-request-available
>
> Create a table
> {code:java}
> Flink SQL> CREATE TABLE T (ts TIMESTAMP(3)) WITH ('connector'='values', 
> 'bounded'='true');
> [INFO] Execute statement succeed. {code}
> Select CURRENT_WATERMARK without parameters
> {code:java}
> Flink SQL> SELECT ts, CURRENT_WATERMARK() FROM T;
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.IndexOutOfBoundsException: Index 0 out of bounds for length 0 {code}
> Logs from sql-client:
> {code:java}
> 2023-10-11 14:58:20,576 ERROR 
> org.apache.flink.table.gateway.service.operation.OperationManager [] - Failed 
> to execute the operation cc8d29d3-e6ad-428a-92ea-b0a2a72c25f9.
> org.apache.flink.table.api.ValidationException: SQL validation failed. 
> Unexpected error in type inference logic of function 'CURRENT_WATERMARK'. 
> This is a bug.
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:200)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:117)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convert(SqlNodeToOperationConversion.java:261)
>  ~[?:?]
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
>  ~[?:?]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:187)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
>  ~[flink-sql-gateway-1.19-SNAPSHOT.jar:1.19-SNAPSHOT]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>     at java.lang.Thread.run(Thread.java:829) [?:?] {code}
>  
> *Expected Behavior*:
> It should return a SqlValidatorException: No match found for function 
> signature CURRENT_WATERMARK()
> This is in line with other functions which expect a parameter.
> Example:
> {code:java}
> Flink SQL> SELECT ARRAY_JOIN();
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.calcite.sql.validate.SqlValidatorException: No match found for 
> function signature ARRAY_JOIN(){code}
>  





[jira] [Closed] (FLINK-33179) Improve reporting serialisation issues

2023-10-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33179.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master: 6b52a4107db7521a25f4f308891095c5ba33cca0

> Improve reporting serialisation issues
> --
>
> Key: FLINK-33179
> URL: https://issues.apache.org/jira/browse/FLINK-33179
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> FLINK-33158 shows that serialisation exceptions are not reported in a helpful 
> manner. We should improve error reporting so that it gives more context about 
> what went wrong.





[jira] [Comment Edited] (FLINK-33223) MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled plan

2023-10-10 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17773558#comment-17773558
 ] 

Dawid Wysakowicz edited comment on FLINK-33223 at 10/10/23 6:44 AM:


Fixed in 
* master
** 53ece12c25579497338ed59a7aebe70f2b3d9ed6
* 1.18.1
** 9b837727b6d369af9029c73a02bf5c43f0ce6201


was (Author: dawidwys):
Fixed in master: 53ece12c25579497338ed59a7aebe70f2b3d9ed6

> MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled 
> plan
> ---
>
> Key: FLINK-33223
> URL: https://issues.apache.org/jira/browse/FLINK-33223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
>
> {code}
> String sql =
> "insert into MySink"
> + " SELECT * FROM\n"
> + " MyTable\n"
> + "   MATCH_RECOGNIZE(\n"
> + "   PARTITION BY vehicle_id\n"
> + "   ORDER BY `rowtime`\n"
> + "   MEASURES \n"
> + "   FIRST(A.`rowtime`) as startTime,\n"
> + "   LAST(A.`rowtime`) as endTime,\n"
> + "   FIRST(A.engine_temperature) as 
> Initial_Temp,\n"
> + "   LAST(A.engine_temperature) as Final_Temp\n"
> + "   ONE ROW PER MATCH\n"
> + "   AFTER MATCH SKIP TO FIRST B\n"
> + "   PATTERN (A+ B)\n"
> + "   DEFINE\n"
> + "   A as LAST(A.engine_temperature,1) is NULL 
> OR A.engine_temperature > LAST(A.engine_temperature,1),\n"
> + "   B as B.engine_temperature < 
> LAST(A.engine_temperature)\n"
> + "   )MR;";
> util.verifyJsonPlan(String.format(sql, afterClause));
> {code}
> fails with:
> {code}
> Could not resolve internal system function '$SKIP TO LAST$1'. This is a bug, 
> please file an issue. (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[3]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch["matchSpec"]->org.apache.flink.table.planner.plan.nodes.exec.spec.MatchSpec["after"])
> {code}





[jira] [Updated] (FLINK-33223) MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled plan

2023-10-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33223:
-
Fix Version/s: 1.18.1

> MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled 
> plan
> ---
>
> Key: FLINK-33223
> URL: https://issues.apache.org/jira/browse/FLINK-33223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.1
>
>
> {code}
> String sql =
> "insert into MySink"
> + " SELECT * FROM\n"
> + " MyTable\n"
> + "   MATCH_RECOGNIZE(\n"
> + "   PARTITION BY vehicle_id\n"
> + "   ORDER BY `rowtime`\n"
> + "   MEASURES \n"
> + "   FIRST(A.`rowtime`) as startTime,\n"
> + "   LAST(A.`rowtime`) as endTime,\n"
> + "   FIRST(A.engine_temperature) as 
> Initial_Temp,\n"
> + "   LAST(A.engine_temperature) as Final_Temp\n"
> + "   ONE ROW PER MATCH\n"
> + "   AFTER MATCH SKIP TO FIRST B\n"
> + "   PATTERN (A+ B)\n"
> + "   DEFINE\n"
> + "   A as LAST(A.engine_temperature,1) is NULL 
> OR A.engine_temperature > LAST(A.engine_temperature,1),\n"
> + "   B as B.engine_temperature < 
> LAST(A.engine_temperature)\n"
> + "   )MR;";
> util.verifyJsonPlan(String.format(sql, afterClause));
> {code}
> fails with:
> {code}
> Could not resolve internal system function '$SKIP TO LAST$1'. This is a bug, 
> please file an issue. (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[3]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch["matchSpec"]->org.apache.flink.table.planner.plan.nodes.exec.spec.MatchSpec["after"])
> {code}





[jira] [Closed] (FLINK-33223) MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled plan

2023-10-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33223.

Resolution: Fixed

Fixed in master: 53ece12c25579497338ed59a7aebe70f2b3d9ed6

> MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled 
> plan
> ---
>
> Key: FLINK-33223
> URL: https://issues.apache.org/jira/browse/FLINK-33223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> {code}
> String sql =
> "insert into MySink"
> + " SELECT * FROM\n"
> + " MyTable\n"
> + "   MATCH_RECOGNIZE(\n"
> + "   PARTITION BY vehicle_id\n"
> + "   ORDER BY `rowtime`\n"
> + "   MEASURES \n"
> + "   FIRST(A.`rowtime`) as startTime,\n"
> + "   LAST(A.`rowtime`) as endTime,\n"
> + "   FIRST(A.engine_temperature) as 
> Initial_Temp,\n"
> + "   LAST(A.engine_temperature) as Final_Temp\n"
> + "   ONE ROW PER MATCH\n"
> + "   AFTER MATCH SKIP TO FIRST B\n"
> + "   PATTERN (A+ B)\n"
> + "   DEFINE\n"
> + "   A as LAST(A.engine_temperature,1) is NULL 
> OR A.engine_temperature > LAST(A.engine_temperature,1),\n"
> + "   B as B.engine_temperature < 
> LAST(A.engine_temperature)\n"
> + "   )MR;";
> util.verifyJsonPlan(String.format(sql, afterClause));
> {code}
> fails with:
> {code}
> Could not resolve internal system function '$SKIP TO LAST$1'. This is a bug, 
> please file an issue. (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[3]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch["matchSpec"]->org.apache.flink.table.planner.plan.nodes.exec.spec.MatchSpec["after"])
> {code}





[jira] [Created] (FLINK-33223) MATCH_RECOGNIZE AFTER MATCH clause can not be deserialised from a compiled plan

2023-10-09 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33223:


 Summary: MATCH_RECOGNIZE AFTER MATCH clause can not be 
deserialised from a compiled plan
 Key: FLINK-33223
 URL: https://issues.apache.org/jira/browse/FLINK-33223
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.19.0


{code}
String sql =
"insert into MySink"
+ " SELECT * FROM\n"
+ " MyTable\n"
+ "   MATCH_RECOGNIZE(\n"
+ "   PARTITION BY vehicle_id\n"
+ "   ORDER BY `rowtime`\n"
+ "   MEASURES \n"
+ "   FIRST(A.`rowtime`) as startTime,\n"
+ "   LAST(A.`rowtime`) as endTime,\n"
+ "   FIRST(A.engine_temperature) as 
Initial_Temp,\n"
+ "   LAST(A.engine_temperature) as Final_Temp\n"
+ "   ONE ROW PER MATCH\n"
+ "   AFTER MATCH SKIP TO FIRST B\n"
+ "   PATTERN (A+ B)\n"
+ "   DEFINE\n"
+ "   A as LAST(A.engine_temperature,1) is NULL OR 
A.engine_temperature > LAST(A.engine_temperature,1),\n"
+ "   B as B.engine_temperature < 
LAST(A.engine_temperature)\n"
+ "   )MR;";
util.verifyJsonPlan(String.format(sql, afterClause));
{code}

fails with:

{code}
Could not resolve internal system function '$SKIP TO LAST$1'. This is a bug, 
please file an issue. (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[3]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecMatch["matchSpec"]->org.apache.flink.table.planner.plan.nodes.exec.spec.MatchSpec["after"])
{code}





[jira] [Comment Edited] (FLINK-33158) Cryptic exception when there is a StreamExecSort in JsonPlan

2023-10-03 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17771467#comment-17771467
 ] 

Dawid Wysakowicz edited comment on FLINK-33158 at 10/3/23 12:51 PM:


Fixed in:
* master
** 4afd09245823a1cf2d849dbd84c1b6d5ab58c875
* 1.18.1
** 2c8207a11a1850b11b06edfe10f3c8eecaf8d641


was (Author: dawidwys):
Fixed in 4afd09245823a1cf2d849dbd84c1b6d5ab58c875

> Cryptic exception when there is a StreamExecSort in JsonPlan
> 
>
> Key: FLINK-33158
> URL: https://issues.apache.org/jira/browse/FLINK-33158
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.1
>
>
> {code}
> CREATE TABLE MyTable (
>a bigint,
>b int not null,
>c varchar,
>d timestamp(3)
> )
> with (
>'connector' = 'values',
>'bounded' = 'false'
> )
> insert into MySink SELECT a, a from MyTable order by b
> {code}
> fails with:
> {code}
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  For input string: "null" (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[2])
> {code}





[jira] [Updated] (FLINK-33158) Cryptic exception when there is a StreamExecSort in JsonPlan

2023-10-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-33158:
-
Fix Version/s: 1.18.1
   (was: 1.18.0)

> Cryptic exception when there is a StreamExecSort in JsonPlan
> 
>
> Key: FLINK-33158
> URL: https://issues.apache.org/jira/browse/FLINK-33158
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.1
>
>
> {code}
> CREATE TABLE MyTable (
>a bigint,
>b int not null,
>c varchar,
>d timestamp(3)
> )
> with (
>'connector' = 'values',
>'bounded' = 'false'
> )
> insert into MySink SELECT a, a from MyTable order by b
> {code}
> fails with:
> {code}
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  For input string: "null" (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[2])
> {code}





[jira] [Closed] (FLINK-33158) Cryptic exception when there is a StreamExecSort in JsonPlan

2023-10-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33158.

Resolution: Fixed

Fixed in 4afd09245823a1cf2d849dbd84c1b6d5ab58c875

> Cryptic exception when there is a StreamExecSort in JsonPlan
> 
>
> Key: FLINK-33158
> URL: https://issues.apache.org/jira/browse/FLINK-33158
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> {code}
> CREATE TABLE MyTable (
>a bigint,
>b int not null,
>c varchar,
>d timestamp(3)
> )
> with (
>'connector' = 'values',
>'bounded' = 'false'
> )
> insert into MySink SELECT a, a from MyTable order by b
> {code}
> fails with:
> {code}
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  For input string: "null" (through reference chain: 
> org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[2])
> {code}





[jira] [Created] (FLINK-33179) Improve reporting serialisation issues

2023-10-03 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33179:


 Summary: Improve reporting serialisation issues
 Key: FLINK-33179
 URL: https://issues.apache.org/jira/browse/FLINK-33179
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz


FLINK-33158 shows that serialisation exceptions are not reported in a helpful 
manner. We should improve error reporting so that it gives more context about 
what went wrong.





[jira] [Assigned] (FLINK-32873) Add configuration to allow disabling SQL query hints

2023-09-28 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32873:


Assignee: Bonnie Varghese

> Add configuration to allow disabling SQL query hints
> 
>
> Key: FLINK-32873
> URL: https://issues.apache.org/jira/browse/FLINK-32873
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Bonnie Varghese
>Assignee: Bonnie Varghese
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Platform providers may want to disable hints completely for security reasons.
> Currently, there is a configuration to disable the OPTIONS hint - 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/#table-dynamic-table-options-enabled]
> We need a new configuration to also disable QUERY hints - 
> [https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/sql/queries/hints/#query-hints]
> The proposal is to add a new configuration:
>  
> {code:java}
> Name: table.optimizer.query-hints.enabled
> Description: Enable or disable the QUERY hint; if disabled, an exception 
> would be thrown if any QUERY hints are specified.
> Note: The default value will be set to true.
> {code}
>  





[jira] [Created] (FLINK-33158) Cryptic exception when there is a StreamExecSort in JsonPlan

2023-09-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33158:


 Summary: Cryptic exception when there is a StreamExecSort in 
JsonPlan
 Key: FLINK-33158
 URL: https://issues.apache.org/jira/browse/FLINK-33158
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.1, 1.16.2
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


{code}
CREATE TABLE MyTable (
   a bigint,
   b int not null,
   c varchar,
   d timestamp(3)
)
with (
   'connector' = 'values',
   'bounded' = 'false'
)

insert into MySink SELECT a, a from MyTable order by b
{code}

fails with:

{code}
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 For input string: "null" (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[2])
{code}





[jira] [Closed] (FLINK-33083) SupportsReadingMetadata is not applied when loading a CompiledPlan

2023-09-19 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33083.

Fix Version/s: 1.18.0
   Resolution: Fixed

fixed in 
065107edbea6d1afa26af72d8d0bf536104521b8..35f13ea50c1c99d4b4751b33410fb5f5241094c2

> SupportsReadingMetadata is not applied when loading a CompiledPlan
> --
>
> Key: FLINK-33083
> URL: https://issues.apache.org/jira/browse/FLINK-33083
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> If a few conditions are met, we cannot apply the {{SupportsReadingMetadata}} 
> interface:
> # the source overrides:
>  {code}
> @Override
> public boolean supportsMetadataProjection() {
>     return false;
> }
>  {code}
> # the source does not implement {{SupportsProjectionPushDown}}
> # the table has metadata columns, e.g.
> {code}
> CREATE TABLE src (
>   physical_name STRING,
>   physical_sum INT,
>   timestamp TIMESTAMP_LTZ(3) NOT NULL METADATA VIRTUAL
> )
> {code}
> # we query the table with {{SELECT * FROM src}}
> It fails with:
> {code}
> Caused by: java.lang.IllegalArgumentException: Row arity: 1, but serializer 
> arity: 2
>   at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:124)
> {code}
> The reason is that {{SupportsReadingMetadataSpec}} is created only in 
> {{PushProjectIntoTableSourceScanRule}}, but the rule is not applied when 
> conditions 1 & 2 hold.





[jira] [Closed] (FLINK-33093) SHOW FUNCTIONS throw exception with unset catalog

2023-09-19 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-33093.

Resolution: Fixed

Fixed in 05b0b61c62434c73cd819750c0d56b1070a2b0f2

> SHOW FUNCTIONS throw exception with unset catalog
> -
>
> Key: FLINK-33093
> URL: https://issues.apache.org/jira/browse/FLINK-33093
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> A test like this throws an exception. It should instead return only the 
> built-in functions:
> {code}
> @Test
> public void testUnsetCatalogWithShowFunctions() throws Exception {
> TableEnvironment tEnv = TableEnvironment.create(ENVIRONMENT_SETTINGS);
> tEnv.useCatalog(null);
> TableResult table = tEnv.executeSql("SHOW FUNCTIONS");
> final List<Row> functions = 
> CollectionUtil.iteratorToList(table.collect());
> // check it has some built-in functions
> assertThat(functions).hasSizeGreaterThan(0);
> }
> {code}





[jira] [Created] (FLINK-33093) SHOW FUNCTIONS throw exception with unset catalog

2023-09-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33093:


 Summary: SHOW FUNCTIONS throw exception with unset catalog
 Key: FLINK-33093
 URL: https://issues.apache.org/jira/browse/FLINK-33093
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.18.0
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


A test like this throws an exception. It should instead return only the 
built-in functions:
{code}
@Test
public void testUnsetCatalogWithShowFunctions() throws Exception {
TableEnvironment tEnv = TableEnvironment.create(ENVIRONMENT_SETTINGS);

tEnv.useCatalog(null);

TableResult table = tEnv.executeSql("SHOW FUNCTIONS");
final List<Row> functions = 
CollectionUtil.iteratorToList(table.collect());

// check it has some built-in functions
assertThat(functions).hasSizeGreaterThan(0);
}
{code}





[jira] [Assigned] (FLINK-33083) SupportsReadingMetadata is not applied when loading a CompiledPlan

2023-09-13 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-33083:


Assignee: Dawid Wysakowicz

> SupportsReadingMetadata is not applied when loading a CompiledPlan
> --
>
> Key: FLINK-33083
> URL: https://issues.apache.org/jira/browse/FLINK-33083
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.2, 1.17.1
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> If a few conditions are met, we cannot apply the {{SupportsReadingMetadata}} 
> interface:
> # the source overrides:
>  {code}
> @Override
> public boolean supportsMetadataProjection() {
>     return false;
> }
>  {code}
> # the source does not implement {{SupportsProjectionPushDown}}
> # the table has metadata columns, e.g.
> {code}
> CREATE TABLE src (
>   physical_name STRING,
>   physical_sum INT,
>   timestamp TIMESTAMP_LTZ(3) NOT NULL METADATA VIRTUAL
> )
> {code}
> # we query the table with {{SELECT * FROM src}}
> It fails with:
> {code}
> Caused by: java.lang.IllegalArgumentException: Row arity: 1, but serializer 
> arity: 2
>   at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:124)
> {code}
> The reason is that {{SupportsReadingMetadataSpec}} is created only in 
> {{PushProjectIntoTableSourceScanRule}}, but the rule is not applied when 
> conditions 1 & 2 hold.





[jira] [Created] (FLINK-33083) SupportsReadingMetadata is not applied when loading a CompiledPlan

2023-09-13 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-33083:


 Summary: SupportsReadingMetadata is not applied when loading a 
CompiledPlan
 Key: FLINK-33083
 URL: https://issues.apache.org/jira/browse/FLINK-33083
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.1, 1.16.2
Reporter: Dawid Wysakowicz


If a few conditions are met, we cannot apply the {{SupportsReadingMetadata}} 
interface:
# the source overrides:
 {code}
@Override
public boolean supportsMetadataProjection() {
    return false;
}
 {code}
# the source does not implement {{SupportsProjectionPushDown}}
# the table has metadata columns, e.g.
{code}
CREATE TABLE src (
  physical_name STRING,
  physical_sum INT,
  timestamp TIMESTAMP_LTZ(3) NOT NULL METADATA VIRTUAL
)
{code}
# we query the table with {{SELECT * FROM src}}

It fails with:
{code}
Caused by: java.lang.IllegalArgumentException: Row arity: 1, but serializer 
arity: 2
at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:124)
{code}

The reason is that {{SupportsReadingMetadataSpec}} is created only in 
{{PushProjectIntoTableSourceScanRule}}, but the rule is not applied when 
conditions 1 & 2 hold.
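
A minimal sketch of a source that meets conditions 1-3 (the class name is made 
up and the runtime provider is stubbed out for brevity):

{code:java}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata;
import org.apache.flink.table.types.DataType;

import java.util.Collections;
import java.util.List;
import java.util.Map;

/** Hypothetical source reproducing conditions 1 & 2 from the description. */
public class MetadataOnlySource implements ScanTableSource, SupportsReadingMetadata {

    @Override
    public boolean supportsMetadataProjection() {
        return false; // condition 1
    }

    // condition 2: SupportsProjectionPushDown is intentionally not implemented

    @Override
    public Map<String, DataType> listReadableMetadata() {
        // matches the "timestamp" metadata column of the src table above
        return Collections.singletonMap("timestamp", DataTypes.TIMESTAMP_LTZ(3).notNull());
    }

    @Override
    public void applyReadableMetadata(List<String> metadataKeys, DataType producedDataType) {
        // without the spec this is never invoked on restore, so the produced
        // row misses the metadata column and the arity check fails
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        throw new UnsupportedOperationException("stubbed for the sketch");
    }

    @Override
    public DynamicTableSource copy() {
        return new MetadataOnlySource();
    }

    @Override
    public String asSummaryString() {
        return "MetadataOnlySource (sketch)";
    }
}
{code}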





[jira] [Closed] (FLINK-32691) Make it possible to use builtin functions without catalog/db set

2023-07-28 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32691.

Resolution: Fixed

Fixed in 3f63e03e83144e9857834f8db1895637d2aa218a

> Make it possible to use builtin functions without catalog/db set
> 
>
> Key: FLINK-32691
> URL: https://issues.apache.org/jira/browse/FLINK-32691
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Relative to https://issues.apache.org/jira/browse/FLINK-32584, function 
> lookup fails without the catalog and database set.





[jira] [Updated] (FLINK-32691) Make it possible to use builtin functions without catalog/db set

2023-07-27 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32691:
-
Summary: Make it possible to use builtin functions without catalog/db set  
(was: SELECT fcn does not work with an unset catalog or database)

> Make it possible to use builtin functions without catalog/db set
> 
>
> Key: FLINK-32691
> URL: https://issues.apache.org/jira/browse/FLINK-32691
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Relative to https://issues.apache.org/jira/browse/FLINK-32584, function 
> lookup fails without the catalog and database set.





[jira] [Assigned] (FLINK-32691) Make it possible to use builtin functions without catalog/db set

2023-07-27 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32691:


Assignee: Jim Hughes

> Make it possible to use builtin functions without catalog/db set
> 
>
> Key: FLINK-32691
> URL: https://issues.apache.org/jira/browse/FLINK-32691
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Relative to https://issues.apache.org/jira/browse/FLINK-32584, function 
> lookup fails without the catalog and database set.





[jira] [Assigned] (FLINK-32682) Introduce option for choosing time function evaluation methods

2023-07-26 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32682:


Assignee: Dawid Wysakowicz

> Introduce option for choosing time function evaluation methods
> --
>
> Key: FLINK-32682
> URL: https://issues.apache.org/jira/browse/FLINK-32682
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> In [FLIP-162|https://cwiki.apache.org/confluence/x/KAxRCg], introducing an 
> option {{table.exec.time-function-evaluation}} to control the evaluation 
> method of time functions was discussed as a future plan.
> We should add this option so that time functions can be evaluated with the 
> {{query-time}} method in streaming mode.





[jira] [Updated] (FLINK-32682) Introduce option for choosing time function evaluation methods

2023-07-26 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32682:
-
Description: 
In [FLIP-162|https://cwiki.apache.org/confluence/x/KAxRCg], introducing an 
option {{table.exec.time-function-evaluation}} to control the evaluation 
method of time functions was discussed as a future plan.

We should add this option so that time functions can be evaluated with the 
{{query-time}} method in streaming mode.

  was:
In FLIP-162, introducing an option {{table.exec.time-function-evaluation}} to 
control the evaluation method of time functions was discussed as a future 
plan.

We should add this option so that time functions can be evaluated with the 
{{query-time}} method in streaming mode.


> Introduce option for choosing time function evaluation methods
> --
>
> Key: FLINK-32682
> URL: https://issues.apache.org/jira/browse/FLINK-32682
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.18.0
>
>
> In [FLIP-162|https://cwiki.apache.org/confluence/x/KAxRCg], introducing an 
> option {{table.exec.time-function-evaluation}} to control the evaluation 
> method of time functions was discussed as a future plan.
> We should add this option so that time functions can be evaluated with the 
> {{query-time}} method in streaming mode.





[jira] [Created] (FLINK-32682) Introduce option for choosing time function evaluation methods

2023-07-26 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-32682:


 Summary: Introduce option for choosing time function evaluation 
methods
 Key: FLINK-32682
 URL: https://issues.apache.org/jira/browse/FLINK-32682
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner, Table SQL / Runtime
Reporter: Dawid Wysakowicz
 Fix For: 1.18.0


In FLIP-162, introducing an option {{table.exec.time-function-evaluation}} to 
control the evaluation method of time functions was discussed as future work.

We should add this option to be able to evaluate time functions with the 
{{query-time}} method in streaming mode.
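A minimal sketch of how the proposed option could be used once it exists. The 
option key and the {{query-time}} value are taken from FLIP-162; that it would 
be set through the {{TableConfig}} like any other exec option is an assumption:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TimeFunctionEvaluationSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // hypothetical: the key and value come from FLIP-162 and may change
        tEnv.getConfig()
                .getConfiguration()
                .setString("table.exec.time-function-evaluation", "query-time");

        // with query-time evaluation, CURRENT_TIMESTAMP would be evaluated
        // once per query instead of once per record
        tEnv.executeSql("SELECT CURRENT_TIMESTAMP").print();
    }
}
{code}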



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32584) Make it possible to unset default catalog and/or database

2023-07-12 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17742448#comment-17742448
 ] 

Dawid Wysakowicz commented on FLINK-32584:
--

Hey [~jark], this is not a big change of behaviour. It only touches a few 
argument checks, see: https://github.com/apache/flink/pull/22986. Do you think 
this is fine to go ahead without the entire FLIP process?

> Make it possible to unset default catalog and/or database
> -
>
> Key: FLINK-32584
> URL: https://issues.apache.org/jira/browse/FLINK-32584
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.18.0
>
>
> In certain scenarios it might make sense to unset the default catalog and/or 
> database, for example when there is no sane default and we want the user to 
> make that decision consciously. 
> This change has a narrow scope and changes only some checks in the API 
> surface.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32584) Make it possible to unset default catalog and/or database

2023-07-12 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-32584:


 Summary: Make it possible to unset default catalog and/or database
 Key: FLINK-32584
 URL: https://issues.apache.org/jira/browse/FLINK-32584
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API
Reporter: Dawid Wysakowicz
Assignee: Dawid Wysakowicz
 Fix For: 1.18.0


In certain scenarios it might make sense to unset the default catalog and/or 
database, for example when there is no sane default and we want the user to 
make that decision consciously. 

This change has a narrow scope and changes only some checks in the API surface.
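A hypothetical sketch of what unsetting could look like in the Table API, 
assuming the relaxed argument checks accept {{null}} as "no default"; the 
actual API shape is decided in the linked PR:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UnsetDefaultCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // hypothetical: today this call rejects null arguments
        tEnv.useCatalog(null);

        // without a default, identifiers must be fully qualified
        // (my_catalog, my_db, my_table are placeholder names)
        tEnv.executeSql("SELECT * FROM my_catalog.my_db.my_table");
    }
}
{code}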



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-07-05 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740087#comment-17740087
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

I'd rather stick to what is supported today. I am not confident we will ever 
support comparing complex types.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-07-05 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17740075#comment-17740075
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

[~jackylau] Please read through: 
https://github.com/apache/flink/pull/22730#discussion_r1242187327

To be honest it's rather an issue in `ComparableTypeStrategy`.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32498) array_max return type should always be nullable

2023-07-04 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17739843#comment-17739843
 ] 

Dawid Wysakowicz edited comment on FLINK-32498 at 7/4/23 8:38 AM:
--

Fixed in f3eb364ef15f54a74776004e8c1535d7ff569080


was (Author: dawidwys):
Implemented in f3eb364ef15f54a74776004e8c1535d7ff569080

> array_max return type should always be nullable
> 
>
> Key: FLINK-32498
> URL: https://issues.apache.org/jira/browse/FLINK-32498
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32498) array_max return type should always be nullable

2023-07-04 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32498.

Resolution: Fixed

Implemented in f3eb364ef15f54a74776004e8c1535d7ff569080

> array_max return type should always be nullable
> 
>
> Key: FLINK-32498
> URL: https://issues.apache.org/jira/browse/FLINK-32498
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32259) Add ARRAY_JOIN support in SQL & Table API

2023-07-03 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32259.

Resolution: Fixed

Implemented in 962e51f771882ecc4664b8e406e733764ea8cd9f

> Add ARRAY_JOIN support in SQL & Table API
> -
>
> Key: FLINK-32259
> URL: https://issues.apache.org/jira/browse/FLINK-32259
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Concatenates the elements of the array with a delimiter and optional string 
> to replace nulls.
> Syntax:
> array_join(array, delimiter, null_replacement)
>  
> Arguments:
> array: An ARRAY to be handled.
> delimiter: A STRING used to separate the concatenated array elements.
> null_replacement: A STRING used to express a NULL value in the result.
> Returns:
> A STRING where the elements of array are separated by delimiter and null 
> elements are replaced by null_replacement. If the null_replacement parameter 
> is not provided, null elements are filtered out. If any argument is NULL, the 
> result is NULL.
> Examples:
> {code:java}
> > SELECT array_join(array('hello', 'world'), ' ');
> hello world
> > SELECT array_join(array('hello', NULL ,'world'), ' ');
> hello world
> > SELECT array_join(array('hello', NULL ,'world'), ' ', ',');
> hello , world{code}
>  
> See also:
> spark - [https://spark.apache.org/docs/latest/api/sql/index.html#array_join]
> presto - [https://prestodb.io/docs/current/functions/array.html]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738980#comment-17738980
 ] 

Dawid Wysakowicz edited comment on FLINK-32257 at 6/30/23 9:19 AM:
---

[~jackylau] I'd love to spend more time reviewing tickets, but unfortunately 
my time is very limited nowadays. I cannot make any promises, but I'll try to 
review at least one of your PRs.


was (Author: dawidwys):
I'd love to spend more time reviewing tickets, but unfortunately my time is 
very limited. I cannot make any promises, but I'll try to review at least one 
of your PRs.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32498) array_max return type should always be nullable

2023-06-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-32498:


Assignee: Jacky Lau

> array_max return type should always be nullable
> 
>
> Key: FLINK-32498
> URL: https://issues.apache.org/jira/browse/FLINK-32498
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.18.0
>Reporter: Jacky Lau
>Assignee: Jacky Lau
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738983#comment-17738983
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

Thank you for checking this PR by the way!

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738980#comment-17738980
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

I'd love to spend more time reviewing tickets, but unfortunately my time is 
very limited. I cannot make any promises, but I'll try to review at least one 
of your PRs.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32257.

Resolution: Fixed

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738974#comment-17738974
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

The thing is, I merged it again with a fix for the tests.

Now [~jackylau] pointed out an issue in the logic.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738974#comment-17738974
 ] 

Dawid Wysakowicz edited comment on FLINK-32257 at 6/30/23 9:11 AM:
---

The thing is, I merged it again with a fix for the tests like 30 minutes ago.

Now [~jackylau] pointed out an issue in the logic.


was (Author: dawidwys):
The thing is, I merged it again with a fix for the tests.

Now [~jackylau] pointed out an issue in the logic.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738974#comment-17738974
 ] 

Dawid Wysakowicz edited comment on FLINK-32257 at 6/30/23 9:12 AM:
---

[~mapohl] The thing is, I merged it again with a fix for the tests like 30 
minutes ago.

Now [~jackylau] pointed out an issue in the logic.


was (Author: dawidwys):
The thing is, I merged it again with a fix for the tests like 30 minutes ago.

Now [~jackylau] pointed out an issue in the logic.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738974#comment-17738974
 ] 

Dawid Wysakowicz edited comment on FLINK-32257 at 6/30/23 9:13 AM:
---

[~mapohl] The thing is, I merged it again with a fix for the tests like 40 
minutes ago.

Now [~jackylau] pointed out an issue in the logic.


was (Author: dawidwys):
[~mapohl] The thing is, I merged it again with a fix for the tests like 30 
minutes ago.

Now [~jackylau] pointed out an issue in the logic.

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738975#comment-17738975
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

Merged again with fixed tests in: 82776bffe7d93b5eb26c82dcce00b60155a82450

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-30 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17738963#comment-17738963
 ] 

Dawid Wysakowicz commented on FLINK-32257:
--

You're right [~jackylau]

Do you mind creating a bug ticket?

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-32257.

Resolution: Fixed

Implemented in 318d7e4daa8c791521ed2c256285c42a35b4eba6

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]
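For completeness, a minimal Table API sketch of the same call. The table and 
column names are placeholders, and the function is invoked by name to avoid 
assuming a dedicated {{arrayMax()}} expression shorthand:

{code:java}
import static org.apache.flink.table.api.Expressions.$;
import static org.apache.flink.table.api.Expressions.call;

import org.apache.flink.table.api.Table;

// assuming tEnv is a TableEnvironment with a table "t" holding an ARRAY column "arr"
Table result = tEnv.from("t").select(call("ARRAY_MAX", $("arr")));
result.execute().print();
{code}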



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32256) Add ARRAY_MIN support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32256:
-
Issue Type: New Feature  (was: Bug)

> Add ARRAY_MIN support in SQL & Table API
> 
>
> Key: FLINK-32256
> URL: https://issues.apache.org/jira/browse/FLINK-32256
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
> Fix For: 1.18.0
>
>
> Find the minimum among all elements in the array for which ordering is 
> supported.
> Syntax:
> array_min(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
> Examples:
> {code:sql}
> SELECT array_min(array(1, 20, NULL, 3));
> -- 1
> {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32257:
-
Summary: Add ARRAY_MAX support in SQL & Table API  (was: Add ARRAY_MIN 
support in SQL & Table API)

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32256) Add ARRAY_MIN support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32256:
-
Description: 
Find the minimum among all elements in the array for which ordering is 
supported.

Syntax:

array_min(array)

Arguments:
array: An ARRAY to be handled.

Returns:

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

Examples:
{code:sql}
SELECT array_min(array(1, 20, NULL, 3));
-- 1
{code}
See also
spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_min]

presto [https://prestodb.io/docs/current/functions/array.html]

  was:
This is an implementation of ARRAY_MAX

The array_max() function returns the maximum element from the input array.

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

 

Syntax

array_max(array)

Arguments

array: Any ARRAY with elements for which order is supported.

 

Returns

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

 

Examples

SQL

 

> SELECT array_max(array(1, 20, NULL, 3)); 20

 
{code:java}
// Flink SQL> select array_max(array[1, 20, null, 3])
20{code}
 

See also
spark 
[https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]

presto [https://prestodb.io/docs/current/functions/array.html]


> Add ARRAY_MIN support in SQL & Table API
> 
>
> Key: FLINK-32256
> URL: https://issues.apache.org/jira/browse/FLINK-32256
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
> Fix For: 1.18.0
>
>
> Find the minimum among all elements in the array for which ordering is 
> supported.
> Syntax:
> array_min(array)
> Arguments:
> array: An ARRAY to be handled.
> Returns:
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
> Examples:
> {code:sql}
> SELECT array_min(array(1, 20, NULL, 3));
> -- 1
> {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32257) Add ARRAY_MIN support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32257:
-
Description: 
This is an implementation of ARRAY_MAX

The array_max() function returns the maximum element from the input array.

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

 

Syntax

array_max(array)

Arguments

array: Any ARRAY with elements for which order is supported.

 

Returns

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

 

Examples

SQL

 

> SELECT array_max(array(1, 20, NULL, 3)); 20

 
{code:java}
// Flink SQL> select array_max(array[1, 20, null, 3])
20{code}
 

See also
spark 
[https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]

presto [https://prestodb.io/docs/current/functions/array.html]


  was:
Find the minimum among all elements in the array for which ordering is 
supported.

Syntax:

array_min(array)

Arguments:
array: An ARRAY to be handled.

Returns:

The result matches the type of the elements. NULL elements are skipped. If 
array is empty, or contains only NULL elements, NULL is returned.

Examples:
{code:sql}
SELECT array_min(array(1, 20, NULL, 3)); 
-- 1
{code}
See also
spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_min]

presto [https://prestodb.io/docs/current/functions/array.html]


> Add ARRAY_MIN support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32256) Add ARRAY_MIN support in SQL & Table API

2023-06-29 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-32256:
-
Summary: Add ARRAY_MIN support in SQL & Table API  (was: Add ARRAY_MAX 
support in SQL & Table API)

> Add ARRAY_MIN support in SQL & Table API
> 
>
> Key: FLINK-32256
> URL: https://issues.apache.org/jira/browse/FLINK-32256
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element from the input array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31665) Add ARRAY_CONCAT supported in SQL & Table API

2023-06-05 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31665:


Assignee: Hanyu Zheng

> Add ARRAY_CONCAT supported in SQL & Table API
> -
>
> Key: FLINK-31665
> URL: https://issues.apache.org/jira/browse/FLINK-31665
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: Hanyu Zheng
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31663) Add ARRAY_EXCEPT supported in SQL & Table API

2023-05-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31663:


Assignee: Bonnie Varghese

> Add ARRAY_EXCEPT supported in SQL & Table API
> -
>
> Key: FLINK-31663
> URL: https://issues.apache.org/jira/browse/FLINK-31663
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: luoyuxia
>Assignee: Bonnie Varghese
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-31118) Add ARRAY_UNION supported in SQL & Table API

2023-05-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz resolved FLINK-31118.
--
Resolution: Implemented

Implemented in 1957ef5fabed0622b1d3e4b542f1df0ee070fc33

> Add ARRAY_UNION supported in SQL & Table API
> 
>
> Key: FLINK-31118
> URL: https://issues.apache.org/jira/browse/FLINK-31118
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Returns an array of the elements in the union of the two input arrays, 
> without duplicates.
> Syntax:
> array_union(array1, array2)
> Arguments:
> array1, array2: the ARRAYs to be united.
> Returns:
> An ARRAY. If either input array is NULL, the result is NULL. 
> Examples:
> {code:sql}
> > SELECT array_union(array(1, 2, 3), array(1, 3, 5));
>  [1,2,3,5] {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_union]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31118) Add ARRAY_UNION supported in SQL & Table API

2023-05-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-31118:


Assignee: jackylau

> Add ARRAY_UNION supported in SQL & Table API
> 
>
> Key: FLINK-31118
> URL: https://issues.apache.org/jira/browse/FLINK-31118
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Returns an array of the elements in the union of the two input arrays, 
> without duplicates.
> Syntax:
> array_union(array1, array2)
> Arguments:
> array1, array2: the ARRAYs to be united.
> Returns:
> An ARRAY. If either input array is NULL, the result is NULL. 
> Examples:
> {code:sql}
> > SELECT array_union(array(1, 2, 3), array(1, 3, 5));
>  [1,2,3,5] {code}
> See also
> spark [https://spark.apache.org/docs/latest/api/sql/index.html#array_union]
> presto [https://prestodb.io/docs/current/functions/array.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31660) flink-connectors-kafka ITCases are not runnable in the IDE

2023-03-31 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17707207#comment-17707207
 ] 

Dawid Wysakowicz commented on FLINK-31660:
--

I think the way I did it is that I added one of the table modules (either the 
planner or the uber jar) to the classpath of the test run configuration in 
IntelliJ ("Modify options" > "Modify classpath"). That way I didn't need to 
change a valid pom configuration, which is not handled 100% correctly by 
IntelliJ.

> flink-connectors-kafka ITCases are not runnable in the IDE
> --
>
> Key: FLINK-31660
> URL: https://issues.apache.org/jira/browse/FLINK-31660
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.18.0
>Reporter: Natea Eshetu Beshada
>Assignee: Natea Eshetu Beshada
>Priority: Major
>  Labels: pull-request-available
>
> The following exception is thrown when trying to run 
> {{KafkaChangelogTableITCase}} or {{KafkaTableITCase}}
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/flink/table/shaded/com/jayway/jsonpath/spi/json/JsonProvider    at 
> java.base/java.lang.Class.getDeclaredMethods0(Native Method)
>     at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3166)
>     at java.base/java.lang.Class.getMethodsRecursive(Class.java:3307)
>     at java.base/java.lang.Class.getMethod0(Class.java:3293)
>     at java.base/java.lang.Class.getMethod(Class.java:2106)
>     at org.apache.calcite.linq4j.tree.Types.lookupMethod(Types.java:309)
>     at org.apache.calcite.util.BuiltInMethod.<init>(BuiltInMethod.java:670)
>     at org.apache.calcite.util.BuiltInMethod.<clinit>(BuiltInMethod.java:357)
>     at 
> org.apache.calcite.rel.metadata.BuiltInMetadata$PercentageOriginalRows.<clinit>(BuiltInMetadata.java:344)
>     at 
> org.apache.calcite.rel.metadata.RelMdPercentageOriginalRows$RelMdPercentageOriginalRowsHandler.getDef(RelMdPercentageOriginalRows.java:231)
>     at 
> org.apache.calcite.rel.metadata.ReflectiveRelMetadataProvider.reflectiveSource(ReflectiveRelMetadataProvider.java:134)
>     at 
> org.apache.calcite.rel.metadata.RelMdPercentageOriginalRows.<clinit>(RelMdPercentageOriginalRows.java:42)
>     at 
> org.apache.calcite.rel.metadata.DefaultRelMetadataProvider.<init>(DefaultRelMetadataProvider.java:42)
>     at 
> org.apache.calcite.rel.metadata.DefaultRelMetadataProvider.<clinit>(DefaultRelMetadataProvider.java:28)
>     at org.apache.calcite.plan.RelOptCluster.<init>(RelOptCluster.java:97)
>     at org.apache.calcite.plan.RelOptCluster.create(RelOptCluster.java:106)
>     at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$.create(FlinkRelOptClusterFactory.scala:36)
>     at 
> org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory.create(FlinkRelOptClusterFactory.scala)
>     at 
> org.apache.flink.table.planner.delegation.PlannerContext.<init>(PlannerContext.java:132)
>     at 
> org.apache.flink.table.planner.delegation.PlannerBase.<init>(PlannerBase.scala:121)
>     at 
> org.apache.flink.table.planner.delegation.StreamPlanner.<init>(StreamPlanner.scala:65)
>     at 
> org.apache.flink.table.planner.delegation.DefaultPlannerFactory.create(DefaultPlannerFactory.java:65)
>     at 
> org.apache.flink.table.planner.loader.DelegatePlannerFactory.create(DelegatePlannerFactory.java:36)
>     at 
> org.apache.flink.table.factories.PlannerFactoryUtil.createPlanner(PlannerFactoryUtil.java:58)
>     at 
> org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.create(StreamTableEnvironmentImpl.java:127)
>     at 
> org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamTableEnvironment.java:122)
>     at 
> org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamTableEnvironment.java:94)
>     at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.setup(KafkaTableTestBase.java:93)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>     at 
> 

[jira] [Commented] (FLINK-28758) Failed to stop with savepoint

2023-03-14 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700124#comment-17700124
 ] 

Dawid Wysakowicz commented on FLINK-28758:
--

I think it would be nice to actually fix it. As long as we have the consumer 
in the code base we should ensure its stability does not decline. 
Stop-with-savepoint for this consumer was broken by the rework of the 
stop-with-savepoint mechanism, so I think it makes sense to fix it, especially 
as I believe it's not too hard to do so.

As far as I can tell, the problem is that {{FlinkKafkaConsumer}} does not fully 
respect the contract of {{SourceFunction#cancel}} 
(https://issues.apache.org/jira/browse/FLINK-23527), because it does not finish 
gracefully, but always throws {{ClosedException}}.

I believe we could fix it by adjusting {{KafkaFetcher#runFetchLoop}}:
{code}
@Override
public void runFetchLoop() throws Exception {
    try {
        // kick off the actual Kafka consumer
        consumerThread.start();

        while (running) {
            // this blocks until we get the next records
            // it automatically re-throws exceptions encountered in the consumer thread
            final ConsumerRecords<byte[], byte[]> records = handover.pollNext();
            // ...
        }
    } catch (Handover.ClosedException ex) {

        // WE SHOULD ADD THIS CODE

        if (running) {
            // rethrow only if we are running; if the fetcher is not running,
            // we should not throw the ClosedException, as we are stopping gracefully
            ExceptionUtils.rethrowException(ex);
        }
    } finally {
        // this signals the consumer thread that no more work is to be done
        consumerThread.shutdown();
    }
}
{code}

> Failed to stop with savepoint 
> --
>
> Key: FLINK-28758
> URL: https://issues.apache.org/jira/browse/FLINK-28758
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Runtime / Checkpointing, Runtime / 
> Task
>Affects Versions: 1.15.0
> Environment: Flink version:1.15.0
> Deploy mode: K8s application mode. A local mini cluster also has this 
> problem.
> Kafka connector: uses the Kafka SourceFunction, not the new API.
>Reporter: hjw
>Priority: Major
> Attachments: image-2022-10-13-19-47-56-635.png
>
>
> I posted a stop-with-savepoint request to the Flink job through the REST API.
> An error happened while closing the Kafka connector.
> The job then enters restarting.
> Using the savepoint command alone is successful.
> {code:java}
> 13:33:42.857 [Kafka Fetcher for Source: nlp-kafka-source -> nlp-clean 
> (1/1)#0] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer 
> clientId=consumer-hjw-3, groupId=hjw] Kafka consumer has been closed
> 13:33:42.857 [Kafka Fetcher for Source: cpp-kafka-source -> cpp-clean 
> (1/1)#0] INFO org.apache.kafka.common.utils.AppInfoParser - App info 
> kafka.consumer for consumer-hjw-4 unregistered
> 13:33:42.857 [Kafka Fetcher for Source: cpp-kafka-source -> cpp-clean 
> (1/1)#0] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer 
> clientId=consumer-hjw-4, groupId=hjw] Kafka consumer has been closed
> 13:33:42.860 [Source: nlp-kafka-source -> nlp-clean (1/1)#0] DEBUG 
> org.apache.flink.streaming.runtime.tasks.StreamTask - Cleanup StreamTask 
> (operators closed: false, cancelled: false)
> 13:33:42.860 [jobmanager-io-thread-4] INFO 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Decline 
> checkpoint 5 by task eeefcb27475446241861ad8db3f33144 of job 
> d6fed247feab1c0bcc1b0dcc2cfb4736 at 79edfa88-ccc3-4140-b0e1-7ce8a7f8669f @ 
> 127.0.0.1 (dataPort=-1).
> org.apache.flink.util.SerializedThrowable: Task name with subtask : Source: 
> nlp-kafka-source -> nlp-clean (1/1)#0 Failure reason: Task has failed.
>  at 
> org.apache.flink.runtime.taskmanager.Task.declineCheckpoint(Task.java:1388)
>  at 
> org.apache.flink.runtime.taskmanager.Task.lambda$triggerCheckpointBarrier$3(Task.java:1331)
>  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:343)
> Caused by: org.apache.flink.util.SerializedThrowable: 
> org.apache.flink.streaming.connectors.kafka.internals.Handover$ClosedException
>  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>  at 
> 

[jira] [Updated] (FLINK-30113) Compression for operator state

2023-03-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-30113:
-
Fix Version/s: 1.18.0
   (was: 1.17.0)

> Compression for operator state
> --
>
> Key: FLINK-30113
> URL: https://issues.apache.org/jira/browse/FLINK-30113
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Etienne Chauchot
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> It has been requested in the ML to be able to enable compression for 
> broadcast state
> https://lists.apache.org/thread/2kylgj0fdmn21jk7x63696mgdvd1csxo



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-30113) Compression for operator state

2023-03-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-30113.

Resolution: Implemented

Implemented in d04d727958b641c041d593618de6a40fd62d5338

> Compression for operator state
> --
>
> Key: FLINK-30113
> URL: https://issues.apache.org/jira/browse/FLINK-30113
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Etienne Chauchot
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>
> It has been requested in the ML to be able to enable compression for 
> broadcast state
> https://lists.apache.org/thread/2kylgj0fdmn21jk7x63696mgdvd1csxo
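For reference, snapshot compression is toggled through the existing 
{{ExecutionConfig}} flag; per this ticket the same flag is meant to cover 
(broadcast) operator state as well. A minimal sketch:

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// existing flag used for keyed-state snapshot compression; after this change
// it should also apply to operator/broadcast state
env.getConfig().setUseSnapshotCompression(true);
{code}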



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30862) Config doc generation should keep @deprecated ConfigOption

2023-02-01 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683005#comment-17683005
 ] 

Dawid Wysakowicz commented on FLINK-30862:
--

An alternative that has just popped into my mind is to have a separate 
section/page with all deprecated options.

> Config doc generation should keep @deprecated ConfigOption
> --
>
> Key: FLINK-30862
> URL: https://issues.apache.org/jira/browse/FLINK-30862
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> Currently the content is removed once the ConfigOption is marked as 
> @deprecated. The content should be kept and marked with a deprecated flag, 
> since the ConfigOption is only deprecated and still in use. The content 
> should only be removed when the ConfigOption itself has been removed.
>  
> If we just remove a freshly deprecated option from the documentation, users 
> will be confused and think the option is gone and no longer works, which 
> would mean backward compatibility is broken.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-30862) Config doc generation should keep @deprecated ConfigOption

2023-02-01 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683001#comment-17683001
 ] 

Dawid Wysakowicz edited comment on FLINK-30862 at 2/1/23 12:36 PM:
---

I second [~chesnay]. I don't think we should document deprecated options, as 
they should not be used.

Listing deprecated options gives the impression they can still be used.


was (Author: dawidwys):
I second [~chesnay]. I don't think we should document deprecated options, as 
they should not be used. If an option is subsumed by a newer one, it should be 
listed in the description of the newer option and thus be searchable.

Listing deprecated options gives the impression they can still be used.

> Config doc generation should keep @deprecated ConfigOption
> --
>
> Key: FLINK-30862
> URL: https://issues.apache.org/jira/browse/FLINK-30862
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> Currently the content will be removed once the ConfigOption is marked as 
> @deprecated. The content should be kept and marked with a deprecated flag, 
> since the ConfigOption is only deprecated and can still be used. The content 
> should only be removed when the ConfigOption has been removed.
>  
> If we just remove the freshly deprecated option from the documentation, users will 
> be confused and think the option is gone and no longer works, which would mean 
> the backward compatibility is broken.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30862) Config doc generation should keep @deprecated ConfigOption

2023-02-01 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683001#comment-17683001
 ] 

Dawid Wysakowicz commented on FLINK-30862:
--

I second [~chesnay]. I don't think we should document deprecated options, as 
they should not be used. If an option is subsumed by a newer one, it should be 
listed in the description of the newer option and thus be searchable.

Listing deprecated options gives the impression they can still be used.

> Config doc generation should keep @deprecated ConfigOption
> --
>
> Key: FLINK-30862
> URL: https://issues.apache.org/jira/browse/FLINK-30862
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>
> Currently the content will be removed once the ConfigOption is marked as 
> @deprecated. The content should be kept and marked with a deprecated flag, 
> since the ConfigOption is only deprecated and can still be used. The content 
> should only be removed when the ConfigOption has been removed.
>  
> If we just remove the freshly deprecated option from the documentation, users will 
> be confused and think the option is gone and no longer works, which would mean 
> the backward compatibility is broken.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30483) Make Avro format support for TIMESTAMP_LTZ

2023-02-01 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17682980#comment-17682980
 ] 

Dawid Wysakowicz commented on FLINK-30483:
--

[~jadireddi] We did a review of the PR on GitHub. Sorry it took so long. Do you 
mind telling us whether you're still interested in finalising the PR? Thanks!

> Make Avro format support for TIMESTAMP_LTZ
> --
>
> Key: FLINK-30483
> URL: https://issues.apache.org/jira/browse/FLINK-30483
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: Mingliang Liu
>Assignee: Jagadesh Adireddi
>Priority: Major
>  Labels: pull-request-available
>
> Currently Avro format does not support TIMESTAMP_LTZ (short for 
> TIMESTAMP_WITH_LOCAL_TIME_ZONE) type. Avro 1.10+ introduces local timestamp 
> logical types (both milliseconds and microseconds), see spec [1]. As TIMESTAMP 
> currently only supports milliseconds, we can make TIMESTAMP_LTZ support 
> milliseconds first.
> A related work is to support microseconds, and there is already 
> work-in-progress Jira FLINK-23589 for TIMESTAMP type. We can consolidate the 
> effort or track that separately for TIMESTAMP_LTZ.
> [1] 
> https://avro.apache.org/docs/1.10.2/spec.html#Local+timestamp+%28millisecond+precision%29
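
A minimal sketch of the kind of DDL this ticket aims to support, assuming a Kafka 
table with the 'avro' format (table name and connector options are placeholders); 
the TIMESTAMP_LTZ(3) column is the part that is not accepted yet:
{code:sql}
CREATE TABLE events (
  id BIGINT,
  ts TIMESTAMP_LTZ(3)  -- would map to Avro's local-timestamp-millis logical type
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'avro'
);
{code}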



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-24456) Support bounded offset in the Kafka table connector

2023-01-31 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-24456.

Fix Version/s: 1.17.0
   Resolution: Fixed

Implemented in 9e8a99c12ce939418086fa9555c1c4f74bcf6b59

> Support bounded offset in the Kafka table connector
> ---
>
> Key: FLINK-24456
> URL: https://issues.apache.org/jira/browse/FLINK-24456
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Haohui Mai
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.17.0
>
>
> The {{setBounded}} API in the DataStream connector of Kafka is particularly 
> useful when writing tests. Unfortunately the table connector of Kafka lacks 
> the same API.
> It would be good to have this API added.
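
A minimal sketch of what the table counterpart could look like, assuming 
bounded-ness is exposed via connector options that mirror the existing 
'scan.startup.*' options (the 'scan.bounded.*' names below are the ones the 
eventual implementation settled on):
{code:sql}
CREATE TABLE orders (
  order_id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset',
  -- read up to the latest offsets observed at job start, then finish:
  'scan.bounded.mode' = 'latest-offset'
);
{code}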



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30427) Pulsar SQL connector lists non-bundled dependencies

2022-12-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-30427:


 Summary: Pulsar SQL connector lists non-bundled dependencies
 Key: FLINK-30427
 URL: https://issues.apache.org/jira/browse/FLINK-30427
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: pulsar-3.0.0
Reporter: Dawid Wysakowicz


flink-connector-pulsar lists:
{code}
- org.bouncycastle:bcpkix-jdk15on:1.69
- org.bouncycastle:bcprov-ext-jdk15on:1.69
{code}

but does not bundle them. (It uses them in test scope)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30386) Column constraint lacks primary key not enforced check

2022-12-13 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17646491#comment-17646491
 ] 

Dawid Wysakowicz commented on FLINK-30386:
--

Thanks for the explanation [~qingyue], I think that's a fair assessment. 

> Column constraint lacks primary key not enforced check
> --
>
> Key: FLINK-30386
> URL: https://issues.apache.org/jira/browse/FLINK-30386
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.16.0, 1.15.2, 1.15.3
>Reporter: Jane Chan
>Priority: Major
>
> Currently, only the table-level constraint performs the ENFORCED check. Not sure 
> if it is by design or a bug.
> The following case can be reproduced on Flink 1.16.0, 1.15.3, and 1.15.2. I 
> think earlier versions might also exhibit it.
> {code:sql}
> Flink SQL> create table T (f0 int not null primary key, f1 string) with 
> ('connector' = 'datagen');
> [INFO] Execute statement succeed.
> Flink SQL> explain select * from T;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, T]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, T]], fields=[f0, 
> f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, T]], fields=[f0, 
> f1])
> Flink SQL> create table S (f0 int not null, f1 string, primary key(f0)) with 
> ('connector' = 'datagen');
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Flink doesn't support 
> ENFORCED mode for PRIMARY KEY constraint. ENFORCED/NOT ENFORCED  controls if 
> the constraint checks are performed on the incoming/outgoing data. Flink does 
> not own the data therefore the only supported mode is the NOT ENFORCED mode
> {code}
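
For reference, a minimal sketch of the accepted form: declaring the constraint NOT 
ENFORCED passes validation (datagen is just a placeholder connector):
{code:sql}
create table S (f0 int not null, f1 string, primary key (f0) not enforced)
with ('connector' = 'datagen');
{code}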



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30386) Column constraint lacks primary key not enforced check

2022-12-12 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17646172#comment-17646172
 ] 

Dawid Wysakowicz commented on FLINK-30386:
--

Do you mean that we should support the {{ENFORCED}}  mode? It was a conscious 
decision to support only {{NOT ENFORCED}} as the data is stored outside of 
Flink and thus there is no way to ensure the uniqueness of the key.

> Column constraint lacks primary key not enforced check
> --
>
> Key: FLINK-30386
> URL: https://issues.apache.org/jira/browse/FLINK-30386
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.16.0, 1.15.2, 1.15.3
>Reporter: Jane Chan
>Priority: Major
>
> Currently, only the table-level constraint performs the ENFORCED check. Not sure 
> if it is by design or a bug.
> The following case can be reproduced on Flink 1.16.0, 1.15.3, and 1.15.2. I 
> think earlier versions might also exhibit it.
> {code:sql}
> Flink SQL> create table T (f0 int not null primary key, f1 string) with 
> ('connector' = 'datagen');
> [INFO] Execute statement succeed.
> Flink SQL> explain select * from T;
> == Abstract Syntax Tree ==
> LogicalProject(f0=[$0], f1=[$1])
> +- LogicalTableScan(table=[[default_catalog, default_database, T]])
> == Optimized Physical Plan ==
> TableSourceScan(table=[[default_catalog, default_database, T]], fields=[f0, 
> f1])
> == Optimized Execution Plan ==
> TableSourceScan(table=[[default_catalog, default_database, T]], fields=[f0, 
> f1])
> Flink SQL> create table S (f0 int not null, f1 string, primary key(f0)) with 
> ('connector' = 'datagen');
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Flink doesn't support 
> ENFORCED mode for PRIMARY KEY constraint. ENFORCED/NOT ENFORCED  controls if 
> the constraint checks are performed on the incoming/outgoing data. Flink does 
> not own the data therefore the only supported mode is the NOT ENFORCED mode
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23408) Wait for a checkpoint completed after finishing a task

2022-12-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17645192#comment-17645192
 ] 

Dawid Wysakowicz commented on FLINK-23408:
--

I don't know where this code is coming from. As I said, in master both drain 
and w/o drain stop the source first: 
https://github.com/apache/flink/blob/d86ae5d642fa578fb85118e81dd4140d504f818a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java#L260
 or 
https://github.com/apache/flink/blob/d86ae5d642fa578fb85118e81dd4140d504f818a/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceOperatorStreamTask.java#L179

> Wait for a checkpoint completed after finishing a task
> --
>
> Key: FLINK-23408
> URL: https://issues.apache.org/jira/browse/FLINK-23408
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / Task
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: image-2022-12-09-16-53-08-286.png, 
> image-2022-12-09-16-54-05-453.png
>
>
> Before finishing a task we should wait for a checkpoint issued after 
> {{finish()}} to commit all pending transactions created from the {{finish()}} 
> method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23408) Wait for a checkpoint completed after finishing a task

2022-12-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17645177#comment-17645177
 ] 

Dawid Wysakowicz commented on FLINK-23408:
--

In current master, both with drain and w/o drain, we stop the source 
operator first. In 1.14 we did not make it in time to change the behaviour for 
w/o drain as well, so w/o drain in 1.14 works as before.
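
For context, the drain / no-drain variants correspond to the CLI's stop-with-savepoint 
command; a sketch with a placeholder job id (a default savepoint directory is assumed 
to be configured):
{code}
./bin/flink stop --drain <jobId>   # emit MAX_WATERMARK and fire event-time timers before stopping
./bin/flink stop <jobId>           # stop without draining
{code}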

> Wait for a checkpoint completed after finishing a task
> --
>
> Key: FLINK-23408
> URL: https://issues.apache.org/jira/browse/FLINK-23408
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / Task
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Before finishing a task we should wait for a checkpoint issued after 
> {{finish()}} to commit all pending transactions created from the {{finish()}} 
> method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-23408) Wait for a checkpoint completed after finishing a task

2022-12-09 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17645165#comment-17645165
 ] 

Dawid Wysakowicz commented on FLINK-23408:
--

[~qiunan] Can you elaborate on what you mean?

> Wait for a checkpoint completed after finishing a task
> --
>
> Key: FLINK-23408
> URL: https://issues.apache.org/jira/browse/FLINK-23408
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / Task
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> Before finishing a task we should wait for a checkpoint issued after 
> {{finish()}} to commit all pending transactions created from the {{finish()}} 
> method.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30113) Compression for operator state

2022-11-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-30113:


 Summary: Compression for operator state
 Key: FLINK-30113
 URL: https://issues.apache.org/jira/browse/FLINK-30113
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Reporter: Dawid Wysakowicz
 Fix For: 1.17.0


It has been requested in the ML to be able to enable compression for broadcast 
state

https://lists.apache.org/thread/2kylgj0fdmn21jk7x63696mgdvd1csxo
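
For reference, a minimal sketch of the existing switch, which today only affects 
keyed state; extending its effect to operator/broadcast state is what this ticket 
asks for:
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SnapshotCompressionExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Enables compression of state snapshots (currently keyed state only).
        env.getConfig().setUseSnapshotCompression(true);
    }
}
{code}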



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-30112) Improve docs re state compression

2022-11-21 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-30112:
-
Fix Version/s: 1.16.1
   1.15.4

> Improve docs re state compression
> -
>
> Key: FLINK-30112
> URL: https://issues.apache.org/jira/browse/FLINK-30112
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.17.0, 1.16.1, 1.15.4
>
>
> Documentation should state explicitly that state compression is supported only 
> for KeyedState as of now.
> https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/state/large_state_tuning/#compression



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30112) Improve docs re state compression

2022-11-21 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-30112:


 Summary: Improve docs re state compression
 Key: FLINK-30112
 URL: https://issues.apache.org/jira/browse/FLINK-30112
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Dawid Wysakowicz
 Fix For: 1.17.0


Documentation should state explicitly that state compression is supported only 
for KeyedState as of now.

https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/state/large_state_tuning/#compression



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-18647) How to handle processing time timers with bounded input

2022-11-10 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-18647:
-
Description: 
(most of this description comes from an offline discussion between me, 
[~AHeise], [~roman_khachatryan], [~aljoscha] and [~sunhaibotb])

In case of end of input (for example for bounded sources), all pending 
(untriggered) processing time timers are ignored/dropped. In some cases this is 
desirable, but for example for {{WindowOperator}} it means that the last trailing 
window will not be triggered, causing apparent data loss.

There are a couple of ideas that should be considered.

1. Provide a way for users to decide what to do with such timers: cancel, wait, 
trigger immediately. For example by overloading the existing methods: 
{{ProcessingTimeService#registerTimer}} and 
{{ProcessingTimeService#scheduleAtFixedRate}} in the following way:

{code:java}
ScheduledFuture<?> registerTimer(
    long timestamp, ProcessingTimeCallback target, TimerAction timerAction);

enum TimerAction {
    CANCEL_ON_END_OF_INPUT,
    TRIGGER_ON_END_OF_INPUT,
    WAIT_ON_END_OF_INPUT
}
{code}
or maybe:
{code}
public interface TimerAction {
    void onEndOfInput(ScheduledFuture<?> timer);
}
{code}

But this would also mean we store additional state with each timer, we would need 
to modify the serialisation format (providing some kind of state migration 
path), and we would potentially increase the size footprint of the timers.

Extra overhead could have been avoided via some kind of {{Map}}, with a missing 
entry meaning some default value.

2. 

Another way to solve this problem might be to let the operator code decide 
what to do with a given timer.

 a. Either ask an operator what should happen with a given timer, 
 b. or let the operator iterate and cancel the timers on endOfInput(), 
 c. or just fire the timer with some endOfInput flag.

I think none of (a), (b), and (c) would require breaking API changes, no 
state changes and no additional overheads. Just the logic for what to do with the 
timer would have to be “hardcoded” in the operator’s code (which btw might 
even have the additional benefit of being easier to change in case of some bugs, 
like a timer registered with a wrong/incorrect {{TimerAction}}).

This is complicated a bit by the question of how (if at all?) options a), b) or c) 
should be exposed to UDFs. 

3. 

Maybe we need a combination of both? Pre-existing operators could implement 
some custom handling of this issue (via 2a, 2b or 2c), while UDFs could be 
handled by 1.? 

  was:
(most of this description comes from an offline discussion between me, 
[~AHeise], [~roman_khachatryan], [~aljoscha] and [~sunhaibotb])

In case of end of input (for example for bounded sources), all pending 
(untriggered) processing time timers are ignored/dropped. In some cases this is 
desirable, but for example for {{WindowOperator}} it means that the last trailing 
window will not be triggered, causing apparent data loss.

There are a couple of ideas that should be considered.

1. Provide a way for users to decide what to do with such timers: cancel, wait, 
trigger immediately. For example by overloading the existing methods: 
{{ProcessingTimeService#registerTimer}} and 
{{ProcessingTimeService#scheduleAtFixedRate}} in the following way:

{code:java}
ScheduledFuture<?> registerTimer(
    long timestamp, ProcessingTimeCallback target, TimerAction timerAction);

enum TimerAction {
    CANCEL_ON_END_OF_INPUT,
    TRIGGER_ON_END_OF_INPUT,
    WAIT_ON_END_OF_INPUT
}
{code}
or maybe:
{code}
public interface TimerAction {
    void onEndOfInput(ScheduledFuture<?> timer);
}
{code}

But this would also mean we store additional state with each timer, we would need 
to modify the serialisation format (providing some kind of state migration 
path), and we would potentially increase the size footprint of the timers.

Extra overhead could have been avoided via some kind of {{Map}}, with a missing 
entry meaning some default value.

2. 

Another way to solve this problem might be to let the operator code decide 
what to do with a given timer: either ask an operator what should happen with a 
given timer (a), or let the operator iterate over and cancel the timers on 
endOfInput() (b), or just fire the timer with some endOfInput flag (c).

I think none of (a), (b), and (c) would require breaking API changes, no 
state changes and no additional overheads. Just the logic for what to do with the 
timer would have to be “hardcoded” in the operator’s code (which btw might 
even have the additional benefit of being easier to change in case of some bugs, 
like a timer registered with a wrong/incorrect {{TimerAction}}).

This is complicated a bit by the question of how (if at all?) options a), b) or c) 
should be exposed to UDFs? 

3. 

Maybe we need a combination of both? Pre-existing operators could implement 
some custom handling of this issue (via 2a, 2b or 2c), while UDFs could be 
handled by 1.? 
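
For illustration, a self-contained sketch of proposal 1; none of these types exist 
in Flink today, they only mirror the signatures proposed above:
{code:java}
import java.util.concurrent.ScheduledFuture;

public class EndOfInputTimerSketch {

    enum TimerAction {
        CANCEL_ON_END_OF_INPUT,
        TRIGGER_ON_END_OF_INPUT,
        WAIT_ON_END_OF_INPUT
    }

    interface ProcessingTimeCallback {
        void onProcessingTime(long timestamp) throws Exception;
    }

    interface ProcessingTimeService {
        ScheduledFuture<?> registerTimer(
                long timestamp, ProcessingTimeCallback target, TimerAction timerAction);
    }

    // A WindowOperator-like caller would ask for its trailing window to fire
    // even if the (bounded) input ends before the wall clock reaches windowEnd.
    static void example(ProcessingTimeService service, long windowEnd) {
        service.registerTimer(
                windowEnd,
                ts -> System.out.println("window fired at " + ts),
                TimerAction.TRIGGER_ON_END_OF_INPUT);
    }
}
{code}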

[jira] [Updated] (FLINK-29852) The operator is repeatedly displayed on the Flink Web UI

2022-11-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-29852:
-
Priority: Critical  (was: Major)

> The operator is repeatedly displayed on the Flink Web UI
> 
>
> Key: FLINK-29852
> URL: https://issues.apache.org/jira/browse/FLINK-29852
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.16.0
> Environment: Flink 1.16.0
>Reporter: JasonLee
>Priority: Critical
> Attachments: image-2022-11-02-23-57-39-387.png
>
>
> All the operators in the DAG are shown repeatedly
> !image-2022-11-02-23-57-39-387.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29686) Flink-SQL hits a bug when using the Hive dialect

2022-10-19 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-29686.

Resolution: Invalid

Please use English in the open-source JIRA. Feel free to translate the ticket 
to English and reopen it.

> Flink-SQL hits a bug when using the Hive dialect
> --
>
> Key: FLINK-29686
> URL: https://issues.apache.org/jira/browse/FLINK-29686
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Client
>Affects Versions: 1.14.4
> Environment: Flink-ver : 1.14.4-on-cdh6.3.2
> Flink-sql-cli : 1.14.4
>Reporter: Vincent Long
>Priority: Blocker
>
> While using sql-cli to submit a job to a session cluster, I hit the following 
> error when executing SQL code in the Hive dialect via Flink-sql-connectors-hive: 
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> {color:#FF}Unexpected exception. This is a bug. Please consider filing an 
> issue.{color}
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> {color:#FF}Caused by: java.lang.ExceptionInInitializerError{color}
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:83)
>     at org.apache.hadoop.hive.ql.exec.Registry.registerUDAF(Registry.java:238)
>     at org.apache.hadoop.hive.ql.exec.Registry.registerUDAF(Registry.java:231)
>     at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.(FunctionRegistry.java:430)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.apache.flink.table.catalog.hive.client.HiveShimV120.registerTemporaryFunction(HiveShimV120.java:262)
>     at 
> org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:207)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:172)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:88)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:172)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:396)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:324)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>     ... 1 more
> {color:#FF}Caused by: java.lang.RuntimeException: 
> java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 
> 3.0.0-cdh6.3.2{color}
>     at 
> org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:102)
>     at 
> org.apache.hadoop.hive.ql.udf.UDAFPercentile.(UDAFPercentile.java:51)
>     ... 25 more
> Caused by: java.lang.IllegalArgumentException: Unrecognized Hadoop major 
> version number: 3.0.0-cdh6.3.2
>     at 
> org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:177)
>     at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:144)
>     at 
> org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:99)
>     ... 26 more
> Shutting down the session...
> done.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29645) BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when creating serializer

2022-10-17 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-29645.

Fix Version/s: 1.16.0
   1.17.0
   1.15.3
   Resolution: Fixed

Fixed in:
* master
** 0ffcfb757965ca62f7b4f1b93fd1387a45a50b2c
* 1.16
** c07d5aa98c016201eab38f883d20f2e807213113
* 1.15
** f19f032daee2fabefb4ccc6257740dd491b3a925

> BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when 
> creating serializer
> ---
>
> Key: FLINK-29645
> URL: https://issues.apache.org/jira/browse/FLINK-29645
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.12.7, 1.13.6, 1.16.0, 1.17.0, 1.15.2, 1.14.6
>Reporter: Piotr Nowojski
>Assignee: Dawid Wysakowicz
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.17.0, 1.15.3
>
>
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionKeyedStateBackend#getOrCreateKeyedState}}
>  is using a freshly constructed {{ExecutionConfig}} instead of the one 
> configured by the user from the environment.
> {code:java}
> public <N, S extends State, V> S getOrCreateKeyedState(
>         TypeSerializer<N> namespaceSerializer, StateDescriptor<S, V> stateDescriptor)
>         throws Exception {
>     checkNotNull(namespaceSerializer, "Namespace serializer");
>     checkNotNull(
>             keySerializer,
>             "State key serializer has not been configured in the config. "
>                     + "This operation cannot use partitioned state.");
>     if (!stateDescriptor.isSerializerInitialized()) {
>         stateDescriptor.initializeSerializerUnlessSet(new ExecutionConfig());
>     }
> {code}
> The correct one could be obtained from {{env.getExecutionConfig()}} in 
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionStateBackend#createKeyedStateBackend}}
>  
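
A minimal sketch of the suggested fix; the {{executionConfig}} field is illustrative 
(it would be a new constructor argument populated from 
{{Environment#getExecutionConfig}}), not the actual patch:
{code:java}
if (!stateDescriptor.isSerializerInitialized()) {
    // Use the user's config so serializer-relevant settings (e.g. Kryo
    // registrations, generic type handling) are respected.
    stateDescriptor.initializeSerializerUnlessSet(executionConfig);
}
{code}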



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-29645) BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when creating serializer

2022-10-17 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-29645:


Assignee: Dawid Wysakowicz

> BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when 
> creating serializer
> ---
>
> Key: FLINK-29645
> URL: https://issues.apache.org/jira/browse/FLINK-29645
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.12.7, 1.13.6, 1.16.0, 1.17.0, 1.15.2, 1.14.6
>Reporter: Piotr Nowojski
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionKeyedStateBackend#getOrCreateKeyedState}}
>  is using a freshly constructed {{ExecutionConfig}} instead of the one 
> configured by the user from the environment.
> {code:java}
> public <N, S extends State, V> S getOrCreateKeyedState(
>         TypeSerializer<N> namespaceSerializer, StateDescriptor<S, V> stateDescriptor)
>         throws Exception {
>     checkNotNull(namespaceSerializer, "Namespace serializer");
>     checkNotNull(
>             keySerializer,
>             "State key serializer has not been configured in the config. "
>                     + "This operation cannot use partitioned state.");
>     if (!stateDescriptor.isSerializerInitialized()) {
>         stateDescriptor.initializeSerializerUnlessSet(new ExecutionConfig());
>     }
> {code}
> The correct one could be obtained from {{env.getExecutionConfig()}} in 
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionStateBackend#createKeyedStateBackend}}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29645) BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when creating serializer

2022-10-17 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17618696#comment-17618696
 ] 

Dawid Wysakowicz commented on FLINK-29645:
--

This is not a significant issue, as we actually do not use any of the 
serializers during operation. The code is there to fulfill contracts (there is 
a check in the stack that a serializer has been initialized). I'll fix it 
anyhow.

> BatchExecutionKeyedStateBackend is using incorrect ExecutionConfig when 
> creating serializer
> ---
>
> Key: FLINK-29645
> URL: https://issues.apache.org/jira/browse/FLINK-29645
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.12.7, 1.13.6, 1.16.0, 1.17.0, 1.15.2, 1.14.6
>Reporter: Piotr Nowojski
>Priority: Major
>
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionKeyedStateBackend#getOrCreateKeyedState}}
>  is using a freshly constructed {{ExecutionConfig}} instead of the one 
> configured by the user from the environment.
> {code:java}
> public <N, S extends State, V> S getOrCreateKeyedState(
>         TypeSerializer<N> namespaceSerializer, StateDescriptor<S, V> stateDescriptor)
>         throws Exception {
>     checkNotNull(namespaceSerializer, "Namespace serializer");
>     checkNotNull(
>             keySerializer,
>             "State key serializer has not been configured in the config. "
>                     + "This operation cannot use partitioned state.");
>     if (!stateDescriptor.isSerializerInitialized()) {
>         stateDescriptor.initializeSerializerUnlessSet(new ExecutionConfig());
>     }
> {code}
> The correct one could be obtained from {{env.getExecutionConfig()}} in 
> {{org.apache.flink.streaming.api.operators.sorted.state.BatchExecutionStateBackend#createKeyedStateBackend}}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-10-11 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613317#comment-17613317
 ] 

Dawid Wysakowicz edited comment on FLINK-22243 at 10/11/22 5:19 PM:


Fixed in:
* master
** 40c5a71dbddaaa98d55f5fdb29f1c495b8084c83, 
75dddf69037ad6cdde0109ae53f79473af2b90da
* 1.16
** a9592c075d3130e1d6af3748f201de3e973c3d0e, 
d2a4f1b0b1cc4f522b17338261b9c3c79b4b9c92
* 1.15
** d6e8e4287eaaf144a3a4798b89825370170dce97, 
d5921ddf4f7f80b8cc3b68533664ae583d746b76


was (Author: dawidwys):
Fixed in:
* master
** 40c5a71dbddaaa98d55f5fdb29f1c495b8084c83
* 1.16
** a9592c075d3130e1d6af3748f201de3e973c3d0e
* 1.15
** d6e8e4287eaaf144a3a4798b89825370170dce97

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
> Attachments: screenshot-1.png
>
>
> As reported here FLINK-22134, the parallelism in the visual job graph on top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below, when reactive mode causes a parallelism change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-29500) InitializeOnMaster uses wrong parallelism with AdaptiveScheduler

2022-10-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-29500.

Fix Version/s: 1.17.0
   1.15.3
   1.16.1
 Assignee: Dawid Wysakowicz
   Resolution: Fixed

Fixed in:
* master
** cadfab59a9a5f6ee7f95e2a9fb3ad7b1b1d9
* 1.16
** 6c6f152e9f3ac10e8d6993e1d9ab8076053e2a44
* 1.15
** 74bc6d2f776baad35b2f7f5be41cdfe87759192f

> InitializeOnMaster uses wrong parallelism with AdaptiveScheduler
> 
>
> Key: FLINK-29500
> URL: https://issues.apache.org/jira/browse/FLINK-29500
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, Runtime / Coordination
>Affects Versions: 1.16.0, 1.15.2, 1.14.6
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> {{InputOutputFormatVertex}} uses {{JobVertex#getParallelism}} to invoke 
> {{InitializeOnMaster#initializeGlobal}}. However, this parallelism might not 
> be the one actually used to execute the vertex when combined with the 
> Adaptive Scheduler. With the Adaptive Scheduler, the execution parallelism 
> is provided via {{VertexParallelismStore}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-10-06 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613375#comment-17613375
 ] 

Dawid Wysakowicz commented on FLINK-29408:
--

It fails reliably most of the time on my private Azure. Any progress on that?

https://dev.azure.com/wysakowiczdawid/Flink/_build/results?buildId=1531=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25733

> HiveCatalogITCase failed with NPE
> -
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: luoyuxia
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4239277Z Sep 25 03:41:07  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-09-25T03:41:07.4239769Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4240265Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 

[jira] [Commented] (FLINK-29419) HybridShuffle.testHybridFullExchangesRestart hangs

2022-10-06 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17613319#comment-17613319
 ] 

Dawid Wysakowicz commented on FLINK-29419:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41620=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=12048

> HybridShuffle.testHybridFullExchangesRestart hangs
> --
>
> Key: FLINK-29419
> URL: https://issues.apache.org/jira/browse/FLINK-29419
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: Weijie Guo
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> 2022-09-26T10:56:44.0766792Z Sep 26 10:56:44 "ForkJoinPool-1-worker-25" #27 
> daemon prio=5 os_prio=0 tid=0x7f41a4efa000 nid=0x6d76 waiting on 
> condition [0x7f40ac135000]
> 2022-09-26T10:56:44.0767432Z Sep 26 10:56:44java.lang.Thread.State: 
> WAITING (parking)
> 2022-09-26T10:56:44.0767892Z Sep 26 10:56:44  at sun.misc.Unsafe.park(Native 
> Method)
> 2022-09-26T10:56:44.0768644Z Sep 26 10:56:44  - parking to wait for  
> <0xa0704e18> (a java.util.concurrent.CompletableFuture$Signaller)
> 2022-09-26T10:56:44.0769287Z Sep 26 10:56:44  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2022-09-26T10:56:44.0769949Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> 2022-09-26T10:56:44.0770623Z Sep 26 10:56:44  at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> 2022-09-26T10:56:44.0771349Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> 2022-09-26T10:56:44.0772092Z Sep 26 10:56:44  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-09-26T10:56:44.0772777Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:57)
> 2022-09-26T10:56:44.0773534Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.BatchShuffleITCaseBase.executeJob(BatchShuffleITCaseBase.java:115)
> 2022-09-26T10:56:44.0774333Z Sep 26 10:56:44  at 
> org.apache.flink.test.runtime.HybridShuffleITCase.testHybridFullExchangesRestart(HybridShuffleITCase.java:59)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41343=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-10-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-22243.

Fix Version/s: 1.17.0
   1.15.3
   1.16.1
   Resolution: Fixed

Fixed in:
* master
** 40c5a71dbddaaa98d55f5fdb29f1c495b8084c83
* 1.16
** a9592c075d3130e1d6af3748f201de3e973c3d0e
* 1.15
** d6e8e4287eaaf144a3a4798b89825370170dce97

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
> Attachments: screenshot-1.png
>
>
> As reported here FLINK-22134, the parallelism in the visual job graph on top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below, when reactive mode causes a parallelism change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-26469) Adaptive job shows error in WebUI when not enough resources are available

2022-10-05 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz closed FLINK-26469.

Fix Version/s: 1.17.0
   1.15.3
   1.16.1
   Resolution: Fixed

Fixed in:
* master
** 9600a1858bf608a40a0b4c108b70c230e890ccc3
* 1.16
** 264afe134e70fb4d93032ba818b0fe05e964a03b
* 1.15
** 507913052b3a02d64d6f816e3f87cb059ef52990

> Adaptive job shows error in WebUI when not enough resources are available
> 
>
> Key: FLINK-26469
> URL: https://issues.apache.org/jira/browse/FLINK-26469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Niklas Semmler
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.17.0, 1.15.3, 1.16.1
>
>
> When there are no resources and the job is in the CREATED state, the job page 
> shows the error: "Job failed during initialization of JobManager". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29500) InitializeOnMaster uses wrong parallelism with AdaptiveScheduler

2022-10-04 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-29500:


 Summary: InitializeOnMaster uses wrong parallelism with 
AdaptiveScheduler
 Key: FLINK-29500
 URL: https://issues.apache.org/jira/browse/FLINK-29500
 Project: Flink
  Issue Type: Bug
  Components: API / Core, Runtime / Coordination
Affects Versions: 1.14.6, 1.15.2, 1.16.0
Reporter: Dawid Wysakowicz


{{InputOutputFormatVertex}} uses {{JobVertex#getParallelism}} to invoke 
{{InitializeOnMaster#initializeGlobal}}. However, this parallelism might not be 
the one actually used to execute the vertex when combined with the Adaptive 
Scheduler. With the Adaptive Scheduler, the execution parallelism is 
provided via {{VertexParallelismStore}}.
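
For context, a minimal sketch of a format using the hook (the interface and its 
single callback are the real API; the class and loop body are illustrative):
{code:java}
import java.io.IOException;
import org.apache.flink.api.common.io.InitializeOnMaster;

public class PartitionedSinkFormat implements InitializeOnMaster {
    @Override
    public void initializeGlobal(int parallelism) throws IOException {
        // Pre-creates one output partition per subtask before the job starts.
        // If `parallelism` comes from JobVertex#getParallelism rather than the
        // VertexParallelismStore, this count can diverge from the number of
        // subtasks the Adaptive Scheduler actually runs.
        for (int i = 0; i < parallelism; i++) {
            // e.g. create a file/directory for subtask i (omitted)
        }
    }
}
{code}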



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-26469) Adaptive job shows error in WebUI when not enough resources are available

2022-10-04 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-26469:


Assignee: Dawid Wysakowicz

> Adaptive job shows error in WebUI when not enough resources are available
> 
>
> Key: FLINK-26469
> URL: https://issues.apache.org/jira/browse/FLINK-26469
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Niklas Semmler
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> When there are no resources and the job is in the CREATED state, the job page 
> shows the error: "Job failed during initialization of JobManager". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-09-30 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-22243:


Assignee: Dawid Wysakowicz

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Assignee: Dawid Wysakowicz
>Priority: Major
> Attachments: screenshot-1.png
>
>
> As reported here FLINK-22134, the parallelism in the visual job graph on top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below, when reactive mode causes a parallelism change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

