[jira] [Commented] (FLINK-27417) Flink JDBC SQL Connector:SELECT * FROM table WHERE co > 100; mysql will execute SELECT * FROM table to scan the whole table

2022-04-28 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529784#comment-17529784
 ] 

Shengkai Fang commented on FLINK-27417:
---

[~haojiawei] I think it's not a bug but a missing optimization. Would you mind 
implementing the feature? You can look at other connectors that implement the 
SupportsFilterPushDown interface for details. 
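For illustration, a minimal sketch of the effect filter pushdown has on the generated scan query. The `Predicate` type and `buildScanQuery` helper below are made up for this example; a real connector would implement `SupportsFilterPushDown` and translate the planner's `ResolvedExpression`s instead.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: once a source accepts pushed-down filters, it appends them to the
// scan query so the database applies them, instead of issuing a full scan.
public class FilterPushDownSketch {

    static final class Predicate {
        final String column;
        final String op;
        final String literal;
        Predicate(String column, String op, String literal) {
            this.column = column;
            this.op = op;
            this.literal = literal;
        }
    }

    // Build the scan query; with no accepted predicates this degrades
    // to the full-table scan described in the issue.
    static String buildScanQuery(String table, List<Predicate> accepted) {
        StringBuilder sql = new StringBuilder("SELECT * FROM ").append(table);
        List<String> clauses = new ArrayList<>();
        for (Predicate p : accepted) {
            clauses.add(p.column + " " + p.op + " " + p.literal);
        }
        if (!clauses.isEmpty()) {
            sql.append(" WHERE ").append(String.join(" AND ", clauses));
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        List<Predicate> filters = new ArrayList<>();
        filters.add(new Predicate("co", ">", "100"));
        System.out.println(buildScanQuery("t", filters));
    }
}
```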

> Flink JDBC SQL Connector:SELECT * FROM table WHERE  co > 100; mysql will 
> execute SELECT * FROM table to scan the whole table
> 
>
> Key: FLINK-27417
> URL: https://issues.apache.org/jira/browse/FLINK-27417
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: haojiawei
>Priority: Major
>
> Use the Flink CLI to create a MySQL mapping table and execute the query SELECT * 
> FROM table WHERE co > 100; MySQL will execute SELECT * FROM table and scan the 
> whole table.
>  
> To show the SQL that MySQL executes: select * from information_schema.`PROCESSLIST` where 
> info is not null;
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] fsk119 commented on pull request #19498: [FLINK-26588][docs]Translate the new CAST documentation to Chinese

2022-04-28 Thread GitBox


fsk119 commented on PR #19498:
URL: https://github.com/apache/flink/pull/19498#issuecomment-1112900350

   Could you share a screenshot of the doc you modified?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] fsk119 commented on pull request #19498: [FLINK-26588][docs]Translate the new CAST documentation to Chinese

2022-04-28 Thread GitBox


fsk119 commented on PR #19498:
URL: https://github.com/apache/flink/pull/19498#issuecomment-1112900057

   Sorry for the late response. I will review it soon.





[GitHub] [flink-docker] gaoyunhaii closed pull request #114: Merge adjacent RUN commands to avoid too much levels

2022-04-28 Thread GitBox


gaoyunhaii closed pull request #114: Merge adjacent RUN commands to avoid too 
much levels
URL: https://github.com/apache/flink-docker/pull/114





[GitHub] [flink-docker] gaoyunhaii commented on pull request #114: Merge adjacent RUN commands to avoid too much levels

2022-04-28 Thread GitBox


gaoyunhaii commented on PR #114:
URL: https://github.com/apache/flink-docker/pull/114#issuecomment-1112898263

   Thanks @zhuzhurk for the review! Merged~





[jira] [Commented] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529781#comment-17529781
 ] 

Shengkai Fang commented on FLINK-27449:
---

Hi, [~Yu.an]. You can open a PR to fix it and ping me later. 

> The comment is lost when creating a Table from Datastream and Schema
> 
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Major
> Attachments: test_result.png
>
>
>  
> A user reported that the comment was lost when creating a Table from 
> Datastream and Schema.
>  
> So this test will fail:
> {code:java}
> @Test
> public void test() {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>     StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
>     DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
>     Schema.Builder builder = Schema.newBuilder();
>     builder.column("f0", DataTypes.of(String.class)).withComment("this is a comment");
>     Table table = tableEnv.fromDataStream(dataStream, builder.build()).as("user_name");
>     table.getResolvedSchema();
>     table.printSchema();
>     String expected = "(\n  `user_name` STRING COMMENT 'this is a comment'\n)";
>     Assert.assertEquals(expected, table.getResolvedSchema().toString());
> } {code}
>  
> !test_result.png|width=577,height=139!
> I checked that the built schema did contain the comment. However, the 
> comment field of the resolved schema in the table is null.
> Is it a bug, or does it meet our expectations?
>  





[jira] [Commented] (FLINK-27435) Kubernetes Operator keeps savepoint history

2022-04-28 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529778#comment-17529778
 ] 

Gyula Fora commented on FLINK-27435:


Do you think we should add this to the status or introduce a new Savepoint CR 
similar to how the Ververica platform handles this?

> Kubernetes Operator keeps savepoint history
> ---
>
> Key: FLINK-27435
> URL: https://issues.apache.org/jira/browse/FLINK-27435
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Thomas Weise
>Priority: Major
>
> Currently the operator keeps track of the most recent savepoint that was 
> triggered through savepointTriggerNonce. In some cases it is necessary to 
> find older savepoints. For that, it would be nice if the operator can 
> optionally maintain a savepoint history (and perhaps also trigger disposal of 
> savepoints that fall out of the history). The maximum number of savepoints 
> retained could be configured by count and/or age.
>  
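The retention policy sketched in the description (a bounded history that evicts by count and/or age, handing evicted entries back so disposal could be triggered) might look like the following. All names here are made up for illustration; this is not operator code.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch: savepoint history bounded by a maximum count and a maximum age.
public class SavepointHistory {

    static final class Entry {
        final String path;
        final Instant triggeredAt;
        Entry(String path, Instant triggeredAt) {
            this.path = path;
            this.triggeredAt = triggeredAt;
        }
    }

    private final Deque<Entry> history = new ArrayDeque<>();
    private final int maxCount;
    private final Duration maxAge;

    SavepointHistory(int maxCount, Duration maxAge) {
        this.maxCount = maxCount;
        this.maxAge = maxAge;
    }

    // Record a new savepoint; return the entries that fell out of the
    // history so the caller could trigger their disposal.
    List<Entry> add(Entry e, Instant now) {
        history.addLast(e);
        List<Entry> evicted = new ArrayList<>();
        while (!history.isEmpty()
                && (history.size() > maxCount
                    || now.isAfter(history.peekFirst().triggeredAt.plus(maxAge)))) {
            evicted.add(history.removeFirst());
        }
        return evicted;
    }

    int size() {
        return history.size();
    }

    public static void main(String[] args) {
        SavepointHistory h = new SavepointHistory(2, Duration.ofHours(1));
        Instant t0 = Instant.EPOCH;
        h.add(new Entry("sp-1", t0), t0);
        h.add(new Entry("sp-2", t0), t0);
        System.out.println("evicted: " + h.add(new Entry("sp-3", t0), t0).size());
    }
}
```

Whether such state belongs in the CR status or in a separate Savepoint CR (as asked above) is orthogonal to this eviction logic.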





[GitHub] [flink] WencongLiu closed pull request #19596: [FLINK-25970][core] Add type of original exception to the detailMessage of SerializedThrowable.

2022-04-28 Thread GitBox


WencongLiu closed pull request #19596: [FLINK-25970][core] Add type of original 
exception to the detailMessage of SerializedThrowable.
URL: https://github.com/apache/flink/pull/19596





[GitHub] [flink] xintongsong commented on pull request #19596: [FLINK-25970][core] Add type of original exception to the detailMessage of SerializedThrowable.

2022-04-28 Thread GitBox


xintongsong commented on PR #19596:
URL: https://github.com/apache/flink/pull/19596#issuecomment-1112879304

   And you should not open a PR to `release-1.14.3-rc1`. Normally PRs should be 
opened to the `master` branch.





[GitHub] [flink] xintongsong commented on pull request #19596: [FLINK-25970][core] Add type of original exception to the detailMessage of SerializedThrowable.

2022-04-28 Thread GitBox


xintongsong commented on PR #19596:
URL: https://github.com/apache/flink/pull/19596#issuecomment-1112878445

   Hi @WencongLiu,
   I think the code changes show that you've understood the issue correctly. I 
have mainly two comments:
   1. I'm not sure about the `className(message)` format. Notice that the 
original message could be long and contain line breaks. Maybe `className: 
message` is better.
   2. CI report shows some test failures. Please take a look.
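The formatting concern in comment 1 can be seen side by side. These helpers are illustrative only, not the actual SerializedThrowable code; the point is that with `className(message)` a multi-line message buries the closing parenthesis, while `className: message` keeps the type readable on the first line.

```java
// Sketch of the two candidate detailMessage formats discussed above.
public class ThrowableMessageFormat {

    // Format discussed as problematic: the closing ")" may end up many
    // lines below if the message contains line breaks.
    static String classNameParens(Throwable t) {
        return t.getClass().getName() + "(" + t.getMessage() + ")";
    }

    // Suggested format: type and first message line stay together.
    static String classNameColon(Throwable t) {
        return t.getClass().getName() + ": " + t.getMessage();
    }

    public static void main(String[] args) {
        Throwable t = new IllegalStateException("line one\nline two");
        System.out.println(classNameColon(t));
    }
}
```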





[jira] [Closed] (FLINK-25636) FLIP-199: Change some default config values of blocking shuffle for better usability

2022-04-28 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao closed FLINK-25636.
---
Release Note: Since 1.15, sort-shuffle has become the default blocking 
shuffle implementation and shuffle data compression is enabled by default. 
These changes influence batch jobs only; for more information, please refer to 
the official documentation: 
https://nightlies.apache.org/flink/flink-docs-release-1.15.
  Resolution: Fixed

> FLIP-199: Change some default config values of blocking shuffle for better 
> usability
> 
>
> Key: FLINK-25636
> URL: https://issues.apache.org/jira/browse/FLINK-25636
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> This is the umbrella issue for FLIP-199; we will change several default 
> config values for batch shuffle and update the documentation accordingly.





[jira] [Commented] (FLINK-27444) Create standalone mode ClusterDescriptor

2022-04-28 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529760#comment-17529760
 ] 

Gyula Fora commented on FLINK-27444:


What do we mean here? There is already a StandaloneClusterDescriptor in Flink 
that submits to standalone clusters.

 

> Create standalone mode ClusterDescriptor
> 
>
> Key: FLINK-27444
> URL: https://issues.apache.org/jira/browse/FLINK-27444
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Usamah Jassat
>Priority: Major
>






[jira] [Commented] (FLINK-26497) Cast Exception when use split agg with multiple filters

2022-04-28 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529757#comment-17529757
 ] 

Jingsong Lee commented on FLINK-26497:
--

[~337361...@qq.com] Done, thanks!

> Cast Exception when use split agg with multiple filters
> ---
>
> Key: FLINK-26497
> URL: https://issues.apache.org/jira/browse/FLINK-26497
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0, 1.14.4
>Reporter: Jingsong Lee
>Assignee: Yunhong Zheng
>Priority: Major
>
> table.optimizer.distinct-agg.split.enabled is true.
> count(distinct c) filter (...), count(distinct c) filter (...), 
> count(distinct c) filter (...)
> The number of filter conditions exceeds 8.
> java.lang.RuntimeException: [J cannot be cast to [Ljava.lang.Object;
> (BTW, it seems that we don't have a test case to cover this situation.)





[jira] [Assigned] (FLINK-26497) Cast Exception when use split agg with multiple filters

2022-04-28 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-26497:


Assignee: Yunhong Zheng






[jira] [Reopened] (FLINK-25636) FLIP-199: Change some default config values of blocking shuffle for better usability

2022-04-28 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao reopened FLINK-25636:
-






[jira] [Comment Edited] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Yuan Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529749#comment-17529749
 ] 

Yuan Huang  edited comment on FLINK-27449 at 4/29/22 4:00 AM:
--

Thank you for your reply. Since it is a bug, I think it needs to be fixed 
although it is not urgent. If the community is busy, I am happy to work on it.


was (Author: JIRAUSER281248):
Thank you for your reply. So since it is a bug, I think it needs to be fixed 
although it is not urgent. If the community is busy, I am happy to work on it.






[jira] [Commented] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Yuan Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529749#comment-17529749
 ] 

Yuan Huang  commented on FLINK-27449:
-

Thank you for your reply. So since it is a bug, I think it needs to be fixed 
although it is not urgent. If the community is busy, I am happy to work on it.






[jira] [Commented] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529745#comment-17529745
 ] 

Shengkai Fang commented on FLINK-27449:
---

Hi. Thanks for your report. I think it's a bug in the conversion from DataStream 
to Table: SchemaTranslator#addPhysicalSourceDataTypeFields misses the comment 
part of the column.  
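The shape of the bug can be reduced to a small example. The `Column` type below is a stand-in, not Flink's class; the point is that when columns are rebuilt during schema translation, the copy must carry the comment through, otherwise it is silently lost.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of losing vs. keeping a column comment on copy.
public class ColumnCopySketch {

    static final class Column {
        final String name;
        final String dataType;
        final String comment; // may be null
        Column(String name, String dataType, String comment) {
            this.name = name;
            this.dataType = dataType;
            this.comment = comment;
        }
    }

    // Buggy shape: rebuilds columns from name and type only.
    static List<Column> copyDroppingComments(List<Column> in) {
        List<Column> out = new ArrayList<>();
        for (Column c : in) {
            out.add(new Column(c.name, c.dataType, null));
        }
        return out;
    }

    // Fixed shape: the comment travels with the column.
    static List<Column> copyKeepingComments(List<Column> in) {
        List<Column> out = new ArrayList<>();
        for (Column c : in) {
            out.add(new Column(c.name, c.dataType, c.comment));
        }
        return out;
    }
}
```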






[jira] [Commented] (FLINK-26497) Cast Exception when use split agg with multiple filters

2022-04-28 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529744#comment-17529744
 ] 

Yunhong Zheng commented on FLINK-26497:
---

Hi [~lzljs3620320], could you assign this issue to me? Thanks!






[jira] [Commented] (FLINK-25242) UDF with primitive int argument does not accept int values even after a not null filter

2022-04-28 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529743#comment-17529743
 ] 

Yunhong Zheng commented on FLINK-25242:
---

Hi [~godfreyhe], please assign this to me. Thanks!

> UDF with primitive int argument does not accept int values even after a not 
> null filter
> ---
>
> Key: FLINK-25242
> URL: https://issues.apache.org/jira/browse/FLINK-25242
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Major
>
> Add the following test case to {{TableEnvironmentITCase}} to reproduce this 
> issue.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   tEnv.executeSql("CREATE TEMPORARY FUNCTION MyUdf AS 
> 'org.apache.flink.table.api.MyUdf'")
>   tEnv.executeSql(
> """
>   |CREATE TABLE T (
>   |  a INT
>   |) WITH (
>   |  'connector' = 'values',
>   |  'bounded' = 'true'
>   |)
>   |""".stripMargin)
>   tEnv.executeSql("SELECT MyUdf(a) FROM T WHERE a IS NOT NULL").print()
> }
> {code}
> UDF code
> {code:scala}
> class MyUdf extends ScalarFunction {
>   def eval(a: Int): Int = {
> a + 1
>   }
> }
> {code}
> Exception stack
> {code}
> org.apache.flink.table.api.ValidationException: SQL validation failed. 
> Invalid function call:
> default_catalog.default_database.MyUdf(INT)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:168)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:219)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:736)
>   at 
> org.apache.flink.table.api.TableEnvironmentITCase.myTest(TableEnvironmentITCase.scala:97)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 

[jira] [Commented] (FLINK-27420) Suspended SlotManager fail to reregister metrics when started again

2022-04-28 Thread Ben Augarten (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529741#comment-17529741
 ] 

Ben Augarten commented on FLINK-27420:
--

[~xtsong] – thanks, I posted a PR against master here: 
[https://github.com/apache/flink/pull/19607]. Let me know if you'd like any 
changes to the PR and I'm happy to make them. I can post PRs against 
release-1.14 and release-1.15 after.

That's good to know that there might be another bugfix release for 1.13.x. I'm 
going to work on a patch for 1.13 as well because most of our applications run 
on 1.13.

> Suspended SlotManager fail to reregister metrics when started again
> ---
>
> Key: FLINK-27420
> URL: https://issues.apache.org/jira/browse/FLINK-27420
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.13.5
>Reporter: Ben Augarten
>Assignee: Ben Augarten
>Priority: Major
>  Labels: pull-request-available
>
> The symptom is that SlotManager metrics are missing (taskslotsavailable and 
> taskslotstotal) when a SlotManager is suspended and then restarted. We 
> noticed this issue when running 1.13.5, but I believe this impacts 1.14.x, 
> 1.15.x, and master.
>  
> When a SlotManager is suspended, the [metrics group is 
> closed|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L214].
>  When the SlotManager is [started 
> again|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L181],
>  it makes an attempt to [reregister 
> metrics|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L199-L202]
>  but that fails because the underlying metrics group [is still 
> closed|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L393]
>  
>  
> I was able to trace through this issue by restarting zookeeper nodes in a 
> staging environment and watching the JM with a debugger. 
>  
> A concise test, which currently fails, shows the expected behavior – 
> [https://github.com/apache/flink/compare/master...baugarten:baugarten/slot-manager-missing-metrics?expand=1]
>  
> I am happy to provide a PR to fix this issue, but first would like to verify 
> that this is not intended.
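The failure mode described above can be shown with a toy model. The class below is not Flink's AbstractMetricGroup; it only illustrates that registering on a closed group is silently dropped, which is why a restarted SlotManager needs a fresh metric group rather than a reused closed one.

```java
import java.util.HashMap;
import java.util.Map;

// Toy metric group: once closed, registrations are no-ops.
public class MetricGroupSketch {
    private final Map<String, Object> metrics = new HashMap<>();
    private boolean closed;

    void close() {
        closed = true;
        metrics.clear();
    }

    // Returns false when the registration is silently dropped,
    // mirroring the reported missing-metrics symptom.
    boolean register(String name, Object gauge) {
        if (closed) {
            return false;
        }
        metrics.put(name, gauge);
        return true;
    }

    int size() {
        return metrics.size();
    }

    public static void main(String[] args) {
        MetricGroupSketch g = new MetricGroupSketch();
        g.register("taskSlotsTotal", 4);
        g.close();
        System.out.println("after close, register succeeds: " + g.register("taskSlotsAvailable", 2));
    }
}
```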





[jira] [Commented] (FLINK-27155) Reduce multiple reads to the same Changelog file in the same taskmanager during restore

2022-04-28 Thread Feifan Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529740#comment-17529740
 ] 

Feifan Wang commented on FLINK-27155:
-

Thanks [~ym] and [~roman], are there any inclinations or reasons regarding 
these four options?

> Reduce multiple reads to the same Changelog file in the same taskmanager 
> during restore
> ---
>
> Key: FLINK-27155
> URL: https://issues.apache.org/jira/browse/FLINK-27155
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Feifan Wang
>Assignee: Feifan Wang
>Priority: Major
> Fix For: 1.16.0
>
>
> h3. Background
> In the current implementation, State changes of different operators in the 
> same taskmanager may be written to the same changelog file, which effectively 
> reduces the number of files and requests to DFS.
> But on the other hand, the current implementation also reads the same 
> changelog file multiple times on recovery. More specifically, the number of 
> times the same changelog file is accessed is related to the number of 
> ChangeSets contained in it. And since each read needs to skip the preceding 
> bytes, this network traffic is also wasted.
> The result is a lot of unnecessary requests to DFS when there are multiple 
> slots and keyed state in the same taskmanager.
> h3. Proposal
> We can reduce multiple reads to the same changelog file in the same 
> taskmanager during restore.
> One possible approach is to read the changelog file all at once on first access 
> and cache it in memory or in a local file for a period of time.
> I think this could be a subtask of [v2 FLIP-158: Generalized incremental 
> checkpoints|https://issues.apache.org/jira/browse/FLINK-25842] .
> Hi [~ym], [~roman], what do you think?
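The proposed optimization reduces to reading each file once and serving later ChangeSet lookups from a cache. The class and loader below are stand-ins for the real DFS access, purely to illustrate the idea of collapsing repeated reads of the same path.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch: cache whole-file reads so each changelog file hits DFS once,
// instead of being re-opened (and the preceding bytes skipped) per ChangeSet.
public class ChangelogReadCache {

    private final Map<String, byte[]> cache = new HashMap<>();
    final AtomicInteger dfsReads = new AtomicInteger();

    byte[] readAll(String path, Function<String, byte[]> dfsLoader) {
        return cache.computeIfAbsent(path, p -> {
            dfsReads.incrementAndGet(); // only the first access pays the DFS cost
            return dfsLoader.apply(p);
        });
    }

    public static void main(String[] args) {
        ChangelogReadCache c = new ChangelogReadCache();
        c.readAll("changelog-1", p -> new byte[]{1, 2, 3});
        c.readAll("changelog-1", p -> new byte[]{1, 2, 3});
        System.out.println("DFS reads: " + c.dfsReads.get());
    }
}
```

A real implementation would also need an eviction policy (the "period of time" in the proposal) and bounded memory, which this sketch omits.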





[jira] [Commented] (FLINK-27391) Support Hive bucket table

2022-04-28 Thread luoyuxia (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529739#comment-17529739
 ] 

luoyuxia commented on FLINK-27391:
--

[~tartarus] Sorry for missing the feature of reading bucket tables. I expect 
create/read/write support for bucket tables to appear in Flink 1.16.

By the way, for reading a bucket table, it can be ensured that the same bucket 
is treated as the same split and read by the same TM. But when the bucket 
table is used for a bucket join, it won't work, because Flink itself doesn't 
support bucket joins, at least for now. Once bucket join is supported in Flink 
itself, we can do some adaptation in the Hive connector.
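The "same bucket, same split" guarantee mentioned above amounts to grouping files by bucket id before split assignment. The grouping key below is illustrative (a real reader would parse the bucket id from Hive's file naming scheme); exploiting the co-location for a bucket join is the separate engine-side work noted in the comment.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: keep all files of one Hive bucket in one split so a single
// TM reads the whole bucket.
public class BucketSplitAssigner {

    // Group file paths by bucket id; each resulting list becomes one split.
    static Map<Integer, List<String>> groupByBucket(Map<String, Integer> fileToBucket) {
        Map<Integer, List<String>> splits = new HashMap<>();
        fileToBucket.forEach((file, bucket) ->
                splits.computeIfAbsent(bucket, b -> new ArrayList<>()).add(file));
        return splits;
    }

    public static void main(String[] args) {
        Map<String, Integer> files = new HashMap<>();
        files.put("part-0_0", 0);
        files.put("part-0_1", 0);
        files.put("part-1_0", 1);
        System.out.println("splits: " + groupByBucket(files).size());
    }
}
```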

> Support Hive bucket table
> -
>
> Key: FLINK-27391
> URL: https://issues.apache.org/jira/browse/FLINK-27391
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.13.1, 1.15.0
>Reporter: tartarus
>Priority: Major
>
> Support Hive bucket table





[jira] [Updated] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-27449:
--
Priority: Major  (was: Critical)

> The comment is lost when creating a Table from Datastream and Schema
> 
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Major
> Attachments: test_result.png
>
>
>  
> A user reported that the comment was lost when creating a Table from 
> Datastream and Schema.
>  
> So this test will fail:
> {code:java}
> @Test
> public void test() {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
> Schema.Builder builder = Schema.newBuilder();
> builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
> comment");
> Table table = tableEnv.fromDataStream(dataStream, 
> builder.build()).as("user_name");
> table.getResolvedSchema();
> table.printSchema();
> String expected = "(\n  `user_name` STRING COMMENT 'this is a 
> comment'\n)";
> Assert.assertEquals(expected, table.getResolvedSchema().toString());
> } {code}
>  
> !test_result.png|width=577,height=139!
> I checked that the built schema did contain the comment. However, the 
> comment field of the resolved schema in the table is null.
> Is it a bug or just meets our expectations?
>  





[jira] [Commented] (FLINK-25447) batch query cannot generate plan when a sorted view into multi sinks

2022-04-28 Thread Yunhong Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529738#comment-17529738
 ] 

Yunhong Zheng commented on FLINK-25447:
---

Hi [~godfreyhe], could you assign this to me? Thanks!

> batch query cannot generate plan when a sorted view into multi sinks
> 
>
> Key: FLINK-25447
> URL: https://issues.apache.org/jira/browse/FLINK-25447
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.2
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.16.0
>
>
> A batch query  write a sorted view into multi sinks will get a cannot plan 
> exception
> {code}
>   @Test
>   def testSortedResultIntoMultiSinks(): Unit = {
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE Src (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING,
>  |  `e` STRING
>  |) WITH (
>  |  'connector' = 'values',
>  |  'bounded' = 'true'
>  |)
>""".stripMargin)
> val query = "SELECT * FROM Src order by c"
> val table = util.tableEnv.sqlQuery(query)
> util.tableEnv.registerTable("sortedTable", table)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink1 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>""".stripMargin)
> util.tableEnv.executeSql(
>   s"""
>  |CREATE TABLE sink2 (
>  |  `a` INT,
>  |  `b` BIGINT,
>  |  `c` STRING,
>  |  `d` STRING
>  |) WITH (
>  |  'connector' = 'filesystem',
>  |  'format' = 'testcsv',
>  |  'path' = '/tmp/test'
>  |)
>   """.stripMargin)
> val stmtSet= util.tableEnv.createStatementSet()
> stmtSet.addInsertSql(
>   "insert into sink1 select a, b, listagg(d) from sortedTable group by a, 
> b")
> stmtSet.addInsertSql(
>   "insert into sink2 select a, b, c, d from sortedTable")
> util.verifyExecPlan(stmtSet)
>   }
> {code}
> {code}
>   org.apache.flink.table.api.TableException: Cannot generate a valid 
> execution plan for the given query: 
>   LogicalSink(table=[default_catalog.default_database.sink2], fields=[a, b, 
> c, d])
>   +- LogicalProject(inputs=[0..3])
>  +- LogicalTableScan(table=[[IntermediateRelTable_0]])
>   This exception indicates that the query uses an unsupported SQL feature.
>   Please check the documentation for the set of currently supported SQL 
> features.
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:76)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.$anonfun$optimize$1(FlinkChainedProgram.scala:62)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:156)
>   at 
> scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:156)
>   at scala.collection.Iterator.foreach(Iterator.scala:937)
>   at scala.collection.Iterator.foreach$(Iterator.scala:937)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
>   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
>   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:156)
>   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:154)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:88)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:59)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.$anonfun$doOptimize$1$adapted(BatchCommonSubGraphBasedOptimizer.scala:47)
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58)
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51)
>   at 

[GitHub] [flink] flinkbot commented on pull request #19607: [FLINK-27420][runtime] Recreate metric groups for each new RM to avoid metric loss

2022-04-28 Thread GitBox


flinkbot commented on PR #19607:
URL: https://github.com/apache/flink/pull/19607#issuecomment-1112840992

   
   ## CI report:
   
   * bf0074e894247bd4009d830b03ac52b90b7b7851 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27420) Suspended SlotManager fail to reregister metrics when started again

2022-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27420:
---
Labels: pull-request-available  (was: )

> Suspended SlotManager fail to reregister metrics when started again
> ---
>
> Key: FLINK-27420
> URL: https://issues.apache.org/jira/browse/FLINK-27420
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Metrics
>Affects Versions: 1.13.5
>Reporter: Ben Augarten
>Assignee: Ben Augarten
>Priority: Major
>  Labels: pull-request-available
>
> The symptom is that SlotManager metrics are missing (taskslotsavailable and 
> taskslotstotal) when a SlotManager is suspended and then restarted. We 
> noticed this issue when running 1.13.5, but I believe this impacts 1.14.x, 
> 1.15.x, and master.
>  
> When a SlotManager is suspended, the [metrics group is 
> closed|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L214].
>  When the SlotManager is [started 
> again|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L181],
>  it makes an attempt to [reregister 
> metrics|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DeclarativeSlotManager.java#L199-L202],
>  but that fails because the underlying metrics group [is still 
> closed|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/metrics/groups/AbstractMetricGroup.java#L393]
>  
>  
> I was able to trace through this issue by restarting zookeeper nodes in a 
> staging environment and watching the JM with a debugger. 
>  
> A concise test, which currently fails, shows the expected behavior – 
> [https://github.com/apache/flink/compare/master...baugarten:baugarten/slot-manager-missing-metrics?expand=1]
>  
> I am happy to provide a PR to fix this issue, but first would like to verify 
> that this is not intended.
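
The failure mode described in this ticket can be reproduced in miniature with
stand-in classes (these are simplified stand-ins, not Flink's actual
AbstractMetricGroup): a closed group silently drops registrations, so the fix
is to build a fresh group on each leadership grant rather than reusing the
closed one.

```java
public class RecreateMetricGroups {
    /** Minimal stand-in for a closable metric group: once closed, it drops registrations. */
    static class MetricGroup {
        private boolean closed;
        private int registered;
        void close() { closed = true; }
        void register(String name) { if (!closed) registered++; }
        int size() { return registered; }
    }

    public static void main(String[] args) {
        // Buggy pattern: reuse one group across suspend/start cycles.
        MetricGroup reused = new MetricGroup();
        reused.register("taskSlotsTotal");
        reused.close();                    // SlotManager suspended, group closed
        reused.register("taskSlotsTotal"); // started again: registration silently lost
        // Patched pattern: create a fresh group each time leadership is granted.
        MetricGroup fresh = new MetricGroup();
        fresh.register("taskSlotsTotal");
        if (reused.size() != 1) throw new AssertionError("closed group should drop the retry");
        if (fresh.size() != 1) throw new AssertionError("fresh group must accept registrations");
        System.out.println("fresh group registers metrics after restart");
    }
}
```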





[GitHub] [flink] baugarten opened a new pull request, #19607: [FLINK-27420][runtime] Recreate metric groups for each new RM to avoid metric loss

2022-04-28 Thread GitBox


baugarten opened a new pull request, #19607:
URL: https://github.com/apache/flink/pull/19607

   
   
   ## What is the purpose of the change
   Recreate the metric groups for slot manager and resource manager when 
leadership is granted. This resolves an issue where the metric group is closed 
when a resource manager previously lost leadership.
   
   ## Brief change log
   - The MetricRegistry and Hostname are stored in the 
ResourceManagerProcessContext
   - The SlotManagerMetricGroup and ResourceManagerMetricGroup are created in 
the ResourceManagerFactory
   
   ## Verifying this change
   
   - Added tests for granting leadership, revoking leadership, and granting 
leadership again and verifying the metrics are registered. I verified this test 
fails before the patch.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[GitHub] [flink-web] gaoyunhaii commented on a diff in pull request #526: Announcement blogpost for the 1.15 release

2022-04-28 Thread GitBox


gaoyunhaii commented on code in PR #526:
URL: https://github.com/apache/flink-web/pull/526#discussion_r861431542


##
_posts/2022-04-26-1.15-announcement.md:
##
@@ -0,0 +1,434 @@
+---
+layout: post
+title:  "Announcing the Release of Apache Flink 1.15"
+subtitle: ""
+date: 2022-04-26T08:00:00.000Z
+categories: news
+authors:
+- yungao:
+  name: "Yun Gao"
+  twitter: "YunGao16"
+- joemoe:
+  name: "Joe Moser"
+  twitter: "JoemoeAT"
+
+---
+
+Thanks to our well-organized and open community, Apache Flink continues 
+[to grow](https://www.apache.org/foundation/docs/FY2021AnnualReport.pdf) as a 
+technology and remain one of the most active projects in
+the Apache community. With the release of Flink 1.15, we are proud to announce 
a number of 
+exciting changes.
+
+One of the main concepts that makes Apache Flink stand out is the unification 
of 
+batch (aka bounded) and stream (aka unbounded) data processing. A lot of 
+effort went into this unification in the previous releases but you can expect 
more efforts in this direction. 
+Apache Flink is not only growing when it comes to contributions and users, but

Review Comment:
   ```suggestion
   
   Apache Flink is not only growing when it comes to contributions and users, 
but
   ```






[GitHub] [flink-web] gaoyunhaii commented on a diff in pull request #526: Announcement blogpost for the 1.15 release

2022-04-28 Thread GitBox


gaoyunhaii commented on code in PR #526:
URL: https://github.com/apache/flink-web/pull/526#discussion_r861431461


##
_posts/2022-04-26-1.15-announcement.md:
##
@@ -0,0 +1,434 @@
+---
+layout: post
+title:  "Announcing the Release of Apache Flink 1.15"
+subtitle: ""
+date: 2022-04-26T08:00:00.000Z
+categories: news
+authors:
+- yungao:
+  name: "Yun Gao"
+  twitter: "YunGao16"
+- joemoe:
+  name: "Joe Moser"
+  twitter: "JoemoeAT"
+
+---
+
+Thanks to our well-organized and open community, Apache Flink continues 
+[to grow](https://www.apache.org/foundation/docs/FY2021AnnualReport.pdf) as a 
+technology and remain one of the most active projects in
+the Apache community. With the release of Flink 1.15, we are proud to announce 
a number of 
+exciting changes.
+
+One of the main concepts that makes Apache Flink stand out is the unification 
of 
+batch (aka bounded) and stream (aka unbounded) data processing. A lot of 
+effort went into this unification in the previous releases but you can expect 
more efforts in this direction. 
+Apache Flink is not only growing when it comes to contributions and users, but
+also out of the original use cases. We are seeing a trend towards more 
business/analytics 
+use cases implemented in low-/no-code. Flink SQL is the feature in the Flink 
ecosystem 
+that enables such uses cases and this is why its popularity continues to grow. 
 
+
+Apache Flink is an essential building block in data pipelines/architectures 
and 
+is used with many other technologies in order to drive all sorts of use cases. 
While new ideas/products
+may appear in this domain, existing technologies continue to establish 
themselves as standards for solving 
+mission-critical problems. Knowing that we have such a wide reach and play a 
role in the success of many 
+projects, it is important that the experience of 
+integrating with Apache Flink is as seamless and easy as possible. 

Review Comment:
   ```suggestion
   projects, it is important that the experience of 
   integrating Apache Flink with cloud infrastructures and other systems is as 
seamless and easy as possible. 
   ```






[GitHub] [flink] swuferhong commented on pull request #19579: [FLINK-25097][table-planner] Fix bug in inner join when the filter co…

2022-04-28 Thread GitBox


swuferhong commented on PR #19579:
URL: https://github.com/apache/flink/pull/19579#issuecomment-1112830466

   Hi @godfreyhe  could you please help review this PR. Thanks!





[jira] [Commented] (FLINK-27364) Support DESC EXTENDED partition statement for partitioned table

2022-04-28 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529709#comment-17529709
 ] 

Jingsong Lee commented on FLINK-27364:
--

If [~nicholasjiang] still has time to develop FLINK-25177, [~lsy] you can 
review FLINK-25177 and make sure it is finished according to the semantics we 
defined.

What do you think?

> Support DESC EXTENDED partition statement for partitioned table
> ---
>
> Key: FLINK-27364
> URL: https://issues.apache.org/jira/browse/FLINK-27364
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> DESCRIBE [EXTENDED] [db_name.]table_name \{ [PARTITION partition_spec]  | 
> [col_name ] };





[jira] [Updated] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Yuan Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Huang  updated FLINK-27449:

Description: 
 

A user reported that the comment was lost when creating a Table from Datastream 
and Schema.

 

So this test will fail:
{code:java}
@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n  `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
} {code}
 

!test_result.png|width=577,height=139!


I checked that the built schema did contain the comment. However, the comment 
field of the resolved schema in the table is null.

Is it a bug or just meets our expectations?

 

  was:
User reported that the comment was lost when creating a Table from Datastream 
and Schema.

So this test will fail:
{quote}@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
}{quote}
 

!test_result.png|width=577,height=139!

 

Is it a bug or just meets our expectations?

 


> The comment is lost when creating a Table from Datastream and Schema
> 
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Critical
> Attachments: test_result.png
>
>
>  
> A user reported that the comment was lost when creating a Table from 
> Datastream and Schema.
>  
> So this test will fail:
> {code:java}
> @Test
> public void test() {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
> Schema.Builder builder = Schema.newBuilder();
> builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
> comment");
> Table table = tableEnv.fromDataStream(dataStream, 
> builder.build()).as("user_name");
> table.getResolvedSchema();
> table.printSchema();
> String expected = "(\n  `user_name` STRING COMMENT 'this is a 
> comment'\n)";
> Assert.assertEquals(expected, table.getResolvedSchema().toString());
> } {code}
>  
> !test_result.png|width=577,height=139!
> I checked that the built schema did contain the comment. However, the 
> comment field of the resolved schema in the table is null.
> Is it a bug or just meets our expectations?
>  





[jira] [Updated] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Yuan Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Huang  updated FLINK-27449:

Description: 
User reported that the comment was lost when creating a Table from Datastream 
and Schema.

So this test will fail:
{quote}@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
}{quote}
 

!test_result.png|width=577,height=139!

 

Is it a bug or just meets our expectations?

 

  was:
User reported that the comment was lost when creating a table from datastream 
and schema.

So this test will fail:
{quote}@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
}{quote}
 


```

@Test
public void test()

{ StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment 
tableEnv = StreamTableEnvironment.create(env); DataStream dataStream = 
env.fromElements("Alice", "Bob", "John"); Schema.Builder builder = 
Schema.newBuilder(); builder.column("f0", 
DataTypes.of(String.class)).withComment("this is a comment"); Table table = 
tableEnv.fromDataStream(dataStream, builder.build()).as("user_name"); 
table.getResolvedSchema(); table.printSchema(); String expected = "(\n 
`user_name` STRING COMMENT 'this is a comment'\n)"; 
Assert.assertEquals(expected, table.getResolvedSchema().toString()); }

```

 

!test_result.png!

 

Is it a bug or just meets our expectations?

 


> The comment is lost when creating a Table from Datastream and Schema
> 
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Critical
> Attachments: test_result.png
>
>
> User reported that the comment was lost when creating a Table from Datastream 
> and Schema.
> So this test will fail:
> {quote}@Test
> public void test() {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
> Schema.Builder builder = Schema.newBuilder();
> builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
> comment");
> Table table = tableEnv.fromDataStream(dataStream, 
> builder.build()).as("user_name");
> table.getResolvedSchema();
> table.printSchema();
> String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
> Assert.assertEquals(expected, table.getResolvedSchema().toString());
> }{quote}
>  
> !test_result.png|width=577,height=139!
>  
> Is it a bug or just meets our expectations?
>  





[jira] [Updated] (FLINK-27449) The comment is lost when creating a Table from Datastream and Schema

2022-04-28 Thread Yuan Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Huang  updated FLINK-27449:

Summary: The comment is lost when creating a Table from Datastream and 
Schema  (was: The comment is lost when printing table schema)

> The comment is lost when creating a Table from Datastream and Schema
> 
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Critical
> Attachments: test_result.png
>
>
> User reported that the comment was lost when creating a table from datastream 
> and schema.
> So this test will fail:
> {quote}@Test
> public void test() {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
> Schema.Builder builder = Schema.newBuilder();
> builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
> comment");
> Table table = tableEnv.fromDataStream(dataStream, 
> builder.build()).as("user_name");
> table.getResolvedSchema();
> table.printSchema();
> String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
> Assert.assertEquals(expected, table.getResolvedSchema().toString());
> }{quote}
>  
> ```
> @Test
> public void test()
> { StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment 
> tableEnv = StreamTableEnvironment.create(env); DataStream dataStream 
> = env.fromElements("Alice", "Bob", "John"); Schema.Builder builder = 
> Schema.newBuilder(); builder.column("f0", 
> DataTypes.of(String.class)).withComment("this is a comment"); Table table = 
> tableEnv.fromDataStream(dataStream, builder.build()).as("user_name"); 
> table.getResolvedSchema(); table.printSchema(); String expected = "(\n 
> `user_name` STRING COMMENT 'this is a comment'\n)"; 
> Assert.assertEquals(expected, table.getResolvedSchema().toString()); }
> ```
>  
> !test_result.png!
>  
> Is it a bug or just meets our expectations?
>  





[jira] [Updated] (FLINK-27449) The comment is lost when printing table schema

2022-04-28 Thread Yuan Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Huang  updated FLINK-27449:

Description: 
User reported that the comment was lost when creating a table from datastream 
and schema.

So this test will fail:
{quote}@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
}{quote}
 


```

@Test
public void test()

{ StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(); StreamTableEnvironment 
tableEnv = StreamTableEnvironment.create(env); DataStream dataStream = 
env.fromElements("Alice", "Bob", "John"); Schema.Builder builder = 
Schema.newBuilder(); builder.column("f0", 
DataTypes.of(String.class)).withComment("this is a comment"); Table table = 
tableEnv.fromDataStream(dataStream, builder.build()).as("user_name"); 
table.getResolvedSchema(); table.printSchema(); String expected = "(\n 
`user_name` STRING COMMENT 'this is a comment'\n)"; 
Assert.assertEquals(expected, table.getResolvedSchema().toString()); }

```

 

!test_result.png!

 

Is it a bug or just meets our expectations?

 

  was:
User reported that the comment was lost when printing the table schema.

So this test will fail:
```

@Test
public void test() {
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
Schema.Builder builder = Schema.newBuilder();
builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
comment");

Table table = tableEnv.fromDataStream(dataStream, 
builder.build()).as("user_name");
table.getResolvedSchema();
table.printSchema();
String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
Assert.assertEquals(expected, table.getResolvedSchema().toString());
}

```

 

!test_result.png!

 

Is it a bug or just meets our expectations?

 


> The comment is lost when printing table schema
> --
>
> Key: FLINK-27449
> URL: https://issues.apache.org/jira/browse/FLINK-27449
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.4
>Reporter: Yuan Huang 
>Priority: Critical
> Attachments: test_result.png
>
>
> User reported that the comment was lost when creating a table from datastream 
> and schema.
> So this test will fail:
> {quote}@Test
> public void test() {
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
> DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
> Schema.Builder builder = Schema.newBuilder();
> builder.column("f0", DataTypes.of(String.class)).withComment("this is a 
> comment");
> Table table = tableEnv.fromDataStream(dataStream, 
> builder.build()).as("user_name");
> table.getResolvedSchema();
> table.printSchema();
> String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
> Assert.assertEquals(expected, table.getResolvedSchema().toString());
> }{quote}
> !test_result.png!
>  
> Is it a bug, or does it just meet our expectations?
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27449) The comment is lost when printing table schema

2022-04-28 Thread Yuan Huang (Jira)
Yuan Huang  created FLINK-27449:
---

 Summary: The comment is lost when printing table schema
 Key: FLINK-27449
 URL: https://issues.apache.org/jira/browse/FLINK-27449
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.14.4
Reporter: Yuan Huang 
 Attachments: test_result.png

User reported that the comment was lost when printing the table schema.

So this test will fail:
```
@Test
public void test() {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    DataStream<String> dataStream = env.fromElements("Alice", "Bob", "John");
    Schema.Builder builder = Schema.newBuilder();
    builder.column("f0", DataTypes.of(String.class)).withComment("this is a comment");

    Table table = tableEnv.fromDataStream(dataStream, builder.build()).as("user_name");
    table.getResolvedSchema();
    table.printSchema();
    String expected = "(\n `user_name` STRING COMMENT 'this is a comment'\n)";
    Assert.assertEquals(expected, table.getResolvedSchema().toString());
}
```

 

!test_result.png!

 

Is it a bug, or does it just meet our expectations?

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-21301) Decouple window aggregate allow lateness with state ttl configuration

2022-04-28 Thread Shawn Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529678#comment-17529678
 ] 

Shawn Liu edited comment on FLINK-21301 at 4/28/22 10:59 PM:
-

[~lzljs3620320]  -what is the table config for late-fire? I didn't find it in 
the documentation. 
[https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]-

 

-I also find it useful to have this allow-lateness config separate from state 
ttl. For example, when I have a join and windowing in the same SQL Script, I 
only want to set state ttl for join but not allowed lateness for windowing. I 
don't find another way to do so without table.exec.emit.allow-lateness-

I misunderstood the current solution. 
[https://www.mail-archive.com/issues@flink.apache.org/msg498605.html] provides 
a good summary. 
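As a side note for readers of this thread: in current Flink versions, the idle state retention that affects the join can be set globally via table configuration. A minimal sketch follows, assuming the global `table.exec.state.ttl` option; the per-operator decoupling discussed in this ticket is precisely what would keep window operators out of its scope:

```sql
-- Global state TTL: applies to stateful operators such as the join.
-- With the decoupling proposed in this ticket, window aggregates would
-- no longer be affected by this value.
SET 'table.exec.state.ttl' = '1 d';
```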


was (Author: xiangcaohello):
[~lzljs3620320]  what is the table config for late-fire? I didn't find it in 
the documentation. 
[https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]

 

I also find it useful to have this allow-lateness config separate from state 
ttl. For example, when I have a join and windowing in the same SQL Script, I 
only want to set state ttl for join but not allowed lateness for windowing. I 
don't find another way to do so without table.exec.emit.allow-lateness

 

> Decouple window aggregate allow lateness with state ttl configuration
> -
>
> Key: FLINK-21301
> URL: https://issues.apache.org/jira/browse/FLINK-21301
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.14.0
>
>
> Currently, the state retention time config also affects the state cleaning 
> behavior of Window Aggregate, which is unexpected for most users.
> E.g. in the following example, a user would set `MinIdleStateRetentionTime` to 
> 1 day to clean up state in `deduplicate`. However, it also affects the cleaning 
> behavior of the window aggregate: for example, 2021-01-04 data would be cleaned 
> at 2021-01-06 instead of 2021-01-05. 
> {code:sql}
> SELECT
>  DATE_FORMAT(tumble_end(ROWTIME ,interval '1' DAY),'-MM-dd') as stat_time,
>  count(1) first_phone_num
> FROM (
>  SELECT 
>  ROWTIME,
>  user_id,
>  row_number() over(partition by user_id, pdate order by ROWTIME ) as rn
>  FROM source_kafka_biz_shuidi_sdb_crm_call_record 
> ) cal 
> where rn =1
> group by tumble(ROWTIME,interval '1' DAY);{code}
> It's better to decouple the window aggregate's allowed lateness from 
> `MinIdleStateRetentionTime`.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-21301) Decouple window aggregate allow lateness with state ttl configuration

2022-04-28 Thread Shawn Liu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529678#comment-17529678
 ] 

Shawn Liu commented on FLINK-21301:
---

[~lzljs3620320]  what is the table config for late-fire? I didn't find it in 
the documentation. 
[https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]

 

I also find it useful to have this allow-lateness config separate from state 
ttl. For example, when I have a join and windowing in the same SQL Script, I 
only want to set state ttl for join but not allowed lateness for windowing. I 
don't find another way to do so without table.exec.emit.allow-lateness

 

> Decouple window aggregate allow lateness with state ttl configuration
> -
>
> Key: FLINK-21301
> URL: https://issues.apache.org/jira/browse/FLINK-21301
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.14.0
>
>
> Currently, the state retention time config also affects the state cleaning 
> behavior of Window Aggregate, which is unexpected for most users.
> E.g. in the following example, a user would set `MinIdleStateRetentionTime` to 
> 1 day to clean up state in `deduplicate`. However, it also affects the cleaning 
> behavior of the window aggregate: for example, 2021-01-04 data would be cleaned 
> at 2021-01-06 instead of 2021-01-05. 
> {code:sql}
> SELECT
>  DATE_FORMAT(tumble_end(ROWTIME ,interval '1' DAY),'-MM-dd') as stat_time,
>  count(1) first_phone_num
> FROM (
>  SELECT 
>  ROWTIME,
>  user_id,
>  row_number() over(partition by user_id, pdate order by ROWTIME ) as rn
>  FROM source_kafka_biz_shuidi_sdb_crm_call_record 
> ) cal 
> where rn =1
> group by tumble(ROWTIME,interval '1' DAY);{code}
> It's better to decouple the window aggregate's allowed lateness from 
> `MinIdleStateRetentionTime`.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27364) Support DESC EXTENDED partition statement for partitioned table

2022-04-28 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529674#comment-17529674
 ] 

Nicholas Jiang commented on FLINK-27364:


[~lsy], IMO, [~lzljs3620320] commented on the pull request for FLINK-25177 that 
it should support all connectors. Should the connector support be added in that 
pull request instead?

> Support DESC EXTENDED partition statement for partitioned table
> ---
>
> Key: FLINK-27364
> URL: https://issues.apache.org/jira/browse/FLINK-27364
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
> Fix For: 1.16.0
>
>
> DESCRIBE [EXTENDED] [db_name.]table_name \{ [PARTITION partition_spec]  | 
> [col_name ] };



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27422) Do not create temporary pod template files for JobManager and TaskManager if not configured explicitly

2022-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27422:
---
Labels: Starter pull-request-available  (was: Starter)

> Do not create temporary pod template files for JobManager and TaskManager if 
> not configured explicitly
> --
>
> Key: FLINK-27422
> URL: https://issues.apache.org/jira/browse/FLINK-27422
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Yang Wang
>Priority: Major
>  Labels: Starter, pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> We do not need to create temporary pod template files for JobManager and 
> TaskManager if it is not configured explicitly via 
> {{.spec.JobManagerSpec.podTemplate}} or 
> {{{}.spec.TaskManagerSpec.podTemplate{}}}.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] SteNicholas opened a new pull request, #189: [FLINK-27422] Do not create temporary pod template files for JobManager and TaskManager if not configured explicitly

2022-04-28 Thread GitBox


SteNicholas opened a new pull request, #189:
URL: https://github.com/apache/flink-kubernetes-operator/pull/189

   Temporary pod template files for the JobManager and TaskManager don't need to 
be created if they are not configured explicitly via 
`.spec.JobManagerSpec.podTemplate` or `.spec.TaskManagerSpec.podTemplate`.
   
   **The brief change log**
   
   - `FlinkConfigBuilder#setPodTemplate` adds the check to create temporary pod 
template files for JobManager and TaskManager when it is configured explicitly 
via `.spec.JobManagerSpec.podTemplate` or `.spec.TaskManagerSpec.podTemplate`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] infoverload commented on a diff in pull request #517: [FLINK-24370] Addition of blog post describing features and usage of the Async Sink…

2022-04-28 Thread GitBox


infoverload commented on code in PR #517:
URL: https://github.com/apache/flink-web/pull/517#discussion_r861299671


##
_posts/2022-03-16-async-sink-base.md:
##
@@ -0,0 +1,160 @@
+---
+layout: post
+title: "The Generic Asynchronous Base Sink"
+date: 2022-04-05 16:00:00
+authors:
+- CrynetLogistics:
+  name: "Zichen Liu"
+excerpt: An overview of the new AsyncBaseSink and how to use it for building 
your own concrete sink
+---
+
+Flink sinks share a lot of similar behavior. All sinks batch records according 
to user defined buffering hints, sign requests, write them to the destination, 
retry unsuccessful or throttled requests, and participate in checkpointing.
+
+This is why for [Flink 
1.15](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink) 
we have decided to create the `AsyncSinkBase`, an abstract sink with a number 
of common functionalities extracted. Adding support for a new destination now 
only requires a lightweight shim that implements the specific interfaces of the 
destination using a client that supports async requests. 
+
+This common abstraction will reduce the effort required to maintain individual 
sinks that extend from this abstract sink, with bugfixes and improvements to 
the sink core benefiting all implementations that extend it. The design of 
AsyncSyncBase focuses on extensibility and a broad support of destinations. The 
core of the sink is kept generic and free of any connector-specific 
dependencies.
+
+The sink base is designed to participate in checkpointing to provide 
at-least-once semantics and can work directly with destinations that provide a 
client that supports asynchronous requests. Alternatively, concrete sink 
implementers may manage their own thread pool with a synchronous client.
+
+{% toc %}
+
+# Adding the base sink as a dependency
+In order to use the base sink, you will need to add the following dependency 
to your project. The example below follows the Maven syntax:

Review Comment:
   ```suggestion
   # Adding the base sink as a dependency
   
   In order to use the base sink, you will need to add the following dependency 
to your project. The example below follows the Maven syntax:
   ```
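For readers following along without the rendered post: the Maven dependency the post refers to looks roughly like the following. The artifact coordinates are an assumption based on where `AsyncSinkBase` lives (`flink-connector-base`); check the release notes for the exact version:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>1.15.0</version>
</dependency>
```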



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] infoverload commented on a diff in pull request #517: [FLINK-24370] Addition of blog post describing features and usage of the Async Sink…

2022-04-28 Thread GitBox


infoverload commented on code in PR #517:
URL: https://github.com/apache/flink-web/pull/517#discussion_r861298315


##
_posts/2022-03-16-async-sink-base.md:
##
@@ -0,0 +1,160 @@
+---
+layout: post
+title: "The Generic Asynchronous Base Sink"
+date: 2022-04-05 16:00:00
+authors:
+- CrynetLogistics:
+  name: "Zichen Liu"
+excerpt: An overview of the new AsyncBaseSink and how to use it for building 
your own concrete sink
+---
+
+Flink sinks share a lot of similar behavior. All sinks batch records according 
to user defined buffering hints, sign requests, write them to the destination, 
retry unsuccessful or throttled requests, and participate in checkpointing.
+
+This is why for [Flink 
1.15](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink) 
we have decided to create the `AsyncSinkBase`, an abstract sink with a number 
of common functionalities extracted. Adding support for a new destination now 
only requires a lightweight shim that implements the specific interfaces of the 
destination using a client that supports async requests. 

Review Comment:
   ```suggestion
   This is why for [Flink 
1.15](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink) 
we have decided to create the `AsyncSinkBase`, an abstract sink with a number 
of common functionalities extracted. 
   This is a base implementation for Async sinks, which you should use whenever 
you need to implement a sink that doesn't offer transactional capabilities 
(like most databases). Adding support for a new destination now only requires a 
lightweight shim that implements the specific interfaces of the destination 
using a client that supports async requests. 
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] infoverload commented on a diff in pull request #517: [FLINK-24370] Addition of blog post describing features and usage of the Async Sink…

2022-04-28 Thread GitBox


infoverload commented on code in PR #517:
URL: https://github.com/apache/flink-web/pull/517#discussion_r861295620


##
_posts/2022-03-16-async-sink-base.md:
##
@@ -0,0 +1,160 @@
+---
+layout: post
+title: "The Generic Asynchronous Base Sink"
+date: 2022-04-05 16:00:00
+authors:
+- CrynetLogistics:
+  name: "Zichen Liu"
+excerpt: An overview of the new AsyncBaseSink and how to use it for building 
your own concrete sink
+---
+
+Flink sinks share a lot of similar behavior. All sinks batch records according 
to user defined buffering hints, sign requests, write them to the destination, 
retry unsuccessful or throttled requests, and participate in checkpointing.
+
+This is why for [Flink 
1.15](https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink) 
we have decided to create the `AsyncSinkBase`, an abstract sink with a number 
of common functionalities extracted. Adding support for a new destination now 
only requires a lightweight shim that implements the specific interfaces of the 
destination using a client that supports async requests. 
+
+This common abstraction will reduce the effort required to maintain individual 
sinks that extend from this abstract sink, with bugfixes and improvements to 
the sink core benefiting all implementations that extend it. The design of 
AsyncSyncBase focuses on extensibility and a broad support of destinations. The 
core of the sink is kept generic and free of any connector-specific 
dependencies.

Review Comment:
   ```suggestion
   This common abstraction will reduce the effort required to maintain 
individual sinks that extend from this abstract sink, with bug fixes and 
improvements to the sink core benefiting all implementations that extend it. 
The design of `AsyncSinkBase` focuses on extensibility and a broad support of 
destinations. The core of the sink is kept generic and free of any 
connector-specific dependencies.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-web] infoverload commented on a diff in pull request #517: [FLINK-24370] Addition of blog post describing features and usage of the Async Sink…

2022-04-28 Thread GitBox


infoverload commented on code in PR #517:
URL: https://github.com/apache/flink-web/pull/517#discussion_r861293470


##
_posts/2022-03-16-async-sink-base.md:
##
@@ -0,0 +1,160 @@
+---
+layout: post
+title: "The Generic Asynchronous Base Sink"
+date: 2022-04-05 16:00:00
+authors:
+- CrynetLogistics:
+  name: "Zichen Liu"
+excerpt: An overview of the new AsyncBaseSink and how to use it for building 
your own concrete sink
+---
+
+Flink sinks share a lot of similar behavior. All sinks batch records according 
to user defined buffering hints, sign requests, write them to the destination, 
retry unsuccessful or throttled requests, and participate in checkpointing.

Review Comment:
   ```suggestion
   Flink sinks share a lot of similar behavior. All sinks batch records 
according to user-defined buffering hints, sign requests, write them to the 
destination, retry unsuccessful or throttled requests, and participate in 
checkpointing.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-24491) ExecutionGraphInfo may not be archived when the dispatcher terminates

2022-04-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529392#comment-17529392
 ] 

Matthias Pohl edited comment on FLINK-24491 at 4/28/22 8:06 PM:


1.14: f9d2c1c0d3aa7e72e95f1c75bcf7c77c1fceac22

The 1.14 backport required 1c492ed97fe2876041804b944c6cb370430b3519 and 
e0fb11741a000478852d51fa9f2823208e6717c6 to be added as backported commits as 
well


was (Author: mapohl):
1.14: f9d2c1c0d3aa7e72e95f1c75bcf7c77c1fceac22

The 1.14 backport required 1c492ed97fe2876041804b944c6cb370430b3519 and 
e0fb11741a000478852d51fa9f2823208e6717c6

> ExecutionGraphInfo may not be archived when the dispatcher terminates
> -
>
> Key: FLINK-24491
> URL: https://issues.apache.org/jira/browse/FLINK-24491
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.13.6, 1.14.4
>Reporter: Zhilong Hong
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.14.5, 1.15.1
>
>
> When a job finishes, its JobManagerRunnerResult will be processed in the 
> callback of {{Dispatcher#runJob}}. In the callback, ExecutionGraphInfo will 
> be archived by HistoryServerArchivist asynchronously. However, the 
> CompletableFuture of the archiving is ignored. The job may be removed before 
> the archiving is finished. For the batch job running in the 
> per-job/application mode, the dispatcher will terminate itself once the job 
> is finished. In this case, ExecutionGraphInfo may not be archived when the 
> dispatcher terminates.
> If the ExecutionGraphInfo is lost, users are not able to know whether the 
> batch job is finished normally or not. They have to refer to the logs for the 
> result.
> The session mode is not affected, since the dispatcher won't terminate itself 
> once the job is finished. The HistoryServerArchivist gets enough time to 
> archive the ExecutionGraphInfo.
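The fix described above boils down to not discarding the archiving future. A minimal, self-contained sketch of the idea with a plain `CompletableFuture` (illustrative only; it does not use Flink's actual Dispatcher or HistoryServerArchivist APIs, and the names below are hypothetical):

```java
import java.util.concurrent.CompletableFuture;

public class ArchiveBeforeShutdown {

    // Hypothetical stand-in for the asynchronous archiving call: runs on a
    // worker thread and completes once the archive has been written.
    static CompletableFuture<String> archiveAsync(String jobId) {
        return CompletableFuture.supplyAsync(() -> "archived:" + jobId);
    }

    public static void main(String[] args) {
        // Buggy pattern: calling archiveAsync("job-1") and ignoring the future
        // lets shutdown race with the archiving.
        // Fixed pattern: keep the future and wait for it before terminating.
        CompletableFuture<String> archiveFuture = archiveAsync("job-1");
        String result = archiveFuture.join(); // blocks until archiving is done
        System.out.println(result);
    }
}
```

Joining (or chaining the shutdown onto) the archiving future is what guarantees the ExecutionGraphInfo is persisted before the dispatcher terminates.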



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-24491) ExecutionGraphInfo may not be archived when the dispatcher terminates

2022-04-28 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529392#comment-17529392
 ] 

Matthias Pohl edited comment on FLINK-24491 at 4/28/22 8:05 PM:


1.14: f9d2c1c0d3aa7e72e95f1c75bcf7c77c1fceac22

The 1.14 backport required 1c492ed97fe2876041804b944c6cb370430b3519 and 
e0fb11741a000478852d51fa9f2823208e6717c6


was (Author: mapohl):
1.14: f9d2c1c0d3aa7e72e95f1c75bcf7c77c1fceac22

> ExecutionGraphInfo may not be archived when the dispatcher terminates
> -
>
> Key: FLINK-24491
> URL: https://issues.apache.org/jira/browse/FLINK-24491
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0, 1.13.6, 1.14.4
>Reporter: Zhilong Hong
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.14.5, 1.15.1
>
>
> When a job finishes, its JobManagerRunnerResult will be processed in the 
> callback of {{Dispatcher#runJob}}. In the callback, ExecutionGraphInfo will 
> be archived by HistoryServerArchivist asynchronously. However, the 
> CompletableFuture of the archiving is ignored. The job may be removed before 
> the archiving is finished. For the batch job running in the 
> per-job/application mode, the dispatcher will terminate itself once the job 
> is finished. In this case, ExecutionGraphInfo may not be archived when the 
> dispatcher terminates.
> If the ExecutionGraphInfo is lost, users are not able to know whether the 
> batch job is finished normally or not. They have to refer to the logs for the 
> result.
> The session mode is not affected, since the dispatcher won't terminate itself 
> once the job is finished. The HistoryServerArchivist gets enough time to 
> archive the ExecutionGraphInfo.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (FLINK-27382) Make Job mode wait with cluster shutdown until the cleanup is done

2022-04-28 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-27382.
---
Fix Version/s: 1.16.0
   1.15.1
   Resolution: Fixed

master: 3690c3f44573c753bc5e9a397a62cec683dc
1.15: b3d3c052c36da1f1e696a4344d5c360d049b96f1

> Make Job mode wait with cluster shutdown until the cleanup is done
> --
>
> Key: FLINK-27382
> URL: https://issues.apache.org/jira/browse/FLINK-27382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0, 1.15.1
>
>
> The shutdown is triggered as soon as the job terminates globally without 
> waiting for any cleanup. This behavior was ok'ish in 1.14- because we didn't 
> bother so much about the cleanup. In 1.15+ we might want to wait for the 
> cleanup to finish.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] XComp merged pull request #19603: [FLINK-27382][BP-1.15][runtime] Moves cluster shutdown to when the job cleanup is done in job mode

2022-04-28 Thread GitBox


XComp merged PR #19603:
URL: https://github.com/apache/flink/pull/19603


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp merged pull request #19567: [FLINK-27382][runtime] Moves cluster shutdown to when the job cleanup is done in job mode

2022-04-28 Thread GitBox


XComp merged PR #19567:
URL: https://github.com/apache/flink/pull/19567


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] afedulov commented on a diff in pull request #19405: [FLINK-27066] Reintroduce e2e tests in ES as Java tests.

2022-04-28 Thread GitBox


afedulov commented on code in PR #19405:
URL: https://github.com/apache/flink/pull/19405#discussion_r861131896


##
flink-end-to-end-tests/flink-end-to-end-tests-common-elasticsearch/src/main/java/org/apache/flink/streaming/tests/ElasticsearchTestEmitter.java:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.connector.elasticsearch.sink.ElasticsearchEmitter;
+import org.apache.flink.connector.elasticsearch.sink.RequestIndexer;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** Test emitter for performing ElasticSearch indexing requests. */
+public interface ElasticsearchTestEmitter extends 
ElasticsearchEmitter> {
+
+@Override
+default void emit(
+KeyValue element, SinkWriter.Context context, 
RequestIndexer indexer) {
+addUpsertRequest(indexer, element);

Review Comment:
   Thanks for the clarification. Please let me know if the changes from the 
latest commit are aligned with your idea.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-27129) Hardcoded namespace in FlinkDeployment manifests may fail to deploy

2022-04-28 Thread Ted Chang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529612#comment-17529612
 ] 

Ted Chang commented on FLINK-27129:
---

[~wangyang0918] Please review the PR 
[https://github.com/apache/flink-kubernetes-operator/pull/188]
Thanks

> Hardcoded namespace in FlinkDeployment manifests may fail to deploy
> ---
>
> Key: FLINK-27129
> URL: https://issues.apache.org/jira/browse/FLINK-27129
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: 0.1.0
> Environment: Client Version: version.Info\{Major:"1", Minor:"22", 
> GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", 
> GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", 
> Compiler:"gc", Platform:"darwin/amd64"}
> Server Version: version.Info\{Major:"1", Minor:"22", 
> GitVersion:"v1.22.8+IKS", 
> GitCommit:"0d0ff1cc1dbe76cf96c33e7510b25c283ac29943", GitTreeState:"clean", 
> BuildDate:"2022-03-17T14:47:39Z", GoVersion:"go1.16.15", Compiler:"gc", 
> Platform:"linux/amd64"}
>Reporter: Ted Chang
>Assignee: Ted Chang
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> When the Flink operator is installed to a non-default namespace, these 
> FlinkDeployment manifests [1] may fail to deploy with the following error in 
> the Flink operator log [2]. We may want to remove the `namespace: default` 
> from these manifests and let users specify a different one with the --namespace 
> flag in kubectl.
> [1][https://github.com/apache/flink-kubernetes-operator/tree/main/examples]
> [2]
> {code:java}
> 2022-04-08 00:42:02,803 o.a.f.k.o.c.FlinkDeploymentController 
> [ERROR][default/basic-example] Flink Deployment failed
> org.apache.flink.kubernetes.operator.exception.DeploymentFailedException: 
> pods "basic-example-5cc7894895-" is forbidden: error looking up service 
> account default/flink: serviceaccount "flink" not found
>     at 
> org.apache.flink.kubernetes.operator.observer.BaseObserver.checkFailedCreate(BaseObserver.java:135)
>     at 
> org.apache.flink.kubernetes.operator.observer.BaseObserver.observeJmDeployment(BaseObserver.java:102)
>     at 
> org.apache.flink.kubernetes.operator.observer.JobObserver.observe(JobObserver.java:51)
>     at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:122)
>     at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:56)
>     at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)
>     at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)
>     at 
> io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)
>     at 
> io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)
>     at 
> io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
> Source)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source)
>     at java.base/java.lang.Thread.run(Unknown Source) {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27129) Hardcoded namespace in FlinkDeployment manifests may fail to deploy

2022-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27129:
---
Labels: newbie pull-request-available  (was: newbie)

> Hardcoded namespace in FlinkDeployment manifests may fail to deploy
> ---
>
> Key: FLINK-27129
> URL: https://issues.apache.org/jira/browse/FLINK-27129
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: 0.1.0
> Environment: Client Version: version.Info\{Major:"1", Minor:"22", 
> GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", 
> GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", 
> Compiler:"gc", Platform:"darwin/amd64"}
> Server Version: version.Info\{Major:"1", Minor:"22", 
> GitVersion:"v1.22.8+IKS", 
> GitCommit:"0d0ff1cc1dbe76cf96c33e7510b25c283ac29943", GitTreeState:"clean", 
> BuildDate:"2022-03-17T14:47:39Z", GoVersion:"go1.16.15", Compiler:"gc", 
> Platform:"linux/amd64"}
>Reporter: Ted Chang
>Assignee: Ted Chang
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: kubernetes-operator-1.0.0
>
>
> When the Flink operator is installed in a non-default namespace, these 
> FlinkDeployment manifests [1] may fail to deploy with the following error in 
> the Flink operator log [2]. We may want to remove the `namespace: default` 
> from these manifests and let users specify a different one with the 
> --namespace flag in kubectl.
> [1][https://github.com/apache/flink-kubernetes-operator/tree/main/examples]
> [2]
> {code:java}
> 2022-04-08 00:42:02,803 o.a.f.k.o.c.FlinkDeploymentController 
> [ERROR][default/basic-example] Flink Deployment failed
> org.apache.flink.kubernetes.operator.exception.DeploymentFailedException: 
> pods "basic-example-5cc7894895-" is forbidden: error looking up service 
> account default/flink: serviceaccount "flink" not found
>     at 
> org.apache.flink.kubernetes.operator.observer.BaseObserver.checkFailedCreate(BaseObserver.java:135)
>     at 
> org.apache.flink.kubernetes.operator.observer.BaseObserver.observeJmDeployment(BaseObserver.java:102)
>     at 
> org.apache.flink.kubernetes.operator.observer.JobObserver.observe(JobObserver.java:51)
>     at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:122)
>     at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:56)
>     at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)
>     at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)
>     at 
> io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)
>     at 
> io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)
>     at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)
>     at 
> io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
> Source)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source)
>     at java.base/java.lang.Thread.run(Unknown Source) {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] tedhtchang opened a new pull request, #188: [FLINK-27129][docs] Hardcoded namespace in FlinkDeployment manifests may fail to deploy

2022-04-28 Thread GitBox


tedhtchang opened a new pull request, #188:
URL: https://github.com/apache/flink-kubernetes-operator/pull/188

   Remove the hardcoded default namespace from the FlinkDeployment examples.
   Document the steps to create a FlinkDeployment in other namespaces when 
watchNamespaces is not used.
   
   Signed-off-by: ted chang 
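For illustration, the documented steps might look like the following (the namespace, cluster role, and manifest path are hypothetical examples, not commands from this PR):

```shell
# Create the target namespace and the "flink" service account the JobManager
# pods expect (this is what was missing in the reported error).
kubectl create namespace flink-apps
kubectl create serviceaccount flink --namespace flink-apps
kubectl create rolebinding flink-role-binding \
  --clusterrole=edit --serviceaccount=flink-apps:flink --namespace flink-apps

# Apply an example manifest (with its hardcoded "namespace: default" removed)
# into the chosen namespace.
kubectl apply -f examples/basic.yaml --namespace flink-apps
```

These commands assume cluster access and are only a sketch of the workflow the documentation change describes.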


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-25511) Pre-emptively uploaded changelog not discarded if materialized before checkpoint

2022-04-28 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan resolved FLINK-25511.
---
Resolution: Fixed

Merged into master as 
5b58ef66a8835bf22db23e1d9836a1e9f4d94045..c5430e2e5d4eeb0aba14ce3ea8401747afe0182d

> Pre-emptively uploaded changelog not discarded if materialized before 
> checkpoint
> ---
>
> Key: FLINK-25511
> URL: https://issues.apache.org/jira/browse/FLINK-25511
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.15.0
>Reporter: Yuan Mei
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> This is related to Garbage Collection. 
> The problem is more severe than FLINK-25512 because there is leftover state 
> after each materialization truncation (if pre-emptive uploads are enabled).
> However, the probability can still be low because backends are grouped into a 
> single file (so it only happens if ALL the backends on a TM materialize 
> semi-simultaneously).



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-23143) Support state migration

2022-04-28 Thread Roman Khachatryan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529581#comment-17529581
 ] 

Roman Khachatryan commented on FLINK-23143:
---

Do you mind sharing your POC?

IIRC, getOrCreateKeyedState of a nested backend is not called by the changelog 
backend.

 
My concern was that calling createInternalState() twice creates two state 
objects (e.g. HeapValueState).

Another concern is that besides serializers, the state itself also needs to 
be migrated (in the case of RocksDB). So maybe it makes sense to pull 
HeapKeyedStateBackend.tryRegisterStateTable and 
RocksDBKeyedStateBackend.tryRegisterKvStateInformation into 
AbstractKeyedStateBackend (or some interface) and call it directly from 
ChangelogKeyedStateBackend (the delegation approach wouldn't be needed in this 
case). Or extract only the migration part, e.g. upgrade(StateDescriptor).

Besides that, updating the TTL policy should also be taken into account.

Do you mind creating a short design doc describing what needs to be upgraded, 
and one or more options of how to do it?

 

> Support state migration
> ---
>
> Key: FLINK-23143
> URL: https://issues.apache.org/jira/browse/FLINK-23143
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Priority: Minor
> Fix For: 1.16.0
>
>
> ChangelogKeyedStateBackend.getOrCreateKeyedState is currently used during 
> recovery; on 1st user access, it doesn't update metadata nor migrate state 
> (as opposed to other backends).
>  
> The proposed solution is to
>  # wrap serializers (and maybe other objects) in getOrCreateKeyedState
>  # store wrapping objects in a new map keyed by state name
>  # pass wrapped objects to delegatedBackend.createInternalState
>  # on 1st user access, lookup wrapper and upgrade its wrapped serializer
> This should be done for both KV/PQ states.
>  
> See also [https://github.com/apache/flink/pull/15420#discussion_r656934791]
>  
> cc: [~yunta]
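The wrap-and-upgrade steps listed in the description could be sketched roughly as follows. All class and method names here are hypothetical placeholders, not actual Flink APIs; `Object` stands in for `TypeSerializer`:

```java
// Sketch: the changelog backend keeps one mutable wrapper per state name,
// passes the wrapper to the delegated backend, and upgrades the wrapped
// serializer in place on first user access.
import java.util.HashMap;
import java.util.Map;

class SerializerWrapperSketch {

    /** Mutable holder handed to the delegated backend instead of the raw serializer. */
    static final class WrappingSerializer<T> {
        private Object delegate; // stands in for TypeSerializer<T>
        WrappingSerializer(Object initial) { this.delegate = initial; }
        void upgrade(Object newSerializer) { this.delegate = newSerializer; }
        Object current() { return delegate; }
    }

    // Step 2 of the proposal: wrappers stored in a map keyed by state name.
    private final Map<String, WrappingSerializer<?>> wrappers = new HashMap<>();

    /** Step 1/3: wrap the restored serializer and reuse the wrapper per state name. */
    <T> WrappingSerializer<T> wrap(String stateName, Object restoredSerializer) {
        @SuppressWarnings("unchecked")
        WrappingSerializer<T> w = (WrappingSerializer<T>)
                wrappers.computeIfAbsent(stateName, n -> new WrappingSerializer<>(restoredSerializer));
        return w;
    }

    /** Step 4: on first user access, look up the wrapper and swap in the new serializer. */
    void upgradeOnFirstAccess(String stateName, Object newSerializer) {
        WrappingSerializer<?> w = wrappers.get(stateName);
        if (w != null) {
            w.upgrade(newSerializer);
        }
    }

    public static void main(String[] args) {
        SerializerWrapperSketch backend = new SerializerWrapperSketch();
        WrappingSerializer<String> w = backend.wrap("my-state", "restored-serializer");
        backend.upgradeOnFirstAccess("my-state", "upgraded-serializer");
        // The state object created earlier observes the upgrade through the wrapper.
        System.out.println(w.current()); // prints "upgraded-serializer"
    }
}
```

The point of the indirection is that the state object created by `createInternalState` never holds the old serializer directly, so no second state object is created on migration.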



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] rkhachatryan merged pull request #19550: [FLINK-25511][state/changelog] Discard pre-emptively uploaded state changes not included into any checkpoint

2022-04-28 Thread GitBox


rkhachatryan merged PR #19550:
URL: https://github.com/apache/flink/pull/19550


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on pull request #19603: [FLINK-27382][BP-1.15][runtime] Moves cluster shutdown to when the job cleanup is done in job mode

2022-04-28 Thread GitBox


XComp commented on PR #19603:
URL: https://github.com/apache/flink/pull/19603#issuecomment-1112491432

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] afedulov commented on a diff in pull request #19405: [FLINK-27066] Reintroduce e2e tests in ES as Java tests.

2022-04-28 Thread GitBox


afedulov commented on code in PR #19405:
URL: https://github.com/apache/flink/pull/19405#discussion_r861131896


##
flink-end-to-end-tests/flink-end-to-end-tests-common-elasticsearch/src/main/java/org/apache/flink/streaming/tests/ElasticsearchTestEmitter.java:
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.streaming.tests;
+
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.connector.elasticsearch.sink.ElasticsearchEmitter;
+import org.apache.flink.connector.elasticsearch.sink.RequestIndexer;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** Test emitter for performing ElasticSearch indexing requests. */
+public interface ElasticsearchTestEmitter extends 
ElasticsearchEmitter> {
+
+@Override
+default void emit(
+KeyValue element, SinkWriter.Context context, 
RequestIndexer indexer) {
+addUpsertRequest(indexer, element);

Review Comment:
   Thanks for the clarification. Please let me know if the changes from the 
topmost commit are aligned with your idea.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] empcl commented on pull request #19573: [FLINK-27384] solve the problem that the latest data cannot be read under the creat…

2022-04-28 Thread GitBox


empcl commented on PR #19573:
URL: https://github.com/apache/flink/pull/19573#issuecomment-1112448067

   @luoyuxia Regarding adding test cases, I'm sorry, but I can't verify this 
change with HivePartitionFetcherTest, because its test cases cannot actually 
obtain the specific partition information (which is why `assertEquals(0, 
fetcherContext.getComparablePartitionValueList().size());` passes). In 
addition, my modification operates on a known partition while the program is 
running, and the current test mocks don't cover that scenario. So I'm unable 
to provide a relevant test case, but I have verified the correctness of the 
code on a distributed cluster.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] empcl commented on a diff in pull request #19573: [FLINK-27384] solve the problem that the latest data cannot be read under the creat…

2022-04-28 Thread GitBox


empcl commented on code in PR #19573:
URL: https://github.com/apache/flink/pull/19573#discussion_r861119613


##
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HivePartitionFetcherContextBase.java:
##
@@ -139,16 +138,9 @@ public List getComparablePartitionValueList() throws E
 tablePath.getDatabaseName(),
 tablePath.getObjectName(),
 Short.MAX_VALUE);
-List newNames =
-partitionNames.stream()
-.filter(n -> !partValuesToCreateTime.containsKey(extractPartitionValues(n)))
-.collect(Collectors.toList());
 List newPartitions =
 metaStoreClient.getPartitionsByNames(
-tablePath.getDatabaseName(), tablePath.getObjectName(), newNames);
+tablePath.getDatabaseName(), tablePath.getObjectName(), partitionNames);

Review Comment:
   @luoyuxia Hi, this is a great idea. However, getComparablePartitionValueList() 
returns all partitions in the current directory; comparing and selecting the 
required partition information is done in the method's outer layer. In 
addition, the method should not return only the required partition information 
directly, because in the bounded-stream case all partition information is 
obtained through this method.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol commented on pull request #19571: [FLINK-27394] Integrate the Flink Elasticsearch documentation in the Flink documentation

2022-04-28 Thread GitBox


zentol commented on PR #19571:
URL: https://github.com/apache/flink/pull/19571#issuecomment-1112439000

   Guess Go is indeed missing in the CI images; will prepare an update.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27429) Implement Vertica JDBC Dialect

2022-04-28 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul updated FLINK-27429:

Affects Version/s: 1.16.0

> Implement Vertica JDBC Dialect
> --
>
> Key: FLINK-27429
> URL: https://issues.apache.org/jira/browse/FLINK-27429
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Affects Versions: 1.16.0
>Reporter: Jasmin Redzepovic
>Assignee: Jasmin Redzepovic
>Priority: Minor
>
> In order to use Vertica database as a JDBC source or sink, corresponding 
> dialect has to be implemented.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27429) Implement Vertica JDBC Dialect

2022-04-28 Thread Fabian Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Paul reassigned FLINK-27429:
---

Assignee: Jasmin Redzepovic

> Implement Vertica JDBC Dialect
> --
>
> Key: FLINK-27429
> URL: https://issues.apache.org/jira/browse/FLINK-27429
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Jasmin Redzepovic
>Assignee: Jasmin Redzepovic
>Priority: Minor
>
> In order to use Vertica database as a JDBC source or sink, corresponding 
> dialect has to be implemented.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-26298) [JUnit5 Migration] Module: flink-rpc-core

2022-04-28 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-26298.

Resolution: Fixed

master: c9b5e2f5d5351aa80192881c7af4d13987ed2e04

> [JUnit5 Migration] Module: flink-rpc-core
> -
>
> Key: FLINK-26298
> URL: https://issues.apache.org/jira/browse/FLINK-26298
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27416) FLIP-225: Implement standalone mode support in the kubernetes operator

2022-04-28 Thread Usamah Jassat (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Usamah Jassat updated FLINK-27416:
--
Description: 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-225%3A+Implement+standalone+mode+support+in+the+kubernetes+operator]
 (Under Discussion)  (was: 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-225%3A+Implement+standalone+mode+support+in+the+kubernetes+operator)

>  FLIP-225: Implement standalone mode support in the kubernetes operator 
> 
>
> Key: FLINK-27416
> URL: https://issues.apache.org/jira/browse/FLINK-27416
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Usamah Jassat
>Assignee: Usamah Jassat
>Priority: Major
>
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-225%3A+Implement+standalone+mode+support+in+the+kubernetes+operator]
>  (Under Discussion)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] zentol merged pull request #18877: [FLINK-26298][rpc][tests] Migrate tests to JUnit5

2022-04-28 Thread GitBox


zentol merged PR #18877:
URL: https://github.com/apache/flink/pull/18877


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27443) Add standalone mode parameters and decorators

2022-04-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-27443:
---
Labels: pull-request-available  (was: )

> Add standalone mode parameters and decorators 
> --
>
> Key: FLINK-27443
> URL: https://issues.apache.org/jira/browse/FLINK-27443
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Usamah Jassat
>Priority: Major
>  Labels: pull-request-available
>
> Add parameters and decorators  to create JM and TM deployments in standalone 
> mode



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink-kubernetes-operator] usamj opened a new pull request, #187: [FLINK-27443] Create standalone mode parameters and decorators for JM and TMs

2022-04-28 Thread GitBox


usamj opened a new pull request, #187:
URL: https://github.com/apache/flink-kubernetes-operator/pull/187

   This PR creates decorators and parameters which contain information needed 
to create JM and TM deployments in standalone mode. These will then be used by 
the Standalone ClusterDescriptor to actually create the resources needed for a 
standalone Flink Cluster. 
   
   The `flink-kubernetes-standalone-cluster` module is introduced because the 
parameters and decorators parallel their counterparts in the main Flink 
package, which uses a different, shaded Fabric8 version. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] syhily commented on a diff in pull request #19430: [BK-1.15][FLINK-26931][Connector/pulsar] Make the producer name and consumer name unique in Pulsar

2022-04-28 Thread GitBox


syhily commented on code in PR #19430:
URL: https://github.com/apache/flink/pull/19430#discussion_r861034376


##
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/sink/config/PulsarSinkConfigUtils.java:
##
@@ -70,7 +71,10 @@ public static  ProducerBuilder createProducerBuilder(
 PulsarClient client, Schema schema, SinkConfiguration 
configuration) {
 ProducerBuilder builder = client.newProducer(schema);
 
-configuration.useOption(PULSAR_PRODUCER_NAME, builder::producerName);
+configuration.useOption(

Review Comment:
   This wouldn't be a problem, because we guarantee consistency by using 
transactions.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27448) Enable standalone mode for old Flink versions

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27448:
-

 Summary: Enable standalone mode for old Flink versions
 Key: FLINK-27448
 URL: https://issues.apache.org/jira/browse/FLINK-27448
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] syhily commented on pull request #19430: [BK-1.15][FLINK-26931][Connector/pulsar] Make the producer name and consumer name unique in Pulsar

2022-04-28 Thread GitBox


syhily commented on PR #19430:
URL: https://github.com/apache/flink/pull/19430#issuecomment-1112349280

   > Since this PR fixes an actual bug, I would expect to add a test case. Can 
you add one so that the issue does not arise again?
   
   I think this could be hard to test, because the bug was simply reusing the 
same producer name, which is not allowed in Pulsar. We just need to make sure 
it is unique.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27447) Implement last-state with ZooKeeper HA

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27447:
-

 Summary: Implement last-state with ZooKeeper HA
 Key: FLINK-27447
 URL: https://issues.apache.org/jira/browse/FLINK-27447
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27445) Create Flink Standalone Service

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27445:
-

 Summary: Create Flink Standalone Service
 Key: FLINK-27445
 URL: https://issues.apache.org/jira/browse/FLINK-27445
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27446) Introduce Standalone Mode

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27446:
-

 Summary: Introduce Standalone Mode
 Key: FLINK-27446
 URL: https://issues.apache.org/jira/browse/FLINK-27446
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27444) Create standalone mode ClusterDescriptor

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27444:
-

 Summary: Create standalone mode ClusterDescriptor
 Key: FLINK-27444
 URL: https://issues.apache.org/jira/browse/FLINK-27444
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[GitHub] [flink] syhily commented on a diff in pull request #19430: [BK-1.15][FLINK-26931][Connector/pulsar] Make the producer name and consumer name unique in Pulsar

2022-04-28 Thread GitBox


syhily commented on code in PR #19430:
URL: https://github.com/apache/flink/pull/19430#discussion_r861027941


##
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/common/config/PulsarClientFactory.java:
##
@@ -153,7 +153,7 @@ public static PulsarClient createClient(PulsarConfiguration 
configuration) {
 
 /**
  * PulsarAdmin shares almost the same configuration with PulsarClient, but 
we separate this
- * create method for directly creating it.
+ * creating method for directly use it.

Review Comment:
   OK.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] syhily commented on a diff in pull request #19433: [BK-1.14][FLINK-26645][Connector/pulsar] Support subscribe only one topic partition/

2022-04-28 Thread GitBox


syhily commented on code in PR #19433:
URL: https://github.com/apache/flink/pull/19433#discussion_r861027392


##
flink-connectors/flink-connector-pulsar/src/main/java/org/apache/flink/connector/pulsar/source/enumerator/subscriber/impl/BasePulsarSubscriber.java:
##
@@ -63,18 +63,15 @@ protected List toTopicPartitions(
 .map(range -> new TopicPartition(metadata.getName(), -1, range))
 .collect(toList());
 } else {
-return IntStream.range(0, metadata.getPartitionSize())
-.boxed()
-.flatMap(
-partitionId ->
-ranges.stream()
-.map(range -> new TopicPartition(metadata.getName(), partitionId, range)))
-.collect(toList());
+List partitions = new ArrayList<>();

Review Comment:
   This would make the code more readable because the lambda has been formatted 
in an ugly style.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] syhily commented on a diff in pull request #19433: [BK-1.14][FLINK-26645][Connector/pulsar] Support subscribe only one topic partition/

2022-04-28 Thread GitBox


syhily commented on code in PR #19433:
URL: https://github.com/apache/flink/pull/19433#discussion_r861026530


##
flink-connectors/flink-connector-pulsar/src/test/resources/containers/txnStandalone.conf:
##
@@ -14,8 +14,7 @@
 
 ### --- General broker settings --- ###
 
-# Zookeeper quorum connection string
-zookeeperServers=
+# Zookeeper quorum connection stringzookeeperServers=

Review Comment:
   Yep. This should be a mistake on my side.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-27443) Add standalone mode parameters and decorators

2022-04-28 Thread Usamah Jassat (Jira)
Usamah Jassat created FLINK-27443:
-

 Summary: Add standalone mode parameters and decorators 
 Key: FLINK-27443
 URL: https://issues.apache.org/jira/browse/FLINK-27443
 Project: Flink
  Issue Type: Sub-task
Reporter: Usamah Jassat


Add parameters and decorators  to create JM and TM deployments in standalone 
mode



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27399) Pulsar connector didn't set start consuming position correctly

2022-04-28 Thread Yufan Sheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufan Sheng updated FLINK-27399:

Description: 
The Pulsar connector didn't use the consuming position from the checkpoint. 
It only committed the position to Pulsar after the checkpoint completed, and 
started consuming directly from the offset stored on the Pulsar subscription.

This can cause tests to fail in some situations: the start cursor (the 
position on Pulsar) would be reset to the wrong position, so the results 
didn't match the desired records.

This issue fixes

# FLINK-23944
# FLINK-25884
# FLINK-26177
# FLINK-26237
# FLINK-26721

Although the test failure messages vary, they all share the same cause.

h2. How to fix this issue:

SourceEvent protocol for limiting the {{Consumer.seek}} operation.

The Pulsar source needs to seek to the desired consuming position when 
bootstrapping. The seek action can't be executed concurrently, so we have 
designed a [new 
mechanism|https://github.com/apache/flink/pull/17119#pullrequestreview-746035072].

  was:
The Pulsar connector didn't use the consuming position from the checkpoint. 
They just commit the position to Pulsar after the checkpoint is complete. And 
the connector start to consume message from Pulsar directly by the offset 
stored on the Pulsar subscription.

This causes the test could be failed in some situations. The start cursor 
(position on Pulsar) would be reset to the wrong position which causes the 
results didn't match the desired records.

This issue fixes

# FLINK-23944
# FLINK-25884
# FLINK-26177
# FLINK-26237
# FLINK-26721

Although the test failure message could be various. They are truly sharing the 
same cause.


> Pulsar connector didn't set start consuming position correctly
> --
>
> Key: FLINK-27399
> URL: https://issues.apache.org/jira/browse/FLINK-27399
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0, 1.14.4, 1.16.0
>Reporter: Yufan Sheng
>Assignee: Yufan Sheng
>Priority: Major
>
> The Pulsar connector didn't use the consuming position from the checkpoint. 
> They just commit the position to Pulsar after the checkpoint is complete. And 
> the connector start to consume message from Pulsar directly by the offset 
> stored on the Pulsar subscription.
> This causes the test could be failed in some situations. The start cursor 
> (position on Pulsar) would be reset to the wrong position which causes the 
> results didn't match the desired records.
> This issue fixes
> # FLINK-23944
> # FLINK-25884
> # FLINK-26177
> # FLINK-26237
> # FLINK-26721
> Although the test failure message could be various. They are truly sharing 
> the same cause.
> h2. How to fix this issue:
> SourceEvent protocol for limiting the {{Consumer.seek}} operation.
> The Pulsar source needs to seek the desired consuming position when 
> bootstrapping. The seeking action couldn’t be executed concurrently. We have 
> designed a [new 
> mechanism|https://github.com/apache/flink/pull/17119#pullrequestreview-746035072].





[GitHub] [flink] afedulov commented on a diff in pull request #19584: release notes for the 1.15 release

2022-04-28 Thread GitBox


afedulov commented on code in PR #19584:
URL: https://github.com/apache/flink/pull/19584#discussion_r860569627


##
docs/content/release-notes/flink-1.15.md:
##
@@ -0,0 +1,379 @@
+---
+title: "Release Notes - Flink 1.15"
+---
+
+
+# Release notes - Flink 1.15
+
+These release notes discuss important aspects, such as configuration, 
behavior, 
+or dependencies, that changed between Flink 1.14 and Flink 1.15. Please read 
these 
+notes carefully if you are planning to upgrade your Flink version to 1.15.
+
+## Summary of changed dependency names
+
+There are three changes in Flink 1.15 that require updating dependency names 
when 
+upgrading from earlier versions:
+
+* The newly introduced module `flink-table-planner-loader` (FLINK-25128) could 
+  replace the legacy `flink-table-planner_2.12`. As a consequence, 
`flink-table-uber` 
+  has been split into `flink-table-api-java-uber`, 
`flink-table-planner(-loader)`, 
+  and `flink-table-runtime`. Besides, the artifactId of `flink-sql-client` has no 
Scala 
+  suffix (_2.11 / _2.12) anymore, and Scala users need to explicitly add a 
+  dependency to `flink-table-api-scala` or `flink-table-api-scala-bridge`.
+* Due to the efforts of removing Scala dependency from `flink-table-runtime` 
+  (FLINK-25114), the artifactId of `flink-table-runtime` has no Scala version 
+  suffix (\_2.11 / \_2.12) any more.
+* The FileSystem connector is no longer a part of the `flink-table-uber` 
module 
+  and changed to an optional dedicated `flink-connector-files` module 
+  (FLINK-24687). Besides, the artifactId of `flink-orc`, `flink-orc-nohive`, 
+  `flink-parquet` has no Scala suffix (\_2.11 / \_2.12) anymore.
+
+## JDK Upgrade
+
+The support of Java 8 is now deprecated and will be removed in a future 
release 
+([FLINK-25247](https://issues.apache.org/jira/browse/FLINK-25247)). We 
recommend 
+all users to migrate to Java 11.
+
+The default Java version in the Flink docker images is now Java 11 
+([FLINK-25251](https://issues.apache.org/jira/browse/FLINK-25251)). 
+There are images built with Java 8, tagged with “java8”. 
+
+## DataStream API
+
+### [TypeSerializer version mismatch during eagerly restore 
(FLINK-24858)](https://issues.apache.org/jira/browse/FLINK-24858)
+
+This ticket resolves an issue where, during state migration between Flink 
versions,
+the wrong serializer might have been picked.
+
+When upgrading from Flink 1.13.x please immediately choose 1.14.3 or higher 
and 
+skip 1.14.0, 1.14.1, 1.14.2 because all are affected and it might prevent your 
+job from starting.
+
+## Table API & SQL
+
+### [Make the legacy behavior disabled by default 
(FLINK-26551)](https://issues.apache.org/jira/browse/FLINK-26551)
+
+The legacy casting behavior has been disabled by default. This might have 
+implications on corner cases (string parsing, numeric overflows, to string 
+representation, varchar/binary precisions). Set 
+`table.exec.legacy-cast-behaviour=ENABLED` to restore the old behavior.
+
+### [Enforce CHAR/VARCHAR precision when outputting to a Sink 
(FLINK-24753)](https://issues.apache.org/jira/browse/FLINK-24753)
+
+`CHAR`/`VARCHAR` lengths are enforced (trimmed/padded) by default now before 
entering
+the table sink.
+
+### [Support the new type inference in Scala Table API table functions 
(FLINK-26518)](https://issues.apache.org/jira/browse/FLINK-26518)
+
+Table functions that are called using Scala implicit conversions have been 
updated 
+to use the new type system and new type inference. Users are requested to 
update 
+their UDFs or use the deprecated `TableEnvironment.registerFunction` to 
restore 
+the old behavior temporarily by calling the function via name.
+
+### [Propagate executor config to TableConfig 
(FLINK-26421)](https://issues.apache.org/jira/browse/FLINK-26421)
+
+`flink-conf.yaml` and other configurations from outer layers (e.g. CLI) are 
now 
+propagated into `TableConfig`. Even though configuration set directly in 
`TableConfig` 
+still has precedence, this change can have side effects if table configuration 
+was accidentally set in other layers.
+
+### [Remove pre FLIP-84 methods 
(FLINK-26090)](https://issues.apache.org/jira/browse/FLINK-26090)
+
+The previously deprecated methods `TableEnvironment.execute`, 
`Table.insertInto`,
+`TableEnvironment.fromTableSource`, `TableEnvironment.sqlUpdate`, and 
+`TableEnvironment.explain` have been removed. Please use the provided 
+alternatives introduced in 
+[FLIP-84](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878).
+
+### [Fix parser generator warnings 
(FLINK-26053)](https://issues.apache.org/jira/browse/FLINK-26053)
+
+`STATEMENT` is a reserved keyword now. Use backticks to escape tables, fields 
and 
+other references.
+
+### [Expose uid generator for DataStream/Transformation providers 
(FLINK-25990)](https://issues.apache.org/jira/browse/FLINK-25990)
+
+`DataStreamScanProvider` and `DataStreamSinkProvider` for table connectors 
received 
+an additional 

[GitHub] [flink] gaoyunhaii commented on a diff in pull request #19584: release notes for the 1.15 release

2022-04-28 Thread GitBox


gaoyunhaii commented on code in PR #19584:
URL: https://github.com/apache/flink/pull/19584#discussion_r860966036


##
docs/content/release-notes/flink-1.15.md:
##
@@ -0,0 +1,379 @@
+---
+title: "Release Notes - Flink 1.15"
+---
+
+
+# Release notes - Flink 1.15
+
+These release notes discuss important aspects, such as configuration, 
behavior, 
+or dependencies, that changed between Flink 1.14 and Flink 1.15. Please read 
these 
+notes carefully if you are planning to upgrade your Flink version to 1.15.
+
+## Summary of changed dependency names
+
+There are three changes in Flink 1.15 that require updating dependency names 
when 
+upgrading from earlier versions:
+
+* The newly introduced module `flink-table-planner-loader` (FLINK-25128) could 
+  replace the legacy `flink-table-planner_2.12`. As a consequence, 
`flink-table-uber` 
+  has been split into `flink-table-api-java-uber`, 
`flink-table-planner(-loader)`, 
+  and `flink-table-runtime`. Besides, the artifactId of `flink-sql-client` has no 
Scala 
+  suffix (_2.11 / _2.12) anymore, and Scala users need to explicitly add a 
+  dependency to `flink-table-api-scala` or `flink-table-api-scala-bridge`.
+* Due to the efforts of removing Scala dependency from `flink-table-runtime` 
+  (FLINK-25114), the artifactId of `flink-table-runtime` has no Scala version 
+  suffix (\_2.11 / \_2.12) any more.
+* The FileSystem connector is no longer a part of the `flink-table-uber` 
module 
+  and changed to an optional dedicated `flink-connector-files` module 
+  (FLINK-24687). Besides, the artifactId of `flink-orc`, `flink-orc-nohive`, 
+  `flink-parquet` has no Scala suffix (\_2.11 / \_2.12) anymore.
+
+## JDK Upgrade
+
+The support of Java 8 is now deprecated and will be removed in a future 
release 
+([FLINK-25247](https://issues.apache.org/jira/browse/FLINK-25247)). We 
recommend 
+all users to migrate to Java 11.
+
+The default Java version in the Flink docker images is now Java 11 
+([FLINK-25251](https://issues.apache.org/jira/browse/FLINK-25251)). 
+There are images built with Java 8, tagged with “java8”. 
+
+## DataStream API
+
+### [TypeSerializer version mismatch during eagerly restore 
(FLINK-24858)](https://issues.apache.org/jira/browse/FLINK-24858)
+
+This ticket resolves an issue where, during state migration between Flink 
versions,
+the wrong serializer might have been picked.
+
+When upgrading from Flink 1.13.x please immediately choose 1.14.3 or higher 
and 
+skip 1.14.0, 1.14.1, 1.14.2 because all are affected and it might prevent your 
+job from starting.
+
+## Table API & SQL
+
+### [Make the legacy behavior disabled by default 
(FLINK-26551)](https://issues.apache.org/jira/browse/FLINK-26551)
+
+The legacy casting behavior has been disabled by default. This might have 
+implications on corner cases (string parsing, numeric overflows, to string 
+representation, varchar/binary precisions). Set 
+`table.exec.legacy-cast-behaviour=ENABLED` to restore the old behavior.
+
+### [Enforce CHAR/VARCHAR precision when outputting to a Sink 
(FLINK-24753)](https://issues.apache.org/jira/browse/FLINK-24753)
+
+`CHAR`/`VARCHAR` lengths are enforced (trimmed/padded) by default now before 
entering
+the table sink.
+
+### [Support the new type inference in Scala Table API table functions 
(FLINK-26518)](https://issues.apache.org/jira/browse/FLINK-26518)
+
+Table functions that are called using Scala implicit conversions have been 
updated 
+to use the new type system and new type inference. Users are requested to 
update 
+their UDFs or use the deprecated `TableEnvironment.registerFunction` to 
restore 
+the old behavior temporarily by calling the function via name.
+
+### [Propagate executor config to TableConfig 
(FLINK-26421)](https://issues.apache.org/jira/browse/FLINK-26421)
+
+`flink-conf.yaml` and other configurations from outer layers (e.g. CLI) are 
now 
+propagated into `TableConfig`. Even though configuration set directly in 
`TableConfig` 
+still has precedence, this change can have side effects if table configuration 
+was accidentally set in other layers.
+
+### [Remove pre FLIP-84 methods 
(FLINK-26090)](https://issues.apache.org/jira/browse/FLINK-26090)
+
+The previously deprecated methods `TableEnvironment.execute`, 
`Table.insertInto`,
+`TableEnvironment.fromTableSource`, `TableEnvironment.sqlUpdate`, and 
+`TableEnvironment.explain` have been removed. Please use the provided 
+alternatives introduced in 
+[FLIP-84](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878).
+
+### [Fix parser generator warnings 
(FLINK-26053)](https://issues.apache.org/jira/browse/FLINK-26053)
+
+`STATEMENT` is a reserved keyword now. Use backticks to escape tables, fields 
and 
+other references.
+
+### [Expose uid generator for DataStream/Transformation providers 
(FLINK-25990)](https://issues.apache.org/jira/browse/FLINK-25990)
+
+`DataStreamScanProvider` and `DataStreamSinkProvider` for table connectors 
received 
+an 

[GitHub] [flink] Myasuka commented on a diff in pull request #19600: [FLINK-27433][tests] Move RocksDB's log location to /tmp for e2e tests

2022-04-28 Thread GitBox


Myasuka commented on code in PR #19600:
URL: https://github.com/apache/flink/pull/19600#discussion_r860965808


##
flink-end-to-end-tests/test-scripts/common.sh:
##
@@ -305,6 +305,10 @@ function wait_dispatcher_running {
 }
 
 function start_cluster {
+  # After FLINK-24785, RocksDB's log would be created under Flink's log 
directory by default,
+  # this would make e2e tests' artifacts contain too many log files.
+  # As RocksDB's log would not help much in e2e tests, move the location to 
the '/tmp' folder.
+  set_config_key "state.backend.rocksdb.log.dir" "/tmp"

Review Comment:
   If you want the logs still created under the DB folder, we can set this 
option to a non-existing folder, such as `/dev/null`. This will make the 
behavior the same as before. Is this what you want?






[GitHub] [flink-docker] gaoyunhaii commented on pull request #114: Merge adjacent RUN commands to avoid too much levels

2022-04-28 Thread GitBox


gaoyunhaii commented on PR #114:
URL: https://github.com/apache/flink-docker/pull/114#issuecomment-1112270926

   I tested with the created dockers and it works on my local machine. 





[GitHub] [flink-docker] gaoyunhaii opened a new pull request, #114: Merge adjacent RUN commands to avoid too much levels

2022-04-28 Thread GitBox


gaoyunhaii opened a new pull request, #114:
URL: https://github.com/apache/flink-docker/pull/114

   According to the suggestions from official docker teams: 
https://github.com/docker-library/official-images/pull/12318#issuecomment-442334
 , we merged the adjacent RUN commands to avoid adding too many layers. 





[jira] [Created] (FLINK-27442) Module flink-sql-avro-confluent-registry does not configure Confluent repo

2022-04-28 Thread Nicolaus Weidner (Jira)
Nicolaus Weidner created FLINK-27442:


 Summary: Module flink-sql-avro-confluent-registry does not 
configure Confluent repo
 Key: FLINK-27442
 URL: https://issues.apache.org/jira/browse/FLINK-27442
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.15.0
Reporter: Nicolaus Weidner


flink-sql-avro-confluent-registry depends on org.apache.kafka:kafka-clients, 
which is not available in Maven Central, but only in the Confluent repo. 
However, it does not configure this repo. This causes the build to fail for me 
locally with the following exception:

{code:java}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) on project flink-sql-avro-confluent-registry: Error 
resolving project artifact: Could not transfer artifact 
org.apache.kafka:kafka-clients:pom:6.2.2-ccs from/to : Not 
authorized , ReasonPhrase: . for project 
org.apache.kafka:kafka-clients:jar:6.2.2-ccs -> [Help 1]
{code}

This may be build-order dependent, but the module should probably configure the 
repo to be safe, as done elsewhere: 
https://github.com/apache/flink/blob/dd48d058c6b745f505870836048284a76a23f7cc/flink-end-to-end-tests/flink-confluent-schema-registry/pom.xml#L36-L41

Looks like this is the case since 1.12 at least.
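
A repositories block like the one in the linked pom.xml would look roughly as
follows (a sketch, not the exact fix: the {{<id>}} value is illustrative, and
the URL is Confluent's public Maven repository):

```xml
<!-- Sketch of a <repositories> section for the module's pom.xml.
     The <id> is illustrative; the URL is Confluent's public Maven repo. -->
<repositories>
  <repository>
    <id>confluent</id>
    <url>https://packages.confluent.io/maven/</url>
  </repository>
</repositories>
```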






[GitHub] [flink] echauchot commented on a diff in pull request #19586: [FLINK-26824] Upgrade Flink's supported Cassandra versions to match all Apache Cassandra supported versions

2022-04-28 Thread GitBox


echauchot commented on code in PR #19586:
URL: https://github.com/apache/flink/pull/19586#discussion_r860943074


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java:
##
@@ -72,6 +74,16 @@
 ClosureCleaner.clean(builder, 
ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
 }
 
+/**
+ * Set writes to be synchronous (block until writes are completed).
+ *
+ * @param timeout Maximum number of seconds to wait for write completion
+ */
+public void setSynchronousWrites(int timeout) {

Review Comment:
   I dropped the commits 






[GitHub] [flink] zentol commented on a diff in pull request #19600: [FLINK-27433][tests] Move RocksDB's log location to /tmp for e2e tests

2022-04-28 Thread GitBox


zentol commented on code in PR #19600:
URL: https://github.com/apache/flink/pull/19600#discussion_r860936180


##
flink-end-to-end-tests/test-scripts/common.sh:
##
@@ -305,6 +305,10 @@ function wait_dispatcher_running {
 }
 
 function start_cluster {
+  # After FLINK-24785, RocksDB's log would be created under Flink's log 
directory by default,
+  # this would make e2e tests' artifacts contain too many log files.
+  # As RocksDB's log would not help much in e2e tests, move the location to 
the '/tmp' folder.
+  set_config_key "state.backend.rocksdb.log.dir" "/tmp"

Review Comment:
   isn't that like, really bad? Surely users would want some easy way to 
opt-out of rocks db dumping mountains of files into the log directory :/
   






[GitHub] [flink] rkhachatryan commented on pull request #19550: [FLINK-25511][state/changelog] Discard pre-emptively uploaded state changes not included into any checkpoint

2022-04-28 Thread GitBox


rkhachatryan commented on PR #19550:
URL: https://github.com/apache/flink/pull/19550#issuecomment-1112249471

   Thanks a lot for the thorough review @Myasuka,
   I'll squash the commits and merge the PR once the build completes.





[GitHub] [flink] rkhachatryan commented on a diff in pull request #19598: [FLINK-25470][changelog] Expose more metrics of materialization

2022-04-28 Thread GitBox


rkhachatryan commented on code in PR #19598:
URL: https://github.com/apache/flink/pull/19598#discussion_r860933184


##
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogMetricGroup.java:
##
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.changelog;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.metrics.Counter;
+import org.apache.flink.metrics.Gauge;
+import org.apache.flink.metrics.MetricGroup;
+import org.apache.flink.runtime.metrics.groups.ProxyMetricGroup;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.locks.ReentrantLock;
+
+/** Metrics related to the Changelog State Backend. */
+class ChangelogMetricGroup extends ProxyMetricGroup<MetricGroup> {
+private static final Logger LOG = 
LoggerFactory.getLogger(ChangelogMetricGroup.class);
+
+private static final String PREFIX = "Changelog";
+
+@VisibleForTesting
+static final String NUMBER_OF_TOTAL_MATERIALIZATION = PREFIX + 
".numberOfTotalMaterialization";
+
+@VisibleForTesting
+static final String NUMBER_OF_IN_PROGRESS_MATERIALIZATION =
+PREFIX + ".numberOfInProgressMaterialization";
+
+@VisibleForTesting
+static final String NUMBER_OF_COMPLETED_MATERIALIZATION =
+PREFIX + ".numberOfCompletedMaterialization";
+
+@VisibleForTesting
+static final String NUMBER_OF_FAILED_MATERIALIZATION =
+PREFIX + ".numberOfFailedMaterialization";
+
+@VisibleForTesting
+static final String LATEST_FULL_SIZE_OF_MATERIALIZATION =
+PREFIX + ".lastFullSizeOfMaterialization";
+
+@VisibleForTesting
+static final String LATEST_INC_SIZE_OF_MATERIALIZATION =
+PREFIX + ".lastIncSizeOfMaterialization";
+
+private final Counter totalMaterializationCounter;
+private final Counter inProgressMaterializationCounter;
+private final Counter completedMaterializationCounter;
+private final Counter failedMaterializationCounter;
+private final UpdatableGauge lastFullSizeOfMaterializationGauge;
+private final UpdatableGauge lastIncSizeOfMaterializationGauge;
+
+private final ReentrantLock metricsReadWriteLock = new ReentrantLock();
+
+ChangelogMetricGroup(MetricGroup parentMetricGroup) {
+super(parentMetricGroup);
+this.totalMaterializationCounter = 
counter(NUMBER_OF_TOTAL_MATERIALIZATION);
+this.inProgressMaterializationCounter = 
counter(NUMBER_OF_IN_PROGRESS_MATERIALIZATION);
+this.completedMaterializationCounter = 
counter(NUMBER_OF_COMPLETED_MATERIALIZATION);
+this.failedMaterializationCounter = 
counter(NUMBER_OF_FAILED_MATERIALIZATION);
+this.lastFullSizeOfMaterializationGauge =
+gauge(LATEST_FULL_SIZE_OF_MATERIALIZATION, new 
SimpleUpdatableGauge<>());
+this.lastIncSizeOfMaterializationGauge =
+gauge(LATEST_INC_SIZE_OF_MATERIALIZATION, new 
SimpleUpdatableGauge<>());
+}
+
+void reportPendingMaterialization() {
+metricsReadWriteLock.lock();
+try {
+inProgressMaterializationCounter.inc();
+totalMaterializationCounter.inc();
+} finally {
+metricsReadWriteLock.unlock();
+}
+}
+
+void reportCompletedMaterialization(
+long fullSizeOfMaterialization, long incSizeOfMaterialization) {
+metricsReadWriteLock.lock();
+try {
+completedMaterializationCounter.inc();
+if (canDecrementOfInProgressMaterializationNumber()) {
+inProgressMaterializationCounter.dec();
+}
+
lastFullSizeOfMaterializationGauge.updateValue(fullSizeOfMaterialization);
+
lastIncSizeOfMaterializationGauge.updateValue(incSizeOfMaterialization);
+} finally {
+metricsReadWriteLock.unlock();
+}
+}
+
+void reportFailedMaterialization() {
+metricsReadWriteLock.lock();

Review Comment:
   The number of failures can still be incremented by different threads IIUC.




[GitHub] [flink] rkhachatryan commented on a diff in pull request #19598: [FLINK-25470][changelog] Expose more metrics of materialization

2022-04-28 Thread GitBox


rkhachatryan commented on code in PR #19598:
URL: https://github.com/apache/flink/pull/19598#discussion_r860932007


##
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogKeyedStateBackend.java:
##
@@ -216,6 +222,7 @@ public ChangelogKeyedStateBackend(
 this.executionConfig = executionConfig;
 this.ttlTimeProvider = ttlTimeProvider;
 this.keyValueStatesByName = new HashMap<>();
+this.metrics = new ChangelogMetricGroup(metricGroup);

Review Comment:
   Yes.






[GitHub] [flink] rkhachatryan commented on a diff in pull request #19598: [FLINK-25470][changelog] Expose more metrics of materialization

2022-04-28 Thread GitBox


rkhachatryan commented on code in PR #19598:
URL: https://github.com/apache/flink/pull/19598#discussion_r860931294


##
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogKeyedStateBackend.java:
##
@@ -694,6 +705,10 @@ public KeyedStateBackend 
getDelegatedKeyedStateBackend(boolean recursive) {
 return keyedStateBackend.getDelegatedKeyedStateBackend(recursive);
 }
 
+public ChangelogMetricGroup getMetrics() {
+return metrics;
+}

Review Comment:
   If I understand correctly, you are talking about where to call 
`reportPendingMaterialization()`: 
`ChangelogKeyedStateBackend.initMaterialization` or 
`PeriodicMaterializationManager.triggerMaterialization`.
   I think it doesn't matter much in terms of usefulness (on one hand, it might 
be useful to know whether the materialization was scheduled or not at all; OTH, 
`mailboxExecutor` might delay it after triggering).
   So I think the latter (from `PeriodicMaterializationManager`) is fine.






[GitHub] [flink] Myasuka commented on a diff in pull request #19600: [FLINK-27433][tests] Move RocksDB's log location to /tmp for e2e tests

2022-04-28 Thread GitBox


Myasuka commented on code in PR #19600:
URL: https://github.com/apache/flink/pull/19600#discussion_r860922881


##
flink-end-to-end-tests/test-scripts/common.sh:
##
@@ -305,6 +305,10 @@ function wait_dispatcher_running {
 }
 
 function start_cluster {
+  # After FLINK-24785, RocksDB's log would be created under Flink's log 
directory by default,
+  # this would make e2e tests' artifacts contain too many log files.
+  # As RocksDB's log would not help much in e2e tests, move the location to 
the '/tmp' folder.
+  set_config_key "state.backend.rocksdb.log.dir" "/tmp"

Review Comment:
   Even if we set the log level to `HEADER_LEVEL`, it will still create a log 
file with the configuration profiles.






[GitHub] [flink] rkhachatryan commented on a diff in pull request #19550: [FLINK-25511][state/changelog] Discard pre-emptively uploaded state changes not included into any checkpoint

2022-04-28 Thread GitBox


rkhachatryan commented on code in PR #19550:
URL: https://github.com/apache/flink/pull/19550#discussion_r860918146


##
flink-dstl/flink-dstl-dfs/src/main/java/org/apache/flink/changelog/fs/ChangelogRegistryImpl.java:
##
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.changelog.fs;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.runtime.state.PhysicalStateHandleID;
+import org.apache.flink.runtime.state.StreamStateHandle;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.concurrent.ThreadSafe;
+
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.CopyOnWriteArraySet;
+import java.util.concurrent.Executor;
+
+@Internal
+@ThreadSafe
+class ChangelogRegistryImpl implements ChangelogRegistry {
+private static final Logger LOG = 
LoggerFactory.getLogger(ChangelogRegistryImpl.class);
+
+private final Map<PhysicalStateHandleID, Set<UUID>> entries = new 
ConcurrentHashMap<>();
+private final Executor executor;
+
+public ChangelogRegistryImpl(Executor executor) {
+this.executor = executor;
+}
+
+@Override
+public void startTracking(StreamStateHandle handle, Set<UUID> backendIDs) {
+LOG.debug(
+"start tracking state, key: {}, state: {}",
+handle.getStreamStateHandleID(),
+handle);
+entries.put(handle.getStreamStateHandleID(), new 
CopyOnWriteArraySet<>(backendIDs));
+}
+
+@Override
+public void stopTracking(StreamStateHandle handle) {
+LOG.debug(
+"stop tracking state, key: {}, state: {}", 
handle.getStreamStateHandleID(), handle);
+entries.remove(handle.getStreamStateHandleID());
+}
+
+@Override
+public void notUsed(StreamStateHandle handle, UUID backendId) {

Review Comment:
   Added `testPreEmptiveUploadNotDiscardedWithoutNotification` in 5ea7f3650ee.






[GitHub] [flink] flinkbot commented on pull request #19606: [FLINK-27441] fix "nzScroll" calculations

2022-04-28 Thread GitBox


flinkbot commented on PR #19606:
URL: https://github.com/apache/flink/pull/19606#issuecomment-1112225509

   
   ## CI report:
   
   * bdcfcdb311970e9d97472b9ffa478ee56c931e75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] echauchot commented on a diff in pull request #19586: [FLINK-26824] Upgrade Flink's supported Cassandra versions to match all Apache Cassandra supported versions

2022-04-28 Thread GitBox


echauchot commented on code in PR #19586:
URL: https://github.com/apache/flink/pull/19586#discussion_r860910308


##
flink-connectors/flink-connector-cassandra/src/main/java/org/apache/flink/streaming/connectors/cassandra/CassandraSinkBase.java:
##
@@ -72,6 +74,16 @@
     ClosureCleaner.clean(builder, ExecutionConfig.ClosureCleanerLevel.RECURSIVE, true);
     }
 
+    /**
+     * Set writes to be synchronous (block until writes are completed).
+     *
+     * @param timeout Maximum number of seconds to wait for write completion
+     */
+    public void setSynchronousWrites(int timeout) {

Review Comment:
   :+1: 
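A hedged sketch of the blocking behavior the method above describes (hypothetical `SyncWriteSketch`, not the actual `CassandraSinkBase` change): each completed write releases a permit, and the emitting thread blocks until the write is acknowledged or the timeout elapses.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch, not the actual CassandraSinkBase implementation.
class SyncWriteSketch {
    private final Semaphore completed = new Semaphore(0);
    private final int timeoutSeconds;

    SyncWriteSketch(int timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    // Invoked by the driver's write-completion callback.
    void onWriteComplete() {
        completed.release();
    }

    // Blocks the emitting thread; returns false if the timeout elapsed.
    boolean awaitWrite() {
        try {
            return completed.tryAcquire(timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        SyncWriteSketch sink = new SyncWriteSketch(1);
        sink.onWriteComplete();                 // write acknowledged first
        System.out.println(sink.awaitWrite());  // prints "true"
    }
}
```

Returning a boolean instead of throwing lets the caller decide whether an unacknowledged write after the timeout is a hard failure; the real sink may choose differently.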






[jira] [Commented] (FLINK-27441) Scrollbar is missing for particular UI elements (Accumulators, Backpressure, Watermarks)

2022-04-28 Thread Ferenc Csaky (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17529443#comment-17529443 ]

Ferenc Csaky commented on FLINK-27441:
--

Since this is a trivial fix, I opened a PR for it: https://github.com/apache/flink/pull/19606

> Scrollbar is missing for particular UI elements (Accumulators, Backpressure, 
> Watermarks)
> 
>
> Key: FLINK-27441
> URL: https://issues.apache.org/jira/browse/FLINK-27441
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Ferenc Csaky
>Priority: Minor
>  Labels: pull-request-available
>
> The Angular version bump introduced a bug: {{nzScroll}} does not support 
> percentages in CSS calc, so the scrollbar is invisible. There is an easy 
> workaround, which the linked Angular discussion covers.
> Angular issue: https://github.com/NG-ZORRO/ng-zorro-antd/issues/3090



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

