[GitHub] [flink] flinkbot edited a comment on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-788701462


   
   ## CI report:
   
   * 43550b885f3b52e7db0f82cbbf84c4086b66456c UNKNOWN
   * 087283320d89633e8c0cb08a331a0c0fe135549e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-23162.
---
Fix Version/s: 1.13.2
   Resolution: Fixed

Fixed in
- master: 4cc1246bce744c7ba4417b483643b8478132aff4
- release-1.13: 54c681835a8c73a498d8139ff09a400dcf1b182b

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0, 1.13.2
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
> uses `time_ltz` both in its declaration and in its expression:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while declaring time_ltz.
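Per the resolution above (pass the physical column `ts` to `TO_TIMESTAMP_LTZ` when declaring `time_ltz`), a sketch of the corrected DDL; the WITH options remain elided as in the original:

```sql
CREATE TABLE user_actions (
  user_name STRING,
  data STRING,
  ts BIGINT,
  -- derive time_ltz from the ts column instead of referencing itself
  time_ltz AS TO_TIMESTAMP_LTZ(ts, 3),
  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
) WITH (
  ...
);
```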



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #16301: [FLINK-23162][docs] Updated column expression which throws exception

2021-06-26 Thread GitBox


wuchong merged pull request #16301:
URL: https://github.com/apache/flink/pull/16301


   






[jira] [Commented] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17370157#comment-17370157
 ] 

Jark Wu commented on FLINK-23162:
-

Good catch!

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
> uses `time_ltz` both in its declaration and in its expression:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while declaring time_ltz.





[jira] [Assigned] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-23162:
---

Assignee: Mans Singh

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Assignee: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
> uses `time_ltz` both in its declaration and in its expression:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while declaring time_ltz.





[jira] [Updated] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-23162:

Component/s: (was: Table SQL / Client)
 (was: Examples)
 Table SQL / API

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
> uses `time_ltz` both in its declaration and in its expression:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while declaring time_ltz.





[jira] [Commented] (FLINK-23135) Flink SQL Error while applying rule AggregateReduceGroupingRule

2021-06-26 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17370154#comment-17370154
 ] 

godfrey he commented on FLINK-23135:


[~zhengjiewen] could you provide the DDL statements? I cannot reproduce the error.

> Flink SQL Error while applying rule AggregateReduceGroupingRule
> ---
>
> Key: FLINK-23135
> URL: https://issues.apache.org/jira/browse/FLINK-23135
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.3, 1.12.4
>Reporter: zhengjiewen
>Assignee: godfrey he
>Priority: Critical
> Attachments: image-2021-06-24-18-04-03-473.png, 
> image-2021-06-24-18-13-16-056.png, image-2021-06-24-18-20-54-752.png, yarn.txt
>
>
> When I upgraded from version 1.12.1 to 1.12.4, the following SQL could not run.
> {code:sql}
> // code placeholder
> String retailSql = "SELECT\n" +
> "customer_id,\n" +
> "ware_virtual_category,\n" +
> "min(pay_datetime) as pay_datetime\n" +
> " FROM " +
> "   
> `kudu`.`default_database`.`impala::cube_kudu.dwd_order_retail_order_pay` \n" +
> " WHERE " +
> "   pay_date = TO_TIMESTAMP('" + partitionTime + "')" +
> " AND " +
> "   freight_flag in (0)  " + 
> " AND   " +
> "   order_pay_type <> '3' " + 
> " GROUP BY \n" +
> "customer_id," +
> "ware_virtual_category";{code}
>  
> the error message is as follows:
> {code:java}
> // code placeholder
> Exception in thread "main" java.lang.RuntimeException: Error while applying 
> rule AggregateReduceGroupingRule, args 
> [rel#833:FlinkLogicalAggregate.LOGICAL.any.[](input=RelSubset#832,group={0, 
> 1},pay_datetime=MIN($2))]Exception in thread "main" 
> java.lang.RuntimeException: Error while applying rule 
> AggregateReduceGroupingRule, args 
> [rel#833:FlinkLogicalAggregate.LOGICAL.any.[](input=RelSubset#832,group={0, 
> 1},pay_datetime=MIN($2))] at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:256)
>  at 
> org.apache.calcite.plan.volcano.IterativeRuleDriver.drive(IterativeRuleDriver.java:58)
>  at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:510)
>  at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:312) 
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:45)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:45)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:45)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:287)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:160)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
>  at 
> or

[GitHub] [flink] tisonkun commented on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


tisonkun commented on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869092410


   
https://github.com/apache/flink/blob/2a7e7d7e92f0610502e59f6104391a83dc8b5692/flink-yarn/pom.xml#L44-L54
   
@RocMarshal nope. Our codebase follows the pattern `flink-runtime_${scala.binary.version}`.
   
   If you are hitting a double-underscore issue, please provide minimal reproduction steps so we can track down the cause.






[GitHub] [flink] flinkbot edited a comment on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-788701462


   
   ## CI report:
   
   * c4cfa5cb3a3e5c862c5107db1a67711dc6c4b8ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19473)
 
   * 43550b885f3b52e7db0f82cbbf84c4086b66456c UNKNOWN
   * 087283320d89633e8c0cb08a331a0c0fe135549e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19582)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] RocMarshal edited a comment on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


RocMarshal edited a comment on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869091110


https://user-images.githubusercontent.com/64569824/123531173-dc58a400-d734-11eb-8509-b97fbbbdbdab.png
   
   https://user-images.githubusercontent.com/64569824/123531178-e7abcf80-d734-11eb-99fb-96cc8e1be180.png
   
   https://user-images.githubusercontent.com/64569824/123531181-ec708380-d734-11eb-94ad-60669a7571b8.png
   
   @dianfu, @tisonkun Hi, I attached 3 screenshots about it. PTAL. Thank you.






[GitHub] [flink] RocMarshal commented on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


RocMarshal commented on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869091110


https://user-images.githubusercontent.com/64569824/123531173-dc58a400-d734-11eb-8509-b97fbbbdbdab.png
   
   https://user-images.githubusercontent.com/64569824/123531178-e7abcf80-d734-11eb-99fb-96cc8e1be180.png
   
   https://user-images.githubusercontent.com/64569824/123531181-ec708380-d734-11eb-94ad-60669a7571b8.png
   
   @tisonkun Hi, I attached 3 screenshots about it. PTAL. Thank you.






[GitHub] [flink] flinkbot edited a comment on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-788701462


   
   ## CI report:
   
   * c4cfa5cb3a3e5c862c5107db1a67711dc6c4b8ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19473)
 
   * 43550b885f3b52e7db0f82cbbf84c4086b66456c UNKNOWN
   * 087283320d89633e8c0cb08a331a0c0fe135549e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Closed] (FLINK-23120) ByteArrayWrapperSerializer.serialize should use writeInt to serialize the length

2021-06-26 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-23120.
---
Resolution: Fixed

Fixed in
- master via 2a7e7d7e92f0610502e59f6104391a83dc8b5692
- release-1.13 via 862f14e411f4d5e8d7bc21a1dc003aeb507c8889
- release-1.12 via d77c51d449e8d25611b0d84fe5d438949f360384

> ByteArrayWrapperSerializer.serialize should use writeInt to serialize the 
> length
> 
>
> Key: FLINK-23120
> URL: https://issues.apache.org/jira/browse/FLINK-23120
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.14.0, 1.12.5, 1.13.2
>
>
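The issue title summarizes the fix: the serialized length must be written with `writeInt`, i.e. a full 4-byte prefix. `ByteArrayWrapperSerializer` lives in PyFlink's runtime, so the following is only a minimal standalone sketch (class and method names are hypothetical) of the bug class — a narrower write silently corrupts arrays longer than 255 bytes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LengthPrefixDemo {

    // Correct: a 4-byte length prefix, as the FLINK-23120 fix uses writeInt.
    static byte[] serialize(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(data.length); // 4 bytes: supports lengths up to Integer.MAX_VALUE
        out.write(data);
        return bos.toByteArray();
    }

    static byte[] deserialize(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        byte[] data = new byte[in.readInt()];
        in.readFully(data);
        return data;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[300]; // longer than a single unsigned byte can describe
        if (deserialize(serialize(payload)).length != 300) {
            throw new AssertionError("length prefix lost");
        }
        // a one-byte write would have kept only the low 8 bits: 300 & 0xFF == 44
        System.out.println("round trip ok");
    }
}
```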






[GitHub] [flink] dianfu closed pull request #16258: [FLINK-23120][python] Fix ByteArrayWrapperSerializer.serialize to use writeInt to serialize the length

2021-06-26 Thread GitBox


dianfu closed pull request #16258:
URL: https://github.com/apache/flink/pull/16258


   






[GitHub] [flink] tisonkun commented on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


tisonkun commented on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869088182


   
![image](https://user-images.githubusercontent.com/18818196/123530671-b8469400-d72f-11eb-81c0-e04b1e08be5e.png)
   
   It is not a typo.
   
   Closing...






[GitHub] [flink] tisonkun closed pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


tisonkun closed pull request #16300:
URL: https://github.com/apache/flink/pull/16300


   






[GitHub] [flink] gaoyunhaii commented on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


gaoyunhaii commented on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-869088100


Hi @dawidwys, many thanks for the review! I have updated the PR according to the comments.






[GitHub] [flink] gaoyunhaii commented on a change in pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


gaoyunhaii commented on a change in pull request #15055:
URL: https://github.com/apache/flink/pull/15055#discussion_r659249409



##
File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -695,6 +697,27 @@ protected void afterInvoke() throws Exception {
         // close all operators in a chain effect way
         operatorChain.closeOperators(actionExecutor);
 
+        // If checkpoints are enabled, waits for all the records get processed by the downstream
+        // tasks. During this process, this task could coordinate with its downstream tasks to
+        // continue perform checkpoints.
+        if (configuration.isCheckpointingEnabled()) {
+            LOG.debug("Waiting for all the records processed by the downstream tasks.");
+            CompletableFuture combineFuture =
+                    FutureUtils.waitForAll(
+                            Arrays.stream(getEnvironment().getAllWriters())
+                                    .map(FunctionUtils.uncheckedFunction(
+                                            ResultPartitionWriter::getAllRecordsProcessedFuture))
+                                    .collect(Collectors.toList()));
+
+            MailboxExecutor mailboxExecutor =
+                    mailboxProcessor.getMailboxExecutor(TaskMailbox.MIN_PRIORITY);
+            while (!combineFuture.isDone()) {

Review comment:
   +1, having a method to do blocking waiting would be preferable. But considering that the changes to the mailbox executor seem to be non-trivial, perhaps we could implement that mechanism separately, and for now first wait with `future.get(1s)` like we do in [`StreamOperatorWrapper`](https://github.com/apache/flink/blob/84e1186d24ebbe6b3a4629496274d6337b333af1/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamOperatorWrapper.java#L161)?
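The timed-wait pattern suggested here, sketched outside of Flink (class and method names are hypothetical): poll the future with a bounded `get` so the waiting loop regains control periodically — in `StreamTask`'s case, to keep serving mailbox actions between checks.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWaitDemo {

    // Wait for the future, waking up at least once per second; the catch
    // branch is where interleaved work (e.g. mailbox processing) would go.
    static void waitWithTimeout(CompletableFuture<?> future) throws Exception {
        while (!future.isDone()) {
            try {
                future.get(1, TimeUnit.SECONDS); // bounded wait, as in StreamOperatorWrapper
            } catch (TimeoutException e) {
                // not done yet: do other work here, then re-check the future
            }
        }
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Void> allRecordsProcessed = new CompletableFuture<>();
        // complete the future from another thread after a short delay
        new Thread(() -> {
            try {
                Thread.sleep(100);
            } catch (InterruptedException ignored) {
            }
            allRecordsProcessed.complete(null);
        }).start();
        waitWithTimeout(allRecordsProcessed);
        System.out.println("all records processed");
    }
}
```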








[GitHub] [flink] gaoyunhaii commented on a change in pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


gaoyunhaii commented on a change in pull request #15055:
URL: https://github.com/apache/flink/pull/15055#discussion_r659249409



##
File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -695,6 +697,27 @@ protected void afterInvoke() throws Exception {
         // close all operators in a chain effect way
         operatorChain.closeOperators(actionExecutor);
 
+        // If checkpoints are enabled, waits for all the records get processed by the downstream
+        // tasks. During this process, this task could coordinate with its downstream tasks to
+        // continue perform checkpoints.
+        if (configuration.isCheckpointingEnabled()) {
+            LOG.debug("Waiting for all the records processed by the downstream tasks.");
+            CompletableFuture combineFuture =
+                    FutureUtils.waitForAll(
+                            Arrays.stream(getEnvironment().getAllWriters())
+                                    .map(FunctionUtils.uncheckedFunction(
+                                            ResultPartitionWriter::getAllRecordsProcessedFuture))
+                                    .collect(Collectors.toList()));
+
+            MailboxExecutor mailboxExecutor =
+                    mailboxProcessor.getMailboxExecutor(TaskMailbox.MIN_PRIORITY);
+            while (!combineFuture.isDone()) {

Review comment:
   +1, having a method to do blocking waiting would be preferable. Considering that the changes to the mailbox executor seem to be non-trivial, perhaps we could implement that mechanism separately, and for now first wait with `future.get(1s)` like we do in [`StreamOperatorWrapper`](https://github.com/apache/flink/blob/84e1186d24ebbe6b3a4629496274d6337b333af1/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamOperatorWrapper.java#L161)?








[GitHub] [flink] gaoyunhaii commented on a change in pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


gaoyunhaii commented on a change in pull request #15055:
URL: https://github.com/apache/flink/pull/15055#discussion_r659249409



##
File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -695,6 +697,27 @@ protected void afterInvoke() throws Exception {
         // close all operators in a chain effect way
         operatorChain.closeOperators(actionExecutor);
 
+        // If checkpoints are enabled, waits for all the records get processed by the downstream
+        // tasks. During this process, this task could coordinate with its downstream tasks to
+        // continue perform checkpoints.
+        if (configuration.isCheckpointingEnabled()) {
+            LOG.debug("Waiting for all the records processed by the downstream tasks.");
+            CompletableFuture combineFuture =
+                    FutureUtils.waitForAll(
+                            Arrays.stream(getEnvironment().getAllWriters())
+                                    .map(FunctionUtils.uncheckedFunction(
+                                            ResultPartitionWriter::getAllRecordsProcessedFuture))
+                                    .collect(Collectors.toList()));
+
+            MailboxExecutor mailboxExecutor =
+                    mailboxProcessor.getMailboxExecutor(TaskMailbox.MIN_PRIORITY);
+            while (!combineFuture.isDone()) {

Review comment:
   I would +1 for having a method to do blocking waiting, which would be preferable. Considering that the changes to the mailbox executor seem to be non-trivial, perhaps we could implement that mechanism separately, and for now first wait with `future.get(1s)` like we do in [`StreamOperatorWrapper`](https://github.com/apache/flink/blob/84e1186d24ebbe6b3a4629496274d6337b333af1/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamOperatorWrapper.java#L161)?








[GitHub] [flink] dianfu commented on pull request #16258: [FLINK-23120][python] Fix ByteArrayWrapperSerializer.serialize to use writeInt to serialize the length

2021-06-26 Thread GitBox


dianfu commented on pull request #16258:
URL: https://github.com/apache/flink/pull/16258#issuecomment-869087703


@StephanEwen @HuangXingBo Thanks a lot for the review. I'm merging this PR to catch up with the bugfix release of 1.13, as this is a serious bug for MapState in the Python DataStream API. Feel free to let me know if there is anything we could improve.






[GitHub] [flink] flinkbot edited a comment on pull request #15055: [FLINK-21086][runtime][checkpoint] Support checkpoint alignment with finished input channels

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #15055:
URL: https://github.com/apache/flink/pull/15055#issuecomment-788701462


   
   ## CI report:
   
   * c4cfa5cb3a3e5c862c5107db1a67711dc6c4b8ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19473)
 
   * 43550b885f3b52e7db0f82cbbf84c4086b66456c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-21086) Support checkpoint alignment with finished input channels

2021-06-26 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-21086:

Description: 
For a non-source task, if one of its precedent tasks has finished, that precedent task would send EndOfPartition to it. However, the CheckpointBarrierHandler currently cannot support receiving EndOfPartitionEvent. To support checkpointing after some tasks finish, on receiving EndOfPartitionEvent the CheckpointBarrierHandler should

#  For all the pending checkpoints, mark this channel as aligned.
#  For the future checkpoints, exclude this channel.

  was: For a non-source task, if one of its precedent tasks has finished, that precedent task would send EndOfPartition to it. Then for checkpoints after that, this task would not receive the barrier from the channel that has sent EndOfPartition. To finish the alignment, CheckpointBarrierHandler would insert barriers before EndOfPartition for these channels.


> Support checkpoint alignment with finished input channels
> -
>
> Key: FLINK-21086
> URL: https://issues.apache.org/jira/browse/FLINK-21086
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Runtime / Checkpointing
>Reporter: Yun Gao
>Priority: Major
>  Labels: pull-request-available
>
> For a non-source task, if one of its precedent tasks has finished, that 
> precedent task sends EndOfPartition to it. However, the 
> CheckpointBarrierHandler currently does not support receiving 
> EndOfPartitionEvent. To support checkpointing after some tasks finish, on 
> receiving an EndOfPartitionEvent the CheckpointBarrierHandler should
> #  For all pending checkpoints, mark this channel as aligned.
> #  For future checkpoints, exclude this channel.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16301: [FLINK-23162][docs] Updated column expression which throws exception

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16301:
URL: https://github.com/apache/flink/pull/16301#issuecomment-869067859


   
   ## CI report:
   
   * aac82eeaa2d944ecce4f420cb600c8ed74bce531 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19580)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16301: [FLINK-23162][docs] Updated column expression which throws exception

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16301:
URL: https://github.com/apache/flink/pull/16301#issuecomment-869067859


   
   ## CI report:
   
   * aac82eeaa2d944ecce4f420cb600c8ed74bce531 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19580)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-21716)  Support higher precision for Data Type TIME(p)

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21716:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


>  Support higher precision for Data Type TIME(p)
> ---
>
> Key: FLINK-21716
> URL: https://issues.apache.org/jira/browse/FLINK-21716
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Leonard Xu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Due to historical reasons, we only support TIME(3) so far; we could support 
> higher precision, e.g. TIME(9).





[jira] [Updated] (FLINK-22443) Cannot execute an extremely long SQL statement under batch mode

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22443:
---

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but it is unassigned and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Cannot execute an extremely long SQL statement under batch mode
> ---
>
> Key: FLINK-22443
> URL: https://issues.apache.org/jira/browse/FLINK-22443
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
> Environment: execute command
>  
> {code:java}
> bin/sql-client.sh embedded -d conf/sql-client-batch.yaml 
> {code}
> content of conf/sql-client-batch.yaml
>  
> {code:java}
> catalogs:
> - name: bnpmphive
>   type: hive
>   hive-conf-dir: /home/gum/hive/conf
>   hive-version: 3.1.2
> execution:
>   planner: blink
>   type: batch
>   #type: streaming
>   result-mode: table
>   parallelism: 4
>   max-parallelism: 2000
>   current-catalog: bnpmphive
>   #current-database: snmpprobe 
> #configuration:
> #  table.sql-dialect: hive
> modules:
>- name: core
>  type: core
>- name: myhive
>  type: hive
> deployment:
>   # general cluster communication timeout in ms
>   response-timeout: 5000
>   # (optional) address from cluster to gateway
>   gateway-address: ""
>   # (optional) port from cluster to gateway
>   gateway-port: 0
> {code}
>  
>Reporter: macdoor615
>Priority: Blocker
>  Labels: stale-blocker, stale-critical
> Attachments: flink-gum-taskexecutor-8-hb3-prod-hadoop-002.log.4.zip, 
> raw_p_restapi_hcd.csv.zip
>
>
> 1. execute an extremely long SQL statement under batch mode
>  
> {code:java}
> select
> 'CD' product_name,
> r.code business_platform,
> 5 statisticperiod,
> cast('2021-03-24 00:00:00' as timestamp) coltime,
> cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_2,
> cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_7,
> cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_5,
> cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_6,
> cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
> cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
> cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
> cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
> cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
> cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
> cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
> cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
> cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
> cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
> cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
> cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
> cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
> cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
> cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
> cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
> cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
> cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
> cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
> cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
> cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
> cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
> cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
> cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
> cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
> cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
> cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
> cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
> cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
> cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
> cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_3,
> cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
> cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
> cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
> cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
> cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
> cast(r41

[jira] [Updated] (FLINK-22065) Improve the error message when input invalid command in the sql client

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22065:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Improve the error message when input invalid command in the sql client
> --
>
> Key: FLINK-22065
> URL: https://issues.apache.org/jira/browse/FLINK-22065
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
> Fix For: 1.14.0
>
>
> !https://static.dingtalk.com/media/lALPD26eOprT2ztwzQWg_1440_112.png_720x720g.jpg?renderWidth=1440&renderHeight=112&renderOrientation=1&isLocal=0&bizType=im!





[jira] [Updated] (FLINK-21559) Python DataStreamTests::test_process_function failed on AZP

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21559:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
test-stability  (was: auto-deprioritized-critical stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Python DataStreamTests::test_process_function failed on AZP
> ---
>
> Key: FLINK-21559
> URL: https://issues.apache.org/jira/browse/FLINK-21559
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
> Fix For: 1.14.0
>
>
> The Python test case {{DataStreamTests::test_process_function}} failed on AZP.
> {code}
> === short test summary info 
> 
> FAILED 
> pyflink/datastream/tests/test_data_stream.py::DataStreamTests::test_process_function
> = 1 failed, 705 passed, 22 skipped, 303 warnings in 583.39s (0:09:43) 
> ==
> ERROR: InvocationError for command /__w/3/s/flink-python/.tox/py38/bin/pytest 
> --durations=20 (exited with code 1)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13992&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3





[jira] [Updated] (FLINK-22128) Window aggregation should have unique keys

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22128:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Window aggregation should have unique keys
> --
>
> Key: FLINK-22128
> URL: https://issues.apache.org/jira/browse/FLINK-22128
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> We should add a match method in {{FlinkRelMdUniqueKeys}} for 
> {{StreamPhysicalWindowAggregate}}





[jira] [Updated] (FLINK-21538) Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21538:
---
  Labels: auto-deprioritized-major auto-unassigned test-stability  (was: 
auto-unassigned stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Elasticsearch6DynamicSinkITCase.testWritingDocuments fails when submitting job
> --
>
> Key: FLINK-21538
> URL: https://issues.apache.org/jira/browse/FLINK-21538
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Runtime / Coordination
>Affects Versions: 1.12.1, 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13868&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361
> {code}
> 2021-02-27T00:16:06.9493539Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-02-27T00:16:06.9494494Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-02-27T00:16:06.9495733Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$2(MiniClusterJobClient.java:117)
> 2021-02-27T00:16:06.9496596Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-02-27T00:16:06.9497354Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-02-27T00:16:06.9525795Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-02-27T00:16:06.9526744Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-02-27T00:16:06.9527784Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 2021-02-27T00:16:06.9528552Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-02-27T00:16:06.9529271Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-02-27T00:16:06.9530013Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-02-27T00:16:06.9530482Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-02-27T00:16:06.9531068Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1046)
> 2021-02-27T00:16:06.9531544Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2021-02-27T00:16:06.9531908Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2021-02-27T00:16:06.9532449Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2021-02-27T00:16:06.9532860Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2021-02-27T00:16:06.9533245Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2021-02-27T00:16:06.9533721Z  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 2021-02-27T00:16:06.9534225Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2021-02-27T00:16:06.9534697Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 2021-02-27T00:16:06.9535217Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
> 2021-02-27T00:16:06.9535718Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
> 2021-02-27T00:16:06.9536127Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:573)
> 2021-02-27T00:16:06.9536861Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2021-02-27T00:16:06.9537394Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2021-02-27T00:16:06.9537916Z  at 
> scala.concurrent.Future.$anonfun$andThen$1(Future.scala:532)
> 2021-02-27T00:16:06.9605804Z  at 
> scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
> 2021-02-27T00:16:06.9606794Z  at 
> scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
> 2021-02-27T00:16:06.9607642Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2021-02-27T00:16:06.9608419Z  at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 2021-02-27T00:16:06.96092

[jira] [Updated] (FLINK-20696) Yarn Session Blob Directory is not deleted.

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20696:
---
Labels: auto-deprioritized-major stale-major  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Yarn Session Blob Directory is not deleted.
> ---
>
> Key: FLINK-20696
> URL: https://issues.apache.org/jira/browse/FLINK-20696
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.0, 1.13.0
>Reporter: Ada Wong
>Priority: Major
>  Labels: auto-deprioritized-major, stale-major
> Attachments: image-2020-12-21-16-47-37-278.png
>
>
> The job has finished, but its blob directory is not deleted.
> There is a small probability that this problem occurs when I submit many 
> jobs.
>   !image-2020-12-21-16-47-37-278.png!





[jira] [Updated] (FLINK-22357) Mark FLIP-27 Source API as stable

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22357:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Mark FLIP-27 Source API as stable
> -
>
> Key: FLINK-22357
> URL: https://issues.apache.org/jira/browse/FLINK-22357
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Core
>Reporter: Stephan Ewen
>Priority: Major
>  Labels: auto-unassigned, stale-major
> Fix For: 1.14.0
>
>
> The FLIP-27 source API was introduced in 1.11 and underwent some major 
> improvements in 1.12.
> During the stabilization in 1.13 we needed only one very minor change to 
> those interfaces.
> I think it is time to declare the core source API interfaces stable, to 
> allow users to safely rely on them. I would suggest doing that for 1.14, 
> and possibly even backporting the annotation change to 1.13.





[jira] [Updated] (FLINK-22178) Support ignore-first-line option in new csv format

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22178:
---
Labels: auto-unassigned pull-request-available stale-major  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Support ignore-first-line option in new csv format
> --
>
> Key: FLINK-22178
> URL: https://issues.apache.org/jira/browse/FLINK-22178
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.13.0
>Reporter: Kurt Young
>Priority: Major
>  Labels: auto-unassigned, pull-request-available, stale-major
>






[jira] [Updated] (FLINK-21877) Add E2E test for upsert-kafka connector

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21877:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add E2E test for upsert-kafka connector
> ---
>
> Key: FLINK-21877
> URL: https://issues.apache.org/jira/browse/FLINK-21877
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Shengkai Fang
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>






[jira] [Updated] (FLINK-22397) OrcFileSystemITCase fails on Azure

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22397:
---
Labels: auto-deprioritized-critical stale-major test-stability  (was: 
auto-deprioritized-critical test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> OrcFileSystemITCase fails on Azure
> --
>
> Key: FLINK-22397
> URL: https://issues.apache.org/jira/browse/FLINK-22397
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ORC, Table SQL / Runtime
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16923&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=12429
> {code}
> 2021-04-21T06:00:43.1989525Z Apr 21 06:00:43 [ERROR] 
> testOrcFilterPushDown[false](org.apache.flink.orc.OrcFileSystemITCase)  Time 
> elapsed: 8.733 s  <<< ERROR!
> 2021-04-21T06:00:43.1991576Z Apr 21 06:00:43 java.lang.RuntimeException: 
> Failed to fetch next result
> 2021-04-21T06:00:43.1992690Z Apr 21 06:00:43  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-04-21T06:00:43.1999796Z Apr 21 06:00:43  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-04-21T06:00:43.2021072Z Apr 21 06:00:43  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351)
> 2021-04-21T06:00:43.2022469Z Apr 21 06:00:43  at 
> java.util.Iterator.forEachRemaining(Iterator.java:115)
> 2021-04-21T06:00:43.2023496Z Apr 21 06:00:43  at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109)
> 2021-04-21T06:00:43.2024451Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2021-04-21T06:00:43.2025472Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
> 2021-04-21T06:00:43.2026438Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
> 2021-04-21T06:00:43.2027531Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
> 2021-04-21T06:00:43.2028933Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.check(FileSystemITCaseBase.scala:57)
> 2021-04-21T06:00:43.2030041Z Apr 21 06:00:43  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:33)
> 2021-04-21T06:00:43.2031068Z Apr 21 06:00:43  at 
> org.apache.flink.orc.OrcFileSystemITCase.testOrcFilterPushDown(OrcFileSystemITCase.java:139)
> 2021-04-21T06:00:43.2031935Z Apr 21 06:00:43  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-04-21T06:00:43.2032743Z Apr 21 06:00:43  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-04-21T06:00:43.2033847Z Apr 21 06:00:43  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-04-21T06:00:43.2034729Z Apr 21 06:00:43  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-04-21T06:00:43.2035558Z Apr 21 06:00:43  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-04-21T06:00:43.2036546Z Apr 21 06:00:43  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-04-21T06:00:43.2037466Z Apr 21 06:00:43  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-04-21T06:00:43.2038683Z Apr 21 06:00:43  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-04-21T06:00:43.2039611Z Apr 21 06:00:43  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-04-21T06:00:43.2040473Z Apr 21 06:00:43  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-04-21T06:00:43.2041308Z Apr 21 06:00:43  at 
> org.junit.rules.ExternalResource$1.eval

[jira] [Updated] (FLINK-22551) checkpoints: strange behaviour

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22551:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but it is unassigned and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> checkpoints: strange behaviour 
> ---
>
> Key: FLINK-22551
> URL: https://issues.apache.org/jira/browse/FLINK-22551
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
> Environment: {code:java}
>  java -version
> openjdk version "11.0.2" 2019-01-15
> OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
> {code}
>Reporter: buom
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
>
> * +*Case 1*:+ Works as expected
> {code:java}
> public class Example {
> public static class ExampleSource extends RichSourceFunction<String>
> implements CheckpointedFunction {
> private volatile boolean isRunning = true;
> @Override
> public void open(Configuration parameters) throws Exception {
> System.out.println("[source] invoke open()");
> }
> @Override
> public void close() throws Exception {
> isRunning = false;
> System.out.println("[source] invoke close()");
> }
> @Override
> public void run(SourceContext<String> ctx) throws Exception {
> System.out.println("[source] invoke run()");
> while (isRunning) {
> ctx.collect("Flink");
> Thread.sleep(500);
> }
> }
> @Override
> public void cancel() {
> isRunning = false;
> System.out.println("[source] invoke cancel()");
> }
> @Override
> public void snapshotState(FunctionSnapshotContext context) throws 
> Exception {
> System.out.println("[source] invoke snapshotState()");
> }
> @Override
> public void initializeState(FunctionInitializationContext context) 
> throws Exception {
> System.out.println("[source] invoke initializeState()");
> }
> }
> public static class ExampleSink extends PrintSinkFunction<String>
> implements CheckpointedFunction {
> @Override
> public void snapshotState(FunctionSnapshotContext context) throws 
> Exception {
> System.out.println("[sink] invoke snapshotState()");
> }
> @Override
> public void initializeState(FunctionInitializationContext context) 
> throws Exception {
> System.out.println("[sink] invoke initializeState()");
> }
> }
> public static void main(String[] args) throws Exception {
> final StreamExecutionEnvironment env =
> 
> StreamExecutionEnvironment.getExecutionEnvironment().enableCheckpointing(1000);
> DataStream<String> stream = env.addSource(new ExampleSource());
> stream.addSink(new ExampleSink()).setParallelism(1);
> env.execute();
> }
> }
> {code}
> {code:java}
> $ java -jar ./example.jar
> [sink] invoke initializeState()
> [source] invoke initializeState()
> [source] invoke open()
> [source] invoke run()
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> Flink
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> Flink
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> ^C
> {code}
> * *+Case 2:+* Does not work as expected (w/ _parallelism = 1_)
> {code:java}
> public class Example {
> public static class ExampleSource extends RichSourceFunction
> implements CheckpointedFunction {
> private volatile boolean isRunning = true;
> @Override
> public void open(Configuration parameters) throws Exception {
> System.out.println("[source] invoke open()");
> }
> @Override
> public void close() throws Exception {
> isRunning = false;
> System.out.println("[source] invoke close()");
> }
> @Override
> public void run(SourceContext ctx) throws Exception {
> System.out.println("[source] invoke run()");
> while (isRunning) {
> ctx.collect("Flink");
> Thread.sleep(500);
> }
> }
> @Overri

[jira] [Updated] (FLINK-22955) lookup join filter push down result to mismatch function signature

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22955:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical, but it is unassigned and neither it nor its sub-tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Otherwise, please remove the label, or in 7 days the issue will be 
deprioritized.


> lookup join filter push down result to mismatch function signature
> --
>
> Key: FLINK-22955
> URL: https://issues.apache.org/jira/browse/FLINK-22955
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.11.3, 1.13.1, 1.12.4
> Environment: Flink 1.13.1
> how to reproduce: patch file attached
>Reporter: Cooper Luan
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.11.4, 1.12.5, 1.13.2
>
> Attachments: 
> 0001-try-to-produce-lookup-join-filter-pushdown-expensive.patch
>
>
> A SQL statement like this may result in a lookup function signature mismatch 
> exception when the SQL is explained:
> {code:sql}
> CREATE TEMPORARY VIEW v_vvv AS
> SELECT * FROM MyTable AS T
> JOIN LookupTableAsync1 FOR SYSTEM_TIME AS OF T.proctime AS D
> ON T.a = D.id;
> SELECT a,b,id,name
> FROM v_vvv
> WHERE age = 10;{code}
> the lookup function is
> {code:scala}
> class AsyncTableFunction1 extends AsyncTableFunction[RowData] {
>   def eval(resultFuture: CompletableFuture[JCollection[RowData]], a: 
> Integer): Unit = {
>   }
> }{code}
> exec plan is
> {code:java}
> LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`], 
> fields=[a, b, id, name])
> +- LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], 
> joinType=[InnerJoin], async=[true], lookup=[age=10, id=a], where=[(age = 
> 10)], select=[a, b, id, name])
>+- Calc(select=[a, b])
>   +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], 
> fields=[a, b, c, proctime, rowtime])
> {code}
> the "lookup=[age=10, id=a]" result to mismatch signature mismatch
>  
> but if I add one more insert, it works well:
> {code:sql}
> SELECT a,b,id,name
> FROM v_vvv
> WHERE age = 30
> {code}
> exec plan is
> {code:java}
> == Optimized Execution Plan ==
> LookupJoin(table=[default_catalog.default_database.LookupTableAsync1], 
> joinType=[InnerJoin], async=[true], lookup=[id=a], select=[a, b, c, proctime, 
> rowtime, id, name, age, ts])(reuse_id=[1])
> +- DataStreamScan(table=[[default_catalog, default_database, MyTable]], 
> fields=[a, b, c, proctime, 
> rowtime])LegacySink(name=[`default_catalog`.`default_database`.`appendSink1`],
>  fields=[a, b, id, name])
> +- Calc(select=[a, b, id, name], where=[(age = 10)])
>+- 
> Reused(reference_id=[1])LegacySink(name=[`default_catalog`.`default_database`.`appendSink2`],
>  fields=[a, b, id, name])
> +- Calc(select=[a, b, id, name], where=[(age = 30)])
>+- Reused(reference_id=[1])
> {code}
>  the LookupJoin node uses "lookup=[id=a]" (right), not "lookup=[age=10, id=a]" 
> (wrong)
>  
> so, in "multi insert" case, planner works great
> in "single insert" case, planner throw exception



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22517) Fix pickle compatibility problem in different Python versions

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22517:
---
Labels: auto-unassigned stale-major  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major, but it is unassigned and neither it nor its sub-tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Otherwise, please remove the label, or in 7 days the issue will be deprioritized.


> Fix pickle compatibility problem in different Python versions
> -
>
> Key: FLINK-22517
> URL: https://issues.apache.org/jira/browse/FLINK-22517
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0, 1.12.3
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: auto-unassigned, stale-major
>
> Since release-1.12, PyFlink has supported Python 3.8. Starting from Python 
> 3.8, the default protocol version used by pickle is protocol 5 
> (https://www.python.org/dev/peps/pep-0574/), which raises the 
> following exception if the client uses Python 3.8 to compile the program and 
> the cluster node uses Python 3.7 or 3.6 to run the Python UDF:
> {code:python}
> ValueError: unsupported pickle protocol: 5
> {code}
> The workaround is to make the Python version used by the client 3.6 or 3.7. 
> For how to specify the client-side Python execution environment, please 
> refer to the 
> doc(https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/python/python_config.html#python-client-executable).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22311) Flink JDBC XA connector need to set maxRetries to 0 to properly working

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22311:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink JDBC XA connector need to set maxRetries to 0 to properly working
> ---
>
> Key: FLINK-22311
> URL: https://issues.apache.org/jira/browse/FLINK-22311
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.13.0
>Reporter: Maciej Bryński
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.14.0
>
>
> Hi,
> We're using the XA connector from Flink 1.13 in one of our projects, and we 
> were able to create duplicate records while writing to Oracle.
> The reason is that the default MAX_RETRIES in JdbcExecutionOptions is 3, and 
> retries can cause duplicates in the DB.
> I think we should at least mention this in the docs, or even validate this 
> option when creating the XA sink.
> In the documentation we're using the defaults.
> https://github.com/apache/flink/pull/10847/files#diff-a585e56c997756bb7517ebd2424e5fab5813cee67d8dee3eab6ddd0780aff627R88



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22190) no guarantee on Flink exactly_once sink to Kafka

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22190:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> no guarantee on Flink exactly_once sink to Kafka 
> -
>
> Key: FLINK-22190
> URL: https://issues.apache.org/jira/browse/FLINK-22190
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.12.2
> Environment: *flink: 1.12.2*
> *kafka: 2.7.0*
>Reporter: Spongebob
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> When I tried to test Flink's exactly_once sink to Kafka, I found it does not 
> run as expected. Here's the pipeline of the Flink applications: raw 
> data(flink app0) -> kafka topic1 -> flink app1 -> kafka 
> topic2 -> flink app2. Flink tasks may hit a "/ by zero" ArithmeticException 
> at random. Below are the codes:
> {code:java}
> // code placeholder
> raw data, flink app0:
> class SimpleSource1 extends SourceFunction[String] {
>  var switch = true
>  val students: Array[String] = Array("Tom", "Jerry", "Gory")
>  override def run(sourceContext: SourceFunction.SourceContext[String]): Unit 
> = {
>  var i = 0
>  while (switch) {
>  sourceContext.collect(s"${students(Random.nextInt(students.length))},$i")
>  i += 1
>  Thread.sleep(5000)
>  }
>  }
>  override def cancel(): Unit = switch = false
> }
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val dataStream = streamEnv.addSource(new SimpleSource1)
> dataStream.addSink(new FlinkKafkaProducer[String]("xfy:9092", 
> "single-partition-topic-2", new SimpleStringSchema()))
> streamEnv.execute("sink kafka")
>  
> flink-app1:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> streamEnv.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE)
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>  "single-partition-topic-2",
>  new SimpleStringSchema,
>  prop
> ))
> val resultStream = dataStream.map(x => {
>  val data = x.split(",")
>  (data(0), data(1), data(1).toInt / Random.nextInt(5)).toString()
> }
> )
> resultStream.print().setParallelism(1)
> val propProducer = new Properties()
> propProducer.setProperty("bootstrap.servers", "xfy:9092")
> propProducer.setProperty("transaction.timeout.ms", s"${1000 * 60 * 5}")
> resultStream.addSink(new FlinkKafkaProducer[String](
>  "single-partition-topic",
>  new MyKafkaSerializationSchema("single-partition-topic"),
>  propProducer,
>  Semantic.EXACTLY_ONCE))
> streamEnv.execute("sink kafka")
>  
> flink-app2:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> prop.setProperty("isolation_level", "read_committed")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>  "single-partition-topic",
>  new SimpleStringSchema,
>  prop
> ))
> dataStream.print().setParallelism(1)
> streamEnv.execute("consumer kafka"){code}
>  
> Flink app1 prints some duplicate numbers; I expected Flink app2 to 
> deduplicate them, but in fact it does not.
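One thing worth double-checking in flink-app2 above: the Kafka consumer config key is "isolation.level" (with a dot), so setting "isolation_level" is an unknown key that the consumer ignores, leaving the default "read_uncommitted" in effect, which would explain seeing uncommitted duplicates downstream. A minimal Java sketch of the corrected consumer properties (a guess at the root cause, not a confirmed diagnosis):

```java
import java.util.Properties;

public class ReadCommittedProps {
    // Builds consumer properties that actually enable read_committed isolation.
    static Properties consumerProps(String bootstrap, String groupId) {
        Properties prop = new Properties();
        prop.setProperty("bootstrap.servers", bootstrap);
        prop.setProperty("group.id", groupId);
        // Kafka expects "isolation.level"; "isolation_level" is an unknown key,
        // so the consumer keeps the default "read_uncommitted".
        prop.setProperty("isolation.level", "read_committed");
        return prop;
    }

    public static void main(String[] args) {
        Properties p = consumerProps("xfy:9092", "test");
        System.out.println(p.getProperty("isolation.level")); // prints "read_committed"
    }
}
```

With read_uncommitted in effect, a consumer sees records from aborted Kafka transactions, so exactly-once on the producer side alone cannot prevent duplicates downstream.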



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22017) Regions may never be scheduled when there are cross-region blocking edges

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22017:
---
Labels: auto-deprioritized-critical stale-major  (was: 
auto-deprioritized-critical)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major, but it is unassigned and neither it nor its sub-tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Otherwise, please remove the label, or in 7 days the issue will be deprioritized.


> Regions may never be scheduled when there are cross-region blocking edges
> -
>
> Key: FLINK-22017
> URL: https://issues.apache.org/jira/browse/FLINK-22017
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: Zhilong Hong
>Priority: Major
>  Labels: auto-deprioritized-critical, stale-major
> Attachments: Illustration.jpg
>
>
> For the topology with cross-region blocking edges, there are regions that may 
> never be scheduled. The case is illustrated in the figure below.
> !Illustration.jpg!
> Let's denote the vertices as v{layer}_{number}. It's clear that the edge 
> connecting v2_2 and v3_2 crosses region 1 and region 2. Since region 1 has no 
> blocking edges connected to other regions, it will be scheduled first. When 
> vertex2_2 finishes, PipelinedRegionSchedulingStrategy will trigger 
> {{onExecutionStateChange}} for it.
> One would expect region 2 to be scheduled, since all its consumed partitions 
> are now consumable. But in fact region 2 won't be scheduled, because the 
> result partition of vertex2_2 is not tagged as consumable: whether it is 
> consumable is determined by its IntermediateDataSet.
> However, an IntermediateDataSet is consumable if and only if all the 
> producers of its IntermediateResultPartitions are finished. This 
> IntermediateDataSet will never become consumable, since vertex2_3 is not 
> scheduled. All in all, this forms a deadlock: a region will never be 
> scheduled because it has not been scheduled.
> As a solution, we should let BLOCKING result partitions become consumable 
> individually. Note that this will make the scheduling execution-vertex-wise 
> instead of stage-wise, with a nice side effect of better resource 
> utilization. The PipelinedRegionSchedulingStrategy can be simplified along 
> with this change to get rid of the correlatedResultPartitions.
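The dataset-level versus per-partition consumable rule can be sketched in a few lines of plain Java (a simplified model, not Flink's actual scheduler classes): under the dataset-level rule, the blocking dataset never becomes consumable while one of its producers sits in the unscheduled region, so that region never starts; under the per-partition rule it does.

```java
public class ConsumableModel {
    // Dataset-level rule: consumable only when ALL producer partitions finished.
    static boolean datasetConsumable(boolean... producersFinished) {
        for (boolean finished : producersFinished) {
            if (!finished) return false;
        }
        return true;
    }

    // Proposed per-partition rule: each BLOCKING partition is consumable on its own.
    static boolean partitionConsumable(boolean producerFinished) {
        return producerFinished;
    }

    public static void main(String[] args) {
        boolean vertex2_2Finished = true;   // region 1 ran and finished this producer
        boolean vertex2_3Finished = false;  // sits in the unscheduled region 2

        // Dataset-level rule: region 2 never becomes schedulable -> deadlock.
        System.out.println(datasetConsumable(vertex2_2Finished, vertex2_3Finished)); // false
        // Per-partition rule: vertex2_2's partition alone unblocks region 2.
        System.out.println(partitionConsumable(vertex2_2Finished)); // true
    }
}
```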



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22075) Incorrect null outputs in left join

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22075:
---
Labels: auto-deprioritized-critical auto-unassigned stale-major  (was: 
auto-deprioritized-critical auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major, but it is unassigned and neither it nor its sub-tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. 
If this ticket is Major, please either assign yourself or give an update. 
Otherwise, please remove the label, or in 7 days the issue will be deprioritized.


> Incorrect null outputs in left join
> ---
>
> Key: FLINK-22075
> URL: https://issues.apache.org/jira/browse/FLINK-22075
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.2
> Environment: 
> https://github.com/jamii/streaming-consistency/blob/4e5d144dacf85e512bdc7afd77d031b5974d733e/pkgs.nix#L25-L46
> ```
> [nix-shell:~/streaming-consistency/flink]$ java -version
> openjdk version "1.8.0_265"
> OpenJDK Runtime Environment (build 1.8.0_265-ga)
> OpenJDK 64-Bit Server VM (build 25.265-bga, mixed mode)
> [nix-shell:~/streaming-consistency/flink]$ flink --version
> Version: 1.12.2, Commit ID: 4dedee0
> [nix-shell:~/streaming-consistency/flink]$ nix-info
> system: "x86_64-linux", multi-user?: yes, version: nix-env (Nix) 2.3.10, 
> channels(jamie): "", channels(root): "nixos-20.09.3554.f8929dce13e", nixpkgs: 
> /nix/var/nix/profiles/per-user/root/channels/nixos
> ```
>Reporter: Jamie Brandon
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, stale-major
>
> I'm left joining a table with itself 
> [here](https://github.com/jamii/streaming-consistency/blob/4e5d144dacf85e512bdc7afd77d031b5974d733e/flink/src/main/java/Demo.java#L55-L66).
>  The output should have no nulls, or at least emit nulls and then retract 
> them. Instead I see:
> ```
> jamie@machine:~/streaming-consistency/flink$ wc -l tmp/outer_join_with_time
> 10 tmp/outer_join_with_time
> jamie@machine:~/streaming-consistency/flink$ grep -c insert 
> tmp/outer_join_with_time
> 10
> jamie@machine:~/streaming-consistency/flink$ grep -c 'null' 
> tmp/outer_join_with_time
> 16943
> ```
> ~1.7% of the outputs are incorrect and never retracted.
> [Full 
> output](https://gist.githubusercontent.com/jamii/983fee41609b1425fe7fa59d3249b249/raw/069b9dcd4faf9f6113114381bc7028c6642ca787/gistfile1.txt)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22306) KafkaITCase.testCollectingSchema failed on AZP

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22306:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
test-stability  (was: auto-deprioritized-critical stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaITCase.testCollectingSchema failed on AZP
> --
>
> Key: FLINK-22306
> URL: https://issues.apache.org/jira/browse/FLINK-22306
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
> Fix For: 1.14.0
>
>
> The {{KafkaITCase.testCollectingSchema}} failed on AZP with
> {code}
> 2021-04-15T10:22:06.8263865Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2021-04-15T10:22:06.8266577Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2021-04-15T10:22:06.8267526Z  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2021-04-15T10:22:06.8268034Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2021-04-15T10:22:06.8268496Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2021-04-15T10:22:06.8269133Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-15T10:22:06.8270205Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-15T10:22:06.8270698Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> 2021-04-15T10:22:06.8271192Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2021-04-15T10:22:06.8274903Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2021-04-15T10:22:06.8275602Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2021-04-15T10:22:06.8276139Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2021-04-15T10:22:06.8276589Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
> 2021-04-15T10:22:06.8276965Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2021-04-15T10:22:06.8277307Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2021-04-15T10:22:06.8277634Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2021-04-15T10:22:06.8277971Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2021-04-15T10:22:06.8278352Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-15T10:22:06.8278767Z  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> 2021-04-15T10:22:06.8279223Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 2021-04-15T10:22:06.8279743Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 2021-04-15T10:22:06.8280130Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 2021-04-15T10:22:06.8280561Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2021-04-15T10:22:06.8287231Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2021-04-15T10:22:06.8291223Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 2021-04-15T10:22:06.8291779Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 2021-04-15T10:22:06.8292745Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2021-04-15T10:22:06.8293335Z  at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
> 2021-04-15T10:22:06.8294000Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8294702Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8295281Z  at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
> 2021-04-15T10:22:06.8295905Z  at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
> 2021-04-

[jira] [Updated] (FLINK-22342) FlinkKafkaProducerITCase fails with producer leak

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22342:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
test-stability  (was: auto-deprioritized-critical stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FlinkKafkaProducerITCase fails with producer leak
> -
>
> Key: FLINK-22342
> URL: https://issues.apache.org/jira/browse/FLINK-22342
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.3
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16732&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20&l=6386
> {code}
> [ERROR] 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 8.854 s  <<< FAILURE!
> java.lang.AssertionError: Detected producer leak. Thread name: 
> kafka-producer-network-thread | 
> producer-MockTask-002a002c-11
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:728)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21876) Handle it properly when the returned value of Python UDF doesn't match the defined result type

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21876:
---
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Handle it properly when the returned value of Python UDF doesn't match the 
> defined result type
> --
>
> Key: FLINK-21876
> URL: https://issues.apache.org/jira/browse/FLINK-21876
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Affects Versions: 1.10.0, 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Fix For: 1.14.0, 1.12.5
>
>
> Currently, when the returned value of a Python UDF doesn't match its defined 
> result type, the following exception is thrown during 
> execution:
> {code:java}
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:197)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.flink.table.runtime.typeutils.StringDataSerializer.deserializeInternal(StringDataSerializer.java:88)
> at 
> org.apache.flink.table.runtime.typeutils.StringDataSerializer.deserialize(StringDataSerializer.java:82)
> at 
> org.apache.flink.table.runtime.typeutils.StringDataSerializer.deserialize(StringDataSerializer.java:34)
> at 
> org.apache.flink.table.runtime.typeutils.serializers.python.MapDataSerializer.deserializeInternal(MapDataSerializer.java:129)
> at 
> org.apache.flink.table.runtime.typeutils.serializers.python.MapDataSerializer.deserialize(MapDataSerializer.java:110)
> at 
> org.apache.flink.table.runtime.typeutils.serializers.python.MapDataSerializer.deserialize(MapDataSerializer.java:46)
> at 
> org.apache.flink.table.runtime.typeutils.serializers.python.RowDataSerializer.deserialize(RowDataSerializer.java:106)
> at 
> org.apache.flink.table.runtime.typeutils.serializers.python.RowDataSerializer.deserialize(RowDataSerializer.java:49)
> at 
> org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator.emitResult(RowDataPythonScalarFunctionOperator.java:81)
> at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.emitResults(AbstractPythonFunctionOperator.java:250)
> at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.invokeFinishBundle(AbstractPythonFunctionOperator.java:273)
> at 
> org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.processWatermark(AbstractPythonFunctionOperator.java:199)
> at 
> org.apache.flink.streaming.runtime.tasks.ChainingOutput.emitWatermark(ChainingOutput.java:123)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitWatermark(SourceOperatorStreamTask.java:170)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.advanceToEndOfEventTime(SourceOperatorStreamTask.java:110)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask.afterInvoke(SourceOperatorStreamTask.java:116)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:589)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:755)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:570)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> The exception isn't straightforward, and it's difficult for users to figure 
> out the root cause of the issue.
> As Python is a dynamic language, this case should be very common, and it 
> would be great if we could handle it properly.
> See 
> [https://stackoverflow.com/questions/66687797/pyflink-java-io-eofexception-at-java-io-datainputstream-readfully]
>  for more details.
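The EOFException in the stack trace above is a symptom rather than the real error: the Java side tries to deserialize the declared result type from bytes the Python worker produced for a different actual type, and runs out of input mid-field. The effect can be reproduced with plain java.io (a self-contained illustration, not PyFlink's actual serializers):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;

public class ShortReadDemo {
    public static void main(String[] args) throws Exception {
        // The deserializer expects 8 bytes for the declared type...
        byte[] expected = new byte[8];
        // ...but the producer only wrote 3 bytes for the value actually returned.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[] {1, 2, 3}));
        try {
            in.readFully(expected); // same call as in StringDataSerializer's frame
            System.out.println("read ok");
        } catch (EOFException e) {
            // Surfaces as a bare EOFException, hiding the real cause:
            // a result-type mismatch on the Python side.
            System.out.println("EOFException");
        }
    }
}
```

Mapping this low-level failure to a clear "returned value does not match declared result type" message is exactly the improvement the ticket asks for.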



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22388) KafkaITCase.testMetricsAndEndOfStream times out on topic creation

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22388:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
test-stability  (was: auto-deprioritized-critical stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaITCase.testMetricsAndEndOfStream times out on topic creation
> -
>
> Key: FLINK-22388
> URL: https://issues.apache.org/jira/browse/FLINK-22388
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16904&view=logs&j=c5612577-f1f7-5977-6ff6-7432788526f7&t=53f6305f-55e6-561c-8f1e-3a1dde2c77df&l=6639
> {code}
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:502)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:217)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:112)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:212)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.writeSequence(KafkaConsumerTestBase.java:2265)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runEndOfStreamTest(KafkaConsumerTestBase.java:1705)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMetricsAndEndOfStream(KafkaITCase.java:139)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22631) Metrics are incorrect when task finished

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22631:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Metrics are incorrect when task finished
> 
>
> Key: FLINK-22631
> URL: https://issues.apache.org/jira/browse/FLINK-22631
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Li
>Priority: Minor
>  Labels: auto-deprioritized-major
> Attachments: image-2021-05-11-20-13-25-886.png, 
> image-2021-05-11-20-14-29-290.png, image-2021-05-19-10-10-29-765.png, 
> image-2021-05-19-10-11-02-764.png
>
>
> MetricReporters report periodically, every 10 seconds by default. The 
> final metrics may therefore never reach the metric system (e.g. Pushgateway) when a 
> task finishes, which leaves users unable to obtain the correct final metrics.
>
> Maybe MetricReporters should report once more manually before being closed.
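The flush-on-close idea proposed above can be sketched framework-agnostically. This is a hypothetical illustration, not Flink's actual MetricReporter API: a push-style reporter that sends one final snapshot in close(), so values recorded after the last scheduled report still reach the backend.

```python
# Hypothetical sketch (not Flink's MetricReporter interface): a reporter
# that pushes one final report before closing, so metrics emitted between
# the last scheduled report and task shutdown are not lost.

class PushReporter:
    def __init__(self, push):
        self._push = push        # callable that sends a snapshot downstream
        self._metrics = {}
        self._closed = False

    def update(self, name, value):
        self._metrics[name] = value

    def report(self):
        # called periodically by a scheduler (default: every 10 seconds)
        self._push(dict(self._metrics))

    def close(self):
        # flush once more so the final values reach the backend
        if not self._closed:
            self.report()
            self._closed = True
```

With this pattern a task that updates a counter and shuts down before the next scheduled report still delivers the final value, because close() performs one last report exactly once.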



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22004) Translate Flink Roadmap to Chinese.

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22004:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Translate Flink Roadmap to Chinese.
> ---
>
> Key: FLINK-22004
> URL: https://issues.apache.org/jira/browse/FLINK-22004
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Yuan Mei
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Attachments: Screen Shot 2021-04-11 at 10.24.02 PM.png
>
>
> https://flink.apache.org/roadmap.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22605) Python UDF should only be created once regardless of how many times it is invoked

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22605:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Python UDF should only be created once regardless of how many times it is 
> invoked
> -
>
> Key: FLINK-22605
> URL: https://issues.apache.org/jira/browse/FLINK-22605
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Yik San Chan
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> Follow up to 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/I-call-Pandas-UDF-N-times-do-I-have-to-initiate-the-UDF-N-times-tt43576.html.
> Currently, when we call a Python UDF N times, the UDF is constructed N 
> times. This can become a performance concern when we load large 
> resources in the open() method of the UDF, which is quite common in machine 
> learning use cases.
> I propose we optimize this at the PyFlink framework level so that, no matter how many 
> times a UDF is called in the execution environment, it is only instantiated once.
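The once-only initialization proposed above boils down to caching UDF instances by identity. The sketch below is a plain-Python illustration of that pattern, not PyFlink's internal machinery; the names (ModelUdf, get_udf) are hypothetical.

```python
# Hypothetical sketch of the proposed behavior (not PyFlink internals):
# cache UDF instances by a key so that invoking the same UDF N times
# reuses one instance, and the expensive open() runs only once.

_udf_cache = {}

class ModelUdf:
    opened = 0  # counts how many times open() ran, for demonstration

    def open(self):
        # expensive resource loading (e.g. a large ML model) happens here
        ModelUdf.opened += 1

    def eval(self, x):
        return x * 2

def get_udf(key, factory):
    # construct and open the UDF only on first use; reuse afterwards
    if key not in _udf_cache:
        udf = factory()
        udf.open()
        _udf_cache[key] = udf
    return _udf_cache[key]
```

Requesting the same key twice returns the same object, so open() (and its resource loading) runs once regardless of how often the UDF is referenced in the job.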



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22705) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to fail to download the tar

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22705:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> fail to download the tar
> --
>
> Key: FLINK-22705
> URL: https://issues.apache.org/jira/browse/FLINK-22705
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18100&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=18408
> {code:java}
> May 18 17:24:23 Preparing Elasticsearch (version=7)...
> May 18 17:24:23 Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.5.1-no-jdk-linux-x86_64.tar.gz
>  ...
> --2021-05-18 17:24:23--  
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.5.1-no-jdk-linux-x86_64.tar.gz
> Resolving artifacts.elastic.co (artifacts.elastic.co)... 34.120.127.130, 
> 2600:1901:0:1d7::
> Connecting to artifacts.elastic.co 
> (artifacts.elastic.co)|34.120.127.130|:443... failed: Connection timed out.
> Connecting to artifacts.elastic.co 
> (artifacts.elastic.co)|2600:1901:0:1d7::|:443... failed: Network is 
> unreachable.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> May 18 17:26:34 [FAIL] Test script contains errors.
> May 18 17:26:34 Checking for errors...
> May 18 17:26:34 No errors in log files.
> May 18 17:26:34 Checking for exceptions...
> May 18 17:26:34 No exceptions in log files.
> May 18 17:26:34 Checking for non-empty .out files...
> grep: /home/vsts/work/_temp/debug_files/flink-logs/*.out: No such file or 
> directory
> May 18 17:26:34 No non-empty .out files.
> May 18 17:26:34 
> May 18 17:26:34 [FAIL] 'SQL Client end-to-end test (Old planner) 
> Elasticsearch (v7.5.1)' failed after 2 minutes and 36 seconds! Test exited 
> with exit code 1
> May 18 17:26:34
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21336) Activate bloom filter in RocksDB State Backend via Flink configuration

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21336:
---
Labels: auto-unassigned stale-assigned  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 14 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it. If the "warning_label" label is not removed in 7 days, the 
issue will be automatically unassigned.


> Activate bloom filter in RocksDB State Backend via Flink configuration
> --
>
> Key: FLINK-21336
> URL: https://issues.apache.org/jira/browse/FLINK-21336
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Jun Qin
>Assignee: Jun Qin
>Priority: Major
>  Labels: auto-unassigned, stale-assigned
>
> Activating bloom filter in the RocksDB state backend improves read 
> performance. Currently activating bloom filter can only be done by 
> implementing a custom ConfigurableRocksDBOptionsFactory. I think we should 
> provide an option to activate bloom filter via Flink configuration.
> See also the discussion in ML:
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Activate-bloom-filter-in-RocksDB-State-Backend-via-Flink-configuration-td48636.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22710) ParquetProtoStreamingFileSinkITCase.testParquetProtoWriters failed

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22710:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> ParquetProtoStreamingFileSinkITCase.testParquetProtoWriters failed
> --
>
> Key: FLINK-22710
> URL: https://issues.apache.org/jira/browse/FLINK-22710
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.14.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18114&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=13181
> {code:java}
> May 19 03:00:31 [ERROR]   
> ParquetProtoStreamingFileSinkITCase.testParquetProtoWriters:83->validateResults:92
>  expected:<1> but was:<2>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16301: [FLINK-23162][docs] Updated column expression which throws exception

2021-06-26 Thread GitBox


flinkbot commented on pull request #16301:
URL: https://github.com/apache/flink/pull/16301#issuecomment-869067859


   
   ## CI report:
   
   * aac82eeaa2d944ecce4f420cb600c8ed74bce531 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Mans Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mans Singh updated FLINK-23162:
---
Description: 
The create table example in 
[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
 uses `time_ltz` in both its declaration and its expression:
{quote}CREATE TABLE user_actions (
 user_name STRING,
 data STRING,
 ts BIGINT,
 time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
 -- declare time_ltz as event time attribute and use 5 seconds delayed watermark 
strategy
 WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
 ) WITH (
 ...
 );
{quote}
When it is executed in the Flink SQL client it throws an exception:
{quote}[ERROR] Could not execute SQL statement. Reason:
 org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
'time_ltz'
{quote}
The create table works if the expression uses ts as the argument while 
declaring time_ltz.

  was:
The create table example in 
[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
 uses `time_ltz` in its declaration:
{quote}CREATE TABLE user_actions (
 user_name STRING,
 data STRING,
 ts BIGINT,
 time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
 -- declare time_ltz as event time attribute and use 5 seconds delayed watermark 
strategy
 WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
 ) WITH (
 ...
 );
{quote}
When it is executed in the Flink SQL client it throws an exception:
{quote}[ERROR] Could not execute SQL statement. Reason:
 org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
'time_ltz'
{quote}
The create table works if the expression uses ts as the argument while 
declaring time_ltz.


> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples, Table SQL / Client
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
>  uses `time_ltz` in both its declaration and its expression:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed 
> watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
> 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while 
> declaring time_ltz.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Mans Singh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17370075#comment-17370075
 ] 

Mans Singh commented on FLINK-23162:


Please assign this ticket to me. Thanks.

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples, Table SQL / Client
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
>  uses `time_ltz` in its declaration:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed 
> watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
> 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while 
> declaring time_ltz.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #16301: [FLINK-23162][docs] Updated column expression which throws exception

2021-06-26 Thread GitBox


flinkbot commented on pull request #16301:
URL: https://github.com/apache/flink/pull/16301#issuecomment-869064663


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit aac82eeaa2d944ecce4f420cb600c8ed74bce531 (Sat Jun 26 
21:39:10 UTC 2021)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-23162).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-23162:
---
Labels: doc example pull-request-available sql  (was: doc example sql)

> Create table uses time_ltz in the column name and its expression which 
> results in exception 
> -
>
> Key: FLINK-23162
> URL: https://issues.apache.org/jira/browse/FLINK-23162
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples, Table SQL / Client
>Affects Versions: 1.13.1
>Reporter: Mans Singh
>Priority: Minor
>  Labels: doc, example, pull-request-available, sql
> Fix For: 1.14.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> The create table example in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
>  uses `time_ltz` in its declaration:
> {quote}CREATE TABLE user_actions (
>  user_name STRING,
>  data STRING,
>  ts BIGINT,
>  time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
>  -- declare time_ltz as event time attribute and use 5 seconds delayed 
> watermark strategy
>  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
>  ) WITH (
>  ...
>  );
> {quote}
> When it is executed in the Flink SQL client it throws an exception:
> {quote}[ERROR] Could not execute SQL statement. Reason:
>  org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
> 'time_ltz'
> {quote}
> The create table works if the expression uses ts as the argument while 
> declaring time_ltz.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] mans2singh opened a new pull request #16301: [FLINK-23162][docs] Updated column used in expression for creating table

2021-06-26 Thread GitBox


mans2singh opened a new pull request #16301:
URL: https://github.com/apache/flink/pull/16301


   
   ## What is the purpose of the change
   
   * Create table uses time_ltz in the column name and its expression which 
results in exception
   
   ## Brief change log
   
   * Create table uses time_ltz in the column name and its expression which 
results in exception
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-23162) Create table uses time_ltz in the column name and its expression which results in exception

2021-06-26 Thread Mans Singh (Jira)
Mans Singh created FLINK-23162:
--

 Summary: Create table uses time_ltz in the column name and its 
expression which results in exception 
 Key: FLINK-23162
 URL: https://issues.apache.org/jira/browse/FLINK-23162
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Examples, Table SQL / Client
Affects Versions: 1.13.1
Reporter: Mans Singh
 Fix For: 1.14.0


The create table example in 
[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/time_attributes/]
 uses `time_ltz` in its declaration:
{quote}CREATE TABLE user_actions (
 user_name STRING,
 data STRING,
 ts BIGINT,
 time_ltz AS TO_TIMESTAMP_LTZ(time_ltz, 3),
 -- declare time_ltz as event time attribute and use 5 seconds delayed watermark 
strategy
 WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND
 ) WITH (
 ...
 );
{quote}
When it is executed in the Flink SQL client it throws an exception:
{quote}[ERROR] Could not execute SQL statement. Reason:
 org.apache.calcite.sql.validate.SqlValidatorException: Unknown identifier 
'time_ltz'
{quote}
The create table works if the expression uses ts as the argument while 
declaring time_ltz.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23147) ThreadPools can be poisoned by context class loaders

2021-06-26 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369943#comment-17369943
 ] 

Chesnay Schepler commented on FLINK-23147:
--

>  I don't see why this is different for Preconditions.

It's less about the class, and more about _our_ plugins so far not being 
considered consumers of purely-public APIs. Could we change that? Sure, but 
then we end up increasing the surface area of our considered-user-facing API 
for marginal gains.

What do _we_ actually gain by restricting _our_ reporters to either not use 
_our_ Preconditions or increasing _our_ public API surface?

> ThreadPools can be poisoned by context class loaders
> 
>
> Key: FLINK-23147
> URL: https://issues.apache.org/jira/browse/FLINK-23147
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.14.0
>
>
> Newly created threads in a thread pool inherit the context class loader (CCL) 
> of the currently running thread.
> For thread pools this is very problematic because the CCL is unlikely to be 
> reset at any point; not only can this leak another CL by accident, it can 
> also cause class loading issues, for example when using a {{ServiceLoader}} 
> because it relies on the CCL.
> With the scala-free runtime this for example means that if an actor system 
> thread schedules something into the future thread pool of the JM, then a new 
> thread is created which uses a plugin loader as its CCL. The plugin loaders are 
> quite restrictive and prevent the loading of 3rd-party dependencies; so if 
> the JM schedules something into the future thread pool which requires one of 
> these dependencies to be accessible then we're gambling as to whether this 
> dependency can actually be loaded in the end.
> Because it is difficult to ensure that we set the CCL correctly on all 
> transitions from akka to Flink land I suggest to add a safeguard to the 
> ExecutorThreadFactory to enforce that newly created threads are always 
> initialized with the CL that has loaded Flink.
> /cc [~arvid] [~sewen]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868995159


   
   ## CI report:
   
   * e2e758de4bdb0eef6cc56752250bc9afd4274ff5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19575)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869008061


   
   ## CI report:
   
   * 601bd5d457672104c5fe2c69963116af36b0ea34 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19576)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868995159


   
   ## CI report:
   
   * 9bc98631ec8726b866311a41f2dd528e4e72acb9 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19574)
 
   * e2e758de4bdb0eef6cc56752250bc9afd4274ff5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19575)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869008061


   
   ## CI report:
   
   * 601bd5d457672104c5fe2c69963116af36b0ea34 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19576)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


flinkbot commented on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869008061


   
   ## CI report:
   
   * 601bd5d457672104c5fe2c69963116af36b0ea34 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868995159


   
   ## CI report:
   
   * 9bc98631ec8726b866311a41f2dd528e4e72acb9 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19574)
 
   * e2e758de4bdb0eef6cc56752250bc9afd4274ff5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


flinkbot commented on pull request #16300:
URL: https://github.com/apache/flink/pull/16300#issuecomment-869002179


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 601bd5d457672104c5fe2c69963116af36b0ea34 (Sat Jun 26 
13:29:58 UTC 2021)
   
   **Warnings:**
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] RocMarshal opened a new pull request #16300: [Hotfix][docs] Fixed typos.

2021-06-26 Thread GitBox


RocMarshal opened a new pull request #16300:
URL: https://github.com/apache/flink/pull/16300


   
   
   ## What is the purpose of the change
   
   *Fixed typos*
   
   
   ## Brief change log
   
   *Fixed typos*
   
   
   ## Verifying this change
   
   *Fixed typos*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   






[GitHub] [flink] flinkbot edited a comment on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868995159


   
   ## CI report:
   
   * 9bc98631ec8726b866311a41f2dd528e4e72acb9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19574)
 
   * e2e758de4bdb0eef6cc56752250bc9afd4274ff5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot commented on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868995159


   
   ## CI report:
   
   * 9bc98631ec8726b866311a41f2dd528e4e72acb9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


flinkbot commented on pull request #16299:
URL: https://github.com/apache/flink/pull/16299#issuecomment-868992492


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9bc98631ec8726b866311a41f2dd528e4e72acb9 (Sat Jun 26 
12:12:08 UTC 2021)
   
   **Warnings:**
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] wangjunyou opened a new pull request #16299: Remove extra commas from the "create" page

2021-06-26 Thread GitBox


wangjunyou opened a new pull request #16299:
URL: https://github.com/apache/flink/pull/16299


   ### What is the purpose of the change
   ***
   Remove extra commas from the "create" page
   
   ### Verifying this change
   ***
   alter file:  
   /flink/docs/content/docs/dev/table/sql/create.md  
   /flink/docs/content.zh/docs/dev/table/sql/create.md
   
   ### Does this pull request potentially affect one of the following parts:
   ***
   - Dependencies (does it add or upgrade a dependency): (yes / **no**)
   - The public API, i.e., is any changed class annotated with 
@Public(Evolving): (yes / **no**)
   - The serializers: (yes / **no** / don't know)
   - The runtime per-record code paths (performance sensitive): (yes / **no** / 
don't know)
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
   - The S3 file system connector: (yes / **no** / don't know)
   
   ### Documentation
   ***
   - Does this pull request introduce a new feature? (yes / **no**)
   - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)






[GitHub] [flink] flinkbot edited a comment on pull request #16290: [FLINK-23067][table-api-java] Introduce Table#executeInsert(TableDescriptor)

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16290:
URL: https://github.com/apache/flink/pull/16290#issuecomment-868395028


   
   ## CI report:
   
   * f7d3d618d7f73fceabe136cefeda2cb210f1d463 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19571)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19545)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16285: [FLINK-23070][table-api-java] Introduce TableEnvironment#createTable

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16285:
URL: https://github.com/apache/flink/pull/16285#issuecomment-868253867


   
   ## CI report:
   
   * 3a7708ead7c69816ae9debe89df06f06abed3381 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19570)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19544)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Closed] (FLINK-22940) Make SQL client column max width configurable

2021-06-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-22940.
---
Fix Version/s: 1.14.0
   Resolution: Fixed

Fixed in master: 84e1186d24ebbe6b3a4629496274d6337b333af1

> Make SQL client column max width configurable
> -
>
> Key: FLINK-22940
> URL: https://issues.apache.org/jira/browse/FLINK-22940
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Svend Vanderveken
>Assignee: Svend Vanderveken
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.14.0
>
>
> When displaying results interactively with the Flink SQL client, each column 
> is currently truncated based on its content type, up to a maximum of 30 
> characters, which is controlled by the java constant [1].
> In case some result to be displayed is too wide, a ~ is appended at the end 
> to indicate the truncation (which in practice happens at position 25), 
> as visible below:
>  
>  
> {code:java}
>  SELECT
>metadata.true_as_of_timestamp_millis,
>member_user_id
>  FROM some_table
>   
>  true_as_of_timestamp_mil~member_user_id 
>   1622811665919 45ca821f-c0fc-4114-bef8-~
>   1622811665919 45ca821f-c0fc-4114-bef8-~
>   1622118951005 b4734391-d3e1-417c-ad92-~
>  {code}
>  
> I suggest making this max width configurable by adding a parameter to [2] 
> that can be `SET`.
>  
> I also suggest making the default width wide enough that 36 usable 
> characters can be displayed, since UUIDs (which are 36 characters long when 
> represented as text) are very commonly used as identifiers, and therefore as 
> column values.
> This seems like an easy code update; if it's useful, I'm happy to work on 
> the implementation.
> [1] 
> [https://github.com/apache/flink/blob/6d8c02f90a5a3054015f2f1ee83be821d925ccd1/flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/PrintUtils.java#L74]
> [2] 
> [https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/config/SqlClientOptions.java]
>  
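The truncation behavior described in the issue can be sketched as a small toy model. This is not the actual `PrintUtils` implementation; the function name is hypothetical, and only the default width of 30 and the `~` marker are taken from the issue text:

```python
def truncate_cell(value: str, max_width: int = 30) -> str:
    """Truncate a display cell to max_width characters, marking cut-offs with '~'."""
    if len(value) <= max_width:
        return value
    # Keep max_width - 1 characters and append the truncation marker,
    # so the result is exactly max_width characters wide.
    return value[: max_width - 1] + "~"

# A 36-character UUID survives intact only if the configured width is >= 36.
print(truncate_cell("45ca821f-c0fc-4114-bef8-0123456789ab", 36))
```

Making `max_width` a settable option, as the issue proposes, amounts to threading a configured value into this call instead of the hard-coded constant.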



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #16245: [FLINK-22940][SQL-CLIENT] Make sql client column max width configurable

2021-06-26 Thread GitBox


wuchong merged pull request #16245:
URL: https://github.com/apache/flink/pull/16245


   






[jira] [Closed] (FLINK-4088) Add interface to save and load TableSources

2021-06-26 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-4088.
--
Resolution: Won't Fix

This can be achieved by FLIP-129. 

> Add interface to save and load TableSources
> ---
>
> Key: FLINK-4088
> URL: https://issues.apache.org/jira/browse/FLINK-4088
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.1.0
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Add an interface to save and load table sources similar to Java's 
> {{Serializable}} interface. 
> TableSources should implement the interface to become saveable and loadable.
> This could be used as follows:
> {code}
> val cts = new CsvTableSource(
>   "/path/to/csvfile",
>   Array("name", "age", "address"),
> Array(BasicTypeInfo.STRING_TYPE_INFO, ...),
>   ...
> )
> cts.saveToFile("/path/to/tablesource/file")
> // -
> val tEnv: TableEnvironment = ???
> tEnv.loadTableSource("persons", "/path/to/tablesource/file")
> {code}





[GitHub] [flink] wangjunyou closed pull request #16298: Remove extra commas from the create table page

2021-06-26 Thread GitBox


wangjunyou closed pull request #16298:
URL: https://github.com/apache/flink/pull/16298


   






[jira] [Updated] (FLINK-3462) Add position and magic number checks to record (de)serializers

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3462:
--
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add position and magic number checks to record (de)serializers
> --
>
> Key: FLINK-3462
> URL: https://issues.apache.org/jira/browse/FLINK-3462
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Ufuk Celebi
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> In order to improve the debugging experience in case of serialization errors, 
> we can add further sanity checks to the deserializers:
> - Check expected position in buffer in order to ensure that record boundaries 
> are not crossed within a buffer or bytes left unconsumed.
> - Add magic number to record serializer and deserializer in order to check 
> byte corruption. We currently only have this on a per buffer level.
> We will be able to use these checks to improve the error messages.
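The magic-number check proposed above can be illustrated with a toy framing scheme. This is not Flink's actual wire format; the sentinel value and the 4-byte length field are illustrative assumptions:

```python
import struct

MAGIC = 0xC0FFEE42  # illustrative sentinel, not Flink's actual value


def serialize_record(payload: bytes) -> bytes:
    # Frame: 4-byte magic number, 4-byte payload length, then the payload.
    return struct.pack(">II", MAGIC, len(payload)) + payload


def deserialize_record(buf: bytes, offset: int = 0):
    """Return (payload, next_offset), failing fast on corruption."""
    magic, length = struct.unpack_from(">II", buf, offset)
    if magic != MAGIC:
        # Byte corruption is detected per record, not just per buffer.
        raise ValueError("corrupt record: bad magic number")
    start = offset + 8
    end = start + length
    if end > len(buf):
        # Position check: the record must not cross the buffer boundary.
        raise ValueError("record crosses buffer boundary")
    return buf[start:end], end
```

Checking the expected position (`end`) after each record is what lets the deserializer report a precise error instead of silently consuming misaligned bytes.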





[jira] [Updated] (FLINK-4004) Do not pass custom flink kafka connector properties to Kafka to avoid warnings

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4004:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Do not pass custom flink kafka connector properties to Kafka to avoid warnings
> --
>
> Key: FLINK-4004
> URL: https://issues.apache.org/jira/browse/FLINK-4004
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> The FlinkKafkaConsumer has some custom properties, which we pass to the 
> KafkaConsumer as well (such as {{flink.poll-timeout}}). This causes Kafka to 
> log warnings about unused properties.
> We should not pass Flink-internal properties to Kafka, to avoid those 
> warnings.
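The fix described above amounts to stripping connector-internal keys before handing the properties to the Kafka client. A minimal sketch; the `flink.` prefix convention comes from the issue (e.g. `flink.poll-timeout`), while the helper name is hypothetical:

```python
FLINK_INTERNAL_PREFIX = "flink."


def strip_internal_properties(props: dict) -> dict:
    """Return a copy of props without Flink-internal keys such as 'flink.poll-timeout'.

    Only the cleaned copy should be passed to the Kafka client, so Kafka
    never sees (and never warns about) properties it does not understand.
    """
    return {k: v for k, v in props.items() if not k.startswith(FLINK_INTERNAL_PREFIX)}
```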





[jira] [Updated] (FLINK-3857) Add reconnect attempt to Elasticsearch host

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3857:
--
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add reconnect attempt to Elasticsearch host
> ---
>
> Key: FLINK-3857
> URL: https://issues.apache.org/jira/browse/FLINK-3857
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.0.2, 1.1.0
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the connection to the Elasticsearch host is opened in 
> {{ElasticsearchSink.open()}}. In case the connection is lost (maybe due to a 
> changed DNS entry), the sink fails.
> I propose to catch the Exception for lost connections in the {{invoke()}} 
> method and try to re-open the connection for a configurable number of times 
> with a certain delay.
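The retry-on-invoke behavior proposed above can be sketched language-agnostically. This is a toy model, not the `ElasticsearchSink` code; the callback shape, retry count, and delay parameter are assumptions:

```python
import time


def invoke_with_reconnect(send, reconnect, max_retries=3, delay_seconds=1.0):
    """Try send(); on a connection error, reconnect and retry a bounded number of times.

    Raises the last ConnectionError once max_retries is exhausted, so a truly
    dead host still fails the sink instead of looping forever.
    """
    for attempt in range(max_retries + 1):
        try:
            return send()
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(delay_seconds)  # configurable back-off before re-opening
            reconnect()
```

Catching the error in `invoke()` rather than failing at `open()` is what allows the sink to survive a DNS change mid-job, as the issue describes.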





[jira] [Updated] (FLINK-3850) Add forward field annotations to DataSet operators generated by the Table API

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3850:
--
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add forward field annotations to DataSet operators generated by the Table API
> -
>
> Key: FLINK-3850
> URL: https://issues.apache.org/jira/browse/FLINK-3850
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The DataSet API features semantic annotations [1] to hint the optimizer which 
> input fields an operator copies. This information is valuable for the 
> optimizer because it can infer that certain physical properties such as 
> partitioning or sorting are not destroyed by user functions and thus generate 
> more efficient execution plans.
> The Table API is built on top of the DataSet API and generates DataSet 
> programs and code for user-defined functions. Hence, it knows exactly which 
> fields are modified and which not. We should use this information to 
> automatically generate forward field annotations and attach them to the 
> operators. This can help to significantly improve the performance of certain 
> jobs.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/index.html#semantic-annotations
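To illustrate why the optimizer benefits from forward-field information, here is a toy version of the check it can perform. The function and parameter names are hypothetical, not Flink APIs; the point is only the set-containment rule:

```python
def partitioning_preserved(partition_fields, forwarded_fields):
    """A partitioning established before an operator survives it only if
    every partition key is declared as forwarded (copied unchanged)."""
    return set(partition_fields) <= set(forwarded_fields)
```

If the Table API code generator knows a projection copies fields 0 and 2 unchanged, it can declare them forwarded, and a downstream operator partitioned on field 0 avoids a redundant shuffle.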





[jira] [Updated] (FLINK-3769) RabbitMQ Sink ability to publish to a different exchange

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3769:
--
  Labels: auto-deprioritized-major auto-unassigned rabbitmq  (was: 
auto-unassigned rabbitmq stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> RabbitMQ Sink ability to publish to a different exchange
> 
>
> Key: FLINK-3769
> URL: https://issues.apache.org/jira/browse/FLINK-3769
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.0.1
>Reporter: Robert Batts
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, rabbitmq
>
> The RabbitMQ Sink can currently only publish to the "default" exchange. This 
> exchange is a direct exchange, so the routing key will route directly to the 
> queue name. Because of this, the current sink will only be 1-to-1-to-1 (1 job 
> to 1 exchange which routes to 1 queue). Additionally, if a user decides to 
> use a different exchange, I believe the following can be assumed:
> 1.) The provided exchange exists
> 2.) The user has declared the appropriate mapping and the appropriate queues 
> exist in RabbitMQ (therefore, nothing needs to be created)
> RabbitMQ currently provides four types of exchanges. Three of these will be 
> covered by just enabling exchanges (Direct, Fanout, Topic) because they use 
> the routingkey (or nothing). 
> The fourth exchange type relies on the message headers, which are currently 
> set to null by default on the publish. These headers may be on a per message 
> level, so the sink's input stream will need to carry them as well. 
> This fourth exchange type may well be outside the scope of this 
> Improvement, and a "RabbitMQ Sink enable headers" Improvement might be the 
> better way to go.
> Exchange Types: https://www.rabbitmq.com/tutorials/amqp-concepts.html
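The three routing-key-based exchange types can be modeled in a few lines. This is a toy simulation of AMQP routing to make the distinction concrete, not client code; the topic-wildcard handling is a crude glob approximation (real AMQP `*` matches exactly one word):

```python
import fnmatch


def route(exchange_type, routing_key, bindings):
    """Toy model of AMQP routing. bindings maps queue name -> binding key.

    Returns the sorted list of queues a message with routing_key reaches.
    """
    if exchange_type == "fanout":
        return sorted(bindings)  # every bound queue; the key is ignored
    if exchange_type == "direct":
        return sorted(q for q, k in bindings.items() if k == routing_key)
    if exchange_type == "topic":
        # Approximate AMQP wildcards with glob patterns ('#' -> any suffix).
        return sorted(q for q, k in bindings.items()
                      if fnmatch.fnmatch(routing_key, k.replace("#", "*")))
    raise ValueError(f"unsupported exchange type: {exchange_type}")
```

This is why enabling a configurable exchange covers direct, fanout, and topic at once: all three are driven by the routing key the sink already supplies, while the headers exchange alone needs per-message header input.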





[jira] [Updated] (FLINK-4088) Add interface to save and load TableSources

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4088:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add interface to save and load TableSources
> ---
>
> Key: FLINK-4088
> URL: https://issues.apache.org/jira/browse/FLINK-4088
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.1.0
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> Add an interface to save and load table sources similar to Java's 
> {{Serializable}} interface. 
> TableSources should implement the interface to become saveable and loadable.
> This could be used as follows:
> {code}
> val cts = new CsvTableSource(
>   "/path/to/csvfile",
>   Array("name", "age", "address"),
> Array(BasicTypeInfo.STRING_TYPE_INFO, ...),
>   ...
> )
> cts.saveToFile("/path/to/tablesource/file")
> // -
> val tEnv: TableEnvironment = ???
> tEnv.loadTableSource("persons", "/path/to/tablesource/file")
> {code}





[jira] [Updated] (FLINK-4043) Generalize RabbitMQ connector into AMQP connector

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4043:
--
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Generalize RabbitMQ connector into AMQP connector
> -
>
> Key: FLINK-4043
> URL: https://issues.apache.org/jira/browse/FLINK-4043
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Our current RabbitMQ connector is actually using a AMQP client implemented by 
> RabbitMQ.
> AMQP is a protocol for message queues, implemented by different clients and 
> brokers.
> I'm suggesting to rename the connector so that its more obvious to users of 
> other brokers that they can use the connector as well.





[jira] [Updated] (FLINK-3473) Add partial aggregate support in Flink

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3473:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add partial aggregate support in Flink
> --
>
> Key: FLINK-3473
> URL: https://issues.apache.org/jira/browse/FLINK-3473
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Legacy Planner
>Reporter: Chengxiang Li
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
> Attachments: PartialAggregateinFlink_v1.pdf, 
> PartialAggregateinFlink_v2.pdf
>
>
> For decomposable aggregate functions, partial aggregation is more efficient, 
> as it significantly reduces the network traffic during the shuffle and 
> parallelizes part of the aggregate calculation. This is an umbrella task for 
> partial aggregation.
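For a decomposable function like SUM, the two-phase scheme described above can be sketched as follows. This is a conceptual illustration of the combiner pattern, not Flink code; the function names are hypothetical:

```python
def partial_sums(partitions):
    """Phase 1: each input partition pre-aggregates locally (the combiner).

    Only one small partial result per partition crosses the network,
    instead of every individual record.
    """
    return [sum(p) for p in partitions]


def final_sum(partials):
    """Phase 2: merge the partial results into the final aggregate."""
    return sum(partials)
```

Decomposability is the key property: `final_sum(partial_sums(partitions))` must equal summing all records directly, which holds for SUM, COUNT, MIN, and MAX but not, for example, for a naive MEDIAN.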





[jira] [Updated] (FLINK-4194) Implement isEndOfStream() for KinesisDeserializationSchema

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4194:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Implement isEndOfStream() for KinesisDeserializationSchema
> --
>
> Key: FLINK-4194
> URL: https://issues.apache.org/jira/browse/FLINK-4194
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kinesis
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> **Original JIRA title: KinesisDeserializationSchema.isEndOfStream() is never 
> called. The corresponding part of the code has been commented out with 
> reference to this JIRA.**
> The Kinesis connector does not respect the 
> {{KinesisDeserializationSchema.isEndOfStream()}} method.
> The purpose of this method is to stop consuming from a source, based on input 
> data.
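To illustrate the intended semantics, here is a self-contained sketch of a fetch loop that honors an end-of-stream check on the deserialization schema. The class and method names are invented for illustration and are not Flink's actual Kinesis interfaces:

```python
# Hypothetical sketch: a consume loop that stops reading from a source once
# the schema's end-of-stream check fires on a deserialized element.

class SentinelSchema:
    """Invented schema: decodes bytes and treats the record "END" as a sentinel."""

    def deserialize(self, raw: bytes) -> str:
        return raw.decode("utf-8")

    def is_end_of_stream(self, element: str) -> bool:
        return element == "END"

def consume(records, schema):
    """Deserialize records until the schema signals end-of-stream."""
    out = []
    for raw in records:
        element = schema.deserialize(raw)
        if schema.is_end_of_stream(element):
            break  # stop consuming from this source, based on input data
        out.append(element)
    return out
```

With this contract, a consumer shuts down a shard's fetcher as soon as a sentinel record arrives, which is the behavior the ticket asks the Kinesis connector to respect.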





[jira] [Updated] (FLINK-4498) Better Cassandra sink documentation

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4498:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Better Cassandra sink documentation
> ---
>
> Key: FLINK-4498
> URL: https://issues.apache.org/jira/browse/FLINK-4498
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra, Documentation
>Affects Versions: 1.1.0
>Reporter: Elias Levy
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> The Cassandra sink documentation is somewhat muddled and could be improved.  
> For instance, the fact that it only supports tuples and POJOs that use 
> DataStax Mapper annotations is only mentioned in passing, and it is not clear 
> that the reference to tuples only applies to Flink Java tuples and not Scala 
> tuples.  
> The documentation also does not mention that setQuery() is only necessary for 
> tuple streams. 
> The explanation of the write ahead log could use some cleaning up to clarify 
> when it is appropriate to use, ideally with an example.  Maybe this would be 
> best as a blog post to expand on the type of non-deterministic streams this 
> applies to.
> It would also be useful to mention that tuple elements will be mapped to 
> Cassandra columns using the Datastax Java driver's default encoders, which 
> are somewhat limited (e.g. to write to a blob column the type in the tuple 
> must be a java.nio.ByteBuffer and not just a byte[]).





[jira] [Updated] (FLINK-3957) Breaking changes for Flink 2.0

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3957:
--
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Breaking changes for Flink 2.0
> --
>
> Key: FLINK-3957
> URL: https://issues.apache.org/jira/browse/FLINK-3957
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / DataStream, Build System
>Affects Versions: 1.0.0
>Reporter: Robert Metzger
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 2.0.0
>
>
> From time to time, we find APIs in Flink (1.x.y) marked as stable, even 
> though we would like to change them at some point.
> This JIRA is to track all planned breaking API changes.
> I would suggest to add subtasks to this one.





[jira] [Updated] (FLINK-4642) Remove unnecessary Guava dependency from flink-streaming-java

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4642:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Remove unnecessary Guava dependency from flink-streaming-java
> -
>
> Key: FLINK-4642
> URL: https://issues.apache.org/jira/browse/FLINK-4642
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.1.2
>Reporter: Stephan Ewen
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>






[jira] [Updated] (FLINK-3997) PRNG Skip-ahead

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-3997:
--
  Labels: auto-deprioritized-major auto-unassigned  (was: auto-unassigned 
stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> PRNG Skip-ahead
> ---
>
> Key: FLINK-3997
> URL: https://issues.apache.org/jira/browse/FLINK-3997
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / Graph Processing (Gelly)
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned
>
> The current sources of randomness for Gelly Graph Generators use fixed-size 
> blocks of work which include an initial seed. There are two issues with this 
> approach. First, the size of the collection of blocks can exceed the Akka 
> limit and cause the job to silently fail. Second, as the block seeds are 
> randomly chosen, the likelihood of blocks overlapping and producing the same 
> sequence increases with the size of the graph.
> The random generators will be reimplemented using {{SplittableIterator}} and 
> PRNGs supporting skip-ahead.
> This ticket will implement skip-ahead with LCGs [0]. Future work may add 
> support for xorshift generators ([1], section 5 "Jumping Ahead").
> [0] 
> https://mit-crpg.github.io/openmc/methods/random_numbers.html#skip-ahead-capability
> [1] https://arxiv.org/pdf/1404.0390.pdf
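The skip-ahead technique referenced in [0] can be sketched as follows: one LCG step is the affine map x → (a·x + c) mod m, and the composition of k such steps is again an affine map, which can be built in O(log k) by repeated squaring instead of iterating k times. This is an illustrative sketch, not Flink's implementation:

```python
# Illustrative LCG skip-ahead via repeated squaring of the affine transform
# x -> (a*x + c) % m. Composing powers of the same transform commutes, so we
# can accumulate the k-step multiplier and increment bit by bit.

def lcg_next(x, a, c, m):
    """Advance the LCG by a single step."""
    return (a * x + c) % m

def lcg_skip(x, k, a, c, m):
    """Advance the LCG state x by k steps in O(log k) multiplications."""
    mult, add = 1, 0          # accumulated transform: identity
    cur_a, cur_c = a, c       # transform for 2^i steps (starts at one step)
    while k > 0:
        if k & 1:
            # fold the current power-of-two transform into the accumulator
            mult = (mult * cur_a) % m
            add = (add * cur_a + cur_c) % m
        # square the current transform: compose it with itself
        cur_c = (cur_c * cur_a + cur_c) % m
        cur_a = (cur_a * cur_a) % m
        k >>= 1
    return (mult * x + add) % m
```

This lets each parallel subtask jump directly to its own offset in one shared sequence, avoiding both the per-block seeds and the overlap risk described above.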





[jira] [Updated] (FLINK-4785) Flink string parser doesn't handle string fields containing two consecutive double quotes

2021-06-26 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-4785:
--
  Labels: auto-deprioritized-major csv  (was: csv stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink string parser doesn't handle string fields containing two consecutive 
> double quotes
> -
>
> Key: FLINK-4785
> URL: https://issues.apache.org/jira/browse/FLINK-4785
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataSet
>Affects Versions: 1.1.2
>Reporter: Flavio Pompermaier
>Priority: Minor
>  Labels: auto-deprioritized-major, csv
>
> To reproduce the error run 
> https://github.com/okkam-it/flink-examples/blob/master/src/main/java/it/okkam/flink/Csv2RowExample.java
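For context, RFC 4180-style CSV escapes a literal double quote inside a quoted field as two consecutive double quotes (`""`), which is the case the report says the string parser mishandles. A minimal sketch of the expected unquoting behavior (our own helper, not Flink's parser):

```python
# Minimal sketch of RFC 4180 unquoting: inside a quoted field, two
# consecutive double quotes encode one literal quote character.

def unquote_csv_field(field: str) -> str:
    if field.startswith('"') and field.endswith('"'):
        # strip the surrounding quotes, then collapse "" to "
        return field[1:-1].replace('""', '"')
    return field
```

A field serialized as `"he said ""hi"""` should therefore parse back to `he said "hi"`.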





[jira] [Resolved] (FLINK-22757) Update GCS documentation

2021-06-26 Thread Ankush Khanna (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankush Khanna resolved FLINK-22757.
---
Fix Version/s: 1.13.3
   1.12.6
   Resolution: Fixed

https://github.com/apache/flink/pull/15991

> Update GCS documentation
> 
>
> Key: FLINK-22757
> URL: https://issues.apache.org/jira/browse/FLINK-22757
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.13.0, 1.12.4
>Reporter: Ankush Khanna
>Assignee: Ankush Khanna
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.12.6, 1.13.3
>
>
> Currently, the GCS filesystem documentation points to 
> [https://cloud.google.com/dataproc]. This does not cover the correct way to 
> connect to GCS. 
> Following from this [blog 
> post|https://www.ververica.com/blog/getting-started-with-da-platform-on-google-kubernetes-engine]





[GitHub] [flink] flinkbot edited a comment on pull request #16245: [FLINK-22940][SQL-CLIENT] Make sql client column max width configurable

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16245:
URL: https://github.com/apache/flink/pull/16245#issuecomment-866278518


   
   ## CI report:
   
   * c333291b38c611e884641b75092501bd841109ae Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19569)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16298: Remove extra commas from the create table page

2021-06-26 Thread GitBox


flinkbot edited a comment on pull request #16298:
URL: https://github.com/apache/flink/pull/16298#issuecomment-868974627


   
   ## CI report:
   
   * 8626d3c3d2c523c5617a24871b7024856238f18f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=19572)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #16298: Remove extra commas from the create table page

2021-06-26 Thread GitBox


flinkbot commented on pull request #16298:
URL: https://github.com/apache/flink/pull/16298#issuecomment-868974627


   
   ## CI report:
   
   * 8626d3c3d2c523c5617a24871b7024856238f18f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-23128) Translate update to operations playground docs to Chinese

2021-06-26 Thread kevin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369838#comment-17369838
 ] 

kevin commented on FLINK-23128:
---

Hi [~alpinegizmo], I have created a PR and linked it to this issue. Can you 
recommend someone to review it? Thanks.

> Translate update to operations playground docs to Chinese
> -
>
> Key: FLINK-23128
> URL: https://issues.apache.org/jira/browse/FLINK-23128
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation / Training
>Affects Versions: 1.13.1
>Reporter: David Anderson
>Assignee: kevin
>Priority: Major
>
> The documentation changes committed in 
> [https://github.com/apache/flink/pull/16218] 
> need to be applied to the Chinese translation as well.





[GitHub] [flink] flinkbot commented on pull request #16298: Remove extra commas from the create table page

2021-06-26 Thread GitBox


flinkbot commented on pull request #16298:
URL: https://github.com/apache/flink/pull/16298#issuecomment-868972180


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 8626d3c3d2c523c5617a24871b7024856238f18f (Sat Jun 26 
09:00:41 UTC 2021)
   
   **Warnings:**
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] wangjunyou opened a new pull request #16298: Remove extra commas from the create table page

2021-06-26 Thread GitBox


wangjunyou opened a new pull request #16298:
URL: https://github.com/apache/flink/pull/16298


   Remove extra commas from the create table page





