[jira] [Created] (FLINK-29438) Setup DynamoDB Connector Project Structure

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29438:
----------------------------------

 Summary: Setup DynamoDB Connector Project Structure
 Key: FLINK-29438
 URL: https://issues.apache.org/jira/browse/FLINK-29438
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / DynamoDB
Reporter: Danny Cranmer
 Fix For: dynamodb-1.0.0


Umbrella task for DynamoDB Connector setup.

Please put all DynamoDB connector setup-related tasks under this task. We will 
review the umbrella task to develop the migration guide and best practices for 
creating further connectors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29246) Setup DynamoDB Maven Modules

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-29246:
----------------------------------
Summary: Setup DynamoDB Maven Modules  (was: Setup DynamoDB Connector 
Project Structure)

> Setup DynamoDB Maven Modules
> ----------------------------
>
> Key: FLINK-29246
> URL: https://issues.apache.org/jira/browse/FLINK-29246
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: dynamodb-1.0.0
>
>
> Set up the initial project structure for 
> [flink-connector-dynamodb|https://github.com/apache/flink-connector-dynamodb] 
> including:
> - Parent and module pom files
> - Basic build configuration
> - Quality plugin configuration (checkstyle)
> Related to 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-252%3A+Amazon+DynamoDB+Sink+Connector
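
As a sketch of how the checkstyle quality plugin mentioned above might be wired into the parent pom (the plugin version and config-file path below are assumptions for illustration, not taken from the actual repository):

```xml
<!-- Hypothetical parent pom.xml fragment: binds the Maven Checkstyle plugin
     to the validate phase so style violations fail the build.
     The version number and configLocation are illustrative assumptions. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.1.2</version>
      <configuration>
        <configLocation>tools/maven/checkstyle.xml</configLocation>
        <failOnViolation>true</failOnViolation>
      </configuration>
      <executions>
        <execution>
          <phase>validate</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With a binding like this, `mvn validate` (or any later phase) runs the style check, which is what lets CI reject non-conforming code early.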





[jira] [Assigned] (FLINK-29438) Setup DynamoDB Connector Project Structure

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-29438:
----------------------------------

Assignee: Danny Cranmer

> Setup DynamoDB Connector Project Structure
> ------------------------------------------
>
> Key: FLINK-29438
> URL: https://issues.apache.org/jira/browse/FLINK-29438
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
> Fix For: dynamodb-1.0.0
>
>
> Umbrella task for DynamoDB Connector setup.
> Please put all DynamoDB connector setup-related tasks under this task. We 
> will review the umbrella task to develop the migration guide and best 
> practices for creating further connectors.





[jira] [Updated] (FLINK-29246) Setup DynamoDB Maven Modules

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-29246:
----------------------------------
Parent: FLINK-29438
Issue Type: Sub-task  (was: Improvement)

> Setup DynamoDB Maven Modules
> ----------------------------
>
> Key: FLINK-29246
> URL: https://issues.apache.org/jira/browse/FLINK-29246
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / DynamoDB
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
> Fix For: dynamodb-1.0.0
>
>
> Set up the initial project structure for 
> [flink-connector-dynamodb|https://github.com/apache/flink-connector-dynamodb] 
> including:
> - Parent and module pom files
> - Basic build configuration
> - Quality plugin configuration (checkstyle)
> Related to 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-252%3A+Amazon+DynamoDB+Sink+Connector





[jira] [Created] (FLINK-29439) Add DynamoDB project readme

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29439:
----------------------------------

 Summary: Add DynamoDB project readme
 Key: FLINK-29439
 URL: https://issues.apache.org/jira/browse/FLINK-29439
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


Copy and update from the ElasticSearch connector repo





[jira] [Assigned] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-29439:
----------------------------------

Assignee: Danny Cranmer

> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> Copy and update from the ElasticSearch connector repo





[jira] [Updated] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-29439:
----------------------------------
Summary: Add DynamoDB Project Readme  (was: Add DynamoDB project readme)

> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Priority: Major
>
> Copy and update from the ElasticSearch connector repo





[jira] [Updated] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer updated FLINK-29439:
----------------------------------
Description: 
Copy and update from the ElasticSearch connector repo:

- https://github.com/apache/flink-connector-elasticsearch/blob/main/README.md

  was:Copy and update from the ElasticSearch connector repo


> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> Copy and update from the ElasticSearch connector repo:
> - https://github.com/apache/flink-connector-elasticsearch/blob/main/README.md





[GitHub] [flink-connector-dynamodb] dannycranmer opened a new pull request, #5: [FLINK-29439] Add project README

2022-09-28 Thread GitBox


dannycranmer opened a new pull request, #5:
URL: https://github.com/apache/flink-connector-dynamodb/pull/5

   ## What is the purpose of the change
   
   Adding the project README
   
   ## Brief change log
   
   * Adding the project README
   
   ## Verifying this change
   
   * Tested the clone and build command locally
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29439:
-----------------------------------
Labels: pull-request-available  (was: )

> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Copy and update from the ElasticSearch connector repo:
> - https://github.com/apache/flink-connector-elasticsearch/blob/main/README.md





[jira] [Commented] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-09-28 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610389#comment-17610389
 ] 

Huang Xingbo commented on FLINK-29408:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41396&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=5acec1b4-945b-59ca-34f8-168928ce5199&l=24932

> HiveCatalogITCase failed with NPE
> ---------------------------------
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4239277Z Sep 25 03:41:07  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-09-25T03:41:07.4239769Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4240265Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-25T03:41:07.4240731Z Sep 25 03:41:07  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022

[jira] [Commented] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-09-28 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610390#comment-17610390
 ] 

Huang Xingbo commented on FLINK-29408:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=41397&view=logs&j=5cae8624-c7eb-5c51-92d3-4d2dacedd221&t=5acec1b4-945b-59ca-34f8-168928ce5199&l=26033

> HiveCatalogITCase failed with NPE
> ---------------------------------
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4239277Z Sep 25 03:41:07  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-09-25T03:41:07.4239769Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4240265Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-25T03:41:07.4240731Z Sep 25 03:41:07  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022

[jira] [Assigned] (FLINK-29408) HiveCatalogITCase failed with NPE

2022-09-28 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo reassigned FLINK-29408:


Assignee: luoyuxia

> HiveCatalogITCase failed with NPE
> ---------------------------------
>
> Key: FLINK-29408
> URL: https://issues.apache.org/jira/browse/FLINK-29408
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Huang Xingbo
>Assignee: luoyuxia
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> 2022-09-25T03:41:07.4212129Z Sep 25 03:41:07 [ERROR] 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf  Time 
> elapsed: 0.098 s  <<< ERROR!
> 2022-09-25T03:41:07.4212662Z Sep 25 03:41:07 java.lang.NullPointerException
> 2022-09-25T03:41:07.4213189Z Sep 25 03:41:07  at 
> org.apache.flink.table.catalog.hive.HiveCatalogUdfITCase.testFlinkUdf(HiveCatalogUdfITCase.java:109)
> 2022-09-25T03:41:07.4213753Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-09-25T03:41:07.4224643Z Sep 25 03:41:07  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-09-25T03:41:07.4225311Z Sep 25 03:41:07  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-09-25T03:41:07.4225879Z Sep 25 03:41:07  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-09-25T03:41:07.4226405Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-09-25T03:41:07.4227201Z Sep 25 03:41:07  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-09-25T03:41:07.4227807Z Sep 25 03:41:07  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-09-25T03:41:07.4228394Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-09-25T03:41:07.4228966Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4229514Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4230066Z Sep 25 03:41:07  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-09-25T03:41:07.4230587Z Sep 25 03:41:07  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
> 2022-09-25T03:41:07.4231258Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4231823Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> 2022-09-25T03:41:07.4232384Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> 2022-09-25T03:41:07.4232930Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> 2022-09-25T03:41:07.4233511Z Sep 25 03:41:07  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> 2022-09-25T03:41:07.4234039Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> 2022-09-25T03:41:07.4234546Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> 2022-09-25T03:41:07.4235057Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> 2022-09-25T03:41:07.4235573Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> 2022-09-25T03:41:07.4236087Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> 2022-09-25T03:41:07.4236635Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-09-25T03:41:07.4237314Z Sep 25 03:41:07  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-09-25T03:41:07.4238211Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4238775Z Sep 25 03:41:07  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-09-25T03:41:07.4239277Z Sep 25 03:41:07  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-09-25T03:41:07.4239769Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-09-25T03:41:07.4240265Z Sep 25 03:41:07  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-09-25T03:41:07.4240731Z Sep 25 03:41:07  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-09-25T03:41:07.4241196Z Sep 25 03:41:07  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-09-25T03:41:07.4241715Z Sep 25 03:41:07  at 
> org.junit.vintag

[jira] [Assigned] (FLINK-29440) Setup CI Logging

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-29440:
-------------------------------------

Assignee: Danny Cranmer

> Setup CI Logging
> ----------------
>
> Key: FLINK-29440
> URL: https://issues.apache.org/jira/browse/FLINK-29440
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> Configure Log4J for CI builds





[jira] [Created] (FLINK-29440) Setup CI Logging

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29440:
----------------------------------

 Summary: Setup CI Logging
 Key: FLINK-29440
 URL: https://issues.apache.org/jira/browse/FLINK-29440
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


Configure Log4J for CI builds





[GitHub] [flink-connector-dynamodb] dannycranmer opened a new pull request, #6: [FLINK-29440] Setup logging on CI

2022-09-28 Thread GitBox


dannycranmer opened a new pull request, #6:
URL: https://github.com/apache/flink-connector-dynamodb/pull/6

   ## What is the purpose of the change
   
   Setup logging on CI
   
   ## Brief change log
   
   * Adding log4j configuration file
   * Referencing the CI file in build
   
   ## Verifying this change
   
   * None, will check CI on PR
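
A Log4j configuration for CI typically routes everything to the console so it appears in the build log. A minimal Log4j 2 sketch of what such a file might contain (file name, pattern, and levels are assumptions; the actual file in this PR may differ):

```xml
<!-- Hypothetical log4j2.xml for CI: console-only logging so that test and
     build output is captured in the CI log. Level and pattern are
     illustrative assumptions, not taken from the PR. -->
<Configuration status="WARN">
  <Appenders>
    <Console name="ConsoleAppender" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss,SSS} %-5p %-60c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="ConsoleAppender"/>
    </Root>
  </Loggers>
</Configuration>
```

Referencing such a file from the build (for example via the Surefire plugin's system properties) is what the "Referencing the CI file in build" change-log item describes.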
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   





[jira] [Updated] (FLINK-29440) Setup CI Logging

2022-09-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29440:
-----------------------------------
Labels: pull-request-available  (was: )

> Setup CI Logging
> ----------------
>
> Key: FLINK-29440
> URL: https://issues.apache.org/jira/browse/FLINK-29440
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Configure Log4J for CI builds





[GitHub] [flink] sethsaperstein-lyft commented on pull request #20844: [FLINK-29099][connectors/kinesis] Update global watermark for idle subtask

2022-09-28 Thread GitBox


sethsaperstein-lyft commented on PR #20844:
URL: https://github.com/apache/flink/pull/20844#issuecomment-1260493869

   @flinkbot run azure





[GitHub] [flink-connector-dynamodb] dannycranmer merged pull request #5: [FLINK-29439] Add project README

2022-09-28 Thread GitBox


dannycranmer merged PR #5:
URL: https://github.com/apache/flink-connector-dynamodb/pull/5





[jira] [Commented] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610396#comment-17610396
 ] 

Danny Cranmer commented on FLINK-29439:
---------------------------------------

Merged commit 
[{{9060c82}}|https://github.com/apache/flink-connector-dynamodb/commit/9060c823279f0766c075b807b371dbe4f6b7bf98]
 into main

> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Copy and update from the ElasticSearch connector repo:
> - https://github.com/apache/flink-connector-elasticsearch/blob/main/README.md





[jira] [Resolved] (FLINK-29439) Add DynamoDB Project Readme

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-29439.
-----------------------------------
Resolution: Fixed

> Add DynamoDB Project Readme
> ---------------------------
>
> Key: FLINK-29439
> URL: https://issues.apache.org/jira/browse/FLINK-29439
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Copy and update from the ElasticSearch connector repo:
> - https://github.com/apache/flink-connector-elasticsearch/blob/main/README.md





[jira] [Updated] (FLINK-29350) Add a section for moving planner jar in Hive dependencies page

2022-09-28 Thread luoyuxia (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luoyuxia updated FLINK-29350:
-----------------------------
Summary: Add a section for moving planner jar in Hive dependencies page  
(was: Add a note for swapping planner jar in Hive dependencies page)

> Add a section for moving planner jar in Hive dependencies page
> --------------------------------------------------------------
>
> Key: FLINK-29350
> URL: https://issues.apache.org/jira/browse/FLINK-29350
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
> Fix For: 1.16.0
>
>






[GitHub] [flink] luoyuxia opened a new pull request, #20913: [FLINK-29350][hive] Add a section for moving planner jar in Hive dependencies page

2022-09-28 Thread GitBox


luoyuxia opened a new pull request, #20913:
URL: https://github.com/apache/flink/pull/20913

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29350) Add a section for moving planner jar in Hive dependencies page

2022-09-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29350:
---
Labels: pull-request-available  (was: )

> Add a section for moving planner jar in Hive dependencies page
> --
>
> Key: FLINK-29350
> URL: https://issues.apache.org/jira/browse/FLINK-29350
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.16.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>






[jira] [Created] (FLINK-29441) Setup dependency convergence check

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29441:
-

 Summary: Setup dependency convergence check
 Key: FLINK-29441
 URL: https://issues.apache.org/jira/browse/FLINK-29441
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


Following the Flink repo:
 * Convergence should be disabled by default
 * Enabled via {{-Pcheck-convergence}}
 * Enabled on CI
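
Following that layout, a convergence profile could be sketched roughly as below (the profile and execution ids mirror the Flink parent pom conventions, but are assumptions here, not the final configuration):

```xml
<!-- Sketch: dependency convergence is off by default and only runs
     when the build is invoked with -Pcheck-convergence (e.g. on CI). -->
<profile>
  <id>check-convergence</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <executions>
          <execution>
            <id>dependency-convergence</id>
            <goals>
              <goal>enforce</goal>
            </goals>
            <configuration>
              <rules>
                <!-- fail the build when two dependency paths resolve
                     the same artifact to different versions -->
                <dependencyConvergence/>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```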





[jira] [Commented] (FLINK-28824) StreamingJobGraphGenerator#triggerSerializationAndReturnFuture might swallow exception

2022-09-28 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610400#comment-17610400
 ] 

Yun Gao commented on FLINK-28824:
-

Thanks [~Weijie Guo] for looking at this issue!

> StreamingJobGraphGenerator#triggerSerializationAndReturnFuture might swallow 
> exception 
> ---
>
> Key: FLINK-28824
> URL: https://issues.apache.org/jira/browse/FLINK-28824
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Assignee: Weijie Guo
>Priority: Major
>
> Currently `triggerSerializationAndReturnFuture` is executed in a separate 
> thread. When this method throws an exception (for example, a user may pass a 
> UDF that is not serializable), the main thread is blocked.
> We had better fail the future for any exception thrown in this thread. 
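
A minimal sketch of the proposed behavior: complete the future exceptionally instead of letting the error escape the worker thread. The class and method names below are hypothetical stand-ins, not the actual Flink code:

```java
import java.io.Serializable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class SerializationFutureDemo {

    // Stand-in for triggerSerializationAndReturnFuture: do the work on a
    // separate thread and route ANY Throwable into the future, so the waiting
    // main thread always receives either a value or the failure, never blocks.
    static CompletableFuture<byte[]> triggerSerialization(Object udf) {
        CompletableFuture<byte[]> result = new CompletableFuture<>();
        Thread worker = new Thread(() -> {
            try {
                if (!(udf instanceof Serializable)) {
                    throw new IllegalStateException(
                            "UDF is not serializable: " + udf.getClass());
                }
                result.complete(new byte[0]); // pretend: the serialized artifact
            } catch (Throwable t) {
                result.completeExceptionally(t); // propagate instead of swallowing
            }
        });
        worker.start();
        return result;
    }

    public static void main(String[] args) {
        try {
            triggerSerialization(new Object()).join();
        } catch (CompletionException e) {
            System.out.println("future failed as expected: "
                    + e.getCause().getMessage());
        }
    }
}
```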





[jira] [Assigned] (FLINK-29441) Setup dependency convergence check

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer reassigned FLINK-29441:
-

Assignee: Danny Cranmer

> Setup dependency convergence check
> --
>
> Key: FLINK-29441
> URL: https://issues.apache.org/jira/browse/FLINK-29441
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>
> Following the Flink repo:
>  * Convergence should be disabled by default
>  * Enabled via {{-Pcheck-convergence}}
>  * Enabled on CI





[jira] [Closed] (FLINK-28824) StreamingJobGraphGenerator#triggerSerializationAndReturnFuture might swallow exception

2022-09-28 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-28824.
---
Resolution: Duplicate

> StreamingJobGraphGenerator#triggerSerializationAndReturnFuture might swallow 
> exception 
> ---
>
> Key: FLINK-28824
> URL: https://issues.apache.org/jira/browse/FLINK-28824
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Assignee: Weijie Guo
>Priority: Major
>
> Currently `triggerSerializationAndReturnFuture` is executed in a separate 
> thread. When this method throws an exception (for example, a user may pass a 
> UDF that is not serializable), the main thread is blocked.
> We had better fail the future for any exception thrown in this thread. 





[GitHub] [flink] flinkbot commented on pull request #20913: [FLINK-29350][hive] Add a section for moving planner jar in Hive dependencies page

2022-09-28 Thread GitBox


flinkbot commented on PR #20913:
URL: https://github.com/apache/flink/pull/20913#issuecomment-1260510368

   
   ## CI report:
   
   * 1ecd37b22a20ff3fe1d712482db2ca7838dfce9f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-29441) Setup dependency convergence check

2022-09-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29441:
---
Labels: pull-request-available  (was: )

> Setup dependency convergence check
> --
>
> Key: FLINK-29441
> URL: https://issues.apache.org/jira/browse/FLINK-29441
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Following the Flink repo:
>  * Convergence should be disabled by default
>  * Enabled via {{-Pcheck-convergence}}
>  * Enabled on CI





[GitHub] [flink-connector-dynamodb] dannycranmer opened a new pull request, #7: [FLINK-29441] Setup (fix) convergence configuration

2022-09-28 Thread GitBox


dannycranmer opened a new pull request, #7:
URL: https://github.com/apache/flink-connector-dynamodb/pull/7

   ## What is the purpose of the change
   
   Setup (fix) convergence configuration such that convergence is checked on CI
   
   ## Brief change log
   
   * Enable convergence check (still requires `-Pcheck-convergence` to run it)
   
   ## Verifying this change
   
   Run locally:
   * Enabled: `mvn validate -Pcheck-convergence`
   * Disabled: `mvn validate`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? n/a
   





[GitHub] [flink-connector-dynamodb] dannycranmer commented on pull request #7: [FLINK-29441] Setup (fix) convergence configuration

2022-09-28 Thread GitBox


dannycranmer commented on PR #7:
URL: 
https://github.com/apache/flink-connector-dynamodb/pull/7#issuecomment-1260514041

   Can see convergence check running in action logs:
   ```
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) @ 
flink-connector-dynamodb-parent ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) @ 
flink-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) @ 
flink-sql-connector-dynamodb ---
   ```





[GitHub] [flink] pltbkd commented on pull request #20906: [FLINK-29348][table] DynamicFilteringEvent can be stored in the CoordinatorStore if it's received before the listener source coordinator is st

2022-09-28 Thread GitBox


pltbkd commented on PR #20906:
URL: https://github.com/apache/flink/pull/20906#issuecomment-1260523309

   @flinkbot run azure





[jira] [Commented] (FLINK-29437) The partition of data before and after the Kafka Shuffle are not aligned

2022-09-28 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610411#comment-17610411
 ] 

Martijn Visser commented on FLINK-29437:


[~renqs] WDYT?

> The partition of data before and after the Kafka Shuffle are not aligned
> 
>
> Key: FLINK-29437
> URL: https://issues.apache.org/jira/browse/FLINK-29437
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.15.2
>Reporter: Zakelly Lan
>Priority: Major
> Fix For: 1.16.0
>
> Attachments: image-2022-09-28-14-32-28-116.png, 
> image-2022-09-28-14-35-47-954.png
>
>
> I noticed that the key group range on the consumer side of Kafka Shuffle is 
> not aligned with the producer side; there are two problems:
>  # The sink (producer) partitions data exactly like a keyed stream whose 
> maximum parallelism equals the number of Kafka partitions, but on the consumer 
> side the number of partitions and the number of key groups are not the same.
>  # There is a scheme for assigning Kafka partitions to consumer subtasks 
> (see KafkaTopicPartitionAssigner#assign), but the producer of Kafka Shuffle 
> simply assumes the partition index equals the subtask index, e.g. 
> !image-2022-09-28-14-32-28-116.png|width=1133,height=274!
> My proposed change:
>  # Set the max parallelism of the keyed stream on the consumer side to the 
> number of Kafka partitions. 
>  # Use the same assignment method to maintain a map from consumer subtasks to 
> Kafka partitions, which the producer uses to write data for a subtask into 
> the right partition, i.e. 
> !image-2022-09-28-14-35-47-954.png|width=1030,height=283!
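
The alignment requirement can be illustrated with a simplified model of key-group routing. This is a sketch only: Flink's real logic lives in KeyGroupRangeAssignment and uses a murmur hash rather than plain hashCode, and the class below is hypothetical:

```java
public class KeyGroupAlignmentDemo {

    // Simplified key-group assignment: with maxParallelism set to the number
    // of Kafka partitions, key group i corresponds 1:1 to partition i.
    static int keyGroupFor(Object key, int maxParallelism) {
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Producer side of the proposal: write each record into the Kafka
    // partition matching its key group, so the consumer's keyed state and the
    // partition contents line up.
    static int producerPartitionFor(Object key, int numKafkaPartitions) {
        return keyGroupFor(key, numKafkaPartitions);
    }

    public static void main(String[] args) {
        int kafkaPartitions = 8;
        for (String key : new String[] {"user-1", "user-2", "user-3"}) {
            System.out.println(key + ": key group "
                    + keyGroupFor(key, kafkaPartitions)
                    + ", partition "
                    + producerPartitionFor(key, kafkaPartitions));
        }
    }
}
```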





[GitHub] [flink-connector-dynamodb] hlteoh37 commented on pull request #7: [FLINK-29441] Setup (fix) convergence configuration

2022-09-28 Thread GitBox


hlteoh37 commented on PR #7:
URL: 
https://github.com/apache/flink-connector-dynamodb/pull/7#issuecomment-1260529635

   Should we enforce minimum required Maven version too? 
https://github.com/apache/flink-connector-elasticsearch/blob/main/pom.xml#L1001





[jira] [Updated] (FLINK-29436) Upgrade Spotless Maven Plugin to 2.27.0 for running on Java 17

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-29436:
---
Parent: FLINK-15736
Issue Type: Sub-task  (was: Improvement)

> Upgrade Spotless Maven Plugin to 2.27.0 for running on Java 17
> --
>
> Key: FLINK-29436
> URL: https://issues.apache.org/jira/browse/FLINK-29436
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Major
>  Labels: pull-request-available
>
> This blocker is fixed by: https://github.com/diffplug/spotless/pull/1224 and 
> https://github.com/diffplug/spotless/pull/1228.





[jira] [Updated] (FLINK-24590) Consider removing timeout from FlinkMatchers#futureWillCompleteExceptionally

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24590:
---
Fix Version/s: (was: 1.16.0)

> Consider removing timeout from FlinkMatchers#futureWillCompleteExceptionally
> 
>
> Key: FLINK-24590
> URL: https://issues.apache.org/jira/browse/FLINK-24590
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Tests
>Reporter: Chesnay Schepler
>Priority: Major
>
> We concluded not to use timeouts in tests, but certain utility methods still 
> ask for a timeout argument.





[jira] [Updated] (FLINK-21564) CommonTestUtils.waitUntilCondition could fail with condition meets before

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-21564:
---
Fix Version/s: (was: 1.16.0)

> CommonTestUtils.waitUntilCondition could fail with condition meets before
> -
>
> Key: FLINK-21564
> URL: https://issues.apache.org/jira/browse/FLINK-21564
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Kezhu Wang
>Assignee: Pedro Silva
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available, stale-assigned
>
> {code}
> public static void waitUntilCondition(
> SupplierWithException<Boolean, Exception> condition,
> Deadline timeout,
> long retryIntervalMillis,
> String errorMsg)
> throws Exception {
> while (timeout.hasTimeLeft() && !condition.get()) {
> final long timeLeft = Math.max(0, timeout.timeLeft().toMillis());
> Thread.sleep(Math.min(retryIntervalMillis, timeLeft));
> }
> if (!timeout.hasTimeLeft()) {
> throw new TimeoutException(errorMsg);
> }
> }
> {code}
> The timeout could expire between the condition becoming true and the final 
> check.
> Besides this, I also see timeout-blocking conditions in some tests; the 
> combination could be worse.
> Not a big issue, but worth being aware of and solving.
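
One possible race-free variant (a sketch under assumed signatures, not the actual Flink utility): decide success from the condition itself rather than re-checking the clock afterwards, so a condition that turns true just before the deadline is never misreported as a timeout:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class WaitUtil {

    // Success is determined by leaving the loop via the condition; the clock
    // is consulted only to decide whether another retry is allowed.
    static void waitUntilCondition(BooleanSupplier condition,
                                   long timeoutMs,
                                   long retryIntervalMs)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
        while (!condition.getAsBoolean()) {
            long leftMs = (deadline - System.nanoTime()) / 1_000_000L;
            if (leftMs <= 0) {
                throw new TimeoutException(
                        "Condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(Math.min(retryIntervalMs, leftMs));
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        waitUntilCondition(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println("condition met");
    }
}
```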





[jira] [Updated] (FLINK-29437) The partition of data before and after the Kafka Shuffle are not aligned

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-29437:
---
Fix Version/s: (was: 1.16.0)

> The partition of data before and after the Kafka Shuffle are not aligned
> 
>
> Key: FLINK-29437
> URL: https://issues.apache.org/jira/browse/FLINK-29437
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.15.2
>Reporter: Zakelly Lan
>Priority: Major
> Attachments: image-2022-09-28-14-32-28-116.png, 
> image-2022-09-28-14-35-47-954.png
>
>
> I noticed that the key group range on the consumer side of Kafka Shuffle is 
> not aligned with the producer side; there are two problems:
>  # The sink (producer) partitions data exactly like a keyed stream whose 
> maximum parallelism equals the number of Kafka partitions, but on the consumer 
> side the number of partitions and the number of key groups are not the same.
>  # There is a scheme for assigning Kafka partitions to consumer subtasks 
> (see KafkaTopicPartitionAssigner#assign), but the producer of Kafka Shuffle 
> simply assumes the partition index equals the subtask index, e.g. 
> !image-2022-09-28-14-32-28-116.png|width=1133,height=274!
> My proposed change:
>  # Set the max parallelism of the keyed stream on the consumer side to the 
> number of Kafka partitions. 
>  # Use the same assignment method to maintain a map from consumer subtasks to 
> Kafka partitions, which the producer uses to write data for a subtask into 
> the right partition, i.e. 
> !image-2022-09-28-14-35-47-954.png|width=1030,height=283!





[jira] [Updated] (FLINK-20896) Support SupportsAggregatePushDown for JDBC TableSource

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-20896:
---
Fix Version/s: (was: 1.16.0)

> Support SupportsAggregatePushDown for JDBC TableSource
> --
>
> Key: FLINK-20896
> URL: https://issues.apache.org/jira/browse/FLINK-20896
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Sebastian Liu
>Priority: Major
>  Labels: auto-unassigned
>
> Will add SupportsAggregatePushDown implementation for JDBC TableSource.





[jira] [Updated] (FLINK-22742) Lookup join condition with process time throws org.codehaus.commons.compiler.CompileException

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22742:
---
Fix Version/s: (was: 1.16.0)

> Lookup join condition with process time throws 
> org.codehaus.commons.compiler.CompileException
> -
>
> Key: FLINK-22742
> URL: https://issues.apache.org/jira/browse/FLINK-22742
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.0, 1.13.0, 1.14.0
>Reporter: Caizhi Weng
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Add the following test case to 
> {{org.apache.flink.table.api.TableEnvironmentITCase}} to reproduce this bug.
> {code:scala}
> @Test
> def myTest(): Unit = {
>   val id1 = TestValuesTableFactory.registerData(
> Seq(Row.of("abc", LocalDateTime.of(2000, 1, 1, 0, 0
>   val ddl1 =
> s"""
>|CREATE TABLE Ta (
>|  id VARCHAR,
>|  ts TIMESTAMP,
>|  proc AS PROCTIME()
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$id1',
>|  'bounded' = 'true'
>|)
>|""".stripMargin
>   tEnv.executeSql(ddl1)
>   val id2 = TestValuesTableFactory.registerData(
> Seq(Row.of("abc", LocalDateTime.of(2000, 1, 2, 0, 0
>   val ddl2 =
> s"""
>|CREATE TABLE Tb (
>|  id VARCHAR,
>|  ts TIMESTAMP
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$id2',
>|  'bounded' = 'true'
>|)
>|""".stripMargin
>   tEnv.executeSql(ddl2)
>   val it = tEnv.executeSql(
> """
>   |SELECT * FROM Ta AS t1
>   |INNER JOIN Tb FOR SYSTEM_TIME AS OF t1.proc AS t2
>   |ON t1.id = t2.id
>   |WHERE CAST(coalesce(t1.ts, t2.ts) AS VARCHAR) >= 
> CONCAT(DATE_FORMAT(t1.proc, 'yyyy-MM-dd'), ' 00:00:00')
>   |""".stripMargin).collect()
>   while (it.hasNext) {
> System.out.println(it.next())
>   }
> }
> {code}
> The exception stack is
> {code}
> /* 1 */
> /* 2 */  public class JoinTableFuncCollector$25 extends 
> org.apache.flink.table.runtime.collector.TableFunctionCollector {
> /* 3 */
> /* 4 */org.apache.flink.table.data.GenericRowData out = new 
> org.apache.flink.table.data.GenericRowData(2);
> /* 5 */org.apache.flink.table.data.utils.JoinedRowData joinedRow$9 = new 
> org.apache.flink.table.data.utils.JoinedRowData();
> /* 6 */
> /* 7 */private final org.apache.flink.table.data.binary.BinaryStringData 
> str$17 = 
> org.apache.flink.table.data.binary.BinaryStringData.fromString("yyyy-MM-dd");
> /* 8 */   
> /* 9 */private static final java.util.TimeZone timeZone =
> /* 10 */ java.util.TimeZone.getTimeZone("Asia/Shanghai");
> /* 11 */
> /* 12 */private final org.apache.flink.table.data.binary.BinaryStringData 
> str$20 = org.apache.flink.table.data.binary.BinaryStringData.fromString(" 
> 00:00:00");
> /* 13 */   
> /* 14 */
> /* 15 */public JoinTableFuncCollector$25(Object[] references) throws 
> Exception {
> /* 16 */  
> /* 17 */}
> /* 18 */
> /* 19 */@Override
> /* 20 */public void open(org.apache.flink.configuration.Configuration 
> parameters) throws Exception {
> /* 21 */  
> /* 22 */}
> /* 23 */
> /* 24 */@Override
> /* 25 */public void collect(Object record) throws Exception {
> /* 26 */  org.apache.flink.table.data.RowData in1 = 
> (org.apache.flink.table.data.RowData) getInput();
> /* 27 */  org.apache.flink.table.data.RowData in2 = 
> (org.apache.flink.table.data.RowData) record;
> /* 28 */  
> /* 29 */  org.apache.flink.table.data.binary.BinaryStringData field$7;
> /* 30 */boolean isNull$7;
> /* 31 */org.apache.flink.table.data.TimestampData field$8;
> /* 32 */boolean isNull$8;
> /* 33 */org.apache.flink.table.data.TimestampData field$10;
> /* 34 */boolean isNull$10;
> /* 35 */boolean isNull$13;
> /* 36 */org.apache.flink.table.data.binary.BinaryStringData result$14;
> /* 37 */org.apache.flink.table.data.TimestampData field$15;
> /* 38 */boolean isNull$15;
> /* 39 */org.apache.flink.table.data.TimestampData result$16;
> /* 40 */boolean isNull$18;
> /* 41 */org.apache.flink.table.data.binary.BinaryStringData result$19;
> /* 42 */boolean isNull$21;
> /* 43 */org.apache.flink.table.data.binary.BinaryStringData result$22;
> /* 44 */boolean isNull$23;
> /* 45 */boolean result$24;
> /* 46 */  isNull$15 = in1.isNullAt(2);
> /* 47 */field$15 = null;
> /* 48 */if (!isNull$15) {
> /* 49 */  field$15 = in1.getTimestamp(2, 3);
> /* 50 */}
> /* 51 */isNull$8 = in2.isNullAt(1);
> /* 52 */field$8 = null;
> /* 53 */if (!isNull$8) {
> /* 54 */  field$8 = in2.getTimestamp(1, 6);
> /* 55 */}
> /* 56 */isNull$7 = in2.isNu

[GitHub] [flink] MartijnVisser commented on pull request #15344: [FLINK-21564][tests] Don't call condition or throw exception after condition met for CommonTestUtils.waitUntil

2022-09-28 Thread GitBox


MartijnVisser commented on PR #15344:
URL: https://github.com/apache/flink/pull/15344#issuecomment-1260534744

   @kezhuw Could you please rebase?
   
   @zentol Is it worth taking one more look at this one? 





[GitHub] [flink-connector-dynamodb] dannycranmer commented on pull request #7: [FLINK-29441] Setup (fix) convergence configuration

2022-09-28 Thread GitBox


dannycranmer commented on PR #7:
URL: 
https://github.com/apache/flink-connector-dynamodb/pull/7#issuecomment-1260534977

   > Should we enforce minimum required Maven version too? 
https://github.com/apache/flink-connector-elasticsearch/blob/main/pom.xml#L1001
   
   We already inherit all these:
   
   ```
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven-version) @ 
flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-maven) @ 
flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (ban-unsafe-snakeyaml) @ 
flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (ban-unsafe-jackson) @ 
flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (forbid-log4j-1) @ 
flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
(forbid-direct-akka-rpc-dependencies) @ flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
(forbid-direct-table-planner-dependencies) @ flink-sql-connector-dynamodb ---
   [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-versions) @ 
flink-sql-connector-dynamodb ---
   ```





[GitHub] [flink-connector-dynamodb] dannycranmer merged pull request #7: [FLINK-29441] Setup (fix) convergence configuration

2022-09-28 Thread GitBox


dannycranmer merged PR #7:
URL: https://github.com/apache/flink-connector-dynamodb/pull/7





[jira] [Closed] (FLINK-23683) KafkaSinkITCase hangs on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-23683.
--
Resolution: Cannot Reproduce

> KafkaSinkITCase hangs on azure
> --
>
> Key: FLINK-23683
> URL: https://issues.apache.org/jira/browse/FLINK-23683
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21738&view=logs&j=4be4ed2b-549a-533d-aa33-09e28e360cc8&t=f7d83ad5-3324-5307-0eb3-819065cdcb38&l=7886





[jira] [Updated] (FLINK-14023) Support accessing job parameters in Python user-defined functions

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-14023:
---
Fix Version/s: (was: 1.16.0)

> Support accessing job parameters in Python user-defined functions
> -
>
> Key: FLINK-14023
> URL: https://issues.apache.org/jira/browse/FLINK-14023
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Dian Fu
>Priority: Major
>
> Currently, it’s possible to access job parameters in the Java user-defined 
> functions. It could be used to define the behavior according to job 
> parameters. It should also be supported for Python user-defined functions.





[jira] [Closed] (FLINK-22344) KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue times out

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-22344.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue times out
> -
>
> Key: FLINK-22344
> URL: https://issues.apache.org/jira/browse/FLINK-22344
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16731&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6619
> {code}
> Apr 18 21:49:28 [ERROR] testKafkaSourceSinkWithKeyAndFullValue[legacy = 
> false, format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase)  
> Time elapsed: 30.121 s  <<< ERROR!
> Apr 18 21:49:28 org.junit.runners.model.TestTimedOutException: test timed out 
> after 30 seconds
> Apr 18 21:49:28   at java.lang.Thread.sleep(Native Method)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sleepBeforeRetry(CollectResultFetcher.java:237)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:113)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> Apr 18 21:49:28   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:351)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.collectRows(KafkaTableTestUtils.java:52)
> Apr 18 21:49:28   at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue(KafkaTableITCase.java:538)
> Apr 18 21:49:28   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 18 21:49:28   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 18 21:49:28   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 18 21:49:28   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 18 21:49:28   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 18 21:49:28   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Apr 18 21:49:28   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Apr 18 21:49:28   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Apr 18 21:49:28   at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Updated] (FLINK-23683) KafkaSinkITCase hangs on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23683:
---
Fix Version/s: (was: 1.16.0)

> KafkaSinkITCase hangs on azure
> --
>
> Key: FLINK-23683
> URL: https://issues.apache.org/jira/browse/FLINK-23683
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21738&view=logs&j=4be4ed2b-549a-533d-aa33-09e28e360cc8&t=f7d83ad5-3324-5307-0eb3-819065cdcb38&l=7886



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22226) Insufficient slots when submitting jobs results in massive stacktrace

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22226:
---
Fix Version/s: (was: 1.16.0)

> Insufficient slots when submitting jobs results in massive stacktrace
> -
>
> Key: FLINK-22226
> URL: https://issues.apache.org/jira/browse/FLINK-22226
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client / Job Submission
>Reporter: Chesnay Schepler
>Priority: Major
>
> Submitting a job without enough slots being available causes the mother of 
> all stacktraces to show up (110 lines...).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26351) After scaling a flink task running on k8s, the flink web ui graph always shows the parallelism of the first deployment.

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-26351:
---
Fix Version/s: (was: 1.16.0)

> After scaling a flink task running on k8s, the flink web ui graph always 
> shows the parallelism of the first deployment.
> ---
>
> Key: FLINK-26351
> URL: https://issues.apache.org/jira/browse/FLINK-26351
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.15.0
>Reporter: qiunan
>Priority: Major
>  Labels: pull-request-available
>
> In the code, the Flink web UI graph data comes from the method below.
> AdaptiveScheduler.requestJob()
> {code:java}
>  @Override
>     public ExecutionGraphInfo requestJob() {
>         return new ExecutionGraphInfo(state.getJob(), 
> exceptionHistory.toArrayList());
>    } {code}
> This ExecutionGraphInfo is rebuilt, and its state restored, when the task restarts.
> As you can see in the code, the parallelism is recalculated and applied to a copy of the jobGraph.
> AdaptiveScheduler.createExecutionGraphWithAvailableResourcesAsync().
> {code:java}
> vertexParallelism = determineParallelism(slotAllocator);
> JobGraph adjustedJobGraph = jobInformation.copyJobGraph();
> for (JobVertex vertex : adjustedJobGraph.getVertices()) {
> JobVertexID id = vertex.getID();
> // use the determined "available parallelism" to use
> // the resources we have access to
> vertex.setParallelism(vertexParallelism.getParallelism(id));
> }{code}
> But restoreState copies the jobGraph again, so the jobGraph always keeps the 
> parallelism of the first deployment.
> AdaptiveScheduler.createExecutionGraphAndRestoreState(VertexParallelismStore 
> adjustedParallelismStore)  
> {code:java}
> private ExecutionGraph createExecutionGraphAndRestoreState(
> VertexParallelismStore adjustedParallelismStore) throws Exception {
> return executionGraphFactory.createAndRestoreExecutionGraph(
> jobInformation.copyJobGraph(),
> completedCheckpointStore,
> checkpointsCleaner,
> checkpointIdCounter,
> 
> TaskDeploymentDescriptorFactory.PartitionLocationConstraint.MUST_BE_KNOWN,
> initializationTimestamp,
> vertexAttemptNumberStore,
> adjustedParallelismStore,
> deploymentTimeMetrics,
> LOG);
> } {code}
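The interaction described above (parallelism adjusted on one copy of the job graph, then a fresh copy created for restore) can be illustrated with a small, hypothetical toy model; the names below are illustrative stand-ins, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;

public class ParallelismCopyDemo {
    // Toy "job graph": vertex id -> parallelism. Immutable template,
    // standing in for the jobInformation held by the scheduler.
    static final Map<String, Integer> TEMPLATE = Map.of("source", 1, "map", 1);

    // Stand-in for jobInformation.copyJobGraph(): every call returns an
    // independent copy of the unmodified template.
    static Map<String, Integer> copyJobGraph() {
        return new HashMap<>(TEMPLATE);
    }

    // The scheduler adjusts parallelism on one copy...
    static Map<String, Integer> rescaledGraph(int parallelism) {
        Map<String, Integer> copy = copyJobGraph();
        copy.replaceAll((id, p) -> parallelism);
        return copy;
    }

    // ...but restore calls copyJobGraph() again, so the adjusted copy is
    // discarded and the parallelism of the first deployment comes back.
    static Map<String, Integer> restoredGraph(int parallelism) {
        rescaledGraph(parallelism); // adjusted copy is thrown away
        return copyJobGraph();
    }
}
```

In this sketch the second copy silently drops the rescaled parallelism, which mirrors why the web UI keeps showing the parallelism of the first deployment.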



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-21340) CheckForbiddenMethodsUsage is not run and fails

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-21340:
---
Fix Version/s: (was: 1.16.0)

> CheckForbiddenMethodsUsage is not run and fails
> ---
>
> Key: FLINK-21340
> URL: https://issues.apache.org/jira/browse/FLINK-21340
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> This seems to be intended as a manual test that checks that certain methods 
> are not being used.
> It currently fails for 35 instances, 26 of which are for classes from 
> flink-shaded (i.e., out of our control).
> We should either adjust the test to ignore flink-shaded and fix the remaining 
> issues, or remove the test.
> flink-shaded classes could be ignored pretty easily like this:
> {code}
> methodUsages.removeIf(
> memberUsage ->
> memberUsage
> .getDeclaringClass()
> .getPackageName()
> .startsWith("org.apache.flink.shaded"));
> {code}
> These are the remaining failures:
> {code}
> private static void 
> org.apache.flink.api.common.typeutils.TypeSerializerSerializationUtilTest.modifySerialVersionUID(byte[],java.lang.String,long)
>  throws java.lang.Exception
> public void 
> org.apache.flink.util.IOUtilsTest.testTryReadFullyFromLongerStream() throws 
> java.io.IOException
> public void 
> org.apache.flink.core.io.PostVersionedIOReadableWritableTest.testReadNonVersionedWithLongPayload()
>  throws java.io.IOException
> public void 
> org.apache.flink.streaming.api.operators.AbstractStreamOperatorTest.testCustomRawKeyedStateSnapshotAndRestore()
>  throws java.lang.Exception
> public void 
> org.apache.flink.util.IOUtilsTest.testTryReadFullyFromShorterStream() throws 
> java.io.IOException
> public void 
> org.apache.flink.core.io.PostVersionedIOReadableWritableTest.testReadVersioned()
>  throws java.io.IOException
> private 
> org.apache.flink.runtime.checkpoint.channel.ChannelStateWriter$ChannelStateWriteResult
>  
> org.apache.flink.runtime.state.ChannelPersistenceITCase.write(long,java.util.Map,java.util.Map)
>  throws java.lang.Exception
> public void 
> org.apache.flink.core.memory.HybridOnHeapMemorySegmentTest.testReadOnlyByteBufferPut()
> private static 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle 
> org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandleTest.create(java.util.Random)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24430) Clean up log pollution

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-24430:
---
Fix Version/s: (was: 1.16.0)

> Clean up log pollution
> --
>
> Key: FLINK-24430
> URL: https://issues.apache.org/jira/browse/FLINK-24430
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.14.1, 1.15.0
>Reporter: Till Rohrmann
>Priority: Major
>
> This ticket collects various issues that cause log pollution and that should 
> therefore be resolved.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25201) Implement duplicating for gcs

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25201:
---
Fix Version/s: (was: 1.16.0)

> Implement duplicating for gcs
> -
>
> Key: FLINK-25201
> URL: https://issues.apache.org/jira/browse/FLINK-25201
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> We can use https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-29441) Setup dependency convergence check

2022-09-28 Thread Danny Cranmer (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Cranmer resolved FLINK-29441.
---
Resolution: Fixed

Merged commit 
[{{4357607}}|https://github.com/apache/flink-connector-dynamodb/commit/43576072f540b9086f0c628ee81104d5aec07cc7]
 into apache:main 

> Setup dependency convergence check
> --
>
> Key: FLINK-29441
> URL: https://issues.apache.org/jira/browse/FLINK-29441
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Major
>  Labels: pull-request-available
>
> Following the Flink repo:
>  * Convergence should be disabled by default
>  * Enabled via {{-Pcheck-convergence}}
>  * Enabled on CI
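Such an opt-in convergence check is typically wired up as a Maven profile around the enforcer plugin. A minimal sketch, assuming the usual enforcer setup — the exact ids and plugin versions in flink-connector-dynamodb may differ:

```xml
<!-- Sketch: dependency convergence disabled by default, enabled via
     `mvn verify -Pcheck-convergence` (and on CI). Ids are illustrative. -->
<profile>
  <id>check-convergence</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <executions>
          <execution>
            <id>dependency-convergence</id>
            <goals>
              <goal>enforce</goal>
            </goals>
            <configuration>
              <rules>
                <dependencyConvergence/>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```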



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25203) Implement duplicating for aliyun

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25203:
---
Fix Version/s: (was: 1.16.0)

> Implement duplicating for aliyun
> 
>
> Key: FLINK-25203
> URL: https://issues.apache.org/jira/browse/FLINK-25203
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> We can use: https://www.alibabacloud.com/help/doc-detail/31979.htm



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22222) Global config is logged twice by the client for batch jobs

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22222:
---
Fix Version/s: (was: 1.16.0)

> Global config is logged twice by the client for batch jobs
> --
>
> Key: FLINK-22222
> URL: https://issues.apache.org/jira/browse/FLINK-22222
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client
>Reporter: Chesnay Schepler
>Priority: Major
>
> Global config is logged twice by the client for batch jobs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-21513) Rethink up-/down-/restartingTime metrics

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-21513:
---
Fix Version/s: (was: 1.16.0)

> Rethink up-/down-/restartingTime metrics
> 
>
> Key: FLINK-21513
> URL: https://issues.apache.org/jira/browse/FLINK-21513
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Metrics
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major, reactive
>
> While thinking about FLINK-21510 I stumbled upon some issues in the 
> semantics of these metrics, both from a user perspective and from our own, 
> and I think we need to clarify some things.
> h4. upTime
> This metric describes the time since the job transitioned into the RUNNING state.
> It is meant as a measure of how stable a deployment is.
> In the default scheduler this transition happens before we do any actual 
> scheduling work, and as a result it also includes the time it takes for the 
> JM to request slots and deploy tasks. In practice this means we start the 
> timer once the job has been submitted and the JobMaster/Scheduler/EG have 
> been initialized.
> For the adaptive scheduler this now puts us a bit into an odd situation 
> because it first acquires slots before actually transitioning the EG into a 
> RUNNING state, so as is we'd end up measuring 2 slightly different things.
> The question now is whether this is a problem.
> While we could certainly stick with the definition of "time since the EG 
> switched to RUNNING", this raises the question of what the semantics of this 
> metric should be if a scheduler uses a different data-structure than the EG.
> In other words, what I'm looking for is a definition that is independent from 
> existing data-structures; a crude example could be "The time since the job is 
> in a state where the deployment of a task is possible.".
> An alternative for the adaptive scheduler would be to measure the time since 
> we transitioned to WaitingForResources, with which we would also include the 
> slot acquisition, but it would be inconsistent with the logs and UI (because 
> they only display an INITIALIZING job).
> h4. restartingTime
> This metric describes the time since the job transitioned into a RESTARTING 
> state.
> It is meant as a measure for how long the recovery in case of a job failure 
> takes.
> In the default scheduler this in practice is the time between a failure 
> arriving at the JM and the cancellation of tasks being completed / restart 
> backoff (whichever is higher).
> This is consistent with the semantics of the upTime metric, because upTime 
> also includes the time required for acquiring slots and deploying tasks.
> For the adaptive scheduler we can follow similar semantics, by measuring the 
> time we spend in the {{Restarting}} state.
> However, if we stick to the definition of upTime as time spent in RUNNING, 
> then we will end up with a gap for the time spent in WaitingForResources.
> h4. downTime
> This metric describes the time between the job transitioning from FAILING to 
> RUNNING.
> It is meant as a measure for how long the recovery in case of a job failure 
> takes.
> You may be wondering what the difference between {{downTime}} and 
> {{restartingTime}} is meant to be. Unfortunately I do not have the answer to 
> that.
> Presumably, at the time they were added, they were covering different parts 
> of the recovery process, but since we never documented these steps explicitly 
> the exact semantics are no longer clear and there are no specs that a 
> scheduler can follow.
> Furthermore, this metric is currently broken because a FAILING job _never_ 
> transitions into RUNNING anymore.
> The default scheduler transitions from RUNNING -> RESTARTING -> RUNNING, 
> whereas the adaptive scheduler cancels the job and creates a new EG.
> As it is we could probably merge downTime and restartingTime because they 
> seem to cover the exact same thing.
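The state-transition-based definitions discussed above all boil down to "time elapsed since the job entered some state", which can be sketched as a tiny gauge. This is a hypothetical illustration of the semantics, not Flink's actual metric implementation:

```java
// Hypothetical upTime/restartingTime-style gauge: milliseconds elapsed
// since the job entered a given state, or 0 if that transition never
// happened. The clock is passed in explicitly to keep the sketch testable.
public class StateTimeGauge {
    private final long transitionTimestampMillis; // -1 if the state was never entered

    public StateTimeGauge(long transitionTimestampMillis) {
        this.transitionTimestampMillis = transitionTimestampMillis;
    }

    public long getValue(long nowMillis) {
        return transitionTimestampMillis < 0
                ? 0L
                : nowMillis - transitionTimestampMillis;
    }
}
```

Whether the timestamp is taken when the EG switches to RUNNING or when the adaptive scheduler enters WaitingForResources is exactly the definitional choice the ticket raises; the gauge itself is the same either way.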



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22228) Random log exception when tasks finish after a checkpoint was started

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22228:
---
Fix Version/s: (was: 1.16.0)

> Random log exception when tasks finish after a checkpoint was started
> -
>
> Key: FLINK-22228
> URL: https://issues.apache.org/jira/browse/FLINK-22228
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Chesnay Schepler
>Priority: Major
>
> ??StephanEwen: Random log exception when tasks finish after a checkpoint was 
> started.??
>  
> {code:java}
> 9407 [Checkpoint Timer] INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering 
> checkpoint 1 (type=CHECKPOINT) @ 1617983491976 for job 
> d8225dac771bf607cf8dd869964c6265.
> 9501 [Source: numbers -> Map -> Sink: Data stream collect sink (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task [] - Source: numbers -> Map -> 
> Sink: Data stream collect sink (1/1)#0 (3f61526eef13676a7d96010799c04e1c) 
> switched from RUNNING to FINISHED.
> 9501 [Source: numbers -> Map -> Sink: Data stream collect sink (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task [] - Freeing task resources for 
> Source: numbers -> Map -> Sink: Data stream collect sink (1/1)#0 
> (3f61526eef13676a7d96010799c04e1c).
> 9501 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task 
> and sending final execution state FINISHED to JobManager for task Source: 
> numbers -> Map -> Sink: Data stream collect sink (1/1)#0 
> 3f61526eef13676a7d96010799c04e1c.
> 9501 [AsyncOperations-thread-1] INFO  
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable [] - Source: 
> numbers -> Map -> Sink: Data stream collect sink (1/1)#0 - asynchronous part 
> of checkpoint 1 could not be completed.
> java.util.concurrent.CancellationException: null
>   at java.util.concurrent.FutureTask.report(FutureTask.java:121) ~[?:?]
>   at java.util.concurrent.FutureTask.get(FutureTask.java:191) ~[?:?]
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils.runIfNotDoneAndGet(FutureUtils.java:636)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:60)
>  ~[classes/:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.AsyncCheckpointRunnable.run(AsyncCheckpointRunnable.java:128)
>  [classes/:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25195) Use duplicating API for shared artefacts in RocksDB snapshots

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25195:
---
Fix Version/s: (was: 1.16.0)

> Use duplicating API for shared artefacts in RocksDB snapshots
> -
>
> Key: FLINK-25195
> URL: https://issues.apache.org/jira/browse/FLINK-25195
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Reporter: Dawid Wysakowicz
>Assignee: Yun Tang
>Priority: Major
>
> Instead of uploading shared artefacts again, we could use the duplicating 
> API to cheaply create an independent copy of them (as described in 
> FLINK-25192)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25200) Implement duplicating for s3 filesystem

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25200:
---
Fix Version/s: (was: 1.16.0)

> Implement duplicating for s3 filesystem
> ---
>
> Key: FLINK-25200
> URL: https://issues.apache.org/jira/browse/FLINK-25200
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> We can use https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23485) Add a benchmark for the Changelog DFS writer

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23485:
---
Fix Version/s: (was: 1.16.0)

> Add a benchmark for the Changelog DFS writer
> 
>
> Key: FLINK-23485
> URL: https://issues.apache.org/jira/browse/FLINK-23485
> Project: Flink
>  Issue Type: Sub-task
>  Components: Benchmarks
>Reporter: Roman Khachatryan
>Priority: Major
>
> http://codespeed.dak8s.net:8000/
> Besides adding a benchmark, update (or better, re-write) the script for 
> regression detection.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25202) Implement duplicating for azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25202:
---
Fix Version/s: (was: 1.16.0)

> Implement duplicating for azure
> ---
>
> Key: FLINK-25202
> URL: https://issues.apache.org/jira/browse/FLINK-25202
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> We can use: 
> https://docs.microsoft.com/en-us/rest/api/storageservices/copy-blob



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22230) Misleading TaskExecutorResourceUtils log messages

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22230:
---
Fix Version/s: (was: 1.16.0)

> Misleading TaskExecutorResourceUtils log messages
> -
>
> Key: FLINK-22230
> URL: https://issues.apache.org/jira/browse/FLINK-22230
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Reporter: Chesnay Schepler
>Priority: Major
>
> ??Stephan Ewen: These lines show up on any execution of a local job and make 
> me think I forgot to configure something I probably should have, wondering 
> whether this might cause problems later? These have been in Flink for a few 
> releases now, might be worth rephrasing, though.??
> {code}
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.cpu.cores required for local execution is 
> not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.task.heap.size required for local 
> execution is not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.task.off-heap.size required for local 
> execution is not set, setting it to the maximal possible value.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.network.min required for local 
> execution is not set, setting it to its default value 64 mb.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.network.max required for local 
> execution is not set, setting it to its default value 64 mb.
> 2021-03-30 17:57:22,483 INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
> configuration option taskmanager.memory.managed.size required for local 
> execution is not set, setting it to its default value 128 mb.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-17397) Support processing-time temporal join with filesystem source

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-17397:
---
Fix Version/s: (was: 1.16.0)

> Support processing-time temporal join with filesystem source
> 
>
> Key: FLINK-17397
> URL: https://issues.apache.org/jira/browse/FLINK-17397
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23236) FanOutShardSubscriberTest.testTimeoutEnqueuingEvent fails on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23236:
---
Fix Version/s: (was: 1.16.0)

> FanOutShardSubscriberTest.testTimeoutEnqueuingEvent fails on azure
> --
>
> Key: FLINK-23236
> URL: https://issues.apache.org/jira/browse/FLINK-23236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.14.0, 1.13.1, 1.12.4, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19874&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=14218
> {code}
> Jul 04 22:22:17 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 3.415 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriberTest
> Jul 04 22:22:17 [ERROR] 
> testTimeoutEnqueuingEvent(org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriberTest)
>   Time elapsed: 2.552 s  <<< FAILURE!
> Jul 04 22:22:17 java.lang.AssertionError: Expected test to throw (an instance 
> of 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber$RecoverableFanOutSubscriberException
>  and exception with message a string containing "Timed out enqueuing event 
> SubscriptionNextEvent")
> Jul 04 22:22:17   at org.junit.Assert.fail(Assert.java:88)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:263)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException.access$200(ExpectedException.java:106)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:245)
> Jul 04 22:22:17   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 04 22:22:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 04 22:22:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-23236) FanOutShardSubscriberTest.testTimeoutEnqueuingEvent fails on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-23236.
--
Resolution: Cannot Reproduce

> FanOutShardSubscriberTest.testTimeoutEnqueuingEvent fails on azure
> --
>
> Key: FLINK-23236
> URL: https://issues.apache.org/jira/browse/FLINK-23236
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.14.0, 1.13.1, 1.12.4, 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19874&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=14218
> {code}
> Jul 04 22:22:17 [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 3.415 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriberTest
> Jul 04 22:22:17 [ERROR] 
> testTimeoutEnqueuingEvent(org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriberTest)
>   Time elapsed: 2.552 s  <<< FAILURE!
> Jul 04 22:22:17 java.lang.AssertionError: Expected test to throw (an instance 
> of 
> org.apache.flink.streaming.connectors.kinesis.internals.publisher.fanout.FanOutShardSubscriber$RecoverableFanOutSubscriberException
>  and exception with message a string containing "Timed out enqueuing event 
> SubscriptionNextEvent")
> Jul 04 22:22:17   at org.junit.Assert.fail(Assert.java:88)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException.failDueToMissingException(ExpectedException.java:263)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException.access$200(ExpectedException.java:106)
> Jul 04 22:22:17   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:245)
> Jul 04 22:22:17   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> Jul 04 22:22:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> Jul 04 22:22:17   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> Jul 04 22:22:17   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Jul 04 22:22:17   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26479) Update the documentation of KeyedStream.window adding a few examples on how to create count window and time window in Python DataStream API

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-26479:
---
Fix Version/s: (was: 1.16.0)

> Update the documentation of KeyedStream.window adding a few examples on how 
> to create count window and time window in Python DataStream API
> ---
>
> Key: FLINK-26479
> URL: https://issues.apache.org/jira/browse/FLINK-26479
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Documentation
>Reporter: Dian Fu
>Assignee: zhangjingcun
>Priority: Major
>
> A few handy WindowAssigners have been added in FLINK-26444 and it would be 
> great to add a few examples to KeyedStream.window on how to use them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23484) Add benchmarks for the ChangelogStateBackend

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23484:
---
Fix Version/s: (was: 1.16.0)

> Add benchmarks for the ChangelogStateBackend
> 
>
> Key: FLINK-23484
> URL: https://issues.apache.org/jira/browse/FLINK-23484
> Project: Flink
>  Issue Type: Sub-task
>  Components: Benchmarks
>Reporter: Roman Khachatryan
>Priority: Major
>
> [http://codespeed.dak8s.net:8000/]
> (similar to existing RocksDB or Heap)
> Should test several configurations (varying number of tasks per TM, batching 
> options, etc.).
> Besides adding a benchmark, update (or better, re-write) the script for 
> regression detection.





[jira] [Assigned] (FLINK-26966) Implement incremental checkpoints

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-26966:
--

Assignee: Roman Khachatryan

> Implement incremental checkpoints
> -
>
> Key: FLINK-26966
> URL: https://issues.apache.org/jira/browse/FLINK-26966
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (FLINK-27146) [JUnit5 Migration] Module: flink-filesystems

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-27146:
---
Fix Version/s: (was: 1.16.0)

> [JUnit5 Migration] Module: flink-filesystems
> 
>
> Key: FLINK-27146
> URL: https://issues.apache.org/jira/browse/FLINK-27146
> Project: Flink
>  Issue Type: Sub-task
>  Components: FileSystems, Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>






[jira] [Updated] (FLINK-26966) Implement incremental checkpoints

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-26966:
---
Fix Version/s: (was: 1.16.0)

> Implement incremental checkpoints
> -
>
> Key: FLINK-26966
> URL: https://issues.apache.org/jira/browse/FLINK-26966
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Reporter: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Closed] (FLINK-22892) Resuming Externalized Checkpoint (rocks, incremental, scale down) end-to-end test fails

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-22892.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> Resuming Externalized Checkpoint (rocks, incremental, scale down) end-to-end 
> test fails
> ---
>
> Key: FLINK-22892
> URL: https://issues.apache.org/jira/browse/FLINK-22892
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18700&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b&l=921
> {code}
> Jun 05 20:47:11 Running 'Resuming Externalized Checkpoint (rocks, 
> incremental, scale down) end-to-end test'
> Jun 05 20:47:11 
> ==
> Jun 05 20:47:11 TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-11283890527
> Jun 05 20:47:11 Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.14-SNAPSHOT-bin/flink-1.14-SNAPSHOT
> Jun 05 20:47:11 Starting cluster.
> Jun 05 20:47:12 Starting standalonesession daemon on host fv-az83-351.
> Jun 05 20:47:13 Starting taskexecutor daemon on host fv-az83-351.
> Jun 05 20:47:13 Waiting for Dispatcher REST endpoint to come up...
> Jun 05 20:47:14 Waiting for Dispatcher REST endpoint to come up...
> Jun 05 20:47:16 Waiting for Dispatcher REST endpoint to come up...
> Jun 05 20:47:17 Waiting for Dispatcher REST endpoint to come up...
> Jun 05 20:47:18 Dispatcher REST endpoint is up.
> Jun 05 20:47:18 Running externalized checkpoints test,   with ORIGINAL_DOP=4 
> NEW_DOP=2   and STATE_BACKEND_TYPE=rocks STATE_BACKEND_FILE_ASYNC=true   
> STATE_BACKEND_ROCKSDB_INCREMENTAL=true SIMULATE_FAILURE=false ...
> Jun 05 20:47:25 Job (b84a167a07b862a9dbbcdc6a5969f75c) is running.
> Jun 05 20:47:25 Waiting for job (b84a167a07b862a9dbbcdc6a5969f75c) to have at 
> least 1 completed checkpoints ...
> Jun 05 20:57:30 A timeout occurred waiting for job 
> (b84a167a07b862a9dbbcdc6a5969f75c) to have at least 1 completed checkpoints .
> Jun 05 20:57:30 Stopping job timeout watchdog (with pid=23016)
> Jun 05 20:57:30 [FAIL] Test script contains errors.
> Jun 05 20:57:30 Checking of logs skipped.
> Jun 05 20:57:30 
> Jun 05 20:57:30 [FAIL] 'Resuming Externalized Checkpoint (rocks, incremental, 
> scale down) end-to-end test' failed after 10 minutes and 19 seconds! Test 
> exited with exit code 1
> Jun 05 20:57:30 
> {code}





[jira] [Closed] (FLINK-24949) KafkaITCase.testBigRecordJob fails on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-24949.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> KafkaITCase.testBigRecordJob fails on azure
> ---
>
> Key: FLINK-24949
> URL: https://issues.apache.org/jira/browse/FLINK-24949
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.1
>Reporter: Yun Gao
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> {code:java}
> Nov 17 23:39:39 [ERROR] Tests run: 23, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 222.57 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> Nov 17 23:39:39 [ERROR] testBigRecordJob  Time elapsed: 60.02 s  <<< ERROR!
> Nov 17 23:39:39 org.junit.runners.model.TestTimedOutException: test timed out 
> after 6 milliseconds
> Nov 17 23:39:39   at sun.misc.Unsafe.park(Native Method)
> Nov 17 23:39:39   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Nov 17 23:39:39   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Nov 17 23:39:39   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Nov 17 23:39:39   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Nov 17 23:39:39   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Nov 17 23:39:39   at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:58)
> Nov 17 23:39:39   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBigRecordTestTopology(KafkaConsumerTestBase.java:1473)
> Nov 17 23:39:39   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBigRecordJob(KafkaITCase.java:119)
> Nov 17 23:39:39   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Nov 17 23:39:39   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Nov 17 23:39:39   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Nov 17 23:39:39   at java.lang.reflect.Method.invoke(Method.java:498)
> Nov 17 23:39:39   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Nov 17 23:39:39   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Nov 17 23:39:39   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Nov 17 23:39:39   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Nov 17 23:39:39   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Nov 17 23:39:39   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Nov 17 23:39:39   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Nov 17 23:39:39   at java.lang.Thread.run(Thread.java:748)
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=26679&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=15a22db7-8faa-5b34-3920-d33c9f0ca23c&l=7161
>  





[jira] [Updated] (FLINK-27081) Remove Azure integration

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-27081:
---
Fix Version/s: (was: 1.16.0)

> Remove Azure integration
> 
>
> Key: FLINK-27081
> URL: https://issues.apache.org/jira/browse/FLINK-27081
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>
> Remove all traces of azure pipelines from Flink.





[jira] [Updated] (FLINK-26829) ClassCastException will be thrown when the second operand of divide is a function call

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-26829:
---
Fix Version/s: (was: 1.16.0)

> ClassCastException will be thrown  when the second operand of divide is a 
> function call
> ---
>
> Key: FLINK-26829
> URL: https://issues.apache.org/jira/browse/FLINK-26829
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: luoyuxia
>Assignee: luoyuxia
>Priority: Major
>
> Can be reproduced by adding the following code to 
> SqlExpressionTest#testDivideFunctions
> {code:java}
> testExpectedSqlException(
>   "1/POWER(5, 5)", divisorZeroException, classOf[ArithmeticException]) {code}
> Then the method ExpressionReducer#skipAndValidateExprs will throw the 
> exception:
> {code:java}
> java.lang.ClassCastException: org.apache.calcite.rex.RexCall cannot be cast 
> to org.apache.calcite.rex.RexLiteral  {code}
> The following code casts the DIVIDE's second operand to RexLiteral, but it 
> may be a function call.
> {code:java}
> // according to BuiltInFunctionDefinitions, the DEVIDE's second op must be 
> numeric
> assert(RexUtil.isDeterministic(divisionLiteral))
> val divisionComparable = {
>   
> divisionLiteral.asInstanceOf[RexLiteral].getValue.asInstanceOf[Comparable[Any]]
> } {code}





[jira] [Updated] (FLINK-26948) Add SORT_ARRAY supported in SQL & Table API

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-26948:
---
Fix Version/s: (was: 1.16.0)

> Add SORT_ARRAY supported in SQL & Table API
> ---
>
> Key: FLINK-26948
> URL: https://issues.apache.org/jira/browse/FLINK-26948
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Priority: Major
>
> Returns the array in {{expr}} in sorted order.
> Syntax:
> {code:java}
> sort_array(expr [, ascendingOrder] ) {code}
> Arguments:
>  * {{{}expr{}}}: An ARRAY expression of sortable elements.
>  * {{{}ascendingOrder{}}}: An optional BOOLEAN expression defaulting to 
> {{{}true{}}}.
> Returns:
> The result type matches {{{}expr{}}}.
> Sorts the input array in ascending or descending order according to the 
> natural ordering of the array elements. {{NULL}} elements are placed at the 
> beginning of the returned array in ascending order or at the end of the 
> returned array in descending order.
> Examples:
> {code:java}
> > SELECT sort_array(array('b', 'd', NULL, 'c', 'a'), true);
>  [NULL,a,b,c,d] {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]
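The NULL-ordering semantics described above can be captured in a short plain-Python reference sketch (an illustration of the described behavior only, not Flink or Spark code):

```python
def sort_array(arr, ascending=True):
    # NULL (None) elements sort to the front in ascending order and to the
    # end in descending order; non-null elements use natural ordering.
    non_null = sorted(x for x in arr if x is not None)
    nulls = [None] * (len(arr) - len(non_null))
    return nulls + non_null if ascending else non_null[::-1] + nulls

print(sort_array(["b", "d", None, "c", "a"], True))   # [None, 'a', 'b', 'c', 'd']
print(sort_array(["b", "d", None, "c", "a"], False))  # ['d', 'c', 'b', 'a', None]
```

The first call reproduces the `[NULL,a,b,c,d]` result from the SQL example quoted above.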





[jira] [Closed] (FLINK-22572) JsonFsStreamSinkITCase.testPart fails running in a timeout

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-22572.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> JsonFsStreamSinkITCase.testPart fails running in a timeout
> --
>
> Key: FLINK-22572
> URL: https://issues.apache.org/jira/browse/FLINK-22572
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile), Table SQL / Ecosystem
>Affects Versions: 1.14.0
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This build|https://dev.azure.com/mapohl/flink/_build/results?buildId=408] 
> failed (not exclusively) due to a timeout in 
> [testPart(org.apache.flink.formats.json.JsonFsStreamSinkITCase)|https://dev.azure.com/mapohl/flink/_build/results?buildId=408&view=logs&j=dafbab6d-4616-5d7b-ee37-3c54e4828fd7&t=777327ab-6d4e-582e-3e76-4a9391c57e59&l=11295].





[jira] [Closed] (FLINK-26651) The connector CI build timeout in azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-26651.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Done

> The connector CI build timeout in azure
> ---
>
> Key: FLINK-26651
> URL: https://issues.apache.org/jira/browse/FLINK-26651
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Leonard Xu
>Priority: Major
>
> {code:java}
> his task took already 95% of the available time budget of 234 minutes 
> ...
> = 
> === WARNING: Killing task === 
> = 
> ./tools/azure-pipelines/uploading_watchdog.sh: line 76: 304 Terminated 
> $COMMAND 
> {code}
> Failed instance: 
> https://dev.azure.com/leonardBang/Azure_CI/_build/results?buildId=676&view=logs&j=dafbab6d-4616-5d7b-ee37-3c54e4828fd7&t=e204f081-e6cd-5c04-4f4c-919639b63be9
>  





[jira] [Closed] (FLINK-24854) StateHandleSerializationTest unit test error

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-24854.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Done

> StateHandleSerializationTest unit test error
> 
>
> Key: FLINK-24854
> URL: https://issues.apache.org/jira/browse/FLINK-24854
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: zlzhang0122
>Priority: Major
>  Labels: pull-request-available
>
> StateHandleSerializationTest.ensureStateHandlesHaveSerialVersionUID() will 
> fail because RocksDBStateDownloaderTest has an anonymous subclass of 
> StreamStateHandle. Since that class is a subtype of StateObject and is 
> anonymous, the assertFalse fails, and so does this unit test.





[jira] [Updated] (FLINK-19085) Remove deprecated methods for writing CSV and Text files from DataStream

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-19085:
---
Fix Version/s: (was: 1.16.0)

> Remove deprecated methods for writing CSV and Text files from DataStream
> 
>
> Key: FLINK-19085
> URL: https://issues.apache.org/jira/browse/FLINK-19085
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-unassigned
>
> We can remove long deprecated {{PublicEvolving}} methods:
> - DataStream#writeAsText
> - DataStream#writeAsCsv





[jira] [Closed] (FLINK-23880) WindowAggregateITCase.testCascadeEventTimeTumbleWindowWithOffset fail with Artificial Failure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-23880.
--
Resolution: Cannot Reproduce

> WindowAggregateITCase.testCascadeEventTimeTumbleWindowWithOffset fail with  
> Artificial Failure
> --
>
> Key: FLINK-23880
> URL: https://issues.apache.org/jira/browse/FLINK-23880
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22511&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=9621
> {code}
> Aug 20 00:28:31 Caused by: java.lang.Exception: Artificial Failure
> Aug 20 00:28:31   at 
> org.apache.flink.table.planner.runtime.utils.FailingCollectionSource.run(FailingCollectionSource.java:172)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:116)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:73)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:323)
> {code}





[jira] [Updated] (FLINK-23880) WindowAggregateITCase.testCascadeEventTimeTumbleWindowWithOffset fail with Artificial Failure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23880:
---
Fix Version/s: (was: 1.16.0)

> WindowAggregateITCase.testCascadeEventTimeTumbleWindowWithOffset fail with  
> Artificial Failure
> --
>
> Key: FLINK-23880
> URL: https://issues.apache.org/jira/browse/FLINK-23880
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22511&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=9621
> {code}
> Aug 20 00:28:31 Caused by: java.lang.Exception: Artificial Failure
> Aug 20 00:28:31   at 
> org.apache.flink.table.planner.runtime.utils.FailingCollectionSource.run(FailingCollectionSource.java:172)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:116)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:73)
> Aug 20 00:28:31   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:323)
> {code}





[jira] [Updated] (FLINK-21693) TestStreamEnvironment does not implement executeAsync

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-21693:
---
Fix Version/s: (was: 1.16.0)

> TestStreamEnvironment does not implement executeAsync
> -
>
> Key: FLINK-21693
> URL: https://issues.apache.org/jira/browse/FLINK-21693
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> When implementing FLINK-14392 we forgot to implement 
> {{TestStreamEnvironment.executeAsync}}. As a consequence, when using 
> {{TestStreamEnvironment}} and calling {{executeAsync}} the system will always 
> start a new local {{MiniCluster}}. The proper behaviour would be to use the 
> {{MiniCluster}} specified in the {{TestStreamEnvironment}}.





[jira] [Updated] (FLINK-13400) Remove Hive and Hadoop dependencies from SQL Client

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-13400:
---
Fix Version/s: (was: 1.16.0)

> Remove Hive and Hadoop dependencies from SQL Client
> ---
>
> Key: FLINK-13400
> URL: https://issues.apache.org/jira/browse/FLINK-13400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: frank wang
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> pull-request-available, stale-assigned
>
> 340/550 lines in the SQL Client {{pom.xml}} are just around Hive and Hadoop 
> dependencies. Hive has nothing to do with the SQL Client and it will be hard 
> to maintain the long list of exclusions there. Some dependencies are even in 
> a {{provided}} scope instead of {{test}} scope.
> We should remove all dependencies on Hive/Hadoop and replace catalog-related 
> tests with a testing catalog, similar to how we test sources/sinks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23310) Correct the `ModifyKindSetTrait` for `GroupWindowAggregate` when the input is an update stream

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23310:
---
Fix Version/s: (was: 1.16.0)

> Correct the `ModifyKindSetTrait` for `GroupWindowAggregate` when the input is 
> an update stream 
> ---
>
> Key: FLINK-23310
> URL: https://issues.apache.org/jira/browse/FLINK-23310
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Shuo Cheng
>Priority: Minor
>
> Following FLINK-22781, group window now supports update input streams. Just 
> like unbounded aggregate, group window may also emit DELETE records, so the 
> `ModifyKindSetTrait` for group window should be modified as well.





[jira] [Closed] (FLINK-23390) FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-23390.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed
> -
>
> Key: FLINK-23390
> URL: https://issues.apache.org/jira/browse/FLINK-23390
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20454&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=f266c805-9429-58ed-2f9e-482e7b82f58b&l=6914
> {code}
> Jul 14 22:01:05 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 49 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
> Jul 14 22:01:05 [ERROR] 
> testResumeTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>   Time elapsed: 5.271 s  <<< ERROR!
> Jul 14 22:01:05 java.lang.Exception: Unexpected exception, 
> expected but was
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:30)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 14 22:01:05   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> Jul 14 22:01:05   at java.base/java.lang.Thread.run(Thread.java:834)
> Jul 14 22:01:05 Caused by: java.lang.AssertionError: The message should have 
> been successfully sent expected null, but 
> was:
> Jul 14 22:01:05   at org.junit.Assert.fail(Assert.java:89)
> Jul 14 22:01:05   at org.junit.Assert.failNotNull(Assert.java:756)
> Jul 14 22:01:05   at org.junit.Assert.assertNull(Assert.java:738)
> Jul 14 22:01:05   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228)
> Jul 14 22:01:05   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testResumeTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:184)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 14 22:01:05   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 14 22:01:05   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> Jul 14 22:01:05   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 14 22:01:05   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 14 22:01:05   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
> Jul 14 22:01:05   ... 4 more
> {code}





[jira] [Updated] (FLINK-19670) Query on a view of temporal join throws ParseException

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-19670:
---
Fix Version/s: (was: 1.16.0)

> Query on a view of temporal join throws ParseException
> --
>
> Key: FLINK-19670
> URL: https://issues.apache.org/jira/browse/FLINK-19670
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.1
>Reporter: simenliuxing
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> When I run a SQL task with Flink 1.11.1 and the Blink planner, the following 
> syntax error appears. The SQL is as follows:
> {code:java}
> CREATE TABLE orders (
>   order_id INT,
>   product_id INT,
>   proctime AS PROCTIME()
> ) WITH (
>   'connector' = 'kafka',
>   'topic' = 'test',
>   'properties.bootstrap.servers' = 'localhost:9092',
>   'format' = 'json'
> );
> create table products(
>   pro_id  INT ,
>   product_name STRING,
>   PRIMARY  KEY (pro_id) NOT ENFORCED
> ) WITH  (
>   'connector'='jdbc',
>   'url'='jdbc:mysql://localhost:3306/test',
>   'username'='root',
>   'password'='root',
>   'table-name'='result4'
> );
> CREATE TABLE orders_info (
>   order_id INT,
>   pro_id INT,
>   product_name STRING
> ) WITH (
> 'connector' = 'print'
> );
> create view orders_view
> as
> SELECT
> order_id,
> pro_id,
> product_name
> FROM orders
> LEFT JOIN products FOR SYSTEM_TIME AS OF orders.proctime
> ON orders.product_id = products.pro_id;
> INSERT INTO orders_info SELECT * FROM orders_view;
> {code}
> The error is as follows:
> {code:java}
> Caused by: org.apache.flink.sql.parser.impl.ParseException: Encountered "FOR" 
> at line 3, column 73.
> Was expecting one of:
>  
> "EXCEPT" ...
> "FETCH" ...
> "GROUP" ...
> "HAVING" ...
> "INTERSECT" ...
> "LIMIT" ...
> "OFFSET" ...
> "ON" ...
> "ORDER" ...
> "MINUS" ...
> "TABLESAMPLE" ...
> "UNION" ...
> "USING" ...
> "WHERE" ...
> "WINDOW" ...
> "(" ...
> "NATURAL" ...
> "JOIN" ...
> "INNER" ...
> "LEFT" ...
> "RIGHT" ...
> "FULL" ...
> "CROSS" ...
> "," ...
> "OUTER" ...
> 
>   at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.generateParseException(FlinkSqlParserImpl.java:36086)
>   at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.jj_consume_token(FlinkSqlParserImpl.java:35900)
>   at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.SqlStmtEof(FlinkSqlParserImpl.java:3801)
>   at 
> org.apache.flink.sql.parser.impl.FlinkSqlParserImpl.parseSqlStmtEof(FlinkSqlParserImpl.java:248)
>   at 
> org.apache.calcite.sql.parser.SqlParser.parseQuery(SqlParser.java:161)
>   ... 25 more
> {code}
>  





[jira] [Closed] (FLINK-20410) SQLClientSchemaRegistryITCase.testWriting failed with "Subject 'user_behavior' not found.; error code: 40401"

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-20410.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> SQLClientSchemaRegistryITCase.testWriting failed with "Subject 
> 'user_behavior' not found.; error code: 40401"
> -
>
> Key: FLINK-20410
> URL: https://issues.apache.org/jira/browse/FLINK-20410
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0, 1.13.0, 1.14.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10276&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 2020-11-28T01:14:08.6444305Z Nov 28 01:14:08 [ERROR] 
> testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  
> Time elapsed: 74.818 s  <<< ERROR!
> 2020-11-28T01:14:08.6445353Z Nov 28 01:14:08 
> io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
> Subject 'user_behavior' not found.; error code: 40401
> 2020-11-28T01:14:08.6446071Z Nov 28 01:14:08  at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
> 2020-11-28T01:14:08.6446910Z Nov 28 01:14:08  at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
> 2020-11-28T01:14:08.6447522Z Nov 28 01:14:08  at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769)
> 2020-11-28T01:14:08.6448352Z Nov 28 01:14:08  at 
> io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760)
> 2020-11-28T01:14:08.6449091Z Nov 28 01:14:08  at 
> io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364)
> 2020-11-28T01:14:08.6449878Z Nov 28 01:14:08  at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195)
> {code}





[jira] [Assigned] (FLINK-21438) Broken links in content.zh/docs/concepts/flink-architecture.md

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-21438:
--

Assignee: Ting Sun

> Broken links in content.zh/docs/concepts/flink-architecture.md
> --
>
> Key: FLINK-21438
> URL: https://issues.apache.org/jira/browse/FLINK-21438
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.13.0
>Reporter: Ting Sun
>Assignee: Ting Sun
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> While reading the Chinese doc I found that some links are broken and return 
> 404 responses. This is caused by a formatting error: the broken links differ 
> from their corresponding links in the English doc, while links that are 
> identical to their English counterparts work correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22243) Reactive Mode parallelism changes are not shown in the job graph visualization in the UI

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-22243:
---
Fix Version/s: (was: 1.16.0)

> Reactive Mode parallelism changes are not shown in the job graph 
> visualization in the UI
> 
>
> Key: FLINK-22243
> URL: https://issues.apache.org/jira/browse/FLINK-22243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Robert Metzger
>Priority: Major
> Attachments: screenshot-1.png
>
>
> As reported here FLINK-22134, the parallelism in the visual job graph on top 
> of the detail page is not in sync with the parallelism listed in the task 
> list below, when reactive mode causes a parallelism change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-21438) Broken links in content.zh/docs/concepts/flink-architecture.md

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-21438:
---
Fix Version/s: (was: 1.16.0)

> Broken links in content.zh/docs/concepts/flink-architecture.md
> --
>
> Key: FLINK-21438
> URL: https://issues.apache.org/jira/browse/FLINK-21438
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.13.0
>Reporter: Ting Sun
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> While reading the Chinese doc I found that some links are broken and return 
> 404 responses. This is caused by a formatting error: the broken links differ 
> from their corresponding links in the English doc, while links that are 
> identical to their English counterparts work correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-24173) BatchFileSystemITCaseBase.testProjectPushDown hangs on azure

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-24173.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> BatchFileSystemITCaseBase.testProjectPushDown hangs on azure
> 
>
> Key: FLINK-24173
> URL: https://issues.apache.org/jira/browse/FLINK-24173
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.15.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23579&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=14779
> {code}
> Sep 06 09:32:51 "main" #1 prio=5 os_prio=0 tid=0x7f4ae400b800 nid=0x75e9 
> waiting on condition [0x7f4aee144000]
> Sep 06 09:32:51java.lang.Thread.State: WAITING (parking)
> Sep 06 09:32:51   at sun.misc.Unsafe.park(Native Method)
> Sep 06 09:32:51   - parking to wait for  <0x8ef18ae8> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Sep 06 09:32:51   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Sep 06 09:32:51   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Sep 06 09:32:51   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> Sep 06 09:32:51   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Sep 06 09:32:51   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Sep 06 09:32:51   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sendRequest(CollectResultFetcher.java:163)
> Sep 06 09:32:51   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:128)
> Sep 06 09:32:51   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106)
> Sep 06 09:32:51   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> Sep 06 09:32:51   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
> Sep 06 09:32:51   at 
> java.util.Iterator.forEachRemaining(Iterator.java:115)
> Sep 06 09:32:51   at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.testProjectPushDown(FileSystemITCaseBase.scala:251)
> Sep 06 09:32:51   at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.testProjectPushDown(BatchFileSystemITCaseBase.scala:33)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25250) Consider Java 11 as default version on CI

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-25250:
---
Fix Version/s: (was: 1.16.0)

> Consider Java 11 as default version on CI
> -
>
> Key: FLINK-25250
> URL: https://issues.apache.org/jira/browse/FLINK-25250
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System / CI
>Reporter: Chesnay Schepler
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20576) Flink Temporal Join Hive Dim Error

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-20576:
---
Fix Version/s: (was: 1.16.0)

> Flink Temporal Join Hive Dim Error
> --
>
> Key: FLINK-20576
> URL: https://issues.apache.org/jira/browse/FLINK-20576
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: HideOnBush
>Priority: Major
>  Labels: auto-unassigned
>
>  
> KAFKA DDL
> {code:java}
> CREATE TABLE hive_catalog.flink_db_test.kfk_master_test (
> master Row String, action int, orderStatus int, orderKey String, actionTime bigint, 
> areaName String, paidAmount double, foodAmount double, startTime String, 
> person double, orderSubType int, checkoutTime String>,
> proctime as PROCTIME()
> ) WITH (properties ..){code}
>  
> FLINK client query sql
> {noformat}
> SELECT * FROM hive_catalog.flink_db_test.kfk_master_test AS kafk_tbl 
>   JOIN hive_catalog.gauss.dim_extend_shop_info /*+ 
> OPTIONS('streaming-source.enable'='true', 
> 'streaming-source.partition.include' = 'latest', 
>'streaming-source.monitor-interval' = '12 
> h','streaming-source.partition-order' = 'partition-name') */ FOR SYSTEM_TIME 
> AS OF kafk_tbl.proctime AS dim 
>ON kafk_tbl.groupID = dim.group_id where kafk_tbl.groupID is not 
> null;{noformat}
> When I execute the above statement, the following errors are returned:
> Caused by: java.lang.NullPointerException: buffer
>  at 
> org.apache.flink.core.memory.MemorySegment.<init>(MemorySegment.java:161) 
> ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.HybridMemorySegment.<init>(HybridMemorySegment.java:86)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.core.memory.MemorySegmentFactory.wrap(MemorySegmentFactory.java:55)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.data.binary.BinaryStringData.fromBytes(BinaryStringData.java:98)
>  ~[flink-table_2.11-1.12.0.jar:1.12.0]
>  
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to load table 
> into cache after 3 retries at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.checkCacheReload(FileSystemLookupFunction.java:143)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.filesystem.FileSystemLookupFunction.eval(FileSystemLookupFunction.java:103)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> LookupFunction$1577.flatMap(Unknown Source) ~[?:?] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
>  ~[flink-dist_2.11-1.12.0.jar:1.12.0]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-23867) FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with IllegalStateException

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-23867.
--
Fix Version/s: (was: 1.16.0)
   Resolution: Cannot Reproduce

> FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed fails with 
> IllegalStateException
> --
>
> Key: FLINK-23867
> URL: https://issues.apache.org/jira/browse/FLINK-23867
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Xintong Song
>Assignee: Fabian Paul
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22465&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6862
> {code}
> Aug 18 23:20:14 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 51.905 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
> Aug 18 23:20:14 [ERROR] 
> testCommitTransactionAfterClosed(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>   Time elapsed: 7.848 s  <<< ERROR!
> Aug 18 23:20:14 java.lang.Exception: Unexpected exception, 
> expected<java.lang.IllegalStateException> but was<java.lang.AssertionError>
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:28)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> Aug 18 23:20:14   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 18 23:20:14   at java.lang.Thread.run(Thread.java:748)
> Aug 18 23:20:14 Caused by: java.lang.AssertionError: The message should have 
> been successfully sent expected null, but 
> was:
> Aug 18 23:20:14   at org.junit.Assert.fail(Assert.java:88)
> Aug 18 23:20:14   at org.junit.Assert.failNotNull(Assert.java:755)
> Aug 18 23:20:14   at org.junit.Assert.assertNull(Assert.java:737)
> Aug 18 23:20:14   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.getClosedProducer(FlinkKafkaInternalProducerITCase.java:228)
> Aug 18 23:20:14   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testCommitTransactionAfterClosed(FlinkKafkaInternalProducerITCase.java:177)
> Aug 18 23:20:14   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 18 23:20:14   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 18 23:20:14   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 18 23:20:14   at java.lang.reflect.Method.invoke(Method.java:498)
> Aug 18 23:20:14   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Aug 18 23:20:14   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Aug 18 23:20:14   at 
> org.junit.internal.runners.statements.ExpectException.evaluate(ExpectException.java:19)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29442) Setup license checks

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29442:
-

 Summary: Setup license checks
 Key: FLINK-29442
 URL: https://issues.apache.org/jira/browse/FLINK-29442
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


See https://issues.apache.org/jira/browse/FLINK-29310



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23324) Postgres of JDBC Connector enable case-sensitive.

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-23324:
---
Fix Version/s: (was: 1.16.0)

> Postgres of JDBC Connector enable case-sensitive.
> -
>
> Key: FLINK-23324
> URL: https://issues.apache.org/jira/browse/FLINK-23324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.13.1, 1.12.4
>Reporter: Luning Wang
>Assignee: Luning Wang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Currently the PostgresDialect is case-insensitive. I think this is a bug.
> https://stackoverflow.com/questions/20878932/are-postgresql-column-names-case-sensitive
> https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
> Could we delete PostgresDialect#quoteIdentifier and fall back to the 
> superclass implementation?
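
The case-sensitivity point can be illustrated with a small sketch (not Flink's actual code; the helper names here are hypothetical): PostgreSQL folds unquoted identifiers to lower case, so a dialect that does not quote mixed-case identifiers will fail to resolve columns created with quoted mixed-case names.

```python
# Sketch of the identifier-quoting behavior at issue (hypothetical helpers,
# not Flink's JdbcDialect API).

def quote_identifier(identifier: str) -> str:
    """Case-preserving quoting: wrap in double quotes, escaping embedded quotes."""
    return '"' + identifier.replace('"', '""') + '"'

def postgres_fold(identifier: str) -> str:
    """How PostgreSQL resolves an *unquoted* identifier: fold it to lower case."""
    return identifier.lower()

# A mixed-case column name survives only when the dialect quotes it:
assert postgres_fold("orderId") == "orderid"        # unquoted: case is lost
assert quote_identifier("orderId") == '"orderId"'   # quoted: case preserved
```

This is why delegating to a quoting superclass implementation, rather than emitting the identifier bare, would make the dialect behave case-sensitively.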



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29443) Replicate packaging tests

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29443:
-

 Summary: Replicate packaging tests
 Key: FLINK-29443
 URL: https://issues.apache.org/jira/browse/FLINK-29443
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


See https://issues.apache.org/jira/browse/FLINK-29316



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-17855) UDF with parameter Array(Row) can not work

2022-09-28 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-17855:
---
Fix Version/s: (was: 1.16.0)

> UDF with parameter Array(Row) can not work
> --
>
> Key: FLINK-17855
> URL: https://issues.apache.org/jira/browse/FLINK-17855
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, sprint
>
> {code:java}
> public String eval(Row[] rows) {
>   ...
> }
> {code}
> This does not work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29444) Setup release scripts

2022-09-28 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-29444:
-

 Summary: Setup release scripts
 Key: FLINK-29444
 URL: https://issues.apache.org/jira/browse/FLINK-29444
 Project: Flink
  Issue Type: Sub-task
Reporter: Danny Cranmer


See https://issues.apache.org/jira/browse/FLINK-29320



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

