[jira] [Created] (FLINK-35944) Add CompiledPlan annotations to BatchExecUnion

2024-07-31 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35944:
--

 Summary: Add CompiledPlan annotations to BatchExecUnion
 Key: FLINK-35944
 URL: https://issues.apache.org/jira/browse/FLINK-35944
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes


In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

For this operator, the BatchExecHashAggregate operator must be annotated as 
well.
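For background, these annotations are ExecNodeMetadata-style markers that let the JSON plan (de)serializer identify an operator and its version. A simplified, self-contained sketch of the idea; the annotation fields here are illustrative stand-ins, not Flink's actual `ExecNodeMetadata` definition:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Simplified stand-in for Flink's ExecNodeMetadata annotation; the field
// names are assumptions for illustration, not the planner's actual API.
@Retention(RetentionPolicy.RUNTIME)
@interface ExecNodeMetadata {
    String name();
    int version();
    int minPlanVersion();
}

// A batch exec node annotated so a plan serializer could identify it.
@ExecNodeMetadata(name = "batch-exec-union", version = 1, minPlanVersion = 2)
class BatchExecUnion {}

public class AnnotationSketch {
    public static void main(String[] args) {
        ExecNodeMetadata meta =
                BatchExecUnion.class.getAnnotation(ExecNodeMetadata.class);
        System.out.println(meta.name() + " v" + meta.version());
        // prints: batch-exec-union v1
    }
}
```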



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35943) Add CompiledPlan annotations to BatchExecHashJoin and BatchExecNestedLoopJoin

2024-07-31 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35943:
--

 Summary: Add CompiledPlan annotations to BatchExecHashJoin and 
BatchExecNestedLoopJoin
 Key: FLINK-35943
 URL: https://issues.apache.org/jira/browse/FLINK-35943
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes


In addition to the annotations, implement the BatchCompiledPlan test for these 
two operators.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35942) Add CompiledPlan annotations to BatchExecCorrelate

2024-07-31 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-35942:
---
Summary: Add CompiledPlan annotations to BatchExecCorrelate  (was: Add 
CompiledPlan annotations to BatchExecSort)

> Add CompiledPlan annotations to BatchExecCorrelate
> --
>
> Key: FLINK-35942
> URL: https://issues.apache.org/jira/browse/FLINK-35942
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> In addition to the annotations, implement the BatchCompiledPlan test for this 
> operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35942) Add CompiledPlan annotations to BatchExecSort

2024-07-31 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35942:
--

 Summary: Add CompiledPlan annotations to BatchExecSort
 Key: FLINK-35942
 URL: https://issues.apache.org/jira/browse/FLINK-35942
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes


In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35941) Add CompiledPlan annotations to BatchExecLimit

2024-07-31 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35941:
--

 Summary: Add CompiledPlan annotations to BatchExecLimit
 Key: FLINK-35941
 URL: https://issues.apache.org/jira/browse/FLINK-35941
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes


In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

Additionally, tests for the TableSource operator will be pulled into this work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35934) Add CompiledPlan annotations to BatchExecValues

2024-07-30 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-35934:
---
Description: In addition to the annotations, implement the 
BatchCompiledPlan test for this operator.

> Add CompiledPlan annotations to BatchExecValues
> ---
>
> Key: FLINK-35934
> URL: https://issues.apache.org/jira/browse/FLINK-35934
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> In addition to the annotations, implement the BatchCompiledPlan test for this 
> operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35934) Add CompiledPlan annotations to BatchExecValues

2024-07-30 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35934:
--

 Summary: Add CompiledPlan annotations to BatchExecValues
 Key: FLINK-35934
 URL: https://issues.apache.org/jira/browse/FLINK-35934
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35923) Add CompiledPlan annotations to BatchExecSort

2024-07-30 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-35923:
---
Description: 
In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

For this operator, the exchange operator must be annotated as well.

  was:
In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

Since this is the first operator, the exchange operator must be annotated as well.


> Add CompiledPlan annotations to BatchExecSort
> -
>
> Key: FLINK-35923
> URL: https://issues.apache.org/jira/browse/FLINK-35923
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 2.0.0
>
>
> In addition to the annotations, implement the BatchCompiledPlan test for this 
> operator.
> For this operator, the exchange operator must be annotated as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35923) Add CompiledPlan annotations to BatchExecSort

2024-07-29 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35923:
--

 Summary: Add CompiledPlan annotations to BatchExecSort
 Key: FLINK-35923
 URL: https://issues.apache.org/jira/browse/FLINK-35923
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes
 Fix For: 2.0.0


In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

Since this is the first operator, the exchange operator must be annotated as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35798) Implement a BatchRestoreTestBase

2024-07-29 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes resolved FLINK-35798.

Fix Version/s: 2.0.0
   Resolution: Fixed

> Implement a BatchRestoreTestBase
> 
>
> Key: FLINK-35798
> URL: https://issues.apache.org/jira/browse/FLINK-35798
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 2.0.0
>
>
> BatchRestoreTestBase has two goals:
> 1. Take TableTestPrograms and produce compiled plans for the latest version 
> of the operator being tested.
> 2. Load all compiled plans from disk and execute them against the first batch 
> of data described by the TableTestProgram.
> This will ensure that there are no errors in serialization or deserialization 
> for the operator under test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35798) Implement a BatchRestoreTestBase

2024-07-29 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-35798:
---
Description: 
BatchRestoreTestBase has two goals:
1. Take TableTestPrograms and produce compiled plans for the latest version of 
the operator being tested.
2. Load all compiled plans from disk and execute them against the first batch 
of data described by the TableTestProgram.

This will ensure that there are no errors in serialization or deserialization 
for the operator under test.

  was:
BatchCompiledPlanTestBase has two goals:
1. Take TableTestPrograms and produce compiled plans for the latest version of 
the operator being tested.
2. Load all compiled plans from disk and execute them against the first batch 
of data described by the TableTestProgram.

This will ensure that there are no errors in serialization or deserialization 
for the operator under test.


> Implement a BatchRestoreTestBase
> 
>
> Key: FLINK-35798
> URL: https://issues.apache.org/jira/browse/FLINK-35798
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> BatchRestoreTestBase has two goals:
> 1. Take TableTestPrograms and produce compiled plans for the latest version 
> of the operator being tested.
> 2. Load all compiled plans from disk and execute them against the first batch 
> of data described by the TableTestProgram.
> This will ensure that there are no errors in serialization or deserialization 
> for the operator under test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35798) Implement a BatchRestoreTestBase

2024-07-29 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-35798:
---
Summary: Implement a BatchRestoreTestBase  (was: Implement a 
BatchCompiledPlanTestBase)

> Implement a BatchRestoreTestBase
> 
>
> Key: FLINK-35798
> URL: https://issues.apache.org/jira/browse/FLINK-35798
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> BatchCompiledPlanTestBase has two goals:
> 1. Take TableTestPrograms and produce compiled plans for the latest version 
> of the operator being tested.
> 2. Load all compiled plans from disk and execute them against the first batch 
> of data described by the TableTestProgram.
> This will ensure that there are no errors in serialization or deserialization 
> for the operator under test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35798) Implement a BatchRestoreTestBase

2024-07-29 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17869414#comment-17869414
 ] 

Jim Hughes commented on FLINK-35798:


This was implemented in 
https://github.com/apache/flink/pull/25064/files#diff-0e95c7e094b3be8a14f71a109d952768b59fd17a9af827f0eb01a515b543aea5.

> Implement a BatchRestoreTestBase
> 
>
> Key: FLINK-35798
> URL: https://issues.apache.org/jira/browse/FLINK-35798
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> BatchRestoreTestBase has two goals:
> 1. Take TableTestPrograms and produce compiled plans for the latest version 
> of the operator being tested.
> 2. Load all compiled plans from disk and execute them against the first batch 
> of data described by the TableTestProgram.
> This will ensure that there are no errors in serialization or deserialization 
> for the operator under test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35799) Add CompiledPlan annotations to BatchExecCalc

2024-07-09 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-35799:
--

Assignee: Jim Hughes

> Add CompiledPlan annotations to BatchExecCalc
> -
>
> Key: FLINK-35799
> URL: https://issues.apache.org/jira/browse/FLINK-35799
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>
> In addition to the annotations, implement the BatchCompiledPlan test for this 
> operator.
> Since this is the first operator, sink and source operators must be annotated 
> as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35797) FLIP-456: CompiledPlan support for Batch Execution Mode

2024-07-09 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17864427#comment-17864427
 ] 

Jim Hughes commented on FLINK-35797:


In terms of strategy, I'll be creating the initial tickets as I go.  At some 
point, I may enter tickets to indicate the remaining work.

> FLIP-456: CompiledPlan support for Batch Execution Mode
> ---
>
> Key: FLINK-35797
> URL: https://issues.apache.org/jira/browse/FLINK-35797
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>  Labels: pull-request-available
>
> The CompiledPlan feature, introduced in FLIP-190: Support Version Upgrades 
> for Table API & SQL Programs, supports only streaming execution mode. Batch 
> ExecNodes were explicitly excluded from the JSON plan (aka CompiledPlan).
> This ticket will cover adding CompiledPlan support for the Batch ExecNodes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35798) Implement a BatchCompiledPlanTestBase

2024-07-09 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-35798:
--

Assignee: Jim Hughes

> Implement a BatchCompiledPlanTestBase
> -
>
> Key: FLINK-35798
> URL: https://issues.apache.org/jira/browse/FLINK-35798
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> BatchCompiledPlanTestBase has two goals:
> 1. Take TableTestPrograms and produce compiled plans for the latest version 
> of the operator being tested.
> 2. Load all compiled plans from disk and execute them against the first batch 
> of data described by the TableTestProgram.
> This will ensure that there are no errors in serialization or deserialization 
> for the operator under test.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35799) Add CompiledPlan annotations to BatchExecCalc

2024-07-09 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35799:
--

 Summary: Add CompiledPlan annotations to BatchExecCalc
 Key: FLINK-35799
 URL: https://issues.apache.org/jira/browse/FLINK-35799
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes


In addition to the annotations, implement the BatchCompiledPlan test for this 
operator.

Since this is the first operator, sink and source operators must be annotated 
as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35798) Implement a BatchCompiledPlanTestBase

2024-07-09 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35798:
--

 Summary: Implement a BatchCompiledPlanTestBase
 Key: FLINK-35798
 URL: https://issues.apache.org/jira/browse/FLINK-35798
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes


BatchCompiledPlanTestBase has two goals:
1. Take TableTestPrograms and produce compiled plans for the latest version of 
the operator being tested.
2. Load all compiled plans from disk and execute them against the first batch 
of data described by the TableTestProgram.

This will ensure that there are no errors in serialization or deserialization 
for the operator under test.
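The two goals above amount to a serialize-then-restore round trip. A minimal, self-contained sketch of that check in plain Java (no Flink types; the method and operator names are invented for illustration):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class RoundTripSketch {
    // Toy stand-in for compiling an operator's plan to JSON; Flink's real
    // CompiledPlan serializer is far richer. This only illustrates the
    // shape of the check the test base performs.
    static String compilePlan(String operator, int version) {
        return "{\"node\":\"" + operator + "\",\"version\":" + version + "}";
    }

    public static void main(String[] args) throws Exception {
        // Goal 1: produce a compiled plan for the latest operator version
        // and persist it to disk.
        String plan = compilePlan("batch-exec-sort", 1);
        Path file = Files.createTempFile("plan", ".json");
        Files.writeString(file, plan);

        // Goal 2: load the plan back from disk; in the real test base,
        // executing it against the program's first batch of data follows.
        String restored = Files.readString(file);
        if (!restored.equals(plan)) {
            throw new AssertionError("serialization round trip failed");
        }
        System.out.println("round trip ok");
    }
}
```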



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35797) FLIP-456: CompiledPlan support for Batch Execution Mode

2024-07-09 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35797:
--

 Summary: FLIP-456: CompiledPlan support for Batch Execution Mode
 Key: FLINK-35797
 URL: https://issues.apache.org/jira/browse/FLINK-35797
 Project: Flink
  Issue Type: New Feature
Reporter: Jim Hughes
Assignee: Jim Hughes


The CompiledPlan feature, introduced in FLIP-190: Support Version Upgrades for 
Table API & SQL Programs, supports only streaming execution mode. Batch 
ExecNodes were explicitly excluded from the JSON plan (aka CompiledPlan).

This ticket will cover adding CompiledPlan support for the Batch ExecNodes.
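For context, in streaming mode the feature is exercised via the COMPILE PLAN and EXECUTE PLAN statements; a rough sketch of the workflow this ticket extends to batch (paths and table names are illustrative):

```sql
-- Persist an optimized plan for a statement, then run it later.
COMPILE PLAN 'file:///tmp/insert_plan.json'
FOR INSERT INTO sink_table SELECT * FROM source_table;

EXECUTE PLAN 'file:///tmp/insert_plan.json';
```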



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33494) FLIP-376: Add DISTRIBUTED BY clause

2024-07-09 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17864195#comment-17864195
 ] 

Jim Hughes commented on FLINK-33494:


Hi [~Weijie Guo], thank you!  I thought we were updating a release notes 
markdown file in Git.  I like what you wrote; I made a few small copy-editing 
changes.

Thanks for helping with the release!

> FLIP-376: Add DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33494
> URL: https://issues.apache.org/jira/browse/FLINK-33494
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: image-2024-07-08-23-45-43-850.png, 
> image-2024-07-08-23-46-11-355.png
>
>
> Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
> Clustering.
> [FLIP-376|https://cwiki.apache.org/confluence/x/loxEE] proposes to introduce 
> the concept of Bucketing to Flink.
> It focuses solely on the syntax and necessary API changes to offer a native 
> way of declaring bucketing. Whether this is supported or not during runtime 
> should then be a connector characteristic - similar to partitioning. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33494) FLIP-376: Add DISTRIBUTED BY clause

2024-07-09 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-33494:
---
Release Note: 
Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
Clustering. Here, we introduce the concept of Bucketing to Flink.

This change focuses solely on the syntax and necessary API changes to offer a 
native way of declaring bucketing.  Similar to partitioning, runtime support is 
determined by a table's connector.

  was:
Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
Clustering. We introduce the concept of Bucketing to Flink.

This change focuses solely on the syntax and necessary API changes to offer a 
native way of declaring bucketing.  Similar to partitioning, runtime support is 
determined by the connector for a table.


> FLIP-376: Add DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33494
> URL: https://issues.apache.org/jira/browse/FLINK-33494
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: image-2024-07-08-23-45-43-850.png, 
> image-2024-07-08-23-46-11-355.png
>
>
> Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
> Clustering.
> [FLIP-376|https://cwiki.apache.org/confluence/x/loxEE] proposes to introduce 
> the concept of Bucketing to Flink.
> It focuses solely on the syntax and necessary API changes to offer a native 
> way of declaring bucketing. Whether this is supported or not during runtime 
> should then be a connector characteristic - similar to partitioning. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33494) FLIP-376: Add DISTRIBUTED BY clause

2024-07-09 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-33494:
---
Release Note: 
Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
Clustering. We introduce the concept of Bucketing to Flink.

This change focuses solely on the syntax and necessary API changes to offer a 
native way of declaring bucketing.  Similar to partitioning, runtime support is 
determined by the connector for a table.

  was:
Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
Clustering. We propose to introduce the concept of Bucketing to Flink.

It focuses solely on the syntax and necessary API changes to offer a native way 
of declaring bucketing. Whether this is supported or not during runtime should 
then be a connector characteristic - similar to partitioning. 


> FLIP-376: Add DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33494
> URL: https://issues.apache.org/jira/browse/FLINK-33494
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 1.20.0
>
> Attachments: image-2024-07-08-23-45-43-850.png, 
> image-2024-07-08-23-46-11-355.png
>
>
> Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
> Clustering.
> [FLIP-376|https://cwiki.apache.org/confluence/x/loxEE] proposes to introduce 
> the concept of Bucketing to Flink.
> It focuses solely on the syntax and necessary API changes to offer a native 
> way of declaring bucketing. Whether this is supported or not during runtime 
> should then be a connector characteristic - similar to partitioning. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33494) FLIP-376: Add DISTRIBUTED BY clause

2024-07-08 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863819#comment-17863819
 ] 

Jim Hughes commented on FLINK-33494:


Hi [~Weijie Guo], sorry to be slow to respond.

Where did you add to the release notes?  (I tried to find them, and I did not 
know where to look.)  Happy to take a look.

> FLIP-376: Add DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33494
> URL: https://issues.apache.org/jira/browse/FLINK-33494
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 1.20.0
>
>
> Many SQL vendors expose the concepts of Partitioning, Bucketing, and 
> Clustering.
> [FLIP-376|https://cwiki.apache.org/confluence/x/loxEE] proposes to introduce 
> the concept of Bucketing to Flink.
> It focuses solely on the syntax and necessary API changes to offer a native 
> way of declaring bucketing. Whether this is supported or not during runtime 
> should then be a connector characteristic - similar to partitioning. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33497) Update the Kafka connector to support DISTRIBUTED BY clause

2024-07-08 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33497:
--

Assignee: Arvid Heise  (was: Tzu-Li (Gordon) Tai)

> Update the Kafka connector to support DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33497
> URL: https://issues.apache.org/jira/browse/FLINK-33497
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Timo Walther
>Assignee: Arvid Heise
>Priority: Major
> Fix For: kafka-4.0.0
>
>
> The Kafka connector can be one of the first connectors supporting the 
> DISTRIBUTED BY clause. The clause can be translated into 'key.fields' and 
> 'properties.num.partitions' in the WITH clause.
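For reference, a bucketed table under FLIP-376 is declared roughly as follows (table name and connector options are illustrative):

```sql
CREATE TABLE orders (
  order_id BIGINT,
  customer STRING
) DISTRIBUTED BY HASH(order_id) INTO 4 BUCKETS
WITH (
  'connector' = 'kafka',
  'topic' = 'orders'
);
```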



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35608) Release Testing Instructions: Verify FLIP-376: Add DISTRIBUTED BY clause

2024-07-01 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17861214#comment-17861214
 ] 

Jim Hughes commented on FLINK-35608:


I did the implementation for FLIP-376 / FLINK-33494.  Since there are no open 
source connectors which implement this new feature, I do not believe there is 
any additional testing which we can do as part of the release.

(The work is covered by unit and integration tests in the codebase.)

> Release Testing Instructions: Verify FLIP-376: Add DISTRIBUTED BY clause
> 
>
> Key: FLINK-35608
> URL: https://issues.apache.org/jira/browse/FLINK-35608
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Rui Fan
>Assignee: Timo Walther
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.20.0
>
>
> Follow up the test for https://issues.apache.org/jira/browse/FLINK-33494



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33496) Expose DISTRIBUTED BY clause via parser

2024-06-28 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33496:
--

Assignee: Jim Hughes

> Expose DISTRIBUTED BY clause via parser
> ---
>
> Key: FLINK-33496
> URL: https://issues.apache.org/jira/browse/FLINK-33496
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Assignee: Jim Hughes
>Priority: Major
> Fix For: 1.20.0
>
>
> Expose DISTRIBUTED BY clause via parser and TableDescriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33497) Update the Kafka connector to support DISTRIBUTED BY clause

2024-06-28 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-33497:
---
Parent: (was: FLINK-33494)
Issue Type: New Feature  (was: Sub-task)

> Update the Kafka connector to support DISTRIBUTED BY clause
> ---
>
> Key: FLINK-33497
> URL: https://issues.apache.org/jira/browse/FLINK-33497
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka
>Reporter: Timo Walther
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
> Fix For: kafka-4.0.0
>
>
> The Kafka connector can be one of the first connectors supporting the 
> DISTRIBUTED BY clause. The clause can be translated into 'key.fields' and 
> 'properties.num.partitions' in the WITH clause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35725) Translate new DISTRIBUTED BY documentation into Chinese

2024-06-28 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35725:
--

 Summary: Translate new DISTRIBUTED BY documentation into Chinese
 Key: FLINK-35725
 URL: https://issues.apache.org/jira/browse/FLINK-35725
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes


In Flink 1.20, FLIP-376 adds DISTRIBUTED BY.  The documentation PR is here: 
https://github.com/apache/flink/pull/24929 and it should be translated into 
Chinese.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35585) Add documentation for distribution

2024-06-12 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-35585:
--

Assignee: Jim Hughes

> Add documentation for distribution
> --
>
> Key: FLINK-35585
> URL: https://issues.apache.org/jira/browse/FLINK-35585
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>
> Add documentation for ALTER TABLE, CREATE TABLE, and the sink abilities.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35585) Add documentation for distribution

2024-06-12 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-35585:
--

 Summary: Add documentation for distribution
 Key: FLINK-35585
 URL: https://issues.apache.org/jira/browse/FLINK-35585
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes


Add documentation for ALTER TABLE, CREATE TABLE, and the sink abilities.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34513) GroupAggregateRestoreTest.testRestore fails

2024-03-25 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830640#comment-17830640
 ] 

Jim Hughes commented on FLINK-34513:


From looking at this quickly, here are some notes:

First, there are at least two different test cases which are failing.  Both 
seem to be related to enabling mini-batch.

Second, I tried to fix this via `.testMaterializedData()`.  This approach 
(presently) fails since records are updated between the before- and 
after-restore phases.  We could extend the idea of `.testMaterializedData()` to 
check only the records specified in `.consumedAfterRestore()`.

> GroupAggregateRestoreTest.testRestore fails
> ---
>
> Key: FLINK-34513
> URL: https://issues.apache.org/jira/browse/FLINK-34513
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Bonnie Varghese
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57828=logs=26b84117-e436-5720-913e-3e280ce55cae=77cc7e77-39a0-5007-6d65-4137ac13a471=10881
> {code}
> Feb 24 01:12:01 01:12:01.384 [ERROR] Tests run: 10, Failures: 1, Errors: 0, 
> Skipped: 1, Time elapsed: 2.957 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest
> Feb 24 01:12:01 01:12:01.384 [ERROR] 
> org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram,
>  ExecNodeMetadata)[4] -- Time elapsed: 0.653 s <<< FAILURE!
> Feb 24 01:12:01 java.lang.AssertionError: 
> Feb 24 01:12:01 
> Feb 24 01:12:01 Expecting actual:
> Feb 24 01:12:01   ["+I[3, 1, 2, 8, 31, 10.0, 3]",
> Feb 24 01:12:01 "+I[2, 1, 4, 14, 42, 7.0, 6]",
> Feb 24 01:12:01 "+I[1, 1, 4, 12, 24, 6.0, 4]",
> Feb 24 01:12:01 "+U[2, 1, 4, 14, 57, 8.0, 7]",
> Feb 24 01:12:01 "+U[1, 1, 4, 12, 32, 6.0, 5]",
> Feb 24 01:12:01 "+I[7, 0, 1, 7, 7, 7.0, 1]",
> Feb 24 01:12:01 "+U[2, 1, 4, 14, 57, 7.0, 7]",
> Feb 24 01:12:01 "+U[1, 1, 4, 12, 32, 5.0, 5]",
> Feb 24 01:12:01 "+U[3, 1, 2, 8, 31, 9.0, 3]",
> Feb 24 01:12:01 "+U[7, 0, 1, 7, 7, 7.0, 2]"]
> Feb 24 01:12:01 to contain exactly in any order:
> Feb 24 01:12:01   ["+I[3, 1, 2, 8, 31, 10.0, 3]",
> Feb 24 01:12:01 "+I[2, 1, 4, 14, 42, 7.0, 6]",
> Feb 24 01:12:01 "+I[1, 1, 4, 12, 24, 6.0, 4]",
> Feb 24 01:12:01 "+U[2, 1, 4, 14, 57, 8.0, 7]",
> Feb 24 01:12:01 "+U[1, 1, 4, 12, 32, 6.0, 5]",
> Feb 24 01:12:01 "+U[3, 1, 2, 8, 31, 9.0, 3]",
> Feb 24 01:12:01 "+U[2, 1, 4, 14, 57, 7.0, 7]",
> Feb 24 01:12:01 "+I[7, 0, 1, 7, 7, 7.0, 2]",
> Feb 24 01:12:01 "+U[1, 1, 4, 12, 32, 5.0, 5]"]
> Feb 24 01:12:01 elements not found:
> Feb 24 01:12:01   ["+I[7, 0, 1, 7, 7, 7.0, 2]"]
> Feb 24 01:12:01 and elements not expected:
> Feb 24 01:12:01   ["+I[7, 0, 1, 7, 7, 7.0, 1]", "+U[7, 0, 1, 7, 7, 7.0, 2]"]
> Feb 24 01:12:01 
> Feb 24 01:12:01   at 
> org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:313)
> Feb 24 01:12:01   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33521) Implement restore tests for PythonCalc node

2024-03-25 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33521:
--

Assignee: (was: Jim Hughes)

> Implement restore tests for PythonCalc node
> ---
>
> Key: FLINK-33521
> URL: https://issues.apache.org/jira/browse/FLINK-33521
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29107) Upgrade spotless version to improve spotless check efficiency

2024-01-30 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17812488#comment-17812488
 ] 

Jim Hughes commented on FLINK-29107:


[~chesnay] Do you know if the IntelliJ plugin has improved in the last year?

If so, I'd suggest we update the Maven Spotless plugin to version 2.35.0. That 
is the last version that does not require upgrading `google-java-format`:

```
Execution default-cli of goal 
com.diffplug.spotless:spotless-maven-plugin:2.36.0:apply failed: You are 
running Spotless on JVM 11. This requires google-java-format of at least 1.8 
(you are using 1.7).
```

[~yunta] as a workaround, you can add `-Dspotless.version=2.35.0` to your 
Maven commands to override the version locally.
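For illustration, the override looks like the following (this assumes the 
build reads the plugin version from a `spotless.version` Maven property; the 
exact goals shown are just examples):

```
# Pin the Spotless plugin to 2.35.0 for this invocation only
mvn spotless:apply -Dspotless.version=2.35.0

# Same override during a regular build
mvn clean install -DskipTests -Dspotless.version=2.35.0
```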

> Upgrade spotless version to improve spotless check efficiency
> -
>
> Key: FLINK-29107
> URL: https://issues.apache.org/jira/browse/FLINK-29107
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.15.3
>Reporter: Yu Chen
>Assignee: Yu Chen
>Priority: Major
> Attachments: image-2022-08-25-22-10-54-453.png
>
>
> I noticed a [discussion|https://github.com/diffplug/spotless/issues/927] in 
> the spotless GitHub repository that we can improve the efficiency of spotless 
> checks significantly by upgrading the version of spotless and enabling the 
> `upToDateChecking`.
> I have made a simple test locally and the improvement of the spotless check 
> after the upgrade is shown in the figure.
> !image-2022-08-25-22-10-54-453.png!





[jira] [Created] (FLINK-34173) Implement CatalogTable.Builder

2024-01-19 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-34173:
--

 Summary: Implement CatalogTable.Builder
 Key: FLINK-34173
 URL: https://issues.apache.org/jira/browse/FLINK-34173
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes








[jira] [Assigned] (FLINK-34172) Add support for altering a distribution via ALTER TABLE

2024-01-19 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-34172:
--

Assignee: Jim Hughes

> Add support for altering a distribution via ALTER TABLE 
> 
>
> Key: FLINK-34172
> URL: https://issues.apache.org/jira/browse/FLINK-34172
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>






[jira] [Created] (FLINK-34172) Add support for altering a distribution via ALTER TABLE

2024-01-19 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-34172:
--

 Summary: Add support for altering a distribution via ALTER TABLE 
 Key: FLINK-34172
 URL: https://issues.apache.org/jira/browse/FLINK-34172
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes








[jira] [Updated] (FLINK-34067) Fix javacc warnings in flink-sql-parser

2024-01-11 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-34067:
---
Description: 
While extending the Flink SQL parser, I noticed these two warnings:
```
[INFO] — javacc:2.4:javacc (javacc) @ flink-sql-parser ---
Java Compiler Compiler Version 4.0 (Parser Generator)
(type "javacc" with no arguments for help)
Reading from file 
.../flink-table/flink-sql-parser/target/generated-sources/javacc/Parser.jj . . .
Note: UNICODE_INPUT option is specified. Please make sure you create the 
parser/lexer using a Reader with the correct character encoding.
Warning: Choice conflict involving two expansions at
         line 2043, column 13 and line 2052, column 9 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
Warning: Choice conflict involving two expansions at
         line 2097, column 13 and line 2105, column 8 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
```

As the warnings suggest, adding `LOOKAHEAD(2)` in a few places addresses them.

  was:
While extending the Flink SQL parser, I noticed these two warnings:

```
[INFO] --- javacc:2.4:javacc (javacc) @ flink-sql-parser ---
Java Compiler Compiler Version 4.0 (Parser Generator)
(type "javacc" with no arguments for help)
Reading from file 
.../flink-table/flink-sql-parser/target/generated-sources/javacc/Parser.jj . . .
Note: UNICODE_INPUT option is specified. Please make sure you create the 
parser/lexer using a Reader with the correct character encoding.
Warning: Choice conflict involving two expansions at
         line 2043, column 13 and line 2052, column 9 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
Warning: Choice conflict involving two expansions at
         line 2097, column 13 and line 2105, column 8 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
```

As the warnings suggest, adding `LOOKAHEAD(2)` in a few places addresses them.


> Fix javacc warnings in flink-sql-parser
> ---
>
> Key: FLINK-34067
> URL: https://issues.apache.org/jira/browse/FLINK-34067
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Minor
>
> While extending the Flink SQL parser, I noticed these two warnings:
> ```
> [INFO] — javacc:2.4:javacc (javacc) @ flink-sql-parser ---
> Java Compiler Compiler Version 4.0 (Parser Generator)
> (type "javacc" with no arguments for help)
> Reading from file 
> .../flink-table/flink-sql-parser/target/generated-sources/javacc/Parser.jj . . .
> Note: UNICODE_INPUT option is specified. Please make sure you create the 
> parser/lexer using a Reader with the correct character encoding.
> Warning: Choice conflict involving two expansions at
>          line 2043, column 13 and line 2052, column 9 respectively.
>          A common prefix is: "IF"

[jira] [Created] (FLINK-34067) Fix javacc warnings in flink-sql-parser

2024-01-11 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-34067:
--

 Summary: Fix javacc warnings in flink-sql-parser
 Key: FLINK-34067
 URL: https://issues.apache.org/jira/browse/FLINK-34067
 Project: Flink
  Issue Type: Improvement
Reporter: Jim Hughes
Assignee: Jim Hughes


While extending the Flink SQL parser, I noticed these two warnings:

```
[INFO] --- javacc:2.4:javacc (javacc) @ flink-sql-parser ---
Java Compiler Compiler Version 4.0 (Parser Generator)
(type "javacc" with no arguments for help)
Reading from file 
.../flink-table/flink-sql-parser/target/generated-sources/javacc/Parser.jj . . .
Note: UNICODE_INPUT option is specified. Please make sure you create the 
parser/lexer using a Reader with the correct character encoding.
Warning: Choice conflict involving two expansions at
         line 2043, column 13 and line 2052, column 9 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
Warning: Choice conflict involving two expansions at
         line 2097, column 13 and line 2105, column 8 respectively.
         A common prefix is: "IF"
         Consider using a lookahead of 2 for earlier expansion.
```

As the warnings suggest, adding `LOOKAHEAD(2)` in a few places addresses them.
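As a sketch of the fix (the production and token names below are hypothetical 
illustrations, not the actual Flink grammar), a `LOOKAHEAD(2)` hint resolves 
the choice conflict:

```
// Hypothetical javacc excerpt: both alternatives start with the token <IF>,
// so the default one-token lookahead cannot decide between them. The
// LOOKAHEAD(2) hint makes the generated parser peek two tokens ahead
// before committing to the first alternative, silencing the warning.
void DropStatement() :
{}
{
    LOOKAHEAD(2)
    <IF> <EXISTS> DropTarget()
|
    <IF> Condition()
}
```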





[jira] [Updated] (FLINK-34067) Fix javacc warnings in flink-sql-parser

2024-01-11 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-34067:
---
Affects Version/s: 1.19.0

> Fix javacc warnings in flink-sql-parser
> ---
>
> Key: FLINK-34067
> URL: https://issues.apache.org/jira/browse/FLINK-34067
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Minor
>
> While extending the Flink SQL parser, I noticed these two warnings:
> ```
> [INFO] --- javacc:2.4:javacc (javacc) @ flink-sql-parser ---
> Java Compiler Compiler Version 4.0 (Parser Generator)
> (type "javacc" with no arguments for help)
> Reading from file 
> .../flink-table/flink-sql-parser/target/generated-sources/javacc/Parser.jj . . .
> Note: UNICODE_INPUT option is specified. Please make sure you create the 
> parser/lexer using a Reader with the correct character encoding.
> Warning: Choice conflict involving two expansions at
>          line 2043, column 13 and line 2052, column 9 respectively.
>          A common prefix is: "IF"
>          Consider using a lookahead of 2 for earlier expansion.
> Warning: Choice conflict involving two expansions at
>          line 2097, column 13 and line 2105, column 8 respectively.
>          A common prefix is: "IF"
>          Consider using a lookahead of 2 for earlier expansion.
> ```
> As the warnings suggest, adding `LOOKAHEAD(2)` in a few places addresses 
> them.





[jira] [Comment Edited] (FLINK-21224) Remove BatchExecExchange and StreamExecExchange, and replace their functionality with ExecEdge

2023-12-20 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799190#comment-17799190
 ] 

Jim Hughes edited comment on FLINK-21224 at 12/20/23 10:08 PM:
---

As a note, if this node is used, it would need a compiled plan representation.


was (Author: JIRAUSER284726):
As a note, if this node is used, it would need a compilled plan representation.

> Remove BatchExecExchange and StreamExecExchange, and replace their 
> functionality with ExecEdge
> --
>
> Key: FLINK-21224
> URL: https://issues.apache.org/jira/browse/FLINK-21224
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
>  Labels: auto-unassigned
>






[jira] [Commented] (FLINK-21224) Remove BatchExecExchange and StreamExecExchange, and replace their functionality with ExecEdge

2023-12-20 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799190#comment-17799190
 ] 

Jim Hughes commented on FLINK-21224:


As a note, if this node is used, it would need a compiled plan representation.

> Remove BatchExecExchange and StreamExecExchange, and replace their 
> functionality with ExecEdge
> --
>
> Key: FLINK-21224
> URL: https://issues.apache.org/jira/browse/FLINK-21224
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Major
>  Labels: auto-unassigned
>






[jira] [Closed] (FLINK-33918) Fix AsyncSinkWriterThrottlingTest test failure

2023-12-20 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes closed FLINK-33918.
--
Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/FLINK-31472

> Fix AsyncSinkWriterThrottlingTest test failure
> --
>
> Key: FLINK-33918
> URL: https://issues.apache.org/jira/browse/FLINK-33918
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Priority: Major
>  Labels: test-stability
>
> From 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]
> {code:java}
> Dec 20 03:09:03 03:09:03.411 [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>  – Time elapsed: 0.879 s <<< ERROR! 
> Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. 
> This method must be called from inside the mailbox thread! 
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>  
> Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) {code}





[jira] [Commented] (FLINK-33918) Fix AsyncSinkWriterThrottlingTest test failure

2023-12-20 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799149#comment-17799149
 ] 

Jim Hughes commented on FLINK-33918:


Sorry for the duplicate!

> Fix AsyncSinkWriterThrottlingTest test failure
> --
>
> Key: FLINK-33918
> URL: https://issues.apache.org/jira/browse/FLINK-33918
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Priority: Major
>  Labels: test-stability
>
> From 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]
> {code:java}
> Dec 20 03:09:03 03:09:03.411 [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>  – Time elapsed: 0.879 s <<< ERROR! 
> Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. 
> This method must be called from inside the mailbox thread! 
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>  
> Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) {code}





[jira] [Updated] (FLINK-33918) Fix AsyncSinkWriterThrottlingTest test failure

2023-12-20 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-33918:
---
Description: 
From 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]
{code:java}
Dec 20 03:09:03 03:09:03.411 [ERROR] 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
 – Time elapsed: 0.879 s <<< ERROR! 
Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. This 
method must be called from inside the mailbox thread! 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
 
Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) {code}

  was:
From 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]

 

```
Dec 20 03:09:03 03:09:03.411 [ERROR] 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
 -- Time elapsed: 0.879 s <<< ERROR! 
Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. This 
method must be called from inside the mailbox thread! 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
 
Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) 
```


> Fix AsyncSinkWriterThrottlingTest test failure
> --
>
> Key: FLINK-33918
> URL: https://issues.apache.org/jira/browse/FLINK-33918
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Priority: Major
>  Labels: test-stability
>
> From 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]
> {code:java}
> Dec 20 03:09:03 03:09:03.411 [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>  – Time elapsed: 0.879 s <<< ERROR! 
> Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. 
> This method must be called from inside the mailbox thread! 
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>  
> Dec 20 03:09:03 at 
> 

[jira] [Commented] (FLINK-33918) Fix AsyncSinkWriterThrottlingTest test failure

2023-12-20 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799139#comment-17799139
 ] 

Jim Hughes commented on FLINK-33918:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55701=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]

Same test; slightly different stack trace:
{code:java}
Dec 20 03:28:41 java.lang.IllegalStateException: Illegal thread detected. This 
method must be called from inside the mailbox thread!
Dec 20 03:28:41 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
Dec 20 03:28:41 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
Dec 20 03:28:41 at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
Dec 20 03:28:41 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
Dec 20 03:28:41 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
Dec 20 03:28:41 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
Dec 20 03:28:41 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
Dec 20 03:28:41 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
Dec 20 03:28:41 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
 {code}

> Fix AsyncSinkWriterThrottlingTest test failure
> --
>
> Key: FLINK-33918
> URL: https://issues.apache.org/jira/browse/FLINK-33918
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.19.0
>Reporter: Jim Hughes
>Priority: Major
>  Labels: test-stability
>
> From 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]
>  
> ```
> Dec 20 03:09:03 03:09:03.411 [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>  -- Time elapsed: 0.879 s <<< ERROR! 
> Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. 
> This method must be called from inside the mailbox thread! 
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>  
> Dec 20 03:09:03 at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>  
> Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) 
> ```





[jira] [Created] (FLINK-33918) Fix AsyncSinkWriterThrottlingTest test failure

2023-12-20 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33918:
--

 Summary: Fix AsyncSinkWriterThrottlingTest test failure
 Key: FLINK-33918
 URL: https://issues.apache.org/jira/browse/FLINK-33918
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.19.0
Reporter: Jim Hughes


From 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55700=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd]

 

```
Dec 20 03:09:03 03:09:03.411 [ERROR] 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
 -- Time elapsed: 0.879 s <<< ERROR! 
Dec 20 03:09:03 java.lang.IllegalStateException: Illegal thread detected. This 
method must be called from inside the mailbox thread! 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
 
Dec 20 03:09:03 at 
org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
 
Dec 20 03:09:03 at 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
 
Dec 20 03:09:03 at java.lang.reflect.Method.invoke(Method.java:498) 
```





[jira] [Commented] (FLINK-33877) CollectSinkFunctionTest.testConfiguredPortIsUsed fails due to BindException

2023-12-19 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798614#comment-17798614
 ] 

Jim Hughes commented on FLINK-33877:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55652=logs=675bf62c-8558-587e-2555-dcad13acefb5=5878eed3-cc1e-5b12-1ed0-9e7139ce0992

> CollectSinkFunctionTest.testConfiguredPortIsUsed fails due to BindException
> ---
>
> Key: FLINK-33877
> URL: https://issues.apache.org/jira/browse/FLINK-33877
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.19.0
>Reporter: Jiabao Sun
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55646=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=9482
> {noformat}
> Dec 18 17:49:57 17:49:57.241 [ERROR] 
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunctionTest.testConfiguredPortIsUsed
>  -- Time elapsed: 0.021 s <<< ERROR!
> Dec 18 17:49:57 java.net.BindException: Address already in use (Bind failed)
> Dec 18 17:49:57   at java.net.PlainSocketImpl.socketBind(Native Method)
> Dec 18 17:49:57   at 
> java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
> Dec 18 17:49:57   at java.net.ServerSocket.bind(ServerSocket.java:390)
> Dec 18 17:49:57   at java.net.ServerSocket.(ServerSocket.java:252)
> Dec 18 17:49:57   at 
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction$ServerThread.(CollectSinkFunction.java:375)
> Dec 18 17:49:57   at 
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction$ServerThread.(CollectSinkFunction.java:362)
> Dec 18 17:49:57   at 
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunction.open(CollectSinkFunction.java:252)
> Dec 18 17:49:57   at 
> org.apache.flink.streaming.api.operators.collect.utils.CollectSinkFunctionTestWrapper.openFunction(CollectSinkFunctionTestWrapper.java:103)
> Dec 18 17:49:57   at 
> org.apache.flink.streaming.api.operators.collect.CollectSinkFunctionTest.testConfiguredPortIsUsed(CollectSinkFunctionTest.java:138)
> Dec 18 17:49:57   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 18 17:49:57   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Dec 18 17:49:57   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Dec 18 17:49:57   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Dec 18 17:49:57   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Dec 18 17:49:57   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> {noformat}
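The failure mode in the trace above can be reproduced with a minimal standalone sketch (the class name is illustrative, not Flink code): bind an ephemeral port, then attempt to bind the same fixed port again while the first socket still holds it.

```java
import java.net.BindException;
import java.net.ServerSocket;

public class BindSketch {
    public static void main(String[] args) throws Exception {
        // Port 0 asks the OS for a free ephemeral port, which avoids collisions
        // between concurrently running tests in the first place.
        try (ServerSocket first = new ServerSocket(0)) {
            int port = first.getLocalPort();
            try {
                // Binding the same fixed port a second time fails, just like the
                // configured port in CollectSinkFunctionTest when it is already taken.
                new ServerSocket(port).close();
                System.out.println("unexpected: bind succeeded");
            } catch (BindException e) {
                System.out.println("BindException as expected: " + e.getMessage());
            }
        }
    }
}
```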



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2023-12-18 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798369#comment-17798369
 ] 

Jim Hughes commented on FLINK-27756:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55640&view=logs&j=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819&t=2dd510a3-5041-5201-6dc3-54d310f68906

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]





[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2023-12-18 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798368#comment-17798368
 ] 

Jim Hughes commented on FLINK-27756:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55636&view=logs&j=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819&t=2dd510a3-5041-5201-6dc3-54d310f68906

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009&view=logs&j=aa18c3f6-13b8-5f58-86bb-c1cffb239496&t=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]





[jira] [Commented] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-14 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17796949#comment-17796949
 ] 

Jim Hughes commented on FLINK-33756:


Hi [~jeyhunkarimov], I am not actively working on it; I'll assign it to you for 
now.

I think there are two (likely related) things going on:  First, some watermark 
is getting miscomputed, and second, the hashing happening in the exchange step 
is allowing things to happen in either order.

From the initial time that I looked into this, I also ran across 
`TimeWindow.getWindowStartWithOffset`.  I noticed that this method is being 
called with an offset of 0L in `TimeWindowUtil.getNextTriggerWatermark`.  I 
cannot be 100% sure that's the problem, but that's the next place I'd be 
checking if I were to continue looking!
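For reference, the window-start arithmetic under discussion can be sketched in a standalone class (this mirrors the formula used by `TimeWindow.getWindowStartWithOffset`, but is not Flink's actual class; the sample timestamps are made up):

```java
public class WindowStartSketch {
    // Start of the window containing `timestamp` for a given size and offset,
    // following the well-known modular-arithmetic formula.
    static long getWindowStartWithOffset(long timestamp, long offset, long windowSize) {
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    public static void main(String[] args) {
        // With a 5ms window and no offset, timestamp 7 falls in [5, 10): start 5.
        System.out.println(getWindowStartWithOffset(7L, 0L, 5L)); // 5
        // With an offset of 1ms the windows shift to [1, 6), [6, 11), ...,
        // so the same timestamp lands in a window starting at 6.
        System.out.println(getWindowStartWithOffset(7L, 1L, 5L)); // 6
        // Hard-coding an offset of 0L for a window defined with an offset
        // would therefore compute the wrong window start.
    }
}
```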

 

> Missing record with CUMULATE/HOP windows using an optimization
> --
>
> Key: FLINK-33756
> URL: https://issues.apache.org/jira/browse/FLINK-33756
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>
> I have seen an optimization cause a window to fail to emit a record.
> With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to 
> true, 
> the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or 
> CUMULATE window with an offset, a record can be sent which causes one of the 
> multiple active windows to fail to emit a record.
> The linked code 
> (https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
>  modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
>  
> The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
> `testDistinctSplitEnabled` tests the above configurations and shows that one 
> record is missing from the output.  





[jira] [Commented] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-12-14 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17796882#comment-17796882
 ] 

Jim Hughes commented on FLINK-33666:


[~twalthr] Yes, the issue has been resolved.

The issue was a change in expectations around PK.  From offline conversations, 
you assured me that the change is ok and will not impact folks using compiled 
plans.  I think we are good.

An example stack trace is:

```
Dec 07 13:49:11 13:49:11.037 [ERROR] 
org.apache.flink.table.planner.plan.nodes.exec.stream.GroupAggregateRestoreTest.testRestore(TableTestProgram,
 ExecNodeMetadata)[1] Time elapsed: 0.17 s <<< ERROR! 
Dec 07 13:49:11 org.apache.flink.table.api.TableException: Cannot load Plan 
from file 
'/__w/1/s/flink-table/flink-table-planner/src/test/resources/restore-tests/stream-exec-group-aggregate_1/group-aggregate-simple/plan/group-aggregate-simple.json'.
 
Dec 07 13:49:11 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan(TableEnvironmentImpl.java:760)
 
Dec 07 13:49:11 at 
org.apache.flink.table.planner.plan.nodes.exec.testutils.RestoreTestBase.testRestore(RestoreTestBase.java:279)


Dec 07 13:49:11 Caused by: 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
 The schema of table 'default_catalog.default_database.sink_t' from the 
persisted plan does not match the schema loaded from the catalog: '( 
Dec 07 13:49:11 `b` BIGINT NOT NULL, 
Dec 07 13:49:11 `cnt` BIGINT, 
Dec 07 13:49:11 `avg_a` DOUBLE, 
Dec 07 13:49:11 `min_c` STRING, 
Dec 07 13:49:11 CONSTRAINT `PK_129` PRIMARY KEY (`b`) NOT ENFORCED 
Dec 07 13:49:11 )' != '( 
Dec 07 13:49:11 `b` BIGINT NOT NULL, 
Dec 07 13:49:11 `cnt` BIGINT, 
Dec 07 13:49:11 `avg_a` DOUBLE, 
Dec 07 13:49:11 `min_c` STRING, 
Dec 07 13:49:11 CONSTRAINT `PK_b` PRIMARY KEY (`b`) NOT ENFORCED 
Dec 07 13:49:11 )'. Make sure the table schema in the catalog is still 
identical. (through reference chain: 
org.apache.flink.table.planner.plan.nodes.exec.serde.JsonPlanGraph["nodes"]->java.util.ArrayList[5]->org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink["dynamicTableSink"]->org.apache.flink.table.planner.plan.nodes.exec.spec.DynamicTableSinkSpec["table"])
 
...
```
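To illustrate the two naming schemes behind the `PK_129` vs `PK_b` mismatch in the trace, a toy sketch (these helper methods are hypothetical stand-ins, not the actual Flink code):

```java
import java.util.List;

public class ConstraintNameSketch {
    // Schema-style: derive the constraint name from the column names,
    // which is stable across JVMs and runs.
    static String schemaStyleName(List<String> columns) {
        return "PK_" + String.join("_", columns); // e.g. PK_b
    }

    // MergeTableLikeUtil-style (as the issue describes it): derive the name
    // from a hash, yielding an opaque suffix like PK_129.
    static String hashStyleName(int hash) {
        return "PK_" + Math.abs(hash);
    }

    public static void main(String[] args) {
        System.out.println(schemaStyleName(List.of("b"))); // PK_b
        System.out.println(hashStyleName(129));            // PK_129
        // A compiled plan persisted with one scheme fails schema validation
        // against a catalog that regenerates the name with the other scheme.
    }
}
```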

> MergeTableLikeUtil uses different constraint name than Schema
> -
>
> Key: FLINK-33666
> URL: https://issues.apache.org/jira/browse/FLINK-33666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Timo Walther
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> {{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
> {{Schema}}. 
> {{Schema}} includes the column names.
> {{MergeTableLikeUtil}} uses a hashCode which means it might depend on JVM 
> specifics.
> For consistency we should use the same algorithm. I propose to use 
> {{Schema}}'s logic.





[jira] [Commented] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-13 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17796533#comment-17796533
 ] 

Jim Hughes commented on FLINK-33756:


Hi [~jeyhunkarimov], nice analysis!  I did see that there were two pairs of 
Local-Global window aggregates when I very briefly looked initially; I totally 
agree that has to be part of the issue.  

Out of curiosity, how did you see the value coming out of the various windows?  
Was it println debugging or something else?

I like your explanation about the order of `processWatermark` and 
`processElement`; that explains the apparent flakiness.  

It looks like the different orderings are coming from the exchange/hashing 
which is happening between the windows.  Perhaps thinking about how timestamps 
interact with the exchange operator will help us sort this out.  (Along with 
your note that we are "losing" the original timestamp in some sense.)

> Missing record with CUMULATE/HOP windows using an optimization
> --
>
> Key: FLINK-33756
> URL: https://issues.apache.org/jira/browse/FLINK-33756
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>
> I have seen an optimization cause a window to fail to emit a record.
> With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to 
> true, 
> the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or 
> CUMULATE window with an offset, a record can be sent which causes one of the 
> multiple active windows to fail to emit a record.
> The linked code 
> (https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
>  modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
>  
> The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
> `testDistinctSplitEnabled` tests the above configurations and shows that one 
> record is missing from the output.  





[jira] [Commented] (FLINK-26644) python StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies failed on azure

2023-12-13 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17796276#comment-17796276
 ] 

Jim Hughes commented on FLINK-26644:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55469&view=logs&j=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3&t=85189c57-d8a0-5c9c-b61d-fc05cfac62cf]
 

> python 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies 
> failed on azure
> ---
>
> Key: FLINK-26644
> URL: https://issues.apache.org/jira/browse/FLINK-26644
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.4, 1.15.0, 1.16.0, 1.19.0
>Reporter: Yun Gao
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code:java}
> 2022-03-14T18:50:24.6842853Z Mar 14 18:50:24 
> === FAILURES 
> ===
> 2022-03-14T18:50:24.6844089Z Mar 14 18:50:24 _ 
> StreamExecutionEnvironmentTests.test_generate_stream_graph_with_dependencies _
> 2022-03-14T18:50:24.6844846Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6846063Z Mar 14 18:50:24 self = 
> <StreamExecutionEnvironmentTests testMethod=test_generate_stream_graph_with_dependencies>
> 2022-03-14T18:50:24.6847104Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6847766Z Mar 14 18:50:24 def 
> test_generate_stream_graph_with_dependencies(self):
> 2022-03-14T18:50:24.6848677Z Mar 14 18:50:24 python_file_dir = 
> os.path.join(self.tempdir, "python_file_dir_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6849833Z Mar 14 18:50:24 os.mkdir(python_file_dir)
> 2022-03-14T18:50:24.6850729Z Mar 14 18:50:24 python_file_path = 
> os.path.join(python_file_dir, "test_stream_dependency_manage_lib.py")
> 2022-03-14T18:50:24.6852679Z Mar 14 18:50:24 with 
> open(python_file_path, 'w') as f:
> 2022-03-14T18:50:24.6853646Z Mar 14 18:50:24 f.write("def 
> add_two(a):\nreturn a + 2")
> 2022-03-14T18:50:24.6854394Z Mar 14 18:50:24 env = self.env
> 2022-03-14T18:50:24.6855019Z Mar 14 18:50:24 
> env.add_python_file(python_file_path)
> 2022-03-14T18:50:24.6855519Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6856254Z Mar 14 18:50:24 def plus_two_map(value):
> 2022-03-14T18:50:24.6857045Z Mar 14 18:50:24 from 
> test_stream_dependency_manage_lib import add_two
> 2022-03-14T18:50:24.6857865Z Mar 14 18:50:24 return value[0], 
> add_two(value[1])
> 2022-03-14T18:50:24.6858466Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6858924Z Mar 14 18:50:24 def add_from_file(i):
> 2022-03-14T18:50:24.6859806Z Mar 14 18:50:24 with 
> open("data/data.txt", 'r') as f:
> 2022-03-14T18:50:24.6860266Z Mar 14 18:50:24 return i[0], 
> i[1] + int(f.read())
> 2022-03-14T18:50:24.6860879Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6862022Z Mar 14 18:50:24 from_collection_source = 
> env.from_collection([('a', 0), ('b', 0), ('c', 1), ('d', 1),
> 2022-03-14T18:50:24.6863259Z Mar 14 18:50:24  
>  ('e', 2)],
> 2022-03-14T18:50:24.6864057Z Mar 14 18:50:24  
> type_info=Types.ROW([Types.STRING(),
> 2022-03-14T18:50:24.6864651Z Mar 14 18:50:24  
>  Types.INT()]))
> 2022-03-14T18:50:24.6865150Z Mar 14 18:50:24 
> from_collection_source.name("From Collection")
> 2022-03-14T18:50:24.6866212Z Mar 14 18:50:24 keyed_stream = 
> from_collection_source.key_by(lambda x: x[1], key_type=Types.INT())
> 2022-03-14T18:50:24.6867083Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6867793Z Mar 14 18:50:24 plus_two_map_stream = 
> keyed_stream.map(plus_two_map).name("Plus Two Map").set_parallelism(3)
> 2022-03-14T18:50:24.6868620Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6869412Z Mar 14 18:50:24 add_from_file_map = 
> plus_two_map_stream.map(add_from_file).name("Add From File Map")
> 2022-03-14T18:50:24.6870239Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6870883Z Mar 14 18:50:24 test_stream_sink = 
> add_from_file_map.add_sink(self.test_sink).name("Test Sink")
> 2022-03-14T18:50:24.6871803Z Mar 14 18:50:24 
> test_stream_sink.set_parallelism(4)
> 2022-03-14T18:50:24.6872291Z Mar 14 18:50:24 
> 2022-03-14T18:50:24.6872756Z Mar 14 18:50:24 archive_dir_path = 
> os.path.join(self.tempdir, "archive_" + str(uuid.uuid4()))
> 2022-03-14T18:50:24.6873557Z Mar 14 18:50:24 
> os.mkdir(archive_dir_path)
> 2022-03-14T18:50:24.6874817Z Mar 14 18:50:24 with 
> open(os.path.join(archive_dir_path, "data.txt"), 'w') as f:
> 2022-03-14T18:50:24.6875414Z Mar 14 18:50:24 f.write("3")
> 

[jira] [Assigned] (FLINK-33805) Implement restore tests for OverAggregate node

2023-12-12 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33805:
--

Assignee: Jim Hughes

> Implement restore tests for OverAggregate node
> --
>
> Key: FLINK-33805
> URL: https://issues.apache.org/jira/browse/FLINK-33805
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>






[jira] [Created] (FLINK-33805) Implement restore tests for OverAggregate node

2023-12-12 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33805:
--

 Summary: Implement restore tests for OverAggregate node
 Key: FLINK-33805
 URL: https://issues.apache.org/jira/browse/FLINK-33805
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes








[jira] [Commented] (FLINK-33706) Build_wheels_on_macos fails on AZP

2023-12-08 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794729#comment-17794729
 ] 

Jim Hughes commented on FLINK-33706:


Another instance here: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55339&view=logs&j=f73b5736-8355-5390-ec71-4dfdec0ce6c5

> Build_wheels_on_macos fails on AZP
> --
>
> Key: FLINK-33706
> URL: https://issues.apache.org/jira/browse/FLINK-33706
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Test Infrastructure
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55044&view=logs&j=f73b5736-8355-5390-ec71-4dfdec0ce6c5&t=90f7230e-bf5a-531b-8566-ad48d3e03bbb&l=427
> fails on AZP as 
> {noformat}
>note: This error originates from a subprocess, and is likely not a 
> problem with pip.
> ERROR: Failed cleaning build dir for crcmod
> Building wheel for dill (setup.py): started
> Building wheel for dill (setup.py): finished with status 'done'
> Created wheel for dill: filename=dill-0.0.0-py3-none-any.whl size=899 
> sha256=39d0b4b66ce11f42313482f4ad825029e861fd6dab87a743a95d75a44a1fedd6
> Stored in directory: 
> /Users/runner/Library/Caches/pip/wheels/07/35/78/e9004fa30578734db7f10e7a211605f3f0778d2bdde38a239d
> Building wheel for hdfs (setup.py): started
> Building wheel for hdfs (setup.py): finished with status 'done'
> Created wheel for hdfs: filename=UNKNOWN-0.0.0-py3-none-any.whl 
> size=928 
> sha256=cb3fd7d8c71b52bbc27cfb75842f9d4d9c6f3b847f3f4fe50323c945a0e38ccc
> Stored in directory: 
> /Users/runner/Library/Caches/pip/wheels/68/dd/29/c1a590238f9ebbe4f7ee9b3583f5185d0b9577e23f05c990eb
> WARNING: Built wheel for hdfs is invalid: Wheel has unexpected file 
> name: expected 'hdfs', got 'UNKNOWN'
> Building wheel for pymongo (pyproject.toml): started
> Building wheel for pymongo (pyproject.toml): finished with status 
> 'done'
> Created wheel for pymongo: 
> filename=pymongo-4.6.1-cp38-cp38-macosx_10_9_x86_64.whl size=478012 
> sha256=5dfc6fdb6a8a399f8f9da44e28bae19be244b15c8000cd3b2d7d6ff513cc6277
> Stored in directory: 
> /Users/runner/Library/Caches/pip/wheels/54/d8/0e/2a61e90bb3872d903b15eb3c94cb70f438fb8792a28fee7bb1
> Building wheel for docopt (setup.py): started
> Building wheel for docopt (setup.py): finished with status 'done'
> Created wheel for docopt: filename=UNKNOWN-0.0.0-py3-none-any.whl 
> size=920 
> sha256=612c56cd1a6344b8def6c4d3c3c1c8bb10e1f2b0d978fee0fc8b9281026e8288
> Stored in directory: 
> /Users/runner/Library/Caches/pip/wheels/56/ea/58/ead137b087d9e326852a851351d1debf4ada529b6ac0ec4e8c
> WARNING: Built wheel for docopt is invalid: Wheel has unexpected file 
> name: expected 'docopt', got 'UNKNOWN'
>   Successfully built dill pymongo
>   Failed to build fastavro crcmod hdfs docopt
>   ERROR: Could not build wheels for fastavro, which is required to 
> install pyproject.toml-based projects
> {noformat}





[jira] [Created] (FLINK-33777) ParquetTimestampITCase>FsStreamingSinkITCaseBase failing in CI

2023-12-07 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33777:
--

 Summary: ParquetTimestampITCase>FsStreamingSinkITCaseBase failing 
in CI
 Key: FLINK-33777
 URL: https://issues.apache.org/jira/browse/FLINK-33777
 Project: Flink
  Issue Type: Bug
Reporter: Jim Hughes


From this CI run: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55334&view=logs&j=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819&t=2dd510a3-5041-5201-6dc3-54d310f68906]

```
Dec 07 19:57:30 19:57:30.026 [ERROR] Errors: 
Dec 07 19:57:30 19:57:30.026 [ERROR] 
ParquetTimestampITCase>FsStreamingSinkITCaseBase.testNonPart:84->FsStreamingSinkITCaseBase.testPartitionCustomFormatDate:151->FsStreamingSinkITCaseBase.test:186
 » Validation 
Dec 07 19:57:30 19:57:30.026 [ERROR] 
ParquetTimestampITCase>FsStreamingSinkITCaseBase.testPart:89->FsStreamingSinkITCaseBase.testPartitionCustomFormatDate:151->FsStreamingSinkITCaseBase.test:186
 » Validation 
Dec 07 19:57:30 19:57:30.026 [ERROR] 
ParquetTimestampITCase>FsStreamingSinkITCaseBase.testPartitionWithBasicDate:126->FsStreamingSinkITCaseBase.test:186
 » Validation 
```
The errors each appear somewhat similar:
```

Dec 07 19:54:43 19:54:43.934 [ERROR] 
org.apache.flink.formats.parquet.ParquetTimestampITCase.testPartitionWithBasicDate
 Time elapsed: 1.822 s <<< ERROR! 
Dec 07 19:54:43 org.apache.flink.table.api.ValidationException: Unable to find 
a field named 'f0' in the physical data type derived from the given type 
information for schema declaration. Make sure that the type information is not 
a generic raw type. Currently available fields are: [a, b, c, d, e] 
Dec 07 19:54:43 at 
org.apache.flink.table.catalog.SchemaTranslator.patchDataTypeFromColumn(SchemaTranslator.java:350)
 
Dec 07 19:54:43 at 
org.apache.flink.table.catalog.SchemaTranslator.patchDataTypeFromDeclaredSchema(SchemaTranslator.java:337)
 
Dec 07 19:54:43 at 
org.apache.flink.table.catalog.SchemaTranslator.createConsumingResult(SchemaTranslator.java:235)
 
Dec 07 19:54:43 at 
org.apache.flink.table.catalog.SchemaTranslator.createConsumingResult(SchemaTranslator.java:180)
 
Dec 07 19:54:43 at 
org.apache.flink.table.api.bridge.internal.AbstractStreamTableEnvironmentImpl.fromStreamInternal(AbstractStreamTableEnvironmentImpl.java:141)
 
Dec 07 19:54:43 at 
org.apache.flink.table.api.bridge.scala.internal.StreamTableEnvironmentImpl.createTemporaryView(StreamTableEnvironmentImpl.scala:121)
 
Dec 07 19:54:43 at 
org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.test(FsStreamingSinkITCaseBase.scala:186)
 
Dec 07 19:54:43 at 
org.apache.flink.table.planner.runtime.stream.FsStreamingSinkITCaseBase.testPartitionWithBasicDate(FsStreamingSinkITCaseBase.scala:126)
```
 





[jira] [Commented] (FLINK-33666) MergeTableLikeUtil uses different constraint name than Schema

2023-12-07 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794460#comment-17794460
 ] 

Jim Hughes commented on FLINK-33666:


This change seems to prevent 
`org.apache.flink.table.api.internal.TableEnvironmentImpl.loadPlan` from 
working on Compiled Plans which used the previous method.  

The PR changes restore-test plans, which should never be updated.  

I was able to see this since [~bvarghese]'s separate PR was merged to master 
today and it failed CI here:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55321&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4

> MergeTableLikeUtil uses different constraint name than Schema
> -
>
> Key: FLINK-33666
> URL: https://issues.apache.org/jira/browse/FLINK-33666
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Timo Walther
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> {{MergeTableLikeUtil}} uses a different algorithm to name constraints than 
> {{Schema}}. 
> {{Schema}} includes the column names.
> {{MergeTableLikeUtil}} uses a hashCode which means it might depend on JVM 
> specifics.
> For consistency we should use the same algorithm. I propose to use 
> {{Schema}}'s logic.





[jira] [Created] (FLINK-33767) Implement restore tests for TemporalJoin node

2023-12-06 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33767:
--

 Summary: Implement restore tests for TemporalJoin node
 Key: FLINK-33767
 URL: https://issues.apache.org/jira/browse/FLINK-33767
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes








[jira] [Assigned] (FLINK-33758) Implement restore tests for TemporalSort node

2023-12-05 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33758:
--

Assignee: Jim Hughes

> Implement restore tests for TemporalSort node
> -
>
> Key: FLINK-33758
> URL: https://issues.apache.org/jira/browse/FLINK-33758
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>






[jira] [Created] (FLINK-33758) Implement restore tests for TemporalSort node

2023-12-05 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33758:
--

 Summary: Implement restore tests for TemporalSort node
 Key: FLINK-33758
 URL: https://issues.apache.org/jira/browse/FLINK-33758
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes








[jira] [Created] (FLINK-33757) Implement restore tests for Rank node

2023-12-05 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33757:
--

 Summary: Implement restore tests for Rank node
 Key: FLINK-33757
 URL: https://issues.apache.org/jira/browse/FLINK-33757
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes








[jira] [Updated] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-05 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes updated FLINK-33756:
---
Description: 
I have seen an optimization cause a window to fail to emit a record.

With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to true, 
the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or CUMULATE 
window with an offset, a record can be sent which causes one of the multiple 
active windows to fail to emit a record.

The linked code 
(https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
 modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
 
The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
`testDistinctSplitEnabled` tests the above configurations and shows that one 
record is missing from the output.  

  was:
I have seen an optimization cause a window fail to emit a record.

With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to true, 
the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or CUMULATE 
window with an offset, a record can be sent which causes one of the multiple 
active windows to fail to emit a record.

The link code modifies the `WindowAggregateJsonITCase` to demonstrate the case. 
 
 
The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
`testDistinctSplitEnabled` tests the above configurations and shows that one 
record is missing from the output.  


> Missing record with CUMULATE/HOP windows using an optimization
> --
>
> Key: FLINK-33756
> URL: https://issues.apache.org/jira/browse/FLINK-33756
> Project: Flink
>  Issue Type: Bug
>Reporter: Jim Hughes
>Priority: Major
>
> I have seen an optimization cause a window to fail to emit a record.
> With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to 
> true, 
> the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or 
> CUMULATE window with an offset, a record can be sent which causes one of the 
> multiple active windows to fail to emit a record.
> The linked code 
> (https://github.com/jnh5y/flink/commit/ec90aa501d86f95559f8b22b0610e9fb786f05d4)
>  modifies the `WindowAggregateJsonITCase` to demonstrate the case.  
>  
> The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
> `testDistinctSplitEnabled` tests the above configurations and shows that one 
> record is missing from the output.  





[jira] [Created] (FLINK-33756) Missing record with CUMULATE/HOP windows using an optimization

2023-12-05 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33756:
--

 Summary: Missing record with CUMULATE/HOP windows using an 
optimization
 Key: FLINK-33756
 URL: https://issues.apache.org/jira/browse/FLINK-33756
 Project: Flink
  Issue Type: Bug
Reporter: Jim Hughes


I have seen an optimization cause a window to fail to emit a record.

With the optimization `TABLE_OPTIMIZER_DISTINCT_AGG_SPLIT_ENABLED` set to true, 
the configuration AggregatePhaseStrategy.TWO_PHASE set, using a HOP or CUMULATE 
window with an offset, a record can be sent which causes one of the multiple 
active windows to fail to emit a record.

The linked code modifies the `WindowAggregateJsonITCase` to demonstrate the case. 
 
 
The test `testDistinctSplitDisabled` shows the expected behavior.  The test 
`testDistinctSplitEnabled` tests the above configurations and shows that one 
record is missing from the output.  
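For concreteness, the configuration combination described above corresponds to session settings along these lines (key names as in the Flink Table optimizer options; worth double-checking against the version in use):

```sql
-- Enable the distinct-aggregate split optimization
SET 'table.optimizer.distinct-agg.split.enabled' = 'true';
-- Force the two-phase aggregate strategy
SET 'table.optimizer.agg-phase-strategy' = 'TWO_PHASE';
```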





[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792599#comment-17792599
 ] 

Jim Hughes commented on FLINK-33727:


That works for me; I approved that PR.

> JoinRestoreTest is failing on AZP
> -
>
> Key: FLINK-33727
> URL: https://issues.apache.org/jira/browse/FLINK-33727
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> Since {{JoinRestoreTest}} was introduced in FLINK-33470 it seems to be a 
> reason
> {noformat}
> Dec 02 04:42:26 04:42:26.408 [ERROR] Failures: 
> Dec 02 04:42:26 04:42:26.408 [ERROR]   
> JoinRestoreTest>RestoreTestBase.testRestore:283 
> Dec 02 04:42:26 Expecting actual:
> Dec 02 04:42:26   ["+I[9, carol, apple, 9000]",
> Dec 02 04:42:26 "+I[8, bill, banana, 8000]",
> Dec 02 04:42:26 "+I[6, jerry, pen, 6000]"]
> Dec 02 04:42:26 to contain exactly in any order:
> Dec 02 04:42:26   ["+I[Adam, null]",
> Dec 02 04:42:26 "+I[Baker, Research]",
> Dec 02 04:42:26 "+I[Charlie, Human Resources]",
> Dec 02 04:42:26 "+I[Charlie, HR]",
> Dec 02 04:42:26 "+I[Don, Sales]",
> Dec 02 04:42:26 "+I[Victor, null]",
> Dec 02 04:42:26 "+I[Helena, Engineering]",
> Dec 02 04:42:26 "+I[Juliet, Engineering]",
> Dec 02 04:42:26 "+I[Ivana, Research]",
> Dec 02 04:42:26 "+I[Charlie, People Operations]"]
> {noformat}
> examples of failures
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55120&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=12099
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55129&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11786
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55136&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=12099
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55137&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=11779





[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792538#comment-17792538
 ] 

Jim Hughes commented on FLINK-33727:


{quote}if so I'm curious whether it would be more helpful for others to have at 
least a comment about that in sources
{quote}
Absolutely!  The RestoreTest framework is new, and this discussion shows that 
there are a number of non-obvious assumptions.  [~bvarghese] and I are new to 
Flink, and as we've worked with it, we have extended it to have the features 
necessary to test various capabilities.

I apologize for taking out CI!  The order of the merges meant that the PRs were 
never tested together in one branch until after they were merged.  (Which is 
why I'm willing to suggest disabling all RestoreTests temporarily or reverting 
the commits which caused the issue.)

As an additional improvement to the RestoreTestBase, we could save the SQL text 
in a file and fail if it is changed.  Of course, then folks could still update 
the test files which ought to be "immutable" in some sense.
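That "save the SQL text and fail if it changes" idea could be sketched roughly like this (a hypothetical helper, not anything currently in RestoreTestBase; the class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Hypothetical guard: fail a restore test if its SQL drifts from the recorded hash. */
public class SqlImmutabilityCheck {

    /** Hex-encoded SHA-256 of the SQL text backing a saved compiled plan. */
    public static String sha256Hex(String sql) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(sql.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    /** Throws if the SQL no longer matches the hash recorded at plan-generation time. */
    public static void verifyUnchanged(String sql, String recordedHash) {
        if (!sha256Hex(sql).equals(recordedHash)) {
            throw new AssertionError(
                    "SQL text changed since the compiled plan was generated; "
                            + "re-run generateTestSetupFiles to refresh the artifacts.");
        }
    }
}
```

Of course, a determined author could still regenerate the hash along with the SQL; the guard only makes the change deliberate rather than accidental.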




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792535#comment-17792535
 ] 

Jim Hughes commented on FLINK-33727:


As an alternative to the existing PR and to disabling the RestoreTests, I'm 
totally fine with you reverting the commits from my PR:
[https://github.com/apache/flink/pull/23680] 
[https://github.com/apache/flink/commit/e886dfdda6cd927548c8af0a88e78171e7ba34a8]
[https://github.com/apache/flink/commit/5edc7d7b18e88cc86e84d197202d8cbb40621864]
 




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792534#comment-17792534
 ] 

Jim Hughes commented on FLINK-33727:


{quote}..., however tests are continuing passing. Is it expected?
{quote}
"Yes", but the reason is a little confusing.

The RestoreTest framework has two methods:  `generateTestSetupFiles` and 
`testRestore`.  

Presently, the method `generateTestSetupFiles` is disabled and only run by test 
developers before a PR is submitted.  This method takes the SQL, generates and 
saves a compiled plan, runs through the beforeRestore data making some 
comparisons, and finally stops the job and takes a savepoint.

The second method uses the compiled plan and the savepoint.

Since you are only running the second method, changing the SQL is irrelevant 
and not tested (unless you manually run `generateTestSetupFiles`).
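A toy model of that two-phase flow (the names mirror the description above, but this is not the real RestoreTestBase API; the "savepoint" here is just an in-memory map standing in for the checked-in savepoint and compiled-plan artifacts):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy model of the RestoreTestBase two-phase flow; not the real Flink API. */
public class TwoPhaseRestoreModel {

    /** Stands in for the savepoint + compiled-plan artifacts checked into the repo. */
    private final Map<String, List<String>> savedState = new HashMap<>();

    /** Phase 1: disabled in CI; run by test authors to produce the artifacts. */
    public void generateTestSetupFiles(String program, List<String> beforeRestoreData) {
        // Run the beforeRestore data through the job, then "take a savepoint".
        savedState.put(program, new ArrayList<>(beforeRestoreData));
    }

    /** Phase 2: the only phase CI runs; restores from the artifacts and continues. */
    public List<String> testRestore(String program, List<String> afterRestoreData) {
        List<String> results = new ArrayList<>(savedState.getOrDefault(program, List.of()));
        results.addAll(afterRestoreData);
        return results;
    }
}
```

The model makes the failure mode visible: since CI only ever runs phase 2, editing the SQL that produced the phase-1 artifacts changes nothing until someone regenerates them.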
{quote}what is wrong with current PR for this JIRA?
{quote}
CI failing is showing that the RestoreTestBase has some limitations/assumptions 
around state which we need to address.  The current PR fixes CI, but does not 
address those, rather it works around them.  I'd prefer that we fix the 
limitations rather than work around them.  That's why I'm suggesting to disable 
the RestoreTests as a whole until Monday when Dawid and Timo can weigh in.

 




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792532#comment-17792532
 ] 

Jim Hughes commented on FLINK-33727:


{quote}...
It seems it relies on some internal state...
{quote}
The tests have internal state.  Your testing shows that it is not being reset 
between test classes!

Thanks for digging into that; that will help us identify what we need to sort 
out with the RestoreTestBase.

If you are looking to sort things immediately, I'd suggest adding `Disabled` to 
`testRestore` here: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/RestoreTestBase.java#L229]

That'd turn off all of these tests until [~twalthr] [~dwysakowicz] and I have a 
solution.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792532#comment-17792532
 ] 

Jim Hughes edited comment on FLINK-33727 at 12/3/23 2:26 PM:
-

{quote}...
It seems it relies on some internal state...
{quote}
The tests have internal state.  Your testing shows that it is not being reset 
between test classes!

Thanks for digging into that; that will help us identify what we need to sort 
out with the RestoreTestBase.

If you are looking to sort things immediately, I'd suggest adding `Disabled` to 
`testRestore` here: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/RestoreTestBase.java#L229]

That'd turn off all of these tests until [~twalthr] [~dwysakowicz] [~bvarghese] 
and I have a solution.


was (Author: JIRAUSER284726):
{quote}...
It seems it relies on some internal state...
{quote}
The tests have internal state.  Your testing shows that it not being reset 
between test classes!

Thanks for digging into that; that will help us identify what we need to sort 
out with the TestRestoreBase.  

If you are looking to sort things immediately, I'd suggest adding `Disabled` to 
`testRestore` here: 
[https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/testutils/RestoreTestBase.java#L229]

That'd turn off all of these tests until [~twalthr] [~dwysakowicz] and I have a 
solution.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-03 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792531#comment-17792531
 ] 

Jim Hughes commented on FLINK-33727:


> is there any reason to have same name?

Yes and no.  The SQL text will need to reference the input and output tables.  
In other restore tests, it makes sense to have functions which generate sources 
/ sinks, so being able to reuse a table name is nice.

> what is the reason to have {{runSql}} this in {{DeduplicationTestPrograms?}}
`runSql` adds the steps which will actually execute something to the 
TestProgram.  

I'm pretty sure that removing the `runSql` step would be like removing the part 
of a JUnit test method that actually does something and then asserts that it 
works.  (That would explain why the tests pass without it.)

In some sense, I see the TestProgram and RestoreTestBase as setting up a 
Builder/DSL for JUnit tests that a) test things about CompiledPlans and b) make 
sure that a streaming job can be restored sensibly.
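A hypothetical miniature of that builder idea (not Flink's actual TestProgram class; the method names only echo the discussion above) shows why dropping `runSql` leaves nothing to assert on:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical miniature of the TestProgram builder; not Flink's real class. */
class MiniTestProgram {
    private final List<String> setupSql = new ArrayList<>();
    private String runSql; // without this, nothing is actually executed or asserted

    MiniTestProgram setupSql(String sql) {
        setupSql.add(sql);
        return this;
    }

    MiniTestProgram runSql(String sql) {
        this.runSql = sql;
        return this;
    }

    /** True only when the program contains a step that executes and can be asserted on. */
    boolean hasExecutableStep() {
        return runSql != null;
    }
}
```

A program built only from setup steps "passes" vacuously, which matches the observation that the tests kept passing after the `runSql` was removed.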




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33727) JoinRestoreTest is failing on AZP

2023-12-02 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792413#comment-17792413
 ] 

Jim Hughes commented on FLINK-33727:


From a quick look, the data is coming from `DeduplicationTestPrograms.java`.

I believe that this shows that the various `RestoreTest`s are being executed 
concurrently and are interfering with each other.  

Two obvious ideas would be:
1. Have each RestoreTest use differently named sinks/sources.  (Right now, the 
DeduplicationTestPrograms and JoinTestPrograms both use sinks called "MySink".)
2. Do something at the JUnit level so that implementations of RestoreTestBase 
do not run concurrently.
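The first idea could be as simple as deriving sink names from the test-programs class instead of sharing "MySink" (a sketch with illustrative names, not an existing Flink utility):

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of idea 1: per-test sink names instead of a shared "MySink". */
public class UniqueSinkNames {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    /** E.g. "JoinTestPrograms_sink_1" rather than the shared "MySink". */
    public static String uniqueSinkName(String programsClass) {
        return programsClass + "_sink_" + COUNTER.incrementAndGet();
    }
}
```

With distinct names, concurrently running RestoreTests could no longer read each other's sink contents.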

Thoughts?




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33676) Implement restore tests for WindowAggregate node

2023-11-28 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33676:
--

 Summary: Implement restore tests for WindowAggregate node
 Key: FLINK-33676
 URL: https://issues.apache.org/jira/browse/FLINK-33676
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33667) Implement restore tests for MatchRecognize node

2023-11-27 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33667:
--

 Summary: Implement restore tests for MatchRecognize node
 Key: FLINK-33667
 URL: https://issues.apache.org/jira/browse/FLINK-33667
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33601) Implement restore tests for Expand node

2023-11-20 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33601:
--

 Summary: Implement restore tests for Expand node
 Key: FLINK-33601
 URL: https://issues.apache.org/jira/browse/FLINK-33601
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33601) Implement restore tests for Expand node

2023-11-20 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33601:
--

Assignee: Jim Hughes

> Implement restore tests for Expand node
> ---
>
> Key: FLINK-33601
> URL: https://issues.apache.org/jira/browse/FLINK-33601
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33521) Implement restore tests for PythonCalc node

2023-11-10 Thread Jim Hughes (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Hughes reassigned FLINK-33521:
--

Assignee: Jim Hughes

> Implement restore tests for PythonCalc node
> ---
>
> Key: FLINK-33521
> URL: https://issues.apache.org/jira/browse/FLINK-33521
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Jim Hughes
>Assignee: Jim Hughes
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33521) Implement restore tests for PythonCalc node

2023-11-10 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33521:
--

 Summary: Implement restore tests for PythonCalc node
 Key: FLINK-33521
 URL: https://issues.apache.org/jira/browse/FLINK-33521
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33488) Implement restore tests for Deduplicate node

2023-11-08 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33488:
--

 Summary: Implement restore tests for Deduplicate node
 Key: FLINK-33488
 URL: https://issues.apache.org/jira/browse/FLINK-33488
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33470) Implement restore tests for Join node

2023-11-06 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33470:
--

 Summary: Implement restore tests for Join node
 Key: FLINK-33470
 URL: https://issues.apache.org/jira/browse/FLINK-33470
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33469) Implement restore tests for Limit node

2023-11-06 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-33469:
--

 Summary: Implement restore tests for Limit node 
 Key: FLINK-33469
 URL: https://issues.apache.org/jira/browse/FLINK-33469
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32691) SELECT fcn does not work with an unset catalog or database

2023-07-26 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-32691:
--

 Summary: SELECT fcn does not work with an unset catalog or database
 Key: FLINK-32691
 URL: https://issues.apache.org/jira/browse/FLINK-32691
 Project: Flink
  Issue Type: Bug
Reporter: Jim Hughes
 Fix For: 1.18.0


Relative to https://issues.apache.org/jira/browse/FLINK-32584, function lookup 
fails without the catalog and database set.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-20092) [Java 11] Multi-thread Flink compilation not working

2022-11-21 Thread Jim Hughes (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636830#comment-17636830
 ] 

Jim Hughes commented on FLINK-20092:


Hi all, I chatted with [~chesnay] about this some.  He identified that 
https://issues.apache.org/jira/browse/MSHADE-413 may be the issue and suggested 
that I could try version 3.4.1 (the latest and the fix version for that JIRA) 
of the Maven Shade plugin.

Updating the plugin is being discussed on this PR: 
[https://github.com/apache/flink/pull/21344/files] 

> [Java 11] Multi-thread Flink compilation not working
> 
>
> Key: FLINK-20092
> URL: https://issues.apache.org/jira/browse/FLINK-20092
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Maciej Bryński
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> I'd like to use maven -T option when compiling flink.
> {code:java}
>  mvn -T 2C clean install -D"scala-2.12" -DskipTests{code}
> Unfortunately my build is stuck on:
> {code:java}
> [INFO] --- maven-shade-plugin:3.2.1:shade (shade-flink) @ 
> flink-fs-hadoop-shaded ---
> [INFO] Including org.apache.hadoop:hadoop-common:jar:3.1.0 in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-annotations:jar:3.1.0 in the shaded 
> jar.
> [INFO] Including com.google.guava:guava:jar:11.0.2 in the shaded jar.
> [INFO] Including commons-io:commons-io:jar:2.7 in the shaded jar.
> [INFO] Including commons-collections:commons-collections:jar:3.2.2 in the 
> shaded jar.
> [INFO] Including commons-logging:commons-logging:jar:1.1.3 in the shaded jar.
> [INFO] Including commons-lang:commons-lang:jar:2.6 in the shaded jar.
> [INFO] Including commons-beanutils:commons-beanutils:jar:1.9.3 in the shaded 
> jar.
> [INFO] Including org.apache.commons:commons-configuration2:jar:2.1.1 in the 
> shaded jar.
> [INFO] Including org.apache.commons:commons-lang3:jar:3.3.2 in the shaded jar.
> [INFO] Including com.google.re2j:re2j:jar:1.1 in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-auth:jar:3.1.0 in the shaded jar.
> [INFO] Including org.apache.htrace:htrace-core4:jar:4.1.0-incubating in the 
> shaded jar.
> [INFO] Including com.fasterxml.jackson.core:jackson-databind:jar:2.10.1 in 
> the shaded jar.
> [INFO] Including com.fasterxml.jackson.core:jackson-annotations:jar:2.10.1 in 
> the shaded jar.
> [INFO] Including com.fasterxml.jackson.core:jackson-core:jar:2.10.1 in the 
> shaded jar.
> [INFO] Including org.codehaus.woodstox:stax2-api:jar:3.1.4 in the shaded jar.
> [INFO] Including com.fasterxml.woodstox:woodstox-core:jar:5.0.3 in the shaded 
> jar.
> [INFO] Including org.apache.flink:force-shading:jar:1.12-SNAPSHOT in the 
> shaded jar.
> [INFO] No artifact matching filter io.netty:netty
> [WARNING] Discovered module-info.class. Shading will break its strong 
> encapsulation.
> [WARNING] Discovered module-info.class. Shading will break its strong 
> encapsulation.
> [WARNING] Discovered module-info.class. Shading will break its strong 
> encapsulation.
> [INFO] Replacing original artifact with shaded artifact.
> [INFO] Replacing 
> /home/maverick/flink/flink-filesystems/flink-fs-hadoop-shaded/target/flink-fs-hadoop-shaded-1.12-SNAPSHOT.jar
>  with 
> /home/maverick/flink/flink-filesystems/flink-fs-hadoop-shaded/target/flink-fs-hadoop-shaded-1.12-SNAPSHOT-shaded.jar
> [INFO] Dependency-reduced POM written at: 
> /home/maverick/flink/flink-filesystems/flink-fs-hadoop-shaded/target/dependency-reduced-pom.xml
> {code}
> Can we make flink compilation working with multiple maven threads ?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)