Re: [PR] [FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-26 Thread via GitHub


flinkbot commented on PR #23809:
URL: https://github.com/apache/flink/pull/23809#issuecomment-1827310156

   
   ## CI report:
   
   * 20ebdff8ebda287f1640c770a3bbe164a29f2be8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33600][table] print the query time cost for batch query in cli [flink]

2023-11-26 Thread via GitHub


JingGe opened a new pull request, #23809:
URL: https://github.com/apache/flink/pull/23809

   ## What is the purpose of the change
   
   Print the query time cost for batch queries in the SQL CLI.
   
   
   ## Brief change log
   
 - define a new table config option to switch the display of the query time cost on/off (see the sketch after this list)
 - extend `PrintStyle` and `TableauStyle`
 - update `CliTableauResultView` to show the time cost for batch results
 - extend tests to cover the new feature
   
   
   ## Verifying this change
   
   After updating the existing tests, this change is already covered by tests 
such as `TableauStyleTest` and `CliTableauResultViewTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - Documentation will be updated in another PR once the community reaches 
consensus on the solution.
   





[jira] [Updated] (FLINK-33600) Print time cost for batch queries in SQL Client

2023-11-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33600:
---
Labels: pull-request-available  (was: )

> Print time cost for batch queries in SQL Client
> ---
>
> Key: FLINK-33600
> URL: https://issues.apache.org/jira/browse/FLINK-33600
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Reporter: Jark Wu
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently, there is no time-cost information when executing batch queries in 
> the SQL CLI, but this is very helpful in OLAP/ad-hoc scenarios. 
> For example: 
> {code}
> Flink SQL> select * from (values ('abc', 123));
> +--------+--------+
> | EXPR$0 | EXPR$1 |
> +--------+--------+
> |    abc |    123 |
> +--------+--------+
> 1 row in set  (0.22 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32667) Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster

2023-11-26 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789941#comment-17789941
 ] 

Yangze Guo commented on FLINK-32667:


FYI: since FLINK-33033 has been merged and proves the benefit we can gain 
from an embedded meta store, I'd like to revive this work and file a FLIP in the 
near future.

> Use standalone store and embedding writer for jobs with no-restart-strategy 
> in session cluster
> --
>
> Key: FLINK-32667
> URL: https://issues.apache.org/jira/browse/FLINK-32667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> When a Flink session cluster uses the ZK or K8s high-availability service, it 
> stores jobs in ZK or a ConfigMap. When we submit Flink OLAP jobs to the session 
> cluster, they always turn off the restart strategy. Jobs with no restart 
> strategy should not be stored in ZK, or in a ConfigMap in K8s





[jira] [Assigned] (FLINK-32667) Use standalone store and embedding writer for jobs with no-restart-strategy in session cluster

2023-11-26 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo reassigned FLINK-32667:
--

Assignee: Yangze Guo  (was: Fang Yong)

> Use standalone store and embedding writer for jobs with no-restart-strategy 
> in session cluster
> --
>
> Key: FLINK-32667
> URL: https://issues.apache.org/jira/browse/FLINK-32667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> When a Flink session cluster uses the ZK or K8s high-availability service, it 
> stores jobs in ZK or a ConfigMap. When we submit Flink OLAP jobs to the session 
> cluster, they always turn off the restart strategy. Jobs with no restart 
> strategy should not be stored in ZK, or in a ConfigMap in K8s





Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-26 Thread via GitHub


KarmaGYZ commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1405582679


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/TaskBalancedPreferredSlotSharingStrategyTest.java:
##
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroup;
+import org.apache.flink.runtime.jobmanager.scheduler.SlotSharingGroup;
+import org.apache.flink.runtime.scheduler.strategy.SchedulingTopology;
+import 
org.apache.flink.runtime.scheduler.strategy.TestingSchedulingExecutionVertex;
+
+import org.apache.flink.shaded.guava31.com.google.common.collect.Lists;
+import org.apache.flink.shaded.guava31.com.google.common.collect.Sets;
+
+import org.assertj.core.data.Offset;
+import org.junit.jupiter.api.Test;
+
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for {@link TaskBalancedPreferredSlotSharingStrategy}. */
+class TaskBalancedPreferredSlotSharingStrategyTest extends 
AbstractSlotSharingStrategyTest {

Review Comment:
   Why did we delete the other two tests?






Re: [PR] [FLINK-26013][flink-core][test] add archunit tests for the test code [flink]

2023-11-26 Thread via GitHub


JingGe merged PR #23799:
URL: https://github.com/apache/flink/pull/23799





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Description: 
If json.ignore-parse-errors is set to true and Array parsing errors occur, the 
fields following the array are resolved as empty in the complete JSON message.

1. !image-2023-11-27-13-59-42-066.png!

!image-2023-11-27-14-00-04-672.png!

 

2. !image-2023-11-27-14-00-41-176.png!

!image-2023-11-27-14-01-12-187.png!

3. !image-2023-11-27-14-02-52-065.png!

!image-2023-11-27-14-03-10-885.png!

  was:Flink SQL  If json.ignore-parse-errors=true is configured and Array 
parsing errors occur, other columns will be empty


> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png, 
> image-2023-11-27-14-02-01-252.png, image-2023-11-27-14-02-30-666.png, 
> image-2023-11-27-14-02-52-065.png, image-2023-11-27-14-03-10-885.png
>
>
> If json.ignore-parse-errors is set to true and Array parsing errors occur, 
> the fields following the array are resolved as empty in the complete JSON message.
> 1. !image-2023-11-27-13-59-42-066.png!
> !image-2023-11-27-14-00-04-672.png!
>  
> 2. !image-2023-11-27-14-00-41-176.png!
> !image-2023-11-27-14-01-12-187.png!
> 3. !image-2023-11-27-14-02-52-065.png!
> !image-2023-11-27-14-03-10-885.png!





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-03-10-885.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png, 
> image-2023-11-27-14-02-01-252.png, image-2023-11-27-14-02-30-666.png, 
> image-2023-11-27-14-02-52-065.png, image-2023-11-27-14-03-10-885.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-02-52-065.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png, 
> image-2023-11-27-14-02-01-252.png, image-2023-11-27-14-02-30-666.png, 
> image-2023-11-27-14-02-52-065.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-02-30-666.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png, 
> image-2023-11-27-14-02-01-252.png, image-2023-11-27-14-02-30-666.png, 
> image-2023-11-27-14-02-52-065.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-02-01-252.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png, 
> image-2023-11-27-14-02-01-252.png, image-2023-11-27-14-02-30-666.png, 
> image-2023-11-27-14-02-52-065.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-01-12-187.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png, image-2023-11-27-14-01-12-187.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-00-41-176.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-14-00-04-672.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png, image-2023-11-27-14-00-04-672.png, 
> image-2023-11-27-14-00-41-176.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-13-59-42-066.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png, 
> image-2023-11-27-13-59-42-066.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Attachment: image-2023-11-27-13-58-22-513.png

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
> Attachments: image-2023-11-27-13-58-22-513.png
>
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


JingGe merged PR #23786:
URL: https://github.com/apache/flink/pull/23786





[jira] [Comment Edited] (FLINK-27681) Improve the availability of Flink when the RocksDB file is corrupted.

2023-11-26 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789903#comment-17789903
 ] 

Hangxiang Yu edited comment on FLINK-27681 at 11/27/23 4:33 AM:


[~pnowojski] [~fanrui] Thanks for joining the discussion.

Thanks [~mayuehappy] for explaining the case, which we also saw in our 
production environment.

Let me also try to share my thoughts on your questions.
{quote}I'm worried that Flink adding a check can't completely solve the problem 
of file corruption. Is it possible that file corruption occurs after the Flink 
check but before uploading the file to HDFS?
{quote}
I think the concern is valid.

Actually, file corruption may occur at all stages:
 # File generation at runtime (RocksDB memtable flush or compaction)
 # Uploading during checkpointing (local file -> memory buffer -> network 
transfer -> DFS)
 # Downloading during recovery (the reverse path of 2)

For the first situation:
 * File corruption will not affect the read path, because the checksum is 
checked when a RocksDB block is read. The job will fail over when it reads the 
corrupted block.
 * So the core problem is that a corrupted file which is not read at runtime 
will be uploaded to the remote DFS during checkpointing. It will affect normal 
processing once a failover happens, which can have severe consequences, 
especially for high-priority jobs.
 * That's why we'd like to focus on not uploading the corrupted files (and also 
on simply failing the job so that it restores from the last complete 
checkpoint).

For the second and third situations:
 * The ideal way is to unify the checksum mechanism of the local DB and the 
remote DFS.
 * Many FileSystems support passing the file checksum and verifying it on the 
remote server. We could use this to verify the checksum end-to-end.
 * But this may introduce a new API in some public classes like FileSystem, 
which is a bigger topic.
 * We have also seen many cases like the one [~mayuehappy] mentioned, so I 
think we could resolve this case first. I'd also like to see us take the ideal 
route if it is worth doing.


was (Author: masteryhx):
[~pnowojski] [~fanrui] Thanks for joining the discussion.

Thanks [~mayuehappy] for explaining the case, which we also saw in our 
production environment.

Let me also try to share my thoughts on your questions.
{quote}I'm worried that Flink adding a check can't completely solve the problem 
of file corruption. Is it possible that file corruption occurs after the Flink 
check but before uploading the file to HDFS?
{quote}
I think the concern is valid.

Actually, file corruption may occur at all stages:
 # File generation at runtime (RocksDB memtable flush or compaction)
 # Uploading during checkpointing (local file -> memory buffer -> network 
transfer -> DFS)
 # Downloading during recovery (the reverse path of 2)

For the first situation:
 * File corruption will not affect the read path, because the checksum is 
checked when a RocksDB block is read. The job will fail over when it reads the 
corrupted block.
 * So the core problem is that a corrupted file which is not read at runtime 
will be uploaded to the remote DFS during checkpointing. It will affect normal 
processing once a failover happens, which can have severe consequences, 
especially for high-priority jobs.
 * That's why we'd like to focus on not uploading the corrupted files.

For the second and third situations:
 * The ideal way is to unify the checksum mechanism of the local DB and the 
remote DFS.
 * Many FileSystems support passing the file checksum and verifying it on the 
remote server. We could use this to verify the checksum end-to-end.
 * But this may introduce a new API in some public classes like FileSystem, 
which is a bigger topic.
 * We have also seen many cases like the one [~mayuehappy] mentioned, so I 
think we could resolve this case first. I'd also like to see us take the ideal 
route if it is worth doing.

> Improve the availability of Flink when the RocksDB file is corrupted.
> -
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ming Li
>Assignee: Yue Ma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match or 
> the block verification fails when the job is restored. The reason is generally 
> that there are problems with the machine where the task is located, which 
> causes the files uploaded to HDFS to be incorrect, but a long time (a dozen 
> minutes to half an hour) had passed by the time we found the problem. I'm not 
> sure if anyone else has 

[jira] [Commented] (FLINK-27681) Improve the availability of Flink when the RocksDB file is corrupted.

2023-11-26 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789903#comment-17789903
 ] 

Hangxiang Yu commented on FLINK-27681:
--

[~pnowojski] [~fanrui] Thanks for joining the discussion.

Thanks [~mayuehappy] for explaining the case, which we also saw in our 
production environment.

Let me also try to share my thoughts on your questions.
{quote}I'm worried that Flink adding a check can't completely solve the problem 
of file corruption. Is it possible that file corruption occurs after the Flink 
check but before uploading the file to HDFS?
{quote}
I think the concern is valid.

Actually, file corruption may occur at all stages:
 # File generation at runtime (RocksDB memtable flush or compaction)
 # Uploading during checkpointing (local file -> memory buffer -> network 
transfer -> DFS)
 # Downloading during recovery (the reverse path of 2)

For the first situation:
 * File corruption will not affect the read path, because the checksum is 
checked when a RocksDB block is read. The job will fail over when it reads the 
corrupted block.
 * So the core problem is that a corrupted file which is not read at runtime 
will be uploaded to the remote DFS during checkpointing. It will affect normal 
processing once a failover happens, which can have severe consequences, 
especially for high-priority jobs.
 * That's why we'd like to focus on not uploading the corrupted files.

For the second and third situations:
 * The ideal way is to unify the checksum mechanism of the local DB and the 
remote DFS.
 * Many FileSystems support passing the file checksum and verifying it on the 
remote server. We could use this to verify the checksum end-to-end (a rough 
sketch follows below).
 * But this may introduce a new API in some public classes like FileSystem, 
which is a bigger topic.
 * We have also seen many cases like the one [~mayuehappy] mentioned, so I 
think we could resolve this case first. I'd also like to see us take the ideal 
route if it is worth doing.

> Improve the availability of Flink when the RocksDB file is corrupted.
> -
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ming Li
>Assignee: Yue Ma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match or 
> the block verification fails when the job is restored. The reason is generally 
> that there are problems with the machine where the task is located, which 
> causes the files uploaded to HDFS to be incorrect, but a long time (a dozen 
> minutes to half an hour) had passed by the time we found the problem. I'm not 
> sure if anyone else has had a similar problem.
> Since this file is referenced by incremental checkpoints for a long time, 
> when the maximum number of reserved checkpoints is exceeded, we can only use 
> this file until it is no longer referenced. When the job fails, it cannot be 
> recovered.
> Therefore we consider:
> 1. Can RocksDB periodically check whether all files are correct and find the 
> problem in time?
> 2. Can Flink automatically roll back to the previous checkpoint when there is 
> a problem with the checkpoint data, because even with manual intervention, it 
> just tries to recover from the existing checkpoint or discard the entire 
> state.
> 3. Can we increase the maximum number of references to a file based on the 
> maximum number of checkpoints reserved? When the number of references exceeds 
> the maximum number of checkpoints -1, the Task side is required to upload a 
> new file for this reference. Not sure if this way will ensure that the new 
> file we upload will be correct.





[jira] [Created] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)
duke created FLINK-33656:


 Summary: If json.ignore-parse-errors=true is configured and Array 
parsing errors occur, other columns will be empty
 Key: FLINK-33656
 URL: https://issues.apache.org/jira/browse/FLINK-33656
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.17.1, 1.18.0, 1.16.2, 1.16.1, 1.17.0, 1.16.0
Reporter: duke


If json.ignore-parse-errors=true is configured and Array parsing errors occur, 
other columns will be empty





[jira] [Updated] (FLINK-33656) If json.ignore-parse-errors=true is configured and Array parsing errors occur, other columns will be empty

2023-11-26 Thread duke (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

duke updated FLINK-33656:
-
Description: Flink SQL  If json.ignore-parse-errors=true is configured and 
Array parsing errors occur, other columns will be empty  (was: If 
json.ignore-parse-errors=true is configured and Array parsing errors occur, 
other columns will be empty)

> If json.ignore-parse-errors=true is configured and Array parsing errors 
> occur, other columns will be empty
> ---
>
> Key: FLINK-33656
> URL: https://issues.apache.org/jira/browse/FLINK-33656
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0, 1.17.0, 1.16.1, 1.16.2, 1.18.0, 1.17.1
>Reporter: duke
>Priority: Critical
>
> In Flink SQL, if json.ignore-parse-errors=true is configured and Array parsing 
> errors occur, other columns will be empty





Re: [PR] [FLINK-32881][checkpoint] Support triggering savepoint in detach mode for CLI and dumping all pending savepoint ids by rest api [flink]

2023-11-26 Thread via GitHub


masteryhx commented on code in PR #23253:
URL: https://github.com/apache/flink/pull/23253#discussion_r1403926743


##
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java:
##
@@ -157,6 +157,14 @@ public class CliFrontendParser {
 + " for changing state backends, native = a 
specific format for the"
 + " chosen state backend, might be faster to take 
and restore from.");
 
+public static final Option SAVEPOINT_DETACH_OPTION =

Review Comment:
   Whether we define a new option or just reuse an existing one, we should update 
all related docs.



##
flink-clients/src/main/java/org/apache/flink/client/program/rest/RestClusterClient.java:
##
@@ -608,6 +616,27 @@ private CompletableFuture triggerSavepoint(
 });
 }
 
+private CompletableFuture triggerDetachSavepoint(
+final JobID jobId,
+final @Nullable String savepointDirectory,
+final boolean cancelJob,
+final SavepointFormatType formatType) {
+final SavepointTriggerHeaders savepointTriggerHeaders =

Review Comment:
   The previous logic is similar to the savepoint logic, right?
   Could we unify them into one method?



##
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java:
##
@@ -157,6 +157,14 @@ public class CliFrontendParser {
 + " for changing state backends, native = a 
specific format for the"
 + " chosen state backend, might be faster to take 
and restore from.");
 
+public static final Option SAVEPOINT_DETACH_OPTION =
+new Option(
+"dcp",

Review Comment:
   BTW, I also thought about this again.
   Could `stop-with-savepoint` also have this problem?
   If true, I think we could resolve this together (you could add a new commit).
   Then maybe reusing the "detached" command looks good to me.



##
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java:
##
@@ -157,6 +157,14 @@ public class CliFrontendParser {
 + " for changing state backends, native = a 
specific format for the"
 + " chosen state backend, might be faster to take 
and restore from.");
 
+public static final Option SAVEPOINT_DETACH_OPTION =
+new Option(
+"dcp",

Review Comment:
   How about "dsp" ?
   Or Could we just reuse existing "detached" ?



##
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java:
##
@@ -178,6 +178,23 @@ CompletableFuture stopWithSavepoint(
 CompletableFuture triggerSavepoint(
 JobID jobId, @Nullable String savepointDirectory, 
SavepointFormatType formatType);
 
+/**
+ * Triggers a detach savepoint for the job identified by the job id. The 
savepoint will be
+ * written to the given savepoint directory, or {@link
+ * 
org.apache.flink.configuration.CheckpointingOptions#SAVEPOINT_DIRECTORY} if it 
is null.
+ * Notice that: detach savepoint will return with a savepoint trigger id 
instead of the path
+ * future, that means the client will return very quickly.
+ *
+ * @param jobId job id
+ * @param savepointDirectory directory the savepoint should be written to
+ * @param formatType a binary format of the savepoint
+ * @return The savepoint trigger id
+ */
+default CompletableFuture triggerDetachSavepoint(
+JobID jobId, @Nullable String savepointDirectory, 
SavepointFormatType formatType) {
+return triggerSavepoint(jobId, savepointDirectory, formatType);

Review Comment:
   Do we really need the default method?
   It's not a public interface, and it seems the default semantics don't match 
`detach`.



##
flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java:
##
@@ -178,6 +178,23 @@ CompletableFuture stopWithSavepoint(
 CompletableFuture triggerSavepoint(
 JobID jobId, @Nullable String savepointDirectory, 
SavepointFormatType formatType);
 
+/**
+ * Triggers a detach savepoint for the job identified by the job id. The 
savepoint will be

Review Comment:
   ```suggestion
* Triggers a detached savepoint for the job identified by the job id. 
The savepoint will be
   ```






Re: [PR] [FLINK-27681] RocksdbStatebackend verify incremental sst file checksum during checkpoint [flink]

2023-11-26 Thread via GitHub


mayuehappy commented on code in PR #23765:
URL: https://github.com/apache/flink/pull/23765#discussion_r1405583287


##
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java:
##
@@ -350,6 +359,10 @@ private long uploadSnapshotFiles(
 ? CheckpointedStateScope.EXCLUSIVE
 : CheckpointedStateScope.SHARED;
 
+if (stateFileVerifier != null) {
+stateFileVerifier.verifySstFilesChecksum(sstFilePaths);
+}
+
 List sstFilesUploadResult =
 stateUploader.uploadFilesToCheckpointFs(

Review Comment:
   > > most files are damaged due to hardware failures on the machine where the 
file is written
   > 
   > Checking with you first: if there are hardware failures after the file is 
written, doesn't this issue still happen?
   
   Well, just as I replied to you in the ticket, I think this possibility is 
very small. We can continue the discussion in the ticket and then come back to 
continue reviewing this PR.









[jira] [Commented] (FLINK-27681) Improve the availability of Flink when the RocksDB file is corrupted.

2023-11-26 Thread Yue Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789892#comment-17789892
 ] 

Yue Ma commented on FLINK-27681:


Hi [~pnowojski] and [~fanrui], thanks for your replies.
{quote}If one file is uploaded to hdfs in the previous checkpoint, and it's 
corrupted now
{quote}
If the file uploaded to HDFS is good but may be corrupted by the local disk 
after download during data processing, can this problem be solved by scheduling 
the TM to another machine after failover? Isn't it more important to ensure 
that the checkpoint data on HDFS is available? BTW, we don't seem to have 
encountered this situation in our actual production environment. I don't know 
whether you have actually encountered it, or whether we still need to consider 
this situation.

> Improve the availability of Flink when the RocksDB file is corrupted.
> -
>
> Key: FLINK-27681
> URL: https://issues.apache.org/jira/browse/FLINK-27681
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Ming Li
>Assignee: Yue Ma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2023-08-23-15-06-16-717.png
>
>
> We have encountered several cases where the RocksDB checksum does not match or 
> the block verification fails when the job is restored. The reason is generally 
> that there are problems with the machine where the task is located, which 
> causes the files uploaded to HDFS to be incorrect, but a long time (a dozen 
> minutes to half an hour) had passed by the time we found the problem. I'm not 
> sure if anyone else has had a similar problem.
> Since this file is referenced by incremental checkpoints for a long time, 
> when the maximum number of reserved checkpoints is exceeded, we can only use 
> this file until it is no longer referenced. When the job fails, it cannot be 
> recovered.
> Therefore we consider:
> 1. Can RocksDB periodically check whether all files are correct and find the 
> problem in time?
> 2. Can Flink automatically roll back to the previous checkpoint when there is 
> a problem with the checkpoint data, because even with manual intervention, it 
> just tries to recover from the existing checkpoint or discard the entire 
> state.
> 3. Can we increase the maximum number of references to a file based on the 
> maximum number of checkpoints reserved? When the number of references exceeds 
> the maximum number of checkpoints -1, the Task side is required to upload a 
> new file for this reference. Not sure if this way will ensure that the new 
> file we upload will be correct.





Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-11-26 Thread via GitHub


masteryhx commented on PR #21635:
URL: https://github.com/apache/flink/pull/21635#issuecomment-1827061892

   > Thanks @masteryhx for the contribution!
   > 
   > I didn't finish the review, and left some minor comments. Please take a 
look in your free time, thanks~
   
   Thanks a lot for your time!
   I have updated the PR by adding two commits.
   I will rebase some commits after the PR is approved.
   Please review it again in your free time. Thanks a lot again!





Re: [PR] [FLINK-30613] Improve resolving schema compatibility -- Milestone one [flink]

2023-11-26 Thread via GitHub


masteryhx commented on code in PR #21635:
URL: https://github.com/apache/flink/pull/21635#discussion_r1405573786


##
flink-core/src/main/java/org/apache/flink/api/common/typeutils/CompositeTypeSerializerUtil.java:
##
@@ -34,21 +34,21 @@ public class CompositeTypeSerializerUtil {
  * can be used by legacy snapshot classes, which have a newer 
implementation implemented as a
  * {@link CompositeTypeSerializerSnapshot}.
  *
- * @param newSerializer the new serializer to check for compatibility.
+ * @param legacySerializerSnapshot the legacy serializer snapshot to check 
for compatibility.
  * @param newCompositeSnapshot an instance of the new snapshot class to 
delegate compatibility
  * checks to. This instance should already contain the outer snapshot 
information.
  * @param legacyNestedSnapshots the nested serializer snapshots of the 
legacy composite
  * snapshot.
  * @return the result compatibility.
  */
 public static  TypeSerializerSchemaCompatibility 
delegateCompatibilityCheckToNewSnapshot(
-TypeSerializer newSerializer,
-CompositeTypeSerializerSnapshot 
newCompositeSnapshot,
+TypeSerializerSnapshot legacySerializerSnapshot,

Review Comment:
   `legacy` means it's deprecated, and the code is just used for a temporary 
compatibility check, as before.
   It's different from the regular compatibility check from the old/previous 
snapshot to the new one.
   
   I have migrated `previous` to `old` for some classes.






[jira] [Assigned] (FLINK-31449) Remove DeclarativeSlotManager related logic

2023-11-26 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu reassigned FLINK-31449:
-

Assignee: Weihua Hu

> Remove DeclarativeSlotManager related logic
> ---
>
> Key: FLINK-31449
> URL: https://issues.apache.org/jira/browse/FLINK-31449
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Reporter: Weihua Hu
>Assignee: Weihua Hu
>Priority: Major
> Fix For: 1.19.0
>
>
> The DeclarativeSlotManager and related configs will be completely removed in 
> the next release, now that the default SlotManager has changed to 
> FineGrainedSlotManager.
>  
> We should do this work in the 1.19 release.





Re: [PR] [FLINK-32881][checkpoint] Support triggering savepoint in detach mode for CLI and dumping all pending savepoint ids by rest api [flink]

2023-11-26 Thread via GitHub


xiangforever2014 commented on PR #23253:
URL: https://github.com/apache/flink/pull/23253#issuecomment-1827056077

   @masteryhx hope to get your reply, many thanks~





[jira] [Commented] (FLINK-31275) Flink supports reporting and storage of source/sink tables relationship

2023-11-26 Thread Fang Yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789890#comment-17789890
 ] 

Fang Yong commented on FLINK-31275:
---

[~mobuchowski] Sorry for the late reply.

What is the `DataSet` in `LineageVertex` used for? Why does a `LineageVertex` 
have multiple inputs or outputs? We hope that `LineageVertex` describes a single 
source or sink, rather than multiple. So I prefer to add `Map` 
to `LineageVertex` directly to describe the particular information. We 
introduce `LineageEdge` in this FLIP to describe the relation between sources 
and sinks, instead of adding `input` or `output` to `LineageVertex` (a rough 
sketch of this shape follows below).



> Flink supports reporting and storage of source/sink tables relationship
> ---
>
> Key: FLINK-31275
> URL: https://issues.apache.org/jira/browse/FLINK-31275
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>
> FLIP-314 has been accepted 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-314%3A+Support+Customized+Job+Lineage+Listener





[jira] [Commented] (FLINK-33638) Support variable-length data generation for variable-length data types

2023-11-26 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789889#comment-17789889
 ] 

Yubin Li commented on FLINK-33638:
--

[~lincoln] thanks very much!

> Support variable-length data generation for variable-length data types
> --
>
> Key: FLINK-33638
> URL: https://issues.apache.org/jira/browse/FLINK-33638
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.19.0
>Reporter: Yubin Li
>Assignee: Yubin Li
>Priority: Major
>
> Currently, for variable-length data types (varchar, varbinary, string, 
> bytes), the datagen connector always generates max-length data. We can extend 
> datagen to generate variable-length values, using a new option to enable it, 
> e.g. 'fields.f0.var-len'='true' (a usage sketch follows below).
> The topic has been discussed in the mail thread 
> [https://lists.apache.org/thread/kp6popo4cnhl6vx31sdn6mlscpzj9tgc]





Re: [PR] [FLINK-33643][runtime] Allow StreamExecutionEnvironment's executeAsync API to use default JobName [flink]

2023-11-26 Thread via GitHub


huwh commented on code in PR #23794:
URL: https://github.com/apache/flink/pull/23794#discussion_r1405565272


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java:
##
@@ -2213,12 +2211,24 @@ public final JobClient executeAsync() throws Exception {
  */
 @PublicEvolving
 public JobClient executeAsync(String jobName) throws Exception {
-Preconditions.checkNotNull(jobName, "Streaming Job name should not be 
null.");
 final StreamGraph streamGraph = getStreamGraph();
-streamGraph.setJobName(jobName);
+setJobNameForJobGraph(streamGraph, jobName);

Review Comment:
   IMO, the logic is simple and clear, and there is no need to introduce a new 
function.






[jira] [Assigned] (FLINK-33643) Allow StreamExecutionEnvironment's executeAsync API to use default JobName

2023-11-26 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu reassigned FLINK-33643:
-

Assignee: Matt Wang  (was: Weihua Hu)

> Allow StreamExecutionEnvironment's executeAsync API to use default JobName
> --
>
> Key: FLINK-33643
> URL: https://issues.apache.org/jira/browse/FLINK-33643
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.19.0
>Reporter: Matt Wang
>Assignee: Matt Wang
>Priority: Minor
>  Labels: pull-request-available
>
> On the `execute` API of StreamExecutionEnvironment, jobName is allowed to be 
> null. In this case, the default jobName in StreamGraphGenerator 
> (`DEFAULT_STREAMING_JOB_NAME` or `DEFAULT_BATCH_JOB_NAME`) will be used, but 
> the logic of `executeAsync` does not allow jobName to be null. I think the 
> processing logic should be unified here.





[jira] [Assigned] (FLINK-33643) Allow StreamExecutionEnvironment's executeAsync API to use default JobName

2023-11-26 Thread Weihua Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Hu reassigned FLINK-33643:
-

Assignee: Weihua Hu

> Allow StreamExecutionEnvironment's executeAsync API to use default JobName
> --
>
> Key: FLINK-33643
> URL: https://issues.apache.org/jira/browse/FLINK-33643
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.19.0
>Reporter: Matt Wang
>Assignee: Weihua Hu
>Priority: Minor
>  Labels: pull-request-available
>
> On the `execute` API of StreamExecutionEnvironment, jobName is allowed to be 
> null. In this case, the default jobName in StreamGraphGenerator 
> (`DEFAULT_STREAMING_JOB_NAME` or `DEFAULT_BATCH_JOB_NAME`) will be used, but 
> the logic of `executeAsync` does not allow jobName to be null. I think the 
> processing logic should be unified here.





[jira] [Commented] (FLINK-33378) Prepare actions for flink version 1.18

2023-11-26 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789866#comment-17789866
 ] 

Sergey Nuyanzin commented on FLINK-33378:
-

Merged as 
[79e7509256753765813d2e2c970506414de3302a|https://github.com/apache/flink-connector-jdbc/commit/79e7509256753765813d2e2c970506414de3302a]

> Prepare actions for flink version 1.18
> --
>
> Key: FLINK-33378
> URL: https://issues.apache.org/jira/browse/FLINK-33378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: João Boto
>Assignee: João Boto
>Priority: Major
>  Labels: pull-request-available
>
> With the release of Flink 1.18, bump flink version on connector 





[jira] [Resolved] (FLINK-33378) Prepare actions for flink version 1.18

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33378.
-
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

> Prepare actions for flink version 1.18
> --
>
> Key: FLINK-33378
> URL: https://issues.apache.org/jira/browse/FLINK-33378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: João Boto
>Assignee: João Boto
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>
> With the release of Flink 1.18, bump flink version on connector 





[jira] [Updated] (FLINK-33378) Prepare actions for flink version 1.18

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33378:

Summary: Prepare actions for flink version 1.18  (was: Bump flink version 
on flink-connectors-jdbc)

> Prepare actions for flink version 1.18
> --
>
> Key: FLINK-33378
> URL: https://issues.apache.org/jira/browse/FLINK-33378
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: João Boto
>Assignee: João Boto
>Priority: Major
>  Labels: pull-request-available
>
> With the release of Flink 1.18, bump flink version on connector 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33378] Prepare actions for flink version 1.18 [flink-connector-jdbc]

2023-11-26 Thread via GitHub


snuyanzin merged PR #76:
URL: https://github.com/apache/flink-connector-jdbc/pull/76


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33378] Prepare actions for flink version 1.18 [flink-connector-jdbc]

2023-11-26 Thread via GitHub


snuyanzin commented on PR #76:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/76#issuecomment-1826937228

   ok, let's merge it 
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33655] Upgrade ArchUnit to 1.2.0 to support java 21 [flink]

2023-11-26 Thread via GitHub


flinkbot commented on PR #23808:
URL: https://github.com/apache/flink/pull/23808#issuecomment-1826929519

   
   ## CI report:
   
   * 2a21eee935d0b0d9db3195d3b2ac02d18e42e348 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33655) Upgrade Archunit to 1.1.0+

2023-11-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33655:
---
Labels: pull-request-available  (was: )

> Upgrade Archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]
> With current ArchUnit(1.0.1) in case of jdk21 it fails
> {noformat}
> mvn clean install -DskipTests -Dfast -Pjava21-target
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> like
> {noformat}
> Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
> org.apache.flink.architecture.rules.ITCaseRules
> Nov 26 16:07:42 16:07:42.025 [ERROR] ITCaseRules.ITCASE_USE_MINICLUSTER  Time 
> elapsed: 0.005 s  <<< ERROR!
> Nov 26 16:07:42 
> com.tngtech.archunit.library.freeze.StoreUpdateFailedException: Updating 
> frozen violations is disabled (enable by configuration 
> freeze.store.default.allowStoreUpdate=true)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.ViolationStoreFactory$TextFileBasedViolationStore.save(ViolationStoreFactory.java:125)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule$ViolationStoreLineBreakAdapter.save(FreezingArchRule.java:277)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStore(FreezingArchRule.java:154)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStoreAndReturnNewViolations(FreezingArchRule.java:146)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.evaluate(FreezingArchRule.java:127)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:84)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:97)
> {noformat}
> [1] https://asm.ow2.io/versions.html#9.5
> [2] https://github.com/TNG/ArchUnit/pull/1098
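
As the exception message notes, a local run can be allowed to update the 
frozen-violation store by flipping the flag used in the command above:

{noformat}
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=true
{noformat}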



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33655) Upgrade Archunit to 1.1.0+

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33655:

Description: 
ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]

With current ArchUnit(1.0.0) in case of jdk21 it fails

{noformat}
mvn clean install -DskipTests -Dfast -Pjava21-target
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
like
{noformat}
Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
org.apache.flink.architecture.rules.ITCaseRules
Nov 26 16:07:42 16:07:42.025 [ERROR] ITCaseRules.ITCASE_USE_MINICLUSTER  Time 
elapsed: 0.005 s  <<< ERROR!
Nov 26 16:07:42 com.tngtech.archunit.library.freeze.StoreUpdateFailedException: 
Updating frozen violations is disabled (enable by configuration 
freeze.store.default.allowStoreUpdate=true)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.ViolationStoreFactory$TextFileBasedViolationStore.save(ViolationStoreFactory.java:125)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule$ViolationStoreLineBreakAdapter.save(FreezingArchRule.java:277)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStore(FreezingArchRule.java:154)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStoreAndReturnNewViolations(FreezingArchRule.java:146)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.evaluate(FreezingArchRule.java:127)
Nov 26 16:07:42 at 
com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:84)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:97)

{noformat}

[1] https://asm.ow2.io/versions.html#9.5
[2] https://github.com/TNG/ArchUnit/pull/1098

  was:
ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]

With current ArchUnit(1.0.1) in case of jdk21 it fails

{noformat}
mvn clean install -DskipTests -Dfast -Pjava21-target
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
like
{noformat}
Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
org.apache.flink.architecture.rules.ITCaseRules
Nov 26 16:07:42 16:07:42.025 [ERROR] ITCaseRules.ITCASE_USE_MINICLUSTER  Time 
elapsed: 0.005 s  <<< ERROR!
Nov 26 16:07:42 com.tngtech.archunit.library.freeze.StoreUpdateFailedException: 
Updating frozen violations is disabled (enable by configuration 
freeze.store.default.allowStoreUpdate=true)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.ViolationStoreFactory$TextFileBasedViolationStore.save(ViolationStoreFactory.java:125)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule$ViolationStoreLineBreakAdapter.save(FreezingArchRule.java:277)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStore(FreezingArchRule.java:154)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStoreAndReturnNewViolations(FreezingArchRule.java:146)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.evaluate(FreezingArchRule.java:127)
Nov 26 16:07:42 at 
com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:84)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:97)

{noformat}

[1] https://asm.ow2.io/versions.html#9.5
[2] https://github.com/TNG/ArchUnit/pull/1098


> Upgrade Archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]
> With current ArchUnit(1.0.0) in case of jdk21 it fails
> {noformat}
> mvn clean install -DskipTests -Dfast -Pjava21-target
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> like
> {noformat}
> Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
> org.apache.flink.architecture.rules.ITCaseRules
> Nov 26 16:07:42 16:07:42.025 [ERROR] 

[PR] [FLINK-33655] Upgrade ArchUnit to 1.2.0 to support java 21 [flink]

2023-11-26 Thread via GitHub


snuyanzin opened a new pull request, #23808:
URL: https://github.com/apache/flink/pull/23808

   
   ## What is the purpose of the change
   
   The minimum ArchUnit version supporting Java 21 is 1.1.0, so ArchUnit needs 
to be bumped in order to support Java 21.
   
   ## Verifying this change
   
   This change is a trivial rework
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes )
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no )
 - The S3 file system connector: (no )
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33655) Upgrade Archunit to 1.1.0+

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33655:

Description: 
ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]

With current ArchUnit(1.0.1) in case of jdk21 it fails

{noformat}
mvn clean install -DskipTests -Dfast -Pjava21-target
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
like
{noformat}
Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
org.apache.flink.architecture.rules.ITCaseRules
Nov 26 16:07:42 16:07:42.025 [ERROR] ITCaseRules.ITCASE_USE_MINICLUSTER  Time 
elapsed: 0.005 s  <<< ERROR!
Nov 26 16:07:42 com.tngtech.archunit.library.freeze.StoreUpdateFailedException: 
Updating frozen violations is disabled (enable by configuration 
freeze.store.default.allowStoreUpdate=true)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.ViolationStoreFactory$TextFileBasedViolationStore.save(ViolationStoreFactory.java:125)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule$ViolationStoreLineBreakAdapter.save(FreezingArchRule.java:277)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStore(FreezingArchRule.java:154)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStoreAndReturnNewViolations(FreezingArchRule.java:146)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.evaluate(FreezingArchRule.java:127)
Nov 26 16:07:42 at 
com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:84)
Nov 26 16:07:42 at 
com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:97)

{noformat}

[1] https://asm.ow2.io/versions.html#9.5
[2] https://github.com/TNG/ArchUnit/pull/1098

  was:
ASM 9.5 (which has support for java 21) came only with ArchUnit 1.1.0

With current ArchUnit(1.0.1) in case of jdk21 it fails

{noformat}
mvn clean install -DskipTests -Dfast -Pjava21-target
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
like
{noformat}

{noformat}


> Upgrade Archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> ASM 9.5 (which has support for java 21[1]) came only with ArchUnit 1.1.0 [2]
> With current ArchUnit(1.0.1) in case of jdk21 it fails
> {noformat}
> mvn clean install -DskipTests -Dfast -Pjava21-target
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> like
> {noformat}
> Nov 26 16:07:42 16:07:42.024 [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 5.742 s <<< FAILURE! - in 
> org.apache.flink.architecture.rules.ITCaseRules
> Nov 26 16:07:42 16:07:42.025 [ERROR] ITCaseRules.ITCASE_USE_MINICLUSTER  Time 
> elapsed: 0.005 s  <<< ERROR!
> Nov 26 16:07:42 
> com.tngtech.archunit.library.freeze.StoreUpdateFailedException: Updating 
> frozen violations is disabled (enable by configuration 
> freeze.store.default.allowStoreUpdate=true)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.ViolationStoreFactory$TextFileBasedViolationStore.save(ViolationStoreFactory.java:125)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule$ViolationStoreLineBreakAdapter.save(FreezingArchRule.java:277)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStore(FreezingArchRule.java:154)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.removeObsoleteViolationsFromStoreAndReturnNewViolations(FreezingArchRule.java:146)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.evaluate(FreezingArchRule.java:127)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.lang.ArchRule$Assertions.check(ArchRule.java:84)
> Nov 26 16:07:42   at 
> com.tngtech.archunit.library.freeze.FreezingArchRule.check(FreezingArchRule.java:97)
> {noformat}
> [1] https://asm.ow2.io/versions.html#9.5
> [2] https://github.com/TNG/ArchUnit/pull/1098



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33655) Upgrade Archunit to 1.1.0+

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33655:

Description: 
ASM 9.5 (which has support for java 21) came only with ArchUnit 1.1.0

With current ArchUnit(1.0.1) in case of jdk21 it fails

{noformat}
mvn clean install -DskipTests -Dfast -Pjava21-target
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
like
{noformat}

{noformat}

  was:
{noformat} 
23:21:01.875 [ERROR] 
org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration
  Time elapsed: 0.209 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<...MigrationTest$$anon$[8]> but 
was:<...MigrationTest$$anon$[1]>
at org.junit.Assert.assertEquals(Assert.java:117)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration(ScalaSerializersMigrationTest.scala:60)
at 
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)

{noformat}

it could/should be skipped in the same way it is skipped for Java 17


> Upgrade Archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> ASM 9.5 (which has support for java 21) came only with ArchUnit 1.1.0
> With current ArchUnit(1.0.1) in case of jdk21 it fails
> {noformat}
> mvn clean install -DskipTests -Dfast -Pjava21-target
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> like
> {noformat}
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33655) Upgrade Archunit to 1.1.0+

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33655:

Summary: Upgrade Archunit to 1.1.0+  (was: Upgrade archunit to 1.1.0+)

> Upgrade Archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> {noformat} 
> 23:21:01.875 [ERROR] 
> org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration
>   Time elapsed: 0.209 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...MigrationTest$$anon$[8]> but 
> was:<...MigrationTest$$anon$[1]>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration(ScalaSerializersMigrationTest.scala:60)
>   at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> {noformat}
> it could/should be skipped in the same way it is skipped for Java 17



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33655) Upgrade archunit to 1.1.0+

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-33655:

Summary: Upgrade archunit to 1.1.0+  (was: 
ScalaSerializersMigrationTest#testStableAnonymousClassnameGeneration fails with 
jdk21)

> Upgrade archunit to 1.1.0+
> --
>
> Key: FLINK-33655
> URL: https://issues.apache.org/jira/browse/FLINK-33655
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>
> {noformat} 
> 23:21:01.875 [ERROR] 
> org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration
>   Time elapsed: 0.209 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...MigrationTest$$anon$[8]> but 
> was:<...MigrationTest$$anon$[1]>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration(ScalaSerializersMigrationTest.scala:60)
>   at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:580)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> {noformat}
> it could/should be skipped in the same way it is skipped for Java 17



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33655) ScalaSerializersMigrationTest#testStableAnonymousClassnameGeneration fails with jdk21

2023-11-26 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33655:
---

 Summary: 
ScalaSerializersMigrationTest#testStableAnonymousClassnameGeneration fails with 
jdk21
 Key: FLINK-33655
 URL: https://issues.apache.org/jira/browse/FLINK-33655
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.19.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


{noformat} 
23:21:01.875 [ERROR] 
org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration
  Time elapsed: 0.209 s  <<< FAILURE!
org.junit.ComparisonFailure: expected:<...MigrationTest$$anon$[8]> but 
was:<...MigrationTest$$anon$[1]>
at org.junit.Assert.assertEquals(Assert.java:117)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
org.apache.flink.api.scala.migration.ScalaSerializersMigrationTest.testStableAnonymousClassnameGeneration(ScalaSerializersMigrationTest.scala:60)
at 
java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)

{noformat}

it could/should be skipped in the same way it is skipped for Java 17
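
A minimal sketch of such a guard (JUnit 4; the exact mechanism used for the 
Java 17 exclusion may differ):

{noformat}
// Sketch: guard at the top of the test, analogous to the existing
// Java 17 exclusion; the real guard's shape may differ.
org.junit.Assume.assumeTrue(
    "Anonymous class name generation differs on newer JDKs",
    Runtime.version().feature() < 17);
{noformat}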



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33443) Make the test "testWriteComplexType" stable

2023-11-26 Thread huiyang chi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789859#comment-17789859
 ] 

huiyang chi commented on FLINK-33443:
-

Hi Martijn: I understand that this problem is currently not exposed in CI, 
because roughly 80% of the problems we detect only surface when the test 
environment changes (and they are more dangerous on lower-version JDKs), while 
the CI test environment does not change. In detail, about 50% of the problems 
we address stem from the non-deterministic iteration order of hash-based 
collections (Set and Map); the Apache 
[wicket|https://github.com/apache/wicket/commit/ed64e166dcba6715eafcbb7ca460d2b87e84cffc]
 repo encountered and addressed one such example, so fixing this kind of 
problem is a long-term gain :) . I also have fixes for some detected tests: 
https://github.com/MyEnthusiastic/flink/pulls
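
As a tiny illustration of that hash-order non-determinism (hypothetical 
snippet, not from Flink):

{noformat}
import java.util.HashMap;
import java.util.Map;

Map<Integer, String> m = new HashMap<>();
m.put(1, "a");
m.put(2, "b");
// HashMap iteration order is unspecified and may differ across JDK versions,
// so asserting on m.toString() directly makes a test flaky:
System.out.println(m); // may print {1=a, 2=b} or {2=b, 1=a}
{noformat}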

> Make the test "testWriteComplexType" stable
> ---
>
> Key: FLINK-33443
> URL: https://issues.apache.org/jira/browse/FLINK-33443
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Krishna Anandan Ganesan
>Priority: Minor
>
> We are proposing to make the following test stable:
> {code:java}
> org.apache.flink.connectors.hive.HiveRunnerITCase.testWriteComplexType{code}
> *STEPS TO REPRODUCE THE ISSUE:*
>  * The following command can be run to execute the test with the 
> [NonDex|https://github.com/TestingResearchIllinois/NonDex] plugin:
> {code:java}
> mvn -pl flink-connectors/flink-connector-hive 
> edu.illinois:nondex-maven-plugin:2.1.1:nondex 
> -Dtest=org.apache.flink.connectors.hive.HiveRunnerITCase#testWriteComplexType 
> {code}
>  * The following error will be encountered:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   HiveRunnerITCase.testWriteComplexType:166 
> expected: "[1,2,3]{1:"a",2:"b"}   {"f1":3,"f2":"c"}"
>  but was: "[1,2,3]{2:"b",1:"a"}   {"f1":3,"f2":"c"}"
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0{code}
> *ROOT CAUSE ANALYSIS:*
> The test is currently flaky because it assumes that the order of elements 
> received in the _result_ variable is consistent. There are two versions of 
> the query output that can be stored in _result_:
>  # The expected order, where the output of the map attribute is 
> \{1: "a", 2: "b"}.
>  # The other order, shown in the error extract above, where the map attribute 
> from the table is ordered as \{2: "b", 1: "a"}.
> *POTENTIAL FIX:*
>  * The fix I can suggest (and have ready to raise a PR for) is introducing 
> another assertion for the second variant of the query output.
>  * By asserting that the contents of _result_ match one of the two orders, we 
> can verify that the expected attributes and their contents are received, 
> regardless of the order in which they arrive.
> Please share your thoughts on this finding and let me know if any other 
> potential fix is possible for this test.
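
A minimal sketch of that fix with AssertJ (the `result` variable comes from the 
test; tab separators are assumed for the whitespace elided above):

{noformat}
import static org.assertj.core.api.Assertions.assertThat;

// Accept either map ordering instead of a single expected string.
assertThat(result)
    .isIn(
        "[1,2,3]\t{1:\"a\",2:\"b\"}\t{\"f1\":3,\"f2\":\"c\"}",
        "[1,2,3]\t{2:\"b\",1:\"a\"}\t{\"f1\":3,\"f2\":\"c\"}");
{noformat}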



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33654) deprecate DummyStreamExecutionEnvironment

2023-11-26 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-33654:
---

Assignee: Jing Ge

> deprecate DummyStreamExecutionEnvironment
> -
>
> Key: FLINK-33654
> URL: https://issues.apache.org/jira/browse/FLINK-33654
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>
> Deprecate DummyStreamExecutionEnvironment first since it will take time to 
> remove it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33654) deprecate DummyStreamExecutionEnvironment

2023-11-26 Thread Jing Ge (Jira)
Jing Ge created FLINK-33654:
---

 Summary: deprecate DummyStreamExecutionEnvironment
 Key: FLINK-33654
 URL: https://issues.apache.org/jira/browse/FLINK-33654
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Jing Ge


Deprecate DummyStreamExecutionEnvironment first since it will take time to 
remove it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33651) Update the Chinese version checkpointing doc in fault tolerance

2023-11-26 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33651.
-
Resolution: Fixed

> Update the Chinese version checkpointing doc in fault tolerance
> ---
>
> Key: FLINK-33651
> URL: https://issues.apache.org/jira/browse/FLINK-33651
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Checkpointing
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> The Chinese doc is missing some items compared to the English doc [1]; would 
> you mind adding them as well? For example:
>  * checkpoint storage
>  * unaligned checkpoints
>  * checkpoints with finished tasks
> [1] 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/#enabling-and-configuring-checkpointing]
>  
> Reference: https://github.com/apache/flink/pull/23795#discussion_r1404855432
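
For reference, the missing items correspond to these `CheckpointConfig` calls 
(a minimal sketch; values are illustrative):

{noformat}
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000);                          // exactly-once by default
CheckpointConfig config = env.getCheckpointConfig();
config.setCheckpointStorage("hdfs:///flink/checkpoints"); // checkpoint storage
config.enableUnalignedCheckpoints();                      // unaligned checkpoints
// Checkpoints with finished tasks are enabled by default in recent releases.
{noformat}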



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33651) Update the Chinese version checkpointing doc in fault tolerance

2023-11-26 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789845#comment-17789845
 ] 

Jing Ge commented on FLINK-33651:
-

master: db0403c3f39ccd60f0c0a59f12caa63c521e1d17

> Update the Chinese version checkpointing doc in fault tolerance
> ---
>
> Key: FLINK-33651
> URL: https://issues.apache.org/jira/browse/FLINK-33651
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Runtime / Checkpointing
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>  Labels: pull-request-available
>
> The Chinese doc is missing some items compared to the English doc [1]; would 
> you mind adding them as well? For example:
>  * checkpoint storage
>  * unaligned checkpoints
>  * checkpoints with finished tasks
> [1] 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/#enabling-and-configuring-checkpointing]
>  
> Reference: https://github.com/apache/flink/pull/23795#discussion_r1404855432



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-33457) FlinkImageBuilder checks for Java 21

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-33457.
---

> FlinkImageBuilder checks for Java 21
> 
>
> Key: FLINK-33457
> URL: https://issues.apache.org/jira/browse/FLINK-33457
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently for java 21 it fails like
> {noformat}
> Nov 04 03:03:08 Caused by: 
> org.apache.flink.connector.testframe.container.ImageBuildException: Failed to 
> build image "flink-configured-jobmanager"
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
> Nov 04 03:03:08   ... 61 more
> Nov 04 03:03:08 Caused by: java.lang.IllegalStateException: Unexpected Java 
> version: 21
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.getJavaVersionSuffix(FlinkImageBuilder.java:284)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.lambda$buildBaseImage$3(FlinkImageBuilder.java:250)
> Nov 04 03:03:08   at 
> org.testcontainers.images.builder.traits.DockerfileTrait.withDockerfileFromBuilder(DockerfileTrait.java:19)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:246)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
> Nov 04 03:03:08   ... 62 more
> Nov 04 03:03:08 
> {noformat}
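
A fix would presumably add a JDK 21 branch where the image suffix is derived. A 
minimal sketch, with the method shape and suffix values assumed (not the actual 
implementation):

{noformat}
private String getJavaVersionSuffix() {
    int version = Runtime.version().feature();
    switch (version) {
        case 8:  return "-java8";
        case 11: return "-java11";
        case 17: return "-java17";
        case 21: return "-java21"; // the branch that was missing
        default:
            throw new IllegalStateException("Unexpected Java version: " + version);
    }
}
{noformat}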



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33457) FlinkImageBuilder checks for Java 21

2023-11-26 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17789844#comment-17789844
 ] 

Sergey Nuyanzin commented on FLINK-33457:
-

Merged to master as 
[6c429c5450a003d6521693116e0fbb2dab543d6e|https://github.com/apache/flink/commit/6c429c5450a003d6521693116e0fbb2dab543d6e]

> FlinkImageBuilder checks for Java 21
> 
>
> Key: FLINK-33457
> URL: https://issues.apache.org/jira/browse/FLINK-33457
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> Currently for java 21 it fails like
> {noformat}
> Nov 04 03:03:08 Caused by: 
> org.apache.flink.connector.testframe.container.ImageBuildException: Failed to 
> build image "flink-configured-jobmanager"
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
> Nov 04 03:03:08   ... 61 more
> Nov 04 03:03:08 Caused by: java.lang.IllegalStateException: Unexpected Java 
> version: 21
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.getJavaVersionSuffix(FlinkImageBuilder.java:284)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.lambda$buildBaseImage$3(FlinkImageBuilder.java:250)
> Nov 04 03:03:08   at 
> org.testcontainers.images.builder.traits.DockerfileTrait.withDockerfileFromBuilder(DockerfileTrait.java:19)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:246)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
> Nov 04 03:03:08   ... 62 more
> Nov 04 03:03:08 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33457) FlinkImageBuilder checks for Java 21

2023-11-26 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-33457.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> FlinkImageBuilder checks for Java 21
> 
>
> Key: FLINK-33457
> URL: https://issues.apache.org/jira/browse/FLINK-33457
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently for java 21 it fails like
> {noformat}
> Nov 04 03:03:08 Caused by: 
> org.apache.flink.connector.testframe.container.ImageBuildException: Failed to 
> build image "flink-configured-jobmanager"
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
> Nov 04 03:03:08   ... 61 more
> Nov 04 03:03:08 Caused by: java.lang.IllegalStateException: Unexpected Java 
> version: 21
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.getJavaVersionSuffix(FlinkImageBuilder.java:284)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.lambda$buildBaseImage$3(FlinkImageBuilder.java:250)
> Nov 04 03:03:08   at 
> org.testcontainers.images.builder.traits.DockerfileTrait.withDockerfileFromBuilder(DockerfileTrait.java:19)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:246)
> Nov 04 03:03:08   at 
> org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
> Nov 04 03:03:08   ... 62 more
> Nov 04 03:03:08 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33457][tests] FlinkImageBuilder checks for Java 21 [flink]

2023-11-26 Thread via GitHub


snuyanzin merged PR #23662:
URL: https://github.com/apache/flink/pull/23662


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


JingGe commented on PR #23786:
URL: https://github.com/apache/flink/pull/23786#issuecomment-1826886399

   Thanks @snuyanzin for the review!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33457][tests] FlinkImageBuilder checks for Java 21 [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on PR #23662:
URL: https://github.com/apache/flink/pull/23662#issuecomment-1826886015

   Thanks for taking a look


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on code in PR #23786:
URL: https://github.com/apache/flink/pull/23786#discussion_r1405469301


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java:
##
@@ -240,6 +232,16 @@ void testBatchResult() {
 assertThat(collectResult.closed).isFalse();
 }
 
+@NotNull
+private TestChangelogResult createTestChangelogResult() {

Review Comment:
   nit: it's better to put private/utils methods below all the tests 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


JingGe merged PR #23807:
URL: https://github.com/apache/flink/pull/23807


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][doc] add hint to suggest using UserDefinedFunction class over instance [flink]

2023-11-26 Thread via GitHub


JingGe merged PR #23805:
URL: https://github.com/apache/flink/pull/23805


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33609] Take into account the resource limit specified in the pod template. [flink]

2023-11-26 Thread via GitHub


surendralilhore commented on PR #23768:
URL: https://github.com/apache/flink/pull/23768#issuecomment-1826843398

   There is another Jira that fixes this issue in a better way: 
[FLINK-33548](https://issues.apache.org/jira/browse/FLINK-33548)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33609] Take into account the resource limit specified in the pod template. [flink]

2023-11-26 Thread via GitHub


surendralilhore closed pull request #23768: [FLINK-33609] Take into account the 
resource limit specified in the pod template.
URL: https://github.com/apache/flink/pull/23768


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-33609) Take into account the resource limit specified in the pod template.

2023-11-26 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore resolved FLINK-33609.

Resolution: Duplicate

There is another Jira that fixes this issue in a better way: FLINK-33548

> Take into account the resource limit specified in the pod template.
> ---
>
> Key: FLINK-33609
> URL: https://issues.apache.org/jira/browse/FLINK-33609
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.16.0
>Reporter: Surendra Singh Lilhore
>Priority: Major
>  Labels: pull-request-available
>
> Flink currently does not consider the pod template resource limits and only 
> uses the limit obtained from the configured (or default) limit factor. Flink 
> should consider both values and take the maximum of the pod template resource 
> limits and the value obtained from the limit-factor calculation.
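
Expressed as code, the proposed rule is simply (illustrative names, not Flink's 
actual fields):

{noformat}
long factorLimit = (long) (requestedMemoryBytes * memoryLimitFactor);
long templateLimit = podTemplateMemoryLimitBytes; // 0 if the template sets none
long effectiveLimit = Math.max(factorLimit, templateLimit);
{noformat}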



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


JingGe commented on PR #23786:
URL: https://github.com/apache/flink/pull/23786#issuecomment-1826828710

   good catch! Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on PR #23786:
URL: https://github.com/apache/flink/pull/23786#issuecomment-1826815890

   Thanks for driving this.
   I would suggest replacing two more identical occurrences, e.g. at
   
https://github.com/apache/flink/blob/0c724f5597a1654d83682d5c39c7ee0ef2959cca/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java#L374-L378
   
   and 
   
https://github.com/apache/flink/blob/0c724f5597a1654d83682d5c39c7ee0ef2959cca/flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java#L225-L229


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on code in PR #23786:
URL: https://github.com/apache/flink/pull/23786#discussion_r1405421318


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java:
##
@@ -226,7 +222,7 @@ void testBatchResult() {
 // adjust the max column width for printing
 testConfig.set(DISPLAY_MAX_COLUMN_WIDTH, 80);
 
-collectResult =
+TestChangelogResult collectResult =

Review Comment:
   I guess this also should be replaced as
   ```java
   TestChangelogResult collectResult = createTestChangelogResult();
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][test] remove duplicated code [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on code in PR #23786:
URL: https://github.com/apache/flink/pull/23786#discussion_r1405421199


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java:
##
@@ -226,7 +222,7 @@ void testBatchResult() {
 // adjust the max column width for printing
 testConfig.set(DISPLAY_MAX_COLUMN_WIDTH, 80);
 
-collectResult =

Review Comment:
   I guess this also should be replaced as
   ```java
   TestChangelogResult collectResult = createTestChangelogResult();
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


1996fanrui commented on PR #23807:
URL: https://github.com/apache/flink/pull/23807#issuecomment-1826810102

   Hey @JingGe, would you mind squashing all the commits in advance? Or using 
squash-and-merge is fine too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][doc] add hint to suggest using UserDefinedFunction class over instance [flink]

2023-11-26 Thread via GitHub


JingGe commented on PR #23805:
URL: https://github.com/apache/flink/pull/23805#issuecomment-1826798523

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][doc] add hint to suggest using UserDefinedFunction class over instance [flink]

2023-11-26 Thread via GitHub


JingGe commented on PR #23805:
URL: https://github.com/apache/flink/pull/23805#issuecomment-1826797827

   Thanks @snuyanzin for the review!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


JingGe commented on code in PR #23807:
URL: https://github.com/apache/flink/pull/23807#discussion_r1405404799


##
docs/content.zh/docs/dev/datastream/fault-tolerance/checkpointing.md:
##
@@ -71,7 +74,13 @@ Checkpoint 其他的属性包括:
 
   - *externalized checkpoints*: 你可以配置周期存储 checkpoint 到外部系统中。Externalized 
checkpoints 将他们的元数据写到持久化存储上并且在 job 失败的时候*不会*被自动删除。
 这种方式下,如果你的 job 失败,你将会有一个现有的 checkpoint 去恢复。更多的细节请看 [保留 checkpoints 
的部署文档]({{< ref "docs/ops/state/checkpoints" >}}#保留-checkpoint)。
-
+
+  - *非对齐 checkpoints*: 你可以启用[非对齐 checkpoints]({{< ref 
"docs/ops/state/checkpointing_under_backpressure" >}}#非对齐-checkpoints)
+ 以在背压时大大减少创建checkpoint的时间。这仅适用于精确一次(exactly-once)checkpoints 并且只有一个并发检查点。
+
+  - *部分任务结束的 checkpoints*: 默认情况下,即使DAG的部分已经处理完其所有记录,Flink也会继续执行 
checkpoints。请参阅[重要注意事项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" 
>}}#任务结束前等待最后一次-checkpoint)以了解详细信息。

Review Comment:
   gotcha, thanks for the clarification!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][table][doc] add hint to suggest using UserDefinedFunction class over instance [flink]

2023-11-26 Thread via GitHub


snuyanzin commented on code in PR #23805:
URL: https://github.com/apache/flink/pull/23805#discussion_r1405381059


##
docs/content/docs/dev/table/functions/udfs.md:
##
@@ -227,6 +227,20 @@ env.from("MyTable").select(call(classOf[MyConcatFunction], 
$"a", $"b", $"c"));
 {{< /tab >}}
 {{< /tabs >}}
 
+{{< hint info >}}
+`TableEnvironment` provides two overloaded methods to create a temporary 
system function from a `UserDefinedFunction`:
+
+- *createTemporarySystemFunction(String name, Class functionClass)*
+- *createTemporarySystemFunction(String name, UserDefinedFunction 
functionInstance)*
+
+It is recommended to prefer `functionClass` over `functionInstance` whenever 
possible, 

Review Comment:
   Maybe it would make sense to mention that, in the case of `functionClass`, 
the framework must be able to invoke a no-arg constructor.
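   
   For example (a sketch; `MyConcatFunction` stands in for any UDF with a 
public no-arg constructor, `tEnv` for a `TableEnvironment`):
   ```java
   // Preferred: register by class; the framework instantiates the function,
   // which requires an accessible no-arg constructor.
   tEnv.createTemporarySystemFunction("MyConcat", MyConcatFunction.class);

   // Alternative: register an instance, e.g. when constructor args are needed.
   tEnv.createTemporarySystemFunction("MyConcat2", new MyConcatFunction());
   ```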



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33653) Introduce a benchmark for balanced tasks scheduling

2023-11-26 Thread RocMarshal (Jira)
RocMarshal created FLINK-33653:
--

 Summary: Introduce a benchmark for balanced tasks scheduling
 Key: FLINK-33653
 URL: https://issues.apache.org/jira/browse/FLINK-33653
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Task
Reporter: RocMarshal






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33652) First Steps documentation is having empty page link

2023-11-26 Thread Pranav Sharma (Jira)
Pranav Sharma created FLINK-33652:
-

 Summary: First Steps documentation is having empty page link
 Key: FLINK-33652
 URL: https://issues.apache.org/jira/browse/FLINK-33652
 Project: Flink
  Issue Type: Bug
 Environment: Web
Reporter: Pranav Sharma
 Attachments: image-2023-11-26-15-23-02-007.png, 
image-2023-11-26-15-25-04-708.png

 

Under this page URL 
[link|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/try-flink/local_installation/],
 under the "Summary" heading, the "concepts" link points to an empty page 
[link_on_concepts|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/concepts/].
 Upon visiting it, the tab title also contains raw HTML (screenshots attached).

It should probably point to concepts/overview instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


1996fanrui commented on code in PR #23807:
URL: https://github.com/apache/flink/pull/23807#discussion_r1405361586


##
docs/content.zh/docs/dev/datastream/fault-tolerance/checkpointing.md:
##
@@ -71,7 +74,13 @@ Checkpoint 其他的属性包括:
 
   - *externalized checkpoints*: 你可以配置周期存储 checkpoint 到外部系统中。Externalized 
checkpoints 将他们的元数据写到持久化存储上并且在 job 失败的时候*不会*被自动删除。
 这种方式下,如果你的 job 失败,你将会有一个现有的 checkpoint 去恢复。更多的细节请看 [保留 checkpoints 
的部署文档]({{< ref "docs/ops/state/checkpoints" >}}#保留-checkpoint)。
-
+
+  - *非对齐 checkpoints*: 你可以启用[非对齐 checkpoints]({{< ref 
"docs/ops/state/checkpointing_under_backpressure" >}}#非对齐-checkpoints)
+ 以在背压时大大减少创建checkpoint的时间。这仅适用于精确一次(exactly-once)checkpoints 并且只有一个并发检查点。
+
+  - *部分任务结束的 checkpoints*: 默认情况下,即使DAG的部分已经处理完其所有记录,Flink也会继续执行 
checkpoints。请参阅[重要注意事项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" 
>}}#任务结束前等待最后一次-checkpoint)以了解详细信息。

Review Comment:
   I mean we should redirect it to this link: 
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/checkpointing/#%e9%83%a8%e5%88%86%e4%bb%bb%e5%8a%a1%e7%bb%93%e6%9d%9f%e5%90%8e%e7%9a%84-checkpoint
   
   You can check against the English doc [1]; it is redirected to the 
`部分任务结束后的-checkpoint` link instead of the `任务结束前等待最后一次-checkpoint` link.
   
   [1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/checkpointing/#enabling-and-configuring-checkpointing



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33301) Add Java and Maven version checks in the bash script of Flink release process

2023-11-26 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge closed FLINK-33301.
---
Resolution: Won't Fix

> Add Java and Maven version checks in the bash script of Flink release process
> -
>
> Key: FLINK-33301
> URL: https://issues.apache.org/jira/browse/FLINK-33301
> Project: Flink
>  Issue Type: Bug
>  Components: Release System
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Minor
>  Labels: pull-request-available
>
> During the release, Flink requires specific versions of Java and Maven[1]. It 
> makes sense to check those versions at the very beginning of the bash 
> scripts so they fail fast, which improves efficiency.
>  
> [1][https://lists.apache.org/thread/fbdl2w6wjmwk55y94ml91bpnhmh4rnm0|https://lists.apache.org/thread/fbdl2w6wjmwk55y94ml91bpnhmh4rnm0]
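
For illustration, such a fail-fast guard could be a short preamble at the top of a release script. This is only a sketch; the REQUIRED_* values are placeholders, not the versions actually mandated by the release process (those come from the linked thread):

```bash
#!/usr/bin/env bash
# Hypothetical fail-fast version guard; REQUIRED_* values are placeholders.
REQUIRED_JAVA="1.8"
REQUIRED_MAVEN="3.8.6"

# `java -version` writes to stderr, hence the 2>&1 redirect.
java_line=$(java -version 2>&1 | head -n1)
if ! echo "${java_line}" | grep -q "\"${REQUIRED_JAVA}"; then
  echo "Java ${REQUIRED_JAVA} is required, found: ${java_line}" >&2
  exit 1
fi

mvn_line=$(mvn -version 2>/dev/null | head -n1)
if ! echo "${mvn_line}" | grep -q "Apache Maven ${REQUIRED_MAVEN}"; then
  echo "Maven ${REQUIRED_MAVEN} is required, found: ${mvn_line}" >&2
  exit 1
fi
```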



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


JingGe commented on code in PR #23807:
URL: https://github.com/apache/flink/pull/23807#discussion_r1405358365


##
docs/content.zh/docs/dev/datastream/fault-tolerance/checkpointing.md:
##
@@ -71,7 +74,13 @@ Checkpoint 其他的属性包括:
 
   - *externalized checkpoints*: 你可以配置周期存储 checkpoint 到外部系统中。Externalized 
checkpoints 将他们的元数据写到持久化存储上并且在 job 失败的时候*不会*被自动删除。
 这种方式下,如果你的 job 失败,你将会有一个现有的 checkpoint 去恢复。更多的细节请看 [保留 checkpoints 
的部署文档]({{< ref "docs/ops/state/checkpoints" >}}#保留-checkpoint)。
-
+
+  - *非对齐 checkpoints*: 你可以启用[非对齐 checkpoints]({{< ref 
"docs/ops/state/checkpointing_under_backpressure" >}}#非对齐-checkpoints)
+ 以在背压时大大减少创建checkpoint的时间。这仅适用于精确一次(exactly-once)checkpoints 并且只有一个并发检查点。
+
+  - *部分任务结束的 checkpoints*: 默认情况下,即使DAG的部分已经处理完其所有记录,Flink也会继续执行 
checkpoints。请参阅[重要注意事项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" 
>}}#任务结束前等待最后一次-checkpoint)以了解详细信息。

Review Comment:
   Thanks @1996fanrui for the review.
   
   `#任务结束前等待最后一次-checkpoint` - this cannot be changed because it is the 
anchor of another section: 
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/fault-tolerance/checkpointing/#%e4%bb%bb%e5%8a%a1%e7%bb%93%e6%9d%9f%e5%89%8d%e7%ad%89%e5%be%85%e6%9c%80%e5%90%8e%e4%b8%80%e6%ac%a1-checkpoint



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-26 Thread via GitHub


1996fanrui commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1405313903


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/TaskBalancedPreferredSlotSharingStrategyTest.java:
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroup;
+import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroupImpl;
+import org.apache.flink.runtime.jobmanager.scheduler.SlotSharingGroup;
+import 
org.apache.flink.runtime.scheduler.strategy.TestingSchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.TestingSchedulingTopology;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.guava31.com.google.common.collect.Lists;
+import org.apache.flink.shaded.guava31.com.google.common.collect.Sets;
+
+import org.assertj.core.data.Offset;
+import org.junit.jupiter.api.Test;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for {@link TaskBalancedPreferredSlotSharingStrategy}. */
+class TaskBalancedPreferredSlotSharingStrategyTest {

Review Comment:
   > In addition to extracting common code, it is also necessary to consider 
the upgrade issue of Junit4
   
   In order to simplify the review of https://github.com/apache/flink/pull/23635, 
I submitted a hotfix [PR](https://github.com/apache/flink/pull/23806) to 
upgrade the `LocalInputPreferredSlotSharingStrategyTest` to junit5.
   
   > If there are common parts, it may add some complexity to the review 
process and update process.
   
   If they share a common part, we only need to review the differing parts. That 
will simplify the review process, and it doesn't introduce any duplicated code.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-26 Thread via GitHub


1996fanrui commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1405352714


##
flink-runtime/src/test/java/org/apache/flink/runtime/scheduler/TaskBalancedPreferredSlotSharingStrategyTest.java:
##
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.scheduler;
+
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.jobgraph.JobVertex;
+import org.apache.flink.runtime.jobgraph.JobVertexID;
+import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroup;
+import org.apache.flink.runtime.jobmanager.scheduler.CoLocationGroupImpl;
+import org.apache.flink.runtime.jobmanager.scheduler.SlotSharingGroup;
+import 
org.apache.flink.runtime.scheduler.strategy.TestingSchedulingExecutionVertex;
+import org.apache.flink.runtime.scheduler.strategy.TestingSchedulingTopology;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.guava31.com.google.common.collect.Lists;
+import org.apache.flink.shaded.guava31.com.google.common.collect.Sets;
+
+import org.assertj.core.data.Offset;
+import org.junit.jupiter.api.Test;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for {@link TaskBalancedPreferredSlotSharingStrategy}. */
+class TaskBalancedPreferredSlotSharingStrategyTest {

Review Comment:
   `LocalInputPreferredSlotSharingStrategyTest` has been upgraded to junit5 on the 
master branch; please rebase onto master and go ahead in your free time, thanks~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33651][Runtime][doc] update state checkpoins doc in Chinese [flink]

2023-11-26 Thread via GitHub


1996fanrui commented on code in PR #23807:
URL: https://github.com/apache/flink/pull/23807#discussion_r1405351938


##
docs/content.zh/docs/dev/datastream/fault-tolerance/checkpointing.md:
##
@@ -71,7 +74,13 @@ Checkpoint 其他的属性包括:
 
   - *externalized checkpoints*: 你可以配置周期存储 checkpoint 到外部系统中。Externalized 
checkpoints 将他们的元数据写到持久化存储上并且在 job 失败的时候*不会*被自动删除。
 这种方式下,如果你的 job 失败,你将会有一个现有的 checkpoint 去恢复。更多的细节请看 [保留 checkpoints 
的部署文档]({{< ref "docs/ops/state/checkpoints" >}}#保留-checkpoint)。
-
+
+  - *非对齐 checkpoints*: 你可以启用[非对齐 checkpoints]({{< ref 
"docs/ops/state/checkpointing_under_backpressure" >}}#非对齐-checkpoints)
+ 以在背压时大大减少创建checkpoint的时间。这仅适用于精确一次(exactly-once)checkpoints 并且只有一个并发检查点。
+
+  - *部分任务结束的 checkpoints*: 默认情况下,即使DAG的部分已经处理完其所有记录,Flink也会继续执行 
checkpoints。请参阅[重要注意事项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" 
>}}#任务结束前等待最后一次-checkpoint)以了解详细信息。

Review Comment:
   ```suggestion
 - *部分任务结束的 checkpoints*: 默认情况下,即使DAG的部分已经处理完它们的所有记录,Flink也会继续执行 
checkpoints。请参阅[重要注意事项]({{< ref 
"docs/dev/datastream/fault-tolerance/checkpointing" 
>}}#部分任务结束后的-checkpoint)以了解详细信息。
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix][Junit5 Migration] Upgrade the LocalInputPreferredSlotSharingStrategyTest to junit5 [flink]

2023-11-26 Thread via GitHub


1996fanrui merged PR #23806:
URL: https://github.com/apache/flink/pull/23806


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org