[jira] [Commented] (FLINK-33977) Adaptive scheduler may not minimize the number of TMs during downscaling

2024-06-16 Thread Aviv Dozorets (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855391#comment-17855391
 ] 

Aviv Dozorets commented on FLINK-33977:
---

[~Zhanghao Chen] do you think there is a way to increase the priority of this 
issue?

> Adaptive scheduler may not minimize the number of TMs during downscaling
> 
>
> Key: FLINK-33977
> URL: https://issues.apache.org/jira/browse/FLINK-33977
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler, Runtime / Coordination
>Affects Versions: 1.18.0, 1.19.0, 1.20.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> Adaptive Scheduler uses SlotAssigner to assign free slots to slot sharing 
> groups. Currently, there are two implementations of SlotAssigner available: 
> the DefaultSlotAssigner, which treats all slots and slot sharing groups 
> equally, and the StateLocalitySlotAssigner, which assigns slots based on the 
> number of local key groups to utilize local state recovery. The scheduler 
> uses the DefaultSlotAssigner when no key group assignment info is available 
> and the StateLocalitySlotAssigner otherwise.
>  
> However, neither SlotAssigner aims at minimizing the number of TMs, which 
> may produce suboptimal slot assignments under Application Mode. For example, 
> when a job with 8 slot sharing groups and 2 TMs (4 slots each) is downscaled 
> through the FLIP-291 API to 4 slot sharing groups, the cluster may still end 
> up with 2 TMs, one with 1 free slot and the other with 3 free slots. For 
> end-users, this implies an ineffective downscaling, as the total cluster 
> resources are not reduced.
>  
> We should take minimizing the number of TMs into consideration as well. A 
> possible approach is to enhance the StateLocalitySlotAssigner: when the 
> number of free slots exceeds the need, sort all TMs by a score summing the 
> allocation scores of all slots on each TM, remove slots from the excess TMs 
> with the lowest scores, and proceed with the remaining slot assignment.
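The proposed heuristic could be sketched roughly as follows. This is a hypothetical illustration, not the actual SlotAssigner API: types, names, and the integer scoring are all assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class TmPruner {

    /**
     * Sketch of the proposed heuristic: score each TM by the sum of the
     * allocation scores of its free slots, then release the lowest-scoring
     * TMs as long as enough free slots remain for the pending assignment.
     */
    public static List<String> tmsToRelease(
            Map<String, List<Integer>> slotScoresByTm, int slotsNeeded) {
        int remaining =
                slotScoresByTm.values().stream().mapToInt(List::size).sum();
        List<Map.Entry<String, List<Integer>>> tms =
                new ArrayList<>(slotScoresByTm.entrySet());
        // Lowest aggregate score first: those TMs are the cheapest to give up.
        tms.sort(
                Comparator.comparingInt(
                        e -> e.getValue().stream().mapToInt(Integer::intValue).sum()));
        List<String> release = new ArrayList<>();
        for (Map.Entry<String, List<Integer>> tm : tms) {
            if (remaining - tm.getValue().size() >= slotsNeeded) {
                release.add(tm.getKey());
                remaining -= tm.getValue().size();
            }
        }
        return release;
    }
}
```

In the example from the description (2 TMs with 4 slots each, 4 slot sharing groups remaining), this would release one whole TM instead of leaving free slots scattered across both.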



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35622) Filter out noisy "Coordinator of operator xxxx does not exist" exceptions in batch mode

2024-06-16 Thread Junrui Li (Jira)
Junrui Li created FLINK-35622:
-

 Summary: Filter out noisy "Coordinator of operator  does not 
exist" exceptions in batch mode
 Key: FLINK-35622
 URL: https://issues.apache.org/jira/browse/FLINK-35622
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / Coordination
Reporter: Junrui Li


In batch mode, the Flink JobManager logs frequently print "Coordinator of 
operator  does not exist or the job vertex this operator belongs to is not 
initialized." exceptions when the collect() method is used.

This exception is caused by the collect sink attempting to fetch data from the 
corresponding operator coordinator on the JM based on the operator ID. However, 
batch jobs do not initialize all job vertices up front; instead, they 
initialize them sequentially. If a job vertex has not been initialized yet, the 
corresponding coordinator cannot be found, and this message is printed.

These exceptions are harmless and do not affect the job execution, but they can 
clutter the logs and make it difficult to find relevant information, especially 
for large-scale batch jobs.
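Until such a filter lands in Flink itself, a user-side workaround is to suppress the message in the logging configuration. A sketch for the Log4j 2 properties format; the appender id `main` follows Flink's default conf/log4j.properties, but verify it against your distribution before relying on this:

```properties
# Hypothetical workaround: drop the noisy coordinator message at the appender.
appender.main.filter.noise.type = RegexFilter
appender.main.filter.noise.regex = .*Coordinator of operator.*does not exist.*
appender.main.filter.noise.onMatch = DENY
appender.main.filter.noise.onMismatch = ACCEPT
```

Note this hides the message for all operators, so it should be removed once the exception is filtered at the source.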





Re: [PR] [FLINK-35548] Add E2E tests for PubSubSinkV2 [flink-connector-gcp-pubsub]

2024-06-16 Thread via GitHub


snuyanzin commented on code in PR #28:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/28#discussion_r1641831775


##
flink-connector-gcp-pubsub/src/main/java/org/apache/flink/streaming/connectors/gcp/pubsub/emulator/EmulatorCredentialsProvider.java:
##
@@ -20,12 +20,14 @@
 import com.google.api.gax.core.CredentialsProvider;
 import com.google.auth.Credentials;
 
+import java.io.Serializable;
+
 /**
  * A CredentialsProvider that simply provides the right credentials that are 
to be used for
  * connecting to an emulator. NOTE: The Google provided NoCredentials and 
NoCredentialsProvider do
  * not behave as expected. See 
https://github.com/googleapis/gax-java/issues/1148
  */
-public final class EmulatorCredentialsProvider implements CredentialsProvider {
+public final class EmulatorCredentialsProvider implements CredentialsProvider, 
Serializable {

Review Comment:
   Looks like we missed it: according to the Flink contribution guide there 
   should be a `serialVersionUID`.
   
https://flink.apache.org/how-to-contribute/code-style-and-quality-java/#java-serialization
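For reference, the convention the guide describes looks like the following. The class name here is illustrative, not the actual connector code:

```java
import java.io.Serializable;

// Every Serializable class should declare an explicit serialVersionUID so
// that serialization compatibility is not tied to a compiler-generated
// default, which can change when the class is edited.
public final class EmulatorCredentialsProviderStyle implements Serializable {
    private static final long serialVersionUID = 1L;
}
```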



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35548] Add E2E tests for PubSubSinkV2 [flink-connector-gcp-pubsub]

2024-06-16 Thread via GitHub


snuyanzin commented on code in PR #28:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/28#discussion_r1641832027


##
flink-connector-gcp-pubsub/src/main/java/org/apache/flink/streaming/connectors/gcp/pubsub/emulator/EmulatorCredentialsProvider.java:
##
@@ -20,12 +20,14 @@
 import com.google.api.gax.core.CredentialsProvider;
 import com.google.auth.Credentials;
 
+import java.io.Serializable;
+
 /**
  * A CredentialsProvider that simply provides the right credentials that are 
to be used for
  * connecting to an emulator. NOTE: The Google provided NoCredentials and 
NoCredentialsProvider do
  * not behave as expected. See 
https://github.com/googleapis/gax-java/issues/1148
  */
-public final class EmulatorCredentialsProvider implements CredentialsProvider {
+public final class EmulatorCredentialsProvider implements CredentialsProvider, 
Serializable {

Review Comment:
   Seems like the same should be done for `GcpPublisherConfig`.








Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-06-16 Thread via GitHub


jeyhunkarimov commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-2171513067

   Hi @anupamaggarwal thanks for your proposal. Please feel free to continue!





Re: [PR] [FLINK-32086][checkpointing] Cleanup useless file-merging managed directory on exit of TM [flink]

2024-06-16 Thread via GitHub


zoltar9264 commented on code in PR #24933:
URL: https://github.com/apache/flink/pull/24933#discussion_r1641835072


##
flink-runtime/src/main/java/org/apache/flink/runtime/state/filesystem/FsMergingCheckpointStorageLocation.java:
##
@@ -80,6 +80,9 @@ public FsMergingCheckpointStorageLocation(
 reference,
 fileStateSizeThreshold,
 writeBufferSize);
+
+// Notify the file merging snapshot manager when the
+// FsMergingCheckpointStorageLocation is created.
+fileMergingSnapshotManager.notifyCheckpointStart(subtaskKey, checkpointId);

Review Comment:
   Good suggestion, done.






[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855405#comment-17855405
 ] 

Matthias Pohl commented on FLINK-35601:
---

This seems to be caused by the recently added [PR 
#24881|https://github.com/apache/flink/pull/24881], which is connected to 
FLINK-25537. I haven't been able to reproduce the error of this Jira issue, but 
running the test repeatedly causes it to eventually time out (tried 3x; the 
test ran into a deadlock (?) after 512, 618, and 3724 repetitions). It might be 
worth reverting the changes of PR #24881.

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491
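The `NoSuchFieldException: modifiers` in the stack trace is expected on JDK 12 and later, where `java.lang.reflect.Field` filters its own `modifiers` field from reflection; that field is the basis of the old `setAccessible`-plus-modifiers trick the test relies on. A minimal probe:

```java
public class ModifiersFieldProbe {

    // Returns true on JDK 8-11, where Field.class still exposes its internal
    // "modifiers" field via reflection; on JDK 12+ the field is filtered and
    // getDeclaredField throws NoSuchFieldException.
    public static boolean modifiersFieldVisible() {
        try {
            java.lang.reflect.Field.class.getDeclaredField("modifiers");
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }
}
```

So the test can only work on older JDKs unless it is rewritten without the reflective hack.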





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855406#comment-17855406
 ] 

Matthias Pohl commented on FLINK-35601:
---

[~gongzhongqiang] can you have a look?

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855405#comment-17855405
 ] 

Matthias Pohl edited comment on FLINK-35601 at 6/16/24 1:29 PM:


This seems to be caused by the recently added [PR 
#24881|https://github.com/apache/flink/pull/24881], which is connected to 
FLINK-25537. I haven't been able to reproduce the error of this Jira issue (I 
haven't tried it with JDK 17), but running the test repeatedly causes it to 
eventually time out (tried 3x; the test ran into a deadlock (?) after 512, 618, 
and 3724 repetitions). It might be worth reverting the changes of PR #24881.


was (Author: mapohl):
This seems to be caused by the recently added [PR 
#24881|https://github.com/apache/flink/pull/24881], which is connected to 
FLINK-25537. I haven't been able to reproduce the error of this Jira issue, but 
running the test repeatedly causes it to eventually time out (tried 3x; the 
test ran into a deadlock (?) after 512, 618, and 3724 repetitions). It might be 
worth reverting the changes of PR #24881.

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





Re: [PR] [FLINK-20398][e2e] Migrate test_batch_sql.sh to Java e2e tests framework [flink]

2024-06-16 Thread via GitHub


XComp commented on PR #24471:
URL: https://github.com/apache/flink/pull/24471#issuecomment-2171654110

   Yeah, the test failure is unrelated (FLINK-35042). Unfortunately, we missed 
getting it in before the feature freeze. Let's merge it after the release 
branch for 1.20 is created.





Re: [PR] [mysql] Mysql-cdc adapt mariadb. [flink-cdc]

2024-06-16 Thread via GitHub


ThisisWilli commented on PR #2494:
URL: https://github.com/apache/flink-cdc/pull/2494#issuecomment-2171678737

   > hello, how is this fix going?
   
   Still in progress; it should be finished by the end of June.





[jira] [Commented] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855415#comment-17855415
 ] 

xiangyu feng commented on FLINK-32955:
--

Hi [~lijinzhong], this issue hasn't been updated for a long time. I would like 
to work on it if that's OK with you.

> Support state compatibility between enabling TTL and disabling TTL
> --
>
> Key: FLINK-32955
> URL: https://issues.apache.org/jira/browse/FLINK-32955
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Jinzhong Li
>Assignee: Jinzhong Li
>Priority: Major
>
> Currently, trying to restore state that was previously configured without 
> TTL using a TTL-enabled descriptor, or vice versa, leads to a compatibility 
> failure and a StateMigrationException.
> In some scenarios, users may enable state TTL and restore from old state 
> that was previously configured without TTL, or vice versa.
> It would be useful for users if we supported state compatibility between 
> enabling TTL and disabling TTL.





Re: [PR] [FLINK-35548] Add E2E tests for PubSubSinkV2 [flink-connector-gcp-pubsub]

2024-06-16 Thread via GitHub


snuyanzin commented on PR #28:
URL: 
https://github.com/apache/flink-connector-gcp-pubsub/pull/28#issuecomment-2171763168

   Thanks for addressing the feedback.
   There is one more finding; I put it in the comments.





Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]

2024-06-16 Thread via GitHub


yuanoOo commented on code in PR #3360:
URL: https://github.com/apache/flink-cdc/pull/3360#discussion_r1642033832


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-oceanbase/src/main/java/org/apache/flink/cdc/connectors/oceanbase/catalog/OceanBaseCatalogFactory.java:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.oceanbase.catalog;
+
+import com.oceanbase.connector.flink.OceanBaseConnectorOptions;
+import com.oceanbase.connector.flink.connection.OceanBaseConnectionProvider;
+import com.oceanbase.connector.flink.dialect.OceanBaseDialect;
+import com.oceanbase.connector.flink.dialect.OceanBaseMySQLDialect;
+import com.oceanbase.connector.flink.dialect.OceanBaseOracleDialect;
+
+/** A factory for creating {@link OceanBaseCatalog} instances. */
+public class OceanBaseCatalogFactory {
+
+public static OceanBaseCatalog createOceanBaseCatalog(
+OceanBaseConnectorOptions connectorOptions) throws Exception {
+try (OceanBaseConnectionProvider connectionProvider =
+new OceanBaseConnectionProvider(connectorOptions)) {
+OceanBaseDialect dialect = connectionProvider.getDialect();
+if (dialect instanceof OceanBaseMySQLDialect) {
+return new OceanBaseMySQLCatalog(connectorOptions);
+} else if (dialect instanceof OceanBaseOracleDialect) {
+return new OceanBaseOracleCatalog(connectorOptions);
+} else {
+throw new OceanBaseCatalogException("This tenant is not 
supported currently");

Review Comment:
   I changed it to: Fail to create OceanBaseCatalog: unknown tenant.






Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]

2024-06-16 Thread via GitHub


yuanoOo commented on code in PR #3360:
URL: https://github.com/apache/flink-cdc/pull/3360#discussion_r1641144855


##
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-oceanbase/src/main/java/org/apache/flink/cdc/connectors/oceanbase/sink/OceanBaseMetadataApplier.java:
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.oceanbase.sink;
+
+import org.apache.flink.cdc.common.configuration.Configuration;
+import org.apache.flink.cdc.common.event.AddColumnEvent;
+import org.apache.flink.cdc.common.event.AlterColumnTypeEvent;
+import org.apache.flink.cdc.common.event.CreateTableEvent;
+import org.apache.flink.cdc.common.event.DropColumnEvent;
+import org.apache.flink.cdc.common.event.RenameColumnEvent;
+import org.apache.flink.cdc.common.event.SchemaChangeEvent;
+import org.apache.flink.cdc.common.event.TableId;
+import org.apache.flink.cdc.common.schema.Column;
+import org.apache.flink.cdc.common.schema.Schema;
+import org.apache.flink.cdc.common.sink.MetadataApplier;
+import org.apache.flink.cdc.common.utils.Preconditions;
+import org.apache.flink.cdc.connectors.oceanbase.catalog.OceanBaseCatalog;
+import 
org.apache.flink.cdc.connectors.oceanbase.catalog.OceanBaseCatalogException;
+import 
org.apache.flink.cdc.connectors.oceanbase.catalog.OceanBaseCatalogFactory;
+import org.apache.flink.cdc.connectors.oceanbase.catalog.OceanBaseColumn;
+import org.apache.flink.cdc.connectors.oceanbase.catalog.OceanBaseTable;
+
+import com.oceanbase.connector.flink.OceanBaseConnectorOptions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/** Supports schema evolution for {@link OceanBaseDataSink}. */
+public class OceanBaseMetadataApplier implements MetadataApplier {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(OceanBaseMetadataApplier.class);
+
+private final OceanBaseCatalog catalog;
+
+public OceanBaseMetadataApplier(
+OceanBaseConnectorOptions connectorOptions, Configuration config) 
throws Exception {
+this.catalog = 
OceanBaseCatalogFactory.createOceanBaseCatalog(connectorOptions);

Review Comment:
   Removed






Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


lvyanquan commented on PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#issuecomment-2171998668

   What's more, considering that the number of buckets and parallelism may not 
be consistent, should we remove the constraint on 
[EventPartitioner](https://github.com/apache/flink-cdc/blob/2bd2e4ce24ec0cc6a11129e3e3b32af6a09dd977/flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/partitioning/EventPartitioner.java#L25)?





Re: [PR] [FLINK-34908][pipeline-connector][starrocks] Fix MySQL to doris pipeline will lose precision for timestamp type [flink-cdc]

2024-06-16 Thread via GitHub


yuxiqian commented on PR #3417:
URL: https://github.com/apache/flink-cdc/pull/3417#issuecomment-2172008618

   Thanks for @ChengJie1053's great work! Could you please add some test cases 
to verify this change? Also, some existing test cases might need to be tweaked 
to reflect it:
   
   ```diff
   Error:  Failures: 
   Error:
EventRecordSerializationSchemaTest.testMixedSchemaAndDataChanges:133->verifySerializeResult:275
   
   +expected:<{__op=0, col1=1, col2=true, col3=2023-11-27 18:00:00}>
   - but was:<{__op=0, col1=1, col2=true, col3=2023-11-27 18:00:00.00}>
   ```
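The precision mismatch above can be reproduced independently of the connector with plain `java.time` formatting. The patterns here are illustrative, not the connector's actual serialization code:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampPrecision {

    // Serializes a timestamp without its fractional-second digits,
    // matching the "expected" value in the failing test.
    public static String withoutFraction(LocalDateTime ts) {
        return ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    // Keeps two fractional-second digits, matching the "but was" value.
    public static String withFraction(LocalDateTime ts) {
        return ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SS"));
    }
}
```

The fix changes which of these two shapes the pipeline emits, so the expected strings in existing tests must be updated accordingly.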





Re: [PR] [FLINK-34908][pipeline-connector][starrocks] Fix MySQL to doris pipeline will lose precision for timestamp type [flink-cdc]

2024-06-16 Thread via GitHub


ChengJie1053 commented on PR #3417:
URL: https://github.com/apache/flink-cdc/pull/3417#issuecomment-2172020816

   > Thanks for @ChengJie1053's great work, could you please add some tests to 
verify this change? Also some existing testcases might need to be tweaked to 
reflect this change:
   > 
   > ```diff
   > Error:  Failures: 
   > Error:
EventRecordSerializationSchemaTest.testMixedSchemaAndDataChanges:133->verifySerializeResult:275
   > 
   > +expected:<{__op=0, col1=1, col2=true, col3=2023-11-27 18:00:00}>
   > - but was:<{__op=0, col1=1, col2=true, col3=2023-11-27 18:00:00.00}>
   > ```
   
   Ok, thanks for reviewing the code for me





Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


dingxin-tech commented on PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#issuecomment-2172040689

   > What's more, considering that the number of buckets and parallelism may 
not be consistent, should we remove the constraint on 
[EventPartitioner](https://github.com/apache/flink-cdc/blob/2bd2e4ce24ec0cc6a11129e3e3b32af6a09dd977/flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/partitioning/EventPartitioner.java#L25)?
   
   Although the number of buckets and the parallelism will differ, we can only 
distribute based on parallelism rather than buckets, right? We have already 
distributed the hash values across the parallel subtasks 
[here](https://github.com/apache/flink-cdc/blob/2bd2e4ce24ec0cc6a11129e3e3b32af6a09dd977/flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/partitioning/PrePartitionOperator.java#L104C43-L104C64),
 so I think there's no need to change anything here.
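For illustration, the channel selection being discussed boils down to mapping an event hash onto the downstream parallelism. This is a simplified sketch with illustrative names, not the actual PrePartitionOperator code:

```java
public class ChannelSelector {

    // Maps an event's hash onto the downstream parallelism, independently of
    // how many sink-side buckets exist.
    public static int selectChannel(int hash, int parallelism) {
        // floorMod keeps the result in [0, parallelism) even for negative hashes.
        return Math.floorMod(hash, parallelism);
    }
}
```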




[jira] [Commented] (FLINK-35067) Support metadata 'op_type' virtual column for Postgres CDC Connector.

2024-06-16 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855467#comment-17855467
 ] 

Leonard Xu commented on FLINK-35067:


Assigned this ticket to [~loserwang1024], as he'd like to pick up this stalled 
ticket soon. [~walls.flink.m], you can help review the PR if you'd like to 
contribute to this feature with Hongshun.

>  Support metadata 'op_type' virtual column for Postgres CDC Connector. 
> ---
>
> Key: FLINK-35067
> URL: https://issues.apache.org/jira/browse/FLINK-35067
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: Vallari Rastogi
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> Like [https://github.com/apache/flink-cdc/pull/2913,] Support metadata 
> 'op_type' virtual column for Postgres CDC Connector. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-35067) Support metadata 'op_type' virtual column for Postgres CDC Connector.

2024-06-16 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-35067:
--

Assignee: Hongshun Wang  (was: Vallari Rastogi)

>  Support metadata 'op_type' virtual column for Postgres CDC Connector. 
> ---
>
> Key: FLINK-35067
> URL: https://issues.apache.org/jira/browse/FLINK-35067
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Assignee: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> Like [https://github.com/apache/flink-cdc/pull/2913,] Support metadata 
> 'op_type' virtual column for Postgres CDC Connector. 





Re: [PR] [FLINK-34545][cdc-pipeline-connector]Add OceanBase pipeline connector to Flink CDC [flink-cdc]

2024-06-16 Thread via GitHub


yuanoOo commented on PR #3360:
URL: https://github.com/apache/flink-cdc/pull/3360#issuecomment-2172064348

   @GOODBOY008 I made some changes based on your review. Please take another 
look. Thank you.





[jira] [Created] (FLINK-35623) Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

2024-06-16 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-35623:
--

 Summary: Bump mongo-driver version from 4.7.2 to 5.1.1 to support 
MongoDB 7.0
 Key: FLINK-35623
 URL: https://issues.apache.org/jira/browse/FLINK-35623
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / MongoDB
Affects Versions: mongodb-1.2.0
Reporter: Jiabao Sun
Assignee: Jiabao Sun
 Fix For: mongodb-1.3.0


Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

 

[https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/]





[jira] [Updated] (FLINK-35623) Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0

2024-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35623:
---
Labels: pull-request-available  (was: )

> Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
> 
>
> Key: FLINK-35623
> URL: https://issues.apache.org/jira/browse/FLINK-35623
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.2.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: mongodb-1.3.0
>
>
> Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
>  
> [https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/]





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855469#comment-17855469
 ] 

Weijie Guo commented on FLINK-35601:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60312&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6464

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491
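
(Editor's note: for context on this failure — since JDK 12, the internal 
`modifiers` field of `java.lang.reflect.Field` is filtered from reflection, so 
the `getDeclaredField("modifiers")` trick used by the test throws 
`NoSuchFieldException` on newer JDKs. A minimal reproduction, whose outcome 
depends on the running JDK:)

```java
import java.lang.reflect.Field;

public class ModifiersFieldCheck {

    // Returns true when the JDK filters Field's internal 'modifiers' field
    // from reflection (JDK 12+) -- the exact condition that breaks the test.
    static boolean modifiersFieldFiltered() {
        try {
            Field.class.getDeclaredField("modifiers");
            return false; // reachable on JDK 8-11
        } catch (NoSuchFieldException e) {
            return true;  // JDK 12+: the field is filtered out
        }
    }

    public static void main(String[] args) {
        System.out.println("modifiers filtered: " + modifiersFieldFiltered());
    }
}
```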





Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


lvyanquan commented on PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#issuecomment-2172081220

   Got it. There is indeed no need for adjustment.





[jira] [Commented] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread Jinzhong Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855470#comment-17855470
 ] 

Jinzhong Li commented on FLINK-32955:
-

[~xiangyu0xf] Thanks for volunteering. That works for me.

> Support state compatibility between enabling TTL and disabling TTL
> --
>
> Key: FLINK-32955
> URL: https://issues.apache.org/jira/browse/FLINK-32955
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Jinzhong Li
>Assignee: Jinzhong Li
>Priority: Major
>
> Currently, trying to restore state, which was previously configured without 
> TTL, using TTL enabled descriptor or vice versa will lead to compatibility 
> failure and StateMigrationException.
> In some scenario, user may enable state ttl and restore from old state which 
> was previously configured without TTL;  or vice versa.
> It would be useful for users if we support state compatibility between 
> enabling TTL and disabling TTL.





[jira] [Updated] (FLINK-35616) Support upsert into sharded collections for MongoRowDataSerializationSchema

2024-06-16 Thread Jiabao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiabao Sun updated FLINK-35616:
---
Summary: Support upsert into sharded collections for 
MongoRowDataSerializationSchema  (was: Support upsert into sharded collections)

> Support upsert into sharded collections for MongoRowDataSerializationSchema
> ---
>
> Key: FLINK-35616
> URL: https://issues.apache.org/jira/browse/FLINK-35616
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / MongoDB
>Affects Versions: mongodb-1.2.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Major
>
> {panel:}
> For a db.collection.update() operation that includes upsert: true and is on a 
> sharded collection, the full sharded key must be included in the filter:
> * For an update operation.
> * For a replace document operation (starting in MongoDB 4.2).
> {panel}
> https://www.mongodb.com/docs/manual/reference/method/db.collection.update/#upsert-on-a-sharded-collection
> We need to allow users to configure the full sharded key field names to 
> upsert into the sharded collection.
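
(Editor's note: the requirement quoted above — an upsert filter on a sharded 
collection must carry the full shard key — can be illustrated with plain maps. 
This is only a sketch; the real connector builds BSON filters via the 
mongo-driver, and `upsertFilter` is an invented helper name.)

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ShardedUpsertFilter {

    // Build an upsert filter that always includes every shard-key field
    // alongside _id; MongoDB rejects a sharded upsert whose filter lacks
    // any part of the shard key.
    static Map<String, Object> upsertFilter(
            Object id, List<String> shardKeyFields, Map<String, Object> document) {
        Map<String, Object> filter = new LinkedHashMap<>();
        filter.put("_id", id);
        for (String field : shardKeyFields) {
            if (!document.containsKey(field)) {
                throw new IllegalArgumentException("missing shard key field: " + field);
            }
            filter.put(field, document.get(field));
        }
        return filter;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = Map.of("region", "eu", "userId", 42, "total", 99);
        System.out.println(upsertFilter(1, List.of("region", "userId"), doc));
    }
}
```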





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855472#comment-17855472
 ] 

Weijie Guo commented on FLINK-35601:


> I haven't been able to reproduce the error of this Jira issue (I haven't 
> tried it with JDK 17) but running the test repeatedly causes the test to 
> eventually timeout

 

I also ran it in my IDE, and can reproduce the timeout/deadlock.

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





[PR] [hotfix][test] Revert the junit5 migration of InitOutputPathTest [flink]

2024-06-16 Thread via GitHub


reswqa opened a new pull request, #24945:
URL: https://github.com/apache/flink/pull/24945

   PR #24881 introduced some issues during the migration to JUnit 5; 
temporarily revert the changes to the `InitOutputPathTest` file.
   





[jira] [Updated] (FLINK-35605) Release Testing Instructions: Verify FLIP-306 Unified File Merging Mechanism for Checkpoints

2024-06-16 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan updated FLINK-35605:

Description: 
Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070

 

1.20 is the MVP version for FLIP-306. It is a little complex and should be 
tested carefully. The main idea of FLIP-306 is to merge checkpoint files on the 
TM side and provide new {{{}StateHandle{}}}s to the JM. There will be a 
TM-managed directory under the 'shared' checkpoint directory for each subtask, 
and a TM-managed directory under the 'taskowned' checkpoint directory for each 
Task Manager. Under these newly introduced directories, the checkpoint files 
will be merged into a smaller set of files. The following scenarios need to be 
tested, including but not limited to:
 # With file merging enabled, periodic checkpoints perform properly, and 
failover, restore and rescale also work well.
 # Switching file merging on and off across jobs, checkpoints and recovery 
also work properly.
 # There is no left-over TM-managed directory, especially when no checkpoint 
completes before the job is cancelled.
 # File merging takes no effect in (native) savepoints.

Besides the behaviors above, it is better to validate the function of space 
amplification control and the metrics. All the config options can be found 
under 'execution.checkpointing.file-merging'.

  was:Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070


> Release Testing Instructions: Verify FLIP-306 Unified File Merging Mechanism 
> for Checkpoints
> 
>
> Key: FLINK-35605
> URL: https://issues.apache.org/jira/browse/FLINK-35605
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Rui Fan
>Assignee: Zakelly Lan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.20.0
>
>
> Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070
>  
> 1.20 is the MVP version for FLIP-306. It is a little complex and should be 
> tested carefully. The main idea of FLIP-306 is to merge checkpoint files on 
> the TM side and provide new {{{}StateHandle{}}}s to the JM. There will be a 
> TM-managed directory under the 'shared' checkpoint directory for each 
> subtask, and a TM-managed directory under the 'taskowned' checkpoint 
> directory for each Task Manager. Under these newly introduced directories, 
> the checkpoint files will be merged into a smaller set of files. The 
> following scenarios need to be tested, including but not limited to:
>  # With file merging enabled, periodic checkpoints perform properly, and 
> failover, restore and rescale also work well.
>  # Switching file merging on and off across jobs, checkpoints and recovery 
> also work properly.
>  # There is no left-over TM-managed directory, especially when no checkpoint 
> completes before the job is cancelled.
>  # File merging takes no effect in (native) savepoints.
> Besides the behaviors above, it is better to validate the function of space 
> amplification control and the metrics. All the config options can be found 
> under 'execution.checkpointing.file-merging'.
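
(Editor's note: for reference while release testing, the relevant settings 
might look like the following in the Flink configuration file. The option names 
below are assumed from the 'execution.checkpointing.file-merging' group 
mentioned above; the exact keys and defaults should be checked against the 
1.20 configuration reference.)

```yaml
# Enable the FLIP-306 unified file merging mechanism for checkpoints.
execution.checkpointing.file-merging.enabled: true
# Assumed option: upper bound for a merged physical file.
execution.checkpointing.file-merging.max-file-size: 32mb
# Assumed option: bound on the space-amplification trade-off noted above.
execution.checkpointing.file-merging.max-space-amplification: 2
```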





[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855472#comment-17855472
 ] 

Weijie Guo edited comment on FLINK-35601 at 6/17/24 3:21 AM:
-

> I haven't been able to reproduce the error of this Jira issue (I haven't 
> tried it with JDK 17) but running the test repeatedly causes the test to 
> eventually timeout

I also ran it in my IDE, and can reproduce the timeout/deadlock.

 

In order not to block CI (especially since we are at an important stage of the 
1.20 release), I filed a PR to revert the change to `InitOutputPathTest`.


was (Author: weijie guo):
> I haven't been able to reproduce the error of this Jira issue (I haven't 
> tried it with JDK 17) but running the test repeatedly causes the test to 
> eventually timeout

 

I also ran it in my IDE, and can reproduce the timeout/deadlock.

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


lvyanquan commented on code in PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1642102553


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.sink;
+
+import org.apache.flink.cdc.common.schema.Schema;
+
+import java.io.Serializable;
+
+/**
+ * Provide {@link HashFunction} to help PrePartitionOperator to shuffle 
DataChangeEvent to
+ * designated subtask. This is usually beneficial for load balancing, when 
writing to different
+ * partitions/buckets in {@link DataSink}, add custom Implementation to 
further improve efficiency.
+ */
+public interface HashFunctionProvider extends Serializable {
+HashFunction getHashFunction(Schema schema);

Review Comment:
   Can we add `open`/`close` methods and call them in PrePartitionOperator? 






[jira] [Updated] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-35601:
---
Priority: Blocker  (was: Major)

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


lvyanquan commented on code in PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1642104736


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.sink;
+
+import org.apache.flink.cdc.common.schema.Schema;
+
+import java.io.Serializable;
+
+/**
+ * Provide {@link HashFunction} to help PrePartitionOperator to shuffle 
DataChangeEvent to
+ * designated subtask. This is usually beneficial for load balancing, when 
writing to different
+ * partitions/buckets in {@link DataSink}, add custom Implementation to 
further improve efficiency.
+ */
+public interface HashFunctionProvider extends Serializable {
+HashFunction getHashFunction(Schema schema);

Review Comment:
   Can we pass `TableId` to the `getHashFunction` method too? 
   When integrating a third-party SDK, we may not want to rebuild the 
`HashFunction` using the `Schema` definition of Flink CDC, but instead directly 
obtain the calculation method from the SDK based on the table name.
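
   (Editor's note: the reviewer's suggestion could be sketched as below. The 
`TableId`, `Schema` and `HashFunction` types here are simplified stand-ins for 
the flink-cdc classes, invented for this self-contained example; the actual 
interface in the PR may differ.)

```java
import java.io.Serializable;
import java.util.function.ToIntFunction;

// Hypothetical stubs standing in for org.apache.flink.cdc.common.* types.
final class TableId {
    final String name;
    TableId(String name) { this.name = name; }
}

final class Schema { }

interface HashFunction extends ToIntFunction<Object>, Serializable { }

// The suggestion: let the provider see the TableId, so a sink can delegate
// to a third-party SDK's own partitioning for that specific table.
interface HashFunctionProvider extends Serializable {
    HashFunction getHashFunction(TableId tableId, Schema schema);
}

public class HashFunctionProviderSketch {
    public static void main(String[] args) {
        // A provider that keys the hash on the table name, not the schema.
        HashFunctionProvider provider =
                (tableId, schema) -> event -> (tableId.name + event).hashCode();
        HashFunction fn = provider.getHashFunction(new TableId("db.orders"), new Schema());
        System.out.println(fn.applyAsInt("record-1"));
    }
}
```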







Re: [PR] [FLINK-35601][test] Revert the junit5 migration of InitOutputPathTest [flink]

2024-06-16 Thread via GitHub


flinkbot commented on PR #24945:
URL: https://github.com/apache/flink/pull/24945#issuecomment-2172111552

   
   ## CI report:
   
   * 4b510674aee2557067a8da5ce536e23415d06340 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35601:
---
Labels: pull-request-available  (was: )

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Blocker
>  Labels: pull-request-available
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


lvyanquan commented on code in PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1642102553


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.sink;
+
+import org.apache.flink.cdc.common.schema.Schema;
+
+import java.io.Serializable;
+
+/**
+ * Provides a {@link HashFunction} that helps the PrePartitionOperator shuffle each
+ * DataChangeEvent to its designated subtask. This is usually beneficial for load
+ * balancing when writing to different partitions/buckets in a {@link DataSink}; a
+ * custom implementation can further improve efficiency.
+ */
+public interface HashFunctionProvider extends Serializable {
+HashFunction getHashFunction(Schema schema);

Review Comment:
   Can we add `open`/`close` methods and call them in `PrePartitionOperator`? 
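For concreteness, a hedged Java sketch of what such lifecycle hooks could look like. `Schema` and `HashFunction` below are simplified stand-ins for the flink-cdc interfaces, and the no-op `default` bodies are an assumption about how the suggestion might be realized, not the actual flink-cdc API:

```java
import java.io.Serializable;

// Simplified stand-ins for the flink-cdc Schema and HashFunction types.
interface Schema {}

@FunctionalInterface
interface HashFunction {
    int hashcode(Object event);
}

// The reviewer's suggestion: lifecycle hooks with no-op defaults, so existing
// stateless implementations keep working unchanged.
interface HashFunctionProvider extends Serializable {
    default void open() throws Exception {}   // would be called once during operator setup

    HashFunction getHashFunction(Schema schema);

    default void close() throws Exception {}  // would be called once on operator teardown
}

public class HashFunctionProviderSketch {
    public static void main(String[] args) throws Exception {
        HashFunctionProvider provider =
                schema -> event -> event.hashCode() & Integer.MAX_VALUE;
        provider.open();
        HashFunction fn = provider.getHashFunction(new Schema() {});
        int subtask = fn.hashcode("event-1") % 4; // route to one of 4 downstream subtasks
        provider.close();
        System.out.println("subtask=" + subtask);
    }
}
```

With defaults, implementations that need no setup stay one-method lambdas, while stateful ones (e.g. holding a connection to look up bucket metadata) get a place to initialize and release resources.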



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35601][test] Revert the junit5 migration of InitOutputPathTest [flink]

2024-06-16 Thread via GitHub


reswqa commented on PR #24945:
URL: https://github.com/apache/flink/pull/24945#issuecomment-2172114912

   > would you mind adding some background to explain the issue caused by 
https://github.com/apache/flink/pull/24881?
   
   I didn't look into the exact reasons, but as Matthias said in 
[FLINK-35601](https://issues.apache.org/jira/browse/FLINK-35601), I was also 
able to reproduce the timeout/deadlock caused by the migration.
   
   > Are there any CI failures?
   Most of the recent failures reported in the build channel are related to this 
test. (: 
   
   For instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60312&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6464
   
   





Re: [PR] [FLINK-35601][test] Revert the junit5 migration of InitOutputPathTest [flink]

2024-06-16 Thread via GitHub


reswqa commented on PR #24945:
URL: https://github.com/apache/flink/pull/24945#issuecomment-2172123699

   I don't want to block our testing pipeline, and things like migrating to 
junit5 don't have to be in 1.20. IMHO, it makes sense to revert it first.





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855473#comment-17855473
 ] 

Rui Fan commented on FLINK-35601:
-

Thanks [~Weijie Guo] and [~mapohl] for the detailed test!
{quote}I also ran it in my IDE, and can reproduce the timeout/deadlock.
{quote}
Hi [~Weijie Guo], the timeout cannot be reproduced with 
https://github.com/apache/flink/pull/24945, right?



[jira] [Assigned] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-35601:
---

Assignee: Weijie Guo



[jira] [Commented] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855474#comment-17855474
 ] 

xiangyu feng commented on FLINK-32955:
--

[~lijinzhong] thx. [~Zakelly] [~masteryhx], would you kindly assign this issue 
to me? 

> Support state compatibility between enabling TTL and disabling TTL
> --
>
> Key: FLINK-32955
> URL: https://issues.apache.org/jira/browse/FLINK-32955
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Jinzhong Li
>Assignee: Jinzhong Li
>Priority: Major
>
> Currently, trying to restore state that was previously configured without 
> TTL using a TTL-enabled descriptor, or vice versa, leads to a compatibility 
> failure and a StateMigrationException.
> In some scenarios, a user may enable state TTL and restore from old state that 
> was previously configured without TTL, or vice versa.
> It would be useful for users if we supported state compatibility between 
> enabling TTL and disabling TTL.
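A self-contained toy (not Flink code) illustrating why the two configurations are serializer-incompatible: TTL-enabled state persists a last-access timestamp next to each value, following the idea behind Flink's TtlValue wrapper, so the serialized layouts differ:

```java
import java.nio.ByteBuffer;

public class TtlLayoutDemo {
    // Layout without TTL: just the 8-byte value.
    static byte[] writePlain(long value) {
        return ByteBuffer.allocate(8).putLong(value).array();
    }

    // Layout with TTL: last-access timestamp followed by the value,
    // mirroring the idea behind Flink's TtlValue wrapper.
    static byte[] writeWithTtl(long timestamp, long value) {
        return ByteBuffer.allocate(16).putLong(timestamp).putLong(value).array();
    }

    public static void main(String[] args) {
        System.out.println(writePlain(42L).length);                    // 8 bytes per record
        System.out.println(writeWithTtl(1_700_000_000L, 42L).length);  // 16 bytes per record
        // Restoring 8-byte records through the 16-byte schema (or vice versa)
        // cannot work by plain deserialization, hence the StateMigrationException.
    }
}
```

Supporting compatibility therefore means migrating between these layouts on restore (wrapping or unwrapping the timestamp), not merely relaxing the compatibility check.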





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855475#comment-17855475
 ] 

Weijie Guo commented on FLINK-35601:


jdk21
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60320&view=logs&j=d06b80b4-9e88-5d40-12a2-18072cf60528&t=609ecd5a-3f6e-5d0c-2239-2096b155a4d0&l=6464]

jdk17
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60320&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6541]



[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855469#comment-17855469
 ] 

Weijie Guo edited comment on FLINK-35601 at 6/17/24 3:55 AM:
-

jdk17

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60312&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6464]


was (Author: weijie guo):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60312&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6464



[jira] [Assigned] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan reassigned FLINK-32955:
---

Assignee: xiangyu feng  (was: Zakelly Lan)



[jira] [Assigned] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread Zakelly Lan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zakelly Lan reassigned FLINK-32955:
---

Assignee: Zakelly Lan  (was: Jinzhong Li)



[jira] [Commented] (FLINK-32955) Support state compatibility between enabling TTL and disabling TTL

2024-06-16 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855476#comment-17855476
 ] 

Zakelly Lan commented on FLINK-32955:
-

[~xiangyu0xf] Assigned, thanks for volunteering~



[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855477#comment-17855477
 ] 

Weijie Guo commented on FLINK-35601:


> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+.



[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855477#comment-17855477
 ] 

Weijie Guo edited comment on FLINK-35601 at 6/17/24 4:00 AM:
-

> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+.


was (Author: weijie guo):
> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+.



[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855477#comment-17855477
 ] 

Weijie Guo edited comment on FLINK-35601 at 6/17/24 4:06 AM:
-

> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+. This results from the more restrictive reflection 
protection of java.base classes in newer Java versions. We could refer to 
https://github.com/prestodb/presto/pull/15240/files to fix it.

As for the timeout issue, I haven't dug into it yet.
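The failing lookup can be reproduced in isolation. The sketch below is an assumption-based illustration of the mechanism, not Flink's actual test code: since JDK 12, reflection over java.base internals is filtered, so fetching `Field`'s own private `modifiers` field throws the `NoSuchFieldException` seen in the CI log:

```java
import java.lang.reflect.Field;

public class ModifiersLookupDemo {
    public static void main(String[] args) {
        try {
            // Classic pre-JDK-12 trick for patching final fields: fetch the
            // private "modifiers" field of java.lang.reflect.Field itself.
            Field modifiers = Field.class.getDeclaredField("modifiers");
            System.out.println("lookup succeeded (pre-JDK 12 behavior): " + modifiers.getName());
        } catch (NoSuchFieldException e) {
            // On JDK 12+ the JDK filters its own reflection internals, so the
            // lookup fails exactly as in the stack trace above.
            System.out.println("NoSuchFieldException: modifiers (JDK 12+ behavior)");
        }
    }
}
```

Which branch runs depends on the JDK executing it, which matches the observation that the error only shows up on the newer-JDK CI profiles.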


was (Author: weijie guo):
> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+.



[jira] [Comment Edited] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Weijie Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855477#comment-17855477
 ] 

Weijie Guo edited comment on FLINK-35601 at 6/17/24 4:06 AM:
-

> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 12+. This results from the more restrictive reflection 
protection of java.base classes in newer Java versions. We could refer to 
https://github.com/prestodb/presto/pull/15240/files to fix it.

As for the timeout issue, I haven't dug into it yet.


was (Author: weijie guo):
> Hi [~Weijie Guo], the timeout cannot be reproduced with 
> https://github.com/apache/flink/pull/24945, right?

I didn't reproduce the timeout after the revert, but I only ran it about 1500 times.

Note that the difference in the time to run this test before and after the 
revert is very large.

The {{NoSuchFieldException: modifiers}} error that appears in CI should only be 
present on JDK 17+. This results from the more restrictive reflection 
protection of java.base classes in newer Java versions. We could refer to 
https://github.com/prestodb/presto/pull/15240/files to fix it.

As for the timeout issue, I haven't dug into it yet.



[jira] [Created] (FLINK-35624) Release Testing: Verify FLIP-306 Unified File Merging Mechanism for Checkpoints

2024-06-16 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-35624:
---

 Summary: Release Testing: Verify FLIP-306 Unified File Merging 
Mechanism for Checkpoints
 Key: FLINK-35624
 URL: https://issues.apache.org/jira/browse/FLINK-35624
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Zakelly Lan
 Fix For: 1.20.0


Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070

 

1.20 is the MVP version for FLIP-306. It is a little bit complex and should be 
tested carefully. The main idea of FLIP-306 is to merge checkpoint files on the 
TM side and provide new {{{}StateHandle{}}}s to the JM. There will be a 
TM-managed directory under the 'shared' checkpoint directory for each subtask, 
and a TM-managed directory under the 'taskowned' checkpoint directory for each 
Task Manager. Under these newly introduced directories, the checkpoint files 
will be merged into a smaller file set. The following scenarios need to be 
tested, including but not limited to:
 # With file merging enabled, periodic checkpoints perform properly, and 
failover, restore, and rescale also work well.
 # When switching file merging on and off across jobs, checkpoints and 
recovery also work properly.
 # There is no left-over TM-managed directory, especially when no checkpoint 
completes before the job is cancelled.
 # File merging takes no effect on (native) savepoints.

Besides the behaviors above, it is advisable to validate space-amplification 
control and the related metrics. All the config options can be found under 
'execution.checkpointing.file-merging'.
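
As a starting point for the verification, the feature can be switched on in the 
cluster configuration; a minimal sketch (option names as introduced by FLIP-306 
for 1.20 — double-check them against the generated configuration docs):

```yaml
# flink-conf.yaml (Flink 1.20)
execution.checkpointing.file-merging.enabled: true
# upper bound on the space amplification caused by keeping merged files alive
execution.checkpointing.file-merging.max-space-amplification: 2
# target size of a merged physical checkpoint file
execution.checkpointing.file-merging.max-file-size: 32mb
```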





[jira] [Comment Edited] (FLINK-35624) Release Testing: Verify FLIP-306 Unified File Merging Mechanism for Checkpoints

2024-06-16 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855478#comment-17855478
 ] 

Zakelly Lan edited comment on FLINK-35624 at 6/17/24 4:17 AM:
--

And it is better to test under different job submission modes, e.g. Per-job 
mode and Session mode.


was (Author: zakelly):
And it is better to test under different job submission mode, e.g. Per-job mode 
and Session mode.

> Release Testing: Verify FLIP-306 Unified File Merging Mechanism for 
> Checkpoints
> ---
>
> Key: FLINK-35624
> URL: https://issues.apache.org/jira/browse/FLINK-35624
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.20.0
>
>
> Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070
>  
> 1.20 is the MVP version for FLIP-306. It is a little bit complex and should 
> be tested carefully. The main idea of FLIP-306 is to merge checkpoint files 
> on the TM side and provide new {{{}StateHandle{}}}s to the JM. There will be 
> a TM-managed directory under the 'shared' checkpoint directory for each 
> subtask, and a TM-managed directory under the 'taskowned' checkpoint 
> directory for each Task Manager. Under these newly introduced directories, 
> the checkpoint files will be merged into a smaller file set. The following 
> scenarios need to be tested, including but not limited to:
>  # With file merging enabled, periodic checkpoints perform properly, and 
> failover, restore, and rescale also work well.
>  # When switching file merging on and off across jobs, checkpoints and 
> recovery also work properly.
>  # There is no left-over TM-managed directory, especially when no checkpoint 
> completes before the job is cancelled.
>  # File merging takes no effect on (native) savepoints.
> Besides the behaviors above, it is advisable to validate space-amplification 
> control and the related metrics. All the config options can be found under 
> 'execution.checkpointing.file-merging'.





[jira] [Commented] (FLINK-35624) Release Testing: Verify FLIP-306 Unified File Merging Mechanism for Checkpoints

2024-06-16 Thread Zakelly Lan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855478#comment-17855478
 ] 

Zakelly Lan commented on FLINK-35624:
-

And it is better to test under different job submission modes, e.g. Per-job 
mode and Session mode.

> Release Testing: Verify FLIP-306 Unified File Merging Mechanism for 
> Checkpoints
> ---
>
> Key: FLINK-35624
> URL: https://issues.apache.org/jira/browse/FLINK-35624
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Zakelly Lan
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.20.0
>
>
> Follow up the test for https://issues.apache.org/jira/browse/FLINK-32070
>  
> 1.20 is the MVP version for FLIP-306. It is a little bit complex and should 
> be tested carefully. The main idea of FLIP-306 is to merge checkpoint files 
> on the TM side and provide new {{{}StateHandle{}}}s to the JM. There will be 
> a TM-managed directory under the 'shared' checkpoint directory for each 
> subtask, and a TM-managed directory under the 'taskowned' checkpoint 
> directory for each Task Manager. Under these newly introduced directories, 
> the checkpoint files will be merged into a smaller file set. The following 
> scenarios need to be tested, including but not limited to:
>  # With file merging enabled, periodic checkpoints perform properly, and 
> failover, restore, and rescale also work well.
>  # When switching file merging on and off across jobs, checkpoints and 
> recovery also work properly.
>  # There is no left-over TM-managed directory, especially when no checkpoint 
> completes before the job is cancelled.
>  # File merging takes no effect on (native) savepoints.
> Besides the behaviors above, it is advisable to validate space-amplification 
> control and the related metrics. All the config options can be found under 
> 'execution.checkpointing.file-merging'.





Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


dingxin-tech commented on code in PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1642206938


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.sink;
+
+import org.apache.flink.cdc.common.schema.Schema;
+
+import java.io.Serializable;
+
+/**
+ * Provides a {@link HashFunction} to help the PrePartitionOperator shuffle
+ * DataChangeEvents to the designated subtask. This is usually beneficial for
+ * load balancing when writing to different partitions/buckets in a
+ * {@link DataSink}; a custom implementation can further improve efficiency.
+ */
+public interface HashFunctionProvider extends Serializable {
+HashFunction getHashFunction(Schema schema);

Review Comment:
   Yes, I encountered this issue while implementing the MaxCompute Connector. I 
resolved it by converting the Flink Schema to the MaxCompute Schema.
   
   You are right, passing the TableId would be a better approach.
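
To make the discussion concrete, a connector-side provider could look roughly 
like the sketch below. The interfaces here are simplified stand-ins for the 
real ones in `org.apache.flink.cdc.common.*` (keys are modeled as `String`s, 
and the `tableId` parameter reflects the reviewers' suggestion, not the merged 
signature):

```java
import java.io.Serializable;
import java.util.Objects;

/** Simplified stand-in for the flink-cdc HashFunction interface. */
interface HashFunction extends Serializable {
    int hashcode(String dataChangeEventKey); // stand-in for hashcode(DataChangeEvent)
}

/** Simplified stand-in, with the table id passed in as the review suggests. */
interface HashFunctionProvider extends Serializable {
    HashFunction getHashFunction(String tableId, String schemaDescription);
}

/** Routes events with the same key of the same table to the same subtask. */
public class BucketHashFunctionProvider implements HashFunctionProvider {

    @Override
    public HashFunction getHashFunction(String tableId, String schemaDescription) {
        // Mix the table id into the hash so two tables with identical keys
        // do not all land on the same subtask.
        return key -> Objects.hash(tableId, key);
    }

    /** Maps a hash to a non-negative subtask index given the operator parallelism. */
    static int route(HashFunctionProvider provider, String tableId, String schema,
                     String key, int parallelism) {
        int h = provider.getHashFunction(tableId, schema).hashcode(key);
        return Math.floorMod(h, parallelism);
    }
}
```

The essential property for the PrePartitionOperator is determinism: the same 
table/key pair must always yield the same subtask index.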



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35237] Allow Sink to Choose HashFunction in PrePartitionOperator [flink-cdc]

2024-06-16 Thread via GitHub


dingxin-tech commented on code in PR #3414:
URL: https://github.com/apache/flink-cdc/pull/3414#discussion_r1642207213


##
flink-cdc-common/src/main/java/org/apache/flink/cdc/common/sink/HashFunctionProvider.java:
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.common.sink;
+
+import org.apache.flink.cdc.common.schema.Schema;
+
+import java.io.Serializable;
+
+/**
+ * Provides a {@link HashFunction} to help the PrePartitionOperator shuffle
+ * DataChangeEvents to the designated subtask. This is usually beneficial for
+ * load balancing when writing to different partitions/buckets in a
+ * {@link DataSink}; a custom implementation can further improve efficiency.
+ */
+public interface HashFunctionProvider extends Serializable {
+HashFunction getHashFunction(Schema schema);

Review Comment:
   Sure, if you feel it's necessary.






Re: [PR] [FLINK-35601][test] Revert the junit5 migration of InitOutputPathTest [flink]

2024-06-16 Thread via GitHub


1996fanrui merged PR #24945:
URL: https://github.com/apache/flink/pull/24945





[jira] [Commented] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855501#comment-17855501
 ] 

Rui Fan commented on FLINK-35601:
-

Merged to master(1.20.0) via: 3a15d1ce69ac21d619f60033ec45cae303489c8f

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: pull-request-available
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491





[jira] [Resolved] (FLINK-35601) InitOutputPathTest.testErrorOccursUnSynchronized failed due to NoSuchFieldException

2024-06-16 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-35601.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> InitOutputPathTest.testErrorOccursUnSynchronized failed due to 
> NoSuchFieldException
> ---
>
> Key: FLINK-35601
> URL: https://issues.apache.org/jira/browse/FLINK-35601
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> {code:java}
> Jun 14 02:17:56 02:17:56.037 [ERROR] 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized -- 
> Time elapsed: 0.021 s <<< ERROR!
> Jun 14 02:17:56 java.lang.NoSuchFieldException: modifiers
> Jun 14 02:17:56   at 
> java.base/java.lang.Class.getDeclaredField(Class.java:2610)
> Jun 14 02:17:56   at 
> org.apache.flink.core.fs.InitOutputPathTest.testErrorOccursUnSynchronized(InitOutputPathTest.java:59)
> Jun 14 02:17:56   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:194)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
> Jun 14 02:17:56   at 
> java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=60259&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=6491


