Re: [PR] [FLINK-35068][core][type] Introduce built-in serialization support for java.util.Set [flink]

2024-05-27 Thread via GitHub


X-czh commented on PR #24845:
URL: https://github.com/apache/flink/pull/24845#issuecomment-2134366106

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-35469) Unstable test case 'testProject' of JdbcDynamicTableSourceITCase

2024-05-27 Thread RocMarshal (Jira)
RocMarshal created FLINK-35469:
--

 Summary: Unstable test case 'testProject' of 
JdbcDynamicTableSourceITCase
 Key: FLINK-35469
 URL: https://issues.apache.org/jira/browse/FLINK-35469
 Project: Flink
  Issue Type: Bug
Reporter: RocMarshal


https://github.com/apache/flink-connector-jdbc/actions/runs/9263628064/job/25482376215?pr=119



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33607][build]: Add checksum verification for Maven wrapper [flink]

2024-05-27 Thread via GitHub


flinkbot commented on PR #24852:
URL: https://github.com/apache/flink/pull/24852#issuecomment-2134328506

   
   ## CI report:
   
   * 35c1fce18fa24f0232edb86eee232482804795d2 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   - `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-33607) Add checksum verification for Maven wrapper as well

2024-05-27 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849874#comment-17849874
 ] 

Luke Chen commented on FLINK-33607:
---

[~mapohl] , I've opened a PR to improve it. Please help assign the ticket to 
me. Thanks.

> Add checksum verification for Maven wrapper as well
> ---
>
> Key: FLINK-33607
> URL: https://issues.apache.org/jira/browse/FLINK-33607
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-33503 enabled us to add checksum checks for the Maven wrapper binaries 
> along the update from 3.1.0 to 3.2.0.
> But there seems to be an issue with verifying the wrapper's checksum under 
> Windows (see [related PR discussion in 
> Guava|https://github.com/google/guava/pull/6807/files]).
> This issue covers the fix as soon as MWRAPPER-103 is resolved.





[jira] [Commented] (FLINK-33607) Add checksum verification for Maven wrapper as well

2024-05-27 Thread Luke Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849873#comment-17849873
 ] 

Luke Chen commented on FLINK-33607:
---

PR: [https://github.com/apache/flink/pull/24852]

After MWRAPPER-103 is resolved in 3.3.0, we should upgrade the Maven wrapper to 
3.3.0 or later to enable checksum verification of the wrapper.

This PR does the following:
 # Upgrades the Maven wrapper to the latest 3.3.2 version to pick up more bug fixes.
 # Updates the {{wrapperSha256Sum}} for {{maven-wrapper-3.3.2.jar}}.
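For reference, the wrapper checksum lives in `.mvn/wrapper/maven-wrapper.properties`. A sketch of the relevant entries is below; the Maven version and the SHA-256 value are placeholders, not the real values for the 3.3.2 artifact:

```
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/<maven-version>/apache-maven-<maven-version>-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.3.2/maven-wrapper-3.3.2.jar
wrapperSha256Sum=<sha-256-of-maven-wrapper-3.3.2.jar>
```

With `wrapperSha256Sum` set, the wrapper script refuses to run a downloaded jar whose hash does not match.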






[jira] [Updated] (FLINK-33607) Add checksum verification for Maven wrapper as well

2024-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33607:
---
Labels: pull-request-available  (was: )






[PR] FLINK-33607: Add checksum verification for Maven wrapper [flink]

2024-05-27 Thread via GitHub


showuon opened a new pull request, #24852:
URL: https://github.com/apache/flink/pull/24852

   
   
   ## What is the purpose of the change
   
   
   [FLINK-33503](https://issues.apache.org/jira/browse/FLINK-33503) enabled us 
to add checksum checks for the Maven wrapper binaries along the update from 
3.1.0 to 3.2.0.
   
   But there seems to be an issue with verifying the wrapper's checksum under 
Windows (see 
[MWRAPPER-103](https://issues.apache.org/jira/browse/MWRAPPER-103)). After 
MWRAPPER-103 is resolved in 3.3.0, we should upgrade the Maven wrapper to 3.3.0 
or later to enable checksum verification of the wrapper. 
   
   This PR does the following:
   1. Upgrades the Maven wrapper to the latest 3.3.2 version to pick up more 
bug fixes.
   2. Updates the `wrapperSha256Sum` for `maven-wrapper-3.3.2.jar`.
   
   ## Brief change log
   
 - Upgrade Maven wrapper to the latest 3.3.2 version to include more bug 
fixes.
 - Update the `wrapperSha256Sum` for `maven-wrapper-3.3.2.jar`
   
   ## Verifying this change
   
   After the change, the Maven build runs successfully.
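   One way to obtain a value for `wrapperSha256Sum` is to hash the downloaded jar locally. A generic sketch, using a throwaway file as a stand-in for the real jar:

```shell
# Create a throwaway file standing in for the wrapper jar.
printf 'hello\n' > /tmp/demo-wrapper.jar

# Print its SHA-256 digest, the value that would go into wrapperSha256Sum.
sha256sum /tmp/demo-wrapper.jar

# Verify against an expected digest, as the wrapper script does after download.
echo "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  /tmp/demo-wrapper.jar" | sha256sum -c -
```

   Running the same `sha256sum` over the real `maven-wrapper-3.3.2.jar` yields the checksum to commit.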
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   





[jira] [Updated] (FLINK-35468) [Connectors/Pulsar] update isEnableAutoAcknowledgeMessage config comment

2024-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35468:
---
Labels: pull-request-available  (was: )

> [Connectors/Pulsar] update isEnableAutoAcknowledgeMessage config comment
> 
>
> Key: FLINK-35468
> URL: https://issues.apache.org/jira/browse/FLINK-35468
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: pulsar-4.1.0
>Reporter: zhou zhuohan
>Priority: Major
>  Labels: pull-request-available
> Fix For: pulsar-4.1.0
>
>
> Since we [deleted the {{shared}} and {{key-shared}} subscription 
> types|https://issues.apache.org/jira/browse/FLINK-30413] in the Pulsar source 
> connector, I think it is better to remove these subscription types from the 
> {{isEnableAutoAcknowledgeMessage}} option comment to prevent misunderstanding.





Re: [PR] [FLINK-35468][Connectors/Pulsar] update isEnableAutoAcknowledgeMessage config comment [flink-connector-pulsar]

2024-05-27 Thread via GitHub


boring-cyborg[bot] commented on PR #93:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/93#issuecomment-2134308720

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   





[PR] [FLINK-35468][Connectors/Pulsar] update isEnableAutoAcknowledgeMessage config comment [flink-connector-pulsar]

2024-05-27 Thread via GitHub


geniusjoe opened a new pull request, #93:
URL: https://github.com/apache/flink-connector-pulsar/pull/93

   
   
   ## Purpose of the change
   
   
   Since we [deleted the shared and key-shared subscription 
types](https://issues.apache.org/jira/browse/FLINK-30413) in the Pulsar source 
connector, I think it is better to remove these subscription types from the 
isEnableAutoAcknowledgeMessage option comment to prevent misunderstanding.
   
   ## Brief change log
   Update the `isEnableAutoAcknowledgeMessage` comment in `SourceConfiguration.java`.
   
   ## Significant changes
   
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   





[jira] [Created] (FLINK-35468) [Connectors/Pulsar] update isEnableAutoAcknowledgeMessage config comment

2024-05-27 Thread zhou zhuohan (Jira)
zhou zhuohan created FLINK-35468:


 Summary: [Connectors/Pulsar] update isEnableAutoAcknowledgeMessage 
config comment
 Key: FLINK-35468
 URL: https://issues.apache.org/jira/browse/FLINK-35468
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Affects Versions: pulsar-4.1.0
Reporter: zhou zhuohan
 Fix For: pulsar-4.1.0


Since we [deleted the {{shared}} and {{key-shared}} subscription 
types|https://issues.apache.org/jira/browse/FLINK-30413] in the Pulsar source 
connector, I think it is better to remove these subscription types from the 
{{isEnableAutoAcknowledgeMessage}} option comment to prevent misunderstanding.





[jira] [Comment Edited] (FLINK-35438) SourceCoordinatorTest.testErrorThrownFromSplitEnumerator fails on wrong error

2024-05-27 Thread Rob Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849850#comment-17849850
 ] 

Rob Young edited comment on FLINK-35438 at 5/28/24 3:45 AM:


I agree there's a race that's hard to reproduce; I can only provoke it by 
adding a thread sleep at the spot where it can occur in 
`MockOperatorCoordinatorContext`:
{code:java}
@Override
public void failJob(Throwable cause) {
    jobFailed = true;
    try {
        Thread.sleep(50);
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    jobFailureReason = cause;
    jobFailedFuture.complete(null);
}

public boolean isJobFailed() {
    return jobFailed;
}

public Throwable getJobFailureReason() {
    return jobFailureReason;
}{code}
If getJobFailureReason() is called between jobFailed and jobFailureReason being 
assigned, the test thread can unexpectedly observe a null jobFailureReason 
while jobFailed is true.

The same race is present in master.

Happy to contribute a fix if someone could assign the ticket to me. 
Synchronizing access to the two fields would be one option; alternatively, the 
two fields could be replaced with a single object holding both the failed flag 
and the reason, so they are assigned together.
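A minimal sketch of that second option (illustrative only, not the actual Flink fix; the class name is made up):

```java
// Sketch: collapse the two racy fields into one immutable holder so a reader
// can never observe the failed flag set while the reason is still null.
class FailureAwareContext {

    // Immutable (failed, reason) pair, published via a single volatile write.
    private static final class Failure {
        final Throwable reason;
        Failure(Throwable reason) { this.reason = reason; }
    }

    private volatile Failure failure;

    public void failJob(Throwable cause) {
        // Flag and reason become visible to other threads together.
        failure = new Failure(cause);
    }

    public boolean isJobFailed() {
        return failure != null;
    }

    public Throwable getJobFailureReason() {
        Failure f = failure;
        return f == null ? null : f.reason;
    }

    public static void main(String[] args) {
        FailureAwareContext ctx = new FailureAwareContext();
        ctx.failJob(new RuntimeException("boom"));
        // prints "true boom"
        System.out.println(ctx.isJobFailed() + " " + ctx.getJobFailureReason().getMessage());
    }
}
```

Because the flag and the reason are read from the same reference, the window observed in the test above disappears.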



> SourceCoordinatorTest.testErrorThrownFromSplitEnumerator fails on wrong error
> -
>
> Key: FLINK-35438
> URL: https://issues.apache.org/jira/browse/FLINK-35438
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.2
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> * 1.18 Java 11 / Test (module: core) 
> https://github.com/apache/flink/actions/runs/9201159842/job/25309197630#step:10:7375
> We expect to see an artificial {{Error("Test Error")}} being reported in the 
> test as the cause of a job failure, but the reported job failure is null:
> {code}
> Error: 02:32:31 02:32:31.950 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 0.187 s <<< FAILURE! - in 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest
> Error: 02:32:31 02:32:31.950 [ERROR] 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator
>   Time elapsed: 0.01 s  <<< FAILURE!
> May 23 02:32:31 org.opentest4j.AssertionFailedError: 
> May 23 02:32:31 
> May 23 02:32:31 expected: 
> May 23 02:32:31   java.lang.Error: Test Error
> May 23 02:32:31   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:296)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 23 02:32:31   ...(57 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> May 23 02:32:31  but was: 
> May 23 02:32:31   null
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> May 23 02:32:31   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:322)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 23 02:32:31   at 
> 

Re: [PR] [FLINK-35432]Support catch modify event for the mysql. [flink-cdc]

2024-05-27 Thread via GitHub


yuxiqian commented on PR #3352:
URL: https://github.com/apache/flink-cdc/pull/3352#issuecomment-2134293705

   > @yuxiqian can you help me trigger cicd?
   
   Sorry, I don't have that privilege either. You could create a PR within your 
own fork (e.g. from `hk-lrzy:FLINK-35432` to `hk-lrzy:master`) to run CI on 
your own repository.





Re: [PR] [hotfix] Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value [flink]

2024-05-27 Thread via GitHub


X-czh closed pull request #24851: [hotfix] Fix incompatible type of 
JobManagerOptions.SLOT_REQUEST_OUT value
URL: https://github.com/apache/flink/pull/24851





Re: [PR] [hotfix] Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value [flink]

2024-05-27 Thread via GitHub


JunRuiLee commented on PR #24851:
URL: https://github.com/apache/flink/pull/24851#issuecomment-2134282190

   @X-czh Thank you very much for the reminder. Another fix has been merged 
(https://github.com/apache/flink/pull/24850); you can rebase onto master.





Re: [PR] [FLINK-35432]Support catch modify event for the mysql. [flink-cdc]

2024-05-27 Thread via GitHub


hk-lrzy commented on PR #3352:
URL: https://github.com/apache/flink-cdc/pull/3352#issuecomment-2134282172

   @yuxiqian can you help me trigger cicd?





Re: [PR] [hotfix] Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value [flink]

2024-05-27 Thread via GitHub


flinkbot commented on PR #24851:
URL: https://github.com/apache/flink/pull/24851#issuecomment-2134281847

   
   ## CI report:
   
   * e672a30887809c13a14c4103076f560ebbcee092 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   - `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [hotfix] Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value [flink]

2024-05-27 Thread via GitHub


X-czh commented on PR #24851:
URL: https://github.com/apache/flink/pull/24851#issuecomment-2134279139

   @JunRuiLee Hi, a recent change (FLINK-35359) changed the type of 
JobManagerOptions.SLOT_REQUEST_OUT to Duration, which conflicts with your 
recent change in FLINK-35465 and leads to a compilation error. Could you help 
take a look?





[PR] [hotfix] Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value [flink]

2024-05-27 Thread via GitHub


X-czh opened a new pull request, #24851:
URL: https://github.com/apache/flink/pull/24851

   
   
   ## What is the purpose of the change
   
   Fix incompatible type of JobManagerOptions.SLOT_REQUEST_OUT value.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





Re: [PR] [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode [flink]

2024-05-27 Thread via GitHub


lsyldliu commented on code in PR #24849:
URL: https://github.com/apache/flink/pull/24849#discussion_r1616527538


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java:
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.util.Objects;
+
+/**
+ * The {@link IntervalFreshness} represents freshness definition of {@link
+ * CatalogMaterializedTable}. It encapsulates the string interval value along 
with time unit,
+ * allowing for flexible representation of different freshness type. Moreover, 
it can provide
+ * detailed raw information for some specific operations.
+ */
+@PublicEvolving
+public class IntervalFreshness {
+
+private final String interval;
+private final TimeUnit timeUnit;
+
+public IntervalFreshness(String interval, TimeUnit timeUnit) {

Review Comment:
   Considering future extensibility, we may need to support other kinds of 
time units, such as `DAY TO HOUR` or `DAY TO SECOND`, which are not directly 
convertible to numeric types; so the interval is defined as a String here, 
which is the widest type.
   
   1. https://calcite.apache.org/docs/reference.html
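A minimal sketch of the idea (a trimmed stand-in, not the actual Flink class; the cron conversion is purely illustrative of the full-refresh use case and handles only the simple numeric units):

```java
import java.util.Objects;

// Sketch: keep the interval as a String so that future compound units
// ("DAY TO HOUR") stay representable without changing the field type.
class IntervalFreshness {
    enum TimeUnit { SECOND, MINUTE, HOUR, DAY }

    private final String interval;
    private final TimeUnit timeUnit;

    IntervalFreshness(String interval, TimeUnit timeUnit) {
        this.interval = Objects.requireNonNull(interval);
        this.timeUnit = Objects.requireNonNull(timeUnit);
    }

    String getInterval() { return interval; }
    TimeUnit getTimeUnit() { return timeUnit; }

    // Illustrative full-refresh conversion; a compound-unit interval would
    // fail Integer.parseInt and need dedicated handling.
    String toCron() {
        int v = Integer.parseInt(interval);
        switch (timeUnit) {
            case SECOND: return "*/" + v + " * * * * ?";
            case MINUTE: return "0 */" + v + " * * * ?";
            case HOUR:   return "0 0 */" + v + " * * ?";
            default:     return "0 0 0 */" + v + " * ?"; // DAY
        }
    }

    public static void main(String[] args) {
        // prints "0 0 */3 * * ?"
        System.out.println(new IntervalFreshness("3", TimeUnit.HOUR).toCron());
    }
}
```

The String field costs one `parseInt` for the numeric cases but leaves room for the compound Calcite interval forms later.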








Re: [PR] [hotfix] Fix modification conflict between FLINK-35465 and FLINK-35359 [flink]

2024-05-27 Thread via GitHub


zhuzhurk commented on PR #24850:
URL: https://github.com/apache/flink/pull/24850#issuecomment-2134275369

   The change is trivial. Will merge it right now to unblock others.





[jira] [Resolved] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-27 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei resolved FLINK-35457.

Resolution: Fixed

> EventTimeWindowCheckpointingITCase fails on AZP as NPE
> --
>
> Key: FLINK-35457
> URL: https://issues.apache.org/jira/browse/FLINK-35457
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Yanfei Lei
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.deleteIfNecessary(PhysicalFile.java:155)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.PhysicalFile.decRefCount(PhysicalFile.java:141)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.LogicalFile.discardWithCheckpointId(LogicalFile.java:118)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardSingleLogicalFile(FileMergingSnapshotManagerBase.java:574)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.discardLogicalFiles(FileMergingSnapshotManagerBase.java:588)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.FileMergingSnapshotManagerBase.notifyCheckpointAborted(FileMergingSnapshotManagerBase.java:490)
>   at 
> org.apache.flink.runtime.checkpoint.filemerging.WithinCheckpointFileMergingSnapshotManager.notifyCheckpointAborted(WithinCheckpointFileMergingSnapshotManager.java:61)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyFileMergingSnapshotManagerCheckpoint(SubtaskCheckpointCoordinatorImpl.java:505)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpoint(SubtaskCheckpointCoordinatorImpl.java:490)
>   at 
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.notifyCheckpointAborted(SubtaskCheckpointCoordinatorImpl.java:414)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointAbortAsync$21(StreamTask.java:1513)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$notifyCheckpointOperation$23(StreamTask.java:1536)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsWhenDefaultActionUnavailable(MailboxProcessor.java:367)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:352)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:998)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:923)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=59821=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=8538





Re: [PR] [hotfix] Fix modification conflict between FLINK-35465 and FLINK-35359 [flink]

2024-05-27 Thread via GitHub


zhuzhurk merged PR #24850:
URL: https://github.com/apache/flink/pull/24850





[jira] [Assigned] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-27 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei reassigned FLINK-35457:
--

Assignee: Yanfei Lei

> EventTimeWindowCheckpointingITCase fails on AZP as NPE
> --
>
> Key: FLINK-35457
> URL: https://issues.apache.org/jira/browse/FLINK-35457
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Assignee: Yanfei Lei
>Priority: Major
>  Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35457) EventTimeWindowCheckpointingITCase fails on AZP as NPE

2024-05-27 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849864#comment-17849864
 ] 

Yanfei Lei commented on FLINK-35457:


Merged into master via 1e996b8

> EventTimeWindowCheckpointingITCase fails on AZP as NPE
> --
>
> Key: FLINK-35457
> URL: https://issues.apache.org/jira/browse/FLINK-35457
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20.0
>Reporter: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode [flink]

2024-05-27 Thread via GitHub


lsyldliu commented on code in PR #24849:
URL: https://github.com/apache/flink/pull/24849#discussion_r1616524117


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java:
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.util.Objects;
+
+/**
+ * The {@link IntervalFreshness} represents the freshness definition of {@link
+ * CatalogMaterializedTable}. It encapsulates the string interval value along with the time unit,
+ * allowing for flexible representation of different freshness types. Moreover, it can provide
+ * detailed raw information for some specific operations.
+ */
+@PublicEvolving
+public class IntervalFreshness {
+
+private final String interval;
+private final TimeUnit timeUnit;
+
+public IntervalFreshness(String interval, TimeUnit timeUnit) {
+this.interval = interval;

Review Comment:
   good point



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35457][checkpoint] Hotfix! close physical file under the protection of lock [flink]

2024-05-27 Thread via GitHub


fredia merged PR #24846:
URL: https://github.com/apache/flink/pull/24846


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Fix modification conflict between FLINK-35465 and FLINK-35359 [flink]

2024-05-27 Thread via GitHub


flinkbot commented on PR #24850:
URL: https://github.com/apache/flink/pull/24850#issuecomment-2134269337

   
   ## CI report:
   
   * 90c1170d07dea3e1373a34fe245e4ec7ccdab592 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849862#comment-17849862
 ] 

Leonard Xu edited comment on FLINK-35447 at 5/28/24 2:58 AM:
-

master: d97124fb9f531193d8b232862af44e6bcca03277
release-3.1: f47a95629e077f4e4f14d81c429788042ff39bf8
release-3.0: 93d82c327d2ac86b07d295ed64b74c22967435ba



was (Author: leonard xu):
master: d97124fb9f531193d8b232862af44e6bcca03277

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-35447.

Resolution: Implemented

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [BP-3.1][FLINK-35447][doc-ci] Flink CDC Document document file had removed but website can access [flink-cdc]

2024-05-27 Thread via GitHub


leonardBang merged PR #3367:
URL: https://github.com/apache/flink-cdc/pull/3367


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [BP-3.0][FLINK-35447][doc-ci] Flink CDC Document document file had removed but website can access [flink-cdc]

2024-05-27 Thread via GitHub


leonardBang merged PR #3368:
URL: https://github.com/apache/flink-cdc/pull/3368


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849862#comment-17849862
 ] 

Leonard Xu commented on FLINK-35447:


master: d97124fb9f531193d8b232862af44e6bcca03277

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-35447:
---
Fix Version/s: cdc-3.2.0
   cdc-3.1.1

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: cdc-3.2.0, cdc-3.1.1
>
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-35447:
---
Affects Version/s: cdc-3.1.0

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-27 Thread via GitHub


hackergin commented on code in PR #24844:
URL: https://github.com/apache/flink/pull/24844#discussion_r1616519135


##
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpointMaterializedTableITCase.java:
##
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.gateway.rest;
+
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.gateway.AbstractMaterializedTableStatementITCase;
+import org.apache.flink.table.gateway.api.operation.OperationHandle;
+import 
org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler;
+import 
org.apache.flink.table.gateway.rest.header.materializedtable.RefreshMaterializedTableHeaders;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableParameters;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableRequestBody;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableResponseBody;
+import 
org.apache.flink.table.gateway.rest.util.SqlGatewayRestEndpointExtension;
+import org.apache.flink.table.gateway.rest.util.TestingRestClient;
+import org.apache.flink.table.planner.factories.TestValuesTableFactory;
+import org.apache.flink.types.Row;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Order;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.RegisterExtension;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+
+import static 
org.apache.flink.table.gateway.rest.util.TestingRestClient.getTestingRestClient;
+import static 
org.apache.flink.table.gateway.service.utils.SqlGatewayServiceTestUtil.fetchAllResults;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Test basic logic of handlers inherited from {@link 
AbstractSqlGatewayRestHandler} in materialized
+ * table related cases.
+ */
+public class SqlGatewayRestEndpointMaterializedTableITCase
+extends AbstractMaterializedTableStatementITCase {
+
+private static TestingRestClient restClient;
+
+@RegisterExtension
+@Order(4)
+private static final SqlGatewayRestEndpointExtension 
SQL_GATEWAY_REST_ENDPOINT_EXTENSION =
+new 
SqlGatewayRestEndpointExtension(SQL_GATEWAY_SERVICE_EXTENSION::getService);
+
+@BeforeAll
+static void setup() throws Exception {
+restClient = getTestingRestClient();
+}
+
+@Test
+void testRefreshMaterializedTable() throws Exception {

Review Comment:
   Since this is only used to verify the functionality of the REST API, I have 
included two scenarios in testRefreshMaterializedTableWithPeriodSchedule that 
check whether the partitioner formatter is configured.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35447) Flink CDC Document document file had removed but website can access

2024-05-27 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-35447:
--

Assignee: Zhongqiang Gong

> Flink CDC Document document file had removed but website can access
> ---
>
> Key: FLINK-35447
> URL: https://issues.apache.org/jira/browse/FLINK-35447
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Reporter: Zhongqiang Gong
>Assignee: Zhongqiang Gong
>Priority: Minor
>  Labels: pull-request-available
>
> https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/overview/
>  the link should not have appeared.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35447][doc-ci] Flink CDC Document document file had removed but website can access [flink-cdc]

2024-05-27 Thread via GitHub


leonardBang merged PR #3362:
URL: https://github.com/apache/flink-cdc/pull/3362


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix] Fix modification conflict between FLINK-35465 and FLINK-35359 [flink]

2024-05-27 Thread via GitHub


zoltar9264 opened a new pull request, #24850:
URL: https://github.com/apache/flink/pull/24850

   ## What is the purpose of the change
   
   As the title says, fix the compile error caused by the modification conflict 
between FLINK-35465 and FLINK-35359.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:(no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-27 Thread via GitHub


hackergin commented on code in PR #24844:
URL: https://github.com/apache/flink/pull/24844#discussion_r1616517183


##
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java:
##
@@ -909,47 +694,172 @@ void testDropMaterializedTable(@InjectClusterClient 
RestClusterClient restClu
 .asSerializableString()));
 }
 
-private SessionHandle initializeSession() {
-SessionHandle sessionHandle = 
service.openSession(defaultSessionEnvironment);
-String catalogDDL =
-String.format(
-"CREATE CATALOG %s\n"
-+ "WITH (\n"
-+ "  'type' = 'test-filesystem',\n"
-+ "  'path' = '%s',\n"
-+ "  'default-database' = '%s'\n"
-+ "  )",
-fileSystemCatalogName, fileSystemCatalogPath, 
TEST_DEFAULT_DATABASE);
-service.configureSession(sessionHandle, catalogDDL, -1);
-service.configureSession(
-sessionHandle, String.format("USE CATALOG %s", 
fileSystemCatalogName), -1);
-
-// create source table
-String dataGenSource =
-"CREATE TABLE datagenSource (\n"
-+ "  order_id BIGINT,\n"
-+ "  order_number VARCHAR(20),\n"
-+ "  user_id BIGINT,\n"
-+ "  shop_id BIGINT,\n"
-+ "  product_id BIGINT,\n"
-+ "  status BIGINT,\n"
-+ "  order_type BIGINT,\n"
-+ "  order_created_at TIMESTAMP,\n"
-+ "  payment_amount_cents BIGINT\n"
-+ ")\n"
-+ "WITH (\n"
-+ "  'connector' = 'datagen',\n"
-+ "  'rows-per-second' = '10'\n"
-+ ")";
-service.configureSession(sessionHandle, dataGenSource, -1);
-return sessionHandle;
+@Test
+void testRefreshMaterializedTable() throws Exception {
+long timeout = Duration.ofSeconds(20).toMillis();
+long pause = Duration.ofSeconds(2).toMillis();
+
+List data = new ArrayList<>();
+data.add(Row.of(1L, 1L, 1L, "2024-01-01"));
+data.add(Row.of(2L, 2L, 2L, "2024-01-01"));
+data.add(Row.of(3L, 3L, 3L, "2024-01-02"));
+data.add(Row.of(4L, 4L, 4L, "2024-01-02"));
+data.add(Row.of(5L, 5L, 5L, "2024-01-03"));
+data.add(Row.of(6L, 6L, 6L, "2024-01-03"));
+String dataId = TestValuesTableFactory.registerData(data);
+
+
createAndVerifyCreateMaterializedTableWithData("my_materialized_table", dataId, 
data);
+
+// remove element of partition '2024-01-02'
+removePartitionValue(data, "2024-01-02");
+
+// refresh the materialized table with static partition
+long startTime = System.currentTimeMillis();
+Map staticPartitions = new HashMap<>();
+staticPartitions.put("ds", "2024-01-02");
+ObjectIdentifier objectIdentifier =
+ObjectIdentifier.of(
+fileSystemCatalogName, TEST_DEFAULT_DATABASE, 
"my_materialized_table");
+OperationHandle refreshTableHandle =
+service.refreshMaterializedTable(
+sessionHandle,
+objectIdentifier.asSerializableString(),
+false,
+null,
+Collections.emptyMap(),
+staticPartitions,
+Collections.emptyMap());
+
+awaitOperationTermination(service, sessionHandle, refreshTableHandle);
+List result = fetchAllResults(service, sessionHandle, 
refreshTableHandle);
+assertThat(result.size()).isEqualTo(1);
+String jobId = result.get(0).getString(0).toString();
+
+// 1. verify fresh job created
+verifyRefreshJobCreated(restClusterClient, jobId, startTime);
+
+// 2. verify the new job overwrite the data
+CommonTestUtils.waitUtil(
+() ->
+fetchTableData(sessionHandle, "SELECT * FROM 
my_materialized_table").size()
+== data.size(),
+Duration.ofMillis(timeout),
+Duration.ofMillis(pause),
+"Failed to verify the data in materialized table.");
+assertThat(
+fetchTableData(
+sessionHandle,
+"SELECT * FROM my_materialized_table 
where ds = '2024-01-02'")
+.size())
+.isEqualTo(1);
+
+// remove element of partition '2024-01-03' and '2024-01-01'
+// 

Re: [PR] [FLINK-35129][cdc][postgres] Add checkpoint cycle option for commit offsets [flink-cdc]

2024-05-27 Thread via GitHub


loserwang1024 commented on PR #3349:
URL: https://github.com/apache/flink-cdc/pull/3349#issuecomment-2134261272

   LGTM! @leonardBang @ruanhang1993, CC. WDYT of the option name 
`checkpoint.cycle`?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode [flink]

2024-05-27 Thread via GitHub


hackergin commented on code in PR #24849:
URL: https://github.com/apache/flink/pull/24849#discussion_r161654


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java:
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.util.Objects;
+
+/**
+ * The {@link IntervalFreshness} represents the freshness definition of {@link
+ * CatalogMaterializedTable}. It encapsulates the string interval value along with the time unit,
+ * allowing for flexible representation of different freshness types. Moreover, it can provide
+ * detailed raw information for some specific operations.
+ */
+@PublicEvolving
+public class IntervalFreshness {
+
+private final String interval;
+private final TimeUnit timeUnit;
+
+public IntervalFreshness(String interval, TimeUnit timeUnit) {
+this.interval = interval;

Review Comment:
   For constructors, I think we can provide factory methods like Duration does, 
such as seconds, hours, and days: Duration.ofSeconds, Duration.ofHours, 
Duration.ofDays.
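
   A minimal sketch of the Duration-style factories suggested above. The enum 
values, method names, and class shape here are assumptions made for 
illustration, not Flink's actual IntervalFreshness API:

   ```java
   public class IntervalFreshness {
       public enum TimeUnit { SECOND, MINUTE, HOUR, DAY }

       private final String interval;
       private final TimeUnit timeUnit;

       // Private constructor: callers go through the factory methods below,
       // mirroring java.time.Duration.ofSeconds / ofHours / ofDays.
       private IntervalFreshness(String interval, TimeUnit timeUnit) {
           this.interval = interval;
           this.timeUnit = timeUnit;
       }

       public static IntervalFreshness ofSeconds(String interval) {
           return new IntervalFreshness(interval, TimeUnit.SECOND);
       }

       public static IntervalFreshness ofHours(String interval) {
           return new IntervalFreshness(interval, TimeUnit.HOUR);
       }

       public static IntervalFreshness ofDays(String interval) {
           return new IntervalFreshness(interval, TimeUnit.DAY);
       }

       public String getInterval() { return interval; }

       public TimeUnit getTimeUnit() { return timeUnit; }

       public static void main(String[] args) {
           IntervalFreshness freshness = IntervalFreshness.ofSeconds("30");
           System.out.println(freshness.getInterval() + " " + freshness.getTimeUnit());
           // prints "30 SECOND"
       }
   }
   ```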



##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/catalog/CatalogBaseTableResolutionTest.java:
##
@@ -424,7 +424,8 @@ private static Map 
catalogMaterializedTableAsProperties() throws
 properties.put("schema.3.comment", "");
 properties.put("schema.primary-key.name", "primary_constraint");
 properties.put("schema.primary-key.columns", "id");
-properties.put("freshness", "PT30S");
+properties.put("freshness-interval", "30");

Review Comment:
   The configuration built here corresponds to the testPropertyDeSerialization 
test, which can also be modified.



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/catalog/IntervalFreshness.java:
##
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+import java.util.Objects;
+
+/**
+ * The {@link IntervalFreshness} represents the freshness definition of {@link
+ * CatalogMaterializedTable}. It encapsulates the string interval value along 
with the time unit,
+ * allowing for flexible representation of different freshness types. Moreover, 
it can provide
+ * detailed raw information for some specific operations.
+ */
+@PublicEvolving
+public class IntervalFreshness {
+
+private final String interval;
+private final TimeUnit timeUnit;
+
+public IntervalFreshness(String interval, TimeUnit timeUnit) {

Review Comment:
   I'm curious why it's a String here. If it were an int, we could validate the 
legality of the interval before converting it into IntervalFreshness.
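Even if the field stays a String, the interval's legality can still be checked eagerly before constructing the object. A minimal sketch; the helper class/method names (`FreshnessIntervals`, `validateInterval`) and the error messages are assumptions, not existing Flink code.

```java
// Hypothetical eager validation of a String freshness interval.
// The class and method names are assumptions for illustration.
final class FreshnessIntervals {
    private FreshnessIntervals() {}

    static int validateInterval(String interval) {
        final int value;
        try {
            value = Integer.parseInt(interval);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                    "Freshness interval must be an integer, got: " + interval, e);
        }
        if (value <= 0) {
            throw new IllegalArgumentException(
                    "Freshness interval must be positive, got: " + interval);
        }
        return value;
    }
}
```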



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-35360) [Feature] Submit Flink CDC pipeline job yarn Application mode

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849856#comment-17849856
 ] 

wangjunbo edited comment on FLINK-35360 at 5/28/24 2:38 AM:


discuss in FLINK-34853 and FLINK-34904


was (Author: kwafor):
discuss in FLINK-34853 and FLINK-34904

> [Feature] Submit Flink CDC pipeline job yarn Application mode
> -
>
> Key: FLINK-35360
> URL: https://issues.apache.org/jira/browse/FLINK-35360
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.2.0
>Reporter: wangjunbo
>Priority: Minor
>  Labels: pull-request-available
>
> For now, the flink-cdc pipeline supports CLI submission in yarn session mode. 
> I'm willing to support yarn application mode submission.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34904) [Feature] submit Flink CDC pipeline job to yarn application cluster.

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849860#comment-17849860
 ] 

wangjunbo edited comment on FLINK-34904 at 5/28/24 2:37 AM:


[~czy006] Looking forward to the realization of this and issue FLINK-35360, if 
possible I'm willing to investigate yarn application mode after 
[https://github.com/apache/flink-cdc/pull/3093].


was (Author: kwafor):
[~czy006] Looking forward to the realization of this(issue FLINK-35360), if 
possible I'm willing to investigate yarn application mode after 
[https://github.com/apache/flink-cdc/pull/3093].

> [Feature] submit Flink CDC pipeline job to yarn application cluster.
> 
>
> Key: FLINK-34904
> URL: https://issues.apache.org/jira/browse/FLINK-34904
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: ZhengYu Chen
>Priority: Minor
> Fix For: cdc-3.2.0
>
>
> Support flink cdc cli submission of pipeline jobs to a yarn application 
> cluster. Discussed in FLINK-34853.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34904) [Feature] submit Flink CDC pipeline job to yarn application cluster.

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849860#comment-17849860
 ] 

wangjunbo edited comment on FLINK-34904 at 5/28/24 2:36 AM:


[~czy006] Looking forward to the realization of this(issue FLINK-35360), if 
possible I'm willing to investigate yarn application mode after 
[https://github.com/apache/flink-cdc/pull/3093].


was (Author: kwafor):
[~czy006] Looking forward to the realization of this(duplicate issue 
FLINK-35360), if possible I'm willing to investigate this after 
[https://github.com/apache/flink-cdc/pull/3093].

> [Feature] submit Flink CDC pipeline job to yarn application cluster.
> 
>
> Key: FLINK-34904
> URL: https://issues.apache.org/jira/browse/FLINK-34904
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: ZhengYu Chen
>Priority: Minor
> Fix For: cdc-3.2.0
>
>
> Support flink cdc cli submission of pipeline jobs to a yarn application 
> cluster. Discussed in FLINK-34853.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34904) [Feature] submit Flink CDC pipeline job to yarn application cluster.

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849860#comment-17849860
 ] 

wangjunbo edited comment on FLINK-34904 at 5/28/24 2:35 AM:


[~czy006] Looking forward to the realization of this(duplicate issue 
FLINK-35360), if possible I'm willing to investigate this after 
[https://github.com/apache/flink-cdc/pull/3093].


was (Author: kwafor):
[~czy006] Looking forward to the realization of this(/FLINK-35360), if possible 
I'm willing to investigate this after 
https://github.com/apache/flink-cdc/pull/3093.

> [Feature] submit Flink CDC pipeline job to yarn application cluster.
> 
>
> Key: FLINK-34904
> URL: https://issues.apache.org/jira/browse/FLINK-34904
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: ZhengYu Chen
>Priority: Minor
> Fix For: cdc-3.2.0
>
>
> Support flink cdc cli submission of pipeline jobs to a yarn application 
> cluster. Discussed in FLINK-34853.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34904) [Feature] submit Flink CDC pipeline job to yarn application cluster.

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849860#comment-17849860
 ] 

wangjunbo commented on FLINK-34904:
---

[~czy006] Looking forward to the realization of this(/FLINK-35360), if possible 
I'm willing to investigate this after 
https://github.com/apache/flink-cdc/pull/3093.

> [Feature] submit Flink CDC pipeline job to yarn application cluster.
> 
>
> Key: FLINK-34904
> URL: https://issues.apache.org/jira/browse/FLINK-34904
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: ZhengYu Chen
>Priority: Minor
> Fix For: cdc-3.2.0
>
>
> Support flink cdc cli submission of pipeline jobs to a yarn application 
> cluster. Discussed in FLINK-34853.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35457][checkpoint] Hotfix! close physical file under the protection of lock [flink]

2024-05-27 Thread via GitHub


fredia commented on code in PR #24846:
URL: https://github.com/apache/flink/pull/24846#discussion_r1616500402


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/filemerging/FileMergingSnapshotManagerBase.java:
##
@@ -131,19 +131,22 @@ public abstract class FileMergingSnapshotManagerBase 
implements FileMergingSnaps
 /**
  * The {@link DirectoryStreamStateHandle} for shared state directories, 
one for each subtask.
  */
+@GuardedBy("lock")

Review Comment:
   Thanks for the suggestion, removed `@GuardedBy("lock")`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34456][configuration]Move all checkpoint-related options into CheckpointingOptions [flink]

2024-05-27 Thread via GitHub


spoon-lz commented on PR #24374:
URL: https://github.com/apache/flink/pull/24374#issuecomment-2134232797

   @Zakelly @masteryhx Sorry, I made the mistake of committing to the wrong 
branch; the code has been updated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-35360) [Feature] Submit Flink CDC pipeline job yarn Application mode

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849856#comment-17849856
 ] 

wangjunbo edited comment on FLINK-35360 at 5/28/24 2:04 AM:


discuss in FLINK-34853 and FLINK-34904


was (Author: kwafor):
discuss in FLINK-34853 and FLINK-34904

> [Feature] Submit Flink CDC pipeline job yarn Application mode
> -
>
> Key: FLINK-35360
> URL: https://issues.apache.org/jira/browse/FLINK-35360
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.2.0
>Reporter: wangjunbo
>Priority: Minor
>  Labels: pull-request-available
>
> For now, the flink-cdc pipeline supports CLI submission in yarn session mode. 
> I'm willing to support yarn application mode submission.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35360] Submit Flink CDC pipeline job yarn Application mode [flink-cdc]

2024-05-27 Thread via GitHub


beryllw commented on PR #3366:
URL: https://github.com/apache/flink-cdc/pull/3366#issuecomment-2134231318

   k8s application mode pr: https://github.com/apache/flink-cdc/pull/3093/files


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35360) [Feature] Submit Flink CDC pipeline job yarn Application mode

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849856#comment-17849856
 ] 

wangjunbo commented on FLINK-35360:
---

discuss in FLINK-34853 and FLINK-34904

> [Feature] Submit Flink CDC pipeline job yarn Application mode
> -
>
> Key: FLINK-35360
> URL: https://issues.apache.org/jira/browse/FLINK-35360
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.2.0
>Reporter: wangjunbo
>Priority: Minor
>  Labels: pull-request-available
>
> For now, the flink-cdc pipeline supports CLI submission in yarn session mode. 
> I'm willing to support yarn application mode submission.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35460) Check file size when position read for ForSt

2024-05-27 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-35460.
--
Fix Version/s: 1.20.0
   Resolution: Fixed

Merged 9fbe9f30 into master.

> Check file size when position read for ForSt
> 
>
> Key: FLINK-35460
> URL: https://issues.apache.org/jira/browse/FLINK-35460
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35460][state] Adjust read size when ByteBuffer size is larger than file size for ForSt [flink]

2024-05-27 Thread via GitHub


masteryhx closed pull request #24847: [FLINK-35460][state] Adjust read size 
when ByteBuffer size is larger than file size for ForSt
URL: https://github.com/apache/flink/pull/24847


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35121) CDC pipeline connector should verify requiredOptions and optionalOptions

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849854#comment-17849854
 ] 

wangjunbo commented on FLINK-35121:
---

[~loserwang1024] I'm willing to take this.

> CDC pipeline connector should verify requiredOptions and optionalOptions
> 
>
> Key: FLINK-35121
> URL: https://issues.apache.org/jira/browse/FLINK-35121
> Project: Flink
>  Issue Type: Improvement
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: Hongshun Wang
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> At present, though we provide 
> org.apache.flink.cdc.common.factories.Factory#requiredOptions and 
> org.apache.flink.cdc.common.factories.Factory#optionalOptions, neither is 
> used anywhere, which means requiredOptions and optionalOptions are never 
> verified.
> Thus, like what DynamicTableFactory does, we should provide a 
> FactoryHelper to help verify requiredOptions and optionalOptions.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34595) Fix ClassNotFoundException: com.ververica.cdc.common.utils.StringUtils

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849852#comment-17849852
 ] 

wangjunbo edited comment on FLINK-34595 at 5/28/24 1:54 AM:


[~kunni] Looks like closed by [https://github.com/apache/flink-cdc/pull/3118].


was (Author: kwafor):
Looks like closed by https://github.com/apache/flink-cdc/pull/3118.

> Fix ClassNotFoundException: com.ververica.cdc.common.utils.StringUtils
> --
>
> Key: FLINK-34595
> URL: https://issues.apache.org/jira/browse/FLINK-34595
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: LvYanquan
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> In [this 
> pr|https://github.com/apache/flink-cdc/pull/2986/files#diff-cec13810c47e9465e4f2a72507f655b86f41579768b9924fe024aabc60b31d17R21],
>  we introduced the 
> org.apache.flink.cdc.common.utils.StringUtils class of the flink-cdc-common 
> module in the flink-connector-mysql-cdc module.
> However, the sub-module flink-sql-connector-mysql-cdc doesn't include the 
> flink-cdc-common module when packaging, so we can't find this class in the 
> sql jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-35465) Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster failures.

2024-05-27 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-35465.
---
Resolution: Done

master:
b8173eb662ee5823de40de356869d0064de2c22a
3206659db5b7c4ce645072f11f091e0e9e92b0ce
e964af392476e011147be73ae4dab8ff89512994

> Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster 
> failures.
> -
>
> Key: FLINK-35465
> URL: https://issues.apache.org/jira/browse/FLINK-35465
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35465) Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster failures.

2024-05-27 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-35465:

Fix Version/s: 1.20.0

> Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster 
> failures.
> -
>
> Key: FLINK-35465
> URL: https://issues.apache.org/jira/browse/FLINK-35465
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34595) Fix ClassNotFoundException: com.ververica.cdc.common.utils.StringUtils

2024-05-27 Thread wangjunbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849852#comment-17849852
 ] 

wangjunbo commented on FLINK-34595:
---

Looks like closed by https://github.com/apache/flink-cdc/pull/3118.

> Fix ClassNotFoundException: com.ververica.cdc.common.utils.StringUtils
> --
>
> Key: FLINK-34595
> URL: https://issues.apache.org/jira/browse/FLINK-34595
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Reporter: LvYanquan
>Priority: Major
> Fix For: cdc-3.2.0
>
>
> In [this 
> pr|https://github.com/apache/flink-cdc/pull/2986/files#diff-cec13810c47e9465e4f2a72507f655b86f41579768b9924fe024aabc60b31d17R21],
>  we introduced the 
> org.apache.flink.cdc.common.utils.StringUtils class of the flink-cdc-common 
> module in the flink-connector-mysql-cdc module.
> However, the sub-module flink-sql-connector-mysql-cdc doesn't include the 
> flink-cdc-common module when packaging, so we can't find this class in the 
> sql jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35465) Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster failures.

2024-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35465:
---
Labels: pull-request-available  (was: )

> Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster 
> failures.
> -
>
> Key: FLINK-35465
> URL: https://issues.apache.org/jira/browse/FLINK-35465
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35465][runtime] Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster failures. [flink]

2024-05-27 Thread via GitHub


zhuzhurk merged PR #24771:
URL: https://github.com/apache/flink/pull/24771


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-35465) Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster failures.

2024-05-27 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-35465:
---

Assignee: Junrui Li

> Introduce BatchJobRecoveryHandler for recovery of batch jobs from JobMaster 
> failures.
> -
>
> Key: FLINK-35465
> URL: https://issues.apache.org/jira/browse/FLINK-35465
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-05-27 Thread via GitHub


RocMarshal commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1616483112


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -93,16 +90,18 @@ spec:
 ```
 
 {{< hint info >}}
-When using the operator with Flink native Kubernetes integration, please refer 
to [pod template field precedence](
-https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink).
+当使用具有 Flink 原生 Kubernetes 集成的 operator 时,请参考 [pod template 字段优先级](
+https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#fields-overwritten-by-flink)。
 {{< /hint >}}
 
+
 ## Array Merging Behaviour
 
-When layering pod templates (defining both a top level and jobmanager specific 
podtemplate for example) the corresponding yamls are merged together.
+
+
+当分层 pod templates(例如同时定义顶级和任务管理器特定的 pod 模板)时,相应的 yaml 会合并在一起。
 
-The default behaviour of the pod template mechanism is to merge array arrays 
by merging the objects in the respective array positions.
-This requires that containers in the podTemplates are defined in the same 
order otherwise the results may be undefined.
+默认的 pod 模板机制行为是通过合并相应数组位置的对象来合并数组数组。

Review Comment:
   why '数组数组'?
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-35448] Translate pod templates documentation into Chinese [flink-kubernetes-operator]

2024-05-27 Thread via GitHub


RocMarshal commented on code in PR #830:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/830#discussion_r1616482602


##
docs/content/docs/custom-resource/pod-template.md:
##
@@ -26,21 +26,18 @@ under the License.
 
 # Pod template
 
-The operator CRD is designed to have a minimal set of direct, short-hand CRD 
settings to express the most
-basic attributes of a deployment. For all other settings the CRD provides the 
`flinkConfiguration` and
-`podTemplate` fields.
+
 
-Pod templates permit customization of the Flink job and task manager pods, for 
example to specify
-volume mounts, ephemeral storage, sidecar containers etc.
+Operator CRD 被设计为一组直接、简短的 CRD 设置,以表达部署的最基本属性。对于所有其他设置,CRD 提供了 
`flinkConfiguration` 和 `podTemplate` 字段。
 
-Pod templates can be layered, as shown in the example below.
-A common pod template may hold the settings that apply to both job and task 
manager,
-like `volumeMounts`. Another template under job or task manager can define 
additional settings that supplement or override those
-in the common template, such as a task manager sidecar.
+Pod templates 保证了 Flink 作业和任务管理器 pod 的自定义,例如指定卷挂载、临时存储、边车容器等。
 
-The operator is going to merge the common and specific templates for job and 
task manager respectively.
+Pod template 可以被分层,如下面的示例所示。
+一个通用的 pod template 可以保存适用于作业和任务管理器的设置,比如 
`volumeMounts`。作业或任务管理器下的另一个模板可以定义补充或覆盖通用模板中的其他设置,比如任务管理器边车。

Review Comment:
   why "边车" ?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-35464) Flink CDC 3.1 breaks operator state compatiblity

2024-05-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-35464:
---
Labels: pull-request-available  (was: )

> Flink CDC 3.1 breaks operator state compatiblity
> 
>
> Key: FLINK-35464
> URL: https://issues.apache.org/jira/browse/FLINK-35464
> Project: Flink
>  Issue Type: Bug
>  Components: Flink CDC
>Affects Versions: cdc-3.1.0
>Reporter: yux
>Assignee: yux
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: cdc-3.1.1
>
>
> Flink CDC 3.1 changed how SchemaRegistry [de]serializes state data, which 
> means any checkpoint state saved with an earlier version cannot be restored 
> in version 3.1.0.
> This could be resolved by adding serialization versioning control logic.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-35068][core][type] Introduce built-in serialization support for java.util.Set [flink]

2024-05-27 Thread via GitHub


X-czh commented on PR #24845:
URL: https://github.com/apache/flink/pull/24845#issuecomment-2134201750

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-35464] Fixes operator state backwards compatibility from CDC 3.0.x [flink-cdc]

2024-05-27 Thread via GitHub


yuxiqian opened a new pull request, #3369:
URL: https://github.com/apache/flink-cdc/pull/3369

   This closes FLINK-35441 and FLINK-35464.
   
   Flink CDC 3.1 changes how SchemaRegistry [de]serializes state data, which 
means any checkpoint state saved with an earlier version cannot be restored 
in version 3.1.0. This PR adds serialization versioning for state payloads and 
ensures 3.0.x state can be successfully restored.
   
   Unfortunately, 3.1.0 introduced breaking changes without bumping the 
serialization version, so that release will be excluded from the state 
compatibility guarantee.
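Serialization versioning of the kind described here is usually done by writing a version number ahead of the payload, so the read path can dispatch on it. The following is a minimal sketch under assumed names (`VersionedStateSerializer`, the version constants, and the payload layout are all illustrative); it is not the actual SchemaRegistry wire format.

```java
// Minimal sketch of version-prefixed state [de]serialization.
// The class name, version constants, and payload layout are assumptions,
// not the actual Flink CDC SchemaRegistry wire format.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

class VersionedStateSerializer {
    static final int VERSION_0 = 0; // hypothetical legacy layout
    static final int VERSION_1 = 1; // hypothetical current layout

    byte[] serialize(String payload) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(VERSION_1); // version prefix enables future migrations
            out.writeUTF(payload);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    String deserialize(byte[] bytes) {
        try (DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(bytes))) {
            int version = in.readInt();
            switch (version) {
                case VERSION_0: // a real implementation would branch to the
                                // legacy read path here; sketch falls through
                case VERSION_1:
                    return in.readUTF();
                default:
                    throw new UncheckedIOException(
                            new IOException("Unknown state version: " + version));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```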


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35438) SourceCoordinatorTest.testErrorThrownFromSplitEnumerator fails on wrong error

2024-05-27 Thread Rob Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849850#comment-17849850
 ] 

Rob Young commented on FLINK-35438:
---

I agree there's a race that's hard to reproduce; I can only provoke it by 
adding a thread sleep at the spot where it can occur in 
`MockOperatorCoordinatorContext`:


{code:java}
@Override
public void failJob(Throwable cause) {
    jobFailed = true;
    try {
        // Sleep injected for the repro, to widen the window between the
        // jobFailed and jobFailureReason writes.
        Thread.sleep(50);
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    jobFailureReason = cause;
    jobFailedFuture.complete(null);
}

public boolean isJobFailed() {
    return jobFailed;
}

public Throwable getJobFailureReason() {
    return jobFailureReason;
}{code}
If getJobFailureReason() is called between jobFailed and jobFailureReason being 
assigned then the test thread can unexpectedly observe a null jobFailureReason 
while jobFailed is true.

The same race is present in master

Happy to contribute a fix if someone could please assign me to the ticket
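One way to close that window, assuming the fix lands in `MockOperatorCoordinatorContext`, is to publish the failure reason before the flag and complete the future last. The class name and exact field set below are a sketch of that ordering, not the actual committed fix:

```java
// Sketch of a race-free ordering: write the reason before the flag, and
// make both fields volatile so another thread observes them in order.
// Illustrative only; not the actual MockOperatorCoordinatorContext code.
import java.util.concurrent.CompletableFuture;

class SafeFailureContext {
    private volatile boolean jobFailed;
    private volatile Throwable jobFailureReason;
    private final CompletableFuture<Void> jobFailedFuture = new CompletableFuture<>();

    public void failJob(Throwable cause) {
        jobFailureReason = cause; // publish the reason first...
        jobFailed = true;         // ...then the flag the test thread polls
        jobFailedFuture.complete(null);
    }

    public boolean isJobFailed() {
        return jobFailed;
    }

    public Throwable getJobFailureReason() {
        return jobFailureReason;
    }
}
```

With this ordering, any thread that sees `isJobFailed() == true` is guaranteed to also see a non-null failure reason.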

> SourceCoordinatorTest.testErrorThrownFromSplitEnumerator fails on wrong error
> -
>
> Key: FLINK-35438
> URL: https://issues.apache.org/jira/browse/FLINK-35438
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.2
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> * 1.18 Java 11 / Test (module: core) 
> https://github.com/apache/flink/actions/runs/9201159842/job/25309197630#step:10:7375
> We expect to see an artificial {{Error("Test Error")}} being reported in the 
> test as the cause of a job failure, but the reported job failure is null:
> {code}
> Error: 02:32:31 02:32:31.950 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 0.187 s <<< FAILURE! - in 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest
> Error: 02:32:31 02:32:31.950 [ERROR] 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator
>   Time elapsed: 0.01 s  <<< FAILURE!
> May 23 02:32:31 org.opentest4j.AssertionFailedError: 
> May 23 02:32:31 
> May 23 02:32:31 expected: 
> May 23 02:32:31   java.lang.Error: Test Error
> May 23 02:32:31   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:296)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 23 02:32:31   ...(57 remaining lines not displayed - this can be 
> changed with Assertions.setMaxStackTraceElementsDisplayed)
> May 23 02:32:31  but was: 
> May 23 02:32:31   null
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> May 23 02:32:31   at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorTest.testErrorThrownFromSplitEnumerator(SourceCoordinatorTest.java:322)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 23 02:32:31   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 23 02:32:31   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> May 23 02:32:31   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
> May 23 02:32:31   at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
> May 23 02:32:31  

[jira] [Created] (FLINK-35467) The flink-cdc.sh script of flink-cdc-3.1.0 cannot read the correct Flink configuration information specified by the flink-conf.yaml through the FLINK_CONF_DIR environmen

2024-05-27 Thread Justin.Liu (Jira)
Justin.Liu created FLINK-35467:
--

 Summary: The flink-cdc.sh script of flink-cdc-3.1.0 cannot read 
the correct Flink configuration information specified by the flink-conf.yaml 
through the FLINK_CONF_DIR environment variable.
 Key: FLINK-35467
 URL: https://issues.apache.org/jira/browse/FLINK-35467
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Affects Versions: cdc-3.1.0
 Environment: Environment Information:
flink-1.17.1
flink-cdc-3.1.0
Reporter: Justin.Liu


Problem Description:
When starting our Flink service, we use the FLINK_CONF_DIR environment variable 
to point flink-conf.yaml at a directory other than $FLINK_HOME/conf; the 
flink-conf.yaml under $FLINK_HOME/conf contains incorrect settings.

When submitting CDC tasks with the flink-cdc.sh script of flink-cdc-3.1.0, we 
expected flink-cdc to read the correct flink-conf.yaml via the FLINK_CONF_DIR 
environment variable. Instead, it still reads the configuration from the 
flink-conf.yaml under $FLINK_HOME/conf, causing task submission failures.
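The expected resolution order can be sketched in a few lines of shell (a sketch 
of the convention Flink's own launcher scripts follow; the paths are 
hypothetical and this is not the actual flink-cdc.sh code):

```shell
#!/usr/bin/env bash
# Sketch (assumption): prefer FLINK_CONF_DIR when it is set and non-empty,
# otherwise fall back to the conf directory under FLINK_HOME.
FLINK_HOME=/opt/flink                   # hypothetical install location
export FLINK_CONF_DIR=/etc/flink/conf   # hypothetical external conf dir
CONF_DIR="${FLINK_CONF_DIR:-${FLINK_HOME}/conf}"
echo "reading flink-conf.yaml from: ${CONF_DIR}"
```

With FLINK_CONF_DIR set as above, the script should resolve /etc/flink/conf 
rather than /opt/flink/conf.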



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35418) EventTimeWindowCheckpointingITCase fails with an NPE

2024-05-27 Thread Rob Young (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849841#comment-17849841
 ] 

Rob Young commented on FLINK-35418:
---

looks like the same issue as https://issues.apache.org/jira/browse/FLINK-35457 
being addressed in https://github.com/apache/flink/pull/24846

> EventTimeWindowCheckpointingITCase fails with an NPE
> 
>
> Key: FLINK-35418
> URL: https://issues.apache.org/jira/browse/FLINK-35418
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> * 1.20 Default (Java 8) / Test (module: tests) 
> [https://github.com/apache/flink/actions/runs/9185169193/job/25258948607#step:10:8106]
> It looks like it's possible for PhysicalFile to generate a 
> NullPointerException while a checkpoint is being aborted:
> {code}
> May 22 04:35:18 Starting 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase#testTumblingTimeWindow[statebackend
>  type =ROCKSDB_INCREMENTAL_ZK, buffersPerChannel = 2].
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1287)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
>   at 
> org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
>   at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
>   at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at 
> org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
>   at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
>   at 
> org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
>   at 
> org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
>   at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
>   at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
>   at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
>   at 
> 

[jira] [Resolved] (FLINK-33759) flink parquet writer support write nested array or map type

2024-05-27 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge resolved FLINK-33759.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> flink parquet writer support write nested array or map type
> ---
>
> Key: FLINK-33759
> URL: https://issues.apache.org/jira/browse/FLINK-33759
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Cai Liuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When we use the flink-parquet format to write Map[] type (which will 
> be read by a Spark job), we encounter an exception:
> {code:java}
> // code placeholder
> Caused by: org.apache.parquet.io.ParquetEncodingException: empty fields are 
> illegal, the field should be ommited completely instead
>     at 
> org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:329)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.writeArrayData(ParquetRowDataWriter.java:438)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.write(ParquetRowDataWriter.java:419)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$RowWriter.write(ParquetRowDataWriter.java:471)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter.write(ParquetRowDataWriter.java:81)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataBuilder$ParquetWriteSupport.write(ParquetRowDataBuilder.java:89){code}
> after reviewing the code, we found flink-parquet doesn't support writing 
> nested arrays or maps, because 
> [ArrayWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L437]
>  and 
> [MapWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L391]
>  don't implement the `public void write(ArrayData arrayData, int ordinal) {}` 
> method.





[jira] [Updated] (FLINK-33759) flink parquet writer support write nested array or map type

2024-05-27 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-33759:

Affects Version/s: 1.19.0

> flink parquet writer support write nested array or map type
> ---
>
> Key: FLINK-33759
> URL: https://issues.apache.org/jira/browse/FLINK-33759
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.19.0
>Reporter: Cai Liuyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> When we use the flink-parquet format to write Map[] type (which will 
> be read by a Spark job), we encounter an exception:
> {code:java}
> // code placeholder
> Caused by: org.apache.parquet.io.ParquetEncodingException: empty fields are 
> illegal, the field should be ommited completely instead
>     at 
> org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:329)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.writeArrayData(ParquetRowDataWriter.java:438)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.write(ParquetRowDataWriter.java:419)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$RowWriter.write(ParquetRowDataWriter.java:471)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter.write(ParquetRowDataWriter.java:81)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataBuilder$ParquetWriteSupport.write(ParquetRowDataBuilder.java:89){code}
> after reviewing the code, we found flink-parquet doesn't support writing 
> nested arrays or maps, because 
> [ArrayWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L437]
>  and 
> [MapWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L391]
>  don't implement the `public void write(ArrayData arrayData, int ordinal) {}` 
> method.





[jira] [Commented] (FLINK-33759) flink parquet writer support write nested array or map type

2024-05-27 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849837#comment-17849837
 ] 

Jing Ge commented on FLINK-33759:
-

master: 
https://github.com/apache/flink/commit/57b20051a5aa6426d0a6ded71f5e0d550572428c

> flink parquet writer support write nested array or map type
> ---
>
> Key: FLINK-33759
> URL: https://issues.apache.org/jira/browse/FLINK-33759
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Cai Liuyang
>Priority: Major
>  Labels: pull-request-available
>
> When we use the flink-parquet format to write Map[] type (which will 
> be read by a Spark job), we encounter an exception:
> {code:java}
> // code placeholder
> Caused by: org.apache.parquet.io.ParquetEncodingException: empty fields are 
> illegal, the field should be ommited completely instead
>     at 
> org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:329)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.writeArrayData(ParquetRowDataWriter.java:438)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$ArrayWriter.write(ParquetRowDataWriter.java:419)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter$RowWriter.write(ParquetRowDataWriter.java:471)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataWriter.write(ParquetRowDataWriter.java:81)
>     at 
> org.apache.flink.formats.parquet.row.ParquetRowDataBuilder$ParquetWriteSupport.write(ParquetRowDataBuilder.java:89){code}
> after reviewing the code, we found flink-parquet doesn't support writing 
> nested arrays or maps, because 
> [ArrayWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L437]
>  and 
> [MapWriter|https://github.com/apache/flink/blob/master/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java#L391]
>  don't implement the `public void write(ArrayData arrayData, int ordinal) {}` 
> method.





Re: [PR] [FLINK-33759] [flink-parquet] Add support for nested array with row/map/array type [flink]

2024-05-27 Thread via GitHub


JingGe merged PR #24795:
URL: https://github.com/apache/flink/pull/24795


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Enable async profiler [flink-benchmarks]

2024-05-27 Thread via GitHub


SamBarker commented on PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#issuecomment-2134019437

   > AFAIU, by default profiling is not turned on. One has to activate 
enable-async-profiler profile?
   
   Yes, that's almost right. The profile is enabled automatically by specifying 
the path to the profiler shared library (as opposed to running `-P 
enable-async-profiler`). Async profiler needs to be downloaded manually, and the 
path is platform-specific; I didn't want to tackle automating that unless there 
was some interest in the idea.





Re: [PR] Enable async profiler [flink-benchmarks]

2024-05-27 Thread via GitHub


SamBarker commented on code in PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#discussion_r1616369948


##
pom.xml:
##
@@ -290,25 +291,31 @@ under the License.


${skipTests}

test
-   
${executableJava}
+   
${basedir}/benchmark.sh

Review Comment:
   The interface with Maven is identical, so how the Java env is controlled 
stays consistent. The change is in how Jenkins triggers the benchmarks.






Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-27 Thread via GitHub


hackergin commented on code in PR #24844:
URL: https://github.com/apache/flink/pull/24844#discussion_r1616305580


##
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/service/MaterializedTableStatementITCase.java:
##
@@ -909,47 +694,172 @@ void testDropMaterializedTable(@InjectClusterClient 
RestClusterClient restClu
 .asSerializableString()));
 }
 
-private SessionHandle initializeSession() {
-SessionHandle sessionHandle = 
service.openSession(defaultSessionEnvironment);
-String catalogDDL =
-String.format(
-"CREATE CATALOG %s\n"
-+ "WITH (\n"
-+ "  'type' = 'test-filesystem',\n"
-+ "  'path' = '%s',\n"
-+ "  'default-database' = '%s'\n"
-+ "  )",
-fileSystemCatalogName, fileSystemCatalogPath, 
TEST_DEFAULT_DATABASE);
-service.configureSession(sessionHandle, catalogDDL, -1);
-service.configureSession(
-sessionHandle, String.format("USE CATALOG %s", 
fileSystemCatalogName), -1);
-
-// create source table
-String dataGenSource =
-"CREATE TABLE datagenSource (\n"
-+ "  order_id BIGINT,\n"
-+ "  order_number VARCHAR(20),\n"
-+ "  user_id BIGINT,\n"
-+ "  shop_id BIGINT,\n"
-+ "  product_id BIGINT,\n"
-+ "  status BIGINT,\n"
-+ "  order_type BIGINT,\n"
-+ "  order_created_at TIMESTAMP,\n"
-+ "  payment_amount_cents BIGINT\n"
-+ ")\n"
-+ "WITH (\n"
-+ "  'connector' = 'datagen',\n"
-+ "  'rows-per-second' = '10'\n"
-+ ")";
-service.configureSession(sessionHandle, dataGenSource, -1);
-return sessionHandle;
+@Test
+void testRefreshMaterializedTable() throws Exception {
+long timeout = Duration.ofSeconds(20).toMillis();
+long pause = Duration.ofSeconds(2).toMillis();
+
+List data = new ArrayList<>();
+data.add(Row.of(1L, 1L, 1L, "2024-01-01"));
+data.add(Row.of(2L, 2L, 2L, "2024-01-01"));
+data.add(Row.of(3L, 3L, 3L, "2024-01-02"));
+data.add(Row.of(4L, 4L, 4L, "2024-01-02"));
+data.add(Row.of(5L, 5L, 5L, "2024-01-03"));
+data.add(Row.of(6L, 6L, 6L, "2024-01-03"));
+String dataId = TestValuesTableFactory.registerData(data);
+
+
createAndVerifyCreateMaterializedTableWithData("my_materialized_table", dataId, 
data);
+
+// remove element of partition '2024-01-02'
+removePartitionValue(data, "2024-01-02");
+
+// refresh the materialized table with static partition
+long startTime = System.currentTimeMillis();
+Map staticPartitions = new HashMap<>();
+staticPartitions.put("ds", "2024-01-02");
+ObjectIdentifier objectIdentifier =
+ObjectIdentifier.of(
+fileSystemCatalogName, TEST_DEFAULT_DATABASE, 
"my_materialized_table");
+OperationHandle refreshTableHandle =
+service.refreshMaterializedTable(
+sessionHandle,
+objectIdentifier.asSerializableString(),
+false,
+null,
+Collections.emptyMap(),
+staticPartitions,
+Collections.emptyMap());
+
+awaitOperationTermination(service, sessionHandle, refreshTableHandle);
+List result = fetchAllResults(service, sessionHandle, 
refreshTableHandle);
+assertThat(result.size()).isEqualTo(1);
+String jobId = result.get(0).getString(0).toString();
+
+// 1. verify fresh job created
+verifyRefreshJobCreated(restClusterClient, jobId, startTime);
+
+// 2. verify the new job overwrite the data
+CommonTestUtils.waitUtil(
+() ->
+fetchTableData(sessionHandle, "SELECT * FROM 
my_materialized_table").size()

Review Comment:
   The data being queried here is for the entire table. To avoid 
misunderstanding, this test has been removed. In theory, we only need to check 
the partitions that have been refreshed and the partitions that have not been 
refreshed.






Re: [PR] [FLINK-35348][table] Introduce refresh materialized table rest api [flink]

2024-05-27 Thread via GitHub


hackergin commented on code in PR #24844:
URL: https://github.com/apache/flink/pull/24844#discussion_r1616304880


##
flink-table/flink-sql-gateway/src/test/java/org/apache/flink/table/gateway/rest/SqlGatewayRestEndpointMaterializedTableITCase.java:
##
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.gateway.rest;
+
+import org.apache.flink.table.catalog.ObjectIdentifier;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.gateway.AbstractMaterializedTableStatementITCase;
+import org.apache.flink.table.gateway.api.operation.OperationHandle;
+import 
org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler;
+import 
org.apache.flink.table.gateway.rest.header.materializedtable.RefreshMaterializedTableHeaders;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableParameters;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableRequestBody;
+import 
org.apache.flink.table.gateway.rest.message.materializedtable.RefreshMaterializedTableResponseBody;
+import 
org.apache.flink.table.gateway.rest.util.SqlGatewayRestEndpointExtension;
+import org.apache.flink.table.gateway.rest.util.TestingRestClient;
+import org.apache.flink.table.planner.factories.TestValuesTableFactory;
+import org.apache.flink.types.Row;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Order;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.RegisterExtension;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+
+import static 
org.apache.flink.table.gateway.rest.util.TestingRestClient.getTestingRestClient;
+import static 
org.apache.flink.table.gateway.service.utils.SqlGatewayServiceTestUtil.fetchAllResults;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Test basic logic of handlers inherited from {@link 
AbstractSqlGatewayRestHandler} in materialized
+ * table related cases.
+ */
+public class SqlGatewayRestEndpointMaterializedTableITCase

Review Comment:
   SqlGatewayRestEndpointMaterializedTableITCase is separated out mainly to 
maintain consistency with previous tests, and all RestEndpoint-related tests 
are placed under the rest module.






Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]

2024-05-27 Thread via GitHub


XComp commented on code in PR #24426:
URL: https://github.com/apache/flink/pull/24426#discussion_r1616244045


##
.github/workflows/nightly.yml:
##
@@ -94,3 +94,46 @@ jobs:
   s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
   s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
   s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
+
+  build_python_wheels:
+name: "Build Python Wheels on ${{ matrix.os_name }}"
+runs-on: ${{ matrix.os }}
+strategy:
+  fail-fast: false
+  matrix:
+include:
+  - os: ubuntu-latest
+os_name: linux
+  - os: macos-latest
+os_name: macos
+steps:
+  - name: "Checkout the repository"
+uses: actions/checkout@v4
+with:
+  fetch-depth: 0
+  persist-credentials: false
+  - name: "Stringify workflow name"
+uses: "./.github/actions/stringify"
+id: stringify_workflow
+with:
+  value: ${{ github.workflow }}
+  - name: "Build python wheels for ${{ matrix.os_name }}"
+uses: pypa/cibuildwheel@v2.16.5

Review Comment:
   That should work  






Re: [PR] [FLINK-20398][e2e] Migrate test_batch_sql.sh to Java e2e tests framework [flink]

2024-05-27 Thread via GitHub


XComp commented on code in PR #24471:
URL: https://github.com/apache/flink/pull/24471#discussion_r1616241149


##
flink-end-to-end-tests/flink-batch-sql-test/src/test/java/org/apache/flink/sql/tests/Generator.java:
##
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.sql.tests;
+
+import org.apache.flink.connector.testframe.source.FromElementsSource;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.types.Row;
+
+import org.jetbrains.annotations.NotNull;
+
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.time.ZoneOffset;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+
+class Generator implements FromElementsSource.ElementsSupplier {
+private static final long serialVersionUID = -8455653458083514261L;
+private final List elements;
+
+static Generator create(
+int numKeys, float rowsPerKeyAndSecond, int durationSeconds, int 
offsetSeconds) {
+int sleepMs = (int) (1000 / rowsPerKeyAndSecond);
+InternalGenerator gen =
+new InternalGenerator(
+numKeys, durationSeconds * 1000L, sleepMs, 
offsetSeconds * 2000L);
+List elements = new ArrayList<>();
+gen.forEachRemaining(elements::add);
+return new Generator(elements);
+}

Review Comment:
   ```suggestion
   static Generator create(
   int numKeys, float rowsPerKeyAndSecond, int durationSeconds, int 
offsetSeconds) {
   final int stepMs = (int) (1000 / rowsPerKeyAndSecond);
   final long durationMs = durationSeconds * 1000L;
   final long offsetMs = offsetSeconds * 2000L;
   final List elements = new ArrayList<>();
   
   int keyIndex = 0;
   long ms = 0;
   while (ms < durationMs) {
   elements.add(createRow(keyIndex++, ms, offsetMs));
   if (keyIndex >= numKeys) {
   keyIndex = 0;
   ms += stepMs;
   }
   }
   
   return new Generator(elements);
   }
   
   private static Row createRow(int keyIndex, long milliseconds, long 
offsetMillis) {
   return Row.of(
   keyIndex,
   LocalDateTime.ofInstant(
   Instant.ofEpochMilli(milliseconds + offsetMillis), 
ZoneOffset.UTC),
   "Some payload...");
   }
   ```
   nit: what we could also do is to get rid of the `InternalGenerator` class. 
It's just a while loop in the end. WDYT?






Re: [PR] Enable async profiler [flink-benchmarks]

2024-05-27 Thread via GitHub


pnowojski commented on code in PR #90:
URL: https://github.com/apache/flink-benchmarks/pull/90#discussion_r1616223910


##
benchmark.sh:
##
@@ -0,0 +1,54 @@
+#!/usr/bin/env bash

Review Comment:
   I would mark this as executable.



##
benchmark.sh:
##
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+
+JAVA_ARGS=()
+JMH_ARGS=()
+BINARY="java"
+BENCHMARK_PATTERN=
+
+while getopts ":j:c:b:e:p:a:m:h" opt; do
+  case $opt in
+j) JAVA_ARGS+=("${OPTARG}")
+;;
+c) CLASSPATH_ARG="${OPTARG}"
+;;
+b) BINARY="${OPTARG}"
+;;
+p) PROFILER_ARG="${OPTARG:+-prof ${OPTARG}}"
+# conditional prefixing inspired by 
https://stackoverflow.com/a/40771884/1389220
+;;
+a) JMH_ARGS+=("${OPTARG}")
+;;
+e) BENCHMARK_EXCLUDES="${OPTARG:+-e ${OPTARG}}"
+;;
+m) BENCHMARK_PATTERN="${OPTARG}"
+  echo "parsing -m"
+;;
+h)
+  1>&2 cat << EOF
+usage: TODO
+EOF
+  exit 1
+;;
+\?) echo "Invalid option -$opt ${OPTARG}" >&2
+exit 1
+;;
+  esac
+done
+shift "$(($OPTIND -1))"

Review Comment:
   I would suggest to use python or java wrapper instead of bash, if only for 
the sake of being able to use a nice args library. For example take a look at 
https://github.com/apache/flink-benchmarks/blob/master/regression_report.py or 
some other python script in this repo's root.
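   For illustration, the flag handling above could look like this with Python's 
argparse (flag names and defaults mirror the bash snippet and are assumptions, 
not the PR's actual interface):

```python
# Sketch: the same short flags as the bash getopts loop, expressed with argparse.
import argparse

parser = argparse.ArgumentParser(description="Run JMH benchmarks")
parser.add_argument("-j", dest="java_args", action="append", default=[],
                    help="extra JVM argument (repeatable)")
parser.add_argument("-c", dest="classpath", help="benchmark classpath")
parser.add_argument("-b", dest="binary", default="java", help="java binary to use")
parser.add_argument("-p", dest="profiler", help="JMH -prof argument")
parser.add_argument("-e", dest="excludes", help="benchmark exclude pattern")
parser.add_argument("-m", dest="benchmark_pattern", help="benchmark include pattern")

# Example invocation: pick a JVM and a benchmark pattern.
args = parser.parse_args(["-b", "/usr/lib/jvm/java-17/bin/java", "-m", "WindowBenchmarks"])
print(args.binary, args.benchmark_pattern)
```

   argparse also gives usage/help output for free (the bash version's `-h` text 
is still a TODO).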



##
pom.xml:
##
@@ -290,25 +291,31 @@ under the License.


${skipTests}

test
-   
${executableJava}
+   
${basedir}/benchmark.sh

Review Comment:
   Are the executable commands the same? There are some scripts (in 
[jenkins](https://github.com/apache/flink-benchmarks/tree/master/jenkinsfiles)?)
 that rely on those commands behaving the way they currently do. If 
something changes, they would have to be changed as well.
   
   






Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]

2024-05-27 Thread via GitHub


morazow commented on code in PR #24426:
URL: https://github.com/apache/flink/pull/24426#discussion_r1616229933


##
.github/workflows/nightly.yml:
##
@@ -94,3 +94,46 @@ jobs:
   s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
   s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
   s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
+
+  build_python_wheels:
+name: "Build Python Wheels on ${{ matrix.os_name }}"
+runs-on: ${{ matrix.os }}
+strategy:
+  fail-fast: false
+  matrix:
+include:
+  - os: ubuntu-latest
+os_name: linux
+  - os: macos-latest
+os_name: macos
+steps:
+  - name: "Checkout the repository"
+uses: actions/checkout@v4
+with:
+  fetch-depth: 0
+  persist-credentials: false
+  - name: "Stringify workflow name"
+uses: "./.github/actions/stringify"
+id: stringify_workflow
+with:
+  value: ${{ github.workflow }}
+  - name: "Build python wheels for ${{ matrix.os_name }}"
+uses: pypa/cibuildwheel@v2.16.5

Review Comment:
   Ahh I see. I would be fine with using the generic install approach, right?
   
   ```
 # Used to host cibuildwheel
 - uses: actions/setup-python@v5
   
 - name: Install cibuildwheel
   run: python -m pip install cibuildwheel==2.18.1
   
 - name: Build wheels
   run: python -m cibuildwheel --output-dir wheelhouse
   ```
   
   The cibuildwheel docs also suggest this generic approach for GitHub Actions.






[jira] [Commented] (FLINK-35446) FileMergingSnapshotManagerBase throws a NullPointerException

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849767#comment-17849767
 ] 

Ryan Skraba commented on FLINK-35446:
-

Thanks for the fix!  There were a bunch of failures over the weekend before the 
merge to master:

* 1.20 Default (Java 8) / Test (module: table) 
https://github.com/apache/flink/actions/runs/9249920179/job/25442781056#step:10:12157
* 1.20 Java 8 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438340923#step:10:8501
* 1.20 Java 17 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438314807#step:10:11974
* 1.20 Java 17 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438315031#step:10:8441
* 1.20 Java 21 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438306000#step:10:12064
* 1.20 Java 21 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438306359#step:10:9072
* 1.20 Hadoop 3.1.3 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438381891#step:10:12151
* 1.20 Hadoop 3.1.3 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438382250#step:10:8131
* 1.20 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9248172120/job/25438295648#step:10:12081
* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9248089774/job/25438060032#step:10:8040
* 1.20 Default (Java 8) / Test (module: table) 
https://github.com/apache/flink/actions/runs/9244756333/job/25430934260#step:10:11992
* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9244756333/job/25430934479#step:10:8471
* 1.20 Java 8 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419730553#step:10:11972
* 1.20 Java 11 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419746284#step:10:11933
* 1.20 Java 17 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419747284#step:10:8437
* 1.20 Default (Java 8) / Test (module: table) 
https://github.com/apache/flink/actions/runs/9236391640/job/25412610305#step:10:12028
* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9236391640/job/25412610424#step:10:8615
* 1.20 Java 8 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403130654#step:10:11954
* 1.20 Java 17 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403143495#step:10:12425
* 1.20 Java 17 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403143840#step:10:8431
* 1.20 Java 21 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403134721#step:10:11960
* 1.20 Java 21 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403134721#step:10:11960
* 1.20 Hadoop 3.1.3 / Test (module: table) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403165764#step:10:12305
* 1.20 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403133340#step:10:12266
* 1.20 AdaptiveScheduler / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403133470#step:10:8553

Unfortunately, I think these two failures happened on master **after** the fix 
was merged -- do you think something was missed?  This can definitely be 
verified with the next nightly build!

* 1.20 Default (Java 8) / Test (module: table) 
https://github.com/apache/flink/actions/runs/9250759677/job/25445310702#step:10:12049
* 1.20 Default (Java 8) / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9250759677/job/25445311108#step:10:8510

> FileMergingSnapshotManagerBase throws a NullPointerException
> 
>
> Key: FLINK-35446
> URL: https://issues.apache.org/jira/browse/FLINK-35446
> Project: Flink
>  Issue Type: Bug
>Reporter: Ryan Skraba
>Assignee: Zakelly Lan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> * 1.20 Java 11 / Test (module: tests) 
> https://github.com/apache/flink/actions/runs/9217608897/job/25360103124#step:10:8641
> {{ResumeCheckpointManuallyITCase.testExternalizedIncrementalRocksDBCheckpointsWithLocalRecoveryZookeeper}}
>  throws a NullPointerException when it tries to restore state handles: 
> {code}
> Error: 02:57:52 02:57:52.551 [ERROR] Tests run: 48, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 268.6 s <<< FAILURE! -- in 
> 

Re: [PR] [FLINK-34487][ci] Adds Python Wheels nightly GHA workflow [flink]

2024-05-27 Thread via GitHub


XComp commented on code in PR #24426:
URL: https://github.com/apache/flink/pull/24426#discussion_r1616177599


##
.github/workflows/nightly.yml:
##
@@ -94,3 +94,46 @@ jobs:
   s3_bucket: ${{ secrets.IT_CASE_S3_BUCKET }}
   s3_access_key: ${{ secrets.IT_CASE_S3_ACCESS_KEY }}
   s3_secret_key: ${{ secrets.IT_CASE_S3_SECRET_KEY }}
+
+  build_python_wheels:
+name: "Build Python Wheels on ${{ matrix.os_name }}"
+runs-on: ${{ matrix.os }}
+strategy:
+  fail-fast: false
+  matrix:
+include:
+  - os: ubuntu-latest
+os_name: linux
+  - os: macos-latest
+os_name: macos
+steps:
+  - name: "Checkout the repository"
+uses: actions/checkout@v4
+with:
+  fetch-depth: 0
+  persist-credentials: false
+  - name: "Stringify workflow name"
+uses: "./.github/actions/stringify"
+id: stringify_workflow
+with:
+  value: ${{ github.workflow }}
+  - name: "Build python wheels for ${{ matrix.os_name }}"
+uses: pypa/cibuildwheel@v2.16.5

Review Comment:
   Interesting approach. It's just that Apache doesn't allow actions other than 
the ones from the `apache`, `github` and `actions` namespaces 
([source](https://infra.apache.org/github-actions-policy.html)). We could pin a 
fixed version of the custom action and review that specific version. But 
`pypa/cibuildwheel@v2.16.5` seems to be a fairly large project. I'm wondering 
whether it's worth it, or whether we should just use the previous approach 
(even though I liked your intention). WDYT?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-35380) ResumeCheckpointManuallyITCase hanging on tests

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849765#comment-17849765
 ] 

Ryan Skraba commented on FLINK-35380:
-

* 1.20 Java 21 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419736576#step:10:11668
* 1.20 Hadoop 3.1.3 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419763729#step:10:12152

> ResumeCheckpointManuallyITCase hanging on tests 
> 
>
> Key: FLINK-35380
> URL: https://issues.apache.org/jira/browse/FLINK-35380
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> * 1.20 Default (Java 8) / Test (module: tests) 
> https://github.com/apache/flink/actions/runs/9105407291/job/25031170942#step:10:11841
>  
> (This is a slightly different error, waiting in a different place than 
> FLINK-28319)
> {code}
> May 16 03:23:58 
> ==
> May 16 03:23:58 Process produced no output for 900 seconds.
> May 16 03:23:58 
> ==
> ... snip until stack trace ...
> ay 16 03:23:58at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> May 16 03:23:58   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> May 16 03:23:58   at 
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> May 16 03:23:58   at 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.runJobAndGetExternalizedCheckpoint(ResumeCheckpointManuallyITCase.java:410)
> May 16 03:23:58   at 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedCheckpoints(ResumeCheckpointManuallyITCase.java:378)
> May 16 03:23:58   at 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedCheckpoints(ResumeCheckpointManuallyITCase.java:318)
> May 16 03:23:58   at 
> org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedFullRocksDBCheckpointsWithLocalRecoveryStandalone(ResumeCheckpointManuallyITCase.java:133)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28440) EventTimeWindowCheckpointingITCase failed with restore

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849759#comment-17849759
 ] 

Ryan Skraba commented on FLINK-28440:
-

* 1.19 Java 21 / Test (module: tests) 
https://github.com/apache/flink/actions/runs/9232147048/job/25403143624#step:10:8022

> EventTimeWindowCheckpointingITCase failed with restore
> --
>
> Key: FLINK-28440
> URL: https://issues.apache.org/jira/browse/FLINK-28440
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0, 1.17.0, 1.18.0, 1.19.0
>Reporter: Huang Xingbo
>Assignee: Yanfei Lei
>Priority: Critical
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-assigned, test-stability
> Fix For: 1.20.0
>
> Attachments: image-2023-02-01-00-51-54-506.png, 
> image-2023-02-01-01-10-01-521.png, image-2023-02-01-01-19-12-182.png, 
> image-2023-02-01-16-47-23-756.png, image-2023-02-01-16-57-43-889.png, 
> image-2023-02-02-10-52-56-599.png, image-2023-02-03-10-09-07-586.png, 
> image-2023-02-03-12-03-16-155.png, image-2023-02-03-12-03-56-614.png
>
>
> {code:java}
> Caused by: java.lang.Exception: Exception while creating 
> StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:256)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:268)
>   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:722)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:698)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:665)
>   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
>   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for WindowOperator_0a448493b4782967b150582570326227_(2/4) from 
> any of the 1 provided restore options.
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:353)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:165)
>   ... 11 more
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: 
> /tmp/junit1835099326935900400/junit1113650082510421526/52ee65b7-033f-4429-8ddd-adbe85e27ced
>  (No such file or directory)
>   at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.advance(StateChangelogHandleStreamHandleReader.java:87)
>   at 
> org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.hasNext(StateChangelogHandleStreamHandleReader.java:69)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.readBackendHandle(ChangelogBackendRestoreOperation.java:96)
>   at 
> org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.restore(ChangelogBackendRestoreOperation.java:75)
>   at 
> org.apache.flink.state.changelog.ChangelogStateBackend.restore(ChangelogStateBackend.java:92)
>   at 
> org.apache.flink.state.changelog.AbstractChangelogStateBackend.createKeyedStateBackend(AbstractChangelogStateBackend.java:136)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:336)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
>   ... 13 more
> Caused by: java.io.FileNotFoundException: 
> 

[jira] [Commented] (FLINK-35002) GitHub action request timeout to ArtifactService

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849763#comment-17849763
 ] 

Ryan Skraba commented on FLINK-35002:
-

* 1.20 Java 11 / Compile 
https://github.com/apache/flink/commit/f860631c523c1d446c0d01046f0fbe6055174dc6/checks/25438061803/logs
* 1.19 Java 17 / Compile 
https://github.com/apache/flink/commit/a450980de65eaead734349ed44452f572e5e329d/checks/25402960967/logs

> GitHub action request timeout  to ArtifactService
> -
>
> Key: FLINK-35002
> URL: https://issues.apache.org/jira/browse/FLINK-35002
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Ryan Skraba
>Priority: Major
>  Labels: github-actions, test-stability
>
> A timeout can occur when uploading a successfully built artifact:
>  * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650]
> {code:java}
> 2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file 
> uploaded
> 2024-04-02T02:20:15.6360133Z Artifact name is valid!
> 2024-04-02T02:20:15.6362872Z Root directory input is valid!
> 2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 3000 ms...
> 2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 4785 ms...
> 2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 7375 ms...
> 2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request 
> timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. 
> Retrying request in 14988 ms...
> 2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to 
> make request after 5 attempts: Request timeout: 
> /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact
> 2024-04-02T02:22:59.9893296Z Post job cleanup.
> 2024-04-02T02:22:59.9958844Z Post job cleanup. {code}
> (This is unlikely to be something we can fix, but we can track it.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35012) ChangelogNormalizeRestoreTest.testRestore failure

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849764#comment-17849764
 ] 

Ryan Skraba commented on FLINK-35012:
-

* 1.20 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419731096#step:10:10621

> ChangelogNormalizeRestoreTest.testRestore failure
> -
>
> Key: FLINK-35012
> URL: https://issues.apache.org/jira/browse/FLINK-35012
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: Ryan Skraba
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58716=logs=0c940707-2659-5648-cbe6-a1ad63045f0a=075c2716-8010-5565-fe08-3c4bb45824a4=11921
> {code}
> Apr 03 22:57:43 22:57:43.159 [ERROR] Failures: 
> Apr 03 22:57:43 22:57:43.160 [ERROR]   
> ChangelogNormalizeRestoreTest>RestoreTestBase.testRestore:337 
> Apr 03 22:57:43 Expecting actual:
> Apr 03 22:57:43   ["+I[two, 2, b]",
> Apr 03 22:57:43 "+I[one, 1, a]",
> Apr 03 22:57:43 "+I[three, 3, c]",
> Apr 03 22:57:43 "-U[one, 1, a]",
> Apr 03 22:57:43 "+U[one, 1, aa]",
> Apr 03 22:57:43 "-U[three, 3, c]",
> Apr 03 22:57:43 "+U[three, 3, cc]",
> Apr 03 22:57:43 "-D[two, 2, b]",
> Apr 03 22:57:43 "+I[four, 4, d]",
> Apr 03 22:57:43 "+I[five, 5, e]",
> Apr 03 22:57:43 "-U[four, 4, d]",
> Apr 03 22:57:43 "+U[four, 4, dd]"]
> Apr 03 22:57:43 to contain exactly in any order:
> Apr 03 22:57:43   ["+I[one, 1, a]",
> Apr 03 22:57:43 "+I[two, 2, b]",
> Apr 03 22:57:43 "-U[one, 1, a]",
> Apr 03 22:57:43 "+U[one, 1, aa]",
> Apr 03 22:57:43 "+I[three, 3, c]",
> Apr 03 22:57:43 "-D[two, 2, b]",
> Apr 03 22:57:43 "-U[three, 3, c]",
> Apr 03 22:57:43 "+U[three, 3, cc]",
> Apr 03 22:57:43 "+I[four, 4, d]",
> Apr 03 22:57:43 "+I[five, 5, e]",
> Apr 03 22:57:43 "-U[four, 4, d]",
> Apr 03 22:57:43 "+U[four, 4, dd]",
> Apr 03 22:57:43 "+I[six, 6, f]",
> Apr 03 22:57:43 "-D[six, 6, f]"]
> Apr 03 22:57:43 but could not find the following elements:
> Apr 03 22:57:43   ["+I[six, 6, f]", "-D[six, 6, f]"]
> Apr 03 22:57:43 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34224) ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest timed out

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849760#comment-17849760
 ] 

Ryan Skraba commented on FLINK-34224:
-

* 1.20 Hadoop 3.1.3 / Test (module: core) 
https://github.com/apache/flink/actions/runs/9239908683/job/25419763061#step:10:12699

> ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest 
> timed out
> ---
>
> Key: FLINK-34224
> URL: https://issues.apache.org/jira/browse/FLINK-34224
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: github-actions, test-stability
>
> The timeout appeared in the GitHub Actions workflow (currently in test phase; 
> [FLIP-396|https://cwiki.apache.org/confluence/display/FLINK/FLIP-396%3A+Trial+to+test+GitHub+Actions+as+an+alternative+for+Flink%27s+current+Azure+CI+infrastructure]):
> https://github.com/XComp/flink/actions/runs/7632434859/job/20793613726#step:10:11040
> {code}
> Jan 24 01:38:36 "ForkJoinPool-1-worker-1" #16 daemon prio=5 os_prio=0 
> tid=0x7f3b200ae800 nid=0x406e3 waiting on condition [0x7f3b1ba0e000]
> Jan 24 01:38:36java.lang.Thread.State: WAITING (parking)
> Jan 24 01:38:36   at sun.misc.Unsafe.park(Native Method)
> Jan 24 01:38:36   - parking to wait for  <0xdfbbb358> (a 
> java.util.concurrent.CompletableFuture$Signaller)
> Jan 24 01:38:36   at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
> Jan 24 01:38:36   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
> Jan 24 01:38:36   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Jan 24 01:38:36   at 
> org.apache.flink.changelog.fs.ChangelogStorageMetricsTest.testAttemptsPerUpload(ChangelogStorageMetricsTest.java:251)
> Jan 24 01:38:36   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34645) StreamArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849762#comment-17849762
 ] 

Ryan Skraba commented on FLINK-34645:
-

* 1.18 Hadoop 3.1.3 / Test (module: misc) 
https://github.com/apache/flink/actions/runs/9232146944

> StreamArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>  fails
> 
>
> Key: FLINK-34645
> URL: https://issues.apache.org/jira/browse/FLINK-34645
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: github-actions, test-stability
>
> {code}
> Error: 02:27:17 02:27:17.025 [ERROR] Tests run: 3, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 0.658 s <<< FAILURE! - in 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.stream.StreamArrowPythonGroupWindowAggregateFunctionOperatorTest
> Error: 02:27:17 02:27:17.025 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.stream.StreamArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
>   Time elapsed: 0.3 s  <<< FAILURE!
> Mar 09 02:27:17 java.lang.AssertionError: 
> Mar 09 02:27:17 
> Mar 09 02:27:17 Expected size: 8 but was: 6 in:
> Mar 09 02:27:17 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Mar 09 02:27:17 Record @ (undef) : 
> +I(c2,3,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Mar 09 02:27:17 Record @ (undef) : 
> +I(c2,3,1970-01-01T00:00,1970-01-01T00:00:10),
> Mar 09 02:27:17 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Mar 09 02:27:17 Watermark @ 1,
> Mar 09 02:27:17 Watermark @ 2]
> Mar 09 02:27:17   at 
> org.apache.flink.table.runtime.util.RowDataHarnessAssertor.assertOutputEquals(RowDataHarnessAssertor.java:110)
> Mar 09 02:27:17   at 
> org.apache.flink.table.runtime.util.RowDataHarnessAssertor.assertOutputEquals(RowDataHarnessAssertor.java:70)
> Mar 09 02:27:17   at 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals(ArrowPythonAggregateFunctionOperatorTestBase.java:62)
> Mar 09 02:27:17   at 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.stream.StreamArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount(StreamArrowPythonGroupWindowAggregateFunctionOperatorTest.java:326)
> Mar 09 02:27:17   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849761#comment-17849761
 ] 

Ryan Skraba commented on FLINK-34227:
-

* 1.18 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9248172203/job/25438330034#step:10:15163
* 1.18 AdaptiveScheduler / Test (module: table) 
https://github.com/apache/flink/actions/runs/9239908314/job/25419753266#step:10:12055

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2131)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2099)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2077)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:876)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowDistinctAggregateITCase.testHopWindow_Cube(WindowDistinctAggregateITCase.scala:550)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails

2024-05-27 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849758#comment-17849758
 ] 

Ryan Skraba commented on FLINK-18476:
-

* 1.20 Java 21 / Test (module: misc) 
https://github.com/apache/flink/actions/runs/9232146809/job/25403134721#step:10:11960

> PythonEnvUtilsTest#testStartPythonProcess fails
> ---
>
> Key: FLINK-18476
> URL: https://issues.apache.org/jira/browse/FLINK-18476
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0, 1.15.3, 1.18.0, 1.19.0, 1.20.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> The 
> {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} 
> failed in my local environment as it assumes the environment has 
> {{/usr/bin/python}}. 
> I don't know exactly how I got python on Ubuntu 20.04, but I only have an 
> alias for {{python = python3}}. Therefore the test fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [doc]Flink sql gettingstarted Expression 'dept_id' is not being grouped [flink]

2024-05-27 Thread via GitHub


ZmmBigdata commented on PR #24841:
URL: https://github.com/apache/flink/pull/24841#issuecomment-2133532442

   Two more files have been committed.
   docs: flink-docs-release-1.19/docs/dev/table/sql/queries/overview/
   [docs] Flink SQL queries overview: "Missing required options are: path"
   
   Exception in thread "main" org.apache.flink.table.api.ValidationException: 
Unable to create a sink for writing table 
'default_catalog.default_database.RubberOrders'.
   
   Table options are:
   
   'connector'='filesystem'
   'csv.field-delimiter'=','
   'format'='csv'
   at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:270)
   at 
org.apache.flink.table.planner.delegation.PlannerBase.getTableSink(PlannerBase.scala:459)
   at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:236)
   at 
org.apache.flink.table.planner.delegation.PlannerBase.$anonfun$translate$1(PlannerBase.scala:194)
   at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
   at scala.collection.Iterator.foreach(Iterator.scala:937)
   at scala.collection.Iterator.foreach$(Iterator.scala:937)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
   at scala.collection.IterableLike.foreach(IterableLike.scala:70)
   at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
   at scala.collection.TraversableLike.map(TraversableLike.scala:233)
   at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
   at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:194)
   at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1803)
   at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:881)
   at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:989)
   at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:765)
   at com.yushu.table.TableAPIDemo1.main(TableAPIDemo1.java:65)
   Caused by: org.apache.flink.table.api.ValidationException: One or more 
required options are missing.
   
   Missing required options are:
   
   path
   at 
org.apache.flink.table.factories.FactoryUtil.validateFactoryOptions(FactoryUtil.java:612)
   at 
org.apache.flink.table.factories.FactoryUtil.validateFactoryOptions(FactoryUtil.java:582)
   at 
org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validate(FactoryUtil.java:930)
   at 
org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validateExcept(FactoryUtil.java:955)
   at 
org.apache.flink.connector.file.table.FileSystemTableFactory.validate(FileSystemTableFactory.java:152)
   at 
org.apache.flink.connector.file.table.FileSystemTableFactory.createDynamicTableSink(FileSystemTableFactory.java:84)
   at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:267)
   ... 19 more
   
   
![068ff8b05220e937e3f5db4c7053f0d](https://github.com/apache/flink/assets/102840730/3edd99a0-cc41-4b55-a3a0-f6e509fe8a9d)
   
![54ce1c2c1ad112e9fd4be700f9e7fda](https://github.com/apache/flink/assets/102840730/037e5247-758b-4148-a67c-2e18b4401ab0)
   
![4e2d63556fcdd3930fdb4cd5211d369](https://github.com/apache/flink/assets/102840730/afa258ec-570a-437f-bf84-f9a626af5e12)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34379) table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError

2024-05-27 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849740#comment-17849740
 ] 

Jing Ge commented on FLINK-34379:
-

master: 87b7193846090897b2feabf716ee5378bcd7585b
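
For reference, the workaround described in the quoted report below (disabling 
dynamic filtering) can be expressed in a SQL session roughly as follows; this 
is a sketch assuming the SQL client `SET` syntax, with the option key quoted 
from the report:

{code:sql}
-- Hedged sketch: disable dynamic filtering for the session before running
-- the large UNION ALL + join query from the report.
SET 'table.optimizer.dynamic-filtering.enabled' = 'false';
{code}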

> table.optimizer.dynamic-filtering.enabled lead to OutOfMemoryError
> --
>
> Key: FLINK-34379
> URL: https://issues.apache.org/jira/browse/FLINK-34379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.17.2, 1.18.1
> Environment: 1.17.1
>Reporter: zhu
>Assignee: Jeyhun Karimov
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.17.3, 1.18.2, 1.20.0, 1.19.1
>
>
> When using batch computing, I union all about 50 tables and then join another 
> table. When compiling the execution plan, an OutOfMemoryError: Java heap 
> space is thrown, which was not a problem in 1.15.2. However, both 1.17.2 and 
> 1.18.1 throw the same error, which causes the jobmanager to restart. It has 
> been found that this is caused by 
> table.optimizer.dynamic-filtering.enabled, which defaults to true. When I 
> set table.optimizer.dynamic-filtering.enabled to false, the plan compiles 
> and executes normally.
> code
> TableEnvironment.create(EnvironmentSettings.newInstance()
> .withConfiguration(configuration)
> .inBatchMode().build())
> sql=select att,filename,'table0' as mo_name from table0 UNION All select 
> att,filename,'table1' as mo_name from table1 UNION All select 
> att,filename,'table2' as mo_name from table2 UNION All select 
> att,filename,'table3' as mo_name from table3 UNION All select 
> att,filename,'table4' as mo_name from table4 UNION All select 
> att,filename,'table5' as mo_name from table5 UNION All select 
> att,filename,'table6' as mo_name from table6 UNION All select 
> att,filename,'table7' as mo_name from table7 UNION All select 
> att,filename,'table8' as mo_name from table8 UNION All select 
> att,filename,'table9' as mo_name from table9 UNION All select 
> att,filename,'table10' as mo_name from table10 UNION All select 
> att,filename,'table11' as mo_name from table11 UNION All select 
> att,filename,'table12' as mo_name from table12 UNION All select 
> att,filename,'table13' as mo_name from table13 UNION All select 
> att,filename,'table14' as mo_name from table14 UNION All select 
> att,filename,'table15' as mo_name from table15 UNION All select 
> att,filename,'table16' as mo_name from table16 UNION All select 
> att,filename,'table17' as mo_name from table17 UNION All select 
> att,filename,'table18' as mo_name from table18 UNION All select 
> att,filename,'table19' as mo_name from table19 UNION All select 
> att,filename,'table20' as mo_name from table20 UNION All select 
> att,filename,'table21' as mo_name from table21 UNION All select 
> att,filename,'table22' as mo_name from table22 UNION All select 
> att,filename,'table23' as mo_name from table23 UNION All select 
> att,filename,'table24' as mo_name from table24 UNION All select 
> att,filename,'table25' as mo_name from table25 UNION All select 
> att,filename,'table26' as mo_name from table26 UNION All select 
> att,filename,'table27' as mo_name from table27 UNION All select 
> att,filename,'table28' as mo_name from table28 UNION All select 
> att,filename,'table29' as mo_name from table29 UNION All select 
> att,filename,'table30' as mo_name from table30 UNION All select 
> att,filename,'table31' as mo_name from table31 UNION All select 
> att,filename,'table32' as mo_name from table32 UNION All select 
> att,filename,'table33' as mo_name from table33 UNION All select 
> att,filename,'table34' as mo_name from table34 UNION All select 
> att,filename,'table35' as mo_name from table35 UNION All select 
> att,filename,'table36' as mo_name from table36 UNION All select 
> att,filename,'table37' as mo_name from table37 UNION All select 
> att,filename,'table38' as mo_name from table38 UNION All select 
> att,filename,'table39' as mo_name from table39 UNION All select 
> att,filename,'table40' as mo_name from table40 UNION All select 
> att,filename,'table41' as mo_name from table41 UNION All select 
> att,filename,'table42' as mo_name from table42 UNION All select 
> att,filename,'table43' as mo_name from table43 UNION All select 
> att,filename,'table44' as mo_name from table44 UNION All select 
> att,filename,'table45' as mo_name from table45 UNION All select 
> att,filename,'table46' as mo_name from table46 UNION All select 
> att,filename,'table47' as mo_name from table47 UNION All select 
> att,filename,'table48' as mo_name from table48 UNION All select 
> att,filename,'table49' as mo_name from table49 UNION All select 
> att,filename,'table50' as mo_name from table50 UNION All select 
> att,filename,'table51' as 

Re: [PR] [FLINK-34379][table] Fix adding catalogtable logic [flink]

2024-05-27 Thread via GitHub


JingGe merged PR #24788:
URL: https://github.com/apache/flink/pull/24788





Re: [PR] [FLINK-35412][State/Runtime] Batch execution of async state request callback [flink]

2024-05-27 Thread via GitHub


jectpro7 commented on code in PR #24832:
URL: https://github.com/apache/flink/pull/24832#discussion_r1616029786


##
flink-runtime/src/main/java/org/apache/flink/runtime/asyncprocessing/BatchCallbackRunner.java:
##
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.asyncprocessing;
+
+import org.apache.flink.api.common.operators.MailboxExecutor;
+import org.apache.flink.util.function.ThrowingRunnable;
+
+import javax.annotation.concurrent.GuardedBy;
+
+import java.util.ArrayList;
+import java.util.concurrent.ConcurrentLinkedDeque;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * A runner for {@link StateFutureFactory} to build {@link
+ * org.apache.flink.core.state.InternalStateFuture} that put one mail in 
{@link MailboxExecutor}
+ * whenever there are callbacks to run but run multiple callbacks within one 
mail.
+ */
+public class BatchCallbackRunner {
+
+private static final int DEFAULT_BATCH_SIZE = 100;
+
+private final MailboxExecutor mailboxExecutor;
+
+private final int batchSize;
+
+/** The callbacks divided in batch. */
+private final ConcurrentLinkedDeque<ArrayList<ThrowingRunnable<? extends Exception>>>
+callbackQueue;
+
+/** The lock to protect the active buffer (batch). */
+private final Object activeBufferLock = new Object();
+
+/** The active buffer (batch) to gather incoming callbacks. */
+@GuardedBy("activeBufferLock")
+private ArrayList<ThrowingRunnable<? extends Exception>> activeBuffer;
+
+/** Counter of current callbacks. */
+private final AtomicInteger currentCallbacks = new AtomicInteger(0);
+
+/** Whether there is a mail in mailbox. */
+private volatile boolean hasMail = false;
+
+BatchCallbackRunner(MailboxExecutor mailboxExecutor) {
+this.mailboxExecutor = mailboxExecutor;
+this.batchSize = DEFAULT_BATCH_SIZE;
+this.callbackQueue = new ConcurrentLinkedDeque<>();
+this.activeBuffer = new ArrayList<>();
+}
+
+/**
+ * Submit a callback to run.
+ *
+ * @param task the callback.
+ */
+public void submit(ThrowingRunnable<? extends Exception> task) {
+synchronized (activeBufferLock) {
+activeBuffer.add(task);
+if (activeBuffer.size() >= batchSize) {
+callbackQueue.offerLast(activeBuffer);
+activeBuffer = new ArrayList<>(batchSize);
+}
+}
+currentCallbacks.incrementAndGet();
+insertMail(false);
+}
+
+private void insertMail(boolean force) {
+if (force || !hasMail) {
+if (currentCallbacks.get() > 0) {
+hasMail = true;
+mailboxExecutor.execute(this::runBatch, "Batch running 
callback of state requests");

Review Comment:
   The `hasMail` update operation is not in a synchronized block, so a second 
thread can also reach this point even when `force` is `false`. That might 
cause some batches to be executed unexpectedly.


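The race described above can be avoided by claiming the flag atomically. Below is a minimal, self-contained sketch of the pattern, not Flink's actual `BatchCallbackRunner`: the mailbox is modeled as a plain `Consumer<Runnable>`, and `compareAndSet` guarantees at most one mail is in flight.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

// Illustrative sketch only: the "hasMail" flag is claimed with compareAndSet,
// so two concurrent submitters cannot both enqueue a draining mail.
class MailScheduler {
    private final ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final AtomicInteger pending = new AtomicInteger(0);
    private final AtomicBoolean hasMail = new AtomicBoolean(false);
    private final Consumer<Runnable> mailbox;

    MailScheduler(Consumer<Runnable> mailbox) {
        this.mailbox = mailbox;
    }

    void submit(Runnable task) {
        tasks.offer(task);
        pending.incrementAndGet();
        trySchedule();
    }

    private void trySchedule() {
        // Only the thread that wins the CAS enqueues the mail.
        if (pending.get() > 0 && hasMail.compareAndSet(false, true)) {
            mailbox.accept(this::runBatch);
        }
    }

    private void runBatch() {
        Runnable r;
        while ((r = tasks.poll()) != null) {
            r.run();
            pending.decrementAndGet();
        }
        hasMail.set(false);
        trySchedule(); // re-check for tasks submitted while draining
    }
}
```

With the CAS, the unsynchronized read-then-write on `hasMail` flagged by the reviewer cannot schedule two mails for the same backlog.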

##
flink-runtime/src/main/java/org/apache/flink/runtime/asyncprocessing/BatchCallbackRunner.java:
##
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.asyncprocessing;
+
+import org.apache.flink.api.common.operators.MailboxExecutor;
+import org.apache.flink.util.function.ThrowingRunnable;
+
+import javax.annotation.concurrent.GuardedBy;
+
+import java.util.ArrayList;
+import java.util.concurrent.ConcurrentLinkedDeque;

Re: [PR] [FLINK-34379][table] Fix adding catalogtable logic [flink]

2024-05-27 Thread via GitHub


JingGe commented on code in PR #24788:
URL: https://github.com/apache/flink/pull/24788#discussion_r1611948679


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java:
##
@@ -234,20 +234,16 @@ private static boolean isSuitableFilter(RexNode 
filterCondition) {
 }
 
 private void setTables(ContextResolvedTable catalogTable) {
-if (tables.size() == 0) {
-tables.add(catalogTable);
-} else {
-boolean hasAdded = false;
-for (ContextResolvedTable thisTable : new ArrayList<>(tables)) 
{
-if (hasAdded) {
-break;
-}
-if 
(!thisTable.getIdentifier().equals(catalogTable.getIdentifier())) {
-tables.add(catalogTable);
-hasAdded = true;
-}
+boolean alreadyExists = false;
+for (ContextResolvedTable table : tables) {

Review Comment:
   Sorry, I don't get your point. I meant that looping over the `Set` might 
have a performance issue.



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java:
##
@@ -234,20 +234,16 @@ private static boolean isSuitableFilter(RexNode 
filterCondition) {
 }
 
 private void setTables(ContextResolvedTable catalogTable) {
-if (tables.size() == 0) {
-tables.add(catalogTable);
-} else {
-boolean hasAdded = false;
-for (ContextResolvedTable thisTable : new ArrayList<>(tables)) 
{
-if (hasAdded) {
-break;
-}
-if 
(!thisTable.getIdentifier().equals(catalogTable.getIdentifier())) {
-tables.add(catalogTable);
-hasAdded = true;
-}
+boolean alreadyExists = false;
+for (ContextResolvedTable table : tables) {
+if 
(table.getIdentifier().equals(catalogTable.getIdentifier())) {

Review Comment:
   This is typical hash-map logic



##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/utils/DynamicPartitionPruningUtils.java:
##
@@ -234,20 +234,16 @@ private static boolean isSuitableFilter(RexNode 
filterCondition) {
 }
 
 private void setTables(ContextResolvedTable catalogTable) {
-if (tables.size() == 0) {
-tables.add(catalogTable);
-} else {
-boolean hasAdded = false;
-for (ContextResolvedTable thisTable : new ArrayList<>(tables)) 
{
-if (hasAdded) {
-break;
-}
-if 
(!thisTable.getIdentifier().equals(catalogTable.getIdentifier())) {
-tables.add(catalogTable);
-hasAdded = true;
-}
+boolean alreadyExists = false;
+for (ContextResolvedTable table : tables) {

Review Comment:
   I think there are many reasons to use `Map` instead of `Set`:
   
   1. the logic is a point search instead of a loop search, as I mentioned below.
   2. O(1) instead of O(n) for better performance, because the 
`DynamicPartitionPruningUtils` class will be used centrally for batch jobs[1], 
i.e. for large projects with many tables, it could become a bottleneck.
   3. less code when using e.g. `Map.putIfAbsent(K, V)`
   
   [1] 
https://github.com/apache/flink/blob/0737220959fe52ee22535e7db55b015a46a6294e/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/optimize/program/FlinkDynamicPartitionPruningProgram.java#L103


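For illustration, the suggestion above can be sketched as follows (a hypothetical `TableCollector`, not Flink's `DynamicPartitionPruningUtils`): keying by the identifier turns the duplicate check into an O(1) lookup, and `putIfAbsent` removes the loop entirely.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: an identifier-keyed map instead of scanning a Set.
class TableCollector {
    private final Map<String, Object> tablesById = new HashMap<>();

    /** Returns true if the table was newly added, false if it already existed. */
    boolean add(String identifier, Object table) {
        return tablesById.putIfAbsent(identifier, table) == null;
    }

    int size() {
        return tablesById.size();
    }
}
```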




Re: [PR] [FLINK-35049][state] Implement Map Async State API for ForStStateBackend [flink]

2024-05-27 Thread via GitHub


masteryhx commented on code in PR #24812:
URL: https://github.com/apache/flink/pull/24812#discussion_r1616045622


##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStIterateOperation.java:
##
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.state.forst;
+
+import org.apache.flink.runtime.asyncprocessing.StateRequestType;
+
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.Executor;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * The iterate operation implementation for ForStDB, which leverages rocksdb's 
iterator directly.
+ */
+public class ForStIterateOperation<K> implements ForStDBOperation {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(ForStIterateOperation.class);
+
+public static final int CACHE_SIZE_LIMIT = 128;
+
+private final RocksDB db;
+
+private final List<ForStDBIterRequest<K>> batchRequest;
+
+private final Executor executor;
+
+ForStIterateOperation(RocksDB db, List<ForStDBIterRequest<K>> 
batchRequest, Executor executor) {
+this.db = db;
+this.batchRequest = batchRequest;
+this.executor = executor;
+}
+
+@Override
+public CompletableFuture<Void> process() {
+CompletableFuture<Void> future = new CompletableFuture<>();
+
+AtomicInteger counter = new AtomicInteger(batchRequest.size());
+for (int i = 0; i < batchRequest.size(); i++) {
+ForStDBIterRequest<K> request = batchRequest.get(i);
+executor.execute(
+() -> {
+// todo: config read options
+try (RocksIterator iter = 
db.newIterator(request.getColumnFamilyHandle())) {

Review Comment:
   This logic seems a bit complex.
   Could you add some descriptions at the key steps or split them into 
methods?


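One way to make the `process()` flow easier to follow is to isolate the scatter-gather completion pattern it relies on. The sketch below shows just that pattern with illustrative names (no RocksDB iteration or error handling): each sub-request runs on the executor, and the shared future completes when the counter reaches zero.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the scatter-gather completion pattern.
class ScatterGather {
    static CompletableFuture<Void> processAll(List<Runnable> requests, Executor executor) {
        CompletableFuture<Void> future = new CompletableFuture<>();
        if (requests.isEmpty()) {
            future.complete(null);
            return future;
        }
        AtomicInteger remaining = new AtomicInteger(requests.size());
        for (Runnable request : requests) {
            executor.execute(() -> {
                try {
                    request.run();
                } finally {
                    // The request that finishes last completes the batch future.
                    if (remaining.decrementAndGet() == 0) {
                        future.complete(null);
                    }
                }
            });
        }
        return future;
    }
}
```

Extracting this into its own method (or helper class) would let `process()` read as "scatter the requests, complete on the last one" without the inline counter bookkeeping.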

##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStDBPutRequest.java:
##
@@ -34,23 +35,33 @@
  */
public class ForStDBPutRequest<K, V> {
 
-private final K key;
-@Nullable private final V value;
-private final ForStInnerTable<K, V> table;
-private final InternalStateFuture<V> future;
+protected final K key;
 
-private ForStDBPutRequest(
+@Nullable protected final V value;
+
+protected final ForStInnerTable<K, V> table;
+
+protected final InternalStateFuture<V> future;
+
+protected final boolean tableIsMap;
+
+protected ForStDBPutRequest(
 K key, V value, ForStInnerTable<K, V> table, 
InternalStateFuture<V> future) {
 this.key = key;
 this.value = value;
 this.table = table;
 this.future = future;
+this.tableIsMap = table instanceof ForStMapState;
 }
 
 public boolean valueIsNull() {
 return value == null;
 }
 
+public boolean valueIsMap() {

Review Comment:
   Same as `GetRequest`.
   Maybe we could have a clearer structure.


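As a sketch of the "clearer structure" idea, the `tableIsMap`/`valueIsMap` flags could be replaced by dedicated request subclasses. All names here are hypothetical, not part of the PR:

```java
// Illustrative sketch: subclasses instead of boolean flags on one request type.
abstract class PutRequest<K, V> {
    protected final K key;
    protected final V value;

    protected PutRequest(K key, V value) {
        this.key = key;
        this.value = value;
    }

    boolean valueIsNull() {
        return value == null;
    }
}

// A plain value write (or a delete when the value is null).
class ValuePutRequest<K, V> extends PutRequest<K, V> {
    ValuePutRequest(K key, V value) {
        super(key, value);
    }
}

// A write of a whole map state; "is it a map?" becomes a type, not a flag.
class MapPutRequest<K, UK, UV> extends PutRequest<K, java.util.Map<UK, UV>> {
    MapPutRequest(K key, java.util.Map<UK, UV> value) {
        super(key, value);
    }
}
```

Dispatch could then happen via `instanceof` or a visitor instead of `valueIsMap()` checks.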

##
flink-state-backends/flink-statebackend-forst/src/main/java/org/apache/flink/state/forst/ForStIterateOperation.java:
##
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 

Re: [PR] [FLINK-35425][table-common] Support convert freshness to cron expression in full refresh mode [flink]

2024-05-27 Thread via GitHub


flinkbot commented on PR #24849:
URL: https://github.com/apache/flink/pull/24849#issuecomment-2133449299

   
   ## CI report:
   
   * 0bd839dee3158adc10d417d271a13f4aa14669e7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   




