Re: [I] The community group is full; are there any other groups? [flink-cdc]

2024-03-15 Thread via GitHub


gtk96 commented on issue #3149:
URL: https://github.com/apache/flink-cdc/issues/3149#issuecomment-2001039313

   
![image](https://github.com/apache/flink-cdc/assets/34996528/40ea60de-05ff-4ee4-8a92-95c368cb7f3d)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



(flink) branch dependabot/npm_and_yarn/flink-runtime-web/web-dashboard/follow-redirects-1.15.6 created (now 9904004e6d4)

2024-03-15 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch 
dependabot/npm_and_yarn/flink-runtime-web/web-dashboard/follow-redirects-1.15.6
in repository https://gitbox.apache.org/repos/asf/flink.git


  at 9904004e6d4 Bump follow-redirects in /flink-runtime-web/web-dashboard

No new revisions were added by this update.



(flink-ml) branch dependabot/maven/org.apache.zookeeper-zookeeper-3.8.4 created (now d306ffa1)

2024-03-15 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch 
dependabot/maven/org.apache.zookeeper-zookeeper-3.8.4
in repository https://gitbox.apache.org/repos/asf/flink-ml.git


  at d306ffa1 Bump org.apache.zookeeper:zookeeper from 3.6.3 to 3.8.4

No new revisions were added by this update.



(flink) branch master updated: [FLINK-34643][tests] Fix JobIDLoggingITCase

2024-03-15 Thread roman
This is an automated email from the ASF dual-hosted git repository.

roman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 6b5ae445724 [FLINK-34643][tests] Fix JobIDLoggingITCase
6b5ae445724 is described below

commit 6b5ae445724b68db05a3f9687cff6dd68e2129d7
Author: Roman Khachatryan 
AuthorDate: Mon Mar 11 16:22:42 2024 +0100

[FLINK-34643][tests] Fix JobIDLoggingITCase
---
 .../apache/flink/test/misc/JobIDLoggingITCase.java | 134 +++--
 1 file changed, 98 insertions(+), 36 deletions(-)

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/misc/JobIDLoggingITCase.java 
b/flink-tests/src/test/java/org/apache/flink/test/misc/JobIDLoggingITCase.java
index 3380698feb7..e13bfce16e3 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/misc/JobIDLoggingITCase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/misc/JobIDLoggingITCase.java
@@ -37,9 +37,9 @@ import org.apache.flink.test.junit5.InjectClusterClient;
 import org.apache.flink.test.junit5.MiniClusterExtension;
 import org.apache.flink.testutils.logging.LoggerAuditingExtension;
 import org.apache.flink.util.ExceptionUtils;
-import org.apache.flink.util.MdcUtils;
 
 import org.apache.logging.log4j.core.LogEvent;
+import org.apache.logging.log4j.util.ReadOnlyStringMap;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.extension.RegisterExtension;
 import org.slf4j.Logger;
@@ -52,17 +52,14 @@ import java.util.List;
 import java.util.Objects;
 import java.util.concurrent.ExecutionException;
 import java.util.regex.Pattern;
-import java.util.stream.Collectors;
 
-import static org.apache.flink.util.Preconditions.checkState;
+import static java.util.Arrays.asList;
+import static java.util.stream.Collectors.toList;
+import static org.apache.flink.util.MdcUtils.JOB_ID;
 import static org.assertj.core.api.Assertions.assertThat;
-import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.slf4j.event.Level.DEBUG;
 
-/**
- * Tests adding of {@link JobID} to logs (via {@link org.slf4j.MDC}) in the most important cases.
- */
-public class JobIDLoggingITCase {
+class JobIDLoggingITCase {
 private static final Logger logger = LoggerFactory.getLogger(JobIDLoggingITCase.class);
 
 @RegisterExtension
@@ -104,8 +101,7 @@ public class JobIDLoggingITCase {
 .build());
 
 @Test
-public void testJobIDLogging(@InjectClusterClient ClusterClient<?> clusterClient)
-throws Exception {
+void testJobIDLogging(@InjectClusterClient ClusterClient<?> clusterClient) throws Exception {
 JobID jobID = runJob(clusterClient);
 clusterClient.cancel(jobID).get();
 
@@ -114,53 +110,113 @@ public class JobIDLoggingITCase {
 // - how many messages to expect
 // - which log patterns to ignore
 
-assertJobIDPresent(jobID, 3, checkpointCoordinatorLogging);
-assertJobIDPresent(jobID, 6, streamTaskLogging);
 assertJobIDPresent(
 jobID,
-9,
+checkpointCoordinatorLogging,
+asList(
+"No checkpoint found during restore.",
+"Resetting the master hooks.",
+"Triggering checkpoint .*",
+"Received acknowledge message for checkpoint .*",
+"Completed checkpoint .*",
+"Checkpoint state: .*"));
+
+assertJobIDPresent(
+jobID,
+streamTaskLogging,
+asList(
+"State backend is set to .*",
+"Initializing Source: .*",
+"Invoking Source: .*",
+"Starting checkpoint .*",
+"Notify checkpoint \\d+ complete .*"));
+
+assertJobIDPresent(
+jobID,
 taskExecutorLogging,
+asList(
+"Received task .*",
+"Trigger checkpoint .*",
+"Confirm completed checkpoint .*"),
 "Un-registering task.*",
 "Successful registration.*",
 "Establish JobManager connection.*",
 "Offer reserved slots.*",
 ".*ResourceManager.*",
-"Operator event.*");
+"Operator event.*",
+"Recovered slot allocation snapshots.*",
+".*heartbeat.*");
+
+assertJobIDPresent(
+jobID,
+taskLogging,
+asList(
+"Source: .* switched from CREATED to DEPLOYING.",
+"Source: .* switched from DEPLOYING to INITIALIZING.",
+"Source: .* switched from INITIALIZING to RUNNING."));
+
+
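The test above verifies that the job ID is attached to log events via SLF4J's MDC. A minimal plain-Java sketch of how a mapped diagnostic context tags every log line on the current thread (names are illustrative, not Flink's `MdcUtils` API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of an MDC: a thread-local key/value map consulted when
// formatting each log line. Illustrative only; the real mechanism is org.slf4j.MDC.
public class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) { CTX.get().put(key, value); }
    static void remove(String key) { CTX.get().remove(key); }

    // Prepends the job ID (if any) to the message, like a log layout pattern would.
    static String log(String message) {
        String jobId = CTX.get().get("jobId");
        return (jobId == null ? "" : "[jobId=" + jobId + "] ") + message;
    }

    public static void main(String[] args) {
        put("jobId", "6b5ae445724");
        System.out.println(log("Triggering checkpoint 1"));
        remove("jobId");
        System.out.println(log("no job context"));
    }
}
```

The test then only has to assert that the captured `LogEvent`s carry the expected context key, which is what the `ReadOnlyStringMap` import in the diff is used for.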

(flink) branch master updated: [hotfix] In case of unexpected errors do not lose the primary failure reason

2024-03-15 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new f6e1b493bd6 [hotfix] In case of unexpected errors do not lose the primary failure reason
f6e1b493bd6 is described below

commit f6e1b493bd6292a87efd130a0e76af8bd750c1c9
Author: Piotr Nowojski 
AuthorDate: Tue Mar 12 17:14:13 2024 +0100

[hotfix] In case of unexpected errors do not lose the primary failure reason

An unexpected error can be, for example, an NPE.
---
 .../apache/flink/runtime/checkpoint/CheckpointCoordinator.java | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
index 25afade0239..c05efb10b48 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java
@@ -1045,10 +1045,10 @@ public class CheckpointCoordinator {
 @Nullable PendingCheckpoint checkpoint,
 CheckpointProperties checkpointProperties,
 Throwable throwable) {
-// beautify the stack trace a bit
-throwable = ExceptionUtils.stripCompletionException(throwable);
-
 try {
+// beautify the stack trace a bit
+throwable = ExceptionUtils.stripCompletionException(throwable);
+
 coordinatorsToCheckpoint.forEach(
 
OperatorCoordinatorCheckpointContext::abortCurrentTriggering);
 
@@ -1064,6 +1064,10 @@ public class CheckpointCoordinator {
 failureManager.handleCheckpointException(
 checkpoint, checkpointProperties, cause, null, job, 
null, statsTracker);
 }
+} catch (Throwable secondThrowable) {
+secondThrowable.addSuppressed(throwable);
+throw secondThrowable;
+
 } finally {
 isTriggering = false;
 executeQueuedRequest();
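The change above wraps the whole failure-handling body in try/catch so that a secondary failure (for example an NPE thrown while aborting the trigger) carries the primary failure as a suppressed exception instead of replacing it. A minimal sketch of that pattern, with hypothetical names:

```java
public class SuppressedFailureDemo {
    // Runs cleanup after a primary failure; if cleanup itself blows up,
    // attach the primary failure as a suppressed exception before rethrowing.
    static void handleFailure(Throwable primary, Runnable cleanup) {
        try {
            cleanup.run();
        } catch (Throwable secondary) {
            secondary.addSuppressed(primary); // do not lose the primary reason
            throw secondary;
        }
    }

    // Demo helper: provoke a secondary NPE and return what propagated.
    static Throwable provoke() {
        Throwable primary = new IllegalStateException("checkpoint declined");
        try {
            handleFailure(primary, () -> { throw new NullPointerException("broken cleanup"); });
            return null;
        } catch (Throwable t) {
            return t;
        }
    }

    public static void main(String[] args) {
        Throwable t = provoke();
        // The NPE propagates, but its stack trace still shows the primary cause.
        System.out.println(t + " suppressed=" + t.getSuppressed()[0]);
    }
}
```

Without `addSuppressed`, the secondary exception would mask the original checkpoint failure entirely in the logs.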



(flink) branch master updated: [FLINK-33816][streaming] Fix unstable test SourceStreamTaskTest.testTriggeringStopWithSavepointWithDrain

2024-03-15 Thread pnowojski
This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 5aebb04b305 [FLINK-33816][streaming] Fix unstable test 
SourceStreamTaskTest.testTriggeringStopWithSavepointWithDrain
5aebb04b305 is described below

commit 5aebb04b3055fbec6a74eaf4226c4a88d3fd2d6e
Author: Jiabao Sun 
AuthorDate: Tue Jan 2 14:33:17 2024 +0800

[FLINK-33816][streaming] Fix unstable test 
SourceStreamTaskTest.testTriggeringStopWithSavepointWithDrain
---
 .../org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java   | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
index 29bcf6b5990..5e47d36040b 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
@@ -708,7 +708,6 @@ class SourceStreamTaskTest extends SourceStreamTaskTestBase 
{
 harness.streamTask.runMailboxLoop();
 harness.finishProcessing();
 
-assertThat(triggerResult.isDone()).isTrue();
 assertThat(triggerResult.get()).isTrue();
 assertThat(checkpointCompleted.isDone()).isTrue();
 }
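The deleted assertion raced with asynchronous completion: `isDone()` can still be false at the moment of the check, while `get()` blocks until the result is available and can then be asserted safely. A small illustrative sketch of the race (not the Flink test harness):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncAssertDemo {
    // Completes asynchronously after a short delay, like a savepoint trigger result.
    static CompletableFuture<Boolean> trigger() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return true;
        });
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Boolean> result = trigger();
        // Flaky: asserting result.isDone() here may fail, since completion
        // happens on another thread at an unpredictable time.
        // Stable: get() waits for completion, then the value itself is asserted.
        System.out.println("value=" + result.get());
    }
}
```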



(flink) branch master updated: [FLINK-34194][ci] Updates test CI container to be based on Ubuntu 22.04

2024-03-15 Thread mapohl
This is an automated email from the ASF dual-hosted git repository.

mapohl pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 2b94a57f59e [FLINK-34194][ci] Updates test CI container to be based on 
Ubuntu 22.04
2b94a57f59e is described below

commit 2b94a57f59ee5a4fa09831236764712d1f5affc6
Author: Matthias Pohl 
AuthorDate: Mon Jan 22 10:59:25 2024 +0100

[FLINK-34194][ci] Updates test CI container to be based on Ubuntu 22.04
---
 .github/workflows/docs.yml  | 2 +-
 azure-pipelines.yml | 8 
 tools/azure-pipelines/build-apache-repo.yml | 2 +-
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml
index 2997f94ffab..290db164770 100644
--- a/.github/workflows/docs.yml
+++ b/.github/workflows/docs.yml
@@ -49,7 +49,7 @@ jobs:
   fi
   - name: Build documentation
 run: |
-  docker run --rm --volume "$PWD:/root/flink" chesnay/flink-ci:java_8_11_17_21_maven_386 bash -c "cd /root/flink && ./.github/workflows/docs.sh"
+  docker run --rm --volume "$PWD:/root/flink" mapohl/flink-ci:FLINK-34194 bash -c "cd /root/flink && ./.github/workflows/docs.sh"
   - name: Upload documentation
 uses: burnett01/rsync-deployments@5.2
 with:
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 81fbad42d21..afe1bfd70d9 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -39,7 +39,7 @@ resources:
   # Container with SSL to have the same environment everywhere.
   # see https://github.com/apache/flink-connector-shared-utils/tree/ci_utils
   - container: flink-build-container
-image: chesnay/flink-ci:java_8_11_17_21_maven_386
+image: mapohl/flink-ci:FLINK-34194
 # On AZP provided machines, set this flag to allow writing coredumps in 
docker
 options: --privileged
 
@@ -73,16 +73,16 @@ stages:
 parameters: # see template file for a definition of the parameters.
   stage_name: ci_build
   test_pool_definition:
-vmImage: 'ubuntu-20.04'
+vmImage: 'ubuntu-22.04'
   e2e_pool_definition:
-vmImage: 'ubuntu-20.04'
+vmImage: 'ubuntu-22.04'
   environment: PROFILE="-Dflink.hadoop.version=2.10.2"
   run_end_to_end: false
   container: flink-build-container
   jdk: 8
   - job: docs_404_check # run on a MSFT provided machine
 pool:
-  vmImage: 'ubuntu-20.04'
+  vmImage: 'ubuntu-22.04'
 steps:
   - task: GoTool@0
 inputs:
diff --git a/tools/azure-pipelines/build-apache-repo.yml 
b/tools/azure-pipelines/build-apache-repo.yml
index a229ee0029a..673979767c1 100644
--- a/tools/azure-pipelines/build-apache-repo.yml
+++ b/tools/azure-pipelines/build-apache-repo.yml
@@ -39,7 +39,7 @@ resources:
   # Container with SSL to have the same environment everywhere.
   # see https://github.com/apache/flink-connector-shared-utils/tree/ci_utils
   - container: flink-build-container
-image: chesnay/flink-ci:java_8_11_17_21_maven_386
+image: mapohl/flink-ci:FLINK-34194
 
 variables:
   MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository



(flink) branch master updated: [FLINK-34593][release] Add release note for version 1.19

2024-03-15 Thread lincoln
This is an automated email from the ASF dual-hosted git repository.

lincoln pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 875a431950d [FLINK-34593][release] Add release note for version 1.19
875a431950d is described below

commit 875a431950da066aa696f2c3a335f3267b85194a
Author: lincoln lee 
AuthorDate: Fri Mar 15 17:37:16 2024 +0800

[FLINK-34593][release] Add release note for version 1.19

This closes #24394
---
 docs/content.zh/_index.md   |   3 +-
 docs/content.zh/release-notes/flink-1.19.md | 362 
 docs/content/_index.md  |   1 +
 docs/content/release-notes/flink-1.19.md| 362 
 4 files changed, 727 insertions(+), 1 deletion(-)

diff --git a/docs/content.zh/_index.md b/docs/content.zh/_index.md
index f25de48eb0f..1516a960740 100644
--- a/docs/content.zh/_index.md
+++ b/docs/content.zh/_index.md
@@ -85,7 +85,8 @@ under the License.
 <!--
 For some reason Hugo will only allow linking to the
 release notes if there is a leading '/' and file extension.
 -->
-请参阅 [Flink 1.18]({{< ref "/release-notes/flink-1.18.md" >}}),
+请参阅 [Flink 1.19]({{< ref "/release-notes/flink-1.19.md" >}}),
+[Flink 1.18]({{< ref "/release-notes/flink-1.18.md" >}}),
 [Flink 1.17]({{< ref "/release-notes/flink-1.17.md" >}}),
 [Flink 1.16]({{< ref "/release-notes/flink-1.16.md" >}}),
 [Flink 1.15]({{< ref "/release-notes/flink-1.15.md" >}}),
diff --git a/docs/content.zh/release-notes/flink-1.19.md 
b/docs/content.zh/release-notes/flink-1.19.md
new file mode 100644
index 000..efb5a3228b6
--- /dev/null
+++ b/docs/content.zh/release-notes/flink-1.19.md
@@ -0,0 +1,362 @@
+---
+title: "Release Notes - Flink 1.19"
+---
+
+
+# Release notes - Flink 1.19
+
+These release notes discuss important aspects, such as configuration, behavior or dependencies,
+that changed between Flink 1.18 and Flink 1.19. Please read these notes carefully if you are
+planning to upgrade your Flink version to 1.19.
+
+## Dependency upgrades
+
+#### Drop support for python 3.7
+
+##### [FLINK-33029](https://issues.apache.org/jira/browse/FLINK-33029)
+
+#### Add support for python 3.11
+
+##### [FLINK-33030](https://issues.apache.org/jira/browse/FLINK-33030)
+
+## Build System
+
+#### Support Java 21
+
+##### [FLINK-33163](https://issues.apache.org/jira/browse/FLINK-33163)
+Apache Flink was made ready to compile and run with Java 21. This feature is still in beta mode.
+Issues should be reported in Flink's bug tracker.
+
+## Checkpoints
+
+#### Deprecate RestoreMode#LEGACY
+
+##### [FLINK-34190](https://issues.apache.org/jira/browse/FLINK-34190)
+
+`RestoreMode#LEGACY` is deprecated. Please use `RestoreMode#CLAIM` or `RestoreMode#NO_CLAIM` mode
+instead to get a clear state file ownership when restoring.
+
+#### CheckpointsCleaner clean individual checkpoint states in parallel
+
+##### [FLINK-33090](https://issues.apache.org/jira/browse/FLINK-33090)
+
+Now when disposing of no longer needed checkpoints, every state handle/state file will be disposed
+in parallel by the ioExecutor, vastly improving the disposing speed of a single checkpoint (for
+large checkpoints, the disposal time can be improved from 10 minutes to < 1 minute). The old
+behavior can be restored by setting `state.checkpoint.cleaner.parallel-mode` to false.
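The opt-out mentioned in the note above would be set in the Flink configuration; a sketch, with the key name taken from the release note and the parallel mode assumed to be the 1.19 default:

```yaml
# flink-conf.yaml: fall back to the pre-1.19 sequential checkpoint disposal
state.checkpoint.cleaner.parallel-mode: false
```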
+
+#### Support using larger checkpointing interval when source is processing backlog
+
+##### [FLINK-32514](https://issues.apache.org/jira/browse/FLINK-32514)
+
+`ProcessingBacklog` is introduced to demonstrate whether a record should be processed with low latency
+or high throughput. `ProcessingBacklog` can be set by source operators and can be used to change the
+checkpoint interval of a job during runtime.
+
+#### Allow triggering Checkpoints through command line client
+
+##### [FLINK-6755](https://issues.apache.org/jira/browse/FLINK-6755)
+
+The command line interface supports triggering a checkpoint manually. Usage:
+```
+./bin/flink checkpoint $JOB_ID [-full]
+```
+By specifying the '-full' option, a full checkpoint is triggered. Otherwise an incremental
+checkpoint is triggered if the job is configured to take incremental ones periodically.
+
+
+## Runtime & Coordination
+
+#### Migrate TypeSerializerSnapshot#resolveSchemaCompatibility
+
+##### [FLINK-30613](https://issues.apache.org/jira/browse/FLINK-30613)
+
+In Flink 1.19, the old method of resolving schema compatibility has been deprecated and the new one
+is introduced. See [FLIP-263](https://cwiki.apache.org/confluence/display/FLINK/FLIP-263%3A+Improve+resolving+schema+compatibility?src=contextnavpagetreemode) for more details.
+Please migrate to the new method following 

(flink-kubernetes-operator) branch main updated: [release] Update version to 1.9-SNAPSHOT (#798)

2024-03-15 Thread mxm
This is an automated email from the ASF dual-hosted git repository.

mxm pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 51fe0027 [release] Update version to 1.9-SNAPSHOT (#798)
51fe0027 is described below

commit 51fe002769897275b7de76c924f07f57dbbd22ad
Author: Maximilian Michels 
AuthorDate: Fri Mar 15 10:27:33 2024 +0100

[release] Update version to 1.9-SNAPSHOT (#798)
---
 .asf.yaml   | 1 +
 .github/workflows/docs.yaml | 6 +++---
 Dockerfile  | 2 +-
 docs/config.toml| 6 +++---
 examples/autoscaling/pom.xml| 2 +-
 examples/flink-beam-example/pom.xml | 2 +-
 examples/flink-sql-runner-example/pom.xml   | 2 +-
 examples/kubernetes-client-examples/pom.xml | 4 ++--
 flink-autoscaler-plugin-jdbc/pom.xml| 2 +-
 flink-autoscaler-standalone/pom.xml | 2 +-
 flink-autoscaler/pom.xml| 2 +-
 flink-kubernetes-docs/pom.xml   | 2 +-
 flink-kubernetes-operator-api/pom.xml   | 2 +-
 flink-kubernetes-operator/pom.xml   | 2 +-
 flink-kubernetes-standalone/pom.xml | 2 +-
 flink-kubernetes-webhook/pom.xml| 2 +-
 helm/flink-kubernetes-operator/Chart.yaml   | 4 ++--
 pom.xml | 2 +-
 18 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/.asf.yaml b/.asf.yaml
index 7d48cb75..346f3ead 100644
--- a/.asf.yaml
+++ b/.asf.yaml
@@ -18,6 +18,7 @@ github:
 release-1.5: {}
 release-1.6: {}
 release-1.7: {}
+release-1.8: {}
 
 
 notifications:
diff --git a/.github/workflows/docs.yaml b/.github/workflows/docs.yaml
index 4068df96..2a7b0b4e 100644
--- a/.github/workflows/docs.yaml
+++ b/.github/workflows/docs.yaml
@@ -28,8 +28,8 @@ jobs:
   matrix:
 branch:
   - main
-  - release-1.6
   - release-1.7
+  - release-1.8
 steps:
   - uses: actions/checkout@v3
 with:
@@ -41,8 +41,8 @@ jobs:
   echo "flink_branch=${currentBranch}"
   echo "flink_branch=${currentBranch}" >> ${GITHUB_ENV}
   if [ "${currentBranch}" = "main" ]; then
-echo "flink_alias=release-1.8" >> ${GITHUB_ENV}
-  elif [ "${currentBranch}" = "release-1.7" ]; then
+echo "flink_alias=release-1.9" >> ${GITHUB_ENV}
+  elif [ "${currentBranch}" = "release-1.8" ]; then
 echo "flink_alias=stable" >> ${GITHUB_ENV}
   else
 echo "flink_alias=${currentBranch}" >> ${GITHUB_ENV}
diff --git a/Dockerfile b/Dockerfile
index ac0ee06d..ac516aeb 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -36,7 +36,7 @@ RUN cd /app/tools/license; mkdir jars; cd jars; \
 FROM eclipse-temurin:11-jre-jammy
 ENV FLINK_HOME=/opt/flink
 ENV FLINK_PLUGINS_DIR=$FLINK_HOME/plugins
-ENV OPERATOR_VERSION=1.8-SNAPSHOT
+ENV OPERATOR_VERSION=1.9-SNAPSHOT
 ENV OPERATOR_JAR=flink-kubernetes-operator-$OPERATOR_VERSION-shaded.jar
 ENV WEBHOOK_JAR=flink-kubernetes-webhook-$OPERATOR_VERSION-shaded.jar
 ENV KUBERNETES_STANDALONE_JAR=flink-kubernetes-standalone-$OPERATOR_VERSION.jar
diff --git a/docs/config.toml b/docs/config.toml
index 4b5317ed..a7433d0e 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -34,11 +34,11 @@ pygmentsUseClasses = true
   # we change the version for the complete docs when forking of a release 
branch
   # etc.
   # The full version string as referenced in Maven (e.g. 1.2.1)
-  Version = "1.8-SNAPSHOT"
+  Version = "1.9-SNAPSHOT"
 
   # For stable releases, leave the bugfix version out (e.g. 1.2). For snapshot
   # release this should be the same as the regular version
-  VersionTitle = "1.8-SNAPSHOT"
+  VersionTitle = "1.9-SNAPSHOT"
 
   # The branch for this version of the Apache Flink Kubernetes Operator
   Branch = "main"
@@ -63,8 +63,8 @@ pygmentsUseClasses = true
   ]
 
   PreviousDocs = [
+["1.8", "https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8"],
 ["1.7", "https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.7"],
-["1.6", "https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.6"]
   ]
 
 [markup]
diff --git a/examples/autoscaling/pom.xml b/examples/autoscaling/pom.xml
index 9626cf2a..0afc0bec 100644
--- a/examples/autoscaling/pom.xml
+++ b/examples/autoscaling/pom.xml
@@ -23,7 +23,7 @@ under the License.
 
 <parent>
 <groupId>org.apache.flink</groupId>
 <artifactId>flink-kubernetes-operator-parent</artifactId>
-<version>1.8-SNAPSHOT</version>
+<version>1.9-SNAPSHOT</version>
 <relativePath>../..</relativePath>
 </parent>
 
diff --git a/examples/flink-beam-example/pom.xml 
b/examples/flink-beam-example/pom.xml
index 8a69a2a2..61fc3d6e 100644
--- a/examples/flink-beam-example/pom.xml
+++ b/examples/flink-beam-example/pom.xml
@@ -23,7 +23,7 @@ under the License.
 
 org.apache.flink
 

(flink) annotated tag release-1.19.0 updated (eaffd227d85 -> 518023aa69e)

2024-03-15 Thread lincoln
This is an automated email from the ASF dual-hosted git repository.

lincoln pushed a change to annotated tag release-1.19.0
in repository https://gitbox.apache.org/repos/asf/flink.git


*** WARNING: tag release-1.19.0 was modified! ***

from eaffd227d85 (commit)
  to 518023aa69e (tag)
 tagging eaffd227d853e0cdef03f1af5016e00f950930a9 (commit)
 replaces pre-apache-rename
  by lincoln lee
  on Fri Mar 15 17:03:07 2024 +0800

- Log -
Release Flink 1.19.0
-BEGIN PGP SIGNATURE-

iQIzBAABCAAdFiEEAotmBfUbwpa1alBC5X0wq+51ygYFAmX0DssACgkQ5X0wq+51
ygYlKRAAiyAf6JuuLwaUrdBmXc5BawfuT3LMSD28VQc1ehOnyAHu2idMPW8HCM6M
EGtFyBKAS763GsAz3G5xMjIbyRl0fFhhSIBe7hTxPMY15apTCCgQveeFEyLia9Ni
WMDT1dZ81qllhdaBlLEeEclxNNftfso1hsP0JZJktuBSJPoNI0hnrqVs+H/zGt1Q
TW5qFN1CIDNPetTuW9dF24GtzC4tgYaTW0YAE8uy1clSCVQR3bbK2mm0R29SgDgM
kQrkBF3U/2366Wb18+g1OBiuMWVnK9teXV5QDUMXCYyaIIQebH0XfVPjHoRe0KGz
LaDrurMuUrbwzwzqlgxY/fv2O3Tvg74pnYsuqNy6ZPylGvuCZUsfc1UWX/HmUZ43
rx8nuaVoKss1rGOhchUH6BOWIABCLHFcNWVAZEvMt/RaFvSZ93GaZ9nSL9kP6a/l
NdPpYSUzE6f3cSYImerKfvvr+Y+MKv6R228rUPmkyPt5nbFL7Q1RkR+RJsZ/YrOL
b2NvALDuzB9cHM154BXtR3en5Z+tNO5mTBOEgrAzIKG5zF74N1fSjO64a4os5Mm+
Ls+MySQjE/58Y9CyPvpFS/SFwcvoxFATD9OcEBJW1A2him4j65Cjxiz+lSDpu9pp
AET8g+ZTUAXEPksLlD1KyPL1BtWH/zf9vYqEum3UTJdNAcE+VMI=
=rE1A
-END PGP SIGNATURE-
---


No new revisions were added by this update.

Summary of changes:



svn commit: r67958 - /dev/flink/flink-1.19.0-rc2/ /release/flink/flink-1.19.0/

2024-03-15 Thread leonard
Author: leonard
Date: Fri Mar 15 08:46:16 2024
New Revision: 67958

Log:
Release Flink 1.19.0

Added:
release/flink/flink-1.19.0/
  - copied from r67957, dev/flink/flink-1.19.0-rc2/
Removed:
dev/flink/flink-1.19.0-rc2/



(flink) branch master updated: [FLINK-34546] Emit span with failure labels on failure in AdaptiveScheduler. (#24498)

2024-03-15 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 10bff3dbad1 [FLINK-34546] Emit span with failure labels on failure in 
AdaptiveScheduler. (#24498)
10bff3dbad1 is described below

commit 10bff3dbad103b60915be817a3408820ed09b6cf
Author: Stefan Richter 
AuthorDate: Fri Mar 15 09:36:43 2024 +0100

[FLINK-34546] Emit span with failure labels on failure in 
AdaptiveScheduler. (#24498)
---
 .../failover/ExecutionFailureHandler.java  | 32 ++---
 .../scheduler/adaptive/AdaptiveScheduler.java  | 22 +-
 .../runtime/scheduler/adaptive/Canceling.java  |  4 +-
 .../runtime/scheduler/adaptive/Executing.java  | 10 ++-
 .../flink/runtime/scheduler/adaptive/Failing.java  |  4 +-
 .../adaptive/JobFailureMetricReporter.java | 84 ++
 .../runtime/scheduler/adaptive/Restarting.java |  4 +-
 .../adaptive/StateWithExecutionGraph.java  |  7 +-
 .../scheduler/adaptive/StopWithSavepoint.java  |  9 ++-
 .../failover/ExecutionFailureHandlerTest.java  |  4 +-
 .../scheduler/adaptive/AdaptiveSchedulerTest.java  | 78 +++-
 .../runtime/scheduler/adaptive/ExecutingTest.java  |  3 +-
 .../adaptive/StateWithExecutionGraphTest.java  |  2 +-
 .../scheduler/adaptive/StopWithSavepointTest.java  | 13 +++-
 14 files changed, 228 insertions(+), 48 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/ExecutionFailureHandler.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/ExecutionFailureHandler.java
index 3d36a9e6bff..94130bc2f5f 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/ExecutionFailureHandler.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/ExecutionFailureHandler.java
@@ -26,13 +26,12 @@ import org.apache.flink.runtime.JobException;
 import org.apache.flink.runtime.concurrent.ComponentMainThreadExecutor;
 import org.apache.flink.runtime.executiongraph.Execution;
 import org.apache.flink.runtime.failure.FailureEnricherUtils;
+import org.apache.flink.runtime.scheduler.adaptive.JobFailureMetricReporter;
 import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
 import org.apache.flink.runtime.scheduler.strategy.SchedulingExecutionVertex;
 import org.apache.flink.runtime.scheduler.strategy.SchedulingTopology;
 import org.apache.flink.runtime.throwable.ThrowableClassifier;
 import org.apache.flink.runtime.throwable.ThrowableType;
-import org.apache.flink.traces.Span;
-import org.apache.flink.traces.SpanBuilder;
 import org.apache.flink.util.IterableUtils;
 
 import javax.annotation.Nullable;
@@ -70,8 +69,8 @@ public class ExecutionFailureHandler {
 private final Collection<FailureEnricher> failureEnrichers;
 private final ComponentMainThreadExecutor mainThreadExecutor;
 private final MetricGroup metricGroup;
-
 private final boolean reportEventsAsSpans;
+private final JobFailureMetricReporter jobFailureMetricReporter;
 
 /**
  * Creates the handler to deal with task failures.
@@ -105,6 +104,7 @@ public class ExecutionFailureHandler {
 this.globalFailureCtx = globalFailureCtx;
 this.metricGroup = metricGroup;
+this.reportEventsAsSpans = jobMasterConfig.get(TraceOptions.REPORT_EVENTS_AS_SPANS);
+this.jobFailureMetricReporter = new JobFailureMetricReporter(metricGroup);
 }
 
 /**
@@ -171,35 +171,15 @@ public class ExecutionFailureHandler {
 failureHandlingResult
 .getFailureLabels()
 .thenAcceptAsync(
-labels -> reportFailureHandling(failureHandlingResult, labels),
+labels ->
+jobFailureMetricReporter.reportJobFailure(
+failureHandlingResult, labels),
 mainThreadExecutor);
 }
 
 return failureHandlingResult;
 }
 
-private void reportFailureHandling(
-FailureHandlingResult failureHandlingResult, Map<String, String> failureLabels) {
-
-// Add base attributes
-SpanBuilder spanBuilder =
-Span.builder(ExecutionFailureHandler.class, "JobFailure")
-.setStartTsMillis(failureHandlingResult.getTimestamp())
-.setEndTsMillis(failureHandlingResult.getTimestamp())
-.setAttribute(
-"canRestart", String.valueOf(failureHandlingResult.canRestart()))
-.setAttribute(
-"isGlobalFailure",
-String.valueOf(failureHandlingResult.isGlobalFailure()));
-
-// Add all failure labels
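The refactoring above moves the inline span construction into a dedicated `JobFailureMetricReporter`. A runnable stand-in sketching the attribute set the handler reports on failure; the real `Span`/`SpanBuilder` classes live in `org.apache.flink.traces`, so plain-Java names are used here:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the failure span built in the diff above: collects the base
// attributes plus the enricher-provided failure labels into one attribute map.
public class FailureSpanSketch {
    static Map<String, String> buildFailureAttributes(
            long timestamp,
            boolean canRestart,
            boolean isGlobalFailure,
            Map<String, String> failureLabels) {
        Map<String, String> attrs = new HashMap<>(failureLabels);
        attrs.put("canRestart", String.valueOf(canRestart));
        attrs.put("isGlobalFailure", String.valueOf(isGlobalFailure));
        attrs.put("startTsMillis", String.valueOf(timestamp));
        return attrs;
    }

    public static void main(String[] args) {
        Map<String, String> attrs =
                buildFailureAttributes(42L, true, false, Map.of("FAILURE_TYPE", "SYSTEM"));
        System.out.println(attrs);
    }
}
```

Extracting this into one helper lets both the default and the adaptive scheduler report the same failure attributes, which is the point of the commit.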

Re: [I] Syncing MySQL to StarRocks fails when StarRocks and Flink are not on the same server [flink-cdc]

2024-03-15 Thread via GitHub


ihadoop commented on issue #3122:
URL: https://github.com/apache/flink-cdc/issues/3122#issuecomment-1999084064

   it looks like a configuration parameter error





Re: [I] Syncing MySQL to StarRocks fails when StarRocks and Flink are not on the same server [flink-cdc]

2024-03-15 Thread via GitHub


liisaxin commented on issue #3122:
URL: https://github.com/apache/flink-cdc/issues/3122#issuecomment-1999040783

   the same question 





[I] The community group is full; are there any other groups? [flink-cdc]

2024-03-15 Thread via GitHub


Yokixixi opened a new issue, #3149:
URL: https://github.com/apache/flink-cdc/issues/3149

   ### Please don't check the box below
   
   - [X] I'm aware that bugs and new features should be reported on Apache Jira 
or Flink mailing list instead of here
   
   ### Again, please don't check the box below
   
   - [X] I'm aware that new issues on GitHub will be ignored and automatically 
closed.
   
   ### Please do not check the option below
   
   - [X] I'm aware that bugs and new features should be reported on Apache Jira or the Flink mailing list instead of creating a new issue here.
   
   ### And please do not check the option below either
   
   - [X] I'm aware that new issues on GitHub will be ignored and automatically closed.

