(flink-web) branch asf-site updated: Bump Slack invite link

2024-10-02 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 82af91b9d Bump Slack invite link
82af91b9d is described below

commit 82af91b9d4a7eb80fc46b59fbd8fa8da7700bce2
Author: Marton Balassi 
AuthorDate: Wed Oct 2 11:21:49 2024 +0200

Bump Slack invite link

Generated on Oct 2nd
---
 docs/config.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/config.toml b/docs/config.toml
index 5e5fa48af..1de65c61b 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -41,7 +41,7 @@ posts = "/:year/:month/:day/:title/"
   #  2. In the dropdown menu, click on "Invite people to Apache Flink"
   #  3. Click "Edit link settings" and select "Never expires". Save the change.
   #  4. Copy the invitation link by clicking on "Copy invite link".
-  FlinkSlackInviteUrl = "https://join.slack.com/t/apache-flink/shared_invite/zt-2k0fdioxx-D0kTYYLh3pPjMu5IItqx3Q"
+  FlinkSlackInviteUrl = "https://join.slack.com/t/apache-flink/shared_invite/zt-2s1eayay8-gKQ5sj8c6rcEKh4wtW5feA"
 
   FlinkStableVersion = "1.20.0"
   FlinkStableShortVersion = "1.20"



(flink) branch master updated (cfd532d5520 -> 488a4ce4eae)

2024-09-18 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from cfd532d5520 [FLINK-36283][table-planner] Fix udaf incorrectly 
accessing the built-in function PERCENTILE's implementation with the same name
 add 488a4ce4eae [docs] Update Java 17 compatibility docs

No new revisions were added by this update.

Summary of changes:
 docs/content/docs/deployment/java_compatibility.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)



(flink-docker) branch master updated: Update GPG key for 1.19.1 release to full key ID

2024-06-17 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new f77b347  Update GPG key for 1.19.1 release to full key ID
f77b347 is described below

commit f77b347d0a534da0482e692d80f559f47041829e
Author: Hong Liang Teoh 
AuthorDate: Mon Jun 17 20:26:44 2024 +0100

Update GPG key for 1.19.1 release to full key ID
---
 1.19/scala_2.12-java11-ubuntu/Dockerfile | 2 +-
 1.19/scala_2.12-java17-ubuntu/Dockerfile | 2 +-
 1.19/scala_2.12-java8-ubuntu/Dockerfile  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/1.19/scala_2.12-java11-ubuntu/Dockerfile b/1.19/scala_2.12-java11-ubuntu/Dockerfile
index 650188c..e935aff 100644
--- a/1.19/scala_2.12-java11-ubuntu/Dockerfile
+++ b/1.19/scala_2.12-java11-ubuntu/Dockerfile
@@ -46,7 +46,7 @@ RUN set -ex; \
 # Configure Flink version
 ENV FLINK_TGZ_URL=https://dlcdn.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz \
     FLINK_ASC_URL=https://downloads.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz.asc \
-GPG_KEY=B78A5EA1 \
+GPG_KEY=6378E37EB3AAEA188B9CB0D396C2914BB78A5EA1 \
 CHECK_GPG=true
 
 # Prepare environment
diff --git a/1.19/scala_2.12-java17-ubuntu/Dockerfile b/1.19/scala_2.12-java17-ubuntu/Dockerfile
index 95f81e1..1c9ab39 100644
--- a/1.19/scala_2.12-java17-ubuntu/Dockerfile
+++ b/1.19/scala_2.12-java17-ubuntu/Dockerfile
@@ -46,7 +46,7 @@ RUN set -ex; \
 # Configure Flink version
 ENV FLINK_TGZ_URL=https://dlcdn.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz \
     FLINK_ASC_URL=https://downloads.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz.asc \
-GPG_KEY=B78A5EA1 \
+GPG_KEY=6378E37EB3AAEA188B9CB0D396C2914BB78A5EA1 \
 CHECK_GPG=true
 
 # Prepare environment
diff --git a/1.19/scala_2.12-java8-ubuntu/Dockerfile b/1.19/scala_2.12-java8-ubuntu/Dockerfile
index cd0a512..d079f01 100644
--- a/1.19/scala_2.12-java8-ubuntu/Dockerfile
+++ b/1.19/scala_2.12-java8-ubuntu/Dockerfile
@@ -46,7 +46,7 @@ RUN set -ex; \
 # Configure Flink version
 ENV FLINK_TGZ_URL=https://dlcdn.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz \
     FLINK_ASC_URL=https://downloads.apache.org/flink/flink-1.19.1/flink-1.19.1-bin-scala_2.12.tgz.asc \
-GPG_KEY=B78A5EA1 \
+GPG_KEY=6378E37EB3AAEA188B9CB0D396C2914BB78A5EA1 \
 CHECK_GPG=true
 
 # Prepare environment



(flink-docker) branch dev-1.19 updated: Update GPG key for 1.19.1 release to full key ID

2024-06-17 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch dev-1.19
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.19 by this push:
 new 01e3f32  Update GPG key for 1.19.1 release to full key ID
01e3f32 is described below

commit 01e3f324f4a7368851f730e144380d9a98277c59
Author: Hong Liang Teoh 
AuthorDate: Mon Jun 17 20:25:51 2024 +0100

Update GPG key for 1.19.1 release to full key ID
---
 add-version.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/add-version.sh b/add-version.sh
index 97ed0e7..1b2518a 100755
--- a/add-version.sh
+++ b/add-version.sh
@@ -105,7 +105,7 @@ elif [ "$flink_version" = "1.18.0" ]; then
 elif [ "$flink_version" = "1.19.0" ]; then
 gpg_key="028B6605F51BC296B56A5042E57D30ABEE75CA06"
 elif [ "$flink_version" = "1.19.1" ]; then
-gpg_key="B78A5EA1"
+gpg_key="6378E37EB3AAEA188B9CB0D396C2914BB78A5EA1"
 else
 error "Missing GPG key ID for this release"
 fi



(flink-kubernetes-operator) branch main updated: [FLINK-35183] MinorVersion metric for tracking applications

2024-05-09 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 84d9b745 [FLINK-35183] MinorVersion metric for tracking applications
84d9b745 is described below

commit 84d9b7459b01dab434b2ea8a43f9c9163c753273
Author: Márton Balassi 
AuthorDate: Thu May 9 10:53:28 2024 +0200

[FLINK-35183] MinorVersion metric for tracking applications
---
 .../operator/metrics/FlinkDeploymentMetrics.java   | 43 --
 .../metrics/FlinkDeploymentMetricsTest.java| 51 --
 2 files changed, 77 insertions(+), 17 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
index a5871544..310bb75a 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
@@ -42,11 +42,17 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics>> 
deploymentFlinkVersions =
 new ConcurrentHashMap<>();
+// map(namespace, map(version, set(deployment)))
+private final Map>> 
deploymentFlinkMinorVersions =
+new ConcurrentHashMap<>();
 // map(namespace, map(deployment, cpu))
 private final Map> deploymentCpuUsage = new 
ConcurrentHashMap<>();
 // map(namespace, map(deployment, memory))
 private final Map> deploymentMemoryUsage = new 
ConcurrentHashMap<>();
 public static final String FLINK_VERSION_GROUP_NAME = "FlinkVersion";
+public static final String FLINK_MINOR_VERSION_GROUP_NAME = 
"FlinkMinorVersion";
+public static final String UNKNOWN_VERSION = "UNKNOWN";
+public static final String MALFORMED_MINOR_VERSION = "MALFORMED";
 public static final String STATUS_GROUP_NAME = "JmDeploymentStatus";
 public static final String RESOURCE_USAGE_GROUP_NAME = "ResourceUsage";
 public static final String COUNTER_NAME = "Count";
@@ -77,12 +83,13 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics new ConcurrentHashMap<>())
@@ -94,6 +101,22 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics= 2) {
+minorVersion = subVersions[0].concat(".").concat(subVersions[1]);
+}
+deploymentFlinkMinorVersions
+.computeIfAbsent(namespace, ns -> new ConcurrentHashMap<>())
+.computeIfAbsent(
+minorVersion,
+v -> {
+initFlinkMinorVersions(namespace, v);
+return ConcurrentHashMap.newKeySet();
+})
+.add(deploymentName);
+
 var totalCpu =
 NumberUtils.toDouble(
 
clusterInfo.getOrDefault(AbstractFlinkService.FIELD_NAME_TOTAL_CPU, "0"));
@@ -133,6 +156,12 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics 
names.remove(name));
 }
+if (deploymentFlinkMinorVersions.containsKey(namespace)) {
+deploymentFlinkMinorVersions
+.get(namespace)
+.values()
+.forEach(names -> names.remove(name));
+}
 if (deploymentCpuUsage.containsKey(namespace)) {
 deploymentCpuUsage.get(namespace).remove(name);
 }
@@ -165,13 +194,21 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics 
deploymentFlinkVersions.get(ns).get(flinkVersion).size());
 }
 
+private void initFlinkMinorVersions(String ns, String minorVersion) {
+parentMetricGroup
+.createResourceNamespaceGroup(configuration, 
FlinkDeployment.class, ns)
+.addGroup(FLINK_MINOR_VERSION_GROUP_NAME, minorVersion)
+.gauge(
+COUNTER_NAME,
+() -> 
deploymentFlinkMinorVersions.get(ns).get(minorVersion).size());
+}
+
 private void initNamespaceCpuUsage(String ns) {
 parentMetricGroup
 .createResourceNamespaceGroup(configuration, 
FlinkDeployment.class, ns)
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetricsTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetricsTest.java
index 73dd58a2..1776cad2 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/f

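A note on the metric shape this commit introduces: next to the existing per-version gauges, each namespace also gets a FlinkMinorVersion group keyed by the "major.minor" part of the Flink version, with a Count gauge per bucket. The bucketing itself reduces to the following standalone sketch; the constant names come from the diff above, while the helper class and method are illustrative rather than the operator's exact code.

    public final class MinorVersionSketch {

        public static final String UNKNOWN_VERSION = "UNKNOWN";
        public static final String MALFORMED_MINOR_VERSION = "MALFORMED";

        /** Reduces a full Flink version such as "1.18.1" to its "major.minor" bucket ("1.18"). */
        static String toMinorVersion(String flinkVersion) {
            if (flinkVersion == null || flinkVersion.isEmpty()) {
                return UNKNOWN_VERSION;
            }
            String[] subVersions = flinkVersion.split("\\.");
            if (subVersions.length >= 2) {
                return subVersions[0] + "." + subVersions[1];
            }
            return MALFORMED_MINOR_VERSION;
        }

        public static void main(String[] args) {
            System.out.println(toMinorVersion("1.18.1")); // 1.18
            System.out.println(toMinorVersion("1.20"));   // 1.20
            System.out.println(toMinorVersion("weird"));  // MALFORMED
            System.out.println(toMinorVersion(""));       // UNKNOWN
        }
    }
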
(flink-kubernetes-operator) branch main updated: [FLINK-35278] Fix occasional NPE on getting latest resource for status replace

2024-05-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new e73363f3 [FLINK-35278] Fix occasional NPE on getting latest resource 
for status replace
e73363f3 is described below

commit e73363f3486ed9e1df5cc05c9d0baec7c8c3a37f
Author: Ferenc Csaky 
AuthorDate: Sat May 4 11:12:40 2024 +0200

[FLINK-35278] Fix occasional NPE on getting latest resource for status 
replace
---
 .../kubernetes/operator/utils/StatusRecorder.java  | 87 +-
 .../operator/utils/StatusRecorderTest.java | 27 ++-
 2 files changed, 76 insertions(+), 38 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
index 0197dadd..1b7e3fc2 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
@@ -18,6 +18,7 @@
 
 package org.apache.flink.kubernetes.operator.utils;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.kubernetes.operator.api.AbstractFlinkResource;
 import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
 import 
org.apache.flink.kubernetes.operator.api.lifecycle.ResourceLifecycleState;
@@ -127,47 +128,65 @@ public class StatusRecorder<
 } catch (KubernetesClientException kce) {
 // 409 is the error code for conflicts resulting from the 
locking
 if (kce.getCode() == 409) {
-var currentVersion = 
resource.getMetadata().getResourceVersion();
-LOG.debug(
-"Could not apply status update for resource 
version {}",
-currentVersion);
-
-var latest = client.resource(resource).get();
-var latestVersion = 
latest.getMetadata().getResourceVersion();
-
-if (latestVersion.equals(currentVersion)) {
-// This should not happen as long as the client works 
consistently
-LOG.error("Unable to fetch latest resource version");
-throw kce;
-}
-
-if (latest.getStatus().equals(prevStatus)) {
-if (retries++ < 3) {
-LOG.debug(
-"Retrying status update for latest version 
{}", latestVersion);
-
resource.getMetadata().setResourceVersion(latestVersion);
-} else {
-// If we cannot get the latest version in 3 tries 
we throw the error to
-// retry with delay
-throw kce;
-}
-} else {
-throw new StatusConflictException(
-"Status have been modified externally in 
version "
-+ latestVersion
-+ " Previous: "
-+ 
objectMapper.writeValueAsString(prevStatus)
-+ " Latest: "
-+ 
objectMapper.writeValueAsString(latest.getStatus()));
-}
+handleLockingError(resource, prevStatus, client, retries, 
kce);
+++retries;
 } else {
-// We simply throw non conflict errors, to trigger retry 
with delay
+// We simply throw non-conflict errors, to trigger retry 
with delay
 throw kce;
 }
 }
 }
 }
 
+@VisibleForTesting
+void handleLockingError(
+CR resource,
+STATUS prevStatus,
+KubernetesClient client,
+int retries,
+KubernetesClientException kce)
+throws JsonProcessingException {
+
+var currentVersion = resource.getMetadata().getResourceVersion();
+LOG.debug("Could not apply status update for resource version {}", 
currentVersion);
+
+var latest = client.resource(resource).get();
+if (latest == null || latest.getMetadata() == null) {
+// This can happen occasionally, we throw the error to retry with 
delay.
+throw new KubernetesClientException(
+String.format(
+"Failed to retrieve

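In short, the fix routes the 409 handling through a dedicated method and adds a null check before touching the freshly fetched resource. A simplified, self-contained sketch of that control flow follows; the types here are stand-ins rather than the fabric8/operator classes, and only the decision logic mirrors the hunk above.

    final class StatusUpdateRetrySketch {

        interface Client {
            Resource fetchLatest(Resource current); // may occasionally return null
        }

        record Resource(String resourceVersion, String status) {}

        static void handleLockingError(Resource resource, String prevStatus, Client client, int retries) {
            Resource latest = client.fetchLatest(resource);
            if (latest == null) {
                // Previously the fetched resource was dereferenced unconditionally and could NPE;
                // now the caller sees a retriable error instead.
                throw new IllegalStateException("Failed to retrieve latest resource, retrying with delay");
            }
            if (latest.resourceVersion().equals(resource.resourceVersion())) {
                // Should not happen as long as the client works consistently.
                throw new IllegalStateException("Unable to fetch latest resource version");
            }
            if (latest.status().equals(prevStatus)) {
                if (retries >= 3) {
                    throw new IllegalStateException("Conflict persisted after 3 retries");
                }
                // otherwise: retry the status update against the latest resource version
            } else {
                throw new IllegalStateException(
                        "Status has been modified externally in version " + latest.resourceVersion());
            }
        }
    }
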
(flink) branch master updated: [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode (#24303)

2024-04-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b3fdb07c041 [FLINK-32315][k8s] Support uploading "local://" artifacts 
in Kubernetes Application Mode (#24303)
b3fdb07c041 is described below

commit b3fdb07c04114c515cfc5893e89528bbfb4384ed
Author: Ferenc Csaky 
AuthorDate: Sat Apr 20 20:17:54 2024 +0200

[FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes 
Application Mode (#24303)
---
 docs/content.zh/docs/deployment/config.md  |   8 +-
 .../resource-providers/native_kubernetes.md|  43 +++-
 docs/content/docs/deployment/config.md |   8 +-
 .../resource-providers/native_kubernetes.md|  43 +++-
 .../generated/kubernetes_config_configuration.html |  18 ++
 .../program/artifact/ArtifactFetchManager.java |   2 +-
 .../program/artifact/ArtifactFetchManagerTest.java |  51 ++---
 .../org/apache/flink/testutils/TestingUtils.java   |  29 +++
 .../kubernetes/KubernetesClusterClientFactory.java |   6 +-
 .../kubernetes/KubernetesClusterDescriptor.java|  14 +-
 .../DefaultKubernetesArtifactUploader.java | 152 +++
 .../artifact/KubernetesArtifactUploader.java   |  46 +
 .../configuration/KubernetesConfigOptions.java |  20 ++
 .../KubernetesClusterDescriptorTest.java   |   3 +-
 .../DefaultKubernetesArtifactUploaderTest.java | 217 +
 .../apache/flink/kubernetes/artifact/DummyFs.java  |  66 +++
 .../flink/kubernetes/artifact/DummyFsFactory.java  |  39 
 .../org.apache.flink.core.fs.FileSystemFactory |  16 ++
 18 files changed, 728 insertions(+), 53 deletions(-)

diff --git a/docs/content.zh/docs/deployment/config.md 
b/docs/content.zh/docs/deployment/config.md
index 711d97aef40..94addd8ccad 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -402,11 +402,13 @@ See the [History Server Docs]({{< ref 
"docs/deployment/advanced/historyserver" >
 
 
 
-# Artifact Fetching
+# User Artifact Management
 
-Flink can fetch user artifacts stored locally, on remote DFS, or accessible 
via an HTTP(S) endpoint.
+Flink is capable to upload and fetch local user artifacts in Application Mode. 
An artifact can be the actual job archive, a UDF that is packaged separately, 
etc.
+1. Uploading local artifacts to a DFS is a Kubernetes specific feature, see 
the [Kubernetes](#kubernetes) section and look for `kubernetes.artifacts.*` 
prefixed options.
+2. Fetching remote artifacts on the deployed application cluster is supported 
from DFS or an HTTP(S) endpoint.
 {{< hint info >}}
-**Note:** This is only supported in Standalone Application Mode and Native 
Kubernetes Application Mode.
+**Note:** Artifact Fetching is supported in Standalone Application Mode and 
Native Kubernetes Application Mode.
 {{< /hint >}}
 
 {{< generated/artifact_fetch_configuration >}}
diff --git 
a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md 
b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
index d8ca037d441..0f8f586d09e 100644
--- a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
@@ -85,6 +85,9 @@ For production use, we recommend deploying Flink Applications 
in the [Applicatio
 
 The [Application Mode]({{< ref "docs/deployment/overview" 
>}}#application-mode) requires that the user code is bundled together with the 
Flink image because it runs the user code's `main()` method on the cluster.
 The Application Mode makes sure that all Flink components are properly cleaned 
up after the termination of the application.
+Bundling can be done by modifying the base Flink Docker image, or via the User 
Artifact Management, which makes it possible to upload and download artifacts 
that are not available locally.
+
+ Modify the Docker image
 
 The Flink community provides a [base Docker image]({{< ref 
"docs/deployment/resource-providers/standalone/docker" 
>}}#docker-hub-flink-images) which can be used to bundle the user code:
 
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+ Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upl

(flink-kubernetes-operator) 01/02: [FLINK-35116] Bump operator sdk version to 4.8.3

2024-04-17 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit 29a4aa5adf101920cbe1a3a9a178ff16f52c746d
Author: Marton Balassi 
AuthorDate: Tue Apr 16 13:56:51 2024 +0200

[FLINK-35116] Bump operator sdk version to 4.8.3
---
 .../generated/kubernetes_operator_config_configuration.html |  2 +-
 docs/layouts/shortcodes/generated/system_section.html   |  2 +-
 .../operator/api/validation/CrdCompatibilityChecker.java|  6 +-
 .../org/apache/flink/kubernetes/operator/FlinkOperator.java |  2 +-
 .../operator/config/FlinkOperatorConfiguration.java |  3 +--
 .../kubernetes/operator/metrics/OperatorJosdkMetrics.java   |  4 ++--
 .../src/main/resources/META-INF/NOTICE  |  6 +++---
 .../apache/flink/kubernetes/operator/FlinkOperatorTest.java | 13 +
 .../org/apache/flink/kubernetes/operator/TestUtils.java |  6 ++
 .../operator/service/AbstractFlinkServiceTest.java  |  2 +-
 pom.xml |  2 +-
 11 files changed, 23 insertions(+), 25 deletions(-)

diff --git 
a/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
 
b/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
index d0680627..918b4074 100644
--- 
a/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
+++ 
b/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
@@ -304,7 +304,7 @@
 
 
 kubernetes.operator.reconcile.parallelism
-200
+50
 Integer
 The maximum number of threads running the reconciliation loop. 
Use -1 for infinite.
 
diff --git a/docs/layouts/shortcodes/generated/system_section.html 
b/docs/layouts/shortcodes/generated/system_section.html
index aa053c2f..ec303011 100644
--- a/docs/layouts/shortcodes/generated/system_section.html
+++ b/docs/layouts/shortcodes/generated/system_section.html
@@ -100,7 +100,7 @@
 
 
 kubernetes.operator.reconcile.parallelism
-200
+50
 Integer
 The maximum number of threads running the reconciliation loop. 
Use -1 for infinite.
 
diff --git 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/validation/CrdCompatibilityChecker.java
 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/validation/CrdCompatibilityChecker.java
index 658e25aa..faf6024a 100644
--- 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/validation/CrdCompatibilityChecker.java
+++ 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/validation/CrdCompatibilityChecker.java
@@ -93,7 +93,11 @@ public class CrdCompatibilityChecker {
 // This field was removed from Kubernetes ObjectMeta v1 in 
1.25 as it was unused
 // for a long time. If set for any reason (very unlikely 
as it does nothing),
 // the property will be dropped / ignored by the api 
server.
-if (!fieldPath.endsWith(".metadata.clusterName")) {
+if (!fieldPath.endsWith(".metadata.clusterName")
+// This claims field was removed in Kubernetes 
1.28 as it was mistakenly
+// added in the first place. For more context 
please refer to
+// https://github.com/kubernetes/api/commit/8b14183
+&& 
!fieldPath.contains(".volumeClaimTemplate.spec.resources.claims")) {
 err(fieldPath + " has been removed");
 }
 } else {
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/FlinkOperator.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/FlinkOperator.java
index a5846f59..0ecd7c83 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/FlinkOperator.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/FlinkOperator.java
@@ -137,7 +137,7 @@ public class FlinkOperator {
 overrider.withExecutorService(Executors.newCachedThreadPool());
 } else {
 LOG.info("Configuring operator with {} reconciliation threads.", 
parallelism);
-
overrider.withExecutorService(Executors.newFixedThreadPool(parallelism));
+overrider.withConcurrentReconciliationThreads(parallelism);
 }
 
 if (operatorConf.isJosdkMetricsEnabled()) {
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkOperato

(flink-kubernetes-operator) branch main updated (75eb206e -> 4ec4b319)

2024-04-17 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from 75eb206e [FLINK-34906][autoscaler] Only scale when all tasks are 
running
 new 29a4aa5a [FLINK-35116] Bump operator sdk version to 4.8.3
 new 4ec4b319 [FLINK-35116] Bump fabric8 version to 6.11.0

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../kubernetes_operator_config_configuration.html  | 2 +-
 .../shortcodes/generated/system_section.html   | 2 +-
 .../operator/api/status/CommonStatus.java  | 2 +-
 .../kubernetes/operator/api/status/JobStatus.java  | 2 +-
 .../api/validation/CrdCompatibilityChecker.java| 6 +-
 .../flink/kubernetes/operator/FlinkOperator.java   | 2 +-
 .../config/FlinkOperatorConfiguration.java | 3 +-
 .../operator/metrics/OperatorJosdkMetrics.java | 4 +-
 .../src/main/resources/META-INF/NOTICE |56 +-
 .../kubernetes/operator/FlinkOperatorTest.java |13 +-
 .../flink/kubernetes/operator/TestUtils.java   | 6 +
 .../operator/service/AbstractFlinkServiceTest.java | 2 +-
 .../crds/flinkdeployments.flink.apache.org-v1.yml  | 11150 ++-
 .../crds/flinksessionjobs.flink.apache.org-v1.yml  |   230 +-
 pom.xml| 4 +-
 15 files changed, 5906 insertions(+), 5578 deletions(-)



(flink) branch master updated: [FLINK-35015][formats] allow passing hadoop config into parquet reader (#24623)

2024-04-10 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new c0891cfe289 [FLINK-35015][formats] allow passing hadoop config into 
parquet reader (#24623)
c0891cfe289 is described below

commit c0891cfe28903c5a362a35ea3bb8702bdc92cf43
Author: Peter Huang 
AuthorDate: Wed Apr 10 03:47:53 2024 -0700

[FLINK-35015][formats] allow passing hadoop config into parquet reader 
(#24623)

Co-authored-by: Peter (ACS) Huang 
---
 flink-formats/flink-parquet/pom.xml| 24 ++
 .../parquet/avro/AvroParquetRecordFormat.java  |  2 ++
 2 files changed, 26 insertions(+)

diff --git a/flink-formats/flink-parquet/pom.xml 
b/flink-formats/flink-parquet/pom.xml
index 7335c0fc85c..9baa9687f17 100644
--- a/flink-formats/flink-parquet/pom.xml
+++ b/flink-formats/flink-parquet/pom.xml
@@ -80,6 +80,14 @@ under the License.
true

 
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-hadoop-fs</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+            <optional>true</optional>
+        </dependency>
+

 

@@ -115,6 +123,22 @@ under the License.


 
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-hdfs</artifactId>
+            <scope>provided</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>ch.qos.reload4j</groupId>
+                    <artifactId>reload4j</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.slf4j</groupId>
+                    <artifactId>slf4j-reload4j</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+

org.apache.hadoop
hadoop-mapreduce-client-core
diff --git 
a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/avro/AvroParquetRecordFormat.java
 
b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/avro/AvroParquetRecordFormat.java
index 3b8d032d32b..9c93e48ff99 100644
--- 
a/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/avro/AvroParquetRecordFormat.java
+++ 
b/flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/avro/AvroParquetRecordFormat.java
@@ -25,6 +25,7 @@ import 
org.apache.flink.connector.file.src.reader.StreamFormat;
 import org.apache.flink.connector.file.src.util.CheckpointedPosition;
 import org.apache.flink.core.fs.FSDataInputStream;
 import org.apache.flink.formats.parquet.ParquetInputFile;
+import org.apache.flink.runtime.util.HadoopUtils;
 import org.apache.flink.util.function.SerializableSupplier;
 
 import org.apache.avro.generic.GenericData;
@@ -82,6 +83,7 @@ class AvroParquetRecordFormat implements StreamFormat {
 return new AvroParquetRecordReader(
 AvroParquetReader.builder(new ParquetInputFile(stream, 
fileLen))
 .withDataModel(getDataModel())
+.withConf(HadoopUtils.getHadoopConfiguration(config))
 .build());
 }
 

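From a user's perspective, the format affected here is the one created through AvroParquetReaders, so after this commit Hadoop settings derived from the Flink configuration (via HadoopUtils.getHadoopConfiguration) reach the underlying AvroParquetReader, and the newly declared flink-hadoop-fs / hadoop-hdfs dependencies are provided, i.e. expected on the runtime classpath. A minimal, hedged usage sketch of that format follows; the class name, schema, and path are placeholders, and only the standard flink-parquet and file-source entry points are assumed.

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.connector.file.src.FileSource;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.formats.parquet.avro.AvroParquetReaders;

    public class ParquetSourceSketch {
        public static FileSource<GenericRecord> newSource() {
            Schema schema =
                    SchemaBuilder.record("Example").fields()
                            .requiredString("id")
                            .requiredLong("ts")
                            .endRecord();
            // AvroParquetReaders.forGenericRecord(...) is backed by AvroParquetRecordFormat, which
            // after this commit applies the Hadoop configuration derived from the Flink
            // configuration when building the reader.
            return FileSource.forRecordStreamFormat(
                            AvroParquetReaders.forGenericRecord(schema),
                            new Path("s3://my-bucket/parquet-data/"))
                    .build();
        }
    }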


(flink) branch master updated: [FLINK-34657] add lineage provider API (#24619)

2024-04-05 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new d3d8ae42839 [FLINK-34657] add lineage provider API (#24619)
d3d8ae42839 is described below

commit d3d8ae428397794683e5ba05c5ca70622da99e5e
Author: Peter Huang 
AuthorDate: Fri Apr 5 08:48:53 2024 -0700

[FLINK-34657] add lineage provider API (#24619)
---
 .../api/lineage/LineageVertexProvider.java | 32 ++
 1 file changed, 32 insertions(+)

diff --git 
a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/lineage/LineageVertexProvider.java
 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/lineage/LineageVertexProvider.java
new file mode 100644
index 000..1819a33b7f5
--- /dev/null
+++ 
b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/lineage/LineageVertexProvider.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.flink.streaming.api.lineage;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * Create lineage vertex for source and sink in DataStream. If the source and 
sink connectors in
+ * datastream job implement this interface, flink will get lineage vertex and 
create all-to-all
+ * lineages between sources and sinks by default.
+ */
+@PublicEvolving
+public interface LineageVertexProvider {
+LineageVertex getLineageVertex();
+}
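
A connector opts in simply by implementing the new interface; a minimal sketch follows (hypothetical class name, and the LineageVertex instance is assumed to be constructed elsewhere, since its shape is not part of this commit).

    import org.apache.flink.streaming.api.lineage.LineageVertex;
    import org.apache.flink.streaming.api.lineage.LineageVertexProvider;

    // Hypothetical lineage-aware source: the only contract added by this commit is getLineageVertex().
    public class LineageAwareSource implements LineageVertexProvider {

        private final LineageVertex lineageVertex;

        public LineageAwareSource(LineageVertex lineageVertex) {
            this.lineageVertex = lineageVertex;
        }

        @Override
        public LineageVertex getLineageVertex() {
            return lineageVertex;
        }
    }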



(flink) branch release-1.18 updated: [FLINK-34955] Upgrade commons-compress to 1.26.0.

2024-04-03 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.18
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.18 by this push:
 new 1711ba85744 [FLINK-34955] Upgrade commons-compress to 1.26.0.
1711ba85744 is described below

commit 1711ba85744d917ca63d989bf4c120c6aebda9ba
Author: Márton Balassi 
AuthorDate: Wed Apr 3 15:06:53 2024 +0200

[FLINK-34955] Upgrade commons-compress to 1.26.0.

Addresses 2 CVE as described at 
https://mvnrepository.com/artifact/org.apache.commons/commons-compress.

-

Co-authored-by: slfan1989 <55643692+slfan1...@users.noreply.github.com>
---
 flink-dist/src/main/resources/META-INF/NOTICE|  4 ++--
 flink-end-to-end-tests/flink-sql-client-test/pom.xml |  7 +++
 .../src/main/resources/META-INF/NOTICE   |  4 ++--
 .../flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE|  4 ++--
 .../flink-s3-fs-presto/src/main/resources/META-INF/NOTICE|  4 ++--
 .../src/main/resources/META-INF/NOTICE   |  4 +++-
 .../flink-sql-avro/src/main/resources/META-INF/NOTICE|  2 +-
 flink-python/pom.xml | 12 
 .../flink-table-planner/src/main/resources/META-INF/NOTICE   |  2 +-
 pom.xml  |  5 +++--
 10 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/flink-dist/src/main/resources/META-INF/NOTICE 
b/flink-dist/src/main/resources/META-INF/NOTICE
index 8eb3dbc5dc7..bb94111ed64 100644
--- a/flink-dist/src/main/resources/META-INF/NOTICE
+++ b/flink-dist/src/main/resources/META-INF/NOTICE
@@ -11,8 +11,8 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - com.ververica:frocksdbjni:6.20.3-ververica-2.0
 - commons-cli:commons-cli:1.5.0
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
-- org.apache.commons:commons-compress:1.21
+- commons-io:commons-io:2.15.1
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-math3:3.6.1
 - org.apache.commons:commons-text:1.10.0
diff --git a/flink-end-to-end-tests/flink-sql-client-test/pom.xml 
b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
index 5e816c66943..d7c1c1dc567 100644
--- a/flink-end-to-end-tests/flink-sql-client-test/pom.xml
+++ b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
@@ -69,6 +69,13 @@ under the License.
kafka
test

+
+        <dependency>
+            <groupId>commons-codec</groupId>
+            <artifactId>commons-codec</artifactId>
+            <scope>test</scope>
+        </dependency>
+

 

diff --git 
a/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
index 0236725e0a4..41d0788e3b7 100644
--- 
a/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
+++ 
b/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
@@ -16,9 +16,9 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - com.google.j2objc:j2objc-annotations:1.1
 - commons-beanutils:commons-beanutils:1.9.4
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
+- commons-io:commons-io:2.15.1
 - commons-logging:commons-logging:1.1.3
-- org.apache.commons:commons-compress:1.21
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-configuration2:2.1.1
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-text:1.10.0
diff --git 
a/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
index c16ab1adc98..5e66fa4612a 100644
--- a/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
+++ b/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
@@ -21,10 +21,10 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - commons-beanutils:commons-beanutils:1.9.4
 - commons-codec:commons-codec:1.15
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
+- commons-io:commons-io:2.15.1
 - commons-logging:commons-logging:1.1.3
 - joda-time:joda-time:2.5
-- org.apache.commons:commons-compress:1.21
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-configuration2:2.1.1
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-text:1.10.0
diff --git 
a/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE
index 3356afa2205..eccf85b9a1a 100644
--- a/flink-filesystems/flink-s3-fs-presto/src/main/resource

(flink) branch release-1.19 updated: [FLINK-34955] Upgrade commons-compress to 1.26.0.

2024-04-03 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.19
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.19 by this push:
 new f17217100cf [FLINK-34955] Upgrade commons-compress to 1.26.0.
f17217100cf is described below

commit f17217100cf7d28bf6a1b687427c01e30b77e900
Author: Márton Balassi 
AuthorDate: Wed Apr 3 15:08:32 2024 +0200

[FLINK-34955] Upgrade commons-compress to 1.26.0.

Addresses 2 CVE as described at 
https://mvnrepository.com/artifact/org.apache.commons/commons-compress.

-

Co-authored-by: slfan1989 <55643692+slfan1...@users.noreply.github.com>
---
 flink-dist/src/main/resources/META-INF/NOTICE|  4 ++--
 flink-end-to-end-tests/flink-sql-client-test/pom.xml |  7 +++
 .../src/main/resources/META-INF/NOTICE   |  4 ++--
 .../flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE|  4 ++--
 .../flink-s3-fs-presto/src/main/resources/META-INF/NOTICE|  4 ++--
 .../src/main/resources/META-INF/NOTICE   |  4 +++-
 .../flink-sql-avro/src/main/resources/META-INF/NOTICE|  2 +-
 flink-python/pom.xml | 12 
 .../flink-table-planner/src/main/resources/META-INF/NOTICE   |  2 +-
 pom.xml  |  5 +++--
 10 files changed, 35 insertions(+), 13 deletions(-)

diff --git a/flink-dist/src/main/resources/META-INF/NOTICE 
b/flink-dist/src/main/resources/META-INF/NOTICE
index 2eb8a611431..9e249c17f6f 100644
--- a/flink-dist/src/main/resources/META-INF/NOTICE
+++ b/flink-dist/src/main/resources/META-INF/NOTICE
@@ -11,8 +11,8 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - com.ververica:frocksdbjni:6.20.3-ververica-2.0
 - commons-cli:commons-cli:1.5.0
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
-- org.apache.commons:commons-compress:1.24.0
+- commons-io:commons-io:2.15.1
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-math3:3.6.1
 - org.apache.commons:commons-text:1.10.0
diff --git a/flink-end-to-end-tests/flink-sql-client-test/pom.xml 
b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
index 4ab55e744c0..037c7e6c026 100644
--- a/flink-end-to-end-tests/flink-sql-client-test/pom.xml
+++ b/flink-end-to-end-tests/flink-sql-client-test/pom.xml
@@ -69,6 +69,13 @@ under the License.
kafka
test

+
+        <dependency>
+            <groupId>commons-codec</groupId>
+            <artifactId>commons-codec</artifactId>
+            <scope>test</scope>
+        </dependency>
+

 

diff --git 
a/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
index 7163355eb5b..9e5bde2492b 100644
--- 
a/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
+++ 
b/flink-filesystems/flink-fs-hadoop-shaded/src/main/resources/META-INF/NOTICE
@@ -16,9 +16,9 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - com.google.j2objc:j2objc-annotations:1.1
 - commons-beanutils:commons-beanutils:1.9.4
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
+- commons-io:commons-io:2.15.1
 - commons-logging:commons-logging:1.1.3
-- org.apache.commons:commons-compress:1.24.0
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-configuration2:2.1.1
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-text:1.10.0
diff --git 
a/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
index ae30f0d85df..5c9c2a03e15 100644
--- a/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
+++ b/flink-filesystems/flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE
@@ -21,10 +21,10 @@ This project bundles the following dependencies under the 
Apache Software Licens
 - commons-beanutils:commons-beanutils:1.9.4
 - commons-codec:commons-codec:1.15
 - commons-collections:commons-collections:3.2.2
-- commons-io:commons-io:2.11.0
+- commons-io:commons-io:2.15.1
 - commons-logging:commons-logging:1.1.3
 - joda-time:joda-time:2.5
-- org.apache.commons:commons-compress:1.24.0
+- org.apache.commons:commons-compress:1.26.0
 - org.apache.commons:commons-configuration2:2.1.1
 - org.apache.commons:commons-lang3:3.12.0
 - org.apache.commons:commons-text:1.10.0
diff --git 
a/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE 
b/flink-filesystems/flink-s3-fs-presto/src/main/resources/META-INF/NOTICE
index d84ee2c57d7..75a956d1707 100644
--- a/flink-filesystems/flink-s3-fs-presto/src/main/res

(flink) branch master updated (8a18b119c95 -> 163b9cca6d2)

2024-04-02 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 8a18b119c95 [FLINK-21943][core] Remove redundant checkNotNull in 
Transformation#setResources
 add 163b9cca6d2 [FLINK-34955] Upgrade commons-compress to 1.26.0. (#24580)

No new revisions were added by this update.

Summary of changes:
 flink-dist/src/main/resources/META-INF/NOTICE|  4 ++--
 flink-end-to-end-tests/flink-sql-client-test/pom.xml |  7 +++
 .../src/main/resources/META-INF/NOTICE   |  4 ++--
 .../flink-s3-fs-hadoop/src/main/resources/META-INF/NOTICE|  4 ++--
 .../flink-s3-fs-presto/src/main/resources/META-INF/NOTICE|  4 ++--
 .../src/main/resources/META-INF/NOTICE   |  4 +++-
 .../flink-sql-avro/src/main/resources/META-INF/NOTICE|  2 +-
 flink-python/pom.xml | 12 
 .../flink-table-planner/src/main/resources/META-INF/NOTICE   |  2 +-
 pom.xml  |  5 +++--
 10 files changed, 35 insertions(+), 13 deletions(-)



(flink) 01/02: [FLINK-28915][k8s] Kubernetes client supports s3 jar location

2024-01-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a85bcb47d853a30b78cb5f17cd53f5c033110a60
Author: Hjw <1010445...@qq.com>
AuthorDate: Wed Sep 7 19:09:03 2022 +0200

[FLINK-28915][k8s] Kubernetes client supports s3 jar location
---
 docs/content.zh/docs/deployment/config.md  |   9 ++
 .../resource-providers/native_kubernetes.md|  26 +++-
 .../resource-providers/standalone/docker.md|  28 +++-
 .../resource-providers/standalone/kubernetes.md|  10 +-
 docs/content/docs/deployment/config.md |   9 ++
 .../resource-providers/native_kubernetes.md|  25 +++-
 .../resource-providers/standalone/docker.md|  28 +++-
 .../resource-providers/standalone/kubernetes.md|  16 ++-
 .../generated/artifact_fetch_configuration.html|  24 
 .../generated/kubernetes_config_configuration.html |   6 +
 .../flink/client/cli/ArtifactFetchOptions.java |  42 ++
 .../client/program/artifact/ArtifactFetcher.java   |  38 ++
 .../client/program/artifact/ArtifactUtils.java |  71 ++
 .../artifact/FileSystemBasedArtifactFetcher.java   |  55 
 .../program/artifact/HttpArtifactFetcher.java  |  67 ++
 .../client/program/artifact/ArtifactUtilsTest.java | 146 +
 .../StandaloneApplicationClusterConfiguration.java |  11 +-
 ...plicationClusterConfigurationParserFactory.java |  14 +-
 .../StandaloneApplicationClusterEntryPoint.java|  49 ++-
 ...ationClusterConfigurationParserFactoryTest.java |  34 -
 ...StandaloneApplicationClusterEntryPointTest.java |  56 
 .../kubernetes/KubernetesClusterDescriptor.java|   4 +-
 .../configuration/KubernetesConfigOptions.java |  28 
 .../KubernetesApplicationClusterEntrypoint.java|  56 +++-
 .../decorators/InitJobManagerDecorator.java|  20 ++-
 .../parameters/KubernetesJobManagerParameters.java |   5 +
 .../apache/flink/kubernetes/utils/Constants.java   |   2 +
 .../flink/kubernetes/utils/KubernetesUtils.java|  15 +--
 .../KubernetesClusterDescriptorTest.java   |  15 ---
 ...KubernetesApplicationClusterEntrypointTest.java |  62 +
 .../factory/KubernetesJobManagerFactoryTest.java   |  55 +++-
 .../KubernetesJobManagerParametersTest.java|   8 ++
 .../runtime/entrypoint/ClusterEntrypoint.java  |   3 +-
 33 files changed, 967 insertions(+), 70 deletions(-)

diff --git a/docs/content.zh/docs/deployment/config.md 
b/docs/content.zh/docs/deployment/config.md
index cf0740bf8de..7561852e4d0 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -318,6 +318,15 @@ See the [History Server Docs]({{< ref 
"docs/deployment/advanced/historyserver" >
 
 
 
+# Artifact Fetch
+
+*Artifact Fetch* is a features that Flink will fetch user artifact stored in 
DFS or download by HTTP/HTTPS.
+Note that it is only supported in StandAlone Application Mode and Native 
Kubernetes Application Mode.
+{{< generated/artifact_fetch_configuration >}}
+
+
+
+
 # Execution
 
 {{< generated/deployment_configuration >}}
diff --git 
a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md 
b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
index fa3f965e96b..048a4e800c1 100644
--- a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
@@ -97,14 +97,36 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
+# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+
+# FileSystem
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+s3://my-bucket/my-flink-job.jar
+
+# Http/Https Schema
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+http://ip:port/my-flink-job.jar
 ```
+{{< hint info >}}
+Now, The jar artifact supports downloading from the [flink filesystem]({{< ref 
"docs/deployment/filesystems/overview" >}}) or Http/Https in Application Mode.  
+The jar package will be downloaded from filesystem to
+[user.artifacts.base.dir]({{<

(flink) 02/02: [FLINK-28915][k8s] Support fetching remote job jar and additional dependencies on Kubernetes

2024-01-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e63aa12252843d0098a56f3091b28d48aff5b5af
Author: Ferenc Csaky 
AuthorDate: Wed Jan 10 17:19:57 2024 +0100

[FLINK-28915][k8s] Support fetching remote job jar and additional 
dependencies on Kubernetes

Closes #24065.
---
 docs/content.zh/docs/deployment/config.md  |   9 +-
 .../resource-providers/native_kubernetes.md|  16 +-
 .../resource-providers/standalone/docker.md|  28 +-
 .../resource-providers/standalone/kubernetes.md|   6 +-
 docs/content/docs/deployment/config.md |   9 +-
 .../resource-providers/native_kubernetes.md|  15 +-
 .../resource-providers/standalone/docker.md|  28 +-
 .../resource-providers/standalone/kubernetes.md|   6 +-
 .../resource-providers/standalone/overview.md  |  16 +-
 .../generated/artifact_fetch_configuration.html|  18 +-
 .../generated/kubernetes_config_configuration.html |   6 -
 flink-clients/pom.xml  |   8 +
 .../flink/client/cli/ArtifactFetchOptions.java |  30 ++-
 .../program/DefaultPackagedProgramRetriever.java   |  80 --
 .../program/artifact/ArtifactFetchManager.java | 195 ++
 .../client/program/artifact/ArtifactFetcher.java   |   7 +-
 .../client/program/artifact/ArtifactUtils.java |  50 ++--
 ...ArtifactFetcher.java => FsArtifactFetcher.java} |  14 +-
 .../program/artifact/HttpArtifactFetcher.java  |  12 +-
 .../program/artifact/LocalArtifactFetcher.java |  42 +++
 .../DefaultPackagedProgramRetrieverITCase.java | 125 -
 .../program/artifact/ArtifactFetchManagerTest.java | 281 +
 .../client/program/artifact/ArtifactUtilsTest.java | 123 +
 .../client/testjar/ClasspathProviderExtension.java |  11 +
 .../StandaloneApplicationClusterConfiguration.java |  14 +-
 ...plicationClusterConfigurationParserFactory.java |  19 +-
 .../StandaloneApplicationClusterEntryPoint.java|  54 ++--
 ...ationClusterConfigurationParserFactoryTest.java |  30 ++-
 ...StandaloneApplicationClusterEntryPointTest.java |  56 
 .../flink/configuration/GlobalConfiguration.java   |   3 +-
 .../configuration/KubernetesConfigOptions.java |  28 --
 .../KubernetesApplicationClusterEntrypoint.java|  55 ++--
 .../decorators/InitJobManagerDecorator.java|  38 +--
 .../parameters/KubernetesJobManagerParameters.java |   2 +-
 ...KubernetesApplicationClusterEntrypointTest.java |   3 +-
 .../factory/KubernetesJobManagerFactoryTest.java   |  24 +-
 .../KubernetesJobManagerParametersTest.java|   2 +-
 .../flink-clients-test-utils/pom.xml   |  20 ++
 .../TestUserClassLoaderAdditionalArtifact.java |  23 +-
 39 files changed, 1000 insertions(+), 506 deletions(-)

diff --git a/docs/content.zh/docs/deployment/config.md 
b/docs/content.zh/docs/deployment/config.md
index 7561852e4d0..35b85240d03 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -318,10 +318,13 @@ See the [History Server Docs]({{< ref 
"docs/deployment/advanced/historyserver" >
 
 
 
-# Artifact Fetch
+# Artifact Fetching
+
+Flink can fetch user artifacts stored locally, on remote DFS, or accessible 
via an HTTP(S) endpoint.
+{{< hint info >}}
+**Note:** This is only supported in Standalone Application Mode and Native 
Kubernetes Application Mode.
+{{< /hint >}}
 
-*Artifact Fetch* is a features that Flink will fetch user artifact stored in 
DFS or download by HTTP/HTTPS.
-Note that it is only supported in StandAlone Application Mode and Native 
Kubernetes Application Mode.
 {{< generated/artifact_fetch_configuration >}}
 
 
diff --git 
a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md 
b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
index 048a4e800c1..4f3a537cdbc 100644
--- a/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md
@@ -107,26 +107,24 @@ $ ./bin/flink run-application \
 # FileSystem
 $ ./bin/flink run-application \
 --target kubernetes-application \
-
-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.17-SNAPSHOT.jar
 \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image=custom-image-name \
 s3://my-bucket/my-flink-job.jar
 
-# Http/Https Schema
+# HTTP(S)
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image=custom-image-name \
-http://ip:port/my-flink-job.jar
+https://ip:port/my-flink-job.jar
 ```
 {{< hint info >}}
-Now, The jar artifact supports downloading from the [flink filesystem]({{&

(flink) branch master updated (4f7725aa790 -> e63aa122528)

2024-01-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 4f7725aa790 [FLINK-33819][rocksdb] Support setting CompressionType for 
RocksDBStateBackend
 new a85bcb47d85 [FLINK-28915][k8s] Kubernetes client supports s3 jar 
location
 new e63aa122528 [FLINK-28915][k8s] Support fetching remote job jar and 
additional dependencies on Kubernetes

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/content.zh/docs/deployment/config.md  |  12 +
 .../resource-providers/native_kubernetes.md|  24 +-
 .../resource-providers/standalone/docker.md|  36 ++-
 .../resource-providers/standalone/kubernetes.md|  10 +-
 docs/content/docs/deployment/config.md |  12 +
 .../resource-providers/native_kubernetes.md|  24 +-
 .../resource-providers/standalone/docker.md|  36 ++-
 .../resource-providers/standalone/kubernetes.md|  16 +-
 .../resource-providers/standalone/overview.md  |  16 +-
 .../generated/artifact_fetch_configuration.html|  36 +++
 flink-clients/pom.xml  |   8 +
 .../flink/client/cli/ArtifactFetchOptions.java |  60 +
 .../program/DefaultPackagedProgramRetriever.java   |  80 --
 .../program/artifact/ArtifactFetchManager.java | 195 ++
 .../client/program/artifact/ArtifactFetcher.java   |  39 +++
 .../client/program/artifact/ArtifactUtils.java |  59 +
 .../client/program/artifact/FsArtifactFetcher.java |  53 
 .../program/artifact/HttpArtifactFetcher.java  |  65 +
 .../program/artifact/LocalArtifactFetcher.java |  42 +++
 .../DefaultPackagedProgramRetrieverITCase.java | 125 -
 .../program/artifact/ArtifactFetchManagerTest.java | 281 +
 .../client/program/artifact/ArtifactUtilsTest.java |  39 +++
 .../client/testjar/ClasspathProviderExtension.java |  11 +
 .../StandaloneApplicationClusterConfiguration.java |  15 +-
 ...plicationClusterConfigurationParserFactory.java |  15 +-
 .../StandaloneApplicationClusterEntryPoint.java|  33 ++-
 ...ationClusterConfigurationParserFactoryTest.java |  42 ++-
 .../flink/configuration/GlobalConfiguration.java   |   3 +-
 .../kubernetes/KubernetesClusterDescriptor.java|   4 +-
 .../KubernetesApplicationClusterEntrypoint.java|  59 -
 .../decorators/InitJobManagerDecorator.java|  18 ++
 .../parameters/KubernetesJobManagerParameters.java |   5 +
 .../apache/flink/kubernetes/utils/Constants.java   |   2 +
 .../flink/kubernetes/utils/KubernetesUtils.java|  15 +-
 .../KubernetesClusterDescriptorTest.java   |  15 --
 ...KubernetesApplicationClusterEntrypointTest.java |  61 +
 .../factory/KubernetesJobManagerFactoryTest.java   |  33 ++-
 .../KubernetesJobManagerParametersTest.java|   8 +
 .../runtime/entrypoint/ClusterEntrypoint.java  |   3 +-
 .../flink-clients-test-utils/pom.xml   |  20 ++
 .../TestUserClassLoaderAdditionalArtifact.java |  27 ++
 41 files changed, 1543 insertions(+), 114 deletions(-)
 create mode 100644 
docs/layouts/shortcodes/generated/artifact_fetch_configuration.html
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/cli/ArtifactFetchOptions.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/ArtifactFetchManager.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/ArtifactFetcher.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/ArtifactUtils.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/FsArtifactFetcher.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/HttpArtifactFetcher.java
 create mode 100644 
flink-clients/src/main/java/org/apache/flink/client/program/artifact/LocalArtifactFetcher.java
 create mode 100644 
flink-clients/src/test/java/org/apache/flink/client/program/artifact/ArtifactFetchManagerTest.java
 create mode 100644 
flink-clients/src/test/java/org/apache/flink/client/program/artifact/ArtifactUtilsTest.java
 create mode 100644 
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/entrypoint/KubernetesApplicationClusterEntrypointTest.java
 create mode 100644 
flink-test-utils-parent/flink-clients-test-utils/src/main/java/org/apache/flink/client/testjar/TestUserClassLoaderAdditionalArtifact.java



(flink-kubernetes-operator) branch main updated: [FLINK-33812] Consider override parallelism when calculating the parallelism of a job

2023-12-22 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new f01f8ec0 [FLINK-33812] Consider override parallelism when calculating 
the parallelism of a job
f01f8ec0 is described below

commit f01f8ec0dcde79b21c164143128f0c30ef1aa091
Author: Jerry Wang 
AuthorDate: Fri Dec 22 17:33:26 2023 +0800

[FLINK-33812] Consider override parallelism when calculating the 
parallelism of a job
---
 .../kubernetes/operator/config/FlinkConfigBuilder.java   | 16 
 .../operator/config/FlinkConfigBuilderTest.java  | 14 ++
 2 files changed, 30 insertions(+)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
index f99c08fc..39d0fed6 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java
@@ -68,6 +68,7 @@ import java.nio.file.Files;
 import java.time.Duration;
 import java.util.Arrays;
 import java.util.Collections;
+import java.util.Optional;
 
 import static 
org.apache.flink.configuration.DeploymentOptions.SHUTDOWN_ON_APPLICATION_FINISH;
 import static 
org.apache.flink.configuration.DeploymentOptions.SUBMIT_FAILED_JOB_ON_APPLICATION_ERROR;
@@ -375,9 +376,24 @@ public class FlinkConfigBuilder {
 * effectiveConfig.get(TaskManagerOptions.NUM_TASK_SLOTS);
 }
 
+        Optional<Integer> maxOverrideParallelism = getMaxParallelismFromOverrideConfig();
+        if (maxOverrideParallelism.isPresent() && maxOverrideParallelism.get() > 0) {
+            return maxOverrideParallelism.get();
+        }
+
         return spec.getJob().getParallelism();
     }
 
+    private Optional<Integer> getMaxParallelismFromOverrideConfig() {
+        return effectiveConfig
+                .getOptional(PipelineOptions.PARALLELISM_OVERRIDES)
+                .flatMap(
+                        overrides ->
+                                overrides.values().stream()
+                                        .map(Integer::valueOf)
+                                        .max(Integer::compareTo));
+    }
+
 protected Configuration build() {
 
 // Set cluster config
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
index 00b418eb..4c905c3f 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilderTest.java
@@ -854,6 +854,20 @@ public class FlinkConfigBuilderTest {
 5,
 configuration.get(
 StandaloneKubernetesConfigOptionsInternal.KUBERNETES_TASKMANAGER_REPLICAS));
+
+dep.getSpec()
+.getFlinkConfiguration()
+.put(PipelineOptions.PARALLELISM_OVERRIDES.key(), "vertex1:10,vertex2:20");
+configuration =
+new FlinkConfigBuilder(dep, new Configuration())
+.applyFlinkConfiguration()
+.applyTaskManagerSpec()
+.applyJobOrSessionSpec()
+.build();
+assertEquals(
+10,
+configuration.get(
+StandaloneKubernetesConfigOptionsInternal.KUBERNETES_TASKMANAGER_REPLICAS));
 }
 
 @Test
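
A minimal, self-contained sketch of the rule introduced above, assuming nothing beyond what the diff shows: if pipeline.jobvertex-parallelism-overrides is set, the largest per-vertex override (when positive) takes precedence over spec.job.parallelism. The class and variable names below are illustrative, not part of the commit.

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.PipelineOptions;

    import java.util.Optional;

    public class ParallelismOverrideSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Same override format the new test uses: vertexId:parallelism pairs.
            conf.setString(PipelineOptions.PARALLELISM_OVERRIDES.key(), "vertex1:10,vertex2:20");

            // Mirrors getMaxParallelismFromOverrideConfig() from the diff above:
            // take the maximum over all per-vertex overrides, if any are present.
            Optional<Integer> maxOverride =
                    conf.getOptional(PipelineOptions.PARALLELISM_OVERRIDES)
                            .flatMap(
                                    overrides ->
                                            overrides.values().stream()
                                                    .map(Integer::valueOf)
                                                    .max(Integer::compareTo));

            int specParallelism = 4; // stand-in for spec.getJob().getParallelism()
            int effective = maxOverride.filter(p -> p > 0).orElse(specParallelism);
            System.out.println("effective parallelism: " + effective); // prints 20
        }
    }

In standalone mode the effective parallelism is then combined with the configured task slots to derive the TaskManager replica count, which is what the updated FlinkConfigBuilderTest assertion checks.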



(flink) 03/03: [FLINK-25857] Add committer metrics to track the status of committables

2023-12-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 59c42d96062de2e289aa91a2a3a5d71a9075582a
Author: Peter Vary 
AuthorDate: Thu Oct 12 16:46:38 2023 +0200

[FLINK-25857] Add committer metrics to track the status of committables

This implementation preserves the existing API without breaking changes. 
For more context please refer to 
https://lists.apache.org/thread/h6nkgth838dlh5s90sd95zd6hlsxwx57.
Closes #23876.
---
 .../flink/api/connector/sink2/InitContext.java |  62 +++
 .../org/apache/flink/api/connector/sink2/Sink.java |  35 +---
 .../connector/sink2/TwoPhaseCommittingSink.java|  28 +++-
 .../metrics/groups/SinkCommitterMetricGroup.java   |  49 ++
 .../apache/flink/runtime/metrics/MetricNames.java  |   8 +
 .../groups/InternalSinkCommitterMetricGroup.java   |  96 +++
 .../metrics/groups/MetricsGroupTestUtils.java  |  10 +-
 .../connector/sink2/GlobalCommitterOperator.java   |  12 +-
 .../connector/sink2/GlobalCommitterSerializer.java |   9 +-
 .../api/transformations/SinkV1Adapter.java |   6 +-
 .../runtime/operators/sink/CommitterOperator.java  |  49 +-
 .../operators/sink/CommitterOperatorFactory.java   |   2 +-
 .../runtime/operators/sink/InitContextBase.java|  70 
 .../runtime/operators/sink/SinkWriterOperator.java |  56 ++-
 .../CheckpointCommittableManagerImpl.java  |  23 ++-
 .../sink/committables/CommitRequestImpl.java   |  23 ++-
 .../sink/committables/CommittableCollector.java|  31 +++-
 .../CommittableCollectorSerializer.java|  27 ++-
 .../committables/SubtaskCommittableManager.java|  26 ++-
 .../sink2/GlobalCommitterSerializerTest.java   |  14 +-
 .../api/graph/StreamingJobGraphGeneratorTest.java  |   2 +-
 .../sink/SinkV2SinkWriterOperatorTest.java |   3 +-
 .../runtime/operators/sink/TestSinkV2.java |   3 +-
 .../CheckpointCommittableManagerImplTest.java  |  14 +-
 .../CommittableCollectorSerializerTest.java|  16 +-
 .../committables/CommittableCollectorTest.java |  10 +-
 .../SubtaskCommittableManagerTest.java |  21 ++-
 .../flink/streaming/util/TestExpandingSink.java|   2 +-
 .../scheduling/SpeculativeSchedulerITCase.java |   3 +-
 .../streaming/runtime/SinkV2MetricsITCase.java | 181 +++--
 30 files changed, 723 insertions(+), 168 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/api/connector/sink2/InitContext.java
 
b/flink-core/src/main/java/org/apache/flink/api/connector/sink2/InitContext.java
new file mode 100644
index 000..ec2443aabfe
--- /dev/null
+++ 
b/flink-core/src/main/java/org/apache/flink/api/connector/sink2/InitContext.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.connector.sink2;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.JobID;
+
+import java.util.OptionalLong;
+
+/**
+ * Common interface which exposes runtime info for creating {@link SinkWriter} 
and {@link Committer}
+ * objects.
+ */
+@Internal
+public interface InitContext {
+/**
+ * The first checkpoint id when an application is started and not 
recovered from a previously
+ * taken checkpoint or savepoint.
+ */
+long INITIAL_CHECKPOINT_ID = 1;
+
+/** @return The id of task where the committer is running. */
+int getSubtaskId();
+
+/** @return The number of parallel committer tasks. */
+int getNumberOfParallelSubtasks();
+
+/**
+ * Gets the attempt number of this parallel subtask. First attempt is 
numbered 0.
+ *
+ * @return Attempt number of the subtask.
+ */
+int getAttemptNumber();
+
+/**
+ * Returns id of the restored checkpoint, if state was restored from the 
snapshot of a previous
+ * execution.
+ */
+OptionalLong getRestoredCheckpointId();
+
+/**
+ * The ID of the current job. Note that Job ID can change in particular 
upon manual restart. The
+ * returned ID should NOT be used for any job management tasks.
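
The new InitContext interface above is the surface most connector authors interact with. As a rough illustration of how a SinkWriter or Committer factory might consume it (the helper class below is made up for this example and is not part of the commit):

    import org.apache.flink.api.connector.sink2.InitContext;

    final class InitContextDescriber {
        // Summarizes the runtime info exposed by InitContext; illustrative only.
        static String describe(InitContext ctx) {
            long startingCheckpoint =
                    ctx.getRestoredCheckpointId().orElse(InitContext.INITIAL_CHECKPOINT_ID);
            return "subtask " + ctx.getSubtaskId()
                    + " of " + ctx.getNumberOfParallelSubtasks()
                    + ", attempt " + ctx.getAttemptNumber()
                    + ", starting from checkpoint " + startingCheckpoint;
        }
    }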

(flink) 01/03: Revert "[FLINK-33568] Fix NullpointerException in SinkV2MetricsITCase (#23737)"

2023-12-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit dddbeafab2c2157d60bb4fd87c7bef15ee8f11dd
Author: Peter Vary 
AuthorDate: Thu Dec 7 11:57:49 2023 +0100

Revert "[FLINK-33568] Fix NullpointerException in SinkV2MetricsITCase 
(#23737)"

This reverts commit 7092bb4cc2afaa92c02743f46f23d2606c0e3c4e.
---
 .../streaming/runtime/SinkV2MetricsITCase.java | 67 --
 1 file changed, 38 insertions(+), 29 deletions(-)

diff --git 
a/flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java
 
b/flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java
index dfcc2ad02c0..37f1f0dc214 100644
--- 
a/flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java
+++ 
b/flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SinkV2MetricsITCase.java
@@ -42,7 +42,6 @@ import 
org.apache.flink.shaded.guava31.com.google.common.collect.ImmutableMap;
 import org.junit.Rule;
 import org.junit.Test;
 
-import java.util.Arrays;
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
@@ -57,6 +56,7 @@ import static 
org.apache.flink.metrics.testutils.MetricAssertions.assertThatGaug
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.hamcrest.MatcherAssert.assertThat;
 import static org.hamcrest.Matchers.hasEntry;
+import static org.hamcrest.Matchers.hasSize;
 
 /** Tests whether all provided metrics of a {@link Sink} are of the expected 
values (FLIP-33). */
 public class SinkV2MetricsITCase extends TestLogger {
@@ -115,11 +115,11 @@ public class SinkV2MetricsITCase extends TestLogger {
 final JobID jobId = jobClient.getJobID();
 
 beforeBarrier.get().await();
-assertSinkMetrics(jobId, stopAtRecord1, numSplits);
+assertSinkMetrics(jobId, stopAtRecord1, env.getParallelism(), numSplits);
 afterBarrier.get().await();
 
 beforeBarrier.get().await();
-assertSinkMetrics(jobId, stopAtRecord2, numSplits);
+assertSinkMetrics(jobId, stopAtRecord2, env.getParallelism(), numSplits);
 afterBarrier.get().await();
 
 jobClient.getJobExecutionResult().get();
@@ -152,6 +152,7 @@ public class SinkV2MetricsITCase extends TestLogger {
 beforeLatch.get().await();
 assertSinkCommitterMetrics(
 jobId,
+env.getParallelism(),
 ImmutableMap.of(
 MetricNames.ALREADY_COMMITTED_COMMITTABLES, 0L,
 MetricNames.FAILED_COMMITTABLES, 0L,
@@ -165,6 +166,7 @@ public class SinkV2MetricsITCase extends TestLogger {
 jobClient.getJobExecutionResult().get();
 assertSinkCommitterMetrics(
 jobId,
+env.getParallelism(),
 ImmutableMap.of(
 MetricNames.ALREADY_COMMITTED_COMMITTABLES, 1L,
 MetricNames.FAILED_COMMITTABLES, 2L,
@@ -175,18 +177,18 @@ public class SinkV2MetricsITCase extends TestLogger {
 }
 
 @SuppressWarnings("checkstyle:WhitespaceAfter")
-private void assertSinkMetrics(JobID jobId, long processedRecordsPerSubtask, int numSplits) {
+private void assertSinkMetrics(
+JobID jobId, long processedRecordsPerSubtask, int parallelism, int numSplits) {
 List groups =
 reporter.findOperatorMetricGroups(
 jobId, TEST_SINK_NAME + ": " + DEFAULT_WRITER_NAME);
+assertThat(groups, hasSize(parallelism));
 
 int subtaskWithMetrics = 0;
 for (OperatorMetricGroup group : groups) {
 Map metrics = reporter.getMetricsByGroup(group);
 // There are only 2 splits assigned; so two groups will not update 
metrics.
-if (group.getIOMetricGroup() == null
-|| group.getIOMetricGroup().getNumRecordsOutCounter() == null
-|| group.getIOMetricGroup().getNumRecordsOutCounter().getCount() == 0) {
+if (group.getIOMetricGroup().getNumRecordsOutCounter().getCount() == 0) {
 continue;
 }
 subtaskWithMetrics++;
@@ -215,36 +217,43 @@ public class SinkV2MetricsITCase extends TestLogger {
 assertThat(subtaskWithMetrics, equalTo(numSplits));
 }
 
-private void assertSinkCommitterMetrics(JobID jobId, Map<String, Long> expected) {
+private void assertSinkCommitterMetrics(
+JobID jobId, int parallelism, Map<String, Long> expected) {
 List groups =
 reporter.findOperatorMetricGroups(
 jobId, TEST_SINK_NAME + ": " + DEFAULT_COMMITTER_NAME);
+assertThat(groups, hasSize(parallelism));
 
 Map aggregated = new HashMap<>(6);
 for (OperatorMetricGr

(flink) branch master updated (be5cf3c9d67 -> 59c42d96062)

2023-12-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from be5cf3c9d67 [FLINK-33757] Deleting RankJsonPlanTest.java and 
RankJsonPlanITCase.java
 new dddbeafab2c Revert "[FLINK-33568] Fix NullpointerException in 
SinkV2MetricsITCase (#23737)"
 new 55d3a39fd47 Revert "[FLINK-25857] Add committer metrics to track the 
status of committables"
 new 59c42d96062 [FLINK-25857] Add committer metrics to track the status of 
committables

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../f7a4e6fa-e7de-48c9-a61e-c13e83f0c72e   | 30 +++---
 .../base/sink/writer/AsyncSinkWriter.java  |  6 ++---
 .../base/sink/writer/ElementConverter.java |  2 +-
 .../connector/base/sink/ArrayListAsyncSink.java|  4 +--
 .../base/sink/writer/AsyncSinkWriterTest.java  |  8 +++---
 .../sink/writer/AsyncSinkWriterThrottlingTest.java |  2 +-
 .../base/sink/writer/TestElementConverter.java |  2 +-
 .../base/sink/writer/TestSinkInitContext.java  |  4 +--
 .../TestSinkInitContextAnyThreadMailbox.java   |  2 +-
 .../apache/flink/connector/file/sink/FileSink.java | 13 +-
 .../org/apache/flink/api/connector/sink/Sink.java  |  2 +-
 .../org/apache/flink/api/connector/sink2/Sink.java |  4 +--
 .../flink/api/connector/sink2/StatefulSink.java|  7 +++--
 .../connector/sink2/TwoPhaseCommittingSink.java| 21 ---
 flink-end-to-end-tests/run-nightly-tests.sh|  3 +--
 .../connectors/tests/test_elasticsearch.py |  3 ---
 .../datastream/connectors/tests/test_kafka.py  |  4 ---
 .../datastream/connectors/tests/test_kinesis.py|  3 ---
 .../datastream/connectors/tests/test_pulsar.py |  4 ---
 .../streaming/api/functions/sink/PrintSink.java|  2 +-
 .../api/functions/sink/v2/DiscardingSink.java  |  2 +-
 .../api/transformations/SinkV1Adapter.java | 13 +-
 .../runtime/operators/sink/InitContextBase.java|  5 ++--
 .../runtime/operators/sink/SinkWriterOperator.java |  9 +++
 .../operators/sink/SinkWriterStateHandler.java |  6 ++---
 .../sink/StatefulSinkWriterStateHandler.java   |  4 +--
 .../sink/StatelessSinkWriterStateHandler.java  |  4 +--
 .../sink/committables/CommittableCollector.java|  2 +-
 .../streaming/api/functions/PrintSinkTest.java |  2 +-
 .../api/graph/StreamingJobGraphGeneratorTest.java  |  2 +-
 .../sink/SinkV2SinkWriterOperatorTest.java |  3 ++-
 .../operators/sink/SinkWriterOperatorTestBase.java | 12 -
 .../runtime/operators/sink/TestSinkV2.java | 13 +-
 .../CommittableCollectorSerializerTest.java|  6 ++---
 .../flink/streaming/util/TestExpandingSink.java|  2 +-
 .../connector/upserttest/sink/UpsertTestSink.java  |  2 +-
 .../scheduling/SpeculativeSchedulerITCase.java |  2 +-
 .../streaming/runtime/SinkV2MetricsITCase.java |  2 +-
 38 files changed, 106 insertions(+), 111 deletions(-)



(flink) branch master updated: [FLINK-33295] Separate SinkV2 and SinkV1Adapter tests

2023-11-08 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 92951a05127 [FLINK-33295] Separate SinkV2 and SinkV1Adapter tests
92951a05127 is described below

commit 92951a05127f1e0e2ab0ea04ae022659fc5276ab
Author: pvary 
AuthorDate: Wed Nov 8 17:55:43 2023 +0100

[FLINK-33295] Separate SinkV2 and SinkV1Adapter tests

Co-authored-by: Peter Vary 
---
 .../base/sink/writer/TestSinkInitContext.java  |   4 +-
 .../connector/file/sink/writer/FileWriterTest.java |   8 +-
 .../groups/InternalSinkWriterMetricGroup.java  |  16 +-
 .../metrics/groups/MetricsGroupTestUtils.java  |  47 +++
 .../api/datastream/DataStreamSinkTest.java |   8 +-
 .../streaming/api/functions/PrintSinkTest.java |   5 +-
 .../SinkTransformationTranslatorITCaseBase.java| 225 +++
 ...a => SinkV1TransformationTranslatorITCase.java} | 190 +
 .../SinkV2TransformationTranslatorITCase.java  | 100 +
 ...torTest.java => CommitterOperatorTestBase.java} | 122 ++
 .../runtime/operators/sink/SinkTestUtil.java   |   4 +-
 .../sink/SinkV2CommitterOperatorTest.java  |  75 
 .../sink/SinkV2SinkWriterOperatorTest.java | 149 +++
 ...orTest.java => SinkWriterOperatorTestBase.java} | 212 --
 .../streaming/runtime/operators/sink/TestSink.java |  50 +--
 .../runtime/operators/sink/TestSinkV2.java | 434 +
 .../sink/WithAdapterCommitterOperatorTest.java |  83 
 .../sink/WithAdapterSinkWriterOperatorTest.java| 132 +++
 .../flink/test/streaming/runtime/SinkV2ITCase.java | 138 +++
 .../streaming/runtime/SinkV2MetricsITCase.java | 183 +
 20 files changed, 1725 insertions(+), 460 deletions(-)

diff --git 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/TestSinkInitContext.java
 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/TestSinkInitContext.java
index 601d4f9d427..1f70b03413c 100644
--- 
a/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/TestSinkInitContext.java
+++ 
b/flink-connectors/flink-connector-base/src/test/java/org/apache/flink/connector/base/sink/writer/TestSinkInitContext.java
@@ -28,7 +28,7 @@ import org.apache.flink.metrics.Gauge;
 import org.apache.flink.metrics.groups.OperatorIOMetricGroup;
 import org.apache.flink.metrics.groups.SinkWriterMetricGroup;
 import org.apache.flink.metrics.testutils.MetricListener;
-import org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup;
+import org.apache.flink.runtime.metrics.groups.MetricsGroupTestUtils;
 import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
 import org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor;
 import org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService;
@@ -51,7 +51,7 @@ public class TestSinkInitContext implements Sink.InitContext {
 private final OperatorIOMetricGroup operatorIOMetricGroup =
 
UnregisteredMetricGroups.createUnregisteredOperatorMetricGroup().getIOMetricGroup();
 private final SinkWriterMetricGroup metricGroup =
-InternalSinkWriterMetricGroup.mock(
+MetricsGroupTestUtils.mockWriterMetricGroup(
 metricListener.getMetricGroup(), operatorIOMetricGroup);
 private final MailboxExecutor mailboxExecutor;
 
diff --git 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/writer/FileWriterTest.java
 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/writer/FileWriterTest.java
index 0149bf3e6da..cd0dda1d978 100644
--- 
a/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/writer/FileWriterTest.java
+++ 
b/flink-connectors/flink-connector-files/src/test/java/org/apache/flink/connector/file/sink/writer/FileWriterTest.java
@@ -31,7 +31,7 @@ import org.apache.flink.metrics.Counter;
 import org.apache.flink.metrics.groups.OperatorIOMetricGroup;
 import org.apache.flink.metrics.groups.SinkWriterMetricGroup;
 import org.apache.flink.metrics.testutils.MetricListener;
-import org.apache.flink.runtime.metrics.groups.InternalSinkWriterMetricGroup;
+import org.apache.flink.runtime.metrics.groups.MetricsGroupTestUtils;
 import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
 import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
 import 
org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;
@@ -292,7 +292,7 @@ class FileWriterTest {
 final OperatorIOMetricGroup operatorIOMetricGroup =
 
UnregisteredMetricGroups.createUnregisteredOperatorMetricGroup().getIOMe

(flink-kubernetes-operator) branch main updated: [FLINK-33427] Handle new autoscaler config keys like operator configs

2023-11-02 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 81a2d993 [FLINK-33427] Handle new autoscaler config keys like operator 
configs
81a2d993 is described below

commit 81a2d993dffb6b193a582fcc0f08b28e5bb1073d
Author: Gyula Fora 
AuthorDate: Thu Nov 2 18:59:01 2023 +0100

[FLINK-33427] Handle new autoscaler config keys like operator configs
---
 .../flink/kubernetes/operator/api/spec/AbstractFlinkSpec.java | 1 +
 .../flink/kubernetes/operator/config/FlinkConfigManager.java  | 4 +++-
 .../flink/kubernetes/operator/config/FlinkConfigManagerTest.java  | 8 
 .../flink/kubernetes/operator/reconciler/diff/SpecDiffTest.java   | 5 -
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/spec/AbstractFlinkSpec.java
 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/spec/AbstractFlinkSpec.java
index 3646f2ef..d3ed1d70 100644
--- 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/spec/AbstractFlinkSpec.java
+++ 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/spec/AbstractFlinkSpec.java
@@ -48,6 +48,7 @@ public abstract class AbstractFlinkSpec implements 
Diffable {
 
 /** Flink configuration overrides for the Flink deployment or Flink 
session job. */
 @SpecDiff.Config({
+@SpecDiff.Entry(prefix = "job.autoscaler", type = DiffType.IGNORE),
 @SpecDiff.Entry(prefix = "parallelism.default", type = 
DiffType.IGNORE),
 @SpecDiff.Entry(prefix = "kubernetes.operator", type = 
DiffType.IGNORE),
 @SpecDiff.Entry(
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java
index df7d7ee6..0bca875a 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManager.java
@@ -19,6 +19,7 @@
 package org.apache.flink.kubernetes.operator.config;
 
 import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.autoscaler.config.AutoScalerOptions;
 import org.apache.flink.configuration.ConfigOption;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.GlobalConfiguration;
@@ -245,7 +246,8 @@ public class FlinkConfigManager {
 spec.getFlinkConfiguration()
 .forEach(
 (k, v) -> {
-if (k.startsWith(K8S_OP_CONF_PREFIX)) {
+if (k.startsWith(K8S_OP_CONF_PREFIX)
+|| k.startsWith(AutoScalerOptions.AUTOSCALER_CONF_PREFIX)) {
 conf.setString(k, v);
 }
 });
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManagerTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManagerTest.java
index 47bf78ed..3ec8fa40 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManagerTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/config/FlinkConfigManagerTest.java
@@ -17,6 +17,7 @@
 
 package org.apache.flink.kubernetes.operator.config;
 
+import org.apache.flink.autoscaler.config.AutoScalerOptions;
 import org.apache.flink.configuration.ConfigOption;
 import org.apache.flink.configuration.ConfigOptions;
 import org.apache.flink.configuration.Configuration;
@@ -80,6 +81,10 @@ public class FlinkConfigManagerTest {
 
 deployment.getSpec().getFlinkConfiguration().put(testConf.key(), 
"latest");
 deployment.getSpec().getFlinkConfiguration().put(opTestConf.key(), 
"latest");
+deployment
+.getSpec()
+.getFlinkConfiguration()
+.put(AutoScalerOptions.METRICS_WINDOW.key(), "1234m");
 
 assertEquals(
 "latest",
@@ -93,6 +98,9 @@ public class FlinkConfigManagerTest {
 .get(opTestConf));
 assertEquals("reconciled", 
configManager.getObserveConfig(deployment).get(testConf));
 assertEquals("latest", 
configManager.getObserveConfig(deployment).get(opTestConf));
+assertEquals(
+ 
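
The essence of the change is the prefix filter shown in FlinkConfigManager above: keys from the resource's flinkConfiguration are copied into the operator-side config only when they carry the operator or autoscaler prefix. A standalone sketch of that rule, assuming the two constants resolve to the literal prefixes "kubernetes.operator." and "job.autoscaler." (the sample keys are illustrative):

    import org.apache.flink.configuration.Configuration;

    import java.util.Map;

    public class OperatorConfigPrefixSketch {
        // Stand-ins for K8S_OP_CONF_PREFIX and AutoScalerOptions.AUTOSCALER_CONF_PREFIX.
        static Configuration operatorVisibleOverrides(Map<String, String> specFlinkConfiguration) {
            Configuration conf = new Configuration();
            specFlinkConfiguration.forEach(
                    (k, v) -> {
                        if (k.startsWith("kubernetes.operator.") || k.startsWith("job.autoscaler.")) {
                            conf.setString(k, v);
                        }
                    });
            return conf;
        }

        public static void main(String[] args) {
            Configuration conf =
                    operatorVisibleOverrides(
                            Map.of(
                                    "job.autoscaler.metrics.window", "1234m",
                                    "taskmanager.numberOfTaskSlots", "4"));
            // Only the autoscaler key survives; regular Flink options are handled elsewhere.
            System.out.println(conf.keySet());
        }
    }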

(flink-kubernetes-operator) branch release-1.6 updated: [release] Update the doc version to 1.6.1

2023-10-27 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
 new 6458c7ff [release] Update the doc version to 1.6.1
6458c7ff is described below

commit 6458c7ffc6d30e9b231932dad634a39d450916c7
Author: Rui Fan <1996fan...@gmail.com>
AuthorDate: Fri Oct 27 23:48:01 2023 +0800

[release] Update the doc version to 1.6.1
---
 docs/config.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/config.toml b/docs/config.toml
index 4ee62dca..66c7cb83 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -34,7 +34,7 @@ pygmentsUseClasses = true
   # we change the version for the complete docs when forking of a release 
branch
   # etc.
   # The full version string as referenced in Maven (e.g. 1.2.1)
-  Version = "1.6.0"
+  Version = "1.6.1"
 
   # For stable releases, leave the bugfix version out (e.g. 1.2). For snapshot
   # release this should be the same as the regular version



[flink] branch master updated (0c3ccbf8a0d -> 2da9a963921)

2023-10-15 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0c3ccbf8a0d [hotfix] Fix predicates test imports
 add 2da9a963921 [FLINK-33030][python] Add python 3.11 support

No new revisions were added by this update.

Summary of changes:
 .../content/docs/dev/python/datastream_tutorial.md |   2 +-
 docs/content/docs/dev/python/installation.md   |   4 +-
 docs/content/docs/dev/python/table_api_tutorial.md |   2 +-
 docs/content/docs/flinkDev/building.md |   4 +-
 flink-python/README.md |   4 +-
 flink-python/apache-flink-libraries/setup.py   |   3 +-
 flink-python/dev/build-wheels.sh   |   2 +-
 flink-python/dev/dev-requirements.txt  |   6 +-
 flink-python/dev/lint-python.sh|  12 +-
 flink-python/pom.xml   |   2 +-
 .../pyflink/fn_execution/flink_fn_execution_pb2.py | 402 +--
 .../fn_execution/flink_fn_execution_pb2.pyi| 566 +
 .../tests/test_flink_fn_execution_pb2.py   |  18 +-
 flink-python/pyflink/gen_protos.py |  13 +-
 flink-python/pyproject.toml|   2 +-
 flink-python/setup.py  |   5 +-
 flink-python/src/main/resources/META-INF/NOTICE|   2 +-
 flink-python/tox.ini   |   4 +-
 tools/releasing/create_binary_release.sh   |   4 +-
 19 files changed, 615 insertions(+), 442 deletions(-)
 create mode 100644 flink-python/pyflink/fn_execution/flink_fn_execution_pb2.pyi



[flink-kubernetes-operator] branch main updated: [FLINK-33066] Support all k8s methods to configure env variable in operatorPod (#671)

2023-09-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 1053e269 [FLINK-33066] Support all k8s methods to configure env 
variable in operatorPod (#671)
1053e269 is described below

commit 1053e2698eec7a09a6acb65444b9ad25dd3d8d2e
Author: dongwoo kim 
AuthorDate: Thu Sep 14 13:45:58 2023 +0800

[FLINK-33066] Support all k8s methods to configure env variable in 
operatorPod (#671)

Co-authored-by: dongwoo6.kim 
---
 docs/content/docs/operations/helm.md |  1 +
 helm/flink-kubernetes-operator/templates/flink-operator.yaml | 11 +++
 helm/flink-kubernetes-operator/values.yaml   |  8 
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/docs/content/docs/operations/helm.md 
b/docs/content/docs/operations/helm.md
index fecb4891..bb84a2b5 100644
--- a/docs/content/docs/operations/helm.md
+++ b/docs/content/docs/operations/helm.md
@@ -71,6 +71,7 @@ The configurable parameters of the Helm chart and which default values as detail
 | operatorPod.annotations  | Custom annotations to be added to the operator pod (but not the deployment). | [...]
 | operatorPod.labels       | Custom labels to be added to the operator pod (but not the deployment).      | [...]
 | operatorPod.env          | Custom env to be added to the operator pod.                                  | [...]
+| operatorPod.envFrom      | Custom envFrom settings to be added to the operator pod.                     | [...]
 | operatorPod.dnsPolicy    | DNS policy to be used by the operator pod.                                   | [...]
 | operatorPod.dnsConfig    | DNS configuration to be used by the operator pod.                            | [...]
 | operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod.                         | [...]
diff --git a/helm/flink-kubernetes-operator/templates/flink-operator.yaml 
b/helm/flink-kubernetes-operator/templates/flink-operator.yaml
index 792087b1..0349612b 100644
--- a/helm/flink-kubernetes-operator/templates/flink

[flink] branch master updated (c23a3002b83 -> de01f021bdf)

2023-08-29 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from c23a3002b83 [FLINK-30863][state/changelog] Register local recovery 
files of changelog before notifyCheckpointComplete()
 add de01f021bdf [FLINK-32811][runtime] Add port range support for 
"taskmanager.data.bind-port"

No new revisions were added by this update.

Summary of changes:
 .../generated/all_taskmanager_section.html |  6 ++
 .../generated/common_host_port_section.html|  6 ++
 .../netty_shuffle_environment_configuration.html   |  4 +-
 .../NettyShuffleEnvironmentOptions.java| 12 +++-
 .../main/java/org/apache/flink/util/NetUtils.java  | 18 --
 .../java/org/apache/flink/util/NetUtilsTest.java   | 34 +++
 .../runtime/io/network/netty/NettyClient.java  |  6 +-
 .../runtime/io/network/netty/NettyConfig.java  | 25 +---
 .../runtime/io/network/netty/NettyServer.java  | 45 --
 .../runtime/taskexecutor/TaskManagerServices.java  |  4 ++
 .../NettyShuffleEnvironmentConfiguration.java  | 45 +++---
 .../org/apache/flink/runtime/util/PortRange.java   | 61 +++
 .../io/network/netty/NettyClientServerSslTest.java | 30 ++---
 .../netty/NettyServerFromPortRangeTest.java| 71 ++
 .../runtime/io/network/netty/NettyTestUtil.java| 23 ++-
 .../util/{HardwareTest.java => PortRangeTest.java} | 34 +--
 16 files changed, 326 insertions(+), 98 deletions(-)
 create mode 100644 
flink-runtime/src/main/java/org/apache/flink/runtime/util/PortRange.java
 create mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/netty/NettyServerFromPortRangeTest.java
 copy 
flink-runtime/src/test/java/org/apache/flink/runtime/util/{HardwareTest.java => 
PortRangeTest.java} (56%)
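
In practice this means taskmanager.data.bind-port now accepts a port range, so the TaskManager can pick a free port from the range instead of failing on a single occupied port. A tiny sketch, with made-up port values and the key set as a plain string to avoid relying on option class internals:

    import org.apache.flink.configuration.Configuration;

    public class DataBindPortRangeSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Before FLINK-32811 only a single port was accepted here.
            conf.setString("taskmanager.data.bind-port", "50100");
            // With the change, a range such as "50100-50200" is also valid.
            conf.setString("taskmanager.data.bind-port", "50100-50200");
            System.out.println(conf.getString("taskmanager.data.bind-port", "0"));
        }
    }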



[flink] branch master updated (171524fec9a -> 63a4db2f455)

2023-08-28 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 171524fec9a [FLINK-32971][python] Add proper development version 
support in pyflink
 add 63a4db2f455 [FLINK-32981][python] Add python dynamic Flink home 
detection

No new revisions were added by this update.

Summary of changes:
 flink-python/apache-flink-libraries/setup.py | 8 ++--
 flink-python/setup.py| 9 +++--
 2 files changed, 13 insertions(+), 4 deletions(-)



[flink] branch master updated (ab41e4f4f14 -> 171524fec9a)

2023-08-28 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from ab41e4f4f14 [FLINK-32963][test] Remove flakiness from 
testKeyedMapStateStateMigration (#23298)
 add 171524fec9a [FLINK-32971][python] Add proper development version 
support in pyflink

No new revisions were added by this update.

Summary of changes:
 .gitignore   | 4 ++--
 flink-python/apache-flink-libraries/setup.py | 3 ++-
 flink-python/setup.py| 5 +++--
 3 files changed, 7 insertions(+), 5 deletions(-)



[flink] branch 1.16 created (now bc4c21e4704)

2023-07-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch 1.16
in repository https://gitbox.apache.org/repos/asf/flink.git


  at bc4c21e4704 [FLINK-32049][checkpoint] Fix thread-safe bug of channel 
state executor when some subtasks are closed while other subtasks are starting

No new revisions were added by this update.



[flink] branch master updated (d90a72da2fd -> 74572519768)

2023-06-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from d90a72da2fd [FLINK-32226][client] Cleanup jobgraph file on submission 
failure
 add 74572519768 [docs] Deduplicate mistakenly repeated words

No new revisions were added by this update.

Summary of changes:
 docs/content/docs/dev/table/data_stream_api.md| 2 +-
 docs/layouts/shortcodes/github_repo.html  | 4 ++--
 docs/layouts/shortcodes/version.html  | 4 ++--
 .../org/apache/flink/connector/file/src/reader/FileRecordFormat.java  | 4 ++--
 .../flink/streaming/connectors/kafka/KafkaProducerTestBase.java   | 2 +-
 .../src/main/java/org/apache/hadoop/hive/conf/HiveConf.java   | 2 +-
 .../flink/runtime/deployment/InputGateDeploymentDescriptor.java   | 2 +-
 .../resourcemanager/slotmanager/FineGrainedSlotManagerTest.java   | 2 +-
 .../org/apache/flink/runtime/state/AsyncSnapshotCallableTest.java | 2 +-
 .../org/apache/flink/table/planner/plan/utils/TemporalJoinUtil.scala  | 2 +-
 .../flink/table/runtime/operators/window/slicing/SliceAssigner.java   | 4 ++--
 11 files changed, 15 insertions(+), 15 deletions(-)



[flink] branch release-1.16 updated (141b47a8009 -> 6a31cc4f766)

2023-05-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git


from 141b47a8009 [FLINK-30844][runtime] Increased TASK_CANCELLATION_TIMEOUT 
for testInterruptibleSharedLockInInvokeAndCancel (#22657)
 add 6a31cc4f766 [FLINK-32174][docs] Update Cloudera related product under 
vendor solutions

No new revisions were added by this update.

Summary of changes:
 docs/content.zh/docs/deployment/overview.md | 11 ++-
 docs/content/docs/deployment/overview.md| 11 ++-
 2 files changed, 4 insertions(+), 18 deletions(-)



[flink] branch release-1.17 updated: [FLINK-32174][docs] Update Cloudera related product under vendor solutions

2023-05-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new fda49dd6aad [FLINK-32174][docs] Update Cloudera related product under 
vendor solutions
fda49dd6aad is described below

commit fda49dd6aad198f8200c27b603e4da8c758dedb7
Author: Ferenc Csaky 
AuthorDate: Thu May 25 20:00:53 2023 +0200

[FLINK-32174][docs] Update Cloudera related product under vendor solutions
---
 docs/content.zh/docs/deployment/overview.md | 11 ++-
 docs/content/docs/deployment/overview.md| 11 ++-
 2 files changed, 4 insertions(+), 18 deletions(-)

diff --git a/docs/content.zh/docs/deployment/overview.md 
b/docs/content.zh/docs/deployment/overview.md
index 20a137459c7..f07fdbcc1f5 100644
--- a/docs/content.zh/docs/deployment/overview.md
+++ b/docs/content.zh/docs/deployment/overview.md
@@ -289,9 +289,9 @@ Supported Environments:
 Supported Environments:
 {{< label AWS >}}
 
- Cloudera DataFlow
+ Cloudera Stream Processing
 
-[Website](https://www.cloudera.com/products/cdf.html)
+[Website](https://www.cloudera.com/products/stream-processing.html)
 
 Supported Environment:
 {{< label AWS >}}
@@ -299,13 +299,6 @@ Supported Environment:
 {{< label Google Cloud >}}
 {{< label On-Premise >}}
 
- Eventador
-
-[Website](https://eventador.io)
-
-Supported Environment:
-{{< label AWS >}}
-
  Huawei Cloud Stream Service
 
 [Website](https://www.huaweicloud.com/intl/zh-cn/product/cs.html)
diff --git a/docs/content/docs/deployment/overview.md 
b/docs/content/docs/deployment/overview.md
index a1cc8669043..15e90fce97c 100644
--- a/docs/content/docs/deployment/overview.md
+++ b/docs/content/docs/deployment/overview.md
@@ -295,9 +295,9 @@ Supported Environments:
 Supported Environments:
 {{< label AWS >}}
 
- Cloudera DataFlow
+ Cloudera Stream Processing
 
-[Website](https://www.cloudera.com/products/cdf.html)
+[Website](https://www.cloudera.com/products/stream-processing.html)
 
 Supported Environment:
 {{< label AWS >}}
@@ -305,13 +305,6 @@ Supported Environment:
 {{< label Google Cloud >}}
 {{< label On-Premise >}}
 
- Eventador
-
-[Website](https://eventador.io)
-
-Supported Environment:
-{{< label AWS >}}
-
  Huawei Cloud Stream Service
 
 [Website](https://www.huaweicloud.com/intl/en-us/product/cs.html)



[flink] branch master updated: [FLINK-32174][docs] Update Cloudera related product under vendor solutions

2023-05-25 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new a4de89450e1 [FLINK-32174][docs] Update Cloudera related product under 
vendor solutions
a4de89450e1 is described below

commit a4de89450e114bc611fd5dcb90ec30221fa77baf
Author: Ferenc Csaky 
AuthorDate: Thu May 25 19:59:32 2023 +0200

[FLINK-32174][docs] Update Cloudera related product under vendor solutions
---
 docs/content.zh/docs/deployment/overview.md | 11 ++-
 docs/content/docs/deployment/overview.md| 11 ++-
 2 files changed, 4 insertions(+), 18 deletions(-)

diff --git a/docs/content.zh/docs/deployment/overview.md 
b/docs/content.zh/docs/deployment/overview.md
index 20a137459c7..f07fdbcc1f5 100644
--- a/docs/content.zh/docs/deployment/overview.md
+++ b/docs/content.zh/docs/deployment/overview.md
@@ -289,9 +289,9 @@ Supported Environments:
 Supported Environments:
 {{< label AWS >}}
 
- Cloudera DataFlow
+ Cloudera Stream Processing
 
-[Website](https://www.cloudera.com/products/cdf.html)
+[Website](https://www.cloudera.com/products/stream-processing.html)
 
 Supported Environment:
 {{< label AWS >}}
@@ -299,13 +299,6 @@ Supported Environment:
 {{< label Google Cloud >}}
 {{< label On-Premise >}}
 
- Eventador
-
-[Website](https://eventador.io)
-
-Supported Environment:
-{{< label AWS >}}
-
  Huawei Cloud Stream Service
 
 [Website](https://www.huaweicloud.com/intl/zh-cn/product/cs.html)
diff --git a/docs/content/docs/deployment/overview.md 
b/docs/content/docs/deployment/overview.md
index 11e07c8d60e..6548c8f9cef 100644
--- a/docs/content/docs/deployment/overview.md
+++ b/docs/content/docs/deployment/overview.md
@@ -295,9 +295,9 @@ Supported Environments:
 Supported Environments:
 {{< label AWS >}}
 
- Cloudera DataFlow
+ Cloudera Stream Processing
 
-[Website](https://www.cloudera.com/products/cdf.html)
+[Website](https://www.cloudera.com/products/stream-processing.html)
 
 Supported Environment:
 {{< label AWS >}}
@@ -305,13 +305,6 @@ Supported Environment:
 {{< label Google Cloud >}}
 {{< label On-Premise >}}
 
- Eventador
-
-[Website](https://eventador.io)
-
-Supported Environment:
-{{< label AWS >}}
-
  Huawei Cloud Stream Service
 
 [Website](https://www.huaweicloud.com/intl/en-us/product/cs.html)



[flink-kubernetes-operator] branch main updated (f23e38d6 -> 31c73993)

2023-05-09 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from f23e38d6 [FLINK-31815] Bump jackson version to eliminate snakeyaml 
vulnerability
 add 31c73993 [hotfix] Update CRD compat check to 1.4.0

No new revisions were added by this update.

Summary of changes:
 flink-kubernetes-operator-api/pom.xml | 14 ++
 1 file changed, 2 insertions(+), 12 deletions(-)



[flink] branch release-1.17 updated: [FLINK-31839][filesystems] Fix flink-s3-fs-hadoop and flink-s3-fs-presto plugin collision

2023-04-19 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new af4c68ab8c9 [FLINK-31839][filesystems] Fix flink-s3-fs-hadoop and 
flink-s3-fs-presto plugin collision
af4c68ab8c9 is described below

commit af4c68ab8c910ffc4e02ec3c0b07fe727d867303
Author: Gabor Somogyi 
AuthorDate: Wed Apr 19 18:49:15 2023 +0200

[FLINK-31839][filesystems] Fix flink-s3-fs-hadoop and flink-s3-fs-presto 
plugin collision

These collided with regards to their name of delegation token service.
---
 .../security/security-delegation-token.md  | 21 -
 .../fs/s3/common/AbstractS3FileSystemFactory.java  |  4 +--
 ...java => AbstractS3DelegationTokenProvider.java} | 10 ++-
 ...java => AbstractS3DelegationTokenReceiver.java} | 12 +++-
 .../DynamicTemporaryAWSCredentialsProvider.java|  2 +-
 ... => AbstractS3DelegationTokenProviderTest.java} | 23 ++
 ... => AbstractS3DelegationTokenReceiverTest.java} | 35 ++
 ...DynamicTemporaryAWSCredentialsProviderTest.java | 12 ++--
 .../token/S3HadoopDelegationTokenProvider.java | 31 +++
 .../token/S3HadoopDelegationTokenReceiver.java | 31 +++
 ...ink.core.security.token.DelegationTokenProvider |  2 +-
 ...ink.core.security.token.DelegationTokenReceiver |  2 +-
 .../token/S3PrestoDelegationTokenProvider.java | 31 +++
 .../token/S3PrestoDelegationTokenReceiver.java | 31 +++
 ...ink.core.security.token.DelegationTokenProvider |  2 +-
 ...ink.core.security.token.DelegationTokenReceiver |  2 +-
 .../token/DefaultDelegationTokenManager.java   | 26 ++--
 .../token/DefaultDelegationTokenManagerTest.java   | 24 +++
 18 files changed, 248 insertions(+), 53 deletions(-)

diff --git a/docs/content/docs/deployment/security/security-delegation-token.md 
b/docs/content/docs/deployment/security/security-delegation-token.md
index 16ce2228783..a7f1876d595 100644
--- a/docs/content/docs/deployment/security/security-delegation-token.md
+++ b/docs/content/docs/deployment/security/security-delegation-token.md
@@ -213,23 +213,30 @@ loaded and then will be overwritten by the loading 
mechanism in Flink.
 
 There are certain limitations to bear in mind when talking about DTs.
 
-Firstly, not all DTs actually expose their renewal period. This is a service 
configuration that is 
+* Not all DTs actually expose their renewal period. This is a service 
configuration that is 
 not generally exposed to clients. For this reason, certain DT providers cannot 
provide a renewal period, 
 thus requiring that the service's configuration is in some way synchronized 
with another service
-that does provide that information.
-
+that does provide that information.  
 The HDFS service, which is generally available when DTs are needed in the 
first place, provides
 this information, so in general it's a good idea for all services using DTs to 
use the same
 configuration as HDFS for the renewal period.
 
-Secondly, Flink is not parsing the user application code, so it doesn't know 
which delegation
+* Flink is not parsing the user application code, so it doesn't know which 
delegation
 tokens will be needed. This means that Flink will try to get as many 
delegation tokens as is possible
 based on the configuration available. That means that if an HBase token 
provider is enabled but the app
 doesn't actually use HBase, a DT will still be generated. The user would have 
to explicitly
 disable the mentioned provider in that case.
 
-Thirdly, it is challenging to create DTs "on demand". Flink 
obtains/distributes tokens upfront
-and re-obtains/re-distributes them periodically.
-
+* It is challenging to create DTs "on demand". Flink obtains/distributes 
tokens upfront
+and re-obtains/re-distributes them periodically.  
 The advantage, though, is that user code does not need to worry about DTs, 
since Flink will handle
 them transparently when the proper configuration is available.
+
+* There are external file system plugins which are authenticating to the same 
service. One good example
+is `s3-hadoop` and `s3-presto`. They both authenticate to S3. They're having 
different service names
+but obtaining tokens for the same service which might cause unintended 
consequences. Since they're
+obtaining tokens for the same service they store these tokens at the same 
place. It's easy to see that
+if they're used together with the same credentials then there will be no 
issues since the tokens are
+going to be overwritten by each other in a single-threaded way (which belongs 
to a single user).
+However, if the plugins are configured with different user credentials then 
the token which w

[flink] branch master updated (15c4d88eb78 -> 839b3b9005c)

2023-04-19 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 15c4d88eb78 [FLINK-31834][Azure] Free up disk space before caching
 add 839b3b9005c [FLINK-31839][filesystems] Fix flink-s3-fs-hadoop and 
flink-s3-fs-presto plugin collision

No new revisions were added by this update.

Summary of changes:
 .../security/security-delegation-token.md  | 21 -
 .../fs/s3/common/AbstractS3FileSystemFactory.java  |  4 +--
 ...java => AbstractS3DelegationTokenProvider.java} | 10 ++-
 ...java => AbstractS3DelegationTokenReceiver.java} | 12 +++-
 .../DynamicTemporaryAWSCredentialsProvider.java|  2 +-
 ... => AbstractS3DelegationTokenProviderTest.java} | 23 ++
 ... => AbstractS3DelegationTokenReceiverTest.java} | 35 ++
 ...DynamicTemporaryAWSCredentialsProviderTest.java | 12 ++--
 .../token/S3HadoopDelegationTokenProvider.java | 31 +++
 .../token/S3HadoopDelegationTokenReceiver.java | 31 +++
 ...ink.core.security.token.DelegationTokenProvider |  2 +-
 ...ink.core.security.token.DelegationTokenReceiver |  2 +-
 .../token/S3PrestoDelegationTokenProvider.java | 31 +++
 .../token/S3PrestoDelegationTokenReceiver.java | 31 +++
 ...ink.core.security.token.DelegationTokenProvider |  2 +-
 ...ink.core.security.token.DelegationTokenReceiver |  2 +-
 .../token/DefaultDelegationTokenManager.java   | 26 ++--
 .../token/DefaultDelegationTokenManagerTest.java   | 24 +++
 18 files changed, 248 insertions(+), 53 deletions(-)
 rename 
flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/token/{S3DelegationTokenProvider.java
 => AbstractS3DelegationTokenProvider.java} (94%)
 rename 
flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/token/{S3DelegationTokenReceiver.java
 => AbstractS3DelegationTokenReceiver.java} (91%)
 rename 
flink-filesystems/flink-s3-fs-base/src/test/java/org/apache/flink/fs/s3/common/token/{S3DelegationTokenProviderTest.java
 => AbstractS3DelegationTokenProviderTest.java} (78%)
 rename 
flink-filesystems/flink-s3-fs-base/src/test/java/org/apache/flink/fs/s3/common/token/{S3DelegationTokenReceiverTest.java
 => AbstractS3DelegationTokenReceiverTest.java} (75%)
 create mode 100644 
flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/token/S3HadoopDelegationTokenProvider.java
 create mode 100644 
flink-filesystems/flink-s3-fs-hadoop/src/main/java/org/apache/flink/fs/s3hadoop/token/S3HadoopDelegationTokenReceiver.java
 copy flink-filesystems/{flink-s3-fs-base => 
flink-s3-fs-hadoop}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenProvider
 (92%)
 copy flink-filesystems/{flink-s3-fs-base => 
flink-s3-fs-hadoop}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenReceiver
 (92%)
 create mode 100644 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/token/S3PrestoDelegationTokenProvider.java
 create mode 100644 
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/flink/fs/s3presto/token/S3PrestoDelegationTokenReceiver.java
 rename flink-filesystems/{flink-s3-fs-base => 
flink-s3-fs-presto}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenProvider
 (92%)
 rename flink-filesystems/{flink-s3-fs-base => 
flink-s3-fs-presto}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenReceiver
 (92%)



[flink-kubernetes-operator] branch main updated: [FLINK-31716] Improve Event UID field handling

2023-04-19 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new a5aca7c1 [FLINK-31716] Improve Event UID field handling
a5aca7c1 is described below

commit a5aca7c13e7b822fc82b79e8d6d76eb6f15a
Author: Rodrigo 
AuthorDate: Wed Apr 19 01:03:52 2023 -0700

[FLINK-31716] Improve Event UID field handling
---
 .../flink/kubernetes/operator/utils/EventUtils.java   |  8 ++--
 .../controller/FlinkDeploymentControllerTest.java | 19 +++
 .../kubernetes/operator/utils/EventUtilsTest.java |  7 ---
 3 files changed, 21 insertions(+), 13 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
index 6769ce93..1e374127 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
@@ -76,8 +76,7 @@ public class EventUtils {
 existing.setLastTimestamp(Instant.now().toString());
 existing.setCount(existing.getCount() + 1);
 existing.setMessage(message);
-client.resource(existing).createOrReplace();
-eventListener.accept(existing);
+eventListener.accept(client.resource(existing).createOrReplace());
 return false;
 } else {
 var event =
@@ -104,10 +103,7 @@ public class EventUtils {
 .withNamespace(target.getMetadata().getNamespace())
 .endMetadata()
 .build();
-
-var ev = client.resource(event).createOrReplace();
-event.getMetadata().setUid(ev.getMetadata().getUid());
-eventListener.accept(event);
+eventListener.accept(client.resource(event).createOrReplace());
 return true;
 }
 }
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
index 2103b182..541da7aa 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentControllerTest.java
@@ -41,6 +41,7 @@ import 
org.apache.flink.kubernetes.operator.utils.EventRecorder;
 import org.apache.flink.kubernetes.operator.utils.IngressUtils;
 import org.apache.flink.runtime.client.JobStatusMessage;
 
+import io.fabric8.kubernetes.api.model.Event;
 import io.fabric8.kubernetes.api.model.EventBuilder;
 import io.fabric8.kubernetes.api.model.HasMetadata;
 import io.fabric8.kubernetes.api.model.networking.v1.Ingress;
@@ -85,6 +86,15 @@ public class FlinkDeploymentControllerTest {
 private KubernetesMockServer mockServer;
 private KubernetesClient kubernetesClient;
 
+Event mockedEvent =
+new EventBuilder()
+.withNewMetadata()
+.withName("name")
+.endMetadata()
+.withType("type")
+.withReason("reason")
+.build();
+
 @BeforeEach
 public void setup() {
 flinkService = new TestingFlinkService(kubernetesClient);
@@ -217,9 +227,10 @@ public class FlinkDeploymentControllerTest {
 
 @Test
 public void verifyFailedDeployment() throws Exception {
+
 var submittedEventValidatingResponseProvider =
 new TestUtils.ValidatingResponseProvider<>(
-new 
EventBuilder().withNewMetadata().endMetadata().build(),
+mockedEvent,
 r ->
 assertTrue(
 r.getBody()
@@ -236,7 +247,7 @@ public class FlinkDeploymentControllerTest {
 
 var validatingResponseProvider =
 new TestUtils.ValidatingResponseProvider<>(
-new 
EventBuilder().withNewMetadata().endMetadata().build(),
+mockedEvent,
 r ->
 assertTrue(
 r.getBody()
@@ -301,7 +312,7 @@ public class FlinkDeploymentControllerTest {
 
 var submittedEventValidatingResponseProvider =
 new TestUtils.ValidatingResponseProvider<>(
-new 
EventBuilder().withNewMetadata().endMe

[flink-kubernetes-operator] branch main updated: [FLINK-31713] Expose FlinkDeployment version metrics

2023-04-10 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 8d7758c4 [FLINK-31713] Expose FlinkDeployment version metrics
8d7758c4 is described below

commit 8d7758c420439a054e9977a8342215b3b8f2670c
Author: Máté Czagány <4469996+mateczag...@users.noreply.github.com>
AuthorDate: Mon Apr 10 13:25:02 2023 +0200

[FLINK-31713] Expose FlinkDeployment version metrics
---
 docs/content/docs/operations/metrics-logging.md|  2 +
 .../operator/metrics/FlinkDeploymentMetrics.java   | 50 +++--
 .../metrics/FlinkDeploymentMetricsTest.java| 85 ++
 3 files changed, 131 insertions(+), 6 deletions(-)

diff --git a/docs/content/docs/operations/metrics-logging.md 
b/docs/content/docs/operations/metrics-logging.md
index 60081bd7..8683a366 100644
--- a/docs/content/docs/operations/metrics-logging.md
+++ b/docs/content/docs/operations/metrics-logging.md
@@ -38,7 +38,9 @@ The Operator gathers aggregates metrics about managed resources.
 | Scope            | Metrics                                                                        | Description                                                                                                                                                                             | Type      |
 ||---||---|
 | Namespace        | FlinkDeployment/FlinkSessionJob.Count                                          | Number of managed resources per namespace                                                                                                                                               | Gauge     |
+| Namespace        | FlinkDeployment.ResourceUsage.Cpu/Memory                                       | Total resources used per namespace                                                                                                                                                      | Gauge     |
 | Namespace        | FlinkDeployment.JmDeploymentStatus.<Status>.Count                              | Number of managed FlinkDeployment resources per <Status> per namespace. <Status> can take values from: READY, DEPLOYED_NOT_READY, DEPLOYING, MISSING, ERROR                             | Gauge     |
+| Namespace        | FlinkDeployment.FlinkVersion.<FlinkVersion>.Count                              | Number of managed FlinkDeployment resources per <FlinkVersion> per namespace. <FlinkVersion> is retrieved via REST API from Flink JM.                                                   | Gauge     |
 | Namespace        | FlinkDeployment/FlinkSessionJob.Lifecycle.State.<State>.Count                  | Number of managed resources currently in state <State> per namespace. <State> can take values from: CREATED, SUSPENDED, UPGRADING, DEPLOYED, STABLE, ROLLING_BACK, ROLLED_BACK, FAILED | Gauge     |
 | System/Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.State.<State>.TimeSeconds            | Time spent in state <State> for a given resource. <State> can take values from: CREATED, SUSPENDED, UPGRADING, DEPLOYED, STABLE, ROLLING_BACK, ROLLED_BACK, FAILED                      | Histogram |
 | System/Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.Transition.<Transition>.TimeSeconds  | Time statistics for selected lifecycle state transitions. <Transition> can take values from: Resume, Upgrade, Suspend, Stabilization, Rollback, Submission                              | Histogram |
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
index 36c09d95..a5871544 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
@@ -21,6 +21,8 @@ import org.apache.flink.configuration.Configuration;
 import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
 import 
org.apache.flink.kubernetes.operator.api.status.JobManagerDeploymentStatus;
 import org.apache.flink.kubernetes.operator.service.AbstractFlinkService;
+import org.apache.flink.runtime.rest.messages.DashboardConfiguration;
+import
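
For context, a rough sketch of how a per-namespace, per-version count gauge of the shape FlinkDeployment.FlinkVersion.<FlinkVersion>.Count can be wired up with Flink's generic MetricGroup API. The real logic lives in FlinkDeploymentMetrics (diff above); the bookkeeping and method names below are illustrative only.

    import org.apache.flink.metrics.Gauge;
    import org.apache.flink.metrics.MetricGroup;

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class FlinkVersionMetricsSketch {
        private final Map<String, Set<String>> deploymentsPerVersion = new ConcurrentHashMap<>();

        // Registers the gauge lazily the first time a given Flink version is observed.
        void onDeploymentObserved(MetricGroup namespaceGroup, String flinkVersion, String deploymentName) {
            deploymentsPerVersion
                    .computeIfAbsent(
                            flinkVersion,
                            v -> {
                                Gauge<Integer> count =
                                        () -> deploymentsPerVersion.getOrDefault(v, Set.of()).size();
                                namespaceGroup
                                        .addGroup("FlinkDeployment")
                                        .addGroup("FlinkVersion")
                                        .addGroup(v)
                                        .gauge("Count", count);
                                return ConcurrentHashMap.newKeySet();
                            })
                    .add(deploymentName);
        }
    }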

[flink-kubernetes-operator] branch main updated: [FLINK-31716] Event UID field is missing the first time that an event is consumed

2023-04-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 658ad63e [FLINK-31716] Event UID field is missing the first time that 
an event is consumed
658ad63e is described below

commit 658ad63ed98010634ba385fae5992c55daa4f621
Author: Rodrigo Meneses 
AuthorDate: Mon Apr 3 14:26:58 2023 -0700

[FLINK-31716] Event UID field is missing the first time that an event is 
consumed
---
 .../flink/kubernetes/operator/utils/EventUtils.java   |  4 +++-
 .../kubernetes/operator/utils/EventUtilsTest.java | 19 ---
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
index d993de20..6769ce93 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/EventUtils.java
@@ -104,7 +104,9 @@ public class EventUtils {
 .withNamespace(target.getMetadata().getNamespace())
 .endMetadata()
 .build();
-client.resource(event).createOrReplace();
+
+var ev = client.resource(event).createOrReplace();
+event.getMetadata().setUid(ev.getMetadata().getUid());
 eventListener.accept(event);
 return true;
 }
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/EventUtilsTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/EventUtilsTest.java
index 9e9404e4..d5c7dd9d 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/EventUtilsTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/utils/EventUtilsTest.java
@@ -19,21 +19,32 @@ package org.apache.flink.kubernetes.operator.utils;
 
 import org.apache.flink.kubernetes.operator.TestUtils;
 
+import io.fabric8.kubernetes.api.model.Event;
 import io.fabric8.kubernetes.client.KubernetesClient;
 import io.fabric8.kubernetes.client.server.mock.EnableKubernetesMockClient;
 import io.fabric8.kubernetes.client.server.mock.KubernetesMockServer;
 import org.junit.jupiter.api.Assertions;
 import org.junit.jupiter.api.Test;
 
+import java.util.function.Consumer;
+
 /** Test for {@link EventUtils}. */
 @EnableKubernetesMockClient(crud = true)
 public class EventUtilsTest {
 
 private KubernetesMockServer mockServer;
 private KubernetesClient kubernetesClient;
+private Event eventConsumed = null;
 
 @Test
 public void testCreateOrReplaceEvent() {
+var consumer =
+new Consumer<Event>() {
+@Override
+public void accept(Event event) {
+eventConsumed = event;
+}
+};
 var flinkApp = TestUtils.buildApplicationCluster();
 var reason = "Cleanup";
 var message = "message";
@@ -52,7 +63,7 @@ public class EventUtilsTest {
 reason,
 message,
 EventRecorder.Component.Operator,
-e -> {}));
+consumer));
 var event =
 kubernetesClient
 .v1()
@@ -60,6 +71,8 @@ public class EventUtilsTest {
 .inNamespace(flinkApp.getMetadata().getNamespace())
 .withName(eventName)
 .get();
+Assertions.assertEquals(event.getMetadata().getUid(), 
eventConsumed.getMetadata().getUid());
+eventConsumed = null;
 Assertions.assertNotNull(event);
 Assertions.assertEquals(1, event.getCount());
 Assertions.assertEquals(reason, event.getReason());
@@ -72,7 +85,7 @@ public class EventUtilsTest {
 reason,
 message,
 EventRecorder.Component.Operator,
-e -> {}));
+consumer));
 event =
 kubernetesClient
 .v1()
@@ -80,7 +93,7 @@ public class EventUtilsTest {
 .inNamespace(flinkApp.getMetadata().getNamespace())
 .withName(eventName)
 .get();
-
+Assertions.assertEquals(event.getMetadata().getUid(), 
eventConsumed.getMetadata().getUid());
 Assertions.assertEquals(2, event.getCount());
 }
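The change above works because fabric8's createOrReplace() returns the persisted, server-side object, whose metadata carries the generated UID that the locally built Event does not yet have. A minimal sketch of the pattern; the helper class below is hypothetical and not part of the operator.

import io.fabric8.kubernetes.api.model.Event;
import io.fabric8.kubernetes.client.KubernetesClient;

/** Illustrative sketch: propagate the server-assigned UID back onto the local Event. */
public final class EventUidBackfillSketch {

    private EventUidBackfillSketch() {}

    public static Event createAndBackfillUid(KubernetesClient client, Event event) {
        // The API server generates metadata.uid on creation; the locally built object lacks it.
        Event persisted = client.resource(event).createOrReplace();
        event.getMetadata().setUid(persisted.getMetadata().getUid());
        return event;
    }
}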
 



[flink-kubernetes-operator] branch main updated (4c944403 -> d9ec4b03)

2023-04-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from 4c944403 [FLINK-31630] Limit max checkpoint age for last-state upgrade
 new 048c492d [FLINK-31303] Expose Flink application resource usage via 
metrics and status
 new d9ec4b03 [FLINK-31303] Fix fractional CPU calculation and added test

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../operator/metrics/FlinkDeploymentMetrics.java   |  95 --
 .../operator/service/AbstractFlinkService.java |  47 +--
 .../kubernetes/operator/utils/FlinkUtils.java  |  38 ++
 .../metrics/FlinkDeploymentMetricsTest.java| 137 +
 .../kubernetes/operator/utils/FlinkUtilsTest.java  |  40 ++
 5 files changed, 339 insertions(+), 18 deletions(-)



[flink-kubernetes-operator] 01/02: [FLINK-31303] Expose Flink application resource usage via metrics and status

2023-04-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit 048c492d09c2208904908d3f15a3a235969a7c19
Author: Marton Balassi 
AuthorDate: Tue Mar 14 16:25:40 2023 +0100

[FLINK-31303] Expose Flink application resource usage via metrics and status
---
 .../operator/metrics/FlinkDeploymentMetrics.java   | 93 +++---
 .../operator/service/AbstractFlinkService.java | 72 +++--
 2 files changed, 147 insertions(+), 18 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
index d41c6dde..3f1cf394 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
@@ -20,6 +20,9 @@ package org.apache.flink.kubernetes.operator.metrics;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
 import 
org.apache.flink.kubernetes.operator.api.status.JobManagerDeploymentStatus;
+import org.apache.flink.kubernetes.operator.service.AbstractFlinkService;
+
+import org.apache.commons.lang3.math.NumberUtils;
 
 import java.util.Map;
 import java.util.Set;
@@ -30,10 +33,19 @@ public class FlinkDeploymentMetrics implements CustomResourceMetrics<FlinkDeployment> {
-    // map(namespace, map(status, set(deployment))
-    private final Map<String, Map<JobManagerDeploymentStatus, Set<String>>> deployments =
+
+    // map(namespace, map(status, set(deployment))
+    private final Map<String, Map<JobManagerDeploymentStatus, Set<String>>> deploymentStatuses =
             new ConcurrentHashMap<>();
+    // map(namespace, map(deployment, cpu))
+    private final Map<String, Map<String, Double>> deploymentCpuUsage = new ConcurrentHashMap<>();
+    // map(namespace, map(deployment, memory))
+    private final Map<String, Map<String, Long>> deploymentMemoryUsage = new ConcurrentHashMap<>();
 public static final String STATUS_GROUP_NAME = "JmDeploymentStatus";
+public static final String RESOURCE_USAGE_GROUP_NAME = "ResourceUsage";
 public static final String COUNTER_NAME = "Count";
+public static final String CPU_NAME = "Cpu";
+public static final String MEMORY_NAME = "Memory";
 
 public FlinkDeploymentMetrics(
 KubernetesOperatorMetricGroup parentMetricGroup, Configuration 
configuration) {
@@ -43,26 +55,60 @@ public class FlinkDeploymentMetrics implements CustomResourceMetrics<FlinkDeployment> {
 initNamespaceDeploymentCounts(ns);
 initNamespaceStatusCounts(ns);
 return createDeploymentStatusMap();
 })
 .get(flinkApp.getStatus().getJobManagerDeploymentStatus())
-.add(flinkApp.getMetadata().getName());
+.add(deploymentName);
+
+deploymentCpuUsage
+.computeIfAbsent(
+namespace,
+ns -> {
+initNamespaceCpuUsage(ns);
+return new ConcurrentHashMap<>();
+})
+.put(
+deploymentName,
+NumberUtils.toDouble(
+clusterInfo.getOrDefault(
+
AbstractFlinkService.FIELD_NAME_TOTAL_CPU, "0")));
+
+deploymentMemoryUsage
+.computeIfAbsent(
+namespace,
+ns -> {
+initNamespaceMemoryUsage(ns);
+return new ConcurrentHashMap<>();
+})
+.put(
+deploymentName,
+NumberUtils.toLong(
+clusterInfo.getOrDefault(
+
AbstractFlinkService.FIELD_NAME_TOTAL_MEMORY, "0")));
 }
 
 public void onRemove(FlinkDeployment flinkApp) {
-if (!deployments.containsKey(flinkApp.getMetadata().getNamespace())) {
+var namespace = flinkApp.getMetadata().getNamespace();
+var name = flinkApp.getMetadata().getName();
+
+if (!deploymentStatuses.containsKey(namespace)) {
 return;
 }
-deployments
-.get(flinkApp.getMetadata().getNamespace())
-.values()
-.forEach(names -> 
names.remove(flinkApp.getMetadata().getName()));
+deploymentStatuses.get(namespace).values().forEach(names -> 
names.remove(name));
+
+deploymentCpuUsage.get(namespace).remove(name);
+deploymentMemoryUsage.get(namespace).remove(name);
 }
 
 private void in
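The two FLINK-31303 commits above keep CPU and memory per deployment in nested maps keyed by namespace and expose the namespace totals as gauges. A minimal sketch of that aggregation follows; class and method names are illustrative, not the operator's exact code.

import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: aggregate per-deployment CPU into a namespace-level gauge. */
public class NamespaceCpuUsageSketch {

    // namespace -> (deployment name -> CPU cores reported for that deployment)
    private final Map<String, Map<String, Double>> cpuByNamespace = new ConcurrentHashMap<>();

    public void onUpdate(String namespace, String deployment, double cpu) {
        cpuByNamespace
                .computeIfAbsent(namespace, ns -> new ConcurrentHashMap<>())
                .put(deployment, cpu);
    }

    public void registerGauge(MetricGroup namespaceGroup, String namespace) {
        // Exposes <scope>.FlinkDeployment.ResourceUsage.Cpu as the sum over all deployments
        namespaceGroup
                .addGroup("FlinkDeployment")
                .addGroup("ResourceUsage")
                .gauge("Cpu", (Gauge<Double>) () ->
                        cpuByNamespace.getOrDefault(namespace, Map.of()).values().stream()
                                .mapToDouble(Double::doubleValue)
                                .sum());
    }
}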

[flink-kubernetes-operator] 02/02: [FLINK-31303] Fix fractional CPU calculation and added test

2023-04-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git

commit d9ec4b034078a64d08ed01705f5712a45c9a2ca7
Author: Mate Czagany 
AuthorDate: Sun Apr 2 19:27:44 2023 +0200

[FLINK-31303] Fix fractional CPU calculation and added test
---
 .../operator/metrics/FlinkDeploymentMetrics.java   |  12 +-
 .../operator/service/AbstractFlinkService.java |  39 ++
 .../kubernetes/operator/utils/FlinkUtils.java  |  38 ++
 .../metrics/FlinkDeploymentMetricsTest.java| 137 +
 .../kubernetes/operator/utils/FlinkUtilsTest.java  |  40 ++
 5 files changed, 229 insertions(+), 37 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
index 3f1cf394..36c09d95 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/metrics/FlinkDeploymentMetrics.java
@@ -71,6 +71,12 @@ public class FlinkDeploymentMetrics implements 
CustomResourceMetrics();
 })
-.put(
-deploymentName,
-NumberUtils.toDouble(
-clusterInfo.getOrDefault(
-
AbstractFlinkService.FIELD_NAME_TOTAL_CPU, "0")));
+.put(deploymentName, totalCpu);
 
 deploymentMemoryUsage
 .computeIfAbsent(
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
index e59c085d..aef44c6f 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
@@ -26,10 +26,8 @@ import 
org.apache.flink.client.program.rest.RestClusterClient;
 import org.apache.flink.configuration.CheckpointingOptions;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.RestOptions;
-import org.apache.flink.kubernetes.KubernetesClusterClientFactory;
 import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
 import 
org.apache.flink.kubernetes.kubeclient.decorators.ExternalServiceDecorator;
-import 
org.apache.flink.kubernetes.kubeclient.parameters.KubernetesJobManagerParameters;
 import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
 import org.apache.flink.kubernetes.operator.api.FlinkSessionJob;
 import org.apache.flink.kubernetes.operator.api.spec.FlinkSessionJobSpec;
@@ -52,7 +50,6 @@ import 
org.apache.flink.kubernetes.operator.utils.SavepointUtils;
 import org.apache.flink.runtime.client.JobStatusMessage;
 import org.apache.flink.runtime.execution.ExecutionState;
 import 
org.apache.flink.runtime.highavailability.nonha.standalone.StandaloneClientHAServices;
-import org.apache.flink.runtime.instance.HardwareDescription;
 import org.apache.flink.runtime.jobgraph.RestoreMode;
 import org.apache.flink.runtime.jobmaster.JobResult;
 import org.apache.flink.runtime.messages.webmonitor.JobDetails;
@@ -73,7 +70,6 @@ import 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointStatusHead
 import 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointStatusMessageParameters;
 import 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointTriggerHeaders;
 import 
org.apache.flink.runtime.rest.messages.job.savepoints.SavepointTriggerRequestBody;
-import org.apache.flink.runtime.rest.messages.taskmanager.TaskManagerInfo;
 import org.apache.flink.runtime.rest.messages.taskmanager.TaskManagersHeaders;
 import org.apache.flink.runtime.rest.messages.taskmanager.TaskManagersInfo;
 import org.apache.flink.runtime.rest.util.RestConstants;
@@ -124,11 +120,9 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
-import java.util.function.Supplier;
 import java.util.jar.JarOutputStream;
 import java.util.jar.Manifest;
 import java.util.stream.Collectors;
-import java.util.stream.Stream;
 
 import static 
org.apache.flink.kubernetes.operator.config.FlinkConfigBuilder.FLINK_VERSION;
 import static 
org.apache.flink.kubernetes.operator.config.KubernetesOperatorConfigOptions.K8S_OP_CONF_PREFIX;
@@ -659,32 +653,13 @@ public abstract class AbstractFlinkService implements 
FlinkService {
 dashboardConfiguration.getFli

[flink-kubernetes-operator] branch main updated: [FLINK-31407] Bump fabric8 version to 6.5.0

2023-03-22 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 91cf2338 [FLINK-31407] Bump fabric8 version to 6.5.0
91cf2338 is described below

commit 91cf2338c9bd54b1bad7680c231a59facc12c8b9
Author: Márton Balassi 
AuthorDate: Wed Mar 22 22:25:26 2023 +0100

[FLINK-31407] Bump fabric8 version to 6.5.0
---
 .../operator/autoscaler/AutoScalerInfo.java|   2 +-
 .../operator/service/AbstractFlinkService.java |   1 -
 .../operator/utils/KubernetesClientUtils.java  |   4 +-
 .../kubernetes/operator/utils/StatusRecorder.java  |   4 +-
 .../flink/kubernetes/operator/TestUtils.java   |   2 +-
 .../metrics/KubernetesClientMetricsTest.java   |   2 +-
 .../operator/utils/KubernetesClientUtilsTest.java  |  20 +--
 .../Fabric8FlinkStandaloneKubeClient.java  |   5 +-
 .../crds/flinkdeployments.flink.apache.org-v1.yml  | 153 +
 pom.xml|   2 +-
 10 files changed, 169 insertions(+), 26 deletions(-)

diff --git 
a/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java
 
b/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java
index 8850b95a..a5462219 100644
--- 
a/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java
+++ 
b/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerInfo.java
@@ -181,7 +181,7 @@ public class AutoScalerInfo {
 
 public void replaceInKubernetes(KubernetesClient client) throws Exception {
 trimHistoryToMaxCmSize();
-client.resource(configMap).replace();
+client.resource(configMap).update();
 }
 
 @VisibleForTesting
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
index 4c5d9fcc..97262daf 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
@@ -808,7 +808,6 @@ public abstract class AbstractFlinkService implements 
FlinkService {
 .inNamespace(namespace)
 .withName(
 
ExternalServiceDecorator.getExternalServiceName(clusterId))
-.fromServer()
 .get();
 if (service == null) {
 serviceRunning = false;
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/KubernetesClientUtils.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/KubernetesClientUtils.java
index d52bfc67..d6724adf 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/KubernetesClientUtils.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/KubernetesClientUtils.java
@@ -73,12 +73,12 @@ public class KubernetesClientUtils {
 
 public static > void applyToStoredCr(
 KubernetesClient kubernetesClient, T cr, Consumer function) {
-var inKube = kubernetesClient.resource(cr).fromServer().get();
+var inKube = kubernetesClient.resource(cr).get();
 Long localGeneration = cr.getMetadata().getGeneration();
 Long serverGeneration = inKube.getMetadata().getGeneration();
 if (serverGeneration.equals(localGeneration)) {
 function.accept(inKube);
-kubernetesClient.resource(inKube).lockResourceVersion().replace();
+kubernetesClient.resource(inKube).lockResourceVersion().update();
 } else {
 LOG.info(
 "Spec already upgrading in kube (generation - local: {} 
server: {}), skipping scale operation.",
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
index b1bec2f7..e009a7ce 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/StatusRecorder.java
@@ -120,7 +120,7 @@ public class StatusRecorder<
 int retries = 0;
 while (true) {
 try {
-   
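The fabric8 6.5.0 migration above swaps the deprecated createOrReplace()/replace() calls for update() and drops fromServer(), since resource(...).get() now always reads the live object from the API server. A minimal read-modify-update sketch against fabric8 6.x; the namespace and ConfigMap name are hypothetical.

import io.fabric8.kubernetes.api.model.ConfigMap;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import java.util.Map;

/** Illustrative sketch: fabric8 6.x style read-modify-update of a ConfigMap. */
public class Fabric8UpdateSketch {

    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // fromServer() is gone in 6.x: a plain get() already reads the live object
            ConfigMap cm = client.configMaps().inNamespace("default").withName("demo").get();
            cm.setData(Map.of("touched", "true"));
            // update() replaces the object using the resourceVersion carried by 'cm'
            client.resource(cm).update();
        }
    }
}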

[flink] branch master updated (8990822bd77 -> e818c11d104)

2023-03-15 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 8990822bd77 [FLINK-31273][table-planner] Fix left join with IS_NULL 
filter be wrongly pushed down and get wrong join results
 add e818c11d104 [FLINK-31085][formats] Add schema option to confluent 
registry avro formats

No new revisions were added by this update.

Summary of changes:
 .../connectors/table/formats/avro-confluent.md |   7 +
 .../docs/connectors/table/formats/debezium.md  |   7 +
 .../connectors/table/formats/avro-confluent.md |   8 ++
 .../docs/connectors/table/formats/debezium.md  |   7 +
 .../kafka/table/KafkaDynamicTableFactoryTest.java  |   2 +-
 .../confluent/AvroConfluentFormatOptions.java  |  12 ++
 .../confluent/RegistryAvroFormatFactory.java   |  45 ++-
 .../DebeziumAvroDeserializationSchema.java |  15 ++-
 .../debezium/DebeziumAvroFormatFactory.java|  32 -
 .../debezium/DebeziumAvroSerializationSchema.java  |  15 ++-
 .../confluent/RegistryAvroFormatFactoryTest.java   |  54 +++-
 .../debezium/DebeziumAvroFormatFactoryTest.java| 144 -
 .../avro/AvroRowDataSerializationSchema.java   |   6 +-
 .../formats/avro/AvroSerializationSchema.java  |   8 ++
 .../avro/AvroRowDataDeSerializationSchemaTest.java |  83 
 .../formats/avro/AvroSerializationSchemaTest.java  |  18 +++
 16 files changed, 438 insertions(+), 25 deletions(-)



[flink-web] branch asf-site updated: Update Slack invite link

2023-03-15 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8cfa7dad9 Update Slack invite link
8cfa7dad9 is described below

commit 8cfa7dad9a4a590d9a299056753fcacc7ace9875
Author: Ferenc Csaky 
AuthorDate: Tue Mar 14 19:25:17 2023 +0100

Update Slack invite link
---
 content/community/index.html   | 2 +-
 content/getting-help/index.html| 2 +-
 content/zh/community/index.html| 2 +-
 content/zh/getting-help/index.html | 2 +-
 docs/content.zh/community.md   | 2 +-
 docs/content.zh/getting-help.md| 2 +-
 docs/content/community.md  | 2 +-
 docs/content/getting-help.md   | 2 +-
 8 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/content/community/index.html b/content/community/index.html
index f111e2063..76405ae10 100644
--- a/content/community/index.html
+++ b/content/community/index.html
@@ -1591,7 +1591,7 @@ data-td_id='Archive'>https://lists.apache.org/list.html?commits@flink.a
   Slack
   #
 
-You can join the https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A";>Apache
 Flink community on Slack.
+You can join the https://join.slack.com/t/apache-flink/shared_invite/zt-1qxczhnzb-Redy4vcPAiTkfcBw81Dm8Q";>Apache
 Flink community on Slack.
 After creating an account in Slack, don’t forget to introduce yourself 
in #introductions.
 Due to Slack limitations the invite link expires after 100 invites. If it is 
expired, please reach out to the Dev mailing list.
 Any existing Slack member can also invite anyone else to join.
diff --git a/content/getting-help/index.html b/content/getting-help/index.html
index 71d20d15e..64c88259d 100644
--- a/content/getting-help/index.html
+++ b/content/getting-help/index.html
@@ -971,7 +971,7 @@ under the License.
   Slack
   #
 
-You can join the https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A";>Apache
 Flink community on Slack.
+You can join the https://join.slack.com/t/apache-flink/shared_invite/zt-1qxczhnzb-Redy4vcPAiTkfcBw81Dm8Q";>Apache
 Flink community on Slack.
 After creating an account in Slack, don’t forget to introduce yourself 
in #introductions.
 Due to Slack limitations the invite link expires after 100 invites. If it is 
expired, please reach out to the Dev 
mailing list.
 Any existing Slack member can also invite anyone else to join.
diff --git a/content/zh/community/index.html b/content/zh/community/index.html
index 529c93c14..fdceefcb4 100644
--- a/content/zh/community/index.html
+++ b/content/zh/community/index.html
@@ -1601,7 +1601,7 @@ data-td_id='Archive'>https://lists.apache.org/list.html?commits@flink.a
   Slack
   #
 
-你可以通过 https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A";>此链接
+你可以通过 https://join.slack.com/t/apache-flink/shared_invite/zt-1qxczhnzb-Redy4vcPAiTkfcBw81Dm8Q";>此链接
 加入 Apache Flink 社区专属的 Slack 工作空间。 在成功加入后,不要忘记在 #introductions 频道介绍你自己。 Slack 
规定每个邀请链接最多可邀请 100 人,如果遇到上述链接失效的情况,请联系 Dev 邮件列表。
 所有已经加入社区 Slack 空间的成员同样可以邀请新成员加入。
 在 Slack 空间交流时,请遵守以下规则:
diff --git a/content/zh/getting-help/index.html 
b/content/zh/getting-help/index.html
index d942bba06..c13bcb666 100644
--- a/content/zh/getting-help/index.html
+++ b/content/zh/getting-help/index.html
@@ -981,7 +981,7 @@ under the License.
   Slack
   #
 
-你可以通过 https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A";>此链接
 加入 Apache Flink 社区专属的 Slack 工作空间。
+你可以通过 https://join.slack.com/t/apache-flink/shared_invite/zt-1qxczhnzb-Redy4vcPAiTkfcBw81Dm8Q";>此链接
 加入 Apache Flink 社区专属的 Slack 工作空间。
 在成功加入后,不要忘记在 #introductions 频道介绍你自己。
 Slack 规定每个邀请链接最多可邀请 100 人,如果遇到上述链接失效的情况,请联系 Dev 邮件列表。
 所有已经加入社区 Slack 空间的成员同样可以邀请新成员加入。
diff --git a/docs/content.zh/community.md b/docs/content.zh/community.md
index c707ec134..864abf17f 100644
--- a/docs/content.zh/community.md
+++ b/docs/content.zh/community.md
@@ -145,7 +145,7 @@ under the License.
 
 ## Slack
 
-你可以通过 
[此链接](https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A)
+你可以通过 
[此链接](https://join.slack.com/t/apache-flink/shared_invite/zt-1qxczhnzb-Redy4vcPAiTkfcBw81Dm8Q)
 加入 Apache Flink 社区专属的 Slack 工作空间。 在成功加入后,不要忘记在 #introductions 频道介绍你自己。 Slack 
规定每个邀请链接最多可邀请 100 人,如果遇到上述链接失效的情况,请联系 [Dev 邮件列表](#mailing-lists)。 
 所有已经加入社区 Slack 空间的成员同样可以邀请新成员加入。
 
diff --git a/docs/content.zh/getting-help.md b/docs/content.zh/getting-help.md
index ce33180df..7f10374c9 100644
--- a/docs/content.zh/getting-help.md
+++ b/docs/content.zh/getting-help.md
@@ -47,7 +47,7 @@ Apache Flink 社区每天都会回答许多用户的问题。你可以从历史
 
 ### Slack
 
-你可以通过 
[此链接](https://join.slack.com/t/apache-flink/shared_invite/zt-1oamx9dv6-5fB8pQqUH2qY~A_77D4S2A)
 加入 Apache Flink 社区专属的 Slack 工作空间。
+你可以通过 
[此链

[flink] branch master updated: [FLINK-31401][streaming][tests] Make parallelism assumption explicit in StreamingJobGraphGeneratorTest

2023-03-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new eb83d854629 [FLINK-31401][streaming][tests] Make parallelism 
assumption explicit in StreamingJobGraphGeneratorTest
eb83d854629 is described below

commit eb83d854629c676becdce50ce25eb543b8b2e28a
Author: Marton Balassi 
AuthorDate: Fri Mar 10 16:32:41 2023 +0100

[FLINK-31401][streaming][tests] Make parallelism assumption explicit in 
StreamingJobGraphGeneratorTest
---
 .../flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java  | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
index a58a52dc536..8a6d2c9f582 100644
--- 
a/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
+++ 
b/flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java
@@ -259,6 +259,9 @@ class StreamingJobGraphGeneratorTest {
 @Test
 public void testTransformationSetParallelism() {
 StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+/* The default parallelism of the environment (that is inherited by 
the source)
+and the parallelism of the map operator needs to be different for this 
test */
+env.setParallelism(4);
 env.fromSequence(1L, 3L).map(i -> 
i).setParallelism(10).print().setParallelism(20);
 StreamGraph streamGraph = env.getStreamGraph();
 



[flink-web] 01/02: Kubernetes Operator 1.4.0 release

2023-02-23 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 7e409734914a405ec83d2e584a163ae87893204b
Author: Gyula Fora 
AuthorDate: Thu Feb 23 16:15:52 2023 +0100

Kubernetes Operator 1.4.0 release
---
 docs/config.toml|  4 ++--
 docs/data/flink_kubernetes_operator.yml | 14 +++---
 docs/data/release_archive.yml   |  5 -
 3 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/docs/config.toml b/docs/config.toml
index a13fdbbf1..98d540003 100644
--- a/docs/config.toml
+++ b/docs/config.toml
@@ -39,8 +39,8 @@ posts = "/:year/:month/:day/:title/"
   StateFunStableShortVersion = "3.2"
   FlinkMLStableVersion = "2.1.0"
   FlinkMLStableShortVersion = "2.1"
-  FlinkKubernetesOperatorStableVersion = "1.3.1"
-  FlinkKubernetesOperatorStableShortVersion = "1.3"
+  FlinkKubernetesOperatorStableVersion = "1.4.0"
+  FlinkKubernetesOperatorStableShortVersion = "1.4"
   FlinkTableStoreStableVersion = "0.3.0"
   FlinkTableStoreStableShortVersion = "0.3"
 
diff --git a/docs/data/flink_kubernetes_operator.yml 
b/docs/data/flink_kubernetes_operator.yml
index 2c73bb37d..ea8df4c4a 100644
--- a/docs/data/flink_kubernetes_operator.yml
+++ b/docs/data/flink_kubernetes_operator.yml
@@ -15,16 +15,16 @@
 # specific language governing permissions and limitations
 # under the License
 
+1.4:
+  name: "Apache Flink Kubernetes Operator 1.4.0"
+  source_release_url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.4.0/flink-kubernetes-operator-1.4.0-src.tgz";
+  source_release_asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.4.0/flink-kubernetes-operator-1.4.0-src.tgz.asc";
+  source_release_sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.4.0/flink-kubernetes-operator-1.4.0-src.tgz.sha512";
+  compatibility: ["1.17.0", "1.16.1", "1.15.3", "1.14.6", "1.13.6"]
+
 1.3:
   name: "Apache Flink Kubernetes Operator 1.3.1"
   source_release_url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.3.1/flink-kubernetes-operator-1.3.1-src.tgz";
   source_release_asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.1/flink-kubernetes-operator-1.3.1-src.tgz.asc";
   source_release_sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.1/flink-kubernetes-operator-1.3.1-src.tgz.sha512";
   compatibility: ["1.16.0", "1.15.3", "1.14.6", "1.13.6"]
-
-1.2:
-  name: "Apache Flink Kubernetes Operator 1.2.0"
-  source_release_url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-src.tgz";
-  source_release_asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-src.tgz.asc";
-  source_release_sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-src.tgz.sha512";
-  compatibility: ["1.15.2", "1.14.6", "1.13.6"]
diff --git a/docs/data/release_archive.yml b/docs/data/release_archive.yml
index 3e04145d7..918f88b57 100644
--- a/docs/data/release_archive.yml
+++ b/docs/data/release_archive.yml
@@ -483,6 +483,9 @@ release_archive:
   release_date: 2021-01-07
 
   flink_kubernetes_operator:
+- version_short: 1.4
+  version_long: 1.4.0
+  release_date: 2023-02-22
 - version_short: 1.3
   version_long: 1.3.1
   release_date: 2023-01-10
@@ -514,4 +517,4 @@ release_archive:
   release_date: 2022-08-29
 - version_short: 0.1
   version_long: 0.1.0
-  release_date: 2022-05-11
\ No newline at end of file
+  release_date: 2022-05-11



[flink-web] branch asf-site updated (63e15d083 -> 147d1c0d5)

2023-02-23 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


from 63e15d083 Rebuild website
 new 7e4097349 Kubernetes Operator 1.4.0 release
 new 147d1c0d5 Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../08/26/apache-flink-0.6-available/index.html|   9 +-
 .../09/26/apache-flink-0.6.1-available/index.html  |   9 +-
 content/2014/10/03/upcoming-events/index.html  |   9 +-
 .../11/04/apache-flink-0.7.0-available/index.html  |   9 +-
 .../11/18/hadoop-compatibility-in-flink/index.html |  15 +--
 .../index.html |   9 +-
 .../01/21/apache-flink-0.8.0-available/index.html  |   9 +-
 .../january-2015-in-the-flink-community/index.html |   9 +-
 .../02/09/introducing-flink-streaming/index.html   |  71 +++--
 .../index.html |   9 +-
 .../index.html |   9 +-
 .../march-2015-in-the-flink-community/index.html   |   9 +-
 .../index.html |   9 +-
 .../05/11/juggling-with-bits-and-bytes/index.html  |   9 +-
 .../april-2015-in-the-flink-community/index.html   |   9 +-
 .../06/24/announcing-apache-flink-0.9.0/index.html |   9 +-
 .../index.html |  19 ++--
 .../09/01/apache-flink-0.9.1-available/index.html  |   9 +-
 .../09/03/announcing-flink-forward-2015/index.html |   9 +-
 .../index.html |   9 +-
 .../16/announcing-apache-flink-0.10.0/index.html   |   9 +-
 .../2015/11/27/flink-0.10.1-released/index.html|   9 +-
 .../index.html |   9 +-
 .../index.html |  15 +--
 .../index.html |   9 +-
 .../2016/02/11/flink-0.10.2-released/index.html|   9 +-
 .../03/08/announcing-apache-flink-1.0.0/index.html |   9 +-
 content/2016/04/06/flink-1.0.1-released/index.html |   9 +-
 .../index.html |  15 +--
 .../index.html |   9 +-
 content/2016/04/22/flink-1.0.2-released/index.html |   9 +-
 content/2016/05/11/flink-1.0.3-released/index.html |   9 +-
 .../index.html |   9 +-
 .../08/04/announcing-apache-flink-1.1.0/index.html |  13 +--
 content/2016/08/04/flink-1.1.1-released/index.html |   9 +-
 .../index.html |   9 +-
 .../09/05/apache-flink-1.1.2-released/index.html   |   9 +-
 .../10/12/apache-flink-1.1.3-released/index.html   |   9 +-
 .../apache-flink-in-2016-year-in-review/index.html |   9 +-
 .../12/21/apache-flink-1.1.4-released/index.html   |   9 +-
 .../02/06/announcing-apache-flink-1.2.0/index.html |   9 +-
 .../03/23/apache-flink-1.1.5-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../index.html |   9 +-
 .../04/26/apache-flink-1.2.1-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../index.html |   9 +-
 .../06/23/apache-flink-1.3.1-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../08/05/apache-flink-1.3.2-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../index.html |   9 +-
 .../apache-flink-in-2017-year-in-review/index.html |   9 +-
 .../index.html |   9 +-
 .../02/15/apache-flink-1.4.1-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../03/08/apache-flink-1.4.2-released/index.html   |   9 +-
 .../03/15/apache-flink-1.3.3-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../07/12/apache-flink-1.5.1-released/index.html   |   9 +-
 .../07/31/apache-flink-1.5.2-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../08/21/apache-flink-1.5.3-released/index.html   |   9 +-
 .../09/20/apache-flink-1.5.4-released/index.html   |   9 +-
 .../09/20/apache-flink-1.6.1-released/index.html   |   9 +-
 .../10/29/apache-flink-1.5.5-released/index.html   |   9 +-
 .../10/29/apache-flink-1.6.2-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../12/21/apache-flink-1.7.1-released/index.html   |   9 +-
 .../12/22/apache-flink-1.6.3-released/index.html   |   9 +-
 .../12/26/apache-flink-1.5.6-released/index.html   |   9 +-
 .../index.html |   9 +-
 .../02/1

[flink-kubernetes-operator] branch main updated: [FLINK-30405] Add ResourceLifecycleStatus to CommonStatus and printer column (#524)

2023-02-09 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new a9f40451 [FLINK-30405] Add ResourceLifecycleStatus to CommonStatus and 
printer column (#524)
a9f40451 is described below

commit a9f404512e3d4508526c0bd0278bf66785ee42aa
Author: Zsombor Chikan <73717102+zc...@users.noreply.github.com>
AuthorDate: Fri Feb 10 01:09:54 2023 +0100

[FLINK-30405] Add ResourceLifecycleStatus to CommonStatus and printer 
column (#524)

Co-authored-by: Marton Balassi 
---
 docs/content/docs/custom-resource/reference.md |  2 ++
 .../operator/api/lifecycle/ResourceLifecycleState.java |  5 +++--
 .../kubernetes/operator/api/status/CommonStatus.java   |  9 +++--
 .../operator/api/status/ReconciliationStatus.java  |  2 --
 .../crds/flinkdeployments.flink.apache.org-v1.yml  | 18 +++---
 .../crds/flinksessionjobs.flink.apache.org-v1.yml  | 18 +++---
 6 files changed, 42 insertions(+), 12 deletions(-)

diff --git a/docs/content/docs/custom-resource/reference.md 
b/docs/content/docs/custom-resource/reference.md
index 7c27192a..cb0b8398 100644
--- a/docs/content/docs/custom-resource/reference.md
+++ b/docs/content/docs/custom-resource/reference.md
@@ -198,6 +198,7 @@ This page serves as a full reference for FlinkDeployment 
custom resource definit
 | --|  |  |
 | jobStatus | org.apache.flink.kubernetes.operator.api.status.JobStatus | Last 
observed status of the Flink job on Application/Session cluster. |
 | error | java.lang.String | Error information about the 
FlinkDeployment/FlinkSessionJob. |
+| lifecycleState | 
org.apache.flink.kubernetes.operator.api.lifecycle.ResourceLifecycleState | 
Lifecycle state of the Flink resource (including being rolled back, failed 
etc.). |
 | clusterInfo | java.util.Map<java.lang.String,java.lang.String> | Information from running clusters. |
 | jobManagerDeploymentStatus | 
org.apache.flink.kubernetes.operator.api.status.JobManagerDeploymentStatus | 
Last observed status of the JobManager deployment. |
 | reconciliationStatus | 
org.apache.flink.kubernetes.operator.api.status.FlinkDeploymentReconciliationStatus
 | Status of the last reconcile operation. |
@@ -224,6 +225,7 @@ This page serves as a full reference for FlinkDeployment 
custom resource definit
 | --|  |  |
 | jobStatus | org.apache.flink.kubernetes.operator.api.status.JobStatus | Last 
observed status of the Flink job on Application/Session cluster. |
 | error | java.lang.String | Error information about the 
FlinkDeployment/FlinkSessionJob. |
+| lifecycleState | 
org.apache.flink.kubernetes.operator.api.lifecycle.ResourceLifecycleState | 
Lifecycle state of the Flink resource (including being rolled back, failed 
etc.). |
 | reconciliationStatus | 
org.apache.flink.kubernetes.operator.api.status.FlinkSessionJobReconciliationStatus
 | Status of the last reconcile operation. |
 
 ### JobManagerDeploymentStatus
diff --git 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/lifecycle/ResourceLifecycleState.java
 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/lifecycle/ResourceLifecycleState.java
index 51d52a4b..ca580c37 100644
--- 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/lifecycle/ResourceLifecycleState.java
+++ 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/lifecycle/ResourceLifecycleState.java
@@ -17,6 +17,7 @@
 
 package org.apache.flink.kubernetes.operator.api.lifecycle;
 
+import com.fasterxml.jackson.annotation.JsonIgnore;
 import lombok.Getter;
 
 import java.util.Collections;
@@ -36,8 +37,8 @@ public enum ResourceLifecycleState {
 ROLLED_BACK(true, "The resource is deployed with the last stable spec"),
 FAILED(true, "The job terminally failed");
 
-private final boolean terminal;
-@Getter private final String description;
+@JsonIgnore private final boolean terminal;
+@JsonIgnore @Getter private final String description;
 
 ResourceLifecycleState(boolean terminal, String description) {
 this.terminal = terminal;
diff --git 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonStatus.java
 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonStatus.java
index b90949af..1493ab84 100644
--- 
a/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonStatus.java
+++ 
b/flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonStatus.java
@@ -22,7 +22,7 @@ import 
org.apache.flink.kubernetes.operator.api.lifecycle.Reso

[flink-kubernetes-operator] branch main updated: [FLINK-30858] Always record reconciled spec generation (#523)

2023-02-03 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new b45122ce [FLINK-30858] Always record reconciled spec generation (#523)
b45122ce is described below

commit b45122cef1c97a9aaa4c60dc8b28d09d46f6c9b4
Author: Gyula Fora 
AuthorDate: Sat Feb 4 03:09:54 2023 +0100

[FLINK-30858] Always record reconciled spec generation (#523)
---
 .../operator/reconciler/ReconciliationUtils.java   | 16 ++
 .../AbstractFlinkResourceReconciler.java   |  2 ++
 .../deployment/ApplicationReconcilerTest.java  | 35 ++
 3 files changed, 53 insertions(+)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/ReconciliationUtils.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/ReconciliationUtils.java
index 20d7bd4c..7cdd1e29 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/ReconciliationUtils.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/ReconciliationUtils.java
@@ -21,6 +21,7 @@ import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.kubernetes.operator.api.AbstractFlinkResource;
 import org.apache.flink.kubernetes.operator.api.FlinkDeployment;
+import 
org.apache.flink.kubernetes.operator.api.reconciler.ReconciliationMetadata;
 import org.apache.flink.kubernetes.operator.api.spec.AbstractFlinkSpec;
 import org.apache.flink.kubernetes.operator.api.spec.JobState;
 import org.apache.flink.kubernetes.operator.api.spec.UpgradeMode;
@@ -482,4 +483,19 @@ public class ReconciliationUtils {
 SpecUtils.writeSpecWithMeta(
 lastSpecWithMeta.getSpec(), 
lastSpecWithMeta.getMeta()));
 }
+
+public static  void 
updateReconciliationMetadata(
+AbstractFlinkResource resource) {
+var reconciliationStatus = 
resource.getStatus().getReconciliationStatus();
+var lastSpecWithMeta = 
reconciliationStatus.deserializeLastReconciledSpecWithMeta();
+var newMeta = ReconciliationMetadata.from(resource);
+
+if (newMeta.equals(lastSpecWithMeta.getMeta())) {
+// Nothing to update
+return;
+}
+
+reconciliationStatus.setLastReconciledSpec(
+SpecUtils.writeSpecWithMeta(lastSpecWithMeta.getSpec(), 
newMeta));
+}
 }
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java
index 79e5e72d..5c5bcd05 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/reconciler/deployment/AbstractFlinkResourceReconciler.java
@@ -157,6 +157,8 @@ public abstract class AbstractFlinkResourceReconciler<
 // reconcile other changes
 return;
 }
+} else {
+ReconciliationUtils.updateReconciliationMetadata(cr);
 }
 
 if (shouldRollBack(cr, observeConfig, ctx.getFlinkService())) {
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconcilerTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconcilerTest.java
index 063986eb..6c7269bc 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconcilerTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/reconciler/deployment/ApplicationReconcilerTest.java
@@ -746,4 +746,39 @@ public class ApplicationReconcilerTest extends 
OperatorTestBase {
 assertEquals(deployment.getSpec().getRestartNonce(), 
lastReconciledSpec.getRestartNonce());
 assertEquals(JobState.SUSPENDED, 
lastReconciledSpec.getJob().getState());
 }
+
+@Test
+public void testUpgradeReconciledGeneration() throws Exception {
+FlinkDeployment deployment = TestUtils.buildApplicationCluster();
+deployment.getMetadata().setGeneration(1L);
+
+// Initial deployment
+reconciler.reconcile(deployment, context);
+verifyAndSetRunningJobsToStatus(deployment, flinkService.listJobs());
+
+assertEquals(
+1L,
+deployment
+.getSta

[flink] branch master updated: [FLINK-30754][tests] Fix ExceptionThrowingDelegationTokenProvider/Receiver multi-threaded test issues

2023-01-26 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 3a64648b431 [FLINK-30754][tests] Fix 
ExceptionThrowingDelegationTokenProvider/Receiver multi-threaded test issues
3a64648b431 is described below

commit 3a64648b4316fb5283bade5132d8fba1c2211d9b
Author: Gabor Somogyi 
AuthorDate: Thu Jan 19 10:54:12 2023 +0100

[FLINK-30754][tests] Fix ExceptionThrowingDelegationTokenProvider/Receiver 
multi-threaded test issues
---
 .../token/DefaultDelegationTokenManagerTest.java   | 12 
 .../DelegationTokenReceiverRepositoryTest.java |  4 +--
 .../ExceptionThrowingDelegationTokenProvider.java  | 32 --
 .../ExceptionThrowingDelegationTokenReceiver.java  | 21 --
 4 files changed, 38 insertions(+), 31 deletions(-)

diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManagerTest.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManagerTest.java
index e211389573b..b1b19109cbe 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManagerTest.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManagerTest.java
@@ -84,7 +84,7 @@ public class DefaultDelegationTokenManagerTest {
 assertThrows(
 Exception.class,
 () -> {
-ExceptionThrowingDelegationTokenProvider.throwInInit = 
true;
+
ExceptionThrowingDelegationTokenProvider.throwInInit.set(true);
 new DefaultDelegationTokenManager(new Configuration(), 
null, null, null);
 });
 }
@@ -107,8 +107,8 @@ public class DefaultDelegationTokenManagerTest {
 assertTrue(delegationTokenManager.isProviderLoaded("test"));
 assertTrue(delegationTokenManager.isReceiverLoaded("test"));
 
-assertTrue(ExceptionThrowingDelegationTokenProvider.constructed);
-assertTrue(ExceptionThrowingDelegationTokenReceiver.constructed);
+assertTrue(ExceptionThrowingDelegationTokenProvider.constructed.get());
+assertTrue(ExceptionThrowingDelegationTokenReceiver.constructed.get());
 assertFalse(delegationTokenManager.isProviderLoaded("throw"));
 assertFalse(delegationTokenManager.isReceiverLoaded("throw"));
 }
@@ -169,7 +169,7 @@ public class DefaultDelegationTokenManagerTest {
 final ManuallyTriggeredScheduledExecutorService scheduler =
 new ManuallyTriggeredScheduledExecutorService();
 
-ExceptionThrowingDelegationTokenProvider.addToken = true;
+ExceptionThrowingDelegationTokenProvider.addToken.set(true);
 Configuration configuration = new Configuration();
 configuration.setBoolean(CONFIG_PREFIX + ".throw.enabled", true);
 AtomicInteger startTokensUpdateCallCount = new AtomicInteger(0);
@@ -184,10 +184,10 @@ public class DefaultDelegationTokenManagerTest {
 };
 
 delegationTokenManager.startTokensUpdate();
-ExceptionThrowingDelegationTokenProvider.throwInUsage = true;
+ExceptionThrowingDelegationTokenProvider.throwInUsage.set(true);
 scheduledExecutor.triggerScheduledTasks();
 scheduler.triggerAll();
-ExceptionThrowingDelegationTokenProvider.throwInUsage = false;
+ExceptionThrowingDelegationTokenProvider.throwInUsage.set(false);
 scheduledExecutor.triggerScheduledTasks();
 scheduler.triggerAll();
 delegationTokenManager.stopTokensUpdate();
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepositoryTest.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepositoryTest.java
index b1605226bd8..4b997fa81e6 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepositoryTest.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DelegationTokenReceiverRepositoryTest.java
@@ -53,7 +53,7 @@ class DelegationTokenReceiverRepositoryTest {
 assertThrows(
 Exception.class,
 () -> {
-ExceptionThrowingDelegationTokenReceiver.throwInInit = 
true;
+
ExceptionThrowingDelegationTokenReceiver.throwInInit.set(true);
 new DelegationTokenReceiverRepository(new Configuration(), 
null);
 });
 }
@@ -69,7 +69,7 @@ class DelegationTokenReceiverRepositoryTest {
 
assertTrue(delegationTokenReceiverRepository.isReceiverLoaded("hadoopfs"));
 
assertTrue(delegationToken
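The fix above replaces mutable static boolean flags with AtomicBoolean so that a flag flipped on one thread is reliably visible to test code running on another. A minimal sketch of the pattern; the class and field names are illustrative, not Flink's test utilities.

import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative sketch: a failure-injection flag that is safe to flip across threads. */
public class ThrowFlagSketch {

    // Unlike a plain static boolean, writes are immediately visible to other threads
    public static final AtomicBoolean throwInUsage = new AtomicBoolean(false);

    public static void doWork() {
        if (throwInUsage.get()) {
            throw new IllegalStateException("Failure injected for testing");
        }
    }

    public static void main(String[] args) {
        throwInUsage.set(true);
        try {
            doWork();
        } catch (IllegalStateException expected) {
            // expected in this demo
        } finally {
            throwInUsage.set(false); // always reset so unrelated tests are not affected
        }
    }
}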

[flink] branch master updated (9e9081cbf88 -> 409e18b1f96)

2023-01-23 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 9e9081cbf88 [FLINK-30764][docs] Include generic delegation token 
params in the main config page
 add 409e18b1f96 [FLINK-30713][k8s] Add Hadoop related k8s decorators 
exclude possibility

No new revisions were added by this update.

Summary of changes:
 .../generated/kubernetes_config_configuration.html | 12 +
 .../configuration/KubernetesConfigOptions.java | 20 +++
 .../factory/KubernetesJobManagerFactory.java   | 40 +-
 .../factory/KubernetesTaskManagerFactory.java  | 35 +
 .../factory/KubernetesJobManagerFactoryTest.java   | 61 ++
 .../factory/KubernetesTaskManagerFactoryTest.java  | 14 +
 6 files changed, 137 insertions(+), 45 deletions(-)



[flink-web] 02/02: Rebuild website

2023-01-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 9290f9c3b78745e0d4d348ab774b6af282532042
Author: Marton Balassi 
AuthorDate: Fri Jan 20 12:54:20 2023 +0100

Rebuild website
---
 content/blog/feed.xml  | 201 +++--
 content/blog/index.html|  43 ++-
 content/blog/page10/index.html |  43 ++-
 content/blog/page11/index.html |  36 ++-
 content/blog/page12/index.html |  36 ++-
 content/blog/page13/index.html |  42 +--
 content/blog/page14/index.html |  46 +--
 content/blog/page15/index.html |  42 ++-
 content/blog/page16/index.html |  40 ++-
 content/blog/page17/index.html |  40 ++-
 content/blog/page18/index.html |  40 ++-
 content/blog/page19/index.html |  40 ++-
 content/blog/page2/index.html  |  41 ++-
 content/blog/page20/index.html |  40 ++-
 content/blog/page21/index.html |  25 ++
 content/blog/page3/index.html  |  39 ++-
 content/blog/page4/index.html  |  38 ++-
 content/blog/page5/index.html  |  47 +--
 content/blog/page6/index.html  |  49 +--
 content/blog/page7/index.html  |  42 ++-
 content/blog/page8/index.html  |  40 ++-
 content/blog/page9/index.html  |  45 +--
 .../delegation_token_framework.svg |  23 ++
 content/index.html |  11 +-
 .../2023/01/20/delegation-token-framework.html}| 334 +++--
 content/zh/index.html  |  11 +-
 26 files changed, 786 insertions(+), 648 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 6cf80f447..d69118da8 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -6,6 +6,110 @@
 https://flink.apache.org/blog
 https://flink.apache.org/blog/feed.xml"; rel="self" 
type="application/rss+xml" />
 
+
+Delegation Token Framework: Obtain, Distribute and Use Temporary 
Credentials Automatically
+<p>The Apache Flink Community is pleased to announce that 
the upcoming minor version of Flink (1.17) includes 
+the Delegation Token Framework proposed in <a 
href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-272%3A+Generalized+delegation+token+support">FLIP-272</a>;.
+This enables Flink to authenticate to external services at a central location 
(JobManager) and distribute authentication
+tokens to the TaskManagers.</p>
+
+<h2 id="introduction">Introduction</h2>
+
+<p>Authentication in distributed systems is not an easy task. Previously 
all worker nodes (TaskManagers) reading from or 
+writing to an external system needed to authenticate on their own. In such a 
case several things can go wrong, including but not limited to:</p>
+
+<ul>
+  <li>Too many authentication requests (potentially resulting in 
rejected requests)</li>
+  <li>Large number of retries on authentication failures</li>
+  <li>Re-occurring propagation/update of temporary credentials in a 
timely manner</li>
+  <li>Dependency issues when external system libraries are having the 
same dependency with different versions</li>
+  <li>Each authentication/temporary credentials are different making 
standardization challenging</li>
+  <li>…</li>
+</ul>
+
+<p>The aim of Delegation Token Framework is to solve the above 
challenges. The framework is authentication protocol agnostic and pluggable.
+The primary design concept is that authentication happens only at a single 
location (JobManager), the obtained temporary
+credentials propagated automatically to all the task managers where they can 
be used. The token re-obtain process is also handled
+in the JobManager.</p>
+
+<p align="center">
+<img 
src="/img/blog/2023-01-20-delegation-token-framework/delegation_token_framework.svg"
 width="70%" height="70%" />
+</p>
+
+<p>New authentication providers can be added with small amount of code 
which is going to be loaded by Flink automatically.
+At the moment the following external systems are supported:</p>
+
+<ul>
+  <li>Hadoop filesystems</li>
+  <li>HBase</li>
+  <li>S3</li>
+</ul>
+
+<p>Planned, but not yet implemented/contributed:</p>
+
+<ul>
+  <li>Kafka</li>
+  <li>Hive</li>
+</ul>
+
+<p>The design and implementation approach 

[flink-web] 01/02: Blog post on the Delegation Token Framework

2023-01-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 6a044980eb8deaa0ae78b5a82c5dd9f059219559
Author: Gabor Somogyi 
AuthorDate: Wed Jan 18 13:24:42 2023 +0100

Blog post on the Delegation Token Framework

Closes #602.
---
 _posts/2023-01-20-delegation-token-framework.md| 97 ++
 .../delegation_token_framework.svg | 23 +
 2 files changed, 120 insertions(+)

diff --git a/_posts/2023-01-20-delegation-token-framework.md 
b/_posts/2023-01-20-delegation-token-framework.md
new file mode 100644
index 0..78a86b464
--- /dev/null
+++ b/_posts/2023-01-20-delegation-token-framework.md
@@ -0,0 +1,97 @@
+---
+layout: post
+title:  "Delegation Token Framework: Obtain, Distribute and Use Temporary 
Credentials Automatically"
+date: 2023-01-20T08:00:00.000Z
+categories: news
+authors:
+- gaborgsomogyi:
+  name: "Gabor Somogyi"
+- mbalassi:
+  name: "Marton Balassi"
+  twitter: "MartonBalassi"
+---
+
+The Apache Flink Community is pleased to announce that the upcoming minor 
version of Flink (1.17) includes 
+the Delegation Token Framework proposed in 
[FLIP-272](https://cwiki.apache.org/confluence/display/FLINK/FLIP-272%3A+Generalized+delegation+token+support).
+This enables Flink to authenticate to external services at a central location 
(JobManager) and distribute authentication
+tokens to the TaskManagers.
+
+## Introduction
+
+Authentication in distributed systems is not an easy task. Previously all 
worker nodes (TaskManagers) reading from or 
+writing to an external system needed to authenticate on their own. In such a 
case several things can go wrong, including but not limited to:
+
+* Too many authentication requests (potentially resulting in rejected requests)
+* Large number of retries on authentication failures
+* Re-occurring propagation/update of temporary credentials in a timely manner
+* Dependency issues when external system libraries are having the same 
dependency with different versions
+* Each authentication/temporary credentials are different making 
standardization challenging
+* ...
+
+The aim of Delegation Token Framework is to solve the above challenges. The 
framework is authentication protocol agnostic and pluggable.
+The primary design concept is that authentication happens only at a single 
location (JobManager), the obtained temporary
+credentials propagated automatically to all the task managers where they can 
be used. The token re-obtain process is also handled
+in the JobManager.
+
+
+
+
+
+New authentication providers can be added with small amount of code which is 
going to be loaded by Flink automatically.
+At the moment the following external systems are supported:
+
+* Hadoop filesystems
+* HBase
+* S3
+
+Planned, but not yet implemented/contributed:
+
+* Kafka
+* Hive
+
+The design and implementation approach has already been proven in [Apache 
Spark](https://spark.apache.org/docs/latest/security.html#kerberos).
+Gabor is a Spark committer, he championed this feature in the Spark community. 
The most notable improvement we achieved compared to the
+current state in Spark is that the framework in Flink is already 
authentication protocol agnostic (and not bound to Kerberos).
+
+## Documentation
+
+For more details please refer to the following documentation:
+
+* [Delegation Tokens In 
General](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/security/security-delegation-token/)
+* [How to use Kerberos delegation 
tokens](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/security/security-kerberos/#using-delegation-tokens)
+
+## Development details
+
+Major tickets where the framework has been added:
+
+* [FLINK-21232](https://issues.apache.org/jira/browse/FLINK-21232) Kerberos 
delegation token framework
+* [FLINK-29918](https://issues.apache.org/jira/browse/FLINK-29918) Generalized 
delegation token support
+* [FLINK-30704](https://issues.apache.org/jira/browse/FLINK-30704) Add S3 
delegation token support
+
+## Example implementation
+
+Adding a new authentication protocol is relatively straightforward (a minimal sketch of a provider/receiver pair follows the list):
+
+* Check out the [example 
implementation](https://github.com/gaborgsomogyi/flink-test-java-delegation-token-provider)
+* Change `FlinkTestJavaDelegationTokenProvider.obtainDelegationTokens` to 
obtain a custom token from any external service
+* Change `FlinkTestJavaDelegationTokenReceiver.onNewTokensObtained` to receive 
the previously obtained tokens on all task managers
+* Use the tokens for external service authentication
+* Compile the project and put it on the classpath (adding it as a Flink plugin is also supported)
+* Enjoy that Flink does all the heavy lifting behind the scenes :-)
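+
+To make the steps concrete, here is a minimal sketch of such a provider/receiver pair, modeled on the example repository above. The class names and the dummy token are made up for illustration, and the exact interface signatures (in particular the return type of `obtainDelegationTokens`) should be verified against the example project and the Flink version you build against. Both classes are discovered through Java service loading, so their fully qualified names also have to be registered in `META-INF/services/org.apache.flink.core.security.token.DelegationTokenProvider` and `META-INF/services/org.apache.flink.core.security.token.DelegationTokenReceiver`.
+
+```java
+import java.nio.charset.StandardCharsets;
+import java.util.Optional;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.security.token.DelegationTokenProvider;
+import org.apache.flink.core.security.token.DelegationTokenReceiver;
+
+// CustomDelegationTokenProvider.java -- runs on the JobManager and obtains the token
+public class CustomDelegationTokenProvider implements DelegationTokenProvider {
+
+    @Override
+    public String serviceName() {
+        // Must match the receiver's serviceName so the framework can pair them.
+        return "custom-service";
+    }
+
+    @Override
+    public void init(Configuration configuration) {
+        // Read the external service's endpoint/credential settings from the Flink configuration.
+    }
+
+    @Override
+    public boolean delegationTokensRequired() {
+        return true;
+    }
+
+    @Override
+    public ObtainedDelegationTokens obtainDelegationTokens() {
+        // A real provider would call the external service's token endpoint here.
+        byte[] token = "dummy-token".getBytes(StandardCharsets.UTF_8);
+        long validUntil = System.currentTimeMillis() + 3_600_000L; // the framework re-obtains before expiry
+        return new ObtainedDelegationTokens(token, Optional.of(validUntil));
+    }
+}
+
+// CustomDelegationTokenReceiver.java -- runs on every TaskManager and receives the token
+public class CustomDelegationTokenReceiver implements DelegationTokenReceiver {
+
+    @Override
+    public String serviceName() {
+        return "custom-service";
+    }
+
+    @Override
+    public void init(Configuration configuration) {}
+
+    @Override
+    public void onNewTokensObtained(byte[] tokens) {
+        // Hand the freshly propagated token over to the client/connector that talks to the external service.
+    }
+}
+```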
+
+## Example implementation testing
+
+The existing providers are tested with the [Flink Kubernetes 
Operator](

[flink-web] branch asf-site updated (a96ace934 -> 9290f9c3b)

2023-01-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


from a96ace934 regenerate website
 new 6a044980e Blog post on the Delegation Token Framework
 new 9290f9c3b Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _posts/2023-01-20-delegation-token-framework.md|  97 ++
 content/blog/feed.xml  | 201 +++--
 content/blog/index.html|  43 +++--
 content/blog/page10/index.html |  43 +++--
 content/blog/page11/index.html |  36 ++--
 content/blog/page12/index.html |  36 ++--
 content/blog/page13/index.html |  42 +++--
 content/blog/page14/index.html |  46 +++--
 content/blog/page15/index.html |  42 +++--
 content/blog/page16/index.html |  40 ++--
 content/blog/page17/index.html |  40 ++--
 content/blog/page18/index.html |  40 ++--
 content/blog/page19/index.html |  40 ++--
 content/blog/page2/index.html  |  41 +++--
 content/blog/page20/index.html |  40 ++--
 content/blog/page21/index.html |  25 +++
 content/blog/page3/index.html  |  39 ++--
 content/blog/page4/index.html  |  38 ++--
 content/blog/page5/index.html  |  47 ++---
 content/blog/page6/index.html  |  49 +++--
 content/blog/page7/index.html  |  42 +++--
 content/blog/page8/index.html  |  40 ++--
 content/blog/page9/index.html  |  45 +++--
 .../delegation_token_framework.svg |  23 +++
 content/index.html |  11 +-
 .../01/20/delegation-token-framework.html} | 149 ---
 content/zh/index.html  |  11 +-
 .../delegation_token_framework.svg |  23 +++
 28 files changed, 885 insertions(+), 484 deletions(-)
 create mode 100644 _posts/2023-01-20-delegation-token-framework.md
 create mode 100644 
content/img/blog/2023-01-20-delegation-token-framework/delegation_token_framework.svg
 copy content/news/{2018/03/08/release-1.4.2.html => 
2023/01/20/delegation-token-framework.html} (70%)
 create mode 100644 
img/blog/2023-01-20-delegation-token-framework/delegation_token_framework.svg



[flink] branch master updated: [FLINK-30704][filesystems][s3] Add S3 delegation token support

2023-01-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 0141f13ca80 [FLINK-30704][filesystems][s3] Add S3 delegation token 
support
0141f13ca80 is described below

commit 0141f13ca801d5db45435d101a9c3ef83889bbc0
Author: Gabor Somogyi 
AuthorDate: Fri Jan 6 12:20:42 2023 +0100

[FLINK-30704][filesystems][s3] Add S3 delegation token support
---
 .../flink/core/plugin/DefaultPluginManager.java|  42 +++-
 .../security/token/DelegationTokenProvider.java|   8 +-
 .../security/token/DelegationTokenReceiver.java|   2 +-
 .../fs/s3/common/AbstractS3FileSystemFactory.java  |   2 +
 .../DynamicTemporaryAWSCredentialsProvider.java|  65 
 .../s3/common/token/S3DelegationTokenProvider.java | 112 +
 .../s3/common/token/S3DelegationTokenReceiver.java | 106 +++
 ...ink.core.security.token.DelegationTokenProvider |   3 +-
 ...ink.core.security.token.DelegationTokenReceiver |   3 +-
 ...DynamicTemporaryAWSCredentialsProviderTest.java |  73 ++
 .../token/S3DelegationTokenProviderTest.java   |  55 ++
 .../token/S3DelegationTokenReceiverTest.java   | 105 +++
 .../runtime/entrypoint/ClusterEntrypoint.java  |   5 +-
 .../flink/runtime/minicluster/MiniCluster.java |   8 +-
 .../token/DefaultDelegationTokenManager.java   |  74 --
 .../DefaultDelegationTokenManagerFactory.java  |   5 +-
 .../token/DelegationTokenReceiverRepository.java   |  72 +++--
 .../token/hadoop/HBaseDelegationTokenProvider.java |   2 +-
 .../hadoop/HadoopDelegationTokenReceiver.java  |   2 +-
 .../hadoop/HadoopFSDelegationTokenProvider.java|   2 +-
 .../runtime/taskexecutor/TaskManagerRunner.java|   2 +-
 ...nk.core.security.token.DelegationTokenProvider} |   0
 ...nk.core.security.token.DelegationTokenReceiver} |   0
 .../token/DefaultDelegationTokenManagerTest.java   |  16 +--
 .../DelegationTokenReceiverRepositoryTest.java |   8 +-
 .../ExceptionThrowingDelegationTokenProvider.java  |   1 +
 .../ExceptionThrowingDelegationTokenReceiver.java  |   1 +
 .../token/TestDelegationTokenProvider.java |   1 +
 .../token/TestDelegationTokenReceiver.java |   1 +
 .../runtime/taskexecutor/TaskExecutorBuilder.java  |   2 +-
 ...cutorExecutionDeploymentReconciliationTest.java |   2 +-
 .../TaskExecutorPartitionLifecycleTest.java|   2 +-
 .../taskexecutor/TaskExecutorSlotLifetimeTest.java |   2 +-
 .../runtime/taskexecutor/TaskExecutorTest.java |   4 +-
 .../taskexecutor/TaskManagerRunnerStartupTest.java |   2 +-
 .../TaskSubmissionTestEnvironment.java |   2 +-
 ...nk.core.security.token.DelegationTokenProvider} |   0
 ...nk.core.security.token.DelegationTokenReceiver} |   0
 .../test/plugin/DefaultPluginManagerTest.java  |  43 ++--
 .../apache/flink/yarn/YarnClusterDescriptor.java   |   2 +-
 40 files changed, 730 insertions(+), 107 deletions(-)

diff --git 
a/flink-core/src/main/java/org/apache/flink/core/plugin/DefaultPluginManager.java
 
b/flink-core/src/main/java/org/apache/flink/core/plugin/DefaultPluginManager.java
index be8825bd0ca..07a3a533900 100644
--- 
a/flink-core/src/main/java/org/apache/flink/core/plugin/DefaultPluginManager.java
+++ 
b/flink-core/src/main/java/org/apache/flink/core/plugin/DefaultPluginManager.java
@@ -21,20 +21,31 @@ package org.apache.flink.core.plugin;
 import org.apache.flink.annotation.Internal;
 import org.apache.flink.annotation.VisibleForTesting;
 
+import org.apache.flink.shaded.guava30.com.google.common.base.Joiner;
 import org.apache.flink.shaded.guava30.com.google.common.collect.Iterators;
 
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.concurrent.GuardedBy;
 import javax.annotation.concurrent.ThreadSafe;
 
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.Iterator;
+import java.util.Map;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
 
 /** Default implementation of {@link PluginManager}. */
 @Internal
 @ThreadSafe
 public class DefaultPluginManager implements PluginManager {
 
+private static final Logger LOG = 
LoggerFactory.getLogger(DefaultPluginManager.class);
+
 /**
  * Parent-classloader to all classloader that are used for plugin loading. 
We expect that this
  * is thread-safe.
@@ -44,6 +55,11 @@ public class DefaultPluginManager implements PluginManager {
 /** A collection of descriptions of all plugins known to this plugin 
manager. */
 private final Collection pluginDescriptors;
 
+private final Lock pluginLoadersLock;
+
+@GuardedBy("pluginLoadersLock")
+private final Map pluginLoaders;
+
 /** List o

[flink] branch master updated: [FLINK-30704][build] Skip SBOM creation in fast profile

2023-01-18 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b35cef59975 [FLINK-30704][build] Skip SBOM creation in fast profile
b35cef59975 is described below

commit b35cef5997568d75611307d11fca0dec415f99b4
Author: Marton Balassi 
AuthorDate: Wed Jan 18 14:12:15 2023 +0100

[FLINK-30704][build] Skip SBOM creation in fast profile
---
 pom.xml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/pom.xml b/pom.xml
index ffee13f96fd..02af94cf4c4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1122,6 +1122,13 @@ under the License.

true


+				<plugin>
+					<groupId>org.cyclonedx</groupId>
+					<artifactId>cyclonedx-maven-plugin</artifactId>
+					<configuration>
+						<skip>true</skip>
+					</configuration>
+				</plugin>






[flink] branch master updated: [FLINK-30425][runtime][security] Generalize token receive side

2023-01-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 324b54cf413 [FLINK-30425][runtime][security] Generalize token receive 
side
324b54cf413 is described below

commit 324b54cf413229f78e6e7092e16a03d45df1
Author: Gabor Somogyi 
AuthorDate: Mon Jan 2 15:30:45 2023 +0100

[FLINK-30425][runtime][security] Generalize token receive side
---
 flink-end-to-end-tests/test-scripts/common.sh  |   1 +
 .../flink/runtime/minicluster/MiniCluster.java |  10 +-
 .../token/DefaultDelegationTokenManager.java   |  64 --
 .../security/token/DelegationTokenProvider.java|  13 +-
 .../security/token/DelegationTokenReceiver.java|  61 +
 .../token/DelegationTokenReceiverRepository.java   | 140 +
 .../token/hadoop/HBaseDelegationTokenProvider.java |  27 +++-
 .../token/hadoop/HBaseDelegationTokenReceiver.java |  35 ++
 ...ter.java => HadoopDelegationTokenReceiver.java} |  39 +++---
 .../hadoop/HadoopFSDelegationTokenProvider.java|  31 -
 .../hadoop/HadoopFSDelegationTokenReceiver.java|  31 +
 .../flink/runtime/taskexecutor/TaskExecutor.java   |  12 +-
 .../runtime/taskexecutor/TaskManagerRunner.java|  22 +++-
 runtime.security.token.DelegationTokenReceiver |  17 +++
 .../token/DefaultDelegationTokenManagerTest.java   |  96 --
 .../DelegationTokenReceiverRepositoryTest.java |  75 +++
 .../ExceptionThrowingDelegationTokenReceiver.java  |  60 +
 .../token/TestDelegationTokenReceiver.java |  36 ++
 ...va => HadoopDelegationTokenReceiverITCase.java} |  55 
 .../runtime/taskexecutor/TaskExecutorBuilder.java  |   7 +-
 ...cutorExecutionDeploymentReconciliationTest.java |   4 +-
 .../TaskExecutorPartitionLifecycleTest.java|   4 +-
 .../taskexecutor/TaskExecutorSlotLifetimeTest.java |   4 +-
 .../runtime/taskexecutor/TaskExecutorTest.java |   7 +-
 .../taskexecutor/TaskManagerRunnerStartupTest.java |   4 +-
 .../taskexecutor/TaskManagerRunnerTest.java|   7 +-
 .../TaskSubmissionTestEnvironment.java |   4 +-
 .../runtime/taskexecutor/TestingTaskExecutor.java  |   7 +-
 runtime.security.token.DelegationTokenReceiver |  17 +++
 .../apache/flink/yarn/YarnClusterDescriptor.java   |   7 +-
 30 files changed, 792 insertions(+), 105 deletions(-)

diff --git a/flink-end-to-end-tests/test-scripts/common.sh 
b/flink-end-to-end-tests/test-scripts/common.sh
index 31741710b3a..047a05a7bb4 100644
--- a/flink-end-to-end-tests/test-scripts/common.sh
+++ b/flink-end-to-end-tests/test-scripts/common.sh
@@ -380,6 +380,7 @@ function check_logs_for_errors {
   | grep -v "Error sending fetch request" \
   | grep -v "WARN  akka.remote.ReliableDeliverySupervisor" \
   | grep -v "Options.*error_*" \
+  | grep -v "not packaged with this application" \
   | grep -ic "error" || true)
   if [[ ${error_count} -gt 0 ]]; then
 echo "Found error in log files; printing first 500 lines; see full logs 
for details:"
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
index 59240a14c03..1d48126e141 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
@@ -91,6 +91,7 @@ import org.apache.flink.runtime.rpc.RpcUtils;
 import org.apache.flink.runtime.scheduler.ExecutionGraphInfo;
 import 
org.apache.flink.runtime.security.token.DefaultDelegationTokenManagerFactory;
 import org.apache.flink.runtime.security.token.DelegationTokenManager;
+import 
org.apache.flink.runtime.security.token.DelegationTokenReceiverRepository;
 import org.apache.flink.runtime.taskexecutor.TaskExecutor;
 import org.apache.flink.runtime.taskexecutor.TaskManagerRunner;
 import org.apache.flink.runtime.webmonitor.retriever.LeaderRetriever;
@@ -199,6 +200,9 @@ public class MiniCluster implements AutoCloseableAsync {
 @GuardedBy("lock")
 private DelegationTokenManager delegationTokenManager;
 
+@GuardedBy("lock")
+private DelegationTokenReceiverRepository 
delegationTokenReceiverRepository;
+
 @GuardedBy("lock")
 private BlobCacheService blobCacheService;
 
@@ -431,6 +435,9 @@ public class MiniCluster implements AutoCloseableAsync {
 DefaultDelegationTokenManagerFactory.create(
 configuration, 
commonRpcService.getScheduledExecutor(), ioExecutor);
 
+delegationTokenReceiverRepository =
+new DelegationTokenReceiverRepository(configuration);

[flink-kubernetes-operator] branch main updated: [FLINK-30599][build] Publish SBOM artifacts (#503)

2023-01-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 02f08c12 [FLINK-30599][build] Publish SBOM artifacts (#503)
02f08c12 is described below

commit 02f08c1261ec3130b90bc06491b5aa9f02ed7fc8
Author: Márton Balassi 
AuthorDate: Thu Jan 12 18:12:31 2023 +0100

[FLINK-30599][build] Publish SBOM artifacts (#503)

Based on https://github.com/apache/flink/pull/21606
---
 pom.xml | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/pom.xml b/pom.xml
index f3576a7b..146821a8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -402,6 +402,20 @@ under the License.
 
 
 
+
+            <plugin>
+                <groupId>org.cyclonedx</groupId>
+                <artifactId>cyclonedx-maven-plugin</artifactId>
+                <version>2.7.3</version>
+                <executions>
+                    <execution>
+                        <phase>package</phase>
+                        <goals>
+                            <goal>makeBom</goal>
+                        </goals>
+                    </execution>
+                </executions>
+            </plugin>
 
 
 



[flink] branch master updated: [FLINK-30578][build] Publish SBOM artifacts

2023-01-09 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 9bb6500e2fa [FLINK-30578][build] Publish SBOM artifacts
9bb6500e2fa is described below

commit 9bb6500e2fa6fdf937eee907f62ea504ebba30ef
Author: Dongjoon Hyun 
AuthorDate: Thu Jan 5 16:26:29 2023 -0800

[FLINK-30578][build] Publish SBOM artifacts
---
 pom.xml | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/pom.xml b/pom.xml
index 5d39d6218bd..05185314d2b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1926,6 +1926,20 @@ under the License.



+
+			<plugin>
+				<groupId>org.cyclonedx</groupId>
+				<artifactId>cyclonedx-maven-plugin</artifactId>
+				<version>2.7.3</version>
+				<executions>
+					<execution>
+						<phase>package</phase>
+						<goals>
+							<goal>makeBom</goal>
+						</goals>
+					</execution>
+				</executions>
+			</plugin>

 




[flink] branch master updated: [hotfix][docs] Fix documentation build

2023-01-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4bb1ef66d42 [hotfix][docs] Fix documentation build
4bb1ef66d42 is described below

commit 4bb1ef66d42ee22c9f5c54358bd2e92749784eaf
Author: Marton Balassi 
AuthorDate: Wed Jan 4 14:28:08 2023 +0100

[hotfix][docs] Fix documentation build
---
 docs/static/fig/delegation_token_framework.svg | 21 -
 pom.xml|  1 +
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/docs/static/fig/delegation_token_framework.svg 
b/docs/static/fig/delegation_token_framework.svg
index b2c46a95e80..82eda539fbd 100644
--- a/docs/static/fig/delegation_token_framework.svg
+++ b/docs/static/fig/delegation_token_framework.svg
@@ -1,4 +1,23 @@
 
+
+
 
 http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd";>
-http://www.w3.org/2000/svg"; 
xmlns:xlink="http://www.w3.org/1999/xlink"; version="1.1" width="851px" 
height="768px" viewBox="-0.5 -0.5 851 768" content="<mxfile 
host="app.diagrams.net" modified="2023-01-04T12:07:02.859Z" 
agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, 
like Gecko) Chrome/108.0.0.0 Safari/537.36" 
etag="yuoZn9JIeDNMMz-aMEgE" version="20.8.1" 
type="google"><diagram id=&qu [...]
\ No newline at end of file
+http://www.w3.org/2000/svg"; 
xmlns:xlink="http://www.w3.org/1999/xlink"; version="1.1" width="851px" 
height="768px" viewBox="-0.5 -0.5 851 768" content="<mxfile 
host="app.diagrams.net" modified="2023-01-04T12:07:02.859Z" 
agent="5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, 
like Gecko) Chrome/108.0.0.0 Safari/537.36" 
etag="yuoZn9JIeDNMMz-aMEgE" version="20.8.1" 
type="google"><diagram id=&qu [...]
diff --git a/pom.xml b/pom.xml
index ab0a7000527..832a8d7be34 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1547,6 +1547,7 @@ under the License.
**/target/**

build-target/**

docs/layouts/shortcodes/generated/**
+   
docs/themes/connectors/layouts/shortcodes/generated/**

docs/static/generated/**


tools/artifacts/**



[flink] branch master updated (b48cddf12a3 -> e7631b51a51)

2023-01-04 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from b48cddf12a3 [FLINK-30416][sql-gateway] Add configureSession REST API 
in the SQL Gateway (#21525)
 add e7631b51a51 [FLINK-25911][docs] Add delegation token framework 
documentation

No new revisions were added by this update.

Summary of changes:
 .../security/security-delegation-token.md  | 235 +
 .../docs/deployment/security/security-kerberos.md  | 120 ---
 docs/static/fig/delegation_token_framework.svg |   4 +
 3 files changed, 327 insertions(+), 32 deletions(-)
 create mode 100644 
docs/content/docs/deployment/security/security-delegation-token.md
 create mode 100644 docs/static/fig/delegation_token_framework.svg



[flink-kubernetes-operator] branch main updated: [hotfix] Add year 2023 to Notice files

2023-01-03 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 4e75af7e [hotfix] Add year 2023 to Notice files
4e75af7e is described below

commit 4e75af7e2e0696dfa8b0a63f6d95ee5a0db5917e
Author: Márton Balassi 
AuthorDate: Tue Jan 3 20:48:20 2023 +0100

[hotfix] Add year 2023 to Notice files
---
 NOTICE | 2 +-
 flink-kubernetes-operator/src/main/resources/META-INF/NOTICE   | 2 +-
 flink-kubernetes-standalone/src/main/resources/META-INF/NOTICE | 2 +-
 flink-kubernetes-webhook/src/main/resources/META-INF/NOTICE| 2 +-
 tools/license/NOTICE-binary_PREAMBLE.txt   | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/NOTICE b/NOTICE
index 6c490671..5cfa161b 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache Flink Kubernetes Operator (flink-kubernetes-operator)
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE 
b/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
index 10b4e3bd..2ca555de 100644
--- a/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
+++ b/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
@@ -1,5 +1,5 @@
 flink-kubernetes-operator
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This project includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/flink-kubernetes-standalone/src/main/resources/META-INF/NOTICE 
b/flink-kubernetes-standalone/src/main/resources/META-INF/NOTICE
index 993ca343..108cef76 100644
--- a/flink-kubernetes-standalone/src/main/resources/META-INF/NOTICE
+++ b/flink-kubernetes-standalone/src/main/resources/META-INF/NOTICE
@@ -1,5 +1,5 @@
 flink-kubernetes-operator-standalone
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
\ No newline at end of file
diff --git a/flink-kubernetes-webhook/src/main/resources/META-INF/NOTICE 
b/flink-kubernetes-webhook/src/main/resources/META-INF/NOTICE
index 150a7f25..2153ab09 100644
--- a/flink-kubernetes-webhook/src/main/resources/META-INF/NOTICE
+++ b/flink-kubernetes-webhook/src/main/resources/META-INF/NOTICE
@@ -1,5 +1,5 @@
 flink-kubernetes-webhook
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
diff --git a/tools/license/NOTICE-binary_PREAMBLE.txt 
b/tools/license/NOTICE-binary_PREAMBLE.txt
index df65fdbb..d4bad006 100644
--- a/tools/license/NOTICE-binary_PREAMBLE.txt
+++ b/tools/license/NOTICE-binary_PREAMBLE.txt
@@ -4,5 +4,5 @@
 // --
 
 Apache Flink Kubernetes Operator (flink-kubernetes-operator)
-Copyright 2014-2022 The Apache Software Foundation
+Copyright 2014-2023 The Apache Software Foundation
 



[flink] 02/03: [FLINK-30422][runtime][security] Generalize token framework provider API

2022-12-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6e5f8694b66d8dea8ae2926adafdb610c09d9b88
Author: Gabor Somogyi 
AuthorDate: Tue Dec 13 15:48:33 2022 +0100

[FLINK-30422][runtime][security] Generalize token framework provider API
---
 .../runtime/resourcemanager/ResourceManager.java   |   3 +-
 ...Listener.java => DelegationTokenContainer.java} |  31 +++--
 .../security/token/DelegationTokenManager.java |  16 ++-
 ...nProvider.java => DelegationTokenProvider.java} |  47 ---
 .../security/token/NoOpDelegationTokenManager.java |   5 +-
 .../token/hadoop/HBaseDelegationTokenProvider.java | 153 +
 .../token/hadoop/HadoopDelegationTokenUpdater.java |  18 ++-
 .../hadoop/HadoopFSDelegationTokenProvider.java|  60 
 .../hadoop/KerberosDelegationTokenManager.java | 125 -
 .../flink/runtime/taskexecutor/TaskExecutor.java   |   6 +-
 ...runtime.security.token.DelegationTokenProvider} |   0
 .../token/DelegationTokenContainerTest.java|  73 ++
 ... ExceptionThrowingDelegationTokenProvider.java} |  30 ++--
 ...vider.java => TestDelegationTokenProvider.java} |  17 +--
 .../hadoop/HadoopDelegationTokenConverterTest.java |   2 +-
 .../hadoop/HadoopDelegationTokenUpdaterITCase.java |  15 +-
 .../KerberosDelegationTokenManagerITCase.java  |  25 ++--
 ...runtime.security.token.DelegationTokenProvider} |   4 +-
 .../apache/flink/yarn/YarnClusterDescriptor.java   |  10 +-
 19 files changed, 361 insertions(+), 279 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
index 5d38143974d..2dd390ac711 100755
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
@@ -67,7 +67,6 @@ import org.apache.flink.runtime.rpc.FencedRpcEndpoint;
 import org.apache.flink.runtime.rpc.Local;
 import org.apache.flink.runtime.rpc.RpcService;
 import org.apache.flink.runtime.rpc.RpcServiceUtils;
-import org.apache.flink.runtime.security.token.DelegationTokenListener;
 import org.apache.flink.runtime.security.token.DelegationTokenManager;
 import org.apache.flink.runtime.shuffle.ShuffleDescriptor;
 import org.apache.flink.runtime.slots.ResourceRequirement;
@@ -117,7 +116,7 @@ import static 
org.apache.flink.util.Preconditions.checkState;
  */
 public abstract class ResourceManager
 extends FencedRpcEndpoint
-implements DelegationTokenListener, ResourceManagerGateway {
+implements DelegationTokenManager.Listener, ResourceManagerGateway {
 
 public static final String RESOURCE_MANAGER_NAME = "resourcemanager";
 
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenListener.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenContainer.java
similarity index 61%
rename from 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenListener.java
rename to 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenContainer.java
index 9a29e1acf3b..0f11452449b 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenListener.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenContainer.java
@@ -18,17 +18,26 @@
 
 package org.apache.flink.runtime.security.token;
 
-import org.apache.flink.annotation.Internal;
+import org.apache.flink.annotation.Experimental;
 
-/**
- * Listener for delegation tokens state changes in the {@link 
DelegationTokenManager}.
- *
- * By registering it in the manager one can receive callbacks when events 
are happening related
- * to delegation tokens.
- */
-@Internal
-public interface DelegationTokenListener {
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.Map;
+
+/** Container for delegation tokens. */
+@Experimental
+public class DelegationTokenContainer implements Serializable {
+private Map<String, byte[]> tokens = new HashMap<>();
+
+public Map<String, byte[]> getTokens() {
+return tokens;
+}
+
+public void addToken(String key, byte[] value) {
+tokens.put(key, value);
+}
 
-/** Callback function when new delegation tokens obtained. */
-void onNewTokensObtained(byte[] tokens) throws Exception;
+public boolean hasTokens() {
+return !tokens.isEmpty();
+}
 }
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenManager.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenManager.java
index 8a00c9ae154..57f53c75ccc 100644
--- 

[flink] 03/03: [FLINK-30339][runtime][security] Add a unified delegation token manager

2022-12-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 37ad3434b887023ad81df0a013d434ca17f04e89
Author: Gabor Somogyi 
AuthorDate: Wed Dec 14 18:16:27 2022 +0100

[FLINK-30339][runtime][security] Add a unified delegation token manager
---
 docs/content.zh/docs/deployment/config.md  |   5 +
 .../generated/security_auth_kerberos_section.html  |  18 ---
 .../generated/security_configuration.html  |  36 ++---
 .../security_delegation_token_section.html |  30 
 .../flink/annotation/docs/Documentation.java   |   1 +
 .../flink/configuration/SecurityOptions.java   |  37 +
 .../runtime/entrypoint/ClusterEntrypoint.java  |   9 +-
 .../flink/runtime/minicluster/MiniCluster.java |   9 +-
 ...ger.java => DefaultDelegationTokenManager.java} |  33 ++---
 ...a => DefaultDelegationTokenManagerFactory.java} |  31 +---
 .../token/DefaultDelegationTokenManagerTest.java   | 135 +
 .../KerberosDelegationTokenManagerITCase.java  | 160 -
 .../apache/flink/yarn/YarnClusterDescriptor.java   |   4 +-
 13 files changed, 254 insertions(+), 254 deletions(-)

diff --git a/docs/content.zh/docs/deployment/config.md 
b/docs/content.zh/docs/deployment/config.md
index cb05cd9e358..71a6237f11b 100644
--- a/docs/content.zh/docs/deployment/config.md
+++ b/docs/content.zh/docs/deployment/config.md
@@ -204,6 +204,11 @@ Flink's network connections can be secured via SSL. Please 
refer to the [SSL Set
 
 {{< generated/security_ssl_section >}}
 
+### Delegation token
+
+Flink has a pluggable authentication protocol agnostic delegation token 
framework.
+
+{{< generated/security_delegation_token_section >}}
 
 ### Auth with External Systems
 
diff --git 
a/docs/layouts/shortcodes/generated/security_auth_kerberos_section.html 
b/docs/layouts/shortcodes/generated/security_auth_kerberos_section.html
index 0d97daba7f2..ad063064b49 100644
--- a/docs/layouts/shortcodes/generated/security_auth_kerberos_section.html
+++ b/docs/layouts/shortcodes/generated/security_auth_kerberos_section.html
@@ -14,12 +14,6 @@
 List<String>
 A comma-separated list of Kerberos-secured Hadoop filesystems 
Flink is going to access. For example, 
security.kerberos.access.hadoopFileSystems=hdfs://namenode2:9002,hdfs://namenode3:9003.
 The JobManager needs to have access to these filesystems to retrieve the 
security tokens.
 
-
-security.kerberos.fetch.delegation-token
-true
-Boolean
-Indicates whether to fetch the delegation tokens for external 
services the Flink job needs to contact. Only HDFS and HBase are supported. It 
is used in Yarn deployments. If true, Flink will fetch HDFS and HBase 
delegation tokens and inject them into Yarn AM containers. If false, Flink will 
assume that the delegation tokens are managed outside of Flink. As a 
consequence, it will not fetch delegation tokens for HDFS and HBase. You may 
need to disable this option, if you rel [...]
-
 
 security.kerberos.login.contexts
 (none)
@@ -50,17 +44,5 @@
 Duration
 The time period when keytab login happens automatically in 
order to always have a valid TGT.
 
-
-security.kerberos.tokens.renewal.retry.backoff
-1 h
-Duration
-The time period how long to wait before retrying to obtain new 
delegation tokens after a failure.
-
-
-security.kerberos.tokens.renewal.time-ratio
-0.75
-Double
-Ratio of the tokens's expiration time when new credentials 
should be re-obtained.
-
 
 
diff --git a/docs/layouts/shortcodes/generated/security_configuration.html 
b/docs/layouts/shortcodes/generated/security_configuration.html
index 9b34482d61b..a7e479fe44a 100644
--- a/docs/layouts/shortcodes/generated/security_configuration.html
+++ b/docs/layouts/shortcodes/generated/security_configuration.html
@@ -14,18 +14,30 @@
 List<String>
 List of factories that should be used to instantiate a 
security context. If multiple are configured, Flink will use the first 
compatible factory. You should have a NoOpSecurityContextFactory in this list 
as a fallback.
 
+
+security.delegation.tokens.enabled
+true
+Boolean
+Indicates whether to start delegation tokens system for 
external services.
+
+
+security.delegation.tokens.renewal.retry.backoff
+1 h
+Duration
+The time period how long to wait before retrying to obtain new 
delegation tokens after a failure.
+
+
+security.delegation.tokens.renewal.time-rat

[flink] branch master updated (0efcab872c7 -> 37ad3434b88)

2022-12-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0efcab872c7 [hotfix][misprint] Remove duplicate word `version`
 new 21a1bf53ad5 [FLINK-30421][runtime][security] Move TGT renewal to 
hadoop module
 new 6e5f8694b66 [FLINK-30422][runtime][security] Generalize token 
framework provider API
 new 37ad3434b88 [FLINK-30339][runtime][security] Add a unified delegation 
token manager

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/content.zh/docs/deployment/config.md  |   5 +
 .../generated/security_auth_kerberos_section.html  |  18 --
 .../generated/security_configuration.html  |  36 ++--
 .../security_delegation_token_section.html |  30 +++
 .../flink/annotation/docs/Documentation.java   |   1 +
 .../flink/configuration/SecurityOptions.java   |  37 
 .../runtime/entrypoint/ClusterEntrypoint.java  |   9 +-
 .../flink/runtime/minicluster/MiniCluster.java |   9 +-
 .../runtime/resourcemanager/ResourceManager.java   |   3 +-
 .../runtime/security/SecurityConfiguration.java|   9 +
 .../runtime/security/modules/HadoopModule.java |  54 +-
 ...ger.java => DefaultDelegationTokenManager.java} | 207 -
 ...a => DefaultDelegationTokenManagerFactory.java} |  31 +--
 ...Listener.java => DelegationTokenContainer.java} |  31 +--
 .../security/token/DelegationTokenManager.java |  16 +-
 ...nProvider.java => DelegationTokenProvider.java} |  47 +++--
 .../security/token/NoOpDelegationTokenManager.java |   5 +-
 .../token/hadoop/HBaseDelegationTokenProvider.java | 153 +++
 .../token/hadoop/HadoopDelegationTokenUpdater.java |  18 +-
 .../hadoop/HadoopFSDelegationTokenProvider.java|  60 +++---
 .../flink/runtime/taskexecutor/TaskExecutor.java   |   6 +-
 ...runtime.security.token.DelegationTokenProvider} |   0
 .../runtime/security/modules/HadoopModuleTest.java |  54 ++
 .../token/DefaultDelegationTokenManagerTest.java   | 135 ++
 .../token/DelegationTokenContainerTest.java|  73 
 ... ExceptionThrowingDelegationTokenProvider.java} |  30 ++-
 ...vider.java => TestDelegationTokenProvider.java} |  17 +-
 .../hadoop/HadoopDelegationTokenConverterTest.java |   2 +-
 .../hadoop/HadoopDelegationTokenUpdaterITCase.java |  15 +-
 .../KerberosDelegationTokenManagerITCase.java  | 189 ---
 ...runtime.security.token.DelegationTokenProvider} |   4 +-
 .../apache/flink/yarn/YarnClusterDescriptor.java   |  14 +-
 32 files changed, 717 insertions(+), 601 deletions(-)
 create mode 100644 
docs/layouts/shortcodes/generated/security_delegation_token_section.html
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/{hadoop/KerberosDelegationTokenManager.java
 => DefaultDelegationTokenManager.java} (50%)
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/{hadoop/KerberosDelegationTokenManagerFactory.java
 => DefaultDelegationTokenManagerFactory.java} (51%)
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/{DelegationTokenListener.java
 => DelegationTokenContainer.java} (61%)
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/{hadoop/HadoopDelegationTokenProvider.java
 => DelegationTokenProvider.java} (60%)
 rename 
flink-runtime/src/main/resources/META-INF/services/{org.apache.flink.runtime.security.token.hadoop.HadoopDelegationTokenProvider
 => org.apache.flink.runtime.security.token.DelegationTokenProvider} (100%)
 create mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/security/modules/HadoopModuleTest.java
 create mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DefaultDelegationTokenManagerTest.java
 create mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/DelegationTokenContainerTest.java
 rename 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/{hadoop/ExceptionThrowingHadoopDelegationTokenProvider.java
 => ExceptionThrowingDelegationTokenProvider.java} (67%)
 rename 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/{hadoop/TestHadoopDelegationTokenProvider.java
 => TestDelegationTokenProvider.java} (70%)
 delete mode 100644 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosDelegationTokenManagerITCase.java
 rename 
flink-runtime/src/test/resources/META-INF/services/{org.apache.flink.runtime.security.token.hadoop.HadoopDelegationTokenProvider
 => org.apache.flink.runtime.security.token.DelegationTokenProvider} (83%)



[flink] 01/03: [FLINK-30421][runtime][security] Move TGT renewal to hadoop module

2022-12-20 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 21a1bf53ad5a5ed1842341624dcfe36c3a385f45
Author: Gabor Somogyi 
AuthorDate: Wed Dec 14 13:48:10 2022 +0100

[FLINK-30421][runtime][security] Move TGT renewal to hadoop module
---
 .../runtime/security/SecurityConfiguration.java|  9 
 .../runtime/security/modules/HadoopModule.java | 54 +-
 .../hadoop/KerberosDelegationTokenManager.java | 53 +
 .../runtime/security/modules/HadoopModuleTest.java | 54 ++
 .../KerberosDelegationTokenManagerITCase.java  | 32 +
 5 files changed, 118 insertions(+), 84 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityConfiguration.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityConfiguration.java
index 4db3e3a6139..ac032b02862 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityConfiguration.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/SecurityConfiguration.java
@@ -25,10 +25,12 @@ import org.apache.flink.configuration.SecurityOptions;
 import org.apache.commons.lang3.StringUtils;
 
 import java.io.File;
+import java.time.Duration;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import static 
org.apache.flink.configuration.SecurityOptions.KERBEROS_RELOGIN_PERIOD;
 import static 
org.apache.flink.configuration.SecurityOptions.SECURITY_CONTEXT_FACTORY_CLASSES;
 import static 
org.apache.flink.configuration.SecurityOptions.SECURITY_MODULE_FACTORY_CLASSES;
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -53,6 +55,8 @@ public class SecurityConfiguration {
 
 private final boolean useTicketCache;
 
+private final Duration tgtRenewalPeriod;
+
 private final String keytab;
 
 private final String principal;
@@ -90,6 +94,7 @@ public class SecurityConfiguration {
 this.keytab = 
flinkConf.getString(SecurityOptions.KERBEROS_LOGIN_KEYTAB);
 this.principal = 
flinkConf.getString(SecurityOptions.KERBEROS_LOGIN_PRINCIPAL);
 this.useTicketCache = 
flinkConf.getBoolean(SecurityOptions.KERBEROS_LOGIN_USETICKETCACHE);
+this.tgtRenewalPeriod = flinkConf.get(KERBEROS_RELOGIN_PERIOD);
 this.loginContextNames =
 
parseList(flinkConf.getString(SecurityOptions.KERBEROS_LOGIN_CONTEXTS));
 this.zkServiceName = 
flinkConf.getString(SecurityOptions.ZOOKEEPER_SASL_SERVICE_NAME);
@@ -116,6 +121,10 @@ public class SecurityConfiguration {
 return useTicketCache;
 }
 
+public Duration getTgtRenewalPeriod() {
+return tgtRenewalPeriod;
+}
+
 public Configuration getFlinkConfig() {
 return flinkConfig;
 }
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
index 2ebcdb6f1bf..9f0bba6ea90 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
@@ -22,6 +22,7 @@ import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.runtime.hadoop.HadoopUserUtils;
 import org.apache.flink.runtime.security.SecurityConfiguration;
 import org.apache.flink.runtime.security.token.hadoop.KerberosLoginProvider;
+import org.apache.flink.util.concurrent.ExecutorThreadFactory;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.Credentials;
@@ -29,7 +30,12 @@ import org.apache.hadoop.security.UserGroupInformation;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import javax.annotation.Nullable;
+
 import java.io.File;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
 
@@ -42,6 +48,8 @@ public class HadoopModule implements SecurityModule {
 
 private final Configuration hadoopConfiguration;
 
+@Nullable private ScheduledExecutorService tgtRenewalExecutorService;
+
 public HadoopModule(
 SecurityConfiguration securityConfiguration, Configuration 
hadoopConfiguration) {
 this.securityConfig = checkNotNull(securityConfiguration);
@@ -75,6 +83,10 @@ public class HadoopModule implements SecurityModule {
 new File(fileLocation), 
hadoopConfiguration);
 loginUser.addCredentials(credentials);
 }
+tgtRenewalExecutorService =
+Executors.newSingleThreadScheduledExecutor(
+new

[flink] branch master updated: [FLINK-30402][security][runtime] Separate token framework generic and hadoop specific parts

2022-12-14 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ba336656b6 [FLINK-30402][security][runtime] Separate token framework 
generic and hadoop specific parts
4ba336656b6 is described below

commit 4ba336656b6a24c99b7e4e50ef9772fd58d79792
Author: Gabor Somogyi 
AuthorDate: Wed Dec 14 16:28:48 2022 +0100

[FLINK-30402][security][runtime] Separate token framework generic and 
hadoop specific parts
---
 .../runtime/entrypoint/ClusterEntrypoint.java  |  2 +-
 .../flink/runtime/minicluster/MiniCluster.java |  2 +-
 .../runtime/security/modules/HadoopModule.java |  2 +-
 .../{ => hadoop}/HBaseDelegationTokenProvider.java |  2 +-
 .../HadoopDelegationTokenConverter.java}   |  4 +--
 .../HadoopDelegationTokenProvider.java |  3 ++-
 .../HadoopDelegationTokenUpdater.java} | 10 +++
 .../HadoopFSDelegationTokenProvider.java   |  2 +-
 .../KerberosDelegationTokenManager.java| 11 +---
 .../KerberosDelegationTokenManagerFactory.java |  4 ++-
 .../token/{ => hadoop}/KerberosLoginProvider.java  |  2 +-
 .../flink/runtime/taskexecutor/TaskExecutor.java   |  7 ++---
 ...ity.token.hadoop.HadoopDelegationTokenProvider} |  4 +--
 ...ptionThrowingHadoopDelegationTokenProvider.java |  2 +-
 .../HadoopDelegationTokenConverterTest.java}   | 10 +++
 .../HadoopDelegationTokenUpdaterITCase.java}   | 12 -
 .../HadoopFSDelegationTokenProviderITCase.java | 31 +-
 .../KerberosDelegationTokenManagerITCase.java  |  3 ++-
 .../{ => hadoop}/KerberosLoginProviderITCase.java  |  2 +-
 .../TestHadoopDelegationTokenIdentifier.java}  |  8 +++---
 .../TestHadoopDelegationTokenProvider.java |  7 +++--
 ...ity.token.hadoop.HadoopDelegationTokenProvider} |  4 +--
 ...rg.apache.hadoop.security.token.TokenIdentifier |  2 +-
 .../apache/flink/yarn/YarnClusterDescriptor.java   |  8 +++---
 24 files changed, 80 insertions(+), 64 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java
index ce67ccd3010..243dbe059a6 100755
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java
@@ -66,7 +66,7 @@ import 
org.apache.flink.runtime.security.SecurityConfiguration;
 import org.apache.flink.runtime.security.SecurityUtils;
 import org.apache.flink.runtime.security.contexts.SecurityContext;
 import org.apache.flink.runtime.security.token.DelegationTokenManager;
-import 
org.apache.flink.runtime.security.token.KerberosDelegationTokenManagerFactory;
+import 
org.apache.flink.runtime.security.token.hadoop.KerberosDelegationTokenManagerFactory;
 import org.apache.flink.runtime.util.ZooKeeperUtils;
 import 
org.apache.flink.runtime.webmonitor.retriever.impl.RpcMetricQueryServiceRetriever;
 import org.apache.flink.util.AutoCloseableAsync;
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
index bab16dd21f8..a4c2fd06968 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java
@@ -90,7 +90,7 @@ import org.apache.flink.runtime.rpc.RpcSystem;
 import org.apache.flink.runtime.rpc.RpcUtils;
 import org.apache.flink.runtime.scheduler.ExecutionGraphInfo;
 import org.apache.flink.runtime.security.token.DelegationTokenManager;
-import 
org.apache.flink.runtime.security.token.KerberosDelegationTokenManagerFactory;
+import 
org.apache.flink.runtime.security.token.hadoop.KerberosDelegationTokenManagerFactory;
 import org.apache.flink.runtime.taskexecutor.TaskExecutor;
 import org.apache.flink.runtime.taskexecutor.TaskManagerRunner;
 import org.apache.flink.runtime.webmonitor.retriever.LeaderRetriever;
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
index 6e77341e755..2ebcdb6f1bf 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java
@@ -21,7 +21,7 @@ package org.apache.flink.runtime.security.modules;
 import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.runtime.hadoop.HadoopUserUtils;
 import org.apache.flink.runtime.security.SecurityConfiguration

[flink-web] 01/03: Kubernetes Operator Release 1.3.0

2022-12-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 95c936d2f328181ea75064963ef931c55cf7e925
Author: Matyas Orhidi 
AuthorDate: Wed Dec 7 15:49:58 2022 -0800

Kubernetes Operator Release 1.3.0

Closes #593.
---
 _config.yml | 37 +++--
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/_config.yml b/_config.yml
index c12226585..77f322b6a 100644
--- a/_config.yml
+++ b/_config.yml
@@ -29,8 +29,8 @@ FLINK_ML_STABLE_SHORT: "2.1"
 FLINK_ML_GITHUB_URL: https://github.com/apache/flink-ml
 FLINK_ML_GITHUB_REPO_NAME: flink-ml
 
-FLINK_KUBERNETES_OPERATOR_VERSION_STABLE: 1.2.0
-FLINK_KUBERNETES_OPERATOR_STABLE_SHORT: "1.2"
+FLINK_KUBERNETES_OPERATOR_VERSION_STABLE: 1.3.0
+FLINK_KUBERNETES_OPERATOR_STABLE_SHORT: "1.3"
 
 FLINK_KUBERNETES_OPERATOR_URL: 
https://github.com/apache/flink-kubernetes-operator
 FLINK_KUBERNETES_OPERATOR_GITHUB_REPO_NAME: flink-kubernetes-operator
@@ -176,6 +176,23 @@ flink_ml_releases:
   sha512_url: 
"https://downloads.apache.org/flink/flink-ml-2.0.0/flink-ml-2.0.0-src.tgz.sha512";
 
 flink_kubernetes_operator_releases:
+
+  -
+  version_short: "1.3"
+  source_release:
+name: "Apache Flink Kubernetes Operator 1.3.0"
+id: "130-kubernetes-operator-download-source"
+flink_version: "1.16.0, 1.15.3, 1.14.6, 1.13.6"
+url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-src.tgz";
+asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-src.tgz.asc";
+sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-src.tgz.sha512";
+  helm_release:
+name: "Apache Flink Kubernetes Operator Helm Chart 1.3.0"
+id: "130-kubernetes-operator-download-helm"
+flink_version: "1.16.0, 1.15.3, 1.14.6, 1.13.6"
+url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-helm.tgz";
+asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-helm.tgz.asc";
+sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.3.0/flink-kubernetes-operator-1.3.0-helm.tgz.sha512";
   -
   version_short: "1.2"
   source_release:
@@ -192,22 +209,6 @@ flink_kubernetes_operator_releases:
 url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-helm.tgz";
 asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-helm.tgz.asc";
 sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.2.0/flink-kubernetes-operator-1.2.0-helm.tgz.sha512";
-  -
-  version_short: "1.1"
-  source_release:
-  name: "Apache Flink Kubernetes Operator 1.1.0"
-  id: "110-kubernetes-operator-download-source"
-  flink_version: "1.15.1, 1.14.5, 1.13.6"
-  url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-src.tgz";
-  asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-src.tgz.asc";
-  sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-src.tgz.sha512";
-  helm_release:
-name: "Apache Flink Kubernetes Operator Helm Chart 1.1.0"
-id: "110-kubernetes-operator-download-helm"
-flink_version: "1.15.1, 1.14.5, 1.13.6"
-url: 
"https://www.apache.org/dyn/closer.lua/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-helm.tgz";
-asc_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-helm.tgz.asc";
-sha512_url: 
"https://downloads.apache.org/flink/flink-kubernetes-operator-1.1.0/flink-kubernetes-operator-1.1.0-helm.tgz.sha512";
 
 flink_table_store_releases:
   -



[flink-web] branch asf-site updated (110fb4acc -> a5d613e15)

2022-12-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


from 110fb4acc Rebuild website
 new 95c936d2f Kubernetes Operator Release 1.3.0
 new f1d1dc3e8 Kubernetes Operator 1.3.0 release blogpost
 new a5d613e15 Rebuild website

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml| 37 ++---
 ...2022-12-14-release-kubernetes-operator-1.3.0.md | 63 ++
 content/2019/05/03/pulsar-flink.html   |  2 +-
 content/2019/05/14/temporal-tables.html|  2 +-
 content/2019/05/19/state-ttl.html  |  2 +-
 content/2019/06/05/flink-network-stack.html|  2 +-
 content/2019/06/26/broadcast-state.html|  2 +-
 content/2019/07/23/flink-network-stack-2.html  |  2 +-
 content/2020/04/09/pyflink-udf-support-flink.html  |  2 +-
 content/2020/07/23/catalogs.html   |  2 +-
 ...ql-demo-building-e2e-streaming-application.html |  2 +-
 .../08/04/pyflink-pandas-udf-support-flink.html|  2 +-
 content/2020/08/19/statefun.html   |  2 +-
 .../flink-1.11-memory-management-improvements.html |  2 +-
 ...om-aligned-to-unaligned-checkpoints-part-1.html |  2 +-
 content/2020/12/15/pipelined-region-sheduling.html |  2 +-
 content/2021/01/07/pulsar-flink-connector-270.html |  2 +-
 content/2021/01/18/rocksdb.html|  2 +-
 content/2021/02/10/native-k8s-with-ha.html |  2 +-
 content/2021/03/11/batch-execution-mode.html   |  2 +-
 content/2021/05/06/reactive-mode.html  |  2 +-
 content/2021/07/07/backpressure.html   |  2 +-
 .../2021/09/07/connector-table-sql-api-part1.html  |  2 +-
 .../2021/09/07/connector-table-sql-api-part2.html  |  2 +-
 content/2021/10/26/sort-shuffle-part1.html |  2 +-
 content/2021/10/26/sort-shuffle-part2.html |  2 +-
 content/2021/11/03/flink-backward.html |  2 +-
 content/2021/12/10/log4j-cve.html  |  2 +-
 .../2022/01/04/scheduler-performance-part-one.html |  2 +-
 .../2022/01/04/scheduler-performance-part-two.html |  2 +-
 content/2022/01/20/pravega-connector-101.html  |  2 +-
 content/2022/02/22/scala-free.html |  2 +-
 content/2022/05/06/async-sink-base.html|  2 +-
 content/2022/05/06/pyflink-1.15-thread-mode.html   |  2 +-
 content/2022/05/06/restore-modes.html  |  2 +-
 content/2022/05/18/latency-part1.html  |  2 +-
 content/2022/05/23/latency-part2.html  |  2 +-
 content/2022/05/30/changelog-state-backend.html|  2 +-
 content/2022/06/17/adaptive-batch-scheduler.html   |  2 +-
 content/2022/07/11/final-checkpoint-part1.html |  2 +-
 content/2022/07/11/final-checkpoint-part2.html |  2 +-
 .../11/25/async-sink-rate-limiting-strategy.html   |  2 +-
 content/blog/index.html|  2 +-
 content/blog/page10/index.html |  2 +-
 content/blog/page11/index.html |  2 +-
 content/blog/page12/index.html |  2 +-
 content/blog/page13/index.html |  2 +-
 content/blog/page14/index.html |  2 +-
 content/blog/page15/index.html |  2 +-
 content/blog/page16/index.html |  2 +-
 content/blog/page17/index.html |  2 +-
 content/blog/page18/index.html |  2 +-
 content/blog/page19/index.html |  2 +-
 content/blog/page2/index.html  |  2 +-
 content/blog/page20/index.html |  2 +-
 content/blog/page21/index.html |  2 +-
 content/blog/page3/index.html  |  2 +-
 content/blog/page4/index.html  |  2 +-
 content/blog/page5/index.html  |  2 +-
 content/blog/page6/index.html  |  2 +-
 content/blog/page7/index.html  |  2 +-
 content/blog/page8/index.html  |  2 +-
 content/blog/page9/index.html  |  2 +-
 .../blog/release_1.0.0-changelog_known_issues.html |  2 +-
 content/blog/release_1.1.0-changelog.html  |  2 +-
 content/blog/release_1.2.0-changelog.html  |  2 +-
 content/blog/release_1.3.0-changelog.html  |  2 +-
 content/community.html |  2 +-
 .../code-style-and-quality-common.html |  2 +-
 .../code-style-and-quality-components.html |  2 +-
 .../code-style-and-quality-formatting.html |  2 +-
 .../contributing/code-style-and-quality-java.html  |  2 +-
 .../co

[flink-web] 02/03: Kubernetes Operator 1.3.0 release blogpost

2022-12-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit f1d1dc3e8e858d5065e24f33987f26f6a9310e5a
Author: Matyas Orhidi 
AuthorDate: Tue Dec 13 14:29:14 2022 -0800

Kubernetes Operator 1.3.0 release blogpost

Closes #595.
---
 ...2022-12-14-release-kubernetes-operator-1.3.0.md | 63 ++
 1 file changed, 63 insertions(+)

diff --git a/_posts/2022-12-14-release-kubernetes-operator-1.3.0.md 
b/_posts/2022-12-14-release-kubernetes-operator-1.3.0.md
new file mode 100644
index 0..81629ed0f
--- /dev/null
+++ b/_posts/2022-12-14-release-kubernetes-operator-1.3.0.md
@@ -0,0 +1,63 @@
+---
+layout: post
+title:  "Apache Flink Kubernetes Operator 1.3.0 Release Announcement"
+subtitle: "Lifecycle management for Apache Flink deployments using native 
Kubernetes tooling"
+date: 2022-12-14T08:00:00.000Z
+categories: news
+authors:
+- morhidi:
+  name: "Matyas Orhidi"
+  twitter: "matyasorhidi"
+- gyfora:
+  name: "Gyula Fora"
+  twitter: "GyulaFora"
+---
+The Flink community is happy to announce that the latest Flink Kubernetes 
Operator version went live today. Beyond the regular operator improvements and 
fixes the 1.3.0 version also integrates better with some popular infrastructure 
management tools like OLM and Argo CD. These improvements are clear indicators 
that the original intentions of the Flink community, namely to provide the de 
facto standard solution for managing Flink applications on Kubernetes is making 
steady progress to bec [...]
+
+## Release Highlights
+ * Upgrade to Fabric8 6.x.x and JOSDK 4.x.x
+ * Restart unhealthy Flink clusters
+ * Contribute the Flink Kubernetes Operator to OperatorHub
+ * Publish flink-kubernetes-operator-api module separately
+
+## Upgrade to Fabric8 6.x.x and JOSDK 4.x.x
+Two important framework components were upgraded with the current operator release: the Fabric8 client to v6.2.0 and the JOSDK to v4.1.0. These upgrades, among others, contain important informer improvements that help reduce or completely eliminate certain intermittent issues where the operator loses track of managed Custom Resources.
+
+With the new JOSDK version, the operator now supports leader election and 
allows users to run standby operator replicas to reduce downtime due to 
operator failures. Read more about this in the 
[docs](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.3/docs/operations/configuration/#leader-election-and-high-availability).
+
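As a rough illustration of that setup (not part of the release announcement; the `replicas` value and the `kubernetes.operator.leader-election.*` keys below are assumptions based on the linked configuration docs and should be verified for your version):

```bash
# Sketch: two operator replicas with leader election enabled via a hypothetical
# Helm values file; the key names are assumptions, check them against the docs above.
cat > /tmp/operator-ha-values.yaml <<'EOF'
replicas: 2
defaultConfiguration:
  create: true
  append: true
  flink-conf.yaml: |+
    kubernetes.operator.leader-election.enabled: true
    kubernetes.operator.leader-election.lease-name: flink-operator-lease
EOF

helm upgrade --install flink-kubernetes-operator helm/flink-kubernetes-operator \
  -f /tmp/operator-ha-values.yaml
```
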
+## Restart unhealthy Flink clusters
+Flink has its own [restart 
strategies](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/state/task_failure_recovery/#restart-strategies)
 which work fine in most cases, but there are certain circumstances when Flink 
can get stuck in restart loops, often resulting in an `OutOfMemoryError: Metaspace` 
type of state which the job cannot recover from. If the root cause is just a 
temporary outage of some external system, for example, the Flink job could be 
resurrected by s [...]
+
+This restart can now be triggered by the operator itself. The operator can 
watch the actual retry count of a Flink job and restart it when too many 
restarts occurred in a defined amount of time window, for example:
+
+```
+kubernetes.operator.job.health-check.enabled: true
+kubernetes.operator.job.restart-check.duration-window: 2m
+kubernetes.operator.job.restart-check.threshold: 64
+```
+
+The operator checks the retry count of the job at the defined interval. If the 
actual retry count exceeds the threshold and the restarts happened within the 
configured duration window, the operator initiates a full job restart.
+
+## Contribute the Flink Kubernetes Operator to OperatorHub
+The Apache Flink Kubernetes Operator has been contributed to 
[OperatorHub.io](https://operatorhub.io/operator/flink-kubernetes-operator) by 
the Flink community. OperatorHub.io aims to be a central location to find a 
wide array of operators that have been built by the community. An [OLM bundle 
generator](https://github.com/apache/flink-kubernetes-operator/tree/main/tools/olm)
 ensures that the resources required by OperatorHub.io are automatically 
derived from Helm charts.
+
+## Publish flink-kubernetes-operator-api module separately
+With the current operator release, the Flink community introduces a more 
lightweight dependency model for interacting with the Flink Kubernetes 
Operator programmatically. We have refactored the existing operator modules, 
and introduced a new module, called `flink-kubernetes-operator-api` that 
contains the generated CRD classes and a minimal set of dependencies only to 
make the operator client as slim as possible.
+
+## What's Next?
+"One of the most challenging aspects of running an al

svn commit: r58710 - /release/flink/flink-kubernetes-operator-1.1.0/

2022-12-13 Thread mbalassi
Author: mbalassi
Date: Tue Dec 13 23:37:27 2022
New Revision: 58710

Log:
Remove old kubernetes operator release 1.1.0

Removed:
release/flink/flink-kubernetes-operator-1.1.0/



svn commit: r58709 - /dev/flink/flink-kubernetes-operator-1.3.0-rc1/ /release/flink/flink-kubernetes-operator-1.3.0/

2022-12-13 Thread mbalassi
Author: mbalassi
Date: Tue Dec 13 23:31:40 2022
New Revision: 58709

Log:
Release Apache Flink Kubernetes Operator 1.3.0

Added:
release/flink/flink-kubernetes-operator-1.3.0/
  - copied from r58708, dev/flink/flink-kubernetes-operator-1.3.0-rc1/
Removed:
dev/flink/flink-kubernetes-operator-1.3.0-rc1/



[flink-kubernetes-operator] branch main updated: [FLINK-30381] Bump minikube kubernetes version for operator

2022-12-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 4ec52209 [FLINK-30381] Bump minikube kubernetes version for operator
4ec52209 is described below

commit 4ec52209e59e191e09e0f946134c2bf1a2405a0f
Author: Gabor Somogyi 
AuthorDate: Tue Dec 13 15:53:49 2022 +0100

[FLINK-30381] Bump minikube kubernetes version for operator
---
 .../try-flink-kubernetes-operator/quick-start.md   | 30 --
 e2e-tests/utils.sh |  2 ++
 2 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/docs/content/docs/try-flink-kubernetes-operator/quick-start.md 
b/docs/content/docs/try-flink-kubernetes-operator/quick-start.md
index 17d76c64..a9a70566 100644
--- a/docs/content/docs/try-flink-kubernetes-operator/quick-start.md
+++ b/docs/content/docs/try-flink-kubernetes-operator/quick-start.md
@@ -42,22 +42,24 @@ So that the `kubectl` and `helm` commands are available on 
your local system.
 For docker we recommend that you have [Docker 
Desktop](https://www.docker.com/products/docker-desktop) installed
 and configured with at least 8GB of RAM.
 For kubernetes [minikube](https://minikube.sigs.k8s.io/docs/start/) is our 
choice, at the time of writing this we are
-using version v1.21.5. You can start a cluster with the following command:
+using version v1.25.3 (end-to-end tests are using the same version). You can 
start a cluster with the following command:
 
 ```bash
-minikube start --kubernetes-version=v1.21.5
-😄 minikube v1.25.1 on Darwin 12.1
-🆕 Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: 
--kubernetes-version=v1.23.1
-✨ Using the docker driver based on existing profile
-👍 Starting control plane node minikube in cluster minikube
-🚜 Pulling base image ...
-🏃 Updating the running docker "minikube" container ...
-🐳 Preparing Kubernetes v1.21.5 on Docker 20.10.12 ...
-▪ kubelet.housekeeping-interval=5m
-🔎 Verifying Kubernetes components...
-▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 
(http://gcr.io/k8s-minikube/storage-provisioner:v5)
-🌟 Enabled addons: storage-provisioner, default-storageclass
-🏄 Done! kubectl is now configured to use "minikube" cluster and "default" 
namespace by default
+minikube start --kubernetes-version=v1.25.3
+😄  minikube v1.28.0 on Darwin 13.0.1
+✨  Automatically selected the docker driver. Other choices: hyperkit, ssh
+📌  Using Docker Desktop driver with root privileges
+👍  Starting control plane node minikube in cluster minikube
+🚜  Pulling base image ...
+🔥  Creating docker container (CPUs=2, Memory=4000MB) ...
+🐳  Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
+▪ Generating certificates and keys ...
+▪ Booting up control plane ...
+▪ Configuring RBAC rules ...
+🔎  Verifying Kubernetes components...
+▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
+🌟  Enabled addons: default-storageclass, storage-provisioner
+🏄  Done! kubectl is now configured to use "minikube" cluster and "default" 
namespace by default
 ```
 
 We also recommend [k9s](https://k9scli.io/) as GUI for kubernetes, but it is 
optional for this quickstart guide.
diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index 85113622..76ee9190 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -194,7 +194,9 @@ function start_minikube {
 function start_minikube_if_not_running {
 if ! minikube status; then
 echo "Starting minikube ..."
+# Please update the docs when changing kubernetes version
 minikube start \
+--kubernetes-version=v1.25.3 \
 --extra-config=kubelet.image-gc-high-threshold=99 \
 --extra-config=kubelet.image-gc-low-threshold=98 \
 --extra-config=kubelet.minimum-container-ttl-duration=120m \



[flink] branch master updated (2c89283b877 -> 4abdf2d94f1)

2022-12-13 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 2c89283b877 [FLINK-30175][Build] Bump snakeyaml from 1.31 to 1.33
 add 4abdf2d94f1 [FLINK-30370][runtime][security] Move existing delegation 
token framework authentication to providers

No new revisions were added by this update.

Summary of changes:
 .../token/HBaseDelegationTokenProvider.java| 149 -
 .../token/HadoopDelegationTokenProvider.java   |   3 -
 .../token/HadoopFSDelegationTokenProvider.java |  39 --
 .../token/KerberosDelegationTokenManager.java  |  38 +-
 ...ptionThrowingHadoopDelegationTokenProvider.java |  23 +++-
 .../KerberosDelegationTokenManagerITCase.java  |  42 ++
 6 files changed, 153 insertions(+), 141 deletions(-)



[flink-kubernetes-operator] branch main updated: [FLINK-30329] Mount flink-operator-config-volume at /opt/flink/conf without subPath

2022-12-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new dc173cbc [FLINK-30329] Mount flink-operator-config-volume at 
/opt/flink/conf without subPath
dc173cbc is described below

commit dc173cbccd5f9a90fe573ce01790684253dcc0a9
Author: Ottomata 
AuthorDate: Wed Dec 7 14:48:15 2022 -0500

[FLINK-30329] Mount flink-operator-config-volume at /opt/flink/conf without 
subPath

When mounted volumes use subPath, the mounted files are not updated 
automatically.
So, even if you update the ConfigMap in k8s, the flink-conf.yaml file will 
never see those updates.
All of the entries in the flink-operator-config-volume ConfigMap are 
mounted in each container anyway, so we might as well just mount the 
flink-operator-config-volume directly at /opt/flink/conf without specifying 
the subPath of each file.

This allows for Dynamic Operator Configuration to work when deployed with
these helm charts.

https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/configuration/#dynamic-operator-configuration
---
 .../templates/flink-operator.yaml  | 18 ++
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/helm/flink-kubernetes-operator/templates/flink-operator.yaml 
b/helm/flink-kubernetes-operator/templates/flink-operator.yaml
index f94c2198..f8da1543 100644
--- a/helm/flink-kubernetes-operator/templates/flink-operator.yaml
+++ b/helm/flink-kubernetes-operator/templates/flink-operator.yaml
@@ -100,14 +100,7 @@ spec:
 {{- toYaml .Values.operatorSecurityContext | nindent 12 }}
   volumeMounts:
 - name: flink-operator-config-volume
-  mountPath: /opt/flink/conf/flink-conf.yaml
-  subPath: flink-conf.yaml
-- name: flink-operator-config-volume
-  mountPath: /opt/flink/conf/log4j-operator.properties
-  subPath: log4j-operator.properties
-- name: flink-operator-config-volume
-  mountPath: /opt/flink/conf/log4j-console.properties
-  subPath: log4j-console.properties
+  mountPath: /opt/flink/conf
 {{- if .Values.operatorVolumeMounts.create }}
 {{- toYaml .Values.operatorVolumeMounts.data | nindent 12 }}
 {{- end }}
@@ -167,14 +160,7 @@ spec:
 mountPath: "/certs"
 readOnly: true
   - name: flink-operator-config-volume
-mountPath: /opt/flink/conf/flink-conf.yaml
-subPath: flink-conf.yaml
-  - name: flink-operator-config-volume
-mountPath: /opt/flink/conf/log4j-operator.properties
-subPath: log4j-operator.properties
-  - name: flink-operator-config-volume
-mountPath: /opt/flink/conf/log4j-console.properties
-subPath: log4j-console.properties
+mountPath: /opt/flink/conf
 {{- end }}
   {{- if index (.Values.operatorPod) "dnsPolicy" }}
   dnsPolicy: {{ .Values.operatorPod.dnsPolicy | quote }}

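To see the effect of this change in practice, a quick check could look like the sketch below (the ConfigMap name is an assumption based on the volume name in the chart; the deployment name matches the Helm release used elsewhere in this thread):

```bash
# Sketch: edit the operator ConfigMap and confirm the file mounted without subPath
# eventually reflects the change (kubelet syncs projected volumes periodically).
kubectl edit configmap flink-operator-config -n <operator-namespace>

kubectl exec -n <operator-namespace> deploy/flink-kubernetes-operator -- \
  cat /opt/flink/conf/flink-conf.yaml
```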


[flink-kubernetes-operator] branch main updated: [FLINK-30146] Log job listing exception as warning

2022-12-12 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 15983525 [FLINK-30146] Log job listing exception as warning
15983525 is described below

commit 15983525cf44dcec578a0ee87c1582cf698a6f16
Author: Gabor Somogyi 
AuthorDate: Wed Dec 7 12:39:32 2022 +0100

[FLINK-30146] Log job listing exception as warning
---
 e2e-tests/utils.sh  | 1 -
 .../apache/flink/kubernetes/operator/observer/JobStatusObserver.java| 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index 32cdd593..85113622 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -140,7 +140,6 @@ function check_operator_log_for_errors {
   operator_pod_name=$(get_operator_pod_name)
   echo "Operator namespace: ${operator_pod_namespace} pod: 
${operator_pod_name}"
   errors=$(kubectl logs -n "${operator_pod_namespace}" "${operator_pod_name}" \
-  | grep -v "Exception while listing jobs" 
`#https://issues.apache.org/jira/browse/FLINK-30146` \
   | grep -v "Failed to submit a listener notification task" 
`#https://issues.apache.org/jira/browse/FLINK-30147` \
   | grep -v "Failed to submit job to session cluster" 
`#https://issues.apache.org/jira/browse/FLINK-30148` \
   | grep -v "Error during event processing" 
`#https://issues.apache.org/jira/browse/FLINK-30149` \
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
index b70c761a..9e5ac8e3 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/JobStatusObserver.java
@@ -77,7 +77,7 @@ public abstract class JobStatusObserver<
 clusterJobStatuses = new 
ArrayList<>(flinkService.listJobs(ctx.getDeployedConfig()));
 } catch (Exception e) {
 // Error while accessing the rest api, will try again later...
-LOG.error("Exception while listing jobs", e);
+LOG.warn("Exception while listing jobs", e);
 ifRunningMoveToReconciling(jobStatus, previousJobStatus);
 if (e instanceof TimeoutException) {
 onTimeout(resource, resourceContext, ctx);



[flink-kubernetes-operator] branch main updated: [docs] Update Known issues for v1.3

2022-12-08 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 4ba68949 [docs] Update Known issues for v1.3
4ba68949 is described below

commit 4ba689490de052dab58c7ee30e2feef241b545ab
Author: Marton Balassi 
AuthorDate: Thu Dec 8 11:38:57 2022 +0100

[docs] Update Known issues for v1.3
---
 docs/content/docs/concepts/overview.md | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/content/docs/concepts/overview.md 
b/docs/content/docs/concepts/overview.md
index 2320ec67..1b3111c8 100644
--- a/docs/content/docs/concepts/overview.md
+++ b/docs/content/docs/concepts/overview.md
@@ -90,9 +90,6 @@ The examples are maintained as part of the operator repo and 
can be found [here]
 ### JobManager High-availability
 The Operator leverages [Kubernetes HA 
Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/)
 for providing High-availability for Flink jobs. The HA solution can benefit 
form using additional [Standby 
replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/),
 it will result in a faster recovery time, but Flink jobs will still restart 
when the Leader JobManager goes down.
 
-### Standalone Kubernetes Support
-The Operator does not support [Standalone 
Kubernetes](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/)
 deployments yet. It is expected to be part of the `1.2.0` release.
-
 ### JobResultStore Resource Leak
 To mitigate the impact of 
[FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) the operator 
introduced a workaround 
[FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573) by setting 
`job-result-store.delete-on-commit=false` and a unique value for 
`job-result-store.storage-path` for every cluster launch. The storage path for 
older runs must be cleaned up manually, keeping the latest directory always:
 ```shell
@@ -102,3 +99,7 @@ drwxr-xr-x 2   40 May 12 09:51 
119e0203-c3a9-4121-9a60-d58839576f01 <- m
 drwxr-xr-x 2   60 May 12 09:46 a6031ec7-ab3e-4b30-ba77-6498e58e6b7f
 drwxr-xr-x 2   60 May 11 15:11 b6fb2a9c-d1cd-4e65-a9a1-e825c4b47543
 ```
+
+### AuditUtils can log sensitive information present in the custom resources
+As reported in 
[FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink 
custom resources change the operator logs the change, which could include 
sensitive information. We suggest ingesting secrets into Flink containers at 
runtime to mitigate this. 
+Also note that anyone who has access to the custom resources already had 
access to the potentially sensitive information in question, but folks who only 
have access to the logs could also see them now. We are planning to introduce 
redaction rules to AuditUtils to improve this in a later release.
\ No newline at end of file

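The manual JobResultStore cleanup described above could be scripted roughly as follows (a sketch; the path is a placeholder for the configured `job-result-store.storage-path`):

```bash
# Sketch: delete all but the newest JobResultStore directory for one cluster.
JRS_PATH=/path/to/job-result-store/<cluster-id>
ls -t "$JRS_PATH" | tail -n +2 | while read -r dir; do
  rm -r "${JRS_PATH:?}/${dir}"
done
```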


[flink-kubernetes-operator] branch release-1.3 updated: [docs] Update Known issues for v1.3

2022-12-08 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.3
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/release-1.3 by this push:
 new a23dbfc6 [docs] Update Known issues for v1.3
a23dbfc6 is described below

commit a23dbfc64461c22c5781eb8ddfaf8037b2d4a523
Author: Marton Balassi 
AuthorDate: Thu Dec 8 11:38:57 2022 +0100

[docs] Update Known issues for v1.3
---
 docs/content/docs/concepts/overview.md | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/content/docs/concepts/overview.md 
b/docs/content/docs/concepts/overview.md
index 2320ec67..1b3111c8 100644
--- a/docs/content/docs/concepts/overview.md
+++ b/docs/content/docs/concepts/overview.md
@@ -90,9 +90,6 @@ The examples are maintained as part of the operator repo and 
can be found [here]
 ### JobManager High-availability
 The Operator leverages [Kubernetes HA 
Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/)
 for providing High-availability for Flink jobs. The HA solution can benefit 
from using additional [Standby 
replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/),
 it will result in a faster recovery time, but Flink jobs will still restart 
when the Leader JobManager goes down.
 
-### Standalone Kubernetes Support
-The Operator does not support [Standalone 
Kubernetes](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/)
 deployments yet. It is expected to be part of the `1.2.0` release.
-
 ### JobResultStore Resource Leak
 To mitigate the impact of 
[FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) the operator 
introduced a workaround 
[FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573) by setting 
`job-result-store.delete-on-commit=false` and a unique value for 
`job-result-store.storage-path` for every cluster launch. The storage path for 
older runs must be cleaned up manually, keeping the latest directory always:
 ```shell
@@ -102,3 +99,7 @@ drwxr-xr-x 2   40 May 12 09:51 
119e0203-c3a9-4121-9a60-d58839576f01 <- m
 drwxr-xr-x 2   60 May 12 09:46 a6031ec7-ab3e-4b30-ba77-6498e58e6b7f
 drwxr-xr-x 2   60 May 11 15:11 b6fb2a9c-d1cd-4e65-a9a1-e825c4b47543
 ```
+
+### AuditUtils can log sensitive information present in the custom resources
+As reported in 
[FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink 
custom resources change the operator logs the change, which could include 
sensitive information. We suggest ingesting secrets into Flink containers at 
runtime to mitigate this. 
+Also note that anyone who has access to the custom resources already had 
access to the potentially sensitive information in question, but folks who only 
have access to the logs could also see them now. We are planning to introduce 
redaction rules to AuditUtils to improve this in a later release.
\ No newline at end of file

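One way to follow the advice above about ingesting secrets at runtime is sketched below (resource names are illustrative; `flink-main-container` is assumed to be the container name expected by the operator's pod template convention):

```bash
# Sketch: expose a credential to the Flink containers via a Secret-backed env var
# instead of flinkConfiguration, so it never appears in the custom resource or
# in AuditUtils log output.
kubectl create secret generic s3-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=... \
  --from-literal=AWS_SECRET_ACCESS_KEY=...

kubectl patch flinkdeployment example-deployment --type merge -p \
  '{"spec":{"podTemplate":{"spec":{"containers":[{"name":"flink-main-container","envFrom":[{"secretRef":{"name":"s3-credentials"}}]}]}}}}'
```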


[flink-kubernetes-operator] branch release-1.2 updated: [docs][hotfix] Remove standalone support from known limitations

2022-12-08 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch release-1.2
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/release-1.2 by this push:
 new 1ec61016 [docs][hotfix] Remove standalone support from known 
limitations
1ec61016 is described below

commit 1ec6101601108e9de2f490704f9c3befb0627595
Author: Marton Balassi 
AuthorDate: Thu Dec 8 12:12:07 2022 +0100

[docs][hotfix] Remove standalone support from known limitations
---
 docs/content/docs/concepts/overview.md | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/docs/content/docs/concepts/overview.md 
b/docs/content/docs/concepts/overview.md
index d1aeb6b9..e5324e74 100644
--- a/docs/content/docs/concepts/overview.md
+++ b/docs/content/docs/concepts/overview.md
@@ -90,9 +90,6 @@ The examples are maintained as part of the operator repo and 
can be found [here]
 ### JobManager High-availability
 The Operator leverages [Kubernetes HA 
Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/)
 for providing High-availability for Flink jobs. The HA solution can benefit 
form using additional [Standby 
replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/),
 it will result in a faster recovery time, but Flink jobs will still restart 
when the Leader JobManager goes down.
 
-### Standalone Kubernetes Support
-The Operator does not support [Standalone 
Kubernetes](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/)
 deployments yet. It is expected to be part of the `1.2.0` release.
-
 ### JobResultStore Resource Leak
 To mitigate the impact of 
[FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) the operator 
introduced a workaround 
[FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573) by setting 
`job-result-store.delete-on-commit=false` and a unique value for 
`job-result-store.storage-path` for every cluster launch. The storage path for 
older runs must be cleaned up manually, keeping the latest directory always:
 ```shell



[flink-kubernetes-operator] branch main updated (2c207edf -> 41b9f4fc)

2022-12-06 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from 2c207edf [FLINK-30154] Allow to set flinkConfiguration in SessionJob
 add 41b9f4fc [FLINK-30280] Document logging configuration behaviour

No new revisions were added by this update.

Summary of changes:
 helm/flink-kubernetes-operator/values.yaml | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)



[flink-kubernetes-operator] branch main updated: [FLINK-30154] Allow to set flinkConfiguration in SessionJob

2022-12-06 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 2c207edf [FLINK-30154] Allow to set flinkConfiguration in SessionJob
2c207edf is described below

commit 2c207edf5915f077b92c5b2bc2f0e6cac54daa9d
Author: Gabor Somogyi 
AuthorDate: Tue Dec 6 14:44:40 2022 +0100

[FLINK-30154] Allow to set flinkConfiguration in SessionJob
---
 .../kubernetes/operator/validation/DefaultValidator.java | 12 +---
 .../kubernetes/operator/validation/DefaultValidatorTest.java |  7 +++
 2 files changed, 4 insertions(+), 15 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java
index 0df0e593..05fdfaa8 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/validation/DefaultValidator.java
@@ -61,9 +61,6 @@ public class DefaultValidator implements 
FlinkResourceValidator {
 KubernetesConfigOptions.NAMESPACE.key(), 
KubernetesConfigOptions.CLUSTER_ID.key()
 };
 
-private static final Set ALLOWED_FLINK_SESSION_JOB_CONF_KEYS =
-
Set.of(KubernetesOperatorConfigOptions.JAR_ARTIFACT_HTTP_HEADER.key());
-
 private static final Set ALLOWED_LOG_CONF_KEYS =
 Set.of(Constants.CONFIG_FILE_LOG4J_NAME, 
Constants.CONFIG_FILE_LOGBACK_NAME);
 
@@ -468,14 +465,7 @@ public class DefaultValidator implements 
FlinkResourceValidator {
 return Optional.empty();
 }
 
-for (String key : flinkSessionJobConfig.keySet()) {
-if (!ALLOWED_FLINK_SESSION_JOB_CONF_KEYS.contains(key)) {
-return Optional.of(
-String.format(
-"Invalid session job flinkConfiguration key: 
%s. Allowed keys are %s",
-key, ALLOWED_FLINK_SESSION_JOB_CONF_KEYS));
-}
-}
+// Exclude specific keys if they cause issues
 return Optional.empty();
 }
 }
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
index fe2e3162..a8cb5fa5 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
@@ -514,12 +514,11 @@ public class DefaultValidatorTest {
 .setFlinkConfiguration(
 Map.of(
 KubernetesOperatorConfigOptions
-
.OPERATOR_RECONCILE_INTERVAL
+
.PERIODIC_SAVEPOINT_INTERVAL
 .key(),
-"60")),
+"1m")),
 flinkDeployment -> {},
-"Invalid session job flinkConfiguration key: 
kubernetes.operator.reconcile.interval."
-+ " Allowed keys are 
[kubernetes.operator.user.artifacts.http.header]");
+null);
 
 testSessionJobValidateWithModifier(
 sessionJob -> {


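With the whitelist gone, a session job can now carry arbitrary Flink and operator options. A minimal sketch (deployment name and jarURI are placeholders) using the same option the updated test exercises:

```bash
# Sketch: FlinkSessionJob with a per-job option in flinkConfiguration,
# which the validator accepts after this change.
kubectl apply -f - <<'EOF'
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: example-session-job
spec:
  deploymentName: example-session-cluster
  job:
    jarURI: https://example.org/artifacts/example-job.jar
    parallelism: 2
  flinkConfiguration:
    kubernetes.operator.periodic.savepoint.interval: 1m
EOF
```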

[flink-kubernetes-operator] branch main updated: [FLINK-30268] HA metadata and other related errors should not throw DeploymentFailedException

2022-12-06 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 310ff307 [FLINK-30268] HA metadata and other related errors should not 
throw DeploymentFailedException
310ff307 is described below

commit 310ff3072cd6196202ac37a171a896d3359cfc56
Author: pvary 
AuthorDate: Tue Dec 6 14:41:16 2022 +0100

[FLINK-30268] HA metadata and other related errors should not throw 
DeploymentFailedException
---
 .../controller/FlinkDeploymentController.java  | 15 ++
 .../exception/RecoveryFailureException.java| 32 +
 .../deployment/ApplicationReconciler.java  |  4 +-
 .../operator/service/AbstractFlinkService.java |  6 +--
 .../kubernetes/operator/TestingFlinkService.java   |  4 +-
 .../controller/DeploymentRecoveryTest.java | 10 ++--
 .../controller/FlinkDeploymentControllerTest.java  | 55 ++
 .../deployment/ApplicationReconcilerTest.java  |  6 +--
 .../ApplicationReconcilerUpgradeModeTest.java  |  6 +--
 .../operator/service/NativeFlinkServiceTest.java   |  4 +-
 10 files changed, 124 insertions(+), 18 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
index b681a4b5..7b9db5a3 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java
@@ -24,6 +24,7 @@ import 
org.apache.flink.kubernetes.operator.api.status.JobManagerDeploymentStatu
 import org.apache.flink.kubernetes.operator.config.FlinkConfigManager;
 import 
org.apache.flink.kubernetes.operator.exception.DeploymentFailedException;
 import org.apache.flink.kubernetes.operator.exception.ReconciliationException;
+import org.apache.flink.kubernetes.operator.exception.RecoveryFailureException;
 import 
org.apache.flink.kubernetes.operator.observer.deployment.FlinkDeploymentObserverFactory;
 import org.apache.flink.kubernetes.operator.reconciler.ReconciliationUtils;
 import 
org.apache.flink.kubernetes.operator.reconciler.deployment.ReconcilerFactory;
@@ -121,6 +122,8 @@ public class FlinkDeploymentController
 }
 statusRecorder.patchAndCacheStatus(flinkApp);
 reconcilerFactory.getOrCreate(flinkApp).reconcile(flinkApp, 
context);
+} catch (RecoveryFailureException rfe) {
+handleRecoveryFailed(flinkApp, rfe);
 } catch (DeploymentFailedException dfe) {
 handleDeploymentFailed(flinkApp, dfe);
 } catch (Exception e) {
@@ -153,6 +156,18 @@ public class FlinkDeploymentController
 EventRecorder.Component.JobManagerDeployment);
 }
 
+private void handleRecoveryFailed(FlinkDeployment flinkApp, 
RecoveryFailureException rfe) {
+LOG.error("Flink recovery failed", rfe);
+ReconciliationUtils.updateForReconciliationError(
+flinkApp, rfe, configManager.getOperatorConfiguration());
+eventRecorder.triggerEvent(
+flinkApp,
+EventRecorder.Type.Warning,
+rfe.getReason(),
+rfe.getMessage(),
+EventRecorder.Component.JobManagerDeployment);
+}
+
 @Override
 public Map prepareEventSources(
 EventSourceContext context) {
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/exception/RecoveryFailureException.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/exception/RecoveryFailureException.java
new file mode 100644
index ..1b0be67b
--- /dev/null
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/exception/RecoveryFailureException.java
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions

[flink-kubernetes-operator] branch main updated: [FLINK-30151] Remove AuditUtils from error log check whitelist

2022-12-06 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new e0479732 [FLINK-30151] Remove AuditUtils from error log check whitelist
e0479732 is described below

commit e04797321f398e12f16cd0c6a7059b5037863bdf
Author: Gabor Somogyi 
AuthorDate: Tue Dec 6 10:01:41 2022 +0100

[FLINK-30151] Remove AuditUtils from error log check whitelist
---
 e2e-tests/utils.sh | 1 -
 1 file changed, 1 deletion(-)

diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index 81021bc3..32cdd593 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -145,7 +145,6 @@ function check_operator_log_for_errors {
   | grep -v "Failed to submit job to session cluster" 
`#https://issues.apache.org/jira/browse/FLINK-30148` \
   | grep -v "Error during event processing" 
`#https://issues.apache.org/jira/browse/FLINK-30149` \
   | grep -v "REST service in session cluster is bad now" 
`#https://issues.apache.org/jira/browse/FLINK-30150` \
-  | grep -v "AuditUtils" 
`#https://issues.apache.org/jira/browse/FLINK-30151` \
   | grep -v "Error while patching status" 
`#https://issues.apache.org/jira/browse/FLINK-30283` \
   | grep -e "\[\s*ERROR\s*\]" || true)
   if [ -z "${errors}" ]; then

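For manual debugging, the same scan can be run by hand against a live operator pod (a sketch; the namespace is a placeholder and the label selector assumes the chart's standard labels):

```bash
# Sketch: look for ERROR entries in the operator log, with AuditUtils
# no longer excluded from the check.
OPERATOR_NS=<operator-namespace>
OPERATOR_POD=$(kubectl get pods -n "$OPERATOR_NS" \
  -l app.kubernetes.io/name=flink-kubernetes-operator -o name | head -n 1)
kubectl logs -n "$OPERATOR_NS" "$OPERATOR_POD" | grep -e "\[\s*ERROR\s*\]" || true
```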


[flink-kubernetes-operator] branch main updated: [FLINK-30307] Turn off e2e test error check temporarily

2022-12-06 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new b95e3e41 [FLINK-30307] Turn off e2e test error check temporarily
b95e3e41 is described below

commit b95e3e41fbac940fa281b69d05df32ef02591691
Author: Gabor Somogyi 
AuthorDate: Tue Dec 6 09:59:35 2022 +0100

[FLINK-30307] Turn off e2e test error check temporarily
---
 e2e-tests/utils.sh | 4 
 1 file changed, 4 insertions(+)

diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index 8db0059e..81021bc3 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -132,6 +132,10 @@ function retry_times() {
 
 function check_operator_log_for_errors {
   echo "Checking for operator log errors..."
+  #https://issues.apache.org/jira/browse/FLINK-30310
+  echo "Error checking is temporarily turned off."
+  return 0
+
   operator_pod_namespace=$(get_operator_pod_namespace)
   operator_pod_name=$(get_operator_pod_name)
   echo "Operator namespace: ${operator_pod_namespace} pod: 
${operator_pod_name}"



[flink-kubernetes-operator] branch main updated: [FLINK-29974] Allow session job cancel to be called for each job state

2022-12-02 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new ea01e294 [FLINK-29974] Allow session job cancel to be called for each 
job state
ea01e294 is described below

commit ea01e294cf1b68d597244d0a11b3c81822a163e7
Author: Gabor Somogyi 
AuthorDate: Fri Dec 2 14:08:27 2022 +0100

[FLINK-29974] Allow session job cancel to be called for each job state
---
 e2e-tests/utils.sh |   6 +-
 .../operator/service/AbstractFlinkService.java | 161 +++--
 .../kubernetes/operator/TestingFlinkService.java   |   7 +
 .../sessionjob/SessionJobReconcilerTest.java   | 257 +++--
 4 files changed, 286 insertions(+), 145 deletions(-)

diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index e38f9631..b092b7a2 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -159,12 +159,14 @@ function debug_and_show_logs {
 kubectl describe all
 
 echo "Operator logs:"
+operator_pod_namespace=$(get_operator_pod_namespace)
 operator_pod_name=$(get_operator_pod_name)
-kubectl logs "${operator_pod_name}"
+echo "Operator namespace: ${operator_pod_namespace} pod: 
${operator_pod_name}"
+kubectl logs -n "${operator_pod_namespace}" "${operator_pod_name}"
 
 echo "Flink logs:"
 kubectl get pods -o jsonpath='{range 
.items[*]}{.metadata.name}{"\n"}{end}' | while read pod;do
-containers=(`kubectl get pods  $pod -o 
jsonpath='{.spec.containers[*].name}'`)
+containers=(`kubectl get pods $pod -o 
jsonpath='{.spec.containers[*].name}'`)
 i=0
 for container in "${containers[@]}"; do
   echo "Current logs for $pod:$container: "
diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
index e68168e8..0e33386e 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkService.java
@@ -292,8 +292,8 @@ public abstract class AbstractFlinkService implements 
FlinkService {
 final long timeout =
 
conf.get(ExecutionCheckpointingOptions.CHECKPOINTING_TIMEOUT)
 .getSeconds();
-try {
-if 
(ReconciliationUtils.isJobRunning(deploymentStatus)) {
+if (ReconciliationUtils.isJobRunning(deploymentStatus)) {
+try {
 LOG.info("Suspending job with savepoint.");
 String savepoint =
 clusterClient
@@ -309,22 +309,23 @@ public abstract class AbstractFlinkService implements 
FlinkService {
 .get(timeout, TimeUnit.SECONDS);
 savepointOpt = Optional.of(savepoint);
 LOG.info("Job successfully suspended with 
savepoint {}.", savepoint);
-} else if 
(ReconciliationUtils.isJobInTerminalState(deploymentStatus)) {
-LOG.info(
-"Job is already in terminal state skipping 
cancel-with-savepoint operation.");
-} else {
-throw new RuntimeException(
-"Unexpected non-terminal status: " + 
deploymentStatus);
+} catch (TimeoutException exception) {
+throw new FlinkException(
+String.format(
+"Timed out stopping the job %s in 
Flink cluster %s with savepoint, "
++ "please configure a 
larger timeout via '%s'",
+jobId,
+clusterId,
+
ExecutionCheckpointingOptions.CHECKPOINTING_TIMEOUT
+.key()),
+exception);
 }
-} catch (TimeoutException exception) {
-throw new FlinkException(
-String.format(
-"Timed out stopping

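The timeout message above points at the checkpointing timeout option. As a sketch (deployment name and value are illustrative), it can be raised on the resource whose suspend keeps timing out:

```bash
# Sketch: give savepoint-based suspend more time by raising the checkpointing
# timeout referenced in the error message.
kubectl patch flinkdeployment example-deployment --type merge \
  -p '{"spec":{"flinkConfiguration":{"execution.checkpointing.timeout":"30 min"}}}'
```
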
[flink-kubernetes-operator] branch main updated: Revert "[FLINK-30199] Script for running Kubernetes Operator e2e tests manually"

2022-12-02 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new e188c3ca Revert "[FLINK-30199] Script for running Kubernetes Operator 
e2e tests manually"
e188c3ca is described below

commit e188c3ca8b6b47a0ab94cd9b39d1afad58afbe56
Author: Gabor Somogyi 
AuthorDate: Fri Dec 2 11:38:32 2022 +0100

Revert "[FLINK-30199] Script for running Kubernetes Operator e2e tests 
manually"

The new test suite is unstable in the CI, will revisit after the upcoming 
release.
---
 .github/workflows/ci.yml   |  63 +++-
 docs/content/docs/development/guide.md |  33 -
 e2e-tests/run_tests.sh | 253 -
 e2e-tests/utils.sh |  81 +--
 4 files changed, 59 insertions(+), 371 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 7d5fd0b6..c48b631c 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -59,8 +59,9 @@ jobs:
   start_minikube
   - name: Install cert-manager
 run: |
-  source e2e-tests/utils.sh
-  install_cert_manager
+  kubectl get pods -A
+  kubectl apply -f 
https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
+  kubectl -n cert-manager wait --all=true --for=condition=Available 
--timeout=300s deploy
   - name: Build image
 run: |
   export SHELL=/bin/bash
@@ -94,8 +95,29 @@ jobs:
 runs-on: ubuntu-latest
 strategy:
   matrix:
-version: [ "v1_16","v1_15","v1_14","v1_13" ]
-namespace: [ "default","flink" ]
+version: ["v1_16","v1_15","v1_14","v1_13"]
+namespace: ["default","flink"]
+mode: ["native", "standalone"]
+test:
+  - test_application_kubernetes_ha.sh
+  - test_application_operations.sh
+  - test_sessionjob_kubernetes_ha.sh
+  - test_sessionjob_operations.sh
+  - test_multi_sessionjob.sh
+include:
+  - namespace: flink
+extraArgs: '--create-namespace --set 
"watchNamespaces={default,flink}"'
+  - version: v1_16
+image: flink:1.16
+  - version: v1_15
+image: flink:1.15
+  - version: v1_14
+image: flink:1.14
+  - version: v1_13
+image: flink:1.13
+exclude:
+  - namespace: default
+test: test_multi_sessionjob.sh
 name: e2e_ci
 steps:
   - uses: actions/checkout@v2
@@ -111,10 +133,39 @@ jobs:
   key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
   restore-keys: |
 ${{ runner.os }}-maven-
-  - name: Run the tests
+  - name: Start minikube
 run: |
   source e2e-tests/utils.sh
-  bash e2e-tests/run_tests.sh -f ${{ matrix.version }} -n ${{ 
matrix.namespace }} -m native,standalone -d test_multi_sessionjob.sh 
test_application_operations.sh test_application_kubernetes_ha.sh 
test_sessionjob_kubernetes_ha.sh test_sessionjob_operations.sh || exit 1
+  start_minikube
+  - name: Install cert-manager
+run: |
+  kubectl get pods -A
+  kubectl apply -f 
https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
+  kubectl -n cert-manager wait --all=true --for=condition=Available 
--timeout=300s deploy
+  - name: Build image
+run: |
+  export SHELL=/bin/bash
+  export DOCKER_BUILDKIT=1
+  eval $(minikube -p minikube docker-env)
+  docker build --progress=plain --no-cache -f ./Dockerfile -t 
flink-kubernetes-operator:ci-latest --progress plain .
+  docker images
+  - name: Start the operator
+run: |
+  helm --debug install flink-kubernetes-operator -n ${{ 
matrix.namespace }} helm/flink-kubernetes-operator --set 
image.repository=flink-kubernetes-operator --set image.tag=ci-latest ${{ 
matrix.extraArgs }}
+  kubectl wait --for=condition=Available --timeout=120s -n ${{ 
matrix.namespace }} deploy/flink-kubernetes-operator
+  kubectl get pods
+  - name: Run Flink e2e tests
+run: |
+  sed -i "s/image: flink:.*/image: ${{ matrix.image }}/" 
e2e-tests/data/*.yaml
+  sed -i "s/flinkVersion: .*/flinkVersion: ${{ matrix.version }}/" 
e2e-tests/data/*.yaml
+  sed -i "s/mode: .*/mode: ${{ matrix.mode }}/" e2e-tests/data/*.yaml
+  git diff HEAD
+  echo "Running e2e-tests/$test"
+  bash e2e-tests/${{ matrix.test }} || exit 

svn commit: r58389 - /release/flink/KEYS

2022-12-01 Thread mbalassi
Author: mbalassi
Date: Thu Dec  1 16:30:23 2022
New Revision: 58389

Log:
Code signing key for Matyas Orhidi

Modified:
release/flink/KEYS

Modified: release/flink/KEYS
==
--- release/flink/KEYS (original)
+++ release/flink/KEYS Thu Dec  1 16:30:23 2022
@@ -3050,3 +3050,63 @@ YX79mTJJ8KmFTGKJE5GpJjGPA8ooTnBn62mMEsuY
 cJwxHSCuN/VA+zKkhZ7ckQ==
 =WR2T
 -END PGP PUBLIC KEY BLOCK-
+
+pub   rsa4096 2022-11-30 [SC] [expires: 2024-11-29]
+  ACD1386E2C4D07053D775A0648E78F054AA33CB5
+uid   [ultimate] Matyas Orhidi (CODE SIGNING KEY) 
+sig 348E78F054AA33CB5 2022-11-30  Matyas Orhidi (CODE SIGNING KEY) 

+sub   rsa4096 2022-11-30 [E] [expires: 2024-11-29]
+sig  48E78F054AA33CB5 2022-11-30  Matyas Orhidi (CODE SIGNING KEY) 

+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQINBGOH4XwBEAC5uBSowbxLgWsW3EMXyepjaNTHYAmvY7jYgPFxiD7Iy5w60cfO
+BlxQCI0bng3OMMsR9qJ2LdhfVtiYNy9ljUjg4O/IzaYRfY81R1KLk7jTCDulajUe
+K9f92ihdQGYJjPTtbzTOCKD5Ui8D7VILtg8IjmCjteGPkXqKoxCSCHOjjUYq2JBC
+WNp6iNA+FRLi+zLF13Vt1ui031BUjF8ewOfGQxKFhDowLy1vSFjX9RvioKQ6sle7
+XF5D5Xoysd3f71TmiDPt0EM49BX4PBG+8L9KER3PzsJTfrsjXnA0JjeqSGtuqRQ0
+f0ANP99tV7OVP6Xq+p5whLbXzMAQBPIE6aIWfYIwo88/c43jQ2reszvshV1DjTF3
+wecFPD57pG0wYbpgdvkKJHkiAgMkchC94QVcDEeL23Do8500xsALcUGB/h5862Uh
+UXKpVNtCoqvQ7KUSsbaSzXsW99pB/H4ydCSMUhzwGLDrIVGEnD0rcexv9Yo1nOH9
+xY07lIM0DJ7lSeE761QmW0822+uE0WvPH1ZIyA/5iajGVV/l7BGEFdoDBUhouh33
+ya5Luv5epscHAWy4A8kv3pU9jeAfQX3ta468vMBoQw8t+4oRpjNbAYS6vWlqQNc7
+Anix5zcPuNsiIiX4fGP3iYeAFyoSfODiVN4MIG7T6M5CTbsFWbAIZFX9aQARAQAB
+tDVNYXR5YXMgT3JoaWRpIChDT0RFIFNJR05JTkcgS0VZKSA8bW9yaGlkaUBhcGFj
+aGUub3JnPokCVwQTAQgAQRYhBKzROG4sTQcFPXdaBkjnjwVKozy1BQJjh+F8AhsD
+BQkDwmcABQsJCAcCAiICBhUKCQgLAgQWAgMBAh4HAheAAAoJEEjnjwVKozy1C7kP
+/3qamWPmcUN7dDFnNzPPH7/fUYs4MsU6tVtn853avILYOsS4HCDuFO3Fl+1dyk0T
+mGpC9vJkLroZyDtdKbgcInOb2b/ebEpb+GtdkCUNerm8ba+tGIre5NeiptcJ7quh
+SyHTThli5n+guYMR7f8PulIkL2rDQNpuzoI28MKqZ8oU6u+ZlmLWBbc0yG9pRjEX
+29BWA00o3PIAOOtpKccXK1C9/VQOTrMZdhRQZSFJzc8OlqBsEceJW9X5/bUEdmwm
+fxnWhlI9t/QeyLO/72KUVcDETmp6P6QK6Vk04b62Ns+ZAQWOkmw41fKXVtkT03QH
+juhZyGJOtQGj9sgKnXCK+MUNYYEi3XD16EmZFDjLfAJ8qCRoRiu8hisWTlHg9265
+0KwLcJ+DvFMSrVL6QZ0P9e0cNwKTai1lOLQZcC83yfDLZA12NjNM370ew416iHaQ
+hP7vdjPXDi+MIV37VfAfzygutluCkx6ZDCMzwqiTd/znsSx4OiqzoKpT9aZOoIEA
+8PDD3xIz5f4CBtI1FA64/cOsDBXpKh9yorfr2iWUC5LMhReoMrCQtqSw4xeJQoYy
+NPiaiEqmNHQWXhyhSPNJbpSoMZqgOMSpKXCkKbmlBwCjdDSMTnw0ZNIl0vUDi0HE
+6GYUIVpr2oxlqG5aYdn03D0uho810NebuKXECFUM0uCIuQINBGOH4XwBEADAECWl
+7rfH3hF1VKML0qdJRzEjKBpBsv2AFiECRCRDS7EQ3AJYM3PbJixJ2u6FRO369KMt
+Mras0t/2pOO3qwvveJB57Qml+UC+/+/K2QOj3EY6kLqlZb3V3N1RHOgoAK2ILgFF
+cRn+wRagwIzTBVyZKSUbXfTr128mNpBZyD5tPBUGsd7pAJBspHyGsO13SSJT3erU
+e+hPw/OE/kAcS1pmG85nRzvT0Emsw88lfLRV6gNwImavK6tick/qCMeFCENAEjJy
+T+XcRe5Pra1HFFBCDywdIA8p1GIFIEtcM3nI40Y8cvwvOm6T4MYHAL01efA2gg1x
+MImuLkSnfOkkzBAVz6GFO/0S1JiZZAvgIOI370QBSQmbxefBJgnrBY+VMuS/LLZa
+20ergiNKGy9lS7Nv75YsFUcIocnsK/pdsmf9Zh6X2mmd4DenJhOO4fE0MK0C78y2
+Cdri45Z8Bey1JnShl8SA25G3pPs3ge7rB7PipqBO91EX6nLvFxZ+Cyk5g23SoKNj
+hlGZSbPF80irk764w0gmDcmtvs19kAVAzIJT9+yEBEcEuh5FkAjxSr4x7GZ5KFmC
+Hxf91E6ozw0jgYEVx/UYLGTyw8H47cFbwIOyxoo5GyBvJc97bIarYyh2R5SwORbQ
+Lx3YuUZyT+zmP+guGrFn3QOc9iUvEkul75qiwwARAQABiQI8BBgBCAAmFiEErNE4
+bixNBwU9d1oGSOePBUqjPLUFAmOH4XwCGwwFCQPCZwAACgkQSOePBUqjPLUz8Q/+
+Ne8k0BEIq0dRWRSAg334HpKA4LEIud2rYCwCI7RnEuoY+BhqWedAJNsImO7ymCoF
+W2vmJA4eZYytxg9w/MP7F4Z7Wkq67AGCFBn7hqEEXjpdzC+ssGPU5O0385CCWUJ7
+/3Sr4/uZmjPfZhSwA6qucYqNHrMRpn47W29S+Qbf2v8ff6Jq8lScEofXLnqLVUNB
+ZNI5hcS4t0+o3BoYiX6cNbQBAVsMO9xx8DkZos90EiAeVUSHp0PFWDGPhFmH3U9u
+MRGZoNQbZEM0fo6dD5KvlRWktYLMJiUC7NAX4A44YrvlhpziwvkUVmhUd1WvL4GP
+UXQJ1ZgXSqNbSmkNOs6lL8jfkWqy7MZ5s2lng0d1pC4PTxMcSlBItEQRb0YtgRBQ
+WlBoJVplvPGAfw73SJ3i06lGhdErED0lCYyivxBRaGf3Ugv7HQbEwThoVIgf4KtV
+a5HMyAMlKcV5nSDgRT0rM4SDzfFd4IfDT424uJgGcVLG2VdsiFXIU+hepuWQKQ7A
+qc2bR9xFuJZ441z1PRIRLThyUzD7dZdqr4+aVa3mHMY7/C7bhCkyD3Hlz8ccrFlF
+7uMbgtHb86v6r+ED3rqc2qZP+5MS5vQeTgfAeIRj0a04zpckjJqiSZg2rVgHh4Db
+Bkg2cPr95iP358z2YkTvm8W7PN0j6hUeC+0j7KuwkQg=
+=cLO/
+-END PGP PUBLIC KEY BLOCK-

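For reference, release artifacts signed with a key from this file can be verified roughly like this (a sketch; the artifact name is illustrative):

```bash
# Sketch: import the Flink KEYS file and verify a signed release artifact.
curl -LO https://downloads.apache.org/flink/KEYS
gpg --import KEYS
gpg --verify flink-kubernetes-operator-1.3.0-src.tgz.asc \
    flink-kubernetes-operator-1.3.0-src.tgz
```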



[flink-docker] branch master updated: [FLINK-27671] Add arm support for snapshot docker images

2022-12-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new d2ead98  [FLINK-27671] Add arm support for snapshot docker images
d2ead98 is described below

commit d2ead987dc336bf7094671fe238f49c688472c8e
Author: Márton Balassi 
AuthorDate: Thu Dec 1 16:33:39 2022 +0100

[FLINK-27671] Add arm support for snapshot docker images
---
 .github/workflows/docker-bake.hcl | 36 +++
 .github/workflows/snapshot.yml| 45 +--
 2 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/.github/workflows/docker-bake.hcl 
b/.github/workflows/docker-bake.hcl
new file mode 100644
index 000..435ff7a
--- /dev/null
+++ b/.github/workflows/docker-bake.hcl
@@ -0,0 +1,36 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+variable TAG {
+default = "latest-snapshot"
+}
+
+variable DOCKER_FILE {
+default = "Dockerfile"
+}
+
+target "bake-platform" {
+  inherits = ["docker-metadata-action"]
+  dockerfile = "$DOCKER_FILE"
+  context = "./"
+  tags = ["${TAG}"]
+  platforms = [
+"linux/amd64",
+"linux/arm64/v8",
+  ]
+}
\ No newline at end of file
diff --git a/.github/workflows/snapshot.yml b/.github/workflows/snapshot.yml
index f1ee3a9..cdda9a0 100644
--- a/.github/workflows/snapshot.yml
+++ b/.github/workflows/snapshot.yml
@@ -48,26 +48,39 @@ jobs:
   - uses: actions/checkout@v3
 with:
   ref: ${{ matrix.build.branch }}
-  - name: Set env
+
+  - name: Set up QEMU
+uses: docker/setup-qemu-action@v1
+with:
+  image: tonistiigi/binfmt:latest
+  platforms: all
+
+  - name: Set up Docker Buildx
+uses: docker/setup-buildx-action@v1
+
+  - name: Log in to the Container registry
+uses: docker/login-action@v1
+with:
+  registry: ghcr.io
+  username: ${{ github.actor }}
+  password: ${{ secrets.GITHUB_TOKEN }}
+
+  - name: Prepare Dockerfiles and set env
 run: |
   IMAGE_NAME=${{ matrix.build.flink_version }}-scala_2.12-java${{ 
matrix.java_version }}
   echo "IMAGE_NAME=${IMAGE_NAME}" >> $GITHUB_ENV
   echo "TAG=${REGISTRY}/${OWNER}/${IMAGE_REPO}:${IMAGE_NAME}-debian" 
>> $GITHUB_ENV
+  ./add-custom.sh -u "https://s3.amazonaws.com/flink-nightly/flink-${{ 
matrix.build.flink_version }}-bin-scala_2.12.tgz" -j ${{ matrix.java_version }} 
-n ${IMAGE_NAME}
+  echo "DOCKER_FILE=$(ls ./*/*${{ matrix.build.flink_version }}*${{ 
matrix.java_version }}*/Dockerfile)" >> $GITHUB_ENV
+
   - name: Environment
 run: env
-  - name: Prepare Dockerfiles
-run: |
-  ./add-custom.sh -u "https://s3.amazonaws.com/flink-nightly/flink-${{ 
matrix.build.flink_version }}-bin-scala_2.12.tgz" -j ${{ matrix.java_version }} 
-n ${IMAGE_NAME}
-  - name: Build image
-run: |
-  dockerfile="$(ls ./*/*${{ matrix.build.flink_version }}*${{ 
matrix.java_version }}*/Dockerfile)"
-  dockerfile_dir="$(dirname "$dockerfile")"
-  echo "===> Building ${TAG} image from ${dockerfile_dir}"
-  docker build -t ${TAG} ${dockerfile_dir}
-  - name: Docker login
-run: |
-  docker login ${REGISTRY} -u ${{ github.actor }} -p ${{ 
secrets.GITHUB_TOKEN }}
-  - name: "Publish snapshots"
-run: |
-  docker push "${TAG}"
 
+  - name: Build and push Docker images (supported platforms)
+uses: docker/bake-action@v1.7.0
+with:
+  files: |
+.github/workflows/docker-bake.hcl
+${{ steps.meta.outputs.bake-file }}
+  targets: bake-platform
+  push: ${{ github.event_name != 'pull_request' }}

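Outside CI, the same bake target can be exercised locally, roughly as sketched below (the tag and Dockerfile path are placeholders; the `docker-metadata-action` target must be provided, either by the metadata action's bake file as in the workflow above or by an empty stub as in the dev-* branch files):

```bash
# Sketch: build the multi-arch snapshot image locally via the new bake file.
# TAG and DOCKER_FILE override the variables declared in docker-bake.hcl.
docker buildx create --use   # a builder that can do multi-platform builds

TAG=ghcr.io/<owner>/flink-docker:<flink-version>-scala_2.12-java11-debian \
DOCKER_FILE=./<generated-dockerfile-dir>/Dockerfile \
docker buildx bake -f .github/workflows/docker-bake.hcl bake-platform
```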


[flink-docker] branch dev-1.15 updated: [FLINK-27671] Add arm support for snapshot docker images

2022-12-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch dev-1.15
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.15 by this push:
 new e055a46  [FLINK-27671] Add arm support for snapshot docker images
e055a46 is described below

commit e055a462d826b584dcb553d1ae0ce06f7c77e703
Author: Márton Balassi 
AuthorDate: Thu Dec 1 16:30:20 2022 +0100

[FLINK-27671] Add arm support for snapshot docker images
---
 .github/workflows/docker-bake.hcl | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/.github/workflows/docker-bake.hcl 
b/.github/workflows/docker-bake.hcl
new file mode 100644
index 000..496572b
--- /dev/null
+++ b/.github/workflows/docker-bake.hcl
@@ -0,0 +1,38 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+target "docker-metadata-action" {}
+
+variable TAG {
+default = "latest-snapshot"
+}
+
+variable DOCKER_FILE {
+default = "Dockerfile"
+}
+
+target "bake-platform" {
+  inherits = ["docker-metadata-action"]
+  dockerfile = "${DOCKER_FILE}"
+  context = "./"
+  tags = ["${TAG}"]
+  platforms = [
+"linux/amd64",
+"linux/arm64/v8",
+  ]
+}



[flink-docker] branch dev-1.16 updated: [FLINK-27671] Add arm support for snapshot docker images

2022-12-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch dev-1.16
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-1.16 by this push:
 new 4811799  [FLINK-27671] Add arm support for snapshot docker images
4811799 is described below

commit 4811799cc81edb7abb2d80bcca6abf93a5523158
Author: Márton Balassi 
AuthorDate: Thu Dec 1 16:29:00 2022 +0100

[FLINK-27671] Add arm support for snapshot docker images
---
 .github/workflows/docker-bake.hcl | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/.github/workflows/docker-bake.hcl 
b/.github/workflows/docker-bake.hcl
new file mode 100644
index 000..496572b
--- /dev/null
+++ b/.github/workflows/docker-bake.hcl
@@ -0,0 +1,38 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+target "docker-metadata-action" {}
+
+variable TAG {
+default = "latest-snapshot"
+}
+
+variable DOCKER_FILE {
+default = "Dockerfile"
+}
+
+target "bake-platform" {
+  inherits = ["docker-metadata-action"]
+  dockerfile = "${DOCKER_FILE}"
+  context = "./"
+  tags = ["${TAG}"]
+  platforms = [
+"linux/amd64",
+"linux/arm64/v8",
+  ]
+}



[flink-docker] branch dev-master updated: [FLINK-27671] Add arm support for snapshot docker images

2022-12-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch dev-master
in repository https://gitbox.apache.org/repos/asf/flink-docker.git


The following commit(s) were added to refs/heads/dev-master by this push:
 new a56b01c  [FLINK-27671] Add arm support for snapshot docker images
a56b01c is described below

commit a56b01c459d9d24be18c5746f6958dbb75f13329
Author: Márton Balassi 
AuthorDate: Thu Dec 1 16:28:18 2022 +0100

[FLINK-27671] Add arm support for snapshot docker images
---
 .github/workflows/docker-bake.hcl | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/.github/workflows/docker-bake.hcl 
b/.github/workflows/docker-bake.hcl
new file mode 100644
index 000..496572b
--- /dev/null
+++ b/.github/workflows/docker-bake.hcl
@@ -0,0 +1,38 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+target "docker-metadata-action" {}
+
+variable TAG {
+default = "latest-snapshot"
+}
+
+variable DOCKER_FILE {
+default = "Dockerfile"
+}
+
+target "bake-platform" {
+  inherits = ["docker-metadata-action"]
+  dockerfile = "${DOCKER_FILE}"
+  context = "./"
+  tags = ["${TAG}"]
+  platforms = [
+"linux/amd64",
+"linux/arm64/v8",
+  ]
+}



[flink-kubernetes-operator] branch main updated: [hotfix][FLINK-30265] Turn on debug logs for e2e tests in CI when tests failing

2022-12-01 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 5da829a1 [hotfix][FLINK-30265] Turn on debug logs for e2e tests in CI 
when tests failing
5da829a1 is described below

commit 5da829a18d04205d56f0b9e9d8fa92c9534aa6fd
Author: Gabor Somogyi 
AuthorDate: Thu Dec 1 15:47:26 2022 +0100

[hotfix][FLINK-30265] Turn on debug logs for e2e tests in CI when tests 
failing
---
 e2e-tests/utils.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/e2e-tests/utils.sh b/e2e-tests/utils.sh
index a7726234..8f3af9c4 100755
--- a/e2e-tests/utils.sh
+++ b/e2e-tests/utils.sh
@@ -217,7 +217,7 @@ function stop_minikube {
 }
 
 function cleanup_and_exit() {
-if [[ $TRAPPED_EXIT_CODE != 0 && -n $DEBUG ]]; then
+if [[ $TRAPPED_EXIT_CODE != 0 ]]; then
   debug_and_show_logs
 fi
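In other words, debug_and_show_logs now runs whenever the trapped exit code is
non-zero (any test failure), rather than only when the DEBUG variable was also set.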
 



[flink-kubernetes-operator] branch main updated: [FLINK-30199] Script for running Kubernetes Operator e2e tests manually

2022-11-30 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new f033976f [FLINK-30199] Script for running Kubernetes Operator e2e 
tests manually
f033976f is described below

commit f033976f2f225e8b5b869b0e523624230a2540eb
Author: pvary 
AuthorDate: Wed Nov 30 12:59:01 2022 +0100

[FLINK-30199] Script for running Kubernetes Operator e2e tests manually

Co-authored-by: Peter Vary 
---
 .github/workflows/ci.yml   |  63 +---
 docs/content/docs/development/guide.md |  33 +
 e2e-tests/run_tests.sh | 253 +
 e2e-tests/utils.sh |  81 ++-
 4 files changed, 371 insertions(+), 59 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index c48b631c..7d5fd0b6 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -59,9 +59,8 @@ jobs:
   start_minikube
   - name: Install cert-manager
 run: |
-  kubectl get pods -A
-  kubectl apply -f 
https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
-  kubectl -n cert-manager wait --all=true --for=condition=Available 
--timeout=300s deploy
+  source e2e-tests/utils.sh
+  install_cert_manager
   - name: Build image
 run: |
   export SHELL=/bin/bash
@@ -95,29 +94,8 @@ jobs:
 runs-on: ubuntu-latest
 strategy:
   matrix:
-version: ["v1_16","v1_15","v1_14","v1_13"]
-namespace: ["default","flink"]
-mode: ["native", "standalone"]
-test:
-  - test_application_kubernetes_ha.sh
-  - test_application_operations.sh
-  - test_sessionjob_kubernetes_ha.sh
-  - test_sessionjob_operations.sh
-  - test_multi_sessionjob.sh
-include:
-  - namespace: flink
-extraArgs: '--create-namespace --set 
"watchNamespaces={default,flink}"'
-  - version: v1_16
-image: flink:1.16
-  - version: v1_15
-image: flink:1.15
-  - version: v1_14
-image: flink:1.14
-  - version: v1_13
-image: flink:1.13
-exclude:
-  - namespace: default
-test: test_multi_sessionjob.sh
+version: [ "v1_16","v1_15","v1_14","v1_13" ]
+namespace: [ "default","flink" ]
 name: e2e_ci
 steps:
   - uses: actions/checkout@v2
@@ -133,39 +111,10 @@ jobs:
   key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
   restore-keys: |
 ${{ runner.os }}-maven-
-  - name: Start minikube
+  - name: Run the tests
 run: |
   source e2e-tests/utils.sh
-  start_minikube
-  - name: Install cert-manager
-run: |
-  kubectl get pods -A
-  kubectl apply -f 
https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
-  kubectl -n cert-manager wait --all=true --for=condition=Available 
--timeout=300s deploy
-  - name: Build image
-run: |
-  export SHELL=/bin/bash
-  export DOCKER_BUILDKIT=1
-  eval $(minikube -p minikube docker-env)
-  docker build --progress=plain --no-cache -f ./Dockerfile -t 
flink-kubernetes-operator:ci-latest --progress plain .
-  docker images
-  - name: Start the operator
-run: |
-  helm --debug install flink-kubernetes-operator -n ${{ 
matrix.namespace }} helm/flink-kubernetes-operator --set 
image.repository=flink-kubernetes-operator --set image.tag=ci-latest ${{ 
matrix.extraArgs }}
-  kubectl wait --for=condition=Available --timeout=120s -n ${{ 
matrix.namespace }} deploy/flink-kubernetes-operator
-  kubectl get pods
-  - name: Run Flink e2e tests
-run: |
-  sed -i "s/image: flink:.*/image: ${{ matrix.image }}/" 
e2e-tests/data/*.yaml
-  sed -i "s/flinkVersion: .*/flinkVersion: ${{ matrix.version }}/" 
e2e-tests/data/*.yaml
-  sed -i "s/mode: .*/mode: ${{ matrix.mode }}/" e2e-tests/data/*.yaml
-  git diff HEAD
-  echo "Running e2e-tests/$test"
-  bash e2e-tests/${{ matrix.test }} || exit 1
-  git reset --hard
-  - name: Stop the operator
-run: |
-  helm uninstall -n ${{ matrix.namespace }} flink-kubernetes-operator
+  bash e2e-tests/run_tests.sh -f ${{ matrix.version }} -n ${{ 
matrix.namespace }} -m native,standalone -d test_multi_sessionjob.sh 
test_application_operations.sh test_application_kubernetes_ha.sh 
test_sessionjob_k
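To run the same suite outside CI, an illustrative invocation of the new script is
shown below; the values stand in for the matrix variables above, and the flag
meanings are inferred from the workflow rather than documented in this diff.

    # Illustrative local run with concrete values substituted for the CI matrix:
    source e2e-tests/utils.sh
    bash e2e-tests/run_tests.sh -f v1_16 -n default -m native,standalone -d \
      test_application_operations.sh test_application_kubernetes_ha.sh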

[flink-kubernetes-operator] branch main updated: [FLINK-30216] Improve code quality by standardising tab/spaces in pom files

2022-11-29 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 80e5f3f4 [FLINK-30216] Improve code quality by standardising 
tab/spaces in pom files
80e5f3f4 is described below

commit 80e5f3f4add50355de6964c1466ba558d46bc79d
Author: darenwkt <108749182+daren...@users.noreply.github.com>
AuthorDate: Tue Nov 29 14:47:20 2022 +

[FLINK-30216] Improve code quality by standardising tab/spaces in pom files
---
 examples/flink-sql-runner-example/pom.xml | 220 +++---
 pom.xml   | 209 
 2 files changed, 228 insertions(+), 201 deletions(-)

diff --git a/examples/flink-sql-runner-example/pom.xml 
b/examples/flink-sql-runner-example/pom.xml
index 60002810..11297f01 100644
--- a/examples/flink-sql-runner-example/pom.xml
+++ b/examples/flink-sql-runner-example/pom.xml
@@ -17,123 +17,123 @@ specific language governing permissions and limitations
 under the License.
 -->
 http://maven.apache.org/POM/4.0.0"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
-   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
-   4.0.0
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd";>
+4.0.0
 
-   
-   org.apache.flink
-   
flink-kubernetes-operator-parent
-   1.3-SNAPSHOT
-   ../..
-   
+
+org.apache.flink
+flink-kubernetes-operator-parent
+1.3-SNAPSHOT
+../..
+
 
-   flink-sql-runner-example
-   Flink SQL Runner Example
+flink-sql-runner-example
+Flink SQL Runner Example
 
-   
-   
-   true
-   
+
+
+true
+
 
-   
-   
-   
-   
-   org.apache.flink
-   flink-streaming-java
-   ${flink.version}
-   provided
-   
-   
-   org.apache.flink
-   flink-table-api-java
-   ${flink.version}
-   provided
-   
+
+
+
+
+org.apache.flink
+flink-streaming-java
+${flink.version}
+provided
+
+
+org.apache.flink
+flink-table-api-java
+${flink.version}
+provided
+
 
-   
+
 
-   
+
+org.apache.flink
+flink-connector-kafka
+${flink.version}
+
+-->
 
-   
-   
-   
-   org.slf4j
-   slf4j-api
-   ${slf4j.version}
-   provided
-   
-   
-   org.apache.logging.log4j
-   log4j-slf4j-impl
-   ${log4j.version}
-   runtime
-   
-   
-   org.apache.logging.log4j
-   log4j-api
-   ${log4j.version}
-   runtime
-   
-   
-   org.apache.logging.log4j
-   log4j-core
-   ${log4j.version}
-   runtime
-   
-   
+
+
+
+org.slf4j
+slf4j-api
+${slf4j.version}
+provided
+
+
+org.apache.logging.log4j
+log4j-slf4j-impl
+${log4j.version}
+runtime
+
+
+org.apache.logging.log4j
+log4j-api
+${log4j.version}
+runtime
+
+
+org.apache.logging.log4j
+log4j-core
+${log4j.version}
+runtime
+
+
 
-   
-   
-   
-   org.apache.maven.plugins
-   maven-shade-plugin
-   3.1.1
-   
-   
-   
-   package
-   
-   

[flink-kubernetes-operator] branch main updated: [FLINK-30222] Operator should handle 'kubernetes' as the 'high-availability' config key

2022-11-29 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 458022d2 [FLINK-30222] Operator should handle 'kubernetes' as the 
'high-availability' config key
458022d2 is described below

commit 458022d2e67247c9941f102fb39d9dda96bd8837
Author: pvary 
AuthorDate: Tue Nov 29 12:31:49 2022 +0100

[FLINK-30222] Operator should handle 'kubernetes' as the 
'high-availability' config key

Co-authored-by: Peter Vary 
---
 .../apache/flink/kubernetes/operator/utils/FlinkUtils.java   |  7 ---
 .../kubernetes/operator/validation/DefaultValidatorTest.java | 12 
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
index c17a2495..b438a14d 100644
--- 
a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
+++ 
b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
@@ -160,9 +160,10 @@ public class FlinkUtils {
 }
 
 public static boolean isKubernetesHAActivated(Configuration configuration) 
{
-return configuration
-.get(HighAvailabilityOptions.HA_MODE)
-
.equalsIgnoreCase(KubernetesHaServicesFactory.class.getCanonicalName());
+String haMode = configuration.get(HighAvailabilityOptions.HA_MODE);
+return 
haMode.equalsIgnoreCase(KubernetesHaServicesFactory.class.getCanonicalName())
+// Hardcoded config value should be removed when upgrading 
Flink dependency to 1.16
+|| haMode.equalsIgnoreCase("kubernetes");
 }
 
 public static boolean clusterShutdownDisabled(FlinkDeploymentSpec spec) {
diff --git 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
index 80fb6af0..fe2e3162 100644
--- 
a/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
+++ 
b/flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/validation/DefaultValidatorTest.java
@@ -416,6 +416,18 @@ public class DefaultValidatorTest {
 "spec.serviceAccount must be defined. If you use helm, its 
value should be the same with the name of jobServiceAccount.");
 
 testSuccess(dep -> dep.getSpec().setServiceAccount("flink"));
+
+testSuccess(
+dep -> {
+
dep.getSpec().getJob().setUpgradeMode(UpgradeMode.LAST_STATE);
+dep.getSpec()
+.getFlinkConfiguration()
+.put(
+HighAvailabilityOptions.HA_MODE.key(),
+// Hardcoded config value should be 
removed when upgrading Flink
+// dependency to 1.16
+"kubernetes");
+});
 }
 
 @Test
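For context, the shorthand accepted above is the value users put in the deployment
spec; an illustrative fragment (field values are placeholders, not taken from the
commit) looks like the following, and isKubernetesHAActivated now returns true for
it as well as for the fully qualified KubernetesHaServicesFactory class name.

    # Illustrative FlinkDeployment fragment; the storageDir value is a placeholder.
    spec:
      flinkConfiguration:
        high-availability: kubernetes
        high-availability.storageDir: s3://my-bucket/flink-ha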



[flink-kubernetes-operator] branch main updated: [hotfix][FLINK-27037] Doc publishing script typo

2022-11-28 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new f713b049 [hotfix][FLINK-27037] Doc publishing script typo
f713b049 is described below

commit f713b049c26c987106be8f8bf283896c44ea8d26
Author: Marton Balassi 
AuthorDate: Mon Nov 28 15:58:45 2022 +0100

[hotfix][FLINK-27037] Doc publishing script typo
---
 .github/workflows/docs.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/docs.sh b/.github/workflows/docs.sh
index a6eb9c8d..079535dd 100755
--- a/.github/workflows/docs.sh
+++ b/.github/workflows/docs.sh
@@ -44,7 +44,7 @@ mkdir -p docs/target/api
 mvn javadoc:aggregate -B \
 -DadditionalJOption="-Xdoclint:none" \
 -DadditionalJOption="--allow-script-in-comments" \
--DexcludePackageNames="org.apache.flink.examples"
+-DexcludePackageNames="org.apache.flink.examples" \
 -Dmaven.javadoc.failOnError=false \
 -Dcheckstyle.skip=true \
 -Dspotless.check.skip=true \
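In other words, the missing trailing backslash ended the mvn command after the
excludePackageNames flag, so the -Dmaven.javadoc.failOnError, -Dcheckstyle.skip and
-Dspotless.check.skip options on the following lines were no longer passed to the
javadoc invocation.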



[flink-kubernetes-operator] branch main updated: [FLINK-27037] Add Javadocs publishing to docs generation

2022-11-28 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 2d3b2a1e [FLINK-27037] Add Javadocs publishing to docs generation
2d3b2a1e is described below

commit 2d3b2a1e8d2c8b3df259345d9a2c4329bf77c59c
Author: Márton Balassi 
AuthorDate: Mon Nov 28 15:52:39 2022 +0100

[FLINK-27037] Add Javadocs publishing to docs generation
---
 .github/workflows/docs.sh | 20 +---
 docs/config.toml  |  5 +++-
 flink-kubernetes-operator-api/pom.xml | 45 ---
 flink-kubernetes-operator/pom.xml | 24 ---
 pom.xml   | 25 +++
 5 files changed, 66 insertions(+), 53 deletions(-)

diff --git a/.github/workflows/docs.sh b/.github/workflows/docs.sh
index 7c1f350a..a6eb9c8d 100755
--- a/.github/workflows/docs.sh
+++ b/.github/workflows/docs.sh
@@ -40,12 +40,14 @@ fi
 #mvn clean install -B -DskipTests
 
 # build Java docs
-#mkdir -p docs/target/api
-#mvn javadoc:aggregate -B \
-#-DadditionalJOption="-Xdoclint:none --allow-script-in-comments" \
-#-Dmaven.javadoc.failOnError=false \
-#-Dcheckstyle.skip=true \
-#-Dspotless.check.skip=true \
-#-Denforcer.skip=true \
-#-Dheader="http://flink.apache.org/\"; target=\"_top\">Back 
to Flink Website 

[flink] branch master updated (c79b60cb115 -> b4bad50d77c)

2022-11-28 Thread mbalassi
This is an automated email from the ASF dual-hosted git repository.

mbalassi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from c79b60cb115 [FLINK-30084][Runtime] Remove unused method 
notifyAllocationFailure.
 add b4bad50d77c [FLINK-30194][runtime][security] Rename 
DelegationTokenProvider to HadoopDelegationTokenProvider

No new revisions were added by this update.

Summary of changes:
 .../token/HBaseDelegationTokenProvider.java|  2 +-
 ...der.java => HadoopDelegationTokenProvider.java} | 14 +---
 .../token/HadoopFSDelegationTokenProvider.java |  2 +-
 .../token/KerberosDelegationTokenManager.java  | 14 +---
 ...e.security.token.HadoopDelegationTokenProvider} |  0
 ...tionThrowingHadoopDelegationTokenProvider.java} |  8 ---
 .../KerberosDelegationTokenManagerITCase.java  | 26 +++---
 ...java => TestHadoopDelegationTokenProvider.java} |  4 ++--
 runtime.security.token.DelegationTokenProvider | 17 --
 ...me.security.token.HadoopDelegationTokenProvider |  3 ++-
 10 files changed, 41 insertions(+), 49 deletions(-)
 rename 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/{DelegationTokenProvider.java
 => HadoopDelegationTokenProvider.java} (79%)
 rename 
flink-runtime/src/main/resources/META-INF/services/{org.apache.flink.runtime.security.token.DelegationTokenProvider
 => org.apache.flink.runtime.security.token.HadoopDelegationTokenProvider} 
(100%)
 rename 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/{ExceptionThrowingDelegationTokenProvider.java
 => ExceptionThrowingHadoopDelegationTokenProvider.java} (86%)
 rename 
flink-runtime/src/test/java/org/apache/flink/runtime/security/token/{TestDelegationTokenProvider.java
 => TestHadoopDelegationTokenProvider.java} (88%)
 delete mode 100644 
flink-runtime/src/test/resources/META-INF/services/org.apache.flink.runtime.security.token.DelegationTokenProvider
 copy 
flink-clients/src/main/resources/META-INF/services/org.apache.flink.client.deployment.ClusterClientFactory
 => 
flink-runtime/src/test/resources/META-INF/services/org.apache.flink.runtime.security.token.HadoopDelegationTokenProvider
 (82%)


