svn commit: r61931 - /release/flink/KEYS

2023-05-16 Thread xtsong
Author: xtsong
Date: Wed May 17 05:47:45 2023
New Revision: 61931

Log:
Add code signing key for Weijie Guo

Modified:
release/flink/KEYS

Modified: release/flink/KEYS
==============================================================================
--- release/flink/KEYS (original)
+++ release/flink/KEYS Wed May 17 05:47:45 2023
@@ -3229,3 +3229,62 @@ DhC7tv8R7gzBhBCSn6oOMw0=
 =ekpR
 -----END PGP PUBLIC KEY BLOCK-----
 
+pub   rsa4096 2023-05-11 [SC]
+  8D56AE6E7082699A4870750EA4E8C4C05EE6861F
+uid [ultimate] Weijie Guo (CODE SIGNING KEY) 
sig 3    A4E8C4C05EE6861F 2023-05-11  Weijie Guo (CODE SIGNING KEY) 

+sub   rsa4096 2023-05-11 [E]
+sig  A4E8C4C05EE6861F 2023-05-11  Weijie Guo (CODE SIGNING KEY) 

+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBGRcZC4BEADEw0pzE+gKq19IA2fRgQp/Hw+NIabNmIc5CoURu5gonkh1ZiCD
+ZzdvIdLwlQcj4PNSr407Yi7MOt6VfjPY8f2sbY2issCTrIrL6fPA+3Hi7AxUPRUt
+UtyR9wSQXlzkm4g2cbg+LSLrEIPqi6uAEZNSCOtWqMM5hZAphMd6NiAbf29yRAXB
+3UJPL1QvtmWyrNETx6qLNI0qDIqhiI8KfTnoc24Q1RhuML/m37H61q7VyonRmQpy
+EREcoGL+I4qEiNu+WWQhUINOLuqbl5QBhWIyUsyymnh3zjJDCGzgl4KzRq9dxiDf
+RpbqMFZxUZ8fUsG8BtJ1zYLpDBtjRRsCp3NBkKVu61+EH+LPEFYBNBWU8sVPSi1T
+kIcvM2TI0SHm4osFYthtFffBfVeelEm4CvXcbhzJHnFWDYMs9/CAcz1Pl5oEX/ds
+WJLt1cDVJ477jrh367EJsFki/Vy7meGe7BQVbiDpYzysyYZHF1iyrv4W8oSZyHo1
+8YHUID2nFVWR+kO6kmCKY6WjlxOvja9huqqsV1czp6y/qvSoZWi7ZWPYIZPfZG3J
+uGwaN44mN2nw7yL6iHtNwEWLEouLjN9Udf5FX1YWOCUC56zVtZB9s19pVTumAnzn
+uACxM3PCR8a1BoCC4MUbDrfhy3a9n94wrK5DQPM5aFu5MFu0Fw4svR+m5QARAQAB
+tDRXZWlqaWUgR3VvIChDT0RFIFNJR05JTkcgS0VZKSA8Z3Vvd2VpamllQGFwYWNo
+ZS5vcmc+iQJOBBMBCAA4FiEEjVaubnCCaZpIcHUOpOjEwF7mhh8FAmRcZC4CGwMF
+CwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQpOjEwF7mhh/Ybw/+L8CnXQ51k2Wy
+Kz4ITYUgOjg0/HEwI4dPvKM5VHtFoON35DhofRZ5UaNKb8BtfzzIPHZp++snD4Q7
+t8NhnmzFJLOUhHFgklbBdh6Pu2ipjb9vGCc7aTwdEVHDZASSK5KcbLGSPIO/IAEQ
+ikPOZmiyK5+RuC8LQaxhtW7pWDQc1t+hxx+mi8KRQiu5jvOCFTQoK2YtR9M6rvue
+5gcpNiLHMPvFHm0Yq0moR4OTExo+uIM5m3G1DYflUXUCbByMLOrEdTcg6S3cAcI4
+I7LDnsfughhDvPW9jSwc521Byy4vZbbwYp2bDbo2szxjNMq8yKgnZtR5tsxQy1Os
+647jZfetUjqpUw5DNjCeaiBwjkCnSbgTZTcPdG970QQqSR1FkoWMEY0B/xqnTRJA
+yR/+gOBeS2E4RBfIrpjIJh1lVeYaXJg3ILZnBNLIP1aDrgW9PRORTtrQk7By17uw
+9iFOyDbWotN+/RVXwEtaKUHkYvCnn9+NSOcMhye1O1l/QdhmzOMG4EbU2ZUOAyIL
+95HF/zqy2cRBNlWrdP5q9DI+4E/ruo19emQfpn2AP3fe8SEoeifZzbBLtYwSFE7F
+NNW3HEQ/KovOByncj+UD1EwBJmOUFTUgAOBe6Pf73X28/HdCyJU9rJ1EwqKRQUts
+AizYUEgMviOcFZ+VGLSWvNbFEGPQZzK5Ag0EZFxkLgEQAMJtrSOA6GgmDrVWYJC5
+hGbyQnu5rY2Tmmspxmh1IQrXUyVzLmCraG4oanamAggNJr6seczmvyZV9ITkHmEy
+DeI7VPFSW8HfAxglWpiDBwCSAw/40bfBBZfBLxgxJFFsxcG9mM3XV9rqK2DAxdbK
+bQ8JUpOXoyBgrCn1s2WkV2fdSioU1p+cqSZqMN4yoMdRG9LCfkYThRfnS3xYaEFO
+maS9AAJItGx+1NvIE77tKuHyg1fQFsnRut7FEYdHGIV/vjfHxm+f7IWtK9TZrM7P
+qyWDJpl2ss/OS1gyXDoHfSawKAR6n1bBNzAzUQncZnp5rPPZNiwglt/TL3RNsic4
+rZ2f+aR5em+1VpB7AQZpwwnQbHmWlJdVs6yp1s64RCXnDDKJnxHX/Xth5f9CWa6/
+74hGpiTGPiqqOD2doNVO3px4U0F1jJj9lA4LHPqVINkK66e/nGnv4tiHOHblbl3B
+9BhsC3+yIh9AZKj6Va2eO5A4JhAQLWYyQvhDRERZ+oEz0OjAdOa97aaX+28SitX4
+L6Fr4rznQ9oMeG2iFsoYv06hQy5KSU++FFiF6o2wqz7DgXxWBmyvtu9DdpUx7WOW
+RiWA8KMoVTAMvxJNRpfv2EccjDpvATTkfGt6a8kyc9D0G7gMcGMrPfjfZGfe+a7t
+lemtwz2hbduoKqaUWY7tCnWRABEBAAGJAjYEGAEIACAWIQSNVq5ucIJpmkhwdQ6k
+6MTAXuaGHwUCZFxkLgIbDAAKCRCk6MTAXuaGH3tgD/0enrhaWrwoGHUCWqnraNsW
+gKeBl9yKFVV9qqn+s5/rPe+487iAP3IbPY94U86/oXB/8gd7b3VL9CUIYWULxGTi
+8YMTJh1Pd9CZm9s+3BcNI2ZJbzBHZFHaUjwU77BINtJahuizU0HW7+jhzl/7bI7w
+VVbptORw5jkxP291pHOB8qmnoBBIB1uTzxDnWJoCmyfFtM+ajnA73K8zaNKQeLZt
+M8qPkqFxu1+rWRxVJmUrswUkH2UVFmUuGhgevbuvIoXJkdiqUyoOLLVponCMGsAI
+G4Ix795iit28w5IgYbnVT7avmhOsbKsCQDeZ/UME50oj5tzSA1koZQc8h7x26iHv
+kcIdlMRHtD+SGe7q5+jlV0qjEsixb/P71lsmLhlDp1dpZ97rzdB6ArZYvxHYo+dc
+T2vnBuoEajXkpF0UjKsR4ujFvJJmJGkYdA84rfh+ftDSkvlQgUMEwFZPuryJS4/r
+JvSsD/viYC5FAbIsYAgWOQ3i08xhdaP0T6BRDlBp3bfn4mmuwAodLp3wM42w/ehD
+2n56GLDpGtF8wkAOQLH3hZuEsy3xUAy9fj4iMWSf2enJ04J8gYaCel1chOocvl6T
+T4slBroTxilCwU2EAAFo1bB7WdMh1Q6IAwljNgB8HqpAJmn1D7AP27MJvFKRySqs
+dhqaTwP5eDMy4VtBiOw+8g==
+=Ddr1
+-----END PGP PUBLIC KEY BLOCK-----




[flink] branch release-1.16 updated (ffa58e1de3d -> 2203bc3bdc9)

2023-05-16 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a change to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git


from ffa58e1de3d [FLINK-31418][network][tests] Fix unstable test case 
SortMergeResultPartitionReadSchedulerTest.testRequestBufferTimeout
 add 2203bc3bdc9 [FLINK-31963][state] Fix rescaling bug in recovery from 
unaligned checkpoints. (#22584) (#22595)

No new revisions were added by this update.

Summary of changes:
 .../checkpoint/StateAssignmentOperation.java   |  28 ++--
 .../runtime/checkpoint/TaskStateAssignment.java|  19 ++-
 .../checkpoint/StateAssignmentOperationTest.java   | 178 -
 .../checkpointing/UnalignedCheckpointITCase.java   |  18 ++-
 .../UnalignedCheckpointRescaleITCase.java  | 137 ++--
 .../checkpointing/UnalignedCheckpointTestBase.java |  32 +++-
 6 files changed, 335 insertions(+), 77 deletions(-)



[flink-kubernetes-operator] branch main updated (562dbbc4 -> 9618f519)

2023-05-16 Thread mxm
This is an automated email from the ASF dual-hosted git repository.

mxm pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


from 562dbbc4 [FLINK-32100] Use the total number of Kafka partitions as the 
max source parallelism (#597)
 add 9618f519 [FLINK-32102] Aggregate multiple pendingRecords metric per 
source if present (#598)

No new revisions were added by this update.

Summary of changes:
 .../autoscaler/RestApiMetricsCollector.java| 15 -
 .../autoscaler/ScalingMetricCollector.java | 16 ++---
 .../operator/autoscaler/metrics/FlinkMetric.java   |  9 +++
 .../autoscaler/RestApiMetricsCollectorTest.java| 77 ++
 .../kubernetes/operator/TestingFlinkService.java   | 13 
 5 files changed, 120 insertions(+), 10 deletions(-)
 create mode 100644 
flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/RestApiMetricsCollectorTest.java
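Per the FLINK-32102 summary above, a source's multiple pendingRecords metrics are now aggregated rather than one being picked. A hedged standalone sketch of that idea, assuming aggregation means summation; the class and method names here are illustrative, not the operator's API:

```java
import java.util.List;

public class PendingRecordsSketch {

    // Assumed semantics: a source may expose several pendingRecords metrics
    // (e.g. one per consumed topic); sum them into a single backlog figure.
    static double aggregatePendingRecords(List<Double> perMetricValues) {
        return perMetricValues.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        // Three hypothetical pendingRecords readings for one source vertex.
        System.out.println(aggregatePendingRecords(List.of(120.0, 30.0, 0.0))); // prints 150.0
    }
}
```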



[flink-kubernetes-operator] branch main updated: [FLINK-32100] Use the total number of Kafka partitions as the max source parallelism (#597)

2023-05-16 Thread mxm
This is an automated email from the ASF dual-hosted git repository.

mxm pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 562dbbc4 [FLINK-32100] Use the total number of Kafka partitions as the 
max source parallelism (#597)
562dbbc4 is described below

commit 562dbbc441f27c50e63483d114d805946a1e1d4c
Author: Maximilian Michels 
AuthorDate: Tue May 16 19:21:27 2023 +0200

[FLINK-32100] Use the total number of Kafka partitions as the max source 
parallelism (#597)

So far, we've taken the max number of partitions we can find. However, the 
correct way to calculate the
max source parallelism is to sum the number of partitions across all 
topics.
---
 .../autoscaler/ScalingMetricCollector.java | 33 +-
 .../MetricsCollectionAndEvaluationTest.java|  6 +++-
 2 files changed, 19 insertions(+), 20 deletions(-)

diff --git 
a/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/ScalingMetricCollector.java
 
b/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/ScalingMetricCollector.java
index b20c53ee..cb094e70 100644
--- 
a/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/ScalingMetricCollector.java
+++ 
b/flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/ScalingMetricCollector.java
@@ -58,6 +58,7 @@ import java.util.Optional;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.regex.Pattern;
 import java.util.stream.Collectors;
 
 /** Metric collector using flink rest api. */
@@ -192,28 +193,22 @@ public abstract class ScalingMetricCollector {
 private void updateKafkaSourceMaxParallelisms(
 RestClusterClient restClient, JobID jobId, JobTopology 
topology)
 throws Exception {
+var partitionRegex = 
Pattern.compile("^.*\\.partition\\.\\d+\\.currentOffset$");
 for (Map.Entry> entry : 
topology.getInputs().entrySet()) {
 if (entry.getValue().isEmpty()) {
 var sourceVertex = entry.getKey();
-queryAggregatedMetricNames(restClient, jobId, 
sourceVertex).stream()
-.map(AggregatedMetric::getId)
-.filter(s -> s.endsWith(".currentOffset"))
-.mapToInt(
-s -> {
-// We extract the partition from the 
pattern:
-// 
...topic.[topic].partition.3.currentOffset
-var split = s.split("\\.");
-return Integer.parseInt(split[split.length 
- 2]);
-})
-.max()
-.ifPresent(
-p -> {
-LOG.debug(
-"Updating source {} max 
parallelism based on available partitions to {}",
-sourceVertex,
-p + 1);
-
topology.updateMaxParallelism(sourceVertex, p + 1);
-});
+var numPartitions =
+queryAggregatedMetricNames(restClient, jobId, 
sourceVertex).stream()
+.map(AggregatedMetric::getId)
+.filter(partitionRegex.asMatchPredicate())
+.count();
+if (numPartitions > 0) {
+LOG.debug(
+"Updating source {} max parallelism based on 
available partitions to {}",
+sourceVertex,
+numPartitions);
+topology.updateMaxParallelism(sourceVertex, (int) 
numPartitions);
+}
 }
 }
 }
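The rewritten loop above can be sketched in isolation: count the metric names matching the per-partition offset pattern and use that count as the source's max parallelism. The regex is copied from the diff; the metric names and the `countPartitions` helper below are made up for illustration and are not operator code.

```java
import java.util.List;
import java.util.regex.Pattern;

public class PartitionCountSketch {

    // Same pattern as the patch: one currentOffset metric per Kafka partition.
    private static final Pattern PARTITION_REGEX =
            Pattern.compile("^.*\\.partition\\.\\d+\\.currentOffset$");

    // Count how many per-partition currentOffset metrics a source exposes.
    static long countPartitions(List<String> metricNames) {
        return metricNames.stream()
                .filter(PARTITION_REGEX.asMatchPredicate())
                .count();
    }

    public static void main(String[] args) {
        // Hypothetical metric names for a source reading two topics.
        List<String> metrics = List.of(
                "Source.KafkaSourceReader.topic.orders.partition.0.currentOffset",
                "Source.KafkaSourceReader.topic.orders.partition.1.currentOffset",
                "Source.KafkaSourceReader.topic.payments.partition.0.currentOffset",
                "Source.KafkaSourceReader.topic.orders.partition.0.committedOffset");
        System.out.println(countPartitions(metrics)); // prints 3
    }
}
```

Summing across topics falls out naturally: every partition of every topic contributes one matching metric, so the count is the total partition number rather than the max found in any one topic.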
diff --git 
a/flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/MetricsCollectionAndEvaluationTest.java
 
b/flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/MetricsCollectionAndEvaluationTest.java
index 27341510..7bef5609 100644
--- 
a/flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/MetricsCollectionAndEvaluationTest.java
+++ 
b/flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/MetricsCollectionAndEvaluationTest.java
@@ -270,6 +270,10 @@ public class MetricsCollectionAndEvaluationTest {
 Map.of(
 source1,
 List.of(
+

[flink] branch master updated: [FLINK-32030][sql-client] Add URLs support for SQL Client gateway mode (#22556)

2023-05-16 Thread thw
This is an automated email from the ASF dual-hosted git repository.

thw pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4bd51ce122d [FLINK-32030][sql-client] Add URLs support for SQL Client 
gateway mode (#22556)
4bd51ce122d is described below

commit 4bd51ce122d03a13cfd6fdf69325630679cd5053
Author: Alexander Fedulov <1492164+afedu...@users.noreply.github.com>
AuthorDate: Tue May 16 18:52:06 2023 +0200

[FLINK-32030][sql-client] Add URLs support for SQL Client gateway mode 
(#22556)
---
 .../main/java/org/apache/flink/util/NetUtils.java  |  12 +++
 .../java/org/apache/flink/util/NetUtilsTest.java   |  11 +++
 .../org/apache/flink/runtime/rest/RestClient.java  |  14 ++-
 .../apache/flink/table/client/cli/CliOptions.java  |   7 +-
 .../flink/table/client/cli/CliOptionsParser.java   |  31 ++-
 .../flink/table/client/gateway/Executor.java   |   5 +
 .../flink/table/client/gateway/ExecutorImpl.java   |  49 +++---
 .../apache/flink/table/client/SqlClientTest.java   |  17 +++-
 .../rest/header/util/UrlPrefixDecorator.java   | 103 +
 9 files changed, 231 insertions(+), 18 deletions(-)

diff --git a/flink-core/src/main/java/org/apache/flink/util/NetUtils.java 
b/flink-core/src/main/java/org/apache/flink/util/NetUtils.java
index fc3be6cf9b4..bab4d9b3208 100644
--- a/flink-core/src/main/java/org/apache/flink/util/NetUtils.java
+++ b/flink-core/src/main/java/org/apache/flink/util/NetUtils.java
@@ -120,6 +120,18 @@ public class NetUtils {
 }
 }
 
+/**
+ * Converts an InetSocketAddress to a URL. This method assigns the 
"http://" schema to the URL
+ * by default.
+ *
+ * @param socketAddress the InetSocketAddress to be converted
+ * @return a URL object representing the provided socket address with 
"http://" schema
+ */
+public static URL socketToUrl(InetSocketAddress socketAddress) {
+String hostPort = socketAddress.getHostString() + ":" + 
socketAddress.getPort();
+return validateHostPortString(hostPort);
+}
+
 /**
  * Calls {@link ServerSocket#accept()} on the provided server socket, 
suppressing any thrown
  * {@link SocketTimeoutException}s. This is a workaround for the 
underlying JDK-8237858 bug in
diff --git a/flink-core/src/test/java/org/apache/flink/util/NetUtilsTest.java 
b/flink-core/src/test/java/org/apache/flink/util/NetUtilsTest.java
index a835e2bad7c..6168da4474e 100644
--- a/flink-core/src/test/java/org/apache/flink/util/NetUtilsTest.java
+++ b/flink-core/src/test/java/org/apache/flink/util/NetUtilsTest.java
@@ -18,12 +18,14 @@
 
 package org.apache.flink.util;
 
+import org.assertj.core.api.Assertions;
 import org.junit.Assert;
 import org.junit.Test;
 
 import java.io.IOException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
+import java.net.MalformedURLException;
 import java.net.ServerSocket;
 import java.net.Socket;
 import java.net.SocketTimeoutException;
@@ -32,6 +34,7 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.Set;
 
+import static org.apache.flink.util.NetUtils.socketToUrl;
 import static org.hamcrest.core.IsCollectionContaining.hasItems;
 import static org.hamcrest.core.IsNot.not;
 import static org.junit.Assert.assertEquals;
@@ -343,4 +346,12 @@ public class NetUtilsTest extends TestLogger {
 NetUtils.unresolvedHostAndPortToNormalizedString(host, 
port));
 }
 }
+
+@Test
+public void testSocketToUrl() throws MalformedURLException {
+InetSocketAddress socketAddress = new InetSocketAddress("foo.com", 
8080);
+URL expectedResult = new URL("http://foo.com:8080");
+
+
Assertions.assertThat(socketToUrl(socketAddress)).isEqualTo(expectedResult);
+}
 }
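The behaviour the new helper and its test exercise can be reproduced standalone. The sketch below is a local re-implementation using only `java.net`, under the assumption stated in the Javadoc that only the default "http://" scheme is prepended; it is not Flink's `NetUtils` code.

```java
import java.net.InetSocketAddress;
import java.net.MalformedURLException;
import java.net.URL;

public class SocketToUrlSketch {

    // Local re-implementation: prefix the default "http://" scheme,
    // mirroring what the patch's NetUtils.socketToUrl is documented to do.
    static URL socketToUrl(InetSocketAddress address) throws MalformedURLException {
        return new URL("http://" + address.getHostString() + ":" + address.getPort());
    }

    public static void main(String[] args) throws MalformedURLException {
        // createUnresolved avoids a DNS lookup for the example host.
        URL url = socketToUrl(InetSocketAddress.createUnresolved("foo.com", 8080));
        System.out.println(url); // prints http://foo.com:8080
    }
}
```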
diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java 
b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
index 2ab8fdac579..03a5c7b0742 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
@@ -123,6 +123,8 @@ public class RestClient implements AutoCloseableAsync {
 
 private final AtomicBoolean isRunning = new AtomicBoolean(true);
 
+public static final String VERSION_PLACEHOLDER = "{{VERSION}}";
+
 @VisibleForTesting List 
outboundChannelHandlerFactories;
 
 public RestClient(Configuration configuration, Executor executor)
@@ -353,7 +355,7 @@ public class RestClient implements AutoCloseableAsync {
 }
 
 String versionedHandlerURL =
-"/" + apiVersion.getURLVersionPrefix() + 
messageHeaders.getTargetRestEndpointURL();
+constructVersionedHandlerUrl(messageHeaders, 
apiVersion.getURLVersionPrefix());
 String targetUrl = MessageParameters.resolveUrl(versi

[flink-connector-aws] branch main updated: [FLINK-31772][Connector/Kinesis] Adjusting Kinesis Ratelimiting strategy to fix performance regression (#70)

2023-05-16 Thread dannycranmer
This is an automated email from the ASF dual-hosted git repository.

dannycranmer pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-connector-aws.git


The following commit(s) were added to refs/heads/main by this push:
 new 0be8192  [FLINK-31772][Connector/Kinesis] Adjusting Kinesis 
Ratelimiting strategy to fix performance regression (#70)
0be8192 is described below

commit 0be819249cfb2930b9356a8228bdea025c04d74e
Author: Ahmed Hamdy 
AuthorDate: Tue May 16 14:40:55 2023 +0100

[FLINK-31772][Connector/Kinesis] Adjusting Kinesis Ratelimiting strategy to 
fix performance regression (#70)
---
 .../kinesis/sink/KinesisStreamsSinkWriter.java | 21 +
 .../kinesis/sink/KinesisStreamsSinkWriterTest.java | 97 ++
 2 files changed, 118 insertions(+)

diff --git 
a/flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java
 
b/flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java
index 2ff49cb..58d3d8a 100644
--- 
a/flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java
+++ 
b/flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriter.java
@@ -26,6 +26,9 @@ import 
org.apache.flink.connector.base.sink.writer.AsyncSinkWriter;
 import org.apache.flink.connector.base.sink.writer.BufferedRequestState;
 import org.apache.flink.connector.base.sink.writer.ElementConverter;
 import 
org.apache.flink.connector.base.sink.writer.config.AsyncSinkWriterConfiguration;
+import 
org.apache.flink.connector.base.sink.writer.strategy.AIMDScalingStrategy;
+import 
org.apache.flink.connector.base.sink.writer.strategy.CongestionControlRateLimitingStrategy;
+import 
org.apache.flink.connector.base.sink.writer.strategy.RateLimitingStrategy;
 import org.apache.flink.metrics.Counter;
 import org.apache.flink.metrics.groups.SinkWriterMetricGroup;
 
@@ -79,6 +82,9 @@ class KinesisStreamsSinkWriter extends 
AsyncSinkWriter extends 
AsyncSinkWriter extends 
AsyncSinkWriter requestEntries,
diff --git 
a/flink-connector-aws-kinesis-streams/src/test/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriterTest.java
 
b/flink-connector-aws-kinesis-streams/src/test/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriterTest.java
new file mode 100644
index 000..eccfe0a
--- /dev/null
+++ 
b/flink-connector-aws-kinesis-streams/src/test/java/org/apache/flink/connector/kinesis/sink/KinesisStreamsSinkWriterTest.java
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kinesis.sink;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.connector.aws.testutils.AWSServicesTestUtils;
+import org.apache.flink.connector.base.sink.writer.ElementConverter;
+import org.apache.flink.connector.base.sink.writer.TestSinkInitContext;
+import 
org.apache.flink.connector.base.sink.writer.strategy.AIMDScalingStrategy;
+import 
org.apache.flink.connector.base.sink.writer.strategy.CongestionControlRateLimitingStrategy;
+
+import org.junit.jupiter.api.Test;
+import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;
+
+import java.util.Properties;
+
+import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
+
+/** Test class for {@link KinesisStreamsSinkWriter}. */
+public class KinesisStreamsSinkWriterTest {
+
+private static final int EXPECTED_AIMD_INC_RATE = 10;
+private static final double EXPECTED_AIMD_DEC_FACTOR = 0.99D;
+private static final int MAX_BATCH_SIZE = 50;
+private static final int MAX_INFLIGHT_REQUESTS = 16;
+private static final int MAX_BUFFERED_REQUESTS = 1;
+private static final long MAX_BATCH_SIZE_IN_BYTES = 4 * 1024 * 1024;
+private static final long MAX_TIME_IN_BUFFER = 5000;
+private static final long MAX_RECORD_SIZE = 1000 * 1024;
+private static final boolean FAIL_ON_ERROR = false;
+
+private KinesisStreamsSinkWriter sinkWriter;
+
+p
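The change wires an `AIMDScalingStrategy` into a `CongestionControlRateLimitingStrategy` for the Kinesis sink. A minimal standalone sketch of additive-increase/multiplicative-decrease, using the increase rate (10) and decrease factor (0.99) that appear as constants in the new test; the class and method names are illustrative, not the connector's API.

```java
public class AimdSketch {

    private final int increaseRate;       // added after each successful batch
    private final double decreaseFactor;  // multiplied in when throttled
    private double currentRate;

    AimdSketch(int increaseRate, double decreaseFactor, double initialRate) {
        this.increaseRate = increaseRate;
        this.decreaseFactor = decreaseFactor;
        this.currentRate = initialRate;
    }

    // Additive increase: cautiously probe for more throughput.
    double onSuccess() {
        currentRate += increaseRate;
        return currentRate;
    }

    // Multiplicative decrease: back off quickly on throttling errors.
    double onThrottle() {
        currentRate *= decreaseFactor;
        return currentRate;
    }

    public static void main(String[] args) {
        AimdSketch aimd = new AimdSketch(10, 0.99, 100.0);
        System.out.println(aimd.onSuccess());  // prints 110.0
        System.out.println(aimd.onThrottle()); // ~108.9
    }
}
```

A decrease factor as high as 0.99 makes backoff very gentle, which is the point of the regression fix: the previous behaviour throttled throughput far more aggressively than needed.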

[flink] branch master updated (354c0f455b9 -> ce286c969a5)

2023-05-16 Thread guoweijie
This is an automated email from the ASF dual-hosted git repository.

guoweijie pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 354c0f455b9 [FLINK-31963][state] Fix rescaling bug in recovery from 
unaligned checkpoints. (#22584)
 add a8c757081c6 [FLINK-32060][test] Introduce NoOpTestExtension to make 
subclass unaware of @TestTemplate
 add ce286c969a5 [FLINK-32060][test] Migrate subclasses of 
BatchAbstractTestBase in table and other modules to JUnit5

No new revisions were added by this update.

Summary of changes:
 .../flink/formats/avro/AvroFilesystemITCase.java   |  16 ++--
 .../formats/csv/CsvFilesystemBatchITCase.java  |  23 +++--
 .../formats/csv/CsvFilesystemStreamITCase.java |   2 +-
 .../formats/json/JsonBatchFileSystemITCase.java|  20 ++--
 .../org/apache/flink/orc/OrcFileSystemITCase.java  |  41 
 .../formats/parquet/ParquetFileSystemITCase.java   |  25 ++---
 ...acyRowResource.java => LegacyRowExtension.java} |  16 +---
 .../planner/runtime/FileSystemITCaseBase.scala | 106 +
 .../batch/sql/BatchFileSystemITCaseBase.scala  |   6 +-
 .../batch/sql/FileSystemTestCsvITCase.scala|  16 ++--
 .../runtime/batch/table/AggregationITCase.scala|  10 +-
 .../planner/runtime/batch/table/CalcITCase.scala   |  51 +-
 .../runtime/batch/table/CorrelateITCase.scala  |  35 ---
 .../runtime/batch/table/GroupWindowITCase.scala|  54 +++
 .../planner/runtime/batch/table/JoinITCase.scala   |  12 ++-
 .../runtime/batch/table/LegacyLimitITCase.scala|  12 +--
 .../batch/table/LegacyTableSinkITCase.scala|  20 ++--
 .../planner/runtime/batch/table/LimitITCase.scala  |   4 +-
 .../runtime/batch/table/OverAggregateITCase.scala  |   6 +-
 .../runtime/batch/table/SetOperatorsITCase.scala   |  19 ++--
 .../planner/runtime/batch/table/SortITCase.scala   |  12 ++-
 .../runtime/batch/table/TableSinkITCase.scala  |  65 +++--
 .../stream/sql/StreamFileSystemITCaseBase.scala|  14 +--
 .../stream/sql/StreamFileSystemTestCsvITCase.scala |  15 ++-
 .../parameterized/NoOpTestExtension.java   |  56 +++
 25 files changed, 365 insertions(+), 291 deletions(-)
 copy 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/utils/{LegacyRowResource.java
 => LegacyRowExtension.java} (79%)
 create mode 100644 
flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/testutils/junit/extensions/parameterized/NoOpTestExtension.java



[flink] branch release-1.16 updated (1a3b539fa43 -> ffa58e1de3d)

2023-05-16 Thread guoweijie
This is an automated email from the ASF dual-hosted git repository.

guoweijie pushed a change to branch release-1.16
in repository https://gitbox.apache.org/repos/asf/flink.git


from 1a3b539fa43 [FLINK-32024][docs] Short code related to externalized 
connector retrieve version from its own data yaml.
 add ffa58e1de3d [FLINK-31418][network][tests] Fix unstable test case 
SortMergeResultPartitionReadSchedulerTest.testRequestBufferTimeout

No new revisions were added by this update.

Summary of changes:
 .../network/partition/SortMergeResultPartitionReadSchedulerTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[flink] branch master updated: [FLINK-31963][state] Fix rescaling bug in recovery from unaligned checkpoints. (#22584)

2023-05-16 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 354c0f455b9 [FLINK-31963][state] Fix rescaling bug in recovery from 
unaligned checkpoints. (#22584)
354c0f455b9 is described below

commit 354c0f455b92c083299d8028f161f0dd113ab614
Author: Stefan Richter 
AuthorDate: Tue May 16 13:06:05 2023 +0200

[FLINK-31963][state] Fix rescaling bug in recovery from unaligned 
checkpoints. (#22584)

This commit fixes problems in StateAssignmentOperation for unaligned 
checkpoints with stateless operators that have upstream operators with output 
partition state or downstream operators with input channel state.
---
 .../checkpoint/StateAssignmentOperation.java   |  28 ++--
 .../runtime/checkpoint/TaskStateAssignment.java|  19 ++-
 .../checkpoint/StateAssignmentOperationTest.java   | 178 -
 .../checkpointing/UnalignedCheckpointITCase.java   |  18 ++-
 .../UnalignedCheckpointRescaleITCase.java  | 137 ++--
 .../checkpointing/UnalignedCheckpointTestBase.java |  32 +++-
 6 files changed, 335 insertions(+), 77 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
index 681e0b18df1..e476c6b65ec 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java
@@ -136,19 +136,24 @@ public class StateAssignmentOperation {
 
 // repartition state
 for (TaskStateAssignment stateAssignment : vertexAssignments.values()) 
{
-if (stateAssignment.hasNonFinishedState) {
+if (stateAssignment.hasNonFinishedState
+// FLINK-31963: We need to run repartitioning for 
stateless operators that have
+// upstream output or downstream input states.
+|| stateAssignment.hasUpstreamOutputStates()
+|| stateAssignment.hasDownstreamInputStates()) {
 assignAttemptState(stateAssignment);
 }
 }
 
 // actually assign the state
 for (TaskStateAssignment stateAssignment : vertexAssignments.values()) 
{
-// If upstream has output states, even the empty task state should 
be assigned for the
-// current task in order to notify this task that the old states 
will send to it which
-// likely should be filtered.
+// If upstream has output states or downstream has input states, 
even the empty task
+// state should be assigned for the current task in order to 
notify this task that the
+// old states will send to it which likely should be filtered.
 if (stateAssignment.hasNonFinishedState
 || stateAssignment.isFullyFinished
-|| stateAssignment.hasUpstreamOutputStates()) {
+|| stateAssignment.hasUpstreamOutputStates()
+|| stateAssignment.hasDownstreamInputStates()) {
 assignTaskStateToExecutionJobVertices(stateAssignment);
 }
 }
@@ -345,9 +350,10 @@ public class StateAssignmentOperation {
 newParallelism)));
 }
 
-public > void 
reDistributeResultSubpartitionStates(
-TaskStateAssignment assignment) {
-if (!assignment.hasOutputState) {
+public void reDistributeResultSubpartitionStates(TaskStateAssignment 
assignment) {
+// FLINK-31963: We can skip this phase if there is no output state AND 
downstream has no
+// input states
+if (!assignment.hasOutputState && 
!assignment.hasDownstreamInputStates()) {
 return;
 }
 
@@ -394,7 +400,9 @@ public class StateAssignmentOperation {
 }
 
 public void reDistributeInputChannelStates(TaskStateAssignment 
stateAssignment) {
-if (!stateAssignment.hasInputState) {
+// FLINK-31963: We can skip this phase only if there is no input state 
AND upstream has no
+// output states
+if (!stateAssignment.hasInputState && 
!stateAssignment.hasUpstreamOutputStates()) {
 return;
 }
 
@@ -435,7 +443,7 @@ public class StateAssignmentOperation {
 : getPartitionState(
 inputOperatorState, 
InputChannelInfo::getGateIdx, gateIndex);
 final MappingBasedRepartitioner 
repartitioner =
-new MappingBasedRepartitioner(mapping);
+new MappingBasedRepartitioner<>(mapping);
 final Map> 
repartitioned =
 applyRepartitio
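The widened conditions in this diff read as one predicate: repartitioning (and state assignment) must run not only when a task has its own non-finished state, but also when a neighbour holds in-flight state from an unaligned checkpoint. A sketch with hypothetical boolean flags mirroring the `TaskStateAssignment` fields consulted above; this is an illustration of the fixed condition, not Flink code.

```java
public class RepartitionDecisionSketch {

    // Flags mirror the fields used in StateAssignmentOperation.
    static boolean needsRepartitioning(
            boolean hasNonFinishedState,
            boolean hasUpstreamOutputStates,
            boolean hasDownstreamInputStates) {
        // FLINK-31963: a stateless operator still participates when its
        // upstream holds output (subpartition) state or its downstream
        // holds input (channel) state.
        return hasNonFinishedState
                || hasUpstreamOutputStates
                || hasDownstreamInputStates;
    }

    public static void main(String[] args) {
        // Stateless operator, but upstream kept in-flight output state:
        System.out.println(needsRepartitioning(false, true, false));  // prints true
        // Fully stateless neighbourhood: nothing to repartition.
        System.out.println(needsRepartitioning(false, false, false)); // prints false
    }
}
```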

[flink] branch master updated: [FLINK-32052][table-runtime] Introduce left and right state retention time to StreamingJoinOperator

2023-05-16 Thread godfrey
This is an automated email from the ASF dual-hosted git repository.

godfrey pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 5ba3f2bdea6 [FLINK-32052][table-runtime] Introduce left and right 
state retention time to StreamingJoinOperator
5ba3f2bdea6 is described below

commit 5ba3f2bdea6fc7c9e58b50200806ea341b7dd3d3
Author: Jane Chan 
AuthorDate: Sun Apr 30 00:29:29 2023 +0800

[FLINK-32052][table-runtime] Introduce left and right state retention time 
to StreamingJoinOperator

This closes #22566
---
 .../plan/nodes/exec/stream/StreamExecJoin.java |   2 +
 .../join/stream/AbstractStreamingJoinOperator.java |   9 +-
 .../join/stream/StreamingJoinOperator.java |  14 +-
 .../join/stream/StreamingSemiAntiJoinOperator.java |  12 +-
 .../join/stream/StreamingJoinOperatorTest.java | 656 +
 .../join/stream/StreamingJoinOperatorTestBase.java | 142 +
 .../stream/StreamingSemiAntiJoinOperatorTest.java  | 294 +
 .../operators/sink/SinkUpsertMaterializerTest.java |  62 +-
 .../table/runtime/util/RowDataHarnessAssertor.java |  32 +
 9 files changed, 1164 insertions(+), 59 deletions(-)

diff --git 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecJoin.java
 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecJoin.java
index da2800d246f..47544eeb6f8 100644
--- 
a/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecJoin.java
+++ 
b/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/stream/StreamExecJoin.java
@@ -184,6 +184,7 @@ public class StreamExecJoin extends ExecNodeBase
 leftInputSpec,
 rightInputSpec,
 joinSpec.getFilterNulls(),
+minRetentionTime,
 minRetentionTime);
 } else {
 boolean leftIsOuter = joinType == FlinkJoinType.LEFT || joinType 
== FlinkJoinType.FULL;
@@ -199,6 +200,7 @@ public class StreamExecJoin extends ExecNodeBase
 leftIsOuter,
 rightIsOuter,
 joinSpec.getFilterNulls(),
+minRetentionTime,
 minRetentionTime);
 }
 
diff --git 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/AbstractStreamingJoinOperator.java
 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/AbstractStreamingJoinOperator.java
index 64ada0f0db4..c7dad646631 100644
--- 
a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/AbstractStreamingJoinOperator.java
+++ 
b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/AbstractStreamingJoinOperator.java
@@ -60,7 +60,8 @@ public abstract class AbstractStreamingJoinOperator extends 
AbstractStreamOperat
 
 private final boolean[] filterNullKeys;
 
-protected final long stateRetentionTime;
+protected final long leftStateRetentionTime;
+protected final long rightStateRetentionTime;
 
 protected transient JoinConditionWithNullFilters joinCondition;
 protected transient TimestampedCollector collector;
@@ -72,13 +73,15 @@ public abstract class AbstractStreamingJoinOperator extends 
AbstractStreamOperat
 JoinInputSideSpec leftInputSideSpec,
 JoinInputSideSpec rightInputSideSpec,
 boolean[] filterNullKeys,
-long stateRetentionTime) {
+long leftStateRetentionTime,
+long rightStateRetentionTime) {
 this.leftType = leftType;
 this.rightType = rightType;
 this.generatedJoinCondition = generatedJoinCondition;
 this.leftInputSideSpec = leftInputSideSpec;
 this.rightInputSideSpec = rightInputSideSpec;
-this.stateRetentionTime = stateRetentionTime;
+this.leftStateRetentionTime = leftStateRetentionTime;
+this.rightStateRetentionTime = rightStateRetentionTime;
 this.filterNullKeys = filterNullKeys;
 }
 
diff --git a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java
index d221c555996..308b98e2794 100644
--- a/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream/StreamingJoinOperator.java
+++ b/flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/join/stream

[flink] branch master updated (062ab75c3ff -> 1957ef5fabe)

2023-05-16 Thread dwysakowicz
This is an automated email from the ASF dual-hosted git repository.

dwysakowicz pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 062ab75c3ff [hotfix][python] Update the incomplete cloudpickle package
 add 1957ef5fabe [FLINK-31118][table] Add ARRAY_UNION function. (#22483)

No new revisions were added by this update.

Summary of changes:
 docs/data/sql_functions.yml| 155 +++--
 .../docs/reference/pyflink.table/expressions.rst   |   1 +
 flink-python/pyflink/table/expression.py   |   7 +
 .../flink/table/api/internal/BaseExpressions.java  |  11 ++
 .../functions/BuiltInFunctionDefinitions.java  |  11 ++
 .../table/types/inference/InputTypeStrategies.java |   9 ++
 ...tegy.java => CommonArrayInputTypeStrategy.java} |  14 +-
 .../CommonArrayInputTypeStrategyTest.java  |  55 
 .../functions/CollectionFunctionsITCase.java   |  66 -
 ...stinctFunction.java => ArrayUnionFunction.java} |  76 ++
 10 files changed, 295 insertions(+), 110 deletions(-)
 copy flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/{CommonInputTypeStrategy.java => CommonArrayInputTypeStrategy.java} (87%)
 create mode 100644 flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/CommonArrayInputTypeStrategyTest.java
 copy flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/{ArrayDistinctFunction.java => ArrayUnionFunction.java} (62%)
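For readers unfamiliar with the new function: `ARRAY_UNION(array1, array2)` returns the elements appearing in either input array, with duplicates removed. A plain-Python sketch of those semantics (the order-preserving, first-occurrence behavior shown here is an illustrative assumption, not a guarantee quoted from the Flink docs):

```python
def array_union(a, b):
    """Union of two arrays with duplicates removed (first occurrence kept)."""
    seen = set()
    out = []
    for x in a + b:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(array_union([1, 2, 2, 3], [3, 4]))  # [1, 2, 3, 4]
```

In Flink SQL the call would look like `SELECT ARRAY_UNION(ARRAY[1, 2, 2, 3], ARRAY[3, 4]);`.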



[flink-kubernetes-operator] branch release-1.5 updated: [docs] Update documentation to cover some recent improvements

2023-05-16 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch release-1.5
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/release-1.5 by this push:
 new dda70681 [docs] Update documentation to cover some recent improvements
dda70681 is described below

commit dda70681236b2b5bd12088261e969eb641047858
Author: Gyula Fora 
AuthorDate: Mon May 15 17:07:01 2023 +0200

[docs] Update documentation to cover some recent improvements
---
 README.md  |  1 +
 docs/content/docs/concepts/overview.md | 10 +++-
 .../content/docs/custom-resource/job-management.md |  5 +-
 docs/content/docs/custom-resource/overview.md  |  9 ++-
 docs/content/docs/custom-resource/pod-template.md  | 30 ++
 docs/content/docs/development/roadmap.md   |  4 +-
 docs/content/docs/operations/health.md | 69 ++
 7 files changed, 117 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 8f1fbcff..353aeb33 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ Check our [quick-start](https://nightlies.apache.org/flink/flink-kubernetes-oper
  - Upgrade, suspend and delete deployments
  - Full logging and metrics integration
  - Flexible deployments and native integration with Kubernetes tooling
+ - Flink Job Autoscaler
 
 For the complete feature-set please refer to our [documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/concepts/overview/).
 
diff --git a/docs/content/docs/concepts/overview.md b/docs/content/docs/concepts/overview.md
index a67e0912..d37d8d3d 100644
--- a/docs/content/docs/concepts/overview.md
+++ b/docs/content/docs/concepts/overview.md
@@ -36,7 +36,7 @@ Flink Kubernetes Operator aims to capture the responsibilities of a human operat
   - Stateful and stateless application upgrades
   - Triggering and managing savepoints
   - Handling errors, rolling-back broken upgrades
-- Multiple Flink version support: v1.13, v1.14, v1.15, v1.16
+- Multiple Flink version support: v1.13, v1.14, v1.15, v1.16, v1.17
- [Deployment Modes]({{< ref "docs/custom-resource/overview#application-deployments" >}}):
   - Application cluster
   - Session cluster
@@ -52,6 +52,10 @@ Flink Kubernetes Operator aims to capture the responsibilities of a human operat
- POD augmentation via [Pod Templates]({{< ref "docs/custom-resource/pod-template" >}})
   - Native Kubernetes POD definitions
   - Layering (Base/JobManager/TaskManager overrides)
+- [Job Autoscaler]({{< ref "docs/custom-resource/autoscaler" >}})
+  - Collect lag and utilization metrics
+  - Scale job vertices to the ideal parallelism
+  - Scale up and down as the load changes
 ### Operations
 - Operator [Metrics]({{< ref "docs/operations/metrics-logging#metrics" >}})
  - Utilizes the well-established [Flink Metric System](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics)
@@ -101,5 +105,5 @@ drwxr-xr-x 2   60 May 11 15:11 b6fb2a9c-d1cd-4e65-a9a1-e825c4b47543
 ```
 
 ### AuditUtils can log sensitive information present in the custom resources
-As reported in [FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink custom resources change the operator logs the change, which could include sensitive information. We suggest ingesting secrets to Flink containers during runtime to mitigate this. 
-Also note that anyone who has access to the custom resources already had access to the potentially sensitive information in question, but folks who only have access to the logs could also see them now. We are planning to introduce redaction rules to AuditUtils to improve this in a later release.
\ No newline at end of file
+As reported in [FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink custom resources change the operator logs the change, which could include sensitive information. We suggest ingesting secrets to Flink containers during runtime to mitigate this.
+Also note that anyone who has access to the custom resources already had access to the potentially sensitive information in question, but folks who only have access to the logs could also see them now. We are planning to introduce redaction rules to AuditUtils to improve this in a later release.
diff --git a/docs/content/docs/custom-resource/job-management.md b/docs/content/docs/custom-resource/job-management.md
index 20810969..45d08403 100644
--- a/docs/content/docs/custom-resource/job-management.md
+++ b/docs/content/docs/custom-resource/job-management.md
@@ -98,7 +98,7 @@ The `upgradeMode` setting controls both the stop and restore mechanisms as detai
 The three upgrade modes are intended to support different scenarios:
 
  1. **stateless**: Stateless application upgrades from empty state
- 2. **last-state**: Quick upgrades in any application state (even for failing j

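The autoscaler bullets in the overview diff above (collect lag and utilization metrics, scale vertices to the ideal parallelism) boil down to deriving a target parallelism per job vertex from observed load. A heavily simplified sketch of that calculation (the real operator additionally uses backlog estimates, stabilization windows, and configurable bounds; `target_utilization` is an illustrative parameter here, not the operator's actual config key):

```python
import math

def target_parallelism(current_parallelism, processing_rate, incoming_rate,
                       target_utilization=0.8, max_parallelism=128):
    """Pick a parallelism that absorbs the incoming rate at the target utilization."""
    if processing_rate <= 0:
        # No throughput observed yet; keep the current setting.
        return current_parallelism
    # Rate a single subtask can handle at full throttle.
    per_subtask_capacity = processing_rate / current_parallelism
    # Subtasks needed so each runs at roughly target_utilization of capacity.
    required = incoming_rate / (per_subtask_capacity * target_utilization)
    return max(1, min(max_parallelism, math.ceil(required)))

# 4 subtasks processing 800 rec/s in total, but 1000 rec/s arriving:
print(target_parallelism(4, 800, 1000))  # 7
```

Scaling down works the same way: when the incoming rate drops, `required` falls below the current parallelism and the vertex is shrunk.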
[flink] branch release-1.17 updated: [hotfix][python] Update the incomplete cloudpickle package

2023-05-16 Thread hxb
This is an automated email from the ASF dual-hosted git repository.

hxb pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new 5fdcfbb2bae [hotfix][python] Update the incomplete cloudpickle package
5fdcfbb2bae is described below

commit 5fdcfbb2baee66f760019a4426f27b93fa5c41bf
Author: huangxingbo 
AuthorDate: Tue May 16 15:37:17 2023 +0800

[hotfix][python] Update the incomplete cloudpickle package
---
 flink-python/lib/cloudpickle-2.2.0-src.zip | Bin 21387 -> 22446 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/flink-python/lib/cloudpickle-2.2.0-src.zip 
b/flink-python/lib/cloudpickle-2.2.0-src.zip
index 9909ae4d67c..e974db488b8 100644
Binary files a/flink-python/lib/cloudpickle-2.2.0-src.zip and 
b/flink-python/lib/cloudpickle-2.2.0-src.zip differ



[flink] branch master updated (4415c5150ed -> 062ab75c3ff)

2023-05-16 Thread hxb
This is an automated email from the ASF dual-hosted git repository.

hxb pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 4415c5150ed [FLINK-31609][yarn][test] Extend log whitelist for expected AMRM heartbeat interrupt
 add 062ab75c3ff [hotfix][python] Update the incomplete cloudpickle package

No new revisions were added by this update.

Summary of changes:
 flink-python/lib/cloudpickle-2.2.0-src.zip | Bin 21387 -> 22446 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)



[flink-kubernetes-operator] branch main updated: [docs] Update documentation to cover some recent improvements

2023-05-16 Thread gyfora
This is an automated email from the ASF dual-hosted git repository.

gyfora pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 80b13b52 [docs] Update documentation to cover some recent improvements
80b13b52 is described below

commit 80b13b52183e8a96c8a7de3fcca97b60f415a3a2
Author: Gyula Fora 
AuthorDate: Mon May 15 17:07:01 2023 +0200

[docs] Update documentation to cover some recent improvements
---
 README.md  |  1 +
 docs/content/docs/concepts/overview.md | 10 +++-
 .../content/docs/custom-resource/job-management.md |  5 +-
 docs/content/docs/custom-resource/overview.md  |  9 ++-
 docs/content/docs/custom-resource/pod-template.md  | 30 ++
 docs/content/docs/development/roadmap.md   |  4 +-
 docs/content/docs/operations/health.md | 69 ++
 7 files changed, 117 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 8f1fbcff..353aeb33 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ Check our [quick-start](https://nightlies.apache.org/flink/flink-kubernetes-oper
  - Upgrade, suspend and delete deployments
  - Full logging and metrics integration
  - Flexible deployments and native integration with Kubernetes tooling
+ - Flink Job Autoscaler
 
 For the complete feature-set please refer to our [documentation](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/concepts/overview/).
 
diff --git a/docs/content/docs/concepts/overview.md b/docs/content/docs/concepts/overview.md
index a67e0912..d37d8d3d 100644
--- a/docs/content/docs/concepts/overview.md
+++ b/docs/content/docs/concepts/overview.md
@@ -36,7 +36,7 @@ Flink Kubernetes Operator aims to capture the responsibilities of a human operat
   - Stateful and stateless application upgrades
   - Triggering and managing savepoints
   - Handling errors, rolling-back broken upgrades
-- Multiple Flink version support: v1.13, v1.14, v1.15, v1.16
+- Multiple Flink version support: v1.13, v1.14, v1.15, v1.16, v1.17
- [Deployment Modes]({{< ref "docs/custom-resource/overview#application-deployments" >}}):
   - Application cluster
   - Session cluster
@@ -52,6 +52,10 @@ Flink Kubernetes Operator aims to capture the responsibilities of a human operat
- POD augmentation via [Pod Templates]({{< ref "docs/custom-resource/pod-template" >}})
   - Native Kubernetes POD definitions
   - Layering (Base/JobManager/TaskManager overrides)
+- [Job Autoscaler]({{< ref "docs/custom-resource/autoscaler" >}})
+  - Collect lag and utilization metrics
+  - Scale job vertices to the ideal parallelism
+  - Scale up and down as the load changes
 ### Operations
 - Operator [Metrics]({{< ref "docs/operations/metrics-logging#metrics" >}})
  - Utilizes the well-established [Flink Metric System](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics)
@@ -101,5 +105,5 @@ drwxr-xr-x 2   60 May 11 15:11 b6fb2a9c-d1cd-4e65-a9a1-e825c4b47543
 ```
 
 ### AuditUtils can log sensitive information present in the custom resources
-As reported in [FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink custom resources change the operator logs the change, which could include sensitive information. We suggest ingesting secrets to Flink containers during runtime to mitigate this. 
-Also note that anyone who has access to the custom resources already had access to the potentially sensitive information in question, but folks who only have access to the logs could also see them now. We are planning to introduce redaction rules to AuditUtils to improve this in a later release.
\ No newline at end of file
+As reported in [FLINK-30306](https://issues.apache.org/jira/browse/FLINK-30306) when Flink custom resources change the operator logs the change, which could include sensitive information. We suggest ingesting secrets to Flink containers during runtime to mitigate this.
+Also note that anyone who has access to the custom resources already had access to the potentially sensitive information in question, but folks who only have access to the logs could also see them now. We are planning to introduce redaction rules to AuditUtils to improve this in a later release.
diff --git a/docs/content/docs/custom-resource/job-management.md b/docs/content/docs/custom-resource/job-management.md
index 20810969..45d08403 100644
--- a/docs/content/docs/custom-resource/job-management.md
+++ b/docs/content/docs/custom-resource/job-management.md
@@ -98,7 +98,7 @@ The `upgradeMode` setting controls both the stop and restore mechanisms as detai
 The three upgrade modes are intended to support different scenarios:
 
  1. **stateless**: Stateless application upgrades from empty state
- 2. **last-state**: Quick upgrades in any application state (even for failing jobs), does not