(flink) branch release-1.20 updated: [FLINK-36173][docs] Fix invalid link in checkpoint documentation

2024-08-30 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch release-1.20
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.20 by this push:
     new 55e4fca0b4d [FLINK-36173][docs] Fix invalid link in checkpoint documentation
55e4fca0b4d is described below

commit 55e4fca0b4de14c51aa05c862e1961ea00d6b536
Author: Gabor Somogyi 
AuthorDate: Fri Aug 30 19:41:23 2024 +0200

[FLINK-36173][docs] Fix invalid link in checkpoint documentation
---
 docs/layouts/shortcodes/generated/checkpointing_configuration.html  | 2 +-
 .../main/java/org/apache/flink/configuration/CheckpointingOptions.java  | 2 +-
 .../flink/streaming/api/environment/ExecutionCheckpointingOptions.java  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/checkpointing_configuration.html b/docs/layouts/shortcodes/generated/checkpointing_configuration.html
index a087b98bb13..1766e2d7589 100644
--- a/docs/layouts/shortcodes/generated/checkpointing_configuration.html
+++ b/docs/layouts/shortcodes/generated/checkpointing_configuration.html
@@ -18,7 +18,7 @@
 
         <tr>
             <td><h5>execution.checkpointing.checkpoints-after-tasks-finish</h5></td>
             <td style="word-wrap: break-word;">true</td>
             <td>Boolean</td>
-            <td>Feature toggle for enabling checkpointing even if some of tasks have finished. Before you enable it, please take a look at <a href="{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished-beta">the important considerations</a></td>
+            <td>Feature toggle for enabling checkpointing even if some of tasks have finished. Before you enable it, please take a look at <a href="{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished">the important considerations</a></td>
 
 
 execution.checkpointing.cleaner.parallel-mode
diff --git a/flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java b/flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java
index 9f210388967..f3f47a449ba 100644
--- a/flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java
+++ b/flink-core/src/main/java/org/apache/flink/configuration/CheckpointingOptions.java
@@ -665,7 +665,7 @@ public class CheckpointingOptions {
 "Feature toggle for enabling 
checkpointing even if some of tasks"
 + " have finished. Before 
you enable it, please take a look at %s ",
 link(
-
"{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished-beta",
+
"{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished",
 "the important 
considerations"))
 .build());
 
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/ExecutionCheckpointingOptions.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/ExecutionCheckpointingOptions.java
index c7b5676af11..a7a3129aca0 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/ExecutionCheckpointingOptions.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/ExecutionCheckpointingOptions.java
@@ -334,7 +334,7 @@ public class ExecutionCheckpointingOptions {
 "Feature toggle for enabling 
checkpointing even if some of tasks"
 + " have finished. Before 
you enable it, please take a look at %s ",
 link(
-
"{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished-beta",
+
"{{.Site.BaseURL}}{{.Site.LanguagePrefix}}/docs/dev/datastream/fault-tolerance/checkpointing/#checkpointing-with-parts-of-the-graph-finished",
 "the important 
considerations"))
 .build());
 



(flink) branch master updated (d8d69005033 -> 26ce997052b)

2024-08-30 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from d8d69005033 [FLINK-27885][tests][JUnit5 migration] Module: flink-csv
     add 26ce997052b [FLINK-36173][docs] Fix invalid link in checkpoint documentation

No new revisions were added by this update.

Summary of changes:
 docs/layouts/shortcodes/generated/checkpointing_configuration.html  | 2 +-
 .../main/java/org/apache/flink/configuration/CheckpointingOptions.java  | 2 +-
 .../flink/streaming/api/environment/ExecutionCheckpointingOptions.java  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)



(flink) branch master updated (0daca7b6db8 -> 9bcd8f4b8f4)

2024-08-26 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 0daca7b6db8 [FLINK-35771][s3] Limit the amount of work per s5cmd call
     add 9bcd8f4b8f4 [FLINK-36140] Log a warning when pods are terminated by kubernetes

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/flink/runtime/util/SignalHandler.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



(flink) branch master updated: [FLINK-35625][cli] Merge "flink run" and "flink run-application" functionality, deprecate "run-application"

2024-07-02 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new e56b54db40a [FLINK-35625][cli] Merge "flink run" and "flink run-application" functionality, deprecate "run-application"
e56b54db40a is described below

commit e56b54db40a2afac420d8d8952707c2644ba633a
Author: Ferenc Csaky 
AuthorDate: Tue Jul 2 12:22:57 2024 +0200

    [FLINK-35625][cli] Merge "flink run" and "flink run-application" functionality, deprecate "run-application"
---
 .../generated/deployment_configuration.html|   2 +-
 .../org/apache/flink/client/cli/CliFrontend.java   | 113 ++--
 .../apache/flink/client/cli/CliFrontendParser.java |   1 +
 .../org/apache/flink/client/cli/GenericCLI.java|  30 +++--
 .../flink/client/cli/CliFrontendRunTest.java   | 144 +
 .../client/cli/util/DummyClusterDescriptor.java|   2 +-
 .../flink/configuration/DeploymentOptions.java |   4 +-
 7 files changed, 215 insertions(+), 81 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/deployment_configuration.html b/docs/layouts/shortcodes/generated/deployment_configuration.html
index f3e08ecd696..e6843a17d39 100644
--- a/docs/layouts/shortcodes/generated/deployment_configuration.html
+++ b/docs/layouts/shortcodes/generated/deployment_configuration.html
@@ -60,7 +60,7 @@
             <td><h5>execution.target</h5></td>
             <td style="word-wrap: break-word;">(none)</td>
             <td>String</td>
-            <td>The deployment target for the execution. This can take one of the following values when calling <code>bin/flink run</code>: remote, local, yarn-per-job (deprecated), yarn-session, kubernetes-session. And one of the following values when calling <code>bin/flink run-application</code>: yarn-application, kubernetes-application</td>
+            <td>The deployment target for the execution. This can take one of the following values when calling <code>bin/flink run</code>: remote, local, yarn-application, yarn-per-job (deprecated), yarn-session, kubernetes-application, kubernetes-session. And one of the following values when calling <code>bin/flink run-application</code> (deprecated): yarn-application</td>
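With this change the application-mode targets are reachable from plain `run`; the following invocations are an illustrative sketch only (the jar path and target are placeholders, not taken from the commit):

```shell
# Deprecated spelling (still accepted, now logs a deprecation warning):
./bin/flink run-application --target kubernetes-application local:///opt/flink/usrlib/my-job.jar

# Preferred spelling after FLINK-35625:
./bin/flink run --target kubernetes-application local:///opt/flink/usrlib/my-job.jar
```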
 
 
diff --git a/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java b/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
index 228ab6c4e73..2a45dedea07 100644
--- a/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
+++ b/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
@@ -41,6 +41,7 @@ import org.apache.flink.client.program.ProgramParametrizationException;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.CoreOptions;
+import org.apache.flink.configuration.DeploymentOptions;
 import org.apache.flink.configuration.GlobalConfiguration;
 import org.apache.flink.configuration.JobManagerOptions;
 import org.apache.flink.configuration.RestOptions;
@@ -167,7 +168,10 @@ public class CliFrontend {
 //  Execute Actions
 // 

 
+@Deprecated
 protected void runApplication(String[] args) throws Exception {
+        LOG.warn(
+                "DEPRECATION WARNING: The 'run-application' option is deprecated and will be removed in the future. Please use 'run' instead.");
 LOG.info("Running 'run-application' command.");
 
         final Options commandOptions = CliFrontendParser.getRunCommandOptions();
@@ -178,40 +182,7 @@ public class CliFrontend {
 return;
 }
 
-final CustomCommandLine activeCommandLine =
-validateAndGetActiveCommandLine(checkNotNull(commandLine));
-
-final ApplicationDeployer deployer =
-new ApplicationClusterDeployer(clusterClientServiceLoader);
-
-final ProgramOptions programOptions;
-final Configuration effectiveConfiguration;
-
-// No need to set a jarFile path for Pyflink job.
-if (ProgramOptionsUtils.isPythonEntryPoint(commandLine)) {
-            programOptions = ProgramOptionsUtils.createPythonProgramOptions(commandLine);
-effectiveConfiguration =
-getEffectiveConfiguration(
-activeCommandLine,
-commandLine,
-programOptions,
-Collections.emptyList());
-} else {
-programOptions = new ProgramOptions(commandLine);
-programOptions.validate();
-            final URI uri = PackagedProgramUtils.resolveURI(programOptions.getJarFilePath());
-effectiveConfiguration =
-getEffectiveConfiguration(
-  

(flink) branch master updated: [FLINK-35371][security] Add configuration for SSL keystore and truststore type

2024-06-14 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 0919ff20b91 [FLINK-35371][security] Add configuration for SSL keystore and truststore type
0919ff20b91 is described below

commit 0919ff20b918736c86f6c3449b21b3f1528bb2c0
Author: ammar-master <35787811+ammar-mas...@users.noreply.github.com>
AuthorDate: Fri Jun 14 08:43:20 2024 -0700

    [FLINK-35371][security] Add configuration for SSL keystore and truststore type
---
 .../generated/security_configuration.html  | 24 +++
 .../shortcodes/generated/security_ssl_section.html | 24 +++
 .../flink/configuration/SecurityOptions.java   | 44 +++
 .../runtime/rpc/pekko/CustomSSLEngineProvider.java | 35 +++-
 .../apache/flink/runtime/rpc/pekko/PekkoUtils.java |  6 +++
 .../flink/runtime/rpc/pekko/PekkoUtilsTest.java| 14 +++
 .../org/apache/flink/runtime/net/SSLUtils.java | 17 +++-
 .../org/apache/flink/runtime/net/SSLUtilsTest.java | 49 ++
 8 files changed, 209 insertions(+), 4 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/security_configuration.html b/docs/layouts/shortcodes/generated/security_configuration.html
index 069825bdd65..ff19479042e 100644
--- a/docs/layouts/shortcodes/generated/security_configuration.html
+++ b/docs/layouts/shortcodes/generated/security_configuration.html
@@ -134,6 +134,12 @@
 String
 The secret to decrypt the keystore file for Flink's for Flink's internal endpoints (rpc, data transport, blob server).
 
+
+security.ssl.internal.keystore-type
+JVM default keystore type
+String
+The type of keystore for Flink's internal endpoints (rpc, data transport, blob server).
+
 
 security.ssl.internal.session-cache-size
 -1
@@ -158,6 +164,12 @@
 String
 The password to decrypt the truststore for Flink's internal endpoints (rpc, data transport, blob server).
 
+
+security.ssl.internal.truststore-type
+JVM default keystore type
+String
+The type of truststore for Flink's internal endpoints (rpc, data transport, blob server).
+
 
 security.ssl.protocol
 "TLSv1.2"
@@ -206,6 +218,12 @@
 String
 The secret to decrypt the keystore file for Flink's for Flink's external REST endpoints.
 
+
+security.ssl.rest.keystore-type
+JVM default keystore type
+String
+The type of the keystore for Flink's external REST endpoints.
+
 
 security.ssl.rest.truststore
 (none)
@@ -218,6 +236,12 @@
 String
 The password to decrypt the truststore for Flink's external REST endpoints.
 
+
+security.ssl.rest.truststore-type
+JVM default keystore type
+String
+The type of the truststore for Flink's external REST endpoints.
+
 
 security.ssl.verify-hostname
 true
diff --git a/docs/layouts/shortcodes/generated/security_ssl_section.html b/docs/layouts/shortcodes/generated/security_ssl_section.html
index 3a4e55a8e95..ad5c72b3cf2 100644
--- a/docs/layouts/shortcodes/generated/security_ssl_section.html
+++ b/docs/layouts/shortcodes/generated/security_ssl_section.html
@@ -44,6 +44,12 @@
 String
 The secret to decrypt the keystore file for Flink's for Flink's internal endpoints (rpc, data transport, blob server).
 
+
+security.ssl.internal.keystore-type
+JVM default keystore type
+String
+The type of keystore for Flink's internal endpoints (rpc, data transport, blob server).
+
 
 security.ssl.internal.truststore
 (none)
@@ -56,6 +62,12 @@
 String
 The password to decrypt the truststore for Flink's internal endpoints (rpc, data transport, blob server).
 
+
+security.ssl.internal.truststore-type
+JVM default keystore type
+String
+The type of truststore for Flink's internal endpoints (rpc, data transport, blob server).
+
 
 security.ssl.protocol
 "TLSv1.2"
@@ -98,6 +110,12 @@
 String
 The secret to decrypt the keystore file for Flink's for Flink's external REST endpoints.
 
+
+security.ssl.rest.keystore-type
+JVM defau
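The new options added by this commit all default to the JVM's default keystore type (typically JKS). A minimal flink-conf.yaml sketch switching both the internal and REST endpoints to PKCS12; the chosen type is illustrative only, any keystore type supported by the JVM should work:

```yaml
# Illustrative values, not taken from the commit.
security.ssl.internal.keystore-type: PKCS12
security.ssl.internal.truststore-type: PKCS12
security.ssl.rest.keystore-type: PKCS12
security.ssl.rest.truststore-type: PKCS12
```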

(flink) branch master updated: [FLINK-35525][yarn] Add a token services configuration to allow obtained token to be passed to Yarn AM

2024-06-10 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 74b100bcba3 [FLINK-35525][yarn] Add a token services configuration to allow obtained token to be passed to Yarn AM
74b100bcba3 is described below

commit 74b100bcba33ce18b21a44b26db7e162507f7830
Author: Zhen Wang <643348...@qq.com>
AuthorDate: Mon Jun 10 20:50:14 2024 +0800

    [FLINK-35525][yarn] Add a token services configuration to allow obtained token to be passed to Yarn AM
---
 .../generated/yarn_config_configuration.html   |  6 +++
 .../apache/flink/yarn/YarnClusterDescriptor.java   |  6 ++-
 .../yarn/configuration/YarnConfigOptions.java  | 10 
 .../flink/yarn/YarnClusterDescriptorTest.java  | 31 +++
 .../token/TestYarnAMDelegationTokenProvider.java   | 63 ++
 .../token/TestYarnAMDelegationTokenReceiver.java   | 36 +
 ...ink.core.security.token.DelegationTokenProvider | 16 ++
 ...ink.core.security.token.DelegationTokenReceiver | 16 ++
 8 files changed, 182 insertions(+), 2 deletions(-)

diff --git a/docs/layouts/shortcodes/generated/yarn_config_configuration.html b/docs/layouts/shortcodes/generated/yarn_config_configuration.html
index 41c32c77c94..cb0b0f935ba 100644
--- a/docs/layouts/shortcodes/generated/yarn_config_configuration.html
+++ b/docs/layouts/shortcodes/generated/yarn_config_configuration.html
@@ -152,6 +152,12 @@
 String
 The provided usrlib directory in remote. It should be pre-uploaded and world-readable. Flink will use it to exclude the local usrlib directory(i.e. usrlib/ under the parent directory of FLINK_LIB_DIR). Unlike yarn.provided.lib.dirs, YARN will not cache it on the nodes as it is for each application. An example could be hdfs://$namenode_address/path/of/flink/usrlib
 
+
+yarn.security.appmaster.delegation.token.services
+"hadoopfs"
+List<String>
+The delegation token provider services are allowed to pass obtained tokens to YARN application master. For backward compatibility to make log aggregation to work, we add tokens obtained by `hadoopfs` provider to AM by default.
+
 
 yarn.security.kerberos.localized-keytab-path
 "krb5.keytab"
diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
index 31bd574b024..bc65a5ee93d 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java
@@ -147,6 +147,7 @@ import static org.apache.flink.yarn.Utils.getPathFromLocalFilePathStr;
 import static org.apache.flink.yarn.Utils.getStartCommand;
 import static org.apache.flink.yarn.YarnConfigKeys.ENV_FLINK_CLASSPATH;
 import static org.apache.flink.yarn.YarnConfigKeys.LOCAL_RESOURCE_DESCRIPTOR_SEPARATOR;
+import static org.apache.flink.yarn.configuration.YarnConfigOptions.APP_MASTER_TOKEN_SERVICES;
 import static org.apache.flink.yarn.configuration.YarnConfigOptions.YARN_CONTAINER_START_COMMAND_TEMPLATE;
 
 /** The descriptor with deployment information for deploying a Flink cluster on Yarn. */
@@ -1348,7 +1349,8 @@ public class YarnClusterDescriptor implements ClusterDescriptor {
 });
 }
 
-    private void setTokensFor(ContainerLaunchContext containerLaunchContext, boolean fetchToken)
+    @VisibleForTesting
+    void setTokensFor(ContainerLaunchContext containerLaunchContext, boolean fetchToken)
 throws Exception {
 Credentials credentials = new Credentials();
 
@@ -1372,7 +1374,7 @@ public class YarnClusterDescriptor implements ClusterDescriptor {
 
         // This is here for backward compatibility to make log aggregation work
         for (Map.Entry e : container.getTokens().entrySet()) {
-            if (e.getKey().equals("hadoopfs")) {
+            if (flinkConfiguration.get(APP_MASTER_TOKEN_SERVICES).contains(e.getKey())) {
                 credentials.addAll(HadoopDelegationTokenConverter.deserialize(e.getValue()));
             }
         }
diff --git a/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java b/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
index 06ab7e7437d..5f45c1f86b4 100644
--- a/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
+++ b/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
@@ -364,6 +364,16 @@ public class YarnConfigOptions {
 + "resource directory. If set to false, 
Flink"

(flink-kubernetes-operator) branch main updated: [FLINK-35192] support jemalloc in image

2024-05-20 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 8b789eeb [FLINK-35192] support jemalloc in image
8b789eeb is described below

commit 8b789eeb0139bc491858cc0189e09512c1a73ed1
Author: chenyuzhi459 <553673...@qq.com>
AuthorDate: Tue May 21 14:22:35 2024 +0800

[FLINK-35192] support jemalloc in image
---
 Dockerfile   |  8 
 docker-entrypoint.sh | 21 +
 2 files changed, 29 insertions(+)

diff --git a/Dockerfile b/Dockerfile
index ac516aeb..7dc05a66 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -64,6 +64,14 @@ ARG SKIP_OS_UPDATE=true
 RUN if [ "$SKIP_OS_UPDATE" = "false" ]; then apt-get update; fi
 RUN if [ "$SKIP_OS_UPDATE" = "false" ]; then apt-get upgrade -y; fi
 
+ARG DISABLE_JEMALLOC=false
+# Install jemalloc
+RUN if [ "$DISABLE_JEMALLOC" = "false" ]; then \
+  apt-get update; \
+  apt-get -y install libjemalloc-dev; \
+  rm -rf /var/lib/apt/lists/*; \
+  fi
+
 USER flink
 ENTRYPOINT ["/docker-entrypoint.sh"]
 CMD ["help"]
diff --git a/docker-entrypoint.sh b/docker-entrypoint.sh
index 3343f2d6..7464645e 100755
--- a/docker-entrypoint.sh
+++ b/docker-entrypoint.sh
@@ -22,6 +22,27 @@ args=("$@")
 
 cd /flink-kubernetes-operator || exit
 
+maybe_enable_jemalloc() {
+if [ "${DISABLE_JEMALLOC:-false}" = "false" ]; then
+JEMALLOC_PATH="/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so"
+JEMALLOC_FALLBACK="/usr/lib/x86_64-linux-gnu/libjemalloc.so"
+if [ -f "$JEMALLOC_PATH" ]; then
+export LD_PRELOAD=$LD_PRELOAD:$JEMALLOC_PATH
+elif [ -f "$JEMALLOC_FALLBACK" ]; then
+export LD_PRELOAD=$LD_PRELOAD:$JEMALLOC_FALLBACK
+else
+if [ "$JEMALLOC_PATH" = "$JEMALLOC_FALLBACK" ]; then
+MSG_PATH=$JEMALLOC_PATH
+else
+MSG_PATH="$JEMALLOC_PATH and $JEMALLOC_FALLBACK"
+fi
+echo "WARNING: attempted to load jemalloc from $MSG_PATH but the 
library couldn't be found. glibc will be used instead."
+fi
+fi
+}
+
+maybe_enable_jemalloc
+
 if [ "$1" = "help" ]; then
 printf "Usage: $(basename "$0") (operator|webhook)\n"
 printf "Or $(basename "$0") help\n\n"



(flink) branch master updated: [FLINK-35302][rest] Ignore unknown fields in REST request deserialization

2024-05-13 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 36b1d2acd6d [FLINK-35302][rest] Ignore unknown fields in REST request deserialization
36b1d2acd6d is described below

commit 36b1d2acd6d736898f3ef27d78587bd9954fda82
Author: Juntao Hu 
AuthorDate: Mon May 13 15:22:51 2024 +0800

[FLINK-35302][rest] Ignore unknown fields in REST request deserialization
---
 .../runtime/rest/handler/AbstractHandler.java  |  2 +-
 .../runtime/rest/FileUploadHandlerITCase.java  |  9 ++--
 .../runtime/rest/handler/AbstractHandlerTest.java  | 63 ++
 3 files changed, 69 insertions(+), 5 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/AbstractHandler.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/AbstractHandler.java
index 55929e7eaea..61793602e21 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/AbstractHandler.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/AbstractHandler.java
@@ -76,7 +76,7 @@ public abstract class AbstractHandler<
 
 protected final Logger log = LoggerFactory.getLogger(getClass());
 
-    protected static final ObjectMapper MAPPER = RestMapperUtils.getStrictObjectMapper();
+    protected static final ObjectMapper MAPPER = RestMapperUtils.getFlexibleObjectMapper();
 
 /**
     * Other response payload overhead (in bytes). If we truncate response payload, we should leave
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rest/FileUploadHandlerITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rest/FileUploadHandlerITCase.java
index 45d14f1a444..f507a1469c8 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rest/FileUploadHandlerITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rest/FileUploadHandlerITCase.java
@@ -331,8 +331,8 @@ class FileUploadHandlerITCase {
 
fileHandler.getMessageHeaders().getTargetRestEndpointURL(),
 new MultipartUploadExtension.TestRequestBody());
 try (Response response = client.newCall(jsonRequest).execute()) {
-            // JSON payload did not match expected format
-            assertThat(response.code()).isEqualTo(HttpResponseStatus.BAD_REQUEST.code());
+            // explicitly rejected by the test handler implementation
+            assertThat(response.code()).isEqualTo(HttpResponseStatus.INTERNAL_SERVER_ERROR.code());
 }
 
 Request fileRequest =
@@ -347,8 +347,9 @@ class FileUploadHandlerITCase {
 
fileHandler.getMessageHeaders().getTargetRestEndpointURL(),
 new MultipartUploadExtension.TestRequestBody());
 try (Response response = client.newCall(mixedRequest).execute()) {
-            // JSON payload did not match expected format
-            assertThat(response.code()).isEqualTo(HttpResponseStatus.BAD_REQUEST.code());
+            // unknown field in TestRequestBody is ignored
+            assertThat(response.code())
+                    .isEqualTo(fileHandler.getMessageHeaders().getResponseStatusCode().code());
 }
 
 verifyNoFileIsRegisteredToDeleteOnExitHook();
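The behavioral change can also be seen from the outside: with the flexible mapper, a request body carrying fields the server does not know no longer fails with 400 Bad Request. A hypothetical curl call against a local JobManager; host, port, jar id, and the field names are placeholders, not from the commit:

```shell
# Before this change: 400 Bad Request because of the unknown "futureField".
# After this change: the unknown field is ignored and the request is processed.
curl -X POST http://localhost:8081/v1/jars/<jar-id>/run \
  -d '{"entryClass": "com.example.Main", "futureField": 1}'
```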
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/AbstractHandlerTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/AbstractHandlerTest.java
index 05907457b3d..0343c352d16 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/AbstractHandlerTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/AbstractHandlerTest.java
@@ -20,6 +20,7 @@ package org.apache.flink.runtime.rest.handler;
 
 import org.apache.flink.configuration.Configuration;
 import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.runtime.rest.FlinkHttpObjectAggregator;
 import org.apache.flink.runtime.rest.HttpMethodWrapper;
 import org.apache.flink.runtime.rest.RestClient;
 import org.apache.flink.runtime.rest.handler.router.RouteResult;
@@ -43,10 +44,14 @@ import org.apache.flink.util.concurrent.FutureUtils;
 
 import org.apache.flink.shaded.netty4.io.netty.buffer.Unpooled;
 import org.apache.flink.shaded.netty4.io.netty.channel.Channel;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelFuture;
 import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
+import org.apache.flink.shaded.netty4.io.netty.channel.ChannelPipeline;
 import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.DefaultFullHttpRequest;
 import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpMethod;
 import org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpRequest;
+imp

(flink) branch master updated (06fdc0155be -> 34a7734c489)

2024-02-29 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from 06fdc0155be [hotfix] Fix missing flink-end-to-end-tests-jdbc-driver dependency
 add 34a7734c489 [FLINK-20090][rest] Expose slot sharing group info in REST API

No new revisions were added by this update.

Summary of changes:
 .../client/program/rest/RestClusterClientTest.java | 78 ++
 .../src/test/resources/rest_api_v1.snapshot|  3 +
 .../executiongraph/AccessExecutionJobVertex.java   |  8 +++
 .../executiongraph/ArchivedExecutionGraph.java |  1 +
 .../executiongraph/ArchivedExecutionJobVertex.java | 11 +++
 .../runtime/executiongraph/ExecutionJobVertex.java |  1 +
 .../flink/runtime/instance/SlotSharingGroupId.java | 13 +++-
 .../rest/handler/job/JobDetailsHandler.java|  1 +
 .../runtime/rest/messages/job/JobDetailsInfo.java  | 20 ++
 ...er.java => SlotSharingGroupIDDeserializer.java} | 14 ++--
 ...izer.java => SlotSharingGroupIDSerializer.java} | 14 ++--
 .../rest/handler/job/JobExceptionsHandlerTest.java |  2 +
 .../job/JobVertexFlameGraphHandlerTest.java|  3 +
 .../SubtaskExecutionAttemptDetailsHandlerTest.java |  2 +
 .../rest/messages/job/JobDetailsInfoTest.java  |  2 +
 15 files changed, 159 insertions(+), 14 deletions(-)
 copy flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/{JobVertexIDDeserializer.java => SlotSharingGroupIDDeserializer.java} (71%)
 copy flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/json/{JobIDSerializer.java => SlotSharingGroupIDSerializer.java} (73%)



(flink-connector-shared-utils) branch ci_utils updated: [FLINK-34267][CI] Update miniconda install script to fix build on MacOS

2024-02-09 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch ci_utils
in repository 
https://gitbox.apache.org/repos/asf/flink-connector-shared-utils.git


The following commit(s) were added to refs/heads/ci_utils by this push:
     new e6e1426  [FLINK-34267][CI] Update miniconda install script to fix build on MacOS
e6e1426 is described below

commit e6e14268b8316352031b25f4b67ed64dc142b683
Author: Aleksandr Pilipenko <2481047+z3...@users.noreply.github.com>
AuthorDate: Fri Feb 9 09:59:50 2024 +

[FLINK-34267][CI] Update miniconda install script to fix build on MacOS
---
 python/lint-python.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/lint-python.sh b/python/lint-python.sh
index 1c0452b..89c2922 100755
--- a/python/lint-python.sh
+++ b/python/lint-python.sh
@@ -185,8 +185,8 @@ function install_wget() {
 # some packages including checks such as tox and flake8.
 
 function install_miniconda() {
-    OS_TO_CONDA_URL=("https://repo.continuum.io/miniconda/Miniconda3-4.7.10-MacOSX-x86_64.sh" \
-                     "https://repo.continuum.io/miniconda/Miniconda3-4.7.10-Linux-x86_64.sh")
+    OS_TO_CONDA_URL=("https://repo.continuum.io/miniconda/Miniconda3-4.7.12.1-MacOSX-x86_64.sh" \
+                     "https://repo.continuum.io/miniconda/Miniconda3-4.7.12.1-Linux-x86_64.sh")
 if [ ! -f "$CONDA_INSTALL" ]; then
 print_function "STEP" "download miniconda..."
 download ${OS_TO_CONDA_URL[$1]} $CONDA_INSTALL_SH



(flink-kubernetes-operator) branch main updated: [FLINK-34198] Remove e2e test operator log error check

2024-01-22 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 31d01f24 [FLINK-34198] Remove e2e test operator log error check
31d01f24 is described below

commit 31d01f246d8a344b560aab1653b7aba561baea26
Author: Gabor Somogyi 
AuthorDate: Mon Jan 22 17:33:07 2024 +0100

[FLINK-34198] Remove e2e test operator log error check
---
 e2e-tests/test_application_kubernetes_ha.sh |  2 --
 e2e-tests/test_application_operations.sh|  2 --
 e2e-tests/test_autoscaler.sh|  2 --
 e2e-tests/test_multi_sessionjob.sh  |  2 --
 e2e-tests/test_sessionjob_kubernetes_ha.sh  |  2 --
 e2e-tests/test_sessionjob_operations.sh |  2 --
 e2e-tests/utils.sh  | 31 -
 7 files changed, 43 deletions(-)

diff --git a/e2e-tests/test_application_kubernetes_ha.sh b/e2e-tests/test_application_kubernetes_ha.sh
index 3c1a4d82..1797b29a 100755
--- a/e2e-tests/test_application_kubernetes_ha.sh
+++ b/e2e-tests/test_application_kubernetes_ha.sh
@@ -47,7 +47,5 @@ wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for 
job" ${TIMEOUT} || e
 wait_for_status flinkdep/flink-example-statemachine 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
 wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' 
RUNNING ${TIMEOUT} || exit 1
 
-check_operator_log_for_errors '|grep -v "REST service in session cluster timed out"' || exit 1
-
 echo "Successfully run the Flink Kubernetes application HA test"
 
diff --git a/e2e-tests/test_application_operations.sh b/e2e-tests/test_application_operations.sh
index d69da980..22e2d179 100755
--- a/e2e-tests/test_application_operations.sh
+++ b/e2e-tests/test_application_operations.sh
@@ -67,6 +67,4 @@ wait_for_status flinkdep/flink-example-statemachine 
'.status.jobManagerDeploymen
 wait_for_status flinkdep/flink-example-statemachine '.status.jobStatus.state' 
RUNNING ${TIMEOUT} || exit 1
 assert_available_slots 1 $CLUSTER_ID
 
-check_operator_log_for_errors || exit 1
-
 echo "Successfully run the last-state upgrade test"
diff --git a/e2e-tests/test_autoscaler.sh b/e2e-tests/test_autoscaler.sh
index 49767d64..4afa952b 100644
--- a/e2e-tests/test_autoscaler.sh
+++ b/e2e-tests/test_autoscaler.sh
@@ -48,6 +48,4 @@ else
 fi
 wait_for_status $APPLICATION_IDENTIFIER '.status.lifecycleState' STABLE 
${TIMEOUT} || exit 1
 
-check_operator_log_for_errors || exit 1
-
 echo "Successfully run the autoscaler test"
diff --git a/e2e-tests/test_multi_sessionjob.sh b/e2e-tests/test_multi_sessionjob.sh
index 09862db5..59990870 100755
--- a/e2e-tests/test_multi_sessionjob.sh
+++ b/e2e-tests/test_multi_sessionjob.sh
@@ -38,7 +38,6 @@ jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
 wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || 
exit 1
 wait_for_status $SESSION_CLUSTER_IDENTIFIER 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
 wait_for_status $SESSION_JOB_IDENTIFIER '.status.jobStatus.state' RUNNING 
${TIMEOUT} || exit 1
-check_operator_log_for_errors || exit 1
 echo "Flink Session Job is running properly"
 
 # Current namespace: flink
@@ -49,5 +48,4 @@ jm_pod_name=$(get_jm_pod_name $CLUSTER_ID)
 wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || 
exit 1
 wait_for_status $SESSION_CLUSTER_IDENTIFIER 
'.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
 wait_for_status $SESSION_JOB_IDENTIFIER '.status.jobStatus.state' RUNNING 
${TIMEOUT} || exit 1
-check_operator_log_for_errors || exit 1
 echo "Flink Session Job is running properly"
diff --git a/e2e-tests/test_sessionjob_kubernetes_ha.sh b/e2e-tests/test_sessionjob_kubernetes_ha.sh
index 37c8c37e..0ad55b12 100755
--- a/e2e-tests/test_sessionjob_kubernetes_ha.sh
+++ b/e2e-tests/test_sessionjob_kubernetes_ha.sh
@@ -48,7 +48,5 @@ wait_for_logs $jm_pod_name "Completed checkpoint [0-9]+ for job" ${TIMEOUT} || e
 wait_for_status $SESSION_CLUSTER_IDENTIFIER '.status.jobManagerDeploymentStatus' READY ${TIMEOUT} || exit 1
 wait_for_status $SESSION_JOB_IDENTIFIER '.status.jobStatus.state' RUNNING ${TIMEOUT} || exit 1
 
-check_operator_log_for_errors '|grep -v "REST service in session cluster timed out"' || exit 1
-
 echo "Successfully run the Flink Session Job HA test"
 
diff --git a/e2e-tests/test_sessionjob_operations.sh b/e2e-tests/test_sessionjob_operations.sh
index c230af8c..b1c88fc2 100755
--- a/e2e-tests/test_sessionjob_operations.sh
+++ b/e2e-tests/test_sessionjob_operations.sh
@@ -79,5 +79,3 @@ wait_for_jobmanage

(flink) branch master updated: [FLINK-33268][rest] Skip unknown fields in REST response deserialization

2024-01-15 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 19cb9de5c54 [FLINK-33268][rest] Skip unknown fields in REST response deserialization
19cb9de5c54 is described below

commit 19cb9de5c54b9535be15ca850f5e1ebd2e21c244
Author: Gabor Somogyi 
AuthorDate: Mon Jan 15 09:54:36 2024 +0100

[FLINK-33268][rest] Skip unknown fields in REST response deserialization
---
 .../org/apache/flink/runtime/rest/RestClient.java  | 47 +++--
 .../flink/runtime/rest/util/RestMapperUtils.java   | 25 +--
 .../messages/webmonitor/JobDetailsTest.java| 81 ++
 .../apache/flink/runtime/rest/RestClientTest.java  | 23 ++
 4 files changed, 119 insertions(+), 57 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
index f92fbbfdc62..bf882601858 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java
@@ -121,6 +121,8 @@ public class RestClient implements AutoCloseableAsync {
 private static final Logger LOG = LoggerFactory.getLogger(RestClient.class);
 
 private static final ObjectMapper objectMapper = RestMapperUtils.getStrictObjectMapper();
+private static final ObjectMapper flexibleObjectMapper =
+RestMapperUtils.getFlexibleObjectMapper();
 
 // used to open connections to a rest server endpoint
 private final Executor executor;
@@ -632,35 +634,34 @@ public class RestClient implements AutoCloseableAsync {
 CompletableFuture<P> responseFuture = new CompletableFuture<>();
 final JsonParser jsonParser = objectMapper.treeAsTokens(rawResponse.json);
 try {
-P response = objectMapper.readValue(jsonParser, responseType);
-responseFuture.complete(response);
-} catch (IOException originalException) {
-// the received response did not matched the expected response type
-
-// lets see if it is an ErrorResponse instead
-try {
+// We make sure it fits to ErrorResponseBody, this condition is enforced by test in
+// RestClientTest
+if (rawResponse.json.size() == 1 && rawResponse.json.has("errors")) {
 ErrorResponseBody error =
 objectMapper.treeToValue(rawResponse.getJson(), ErrorResponseBody.class);
 responseFuture.completeExceptionally(
 new RestClientException(
 error.errors.toString(), rawResponse.getHttpResponseStatus()));
-} catch (JsonProcessingException jpe2) {
-// if this fails it is either the expected type or response type was wrong, most
-// likely caused
-// by a client/search MessageHeaders mismatch
-LOG.error(
-"Received response was neither of the expected type ({}) nor an error. Response={}",
-responseType,
-rawResponse,
-jpe2);
-responseFuture.completeExceptionally(
-new RestClientException(
-"Response was neither of the expected type("
-+ responseType
-+ ") nor an error.",
-originalException,
-rawResponse.getHttpResponseStatus()));
+} else {
+P response = flexibleObjectMapper.readValue(jsonParser, responseType);
+responseFuture.complete(response);
 }
+} catch (IOException ex) {
+// if this fails it is either the expected type or response type was wrong, most
+// likely caused
+// by a client/search MessageHeaders mismatch
+LOG.error(
+"Received response was neither of the expected type ({}) nor an error. Response={}",
+responseType,
+rawResponse,
+ex);
+responseFuture.completeExceptionally(
+new RestClientException(
+"Response was neither of the expected type("
++ responseType
++ ") nor an error.",
+ex,
+rawResponse.getHttpResponseStatus()));
 }
 return responseFuture;
 }
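The reworked flow above inverts the old try-parse-then-fallback order: a body whose only key is `errors` is routed to the error path up front, and everything else is deserialized with a flexible mapper that skips unknown fields, so a newer server can add response fields without breaking older clients. A rough, hypothetical Python analogue of that dispatch (the names `JobDetails`, `RestError`, and `parse_response` are illustrative, not Flink API):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class JobDetails:
    jid: str = ""
    name: str = ""

class RestError(Exception):
    """Raised when the server returned an error payload."""

def parse_response(raw: str, response_type):
    """Mirror of the RestClient logic: a payload whose only key is
    'errors' is treated as an error response; anything else is
    deserialized leniently, silently dropping unknown fields."""
    payload = json.loads(raw)
    if len(payload) == 1 and "errors" in payload:
        raise RestError(str(payload["errors"]))
    known = {f.name for f in fields(response_type)}
    # "flexible" deserialization: keep only the fields we know about
    return response_type(**{k: v for k, v in payload.items() if k in known})
```

With this shape a response carrying a field the client does not know about still parses, which is the compatibility goal of FLINK-33268.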
diff --git 
a/flink-runtime/src/main/java/org/apa

(flink-connector-kafka) branch main updated: [FLINK-33559] Externalize Kafka Python connector code

2023-12-11 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-connector-kafka.git


The following commit(s) were added to refs/heads/main by this push:
 new c38a0406 [FLINK-33559] Externalize Kafka Python connector code
c38a0406 is described below

commit c38a0406104646a7ea8199bb64244310e344ce2b
Author: pvary 
AuthorDate: Mon Dec 11 09:40:50 2023 +0100

[FLINK-33559] Externalize Kafka Python connector code
---
 .github/workflows/push_pr.yml  |8 +
 .gitignore |   18 +-
 .../push_pr.yml => flink-python/MANIFEST.in|   16 +-
 flink-python/README.txt|   14 +
 flink-python/dev/integration_test.sh   |   54 +
 flink-python/pom.xml   |  222 
 .../pyflink/datastream/connectors/kafka.py | 1163 
 .../datastream/connectors/tests/test_kafka.py  |  669 +++
 flink-python/pyflink/pyflink_gateway_server.py |  288 +
 flink-python/setup.py  |  158 +++
 flink-python/tox.ini   |   51 +
 pom.xml|1 +
 12 files changed, 2648 insertions(+), 14 deletions(-)

diff --git a/.github/workflows/push_pr.yml b/.github/workflows/push_pr.yml
index ddc50ab8..8f53a5bd 100644
--- a/.github/workflows/push_pr.yml
+++ b/.github/workflows/push_pr.yml
@@ -29,3 +29,11 @@ jobs:
 uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
 with:
   flink_version: ${{ matrix.flink }}
+
+  python_test:
+strategy:
+  matrix:
+flink: [ 1.17.1, 1.18.0 ]
+uses: apache/flink-connector-shared-utils/.github/workflows/python_ci.yml@ci_utils
+with:
+  flink_version: ${{ matrix.flink }}
diff --git a/.gitignore b/.gitignore
index 5f0068cd..901fd674 100644
--- a/.gitignore
+++ b/.gitignore
@@ -35,4 +35,20 @@ out/
 tools/flink
 tools/flink-*
 tools/releasing/release
-tools/japicmp-output
\ No newline at end of file
+tools/japicmp-output
+
+# Generated file, do not store in git
+flink-python/pyflink/datastream/connectors/kafka_connector_version.py
+flink-python/apache_flink_connectors_kafka.egg-info/
+flink-python/.tox/
+flink-python/build
+flink-python/dist
+flink-python/dev/download
+flink-python/dev/.conda/
+flink-python/dev/log/
+flink-python/dev/.stage.txt
+flink-python/dev/install_command.sh
+flink-python/dev/lint-python.sh
+flink-python/dev/build-wheels.sh
+flink-python/dev/glibc_version_fix.h
+flink-python/dev/dev-requirements.txt
diff --git a/.github/workflows/push_pr.yml b/flink-python/MANIFEST.in
similarity index 73%
copy from .github/workflows/push_pr.yml
copy to flink-python/MANIFEST.in
index ddc50ab8..3578d2df 100644
--- a/.github/workflows/push_pr.yml
+++ b/flink-python/MANIFEST.in
@@ -16,16 +16,6 @@
 # limitations under the License.
 

 
-name: CI
-on: [push, pull_request]
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-jobs:
-  compile_and_test:
-strategy:
-  matrix:
-flink: [ 1.17.1, 1.18.0 ]
-uses: apache/flink-connector-shared-utils/.github/workflows/ci.yml@ci_utils
-with:
-  flink_version: ${{ matrix.flink }}
+graft pyflink
+global-exclude *.py[cod] __pycache__ .DS_Store
+
diff --git a/flink-python/README.txt b/flink-python/README.txt
new file mode 100644
index ..a12c13e5
--- /dev/null
+++ b/flink-python/README.txt
@@ -0,0 +1,14 @@
+This is official Apache Flink Kafka Python connector.
+
+For the latest information about Flink connector, please visit our website at:
+
+   https://flink.apache.org
+
+and our GitHub Account for Kafka connector
+
+   https://github.com/apache/flink-connector-kafka
+
+If you have any questions, ask on our Mailing lists:
+
+   u...@flink.apache.org
+   d...@flink.apache.org
diff --git a/flink-python/dev/integration_test.sh b/flink-python/dev/integration_test.sh
new file mode 100755
index ..19816725
--- /dev/null
+++ b/flink-python/dev/integration_test.sh
@@ -0,0 +1,54 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an 

(flink-connector-shared-utils) branch ci_utils updated: [FLINK-33556][CI] Test infrastructure for externalized python code

2023-12-07 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch ci_utils
in repository https://gitbox.apache.org/repos/asf/flink-connector-shared-utils.git


The following commit(s) were added to refs/heads/ci_utils by this push:
 new 7691962  [FLINK-33556][CI] Test infrastructure for externalized python code
7691962 is described below

commit 7691962b8031536a2baa4d39252a38198ba91dc5
Author: pvary 
AuthorDate: Thu Dec 7 10:33:52 2023 +0100

[FLINK-33556][CI] Test infrastructure for externalized python code
---
 .github/workflows/python_ci.yml |  86 
 python/README.md|   6 +
 python/build-wheels.sh  |  52 +++
 python/glibc_version_fix.h  |  17 +
 python/install_command.sh   |  31 ++
 python/lint-python.sh   | 876 
 6 files changed, 1068 insertions(+)

diff --git a/.github/workflows/python_ci.yml b/.github/workflows/python_ci.yml
new file mode 100644
index 000..89e10d9
--- /dev/null
+++ b/.github/workflows/python_ci.yml
@@ -0,0 +1,86 @@
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+on:
+  workflow_call:
+inputs:
+  flink_version:
+description: "Flink version to test against."
+required: true
+type: string
+  timeout_global:
+description: "The timeout in minutes for the entire workflow."
+required: false
+type: number
+default: 80
+  timeout_test:
+description: "The timeout in minutes for the test compile&step."
+required: false
+type: number
+default: 50
+  connector_branch:
+description: "Branch that need to be checked out"
+required: false
+type: string
+
+jobs:
+  python_test:
+runs-on: ubuntu-latest
+timeout-minutes: ${{ inputs.timeout_global }}
+env:
+  MVN_COMMON_OPTIONS: -U -B --no-transfer-progress -Dflink.version=${{ inputs.flink_version }} -DskipTests
+  MVN_CONNECTION_OPTIONS: -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
+  MVN_BUILD_OUTPUT_FILE: "/tmp/mvn_build_output.out"
+steps:
+  - run: echo "Running CI pipeline for JDK version 8"
+
+  - name: Check out repository code
+uses: actions/checkout@v3
+with:
+  ref: "${{ inputs.connector_branch }}"
+
+  - name: Set JDK
+uses: actions/setup-java@v3
+with:
+  java-version: 8
+  distribution: 'temurin'
+  cache: 'maven'
+
+  - name: Set Maven 3.8.6
+uses: stCarolas/setup-maven@v4.5
+with:
+  maven-version: 3.8.6
+
+  - name: Compile
+timeout-minutes: ${{ inputs.timeout_test }}
+run: |
+  set -o pipefail
+
+  mvn clean install ${MVN_COMMON_OPTIONS} \
+${{ env.MVN_CONNECTION_OPTIONS }} \
+-Dlog4j.configurationFile=file://$(pwd)/tools/ci/log4j.properties \
+| tee ${{ env.MVN_BUILD_OUTPUT_FILE }}
+
+  - name: Run Python test
+timeout-minutes: ${{ inputs.timeout_test }}
+run: |
+  set -o pipefail
+
+  cd flink-python
+  chmod a+x dev/* 
+  ./dev/lint-python.sh -e mypy,sphinx | tee ${{ env.MVN_BUILD_OUTPUT_FILE }}
diff --git a/python/README.md b/python/README.md
new file mode 100644
index 000..0dc8217
--- /dev/null
+++ b/python/README.md
@@ -0,0 +1,6 @@
+This directory contains commonly used scripts for testing and creating python packages for connectors.
+
+The original version of the files are based on
+https://github.com/apache/flink/tree/release-1.17.2/flink-python/dev.
+
+Created FLINK-33762 to make these scripts versioned to allow backward incompatible changes in the future.
diff --git a/python/build-wheels.sh b/python/build-wheels.sh
new file mode 100755
index 000..7f18a91
--- /dev/null
+++ b/python/build-wheels.sh
@@ -0,0 +1,52 @@

(flink) branch master updated (4eb5b588e4d -> e4f389895d9)

2023-12-07 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


 from 4eb5b588e4d [FLINK-33726][sql-client] print time cost for streaming queries
 add e4f389895d9 [FLINK-33556] Test infrastructure for externalized python code

No new revisions were added by this update.

Summary of changes:
 flink-python/pyflink/pyflink_gateway_server.py | 17 +++--
 1 file changed, 11 insertions(+), 6 deletions(-)



(flink) branch master updated: [FLINK-33515][python] Stream python process output to log instead of collecting it in memory

2023-11-13 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new caa324a254b [FLINK-33515][python] Stream python process output to log instead of collecting it in memory
caa324a254b is described below

commit caa324a254b3b5c1b821604a2537fa679d0a7722
Author: Gabor Somogyi 
AuthorDate: Mon Nov 13 11:59:14 2023 +0100

[FLINK-33515][python] Stream python process output to log instead of collecting it in memory
---
 .../org/apache/flink/client/python/PythonDriver.java | 20 +++-
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/flink-python/src/main/java/org/apache/flink/client/python/PythonDriver.java b/flink-python/src/main/java/org/apache/flink/client/python/PythonDriver.java
index ed439415418..005fb04af0e 100644
--- a/flink-python/src/main/java/org/apache/flink/client/python/PythonDriver.java
+++ b/flink-python/src/main/java/org/apache/flink/client/python/PythonDriver.java
@@ -108,20 +108,14 @@ public final class PythonDriver {
 LOG.info(
 "--- Python Process Started --");
 // print the python process output to stdout and log file
-final StringBuilder sb = new StringBuilder();
-try {
-while (true) {
-String line = in.readLine();
-if (line == null) {
-break;
-} else {
-System.out.println(line);
-sb.append(line);
-sb.append("\n");
-}
+while (true) {
+String line = in.readLine();
+if (line == null) {
+break;
+} else {
+System.out.println(line);
+LOG.info(line);
 }
-} finally {
-LOG.info(sb.toString());
 }
 int exitCode = pythonProcess.waitFor();
 LOG.info(
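The patch above replaces the `StringBuilder` buffer with immediate per-line forwarding, so the driver's memory use no longer grows with the child process's output. A hypothetical Python sketch of the same streaming pattern (`stream_process_output` is illustrative, not Flink code):

```python
import subprocess
import sys

def stream_process_output(cmd, log=print):
    """Stream a child process's stdout line by line instead of
    accumulating it in memory first (the pattern behind FLINK-33515).
    Each line is echoed and logged as soon as it is produced, so a
    chatty process cannot exhaust memory."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:  # iterating reads one line at a time
        line = line.rstrip("\n")
        sys.stdout.write(line + "\n")  # echo to stdout
        log(line)                      # and forward to the logger
    return proc.wait()
```

The key design point mirrors the commit: nothing is retained after a line has been logged, trading the single end-of-process log record for bounded memory.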



[flink] branch master updated: [FLINK-33172][python] Bump numpy version

2023-10-01 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ab26175a82a [FLINK-33172][python] Bump numpy version
ab26175a82a is described below

commit ab26175a82a836da9edfaea6325038541e492a3e
Author: Gabor Somogyi 
AuthorDate: Sun Oct 1 13:12:06 2023 +0200

[FLINK-33172][python] Bump numpy version
---
 flink-python/dev/dev-requirements.txt | 2 +-
 flink-python/setup.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-python/dev/dev-requirements.txt b/flink-python/dev/dev-requirements.txt
index bfede381ab1..167b12947c5 100755
--- a/flink-python/dev/dev-requirements.txt
+++ b/flink-python/dev/dev-requirements.txt
@@ -24,7 +24,7 @@ avro-python3>=1.8.1,!=1.9.2
 pandas>=1.3.0
 pyarrow>=5.0.0
 pytz>=2018.3
-numpy>=1.21.4
+numpy>=1.22.4
 fastavro>=1.1.0,!=1.8.0
 grpcio>=1.29.0,<=1.48.2
 grpcio-tools>=1.29.0,<=1.48.2
diff --git a/flink-python/setup.py b/flink-python/setup.py
index 4303cbf5a63..bfd821b6509 100644
--- a/flink-python/setup.py
+++ b/flink-python/setup.py
@@ -321,7 +321,7 @@ try:
 'cloudpickle>=2.2.0', 'avro-python3>=1.8.1,!=1.9.2',
 'pytz>=2018.3', 'fastavro>=1.1.0,!=1.8.0', 'requests>=2.26.0',
 'protobuf>=3.19.0',
-'numpy>=1.21.4',
+'numpy>=1.22.4',
 'pandas>=1.3.0',
 'pyarrow>=5.0.0',
 'pemja==0.3.0;platform_system != "Windows"',



[flink] branch master updated: [FLINK-32223][runtime][security] Add Hive delegation token support

2023-09-29 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e4c15aa8e67 [FLINK-32223][runtime][security] Add Hive delegation token support
e4c15aa8e67 is described below

commit e4c15aa8e67c28419914fefd9dab83f1e93df2ac
Author: jiaoqingbo <1178404...@qq.com>
AuthorDate: Fri Sep 29 16:28:33 2023 +0800

[FLINK-32223][runtime][security] Add Hive delegation token support
---
 .../HiveServer2DelegationTokenIdentifier.java  |  28 +--
 .../token/HiveServer2DelegationTokenProvider.java  | 226 +
 .../token/HiveServer2DelegationTokenReceiver.java  |  24 +--
 ...ink.core.security.token.DelegationTokenProvider |  16 ++
 ...ink.core.security.token.DelegationTokenReceiver |  16 ++
 .../HiveServer2DelegationTokenProviderITCase.java  | 174 
 .../src/test/resources/hive-site.xml   |   2 +-
 .../{ => test-hive-delegation-token}/hive-site.xml |  30 +--
 .../flink/runtime/hadoop/HadoopUserUtils.java  |  36 
 .../hadoop/HadoopFSDelegationTokenProvider.java|  31 +--
 .../runtime/hadoop/HadoopUserUtilsITCase.java  |  46 +
 .../HadoopFSDelegationTokenProviderITCase.java |  46 -
 12 files changed, 543 insertions(+), 132 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/hadoop/HadoopUserUtils.java b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenIdentifier.java
similarity index 53%
copy from flink-runtime/src/main/java/org/apache/flink/runtime/hadoop/HadoopUserUtils.java
copy to flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenIdentifier.java
index c360b5852c1..94ce2c9adf7 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/hadoop/HadoopUserUtils.java
+++ b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenIdentifier.java
@@ -16,23 +16,25 @@
  * limitations under the License.
  */
 
-package org.apache.flink.runtime.hadoop;
+package org.apache.flink.table.security.token;
 
-import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.flink.annotation.Internal;
 
-/**
- * Utility class for working with Hadoop user related classes. This should only be used if Hadoop is
- * on the classpath.
- */
-public class HadoopUserUtils {
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
+
+/** Delegation token identifier for HiveServer2. */
+@Internal
+public class HiveServer2DelegationTokenIdentifier extends AbstractDelegationTokenIdentifier {
+public static final Text HIVE_DELEGATION_KIND = new Text("HIVE_DELEGATION_TOKEN");
+
+public HiveServer2DelegationTokenIdentifier() {}
 
-public static boolean isProxyUser(UserGroupInformation ugi) {
-return ugi.getAuthenticationMethod() == UserGroupInformation.AuthenticationMethod.PROXY;
+public HiveServer2DelegationTokenIdentifier(Text owner, Text renewer, Text realUser) {
+super(owner, renewer, realUser);
 }
 
-public static boolean hasUserKerberosAuthMethod(UserGroupInformation ugi) {
-return UserGroupInformation.isSecurityEnabled()
-&& ugi.getAuthenticationMethod()
-== UserGroupInformation.AuthenticationMethod.KERBEROS;
+public Text getKind() {
+return HIVE_DELEGATION_KIND;
 }
 }
diff --git a/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenProvider.java b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenProvider.java
new file mode 100644
index 000..2988f2aebd6
--- /dev/null
+++ b/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenProvider.java
@@ -0,0 +1,226 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing pe

[flink] branch master updated: [FLINK-32976][runtime] Fix NullPointException when starting flink cluster in standalone mode

2023-09-22 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 1fb95c30301 [FLINK-32976][runtime] Fix NullPointException when starting flink cluster in standalone mode
1fb95c30301 is described below

commit 1fb95c30301f4148ea9943fa6ff42421311a89aa
Author: Feng Jin 
AuthorDate: Fri Sep 22 15:58:45 2023 +0800

[FLINK-32976][runtime] Fix NullPointException when starting flink cluster in standalone mode
---
 .../security/token/hadoop/HadoopFSDelegationTokenProvider.java| 5 -
 .../token/hadoop/HadoopFSDelegationTokenProviderITCase.java   | 8 
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProvider.java b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProvider.java
index aeb37fefdcb..5c3dd48a01b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProvider.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProvider.java
@@ -170,7 +170,10 @@ public class HadoopFSDelegationTokenProvider implements DelegationTokenProvider
 });
 
 // YARN staging dir
-if (flinkConfiguration.getString(DeploymentOptions.TARGET).toLowerCase().contains("yarn")) {
+if (flinkConfiguration
+.getString(DeploymentOptions.TARGET, "")
+.toLowerCase()
+.contains("yarn")) {
 LOG.debug("Running on YARN, trying to add staging directory to file systems to access");
 String yarnStagingDirectory =
 flinkConfiguration.getString("yarn.staging-directory", "");
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProviderITCase.java b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProviderITCase.java
index 5f23695b973..43ba3ae7a68 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProviderITCase.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenProviderITCase.java
@@ -35,6 +35,7 @@ import java.util.Set;
 
 import static java.time.Instant.ofEpochMilli;
 import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNotNull;
 import static org.junit.jupiter.api.Assertions.assertNull;
 
 /** Test for {@link HadoopFSDelegationTokenProvider}. */
@@ -215,4 +216,11 @@ class HadoopFSDelegationTokenProviderITCase {
 provider.getIssueDate(
 constantClock, tokenIdentifier.getKind().toString(), tokenIdentifier));
 }
+
+@Test
+public void obtainDelegationTokenWithStandaloneDeployment() throws Exception {
+HadoopFSDelegationTokenProvider provider = new HadoopFSDelegationTokenProvider();
+provider.init(new org.apache.flink.configuration.Configuration());
+assertNotNull(provider.obtainDelegationTokens());
+}
 }
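The root cause was `getString(DeploymentOptions.TARGET)` returning `null` in standalone mode, so the chained `.toLowerCase()` threw; the fix supplies an explicit default value. A hypothetical Python sketch of the same defensive lookup (`deploys_on_yarn` is illustrative, not Flink code):

```python
def deploys_on_yarn(config: dict) -> bool:
    """Defensive config lookup: 'execution.target' may be absent
    (e.g. standalone mode), so supply a default instead of calling a
    method on a missing value -- the pattern behind FLINK-32976."""
    # dict.get(key, "") never returns None for a missing key,
    # so .lower() is always safe to call.
    return "yarn" in config.get("execution.target", "").lower()
```

The accompanying test in the commit exercises exactly this path: initializing the provider with an empty configuration and asserting that token collection no longer crashes.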



[flink-kubernetes-operator] branch main updated: [FLINK-33105] Log ClusterInfo fetch error as warning

2023-09-19 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new 32115490 [FLINK-33105] Log ClusterInfo fetch error as warning
32115490 is described below

commit 321154901c7ec40eefbfb6bacb2808634ae006ab
Author: Gabor Somogyi 
AuthorDate: Tue Sep 19 11:02:51 2023 +0200

[FLINK-33105] Log ClusterInfo fetch error as warning
---
 .../operator/observer/deployment/AbstractFlinkDeploymentObserver.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java
index 54c2cd4f..fd19713a 100644
--- a/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java
+++ b/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/observer/deployment/AbstractFlinkDeploymentObserver.java
@@ -86,7 +86,7 @@ public abstract class AbstractFlinkDeploymentObserver
 flinkApp.getStatus().getClusterInfo().putAll(clusterInfo);
 logger.debug("ClusterInfo: {}", flinkApp.getStatus().getClusterInfo());
 } catch (Exception e) {
-logger.error("Exception while fetching cluster info", e);
+logger.warn("Exception while fetching cluster info", e);
 }
 }
 



[flink] branch master updated: [FLINK-33029][python] Drop python 3.7 support

2023-09-13 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 50cb4ee8c54 [FLINK-33029][python] Drop python 3.7 support
50cb4ee8c54 is described below

commit 50cb4ee8c545cd38d0efee014939df91c2c9c65f
Author: Gabor Somogyi 
AuthorDate: Wed Sep 13 17:56:08 2023 +0200

[FLINK-33029][python] Drop python 3.7 support
---
 docs/content.zh/docs/deployment/cli.md |  6 +-
 .../docs/dev/python/datastream_tutorial.md |  2 +-
 docs/content.zh/docs/dev/python/installation.md|  4 +-
 .../docs/dev/python/python_execution_mode.md   |  2 +-
 .../docs/dev/python/table_api_tutorial.md  |  2 +-
 docs/content.zh/docs/dev/table/sqlClient.md|  2 +-
 docs/content.zh/docs/flinkDev/building.md  |  4 +-
 docs/content/docs/deployment/cli.md|  6 +-
 .../content/docs/dev/python/datastream_tutorial.md |  2 +-
 docs/content/docs/dev/python/installation.md   |  4 +-
 .../docs/dev/python/python_execution_mode.md   |  2 +-
 .../docs/dev/python/table/udfs/python_udfs.md  |  2 +-
 .../python/table/udfs/vectorized_python_udfs.md|  2 +-
 docs/content/docs/dev/python/table_api_tutorial.md |  2 +-
 docs/content/docs/dev/table/sqlClient.md   |  2 +-
 docs/content/docs/flinkDev/building.md |  4 +-
 .../shortcodes/generated/python_configuration.html |  2 +-
 docs/static/downloads/setup-pyflink-virtual-env.sh | 18 +++--
 .../apache/flink/client/cli/CliFrontendParser.java |  2 +-
 .../hadoop/entrypoint.sh   |  1 +
 flink-python/apache-flink-libraries/setup.py   |  7 +-
 flink-python/dev/build-wheels.sh   |  5 +-
 flink-python/dev/install_command.sh|  7 ++
 flink-python/dev/lint-python.sh| 80 +-
 flink-python/pyflink/datastream/functions.py   |  4 +-
 flink-python/pyflink/table/udf.py  |  2 +-
 flink-python/setup.py  |  7 +-
 .../org/apache/flink/python/PythonOptions.java |  2 +-
 flink-python/tox.ini   |  2 +-
 .../src/test/resources/cli/all-mode-help.out   |  2 +-
 .../src/test/resources/cli/embedded-mode-help.out  |  2 +-
 tools/releasing/create_binary_release.sh   |  4 +-
 32 files changed, 111 insertions(+), 84 deletions(-)

diff --git a/docs/content.zh/docs/deployment/cli.md b/docs/content.zh/docs/deployment/cli.md
index de9dccf85d0..06dc7cb0700 100644
--- a/docs/content.zh/docs/deployment/cli.md
+++ b/docs/content.zh/docs/deployment/cli.md
@@ -367,11 +367,11 @@ Currently, users are able to submit a PyFlink job via the CLI. It does not requi
 JAR file path or the entry main class, which is different from the Java job submission.
 
 {{< hint info >}}
-When submitting Python job via `flink run`, Flink will run the command "python". Please run the following command to confirm that the python executable in current environment points to a supported Python version of 3.7+.
+When submitting Python job via `flink run`, Flink will run the command "python". Please run the following command to confirm that the python executable in current environment points to a supported Python version of 3.8+.
 {{< /hint >}}
 ```bash
 $ python --version
-# the version printed here must be 3.7+
+# the version printed here must be 3.8+
 ```
 
 The following commands show different PyFlink job submission use-cases:
@@ -522,7 +522,7 @@ related options. Here's an overview of all the Python 
related options for the ac
 
 Specify the path of the python interpreter used to execute the python UDF worker (e.g.: --pyExecutable /usr/local/bin/python3).
-The python UDF worker depends on Python 3.7+, Apache Beam (version == 2.43.0),
+The python UDF worker depends on Python 3.8+, Apache Beam (version == 2.43.0),
 Pip (version >= 20.3) and SetupTools (version >= 37.0.0).
 Please ensure that the specified environment meets the above requirements.
 
diff --git a/docs/content.zh/docs/dev/python/datastream_tutorial.md 
b/docs/content.zh/docs/dev/python/datastream_tutorial.md
index c96f9fa3f10..d4579cbf174 100644
--- a/docs/content.zh/docs/dev/python/datastream_tutorial.md
+++ b/docs/content.zh/docs/dev/python/datastream_tutorial.md
@@ -48,7 +48,7 @@ Apache Flink provides the DataStream API for building robust, stateful streaming applications
 First, you need to prepare the following environment on your machine:
 
 * Java 11
-* Python 3.7, 3.8, 3.9 or 3.10
+* Python 3.8, 3.9 or 3.10
 
 Using the Python DataStream API requires installing PyFlink, which is published on [PyPI](https://pypi.org/project/apache-flink/) and can be installed quickly with `pip`.
 
diff --git a/docs/content.zh/docs/dev/python/installation.md 
b/docs/content.zh/docs/dev/python/installation.md
index 5a2189c3ce
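The doc changes above raise the minimum supported Python from 3.7 to 3.8. As a side note (not part of any commit here), the `python --version` check the docs recommend can be expressed programmatically; a minimal sketch:

```python
import sys

MIN_PYTHON = (3, 8)  # minimum version the updated docs above require

def check_python_version(version_info=sys.version_info):
    """Return True when the interpreter satisfies the documented minimum."""
    return tuple(version_info[:2]) >= MIN_PYTHON

if __name__ == "__main__":
    if not check_python_version():
        raise SystemExit("PyFlink requires Python %d.%d+" % MIN_PYTHON)
    print("Python version OK")
```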

[flink] branch master updated: [FLINK-30812][yarn] Fix uploading local files when using YARN with S3

2023-09-02 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 9507dd63445 [FLINK-30812][yarn] Fix uploading local files when using 
YARN with S3
9507dd63445 is described below

commit 9507dd6344587fc4c903672b95b54e5ab6ab6574
Author: Máté Czagány <4469996+mateczag...@users.noreply.github.com>
AuthorDate: Sat Sep 2 09:03:52 2023 +0200

[FLINK-30812][yarn] Fix uploading local files when using YARN with S3
---
 .../flink/yarn/YarnApplicationFileUploader.java| 12 -
 .../yarn/YarnApplicationFileUploaderTest.java  | 52 ++
 2 files changed, 62 insertions(+), 2 deletions(-)

diff --git 
a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationFileUploader.java
 
b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationFileUploader.java
index c03d15e3552..95f38ee882e 100644
--- 
a/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationFileUploader.java
+++ 
b/flink-yarn/src/main/java/org/apache/flink/yarn/YarnApplicationFileUploader.java
@@ -23,6 +23,7 @@ import 
org.apache.flink.client.deployment.ClusterDeploymentException;
 import org.apache.flink.configuration.ConfigConstants;
 import org.apache.flink.util.FileUtils;
 import org.apache.flink.util.IOUtils;
+import org.apache.flink.util.StringUtils;
 import org.apache.flink.util.function.FunctionUtils;
 import org.apache.flink.yarn.configuration.YarnConfigOptions;
 
@@ -388,13 +389,20 @@ class YarnApplicationFileUploader implements AutoCloseable {
                 (relativeDstPath.isEmpty() ? "" : relativeDstPath + "/") + localSrcPath.getName();
         final Path dst = new Path(applicationDir, suffix);
 
+        final Path localSrcPathWithScheme;
+        if (StringUtils.isNullOrWhitespaceOnly(localSrcPath.toUri().getScheme())) {
+            localSrcPathWithScheme = new Path(URI.create("file:///").resolve(localSrcPath.toUri()));
+        } else {
+            localSrcPathWithScheme = localSrcPath;
+        }
+
         LOG.debug(
                 "Copying from {} to {} with replication factor {}",
-                localSrcPath,
+                localSrcPathWithScheme,
                 dst,
                 replicationFactor);
 
-        fileSystem.copyFromLocalFile(false, true, localSrcPath, dst);
+        fileSystem.copyFromLocalFile(false, true, localSrcPathWithScheme, dst);
         fileSystem.setReplication(dst, (short) replicationFactor);
         return dst;
     }
diff --git 
a/flink-yarn/src/test/java/org/apache/flink/yarn/YarnApplicationFileUploaderTest.java
 
b/flink-yarn/src/test/java/org/apache/flink/yarn/YarnApplicationFileUploaderTest.java
index 3adca4ffe9c..8fc19391605 100644
--- 
a/flink-yarn/src/test/java/org/apache/flink/yarn/YarnApplicationFileUploaderTest.java
+++ 
b/flink-yarn/src/test/java/org/apache/flink/yarn/YarnApplicationFileUploaderTest.java
@@ -23,6 +23,7 @@ import org.apache.flink.util.IOUtils;
 import org.apache.flink.yarn.configuration.YarnConfigOptions;
 
 import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -37,6 +38,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -132,6 +134,27 @@ class YarnApplicationFileUploaderTest {
 ConfigConstants.DEFAULT_FLINK_USR_LIB_DIR);
 }
 
+@Test
+void testUploadLocalFileWithoutScheme(@TempDir File flinkHomeDir) throws 
IOException {
+final MockLocalFileSystem fileSystem = new MockLocalFileSystem();
+final File tempFile = 
File.createTempFile(UUID.randomUUID().toString(), "", flinkHomeDir);
+final Path pathWithoutScheme = new Path(tempFile.getAbsolutePath());
+
+try (final YarnApplicationFileUploader yarnApplicationFileUploader =
+YarnApplicationFileUploader.from(
+fileSystem,
+new Path(flinkHomeDir.getPath()),
+Collections.emptyList(),
+ApplicationId.newInstance(0, 0),
+DFSConfigKeys.DFS_REPLICATION_DEFAULT)) {
+
+
yarnApplicationFileUploader.uploadLocalFileToRemote(pathWithoutScheme, "");
+assertThat(fileSystem.getCopiedPaths())
+.hasSize(1)
+.allMatch(path -> "file".equals(path.toUri().getScheme()));
+}
+}
+
 private static Map getLibJars() {
 final HashMap libJars = new HashMap<>(4);
 final String jarContent = "JA
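The FLINK-30812 fix above defaults a scheme-less local path to the `file` scheme before handing it to the (possibly S3-backed) Hadoop `FileSystem`. A standalone sketch of that URI-defaulting idea — in Python rather than the committed Java, purely for illustration:

```python
from urllib.parse import urlparse

def with_file_scheme(path: str) -> str:
    """Mirror of the fix above: if the path carries no URI scheme, treat it
    as a local file by prefixing the file scheme; otherwise leave it alone."""
    if urlparse(path).scheme:
        return path
    # An absolute POSIX path resolves cleanly under the file scheme.
    return "file://" + path

print(with_file_scheme("/tmp/flink-dist.jar"))  # file:///tmp/flink-dist.jar
print(with_file_scheme("s3://bucket/key.jar"))  # unchanged: scheme present
```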

[flink] branch master updated: [FLINK-23632][docs] Fix the setup-pyflink-virtual-env.sh link

2023-08-31 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4739dabc33f [FLINK-23632][docs] Fix the setup-pyflink-virtual-env.sh 
link
4739dabc33f is described below

commit 4739dabc33f34c239ac4bb4b1b7a4bca0b8f14c6
Author: Akira Ajisaka 
AuthorDate: Thu Aug 31 17:38:47 2023 +0900

[FLINK-23632][docs] Fix the setup-pyflink-virtual-env.sh link
---
 docs/content.zh/docs/dev/python/faq.md |  2 +-
 docs/content/docs/dev/python/faq.md|  2 +-
 docs/static/downloads/setup-pyflink-virtual-env.sh | 53 ++
 3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/docs/content.zh/docs/dev/python/faq.md 
b/docs/content.zh/docs/dev/python/faq.md
index 939f6b4a441..00fd3442599 100644
--- a/docs/content.zh/docs/dev/python/faq.md
+++ b/docs/content.zh/docs/dev/python/faq.md
@@ -32,7 +32,7 @@ under the License.
 
 ## Preparing a Python Virtual Environment
 
-You can download a [convenience script]({% link downloads/setup-pyflink-virtual-env.sh %}) to prepare a Python virtual environment package (virtual env zip) that can be used on Mac OS and most Linux distributions.
+You can download a [convenience script](/downloads/setup-pyflink-virtual-env.sh) to prepare a Python virtual environment package (virtual env zip) that can be used on Mac OS and most Linux distributions.
 You can specify the PyFlink version to generate the Python virtual environment required for that PyFlink version; otherwise the environment for the latest PyFlink version will be installed.
 
 {{< stable >}}
diff --git a/docs/content/docs/dev/python/faq.md 
b/docs/content/docs/dev/python/faq.md
index 0c73a27f1ee..7e21ffeb07d 100644
--- a/docs/content/docs/dev/python/faq.md
+++ b/docs/content/docs/dev/python/faq.md
@@ -30,7 +30,7 @@ This page describes the solutions to some common questions 
for PyFlink users.
 
 ## Preparing Python Virtual Environment
 
-You can download a [convenience script]({% link downloads/setup-pyflink-virtual-env.sh %}) to prepare a Python virtual env zip which can be used on Mac OS and most Linux distributions.
+You can download a [convenience script](/downloads/setup-pyflink-virtual-env.sh) to prepare a Python virtual env zip which can be used on Mac OS and most Linux distributions.
 You can specify the PyFlink version to generate a Python virtual environment required for the corresponding PyFlink version, otherwise the most recent version will be installed.
 
 {{< stable >}}
diff --git a/docs/static/downloads/setup-pyflink-virtual-env.sh 
b/docs/static/downloads/setup-pyflink-virtual-env.sh
new file mode 100755
index 000..fdfebbfc13f
--- /dev/null
+++ b/docs/static/downloads/setup-pyflink-virtual-env.sh
@@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+set -e
+# download miniconda.sh
+if [[ `uname -s` == "Darwin" ]]; then
+    if [[ `uname -m` == "arm64" ]]; then
+        wget "https://repo.anaconda.com/miniconda/Miniconda3-py310_23.5.2-0-MacOSX-arm64.sh" -O "miniconda.sh"
+    else
+        wget "https://repo.anaconda.com/miniconda/Miniconda3-py310_23.5.2-0-MacOSX-x86_64.sh" -O "miniconda.sh"
+    fi
+else
+    wget "https://repo.anaconda.com/miniconda/Miniconda3-py310_23.5.2-0-Linux-x86_64.sh" -O "miniconda.sh"
+fi
+
+# add the execution permission
+chmod +x miniconda.sh
+
+# create python virtual environment
+./miniconda.sh -b -p venv
+
+# activate the conda python virtual environment
+source venv/bin/activate ""
+
+# install PyFlink dependency
+if [[ $1 = "" ]]; then
+# install the latest version of pyflink
+pip install apache-flink
+else
+# install the specified version of pyflink
+pip install "apache-flink==$1"
+fi
+
+# deactivate the conda python virtual environment
+conda deactivate
+
+# remove the cached packages
+rm -rf venv/pkgs
+
+# package the prepared conda python virtual environment
+zip -r venv.zip venv



[flink] branch master updated: [FLINK-32989][python] Fix version parsing issue

2023-08-30 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 3975dd02198 [FLINK-32989][python] Fix version parsing issue
3975dd02198 is described below

commit 3975dd0219879e444e6d5da2b1f2d5534a6aa8b8
Author: Gabor Somogyi 
AuthorDate: Thu Aug 31 08:45:14 2023 +0200

[FLINK-32989][python] Fix version parsing issue
---
 flink-python/apache-flink-libraries/setup.py | 18 +++---
 flink-python/setup.py| 19 ---
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/flink-python/apache-flink-libraries/setup.py 
b/flink-python/apache-flink-libraries/setup.py
index 39019ca5a50..a1b6081426d 100644
--- a/flink-python/apache-flink-libraries/setup.py
+++ b/flink-python/apache-flink-libraries/setup.py
@@ -21,12 +21,12 @@ import glob
 import io
 import os
 import platform
-import re
 import subprocess
 import sys
 from shutil import copytree, copy, rmtree
 
 from setuptools import setup
+from xml.etree import ElementTree as ET
 
 
 def remove_if_exists(file_path):
@@ -98,13 +98,17 @@ try:
         print("Temp path for symlink to parent already exists {0}".format(TEMP_PATH),
               file=sys.stderr)
         sys.exit(-1)
-    flink_version = re.sub("[.]dev.*", "-SNAPSHOT", VERSION)
-    flink_homes = glob.glob('../../flink-dist/target/flink-*-bin/flink-*')
-    if len(flink_homes) != 1:
-        print("Exactly one Flink home directory must exist, but found: {0}".format(flink_homes),
-              file=sys.stderr)
+    flink_version = ET.parse("../../pom.xml").getroot().find(
+        'POM:version',
+        namespaces={
+            'POM': 'http://maven.apache.org/POM/4.0.0'
+        }).text
+    if not flink_version:
+        print("Not able to get flink version", file=sys.stderr)
         sys.exit(-1)
-    FLINK_HOME = os.path.abspath(flink_homes[0])
+    print("Detected flink version: {0}".format(flink_version))
+    FLINK_HOME = os.path.abspath(
+        "../../flink-dist/target/flink-%s-bin/flink-%s" % (flink_version, flink_version))
 
 incorrect_invocation_message = """
 If you are installing pyflink from flink source, you must first build Flink and
diff --git a/flink-python/setup.py b/flink-python/setup.py
index 4911ba8fd48..44109e2c772 100644
--- a/flink-python/setup.py
+++ b/flink-python/setup.py
@@ -17,7 +17,6 @@
 

 from __future__ import print_function
 
-import glob
 import io
 import os
 import platform
@@ -27,6 +26,7 @@ from distutils.command.build_ext import build_ext
 from shutil import copytree, copy, rmtree
 
 from setuptools import setup, Extension
+from xml.etree import ElementTree as ET
 
 if sys.version_info < (3, 7):
 print("Python versions prior to 3.7 are not supported for PyFlink.",
@@ -208,13 +208,18 @@ try:
         print("Temp path for symlink to parent already exists {0}".format(TEMP_PATH),
               file=sys.stderr)
         sys.exit(-1)
-    flink_version = re.sub("[.]dev.*", "-SNAPSHOT", VERSION)
-    flink_homes = glob.glob('../flink-dist/target/flink-*-bin/flink-*')
-    if len(flink_homes) != 1:
-        print("Exactly one Flink home directory must exist, but found: {0}".format(flink_homes),
-              file=sys.stderr)
+    namespace = "http://maven.apache.org/POM/4.0.0"
+    flink_version = ET.parse("../pom.xml").getroot().find(
+        'POM:version',
+        namespaces={
+            'POM': 'http://maven.apache.org/POM/4.0.0'
+        }).text
+    if not flink_version:
+        print("Not able to get flink version", file=sys.stderr)
         sys.exit(-1)
-    FLINK_HOME = os.path.abspath(flink_homes[0])
+    print("Detected flink version: {0}".format(flink_version))
+    FLINK_HOME = os.path.abspath(
+        "../flink-dist/target/flink-%s-bin/flink-%s" % (flink_version, flink_version))
     FLINK_ROOT = os.path.abspath("..")
     FLINK_DIST = os.path.join(FLINK_ROOT, "flink-dist")
     FLINK_BIN = os.path.join(FLINK_DIST, "src/main/flink-bin")
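The FLINK-32989 commit above swaps glob-based Flink home discovery for reading the version straight out of the Maven POM with `ElementTree`. A minimal, self-contained sketch of that namespace-aware lookup, using a throwaway POM (hypothetical content) instead of Flink's real `pom.xml`:

```python
import os
import tempfile
from xml.etree import ElementTree as ET

# A throwaway POM standing in for Flink's real pom.xml (hypothetical content).
POM = """<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <artifactId>demo</artifactId>
  <version>1.18-SNAPSHOT</version>
</project>
"""

def read_pom_version(pom_path):
    # Maven POMs declare a default XML namespace, so find() needs a prefix
    # map -- the same lookup the setup.py change above performs.
    return ET.parse(pom_path).getroot().find(
        "POM:version",
        namespaces={"POM": "http://maven.apache.org/POM/4.0.0"},
    ).text

with tempfile.TemporaryDirectory() as tmp:
    pom_path = os.path.join(tmp, "pom.xml")
    with open(pom_path, "w") as f:
        f.write(POM)
    version = read_pom_version(pom_path)

print(version)  # 1.18-SNAPSHOT
```

Without the namespace map, `find("version")` would return nothing, which is exactly why the commit passes the `POM` prefix explicitly.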



[flink] branch master updated (fd0d2db378d -> 70e635983dc)

2023-07-31 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


from fd0d2db378d [FLINK-32519][docs] Add doc for [CREATE OR] REPLACE TABLE 
AS statement (#23060)
 add 70e635983dc [FLINK-32223][runtime][security] Add Hive delegation token 
support

No new revisions were added by this update.

Summary of changes:
 .../token/HiveServer2DelegationTokenProvider.java  | 221 +
 .../token/HiveServer2DelegationTokenReceiver.java  |   9 +-
 ...ink.core.security.token.DelegationTokenProvider |   2 +-
 ...ink.core.security.token.DelegationTokenReceiver |   2 +-
 ...rg.apache.hadoop.security.token.TokenIdentifier |   2 +-
 .../HiveServer2DelegationTokenProviderITCase.java  | 175 
 .../src/test/resources/hive-site.xml   |   2 +-
 .../hive-site.xml  |  13 +-
 .../flink/runtime/hadoop/HadoopUserUtils.java  |  36 
 .../hadoop/HadoopFSDelegationTokenProvider.java|  31 +--
 .../runtime/hadoop/HadoopUserUtilsITCase.java  |  46 +
 .../HadoopFSDelegationTokenProviderITCase.java |  46 -
 12 files changed, 490 insertions(+), 95 deletions(-)
 create mode 100644 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenProvider.java
 copy 
flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/HadoopFSDelegationTokenReceiver.java
 => 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenReceiver.java
 (75%)
 copy {flink-filesystems/flink-s3-fs-hadoop => 
flink-connectors/flink-connector-hive}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenProvider
 (91%)
 copy {flink-filesystems/flink-s3-fs-hadoop => 
flink-connectors/flink-connector-hive}/src/main/resources/META-INF/services/org.apache.flink.core.security.token.DelegationTokenReceiver
 (91%)
 copy 
flink-test-utils-parent/flink-test-utils/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
 => 
flink-connectors/flink-connector-hive/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
 (91%)
 create mode 100644 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/security/token/HiveServer2DelegationTokenProviderITCase.java
 copy 
flink-connectors/flink-connector-hive/src/test/resources/{test-multi-hive-conf1 
=> test-hive-delegation-token}/hive-site.xml (76%)



[flink] branch release-1.17 updated: [FLINK-32465][runtime][security] Fix KerberosLoginProvider.isLoginPossible accidental login with keytab

2023-06-28 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch release-1.17
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.17 by this push:
 new 0472cc151ea [FLINK-32465][runtime][security] Fix 
KerberosLoginProvider.isLoginPossible accidental login with keytab
0472cc151ea is described below

commit 0472cc151ea446f2c5f9560b8515fb6bc74ca83c
Author: Gabor Somogyi 
AuthorDate: Wed Jun 28 17:50:43 2023 +0200

[FLINK-32465][runtime][security] Fix KerberosLoginProvider.isLoginPossible 
accidental login with keytab
---
 .../security/token/hadoop/KerberosLoginProvider.java  |  6 ++
 .../token/hadoop/KerberosLoginProviderITCase.java | 19 +++
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
index 5a94fc3d1ad..aff5f410cd0 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
@@ -67,13 +67,11 @@ public class KerberosLoginProvider {
             return false;
         }
 
-        UserGroupInformation currentUser = UserGroupInformation.getCurrentUser();
-
         if (principal != null) {
             LOG.debug("Login from keytab is possible");
             return true;
-        } else if (!HadoopUserUtils.isProxyUser(currentUser)) {
-            if (useTicketCache && currentUser.hasKerberosCredentials()) {
+        } else if (!HadoopUserUtils.isProxyUser(UserGroupInformation.getCurrentUser())) {
+            if (useTicketCache && UserGroupInformation.getCurrentUser().hasKerberosCredentials()) {
                 LOG.debug("Login from ticket cache is possible");
                 return true;
             }
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
index b0cd1c055a2..71d732e47b8 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
@@ -49,6 +49,25 @@ import static org.mockito.Mockito.when;
  */
 public class KerberosLoginProviderITCase {
 
+    @Test
+    public void isLoginPossibleMustNotDoAccidentalLoginWithKeytab(@TempDir Path tmpDir)
+            throws IOException {
+        Configuration configuration = new Configuration();
+        configuration.setString(KERBEROS_LOGIN_PRINCIPAL, "principal");
+        final Path keyTab = Files.createFile(tmpDir.resolve("test.keytab"));
+        configuration.setString(KERBEROS_LOGIN_KEYTAB, keyTab.toAbsolutePath().toString());
+        KerberosLoginProvider kerberosLoginProvider = new KerberosLoginProvider(configuration);
+
+        try (MockedStatic<UserGroupInformation> ugi = mockStatic(UserGroupInformation.class)) {
+            ugi.when(UserGroupInformation::isSecurityEnabled).thenReturn(true);
+            ugi.when(UserGroupInformation::getCurrentUser)
+                    .thenThrow(
+                            new IllegalStateException(
+                                    "isLoginPossible must not do login with keytab"));
+            kerberosLoginProvider.isLoginPossible(false);
+        }
+    }
+
     @Test
     public void isLoginPossibleMustReturnFalseByDefault() throws IOException {
         Configuration configuration = new Configuration();
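The FLINK-32465 fix above stops fetching `UserGroupInformation.getCurrentUser()` eagerly, because merely fetching the current user can trigger an accidental keytab login; the call is now made only on the branch that actually needs it. A small Python sketch (all names hypothetical) of the same pattern — defer a side-effectful call until the short-circuit logic reaches it:

```python
class AccidentalLoginError(Exception):
    """Raised by the stand-in below to prove the side effect never fires."""

def get_current_user():
    # Stand-in for UserGroupInformation.getCurrentUser(): fetching the user
    # is side-effectful here, so it must only run when strictly needed.
    raise AccidentalLoginError("login performed")

def is_login_possible(principal, use_ticket_cache):
    if principal is not None:
        return True  # keytab path: no need to touch the current user at all
    user = get_current_user()  # only reached when there is no principal
    return use_ticket_cache and user.has_kerberos_credentials()

# With a principal configured, the side-effectful call is never made:
print(is_login_possible("principal", use_ticket_cache=False))  # True
```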



[flink] branch master updated: [FLINK-32465][runtime][security] Fix KerberosLoginProvider.isLoginPossible accidental login with keytab

2023-06-28 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e33875e4a25 [FLINK-32465][runtime][security] Fix 
KerberosLoginProvider.isLoginPossible accidental login with keytab
e33875e4a25 is described below

commit e33875e4a25c1e4118bfc0514a2092b724ebebd9
Author: Gabor Somogyi 
AuthorDate: Wed Jun 28 15:38:14 2023 +0200

[FLINK-32465][runtime][security] Fix KerberosLoginProvider.isLoginPossible 
accidental login with keytab
---
 .../security/token/hadoop/KerberosLoginProvider.java |  6 ++
 .../token/hadoop/KerberosLoginProviderITCase.java| 20 
 2 files changed, 22 insertions(+), 4 deletions(-)

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
index 5a94fc3d1ad..aff5f410cd0 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProvider.java
@@ -67,13 +67,11 @@ public class KerberosLoginProvider {
             return false;
         }
 
-        UserGroupInformation currentUser = UserGroupInformation.getCurrentUser();
-
         if (principal != null) {
             LOG.debug("Login from keytab is possible");
            return true;
-        } else if (!HadoopUserUtils.isProxyUser(currentUser)) {
-            if (useTicketCache && currentUser.hasKerberosCredentials()) {
+        } else if (!HadoopUserUtils.isProxyUser(UserGroupInformation.getCurrentUser())) {
+            if (useTicketCache && UserGroupInformation.getCurrentUser().hasKerberosCredentials()) {
                 LOG.debug("Login from ticket cache is possible");
                 return true;
             }
diff --git 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
index bbf49ce136f..a9cbb91afb3 100644
--- 
a/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
+++ 
b/flink-runtime/src/test/java/org/apache/flink/runtime/security/token/hadoop/KerberosLoginProviderITCase.java
@@ -51,6 +51,26 @@ import static org.mockito.Mockito.when;
  */
 public class KerberosLoginProviderITCase {
 
+    @ParameterizedTest
+    @ValueSource(booleans = {true, false})
+    public void isLoginPossibleMustNotDoAccidentalLoginWithKeytab(
+            boolean supportProxyUser, @TempDir Path tmpDir) throws IOException {
+        Configuration configuration = new Configuration();
+        configuration.setString(KERBEROS_LOGIN_PRINCIPAL, "principal");
+        final Path keyTab = Files.createFile(tmpDir.resolve("test.keytab"));
+        configuration.setString(KERBEROS_LOGIN_KEYTAB, keyTab.toAbsolutePath().toString());
+        KerberosLoginProvider kerberosLoginProvider = new KerberosLoginProvider(configuration);
+
+        try (MockedStatic<UserGroupInformation> ugi = mockStatic(UserGroupInformation.class)) {
+            ugi.when(UserGroupInformation::isSecurityEnabled).thenReturn(true);
+            ugi.when(UserGroupInformation::getCurrentUser)
+                    .thenThrow(
+                            new IllegalStateException(
+                                    "isLoginPossible must not do login with keytab"));
+            kerberosLoginProvider.isLoginPossible(supportProxyUser);
+        }
+    }
+
     @ParameterizedTest
     @ValueSource(booleans = {true, false})
     public void isLoginPossibleMustReturnFalseByDefault(boolean supportProxyUser)



[flink-benchmarks] branch master updated: [FLINK-32415] Add maven wrapper to benchmark to avoid maven version issues

2023-06-23 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-benchmarks.git


The following commit(s) were added to refs/heads/master by this push:
 new 62a1db5  [FLINK-32415] Add maven wrapper to benchmark to avoid maven 
version issues
62a1db5 is described below

commit 62a1db552f93965788b4172681f4caae5879143c
Author: Gabor Somogyi 
AuthorDate: Thu Jun 22 17:48:54 2023 +0200

[FLINK-32415] Add maven wrapper to benchmark to avoid maven version issues
---
 .mvn/wrapper/maven-wrapper.properties |  18 ++
 mvnw  | 308 ++
 mvnw.cmd  | 205 ++
 3 files changed, 531 insertions(+)

diff --git a/.mvn/wrapper/maven-wrapper.properties 
b/.mvn/wrapper/maven-wrapper.properties
new file mode 100644
index 000..452adac
--- /dev/null
+++ b/.mvn/wrapper/maven-wrapper.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.3.9/apache-maven-3.3.9-bin.zip
+wrapperUrl=https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar
diff --git a/mvnw b/mvnw
new file mode 100755
index 000..8d937f4
--- /dev/null
+++ b/mvnw
@@ -0,0 +1,308 @@
+#!/bin/sh
+# 
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# 
+
+# 
+# Apache Maven Wrapper startup batch script, version 3.2.0
+#
+# Required ENV vars:
+# --
+#   JAVA_HOME - location of a JDK home dir
+#
+# Optional ENV vars
+# -
+#   MAVEN_OPTS - parameters passed to the Java VM when running Maven
+# e.g. to debug Maven itself, use
+#   set MAVEN_OPTS=-Xdebug 
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
+#   MAVEN_SKIP_RC - flag to disable loading of mavenrc files
+# 
+
+if [ -z "$MAVEN_SKIP_RC" ] ; then
+
+  if [ -f /usr/local/etc/mavenrc ] ; then
+. /usr/local/etc/mavenrc
+  fi
+
+  if [ -f /etc/mavenrc ] ; then
+. /etc/mavenrc
+  fi
+
+  if [ -f "$HOME/.mavenrc" ] ; then
+. "$HOME/.mavenrc"
+  fi
+
+fi
+
+# OS specific support.  $var _must_ be set to either true or false.
+cygwin=false;
+darwin=false;
+mingw=false
+case "$(uname)" in
+  CYGWIN*) cygwin=true ;;
+  MINGW*) mingw=true;;
+  Darwin*) darwin=true
+# Use /usr/libexec/java_home if available, otherwise fall back to 
/Library/Java/Home
+# See https://developer.apple.com/library/mac/qa/qa1170/_index.html
+if [ -z "$JAVA_HOME" ]; then
+  if [ -x "/usr/libexec/java_home" ]; then
+JAVA_HOME="$(/usr/libexec/java_home)"; export JAVA_HOME
+  else
+JAVA_HOME="/Library/Java/Home"; export JAVA_HOME
+  fi
+fi
+;;
+esac
+
+if [ -z "$JAVA_HOME" ] ; then
+  if [ -r /etc/gentoo-release ] ; then
+JAVA_HOME=$(java-config --jre-home)
+  fi
+fi
+
+# For Cygwin, ensure paths are in UNIX format before anything is touched
+if $cygwin ; then

[flink] branch master updated: [FLINK-31609][yarn][test] Extend log whitelist for expected AMRM heartbeat interrupt

2023-05-15 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 4415c5150ed [FLINK-31609][yarn][test] Extend log whitelist for 
expected AMRM heartbeat interrupt
4415c5150ed is described below

commit 4415c5150eda071b219db5532c359ca29730a378
Author: Ferenc Csaky 
AuthorDate: Mon May 15 17:08:25 2023 +0200

[FLINK-31609][yarn][test] Extend log whitelist for expected AMRM heartbeat 
interrupt
---
 flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java 
b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java
index 1f7fa1433af..7afeb19708f 100644
--- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java
+++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YarnTestBase.java
@@ -171,6 +171,8 @@ public abstract class YarnTestBase {
         // this can happen during cluster shutdown, if AMRMClient happens to be heartbeating
         Pattern.compile("Exception on heartbeat"),
         Pattern.compile("java\\.io\\.InterruptedIOException: Call interrupted"),
+        Pattern.compile(
+                "java\\.io\\.InterruptedIOException: Interrupted waiting to send RPC request to server"),
         Pattern.compile("java\\.lang\\.InterruptedException"),
 
         // this can happen if the hbase delegation token provider is not available
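The whitelist above is a list of compiled patterns matched against test log output; a matching entry marks a known-benign error so it does not fail the YARN test. A quick Python check (regexes mirrored from the diff, message text hypothetical) that the newly added pattern really covers the interrupt it is meant to whitelist:

```python
import re

# Patterns mirrored from the YarnTestBase whitelist in the diff above.
WHITELIST = [
    re.compile(r"Exception on heartbeat"),
    re.compile(r"java\.io\.InterruptedIOException: Call interrupted"),
    re.compile(r"java\.io\.InterruptedIOException: "
               r"Interrupted waiting to send RPC request to server"),
    re.compile(r"java\.lang\.InterruptedException"),
]

def is_whitelisted(log_line):
    """A log line is benign if any whitelist pattern occurs in it."""
    return any(p.search(log_line) for p in WHITELIST)

line = ("java.io.InterruptedIOException: Interrupted waiting to send "
        "RPC request to server yarn-rm:8030")
print(is_whitelisted(line))  # True
```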



[flink-kubernetes-operator] branch main updated: [FLINK-31886] Bump fabric8 and josdk version

2023-04-26 Thread gaborgsomogyi
This is an automated email from the ASF dual-hosted git repository.

gaborgsomogyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/flink-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
 new d9b3437d [FLINK-31886] Bump fabric8 and josdk version
d9b3437d is described below

commit d9b3437d20200a0e6e7f4d1ab3e8178eface7b21
Author: Gyula Fora 
AuthorDate: Wed Apr 26 10:59:45 2023 +0200

[FLINK-31886] Bump fabric8 and josdk version
---
 .../kubernetes_operator_config_configuration.html  |  2 +-
 .../shortcodes/generated/system_section.html   |  2 +-
 .../src/main/resources/META-INF/NOTICE | 56 +++---
 .../flink/kubernetes/operator/TestUtils.java   |  6 +++
 pom.xml|  4 +-
 5 files changed, 38 insertions(+), 32 deletions(-)

diff --git 
a/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
 
b/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
index 1b792d02..5495b2af 100644
--- 
a/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
+++ 
b/docs/layouts/shortcodes/generated/kubernetes_operator_config_configuration.html
@@ -238,7 +238,7 @@
 
 
 kubernetes.operator.reconcile.parallelism
-10
+200
 Integer
 The maximum number of threads running the reconciliation loop. 
Use -1 for infinite.
 
diff --git a/docs/layouts/shortcodes/generated/system_section.html 
b/docs/layouts/shortcodes/generated/system_section.html
index 153c7eee..aa317587 100644
--- a/docs/layouts/shortcodes/generated/system_section.html
+++ b/docs/layouts/shortcodes/generated/system_section.html
@@ -88,7 +88,7 @@
 
 
 kubernetes.operator.reconcile.parallelism
-10
+200
 Integer
 The maximum number of threads running the reconciliation loop. Use -1 for infinite.
 
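The two hunks above raise the operator's default reconcile parallelism from 10 to 200. For reference, a deployment that wants a different value could override this option through the operator Helm chart's default configuration; the snippet below is a hedged sketch (the chosen value of 50 and the exact values.yaml layout are illustrative assumptions, not part of this commit):

```yaml
# Hypothetical values.yaml fragment for the flink-kubernetes-operator Helm chart.
# The option key kubernetes.operator.reconcile.parallelism matches the diff above;
# the surrounding structure and the value 50 are assumptions for illustration.
defaultConfiguration:
  create: true
  flink-conf.yaml: |+
    # Cap reconciliation threads at 50 instead of the new default of 200.
    kubernetes.operator.reconcile.parallelism: 50
```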
diff --git a/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE b/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
index 8e60f3bc..8178eb88 100644
--- a/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
+++ b/flink-kubernetes-operator/src/main/resources/META-INF/NOTICE
@@ -20,34 +20,33 @@ This project bundles the following dependencies under the Apache Software Licens
 - commons-cli:commons-cli:1.5.0
 - commons-collections:commons-collections:3.2.2
 - commons-io:commons-io:2.11.0
-- io.fabric8:kubernetes-client-api:jar:6.2.0
-- io.fabric8:kubernetes-client:jar:6.2.0
-- io.fabric8:kubernetes-httpclient-okhttp:jar:6.2.0
-- io.fabric8:kubernetes-model-admissionregistration:jar:6.2.0
-- io.fabric8:kubernetes-model-apiextensions:jar:6.2.0
-- io.fabric8:kubernetes-model-apps:jar:6.2.0
-- io.fabric8:kubernetes-model-autoscaling:jar:6.2.0
-- io.fabric8:kubernetes-model-batch:jar:6.2.0
-- io.fabric8:kubernetes-model-certificates:jar:6.2.0
-- io.fabric8:kubernetes-model-common:jar:6.2.0
-- io.fabric8:kubernetes-model-coordination:jar:6.2.0
-- io.fabric8:kubernetes-model-core:jar:6.2.0
-- io.fabric8:kubernetes-model-discovery:jar:6.2.0
-- io.fabric8:kubernetes-model-events:jar:6.2.0
-- io.fabric8:kubernetes-model-extensions:jar:6.2.0
-- io.fabric8:kubernetes-model-flowcontrol:jar:6.2.0
-- io.fabric8:kubernetes-model-gatewayapi:jar:6.2.0
-- io.fabric8:kubernetes-model-metrics:jar:6.2.0
-- io.fabric8:kubernetes-model-networking:jar:6.2.0
-- io.fabric8:kubernetes-model-node:jar:6.2.0
-- io.fabric8:kubernetes-model-policy:jar:6.2.0
-- io.fabric8:kubernetes-model-rbac:jar:6.2.0
-- io.fabric8:kubernetes-model-scheduling:jar:6.2.0
-- io.fabric8:kubernetes-model-storageclass:jar:6.2.0
+- io.fabric8:kubernetes-client-api:jar:6.5.1
+- io.fabric8:kubernetes-client:jar:6.5.1
+- io.fabric8:kubernetes-httpclient-okhttp:jar:6.5.1
+- io.fabric8:kubernetes-model-admissionregistration:jar:6.5.1
+- io.fabric8:kubernetes-model-apiextensions:jar:6.5.1
+- io.fabric8:kubernetes-model-apps:jar:6.5.1
+- io.fabric8:kubernetes-model-autoscaling:jar:6.5.1
+- io.fabric8:kubernetes-model-batch:jar:6.5.1
+- io.fabric8:kubernetes-model-certificates:jar:6.5.1
+- io.fabric8:kubernetes-model-common:jar:6.5.1
+- io.fabric8:kubernetes-model-coordination:jar:6.5.1
+- io.fabric8:kubernetes-model-core:jar:6.5.1
+- io.fabric8:kubernetes-model-discovery:jar:6.5.1
+- io.fabric8:kubernetes-model-events:jar:6.5.1
+- io.fabric8:kubernetes-model-extensions:jar:6.5.1
+- io.fabric8:kubernetes-model-flowcontrol:jar:6.5.1
+- io.fabric8:kubernetes-model-gatewayapi:jar:6.5.1
+- io.fabric8:kubernetes-model-metrics:jar:6.5.1
+- io.fabric8:kubernetes-model-networking:jar:6.5.1
+- io.fabric8:kubernetes-model-node:jar:6.5.1
+- io.fabric8:kubernetes-model-policy:jar:6.5.1
+- io.fabric8:kubernetes-model-rbac:jar:6.5.1
+- io.fabric8:kubernetes-model-scheduling:jar:6.5.1
+- io.fabric8:kubernetes-model