Re: [PR] [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode [flink]

2024-02-12 Thread via GitHub


ferenc-csaky commented on code in PR #24303:
URL: https://github.com/apache/flink/pull/24303#discussion_r1487357451


##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/artifact/DefaultKubernetesArtifactUploader.java:
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.artifact;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.client.cli.ArtifactFetchOptions;
+import org.apache.flink.client.program.PackagedProgramUtils;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.core.fs.FSDataOutputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.util.StringUtils;
+import org.apache.flink.util.function.FunctionUtils;
+
+import org.apache.commons.io.FileUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Default {@link KubernetesArtifactUploader} implementation. */
+public class DefaultKubernetesArtifactUploader implements KubernetesArtifactUploader {
+
+    private static final Logger LOG =
+            LoggerFactory.getLogger(DefaultKubernetesArtifactUploader.class);
+
+    @Override
+    public void uploadAll(Configuration config) throws Exception {
+        if (!config.get(KubernetesConfigOptions.LOCAL_UPLOAD_ENABLED)) {
+            LOG.info(
+                    "Local artifact uploading is disabled. Set '{}' to enable.",
+                    KubernetesConfigOptions.LOCAL_UPLOAD_ENABLED.key());
+            return;
+        }
+
+        final String jobUri = upload(config, getJobUri(config));
+        config.set(PipelineOptions.JARS, Collections.singletonList(jobUri));
+
+        final List<String> additionalUris =
+                config.getOptional(ArtifactFetchOptions.ARTIFACT_LIST)
+                        .orElse(Collections.emptyList());
+
+        final List<String> uploadedAdditionalUris =
+                additionalUris.stream()
+                        .map(FunctionUtils.uncheckedFunction(
+                                artifactUri -> upload(config, artifactUri)))
+                        .collect(Collectors.toList());
+
+        config.set(ArtifactFetchOptions.ARTIFACT_LIST, uploadedAdditionalUris);
+    }
+
+    @VisibleForTesting
+    String upload(Configuration config, String artifactUriStr)
+            throws IOException, URISyntaxException {
+        URI artifactUri = PackagedProgramUtils.resolveURI(artifactUriStr);
+        if (!"local".equals(artifactUri.getScheme())) {
+            return artifactUriStr;
+        }
+
+        final String targetDir = config.get(KubernetesConfigOptions.LOCAL_UPLOAD_TARGET);
+        checkArgument(
+                !StringUtils.isNullOrWhitespaceOnly(targetDir),
+                String.format(
+                        "Setting '%s' to a valid remote path is required.",
+                        KubernetesConfigOptions.LOCAL_UPLOAD_TARGET.key()));
+
+        final File src = new File(artifactUri.getPath());
+        final Path target = new Path(targetDir, src.getName());
+        if (target.getFileSystem().exists(target)) {
+            LOG.debug("Skipping artifact '{}', as it already exists.", target);

Review Comment:
   The existing artifact has to be removed by hand to get the updated JAR, or the new file has to be renamed. Regarding this, I was thinking about adding a config option flag to force-overwrite existing artifacts on the remote.
   
   The same thing might make sense for fetching the artifacts as well, but I think the two cases are different, so they would probably require two separate flags.
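   To make the idea concrete, a minimal sketch of such a force-overwrite flag, purely illustrative: the option name, key, and default below are assumptions, not part of this PR.
   ```java
   // Hypothetical config option (name, key, and default are assumptions):
   public static final ConfigOption<Boolean> LOCAL_UPLOAD_OVERWRITE =
           ConfigOptions.key("kubernetes.artifacts.local-upload-overwrite")
                   .booleanType()
                   .defaultValue(false)
                   .withDescription(
                           "If enabled, overwrites an artifact on the remote target if it already exists.");

   // Sketch of how upload() could consult the flag before skipping:
   if (target.getFileSystem().exists(target) && !config.get(LOCAL_UPLOAD_OVERWRITE)) {
       LOG.debug("Skipping artifact '{}', as it already exists.", target);
       return target.toString();
   }
   ```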



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Updated] (FLINK-34427) ResourceManagerTaskExecutorTest fails fatally (exit code 239)

2024-02-12 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-34427:
--
Component/s: Runtime / Coordination

> ResourceManagerTaskExecutorTest fails fatally (exit code 239)
> -
>
> Key: FLINK-34427
> URL: https://issues.apache.org/jira/browse/FLINK-34427
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://github.com/apache/flink/actions/runs/7866453350/job/21460921911#step:10:8959
> {code}
> Error: 02:28:53 02:28:53.220 [ERROR] Process Exit Code: 239
> Error: 02:28:53 02:28:53.220 [ERROR] Crashed tests:
> Error: 02:28:53 02:28:53.220 [ERROR] 
> org.apache.flink.runtime.resourcemanager.ResourceManagerTaskExecutorTest
> Error: 02:28:53 02:28:53.220 [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> Error: 02:28:53 02:28:53.220 [ERROR] Command was /bin/sh -c cd 
> '/root/flink/flink-runtime' && 
> '/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java' '-XX:+UseG1GC' '-Xms256m' 
> '-XX:+IgnoreUnrecognizedVMOptions' 
> '--add-opens=java.base/java.util=ALL-UNNAMED' 
> '--add-opens=java.base/java.lang=ALL-UNNAMED' 
> '--add-opens=java.base/java.net=ALL-UNNAMED' 
> '--add-opens=java.base/java.io=ALL-UNNAMED' 
> '--add-opens=java.base/java.util.concurrent=ALL-UNNAMED' '-Xmx768m' '-jar' 
> '/root/flink/flink-runtime/target/surefire/surefirebooter-20240212022332296_94.jar'
>  '/root/flink/flink-runtime/target/surefire' 
> '2024-02-12T02-21-39_495-jvmRun3' 'surefire-20240212022332296_88tmp' 
> 'surefire_26-20240212022332296_91tmp'
> Error: 02:28:53 02:28:53.220 [ERROR] Error occurred in starting fork, check 
> output in log
> Error: 02:28:53 02:28:53.220 [ERROR] Process Exit Code: 239
> Error: 02:28:53 02:28:53.220 [ERROR] Crashed tests:
> Error: 02:28:53 02:28:53.221 [ERROR] 
> org.apache.flink.runtime.resourcemanager.ResourceManagerTaskExecutorTest
> Error: 02:28:53 02:28:53.221 [ERROR]  at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:456)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Bump org.yaml:snakeyaml from 1.31 to 2.0 in /flink-connector-kafka [flink-connector-kafka]

2024-02-12 Thread via GitHub


dependabot[bot] opened a new pull request, #85:
URL: https://github.com/apache/flink-connector-kafka/pull/85

   Bumps [org.yaml:snakeyaml](https://bitbucket.org/snakeyaml/snakeyaml) from 
1.31 to 2.0.
   
   Commits
   - [c98ffba](https://bitbucket.org/snakeyaml/snakeyaml/commits/c98ffba9cd065d1ead94c9ec580d8b5a5966c9d3) issue 561: add negative test case
   - [e2ca740](https://bitbucket.org/snakeyaml/snakeyaml/commits/e2ca740df5510abf4f8de49c56e4ec53ec7b5624) Use Maven wrapper on github
   - [49d91a1](https://bitbucket.org/snakeyaml/snakeyaml/commits/49d91a1e2d7fbd756f1d5f380b0c07e13546222d) Fix target for github
   - [19e331d](https://bitbucket.org/snakeyaml/snakeyaml/commits/19e331dd722325758263bfdfdd1d72872d8451bd) Disable toolchain for github
   - [42c7812](https://bitbucket.org/snakeyaml/snakeyaml/commits/42c781297909a3c7e61a234071540b91c6bf5834) Cobertura plugin does not work
   - [03c82b5](https://bitbucket.org/snakeyaml/snakeyaml/commits/03c82b5d8ef3525ba407f3a96cbb6d5f6f9d364d) Rename GlobalTagRejectionTest to be run by Maven
   - [6e8cd89](https://bitbucket.org/snakeyaml/snakeyaml/commits/6e8cd890716dfe22d5ba56f9a592225fb7fa2803) Remove cobertura
   - [d9b0f48](https://bitbucket.org/snakeyaml/snakeyaml/commits/d9b0f480b1a63aca4678da7ab1915fcfc7d2a856) Improve Javadoc
   - [519791a](https://bitbucket.org/snakeyaml/snakeyaml/commits/519791aa35b5415494234cd91c250ba5ed9fa80a) Run install and site goals under docker
   - [82f33d2](https://bitbucket.org/snakeyaml/snakeyaml/commits/82f33d25ae189560ebeed29bbe3aff5bc44556fc) Merge branch 'master' into add-module-info
   - Additional commits viewable in the [compare view](https://bitbucket.org/snakeyaml/snakeyaml/branches/compare/snakeyaml-2.0..snakeyaml-1.31)
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.yaml:snakeyaml&package-manager=maven&previous-version=1.31&new-version=2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   You can disable automated security fix PRs for this repo from the [Security 
Alerts page](https://github.com/apache/flink-connector-kafka/network/alerts).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34192] Update to be compatible with updated SinkV2 interfaces [flink-connector-kafka]

2024-02-12 Thread via GitHub


boring-cyborg[bot] commented on PR #84:
URL: 
https://github.com/apache/flink-connector-kafka/pull/84#issuecomment-1940629810

   Awesome work, congrats on your first merged pull request!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34192] Update to be compatible with updated SinkV2 interfaces [flink-connector-kafka]

2024-02-12 Thread via GitHub


MartijnVisser merged PR #84:
URL: https://github.com/apache/flink-connector-kafka/pull/84


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-12 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816876#comment-17816876
 ] 

Matthias Pohl commented on FLINK-34403:
---

* 
https://dev.azure.com/apache-flink/web/build.aspx?pcguid=2d3c0ac8-fecf-45be-8407-6d87302181a9=vstfs%3a%2f%2f%2fBuild%2fBuild%2f57469_data=ew0KICAic291cmNlIjogIlNsYWNrUGlwZWxpbmVzQXBwIiwNCiAgInNvdXJjZV9ldmVudF9uYW1lIjogImJ1aWxkLmNvbXBsZXRlIg0KfQ%3d%3d
* 
https://dev.azure.com/apache-flink/web/build.aspx?pcguid=2d3c0ac8-fecf-45be-8407-6d87302181a9=vstfs%3a%2f%2f%2fBuild%2fBuild%2f57489_data=ew0KICAic291cmNlIjogIlNsYWNrUGlwZWxpbmVzQXBwIiwNCiAgInNvdXJjZV9ldmVudF9uYW1lIjogImJ1aWxkLmNvbXBsZXRlIg0KfQ%3d%3d
* 
https://dev.azure.com/apache-flink/web/build.aspx?pcguid=2d3c0ac8-fecf-45be-8407-6d87302181a9=vstfs%3a%2f%2f%2fBuild%2fBuild%2f57491_data=ew0KICAic291cmNlIjogIlNsYWNrUGlwZWxpbmVzQXBwIiwNCiAgInNvdXJjZV9ldmVudF9uYW1lIjogImJ1aWxkLmNvbXBsZXRlIg0KfQ%3d%3d

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> 

[jira] [Commented] (FLINK-34273) git fetch fails

2024-02-12 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816875#comment-17816875
 ] 

Matthias Pohl commented on FLINK-34273:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57492=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=a47dd1b5-aa0a-596a-799b-05a053059d14

> git fetch fails
> ---
>
> Key: FLINK-34273
> URL: https://issues.apache.org/jira/browse/FLINK-34273
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI, Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> We've seen multiple {{git fetch}} failures. I assume this to be an 
> infrastructure issue. This Jira issue is for documentation purposes.
> {code:java}
> error: RPC failed; curl 18 transfer closed with outstanding read data 
> remaining
> error: 5211 bytes of body are still expected
> fetch-pack: unexpected disconnect while reading sideband packet
> fatal: early EOF
> fatal: fetch-pack: invalid index-pack output {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57080=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=5d6dc3d3-393d-5111-3a40-c6a5a36202e6=667



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34403][ci] Transforms VeryBigPbProtoToRowTest into an integration test [flink]

2024-02-12 Thread via GitHub


XComp commented on PR #24302:
URL: https://github.com/apache/flink/pull/24302#issuecomment-194055

   The [CI run 
shows](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57473=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23880)
 that the test is running as an integration test now.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34192] Update to be compatible with updated SinkV2 interfaces [flink-connector-kafka]

2024-02-12 Thread via GitHub


mas-chen commented on code in PR #84:
URL: 
https://github.com/apache/flink-connector-kafka/pull/84#discussion_r1487301708


##
flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableTestUtils.java:
##
@@ -124,4 +128,12 @@ public static void comparedWithKeyAndOrder(
 
                     matching(TableTestMatchers.deepEqualTo(expectedData.get(key), false)));
         }
     }
+
+    private static String rowToString(Object o) {
+        if (o instanceof Row) {

Review Comment:
   This confused me, but I understand it is to provide compatibility for 1.19. I'd leave a comment, since that's not clear.
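   For instance, a comment along these lines; the method body below is only a sketch, and the 1.19-compatibility rationale is taken from this thread:
   ```java
   // NOTE: on Flink 1.19 collected results may still surface as
   // org.apache.flink.types.Row, while newer versions return a consistent
   // string form; normalize here so comparisons work on both. (Illustrative.)
   private static String rowToString(Object o) {
       if (o instanceof Row) {
           return ((Row) o).toString();
       }
       return String.valueOf(o);
   }
   ```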



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486986597


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
                                 "org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
                         .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   
   SQL Server: Uses the ORDER BY clause for sorting based on column names, supporting ascending or descending order. It does not have native array types, but allows sorting through table variables or temporary tables.
   
   SQLite: Similar to SQL Server, it relies on ORDER BY for sorting, without built-in array support.
   
   PostgreSQL: Supports arrays and provides functions for sorting array elements directly. Its intarray module (https://www.postgresql.org/docs/8.4/intarray.html) has four ways to sort an array:
   ```
   sort(int[], text dir) | int[] | sort array — dir must be asc or desc | sort('{1,2,3}'::int[], 'desc') | {3,2,1}
   sort(int[])           | int[] | sort in ascending order              | sort(array[11,77,44])          | {11,44,77}
   sort_asc(int[])       | int[] | sort in ascending order              |                                |
   sort_desc(int[])      | int[] | sort in descending order             |                                |
   ```
   MySQL & MariaDB: Lack built-in array support; sorting is achieved using ORDER BY. For JSON array elements, functions such as JSON_EXTRACT are used.
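   For comparison, a minimal usage sketch of the function under review, matching the type strategy in the diff above (one array argument plus an optional BOOLEAN); treating the flag as an ascending/descending switch is my assumption, not confirmed in this thread:
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class ArraySortExample {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // One-argument form: sorts the array elements (ascending assumed).
           tEnv.sqlQuery("SELECT ARRAY_SORT(ARRAY[3, 1, 2])").execute().print();
           // Two-argument form: the BOOLEAN from the input type strategy, assumed
           // here to choose ascending (TRUE) vs. descending (FALSE) order.
           tEnv.sqlQuery("SELECT ARRAY_SORT(ARRAY[3, 1, 2], FALSE)").execute().print();
       }
   }
   ```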
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34000] Implement restore tests for IncrementalGroupAgg node [flink]

2024-02-12 Thread via GitHub


bvarghese1 commented on PR #24154:
URL: https://github.com/apache/flink/pull/24154#issuecomment-1940403064

   @dawidwys Rebased.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34386][state] Add RocksDB bloom filter metrics [flink]

2024-02-12 Thread via GitHub


hejufang commented on PR #24274:
URL: https://github.com/apache/flink/pull/24274#issuecomment-1940364419

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33201][Connectors/Kafka] Fix memory leak in CachingTopicSelector [flink-connector-kafka]

2024-02-12 Thread via GitHub


qbx2 commented on PR #55:
URL: 
https://github.com/apache/flink-connector-kafka/pull/55#issuecomment-1940337630

   @MartijnVisser Could you please review?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26088][Connectors/ElasticSearch] Add Elasticsearch 8.0 support [flink-connector-elasticsearch]

2024-02-12 Thread via GitHub


reta commented on code in PR #53:
URL: 
https://github.com/apache/flink-connector-elasticsearch/pull/53#discussion_r1487031922


##
flink-connector-elasticsearch8/pom.xml:
##
@@ -0,0 +1,141 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.flink</groupId>
+        <artifactId>flink-connector-elasticsearch-parent</artifactId>
+        <version>4.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>flink-connector-elasticsearch8</artifactId>
+    <name>Flink : Connectors : Elasticsearch 8</name>
+
+    <packaging>jar</packaging>
+
+    <properties>
+        <elasticsearch.version>8.7.0</elasticsearch.version>

Review Comment:
   Probably we could use the latest one (as of today):
   
   ```suggestion
        <elasticsearch.version>8.12.1</elasticsearch.version>
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34386][state] Add RocksDB bloom filter metrics [flink]

2024-02-12 Thread via GitHub


JingGe commented on PR #24274:
URL: https://github.com/apache/flink/pull/24274#issuecomment-1939890045

   @hejufang the issue should be fixed if you rebase your branch


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486986597


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
                                 "org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
                         .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   For PostgreSQL (https://www.postgresql.org/docs/8.4/intarray.html), the intarray module has four ways to sort an array:
   ```
   sort(int[], text dir) | int[] | sort array — dir must be asc or desc | sort('{1,2,3}'::int[], 'desc') | {3,2,1}
   sort(int[])           | int[] | sort in ascending order              | sort(array[11,77,44])          | {11,44,77}
   sort_asc(int[])       | int[] | sort in ascending order              |                                |
   sort_desc(int[])      | int[] | sort in descending order             |                                |
   ```
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486986597


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
                                 "org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
                         .build();
 
+    public static final BuiltInFunctionDefinition ARRAY_SORT =
+            BuiltInFunctionDefinition.newBuilder()
+                    .name("ARRAY_SORT")
+                    .kind(SCALAR)
+                    .inputTypeStrategy(
+                            or(
+                                    sequence(new ArrayComparableElementArgumentTypeStrategy()),
+                                    sequence(
+                                            new ArrayComparableElementArgumentTypeStrategy(),
+                                            logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   For PostgreSQL (https://www.postgresql.org/docs/8.4/intarray.html), the intarray module has four ways to sort an array:
   ```
   sort(int[], text dir) | int[] | sort array — dir must be asc or desc | sort('{1,2,3}'::int[], 'desc') | {3,2,1}
   sort(int[])           | int[] | sort in ascending order              | sort(array[11,77,44])          | {11,44,77}
   sort_asc(int[])       | int[] | sort in ascending order              |                                |
   sort_desc(int[])      | int[] | sort in descending order             |                                |
   ```
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-12 Thread via GitHub


JingGe commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1939874511

   Could you rebase your branch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31663][table] Add-ARRAY_EXCEPT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23173:
URL: https://github.com/apache/flink/pull/23173#issuecomment-1939821694

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #22951:
URL: https://github.com/apache/flink/pull/22951#issuecomment-1939820210

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34367) Release Testing Instructions: Verify FLINK-34027 AsyncScalarFunction for asynchronous scalar function support

2024-02-12 Thread Alan Sheinberg (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816825#comment-17816825
 ] 

Alan Sheinberg commented on FLINK-34367:


Sorry, I missed this last week. I don't believe this requires cross-team testing, since it's a fairly well-defined new feature that shouldn't affect the correctness of existing functionality or other components. Can you give me an idea of what sort of functionality you think requires this type of testing? [~lincoln.86xy] [~jingge]

> Release Testing Instructions: Verify FLINK-34027 AsyncScalarFunction for 
> asynchronous scalar function support
> -
>
> Key: FLINK-34367
> URL: https://issues.apache.org/jira/browse/FLINK-34367
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Alan Sheinberg
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-27891][Table] Add ARRAY_APPEND and ARRAY_PREPEND functions [flink]

2024-02-12 Thread via GitHub


snuyanzin commented on PR #19873:
URL: https://github.com/apache/flink/pull/19873#issuecomment-1939713892

   rebased to resolve conflicts


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] fix: flink account cannot get resource "services" in API group [flink-kubernetes-operator]

2024-02-12 Thread via GitHub


prakash-42 commented on PR #596:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/596#issuecomment-1939705128

   Sorry to bug you again @mxm, but what is the right place to submit bugs and ask questions about the Flink operator? (The JIRA project doesn't have public signup; are the Flink mailing lists active?)
   
   I tried to use the [flink-sql-runner-example](https://github.com/apache/flink-kubernetes-operator/tree/main/examples/flink-sql-runner-example) before writing documentation about it, and stumbled into two issues:
   
   1. **The regex for the SET statement is not flexible. (not a blocker)**
      [The regex for the SET statement](https://github.com/apache/flink-kubernetes-operator/blob/release-1.7/examples/flink-sql-runner-example/src/main/java/org/apache/flink/examples/SqlRunner.java#L44) enforces that the user puts single quotes (`'`) around both the key and the value, and that there is a space on both sides of the equals sign (`=`). (I sketch a more lenient pattern at the end of this message.)
   
   2. **Any text present between backticks (\`) is getting lost** (unable to debug the reason)
      When I try to submit a script that uses backticks around field names, the text between backticks isn't read by the code. ([FileUtils.readFileUtf8](https://github.com/apache/flink-kubernetes-operator/blob/release-1.7/examples/flink-sql-runner-example/src/main/java/org/apache/flink/examples/SqlRunner.java#L55) is responsible for this; I added a log.) I used `flink:1.17` as my base image. I'm not able to reproduce this behavior locally; it only happens when I package the job and deploy it in our EKS cluster.
   
      Here's a portion of the SQL script I used (I printed the full script to console; even the content between backticks in the comments was gone):
      ```
      SET 'execution.checkpointing.interval' = '30s';
      SET 'pipeline.name' = 'MyFlinkJob';
      
      -- some comment text in `back ticks`
      -- some more text in ```back ticks x3```
      
      CREATE TABLE Library_Book_Source (
          database_name STRING METADATA VIRTUAL,
          table_name STRING METADATA VIRTUAL,
          `ID` INT NOT NULL,
          Title VARCHAR(255),
          `Price` VARCHAR(50),
          PRIMARY KEY (`ID`) NOT ENFORCED
      ) WITH (
          'connector' = 'mysql-cdc',
          'hostname' = 'DB_HOST',
          'port' = '3306',
          'username' = 'username',
          'password' = 'password',
          'database-name' = 'Library',
          'table-name' = 'Book',
          'jdbc.properties.useSSL' = 'false'
      );
      ```
   
   Due to these problems, I'm not sure if this example is production-ready and whether it should be added to our documentation.
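   
   The lenient pattern mentioned under issue 1, as a minimal sketch; the pattern and class name are illustrative, not the operator's code:
   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class SetStatementRegexSketch {
       // Accepts optional single quotes around key and value, and any (or no)
       // spacing around '='. Illustrative only.
       private static final Pattern SET_STATEMENT =
               Pattern.compile(
                       "SET\\s+'?([^']\\S*?)'?\\s*=\\s*'?(.*?)'?;?\\s*", Pattern.CASE_INSENSITIVE);

       public static void main(String[] args) {
           String[] statements = {
               "SET 'pipeline.name' = 'MyFlinkJob';", "SET pipeline.name=MyFlinkJob;"
           };
           for (String stmt : statements) {
               Matcher m = SET_STATEMENT.matcher(stmt);
               if (m.matches()) {
                   // Prints "pipeline.name -> MyFlinkJob" for both variants.
                   System.out.println(m.group(1) + " -> " + m.group(2));
               }
           }
       }
   }
   ```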


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34157) Migrate FlinkLimit0RemoveRule

2024-02-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin closed FLINK-34157.
---

> Migrate FlinkLimit0RemoveRule
> -
>
> Key: FLINK-34157
> URL: https://issues.apache.org/jira/browse/FLINK-34157
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34157) Migrate FlinkLimit0RemoveRule

2024-02-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816805#comment-17816805
 ] 

Sergey Nuyanzin commented on FLINK-34157:
-

Merged to master as 
[6f74889cbb52a2e7c11ad6cd86db7082604c|https://github.com/apache/flink/commit/6f74889cbb52a2e7c11ad6cd86db7082604c]

> Migrate FlinkLimit0RemoveRule
> -
>
> Key: FLINK-34157
> URL: https://issues.apache.org/jira/browse/FLINK-34157
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34157) Migrate FlinkLimit0RemoveRule

2024-02-12 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin resolved FLINK-34157.
-
Fix Version/s: 1.20.0
   Resolution: Fixed

> Migrate FlinkLimit0RemoveRule
> -
>
> Key: FLINK-34157
> URL: https://issues.apache.org/jira/browse/FLINK-34157
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34157][table] Migrate FlinkLimit0RemoveRule to java [flink]

2024-02-12 Thread via GitHub


snuyanzin merged PR #24139:
URL: https://github.com/apache/flink/pull/24139


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34157][table] Migrate FlinkLimit0RemoveRule to java [flink]

2024-02-12 Thread via GitHub


snuyanzin commented on PR #24139:
URL: https://github.com/apache/flink/pull/24139#issuecomment-1939681905

   Thanks for taking a look


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486799769


##
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutor.java:
##
@@ -1266,15 +1282,21 @@ public CompletableFuture freeSlot(
 
 @Override
 public void freeInactiveSlots(JobID jobId, Time timeout) {
-log.debug("Freeing inactive slots for job {}.", jobId);
+MdcUtils.addJobID(jobId);

Review Comment:
   It wouldn't work for all cases, because sometimes we need to extract the job ID from an argument or from TM state.
   (And it would also be much less readable, IMO.)
   
   edit: it also completely removes the logger context after the execution, which is not what we want for non-test loggers
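   
   For context, the pattern being discussed boils down to this, a minimal sketch using plain SLF4J MDC (the key `flink-job-id` is taken from the docs change in this PR; `MdcUtils` may wrap it differently):
   ```java
   import org.slf4j.MDC;

   // Sketch: expose the job ID to every log line on this thread for the
   // duration of the call, then clean up.
   public void freeInactiveSlots(JobID jobId, Time timeout) {
       MDC.put("flink-job-id", jobId.toString());
       try {
           log.debug("Freeing inactive slots.");
           // ... actual slot-freeing logic ...
       } finally {
           MDC.remove("flink-job-id");
       }
   }
   ```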



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486809430


##
flink-tests/src/test/java/org/apache/flink/test/misc/JobIDLoggingITCase.java:
##
@@ -0,0 +1,218 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.misc;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.runtime.checkpoint.CheckpointCoordinator;
+import org.apache.flink.runtime.executiongraph.ExecutionGraph;
+import org.apache.flink.runtime.jobmaster.JobMaster;
+import org.apache.flink.runtime.taskmanager.Task;
+import org.apache.flink.runtime.testutils.CommonTestUtils;
+import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+import org.apache.flink.test.util.MiniClusterWithClientResource;
+import org.apache.flink.util.MdcUtils;
+import org.apache.flink.util.TestLogger;
+
+import org.apache.logging.log4j.Level;
+import org.apache.logging.log4j.core.LogEvent;
+import org.apache.logging.log4j.core.LoggerContext;
+import org.apache.logging.log4j.core.config.LoggerConfig;
+import org.apache.logging.log4j.test.appender.ListAppender;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkState;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Tests adding of {@link JobID} to logs (via {@link org.slf4j.MDC}) in the 
most important cases.
+ */
+public class JobIDLoggingITCase extends TestLogger {
+private static final Logger logger = 
LoggerFactory.getLogger(JobIDLoggingITCase.class);
+
+@ClassRule
+public static MiniClusterWithClientResource miniClusterResource =
+new MiniClusterWithClientResource(
+new MiniClusterResourceConfiguration.Builder()
+.setNumberTaskManagers(1)
+.setNumberSlotsPerTaskManager(1)
+.build());
+
+@Test
+public void testJobIDLogging() throws Exception {
+LoggerContext ctx = LoggerContext.getContext(false);

Review Comment:
   The extension is tailored for a single-logger case, while in this test, 
there are many loggers and a single run.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486799769


##
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutor.java:
##
@@ -1266,15 +1282,21 @@ public CompletableFuture freeSlot(
 
 @Override
 public void freeInactiveSlots(JobID jobId, Time timeout) {
-log.debug("Freeing inactive slots for job {}.", jobId);
+MdcUtils.addJobID(jobId);

Review Comment:
   It wouldn't work for all cases, because sometimes we need to extract the job ID from an argument or from TM state.
   (And it would also be much less readable, IMO.)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486795898


##
flink-rpc/flink-rpc-core/src/main/java/org/apache/flink/runtime/rpc/RpcGateway.java:
##
@@ -34,4 +34,16 @@ public interface RpcGateway {
      * @return Fully qualified hostname under which the associated rpc endpoint is reachable
      */
     String getHostname();
+
+    /**
+     * Perform optional steps before handling any messages or performing service actions (e.g.
+     * start/stop).
+     */
+    default void beforeInvocation() {}

Review Comment:
   Do you mean passing these Runnables to `actor.Props.create()`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486789947


##
flink-rpc/flink-rpc-core/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java:
##
@@ -45,11 +45,13 @@ public MainThreadValidatorUtil(RpcEndpoint endpoint) {
     public void enterMainThread() {
         assert (endpoint.currentMainThread.compareAndSet(null, Thread.currentThread()))
                 : "The RpcEndpoint has concurrent access from " + endpoint.currentMainThread.get();
+        endpoint.beforeInvocation();

Review Comment:
   When I remove it, I get log lines like this, without a job ID:
   ```
   [] ExecutionGraph [flink-pekko.actor.default-dispatcher-6] INFO  Source: Custom Source -> Timestamps/Watermarks -> transform-1-forward -> Sink: Unnamed (1/4) (2dae9a492649d761d992295551f74d3c_cbc357ccb763df2852fee8c4fc7d55f2_0_0) switched from SCHEDULED to DEPLOYING.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486782437


##
flink-core/src/main/java/org/apache/flink/util/JobIdLoggingExecutor.java:
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.apache.flink.api.common.JobID;
+
+import java.util.concurrent.Executor;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+class JobIdLoggingExecutor implements Executor {
+protected final JobID jobID;

Review Comment:
   Yes, it can be generalised, but what would be the use case?
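   
   For reference, the decoration in question amounts to this, a sketch only: a delegate `Executor` is wrapped so every submitted task runs with the job ID in the MDC (the delegate field and constructor are assumptions based on the snippet above; generalising would mean accepting an arbitrary `Map<String, String>` of MDC entries instead of a `JobID`):
   ```java
   import java.util.concurrent.Executor;
   import org.slf4j.MDC;

   class JobIdLoggingExecutorSketch implements Executor {
       private final JobID jobID;
       private final Executor delegate; // assumed field

       JobIdLoggingExecutorSketch(JobID jobID, Executor delegate) {
           this.jobID = jobID;
           this.delegate = delegate;
       }

       @Override
       public void execute(Runnable command) {
           delegate.execute(
                   () -> {
                       MDC.put("flink-job-id", jobID.toString()); // key from the docs diff
                       try {
                           command.run();
                       } finally {
                           MDC.remove("flink-job-id");
                       }
                   });
       }
   }
   ```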



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34417] Log Job ID via MDC [flink]

2024-02-12 Thread via GitHub


rkhachatryan commented on code in PR #24292:
URL: https://github.com/apache/flink/pull/24292#discussion_r1486776195


##
docs/content.zh/docs/deployment/advanced/logging.md:
##
@@ -40,6 +40,21 @@ Flink 中的日志记录是使用 [SLF4J](http://www.slf4j.org/) 日志接口实
 
 
 
+### Structured logging
+
+Flink adds the following fields to the [MDC](https://www.slf4j.org/api/org/slf4j/MDC.html) of most of the relevant log messages (experimental feature):
+- Job ID
+  - key: `flink-job-id`

Review Comment:
   My reasoning was that users configure the mappings anyway, so making the key configurable isn't strictly necessary.
   OTOH, adding more configuration to Flink makes it more complex, so I chose to keep it static.
   I can add the configuration though, if you think it's necessary.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #22951:
URL: https://github.com/apache/flink/pull/22951#issuecomment-1939582774

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31663][table] Add-ARRAY_EXCEPT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23173:
URL: https://github.com/apache/flink/pull/23173#issuecomment-1939581810

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-28229][streaming-java] Introduce Source API alternatives for StreamExecutionEnvironment#fromCollection() methods [flink]

2024-02-12 Thread via GitHub


tigrulya-exe closed pull request #21028: [FLINK-28229][streaming-java] 
Introduce Source API alternatives for 
StreamExecutionEnvironment#fromCollection() methods
URL: https://github.com/apache/flink/pull/21028


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-02-12 Thread via GitHub


jeyhunkarimov commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-1939523473

   Thanks a lot for the comments. I agree. I was a bit influenced by MySQL's definition and didn't check `json.org`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-02-12 Thread via GitHub


z3d1k commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1486713717


##
amp-request-signer/src/main/java/org/apache/flink/connector/prometheus/sink/aws/AmazonManagedPrometheusWriteRequestSigner.java:
##
@@ -0,0 +1,98 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink.aws;
+
+import org.apache.flink.connector.prometheus.sink.PrometheusRequestSigner;
+import org.apache.flink.util.Preconditions;
+
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.AWSSessionCredentials;
+import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
+import com.amazonaws.util.BinaryUtils;
+import org.apache.commons.lang3.StringUtils;
+
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.Map;
+
+/** Sign a Remote-Write request to Amazon Managed Service for Prometheus (AMP). */
+public class AmazonManagedPrometheusWriteRequestSigner implements PrometheusRequestSigner {
+
+    private final URL remoteWriteUrl;
+    private final String awsRegion;
+
+    /**
+     * Constructor.
+     *
+     * @param remoteWriteUrl URL of the remote-write endpoint
+     * @param awsRegion Region of the AMP workspace
+     */
+    public AmazonManagedPrometheusWriteRequestSigner(String remoteWriteUrl, String awsRegion) {
+        Preconditions.checkArgument(
+                StringUtils.isNotBlank(awsRegion), "Missing or blank AMP workspace region");
+        Preconditions.checkNotNull(
+                StringUtils.isNotBlank(remoteWriteUrl),
+                "Missing or blank AMP workspace remote-write URL");
+        this.awsRegion = awsRegion;
+        try {
+            this.remoteWriteUrl = new URL(remoteWriteUrl);
+        } catch (MalformedURLException e) {
+            throw new IllegalArgumentException(
+                    "Invalid AMP remote-write URL: " + remoteWriteUrl, e);
+        }
+    }
+
+    /**
+     * Add the additional Http request headers required by Amazon Managed Prometheus:
+     * 'x-amz-content-sha256', 'Host', 'X-Amz-Date', 'x-amz-security-token' and 'Authorization'.
+     *
+     * @param requestHeaders original Http request headers. It must be mutable. For efficiency, any
+     *     new header is added to the map, instead of making a copy.
+     * @param requestBody request body, already compressed
+     */
+    @Override
+    public void addSignatureHeaders(Map<String, String> requestHeaders, byte[] requestBody) {
+        byte[] contentHash = AWS4SignerBase.hash(requestBody);
+        String contentHashString = BinaryUtils.toHex(contentHash);
+        // this header must be included before generating the Authorization header
+        requestHeaders.put("x-amz-content-sha256", contentHashString);
+
+        DefaultAWSCredentialsProviderChain credsChain = new DefaultAWSCredentialsProviderChain();

Review Comment:
   Should this be made configurable? Some use cases may require a specific credentials provider implementation.
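   
   To illustrate, one possible shape for that, a sketch using the AWS SDK v1 types already imported in this file; the field and constructor below are hypothetical:
   ```java
   import com.amazonaws.auth.AWSCredentialsProvider;
   import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

   class ConfigurableSignerSketch {
       // Hypothetical: let callers inject e.g. a profile- or STS-based provider.
       private final AWSCredentialsProvider credentialsProvider;

       ConfigurableSignerSketch(AWSCredentialsProvider credentialsProvider) {
           // Preserve today's behavior when nothing is supplied.
           this.credentialsProvider =
                   credentialsProvider != null
                           ? credentialsProvider
                           : new DefaultAWSCredentialsProviderChain();
       }
       // addSignatureHeaders(...) would then call credentialsProvider.getCredentials()
       // instead of instantiating DefaultAWSCredentialsProviderChain inline.
   }
   ```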



##
amp-request-signer/src/main/java/org/apache/flink/connector/prometheus/sink/aws/AWS4SignerBase.java:
##
@@ -0,0 +1,290 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.sink.aws;
+
+import com.amazonaws.util.BinaryUtils;
+
+import javax.crypto.Mac;
+import javax.crypto.spec.SecretKeySpec;
+

Re: [PR] [FLINK-31663][table] Add-ARRAY_EXCEPT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23173:
URL: https://github.com/apache/flink/pull/23173#issuecomment-1939372666

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode [flink]

2024-02-12 Thread via GitHub


schevalley2 commented on code in PR #24303:
URL: https://github.com/apache/flink/pull/24303#discussion_r1486621705


##
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/artifact/DefaultKubernetesArtifactUploader.java:
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.artifact;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.client.cli.ArtifactFetchOptions;
+import org.apache.flink.client.program.PackagedProgramUtils;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.PipelineOptions;
+import org.apache.flink.core.fs.FSDataOutputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.util.StringUtils;
+import org.apache.flink.util.function.FunctionUtils;
+
+import org.apache.commons.io.FileUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Default {@link KubernetesArtifactUploader} implementation. */
+public class DefaultKubernetesArtifactUploader implements 
KubernetesArtifactUploader {
+
+private static final Logger LOG =
+LoggerFactory.getLogger(DefaultKubernetesArtifactUploader.class);
+
+@Override
+public void uploadAll(Configuration config) throws Exception {
+if (!config.get(KubernetesConfigOptions.LOCAL_UPLOAD_ENABLED)) {
+LOG.info(
+"Local artifact uploading is disabled. Set '{}' to 
enable.",
+KubernetesConfigOptions.LOCAL_UPLOAD_ENABLED.key());
+return;
+}
+
+final String jobUri = upload(config, getJobUri(config));
+config.set(PipelineOptions.JARS, Collections.singletonList(jobUri));
+
+final List<String> additionalUris =
+config.getOptional(ArtifactFetchOptions.ARTIFACT_LIST)
+.orElse(Collections.emptyList());
+
+final List<String> uploadedAdditionalUris =
+additionalUris.stream()
+.map(
+FunctionUtils.uncheckedFunction(
+artifactUri -> upload(config, 
artifactUri)))
+.collect(Collectors.toList());
+
+config.set(ArtifactFetchOptions.ARTIFACT_LIST, uploadedAdditionalUris);
+}
+
+@VisibleForTesting
+String upload(Configuration config, String artifactUriStr)
+throws IOException, URISyntaxException {
+URI artifactUri = PackagedProgramUtils.resolveURI(artifactUriStr);
+if (!"local".equals(artifactUri.getScheme())) {
+return artifactUriStr;
+}
+
+final String targetDir = 
config.get(KubernetesConfigOptions.LOCAL_UPLOAD_TARGET);
+checkArgument(
+!StringUtils.isNullOrWhitespaceOnly(targetDir),
+String.format(
+"Setting '%s' to a valid remote path is required.",
+KubernetesConfigOptions.LOCAL_UPLOAD_TARGET.key()));
+
+final File src = new File(artifactUri.getPath());
+final Path target = new Path(targetDir, src.getName());
+if (target.getFileSystem().exists(target)) {
+LOG.debug("Skipping artifact '{}', as it already exists.", target);

Review Comment:
   Actually, what happens if I don't add a version to the file name: I had a 
`udf.jar` in the target folder and now I have a new version of `udf.jar` 
that I want to upload?
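
   One way to make the skip check content-aware rather than name-only (a sketch 
under assumptions, not code from this PR; a checksum would be a stronger check 
than a length comparison):

   ```java
   import org.apache.flink.core.fs.FileSystem;
   import org.apache.flink.core.fs.Path;

   import java.io.File;
   import java.io.IOException;

   final class UploadSkipCheck {
       // Skip only when the remote file exists AND matches the local size, so a
       // re-uploaded udf.jar with new content is not silently ignored.
       static boolean canSkip(File src, Path target) throws IOException {
           FileSystem fs = target.getFileSystem();
           return fs.exists(target) && fs.getFileStatus(target).getLen() == src.length();
       }
   }
   ```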



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32964) KinesisStreamsSink cant renew credentials with WebIdentityTokenFileCredentialsProvider

2024-02-12 Thread Aleksandr Pilipenko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816700#comment-17816700
 ] 

Aleksandr Pilipenko commented on FLINK-32964:
-

Hi [~jank], thank you for the detailed info, this was very helpful.

My attempt to reproduce the issue was unsuccessful because I also configured 
source and sink to use *WEB_IDENTITY_TOKEN* - this way 
*WebIdentityTokenFileCredentialsProvider* is used directly, with a new instance 
created for each client.

There is also a bug filed in AWS SDK repository related to this: 
https://github.com/aws/aws-sdk-java-v2/issues/3493

> KinesisStreamsSink cant renew credentials with 
> WebIdentityTokenFileCredentialsProvider
> --
>
> Key: FLINK-32964
> URL: https://issues.apache.org/jira/browse/FLINK-32964
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.4, 1.16.2, 1.17.1
>Reporter: PhilippeB
>Priority: Major
>  Labels: pull-request-available
>
> (First time filing a ticket in the Flink community, please let me know if there 
> are any guidelines I need to follow)
> I noticed a very strange behavior with the Kinesis Sink. I am using 
> Flink in containerized Application (reactive) mode on EKS with high 
> availability on S3. 
> Kinesis is configured with an IAM role and appropriate policies. 
> {code:java}
> //Here a part of my flink-config.yaml:
> parallelism.default: 2
> scheduler-mode: reactive
> execution.checkpointing.interval: 10s
> env.java.opts.jobmanager: -Dkubernetes.max.concurrent.requests=200
> containerized.master.env.KUBERNETES_MAX_CONCURRENT_REQUESTS: 200
> aws.credentials.provider: WEB_IDENTITY_TOKEN
> aws.credentials.role.arn: role
> aws.credentials.role.sessionName: session
> aws.credentials.webIdentityToken.file: 
> /var/run/secrets/eks.amazonaws.com/serviceaccount/token {code}
> When my project is deployed, the application and cluster work well, but 
> after the project has been running for about an hour (I suppose the IAM role 
> session needs to be renewed), the job starts crashing continuously.
> {code:java}
> 2023-08-24 10:35:55
> java.lang.IllegalStateException: Connection pool shut down
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.util.Asserts.check(Asserts.java:34)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:269)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$DelegatingHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:75)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$InstrumentedHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:57)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:254)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:104)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:231)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:228)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:63)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.executeHttpRequest(MakeHttpRequestStage.java:77)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:56)
>     at 
> 

[jira] [Updated] (FLINK-32964) KinesisStreamsSink cant renew credentials with WebIdentityTokenFileCredentialsProvider

2024-02-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32964:
---
Labels: pull-request-available  (was: )

> KinesisStreamsSink cant renew credentials with 
> WebIdentityTokenFileCredentialsProvider
> --
>
> Key: FLINK-32964
> URL: https://issues.apache.org/jira/browse/FLINK-32964
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.4, 1.16.2, 1.17.1
>Reporter: PhilippeB
>Priority: Major
>  Labels: pull-request-available
>
> (First time filing a ticket in the Flink community, please let me know if there 
> are any guidelines I need to follow)
> I noticed a very strange behavior with the Kinesis Sink. I am using 
> Flink in containerized Application (reactive) mode on EKS with high 
> availability on S3. 
> Kinesis is configured with an IAM role and appropriate policies. 
> {code:java}
> //Here a part of my flink-config.yaml:
> parallelism.default: 2
> scheduler-mode: reactive
> execution.checkpointing.interval: 10s
> env.java.opts.jobmanager: -Dkubernetes.max.concurrent.requests=200
> containerized.master.env.KUBERNETES_MAX_CONCURRENT_REQUESTS: 200
> aws.credentials.provider: WEB_IDENTITY_TOKEN
> aws.credentials.role.arn: role
> aws.credentials.role.sessionName: session
> aws.credentials.webIdentityToken.file: 
> /var/run/secrets/eks.amazonaws.com/serviceaccount/token {code}
> When my project is deployed, the application and cluster work well, but 
> after the project has been running for about an hour (I suppose the IAM role 
> session needs to be renewed), the job starts crashing continuously.
> {code:java}
> 2023-08-24 10:35:55
> java.lang.IllegalStateException: Connection pool shut down
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.util.Asserts.check(Asserts.java:34)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:269)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$DelegatingHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:75)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.conn.ClientConnectionManagerFactory$InstrumentedHttpClientConnectionManager.requestConnection(ClientConnectionManagerFactory.java:57)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
>     at 
> org.apache.flink.kinesis.shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.internal.impl.ApacheSdkHttpClient.execute(ApacheSdkHttpClient.java:72)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:254)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:104)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:231)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:228)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:63)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.executeHttpRequest(MakeHttpRequestStage.java:77)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:56)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:39)
>     at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
>     at 
> 

[PR] [FLINK-32964][Connectors/AWS] Ensure that new instance of DefaultCredentialsProvider is created for each client [flink-connector-aws]

2024-02-12 Thread via GitHub


z3d1k opened a new pull request, #129:
URL: https://github.com/apache/flink-connector-aws/pull/129

   ## Purpose of the change
   
   Explicitly use `.builder().build()` instead of the `.create()` method when 
constructing `DefaultCredentialsProvider`.
   This way a new instance of the credentials provider is created for each AWS 
SDK client, preventing failures when one of the clients closes the shared 
singleton instance, which previously caused failures in the others.
   
   ## Verifying this change
   
   - *Added unit test*
   
   ## Significant changes
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for convenience.)*
   - [ ] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
 - If yes, how is this documented? not applicable
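
   A minimal sketch of the difference (illustrative only, not the actual diff; 
`KinesisAsyncClient` chosen for illustration):

   ```java
   import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
   import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;

   public class CredentialsProviderSketch {
       // DefaultCredentialsProvider.create() hands out a shared singleton; if one
       // client's close() shuts it down, every other client using it starts failing.
       // builder().build() yields a fresh provider instance per client instead.
       public static KinesisAsyncClient newClient() {
           return KinesisAsyncClient.builder()
                   .credentialsProvider(DefaultCredentialsProvider.builder().build())
                   .build();
       }
   }
   ```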
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-02-12 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1486607342


##
example-datastream-job/README.md:
##
@@ -0,0 +1,22 @@
+## Example job using Prometheus Sink connector with DataStream API
+
+Sample application demonstrating the usage of Prometheus Sink Connector with 
DataStream API.
+
+The example demonstrates how to write to a generic, unauthenticated Prometheus 
remote-write URL, and optionally how to use the Amazon Managed Prometheus 
request signer.
+
+It generates random dummy Memory and CPU metrics from a number of instances, 
and writes them to Prometheus.
+
+### Configuration
+
+The application expects these parameters, via command line:
+
+* `--prometheusRemoteWriteUrl <URL>`: the Prometheus remote-write URL to target
+* `--awsRegion <region>`: (optional) if specified, it configures the Amazon 
Managed Prometheus request signer for a workspace in this Region
+* `--webUI`: (optional, for local development only) enables the Flink Web UI 
with flame graphs

Review Comment:
   I don't think that would be very useful.
   The way you pass parameters depends on your deployment.
   And if you are running locally in your IDE, it depends on your IDE.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-02-12 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1486605618


##
example-datastream-job/pom.xml:
##
@@ -0,0 +1,156 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.flink</groupId>
+        <artifactId>flink-connector-prometheus-parent</artifactId>
+        <version>1.0.0-SNAPSHOT</version>
+    </parent>
+
+    <name>Flink : Connectors : Prometheus : Sample Application</name>
+    <artifactId>example-datastream-job</artifactId>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <flink.version>1.17.2</flink.version>
+        <log4j.version>2.17.1</log4j.version>
+    </properties>
+
+    <dependencies>
+        <!-- Prometheus sink connector -->
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-connector-prometheus</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+        <!-- AMP request signer -->
+        <dependency>
+            <groupId>org.apache.flink.connector.prometheus</groupId>
+            <artifactId>amp-request-signer</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+
+        <!-- Flink provided dependencies -->
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-streaming-java</artifactId>
+            <version>${flink.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-clients</artifactId>
+            <version>${flink.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-connector-base</artifactId>
+            <version>${flink.version}</version>
+            <scope>provided</scope>
+        </dependency>
+
+        <!-- Logging -->
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-slf4j-impl</artifactId>
+            <version>${log4j.version}</version>
+            <scope>runtime</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-api</artifactId>
+            <version>${log4j.version}</version>
+            <scope>runtime</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.logging.log4j</groupId>
+            <artifactId>log4j-core</artifactId>
+            <version>${log4j.version}</version>
+            <scope>runtime</scope>
+        </dependency>
+
+        <!-- Flink Web UI for local runs (subject of this review) -->
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-runtime-web</artifactId>
Review Comment:
   Removed the Flink dashboard when running locally, and removed this dependency, 
which was only used for that.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode [flink]

2024-02-12 Thread via GitHub


schevalley2 commented on code in PR #24303:
URL: https://github.com/apache/flink/pull/24303#discussion_r1486546701


##
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+ Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be 
utilized so Flink will upload the local artifact to DFS during deployment and 
fetch it on the deployed JobManager pod:

Review Comment:
   nit: utilized -> used



##
docs/content.zh/docs/deployment/resource-providers/native_kubernetes.md:
##
@@ -97,13 +100,44 @@ COPY /path/of/my-flink-job.jar 
$FLINK_HOME/usrlib/my-flink-job.jar
 After creating and publishing the Docker image under `custom-image-name`, you 
can start an Application cluster with the following command:
 
 ```bash
-# Local Schema
 $ ./bin/flink run-application \
 --target kubernetes-application \
 -Dkubernetes.cluster-id=my-first-application-cluster \
 -Dkubernetes.container.image.ref=custom-image-name \
 local:///opt/flink/usrlib/my-flink-job.jar
+```
+
+ Configure User Artifact Management
+
+In case you have a locally available Flink job JAR, artifact upload can be 
utilized so Flink will upload the local artifact to DFS during deployment and 
fetch it on the deployed JobManager pod:
+
+```bash
+$ ./bin/flink run-application \
+--target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dkubernetes.artifacts.local-upload-enabled=true \
+-Dkubernetes.artifacts.local-upload-target=s3://my-bucket/ \
+local:///tmp/my-flink-job.jar
+```
+
+The `kubernetes.artifacts.local-upload-enabled` enables this feature, and 
`kubernetes.artifacts.local-upload-target` has to point to a valid remote 
target that exists and the permissions configured properly.

Review Comment:
   nit: that exists and *has* the permissions configured properly.



##
flink-clients/src/main/java/org/apache/flink/client/cli/ArtifactFetchOptionsInternal.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.client.cli;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+
+import java.util.List;
+
+/** Artifact Fetch options. */
+@Internal
+public class ArtifactFetchOptionsInternal {
+
+public static final ConfigOption<List<String>> COMPLETE_LIST =
+ConfigOptions.key("$internal.user.artifacts.complete-list")

Review Comment:
   I agree with you. I wonder if it would be worth either writing it somewhere 
else as you suggest, or simply logging something like:
   
   > DefaultKubernetesArtifactUploader completed uploading artifacts, replacing 
user.artifacts.artifact-list with "…"
   
   So it can be easily found in the logs, like in `checkAndUpdatePortConfigOption`.
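
   One possible shape for that log line (a sketch reusing the variables from the 
quoted `uploadAll` above; exact wording and level are open):

   ```java
   // At the end of uploadAll(), after the config has been updated:
   LOG.info(
           "DefaultKubernetesArtifactUploader completed, replacing '{}' with {}",
           ArtifactFetchOptions.ARTIFACT_LIST.key(),
           uploadedAdditionalUris);
   ```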



##
flink-kubernetes/src/test/java/org/apache/flink/kubernetes/KubernetesClusterDescriptorTest.java:
##
@@ -86,7 +86,8 @@ public FlinkKubeClient fromConfiguration(
 
server.createClient().inNamespace(NAMESPACE),
 Executors.newDirectExecutorService());
 }
-});
+},
+config -> {});

Review Comment:
   nit: I think it's fine since it's a test. Seeing the diff, I thought maybe 
it's possible to declare some static `NO_OP = config -> {}` and use it here as 
`KubernetesArtifactUploader.NO_OP`, but I don't have a strong opinion on that.
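
   For illustration, the suggested constant could live on the interface itself 
(a sketch; the real interface is in flink-kubernetes and only its `uploadAll` 
signature is assumed here):

   ```java
   import org.apache.flink.configuration.Configuration;

   interface KubernetesArtifactUploader {
       // Shared no-op instance for tests that don't care about uploads.
       KubernetesArtifactUploader NO_OP = config -> {};

       void uploadAll(Configuration config) throws Exception;
   }
   ```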



##

Re: [PR] [FLINK-33138] DataStream API implementation [flink-connector-prometheus]

2024-02-12 Thread via GitHub


nicusX commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-prometheus/pull/1#discussion_r1486604736


##
example-datastream-job/src/main/java/org/apache/flink/connector/prometheus/examples/DataStreamJob.java:
##
@@ -0,0 +1,140 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one or more
+ *  contributor license agreements.  See the NOTICE file distributed with
+ *  this work for additional information regarding copyright ownership.
+ *  The ASF licenses this file to You under the Apache License, Version 2.0
+ *  (the "License"); you may not use this file except in compliance with
+ *  the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.connector.prometheus.examples;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.base.sink.AsyncSinkBase;
+import org.apache.flink.connector.prometheus.sink.PrometheusRequestSigner;
+import org.apache.flink.connector.prometheus.sink.PrometheusSink;
+import org.apache.flink.connector.prometheus.sink.PrometheusTimeSeries;
+import 
org.apache.flink.connector.prometheus.sink.PrometheusTimeSeriesLabelsAndMetricNameKeySelector;
+import 
org.apache.flink.connector.prometheus.sink.aws.AmazonManagedPrometheusWriteRequestSigner;
+import 
org.apache.flink.connector.prometheus.sink.errorhandling.OnErrorBehavior;
+import 
org.apache.flink.connector.prometheus.sink.errorhandling.SinkWriterErrorHandlingBehaviorConfiguration;
+import org.apache.flink.connector.prometheus.sink.http.RetryConfiguration;
+import org.apache.flink.connector.prometheus.sink.prometheus.Types;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Arrays;
+import java.util.function.Supplier;
+
+/** Test application testing the Prometheus sink connector. */
+public class DataStreamJob {
+private static final Logger LOGGER = 
LoggerFactory.getLogger(DataStreamJob.class);
+
+public static void main(String[] args) throws Exception {
+StreamExecutionEnvironment env;
+
+// Conditionally create a local execution environment with the Web UI enabled
+if (args.length > 0 && 
Arrays.stream(args).anyMatch("--webUI"::equalsIgnoreCase)) {
+Configuration conf = new Configuration();
+conf.set(
+
ConfigOptions.key("rest.flamegraph.enabled").booleanType().noDefaultValue(),
+true);
+env = 
StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);
+} else {
+env = StreamExecutionEnvironment.getExecutionEnvironment();
+}
+
+env.setParallelism(2);

Review Comment:
   Added a comment.
   The goal is to force parallel writes and demonstrate how 
`PrometheusTimeSeriesLabelsAndMetricNameKeySelector` partitions the data, 
preventing out-of-order writes, which would be rejected by Prometheus.
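
   A minimal sketch of that partitioning (the key selector's no-arg constructor 
is assumed; not code from the example job):

   ```java
   import org.apache.flink.connector.prometheus.sink.PrometheusTimeSeries;
   import org.apache.flink.connector.prometheus.sink.PrometheusTimeSeriesLabelsAndMetricNameKeySelector;
   import org.apache.flink.streaming.api.datastream.DataStream;

   final class OrderedWriteSketch {
       // Keying by (labels, metric name) pins all samples of one time series to a
       // single parallel sink subtask, keeping them ordered per series even with
       // parallelism > 1; Prometheus rejects out-of-order samples within a series.
       static DataStream<PrometheusTimeSeries> partitionByTimeSeries(
               DataStream<PrometheusTimeSeries> metrics) {
           return metrics.keyBy(new PrometheusTimeSeriesLabelsAndMetricNameKeySelector());
       }
   }
   ```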



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #22951:
URL: https://github.com/apache/flink/pull/22951#issuecomment-1939194309

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31664][table]Add ARRAY_INTERSECT function [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23171:
URL: https://github.com/apache/flink/pull/23171#issuecomment-1939193051

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-31663][table] Add-ARRAY_EXCEPT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23173:
URL: https://github.com/apache/flink/pull/23173#issuecomment-1939192362

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486481830


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Yes, although I use `sequence`, I actually test the `ARRAY_COMPARABLE` strategy 
here. If I don't use `sequence`, I cannot test an ArgumentTypeStrategy here. I saw 
other contributors using `sequence` to test their `ArgumentTypeStrategy` in this 
class.
   
   Because of https://github.com/apache/flink/pull/22951#discussion_r1484467843, I 
deleted the old test 
file ([flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategyTest.java](url)) 
and added these tests in `InputTypeStrategiesTest`.
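
   For reference, the wrapping being discussed looks like this (a sketch; class 
and constant names are taken from the diff above):

   ```java
   import org.apache.flink.table.types.inference.InputTypeStrategies;
   import org.apache.flink.table.types.inference.InputTypeStrategy;
   import org.apache.flink.table.types.inference.strategies.SpecificInputTypeStrategies;

   final class ArgumentStrategyTestSketch {
       // TestSpec drives an InputTypeStrategy, so the single ArgumentTypeStrategy
       // is lifted into a one-argument sequence for testing.
       static final InputTypeStrategy ARRAY_COMPARABLE_AS_INPUT =
               InputTypeStrategies.sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE);
   }
   ```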



##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Yes, although I use `sequence`, I actually test the `ARRAY_COMPARABLE` strategy 
here. If I don't use `sequence`, I cannot test an ArgumentTypeStrategy here. I saw 
other contributors using `sequence` to test their `ArgumentTypeStrategy` in this 
class.
   
   Because of https://github.com/apache/flink/pull/22951#discussion_r1484467843, I 
deleted the old test 
file ([flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategyTest.java](url)) 
and added these tests in `InputTypeStrategiesTest`.



-- 

Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486487968


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Because of https://github.com/apache/flink/pull/22951#discussion_r1484467843, I 
deleted the old test file and added these tests in `InputTypeStrategiesTest`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486487968


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Because of https://github.com/apache/flink/pull/22951#discussion_r1484467843, I 
deleted the old test file and added these tests in `InputTypeStrategiesTest`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486481830


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Yes, although I use `sequence`, I actually test the `ARRAY_COMPARABLE` strategy 
here. If I don't use `sequence`, I cannot test an ArgumentTypeStrategy here. I saw 
other contributors using `sequence` to test their `ArgumentTypeStrategy` in this 
class.
   
   The test is the same as
   
[flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategyTest.java](url)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486481830


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Yes, although I use `sequence`, I actually test the `ARRAY_COMPARABLE` strategy 
here. If I don't use `sequence`, I cannot test an ArgumentTypeStrategy here. I saw 
other contributors using `sequence` to test their `ArgumentTypeStrategy` in this 
class.
   
   The test is the same as
   
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategyTest.java



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486481830


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/InputTypeStrategiesTest.java:
##
@@ -640,6 +640,28 @@ ANY, explicit(DataTypes.INT())
 .expectArgumentTypes(
 
DataTypes.ARRAY(DataTypes.INT().notNull()).notNull(),
 DataTypes.INT()),
+
TestSpec.forStrategy(sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.expectSignature("f(<ARRAY<COMPARABLE>>)")
+
.calledWithArgumentTypes(DataTypes.ARRAY(DataTypes.ROW()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if input argument type is not 
ARRAY",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(DataTypes.INT())
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),
+TestSpec.forStrategy(
+"Strategy fails if the number of input 
arguments are not one",
+
sequence(SpecificInputTypeStrategies.ARRAY_COMPARABLE))
+.calledWithArgumentTypes(
+DataTypes.ARRAY(DataTypes.INT()),
+DataTypes.ARRAY(DataTypes.STRING()))
+.expectErrorMessage(
+"Invalid input arguments. Expected signatures 
are:\n"
++ "f(>)"),

Review Comment:
   Yes, although I use `sequence`, I actually test the `ARRAY_COMPARABLE` strategy 
here. If I don't use `sequence`, I cannot test an ArgumentTypeStrategy here. I saw 
other contributors using `sequence` to test their `ArgumentTypeStrategy` in this 
class.
   
   The test is similar to 
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategyTest.java



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] fix: flink account cannot get resource "services" in API group [flink-kubernetes-operator]

2024-02-12 Thread via GitHub


mxm commented on PR #596:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/596#issuecomment-1939122448

   I would add it on this page: 
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/overview/
 Actually, the examples linked above are already linked at the bottom of the 
page. Maybe we can add a section which expands a little bit on the concept.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486472909


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +231,22 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+public static final BuiltInFunctionDefinition ARRAY_SORT =
+BuiltInFunctionDefinition.newBuilder()
+.name("ARRAY_SORT")
+.kind(SCALAR)
+.inputTypeStrategy(
+or(
+sequence(ARRAY_COMPARABLE),
+sequence(
+ARRAY_COMPARABLE,
+InputTypeStrategies.explicit(

Review Comment:
   If we do not use `InputTypeStrategies.explicit` here, it will conflict with 
   `import static 
org.apache.flink.table.types.inference.TypeStrategies.explicit;`



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/SpecificInputTypeStrategies.java:
##
@@ -89,6 +90,10 @@ public static InputTypeStrategy windowTimeIndicator() {
 public static final ArgumentTypeStrategy ARRAY_ELEMENT_ARG =
 new ArrayElementArgumentTypeStrategy();
 
+/** Argument type representing the array is comparable. */
+public static final ArgumentTypeStrategy ARRAY_COMPARABLE =

Review Comment:
   ok



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementArgumentTypeStrategy.java:
##
@@ -34,40 +32,33 @@
 import 
org.apache.flink.table.types.logical.StructuredType.StructuredComparison;
 import org.apache.flink.util.Preconditions;
 
-import java.util.Collections;
 import java.util.List;
 import java.util.Optional;
 
 /**
- * An {@link InputTypeStrategy} that checks if the input argument is an ARRAY 
type and check whether
- * its' elements are comparable.
+ * An {@link ArgumentTypeStrategy} that checks if the input argument is an 
ARRAY type and checks
+ * whether its elements are comparable.
  *
  * It requires one argument.
  *
  * For the rules which types are comparable with which types see {@link
  * #areComparable(LogicalType, LogicalType)}.
  */
 @Internal
-public final class ArrayComparableElementTypeStrategy implements 
InputTypeStrategy {
+public final class ArrayComparableElementArgumentTypeStrategy implements 
ArgumentTypeStrategy {
+
 private final StructuredComparison requiredComparison;
-private final ConstantArgumentCount argumentCount;
 
-public ArrayComparableElementTypeStrategy(StructuredComparison 
requiredComparison) {
+public ArrayComparableElementArgumentTypeStrategy(StructuredComparison 
requiredComparison) {
 Preconditions.checkArgument(requiredComparison != 
StructuredComparison.NONE);
 this.requiredComparison = requiredComparison;
-this.argumentCount = ConstantArgumentCount.of(1);
-}
-
-@Override
-public ArgumentCount getArgumentCount() {
-return argumentCount;
 }
 
 @Override
-public Optional<List<DataType>> inferInputTypes(
-CallContext callContext, boolean throwOnFailure) {
+public Optional<DataType> inferArgumentType(
+CallContext callContext, int argumentPos, boolean throwOnFailure) {
 final List<DataType> argumentDataTypes = 
callContext.getArgumentDataTypes();
-final DataType argumentType = argumentDataTypes.get(0);
+final DataType argumentType = argumentDataTypes.get(argumentPos);
 if (!argumentType.getLogicalType().is(LogicalTypeRoot.ARRAY)) {
 return callContext.fail(throwOnFailure, "All arguments requires to 
be an ARRAY type");

Review Comment:
   ok



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486465811


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/functions/scalar/ArraySortFunction.java:
##
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.scalar;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
+import org.apache.flink.table.functions.FunctionContext;
+import org.apache.flink.table.functions.SpecializedFunction;
+import org.apache.flink.table.types.CollectionDataType;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import javax.annotation.Nullable;
+
+import java.lang.invoke.MethodHandle;
+import java.util.Arrays;
+import java.util.Comparator;
+
+import static org.apache.flink.table.api.Expressions.$;
+
+/** Implementation of ARRAY_SORT function. */
+@Internal
+public class ArraySortFunction extends BuiltInScalarFunction {
+
+private final ArrayData.ElementGetter elementGetter;
+private final SpecializedFunction.ExpressionEvaluator greaterEvaluator;
+
+private transient MethodHandle greaterHandle;
+
+public ArraySortFunction(SpecializedFunction.SpecializedContext context) {
+super(BuiltInFunctionDefinitions.ARRAY_SORT, context);
+final DataType elementDataType =
+((CollectionDataType) 
context.getCallContext().getArgumentDataTypes().get(0))
+.getElementDataType()
+.toInternal();
+elementGetter =
+
ArrayData.createElementGetter(elementDataType.toInternal().getLogicalType());
+greaterEvaluator =
+context.createEvaluator(
+$("element1").isGreater($("element2")),
+DataTypes.BOOLEAN(),
+DataTypes.FIELD("element1", 
elementDataType.notNull().toInternal()),
+DataTypes.FIELD("element2", 
elementDataType.notNull().toInternal()));
+}
+
+@Override
+public void open(FunctionContext context) throws Exception {
+greaterHandle = greaterEvaluator.open(context);
+}
+
+public @Nullable ArrayData eval(ArrayData array, Boolean... 
ascendingOrder) {
+try {
+if (array == null) {
+return null;
+}
+if (array.size() == 0) {
+return array;
+}
+boolean isAscending = ascendingOrder.length > 0 ? 
ascendingOrder[0] : true;
+Object[] elements = new Object[array.size()];
+for (int i = 0; i < array.size(); i++) {
+elements[i] = elementGetter.getElementOrNull(array, i);
+}
+Comparator<Object> ascendingComparator = new 
ArraySortComparator(isAscending);
+Arrays.sort(elements, ascendingComparator);
+return new GenericArrayData(elements);
+} catch (Throwable t) {
+throw new FlinkRuntimeException(t);
+}
+}
+
+private class ArraySortComparator implements Comparator<Object> {
+private final boolean isAscending;
+
+public ArraySortComparator(boolean isAscending) {
+this.isAscending = isAscending;
+}
+
+@Override
+public int compare(Object o1, Object o2) {
+Comparator<Object> baseComparator =
+(a, b) -> {
+try {
+if (a == null || b == null) {
+return 0;
+}
+boolean isGreater = (boolean) 
greaterHandle.invoke(a, b);
+return isGreater ? 1 : -1;
+} catch (Throwable e) {
+throw new RuntimeException(e);
+}
+};
+Comparator<Object> comparator =
+isAscending
+

Re: [PR] [FLINK-34431] Move static BlobWriter methods to separate util [flink]

2024-02-12 Thread via GitHub


flinkbot commented on PR #24306:
URL: https://github.com/apache/flink/pull/24306#issuecomment-1939080766

   
   ## CI report:
   
   * 18d6fed5bc96c4a763e89e65d4bfbb6c90c45714 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34422) BatchTestBase doesn't actually use MiniClusterExtension

2024-02-12 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816573#comment-17816573
 ] 

Chesnay Schepler edited comment on FLINK-34422 at 2/12/24 4:22 PM:
---

master:
65727fb943807f1ff5345419ce389c5734df0cb4
4c4643c3251c284260c96a2110f4b78c8a369723
1.19:
994850d33a32f1ac27cee755f976b86208f911e3
3fcbe3df48904d10ae29a35800474b18af9e7172
1.18:
d69393678efe7e26bd5168407a1c862cd4a0e148


was (Author: zentol):
master:
65727fb943807f1ff5345419ce389c5734df0cb4
4c4643c3251c284260c96a2110f4b78c8a369723
1.19:
1.18:
d69393678efe7e26bd5168407a1c862cd4a0e148

> BatchTestBase doesn't actually use MiniClusterExtension
> ---
>
> Key: FLINK-34422
> URL: https://issues.apache.org/jira/browse/FLINK-34422
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Test Infrastructure
>Affects Versions: 1.18.1
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2, 1.20.0
>
>
> BatchTestBase sets up a table environment in instance fields, which runs 
> before the BeforeEachCallback from the MiniClusterExtension has time to run.
> As a result the table environment internally creates a local stream 
> environment, due to which _all_ test extending the BatchTestBase are spawning 
> separate mini clusters for every single job.
> I believe this is one reason why we had trouble with enabling fork-reuse in 
> the planner module
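
A minimal sketch of the ordering problem (hypothetical JUnit 5 test, not Flink code: instance fields are initialized when the test instance is constructed, before any BeforeEachCallback runs):

{code:java}
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.ExtensionContext;

// Stand-in for MiniClusterExtension.
class ClusterExtensionSketch implements BeforeEachCallback {
    @Override
    public void beforeEach(ExtensionContext context) {
        System.out.println("2: extension beforeEach (shared cluster becomes available)");
    }
}

@ExtendWith(ClusterExtensionSketch.class)
class BatchTestBaseSketch {
    // Runs FIRST, at test-instance construction: a table environment created
    // here cannot see the shared cluster and falls back to a local one.
    final String tableEnv = init("1: instance field initializer");

    @Test
    void someBatchTest() {
        System.out.println("3: test body");
    }

    private static String init(String msg) {
        System.out.println(msg);
        return msg;
    }
}
{code}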



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34422) BatchTestBase doesn't actually use MiniClusterExtension

2024-02-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-34422.

Resolution: Fixed

> BatchTestBase doesn't actually use MiniClusterExtension
> ---
>
> Key: FLINK-34422
> URL: https://issues.apache.org/jira/browse/FLINK-34422
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Test Infrastructure
>Affects Versions: 1.18.1
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2, 1.20.0
>
>
> BatchTestBase sets up a table environment in instance fields, which runs 
> before the BeforeEachCallback from the MiniClusterExtension has time to run.
> As a result the table environment internally creates a local stream 
> environment, due to which _all_ test extending the BatchTestBase are spawning 
> separate mini clusters for every single job.
> I believe this is one reason why we had trouble with enabling fork-reuse in 
> the planner module



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34431) Move static BlobWriter methods to separate util

2024-02-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34431:
---
Labels: pull-request-available  (was: )

> Move static BlobWriter methods to separate util
> ---
>
> Key: FLINK-34431
> URL: https://issues.apache.org/jira/browse/FLINK-34431
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Coordination
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The BlobWriter interface contains several static methods, some being used, 
> others being de-facto internal methods.
> We should move these into a dedicated BlobWriterUtils class so we can 
> properly deal with method visibility.
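
As a sketch of the refactoring pattern described above (class and method names are illustrative, not the actual Flink code): static methods declared on an interface are public by default, so moving them into a dedicated utility class allows marking the de-facto internal ones private.

{code:java}
// Illustrative only: the shape of the move, not the real BlobWriter API.
final class BlobWriterUtils {

    private BlobWriterUtils() {
        // utility class, no instantiation
    }

    /** Formerly a static interface method that is genuinely used by callers. */
    static byte[] serializeValue(Object value) {
        return serializeInternal(value);
    }

    /** Formerly "public" by necessity on the interface; now properly private. */
    private static byte[] serializeInternal(Object value) {
        return String.valueOf(value)
                .getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
{code}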





Re: [PR] [1.19][FLINK-34422][test] BatchTestBase uses MiniClusterExtension [flink]

2024-02-12 Thread via GitHub


zentol merged PR #24301:
URL: https://github.com/apache/flink/pull/24301





[PR] [FLINK-34431] Move static BlobWriter methods to separate util [flink]

2024-02-12 Thread via GitHub


zentol opened a new pull request, #24306:
URL: https://github.com/apache/flink/pull/24306

   I intend to refactor these methods soon to also support transient blobs, and 
not being able to designate methods as private isn't great.





[jira] [Created] (FLINK-34431) Move static BlobWriter methods to separate util

2024-02-12 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-34431:


 Summary: Move static BlobWriter methods to separate util
 Key: FLINK-34431
 URL: https://issues.apache.org/jira/browse/FLINK-34431
 Project: Flink
  Issue Type: Technical Debt
  Components: Runtime / Coordination
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.20.0


The BlobWriter interface contains several static methods, some being used, 
others being de-facto internal methods.
We should move these into a dedicated BlobWriterUtils class so we can properly 
deal with method visibility.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31664][table]Add ARRAY_INTERSECT function [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23171:
URL: https://github.com/apache/flink/pull/23171#issuecomment-1938995805

   @flinkbot run azure
   





Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486372018


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+public static final BuiltInFunctionDefinition ARRAY_SORT =
+BuiltInFunctionDefinition.newBuilder()
+.name("ARRAY_SORT")
+.kind(SCALAR)
+.inputTypeStrategy(
+or(
+sequence(new ArrayComparableElementArgumentTypeStrategy()),
+sequence(
+new ArrayComparableElementArgumentTypeStrategy(),
+logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   Ok, thank you. Actually, I did this function last summer and followed the Jira 
ticket's https://issues.apache.org/jira/browse/FLINK-26948 description.
   And I will check other RDBMSs today.
   






Re: [PR] [FLINK-26948][table] Add-ARRAY_SORT-function. [flink]

2024-02-12 Thread via GitHub


dawidwys commented on code in PR #22951:
URL: https://github.com/apache/flink/pull/22951#discussion_r1486362717


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/functions/BuiltInFunctionDefinitions.java:
##
@@ -231,6 +232,21 @@ ANY, and(logical(LogicalTypeRoot.BOOLEAN), LITERAL)
 
"org.apache.flink.table.runtime.functions.scalar.ArrayContainsFunction")
 .build();
 
+public static final BuiltInFunctionDefinition ARRAY_SORT =
+BuiltInFunctionDefinition.newBuilder()
+.name("ARRAY_SORT")
+.kind(SCALAR)
+.inputTypeStrategy(
+or(
+sequence(new ArrayComparableElementArgumentTypeStrategy()),
+sequence(
+new ArrayComparableElementArgumentTypeStrategy(),
+logical(LogicalTypeRoot.BOOLEAN

Review Comment:
   If you're looking for ways to verify against other databases, that's a good start: 
https://sqlfiddle.com/






Re: [PR] [FLINK-34000] Implement restore tests for IncrementalGroupAgg node [flink]

2024-02-12 Thread via GitHub


dawidwys commented on PR #24154:
URL: https://github.com/apache/flink/pull/24154#issuecomment-1938921254

   @bvarghese1 Do you mind rebasing on top of the current master? I believe 
failures should be fixed now: https://issues.apache.org/jira/browse/FLINK-34420





[jira] [Commented] (FLINK-18356) flink-table-planner Exit code 137 returned from process

2024-02-12 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816643#comment-17816643
 ] 

Martijn Visser commented on FLINK-18356:


Let's try it at least

> flink-table-planner Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 
> 1.19.0
>Reporter: Piotr Nowojski
>Assignee: Yunhong Zheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.1, 1.17.3
>
> Attachments: 1234.jpg, app-profiling_4.gif, 
> image-2023-01-11-22-21-57-784.png, image-2023-01-11-22-22-32-124.png, 
> image-2023-02-16-20-18-09-431.png, image-2023-07-11-19-28-52-851.png, 
> image-2023-07-11-19-35-54-530.png, image-2023-07-11-19-41-18-626.png, 
> image-2023-07-11-19-41-37-105.png
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3





Re: [PR] [DEBUG][FLINK-18356] Re-enable fork reuse [flink]

2024-02-12 Thread via GitHub


flinkbot commented on PR #24305:
URL: https://github.com/apache/flink/pull/24305#issuecomment-1938886708

   
   ## CI report:
   
   * 9d35ba1fcaeab907b42c8f4b4d34d6b54f504a1e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-34192] Update to be compatible with updated SinkV2 interfaces [flink-connector-kafka]

2024-02-12 Thread via GitHub


MartijnVisser commented on PR #84:
URL: 
https://github.com/apache/flink-connector-kafka/pull/84#issuecomment-1938866968

   @mas-chen Any more considerations from your end? Otherwise I'm inclined to 
approve and merge this. 





Re: [PR] [FLINK-31664][table]Add ARRAY_INTERSECT function [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23171:
URL: https://github.com/apache/flink/pull/23171#issuecomment-1938842552

   @flinkbot run azure





Re: [PR] [FLINK-31663][table] Add-ARRAY_EXCEPT-function. [flink]

2024-02-12 Thread via GitHub


hanyuzheng7 commented on PR #23173:
URL: https://github.com/apache/flink/pull/23173#issuecomment-1938841626

   @flinkbot run azure





Re: [PR] fix: flink account cannot get resource "services" in API group [flink-kubernetes-operator]

2024-02-12 Thread via GitHub


prakash-42 commented on PR #596:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/596#issuecomment-1938838002

   Thanks @mxm  ! I have a few questions before I can start working on the PR, 
hopefully this is the right place to ask.
   
   1. Where should I put this guide (new document) in the project structure? I 
suggest that we create a folder by the name "examples" (in the 
[docs](https://github.com/apache/flink-kubernetes-operator/tree/main/docs/content/docs)
 folder), and create this single guide in there. We can later add more examples 
to the folder. What do you think?
   
   2. I will copy a lot of content from the readme of the 
[flink-sql-runner-example](https://github.com/apache/flink-kubernetes-operator/tree/main/examples/flink-sql-runner-example),
 with maybe a few additions. Is there a way to directly add that file as a page 
instead of duplicating content?





[jira] [Comment Edited] (FLINK-34111) Add JSON_QUOTE and JSON_UNQUOTE function

2024-02-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816622#comment-17816622
 ] 

Sergey Nuyanzin edited comment on FLINK-34111 at 2/12/24 2:38 PM:
--

A couple of findings
1. MySQL seems not to follow the JSON spec [1], [2] regarding escaping 
{quote}
escape
'"'
'\'
'/'
'b'
'f'
'n'
'r'
't'
'u' hex hex hex hex
{quote}
Probably it's better to follow the JSON spec rules
2. I would expect {{INPUT_STRING}} as output for a query like 
{code:sql}
SELECT json_unquote(json_quote());
{code}
by the way, MySQL seems to follow this rule (it is not stated explicitly in the 
docs, however tests show that it does)


[1] https://www.json.org/json-en.html
[2]  https://www.crockford.com/mckeeman.html



was (Author: sergey nuyanzin):
A couple of findings
1. MySQL seems not to follow the JSON spec [1] regarding escaping 
{quote}
escape
'"'
'\'
'/'
'b'
'f'
'n'
'r'
't'
'u' hex hex hex hex
{quote}
Probably it's better to follow the JSON spec rules
2. I would expect {{INPUT_STRING}} as output for a query like 
{code:sql}
SELECT json_unquote(json_quote());
{code}
by the way, MySQL seems to follow this rule (it is not stated explicitly in the 
docs, however tests show that it does)


[1] https://www.crockford.com/mckeeman.html


> Add JSON_QUOTE and JSON_UNQUOTE function
> 
>
> Key: FLINK-34111
> URL: https://issues.apache.org/jira/browse/FLINK-34111
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Escapes or unescapes a JSON string removing traces of offending characters 
> that could prevent parsing.
> Proposal:
> - JSON_QUOTE: Quotes a string by wrapping it with double quote characters and 
> escaping interior quote and other characters, then returning the result as a 
> utf8mb4 string. Returns NULL if the argument is NULL.
> - JSON_UNQUOTE: Unquotes value and returns the result as a string. Returns 
> NULL if the argument is NULL. An error occurs if the value starts and ends 
> with double quotes but is not a valid JSON string literal.
> The following characters are reserved in JSON and must be properly escaped to 
> be used in strings:
> Backspace is replaced with \b
> Form feed is replaced with \f
> Newline is replaced with \n
> Carriage return is replaced with \r
> Tab is replaced with \t
> Double quote is replaced with \"
> Backslash is replaced with \\
> This function exists in MySQL: 
> - 
> https://dev.mysql.com/doc/refman/8.0/en/json-creation-functions.html#function_json-quote
> - 
> https://dev.mysql.com/doc/refman/8.0/en/json-modification-functions.html#function_json-unquote
> It's still open in Calcite CALCITE-3130
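
For concreteness, a minimal sketch of the escaping rules listed above (illustration only, not the proposed Flink implementation):

{code:java}
// Applies the JSON_QUOTE escaping rules from the description above.
public class JsonQuoteSketch {

    static String jsonQuote(String s) {
        if (s == null) {
            return null; // JSON_QUOTE returns NULL for a NULL argument
        }
        StringBuilder sb = new StringBuilder("\"");
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '\b': sb.append("\\b"); break;  // backspace
                case '\f': sb.append("\\f"); break;  // form feed
                case '\n': sb.append("\\n"); break;  // newline
                case '\r': sb.append("\\r"); break;  // carriage return
                case '\t': sb.append("\\t"); break;  // tab
                case '"':  sb.append("\\\""); break; // double quote
                case '\\': sb.append("\\\\"); break; // backslash
                default:   sb.append(c);
            }
        }
        return sb.append('"').toString();
    }

    public static void main(String[] args) {
        System.out.println(jsonQuote("a\"b\\c")); // prints "a\"b\\c"
        System.out.println(jsonQuote(null));      // prints null
    }
}
{code}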





[jira] [Commented] (FLINK-34111) Add JSON_QUOTE and JSON_UNQUOTE function

2024-02-12 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816622#comment-17816622
 ] 

Sergey Nuyanzin commented on FLINK-34111:
-

A couple of findings
1. MySQL seems not to follow the JSON spec [1] regarding escaping 
{quote}
escape
'"'
'\'
'/'
'b'
'f'
'n'
'r'
't'
'u' hex hex hex hex
{quote}
Probably it's better to follow the JSON spec rules
2. I would expect {{INPUT_STRING}} as output for a query like 
{code:sql}
SELECT json_unquote(json_quote());
{code}
by the way, MySQL seems to follow this rule (it is not stated explicitly in the 
docs, however tests show that it does)


[1] https://www.crockford.com/mckeeman.html


> Add JSON_QUOTE and JSON_UNQUOTE function
> 
>
> Key: FLINK-34111
> URL: https://issues.apache.org/jira/browse/FLINK-34111
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Escapes or unescapes a JSON string removing traces of offending characters 
> that could prevent parsing.
> Proposal:
> - JSON_QUOTE: Quotes a string by wrapping it with double quote characters and 
> escaping interior quote and other characters, then returning the result as a 
> utf8mb4 string. Returns NULL if the argument is NULL.
> - JSON_UNQUOTE: Unquotes value and returns the result as a string. Returns 
> NULL if the argument is NULL. An error occurs if the value starts and ends 
> with double quotes but is not a valid JSON string literal.
> The following characters are reserved in JSON and must be properly escaped to 
> be used in strings:
> Backspace is replaced with \b
> Form feed is replaced with \f
> Newline is replaced with \n
> Carriage return is replaced with \r
> Tab is replaced with \t
> Double quote is replaced with \"
> Backslash is replaced with \\
> This function exists in MySQL: 
> - 
> https://dev.mysql.com/doc/refman/8.0/en/json-creation-functions.html#function_json-quote
> - 
> https://dev.mysql.com/doc/refman/8.0/en/json-modification-functions.html#function_json-unquote
> It's still open in Calcite CALCITE-3130





Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-02-12 Thread via GitHub


snuyanzin commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-1938742257

   > and json_unquote("\"key\"") will produce the same ("\"key\"") since the 
input ("\"key\"") is not a valid json.
   
   can you clarify why the input here is not valid JSON?
   
   Based on [1]
   valid  json is an element
   > json
   >element
   
   which is a value separated by whitespaces
   > element
   ws value ws
   
   and value itself could be a string
   >  value
  object
  array
  string
  number
  "true"
  "false"
  "null"
   
   which is characters
   >string
   '"' characters '"'
   
   >characters
   ""
   character characters
   and characters could contain escape characters
   
   >character
   '0020' . '10FFFF' - '"' - '\'
   '\' escape
   
   like that 
   >escape
   '"'
   '\'
   '/'
   'b'
   'f'
   'n'
   'r'
   't'
   'u' hex hex hex hex
   
   so I didn't get why `"\"key\""` should be considered invalid JSON...
   Can you elaborate here please?
   
   By the way I checked these queries against MySQL and it works as expected [2]
   ```SQL
   select json_unquote(json_quote("\"key\""));
   select json_unquote(json_quote("\"key\": \"value\""));
   ```
   it returns
   ```
   "key"
   "key": "value"
   ```
   
   [1] https://www.json.org/json-en.html
   [2] https://www.db-fiddle.com/f/fuuT4xzfFEbY772eNkLkmp/0





Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-02-12 Thread via GitHub


jeyhunkarimov commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-1938706008

   > > Is it mainly because of the definition of JSON_QUOTE, which does not 
require its input to be valid JSON, and JSON_UNQUOTE, which requires its input to 
be valid JSON?
   > 
   > I'm not sure that this is the only reason
   > 
   > there is a simpler case: for instance `"key"` is valid JSON and 
`json_quote` also produces valid JSON, `"\"key\""`, however the combination 
also seems not to work
   > 
   > ```sql
   > SELECT json_unquote(json_quote('"key"'));
   > ```
   > 
   > and continues producing `"\"key\""`
   
   Let me paste the definitions here:
   
   `json_quote`: Quotes a string as a JSON value by wrapping it with double 
quote characters, escaping interior quote and other characters, and returning 
the result as a utf8mb4 string. If the argument is NULL, the function returns 
NULL.
   
   `json_unquote`: Unquotes a JSON value and returns the result as a utf8mb4 
string. If the argument is NULL, returns NULL. If the value starts and ends 
with double quotes but is not a valid JSON string literal, the value is passed 
through unmodified.
   
   So, from this definition, `json_quote` does not enforce its input to be 
valid JSON, but `json_unquote` does. 
   
   So, in your example, `json_quote('"key"')` produces `"\"key\""` and 
`json_unquote("\"key\"")` will produce the same `("\"key\"")` since the input 
`("\"key\"")` is not valid JSON. 
   
   Instead this works: `testSqlApi("JSON_UNQUOTE(JSON_QUOTE('[1,2,3]'))", 
"[1,2,3]")` since the input of `json_unquote` is valid JSON.
   
   These rules I derived from the definition I pasted above. Maybe I am missing 
something. 
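
For concreteness, a minimal sketch of the unquoting rule under the MySQL-style definitions quoted above (illustration only, not the Flink implementation; `\uXXXX` escapes are omitted). Note that under the JSON escape grammar, `"\"key\""` does parse as a string literal and unquotes back to `"key"`, matching the MySQL output shown elsewhere in this thread:

```java
// Sketch of JSON_UNQUOTE per the definition above: a value wrapped in
// double quotes that is not a valid JSON string literal passes through
// unmodified. Illustration only; \uXXXX escapes omitted for brevity.
public class JsonUnquoteSketch {

    static String jsonUnquote(String s) {
        if (s == null) {
            return null;
        }
        if (s.length() < 2 || s.charAt(0) != '"' || s.charAt(s.length() - 1) != '"') {
            return s; // not wrapped in double quotes: nothing to unquote
        }
        StringBuilder out = new StringBuilder();
        for (int i = 1; i < s.length() - 1; i++) {
            char c = s.charAt(i);
            if (c == '"') {
                return s; // unescaped interior quote: not a valid literal
            }
            if (c == '\\') {
                if (i + 1 > s.length() - 2) {
                    return s; // dangling backslash: not a valid literal
                }
                char e = s.charAt(++i);
                switch (e) {
                    case 'b':  out.append('\b'); break;
                    case 'f':  out.append('\f'); break;
                    case 'n':  out.append('\n'); break;
                    case 'r':  out.append('\r'); break;
                    case 't':  out.append('\t'); break;
                    case '"':  out.append('"');  break;
                    case '\\': out.append('\\'); break;
                    case '/':  out.append('/');  break;
                    default:   return s; // unknown escape: not a valid literal
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(jsonUnquote("\"key\""));         // key
        System.out.println(jsonUnquote("\"\\\"key\\\"\"")); // "key"
    }
}
```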
   





Re: [PR] [FLINK-34418][ci] Adds steps to monitor used disk space [flink]

2024-02-12 Thread via GitHub


flinkbot commented on PR #24304:
URL: https://github.com/apache/flink/pull/24304#issuecomment-1938695518

   
   ## CI report:
   
   * 96ba7e66fb90aa9bf61e20455d479f278129a394 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-34418) Disk space issues for Docker-ized GitHub Action jobs

2024-02-12 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816602#comment-17816602
 ] 

Matthias Pohl commented on FLINK-34418:
---

https://github.com/apache/flink/actions/runs/7869012663/job/21467582110

> Disk space issues for Docker-ized GitHub Action jobs
> 
>
> Key: FLINK-34418
> URL: https://issues.apache.org/jira/browse/FLINK-34418
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
>
> [https://github.com/apache/flink/actions/runs/7838691874/job/21390739806#step:10:27746]
> {code:java}
> [...]
> Feb 09 03:00:13 Caused by: java.io.IOException: No space left on device
> 27608Feb 09 03:00:13  at java.io.FileOutputStream.writeBytes(Native Method)
> 27609Feb 09 03:00:13  at 
> java.io.FileOutputStream.write(FileOutputStream.java:326)
> 27610Feb 09 03:00:13  at 
> org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:250)
> 27611Feb 09 03:00:13  ... 39 more
> [...] {code}





[jira] [Commented] (FLINK-34418) Disk space issues for Docker-ized GitHub Action jobs

2024-02-12 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816603#comment-17816603
 ] 

Matthias Pohl commented on FLINK-34418:
---

https://github.com/apache/flink/actions/runs/7870763675

> Disk space issues for Docker-ized GitHub Action jobs
> 
>
> Key: FLINK-34418
> URL: https://issues.apache.org/jira/browse/FLINK-34418
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
>
> [https://github.com/apache/flink/actions/runs/7838691874/job/21390739806#step:10:27746]
> {code:java}
> [...]
> Feb 09 03:00:13 Caused by: java.io.IOException: No space left on device
> 27608Feb 09 03:00:13  at java.io.FileOutputStream.writeBytes(Native Method)
> 27609Feb 09 03:00:13  at 
> java.io.FileOutputStream.write(FileOutputStream.java:326)
> 27610Feb 09 03:00:13  at 
> org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:250)
> 27611Feb 09 03:00:13  ... 39 more
> [...] {code}





[jira] [Closed] (FLINK-34344) Wrong JobID in CheckpointStatsTracker

2024-02-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-34344.

Resolution: Fixed

> Wrong JobID in CheckpointStatsTracker
> -
>
> Key: FLINK-34344
> URL: https://issues.apache.org/jira/browse/FLINK-34344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2
>
>
> The job id is generated randomly:
> ```
> public CheckpointStatsTracker(int numRememberedCheckpoints, MetricGroup metricGroup) {
> this(numRememberedCheckpoints, metricGroup, new JobID(), Integer.MAX_VALUE);
> }
> ```
> This affects how it is logged (or reported elsewhere).
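
For illustration, a hedged sketch of the pitfall and a possible direction (the class shape is hypothetical; only org.apache.flink.api.common.JobID is the real Flink class):

{code:java}
import org.apache.flink.api.common.JobID;

// Illustration only, not the actual CheckpointStatsTracker code.
class StatsTrackerSketch {
    private final JobID jobId;

    StatsTrackerSketch() {
        // Pitfall: new JobID() is random, so anything logged or
        // reported under it references a job that was never submitted.
        this(new JobID());
    }

    StatsTrackerSketch(JobID actualJobId) {
        // Preferred: the caller threads the real job's ID through.
        this.jobId = actualJobId;
    }

    JobID jobId() {
        return jobId;
    }
}
{code}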





Re: [PR] [FLINK-34111][table] Add JSON_QUOTE and JSON_UNQUOTE function [flink]

2024-02-12 Thread via GitHub


snuyanzin commented on PR #24156:
URL: https://github.com/apache/flink/pull/24156#issuecomment-1938687661

   >Is it mainly because of the definition of JSON_QUOTE, which does not require 
its input to be valid JSON, and JSON_UNQUOTE, which requires its input to be 
valid JSON?
   
   I'm not sure that this is the only reason
   
   there is a simpler case: for instance `"key"` is valid JSON and 
`json_quote` also produces valid JSON, `"\"key\""`
   however the combination also seems not to work
   ```sql
   SELECT json_unquote(json_quote('"key"'));
   ```





[jira] [Updated] (FLINK-34344) Wrong JobID in CheckpointStatsTracker

2024-02-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-34344:
-
Fix Version/s: (was: 1.20.0)

> Wrong JobID in CheckpointStatsTracker
> -
>
> Key: FLINK-34344
> URL: https://issues.apache.org/jira/browse/FLINK-34344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2
>
>
> The job id is generated randomly:
> ```
> public CheckpointStatsTracker(int numRememberedCheckpoints, MetricGroup metricGroup) {
> this(numRememberedCheckpoints, metricGroup, new JobID(), Integer.MAX_VALUE);
> }
> ```
> This affects how it is logged (or reported elsewhere).





[jira] [Updated] (FLINK-34344) Wrong JobID in CheckpointStatsTracker

2024-02-12 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-34344:
-
Priority: Minor  (was: Major)

> Wrong JobID in CheckpointStatsTracker
> -
>
> Key: FLINK-34344
> URL: https://issues.apache.org/jira/browse/FLINK-34344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2
>
>
> The job id is generated randomly:
> ```
> public CheckpointStatsTracker(int numRememberedCheckpoints, MetricGroup metricGroup) {
> this(numRememberedCheckpoints, metricGroup, new JobID(), Integer.MAX_VALUE);
> }
> ```
> This affects how it is logged (or reported elsewhere).





[jira] [Commented] (FLINK-34344) Wrong JobID in CheckpointStatsTracker

2024-02-12 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816601#comment-17816601
 ] 

Chesnay Schepler commented on FLINK-34344:
--

master: c84f42c1a7e752eaf8b9c3beb23fb9b01d39443d
1.19: 37756561d99ff73ba8cbf445c57f57fe11250867
1.18: 33fb37aac5fbc709a62d35445879c75a6ba48086

> Wrong JobID in CheckpointStatsTracker
> -
>
> Key: FLINK-34344
> URL: https://issues.apache.org/jira/browse/FLINK-34344
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0, 1.18.2
>
>
> The job id is generated randomly:
> ```
> public CheckpointStatsTracker(int numRememberedCheckpoints, MetricGroup metricGroup) {
> this(numRememberedCheckpoints, metricGroup, new JobID(), Integer.MAX_VALUE);
> }
> ```
> This affects how it is logged (or reported elsewhere).





[PR] [FLINK-34418][ci] Adds steps to monitor used disk space [flink]

2024-02-12 Thread via GitHub


XComp opened a new pull request, #24304:
URL: https://github.com/apache/flink/pull/24304

   ## What is the purpose of the change
   
   * Adds disk space monitoring steps after test run to verify disk usage
   
   ## Brief change log
   
   * Adds steps for monitoring disk usage
   
   ## Verifying this change
   
   -
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable





Re: [PR] [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode [flink]

2024-02-12 Thread via GitHub


ferenc-csaky commented on code in PR #24303:
URL: https://github.com/apache/flink/pull/24303#discussion_r1486182159


##
flink-clients/src/main/java/org/apache/flink/client/cli/ArtifactFetchOptionsInternal.java:
##
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.client.cli;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+
+import java.util.List;
+
+/** Artifact Fetch options. */
+@Internal
+public class ArtifactFetchOptionsInternal {
+
+public static final ConfigOption> COMPLETE_LIST =
+ConfigOptions.key("$internal.user.artifacts.complete-list")

Review Comment:
   This is currently unused. The current implementation modifies the passed 
`user.artifacts.artifact-list` config option, which was more straightforward 
implementation-wise but maybe confusing/magical for users, so it can be changed 
if the current behavior would be a no-go. Otherwise, this option can be removed.
   
   Personally, I do not think modifying the given config option is risky; IMO 
it can be justified to show the final deployed state in the config.
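
To make the two alternatives concrete, a toy sketch (the option keys are the ones discussed in this PR; the `Configuration` stand-in and everything else is illustrative only):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for Flink's Configuration, just to contrast the two styles.
class ConfigSketch {
    private final Map<String, List<String>> options = new HashMap<>();

    void set(String key, List<String> value) { options.put(key, value); }

    public static void main(String[] args) {
        ConfigSketch config = new ConfigSketch();
        config.set("user.artifacts.artifact-list",
                Collections.singletonList("local:///opt/flink/artifacts/udf.jar"));

        List<String> uploaded = Collections.singletonList("s3://bucket/udf.jar");

        // Alternative 1 (current impl): overwrite the user-facing option so
        // the config reflects the final deployed state.
        config.set("user.artifacts.artifact-list", uploaded);

        // Alternative 2: leave the user's value untouched and publish the
        // resolved list under the internal option instead.
        config.set("$internal.user.artifacts.complete-list", uploaded);

        System.out.println(config.options);
    }
}
```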






[jira] [Updated] (FLINK-34418) Disk space issues for Docker-ized GitHub Action jobs

2024-02-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34418:
---
Labels: github-actions pull-request-available test-stability  (was: 
github-actions test-stability)

> Disk space issues for Docker-ized GitHub Action jobs
> 
>
> Key: FLINK-34418
> URL: https://issues.apache.org/jira/browse/FLINK-34418
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Affects Versions: 1.19.0, 1.18.1, 1.20.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
>
> [https://github.com/apache/flink/actions/runs/7838691874/job/21390739806#step:10:27746]
> {code:java}
> [...]
> Feb 09 03:00:13 Caused by: java.io.IOException: No space left on device
> 27608Feb 09 03:00:13  at java.io.FileOutputStream.writeBytes(Native Method)
> 27609Feb 09 03:00:13  at 
> java.io.FileOutputStream.write(FileOutputStream.java:326)
> 27610Feb 09 03:00:13  at 
> org.apache.logging.log4j.core.appender.OutputStreamManager.writeToDestination(OutputStreamManager.java:250)
> 27611Feb 09 03:00:13  ... 39 more
> [...] {code}





[jira] [Comment Edited] (FLINK-34364) Fix release utils mount point to match the release doc and scripts

2024-02-12 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816560#comment-17816560
 ] 

Etienne Chauchot edited comment on FLINK-34364 at 2/12/24 1:27 PM:
---

This was an incorrect mount point: tools/*release* instead of tools/*releasing*. 
_tools/releasing_ is already excluded in the source release script, and 
_tools/releasing_ is the path referred to in the docs.


was (Author: echauchot):
This was an incorrect mount point: tools/*release* instead of tools/*releasing*. 
tools/releasing is already excluded in the source release script, and 
tools/releasing is the path referred to in the docs.

> Fix release utils mount point to match the release doc and scripts
> --
>
> Key: FLINK-34364
> URL: https://issues.apache.org/jira/browse/FLINK-34364
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent, Release System
>Reporter: Etienne Chauchot
>Assignee: Etienne Chauchot
>Priority: Major
>
> This directory is the mount point of the release utils repository and should 
> be excluded from the source release.





[jira] [Updated] (FLINK-34364) Update flink-connector-parent-1.1.0-rc1 source release to exclude tools/release directory

2024-02-12 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot updated FLINK-34364:
-
Issue Type: Bug  (was: Improvement)

> Update flink-connector-parent-1.1.0-rc1 source release to exclude 
> tools/release directory
> -
>
> Key: FLINK-34364
> URL: https://issues.apache.org/jira/browse/FLINK-34364
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent, Release System
>Reporter: Etienne Chauchot
>Assignee: Etienne Chauchot
>Priority: Major
>
> This directory is the mount point of the release utils repository and should 
> be excluded from the source release.





[jira] [Updated] (FLINK-34364) Fix release utils mount point to match the release doc and scripts

2024-02-12 Thread Etienne Chauchot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Etienne Chauchot updated FLINK-34364:
-
Summary: Fix release utils mount point to match the release doc and scripts 
 (was: Update flink-connector-parent-1.1.0-rc1 source release to exclude 
tools/release directory)

> Fix release utils mount point to match the release doc and scripts
> --
>
> Key: FLINK-34364
> URL: https://issues.apache.org/jira/browse/FLINK-34364
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Parent, Release System
>Reporter: Etienne Chauchot
>Assignee: Etienne Chauchot
>Priority: Major
>
> This directory is the mount point of the release utils repository and should 
> be excluded from the source release.





[jira] [Comment Edited] (FLINK-34364) Update flink-connector-parent-1.1.0-rc1 source release to exclude tools/release directory

2024-02-12 Thread Etienne Chauchot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816560#comment-17816560
 ] 

Etienne Chauchot edited comment on FLINK-34364 at 2/12/24 1:25 PM:
---

This was an incorrect mount point: tools/*release* instead of tools/*releasing*. 
tools/releasing is already excluded in the source release script, and 
tools/releasing is the path referred to in the docs.


was (Author: echauchot):
This was an incorrect mount point: tools/*release* instead of tools/*releasing*. 
tools/releasing is already excluded in the source release script. We just need to 
re-create the source release and republish it.

> Update flink-connector-parent-1.1.0-rc1 source release to exclude 
> tools/release directory
> -
>
> Key: FLINK-34364
> URL: https://issues.apache.org/jira/browse/FLINK-34364
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Parent, Release System
>Reporter: Etienne Chauchot
>Assignee: Etienne Chauchot
>Priority: Major
>
> This directory is the mount point of the release utils repository and should 
> be excluded from the source release.





[jira] [Commented] (FLINK-34403) VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM

2024-02-12 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816599#comment-17816599
 ] 

Benchao Li commented on FLINK-34403:


ITCase sounds good to me, +1 for it.

> VeryBigPbProtoToRowTest#testSimple cannot pass due to OOM
> -
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.20.0
>Reporter: Benchao Li
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> After FLINK-33611 merged, the misc test on GHA cannot pass due to out of 
> memory error, throwing following exceptions:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
> Feb 07 05:43:21   ... 18 more
> Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: 
> Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.lang.Throwable.addSuppressed(Throwable.java:1072)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:556)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:486)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamConfig.lambda$triggerSerializationAndReturnFuture$0(StreamConfig.java:182)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:670)

Re: [PR] [FLINK-32315][k8s] Support uploading "local://" artifacts in Kubernetes Application Mode [flink]

2024-02-12 Thread via GitHub


flinkbot commented on PR #24303:
URL: https://github.com/apache/flink/pull/24303#issuecomment-1938670866

   
   ## CI report:
   
   * 3b142b6ba387fd7f6bfaee5c66d9d9d8b0966976 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-32315) Support local file upload in K8s mode

2024-02-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32315:
---
Labels: pull-request-available  (was: )

> Support local file upload in K8s mode
> -
>
> Key: FLINK-32315
> URL: https://issues.apache.org/jira/browse/FLINK-32315
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission, Deployment / Kubernetes
>Reporter: Paul Lin
>Assignee: Ferenc Csaky
>Priority: Major
>  Labels: pull-request-available
>
> Currently, Flink assumes all resources are locally accessible in the pods, 
> which requires users to prepare the resources by mounting storages, 
> downloading resources with init containers, or rebuilding images for each 
> execution.
> We could make things much easier by introducing a built-in file distribution 
> mechanism based on Flink-supported filesystems. It's implemented in two steps:
>  
> 1. KubernetesClusterDescriptor uploads all local resources to remote storage 
> via Flink filesystem (skips if the resources are already remote).
> 2. KubernetesApplicationClusterEntrypoint and KubernetesTaskExecutorRunner 
> download the resources and put them in the classpath during startup.
>  
> The 2nd step is mostly done by 
> [FLINK-28915|https://issues.apache.org/jira/browse/FLINK-28915], thus this 
> issue is focused on the upload part.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

