Re: [PR] [FLINK-34994][tests] Ignore unknown task checkpoint confirmation log in JobIDLoggingITCase [flink]
XComp commented on PR #24611:
URL: https://github.com/apache/flink/pull/24611#issuecomment-2036265939

   Ok, I would have waited for us to fix FLINK-34999. :-D Anyway, (for documentation purposes) the [GHA action workflow in your fork for this branch](https://github.com/rkhachatryan/flink/actions/runs/8535570993) passed as well. The same applies for your test in `master` after the PR was merged: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58716&view=results
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
hugogu commented on code in PR #89:
URL: https://github.com/apache/flink-connector-kafka/pull/89#discussion_r1550913873

##
flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/KafkaSerializerWrapperTest.java:
##
@@ -0,0 +1,28 @@
+package org.apache.flink.connector.kafka.sink;
+
+import org.apache.flink.streaming.connectors.kafka.testutils.SerializationTestBase;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.junit.MockitoJUnitRunner;
+
+import static org.mockito.Mockito.when;
+
+@RunWith(MockitoJUnitRunner.class)

Review Comment:
   Thanks for pointing it out. I have just updated it to get rid of Mockito.
[jira] [Updated] (FLINK-34657) Implement Lineage Graph for streaming API use cases
[ https://issues.apache.org/jira/browse/FLINK-34657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-34657:
-----------------------------------
    Labels: pull-request-available  (was: )

> Implement Lineage Graph for streaming API use cases
> ---------------------------------------------------
>
>                 Key: FLINK-34657
>                 URL: https://issues.apache.org/jira/browse/FLINK-34657
>             Project: Flink
>          Issue Type: Sub-task
>            Reporter: Zhenqiu Huang
>            Priority: Major
>              Labels: pull-request-available
[PR] [FLINK-34657] add lineage provider API [flink]
HuangZhenQiu opened a new pull request, #24619:
URL: https://github.com/apache/flink/pull/24619

   ## What is the purpose of the change
   Add a lineage provider interface that Flink connectors implement to expose lineage info.

   ## Brief change log
   - Add the LineageVertexProvider interface

   ## Verifying this change
   This change is a trivial new interface; no tests are needed.

   ## Does this pull request potentially affect one of the following parts:
   - Dependencies (does it add or upgrade a dependency): (no)
   - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes)
   - The serializers: (no)
   - The runtime per-record code paths (performance sensitive): (no)
   - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
   - The S3 file system connector: (no)

   ## Documentation
   - Does this pull request introduce a new feature? (no)
   - If yes, how is the feature documented? (not applicable)
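For orientation, a connector-facing provider of this shape could look like the following minimal sketch. The package, type, and method names here are assumptions drawn from the PR description and the FLIP-314 lineage design, not the merged API:

```java
package org.apache.flink.streaming.api.lineage;

import java.util.List;

/** Hypothetical dataset descriptor: a name plus a namespace identifying a physical dataset. */
interface LineageDataset {
    String name();

    String namespace();
}

/** Hypothetical lineage vertex: the datasets a connector reads from or writes to. */
interface LineageVertex {
    List<LineageDataset> datasets();
}

/** The provider contract a source or sink would implement to expose lineage info. */
public interface LineageVertexProvider {
    LineageVertex getLineageVertex();
}
```

The idea, as described in the PR, is that a connector implementing such an interface can be queried when the job graph is built, so lineage can be reported without the runtime knowing connector internals.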
[jira] [Commented] (FLINK-34450) TwoInputStreamTaskTest.testWatermarkAndWatermarkStatusForwarding failed
[ https://issues.apache.org/jira/browse/FLINK-34450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833783#comment-17833783 ]

Roman Khachatryan commented on FLINK-34450:
-------------------------------------------
1.20 master: https://github.com/apache/flink/actions/runs/8545965922/job/23415690241#step:10:9900

> TwoInputStreamTaskTest.testWatermarkAndWatermarkStatusForwarding failed
> -----------------------------------------------------------------------
>
>                 Key: FLINK-34450
>                 URL: https://issues.apache.org/jira/browse/FLINK-34450
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Task
>    Affects Versions: 1.20.0
>            Reporter: Matthias Pohl
>            Priority: Major
>              Labels: github-actions, test-stability
>
> https://github.com/XComp/flink/actions/runs/7927275243/job/21643615491#step:10:9880
> {code}
> Error: 07:48:06 07:48:06.643 [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.309 s <<< FAILURE! -- in org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest
> Error: 07:48:06 07:48:06.646 [ERROR] org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest.testWatermarkAndWatermarkStatusForwarding -- Time elapsed: 0.036 s <<< FAILURE!
> Feb 16 07:48:06 Output was not correct.: array lengths differed, expected.length=8 actual.length=7; arrays first differed at element [6]; expected: but was:
> Feb 16 07:48:06 	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:78)
> Feb 16 07:48:06 	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:28)
> Feb 16 07:48:06 	at org.junit.Assert.internalArrayEquals(Assert.java:534)
> Feb 16 07:48:06 	at org.junit.Assert.assertArrayEquals(Assert.java:285)
> Feb 16 07:48:06 	at org.apache.flink.streaming.util.TestHarnessUtil.assertOutputEquals(TestHarnessUtil.java:59)
> Feb 16 07:48:06 	at org.apache.flink.streaming.runtime.tasks.TwoInputStreamTaskTest.testWatermarkAndWatermarkStatusForwarding(TwoInputStreamTaskTest.java:248)
> Feb 16 07:48:06 	at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 16 07:48:06 Caused by: java.lang.AssertionError: expected: but was:
> Feb 16 07:48:06 	at org.junit.Assert.fail(Assert.java:89)
> Feb 16 07:48:06 	at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 16 07:48:06 	at org.junit.Assert.assertEquals(Assert.java:120)
> Feb 16 07:48:06 	at org.junit.Assert.assertEquals(Assert.java:146)
> Feb 16 07:48:06 	at org.junit.internal.ExactComparisonCriteria.assertElementsEqual(ExactComparisonCriteria.java:8)
> Feb 16 07:48:06 	at org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:76)
> Feb 16 07:48:06 	... 6 more
> {code}
> I couldn't reproduce it locally with 2 runs.
[jira] [Closed] (FLINK-34994) JobIDLoggingITCase fails because of "checkpoint confirmation for unknown task"
[ https://issues.apache.org/jira/browse/FLINK-34994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Roman Khachatryan closed FLINK-34994.
-------------------------------------
    Fix Version/s: 1.20.0
       Resolution: Fixed

Fix merged into master as 875683082a58636b377bbb0a82bac4d273455e6e..f86c08041211bbeddf36c9ff0fbe6ae4abaa3b9d

> JobIDLoggingITCase fails because of "checkpoint confirmation for unknown task"
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-34994
>                 URL: https://issues.apache.org/jira/browse/FLINK-34994
>             Project: Flink
>          Issue Type: Bug
>          Components: Tests
>    Affects Versions: 1.20.0
>            Reporter: Roman Khachatryan
>            Assignee: Roman Khachatryan
>            Priority: Major
>              Labels: pull-request-available, test-stability
>             Fix For: 1.20.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58640&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8735]
> {code:java}
> Mar 30 03:46:07 03:46:07.807 [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.147 s <<< FAILURE! -- in org.apache.flink.test.misc.JobIDLoggingITCase
> Mar 30 03:46:07 03:46:07.807 [ERROR] org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) -- Time elapsed: 2.301 s <<< FAILURE!
> Mar 30 03:46:07 java.lang.AssertionError:
> Mar 30 03:46:07 [too many events without Job ID logged by org.apache.flink.runtime.taskexecutor.TaskExecutor]
> Mar 30 03:46:07 Expecting empty but was: [Logger=org.apache.flink.runtime.taskexecutor.TaskExecutor Level=DEBUG Message=TaskManager received a checkpoint confirmation for unknown task b45d406844d494592784a88e47d201e2_cbc357ccb763df2852fee8c4fc7d55f2_0_0.]
> Mar 30 03:46:07 	at org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:264)
> Mar 30 03:46:07 	at org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:149)
> Mar 30 03:46:07 	at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 30 03:46:07 	at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Mar 30 03:46:07 	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 30 03:46:07 	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Mar 30 03:46:07 	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Mar 30 03:46:07 	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> {code}
> [https://github.com/apache/flink/actions/runs/8502821551/job/23287730632#step:10:8131]
> [https://github.com/apache/flink/actions/runs/8507870399/job/23300810619#step:10:8086]
Re: [PR] [FLINK-34994][tests] Ignore unknown task checkpoint confirmation log in JobIDLoggingITCase [flink]
rkhachatryan merged PR #24611:
URL: https://github.com/apache/flink/pull/24611
Re: [PR] [FLINK-34994][tests] Ignore unknown task checkpoint confirmation log in JobIDLoggingITCase [flink]
rkhachatryan commented on PR #24611:
URL: https://github.com/apache/flink/pull/24611#issuecomment-2035661830

   Thanks! Merging
Re: [PR] [FLINK-33676] Implement RestoreTests for WindowAggregate [flink]
jnh5y commented on code in PR #23886:
URL: https://github.com/apache/flink/pull/23886#discussion_r1550173030

##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/nodes/exec/stream/WindowAggregateJsonPlanTest.java:
##
@@ -1,528 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.flink.table.planner.plan.nodes.exec.stream;
-
-import org.apache.flink.table.api.TableConfig;
-import org.apache.flink.table.api.TableEnvironment;
-import org.apache.flink.table.api.config.OptimizerConfigOptions;
-import org.apache.flink.table.planner.plan.utils.JavaUserDefinedAggFunctions.ConcatDistinctAggFunction;
-import org.apache.flink.table.planner.utils.StreamTableTestUtil;
-import org.apache.flink.table.planner.utils.TableTestBase;
-
-import org.junit.jupiter.api.BeforeEach;
-import org.junit.jupiter.api.Test;
-
-/** Test json serialization/deserialization for window aggregate. */
-class WindowAggregateJsonPlanTest extends TableTestBase {
-
-    private StreamTableTestUtil util;
-    private TableEnvironment tEnv;
-
-    @BeforeEach
-    void setup() {
-        util = streamTestUtil(TableConfig.getDefault());
-        tEnv = util.getTableEnv();
-
-        String insertOnlyTableDdl =
-                "CREATE TABLE MyTable (\n"
-                        + " a INT,\n"
-                        + " b BIGINT,\n"
-                        + " c VARCHAR,\n"
-                        + " `rowtime` AS TO_TIMESTAMP(c),\n"
-                        + " proctime as PROCTIME(),\n"
-                        + " WATERMARK for `rowtime` AS `rowtime` - INTERVAL '1' SECOND\n"
-                        + ") WITH (\n"
-                        + " 'connector' = 'values')\n";
-        tEnv.executeSql(insertOnlyTableDdl);
-
-        String changelogTableDdl =
-                "CREATE TABLE MyCDCTable (\n"
-                        + " a INT,\n"
-                        + " b BIGINT,\n"
-                        + " c VARCHAR,\n"
-                        + " `rowtime` AS TO_TIMESTAMP(c),\n"
-                        + " proctime as PROCTIME(),\n"
-                        + " WATERMARK for `rowtime` AS `rowtime` - INTERVAL '1' SECOND\n"
-                        + ") WITH (\n"
-                        + " 'connector' = 'values',\n"
-                        + " 'changelog-mode' = 'I,UA,UB,D')\n";
-        tEnv.executeSql(changelogTableDdl);
-    }
-
-    @Test
-    void testEventTimeTumbleWindow() {
-        tEnv.createFunction("concat_distinct_agg", ConcatDistinctAggFunction.class);
-        String sinkTableDdl =
-                "CREATE TABLE MySink (\n"
-                        + " b BIGINT,\n"
-                        + " window_start TIMESTAMP(3),\n"
-                        + " window_end TIMESTAMP(3),\n"
-                        + " cnt BIGINT,\n"
-                        + " sum_a INT,\n"
-                        + " distinct_cnt BIGINT,\n"
-                        + " concat_distinct STRING\n"
-                        + ") WITH (\n"
-                        + " 'connector' = 'values')\n";
-        tEnv.executeSql(sinkTableDdl);
-        util.verifyJsonPlan(
-                "insert into MySink select\n"
-                        + " b,\n"
-                        + " window_start,\n"
-                        + " window_end,\n"
-                        + " COUNT(*),\n"
-                        + " SUM(a),\n"
-                        + " COUNT(DISTINCT c),\n"
-                        + " concat_distinct_agg(c)\n"
-                        + "FROM TABLE(\n"
-                        + " TUMBLE(TABLE MyTable, DESCRIPTOR(rowtime), INTERVAL '5' SECOND))\n"
-                        + "GROUP BY b, window_start, window_end");
-    }
-
-    @Test
-    void testEventTimeTumbleWindowWithCDCSource() {
-        tEnv.createFunction("concat_distinct_agg", ConcatDistinctAggFunction.class);
-        String sinkTableDdl =
-                "CREATE TABLE MySink (\n"
-                        + " b BIGINT,\n"
-                        + " window_start TIMESTAMP(3),\n"
-                        + " window_end TIMESTAMP(3),\n"
-                        + " cnt BIGINT,\n"
-                        + " sum_a INT,\n"
-
Re: [PR] [FLINK-33211][table] support flink table lineage [flink]
HuangZhenQiu commented on PR #24618:
URL: https://github.com/apache/flink/pull/24618#issuecomment-2035113099

   @flinkbot run azure
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
MartijnVisser commented on code in PR #89:
URL: https://github.com/apache/flink-connector-kafka/pull/89#discussion_r1549969179

##
flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/sink/KafkaSerializerWrapperTest.java:
##
@@ -0,0 +1,28 @@
+package org.apache.flink.connector.kafka.sink;
+
+import org.apache.flink.streaming.connectors.kafka.testutils.SerializationTestBase;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.junit.MockitoJUnitRunner;
+
+import static org.mockito.Mockito.when;
+
+@RunWith(MockitoJUnitRunner.class)

Review Comment:
   Per the code style and quality guide, please don't use Mockito: https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#avoid-mockito---use-reusable-test-implementations
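For illustration of the guideline, a Mockito-free variant of such a test could exercise the real `StringSerializer` instead of mocking it. This is a minimal sketch, not the code merged in the PR; the class and test names are hypothetical, and only the Kafka serializer API plus JUnit 5/AssertJ (Flink's usual test stack) are assumed:

```java
package org.apache.flink.connector.kafka.sink;

import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;

import java.nio.charset.StandardCharsets;

import static org.assertj.core.api.Assertions.assertThat;

/** Hypothetical Mockito-free test: use the real serializer rather than a mock. */
class KafkaSerializerWrapperMockFreeTest {

    @Test
    void testStringSerializerRoundTrip() {
        // Kafka's Serializer extends Closeable, so try-with-resources works here.
        try (StringSerializer serializer = new StringSerializer()) {
            byte[] bytes = serializer.serialize("some-topic", "hello");
            // StringSerializer encodes with UTF-8 by default.
            assertThat(bytes).isEqualTo("hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

Using the real implementation keeps the test coupled to actual serializer behavior instead of a stubbed interaction, which is the point of the "reusable test implementations" rule.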
[jira] [Commented] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833647#comment-17833647 ]

Ryan Skraba commented on FLINK-34997:
-------------------------------------
1.20 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58710&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=1

> PyFlink YARN per-job on Docker test failed on azure
> ---------------------------------------------------
>
>                 Key: FLINK-34997
>                 URL: https://issues.apache.org/jira/browse/FLINK-34997
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System / CI
>    Affects Versions: 1.20.0
>            Reporter: Weijie Guo
>            Priority: Blocker
>              Labels: test-stability
>
> {code}
> Apr 03 03:12:37 ==============================================================================
> Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test'
> Apr 03 03:12:37 ==============================================================================
> Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202
> Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Apr 03 03:12:38 Docker version 24.0.9, build 2936816
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found
> Apr 03 03:12:38 [FAIL] Test script contains errors.
> Apr 03 03:12:38 Checking of logs skipped.
> Apr 03 03:12:38
> Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226
[jira] [Updated] (FLINK-35004) SqlGatewayE2ECase could not start container
[ https://issues.apache.org/jira/browse/FLINK-35004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Skraba updated FLINK-35004:
--------------------------------
    Description: 
1.20, jdk17: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e8e46ef5-75cc-564f-c2bd-1797c35cbebe&t=60c49903-2505-5c25-7e46-de91b1737bea&l=15078

There is an error: "Process failed due to timeout" in {{SqlGatewayE2ECase.testSqlClientExecuteStatement}}. In the maven logs, we can see:
{code:java}
02:57:26,979 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Image prestodb/hdp2.6-hive:10 pull took PT43.59218S
02:57:26,991 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Creating container for image: prestodb/hdp2.6-hive:10
02:57:27,032 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 is starting: 162069678c7d03252a42ed81ca43e1911ca7357c476a4a5de294ffe55bd83145
02:57:42,846 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 started in PT15.855339866S
02:57:53,447 [main] ERROR tc.prestodb/hdp2.6-hive:10 [] - Could not start container
java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
	at org.apache.flink.table.gateway.containers.HiveContainer.containerIsStarted(HiveContainer.java:94) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:723) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:543) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1144) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85) [junit-platform-launcher-1.10.1.jar:1.10.1]
[jira] [Created] (FLINK-35005) SqlClientITCase Failed to build JobManager image
Ryan Skraba created FLINK-35005:
-----------------------------------

             Summary: SqlClientITCase Failed to build JobManager image
                 Key: FLINK-35005
                 URL: https://issues.apache.org/jira/browse/FLINK-35005
             Project: Flink
          Issue Type: Bug
    Affects Versions: 1.20.0
            Reporter: Ryan Skraba

jdk21 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=dc1bf4ed-4646-531a-f094-e103042be549&t=fb3d654d-52f8-5b98-fe9d-b18dd2e2b790&l=15140

{code}
Apr 03 02:59:16 02:59:16.247 [INFO] ---
Apr 03 02:59:16 02:59:16.248 [INFO]  T E S T S
Apr 03 02:59:16 02:59:16.248 [INFO] ---
Apr 03 02:59:17 02:59:17.841 [INFO] Running SqlClientITCase
Apr 03 03:03:15 	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
Apr 03 03:03:15 	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
Apr 03 03:03:15 	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
Apr 03 03:03:15 	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)
Apr 03 03:03:15 Caused by: org.apache.flink.connector.testframe.container.ImageBuildException: Failed to build image "flink-configured-jobmanager"
Apr 03 03:03:15 	at org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:234)
Apr 03 03:03:15 	at org.apache.flink.connector.testframe.container.FlinkTestcontainersConfigurator.configureJobManagerContainer(FlinkTestcontainersConfigurator.java:65)
Apr 03 03:03:15 	... 12 more
Apr 03 03:03:15 Caused by: java.lang.RuntimeException: com.github.dockerjava.api.exception.DockerClientException: Could not build image: Head "https://registry-1.docker.io/v2/library/eclipse-temurin/manifests/21-jre-jammy": received unexpected HTTP status: 500 Internal Server Error
Apr 03 03:03:15 	at org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
Apr 03 03:03:15 	at org.rnorth.ducttape.timeouts.Timeouts.getWithTimeout(Timeouts.java:43)
Apr 03 03:03:15 	at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:47)
Apr 03 03:03:15 	at org.apache.flink.connector.testframe.container.FlinkImageBuilder.buildBaseImage(FlinkImageBuilder.java:255)
Apr 03 03:03:15 	at org.apache.flink.connector.testframe.container.FlinkImageBuilder.build(FlinkImageBuilder.java:206)
Apr 03 03:03:15 	... 13 more
Apr 03 03:03:15 Caused by: com.github.dockerjava.api.exception.DockerClientException: Could not build image: Head "https://registry-1.docker.io/v2/library/eclipse-temurin/manifests/21-jre-jammy": received unexpected HTTP status: 500 Internal Server Error
Apr 03 03:03:15 	at com.github.dockerjava.api.command.BuildImageResultCallback.getImageId(BuildImageResultCallback.java:78)
Apr 03 03:03:15 	at com.github.dockerjava.api.command.BuildImageResultCallback.awaitImageId(BuildImageResultCallback.java:50)
Apr 03 03:03:15 	at org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:159)
Apr 03 03:03:15 	at org.testcontainers.images.builder.ImageFromDockerfile.resolve(ImageFromDockerfile.java:40)
Apr 03 03:03:15 	at org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:19)
Apr 03 03:03:15 	at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:41)
Apr 03 03:03:15 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
Apr 03 03:03:15 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
Apr 03 03:03:15 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
Apr 03 03:03:15 	at java.base/java.lang.Thread.run(Thread.java:1583)
Apr 03 03:03:15
{code}
[jira] [Updated] (FLINK-35004) SqlGatewayE2ECase could not start container
[ https://issues.apache.org/jira/browse/FLINK-35004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Skraba updated FLINK-35004:
--------------------------------
    Description: 
1.20, jdk17: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e8e46ef5-75cc-564f-c2bd-1797c35cbebe&t=60c49903-2505-5c25-7e46-de91b1737bea&l=15078

There is an error: "Process failed due to timeout" in {{SqlGatewayE2ECase.testSqlClientExecuteStatement}}. In the maven logs, we can see:
{code:java}
02:57:26,979 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Image prestodb/hdp2.6-hive:10 pull took PT43.59218S
02:57:26,991 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Creating container for image: prestodb/hdp2.6-hive:10
02:57:27,032 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 is starting: 162069678c7d03252a42ed81ca43e1911ca7357c476a4a5de294ffe55bd83145
02:57:42,846 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 started in PT15.855339866S
02:57:53,447 [main] ERROR tc.prestodb/hdp2.6-hive:10 [] - Could not start container
java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
	at org.apache.flink.table.gateway.containers.HiveContainer.containerIsStarted(HiveContainer.java:94) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:723) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:543) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1144) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DelegatingLauncher.execute(DelegatingLauncher.java:47) [junit-platform-launcher-1.10.1.
[jira] [Commented] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833638#comment-17833638 ]

Ryan Skraba commented on FLINK-34997:
-------------------------------------
1.20 [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=10071]

1.20 adaptive scheduler [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=fb37c667-81b7-5c22-dd91-846535e99a97&t=011e961e-597c-5c96-04fe-7941c8b83f23&l=10519]

> PyFlink YARN per-job on Docker test failed on azure
> ---------------------------------------------------
>
>                 Key: FLINK-34997
>                 URL: https://issues.apache.org/jira/browse/FLINK-34997
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System / CI
>    Affects Versions: 1.20.0
>            Reporter: Weijie Guo
>            Priority: Blocker
>              Labels: test-stability
>
> {code}
> Apr 03 03:12:37 ==============================================================================
> Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test'
> Apr 03 03:12:37 ==============================================================================
> Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202
> Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT
> Apr 03 03:12:38 Docker version 24.0.9, build 2936816
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found
> Apr 03 03:12:38 [FAIL] Test script contains errors.
> Apr 03 03:12:38 Checking of logs skipped.
> Apr 03 03:12:38
> Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226
[jira] [Updated] (FLINK-35004) SqlGatewayE2ECase could not start container
[ https://issues.apache.org/jira/browse/FLINK-35004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Skraba updated FLINK-35004:
--------------------------------
    Affects Version/s: 1.20.0

> SqlGatewayE2ECase could not start container
> -------------------------------------------
>
>                 Key: FLINK-35004
>                 URL: https://issues.apache.org/jira/browse/FLINK-35004
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.20.0
>            Reporter: Ryan Skraba
>            Priority: Critical
>              Labels: github-actions, test-stability
>
> 1.20, jdk17: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e8e46ef5-75cc-564f-c2bd-1797c35cbebe&t=60c49903-2505-5c25-7e46-de91b1737bea&l=15078
> There is an error: "Process failed due to timeout" in {{SqlGatewayE2ECase.testSqlClientExecuteStatement}}. In the maven logs, we can see:
> {code:java}
> 02:57:26,979 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Image prestodb/hdp2.6-hive:10 pull took PT43.59218S
> 02:57:26,991 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Creating container for image: prestodb/hdp2.6-hive:10
> 02:57:27,032 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 is starting: 162069678c7d03252a42ed81ca43e1911ca7357c476a4a5de294ffe55bd83145
> 02:57:42,846 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 started in PT15.855339866S
> 02:57:53,447 [main] ERROR tc.prestodb/hdp2.6-hive:10 [] - Could not start container
> java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
> 	at org.apache.flink.table.gateway.containers.HiveContainer.containerIsStarted(HiveContainer.java:94) ~[test-classes/:?]
> 	at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:723) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:543) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
> 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69) ~[test-classes/:?]
> 	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1144) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28) ~[testcontainers-1.19.1.jar:1.19.1]
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20) ~[junit-4.13.2.jar:4.13.2]
> 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) ~[junit-4.13.2.jar:4.13.2]
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:413) ~[junit-4.13.2.jar:4.13.2]
> 	at org.junit.runner.JUnitCore.run(JUnitCore.java:137) ~[junit-4.13.2.jar:4.13.2]
> 	at org.junit.runner.JUnitCore.run(JUnitCore.java:115) ~[junit-4.13.2.jar:4.13.2]
> 	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
> 	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
> 	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
> 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
> 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
> 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
> 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
> 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141) [junit-platform-launcher-1.10.1.jar:1.10.1]
[jira] [Commented] (FLINK-34998) Wordcount on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833637#comment-17833637 ]

Ryan Skraba commented on FLINK-34998:
-------------------------------------
* 1.20, jdk11 [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=5d91035e-8022-55f2-2d4f-ab121508bf7e&l=6091]
* 1.20, jdk17 [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=64debf87-ecdb-5aef-788d-8720d341b5cb&t=2302fb98-0839-5df2-3354-bbae636f81a7&l=5267]
* 1.20 jdk21 [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=40819b3f-6406-53da-fb7a-b3c0f1535d7c&t=ec45d684-7283-5150-360d-c37269cd552a&l=5241]
* 1.20 jdk21 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=59a2b95a-736b-5c46-b3e0-cee6e587fd86&t=c301da75-e699-5c06-735f-778207c16f50&l=22616

> Wordcount on Docker test failed on azure
> ----------------------------------------
>
>                 Key: FLINK-34998
>                 URL: https://issues.apache.org/jira/browse/FLINK-34998
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System / CI
>    Affects Versions: 1.20.0
>            Reporter: Weijie Guo
>            Priority: Major
>
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 65: docker-compose: command not found
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 66: docker-compose: command not found
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 67: docker-compose: command not found
> sort: cannot read: '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*': No such file or directory
> Apr 03 02:08:14 FAIL WordCount: Output hash mismatch. Got d41d8cd98f00b204e9800998ecf8427e, expected 0e5bd0a3dd7d5a7110aa85ff70adb54b.
> Apr 03 02:08:14 head hexdump of actual:
> head: cannot open '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*' for reading: No such file or directory
> Apr 03 02:08:14 Stopping job timeout watchdog (with pid=244913)
> Apr 03 02:08:14 [FAIL] Test script contains errors.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=5d91035e-8022-55f2-2d4f-ab121508bf7e&l=6043
[jira] [Commented] (FLINK-34988) Class loading issues in JDK17 and JDK21
[ https://issues.apache.org/jira/browse/FLINK-34988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833633#comment-17833633 ]

Ryan Skraba commented on FLINK-34988:
-------------------------------------
* 1.20 jdk17 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=13039
* 1.20 jdk17 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=d871f0ce-7328-5d00-023b-e7391f5801c8&t=77cbea27-feb9-5cf5-53f7-3267f9f9c6b6&l=22655
* 1.20 jdk21 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=d06b80b4-9e88-5d40-12a2-18072cf60528&t=609ecd5a-3f6e-5d0c-2239-2096b155a4d0&l=13019
* 1.20 jdk21 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=59a2b95a-736b-5c46-b3e0-cee6e587fd86&t=c301da75-e699-5c06-735f-778207c16f50&l=22616

> Class loading issues in JDK17 and JDK21
> ----------------------------------------
>
>                 Key: FLINK-34988
>                 URL: https://issues.apache.org/jira/browse/FLINK-34988
>             Project: Flink
>          Issue Type: Bug
>          Components: API / DataStream
>    Affects Versions: 1.20.0
>            Reporter: Matthias Pohl
>            Priority: Major
>              Labels: pull-request-available, test-stability
>
> * JDK 17 (core; NoClassDefFoundError caused by ExceptionInInitializeError): https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=12942
> * JDK 17 (misc; ExceptionInInitializeError): https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=d871f0ce-7328-5d00-023b-e7391f5801c8&t=77cbea27-feb9-5cf5-53f7-3267f9f9c6b6&l=22548
> * JDK 21 (core; same as above): https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=d06b80b4-9e88-5d40-12a2-18072cf60528&t=609ecd5a-3f6e-5d0c-2239-2096b155a4d0&l=12963
> * JDK 21 (misc; same as above): https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=59a2b95a-736b-5c46-b3e0-cee6e587fd86&t=c301da75-e699-5c06-735f-778207c16f50&l=22506
[jira] [Commented] (FLINK-28440) EventTimeWindowCheckpointingITCase failed with restore
[ https://issues.apache.org/jira/browse/FLINK-28440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833634#comment-17833634 ]

Ryan Skraba commented on FLINK-28440:
-------------------------------------
1.20 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8750

> EventTimeWindowCheckpointingITCase failed with restore
> ------------------------------------------------------
>
>                 Key: FLINK-28440
>                 URL: https://issues.apache.org/jira/browse/FLINK-28440
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Checkpointing, Runtime / State Backends
>    Affects Versions: 1.16.0, 1.17.0, 1.18.0, 1.19.0
>            Reporter: Huang Xingbo
>            Assignee: Yanfei Lei
>            Priority: Critical
>              Labels: auto-deprioritized-critical, pull-request-available, stale-assigned, test-stability
>             Fix For: 1.20.0
>
>         Attachments: image-2023-02-01-00-51-54-506.png, image-2023-02-01-01-10-01-521.png, image-2023-02-01-01-19-12-182.png, image-2023-02-01-16-47-23-756.png, image-2023-02-01-16-57-43-889.png, image-2023-02-02-10-52-56-599.png, image-2023-02-03-10-09-07-586.png, image-2023-02-03-12-03-16-155.png, image-2023-02-03-12-03-56-614.png
>
> {code:java}
> Caused by: java.lang.Exception: Exception while creating StreamOperatorStateContext.
> 	at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:256)
> 	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:268)
> 	at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:722)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:698)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:665)
> 	at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
> 	at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:904)
> 	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
> 	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for WindowOperator_0a448493b4782967b150582570326227_(2/4) from any of the 1 provided restore options.
> 	at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
> 	at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:353)
> 	at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:165)
> 	... 11 more
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /tmp/junit1835099326935900400/junit1113650082510421526/52ee65b7-033f-4429-8ddd-adbe85e27ced (No such file or directory)
> 	at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
> 	at org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.advance(StateChangelogHandleStreamHandleReader.java:87)
> 	at org.apache.flink.runtime.state.changelog.StateChangelogHandleStreamHandleReader$1.hasNext(StateChangelogHandleStreamHandleReader.java:69)
> 	at org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.readBackendHandle(ChangelogBackendRestoreOperation.java:96)
> 	at org.apache.flink.state.changelog.restore.ChangelogBackendRestoreOperation.restore(ChangelogBackendRestoreOperation.java:75)
> 	at org.apache.flink.state.changelog.ChangelogStateBackend.restore(ChangelogStateBackend.java:92)
> 	at org.apache.flink.state.changelog.AbstractChangelogStateBackend.createKeyedStateBackend(AbstractChangelogStateBackend.java:136)
> 	at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:336)
> 	at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
> 	at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
> 	... 13 more
> Caused by: java.io.Fi
[jira] [Commented] (FLINK-34999) PR CI stopped operating
[ https://issues.apache.org/jira/browse/FLINK-34999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833783#comment-17833783 ]

Robert Metzger commented on FLINK-34999:
----------------------------------------
I have access to the Flinkbot gh account (Matthias Pohl and Chesnay have access too).

[~lorenzo.affetti] I pinged you in the Flink slack regarding the VV flinkbot stuff.

> PR CI stopped operating
> ------------------------
>
>                 Key: FLINK-34999
>                 URL: https://issues.apache.org/jira/browse/FLINK-34999
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System / CI
>    Affects Versions: 1.19.0, 1.18.1, 1.20.0
>            Reporter: Matthias Pohl
>            Priority: Blocker
>
> There are no [new PR CI runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] being picked up anymore. [Recently updated PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not picked up by the @flinkbot.
> In the meantime there was a notification sent from GitHub that the password of the [@flinkbot|https://github.com/flinkbot] was reset for security reasons. It's quite likely that these two events are related.
[jira] [Resolved] (FLINK-35000) PullRequest template doesn't use the correct format to refer to the testing code convention
[ https://issues.apache.org/jira/browse/FLINK-35000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias Pohl resolved FLINK-35000.
-----------------------------------
    Fix Version/s: 1.18.2
                   1.20.0
                   1.19.1
       Resolution: Fixed

master: [d301839dfe2ed9b1313d23f8307bda76868a0c0a|https://github.com/apache/flink/commit/d301839dfe2ed9b1313d23f8307bda76868a0c0a]
1.19: [eb58599b434b6c5fe86f6e487ce88315c98b4ec3|https://github.com/apache/flink/commit/eb58599b434b6c5fe86f6e487ce88315c98b4ec3]
1.18: [9150f93b18b8694646092a6ed24a14e3653f613f|https://github.com/apache/flink/commit/9150f93b18b8694646092a6ed24a14e3653f613f]

> PullRequest template doesn't use the correct format to refer to the testing code convention
> --------------------------------------------------------------------------------------------
>
>                 Key: FLINK-35000
>                 URL: https://issues.apache.org/jira/browse/FLINK-35000
>             Project: Flink
>          Issue Type: Bug
>          Components: Build System / CI, Project Website
>    Affects Versions: 1.19.0, 1.18.1, 1.20.0
>            Reporter: Matthias Pohl
>            Assignee: Matthias Pohl
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 1.18.2, 1.20.0, 1.19.1
>
> The PR template refers to https://flink.apache.org/contributing/code-style-and-quality-common.html#testing rather than https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing
Re: [PR] [BP-1.18][FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp merged PR #24617:
URL: https://github.com/apache/flink/pull/24617
Re: [PR] [BP-1.19][FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp merged PR #24616:
URL: https://github.com/apache/flink/pull/24616
Re: [PR] [FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp merged PR #24615:
URL: https://github.com/apache/flink/pull/24615
[jira] [Created] (FLINK-35004) SqlGatewayE2ECase could not start container
Ryan Skraba created FLINK-35004:
-----------------------------------

             Summary: SqlGatewayE2ECase could not start container
                 Key: FLINK-35004
                 URL: https://issues.apache.org/jira/browse/FLINK-35004
             Project: Flink
          Issue Type: Bug
            Reporter: Ryan Skraba

1.20, jdk17: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58708&view=logs&j=e8e46ef5-75cc-564f-c2bd-1797c35cbebe&t=60c49903-2505-5c25-7e46-de91b1737bea&l=15078

There is an error: "Process failed due to timeout" in {{SqlGatewayE2ECase.testSqlClientExecuteStatement}}. In the maven logs, we can see:
{code:java}
02:57:26,979 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Image prestodb/hdp2.6-hive:10 pull took PT43.59218S
02:57:26,991 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Creating container for image: prestodb/hdp2.6-hive:10
02:57:27,032 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 is starting: 162069678c7d03252a42ed81ca43e1911ca7357c476a4a5de294ffe55bd83145
02:57:42,846 [main] INFO  tc.prestodb/hdp2.6-hive:10 [] - Container prestodb/hdp2.6-hive:10 started in PT15.855339866S
02:57:53,447 [main] ERROR tc.prestodb/hdp2.6-hive:10 [] - Could not start container
java.lang.RuntimeException: java.net.SocketTimeoutException: timeout
	at org.apache.flink.table.gateway.containers.HiveContainer.containerIsStarted(HiveContainer.java:94) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.containerIsStarted(GenericContainer.java:723) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:543) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.apache.flink.table.gateway.containers.HiveContainer.doStart(HiveContainer.java:69) ~[test-classes/:?]
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:334) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.GenericContainer.starting(GenericContainer.java:1144) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:28) ~[testcontainers-1.19.1.jar:1.19.1]
	at org.junit.rules.RunRules.evaluate(RunRules.java:20) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115) ~[junit-4.13.2.jar:4.13.2]
	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) ~[junit-vintage-engine-5.10.1.jar:5.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58) ~[junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85) [junit-platform-launcher-1.10.1.jar:1.10.1]
	at org
[jira] [Updated] (FLINK-33211) Implement table lineage graph
[ https://issues.apache.org/jira/browse/FLINK-33211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-33211:
-----------------------------------
    Labels: pull-request-available  (was: )

> Implement table lineage graph
> ------------------------------
>
>                 Key: FLINK-33211
>                 URL: https://issues.apache.org/jira/browse/FLINK-33211
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table SQL / Runtime
>    Affects Versions: 1.19.0
>            Reporter: Fang Yong
>            Assignee: Zhenqiu Huang
>            Priority: Major
>              Labels: pull-request-available
>
> Implement table lineage graph
[PR] [FLINK-33211][table] support flink table lineage [flink]
HuangZhenQiu opened a new pull request, #24618:
URL: https://github.com/apache/flink/pull/24618

   ## What is the purpose of the change
   1. Add table lineage vertices to transformations in the planner. The final LineageGraph is generated from the transformations and put into the StreamGraph. The lineage graph will be published to the lineage listener in a follow-up PR.
   2. Deprecated table sources and sinks are not considered, as they don't carry enough info to derive a name and namespace for the lineage dataset.

   ## Brief change log
   - Add the table lineage interface and default implementations
   - Create lineage vertices and add them to transformations during the physical-plan-to-transformation conversion

   ## Verifying this change
   1. Added TableLineageGraphTest for both stream and batch.
   2. Added LineageGraph verification in TransformationsTest for legacy sources.

   ## Does this pull request potentially affect one of the following parts:
   - Dependencies (does it add or upgrade a dependency): (no)
   - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes)
   - The serializers: (no)
   - The runtime per-record code paths (performance sensitive): (no)
   - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
   - The S3 file system connector: (no)

   ## Documentation
   - Does this pull request introduce a new feature? (no)
   - If yes, how is the feature documented? (not applicable)
[jira] [Updated] (FLINK-35003) Update zookeeper to 3.8.4 to address CVE-2024-23944
[ https://issues.apache.org/jira/browse/FLINK-35003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35003: --- Labels: pull-request-available (was: ) > Update zookeeper to 3.8.4 to address CVE-2024-23944 > --- > > Key: FLINK-35003 > URL: https://issues.apache.org/jira/browse/FLINK-35003 > Project: Flink > Issue Type: Improvement > Components: BuildSystem / Shaded >Reporter: Shilun Fan >Priority: Major > Labels: pull-request-available > > Update zookeeper to 3.8.4 to address CVE-2024-23944 > https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper/3.8.3 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34999) PR CI stopped operating
[ https://issues.apache.org/jira/browse/FLINK-34999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833587#comment-17833587 ] Robert Metzger commented on FLINK-34999: I'm trying to restore access to the flinkbot gh account. > PR CI stopped operating > --- > > Key: FLINK-34999 > URL: https://issues.apache.org/jira/browse/FLINK-34999 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Priority: Blocker > > There are no [new PR CI > runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] > being picked up anymore. [Recently updated > PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not > picked up by the @flinkbot. > In the meantime there was a notification sent from GitHub that the password > of the [@flinkbot|https://github.com/flinkbot] was reset for security > reasons. It's quite likely that these two events are related. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35003) Update zookeeper to 3.8.4 to address CVE-2024-23944
Shilun Fan created FLINK-35003: -- Summary: Update zookeeper to 3.8.4 to address CVE-2024-23944 Key: FLINK-35003 URL: https://issues.apache.org/jira/browse/FLINK-35003 Project: Flink Issue Type: Improvement Components: BuildSystem / Shaded Reporter: Shilun Fan Update zookeeper to 3.8.4 to address CVE-2024-23944 https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper/3.8.3 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] FLINK-35003. Update zookeeper to 3.8.4 to address CVE-2024-23944. [flink-shaded]
slfan1989 commented on PR #137: URL: https://github.com/apache/flink-shaded/pull/137#issuecomment-2034730125 @mbalassi Can you help review this PR? Thank you very much! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
hugogu commented on PR #89: URL: https://github.com/apache/flink-connector-kafka/pull/89#issuecomment-2034714493 @MartijnVisser May I have your advice on the use of `TemporaryClassLoaderContext` in the implementation? It looks unnecessary to me. Do you think I should just remove it in this PR?

```java
public void open(InitializationContext context) throws Exception {
    final ClassLoader userCodeClassLoader = context.getUserCodeClassLoader().asClassLoader();
    try (TemporaryClassLoaderContext ignored = TemporaryClassLoaderContext.of(userCodeClassLoader)) {
        serializer = InstantiationUtil.instantiate(
                serializerClass.getName(), Serializer.class, userCodeClassLoader);
        if (serializer instanceof Configurable) {
            ((Configurable) serializer).configure(config);
        } else {
            serializer.configure(config, isKey);
        }
    } catch (Exception e) {
        // Note: 'serializer' is still null if instantiation failed, so report the class instead.
        throw new IOException("Failed to instantiate the serializer of class " + serializerClass, e);
    }
}
```

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-35002) GitHub action/upload-artifact@v4 can timeout
[ https://issues.apache.org/jira/browse/FLINK-35002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-35002: -- Labels: github-actions test-stability (was: test-stability) > GitHub action/upload-artifact@v4 can timeout > > > Key: FLINK-35002 > URL: https://issues.apache.org/jira/browse/FLINK-35002 > Project: Flink > Issue Type: Bug > Components: Build System >Reporter: Ryan Skraba >Priority: Major > Labels: github-actions, test-stability > > A timeout can occur when uploading a successfully built artifact: > * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650] > {code:java} > 2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file > uploaded > 2024-04-02T02:20:15.6360133Z Artifact name is valid! > 2024-04-02T02:20:15.6362872Z Root directory input is valid! > 2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request > timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. > Retrying request in 3000 ms... > 2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request > timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. > Retrying request in 4785 ms... > 2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request > timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. > Retrying request in 7375 ms... > 2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request > timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. > Retrying request in 14988 ms... > 2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to > make request after 5 attempts: Request timeout: > /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact > 2024-04-02T02:22:59.9893296Z Post job cleanup. > 2024-04-02T02:22:59.9958844Z Post job cleanup. {code} > (This is unlikely to be something we can fix, but we can track it.) -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
hugogu commented on PR #89: URL: https://github.com/apache/flink-connector-kafka/pull/89#issuecomment-2034668045 @MartijnVisser True. I added a couple of tests that would fail without my change. Hope it looks good now. Please let me know if anything else is needed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-34994) JobIDLoggingITCase fails because of "checkpoint confirmation for unknown task"
[ https://issues.apache.org/jira/browse/FLINK-34994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833574#comment-17833574 ] Ryan Skraba commented on FLINK-34994: - 1.20, jdk11 [https://github.com/apache/flink/actions/runs/8532178112/job/23373188793#step:10:9091] > JobIDLoggingITCase fails because of "checkpoint confirmation for unknown task" > -- > > Key: FLINK-34994 > URL: https://issues.apache.org/jira/browse/FLINK-34994 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.20.0 >Reporter: Roman Khachatryan >Assignee: Roman Khachatryan >Priority: Major > Labels: pull-request-available, test-stability > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58640&view=logs&j=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3&t=0c010d0c-3dec-5bf1-d408-7b18988b1b2b&l=8735] > {code:java} > Mar 30 03:46:07 03:46:07.807 [ERROR] Tests run: 1, Failures: 1, Errors: 0, > Skipped: 0, Time elapsed: 7.147 s <<< FAILURE! -- in > org.apache.flink.test.misc.JobIDLoggingITCase > Mar 30 03:46:07 03:46:07.807 [ERROR] > org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) > -- Time elapsed: 2.301 s <<< FAILURE! > Mar 30 03:46:07 java.lang.AssertionError: > Mar 30 03:46:07 [too many events without Job ID logged by > org.apache.flink.runtime.taskexecutor.TaskExecutor] > Mar 30 03:46:07 Expecting empty but was: > [Logger=org.apache.flink.runtime.taskexecutor.TaskExecutor Level=DEBUG > Message=TaskManager received a checkpoint confirmation for unknown task > b45d406844d494592784a88e47d201e2_cbc357ccb763df2852fee8c4fc7d55f2_0_0.] > Mar 30 03:46:07 at > org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:264) > Mar 30 03:46:07 at > org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:149) > Mar 30 03:46:07 at java.lang.reflect.Method.invoke(Method.java:498) > Mar 30 03:46:07 at > java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) > Mar 30 03:46:07 at > java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) > Mar 30 03:46:07 at > java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) > Mar 30 03:46:07 at > java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) > Mar 30 03:46:07 at > java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) > {code} > [https://github.com/apache/flink/actions/runs/8502821551/job/23287730632#step:10:8131] > [https://github.com/apache/flink/actions/runs/8507870399/job/23300810619#step:10:8086] > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35002) GitHub action/upload-artifact@v4 can timeout
Ryan Skraba created FLINK-35002: --- Summary: GitHub action/upload-artifact@v4 can timeout Key: FLINK-35002 URL: https://issues.apache.org/jira/browse/FLINK-35002 Project: Flink Issue Type: Bug Components: Build System Reporter: Ryan Skraba A timeout can occur when uploading a successfully built artifact: * [https://github.com/apache/flink/actions/runs/8516411871/job/23325392650] {code:java} 2024-04-02T02:20:15.6355368Z With the provided path, there will be 1 file uploaded 2024-04-02T02:20:15.6360133Z Artifact name is valid! 2024-04-02T02:20:15.6362872Z Root directory input is valid! 2024-04-02T02:20:20.6975036Z Attempt 1 of 5 failed with error: Request timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying request in 3000 ms... 2024-04-02T02:20:28.7084937Z Attempt 2 of 5 failed with error: Request timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying request in 4785 ms... 2024-04-02T02:20:38.5015936Z Attempt 3 of 5 failed with error: Request timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying request in 7375 ms... 2024-04-02T02:20:50.8901508Z Attempt 4 of 5 failed with error: Request timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact. Retrying request in 14988 ms... 2024-04-02T02:21:10.9028438Z ##[error]Failed to CreateArtifact: Failed to make request after 5 attempts: Request timeout: /twirp/github.actions.results.api.v1.ArtifactService/CreateArtifact 2024-04-02T02:22:59.9893296Z Post job cleanup. 2024-04-02T02:22:59.9958844Z Post job cleanup. {code} (This is unlikely to be something we can fix, but we can track it.) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34988) Class loading issues in JDK17 and JDK21
[ https://issues.apache.org/jira/browse/FLINK-34988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833573#comment-17833573 ] Ryan Skraba commented on FLINK-34988: - Identical problems found in the GitHub Actions runs in JDK17 and JDK21: * [https://github.com/apache/flink/actions/runs/8516411771/job/23325640752#step:10:12870] * [https://github.com/apache/flink/actions/runs/8516411771/job/23325641276#step:10:22601] * [https://github.com/apache/flink/actions/runs/8516411771/job/23325644069#step:10:12904] * [https://github.com/apache/flink/actions/runs/8516411771/job/23325644866#step:10:22512] And * [https://github.com/apache/flink/actions/runs/8532178112/job/23373191936#step:10:12874] * [https://github.com/apache/flink/actions/runs/8532178112/job/23373192587#step:10:22678] * [https://github.com/apache/flink/actions/runs/8532178112/job/23373174773#step:10:12918] * [https://github.com/apache/flink/actions/runs/8532178112/job/23373175427#step:10:22550] > Class loading issues in JDK17 and JDK21 > --- > > Key: FLINK-34988 > URL: https://issues.apache.org/jira/browse/FLINK-34988 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.20.0 >Reporter: Matthias Pohl >Priority: Major > Labels: pull-request-available, test-stability > > * JDK 17 (core; NoClassDefFoundError caused by ExceptionInInitializeError): > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=675bf62c-8558-587e-2555-dcad13acefb5&t=5878eed3-cc1e-5b12-1ed0-9e7139ce0992&l=12942 > * JDK 17 (misc; ExceptionInInitializeError): > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=d871f0ce-7328-5d00-023b-e7391f5801c8&t=77cbea27-feb9-5cf5-53f7-3267f9f9c6b6&l=22548 > * JDK 21 (core; same as above): > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=d06b80b4-9e88-5d40-12a2-18072cf60528&t=609ecd5a-3f6e-5d0c-2239-2096b155a4d0&l=12963 > * JDK 21 (misc; same as above): > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58676&view=logs&j=59a2b95a-736b-5c46-b3e0-cee6e587fd86&t=c301da75-e699-5c06-735f-778207c16f50&l=22506 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34999) PR CI stopped operating
[ https://issues.apache.org/jira/browse/FLINK-34999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833563#comment-17833563 ] Lorenzo Affetti commented on FLINK-34999: - Hello [~mapohl] , I think Jing is not available at the moment. Let me have a pass over this from the Ververica side to double-check if I see any problem on CI machines. > PR CI stopped operating > --- > > Key: FLINK-34999 > URL: https://issues.apache.org/jira/browse/FLINK-34999 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Priority: Blocker > > There are no [new PR CI > runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] > being picked up anymore. [Recently updated > PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not > picked up by the @flinkbot. > In the meantime there was a notification sent from GitHub that the password > of the [@flinkbot|https://github.com/flinkbot] was reset for security > reasons. It's quite likely that these two events are related. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-34955) Upgrade commons-compress to 1.26.0
[ https://issues.apache.org/jira/browse/FLINK-34955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Márton Balassi closed FLINK-34955. -- Resolution: Fixed > Upgrade commons-compress to 1.26.0 > -- > > Key: FLINK-34955 > URL: https://issues.apache.org/jira/browse/FLINK-34955 > Project: Flink > Issue Type: Improvement >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > Fix For: 1.18.2, 1.20.0, 1.19.1 > > > commons-compress 1.24.0 has CVE issues, try to upgrade to 1.26.0, we can > refer to the maven link > https://mvnrepository.com/artifact/org.apache.commons/commons-compress -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34955) Upgrade commons-compress to 1.26.0
[ https://issues.apache.org/jira/browse/FLINK-34955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833561#comment-17833561 ] Márton Balassi commented on FLINK-34955: [{{f172171}}|https://github.com/apache/flink/commit/f17217100cf7d28bf6a1b687427c01e30b77e900] in release-1.19 and [{{1711ba8}}|https://github.com/apache/flink/commit/1711ba85744d917ca63d989bf4c120c6aebda9ba] in release-1.18. > Upgrade commons-compress to 1.26.0 > -- > > Key: FLINK-34955 > URL: https://issues.apache.org/jira/browse/FLINK-34955 > Project: Flink > Issue Type: Improvement >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > Fix For: 1.18.2, 1.20.0, 1.19.1 > > > commons-compress 1.24.0 has CVE issues, try to upgrade to 1.26.0, we can > refer to the maven link > https://mvnrepository.com/artifact/org.apache.commons/commons-compress -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [BP-1.18][FLINK-34955] Upgrade commons-compress to 1.26.0. [flink]
mbalassi merged PR #24608: URL: https://github.com/apache/flink/pull/24608 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [BP-1.19][FLINK-34955] Upgrade commons-compress to 1.26.0. [flink]
mbalassi merged PR #24609: URL: https://github.com/apache/flink/pull/24609 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [BP-1.19][FLINK-34955] Upgrade commons-compress to 1.26.0. [flink]
mbalassi commented on PR #24609: URL: https://github.com/apache/flink/pull/24609#issuecomment-2034559870 The Azure CI [run](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58706&view=results) is marked as pending even though it has succeeded. Merging. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [BP-1.18][FLINK-34955] Upgrade commons-compress to 1.26.0. [flink]
mbalassi commented on PR #24608: URL: https://github.com/apache/flink/pull/24608#issuecomment-2034554168 The Azure CI [run](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58705&view=results) is marked as pending even though it has succeeded. Merging. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
MartijnVisser commented on PR #89: URL: https://github.com/apache/flink-connector-kafka/pull/89#issuecomment-2034536889 > There are existing tests covering this class and exactly this line of code and I also verified it pass. I'm happy to add more tests to ensure the user code class loader is being used. I don't think that's correct, because if there were existing tests I would have expected those to fail because of your change. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-35001) Avoid scientific notation for DOUBLE to STRING
[ https://issues.apache.org/jira/browse/FLINK-35001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833551#comment-17833551 ] Timo Walther commented on FLINK-35001: -- Since functions in CompiledPlan are versioned, we could introduce this in a backwards-compatible way if necessary. > Avoid scientific notation for DOUBLE to STRING > -- > > Key: FLINK-35001 > URL: https://issues.apache.org/jira/browse/FLINK-35001 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Runtime >Reporter: Timo Walther >Priority: Major > > Flink currently uses Java semantics for some casts. > When executing: > {code} > SELECT CAST(CAST('19586232024.0' AS DOUBLE) AS STRING); > {code} > Leads to > {code} > 1.9586232024E10 > {code} > However, other vendors such as Postgres or MySQL return {{19586232024}}. > We should reconsider this behavior for consistency. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35001) Avoid scientific notation for DOUBLE to STRING
Timo Walther created FLINK-35001: Summary: Avoid scientific notation for DOUBLE to STRING Key: FLINK-35001 URL: https://issues.apache.org/jira/browse/FLINK-35001 Project: Flink Issue Type: Sub-task Components: Table SQL / Runtime Reporter: Timo Walther Flink currently uses Java semantics for some casts. When executing: {code} SELECT CAST(CAST('19586232024.0' AS DOUBLE) AS STRING); {code} Leads to {code} 1.9586232024E10 {code} However, other vendors such as Postgres or MySQL return {{19586232024}}. We should reconsider this behavior for consistency. -- This message was sent by Atlassian Jira (v8.20.10#820010)
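For context, the difference is easy to reproduce in plain Java. Below is a minimal sketch that contrasts the JVM's `Double.toString` semantics (the source of the scientific notation above) with a plain-notation rendering via `BigDecimal`; it is only an illustration, not Flink's actual cast implementation:
{code:java}
import java.math.BigDecimal;

public class DoubleToStringDemo {
    public static void main(String[] args) {
        double d = Double.parseDouble("19586232024.0");

        // Java semantics: scientific notation, as Flink currently emits.
        System.out.println(Double.toString(d)); // 1.9586232024E10

        // Plain notation, matching what Postgres/MySQL return for this value.
        System.out.println(BigDecimal.valueOf(d).stripTrailingZeros().toPlainString()); // 19586232024
    }
}
{code}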
[jira] [Updated] (FLINK-34961) GitHub Actions runner statistics can be monitored per workflow name
[ https://issues.apache.org/jira/browse/FLINK-34961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34961: --- Labels: pull-request-available starter (was: starter) > GitHub Actions runner statistics can be monitored per workflow name > -- > > Key: FLINK-34961 > URL: https://issues.apache.org/jira/browse/FLINK-34961 > Project: Flink > Issue Type: Improvement > Components: Build System / CI >Reporter: Matthias Pohl >Priority: Major > Labels: pull-request-available, starter > > Apache Infra allows the monitoring of runner usage per workflow (see [report > for > Flink|https://infra-reports.apache.org/#ghactions&project=flink&hours=168&limit=10]; > only accessible with Apache committer rights). They accumulate the data by > workflow name. The Flink space has multiple repositories that use the generic > workflow name {{CI}}. That makes the differentiation in the report harder. > This Jira issue is about identifying all Flink-related projects with a CI > workflow (Kubernetes operator and the JDBC connector were identified, for > instance) and adding a more distinct name. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] [FLINK-34961] Use dedicated CI name for Elasticsearch connector to differentiate it in infra-reports [flink-connector-elasticsearch]
snuyanzin opened a new pull request, #97: URL: https://github.com/apache/flink-connector-elasticsearch/pull/97 The PR makes it possible to differentiate the Elasticsearch connector statistics from those of other workflows named ci -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] [FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp opened a new pull request, #24617: URL: https://github.com/apache/flink/pull/24617 1.18 backport PR for parent PR #24615 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] [FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp opened a new pull request, #24616: URL: https://github.com/apache/flink/pull/24616 1.19 backport for parent PR #24615 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-35000) PullRequest template doesn't use the correct format to refer to the testing code convention
[ https://issues.apache.org/jira/browse/FLINK-35000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-35000: --- Labels: pull-request-available (was: ) > PullRequest template doesn't use the correct format to refer to the testing > code convention > --- > > Key: FLINK-35000 > URL: https://issues.apache.org/jira/browse/FLINK-35000 > Project: Flink > Issue Type: Bug > Components: Build System / CI, Project Website >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Assignee: Matthias Pohl >Priority: Minor > Labels: pull-request-available > > The PR template refers to > https://flink.apache.org/contributing/code-style-and-quality-common.html#testing > rather than > https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] [FLINK-35000][build] Updates link to test code convention in pull request template [flink]
XComp opened a new pull request, #24615: URL: https://github.com/apache/flink/pull/24615 ## What is the purpose of the change The website update changed the anchor format causing the PullRequest template link to not point to the right location. ## Brief change log * Updates link in PR template ## Verifying this change This change is a trivial rework / code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-34999) PR CI stopped operating
[ https://issues.apache.org/jira/browse/FLINK-34999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-34999: -- Description: There are no [new PR CI runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] being picked up anymore. [Recently updated PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not picked up by the @flinkbot. In the meantime there was a notification sent from GitHub that the password of the [@flinkbot|https://github.com/flinkbot] was reset for security reasons. It's quite likely that these two events are related. was: There are no [new PR CI runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] being picked up anymore. [Recently updated PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not picked up by the @flinkbot. In the meantime there was a notification sent from GitHub that the password of the @flinkbot was reset for security reasons. It's quite likely that these two events are related. > PR CI stopped operating > --- > > Key: FLINK-34999 > URL: https://issues.apache.org/jira/browse/FLINK-34999 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Priority: Blocker > > There are no [new PR CI > runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] > being picked up anymore. [Recently updated > PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not > picked up by the @flinkbot. > In the meantime there was a notification sent from GitHub that the password > of the [@flinkbot|https://github.com/flinkbot] was reset for security > reasons. It's quite likely that these two events are related. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-35000) PullRequest template doesn't use the correct format to refer to the testing code convention
Matthias Pohl created FLINK-35000: - Summary: PullRequest template doesn't use the correct format to refer to the testing code convention Key: FLINK-35000 URL: https://issues.apache.org/jira/browse/FLINK-35000 Project: Flink Issue Type: Bug Components: Build System / CI, Project Website Affects Versions: 1.18.1, 1.19.0, 1.20.0 Reporter: Matthias Pohl The PR template refers to https://flink.apache.org/contributing/code-style-and-quality-common.html#testing rather than https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (FLINK-35000) PullRequest template doesn't use the correct format to refer to the testing code convention
[ https://issues.apache.org/jira/browse/FLINK-35000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl reassigned FLINK-35000: - Assignee: Matthias Pohl > PullRequest template doesn't use the correct format to refer to the testing > code convention > --- > > Key: FLINK-35000 > URL: https://issues.apache.org/jira/browse/FLINK-35000 > Project: Flink > Issue Type: Bug > Components: Build System / CI, Project Website >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Assignee: Matthias Pohl >Priority: Minor > > The PR template refers to > https://flink.apache.org/contributing/code-style-and-quality-common.html#testing > rather than > https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#7-testing -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34999) PR CI stopped operating
[ https://issues.apache.org/jira/browse/FLINK-34999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833523#comment-17833523 ] Matthias Pohl commented on FLINK-34999: --- CC [~uce] [~Weijie Guo] [~fanrui] [~rmetzger] CC [~jingge] since it might be Ververica infrastructure-related > PR CI stopped operating > --- > > Key: FLINK-34999 > URL: https://issues.apache.org/jira/browse/FLINK-34999 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.19.0, 1.18.1, 1.20.0 >Reporter: Matthias Pohl >Priority: Blocker > > There are no [new PR CI > runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] > being picked up anymore. [Recently updated > PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not > picked up by the @flinkbot. > In the meantime there was a notification sent from GitHub that the password > of the @flinkbot was reset for security reasons. It's quite likely that these > two events are related. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34999) PR CI stopped operating
Matthias Pohl created FLINK-34999: - Summary: PR CI stopped operating Key: FLINK-34999 URL: https://issues.apache.org/jira/browse/FLINK-34999 Project: Flink Issue Type: Bug Components: Build System / CI Affects Versions: 1.18.1, 1.19.0, 1.20.0 Reporter: Matthias Pohl There are no [new PR CI runs|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=2] being picked up anymore. [Recently updated PRs|https://github.com/apache/flink/pulls?q=sort%3Aupdated-desc] are not picked up by the @flinkbot. In the meantime there was a notification sent from GitHub that the password of the @flinkbot was reset for security reasons. It's quite likely that these two events are related. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833505#comment-17833505 ] Matthias Pohl commented on FLINK-34997: --- The issue seems to be that {{docker-compose}} binaries are missing in the Azure VMs. > PyFlink YARN per-job on Docker test failed on azure > --- > > Key: FLINK-34997 > URL: https://issues.apache.org/jira/browse/FLINK-34997 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Blocker > Labels: test-stability > > {code} > Apr 03 03:12:37 > == > Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' > Apr 03 03:12:37 > == > Apr 03 03:12:37 TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 > Apr 03 03:12:37 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Docker version 24.0.9, build 2936816 > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: > line 24: docker-compose: command not found > Apr 03 03:12:38 [FAIL] Test script contains errors. > Apr 03 03:12:38 Checking of logs skipped. > Apr 03 03:12:38 > Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 > minutes and 1 seconds! Test exited with exit code 1 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-34997: -- Labels: test-stability (was: ) > PyFlink YARN per-job on Docker test failed on azure > --- > > Key: FLINK-34997 > URL: https://issues.apache.org/jira/browse/FLINK-34997 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Major > Labels: test-stability > > {code} > Apr 03 03:12:37 > == > Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' > Apr 03 03:12:37 > == > Apr 03 03:12:37 TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 > Apr 03 03:12:37 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Docker version 24.0.9, build 2936816 > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: > line 24: docker-compose: command not found > Apr 03 03:12:38 [FAIL] Test script contains errors. > Apr 03 03:12:38 Checking of logs skipped. > Apr 03 03:12:38 > Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 > minutes and 1 seconds! Test exited with exit code 1 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34998) Wordcount on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833504#comment-17833504 ] Matthias Pohl commented on FLINK-34998: --- I guess this one is a duplicate of FLINK-34997. In the end, the error happens due to the missing {{docker-compose}} binaries in the Azure VMs. WDYT? > Wordcount on Docker test failed on azure > > > Key: FLINK-34998 > URL: https://issues.apache.org/jira/browse/FLINK-34998 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Major > > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: > line 65: docker-compose: command not found > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: > line 66: docker-compose: command not found > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: > line 67: docker-compose: command not found > sort: cannot read: > '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*': > No such file or directory > Apr 03 02:08:14 FAIL WordCount: Output hash mismatch. Got > d41d8cd98f00b204e9800998ecf8427e, expected 0e5bd0a3dd7d5a7110aa85ff70adb54b. > Apr 03 02:08:14 head hexdump of actual: > head: cannot open > '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*' > for reading: No such file or directory > Apr 03 02:08:14 Stopping job timeout watchdog (with pid=244913) > Apr 03 02:08:14 [FAIL] Test script contains errors. > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=5d91035e-8022-55f2-2d4f-ab121508bf7e&l=6043 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-34997: -- Description: {code} Apr 03 03:12:37 == Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' Apr 03 03:12:37 == Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Docker version 24.0.9, build 2936816 /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found Apr 03 03:12:38 [FAIL] Test script contains errors. Apr 03 03:12:38 Checking of logs skipped. Apr 03 03:12:38 Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1 {code} https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 was: Apr 03 03:12:37 == Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' Apr 03 03:12:37 == Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Docker version 24.0.9, build 2936816 /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found Apr 03 03:12:38 [FAIL] Test script contains errors. Apr 03 03:12:38 Checking of logs skipped. Apr 03 03:12:38 Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 > PyFlink YARN per-job on Docker test failed on azure > --- > > Key: FLINK-34997 > URL: https://issues.apache.org/jira/browse/FLINK-34997 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Major > > {code} > Apr 03 03:12:37 > == > Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' > Apr 03 03:12:37 > == > Apr 03 03:12:37 TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 > Apr 03 03:12:37 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Docker version 24.0.9, build 2936816 > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: > line 24: docker-compose: command not found > Apr 03 03:12:38 [FAIL] Test script contains errors. > Apr 03 03:12:38 Checking of logs skipped. > Apr 03 03:12:38 > Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 > minutes and 1 seconds! 
Test exited with exit code 1 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthias Pohl updated FLINK-34997: -- Priority: Blocker (was: Major) > PyFlink YARN per-job on Docker test failed on azure > --- > > Key: FLINK-34997 > URL: https://issues.apache.org/jira/browse/FLINK-34997 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Blocker > Labels: test-stability > > {code} > Apr 03 03:12:37 > == > Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' > Apr 03 03:12:37 > == > Apr 03 03:12:37 TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 > Apr 03 03:12:37 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Docker version 24.0.9, build 2936816 > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: > line 24: docker-compose: command not found > Apr 03 03:12:38 [FAIL] Test script contains errors. > Apr 03 03:12:38 Checking of logs skipped. > Apr 03 03:12:38 > Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 > minutes and 1 seconds! Test exited with exit code 1 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34960) NullPointerException while applying parallelism overrides for session jobs
[ https://issues.apache.org/jira/browse/FLINK-34960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kunal Rohitas updated FLINK-34960: -- Description: While using the autoscaler for session jobs, the operator throws a NullPointerException while trying to apply parallelism overrides, though it is able to generate a parallelism suggestion report for scaling. The versions used here are flink-1.18.1 and flink-kubernetes-operator-1.8.0.
{code:java}
2024-03-26 08:41:21,617 o.a.f.a.JobAutoScalerImpl [ERROR][default/clientsession-job] Error applying overrides.
java.lang.NullPointerException
    at org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:52)
    at org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:40)
    at org.apache.flink.autoscaler.JobAutoScalerImpl.applyParallelismOverrides(JobAutoScalerImpl.java:161)
    at org.apache.flink.autoscaler.JobAutoScalerImpl.scale(JobAutoScalerImpl.java:111)
    at org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.applyAutoscaler(AbstractFlinkResourceReconciler.java:192)
    at org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.reconcile(AbstractFlinkResourceReconciler.java:139)
    at org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:116)
    at org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:53)
    at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:152)
    at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:110)
    at org.apache.flink.kubernetes.operator.metrics.OperatorJosdkMetrics.timeControllerExecution(OperatorJosdkMetrics.java:80)
    at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:109)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:140)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
    at io.javaoperatorsdk.operator.processing.event.EventProcessor$ReconcilerExecutor.run(EventProcessor.java:417)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source){code}
{code:java}
2024-03-26 08:41:21,617 o.a.f.a.JobAutoScalerImpl [ERROR][default/clientsession-job] Error while scaling job
java.lang.NullPointerException
    at org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:52)
    at org.apache.flink.kubernetes.operator.autoscaler.KubernetesScalingRealizer.realizeParallelismOverrides(KubernetesScalingRealizer.java:40)
    at org.apache.flink.autoscaler.JobAutoScalerImpl.applyParallelismOverrides(JobAutoScalerImpl.java:161)
    at org.apache.flink.autoscaler.JobAutoScalerImpl.scale(JobAutoScalerImpl.java:111)
    at org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.applyAutoscaler(AbstractFlinkResourceReconciler.java:192)
    at org.apache.flink.kubernetes.operator.reconciler.deployment.AbstractFlinkResourceReconciler.reconcile(AbstractFlinkResourceReconciler.java:139)
    at org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:116)
    at org.apache.flink.kubernetes.operator.controller.FlinkSessionJobController.reconcile(FlinkSessionJobController.java:53)
    at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:152)
    at io.javaoperatorsdk.operator.processing.Controller$1.execute(Controller.java:110)
    at org.apache.flink.kubernetes.operator.metrics.OperatorJosdkMetrics.timeControllerExecution(OperatorJosdkMetrics.java:80)
    at io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:109)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:140)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:121)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:91)
    at io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:64)
    at io.javaoperatorsdk.operat
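For readers of the stack trace above: the NPE is thrown inside realizeParallelismOverrides when a session job reaches the realizer. Below is a minimal sketch of the kind of defensive guard that would avoid such a failure, assuming the overrides end up in the resource's flinkConfiguration map; the method shape and names are hypothetical, not the operator's actual code:
{code:java}
import java.util.Map;
import java.util.stream.Collectors;

final class OverrideGuardSketch {
    // Hypothetical helper: skip realization when there is no config map to
    // write into or no overrides to apply (as may happen for session jobs).
    static void applyOverrides(Map<String, String> flinkConf, Map<String, String> overrides) {
        if (flinkConf == null || overrides == null || overrides.isEmpty()) {
            return;
        }
        String encoded = overrides.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .collect(Collectors.joining(","));
        // "pipeline.jobvertex-parallelism-overrides" is Flink's option key for
        // per-vertex parallelism overrides.
        flinkConf.put("pipeline.jobvertex-parallelism-overrides", encoded);
    }
}
{code}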
[PR] F34986 [flink]
Zakelly opened a new pull request, #24614: URL: https://github.com/apache/flink/pull/24614 ## What is the purpose of the change This PR ships the core part of FLIP-425, including the basic execution logic of the AsyncExecutionController. Note: This PR is based on #24597, which is still under review. ## Brief change log - AsyncExecutionController and other components around it. - RecordContext and reference counting mechanism. - Basic implementation of KeyAccountingUnit. ## Verifying this change - Added unit tests under `org.apache.flink.runtime.asyncprocessing` ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): yes (introduces new ones) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? yes - If yes, how is the feature documented? not documented -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
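To make the reference-counting mechanism mentioned in the change log concrete, here is a toy sketch of a per-record context that is retained by each in-flight async operation and releases its resources once the count drops to zero; the class and method names are illustrative only, not the PR's actual API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: each incoming record gets a context; async state requests
// retain it, and the final release triggers cleanup (e.g. freeing the key
// so that blocked records with the same key may proceed).
final class RecordContextSketch {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private final Runnable onFullyProcessed;

    RecordContextSketch(Runnable onFullyProcessed) {
        this.onFullyProcessed = onFullyProcessed;
    }

    void retain() {
        refCount.incrementAndGet();
    }

    void release() {
        if (refCount.decrementAndGet() == 0) {
            onFullyProcessed.run();
        }
    }
}
```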
[jira] [Resolved] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression
[ https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Nuyanzin resolved FLINK-28693. - Fix Version/s: 1.20.0 Resolution: Fixed > Codegen failed if the watermark is defined on a columnByExpression > -- > > Key: FLINK-28693 > URL: https://issues.apache.org/jira/browse/FLINK-28693 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.15.1 >Reporter: Hongbo >Assignee: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 1.20.0 > > > The following code will throw an exception: > > {code:java} > Table program cannot be compiled. This is a bug. Please file an issue. > ... > Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column > 54: Cannot determine simple type name "org" {code} > Code: > {code:java} > public class TestUdf extends ScalarFunction { > @DataTypeHint("TIMESTAMP(3)") > public LocalDateTime eval(String strDate) { >return LocalDateTime.now(); > } > } > public class FlinkTest { > @Test > void testUdf() throws Exception { > //var env = StreamExecutionEnvironment.createLocalEnvironment(); > // run `gradlew shadowJar` first to generate the uber jar. > // It contains the kafka connector and a dummy UDF function. > var env = > StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081, > "build/libs/flink-test-all.jar"); > env.setParallelism(1); > var tableEnv = StreamTableEnvironment.create(env); > tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class); > var testTable = tableEnv.from(TableDescriptor.forConnector("kafka") > .schema(Schema.newBuilder() > .column("time_stamp", DataTypes.STRING()) > .columnByExpression("udf_ts", "TEST_UDF(time_stamp)") > .watermark("udf_ts", "udf_ts - INTERVAL '1' second") > .build()) > // the kafka server doesn't need to exist. It fails in the > compile stage before fetching data. > .option("properties.bootstrap.servers", "localhost:9092") > .option("topic", "test_topic") > .option("format", "json") > .option("scan.startup.mode", "latest-offset") > .build()); > testTable.printSchema(); > tableEnv.createTemporaryView("test", testTable ); > var query = tableEnv.sqlQuery("select * from test"); > var tableResult = > query.executeInsert(TableDescriptor.forConnector("print").build()); > tableResult.await(); > } > }{code} > What does the code do? > # read a stream from Kafka > # create a derived column using a UDF expression > # define the watermark based on the derived column > The full callstack: > > {code:java} > org.apache.flink.util.FlinkRuntimeException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue. 
> at > org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.proce
[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression
[ https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833489#comment-17833489 ] Sergey Nuyanzin commented on FLINK-28693: - Merged as [0acf92f1c8a90dcb3eb2c1038c1cda3344b7b988|https://github.com/apache/flink/commit/0acf92f1c8a90dcb3eb2c1038c1cda3344b7b988] > Codegen failed if the watermark is defined on a columnByExpression > -- > > Key: FLINK-28693 > URL: https://issues.apache.org/jira/browse/FLINK-28693 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.15.1 >Reporter: Hongbo >Assignee: xuyang >Priority: Major > Labels: pull-request-available > Fix For: 1.20.0 > > > The following code will throw an exception: > > {code:java} > Table program cannot be compiled. This is a bug. Please file an issue. > ... > Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column > 54: Cannot determine simple type name "org" {code} > {color:#00}Code:{color} > {code:java} > public class TestUdf extends ScalarFunction { > @DataTypeHint("TIMESTAMP(3)") > public LocalDateTime eval(String strDate) { >return LocalDateTime.now(); > } > } > public class FlinkTest { > @Test > void testUdf() throws Exception { > //var env = StreamExecutionEnvironment.createLocalEnvironment(); > // run `gradlew shadowJar` first to generate the uber jar. > // It contains the kafka connector and a dummy UDF function. > var env = > StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081, > "build/libs/flink-test-all.jar"); > env.setParallelism(1); > var tableEnv = StreamTableEnvironment.create(env); > tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class); > var testTable = tableEnv.from(TableDescriptor.forConnector("kafka") > .schema(Schema.newBuilder() > .column("time_stamp", DataTypes.STRING()) > .columnByExpression("udf_ts", "TEST_UDF(time_stamp)") > .watermark("udf_ts", "udf_ts - INTERVAL '1' second") > .build()) > // the kafka server doesn't need to exist. It fails in the > compile stage before fetching data. > .option("properties.bootstrap.servers", "localhost:9092") > .option("topic", "test_topic") > .option("format", "json") > .option("scan.startup.mode", "latest-offset") > .build()); > testTable.printSchema(); > tableEnv.createTemporaryView("test", testTable ); > var query = tableEnv.sqlQuery("select * from test"); > var tableResult = > query.executeInsert(TableDescriptor.forConnector("print").build()); > tableResult.await(); > } > }{code} > What does the code do? > # read a stream from Kafka > # create a derived column using a UDF expression > # define the watermark based on the derived column > The full callstack: > > {code:java} > org.apache.flink.util.FlinkRuntimeException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue.
> at > org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInpu
[jira] [Assigned] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression
[ https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Nuyanzin reassigned FLINK-28693: --- Assignee: xuyang > Codegen failed if the watermark is defined on a columnByExpression > -- > > Key: FLINK-28693 > URL: https://issues.apache.org/jira/browse/FLINK-28693 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.15.1 >Reporter: Hongbo >Assignee: xuyang >Priority: Major > Labels: pull-request-available > > The following code will throw an exception: > > {code:java} > Table program cannot be compiled. This is a bug. Please file an issue. > ... > Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column > 54: Cannot determine simple type name "org" {code} > {color:#00}Code:{color} > {code:java} > public class TestUdf extends ScalarFunction { > @DataTypeHint("TIMESTAMP(3)") > public LocalDateTime eval(String strDate) { >return LocalDateTime.now(); > } > } > public class FlinkTest { > @Test > void testUdf() throws Exception { > //var env = StreamExecutionEnvironment.createLocalEnvironment(); > // run `gradlew shadowJar` first to generate the uber jar. > // It contains the kafka connector and a dummy UDF function. > var env = > StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081, > "build/libs/flink-test-all.jar"); > env.setParallelism(1); > var tableEnv = StreamTableEnvironment.create(env); > tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class); > var testTable = tableEnv.from(TableDescriptor.forConnector("kafka") > .schema(Schema.newBuilder() > .column("time_stamp", DataTypes.STRING()) > .columnByExpression("udf_ts", "TEST_UDF(time_stamp)") > .watermark("udf_ts", "udf_ts - INTERVAL '1' second") > .build()) > // the kafka server doesn't need to exist. It fails in the > compile stage before fetching data. > .option("properties.bootstrap.servers", "localhost:9092") > .option("topic", "test_topic") > .option("format", "json") > .option("scan.startup.mode", "latest-offset") > .build()); > testTable.printSchema(); > tableEnv.createTemporaryView("test", testTable ); > var query = tableEnv.sqlQuery("select * from test"); > var tableResult = > query.executeInsert(TableDescriptor.forConnector("print").build()); > tableResult.await(); > } > }{code} > What does the code do? > # read a stream from Kafka > # create a derived column using a UDF expression > # define the watermark based on the derived column > The full callstack: > > {code:java} > org.apache.flink.util.FlinkRuntimeException: > org.apache.flink.api.common.InvalidProgramException: Table program cannot be > compiled. This is a bug. Please file an issue.
> at > org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62) > ~[flink-table-runtime-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) > ~[flink-dist-1.15.1.jar:1.15.1] > at > org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519) > ~[flink-dist-1.15.1.jar:1
Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]
snuyanzin merged PR #24280: URL: https://github.com/apache/flink/pull/24280 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]
snuyanzin commented on PR #24280: URL: https://github.com/apache/flink/pull/24280#issuecomment-2034173734 that is also ok with me -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] [FLINK-25537][JUnit5 Migration] Module: flink-core with,Package: types [flink]
GOODBOY008 opened a new pull request, #24613: URL: https://github.com/apache/flink/pull/24613 Changes: - Migrate module flink-core, package `types`, to JUnit 5 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
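For context, these migration PRs typically replace JUnit 4 annotations and assertions with JUnit 5 plus AssertJ. A purely illustrative example of the post-migration style for a test in the `types` package (this exact test is hypothetical, not taken from the PR):

{code:java}
import org.apache.flink.types.StringValue;

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * Hypothetical post-migration test: org.junit.Test and Assert.assertEquals
 * become org.junit.jupiter.api.Test and AssertJ's assertThat.
 */
class StringValueMigrationExampleTest {

    @Test
    void valueRoundTrip() {
        StringValue value = new StringValue("flink");
        assertThat(value.getValue()).isEqualTo("flink");
    }
}
{code}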
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
hugogu commented on PR #89: URL: https://github.com/apache/flink-connector-kafka/pull/89#issuecomment-2034169137 @MartijnVisser Thanks for the quick response. There are existing tests covering this class and exactly this line of code, and I verified that they pass. I'm happy to add more tests to ensure the user code class loader is being used. (Screenshot: https://github.com/apache/flink-connector-kafka/assets/2989766/f0dbbe5c-5bf4-4d22-8943-b301bca420e4) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34162][table] Migrate LogicalUnnestRule to java [flink]
snuyanzin commented on code in PR #24144: URL: https://github.com/apache/flink/pull/24144#discussion_r1549420987 ## flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/logical/LogicalUnnestRule.java: ## @@ -0,0 +1,180 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.plan.rules.logical; + +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.planner.calcite.FlinkTypeFactory; +import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction; +import org.apache.flink.table.planner.utils.ShortcutUtils; +import org.apache.flink.table.runtime.functions.table.UnnestRowsFunction; +import org.apache.flink.table.types.logical.LogicalType; + +import org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableList; Review Comment: I'm going to rebase the PR to be on the safe side and be sure that the tests are passing -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34162][table] Migrate LogicalUnnestRule to java [flink]
snuyanzin commented on code in PR #24144: URL: https://github.com/apache/flink/pull/24144#discussion_r1549420099 ## flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/rules/logical/LogicalUnnestRule.java: ## @@ -0,0 +1,180 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.table.planner.plan.rules.logical; + +import org.apache.flink.table.functions.BuiltInFunctionDefinitions; +import org.apache.flink.table.planner.calcite.FlinkTypeFactory; +import org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction; +import org.apache.flink.table.planner.utils.ShortcutUtils; +import org.apache.flink.table.runtime.functions.table.UnnestRowsFunction; +import org.apache.flink.table.types.logical.LogicalType; + +import org.apache.flink.shaded.guava32.com.google.common.collect.ImmutableList; + +import org.apache.calcite.plan.RelOptCluster; +import org.apache.calcite.plan.RelOptRuleCall; +import org.apache.calcite.plan.RelRule; +import org.apache.calcite.plan.hep.HepRelVertex; +import org.apache.calcite.rel.RelNode; +import org.apache.calcite.rel.core.Correlate; +import org.apache.calcite.rel.core.Uncollect; +import org.apache.calcite.rel.logical.LogicalCorrelate; +import org.apache.calcite.rel.logical.LogicalFilter; +import org.apache.calcite.rel.logical.LogicalProject; +import org.apache.calcite.rel.logical.LogicalTableFunctionScan; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rex.RexNode; +import org.immutables.value.Value; + +import java.util.Collections; +import java.util.Map; + +import static org.apache.flink.table.types.logical.utils.LogicalTypeUtils.toRowType; + +/** + * Planner rule that rewrites UNNEST to explode function. + * + * Note: This class can only be used in HepPlanner. 
+ */ +@Value.Enclosing +public class LogicalUnnestRule extends RelRule { + +public static final LogicalUnnestRule INSTANCE = LogicalUnnestRuleConfig.DEFAULT.toRule(); + +public LogicalUnnestRule(LogicalUnnestRule.LogicalUnnestRuleConfig config) { +super(config); +} + +public boolean matches(RelOptRuleCall call) { +LogicalCorrelate join = call.rel(0); +RelNode right = getRel(join.getRight()); +if (right instanceof LogicalFilter) { +LogicalFilter logicalFilter = (LogicalFilter) right; +RelNode relNode = getRel(logicalFilter.getInput()); +if (relNode instanceof Uncollect) { +return !((Uncollect) relNode).withOrdinality; +} else if (relNode instanceof LogicalProject) { +LogicalProject logicalProject = (LogicalProject) relNode; +relNode = getRel(logicalProject.getInput()); +if (relNode instanceof Uncollect) { +return !((Uncollect) relNode).withOrdinality; +} +return false; +} +} else if (right instanceof LogicalProject) { +LogicalProject logicalProject = (LogicalProject) right; +RelNode relNode = getRel(logicalProject.getInput()); +if (relNode instanceof Uncollect) { +Uncollect uncollect = (Uncollect) relNode; +return !uncollect.withOrdinality; +} +return false; +} else if (right instanceof Uncollect) { +Uncollect uncollect = (Uncollect) right; +return !uncollect.withOrdinality; +} +return false; +} + +public void onMatch(RelOptRuleCall call) { +LogicalCorrelate correlate = call.rel(0); +RelNode outer = getRel(correlate.getLeft()); +RelNode array = getRel(correlate.getRight()); + +// convert unnest into table function scan +RelNode tableFunctionScan = convert(array, correlate); +// create correlate with table function scan as input +Correlate newCorrelate = +correlate.copy(correlate.getTraitSet(), ImmutableList.of(outer, tableFunctionScan)); +call.transformTo(newCorrelate); +} + +private RelNode convert(RelNode relNode, LogicalCorrelate correlate) { +if (relNode instanceof HepRelVertex) { +
[jira] [Assigned] (FLINK-34996) Custom Deserializer can't be instantiated when connector-kafka installed into Flink Libs
[ https://issues.apache.org/jira/browse/FLINK-34996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-34996: - Assignee: Hugo Gu > Custom Deserializer can't be instantiated when connector-kafka installed into > Flink Libs > > > Key: FLINK-34996 > URL: https://issues.apache.org/jira/browse/FLINK-34996 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Hugo Gu >Assignee: Hugo Gu >Priority: Minor > Labels: pull-request-available > Attachments: image-2024-04-03-17-34-00-120.png, > image-2024-04-03-17-37-55-105.png > > > The current implementation of the KafkaValueOnlyDeserializerWrapper class instantiates the Deserializer from the ClassLoader of KafkaValueOnlyDeserializerWrapper itself, as the following figure shows. > > !image-2024-04-03-17-34-00-120.png|width=799,height=293! > > If both of the following conditions are met: > 1. connector-kafka is installed into Flink's libs (rather than into the user jar) > 2. the user jar defines a customized Deserializer for Kafka records > > then the instantiation of the custom deserializer fails with a NoClassFound exception, because the class is indeed not available in the system class loader. > > As the following figure illustrates: > > !image-2024-04-03-17-37-55-105.png|width=413,height=452! > > It can be fixed by using either the UserCodeClassLoader or the ClassLoader of the current Thread. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [cdc-connector][cdc-base] Shade guava31 to avoid dependency conflict with flink below 1.18 [flink-cdc]
PatrickRen commented on code in PR #3083: URL: https://github.com/apache/flink-cdc/pull/3083#discussion_r1549418772 ## pom.xml: ## @@ -462,8 +462,15 @@ under the License. submodules, ${flink.version} will be resolved as the actual Flink version. --> org.apache.flink:flink-shaded-force-shading + org.apache.flink:flink-shaded-guava + + +flink.shaded.guava Review Comment: I'm not an expert on `maven-shade-plugin`, but I assume that the pattern should start with `org.apache.flink`. Please correct me if I'm wrong. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34996][Connectors/Kafka] Use UserCodeCL to instantiate Deserializer [flink-connector-kafka]
boring-cyborg[bot] commented on PR #89: URL: https://github.com/apache/flink-connector-kafka/pull/89#issuecomment-2034157470 Thanks for opening this pull request! Please check out our contributing guidelines. (https://flink.apache.org/contributing/how-to-contribute.html) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-34996) Custom Deserializer can't be instantiated when connector-kafka installed into Flink Libs
[ https://issues.apache.org/jira/browse/FLINK-34996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-34996: --- Labels: pull-request-available (was: ) > Custom Deserializer can't be instantiated when connector-kafka installed into > Flink Libs > > > Key: FLINK-34996 > URL: https://issues.apache.org/jira/browse/FLINK-34996 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Hugo Gu >Priority: Minor > Labels: pull-request-available > Attachments: image-2024-04-03-17-34-00-120.png, > image-2024-04-03-17-37-55-105.png > > > The current implementation of the KafkaValueOnlyDeserializerWrapper class instantiates the Deserializer from the ClassLoader of KafkaValueOnlyDeserializerWrapper itself, as the following figure shows. > > !image-2024-04-03-17-34-00-120.png|width=799,height=293! > > If both of the following conditions are met: > 1. connector-kafka is installed into Flink's libs (rather than into the user jar) > 2. the user jar defines a customized Deserializer for Kafka records > > then the instantiation of the custom deserializer fails with a NoClassFound exception, because the class is indeed not available in the system class loader. > > As the following figure illustrates: > > !image-2024-04-03-17-37-55-105.png|width=413,height=452! > > It can be fixed by using either the UserCodeClassLoader or the ClassLoader of the current Thread. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34998) Wordcount on Docker test failed on azure
Weijie Guo created FLINK-34998: -- Summary: Wordcount on Docker test failed on azure Key: FLINK-34998 URL: https://issues.apache.org/jira/browse/FLINK-34998 Project: Flink Issue Type: Bug Components: Build System / CI Affects Versions: 1.20.0 Reporter: Weijie Guo /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 65: docker-compose: command not found /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 66: docker-compose: command not found /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_docker_embedded_job.sh: line 67: docker-compose: command not found sort: cannot read: '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*': No such file or directory Apr 03 02:08:14 FAIL WordCount: Output hash mismatch. Got d41d8cd98f00b204e9800998ecf8427e, expected 0e5bd0a3dd7d5a7110aa85ff70adb54b. Apr 03 02:08:14 head hexdump of actual: head: cannot open '/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-24250435151/out/docker_wc_out*' for reading: No such file or directory Apr 03 02:08:14 Stopping job timeout watchdog (with pid=244913) Apr 03 02:08:14 [FAIL] Test script contains errors. https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=e9d3d34f-3d15-59f4-0e3e-35067d100dfe&t=5d91035e-8022-55f2-2d4f-ab121508bf7e&l=6043 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-34996) Custom Deserializer can't be instantiated when connector-kafka installed into Flink Libs
[ https://issues.apache.org/jira/browse/FLINK-34996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833479#comment-17833479 ] Qingsheng Ren commented on FLINK-34996: --- [~hugogu] Thanks for the detailed explanation! Would you like to create a PR for this one? > Custom Deserializer can't be instantiated when connector-kafka installed into > Flink Libs > > > Key: FLINK-34996 > URL: https://issues.apache.org/jira/browse/FLINK-34996 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Hugo Gu >Priority: Minor > Attachments: image-2024-04-03-17-34-00-120.png, > image-2024-04-03-17-37-55-105.png > > > The current implementation of the KafkaValueOnlyDeserializerWrapper class instantiates the Deserializer from the ClassLoader of KafkaValueOnlyDeserializerWrapper itself, as the following figure shows. > > !image-2024-04-03-17-34-00-120.png|width=799,height=293! > > If both of the following conditions are met: > 1. connector-kafka is installed into Flink's libs (rather than into the user jar) > 2. the user jar defines a customized Deserializer for Kafka records > > then the instantiation of the custom deserializer fails with a NoClassFound exception, because the class is indeed not available in the system class loader. > > As the following figure illustrates: > > !image-2024-04-03-17-37-55-105.png|width=413,height=452! > > It can be fixed by using either the UserCodeClassLoader or the ClassLoader of the current Thread. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [cdc-connector][db2] Db2 support incremental source [flink-cdc]
leonardBang commented on PR #2870: URL: https://github.com/apache/flink-cdc/pull/2870#issuecomment-2034105438 Thanks @gong for the great work, I'd like to have a final review based on @lvyanquan's review work. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
Weijie Guo created FLINK-34997: -- Summary: PyFlink YARN per-job on Docker test failed on azure Key: FLINK-34997 URL: https://issues.apache.org/jira/browse/FLINK-34997 Project: Flink Issue Type: Bug Components: Build System / CI Affects Versions: 1.20.0 Reporter: Weijie Guo Apr 03 03:12:37 == Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' Apr 03 03:12:37 == Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Docker version 24.0.9, build 2936816 /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found Apr 03 03:12:38 [FAIL] Test script contains errors. Apr 03 03:12:38 Checking of logs skipped. Apr 03 03:12:38 Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34997) PyFlink YARN per-job on Docker test failed on azure
[ https://issues.apache.org/jira/browse/FLINK-34997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weijie Guo updated FLINK-34997: --- Description: Apr 03 03:12:37 == Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' Apr 03 03:12:37 == Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Docker version 24.0.9, build 2936816 /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found Apr 03 03:12:38 [FAIL] Test script contains errors. Apr 03 03:12:38 Checking of logs skipped. Apr 03 03:12:38 Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 was: Apr 03 03:12:37 == Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' Apr 03 03:12:37 == Apr 03 03:12:37 TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 Apr 03 03:12:37 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT Apr 03 03:12:38 Docker version 24.0.9, build 2936816 /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: line 24: docker-compose: command not found Apr 03 03:12:38 [FAIL] Test script contains errors. Apr 03 03:12:38 Checking of logs skipped. Apr 03 03:12:38 Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 minutes and 1 seconds! Test exited with exit code 1 > PyFlink YARN per-job on Docker test failed on azure > --- > > Key: FLINK-34997 > URL: https://issues.apache.org/jira/browse/FLINK-34997 > Project: Flink > Issue Type: Bug > Components: Build System / CI >Affects Versions: 1.20.0 >Reporter: Weijie Guo >Priority: Major > > Apr 03 03:12:37 > == > Apr 03 03:12:37 Running 'PyFlink YARN per-job on Docker test' > Apr 03 03:12:37 > == > Apr 03 03:12:37 TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-37046085202 > Apr 03 03:12:37 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.19-SNAPSHOT-bin/flink-1.19-SNAPSHOT > Apr 03 03:12:38 Docker version 24.0.9, build 2936816 > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common_docker.sh: > line 24: docker-compose: command not found > Apr 03 03:12:38 [FAIL] Test script contains errors. > Apr 03 03:12:38 Checking of logs skipped. > Apr 03 03:12:38 > Apr 03 03:12:38 [FAIL] 'PyFlink YARN per-job on Docker test' failed after 0 > minutes and 1 seconds! Test exited with exit code 1 > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=f8e16326-dc75-5ba0-3e95-6178dd55bf6c&t=94ccd692-49fc-5c64-8775-d427c6e65440&l=10226 -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34653] Support table merging with route [flink-cdc]
leonardBang commented on PR #3129: URL: https://github.com/apache/flink-cdc/pull/3129#issuecomment-2034101079 @PatrickRen Thanks for your contribution! Could you rebase this PR onto the latest master? I also noticed the PR contains a WIP commit; could you resolve it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [FLINK-34648] add waitChangeResultRequest and WaitChangeResultResponse to avoid RPC timeout. [flink-cdc]
leonardBang commented on code in PR #3128: URL: https://github.com/apache/flink-cdc/pull/3128#discussion_r1549367497 ## flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/schema/SchemaOperator.java: ## @@ -111,8 +115,13 @@ private SchemaChangeResponse requestSchemaChange( return sendRequestToCoordinator(new SchemaChangeRequest(tableId, schemaChangeEvent)); } -private ReleaseUpstreamResponse requestReleaseUpstream() { -return sendRequestToCoordinator(new ReleaseUpstreamRequest()); +private void requestReleaseUpstream() throws InterruptedException { +CoordinationResponse coordinationResponse = +sendRequestToCoordinator(new ReleaseUpstreamRequest()); +while (coordinationResponse instanceof SchemaChangeProcessingResponse) { Review Comment: A continuous loop may not be acceptable, as it could lead to endless coordination. Could we introduce a timeout config option with a reasonable default value for the coordination? ## flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/schema/event/SchemaChangeResultRequest.java: ## @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.runtime.operators.schema.event; + +import org.apache.flink.runtime.operators.coordination.CoordinationRequest; + +/** request for get change result. */ Review Comment: Could you improve your Javadoc, taking other classes as a reference? ## flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/schema/SchemaOperator.java: ## @@ -127,4 +136,13 @@ RESPONSE sendRequestToCoordinator(REQUEST request) { "Failed to send request to coordinator: " + request.toString(), e); } } + +@Override +public void initializeState(StateInitializationContext context) throws Exception { +if (context.isRestored()) { +if (getRuntimeContext().getIndexOfThisSubtask() == 0) { Review Comment: I didn't catch this limitation; could you explain it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
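The timeout suggested in the first review comment could look roughly like the sketch below: keep polling the coordinator while it reports that the schema change is still being processed, but give up after a configured deadline. All names here are hypothetical stand-ins, not the actual Flink CDC classes.

{code:java}
import java.util.concurrent.TimeoutException;

/** Sketch of a bounded coordination poll; all names are hypothetical. */
final class BoundedCoordinationPoll {

    /** Stand-in for the RPC to the schema coordinator. */
    interface Coordinator {
        Object requestResult() throws Exception;
    }

    static Object waitForResult(Coordinator coordinator, long timeoutMs, long pollIntervalMs)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        Object response = coordinator.requestResult();
        // e.g. "response instanceof SchemaChangeProcessingResponse" in the PR's terms
        while (isStillProcessing(response)) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException(
                        "Schema change not confirmed within " + timeoutMs + " ms");
            }
            Thread.sleep(pollIntervalMs);
            response = coordinator.requestResult();
        }
        return response;
    }

    private static boolean isStillProcessing(Object response) {
        return "PROCESSING".equals(response); // placeholder check
    }
}
{code}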
[jira] [Assigned] (FLINK-34952) Flink CDC pipeline supports SinkFunction
[ https://issues.apache.org/jira/browse/FLINK-34952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qingsheng Ren reassigned FLINK-34952: - Assignee: Hongshun Wang > Flink CDC pipeline supports SinkFunction > - > > Key: FLINK-34952 > URL: https://issues.apache.org/jira/browse/FLINK-34952 > Project: Flink > Issue Type: Improvement > Components: Flink CDC >Reporter: Hongshun Wang >Assignee: Hongshun Wang >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.1.0 > > > Though the current Flink CDC pipeline defines com.ververica.cdc.common.sink.FlinkSinkFunctionProvider to provide a Flink SinkFunction for writing events to external systems, com.ververica.cdc.runtime.operators.sink.DataSinkWriterOperator doesn't support SinkFunction, which means sinks implementing SinkFunction cannot use the CDC pipeline. > Why not support SinkFunction in the Flink CDC pipeline? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails
[ https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833471#comment-17833471 ] Weijie Guo commented on FLINK-18476: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58709&view=logs&j=298e20ef-7951-5965-0e79-ea664ddc435e&t=d4c90338-c843-57b0-3232-10ae74f00347&l=21764 > PythonEnvUtilsTest#testStartPythonProcess fails > --- > > Key: FLINK-18476 > URL: https://issues.apache.org/jira/browse/FLINK-18476 > Project: Flink > Issue Type: Bug > Components: API / Python, Tests >Affects Versions: 1.11.0, 1.15.3, 1.18.0, 1.19.0, 1.20.0 >Reporter: Dawid Wysakowicz >Priority: Major > Labels: auto-deprioritized-major, auto-deprioritized-minor, > test-stability > > The > {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} > failed in my local environment as it assumes the environment has > {{/usr/bin/python}}. > I don't know exactly how I got python on Ubuntu 20.04, but I only have an > alias for {{python = python3}}. Therefore the test fails. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34952][cdc-composer][sink] Flink CDC pipeline supports SinkFunction [flink-cdc]
PatrickRen commented on code in PR #3204: URL: https://github.com/apache/flink-cdc/pull/3204#discussion_r1549360625 ## flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/sink/DataSinkOperator.java: ## @@ -0,0 +1,131 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.cdc.runtime.operators.sink; + +import org.apache.flink.cdc.common.annotation.Internal; +import org.apache.flink.cdc.common.event.ChangeEvent; +import org.apache.flink.cdc.common.event.CreateTableEvent; +import org.apache.flink.cdc.common.event.Event; +import org.apache.flink.cdc.common.event.FlushEvent; +import org.apache.flink.cdc.common.event.TableId; +import org.apache.flink.cdc.common.schema.Schema; +import org.apache.flink.runtime.jobgraph.OperatorID; +import org.apache.flink.runtime.state.StateInitializationContext; +import org.apache.flink.streaming.api.functions.sink.SinkFunction; +import org.apache.flink.streaming.api.graph.StreamConfig; +import org.apache.flink.streaming.api.operators.ChainingStrategy; +import org.apache.flink.streaming.api.operators.Output; +import org.apache.flink.streaming.api.operators.StreamSink; +import org.apache.flink.streaming.runtime.streamrecord.StreamRecord; +import org.apache.flink.streaming.runtime.tasks.StreamTask; + +import java.util.HashSet; +import java.util.Optional; +import java.util.Set; + +/** + * An operator that processes records to be written into a {@link + * org.apache.flink.streaming.api.functions.sink.SinkFunction}. + * + * The operator is a proxy of {@link org.apache.flink.streaming.api.operators.StreamSink} in + * Flink. + * + * The operator is always part of a sink pipeline and is the first operator. + */ +@Internal +public class DataSinkOperator extends StreamSink { Review Comment: What about `DataSinkFunctionOperator`? ## flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-values/src/main/java/org/apache/flink/cdc/connectors/values/sink/ValuesDataSinkOptions.java: ## @@ -35,4 +35,10 @@ public class ValuesDataSinkOptions { .booleanType() .defaultValue(true) .withDescription("True if the Event should be print to console."); + +public static final ConfigOption LEGACY_ENABLED = +ConfigOptions.key("legacy.enabled") Review Comment: The word `legacy` is too vague. Sink V1 could also be a legacy one. What about using an enumeration here listing all APIs that the value sink supports? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
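The enumeration the second review comment suggests could be sketched as below. The option key, enum name, and values are hypothetical, and the imports use Flink's own ConfigOptions for illustration (the CDC project has its equivalent classes):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

/** Hypothetical sketch of an enum-typed option replacing a boolean "legacy.enabled" flag. */
public class ValuesDataSinkApiOptions {

    /** The sink APIs the values connector could exercise; the values are illustrative. */
    public enum SinkApi {
        SINK_FUNCTION, // org.apache.flink.streaming.api.functions.sink.SinkFunction
        SINK_V1,       // org.apache.flink.api.connector.sink.Sink
        SINK_V2        // org.apache.flink.api.connector.sink2.Sink
    }

    public static final ConfigOption<SinkApi> SINK_API =
            ConfigOptions.key("sink.api")
                    .enumType(SinkApi.class)
                    .defaultValue(SinkApi.SINK_V2)
                    .withDescription("Which sink API the values data sink should use.");
}
{code}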
[jira] [Updated] (FLINK-34996) Custom Deserializer can't be instantiated when connector-kafka installed into Flink Libs
[ https://issues.apache.org/jira/browse/FLINK-34996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hugo Gu updated FLINK-34996: Summary: Custom Deserializer can't be instantiated when connector-kafka installed into Flink Libs (was: Deserializer can't be instantiated when connector-kafka installed into Flink Libs) > Custom Deserializer can't be instantiated when connector-kafka installed into > Flink Libs > > > Key: FLINK-34996 > URL: https://issues.apache.org/jira/browse/FLINK-34996 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Reporter: Hugo Gu >Priority: Minor > Attachments: image-2024-04-03-17-34-00-120.png, > image-2024-04-03-17-37-55-105.png > > > The current implementation of the KafkaValueOnlyDeserializerWrapper class instantiates the Deserializer from the ClassLoader of KafkaValueOnlyDeserializerWrapper itself, as the following figure shows. > > !image-2024-04-03-17-34-00-120.png|width=799,height=293! > > If both of the following conditions are met: > 1. connector-kafka is installed into Flink's libs (rather than into the user jar) > 2. the user jar defines a customized Deserializer for Kafka records > > then the instantiation of the custom deserializer fails with a NoClassFound exception, because the class is indeed not available in the system class loader. > > As the following figure illustrates: > > !image-2024-04-03-17-37-55-105.png|width=413,height=452! > > It can be fixed by using either the UserCodeClassLoader or the ClassLoader of the current Thread. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-34996) Deserializer can't be instantiated when connector-kafka installed into Flink Libs
Hugo Gu created FLINK-34996: --- Summary: Deserializer can't be instantiated when connector-kafka installed into Flink Libs Key: FLINK-34996 URL: https://issues.apache.org/jira/browse/FLINK-34996 Project: Flink Issue Type: Bug Components: Connectors / Kafka Reporter: Hugo Gu Attachments: image-2024-04-03-17-34-00-120.png, image-2024-04-03-17-37-55-105.png The current implementation of the KafkaValueOnlyDeserializerWrapper class instantiates the Deserializer from the ClassLoader of KafkaValueOnlyDeserializerWrapper itself, as the following figure shows. !image-2024-04-03-17-34-00-120.png|width=799,height=293! If both of the following conditions are met: 1. connector-kafka is installed into Flink's libs (rather than into the user jar) 2. the user jar defines a customized Deserializer for Kafka records then the instantiation of the custom deserializer fails with a NoClassFound exception, because the class is indeed not available in the system class loader. As the following figure illustrates: !image-2024-04-03-17-37-55-105.png|width=413,height=452! It can be fixed by using either the UserCodeClassLoader or the ClassLoader of the current Thread. -- This message was sent by Atlassian Jira (v8.20.10#820010)
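A minimal sketch of the fix the description suggests, with hypothetical names (the actual wrapper class in the connector differs): resolve the class through an explicitly supplied user-code class loader, falling back to the thread context class loader, rather than through the wrapper's own defining loader.

{code:java}
import org.apache.kafka.common.serialization.Deserializer;

/** Hypothetical sketch; not the actual flink-connector-kafka code. */
public final class UserCodeDeserializerFactory {

    @SuppressWarnings("unchecked")
    public static <T> Deserializer<T> create(
            String deserializerClassName, ClassLoader userCodeClassLoader)
            throws ReflectiveOperationException {
        ClassLoader cl =
                userCodeClassLoader != null
                        ? userCodeClassLoader
                        : Thread.currentThread().getContextClassLoader();
        // Class.forName with an explicit loader can see classes from the user jar,
        // which the connector's own defining loader cannot when the connector jar
        // sits in Flink's lib directory.
        Class<?> clazz = Class.forName(deserializerClassName, true, cl);
        return (Deserializer<T>) clazz.getDeclaredConstructor().newInstance();
    }
}
{code}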
Re: [PR] [e2e] add pipeline e2e test for mysql connector. [flink-cdc]
leonardBang commented on PR #2997: URL: https://github.com/apache/flink-cdc/pull/2997#issuecomment-2034041053 @lvyanquan could you help rebase this PR and resolve the comments as well? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [e2e] add pipeline e2e test for mysql connector. [flink-cdc]
leonardBang commented on PR #2997: URL: https://github.com/apache/flink-cdc/pull/2997#issuecomment-2034040259 @loserwang1024 Adding a values-to-values e2e test is not useful, as the values connector is designed for testing; it's different from other connectors like MySQL, StarRocks, etc. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-34952) Flink CDC pipeline supports SinkFunction
[ https://issues.apache.org/jira/browse/FLINK-34952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongshun Wang updated FLINK-34952: -- Summary: Flink CDC pipeline supports SinkFunction (was: Flink CDC pipeline supports SourceFunction) > Flink CDC pipeline supports SinkFunction > - > > Key: FLINK-34952 > URL: https://issues.apache.org/jira/browse/FLINK-34952 > Project: Flink > Issue Type: Improvement > Components: Flink CDC >Reporter: Hongshun Wang >Priority: Major > Labels: pull-request-available > Fix For: cdc-3.1.0 > > > Though the current Flink CDC pipeline defines com.ververica.cdc.common.sink.FlinkSinkFunctionProvider to provide a Flink SinkFunction for writing events to external systems, com.ververica.cdc.runtime.operators.sink.DataSinkWriterOperator doesn't support SinkFunction, which means sinks implementing SinkFunction cannot use the CDC pipeline. > Why not support SinkFunction in the Flink CDC pipeline? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (FLINK-34565) Enhance flink kubernetes configMap to accommodate additional configuration files
[ https://issues.apache.org/jira/browse/FLINK-34565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833440#comment-17833440 ] Surendra Singh Lilhore edited comment on FLINK-34565 at 4/3/24 9:21 AM: [~zhuzh] If the user files are dynamic, then it is very useful to support them in ConfigMap, especially in an App Mode cluster. A similar use case was discussed on the user mailing list: [link to the mailing list thread.|https://lists.apache.org/thread/md2zq0dbvt2dxytdfxw16jbfh02yq0w9] [~wangyang0918] , Any thought about this? was (Author: surendrasingh): [~zhuzh] If the user files are dynamic, then it is very useful to support them in ConfigMap, especially in an App Mode cluster. A similar use case was discussed on the user mailing list: [link to the mailing list thread.|https://lists.apache.org/thread/md2zq0dbvt2dxytdfxw16jbfh02yq0w9] > Enhance flink kubernetes configMap to accommodate additional configuration > files > > > Key: FLINK-34565 > URL: https://issues.apache.org/jira/browse/FLINK-34565 > Project: Flink > Issue Type: Bug > Components: Deployment / Kubernetes >Reporter: Surendra Singh Lilhore >Priority: Major > Labels: pull-request-available > > Flink kubernetes client currently supports a fixed number of files > (flink-conf.yaml, logback-console.xml, log4j-console.properties) in the JM > and TM Pod ConfigMap. In certain scenarios, particularly in app mode, > additional configuration files are required for jobs to run successfully. > Presently, users must resort to workarounds to include dynamic configuration > files in the JM and TM. This proposed improvement allows users to easily add > extra files by configuring the > '{*}kubernetes.flink.configmap.additional.resources{*}' property. Users can > provide a semicolon-separated list of local files in the client Flink config > directory that should be included in the Flink ConfigMap. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-34995) flink kafka connector source stuck when partition leader invalid
[ https://issues.apache.org/jira/browse/FLINK-34995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yansuopeng updated FLINK-34995: --- Description: When a partition leader is invalid (leader=-1), a Flink streaming job using KafkaSource can't restart or start a new instance with a new group id: it will get stuck with the following exception: "{*}org.apache.kafka.common.errors.TimeoutException: Timeout of 6ms expired before the position for partition aaa-1 could be determined{*}" When leader=-1, Kafka APIs like KafkaConsumer.position() will block until either the position can be determined or an unrecoverable error is encountered. In fact, leader=-1 is not easy to avoid: even with replica=3, three disks going offline together will trigger the problem, especially when the cluster size is relatively large. It relies on the Kafka administrator to fix it in time, which is risky during the Kafka cluster's peak period. I have solved this problem and want to create a PR. was: When a partition leader is invalid (leader=-1), a Flink streaming job using KafkaSource can't restart or start a new instance with a new group id: it will get stuck with the following exception: "{*}org.apache.kafka.common.errors.TimeoutException: Timeout of 6ms expired before the position for partition aaa-1 could be determined{*}" When leader=-1, Kafka APIs like KafkaConsumer.position() will block until either the position can be determined or an unrecoverable error is encountered. In fact, leader=-1 is not easy to avoid: even with replica=3, three disks going offline together will trigger the problem, especially when the cluster size is relatively large. It relies on the Kafka administrator to fix it in time, which is risky during the Kafka cluster's peak period. > flink kafka connector source stuck when partition leader invalid > > > Key: FLINK-34995 > URL: https://issues.apache.org/jira/browse/FLINK-34995 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.17.0, 1.19.0, 1.18.1 >Reporter: yansuopeng >Priority: Major > > When a partition leader is invalid (leader=-1), a Flink streaming job using > KafkaSource can't restart or start a new instance with a new group id: it > will get stuck with the following exception: > "{*}org.apache.kafka.common.errors.TimeoutException: Timeout of 6ms > expired before the position for partition aaa-1 could be determined{*}" > When leader=-1, Kafka APIs like KafkaConsumer.position() will block until > either the position can be determined or an unrecoverable error is > encountered. > In fact, leader=-1 is not easy to avoid: even with replica=3, three disks > going offline together will trigger the problem, especially when the > cluster size is relatively large. It relies on the Kafka administrator to > fix it in time, which is risky during the Kafka cluster's peak period. > I have solved this problem and want to create a PR. > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] [FLINK-34529][table-planner] Introduce FlinkProjectWindowTransposeRule. [flink]
RubyChou commented on PR #24567: URL: https://github.com/apache/flink/pull/24567#issuecomment-2034009333 @libenchao hi, the comments are all resolved, please help review it when you have time, thx :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-34995) flink kafka connector source stuck when partition leader invalid
yansuopeng created FLINK-34995: -- Summary: flink kafka connector source stuck when partition leader invalid Key: FLINK-34995 URL: https://issues.apache.org/jira/browse/FLINK-34995 Project: Flink Issue Type: Bug Components: Connectors / Kafka Affects Versions: 1.18.1, 1.19.0, 1.17.0 Reporter: yansuopeng When a partition leader is invalid (leader=-1), a Flink streaming job using KafkaSource can't restart or start a new instance with a new group id: it will get stuck with the following exception: "{*}org.apache.kafka.common.errors.TimeoutException: Timeout of 6ms expired before the position for partition aaa-1 could be determined{*}" When leader=-1, Kafka APIs like KafkaConsumer.position() will block until either the position can be determined or an unrecoverable error is encountered. In fact, leader=-1 is not easy to avoid: even with replica=3, three disks going offline together will trigger the problem, especially when the cluster size is relatively large. It relies on the Kafka administrator to fix it in time, which is risky during the Kafka cluster's peak period. -- This message was sent by Atlassian Jira (v8.20.10#820010)
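For reference, the blocking behavior described above comes from KafkaConsumer#position(TopicPartition) waiting up to default.api.timeout.ms; the Duration overload lets a caller bound the wait and handle an unavailable leader explicitly. A sketch only (the 5-second timeout and the fallback policy are arbitrary choices, not the reporter's actual fix):

{code:java}
import java.time.Duration;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

/** Sketch only: bound the blocking position() lookup. */
public class BoundedPositionLookup {

    /** Returns the consumer position, or -1 if it cannot be determined within the timeout. */
    public static long positionOrUnknown(KafkaConsumer<?, ?> consumer, TopicPartition partition) {
        try {
            // Throws TimeoutException instead of blocking for default.api.timeout.ms
            // when the partition leader is unknown (leader=-1).
            return consumer.position(partition, Duration.ofSeconds(5));
        } catch (TimeoutException e) {
            // The caller can retry later, skip the partition, or fail fast.
            return -1L;
        }
    }
}
{code}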