[GitHub] [flink] flinkbot edited a comment on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
flinkbot edited a comment on pull request #16584:
URL: https://github.com/apache/flink/pull/16584#issuecomment-885710999

## CI report:

* Unknown: [CANCELED](TBD)
* 82860fda96c0e0d5931be169f1e664ec61349564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31108)

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink-benchmarks] pnowojski commented on a change in pull request #49: [FLINK-25959] Add micro-benchmarks for the sort-based blocking shuffle
pnowojski commented on a change in pull request #49:
URL: https://github.com/apache/flink-benchmarks/pull/49#discussion_r803619263

## File path: src/main/java/org/apache/flink/benchmark/BlockingPartitionRemoteChannelBenchmark.java

```diff
@@ -49,7 +49,15 @@ public static void main(String[] args) throws RunnerException {
 }
 
 @Benchmark
-public void remoteFilePartition(BlockingPartitionEnvironmentContext context) throws Exception {
+public void remoteFilePartition(RemoteFileEnvironmentContext context) throws Exception {
+    StreamGraph streamGraph =
+            StreamGraphUtils.buildGraphForBatchJob(context.env, RECORDS_PER_INVOCATION);
+    context.miniCluster.executeJobBlocking(
+            StreamingJobGraphGenerator.createJobGraph(streamGraph));
+}
+
+@Benchmark
+public void remoteSortShufflePartition(RemoteSortShuffleEnvironmentContext context) throws Exception {
```

Review comment: nit ditto: `remoteSortPartition`?

## File path: src/main/java/org/apache/flink/benchmark/BlockingPartitionBenchmark.java

```diff
@@ -66,6 +66,16 @@ public void uncompressedMmapPartition(UncompressedMmapEnvironmentContext context
     executeBenchmark(context.env);
 }
 
+@Benchmark
+public void compressedSortShufflePartition(CompressedSortShuffleEnvironmentContext context) throws Exception {
+    executeBenchmark(context.env);
+}
+
+@Benchmark
+public void uncompressedSortShufflePartition(UncompressedSortShuffleEnvironmentContext context) throws Exception {
+    executeBenchmark(context.env);
+}
+
```

Review comment: nit: rename to `compressedSortPartition` and `uncompressedSortPartition` to shorten the benchmark name in the web UI?
[GitHub] [flink] rmetzger edited a comment on pull request #18692: [FLINK-26015] Fixes object store bug
rmetzger edited a comment on pull request #18692:
URL: https://github.com/apache/flink/pull/18692#issuecomment-1034859586

Sadly, the JRS still doesn't work on K8s, using a minio s3 implementation:

```
2022-02-10 12:20:23,679 INFO  org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - Starting the resource manager.
2022-02-10 12:20:23,765 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Start SessionDispatcherLeaderProcess.
2022-02-10 12:20:25,060 INFO  org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess [] - Stopping SessionDispatcherLeaderProcess.
2022-02-10 12:20:25,164 INFO  org.apache.flink.runtime.jobmanager.DefaultJobGraphStore [] - Stopping DefaultJobGraphStore.
2022-02-10 12:20:25,255 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error occurred in the cluster entrypoint.
java.util.concurrent.CompletionException: org.apache.flink.util.FlinkRuntimeException: Could not retrieve JobResults of globally-terminated jobs from JobResultStore
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273) ~[?:1.8.0_322]
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280) [?:1.8.0_322]
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606) [?:1.8.0_322]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_322]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_322]
	at java.lang.Thread.run(Thread.java:750) [?:1.8.0_322]
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not retrieve JobResults of globally-terminated jobs from JobResultStore
	at org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.getDirtyJobResults(SessionDispatcherLeaderProcess.java:186) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.supplyUnsynchronizedIfRunning(AbstractDispatcherLeaderProcess.java:198) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.getDirtyJobResultsIfRunning(SessionDispatcherLeaderProcess.java:178) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) ~[?:1.8.0_322]
	... 3 more
Caused by: java.io.FileNotFoundException: No such file or directory: s3://vvc-eu-west-1-dev-store/myorg/myscope/3d78a6e7-4c88-4e6f-8e59-4fb4b6dd6319-test-job-name-a/ha/job-result-store/default
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2344) ~[?:?]
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2226) ~[?:?]
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2160) ~[?:?]
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1961) ~[?:?]
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listStatus$9(S3AFileSystem.java:1940) ~[?:?]
	at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) ~[?:?]
	at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1940) ~[?:?]
	at org.apache.flink.fs.s3hadoop.common.HadoopFileSystem.listStatus(HadoopFileSystem.java:170) ~[?:?]
	at org.apache.flink.core.fs.PluginFileSystemFactory$ClassLoaderFixingFileSystem.listStatus(PluginFileSystemFactory.java:141) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.highavailability.FileSystemJobResultStore.getDirtyResultsInternal(FileSystemJobResultStore.java:158) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.highavailability.AbstractThreadsafeJobResultStore.withReadLock(AbstractThreadsafeJobResultStore.java:118) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.highavailability.AbstractThreadsafeJobResultStore.getDirtyResults(AbstractThreadsafeJobResultStore.java:100) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.getDirtyJobResults(SessionDispatcherLeaderProcess.java:184) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.dispatcher.runner.AbstractDispatcherLeaderProcess.supplyUnsynchronizedIfRunning(AbstractDispatcherLeaderProcess.java:198) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at org.apache.flink.runtime.dispatcher.runner.SessionDispatcherLeaderProcess.getDirtyJobResultsIfRunning(SessionDispatcherLeaderProcess.java:178) ~[flink-dist-1.15-jrs-fix.jar:1.15-jrs-fix]
	at
```
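The root cause in the trace above is a `FileNotFoundException` when listing the job-result-store path: on object stores such as S3/minio, "directories" don't exist until a first object has been written under them. A minimal, self-contained sketch of one possible guard for this situation follows; note that the `Lister` interface and method names here are hypothetical stand-ins for illustration, not Flink's or Hadoop's actual API:

```java
import java.io.FileNotFoundException;
import java.util.Collections;
import java.util.Set;

public class DirtyResultsSketch {

    // Hypothetical stand-in for the store's file-listing dependency.
    interface Lister {
        Set<String> list(String dir) throws FileNotFoundException;
    }

    // Treat a missing base directory as "no dirty results" instead of failing:
    // on an object store the path only exists once a first entry was written.
    static Set<String> getDirtyResults(Lister fs, String dir) {
        try {
            return fs.list(dir);
        } catch (FileNotFoundException e) {
            return Collections.emptySet();
        }
    }

    public static void main(String[] args) {
        Lister missing = d -> { throw new FileNotFoundException(d); };
        System.out.println(getDirtyResults(missing, "s3://bucket/ha/job-result-store").size()); // prints 0
    }
}
```

Whether the actual fix in this PR catches the exception, pre-creates the directory, or checks existence first is not shown in the thread; the sketch only illustrates the tolerant-listing option.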
[GitHub] [flink] flinkbot edited a comment on pull request #18702: [FLINK-24441][source] Block SourceOperator when watermarks are out of alignment
flinkbot edited a comment on pull request #18702:
URL: https://github.com/apache/flink/pull/18702#issuecomment-1034842544

## CI report:

* 1c798c2c84e8285f7a7784038583a194bc42b3ea Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31118)

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] gaborgsomogyi commented on pull request #18664: [FLINK-25907][runtime][security] Add pluggable delegation token manager
gaborgsomogyi commented on pull request #18664:
URL: https://github.com/apache/flink/pull/18664#issuecomment-1034857628

Oh gosh, resolving the conflict...
[GitHub] [flink] flinkbot edited a comment on pull request #18698: [FLINK-25782] [docs] Translate datastream filesystem.md page into Chinese.
flinkbot edited a comment on pull request #18698:
URL: https://github.com/apache/flink/pull/18698#issuecomment-1034521527

## CI report:

* 146bd830e84ad8c2a521f2edf5e32741d7aa45b5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31098)

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697:
URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296

## CI report:

* 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079)
* df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31112)
* 3cd445e29262a008208a97ce6e2cf80e33f37ba5 UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18689: [FLINK-21439][runtime] Exception history adaptive scheduler
flinkbot edited a comment on pull request #18689:
URL: https://github.com/apache/flink/pull/18689#issuecomment-1033918208

## CI report:

* 45c9c7931377fa16988a1a25985c028907047274 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31096)

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18688: [FLINK-26060][table-planner] Remove persisted plan feature support for Python UDFs
flinkbot edited a comment on pull request #18688:
URL: https://github.com/apache/flink/pull/18688#issuecomment-1033917965

## CI report:

* f2aeb29764801267d7cb8463724464e49774ee28 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31041)
* a1e93a73c296c113f974f93e44262d156f4ff102 UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[jira] [Closed] (FLINK-26035) Rework loader-bundle into separate module
[ https://issues.apache.org/jira/browse/FLINK-26035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chesnay Schepler closed FLINK-26035.
Resolution: Fixed

master: c0ce5506758605b1acff9e6931f7e7df0e519a09

> Rework loader-bundle into separate module
> -----------------------------------------
>
>                 Key: FLINK-26035
>                 URL: https://issues.apache.org/jira/browse/FLINK-26035
>             Project: Flink
>          Issue Type: Technical Debt
>          Components: Build System, Table SQL / Planner
>    Affects Versions: 1.15.0
>            Reporter: Chesnay Schepler
>            Assignee: Chesnay Schepler
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.15.0
>
> The flink-table-planner module currently produces two artifacts: one jar containing the planner and various dependencies for the cases where the planner is used directly, and another jar that additionally bundles Scala for the cases where the loader is used.
> The latter artifact is purely an intermediate build artifact, and as such we usually wouldn't want to publish it. This is particularly important because this jar doesn't have a correct NOTICE file, and having different NOTICE files for different artifacts is surprisingly tricky.
> We should rework this into a separate module.

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
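For context, splitting an intermediate artifact into its own Maven module comes down to registering the module in the parent POM and giving it its own NOTICE file. A hypothetical sketch of the parent-POM entry (the module name is taken from the follow-up PR title, "table-planner-loader-helper"; the exact path may differ):

```xml
<!-- Hypothetical entry in the flink-table parent pom.xml; the real
     layout is defined by the merged PR, not shown in this thread. -->
<modules>
  <module>flink-table-planner-loader-helper</module>
</modules>
```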
[GitHub] [flink] zentol merged pull request #18676: [FLINK-26035][build][planner] Add table-planner-loader-helper module
zentol merged pull request #18676:
URL: https://github.com/apache/flink/pull/18676
[GitHub] [flink] flinkbot edited a comment on pull request #18670: [FLINK-25999] Deprecate Per-Job Mode
flinkbot edited a comment on pull request #18670:
URL: https://github.com/apache/flink/pull/18670#issuecomment-1033075650

## CI report:

* 360c74aff2caa70284a90375a17519dc4843d4b2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31043)
* b87fb6bf4a43eeb1b10d3516493959f43ff901af Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31100)
* 8f553fdda7a749ef4d6398f3239ab2103e629ecf UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18664: [FLINK-25907][runtime][security] Add pluggable delegation token manager
flinkbot edited a comment on pull request #18664:
URL: https://github.com/apache/flink/pull/18664#issuecomment-1032585099

## CI report:

* e882713639b8f7541793d802c52d8c354ddf9ea2 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31054)
* 357ed24c86356875da2a328ea57546ff86b39d5f UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18649: [FLINK-25844][table-api-java] Introduce StatementSet#compilePlan
flinkbot edited a comment on pull request #18649:
URL: https://github.com/apache/flink/pull/18649#issuecomment-1031570072

## CI report:

* 7abbdf54df75a8e9e0041e908ab61d785d800410 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31047)
* 414d103b5b3cd2960b910198c40a7aeb83aed868 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31109)
* 114478aae924d51669e7809bc26c6c82c4d5955a UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run azure` re-run the last Azure build
[jira] [Comment Edited] (FLINK-26051) one sql has row_number =1 and the subsequent SQL has "case when" and "where" statement result Exception : The window can only be ordered in ASCENDING mode
[ https://issues.apache.org/jira/browse/FLINK-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490165#comment-17490165 ]

zhangbin edited comment on FLINK-26051 at 2/10/22, 12:17 PM:
------------------------------------------------------------

Testing with Flink 1.12.2 in the IDE, the debug log shows that the logical plan handed to the FlinkLogicalRankRuleForRangeEnd rule differs depending on whether the query contains a `case when`.

-- with case when

```
FlinkLogicalCalc(select=[biz_bill_no, CASE(OR(=(task_mode, 51), AND(=(task_mode, 40), >=(total_stage_num, 2), >=(current_stage_index, 2), =(use_pre_task_owner, 1))), parent_task_no, sowing_task_no) AS parent_task_no_cw, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))])
+- FlinkLogicalCalc(select=[biz_bill_no, task_type, task_mode, parent_task_no, total_stage_num, current_stage_index, use_pre_task_owner, poi_type, biz_origin_bill_type, sowing_task_no, w0$o0])
   +- FlinkLogicalOverAggregate(window#0=[window(partition {10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])])
      +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]])
```

This plan does not match FlinkLogicalRankRuleForRangeEnd, but it matches FlinkCalcMergeRule. After FlinkLogicalRankRuleForRangeEnd, the optimized result is:

```
FlinkLogicalCalc(select=[biz_bill_no, CASE(OR(=(task_mode, 51), AND(=(task_mode, 40), >=(total_stage_num, 2), >=(current_stage_index, 2), =(use_pre_task_owner, 1))), parent_task_no, sowing_task_no) AS parent_task_no_cw, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))])
+- FlinkLogicalOverAggregate(window#0=[window(partition {10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])])
   +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]])
```

-- without case when

```
FlinkLogicalCalc(select=[biz_bill_no, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))])
+- FlinkLogicalOverAggregate(window#0=[window(partition {10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])])
   +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]])
```

This plan does match FlinkLogicalRankRuleForRangeEnd, and the optimized result is:

```
FlinkLogicalCalc(select=[biz_bill_no, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(AND(AND(AND(AND(AND(=(task_type, 21), OR(=(task_mode, 40), =(task_mode, 51))), =(poi_type, 2)), <>(biz_origin_bill_type, 111)), <>(biz_origin_bill_type, 112)), <>(biz_origin_bill_type, 113)), <>(biz_origin_bill_type, 114))])
+- FlinkLogicalRank(rankType=[ROW_NUMBER], rankRange=[rankStart=1, rankEnd=1], partitionBy=[dt,sowing_task_detail_id], orderBy=[task_type DESC], select=[biz_bill_no, task_type, task_mode, parent_task_no, total_stage_num, current_stage_index, use_pre_task_owner, poi_type, biz_origin_bill_type, sowing_task_no, dt, sowing_task_detail_id])
   +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]])
```

So the SQL with `case when` is converted to FlinkLogicalOverAggregate, while the SQL without `case when` is converted to FlinkLogicalRank. The problem was solved by adding FlinkCalcMergeRule.INSTANCE before FlinkLogicalRankRule.INSTANCE in the LOGICAL_REWRITE rule set of FlinkStreamRuleSets.java.

!image-2022-02-10-20-13-14-424.png|width=645,height=384!

cc [~godfreyhe], [~jark]

was (Author: zhangbinzaifendou):
Using the Flink 1.12.2 test in the IDE, the debug log found that the logical plan before executing the FlinkLogicalRankRuleForRangeEnd rule is different before applying FlinkLogicalRankRuleForRangeEnd

-- with case when

```
FlinkLogicalCalc(select=[biz_bill_no, CASE(OR(=(task_mode, 51), AND(=(task_mode, 40), >=(total_stage_num, 2), >=(current_stage_index, 2), =(use_pre_task_owner, 1))), parent_task_no,
```
[GitHub] [flink] gaborgsomogyi commented on a change in pull request #18664: [FLINK-25907][runtime][security] Add pluggable delegation token manager
gaborgsomogyi commented on a change in pull request #18664:
URL: https://github.com/apache/flink/pull/18664#discussion_r803614366

## File path: flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenManager.java

```diff
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security.token;
+
+import org.apache.flink.configuration.Configuration;
+
+import org.apache.hadoop.security.Credentials;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.ServiceLoader;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Manager for delegation tokens in a Flink cluster.
+ *
+ * When delegation token renewal is enabled, this manager will make sure long-running apps can
+ * run without interruption while accessing secured services. It periodically logs in to the KDC
+ * with user-provided credentials, and contacts all the configured secure services to obtain
+ * delegation tokens to be distributed to the rest of the application.
+ */
+public class DelegationTokenManager {
+
+    private static final Logger LOG = LoggerFactory.getLogger(DelegationTokenManager.class);
+
+    private final Configuration configuration;
+
+    final Map delegationTokenProviders;
+
+    public DelegationTokenManager(Configuration configuration) {
+        this.configuration = checkNotNull(configuration, "Flink configuration must not be null");
```

Review comment: This code is just dropped, so the comment is outdated.
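The reviewed class imports `ServiceLoader`, `HashMap`, and `Iterator`, which suggests the provider map is populated by classpath discovery. A self-contained sketch of that pattern is below; the `DelegationTokenProvider` interface defined here is a local stand-in for illustration, not Flink's actual SPI:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

public class ProviderLoadingSketch {

    // Local stand-in for the provider SPI; the real interface lives in flink-runtime.
    public interface DelegationTokenProvider {
        String serviceName();
    }

    // Discover all providers registered via META-INF/services and index them by name.
    public static Map<String, DelegationTokenProvider> loadProviders() {
        Map<String, DelegationTokenProvider> providers = new HashMap<>();
        for (DelegationTokenProvider provider :
                ServiceLoader.load(DelegationTokenProvider.class)) {
            providers.put(provider.serviceName(), provider);
        }
        return providers;
    }

    public static void main(String[] args) {
        // No providers are registered in this standalone sketch, so the map is empty.
        System.out.println(loadProviders().size()); // prints 0
    }
}
```

Registering an implementation would then only require a `META-INF/services` entry naming the concrete class, which is what makes the manager "pluggable".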
[GitHub] [flink] gaborgsomogyi commented on a change in pull request #18664: [FLINK-25907][runtime][security] Add pluggable delegation token manager
gaborgsomogyi commented on a change in pull request #18664:
URL: https://github.com/apache/flink/pull/18664#discussion_r803613786

## File path: flink-runtime/src/main/java/org/apache/flink/runtime/security/token/DelegationTokenManager.java

```diff
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.security.token;
+
+import org.apache.flink.configuration.Configuration;
+
+import org.apache.hadoop.security.Credentials;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.ServiceLoader;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Manager for delegation tokens in a Flink cluster.
+ *
+ * When delegation token renewal is enabled, this manager will make sure long-running apps can
+ * run without interruption while accessing secured services. It periodically logs in to the KDC
+ * with user-provided credentials, and contacts all the configured secure services to obtain
+ * delegation tokens to be distributed to the rest of the application.
+ */
+public class DelegationTokenManager {
```

Review comment: Done.
[GitHub] [flink] gaborgsomogyi commented on a change in pull request #18664: [FLINK-25907][runtime][security] Add pluggable delegation token manager
gaborgsomogyi commented on a change in pull request #18664: URL: https://github.com/apache/flink/pull/18664#discussion_r803613571 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java ## @@ -205,6 +212,14 @@ public ResourceManager( this.ioExecutor = ioExecutor; this.startedFuture = new CompletableFuture<>(); + +checkNotNull(configuration, "Flink configuration must not be null"); +this.delegationTokenManager = + configuration.getBoolean(SecurityOptions.KERBEROS_FETCH_DELEGATION_TOKEN) +&& HadoopDependency.isHadoopCommonOnClasspath( +getClass().getClassLoader()) +? Optional.of(new DelegationTokenManager(configuration)) +: Optional.empty(); Review comment: Done together w/ the interface.
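The wiring discussed in this review thread creates the manager only when the Kerberos fetch option is enabled and Hadoop is on the classpath. A minimal sketch of that ternary-with-guards pattern follows; the nested `DelegationTokenManager` here is a stand-in for Flink's real `org.apache.flink.runtime.security.token.DelegationTokenManager`, so treat this as an illustration of the shape of the diff, not the merged code:

```java
import java.util.Optional;

public class TokenManagerWiring {

    // Stand-in for Flink's DelegationTokenManager (the real class lives in
    // org.apache.flink.runtime.security.token).
    static class DelegationTokenManager {}

    /**
     * Returns a manager only when both preconditions hold, mirroring the
     * guarded ternary from the review diff.
     */
    public static Optional<DelegationTokenManager> create(
            boolean fetchDelegationTokensEnabled, boolean hadoopOnClasspath) {
        return fetchDelegationTokensEnabled && hadoopOnClasspath
                ? Optional.of(new DelegationTokenManager())
                : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(create(true, true).isPresent());   // true
        System.out.println(create(true, false).isPresent());  // false
    }
}
```

Using `Optional.empty()` (rather than a null field) makes the "token management disabled" case explicit at every call site.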
[jira] [Commented] (FLINK-26051) one sql has row_number =1 and the subsequent SQL has "case when" and "where" statement result Exception : The window can only be ordered in ASCENDING mode
[ https://issues.apache.org/jira/browse/FLINK-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490165#comment-17490165 ] zhangbinzaifendou commented on FLINK-26051: --- Using the Flink1.12.2 test on the idea, the debug log found that the logicalPlan before executing the FlinkLogicalRankRuleForRangeEnd rule is different before apply FlinkLogicalRankRuleForRangeEnd -- with case when FlinkLogicalCalc(select=[biz_bill_no, CASE(OR(=(task_mode, 51), AND(=(task_mode, 40), >=(total_stage_num, 2), >=(current_stage_index, 2), =(use_pre_task_owner, 1))), parent_task_no, sowing_task_no) AS parent_task_no_cw, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))]) +- FlinkLogicalCalc(select=[biz_bill_no, task_type, task_mode, parent_task_no, total_stage_num, current_stage_index, use_pre_task_owner, poi_type, biz_origin_bill_type, sowing_task_no, w0$o0]) +- FlinkLogicalOverAggregate(window#0=[window(partition \{10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])]) +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]]) not match FlinkLogicalRankRuleForRangeEnd, But it matches the FlinkCalcMergeRule after FlinkLogicalRankRuleForRangeEnd optimize result: FlinkLogicalCalc(select=[biz_bill_no, CASE(OR(=(task_mode, 51), AND(=(task_mode, 40), >=(total_stage_num, 2), >=(current_stage_index, 2), =(use_pre_task_owner, 1))), parent_task_no, sowing_task_no) AS parent_task_no_cw, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), 
SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))]) +- {color:#FF}FlinkLogicalOverAggregate{color}(window#0=[window(partition \{10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])]) +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]]) -- without case when FlinkLogicalCalc(select=[biz_bill_no, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(SEARCH(w0$o0, Sarg[1L:BIGINT]:BIGINT), SEARCH(task_type, Sarg[21]), SEARCH(task_mode, Sarg[40, 51]), SEARCH(poi_type, Sarg[2]), SEARCH(biz_origin_bill_type, Sarg[(-∞..111), (111..112), (112..113), (113..114), (114..+∞)]))]) +- FlinkLogicalOverAggregate(window#0=[window(partition \{10, 11} order by [1 DESC-nulls-last] rows between UNBOUNDED PRECEDING and CURRENT ROW aggs [ROW_NUMBER()])]) +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, wosOutSowingTaskDetail]]) match FlinkLogicalRankRuleForRangeEnd optimize result: FlinkLogicalCalc(select=[biz_bill_no, parent_task_no, sowing_task_no, task_type, task_mode, total_stage_num, current_stage_index, use_pre_task_owner], where=[AND(AND(AND(AND(AND(AND(=(task_type, 21), OR(=(task_mode, 40), =(task_mode, 51))), =(poi_type, 2)), <>(biz_origin_bill_type, 111)), <>(biz_origin_bill_type, 112)), <>(biz_origin_bill_type, 113)), <>(biz_origin_bill_type, 114))]) +- {color:#FF}FlinkLogicalRank{color}(rankType=[ROW_NUMBER], rankRange=[rankStart=1, rankEnd=1], partitionBy=[dt,sowing_task_detail_id], orderBy=[task_type DESC], select=[biz_bill_no, task_type, task_mode, parent_task_no, total_stage_num, current_stage_index, use_pre_task_owner, poi_type, biz_origin_bill_type, sowing_task_no, dt, sowing_task_detail_id]) +- FlinkLogicalDataStreamTableScan(table=[[default_catalog, default_database, 
wosOutSowingTaskDetail]]) The SQL with case when is converted to FlinkLogicalOverAggregate, while the SQL without case when is converted to FlinkLogicalRank. The problem was solved when I moved FlinkCalcMergeRule.INSTANCE before FlinkLogicalRankRule.INSTANCE in the LOGICAL_REWRITE ruleset of FlinkStreamRuleSets.java. !image-2022-02-10-20-13-14-424.png|width=645,height=384! [~godfreyhe] [~jark] > one sql has row_number =1 and the subsequent SQL has "case when" and "where" > statement result Exception : The window can only be ordered in ASCENDING mode > -- > > Key: FLINK-26051 > URL: https://issues.apache.org/jira/browse/FLINK-26051 > Project: Flink > Issue Type: Bug
[jira] [Commented] (FLINK-26046) Link to OpenAPI specification is dead
[ https://issues.apache.org/jira/browse/FLINK-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490166#comment-17490166 ] Chesnay Schepler commented on FLINK-26046: -- Meh, still didn't work because of a missing slash. In production the baseUrl does not end with a slash, but locally it does. master: d8034c982d0c15cf2ab1a1cd4cec9606229b7773 > Link to OpenAPI specification is dead > - > > Key: FLINK-26046 > URL: https://issues.apache.org/jira/browse/FLINK-26046 > Project: Flink > Issue Type: Technical Debt > Components: Documentation >Reporter: David Morávek >Assignee: Chesnay Schepler >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > In the REST API docs, the link to the OpenAPI specification is missing the > flink prefix, so it's leading nowhere. > https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/ -- This message was sent by Atlassian Jira (v8.20.1#820001)
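The root cause Chesnay describes — a baseUrl that ends with a slash locally but not in production — is a classic URL-joining pitfall. A hedged sketch of a join helper that normalizes the boundary so both base forms behave the same (the class and method names are illustrative, not Flink's actual doc-build code):

```java
public class UrlJoin {

    /** Joins a base URL and a relative path with exactly one slash between them. */
    public static String join(String base, String path) {
        String b = base.endsWith("/") ? base.substring(0, base.length() - 1) : base;
        String p = path.startsWith("/") ? path.substring(1) : path;
        return b + "/" + p;
    }

    public static void main(String[] args) {
        // With or without a trailing slash on the base, the result is identical,
        // avoiding the missing-slash (or doubled-slash) class of dead links.
        System.out.println(join("https://nightlies.apache.org/flink", "rest_api.html"));
        System.out.println(join("https://nightlies.apache.org/flink/", "rest_api.html"));
    }
}
```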
[jira] [Closed] (FLINK-26065) org.apache.flink.table.api.PlanReference$ContentPlanReference $FilePlanReference $ResourcePlanReference violation the api rules
[ https://issues.apache.org/jira/browse/FLINK-26065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francesco Guardiani closed FLINK-26065. --- Resolution: Fixed > org.apache.flink.table.api.PlanReference$ContentPlanReference > $FilePlanReference $ResourcePlanReference violation the api rules > --- > > Key: FLINK-26065 > URL: https://issues.apache.org/jira/browse/FLINK-26065 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Affects Versions: 1.15.0 >Reporter: Yun Gao >Assignee: Francesco Guardiani >Priority: Blocker > Labels: test-stability > > {code:java} > Feb 09 21:10:32 [ERROR] Failures: > Feb 09 21:10:32 [ERROR] Architecture Violation [Priority: MEDIUM] - Rule > 'Classes in API packages should have at least one API visibility annotation.' > was violated (3 times): > Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$ContentPlanReference > does not satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$FilePlanReference > does not satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 > org.apache.flink.table.api.PlanReference$ResourcePlanReference does not > satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 [INFO] > Feb 09 21:10:32 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31051=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=26427 -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-26065) org.apache.flink.table.api.PlanReference$ContentPlanReference $FilePlanReference $ResourcePlanReference violation the api rules
[ https://issues.apache.org/jira/browse/FLINK-26065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490164#comment-17490164 ] Francesco Guardiani commented on FLINK-26065: - Fixed by https://github.com/apache/flink/commit/74815407dae8c687dabdc62378905b8f3c143a77 > org.apache.flink.table.api.PlanReference$ContentPlanReference > $FilePlanReference $ResourcePlanReference violation the api rules > --- > > Key: FLINK-26065 > URL: https://issues.apache.org/jira/browse/FLINK-26065 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Affects Versions: 1.15.0 >Reporter: Yun Gao >Assignee: Francesco Guardiani >Priority: Blocker > Labels: test-stability > > {code:java} > Feb 09 21:10:32 [ERROR] Failures: > Feb 09 21:10:32 [ERROR] Architecture Violation [Priority: MEDIUM] - Rule > 'Classes in API packages should have at least one API visibility annotation.' > was violated (3 times): > Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$ContentPlanReference > does not satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 org.apache.flink.table.api.PlanReference$FilePlanReference > does not satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 > org.apache.flink.table.api.PlanReference$ResourcePlanReference does not > satisfy: annotated with @Internal or annotated with @Experimental or > annotated with @PublicEvolving or annotated with @Public or annotated with > @Deprecated > Feb 09 21:10:32 [INFO] > Feb 09 21:10:32 [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0 > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31051=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=26427 -- This message was sent by Atlassian Jira 
(v8.20.1#820001)
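The fix for architecture violations like the ones quoted above is to annotate each nested class with one of the accepted visibility annotations. A minimal, self-contained sketch of the pattern — the `@Internal` defined here is a local stand-in for Flink's `org.apache.flink.annotation.Internal`, and the nested class names only echo the ones in the report:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class PlanReferenceExample {

    // Stand-in for Flink's @Internal API-visibility annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Internal {}

    // Each nested variant carries a visibility annotation, which is what the
    // "at least one API visibility annotation" architecture rule checks for.
    @Internal
    public static class ContentPlanReference {}

    @Internal
    public static class FilePlanReference {}

    @Internal
    public static class ResourcePlanReference {}

    public static void main(String[] args) {
        for (Class<?> c : PlanReferenceExample.class.getDeclaredClasses()) {
            if (c.isAnnotation()) {
                continue; // skip the stand-in annotation itself
            }
            System.out.println(
                    c.getSimpleName() + " annotated: " + c.isAnnotationPresent(Internal.class));
        }
    }
}
```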
[jira] [Updated] (FLINK-26051) one sql has row_number =1 and the subsequent SQL has "case when" and "where" statement result Exception : The window can only be ordered in ASCENDING mode
[ https://issues.apache.org/jira/browse/FLINK-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangbinzaifendou updated FLINK-26051: -- Attachment: image-2022-02-10-20-13-14-424.png > one sql has row_number =1 and the subsequent SQL has "case when" and "where" > statement result Exception : The window can only be ordered in ASCENDING mode > -- > > Key: FLINK-26051 > URL: https://issues.apache.org/jira/browse/FLINK-26051 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.12.2 >Reporter: chuncheng wu >Priority: Major > Attachments: image-2022-02-10-20-13-14-424.png > > > hello, > i have 2 sqls. One sql (sql0) is "select xx from ( ROW_NUMBER stament) > where rn=1" and the other one (sql1) is "s{color:#505f79}elect ${fields} > from result where ${filter_conditions}{color}" . The fields quoted in sql1 > has one "case when" field .The two sql can work well seperately.but if they > combine it results the exception as follow . It happen in the occasion when > logical plan turn into physical plan : > > {code:java} > org.apache.flink.table.api.TableException: The window can only be ordered in > ASCENDING mode. 
> at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:98) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregate.translateToPlanInternal(StreamExecOverAggregate.scala:52) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecOverAggregateBase.translateToPlan(StreamExecOverAggregateBase.scala:42) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:54) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39) > at > org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) > at > org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalcBase.translateToPlan(StreamExecCalcBase.scala:38) > at > org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66) > at > org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:65) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at > scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.AbstractTraversable.map(Traversable.scala:104) > at > org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:65) > at > 
org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:103) > at > org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:42) > at > org.apache.flink.table.api.internal.TableEnvironmentImpl.explainInternal(TableEnvironmentImpl.java:630) > at > org.apache.flink.table.api.internal.TableImpl.explain(TableImpl.java:582) > at > com.meituan.grocery.data.flink.test.BugTest.testRowNumber(BugTest.java:69) > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:568) > at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60){code} > In the stacktrace above , rownumber() 's physical rel which is > StreamExecRank In nomal change to StreamExecOverAggregate . The > StreamExecOverAggregate rel has a window= ROWS BETWEEN UNBOUNDED PRECEDING > AND CURRENT ROW which i never add .Oddly,if you remove the "case when" field > or the "where" statement in sql1 ,the program will work
[GitHub] [flink] flinkbot edited a comment on pull request #18702: [FLINK-24441][source] Block SourceOperator when watermarks are out of alignment
flinkbot edited a comment on pull request #18702: URL: https://github.com/apache/flink/pull/18702#issuecomment-1034842544 ## CI report: * 1c798c2c84e8285f7a7784038583a194bc42b3ea UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296 ## CI report: * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079) * df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31112) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure
flinkbot edited a comment on pull request #18653: URL: https://github.com/apache/flink/pull/18653#issuecomment-1032212433 ## CI report: * 9261ad675d1c9e6f0ea94df829f94f25f18bafc4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31072) * 764c5fdf856b0930f7fdf8d470256e84d05a646a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31117) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] adiaixin commented on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
adiaixin commented on pull request #16584: URL: https://github.com/apache/flink/pull/16584#issuecomment-1034848279 @flinkbot run azure
[GitHub] [flink] afedulov commented on a change in pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source
afedulov commented on a change in pull request #18153: URL: https://github.com/apache/flink/pull/18153#discussion_r803608850 ## File path: flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/connector/elasticsearch/common/ElasticsearchUtil.java ## @@ -0,0 +1,67 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.connector.elasticsearch.common; + +import org.apache.flink.annotation.Internal; + +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.client.CredentialsProvider; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.elasticsearch.client.RestClientBuilder; + +/** Collection of utility methods for the Elasticsearch source and sink. */ +@Internal +public class ElasticsearchUtil { + +public static RestClientBuilder configureRestClientBuilder( +RestClientBuilder builder, NetworkClientConfig config) { +if (config.getConnectionPathPrefix() != null) { Review comment: I meant the config object itself. There are paths like `Elasticsearch7SplitReader`/ `Elasticsearch7SearchHitReader` where the NetworkConfig can currently propagate as null. 
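The concern raised in this Elasticsearch review — a config object that can propagate as `null` all the way into split readers — is usually addressed by failing fast at the construction boundary rather than null-checking at every use site. A hedged sketch under invented names (Flink's actual `NetworkClientConfig` and reader classes differ):

```java
import java.util.Objects;

public class FailFastConfig {

    // Illustrative stand-in for a network client configuration.
    static class NetworkClientConfig {
        final String pathPrefix; // an optional setting; may legitimately be null
        NetworkClientConfig(String pathPrefix) { this.pathPrefix = pathPrefix; }
    }

    static class SplitReader {
        private final NetworkClientConfig config;

        SplitReader(NetworkClientConfig config) {
            // Reject a null config object here, so all later code can rely on it
            // being present instead of propagating null deeper into the reader.
            this.config =
                    Objects.requireNonNull(config, "networkClientConfig must not be null");
        }

        String describe() {
            return config.pathPrefix != null ? "prefix=" + config.pathPrefix : "no prefix";
        }
    }

    public static void main(String[] args) {
        // An optional *field* may be null; the config object itself may not.
        System.out.println(new SplitReader(new NetworkClientConfig(null)).describe());
        try {
            new SplitReader(null);
        } catch (NullPointerException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```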
[GitHub] [flink] flinkbot edited a comment on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
flinkbot edited a comment on pull request #16584: URL: https://github.com/apache/flink/pull/16584#issuecomment-885710999 ## CI report: * Unknown: [CANCELED](TBD) * 82860fda96c0e0d5931be169f1e664ec61349564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31108) * 73d908707e5d3a259b45cb164575edb994421cb6 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] XComp commented on a change in pull request #18692: [FLINK-26015] Fixes object store bug
XComp commented on a change in pull request #18692: URL: https://github.com/apache/flink/pull/18692#discussion_r803608001 ## File path: flink-filesystems/flink-s3-fs-presto/pom.xml ## @@ -46,6 +46,36 @@ under the License. provided + + + org.apache.flink + flink-runtime + ${project.version} + test + + + + org.apache.flink + flink-runtime + ${project.version} + test-jar + tests Review comment: It took me a bit to realize that you didn't mean the entire dependency but the `` :-D Fair enough, I guess, that ended up in the code while trying to integrate the test jars.
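For readers following the pom.xml fragment above: the XML tag markup was lost in this archive, but the canonical Maven pattern for depending on another module's test classes is a second dependency entry on the test JAR. This is the general pattern, not necessarily the exact diff under review:

```xml
<!-- Main classes of flink-runtime, test scope only. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-runtime</artifactId>
    <version>${project.version}</version>
    <scope>test</scope>
</dependency>

<!-- Test classes of flink-runtime, as produced by maven-jar-plugin's test-jar goal. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-runtime</artifactId>
    <version>${project.version}</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>
```

Some projects declare the test artifact with `<classifier>tests</classifier>` instead of `<type>test-jar</type>`; either resolves the same attached artifact, which is likely the detail the reviewers were discussing.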
[GitHub] [flink] MrWhiteSike commented on a change in pull request #17943: [FLINK-24693][docs-zh] Translate "checkpoints" and "checkpointing" page to Chinese
MrWhiteSike commented on a change in pull request #17943: URL: https://github.com/apache/flink/pull/17943#discussion_r803606329 ## File path: docs/content.zh/docs/dev/datastream/fault-tolerance/checkpointing.md ## @@ -43,34 +43,40 @@ Flink 的 checkpoint 机制会和持久化存储进行交互,读写流与状 ## 开启与配置 Checkpoint -默认情况下 checkpoint 是禁用的。通过调用 `StreamExecutionEnvironment` 的 `enableCheckpointing(n)` 来启用 checkpoint,里面的 *n* 是进行 checkpoint 的间隔,单位毫秒。 +默认情况下 checkpoint 是禁用的。通过调用 `StreamExecutionEnvironment` 的 `enableCheckpointing(n)` 来启用 checkpoint,其中 *n* 表示 [checkpoint 时间间隔]({{< ref "docs/ops/production_ready#choose-the-right-checkpoint-interval" >}}),单位毫秒。 Checkpoint 其他的属性包括: - - *精确一次(exactly-once)对比至少一次(at-least-once)*:你可以选择向 `enableCheckpointing(long interval, CheckpointingMode mode)` 方法中传入一个模式来选择使用两种保证等级中的哪一种。 + - *checkpoint 存储(storage)*:用户可以设置 checkpoint 快照持久化的位置。 默认情况下,Flink 会存储在 JobManager 的堆中。在生产环境中,建议改用持久化的文件系统。 请参考 [checkpoint 存储]({{< ref "docs/ops/state/checkpoints#checkpoint-storage" >}}) 以获取有关作业级别和集群级别的可用配置的更多详细信息。 + + - *精确一次(exactly-once)对比至少一次(at-least-once)*:你可以选择向 `enableCheckpointing(n)` 方法中传入一个模式来选择使用两种保证等级中的哪一种。 对于大多数应用来说,精确一次是较好的选择。至少一次可能与某些延迟超低(始终只有几毫秒)的应用的关联较大。 - - - *checkpoint 超时*:如果 checkpoint 执行的时间超过了该配置的阈值,还在进行中的 checkpoint 操作就会被抛弃。 - + + - *checkpoint 超时*:如果 checkpoint 执行的时间超过了该配置的阈值,还在进行中的 checkpoint 操作就会被取消。 + - *checkpoints 之间的最小时间*:该属性定义在 checkpoint 之间需要多久的时间,以确保流应用在 checkpoint 之间有足够的进展。如果值设置为了 *5000*, -无论 checkpoint 持续时间与间隔是多久,在前一个 checkpoint 完成时的至少五秒后会才开始下一个 checkpoint。 - -往往使用“checkpoints 之间的最小时间”来配置应用会比 checkpoint 间隔容易很多,因为“checkpoints 之间的最小时间”在 checkpoint 的执行时间超过平均值时不会受到影响(例如如果目标的存储系统忽然变得很慢)。 - +无论 checkpoint 持续时间与间隔是多久,在前一个 checkpoint 完成时的至少五秒后才会开始下一个 checkpoint。 Review comment: ```suggestion 无论 checkpoint 持续时间与间隔是多久,在前一个 checkpoint 完成至少五秒后才会开始下一个 checkpoint。 ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Commented] (FLINK-25962) Flink generated Avro schemas can't be parsed using Python
[ https://issues.apache.org/jira/browse/FLINK-25962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490162#comment-17490162 ] Ryan Skraba commented on FLINK-25962: - I forgot to ask whether this could be assigned to me; the PR is done. This would be a good candidate for the next releases, since these generated schemas are currently broken in all current versions of Avro's Python SDK :/ > Flink generated Avro schemas can't be parsed using Python > - > > Key: FLINK-25962 > URL: https://issues.apache.org/jira/browse/FLINK-25962 > Project: Flink > Issue Type: Bug >Affects Versions: 1.14.3 >Reporter: Ryan Skraba >Priority: Major > Labels: pull-request-available > > Flink currently generates Avro schemas as records with the top-level name > {{"record"}} > Unfortunately, there is some inconsistency between Avro implementations in > different languages that may prevent this record from being read, notably > Python, which generates the error: > *avro.schema.SchemaParseException: record is a reserved type name* > (See the comment on FLINK-18096 for the full stack trace). > The Java SDK accepts this name, and there's an [ongoing > discussion|https://lists.apache.org/thread/0wmgyx6z69gy07lvj9ndko75752b8cn2] > about what the expected behaviour should be. This should be clarified and > fixed in Avro, of course. > Regardless of the resolution, the best practice (which is used almost > everywhere else in the Flink codebase) is to explicitly specify a top-level > namespace for an Avro record. We should use a default like: > {{{}org.apache.flink.avro.generated{}}}. -- This message was sent by Atlassian Jira (v8.20.1#820001)
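Concretely, the best practice the Jira proposes is a generated schema that carries an explicit top-level namespace. A sketch of such a schema — the field layout is illustrative, not what Flink actually generates:

```json
{
  "type": "record",
  "name": "record",
  "namespace": "org.apache.flink.avro.generated",
  "fields": [
    { "name": "f0", "type": "string" }
  ]
}
```

Without the `namespace` entry (and with the reserved top-level name `record`), Avro's Python SDK fails with `avro.schema.SchemaParseException: record is a reserved type name`, as described in the issue.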
[GitHub] [flink] zentol commented on a change in pull request #18678: [FLINK-24474] Default rest.bind-address to localhost in flink-conf.yaml
zentol commented on a change in pull request #18678: URL: https://github.com/apache/flink/pull/18678#discussion_r803605847 ## File path: flink-end-to-end-tests/test-scripts/common_kubernetes.sh ## @@ -225,4 +225,9 @@ function get_host_machine_address { fi } +function set_config_for_kubernetes { +set_config_key "rest.address" "0.0.0.0" +set_config_key "rest.bind-address" "0.0.0.0" Review comment: Ok, fair enough. Do you intend to update the docs in this PR, or in a follow-up? Excluding these options in https://github.com/apache/flink/blob/222b72f2de43bb60b2f4741fde023493967d20f7/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/kubeclient/decorators/FlinkConfMountDecorator.java#L154 was rejected?
[GitHub] [flink] slinkydeveloper commented on a change in pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure
slinkydeveloper commented on a change in pull request #18653: URL: https://github.com/apache/flink/pull/18653#discussion_r803604728 ## File path: flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/UnsignedTypeConversionITCase.java ## @@ -94,70 +103,76 @@ .withDatabaseName(DEFAULT_DB_NAME) .withLogConsumer(new Slf4jLogConsumer(LOGGER)); -private Connection connection; private StreamTableEnvironment tEnv; @Test public void testUnsignedType() throws Exception { prepare(); // write data to db -tEnv.executeSql(String.format("insert into jdbc_sink select %s from data", COLUMNS)) +tEnv.executeSql( +String.format( +"insert into jdbc_sink select %s from data", +String.join(",", COLUMNS))) .await(); // read data from db using jdbc connection and compare -PreparedStatement query = -connection.prepareStatement( -String.format("select %s from %s", COLUMNS, TABLE_NAME)); -ResultSet resultSet = query.executeQuery(); -while (resultSet.next()) { - Assertions.assertThat(resultSet.getObject("tiny_c")).isEqualTo(127); - Assertions.assertThat(resultSet.getObject("tiny_un_c")).isEqualTo(255); - Assertions.assertThat(resultSet.getObject("small_c")).isEqualTo(32767); - Assertions.assertThat(resultSet.getObject("small_un_c")).isEqualTo(65535); - Assertions.assertThat(resultSet.getObject("int_c")).isEqualTo(2147483647); - Assertions.assertThat(resultSet.getObject("int_un_c")).isEqualTo(4294967295L); - Assertions.assertThat(resultSet.getObject("big_c")).isEqualTo(9223372036854775807L); -Assertions.assertThat(resultSet.getObject("big_un_c")) -.isEqualTo(new BigInteger("18446744073709551615")); +try (Connection con = + DriverManager.getConnection(MYSQL_CONTAINER.getJdbcUrl(), USER, PASSWORD); +PreparedStatement ps = +con.prepareStatement( +String.format( +"select %s from %s", +String.join(",", COLUMNS), TABLE_NAME))) { +ResultSet resultSet = ps.executeQuery(); +while (resultSet.next()) { +assertThat(resultSet.getObject("tiny_c")).isEqualTo(127); 
+assertThat(resultSet.getObject("tiny_un_c")).isEqualTo(255); +assertThat(resultSet.getObject("small_c")).isEqualTo(32767); +assertThat(resultSet.getObject("small_un_c")).isEqualTo(65535); +assertThat(resultSet.getObject("int_c")).isEqualTo(2147483647); + assertThat(resultSet.getObject("int_un_c")).isEqualTo(4294967295L); + assertThat(resultSet.getObject("big_c")).isEqualTo(9223372036854775807L); +assertThat(resultSet.getObject("big_un_c")) +.isEqualTo(new BigInteger("18446744073709551615")); +} } // read data from db using flink and compare Iterator collected = -tEnv.executeSql(String.format("select %s from jdbc_source", COLUMNS)).collect(); +tEnv.executeSql( +String.format( +"select %s from jdbc_source", String.join(",", COLUMNS))) +.collect(); List result = CollectionUtil.iteratorToList(collected); -List expected = Collections.singletonList(Row.ofKind(RowKind.INSERT, ROW)); -Assertions.assertThat(result).isEqualTo(expected); - -connection.close(); +assertThat(result).containsOnly(Row.ofKind(RowKind.INSERT, ROW)); } private void prepare() throws Exception { -MYSQL_CONTAINER.start(); - -connection = DriverManager.getConnection(MYSQL_CONTAINER.getJdbcUrl(), USER, PASSWORD); tEnv = StreamTableEnvironment.create(StreamExecutionEnvironment.getExecutionEnvironment()); - createMysqlTable(); createFlinkTable(); prepareData(); } private void createMysqlTable() throws SQLException { -PreparedStatement ddlStatement = -connection.prepareStatement( -"create table " -+ TABLE_NAME -+ " (" -+ " tiny_c TINYINT," -+ " tiny_un_c TINYINT UNSIGNED," -+ " small_c SMALLINT," -+ " small_un_c SMALLINT UNSIGNED," -+ " int_c INTEGER ," -
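The refactor shown (truncated) above replaces a field-held JDBC `Connection`, closed manually at the end of the test, with try-with-resources, which guarantees closing even when an assertion fails mid-test. A self-contained sketch of the mechanism using stand-in resources rather than the JDBC classes themselves:

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {

    static final List<String> CLOSED = new ArrayList<>();

    // Stand-in for Connection / PreparedStatement; any AutoCloseable works.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { CLOSED.add(name); }
    }

    public static List<String> run() {
        CLOSED.clear();
        try (Resource con = new Resource("connection");
                Resource ps = new Resource("statement")) {
            // Work with the resources; even if this block throws, both are closed.
        }
        // Resources close in reverse declaration order: statement, then connection.
        return new ArrayList<>(CLOSED);
    }

    public static void main(String[] args) {
        System.out.println(run()); // [statement, connection]
    }
}
```

This is why the reviewed test no longer needs the trailing `connection.close()` call: the language guarantees cleanup on every exit path.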
[GitHub] [flink] RocMarshal commented on pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure
RocMarshal commented on pull request #18653: URL: https://github.com/apache/flink/pull/18653#issuecomment-1034842632 @slinkydeveloper Thanks for helping improve the PR with your comments. I appreciate your review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #18702: [FLINK-24441][source] Block SourceOperator when watermarks are out of alignment
flinkbot commented on pull request #18702: URL: https://github.com/apache/flink/pull/18702#issuecomment-1034842544 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18699: [FLINK-25937] Restore the environment parallelism before transforming in SinkExpander#expand.
flinkbot edited a comment on pull request #18699: URL: https://github.com/apache/flink/pull/18699#issuecomment-1034581144 ## CI report: * 5b0d16d20a8db053d7aedfd9a4384511a074a5eb Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31097) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701: URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277 ## CI report: * 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107) * a3fdc9763094051769e040d5211db40196fe493c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31116) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296 ## CI report: * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079) * df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31112) * 3cd445e29262a008208a97ce6e2cf80e33f37ba5 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18653: [FLINK-25825][connector-jdbc] MySqlCatalogITCase fails on azure
flinkbot edited a comment on pull request #18653: URL: https://github.com/apache/flink/pull/18653#issuecomment-1032212433 ## CI report: * 9261ad675d1c9e6f0ea94df829f94f25f18bafc4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31072) * 764c5fdf856b0930f7fdf8d470256e84d05a646a UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
flinkbot edited a comment on pull request #16584: URL: https://github.com/apache/flink/pull/16584#issuecomment-885710999 ## CI report: * 3556331fec7ec8ca1139085a136035a45b2d0977 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31101) * Unknown: [CANCELED](TBD) * 82860fda96c0e0d5931be169f1e664ec61349564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31108) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-24441) Block SourceReader when watermarks are out of alignment
[ https://issues.apache.org/jira/browse/FLINK-24441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-24441: --- Labels: pull-request-available (was: ) > Block SourceReader when watermarks are out of alignment > --- > > Key: FLINK-24441 > URL: https://issues.apache.org/jira/browse/FLINK-24441 > Project: Flink > Issue Type: Sub-task > Components: Connectors / Common >Reporter: Piotr Nowojski >Assignee: Piotr Nowojski >Priority: Major > Labels: pull-request-available > Fix For: 1.15.0 > > > SourceReader should become unavailable once its latest watermark is too far > into the future -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [flink] pnowojski opened a new pull request #18702: [FLINK-24441][source] Block SourceOperator when watermarks are out of alignment
pnowojski opened a new pull request #18702: URL: https://github.com/apache/flink/pull/18702 ## What is the purpose of the change This PR integrates with FLINK-24440 and it implements `SourceOperator` level handling of watermark alignment. ## Verifying this change This PR adds a dedicated unit test for the `SourceOperator` changes. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (**yes** / no / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (**yes** / no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / **not documented**) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
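The Jira issue and the PR above describe a SourceReader becoming unavailable once its local watermark runs too far ahead of the slowest source. A minimal sketch of that gating decision, with illustrative names that are not the actual SourceOperator API:

```java
// Hedged sketch of watermark-alignment gating, assuming a periodically
// broadcast global minimum watermark and a configured maximum allowed drift.
public class WatermarkAlignmentSketch {

    // A reader should block (report itself unavailable) once its own watermark
    // has advanced further past the global minimum than the allowed drift.
    static boolean shouldBlock(long localWatermark, long globalMinWatermark, long maxAllowedDriftMs) {
        return localWatermark - globalMinWatermark > maxAllowedDriftMs;
    }

    public static void main(String[] args) {
        System.out.println(shouldBlock(1_000, 0, 500)); // true: 1000ms ahead, only 500ms allowed
        System.out.println(shouldBlock(300, 0, 500));   // false: still within the allowed drift
    }
}
```

Once the global minimum catches up (because the slower sources advance), the same check flips back to false and the operator resumes emitting.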
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701: URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277 ## CI report: * 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107) * a3fdc9763094051769e040d5211db40196fe493c UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296 ## CI report: * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079) * df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31112) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18700: [FLINK-26070][Python] Update dependency numpy to >=1.21.0,<1.22
flinkbot edited a comment on pull request #18700: URL: https://github.com/apache/flink/pull/18700#issuecomment-1034649114 ## CI report: * a03576e0a414443a0174b4f3240aedb5520fe3b5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31093) * 66e884a90f74557b1402737994e86c15ff28cfd7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31115) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18624: [FLINK-25388][table-planner] Add consumedOptions to ExecNodeMetadata
flinkbot edited a comment on pull request #18624: URL: https://github.com/apache/flink/pull/18624#issuecomment-1029176049 ## CI report: * 0b72f73fa08b2f2f8d28518677a8e36f1f348572 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31095) * 1d0ac8779a7961062b728ad0e0dcc8c227dacbe4 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31105) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
flinkbot edited a comment on pull request #16584: URL: https://github.com/apache/flink/pull/16584#issuecomment-885710999 ## CI report: * 3556331fec7ec8ca1139085a136035a45b2d0977 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31101) * Unknown: [CANCELED](TBD) * 82860fda96c0e0d5931be169f1e664ec61349564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31108) * 73d908707e5d3a259b45cb164575edb994421cb6 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] autophagy commented on a change in pull request #18678: [FLINK-24474] Default rest.bind-address to localhost in flink-conf.yaml
autophagy commented on a change in pull request #18678: URL: https://github.com/apache/flink/pull/18678#discussion_r803595185 ## File path: flink-end-to-end-tests/test-scripts/common_kubernetes.sh ## @@ -225,4 +225,9 @@ function get_host_machine_address { fi } +function set_config_for_kubernetes { +set_config_key "rest.address" "0.0.0.0" +set_config_key "rest.bind-address" "0.0.0.0" Review comment: I believe so, yes -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] dmvk commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
dmvk commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803594934 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: I'm not saying there is an issue. I just don't feel comfortable saying there is not, and if Piotr is not comfortable with that on the first sight as well, it's worth giving it a closer look. Unfortunately I don't have enough context in this area to provide more insight myself. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701: URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277 ## CI report: * 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18700: [FLINK-26070][Python] Update dependency numpy to >=1.21.0,<1.22
flinkbot edited a comment on pull request #18700: URL: https://github.com/apache/flink/pull/18700#issuecomment-1034649114 ## CI report: * a03576e0a414443a0174b4f3240aedb5520fe3b5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31093) * 66e884a90f74557b1402737994e86c15ff28cfd7 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18680: [FLINK-25583] Support compacting small files for FileSink.
flinkbot edited a comment on pull request #18680: URL: https://github.com/apache/flink/pull/18680#issuecomment-1033512635 ## CI report: * e50fb694ce8cf0ae952973693bcbf17b1cb75b25 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31006) * d3ecea69aa562e4be5e92a68bfb2617dde45e4e7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31114) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source
flinkbot edited a comment on pull request #18153: URL: https://github.com/apache/flink/pull/18153#issuecomment-997756404 ## CI report: * fc2c5d6d82d21eb7dc97ae6556e797d8fd2675ac Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31039) * f34977eb02ad9dceb02c108cdbf344ebcf75e737 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31113) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] CrynetLogistics commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
CrynetLogistics commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803592814 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: @dmvk Would you mind letting us know what technical issues you feel there are here? I would be happy to address them and make fixes if necessary. To answer your previous question, the semantics are: don't buffer or write anything while there are failed requests waiting to be re-queued, or fatal exceptions waiting to fail the application. If the user has very frequent checkpointing, the async threads will take care of writing to the destination, and the buffering will proceed as normal and not block here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
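The pattern under review, `while (mailboxExecutor.tryYield()) {}`, drains every pending mail before buffering the next element. A minimal model of that behavior (the real `MailboxExecutor` is a Flink runtime interface; the queue below merely stands in for it):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class MailboxDrainSketch {

    final Queue<Runnable> mailbox = new ArrayDeque<>();
    int written = 0;

    // Stands in for MailboxExecutor#tryYield: run one pending mail if there is
    // one, and report whether anything ran.
    boolean tryYield() {
        Runnable mail = mailbox.poll();
        if (mail == null) {
            return false;
        }
        mail.run();
        return true;
    }

    // Mirrors the write() change under discussion: drain all pending mails
    // (e.g. re-enqueues of failed async requests, or a mail that rethrows a
    // stored fatal exception) before handling the new element.
    void write(Object element) {
        while (tryYield()) {}
        written++;
    }
}
```

This illustrates the semantics described in the comment above: nothing is buffered while mails are outstanding, but when the mailbox is empty the loop exits immediately and write() does not block.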
[GitHub] [flink] flinkbot edited a comment on pull request #18700: [FLINK-26070][Python] Update dependency numpy to >=1.21.0,<1.22
flinkbot edited a comment on pull request #18700: URL: https://github.com/apache/flink/pull/18700#issuecomment-1034649114 ## CI report: * a03576e0a414443a0174b4f3240aedb5520fe3b5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31093) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] dannycranmer commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
dannycranmer commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803589880 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: @dmvk ack, we will revisit this and get the PR into a state we are happy with and wait for additional reviews before merging. Thanks for the help -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18692: [FLINK-26015] Fixes object store bug
flinkbot edited a comment on pull request #18692: URL: https://github.com/apache/flink/pull/18692#issuecomment-1034131249 ## CI report: * 7645e2a83b9789816399536fbf369dc65eafcbf7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31092) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18680: [FLINK-25583] Support compacting small files for FileSink.
flinkbot edited a comment on pull request #18680: URL: https://github.com/apache/flink/pull/18680#issuecomment-1033512635 ## CI report: * e50fb694ce8cf0ae952973693bcbf17b1cb75b25 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31006) * d3ecea69aa562e4be5e92a68bfb2617dde45e4e7 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18676: [FLINK-26035][build][planner] Add table-planner-loader-helper module
flinkbot edited a comment on pull request #18676: URL: https://github.com/apache/flink/pull/18676#issuecomment-1033442314 ## CI report: * f76952a792331a65c0152be8bced92e9dbd45659 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31091) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source
flinkbot edited a comment on pull request #18153: URL: https://github.com/apache/flink/pull/18153#issuecomment-997756404 ## CI report: * fc2c5d6d82d21eb7dc97ae6556e797d8fd2675ac Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31039) * f34977eb02ad9dceb02c108cdbf344ebcf75e737 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #18516: [FLINK-25288][tests] add savepoint and metric test cases in source suite of connector testframe
flinkbot edited a comment on pull request #18516: URL: https://github.com/apache/flink/pull/18516#issuecomment-1022021934 ## CI report: * 42bf48fdbba63580ec0d5418579e2e65c413cf38 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30475) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Comment Edited] (FLINK-25875) "StatsDReporter.filterCharacters" handling of special characters is not comprehensive enough
[ https://issues.apache.org/jira/browse/FLINK-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490147#comment-17490147 ] Chesnay Schepler edited comment on FLINK-25875 at 2/10/22, 11:42 AM: - What I'm looking for is a specification of the StatsD protocol that says these characters are invalid. If the StatsD protocol says they are fine, then this is an issue with Graphite. Note that StatsD itself pushed this responsibility into the backends: https://github.com/statsd/statsd/issues/110 was (Author: zentol): What I'm looking for is a specification of the StatsD protocol that says these characters are invalid. If the StatsD protocol says they are fine, then this is an issue with Graphite. Note that StatsD itself pushed this responsibility into the backends: https://github.com/statsd/statsd/issues/110 > "StatsDReporter.filterCharacters" handling of special characters > is not comprehensive enough > > > Key: FLINK-25875 > URL: https://issues.apache.org/jira/browse/FLINK-25875 > Project: Flink > Issue Type: Improvement > Components: Runtime / Metrics >Affects Versions: 1.14.3 >Reporter: 赵富午 >Priority: Major > Labels: pull-request-available > Attachments: image-2022-01-29-11-55-20-400.png > > > I use 「org.apache.flink.metrics.statsd.StatsDReporter」 for metrics collection > and query and display the metrics with 「Graphite」. I found that some Flink > metrics could not be queried; after investigation, the reason is that these > metrics cannot be parsed because they contain space characters. Tracking the > source code further, I found that the 「StatsDReporter.filterCharacters」 > function does not handle special characters adequately: it only replaces the > ":" character and does not replace other special characters, such as spaces. > !image-2022-01-29-11-55-38-064.png!
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics/#garbagecollection] > The metric names in the JVM "GarbageCollection" group contain space > characters, so these metrics cannot be correctly parsed and stored in the > database, which makes them unusable. > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (FLINK-25875) "StatsDReporter. FilterCharacters" for special processing of the characters are comprehensive enough
[ https://issues.apache.org/jira/browse/FLINK-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490147#comment-17490147 ] Chesnay Schepler commented on FLINK-25875: -- What I'm looking for is a specification of the StatsD protocol that says these characters are invalid. If the StatsD protocol says they are fine, then this is an issue with Graphite. Note that StatsD itself pushed this responsibility into the backends: https://github.com/statsd/statsd/issues/110
[GitHub] [flink] XComp commented on pull request #18644: [FLINK-25974] Makes job cancellation rely on JobResultStore entry
XComp commented on pull request #18644: URL: https://github.com/apache/flink/pull/18644#issuecomment-1034821588 I went through the code once more after you raised valid concerns. I reverted my changes and investigated the cancellation code path for the `JobMasterServiceLeadershipRunner`. I noticed one bit which we probably overlooked before: The `SchedulerBase` calls the shutdown on the checkpoint-related resources and expects this operation to succeed (see [SchedulerBase#L666](https://github.com/apache/flink/blob/d8a7704a003528f60238ae40f295d0ad696c2780/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L666)). Otherwise, it fails fatally. This would prevent the retry mechanism from kicking in and instead fail the cluster entirely, AFAIU. `AdaptiveScheduler.closeAsync` is not implemented like that but forwards the future as the result of the `closeAsync` operation. I'm wondering whether we should tackle that as a follow-up task outside of the release work.
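[Editor's note] The behavioral difference XComp describes can be sketched with plain `CompletableFuture`s. The class and method names below are illustrative stand-ins, not the actual Flink code:

```java
import java.util.concurrent.CompletableFuture;

public class CloseAsyncContrast {

    // SchedulerBase-style: treat a failed shutdown of checkpoint-related
    // resources as fatal (in Flink this would escalate to the fatal error
    // handler and bring down the cluster, preempting any retry).
    static CompletableFuture<Void> closeFailingFatally(CompletableFuture<Void> checkpointShutdown) {
        return checkpointShutdown.whenComplete((ignored, error) -> {
            if (error != null) {
                throw new IllegalStateException("fatal: checkpoint shutdown failed", error);
            }
        });
    }

    // AdaptiveScheduler-style: simply forward the (possibly failed) future
    // as the result of closeAsync, leaving the caller free to react.
    static CompletableFuture<Void> closeForwarding(CompletableFuture<Void> checkpointShutdown) {
        return checkpointShutdown;
    }

    public static void main(String[] args) {
        CompletableFuture<Void> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("shutdown error"));

        // Both futures end up exceptional, but only the forwarding variant
        // leaves the failure for the caller (e.g. a retry mechanism) to handle.
        System.out.println(closeForwarding(failed).isCompletedExceptionally());
        System.out.println(closeFailingFatally(failed).isCompletedExceptionally());
    }
}
```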
[GitHub] [flink] pltbkd commented on pull request #18680: [FLINK-25583] Support compacting small files for FileSink.
pltbkd commented on pull request #18680: URL: https://github.com/apache/flink/pull/18680#issuecomment-1034821469 @gaoyunhaii Thanks for the suggestions. The commits are reorganized to be clearer, and about 2/3 of the comments are resolved. I'm still working on the remaining ones.
[GitHub] [flink] dmvk commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
dmvk commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803587452 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: Maybe to rephrase that a bit, so it doesn't come out wrong: we shouldn't rush things that require more context from people that are busy with other efforts, just for the sake of getting something merged before the feature freeze
[GitHub] [flink] alpreu commented on a change in pull request #18153: [FLINK-25568][connectors/elasticsearch] Add Elasticsearch 7 Source
alpreu commented on a change in pull request #18153: URL: https://github.com/apache/flink/pull/18153#discussion_r803586116 ## File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/connector/elasticsearch/source/reader/Elasticsearch7Record.java ## @@ -0,0 +1,36 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.connector.elasticsearch.source.reader; + +import org.apache.flink.annotation.Internal; + +/** The record instance of the Elasticsearch source. */ +@Internal +public class Elasticsearch7Record { Review comment: It turned out to be unnecessary, I'll remove it and use the generic one
[GitHub] [flink] GOODBOY008 commented on a change in pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
GOODBOY008 commented on a change in pull request #18697: URL: https://github.com/apache/flink/pull/18697#discussion_r803584962 ## File path: mvnw ## @@ -0,0 +1,310 @@ +#!/bin/sh +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +#http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + +# +# Maven Start Up Batch script +# +# Required ENV vars: +# -- +# JAVA_HOME - location of a JDK home dir +# +# Optional ENV vars +# - +# M2_HOME - location of maven2's installed home dir +# MAVEN_OPTS - parameters passed to the Java VM when running Maven +# e.g. to debug Maven itself, use +# set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 +# MAVEN_SKIP_RC - flag to disable loading of mavenrc files +# + +if [ -z "$MAVEN_SKIP_RC" ] ; then + + if [ -f /etc/mavenrc ] ; then +. /etc/mavenrc + fi + + if [ -f "$HOME/.mavenrc" ] ; then +. "$HOME/.mavenrc" + fi + +fi + +# OS specific support. $var _must_ be set to either true or false. 
+cygwin=false; +darwin=false; +mingw=false +case "`uname`" in + CYGWIN*) cygwin=true ;; + MINGW*) mingw=true;; + Darwin*) darwin=true +# Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home +# See https://developer.apple.com/library/mac/qa/qa1170/_index.html +if [ -z "$JAVA_HOME" ]; then + if [ -x "/usr/libexec/java_home" ]; then +export JAVA_HOME="`/usr/libexec/java_home`" + else +export JAVA_HOME="/Library/Java/Home" + fi +fi +;; +esac + +if [ -z "$JAVA_HOME" ] ; then + if [ -r /etc/gentoo-release ] ; then +JAVA_HOME=`java-config --jre-home` + fi +fi + +if [ -z "$M2_HOME" ] ; then + ## resolve links - $0 may be a link to maven's home + PRG="$0" + + # need this for relative symlinks + while [ -h "$PRG" ] ; do +ls=`ls -ld "$PRG"` +link=`expr "$ls" : '.*-> \(.*\)$'` +if expr "$link" : '/.*' > /dev/null; then + PRG="$link" +else + PRG="`dirname "$PRG"`/$link" +fi + done + + saveddir=`pwd` + + M2_HOME=`dirname "$PRG"`/.. + + # make it fully qualified + M2_HOME=`cd "$M2_HOME" && pwd` + + cd "$saveddir" + # echo Using m2 at $M2_HOME +fi + +# For Cygwin, ensure paths are in UNIX format before anything is touched +if $cygwin ; then + [ -n "$M2_HOME" ] && +M2_HOME=`cygpath --unix "$M2_HOME"` + [ -n "$JAVA_HOME" ] && +JAVA_HOME=`cygpath --unix "$JAVA_HOME"` + [ -n "$CLASSPATH" ] && +CLASSPATH=`cygpath --path --unix "$CLASSPATH"` +fi + +# For Mingw, ensure paths are in UNIX format before anything is touched +if $mingw ; then + [ -n "$M2_HOME" ] && +M2_HOME="`(cd "$M2_HOME"; pwd)`" + [ -n "$JAVA_HOME" ] && +JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`" +fi + +if [ -z "$JAVA_HOME" ]; then + javaExecutable="`which javac`" + if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then +# readlink(1) is not available as standard on Solaris 10. +readLink=`which readlink` +if [ ! 
`expr "$readLink" : '\([^ ]*\)'` = "no" ]; then + if $darwin ; then +javaHome="`dirname \"$javaExecutable\"`" +javaExecutable="`cd \"$javaHome\" && pwd -P`/javac" + else +javaExecutable="`readlink -f \"$javaExecutable\"`" + fi + javaHome="`dirname \"$javaExecutable\"`" + javaHome=`expr "$javaHome" : '\(.*\)/bin'` + JAVA_HOME="$javaHome" + export JAVA_HOME +fi + fi +fi + +if [ -z "$JAVACMD" ] ; then + if [ -n "$JAVA_HOME" ] ; then +if [ -x "$JAVA_HOME/jre/sh/java" ] ; then + # IBM's JDK on AIX uses strange locations for the executables + JAVACMD="$JAVA_HOME/jre/sh/java" +else + JAVACMD="$JAVA_HOME/bin/java" +fi + else +JAVACMD="`which java`" + fi +fi + +if [ ! -x "$JAVACMD" ] ; then + echo "Error: JAVA_HOME is not defined correctly." >&2 + echo " We cannot execute $JAVACMD" >&2 + exit 1 +fi + +if [ -z "$JAVA_HOME" ] ; then + echo "Warning:
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701: URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277 ## CI report: * 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107)
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296 ## CI report: * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079) * df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31112)
[GitHub] [flink] pltbkd commented on a change in pull request #18680: [FLINK-25583] Support compacting small files for FileSink.
pltbkd commented on a change in pull request #18680: URL: https://github.com/apache/flink/pull/18680#discussion_r803584223 ## File path: flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/sink/compactor/operator/CompactorOperator.java ## @@ -0,0 +1,404 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.flink.connector.file.sink.compactor.operator; + +import org.apache.flink.annotation.VisibleForTesting; +import org.apache.flink.api.common.state.CheckpointListener; +import org.apache.flink.api.common.state.ListState; +import org.apache.flink.api.common.state.ListStateDescriptor; +import org.apache.flink.api.common.typeutils.base.array.BytePrimitiveArraySerializer; +import org.apache.flink.api.java.tuple.Tuple2; +import org.apache.flink.connector.file.sink.FileSink; +import org.apache.flink.connector.file.sink.FileSinkCommittable; +import org.apache.flink.connector.file.sink.compactor.FileCompactor; +import org.apache.flink.core.fs.Path; +import org.apache.flink.core.io.SimpleVersionedSerialization; +import org.apache.flink.core.io.SimpleVersionedSerializer; +import org.apache.flink.core.memory.DataInputDeserializer; +import org.apache.flink.core.memory.DataInputView; +import org.apache.flink.core.memory.DataOutputSerializer; +import org.apache.flink.runtime.state.StateInitializationContext; +import org.apache.flink.runtime.state.StateSnapshotContext; +import org.apache.flink.runtime.util.Hardware; +import org.apache.flink.streaming.api.connector.sink2.CommittableMessage; +import org.apache.flink.streaming.api.connector.sink2.CommittableSummary; +import org.apache.flink.streaming.api.connector.sink2.CommittableWithLineage; +import org.apache.flink.streaming.api.functions.sink.filesystem.BucketWriter; +import org.apache.flink.streaming.api.functions.sink.filesystem.CompactingFileWriter; +import org.apache.flink.streaming.api.functions.sink.filesystem.InProgressFileWriter.PendingFileRecoverable; +import org.apache.flink.streaming.api.operators.AbstractStreamOperator; +import org.apache.flink.streaming.api.operators.BoundedOneInput; +import org.apache.flink.streaming.api.operators.OneInputStreamOperator; +import org.apache.flink.streaming.api.operators.util.SimpleVersionedListState; +import 
org.apache.flink.streaming.runtime.streamrecord.StreamRecord; +import org.apache.flink.util.concurrent.ExecutorThreadFactory; + +import javax.annotation.Nullable; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.NavigableMap; +import java.util.TreeMap; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.stream.Collectors; + +/** + * An operator that perform compaction for the {@link FileSink}. + * + * Requests received from the {@link CompactCoordinator} will firstly be held in memory, and + * snapshot into the state of a checkpoint. When the checkpoint is successfully completed, all + * requests received before can be submitted. The results can be emitted at the next {@link + * #prepareSnapshotPreBarrier} invoking after the compaction is finished, to ensure that committers + * can receive only one CommittableSummary and the corresponding number of Committable for a single + * checkpoint. + */ +public class CompactorOperator +extends AbstractStreamOperator> +implements OneInputStreamOperator< +CompactorRequest, CommittableMessage>, +BoundedOneInput, +CheckpointListener { + +private static final String COMPACTED_PREFIX = "compacted-"; +private static final long SUBMITTED_ID = -1L; + +private static final ListStateDescriptor REMAINING_REQUESTS_RAW_STATES_DESC = +new ListStateDescriptor<>( +"remaining_requests_raw_state", BytePrimitiveArraySerializer.INSTANCE); + +private final int compactThreads; +private final SimpleVersionedSerializer
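[Editor's note] The hold-then-submit pattern described in the `CompactorOperator` Javadoc (buffer requests, snapshot them with a checkpoint, submit only once that checkpoint completes) can be sketched independently of the sink classes. The class below is a simplified illustration, not the Flink implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

public class CheckpointGatedQueue<T> {

    // Requests keyed by the checkpoint id that snapshotted them.
    private final TreeMap<Long, List<T>> pendingPerCheckpoint = new TreeMap<>();

    // Buffer a request under the checkpoint that will snapshot it.
    public void add(long checkpointId, T request) {
        pendingPerCheckpoint.computeIfAbsent(checkpointId, k -> new ArrayList<>()).add(request);
    }

    // On checkpoint completion, everything snapshotted by this or an earlier
    // checkpoint is safe to submit; later requests stay buffered.
    public List<T> onCheckpointComplete(long completedCheckpointId) {
        SortedMap<Long, List<T>> ready = pendingPerCheckpoint.headMap(completedCheckpointId, true);
        List<T> toSubmit = new ArrayList<>();
        ready.values().forEach(toSubmit::addAll);
        ready.clear();
        return toSubmit;
    }

    public static void main(String[] args) {
        CheckpointGatedQueue<String> queue = new CheckpointGatedQueue<>();
        queue.add(1L, "compact-request-a");
        queue.add(2L, "compact-request-b");
        // Only the request snapshotted by checkpoint 1 is released.
        System.out.println(queue.onCheckpointComplete(1L));
    }
}
```

Gating submission on checkpoint completion is what guarantees the committer sees one `CommittableSummary` plus the matching committables per checkpoint.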
[GitHub] [flink] flinkbot edited a comment on pull request #18648: [FLINK-25845][table] Introduce EXECUTE PLAN
flinkbot edited a comment on pull request #18648: URL: https://github.com/apache/flink/pull/18648#issuecomment-1031569879 ## CI report: * 2e7e7a66ce145cfbbe8449e30502f0b2f75c7b90 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31046)
[GitHub] [flink] flinkbot edited a comment on pull request #18644: [FLINK-25974] Makes job cancellation rely on JobResultStore entry
flinkbot edited a comment on pull request #18644: URL: https://github.com/apache/flink/pull/18644#issuecomment-1031433646 ## CI report: * 68a106239b32be93ec21cfa4723b802b34d6aa20 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30848) * dccbae13e8249167ecb5330ee231350bd00b8754 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3)
[GitHub] [flink] GOODBOY008 commented on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
GOODBOY008 commented on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034814261 > Thanks for opening this PR @GOODBOY008. Could you add a short description of what the script does? Is it copied from somewhere or did you develop it from scratch? If yes, then we should link it to make this apparent. As @RyanSkraba said, I just cd'd into the flink workspace, ran `mvn wrapper:wrapper -Dmaven=3.2.5`, removed the file `${workspace}/flink/.mvn/wrapper/maven-wrapper.jar`, and added this file to `.gitignore`.
[GitHub] [flink] CrynetLogistics commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
CrynetLogistics commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803581562 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: Just wanted to back up my feeling that we don't have a busy loop on [tryYield()](https://nightlies.apache.org/flink/flink-docs-master/api/java//org/apache/flink/streaming/runtime/tasks/mailbox/MailboxExecutorImpl.html#tryYield--): given a set number of queued mailbox actions, each call to `tryYield` runs the first one and returns true if it ran something, until there are none left, at which point it returns false.
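[Editor's note] That reading of `tryYield` can be illustrated with a toy mailbox. This is a deliberately simplified stand-in, not Flink's `MailboxExecutorImpl`: each call runs at most one queued action and reports whether it did, so `while (tryYield()) {}` terminates as soon as the queue is drained rather than spinning:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyMailbox {
    private final Deque<Runnable> letters = new ArrayDeque<>();

    public void put(Runnable letter) {
        letters.addLast(letter);
    }

    // Runs one queued letter if present; returns whether anything was run.
    public boolean tryYield() {
        Runnable next = letters.pollFirst();
        if (next == null) {
            return false; // queue empty -> the while-loop exits, no busy spin
        }
        next.run();
        return true;
    }

    public static void main(String[] args) {
        ToyMailbox mailbox = new ToyMailbox();
        int[] ran = {0};
        for (int i = 0; i < 3; i++) {
            mailbox.put(() -> ran[0]++);
        }
        // Drain pattern from AsyncSinkWriter#write: terminates once empty.
        while (mailbox.tryYield()) {}
        System.out.println(ran[0]); // 3
    }
}
```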
[GitHub] [flink] dmvk commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
dmvk commented on a change in pull request #18651: URL: https://github.com/apache/flink/pull/18651#discussion_r803580523 ## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java ## @@ -267,18 +267,31 @@ private void registerCallback() { @Override public void write(InputT element, Context context) throws IOException, InterruptedException { +while (mailboxExecutor.tryYield()) {} Review comment: @dannycranmer It sounds fishy and doesn't align with the quality standards we require for Flink contributions. If you're concerned about the feature freeze, this can still be merged after that as the current implementation is broken.
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701: URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277 ## CI report: * 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107) * a3fdc9763094051769e040d5211db40196fe493c UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18697: [FLINK-26034][Build System]Add maven wrapper for flink
flinkbot edited a comment on pull request #18697: URL: https://github.com/apache/flink/pull/18697#issuecomment-1034510296 ## CI report: * 34a2b1811fd7bc75e2c1e9c8462ab9573d506559 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31079) * df8fc8dfd54ee05dd5ee087be89f2f634dc286d0 UNKNOWN
[GitHub] [flink] zentol commented on a change in pull request #18696: [hotfix][docs] project config pages
zentol commented on a change in pull request #18696: URL: https://github.com/apache/flink/pull/18696#discussion_r803578123 ## File path: docs/content/docs/dev/configuration/testing.md ## @@ -31,8 +31,30 @@ Flink provides utilities for testing your job that you can add as dependencies. You need to add the following dependencies if you want to develop tests for a job built with the DataStream API: +{{< tabs "datastream test" >}} + +{{< tab "Maven" >}} +Open the `pom.xml` file in your project directory and add these dependencies in between the dependencies tab. {{< artifact flink-test-utils withTestScope >}} {{< artifact flink-runtime withTestScope >}} +{{< /tab >}} + +{{< tab "Gradle" >}} +Open the `build.gradle` file in your project directory and add the following in the dependencies block. +```gradle +... +dependencies { +... +flinkShadowJar "org.apache.flink:flink-test-utils:${flinkVersion}" +flinkShadowJar "org.apache.flink:flink-runtime:${flinkVersion}" Review comment: Shouldn't the scope be `testImplementation`? ## File path: docs/content/docs/dev/configuration/testing.md ## @@ -41,7 +63,28 @@ For more information on how to use these utilities, check out the section on [Da If you want to test the Table API & SQL programs locally within your IDE, you can add the following dependency: +{{< tabs "table test" >}} + +{{< tab "Maven" >}} +Open the `pom.xml` file in your project directory and add this dependency in between the dependencies tab. {{< artifact flink-table-test-utils withTestScope >}} +{{< /tab >}} + +{{< tab "Gradle" >}} +Open the `build.gradle` file in your project directory and add the following in the dependencies block. +```gradle +... +dependencies { +... +flinkShadowJar "org.apache.flink:flink-table-test-utils:${flinkVersion}" Review comment: same as above
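[Editor's note] Following zentol's suggestion, the Gradle snippets in the docs would presumably declare test-only dependencies on the `testImplementation` configuration (the Gradle analogue of Maven's `<scope>test</scope>`), so they are neither shaded into the job jar nor leaked onto the runtime classpath. A sketch of what that block could look like, assuming the standard `java` plugin configurations:

```gradle
dependencies {
    // Test-only dependencies belong on testImplementation, not flinkShadowJar.
    testImplementation "org.apache.flink:flink-test-utils:${flinkVersion}"
    testImplementation "org.apache.flink:flink-runtime:${flinkVersion}"
    testImplementation "org.apache.flink:flink-table-test-utils:${flinkVersion}"
}
```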
[jira] [Commented] (FLINK-25875) "StatsDReporter. FilterCharacters" for special processing of the characters are comprehensive enough
[ https://issues.apache.org/jira/browse/FLINK-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490137#comment-17490137 ] 赵富午 commented on FLINK-25875: - In my opinion, we should not only filter space characters, but also filter other special characters. I submitted the code after sorting it out. https://github.com/apache/flink/pull/18557/files
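[Editor's note] The broader filtering 赵富午 proposes could look roughly like the sketch below. This is a hypothetical method, not the actual diff from PR #18557: instead of replacing only ':', replace every character outside a known-safe set so that metric names with spaces (e.g. the JVM GarbageCollection group) survive the StatsD/Graphite pipeline:

```java
public class MetricNameFilter {

    // Replace any character that common StatsD/Graphite backends choke on
    // (spaces, ':', '/', etc.) with '-'; keep alphanumerics, '.', '_' and '-'.
    public static String filterCharacters(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            boolean safe = Character.isLetterOrDigit(c) || c == '.' || c == '_' || c == '-';
            out.append(safe ? c : '-');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The GarbageCollection metrics mentioned in the issue contain spaces:
        System.out.println(filterCharacters("PS Scavenge.Count")); // PS-Scavenge.Count
    }
}
```

Whether '-' is the right replacement (or characters should be dropped entirely) is exactly the protocol question Chesnay raises above, since StatsD delegates character handling to the backends.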
[GitHub] [flink] flinkbot edited a comment on pull request #18644: [FLINK-25974] Makes job cancellation rely on JobResultStore entry
flinkbot edited a comment on pull request #18644: URL: https://github.com/apache/flink/pull/18644#issuecomment-1031433646 ## CI report: * 68a106239b32be93ec21cfa4723b802b34d6aa20 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30848) * dccbae13e8249167ecb5330ee231350bd00b8754 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink
flinkbot edited a comment on pull request #17452: URL: https://github.com/apache/flink/pull/17452#issuecomment-940136217 ## CI report: * b84065861ec8af6e9f35ffdc548ce3203a21534e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30995) * a9c278120fed2705c20d511b70f291a4bc47295f Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31110)
[GitHub] [flink] vahmed-hamdy commented on a change in pull request #18669: [FLINK-25943][connector/common] Add buffered requests to snapshot state in AsyncSyncWriter.
vahmed-hamdy commented on a change in pull request #18669: URL: https://github.com/apache/flink/pull/18669#discussion_r803575849 ## File path: flink-connectors/flink-connector-aws-kinesis-data-streams/src/main/java/org/apache/flink/connector/kinesis/sink/KinesisDataStreamsStateSerializer.java ## @@ -0,0 +1,77 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.connector.kinesis.sink; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.connector.base.sink.writer.AsyncSinkWriterStateSerializer; + +import software.amazon.awssdk.core.SdkBytes; +import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry; + +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.IOException; + +/** Kinesis Streams implementation {@link AsyncSinkWriterStateSerializer}. 
 */
+@Internal
+public class KinesisDataStreamsStateSerializer
+        extends AsyncSinkWriterStateSerializer {
+    @Override
+    protected void serializeRequestToStream(PutRecordsRequestEntry request, DataOutputStream out)
+            throws IOException {
+        out.write(request.data().asByteArrayUnsafe());
+        serializePartitionKeyToStream(request.partitionKey(), out);
+        validateExplicitHashKey(request);
+    }
+
+    protected void serializePartitionKeyToStream(String partitionKey, DataOutputStream out)
+            throws IOException {
+        out.writeInt(partitionKey.length());
+        out.write(partitionKey.getBytes());
+    }
+
+    protected void validateExplicitHashKey(PutRecordsRequestEntry request) {
+        if (request.explicitHashKey() != null) {
+            throw new IllegalStateException(
+                    "Request contains field not included in serialization.");
+        }
+    }
+
+    @Override
+    protected PutRecordsRequestEntry deserializeRequestFromStream(
+            long requestSize, DataInputStream in) throws IOException {
+        byte[] requestData = readBytes(in, (int) requestSize);
+
+        return PutRecordsRequestEntry.builder()
+                .data(SdkBytes.fromByteArray(requestData))
+                .partitionKey(deserializePartitionKeyToStream(in))
+                .build();
+    }
+
+    protected String deserializePartitionKeyToStream(DataInputStream in) throws IOException {
+        int partitionKeyLength = readInt(in);
+        byte[] requestPartitionKeyData = readBytes(in, partitionKeyLength);
+        return new String(requestPartitionKeyData);
+    }

Review comment: It is agreed to remove the byte-stream validation and bubble up the original error.
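The serializer in the diff above uses a length-prefixed layout: write an int length, then the raw bytes, so the reader knows exactly how much to consume. One detail worth checking in such code is that the prefix and the payload measure the same thing; `partitionKey.length()` counts UTF-16 code units while `getBytes()` returns encoded bytes, and the two differ for non-ASCII keys. A minimal standalone sketch of the pattern, using the UTF-8 byte length consistently on both sides (class and method names are illustrative, not part of the Flink connector):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedString {

    // Write the UTF-8 byte length first, then the bytes themselves.
    public static void serialize(String s, DataOutputStream out) throws IOException {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
    }

    // Read the length prefix, then exactly that many bytes.
    public static String deserialize(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Convenience round trip through an in-memory stream.
    public static String roundTrip(String s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            serialize(s, new DataOutputStream(bos));
            return deserialize(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because the prefix is the byte count, the round trip also survives multi-byte characters, which a char-count prefix would not.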
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701:
URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277

## CI report:

* 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107)
[GitHub] [flink] CrynetLogistics commented on a change in pull request #18651: [FLINK-25792][connectors] Only flushing the async sink base if it is …
CrynetLogistics commented on a change in pull request #18651:
URL: https://github.com/apache/flink/pull/18651#discussion_r803573163

## File path: flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java

## @@ -267,18 +267,31 @@ private void registerCallback() {
 @Override
 public void write(InputT element, Context context) throws IOException, InterruptedException {
+while (mailboxExecutor.tryYield()) {}

Review comment: Hi @pnowojski, thanks, just on the questions:
1. There are two scenarios where we enqueue to the mailbox: (1) to handle fatal exceptions and (2) to add any failed request entries back to the buffer. I believe these should take priority over flushing new items.
2. I do agree the line is dubious; perhaps this is more appropriate (the behaviour would remain identical):
```
while (inFlightRequestsCount > 0) {
    mailboxExecutor.yield();
}
```
3. Perhaps I'm mistaken, but I don't believe we have a busy loop here: if `mailboxExecutor.tryYield()` returns true, there is some work to be done elsewhere in the mailbox, and we perform it; otherwise it returns false and the loop ends. I can't see where CPU resources are being wasted.
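The contract argued in point 3 — each `tryYield()` either runs one pending mail and returns true, or returns false when the mailbox is empty, so `while (tryYield()) {}` drains pending work without spinning — can be made concrete with a minimal single-threaded mock. The class and method names here are illustrative assumptions, not Flink's actual `MailboxExecutor`:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal single-threaded mock of a mailbox executor.
public class MiniMailbox {
    private final Deque<Runnable> mails = new ArrayDeque<>();

    // Enqueue a "letter" (e.g. a failed-entry re-buffering action).
    public void execute(Runnable mail) {
        mails.add(mail);
    }

    // Run at most one pending letter. Returns false when the mailbox is
    // empty, so a `while (tryYield()) {}` loop terminates immediately
    // instead of busy-waiting.
    public boolean tryYield() {
        Runnable mail = mails.poll();
        if (mail == null) {
            return false;
        }
        mail.run();
        return true;
    }

    // Drain all pending letters; returns how many were processed.
    public int drain() {
        int processed = 0;
        while (tryYield()) {
            processed++;
        }
        return processed;
    }
}
```

Under this model the `write()` loop costs one pass over whatever mail is already queued and nothing more, which supports the "no busy loop" reading above.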
[jira] [Commented] (FLINK-25875) "StatsDReporter.filterCharacters" does not handle special characters comprehensively enough
[ https://issues.apache.org/jira/browse/FLINK-25875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490134#comment-17490134 ]

赵富午 commented on FLINK-25875:
-

The first characters that need to be filtered are spaces: some of the GarbageCollection metrics contain spaces. Please refer to the attachment.

> "StatsDReporter.filterCharacters" does not handle special characters comprehensively enough
>
> Key: FLINK-25875
> URL: https://issues.apache.org/jira/browse/FLINK-25875
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Metrics
> Affects Versions: 1.14.3
> Reporter: 赵富午
> Priority: Major
> Labels: pull-request-available
> Attachments: image-2022-01-29-11-55-20-400.png
>
> I collect, query, and display metrics based on org.apache.flink.metrics.statsd.StatsDReporter together with Graphite, and I found that some Flink metrics could not be queried. After investigating, I found the cause: these metrics cannot be parsed because their names contain space characters. Tracking the source code further, I found that the StatsDReporter.filterCharacters function does not handle special characters adequately: it only replaces the ":" character and does not replace other special characters, such as spaces.
> !image-2022-01-29-11-55-38-064.png!
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics/#garbagecollection]
> The metrics in the JVM "GarbageCollection" group contain space characters, so they cannot be parsed correctly and stored in the database.

-- This message was sent by Atlassian Jira (v8.20.1#820001)
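The fix the issue asks for amounts to broadening the character filter beyond ":". A hypothetical sketch of such a filter follows; the exact set of unsafe characters and the "-" replacement are assumptions for illustration, not Flink's actual `StatsDReporter` implementation:

```java
// Hypothetical broadened variant of a StatsD/Graphite metric-name filter:
// replaces every character the line protocol cannot parse (spaces, colons,
// pipes, '@', slashes) with a single safe character, instead of only ':'.
public class MetricNameFilter {

    public static String filterCharacters(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case ' ':
                case ':':
                case '|':
                case '@':
                case '/':
                    sb.append('-'); // one safe replacement character
                    break;
                default:
                    sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this, a GC metric name such as "PS MarkSweep.Count" becomes a parseable "PS-MarkSweep.Count" before it is sent to Graphite.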
[GitHub] [flink] flinkbot edited a comment on pull request #17452: [FLINK-20732][connector/pulsar] Introduction of Pulsar Sink
flinkbot edited a comment on pull request #17452:
URL: https://github.com/apache/flink/pull/17452#issuecomment-940136217

## CI report:

* b84065861ec8af6e9f35ffdc548ce3203a21534e Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=30995)
* a9c278120fed2705c20d511b70f291a4bc47295f UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #18701: [FLINK-26071][table-api-java][table-planner] Now Planner#compilePlan fails if the plan cannot be serialized
flinkbot edited a comment on pull request #18701:
URL: https://github.com/apache/flink/pull/18701#issuecomment-1034766277

## CI report:

* 1dd05d2b0f15ae91f251248c9eb6a48cd87fe0ba Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31107)
* a3fdc9763094051769e040d5211db40196fe493c UNKNOWN
[jira] [Commented] (FLINK-26036) LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory timeout on azure
[ https://issues.apache.org/jira/browse/FLINK-26036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490131#comment-17490131 ] Till Rohrmann commented on FLINK-26036: --- Thanks for the pointer [~gaoyunhaii]. I will take another look at the problem. I assume that there is still somehow a call that triggers {{TaskExectuor.freeSlotInternal}}. > LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory > timeout on azure > --- > > Key: FLINK-26036 > URL: https://issues.apache.org/jira/browse/FLINK-26036 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.15.0 >Reporter: Yun Gao >Assignee: Till Rohrmann >Priority: Critical > Labels: pull-request-available, test-stability > Fix For: 1.15.0 > > > {code:java} > 022-02-09T02:18:17.1827314Z Feb 09 02:18:14 [ERROR] > org.apache.flink.test.recovery.LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory > Time elapsed: 62.252 s <<< ERROR! > 2022-02-09T02:18:17.1827940Z Feb 09 02:18:14 > java.util.concurrent.TimeoutException > 2022-02-09T02:18:17.1828450Z Feb 09 02:18:14 at > java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784) > 2022-02-09T02:18:17.1829040Z Feb 09 02:18:14 at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > 2022-02-09T02:18:17.1829752Z Feb 09 02:18:14 at > org.apache.flink.test.recovery.LocalRecoveryITCase.testRecoverLocallyFromProcessCrashWithWorkingDirectory(LocalRecoveryITCase.java:115) > 2022-02-09T02:18:17.1830407Z Feb 09 02:18:14 at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2022-02-09T02:18:17.1830954Z Feb 09 02:18:14 at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2022-02-09T02:18:17.1831582Z Feb 09 02:18:14 at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2022-02-09T02:18:17.1832135Z Feb 09 02:18:14 at > 
java.lang.reflect.Method.invoke(Method.java:498) > 2022-02-09T02:18:17.1832697Z Feb 09 02:18:14 at > org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) > 2022-02-09T02:18:17.1833566Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) > 2022-02-09T02:18:17.1834394Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) > 2022-02-09T02:18:17.1835125Z Feb 09 02:18:14 at > org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) > 2022-02-09T02:18:17.1835875Z Feb 09 02:18:14 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) > 2022-02-09T02:18:17.1836565Z Feb 09 02:18:14 at > org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) > 2022-02-09T02:18:17.1837294Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) > 2022-02-09T02:18:17.1838007Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) > 2022-02-09T02:18:17.1838743Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) > 2022-02-09T02:18:17.1839499Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) > 2022-02-09T02:18:17.1840224Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) > 2022-02-09T02:18:17.1840952Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) > 2022-02-09T02:18:17.1841616Z Feb 09 02:18:14 at > 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) > 2022-02-09T02:18:17.1842257Z Feb 09 02:18:14 at > org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) > 2022-02-09T02:18:17.1842951Z Feb 09 02:18:14 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214) > 2022-02-09T02:18:17.1843681Z Feb 09 02:18:14 at > org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) > 2022-02-09T02:18:17.1844782Z Feb 09 02:18:14 at > org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210) >
[GitHub] [flink] flinkbot edited a comment on pull request #18649: [FLINK-25844][table-api-java] Introduce StatementSet#compilePlan
flinkbot edited a comment on pull request #18649:
URL: https://github.com/apache/flink/pull/18649#issuecomment-1031570072

## CI report:

* 7abbdf54df75a8e9e0041e908ab61d785d800410 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31047)
* 414d103b5b3cd2960b910198c40a7aeb83aed868 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31109)
[GitHub] [flink] flinkbot edited a comment on pull request #16584: [FLINK-22781][table-planner-blink] Fix bug that when implements the L…
flinkbot edited a comment on pull request #16584:
URL: https://github.com/apache/flink/pull/16584#issuecomment-885710999

## CI report:

* 3556331fec7ec8ca1139085a136035a45b2d0977 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31101)
* Unknown: [CANCELED](TBD)
* 82860fda96c0e0d5931be169f1e664ec61349564 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=31108)
[GitHub] [flink] zentol commented on a change in pull request #18692: [FLINK-26015] Fixes object store bug
zentol commented on a change in pull request #18692:
URL: https://github.com/apache/flink/pull/18692#discussion_r803564915

## File path: flink-test-utils-parent/flink-test-utils-junit/src/main/java/org/apache/flink/util/DockerImageVersions.java

## @@ -45,4 +45,6 @@
 public static final String PULSAR = "apachepulsar/pulsar:2.8.0";
 public static final String CASSANDRA_3 = "cassandra:3.0";
+
+public static final String MINIO = "minio/minio:edge";

Review comment: What stability does the "edge" tag provide?