[GitHub] [flink] flinkbot edited a comment on pull request #16698: [FLINK-23602][runtime] org.codehaus.commons.compiler.CompileException…
flinkbot edited a comment on pull request #16698: URL: https://github.com/apache/flink/pull/16698#issuecomment-892376359 ## CI report: * 7b305e3726aa946d5ac25324347e4ce0a1f9a28c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21459) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16658: [FLINK-23560][streaming] Replaced systemTimerService by timerService for launching the throughput
flinkbot edited a comment on pull request #16658: URL: https://github.com/apache/flink/pull/16658#issuecomment-890336326 ## CI report: * 921d56fcaebe0ea44b81d459e617de336508abcc Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21435) * ca08c680712dd780280d86b095c0a180048471e0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21458)
[GitHub] [flink] flinkbot edited a comment on pull request #16640: [FLINK-22891][runtime] Using CompletableFuture to sync the scheduling…
flinkbot edited a comment on pull request #16640: URL: https://github.com/apache/flink/pull/16640#issuecomment-889598196 ## CI report: * 81126114e90f8947dba92e4bc1bf785f6fc86ac5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21370) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21447)
[GitHub] [flink] flinkbot edited a comment on pull request #16620: [FLINK-23246][table-planner] Refactor the time indicator materialization
flinkbot edited a comment on pull request #16620: URL: https://github.com/apache/flink/pull/16620#issuecomment-888120908 ## CI report: * 12c0dbb7a1359053f4804910f58f06434e542dad UNKNOWN * 59644cc0b8b79495818890290190e7cf91caed7d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21135) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21090) * 7af3da3e3b9f75ede4ac13755743320ff8ac5e83 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework
flinkbot edited a comment on pull request #16465: URL: https://github.com/apache/flink/pull/16465#issuecomment-878076127 ## CI report: * 4fb87e1eedfd547c18b9a07cc52779b6c0ac39cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21409) * 56a034b0ebd265f1da25721aac1f30d9d375 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21450) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21460)
[jira] [Commented] (FLINK-23613) debezium and canal support reading metadata op and type
[ https://issues.apache.org/jira/browse/FLINK-23613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392738#comment-17392738 ] Ward Harris commented on FLINK-23613: - [~jark] hi, here is the new issue > debezium and canal support reading metadata op and type > -- > > Key: FLINK-23613 > URL: https://issues.apache.org/jira/browse/FLINK-23613 > Project: Flink > Issue Type: New Feature > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) >Reporter: Ward Harris >Priority: Major > > In our scenario, two types of database data are delivered to the data warehouse: > 1. the first type is exactly the same as the online table > 2. the second type adds two columns to the previous table, representing action_type and action_time respectively, in order to record events > To solve this with Flink SQL, it must be possible to read action_type and action_time from Debezium or Canal metadata. action_time can be read from the ingestion-timestamp metadata, but action_type currently cannot be read from metadata. > The database actions are insert/update/delete, while Flink's RowKind distinguishes insert/update_before/update_after/delete, so exposing action_type as a RowKind would be better for us. At the same time, Flink would need to reset the RowKind to insert when writing this event table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
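The request above can be sketched in Flink SQL. In the DDL below, the `value.ingestion-timestamp` metadata key exists today for the Kafka connector combined with the debezium-json format, while the `value.op` key for the row kind is hypothetical, standing in for exactly what this ticket asks for; the connector properties are illustrative placeholders:

```sql
-- Sketch: action_time is readable today via existing metadata;
-- action_type (the CDC 'op') is the missing piece this ticket requests.
CREATE TABLE change_events (
  goods_id    STRING,
  depot_id    STRING,
  -- exists today for the Kafka + debezium-json combination:
  action_time TIMESTAMP_LTZ(3) METADATA FROM 'value.ingestion-timestamp' VIRTUAL,
  -- hypothetical metadata key, not available yet:
  action_type STRING METADATA FROM 'value.op' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'cdc-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'debezium-json'
);
```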
[jira] [Commented] (FLINK-23590) StreamTaskTest#testProcessWithUnAvailableInput is flaky
[ https://issues.apache.org/jira/browse/FLINK-23590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392737#comment-17392737 ] Zhu Zhu commented on FLINK-23590: - https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21450=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7 > StreamTaskTest#testProcessWithUnAvailableInput is flaky > --- > > Key: FLINK-23590 > URL: https://issues.apache.org/jira/browse/FLINK-23590 > Project: Flink > Issue Type: Bug > Components: Runtime / Task >Affects Versions: 1.14.0 >Reporter: David Morávek >Assignee: Anton Kalashnikov >Priority: Critical > Fix For: 1.14.0 > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21218=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb] > > {code:java} > java.lang.AssertionError: > Expected: a value equal to or greater than <22L> > but: <217391L> was less than <22L> at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > at org.junit.Assert.assertThat(Assert.java:964) > at org.junit.Assert.assertThat(Assert.java:930) > at > org.apache.flink.streaming.runtime.tasks.StreamTaskTest.testProcessWithUnAvailableInput(StreamTaskTest.java:1561) > at jdk.internal.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at java.base/java.lang.Thread.run(Thread.java:829){code}
[GitHub] [flink] zhuzhurk commented on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework
zhuzhurk commented on pull request #16465: URL: https://github.com/apache/flink/pull/16465#issuecomment-892383883 @flinkbot run azure
[jira] [Created] (FLINK-23613) debezium and canal support reading metadata op and type
Ward Harris created FLINK-23613: --- Summary: debezium and canal support reading metadata op and type Key: FLINK-23613 URL: https://issues.apache.org/jira/browse/FLINK-23613 Project: Flink Issue Type: New Feature Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Reporter: Ward Harris In our scenario, two types of database data are delivered to the data warehouse: 1. the first type is exactly the same as the online table; 2. the second type adds two columns to the previous table, representing action_type and action_time respectively, in order to record events. To solve this with Flink SQL, it must be possible to read action_type and action_time from Debezium or Canal metadata. action_time can be read from the ingestion-timestamp metadata, but action_type currently cannot be read from metadata. The database actions are insert/update/delete, while Flink's RowKind distinguishes insert/update_before/update_after/delete, so exposing action_type as a RowKind would be better for us. At the same time, Flink would need to reset the RowKind to insert when writing this event table.
[jira] [Commented] (FLINK-23096) HiveParser could not attach the sessionstate of hive
[ https://issues.apache.org/jira/browse/FLINK-23096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392733#comment-17392733 ] lixu commented on FLINK-23096: -- {code:java} // code placeholder java.lang.IllegalArgumentException: Pathname /C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 from C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 is not a valid DFS filename.java.lang.IllegalArgumentException: Pathname /C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 from C:/Users/merit/AppData/Local/Temp/merit/b6b954c0-78b8-458b-be35-191dcd94d535 is not a valid DFS filename. at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:196) at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105) at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:638) at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:634) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634) at org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:229) at org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:108) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724) {code} > HiveParser could not attach the sessionstate of hive > > > Key: FLINK-23096 > URL: https://issues.apache.org/jira/browse/FLINK-23096 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive >Affects Versions: 1.13.1 >Reporter: shizhengchao >Assignee: shizhengchao >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0, 1.13.2 > > > My sql code is as follows: > {code:java} > // code placeholder > CREATE CATALOG myhive WITH ( > 'type' = 'hive', > 'default-database' = 'default', > 
'hive-conf-dir' = '/home/service/upload-job-file/1624269463008' > ); > use catalog hive; > set 'table.sql-dialect' = 'hive'; > create view if not exists view_test as > select > cast(goods_id as string) as goods_id, > cast(depot_id as string) as depot_id, > cast(product_id as string) as product_id, > cast(tenant_code as string) as tenant_code > from edw.dim_yezi_whse_goods_base_info/*+ > OPTIONS('streaming-source.consume-start-offset'='dayno=20210621') */; > {code} > and the exception is as follows: > {code:java} > // code placeholder > org.apache.flink.client.program.ProgramInvocationException: The main method > caused an error: Conf non-local session path expected to be non-null > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) > at > org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812) > at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246) > at > org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132) > at > org.apache.flink.client.cli.CliFrontend$$Lambda$68/330382173.call(Unknown > Source) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext$$Lambda$69/680712932.run(Unknown > Source) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132) > Caused by: java.lang.NullPointerException: Conf non-local session path > expected to be non-null > at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208) > at > org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:669) > at > org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:376) > at > org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:219) > at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724) > at >
[GitHub] [flink] curcur edited a comment on pull request #16653: [FLINK-23558][streaming] Ignoring RejectedExecutionException during s…
curcur edited a comment on pull request #16653: URL: https://github.com/apache/flink/pull/16653#issuecomment-892379997 @akalash I've looked through the code again. I think we should probably also keep ``` timersFinishedFuture.get(); systemTimersFinishedFuture.get(); ``` in the task thread as well, for safety. I mean: we move these two lines into the actionExecutor and keep the same copy in their original place. For a normal task the actionExecutor runs on the task thread itself, but there are also cases (like sources) that use a different thread. The task thread should also wait for these two futures to complete.
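The double wait described above can be sketched with plain `CompletableFuture`s. This is an illustration under stated assumptions, not Flink's actual shutdown code: the future and executor names mirror the comment, and the single-thread executor stands in for an actionExecutor that may run on a thread other than the task thread:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TimerShutdownSketch {
    public static void main(String[] args) throws Exception {
        // Stand-ins for timersFinishedFuture / systemTimersFinishedFuture.
        CompletableFuture<Void> timersFinishedFuture = new CompletableFuture<>();
        CompletableFuture<Void> systemTimersFinishedFuture = new CompletableFuture<>();

        // For a normal task the actionExecutor runs on the task thread itself;
        // for sources it can be a different thread, hence the duplicated wait.
        ExecutorService actionExecutor = Executors.newSingleThreadExecutor();

        // Simulate the timer services finishing.
        timersFinishedFuture.complete(null);
        systemTimersFinishedFuture.complete(null);

        // Wait inside the actionExecutor ...
        actionExecutor.submit(() -> {
            timersFinishedFuture.join();
            systemTimersFinishedFuture.join();
        }).get();

        // ... and keep the same wait on the task (caller) thread for safety.
        timersFinishedFuture.join();
        systemTimersFinishedFuture.join();

        actionExecutor.shutdown();
        actionExecutor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("both waits completed");
    }
}
```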
[jira] [Commented] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)"
[ https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392730#comment-17392730 ] Yao Zhang commented on FLINK-23609: --- Hi [~xiaojin.wy], The exception is probably caused by passing a negative value to natural logarithm (LN). Please fix this and run it again. > Codegen error of "Infinite or NaN at > java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)" > --- > > Key: FLINK-23609 > URL: https://issues.apache.org/jira/browse/FLINK-23609 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.0 >Reporter: xiaojin.wy >Priority: Major > > {code:java} > CREATE TABLE database5_t2 ( > `c0` DECIMAL > ) WITH ( > 'connector' = 'filesystem', > 'format' = 'testcsv', > 'path' = '$resultPath33' > ) > CREATE TABLE database5_t3 ( > `c0` STRING , `c1` INTEGER , `c2` STRING , `c3` BIGINT > ) WITH ( > 'connector' = 'filesystem', > 'format' = 'testcsv', > 'path' = '$resultPath33' > ) > INSERT OVERWRITE database5_t2(c0) VALUES(1969075679) > INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES ('yaW鉒', -943510659, > '1970-01-20 09:49:24', 1941473165), ('2#融', 1174376063, '1969-12-21 > 09:54:49', 1941473165), ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', > 1222780269) > SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0 FROM database5_t2, > database5_t3 > WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 > AS DOUBLE ))) AS DOUBLE > AND ((LN(CAST (-351648321 AS DOUBLE AS BOOLEAN) GROUP BY database5_t2.c0 > ORDER BY database5_t2.c0 > {code} > Running the sql above, you will get the error: > {code:java} > java.lang.NumberFormatException: Infinite or NaN > at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898) > at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:875) > at > org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202) > at > org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759) > at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699) > at > org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152) > at > org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) > at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) > at > org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) > at > org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) > at > org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127) > at > org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) > at > org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62) > at > org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58) > at > scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) > at > scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) > at scala.collection.Iterator$class.foreach(Iterator.scala:891) > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) > at scala.collection.AbstractIterable.foreach(Iterable.scala:54) > at > scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) > at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) > at > 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57) > at > org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87) > at > org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58) > at >
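The failure mode in the stack trace above can be reproduced with plain Java, independent of Flink: `LN` of a negative argument evaluates to `NaN` under IEEE-754 rules, and `BigDecimal`'s double constructor rejects non-finite input, which is what the expression reducer hits while constant-folding the predicate. A minimal sketch:

```java
import java.math.BigDecimal;

public class InfiniteOrNan {
    public static void main(String[] args) {
        // LN of a negative number is NaN in IEEE-754 arithmetic.
        double v = Math.log(-351648321.0);
        System.out.println(Double.isNaN(v)); // true

        try {
            // new BigDecimal(double) throws for NaN and infinities --
            // the same "Infinite or NaN" message seen in the report.
            new BigDecimal(v);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // Infinite or NaN
        }
    }
}
```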
[GitHub] [flink] curcur edited a comment on pull request #16685: [FLINK-23279][tests] Randomly use Changelog Backend in tests
curcur edited a comment on pull request #16685: URL: https://github.com/apache/flink/pull/16685#issuecomment-892362947 LGTM overall, thanks @rkhachatryan for enabling the tests! Please take a look at my inline comments. Also, before accepting it: 1. We should run the tests with `flag = on` to make sure all related tests pass. I think that's what you've already done, but I want to double-check. 2. We should also update the configuration description of `public static final ConfigOption STATE_CHANGE_LOG_STORAGE` in `CheckpointingOptions` to include "filesystem"; right now it mentions "memory" only.
[GitHub] [flink] flinkbot commented on pull request #16698: [FLINK-23602][runtime] org.codehaus.commons.compiler.CompileException…
flinkbot commented on pull request #16698: URL: https://github.com/apache/flink/pull/16698#issuecomment-892376359 ## CI report: * 7b305e3726aa946d5ac25324347e4ce0a1f9a28c UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16658: [FLINK-23560][streaming] Replaced systemTimerService by timerService for launching the throughput
flinkbot edited a comment on pull request #16658: URL: https://github.com/apache/flink/pull/16658#issuecomment-890336326 ## CI report: * 921d56fcaebe0ea44b81d459e617de336508abcc Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21435) * ca08c680712dd780280d86b095c0a180048471e0 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline
flinkbot edited a comment on pull request #15924: URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851 ## CI report: * c95109768facc0535e3ca1b9da56cf4197fb4ba9 UNKNOWN * 67e73451c554e31d4d9986cf08fc100248a9bc73 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21403) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21441) Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21436) * 7bb0fbc1f358cb9db212024348e6ba58418095b1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21456)
[GitHub] [flink] flinkbot edited a comment on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
flinkbot edited a comment on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-753633967 ## CI report: * b1fe24bab5f3a3588e594ed41932c41cc87bd069 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21445)
[GitHub] [flink] flinkbot commented on pull request #16698: [FLINK-23602][runtime] org.codehaus.commons.compiler.CompileException…
flinkbot commented on pull request #16698: URL: https://github.com/apache/flink/pull/16698#issuecomment-892367572 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 7b305e3726aa946d5ac25324347e4ce0a1f9a28c (Wed Aug 04 05:09:44 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-23602).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalData
[ https://issues.apache.org/jira/browse/FLINK-23602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-23602: --- Labels: pull-request-available (was: ) > org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No > applicable constructor/method found for actual parameters > "org.apache.flink.table.data.DecimalData > - > > Key: FLINK-23602 > URL: https://issues.apache.org/jira/browse/FLINK-23602 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.0 >Reporter: xiaojin.wy >Priority: Major > Labels: pull-request-available > > {code:java} > CREATE TABLE database5_t2 ( > `c0` DECIMAL , `c1` BIGINT > ) WITH ( > 'connector' = 'filesystem', > 'format' = 'testcsv', > 'path' = '$resultPath33' > ) > INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), > (-1070424438, -1787215649) > SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS > STRING AND > (('-727278084') IN (database5_t2.c0, '0.9996987230442536')) AS DOUBLE )) AS > ref0 > FROM database5_t2 GROUP BY database5_t2.c1 ORDER BY database5_t2.c1 > {code} > Running the sql above, will generate the error of this: > {code:java} > java.util.concurrent.ExecutionException: > org.apache.flink.table.api.TableException: Failed to wait job finish > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > at > org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129) > at > org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92) > at > org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > at > 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) > at > com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230) > at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58) > Caused by: org.apache.flink.table.api.TableException: Failed to wait job > finish > at >
[GitHub] [flink] paul8263 opened a new pull request #16698: [FLINK-23602][runtime] org.codehaus.commons.compiler.CompileException…
paul8263 opened a new pull request #16698: URL: https://github.com/apache/flink/pull/16698 …: Line 84, Column 78: No applicable constructor/method found for actual parameters org.apache.flink.table.data.DecimalData ## What is the purpose of the change Fixed No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalData, org.apache.flink.table.data.binary.BinaryStringData" when SQL needs implicit comparison between Decimal and string literal. ## Brief change log - flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/data/DecimalDataUtils.java ## Verifying this change This change added tests and can be verified as follows: - testCompareToBinaryStringData method in flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/data/DecimalDataTest.java ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
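The semantics the PR describes — an implicit comparison between a decimal and a string literal — can be sketched in plain Java with `java.math.BigDecimal`. This is an illustrative stand-in under stated assumptions, not Flink's actual `DecimalDataUtils` API (which operates on `DecimalData` and `BinaryStringData`); the helper name and the parse-then-compare behavior are assumptions for the sketch.

```java
import java.math.BigDecimal;

// Hypothetical sketch of the comparison the PR adds: a decimal compared against a
// string literal by first parsing the literal into a decimal. Plain-JDK stand-in;
// error handling for unparsable strings is omitted for brevity.
final class DecimalCompareSketch {

    static int compare(BigDecimal decimal, String literal) {
        // BigDecimal.compareTo ignores scale, so "1.5" and "1.50" compare equal,
        // giving numeric (not lexicographic) comparison semantics.
        return decimal.compareTo(new BigDecimal(literal));
    }
}
```

The key design point is that the comparison is numeric: comparing the decimal's string form lexicographically would, for example, order "10" before "2".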
[GitHub] [flink] curcur commented on pull request #16685: [FLINK-23279][tests] Randomly use Changelog Backend in tests
curcur commented on pull request #16685: URL: https://github.com/apache/flink/pull/16685#issuecomment-892362947 LGTM overall, thanks @rkhachatryan for enabling the tests! Please take a look at my inline comments. Also, before accepting it: 1. We should run the test with `flag = on` to make sure all related tests pass. I think that's what you've already done, but I want to double-check. 2. We should also change the configuration setting description in `CheckpointingOptions` (`public static final ConfigOption STATE_CHANGE_LOG_STORAGE`) to include "filesystem". Right now it "includes memory only". -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16697: [FLINK-23587] Set Deployment's Annotation when using native kubernetes
flinkbot edited a comment on pull request #16697: URL: https://github.com/apache/flink/pull/16697#issuecomment-892340685 ## CI report: * 2039e73b02de7485b105be09146ab50b627084c0 UNKNOWN * 274516bb1510a46a87a39e207666e3cfffc9859b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21455) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on a change in pull request #16685: [FLINK-23279][tests] Randomly use Changelog Backend in tests
curcur commented on a change in pull request #16685: URL: https://github.com/apache/flink/pull/16685#discussion_r682282987

## File path: flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/streaming/util/TestStreamEnvironment.java

    @@ -92,10 +94,18 @@ public static void setAsContext(
                         Duration.ofSeconds(2));
             }
             if (STATE_CHANGE_LOG_CONFIG.equalsIgnoreCase(STATE_CHANGE_LOG_CONFIG_ON)) {
    -            conf.set(CheckpointingOptions.ENABLE_STATE_CHANGE_LOG, true);
    +            if (isConfigurationSupportedByChangelog(miniCluster.getConfiguration())) {
    +                conf.set(CheckpointingOptions.ENABLE_STATE_CHANGE_LOG, true);
    +            }
             } else if (STATE_CHANGE_LOG_CONFIG.equalsIgnoreCase(STATE_CHANGE_LOG_CONFIG_RAND)) {
    -            randomize(conf, CheckpointingOptions.ENABLE_STATE_CHANGE_LOG, true, false);
    +            if (isConfigurationSupportedByChangelog(miniCluster.getConfiguration())) {
    +                randomize(
    +                        conf,
    +                        CheckpointingOptions.ENABLE_STATE_CHANGE_LOG,
    +                        true,
    +                        false);
    +            }
             }

Review comment: This seems to check only `LOCAL_RECOVERY` as configured at the mini-cluster level, since it checks against `miniCluster.getConfiguration()`. What if local recovery is configured at another level, e.g. locally? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
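The guard being reviewed reduces to a small decision: randomize the changelog flag only when the configuration supports it, otherwise leave it at its default. A plain-Java sketch of that logic (names hypothetical; the real check lives in `TestStreamEnvironment` and, as the reviewer notes, inspects only the mini-cluster configuration):

```java
import java.util.Random;

// Illustrative stand-in for the guarded randomization in the diff above:
// the changelog flag is randomized only when the configuration supports it;
// otherwise the option is left off.
final class ChangelogRandomizeSketch {

    static boolean chooseChangelogFlag(boolean supportedByChangelog, Random rnd) {
        if (supportedByChangelog) {
            // Mirrors randomize(conf, ENABLE_STATE_CHANGE_LOG, true, false):
            // pick one of the two alternatives at random.
            return rnd.nextBoolean();
        }
        return false; // unsupported combination (e.g. local recovery on): keep changelog off
    }
}
```

With a fixed seed the choice is reproducible, which is what lets randomized test configurations be re-run deterministically.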
[GitHub] [flink] flinkbot edited a comment on pull request #16697: [FLINK-23587] Set Deployment's Annotation when using native kubernetes
flinkbot edited a comment on pull request #16697: URL: https://github.com/apache/flink/pull/16697#issuecomment-892340685 ## CI report: * 2039e73b02de7485b105be09146ab50b627084c0 UNKNOWN * 274516bb1510a46a87a39e207666e3cfffc9859b UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16645: [FLINK-23539][metrics-influxdb] InfluxDBReporter should filter charac…
flinkbot edited a comment on pull request #16645: URL: https://github.com/apache/flink/pull/16645#issuecomment-889643808 ## CI report: * cbbae90ac287c25df1d73ca53c6ece3a1032bf7d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21380) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21444) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21418) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
flinkbot edited a comment on pull request #16606: URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748 ## CI report: * 264be5cc6a0485171413099e8b64b9e917d06e85 UNKNOWN * 356eac1b45f73fa1f32a9a3fe5650821015c08c6 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21433) * e47b1ded46ebf510d1d62bc10584fb3c934afda1 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21454) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline
flinkbot edited a comment on pull request #15924: URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851 ## CI report: * c95109768facc0535e3ca1b9da56cf4197fb4ba9 UNKNOWN * 67e73451c554e31d4d9986cf08fc100248a9bc73 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21403) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21441) Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21436) * 7bb0fbc1f358cb9db212024348e6ba58418095b1 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-22334) Fail to translate the hive-sql in STREAMING mode
[ https://issues.apache.org/jira/browse/FLINK-22334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shengkai Fang updated FLINK-22334: -- Description: Please run in the streaming mode. The failed statement {code:java} // Some comments here insert into dest(y,x) select x,y from foo cluster by x {code} Exception stack: {code:java} org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=LOGICAL, FlinkRelDistributionTraitDef=any, MiniBatchIntervalTraitDef=None: 0, ModifyKindSetTraitDef=[NONE], UpdateKindTraitDef=[NONE]. Missing conversion is LogicalDistribution[convention: NONE -> LOGICAL] There is 1 empty subset: rel#5176:RelSubset#43.LOGICAL.any.None: 0.[NONE].[NONE], the relevant part of the original plan is as follows 5174:LogicalDistribution(collation=[[0 ASC-nulls-first]], dist=[[]]) 5172:LogicalProject(subset=[rel#5173:RelSubset#42.NONE.any.None: 0.[NONE].[NONE]], x=[$0]) 5106:LogicalTableScan(subset=[rel#5171:RelSubset#41.NONE.any.None: 0.[NONE].[NONE]], table=[[myhive, default, foo]]) Root: rel#5176:RelSubset#43.LOGICAL.any.None: 0.[NONE].[NONE] Original rel: FlinkLogicalLegacySink(subset=[rel#4254:RelSubset#8.LOGICAL.any.None: 0.[NONE].[NONE]], name=[collect], fields=[_o__c0]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4276 FlinkLogicalCalc(subset=[rel#4275:RelSubset#7.LOGICAL.any.None: 0.[NONE].[NONE]], select=[CASE(IS NULL($f1), 0:BIGINT, $f1) AS _o__c0]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4288 FlinkLogicalJoin(subset=[rel#4272:RelSubset#6.LOGICAL.any.None: 0.[NONE].[NONE]], condition=[=($0, $1)], joinType=[left]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0856463237676364E8 cpu, 4.0856463237676364E8 io, 0.0 network, 0.0 memory}, id = 4271 FlinkLogicalTableSourceScan(subset=[rel#4270:RelSubset#1.LOGICAL.any.None: 
0.[NONE].[NONE]], table=[[myhive, default, bar, project=[i]]], fields=[i]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 4.0E8 io, 0.0 network, 0.0 memory}, id = 4279 FlinkLogicalAggregate(subset=[rel#4268:RelSubset#5.LOGICAL.any.None: 0.[NONE].[NONE]], group=[{1}], agg#0=[COUNT($0)]): rowcount = 8564632.376763644, cumulative cost = {9.0E7 rows, 1.89E8 cpu, 7.2E8 io, 0.0 network, 0.0 memory}, id = 4286 FlinkLogicalCalc(subset=[rel#4283:RelSubset#3.LOGICAL.any.None: 0.[NONE].[NONE]], select=[x, y], where=[IS NOT NULL(y)]): rowcount = 9.0E7, cumulative cost = {9.0E7 rows, 0.0 cpu, 0.0 io, 0.0 network, 0.0 memory}, id = 4282 FlinkLogicalTableSourceScan(subset=[rel#4262:RelSubset#2.LOGICAL.any.None: 0.[NONE].[NONE]], table=[[myhive, default, foo]], fields=[x, y]): rowcount = 1.0E8, cumulative cost = {1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 memory}, id = 4261 Sets: Set#41, type: RecordType(INTEGER x, INTEGER y) rel#5171:RelSubset#41.NONE.any.None: 0.[NONE].[NONE], best=null rel#5106:LogicalTableScan.NONE.any.None: 0.[NONE].[NONE](table=[myhive, default, foo]), rowcount=1.0E8, cumulative cost={inf} rel#5179:RelSubset#41.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#5178 rel#5178:FlinkLogicalTableSourceScan.LOGICAL.any.None: 0.[NONE].[NONE](table=[myhive, default, foo],fields=x, y), rowcount=1.0E8, cumulative cost={1.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 memory} Set#42, type: RecordType(INTEGER x) rel#5173:RelSubset#42.NONE.any.None: 0.[NONE].[NONE], best=null rel#5172:LogicalProject.NONE.any.None: 0.[NONE].[NONE](input=RelSubset#5171,inputs=0), rowcount=1.0E8, cumulative cost={inf} rel#5180:LogicalTableScan.NONE.any.None: 0.[NONE].[NONE](table=[myhive, default, foo, project=[x]]), rowcount=1.0E8, cumulative cost={inf} rel#5182:LogicalCalc.NONE.any.None: 0.[NONE].[NONE](input=RelSubset#5171,expr#0..1={inputs},0=$t0), rowcount=1.0E8, cumulative cost={inf} rel#5184:RelSubset#42.LOGICAL.any.None: 0.[NONE].[NONE], best=rel#5183 
rel#5183:FlinkLogicalTableSourceScan.LOGICAL.any.None: 0.[NONE].[NONE](table=[myhive, default, foo, project=[x]],fields=x), rowcount=1.0E8, cumulative cost={1.0E8 rows, 1.0E8 cpu, 4.0E8 io, 0.0 network, 0.0 memory} rel#5185:FlinkLogicalCalc.LOGICAL.any.None: 0.[NONE].[NONE](input=RelSubset#5179,select=x), rowcount=1.0E8, cumulative cost={2.0E8 rows, 1.0E8 cpu, 8.0E8 io, 0.0 network, 0.0 memory} Set#43, type: RecordType(INTEGER x) rel#5175:RelSubset#43.NONE.any.None: 0.[NONE].[NONE], best=null rel#5174:LogicalDistribution.NONE.any.None: 0.[NONE].[NONE](input=RelSubset#5173,collation=[0 ASC-nulls-first],dist=[]), rowcount=1.0E8, cumulative cost={inf} rel#5176:RelSubset#43.LOGICAL.any.None:
[GitHub] [flink] flinkbot edited a comment on pull request #16695: [FLINK-21116][runtime] Harden DefaultDispatcherRunnerITCase#leaderCha…
flinkbot edited a comment on pull request #16695: URL: https://github.com/apache/flink/pull/16695#issuecomment-892178848 ## CI report: * 5fdeb3228f1dc9e24d9bc2aaf77425ffc7f14e78 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21442) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16697: [FLINK-23587] Set Deployment's Annotation when using native kubernetes
flinkbot commented on pull request #16697: URL: https://github.com/apache/flink/pull/16697#issuecomment-892340685 ## CI report: * 2039e73b02de7485b105be09146ab50b627084c0 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16630: [FLINK-23531][table]Allow skip all change log for row-time deduplicate mini-batch
flinkbot edited a comment on pull request #16630: URL: https://github.com/apache/flink/pull/16630#issuecomment-17597 ## CI report: * 5b6bb3991aceb44fd454d6b4fe6665ed8298166b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21394) Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21387) * 2d838f19f7c38f48b3942e2aa65269ee99cdd44d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21452) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16638: [FLINK-23513] Remove legacy descriptors
flinkbot edited a comment on pull request #16638: URL: https://github.com/apache/flink/pull/16638#issuecomment-889186834 ## CI report: * 73bacada917f02fb4f0a9b83c4c327ad0e6a7476 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21443) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
flinkbot edited a comment on pull request #16606: URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748 ## CI report: * 264be5cc6a0485171413099e8b64b9e917d06e85 UNKNOWN * 356eac1b45f73fa1f32a9a3fe5650821015c08c6 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21433) * e47b1ded46ebf510d1d62bc10584fb3c934afda1 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline
flinkbot edited a comment on pull request #15924: URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851 ## CI report: * c95109768facc0535e3ca1b9da56cf4197fb4ba9 UNKNOWN * 67e73451c554e31d4d9986cf08fc100248a9bc73 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21403) Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21441) Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21436) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-19276) Allow to read metadata for Debezium format
[ https://issues.apache.org/jira/browse/FLINK-19276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392694#comment-17392694 ] Ward Harris commented on FLINK-19276: - [~jark] ok > Allow to read metadata for Debezium format > -- > > Key: FLINK-19276 > URL: https://issues.apache.org/jira/browse/FLINK-19276 > Project: Flink > Issue Type: Sub-task > Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table > SQL / Ecosystem >Reporter: Timo Walther >Assignee: Timo Walther >Priority: Major > Labels: pull-request-available > Fix For: 1.12.0 > > > Expose the metadata mentioned in FLIP-107 for Debezium format. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] wangyang0918 commented on pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
wangyang0918 commented on pull request #16674: URL: https://github.com/apache/flink/pull/16674#issuecomment-892337249 Closed for #16697. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #16697: [FLINK-23587] Set Deployment's Annotation when using native kubernetes
flinkbot commented on pull request #16697: URL: https://github.com/apache/flink/pull/16697#issuecomment-892336506 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit fee0d7f7f9d359d82697281141a2e27a0567e47e (Wed Aug 04 03:39:54 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] KarlManong opened a new pull request #16697: [FLINK-23587] Set Deployment's Annotation when using native kubernetes
KarlManong opened a new pull request #16697: URL: https://github.com/apache/flink/pull/16697 What is the purpose of the change This pull request applies the configured annotations to the JobManager Deployment, in the same way they are already applied to the Pods, when deploying with native Kubernetes. Brief change log The JobManager Deployment will pick up the annotations configured via "kubernetes.jobmanager.annotations". Verifying this change This change is already covered by existing tests, such as org.apache.flink.kubernetes.kubeclient.factory.KubernetesJobManagerFactoryTest. Does this pull request potentially affect one of the following parts: Anything that affects deployment or recovery: JobManager, Kubernetes Documentation Does this pull request introduce a new feature? (yes) If yes, how is the feature documented? (docs) "kubernetes.jobmanager.annotations" will now also set the JobManager Deployment's annotations. @wangyang0918 I recreated the PR, please review again. Thank you! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
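For context on the option involved: map-typed Flink options such as `kubernetes.jobmanager.annotations` are commonly written as comma-separated `key:value` pairs. The sketch below shows how such a value decomposes into a map; the parsing is a simplified plain-Java stand-in (it ignores quoting and escaping), not Flink's own map-option handling, and the example annotation keys are just illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of how a "kubernetes.jobmanager.annotations"-style value decomposes,
// e.g. "prometheus.io/scrape:true,prometheus.io/port:9249".
// Simplified stand-in for Flink's real map-option parsing.
final class AnnotationsValueSketch {

    static Map<String, String> parse(String value) {
        Map<String, String> annotations = new LinkedHashMap<>();
        for (String pair : value.split(",")) {
            String[] kv = pair.split(":", 2); // split on the first ':' only
            annotations.put(kv[0].trim(), kv[1].trim());
        }
        return annotations;
    }
}
```

The point of the PR is that this parsed map should end up on the Deployment's metadata as well, not only on the Pod template.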
[jira] [Closed] (FLINK-23172) Links to Task Failure Recovery page on Configuration page are broken
[ https://issues.apache.org/jira/browse/FLINK-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhu Zhu closed FLINK-23172. --- Resolution: Fixed Fixed via 5183b2af9d467708725bd1454a671bc7689159a5 46bf6d68ee97684949ba3ad38dc18ff7c800092a > Links to Task Failure Recovery page on Configuration page are broken > > > Key: FLINK-23172 > URL: https://issues.apache.org/jira/browse/FLINK-23172 > Project: Flink > Issue Type: Bug > Components: Documentation >Affects Versions: 1.14.0 >Reporter: Zhilong Hong >Assignee: Zhilong Hong >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0 > > > The links to [Task Failure > Recovery|https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/state/task_failure_recovery/] > page inside [Fault > Tolerance|https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#fault-tolerance] > section and [Advanced Fault Tolerance > Options|https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#advanced-fault-tolerance-options] > section on the > [Configuration|https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#fault-tolerance/] > page are broken. > Let's take an example. In the description of {{restart-strategy}}, currently > the link of {{fixed-delay}} refers to > [https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/dev/task_failure_recovery.html#fixed-delay-restart-strategy], > which doesn't exist and would head to 404 error. The correct link is > [https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/state/task_failure_recovery/#fixed-delay-restart-strategy]. > The links are located in {{RestartStrategyOptions.java}} and > {{JobManagerOptions.java}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zhuzhurk closed pull request #16624: [FLINK-23172][docs] Fix broken links to Task Failure Recovery page on Configuration page
zhuzhurk closed pull request #16624: URL: https://github.com/apache/flink/pull/16624
[GitHub] [flink] flinkbot edited a comment on pull request #16696: Port FLINK-23529 to 1.13
flinkbot edited a comment on pull request #16696: URL: https://github.com/apache/flink/pull/16696#issuecomment-892319172 ## CI report: * 21941f92e977b4fd1df115fef1122aab4428d196 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21451)
[GitHub] [flink] flinkbot edited a comment on pull request #16630: [FLINK-23531][table]Allow skip all change log for row-time deduplicate mini-batch
flinkbot edited a comment on pull request #16630: URL: https://github.com/apache/flink/pull/16630#issuecomment-17597 ## CI report: * 5b6bb3991aceb44fd454d6b4fe6665ed8298166b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21394) Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21387) * 2d838f19f7c38f48b3942e2aa65269ee99cdd44d UNKNOWN
[GitHub] [flink] beyond1920 commented on a change in pull request #16620: [FLINK-23246][table-planner] Refactor the time indicator materialization
beyond1920 commented on a change in pull request #16620: URL: https://github.com/apache/flink/pull/16620#discussion_r682258760 ## File path: flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/RelTimeIndicatorConverter.java ## @@ -544,11 +595,110 @@ private RelNode convertAggInput(Aggregate agg) { .collect(Collectors.toList()); } +private FlinkLogicalWindowAggregate visitWindowAggregate(FlinkLogicalWindowAggregate agg) { +RelNode newInput = convertAggInput(agg); +List updatedAggCalls = convertAggregateCalls(agg); +LogicalWindow oldWindow = agg.getWindow(); +Seq oldNamedProperties = agg.getNamedProperties(); +FieldReferenceExpression oldTimeAttribute = agg.getWindow().timeAttribute(); +LogicalType oldTimeAttributeType = oldTimeAttribute.getOutputDataType().getLogicalType(); +boolean isRowtimeIndicator = LogicalTypeChecks.isRowtimeAttribute(oldTimeAttributeType); +boolean convertedToRowtimeTimestampLtz; +if (!isRowtimeIndicator) { +convertedToRowtimeTimestampLtz = false; +} else { +int timeIndicatorIdx = oldTimeAttribute.getFieldIndex(); +RelDataType oldType = + agg.getInput().getRowType().getFieldList().get(timeIndicatorIdx).getType(); +RelDataType newType = + newInput.getRowType().getFieldList().get(timeIndicatorIdx).getType(); +convertedToRowtimeTimestampLtz = +isTimestampLtzType(newType) && !isTimestampLtzType(oldType); +} +LogicalWindow newWindow; +Seq newNamedProperties; +if (convertedToRowtimeTimestampLtz) { +// MATCH_ROWTIME may be converted from rowtime attribute to timestamp_ltz rowtime +// attribute, if time indicator of current window aggregate depends on input +// MATCH_ROWTIME, we should rewrite logicalWindow and namedProperties. 
+LogicalType newTimestampLtzType = +new LocalZonedTimestampType( +oldTimeAttributeType.isNullable(), TimestampKind.ROWTIME, 3); +FieldReferenceExpression newFieldRef = +new FieldReferenceExpression( +oldTimeAttribute.getName(), +fromLogicalTypeToDataType(newTimestampLtzType), +oldTimeAttribute.getInputIndex(), +oldTimeAttribute.getFieldIndex()); +PlannerWindowReference newAlias = +new PlannerWindowReference( +oldWindow.aliasAttribute().getName(), newTimestampLtzType); +if (oldWindow instanceof TumblingGroupWindow) { +TumblingGroupWindow window = (TumblingGroupWindow) oldWindow; +newWindow = new TumblingGroupWindow(newAlias, newFieldRef, window.size()); +} else if (oldWindow instanceof SlidingGroupWindow) { +SlidingGroupWindow window = (SlidingGroupWindow) oldWindow; +newWindow = +new SlidingGroupWindow( +newAlias, newFieldRef, window.size(), window.slide()); +} else if (oldWindow instanceof SessionGroupWindow) { +SessionGroupWindow window = (SessionGroupWindow) oldWindow; +newWindow = new SessionGroupWindow(newAlias, newFieldRef, window.gap()); +} else { +throw new TableException( +String.format( +"This is a bug and should not happen. Please file an issue. Invalid window %s.", +oldWindow.getClass().getSimpleName())); +} +List newNamedPropertiesList = + JavaConverters.seqAsJavaListConverter(oldNamedProperties).asJava().stream() +.map( +namedProperty -> { +if (namedProperty.getProperty() +instanceof PlannerRowtimeAttribute) { +return new PlannerNamedWindowProperty( +namedProperty.getName(), +new PlannerRowtimeAttribute(newAlias)); +} else { +return namedProperty; +} +}) +.collect(Collectors.toList()); +newNamedProperties = + JavaConverters.iterableAsScalaIterableConverter(newNamedPropertiesList) +.asScala() +.toSeq(); +} else { +
[GitHub] [flink] beyond1920 commented on a change in pull request #16620: [FLINK-23246][table-planner] Refactor the time indicator materialization
beyond1920 commented on a change in pull request #16620: URL: https://github.com/apache/flink/pull/16620#discussion_r682258450 ## File path: flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/calcite/RelTimeIndicatorConverter.java ## @@ -544,11 +595,110 @@ private RelNode convertAggInput(Aggregate agg) { .collect(Collectors.toList()); } +private FlinkLogicalWindowAggregate visitWindowAggregate(FlinkLogicalWindowAggregate agg) { Review comment: MatchRecognizeTest#testMatchRecognizeOnRowtimeLTZ
[jira] [Commented] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalDa
[ https://issues.apache.org/jira/browse/FLINK-23602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392686#comment-17392686 ] Yao Zhang commented on FLINK-23602: --- Hi [~xiaojin.wy], The type of database5_t2.c0 in your DDL is DECIMAL. If you want to test whether a string literal is in a list that contains DECIMAL values, Flink has to compare them, but the comparison method with the correct parameter types is not provided. If this feature is required, I can help fix it. Could you please assign this to me? > org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No > applicable constructor/method found for actual parameters > "org.apache.flink.table.data.DecimalData > - > > Key: FLINK-23602 > URL: https://issues.apache.org/jira/browse/FLINK-23602 > Project: Flink > Issue Type: Bug > Components: Table SQL / Runtime >Affects Versions: 1.14.0 >Reporter: xiaojin.wy >Priority: Major > > {code:java} > CREATE TABLE database5_t2 ( > `c0` DECIMAL , `c1` BIGINT > ) WITH ( > 'connector' = 'filesystem', > 'format' = 'testcsv', > 'path' = '$resultPath33' > ) > INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), > (-1070424438, -1787215649) > SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS > STRING AND > (('-727278084') IN (database5_t2.c0, '0.9996987230442536')) AS DOUBLE )) AS > ref0 > FROM database5_t2 GROUP BY database5_t2.c1 ORDER BY database5_t2.c1 > {code} > Running the SQL above will generate this error: > {code:java} > java.util.concurrent.ExecutionException: > org.apache.flink.table.api.TableException: Failed to wait job finish > at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > at > org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129) > at > org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92) > at >
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > at 
org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at org.junit.runner.JUnitCore.run(JUnitCore.java:137) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33) > at >
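To illustrate Yao Zhang's point above: evaluating a string literal IN a list that mixes DECIMAL and STRING operands requires converting both sides to a common type before comparing. A minimal, hypothetical Java sketch using plain BigDecimal (this is not Flink's actual generated code; the class and method names are invented for illustration):

```java
import java.math.BigDecimal;

public class DecimalInListSketch {
    // Hypothetical illustration: test whether a string literal matches any
    // element of a list mixing DECIMAL (BigDecimal) and STRING values, by
    // normalizing every operand to BigDecimal before comparing.
    static boolean inList(String literal, Object... candidates) {
        BigDecimal needle = new BigDecimal(literal);
        for (Object candidate : candidates) {
            BigDecimal value =
                    (candidate instanceof BigDecimal)
                            ? (BigDecimal) candidate
                            : new BigDecimal(candidate.toString());
            // compareTo, not equals: ignores scale differences (1.0 vs 1.00)
            if (needle.compareTo(value) == 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(inList("-727278084", new BigDecimal("-727278084"), "0.9996987230442536"));
        System.out.println(inList("1.5", new BigDecimal("2")));
    }
}
```

The missing generated method in the report above is, conceptually, this normalization step for the DecimalData operand.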
[jira] [Created] (FLINK-23612) SELECT ROUND(CAST(1.2345 AS FLOAT), 1) cannot compile
Caizhi Weng created FLINK-23612: --- Summary: SELECT ROUND(CAST(1.2345 AS FLOAT), 1) cannot compile Key: FLINK-23612 URL: https://issues.apache.org/jira/browse/FLINK-23612 Project: Flink Issue Type: Improvement Components: Table SQL / Runtime Affects Versions: 1.14.0 Reporter: Caizhi Weng Run this SQL {{SELECT ROUND(CAST(1.2345 AS FLOAT), 1)}} and the following exception will be thrown: {code} java.lang.RuntimeException: Could not instantiate generated class 'ExpressionReducer$2' at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75) at org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699) at org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306) at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127) at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58) at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at scala.collection.immutable.List.foreach(List.scala:392) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77) at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282) at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165) at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeQueryOperation(TableEnvironmentImpl.java:833) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1301) at org.apache.flink.table.api.internal.TableImpl.execute(TableImpl.java:601) at org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) at
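Conceptually, the expression reducer that fails above only needs a float-to-n-digits rounding routine. A hypothetical Java sketch of what ROUND(CAST(1.2345 AS FLOAT), 1) should evaluate to (not Flink's actual code generation; the class and method names are invented):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundFloatSketch {
    // Hypothetical helper: round a float to `places` decimal digits via
    // BigDecimal, so ROUND(CAST(1.2345 AS FLOAT), 1) reduces to 1.2f.
    static float round(float value, int places) {
        return BigDecimal.valueOf(value)
                .setScale(places, RoundingMode.HALF_UP)
                .floatValue();
    }

    public static void main(String[] args) {
        System.out.println(round(1.2345f, 1)); // prints 1.2
    }
}
```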
[GitHub] [flink] flinkbot edited a comment on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework
flinkbot edited a comment on pull request #16465: URL: https://github.com/apache/flink/pull/16465#issuecomment-878076127 ## CI report: * 4fb87e1eedfd547c18b9a07cc52779b6c0ac39cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21409) * 56a034b0ebd265f1da25721aac1f30d9d375 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21450)
[jira] [Comment Edited] (FLINK-23449) YarnTaskExecutorRunner does not contains MapReduce classes
[ https://issues.apache.org/jira/browse/FLINK-23449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392680#comment-17392680 ] Kai Chen edited comment on FLINK-23449 at 8/4/21, 3:10 AM: --- [~gaborgsomogyi] [~trohrmann] I'm not set on a code change. It was just a little incompatible and confusing when upgrading from flink-1.10 to later versions, when I followed the official guide and used the flink hive connector. Flink-1.10 still uses flink-shaded-hadoop-2-uber.jar, which includes hadoop-mapreduce-client-core.jar. And starting from Flink 1.11, using flink-shaded-hadoop-2-uber releases is not officially supported by the Flink project, and thus hadoop-mapreduce-client-core.jar is not included in the JM/TM classpath. On the other hand, users follow [the documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/yarn] , set up HADOOP_CLASSPATH and usually take it for granted that HADOOP_CLASSPATH is added to the JM/TM classpath. That's why I changed the code and added MAPREDUCE_APPLICATION_CLASSPATH. As I said, I'm not set on a code change. Updating/extending the documentation looks good to me, too. was (Author: yuchuanchen): [~gaborgsomogyi] [~trohrmann] I'm not set on a code change. It was just a little incompatible and confusing when upgrading from flink-1.10 to later versions, when I followed the official guide and used the flink hive connector. Flink-1.10 still uses flink-shaded-hadoop-2-uber.jar, which includes hadoop-mapreduce-client-core.jar. And starting from Flink 1.11, using flink-shaded-hadoop-2-uber releases is not officially supported by the Flink project, and thus hadoop-mapreduce-client-core.jar is not included in the JM/TM classpath. On the other hand, users follow [the documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/yarn] , set up HADOOP_CLASSPATH and usually take it for granted that HADOOP_CLASSPATH is added to the JM/TM classpath.
That's why I changed the code and added MAPREDUCE_APPLICATION_CLASSPATH. As I said, I'm not set on a code change. Better documentation looks good to me, too. > YarnTaskExecutorRunner does not contains MapReduce classes > --- > > Key: FLINK-23449 > URL: https://issues.apache.org/jira/browse/FLINK-23449 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Deployment / YARN >Affects Versions: 1.11.3 > Environment: flink-1.11 > flink on yarn cluster > jdk1.8 > hive1.2.1 > hadoop2.7 > hadoop classes is provided with {{export HADOOP_CLASSPATH=`hadoop classpath` > when submitting test APP. (described in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/hadoop.html] > )}} > {{}} >Reporter: Kai Chen >Priority: Major > Labels: pull-request-available > > I followed instructions described in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/hive] > and tested hive streaming sink, met this exception > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.mapred.JobConf > [http://apache-flink.147419.n8.nabble.com/Flink-td7866.html] met the same > problem. > > I checked TM jvm envs and the code and found that flink only set up > YARN_APPLICATION_CLASSPATH, but without MAPREDUCE_APPLICATION_CLASSPATH. > See: > [https://github.com/apache/flink/blob/ed39fb2efc790af038c1babd4a48847b7b39f91e/flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java#L119] > > I think we should add MAPREDUCE_APPLICATION_CLASSPATH as well, as the same as > spark does.
[jira] [Commented] (FLINK-23449) YarnTaskExecutorRunner does not contains MapReduce classes
[ https://issues.apache.org/jira/browse/FLINK-23449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392680#comment-17392680 ] Kai Chen commented on FLINK-23449: -- [~gaborgsomogyi] [~trohrmann] I'm not set on a code change. It was just a little incompatible and confusing when upgrading from flink-1.10 to later versions, when I followed the official guide and used the flink hive connector. Flink-1.10 still uses flink-shaded-hadoop-2-uber.jar, which includes hadoop-mapreduce-client-core.jar. And starting from Flink 1.11, using flink-shaded-hadoop-2-uber releases is not officially supported by the Flink project, and thus hadoop-mapreduce-client-core.jar is not included in the JM/TM classpath. On the other hand, users follow [the documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/yarn] , set up HADOOP_CLASSPATH and usually take it for granted that HADOOP_CLASSPATH is added to the JM/TM classpath. That's why I changed the code and added MAPREDUCE_APPLICATION_CLASSPATH. As I said, I'm not set on a code change. Better documentation looks good to me, too. > YarnTaskExecutorRunner does not contains MapReduce classes > --- > > Key: FLINK-23449 > URL: https://issues.apache.org/jira/browse/FLINK-23449 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive, Deployment / YARN >Affects Versions: 1.11.3 > Environment: flink-1.11 > flink on yarn cluster > jdk1.8 > hive1.2.1 > hadoop2.7 > hadoop classes is provided with {{export HADOOP_CLASSPATH=`hadoop classpath` > when submitting test APP.
(described in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/hadoop.html] > )}} > {{}} >Reporter: Kai Chen >Priority: Major > Labels: pull-request-available > > I followed instructions described in > [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/hive] > and tested hive streaming sink, met this exception > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.mapred.JobConf > [http://apache-flink.147419.n8.nabble.com/Flink-td7866.html] met the same > problem. > > I checked TM jvm envs and the code and found that flink only set up > YARN_APPLICATION_CLASSPATH, but without MAPREDUCE_APPLICATION_CLASSPATH. > See: > [https://github.com/apache/flink/blob/ed39fb2efc790af038c1babd4a48847b7b39f91e/flink-yarn/src/main/java/org/apache/flink/yarn/Utils.java#L119] > > I think we should add MAPREDUCE_APPLICATION_CLASSPATH as well, as the same as > spark does.
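The change Kai Chen describes can be sketched as follows. This is a hypothetical illustration only: the real fix would live in org.apache.flink.yarn.Utils and use the Hadoop YarnConfiguration/MRJobConfig default classpath constants rather than plain string lists.

```java
import java.util.ArrayList;
import java.util.List;

public class YarnClasspathSketch {
    // Hypothetical sketch: assemble the container CLASSPATH from both the
    // YARN and the MapReduce default application classpaths, instead of the
    // YARN entries alone. The two lists stand in for
    // YarnConfiguration.YARN_APPLICATION_CLASSPATH and
    // MRJobConfig.DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH.
    static String buildContainerClasspath(List<String> yarnEntries, List<String> mapReduceEntries) {
        List<String> all = new ArrayList<>(yarnEntries);
        all.addAll(mapReduceEntries); // the proposed addition
        return String.join(":", all);
    }

    public static void main(String[] args) {
        String cp = buildContainerClasspath(
                List.of("$HADOOP_CONF_DIR", "$HADOOP_COMMON_HOME/share/hadoop/common/*"),
                List.of("$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*"));
        System.out.println(cp);
    }
}
```

With only the first list, classes such as org.apache.hadoop.mapred.JobConf are missing from the container classpath, which matches the ClassNotFoundException reported above.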
[GitHub] [flink] wangyang0918 commented on pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
wangyang0918 commented on pull request #16674: URL: https://github.com/apache/flink/pull/16674#issuecomment-892323911 @KarlManong It seems that I do not have permission to push commits to the master branch of `g...@github.com:KarlManong/flink.git`. Have you unchecked "Allow edits by maintainers"? Another suggestion: usually we create a new branch for opening a PR instead of using the master branch. That's why I closed this PR accidentally.
[jira] [Created] (FLINK-23611) YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots hang
Xintong Song created FLINK-23611: Summary: YARNSessionCapacitySchedulerITCase.testVCoresAreSetCorrectlyAndJobManagerHostnameAreShownInWebInterfaceAndDynamicPropertiesAndYarnApplicationNameAndTaskManagerSlots hangs on azure Key: FLINK-23611 URL: https://issues.apache.org/jira/browse/FLINK-23611 Project: Flink Issue Type: Bug Components: Deployment / YARN Affects Versions: 1.12.5 Reporter: Xintong Song Fix For: 1.12.6 https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21439=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=28959
[jira] [Commented] (FLINK-22885) Support 'SHOW COLUMNS'.
[ https://issues.apache.org/jira/browse/FLINK-22885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392665#comment-17392665 ] Roc Marshal commented on FLINK-22885: - Could someone please help me move this JIRA forward? Should it be closed, or should I continue to improve it? Thank you very much. > Support 'SHOW COLUMNS'. > --- > > Key: FLINK-22885 > URL: https://issues.apache.org/jira/browse/FLINK-22885 > Project: Flink > Issue Type: New Feature > Components: Table SQL / API >Reporter: Roc Marshal >Priority: Major > > h1. Support 'SHOW COLUMNS'. > SHOW COLUMNS ( FROM | IN ) <table_name> [LIKE <pattern>]
[GitHub] [flink] wangyang0918 closed pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
wangyang0918 closed pull request #16674: URL: https://github.com/apache/flink/pull/16674
[GitHub] [flink] flinkbot commented on pull request #16696: Port FLINK-23529 to 1.13
flinkbot commented on pull request #16696: URL: https://github.com/apache/flink/pull/16696#issuecomment-892319172 ## CI report: * 21941f92e977b4fd1df115fef1122aab4428d196 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
flinkbot edited a comment on pull request #16674: URL: https://github.com/apache/flink/pull/16674#issuecomment-890963322 ## CI report: * 425e513893d86249eb9efe3d380a88bb36fd2878 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21408) * fee0d7f7f9d359d82697281141a2e27a0567e47e UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework
flinkbot edited a comment on pull request #16465: URL: https://github.com/apache/flink/pull/16465#issuecomment-878076127 ## CI report: * 4fb87e1eedfd547c18b9a07cc52779b6c0ac39cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21409) * 56a034b0ebd265f1da25721aac1f30d9d375 UNKNOWN
[jira] [Created] (FLINK-23610) DefaultSchedulerTest.testProducedPartitionRegistrationTimeout fails on azure
Xintong Song created FLINK-23610:
------------------------------------

             Summary: DefaultSchedulerTest.testProducedPartitionRegistrationTimeout fails on azure
                 Key: FLINK-23610
                 URL: https://issues.apache.org/jira/browse/FLINK-23610
             Project: Flink
          Issue Type: Bug
          Components: Runtime / Coordination
    Affects Versions: 1.14.0
            Reporter: Xintong Song
             Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438&view=logs&j=b0a398c0-685b-599c-eb57-c8c2a771138e&t=747432ad-a576-5911-1e2a-68c6bedc248a&l=7834

{code}
Aug 03 23:05:35 [ERROR] Tests run: 40, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.43 s <<< FAILURE! - in org.apache.flink.runtime.scheduler.DefaultSchedulerTest
Aug 03 23:05:35 [ERROR] testProducedPartitionRegistrationTimeout(org.apache.flink.runtime.scheduler.DefaultSchedulerTest)  Time elapsed: 0.137 s  <<< FAILURE!
Aug 03 23:05:35 java.lang.AssertionError:
Aug 03 23:05:35
Aug 03 23:05:35 Expected: a collection with size <2>
Aug 03 23:05:35      but: collection size was <0>
Aug 03 23:05:35 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Aug 03 23:05:35 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
Aug 03 23:05:35 	at org.apache.flink.runtime.scheduler.DefaultSchedulerTest.testProducedPartitionRegistrationTimeout(DefaultSchedulerTest.java:1391)
Aug 03 23:05:35 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Aug 03 23:05:35 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Aug 03 23:05:35 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Aug 03 23:05:35 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
Aug 03 23:05:35 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
Aug 03 23:05:35 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
Aug 03 23:05:35 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
Aug 03 23:05:35 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
Aug 03 23:05:35 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
Aug 03 23:05:35 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
Aug 03 23:05:35 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Aug 03 23:05:35 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
Aug 03 23:05:35 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
Aug 03 23:05:35 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 03 23:05:35 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
Aug 03 23:05:35 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 03 23:05:35 	at org.junit.runners.Suite.runChild(Suite.java:128)
Aug 03 23:05:35 	at org.junit.runners.Suite.runChild(Suite.java:27)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Aug 03 23:05:35 	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Aug 03 23:05:35 	at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
{code}
[jira] [Commented] (FLINK-23590) StreamTaskTest#testProcessWithUnAvailableInput is flaky
[ https://issues.apache.org/jira/browse/FLINK-23590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392662#comment-17392662 ]

Xintong Song commented on FLINK-23590:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438&view=logs&j=a549b384-c55a-52c0-c451-00e0477ab6db&t=eef5922c-08d9-5ba3-7299-8393476594e7&l=9087

> StreamTaskTest#testProcessWithUnAvailableInput is flaky
> -------------------------------------------------------
>
>                 Key: FLINK-23590
>                 URL: https://issues.apache.org/jira/browse/FLINK-23590
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Task
>    Affects Versions: 1.14.0
>            Reporter: David Morávek
>            Assignee: Anton Kalashnikov
>            Priority: Critical
>             Fix For: 1.14.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21218&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=54421a62-0c80-5aad-3319-094ff69180bb]
>
> {code:java}
> java.lang.AssertionError:
> Expected: a value equal to or greater than <22L>
>      but: <217391L> was less than <22L>
> 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> 	at org.junit.Assert.assertThat(Assert.java:964)
> 	at org.junit.Assert.assertThat(Assert.java:930)
> 	at org.apache.flink.streaming.runtime.tasks.StreamTaskTest.testProcessWithUnAvailableInput(StreamTaskTest.java:1561)
> 	at jdk.internal.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> 	at java.base/java.lang.Thread.run(Thread.java:829){code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-22342) FlinkKafkaProducerITCase fails with producer leak
[ https://issues.apache.org/jira/browse/FLINK-22342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xintong Song updated FLINK-22342:
---------------------------------
    Affects Version/s: 1.14.0

> FlinkKafkaProducerITCase fails with producer leak
> -------------------------------------------------
>
>                 Key: FLINK-22342
>                 URL: https://issues.apache.org/jira/browse/FLINK-22342
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.11.3, 1.14.0
>            Reporter: Dawid Wysakowicz
>            Priority: Major
>              Labels: auto-deprioritized-critical, auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16732&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20&l=6386
>
> {code}
> [ERROR] testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)  Time elapsed: 8.854 s  <<< FAILURE!
> java.lang.AssertionError: Detected producer leak. Thread name: kafka-producer-network-thread | producer-MockTask-002a002c-11
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:728)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-22342) FlinkKafkaProducerITCase fails with producer leak
[ https://issues.apache.org/jira/browse/FLINK-22342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xintong Song updated FLINK-22342:
---------------------------------
    Labels: test-stability  (was: auto-deprioritized-critical auto-deprioritized-major test-stability)

> FlinkKafkaProducerITCase fails with producer leak
> -------------------------------------------------
>
>                 Key: FLINK-22342
>                 URL: https://issues.apache.org/jira/browse/FLINK-22342
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.11.3, 1.14.0
>            Reporter: Dawid Wysakowicz
>            Priority: Major
>              Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16732&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20&l=6386
>
> {code}
> [ERROR] testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)  Time elapsed: 8.854 s  <<< FAILURE!
> java.lang.AssertionError: Detected producer leak. Thread name: kafka-producer-network-thread | producer-MockTask-002a002c-11
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:728)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-22342) FlinkKafkaProducerITCase fails with producer leak
[ https://issues.apache.org/jira/browse/FLINK-22342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xintong Song updated FLINK-22342:
---------------------------------
    Priority: Major  (was: Minor)

> FlinkKafkaProducerITCase fails with producer leak
> -------------------------------------------------
>
>                 Key: FLINK-22342
>                 URL: https://issues.apache.org/jira/browse/FLINK-22342
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.11.3
>            Reporter: Dawid Wysakowicz
>            Priority: Major
>              Labels: auto-deprioritized-critical, auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16732&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20&l=6386
>
> {code}
> [ERROR] testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)  Time elapsed: 8.854 s  <<< FAILURE!
> java.lang.AssertionError: Detected producer leak. Thread name: kafka-producer-network-thread | producer-MockTask-002a002c-11
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:728)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (FLINK-22342) FlinkKafkaProducerITCase fails with producer leak
[ https://issues.apache.org/jira/browse/FLINK-22342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392659#comment-17392659 ]

Xintong Song commented on FLINK-22342:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438&view=logs&j=1fc6e7bf-633c-5081-c32a-9dea24b05730&t=576aba0a-d787-51b6-6a92-cf233f360582&l=7442

> FlinkKafkaProducerITCase fails with producer leak
> -------------------------------------------------
>
>                 Key: FLINK-22342
>                 URL: https://issues.apache.org/jira/browse/FLINK-22342
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.11.3
>            Reporter: Dawid Wysakowicz
>            Priority: Minor
>              Labels: auto-deprioritized-critical, auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16732&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20&l=6386
>
> {code}
> [ERROR] testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)  Time elapsed: 8.854 s  <<< FAILURE!
> java.lang.AssertionError: Detected producer leak. Thread name: kafka-producer-network-thread | producer-MockTask-002a002c-11
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.checkProducerLeak(FlinkKafkaProducerITCase.java:728)
> 	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducerITCase.java:381)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
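[Editor's note] The `checkProducerLeak` assertion quoted in the FLINK-22342 reports above amounts to scanning the live JVM threads after a test and failing if any Kafka producer network thread is still alive. A minimal standalone sketch of that idea (hypothetical class and method names, not Flink's actual test code):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of a producer-leak check in the spirit of
// FlinkKafkaProducerITCase#checkProducerLeak: after a test finishes, no
// "kafka-producer-network-thread" should remain alive in the JVM.
public class ProducerLeakCheck {

    // Collect names of all live threads that look like Kafka producer network threads.
    static List<String> findLeakedProducerThreads() {
        return Thread.getAllStackTraces().keySet().stream()
                .map(Thread::getName)
                .filter(name -> name.contains("kafka-producer-network-thread"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> leaked = findLeakedProducerThreads();
        if (!leaked.isEmpty()) {
            // Mirrors the AssertionError message seen in the report.
            throw new AssertionError("Detected producer leak. Thread names: " + leaked);
        }
        System.out.println("No producer leak detected");
    }
}
```

In the real test this kind of check runs in a cleanup/verification step, so a producer that was not closed (for example after a scale-down without a checkpoint) keeps its network thread alive and trips the assertion.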
[jira] [Commented] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
[ https://issues.apache.org/jira/browse/FLINK-23608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392660#comment-17392660 ]

张祥兵 commented on FLINK-23608:
-----------------------------

[~jark] I know the problem may be in the packaging, which is officially obtained. I am using Flink version 1.9.0. What should I do?

> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23608
>                 URL: https://issues.apache.org/jira/browse/FLINK-23608
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.0
>            Reporter: 张祥兵
>            Priority: Blocker
>
> It runs fine in IDEA, but reports the following error when run on Flink:
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
> 	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
> 	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> 	at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
> 	at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
> 	at org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toJobGraph(JarHandlerUtils.java:126)
> 	at org.apache.flink.runtime.webmonitor.handlers.JarPlanHandler.lambda$handleRequest$1(JarPlanHandler.java:100)
> 	at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> 	at java.lang.Thread.run(Unknown Source)
> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
> 	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
> 	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
> 	at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
> 	at com.bing.flink.controller.TestKafkaFlink.main(TestKafkaFlink.java:45)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> 	at java.lang.reflect.Method.invoke(Unknown Source)
> 	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> 	... 9 more
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in
> the classpath.
>
> Reason: No context matches.
>
> The following properties are requested:
> connector.properties.0.key=group.id
> connector.properties.0.value=test
> connector.properties.1.key=bootstrap.servers
> connector.properties.1.value=localhost:9092
> connector.property-version=1
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> schema.0.name=error_time
> schema.0.type=VARCHAR
> schema.1.name=error_id
> schema.1.type=VARCHAR
> schema.2.name=task_type
> schema.2.type=VARCHAR
> update-mode=append
>
> The following factories have been considered:
> org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.table.planner.delegation.BlinkPlannerFactory
> org.apache.flink.table.planner.delegation.BlinkExecutorFactory
> org.apache.flink.table.planner.StreamPlannerFactory
> org.apache.flink.table.executor.StreamExecutorFactory
> 	at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
> 	at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
> 	at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
> 	at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
> 	at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
> 	... 17 more
> 2021-08-03
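[Editor's note] The factory list in the exception above contains only the built-in CSV and planner factories and no Kafka table factory, which typically means the Kafka connector and JSON format jars were not on the job's classpath at submission time. Assuming the user jar is built with Maven against Flink 1.9.0 (Scala suffix 2.11 assumed), a sketch of the dependencies that would need to end up in the fat jar (or in Flink's `lib/` directory) so the requested `connector.type=kafka` / `format.type=json` factories can be discovered:

```xml
<!-- Sketch, not a verified fix: connector and format jars for the
     properties requested in the error (connector.type=kafka,
     connector.version=universal, format.type=json). -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.9.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>1.9.0</version>
  </dependency>
</dependencies>
```

Running inside the IDE works because these jars are on the IDE classpath; on a cluster they must be shipped with the job or installed alongside Flink.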
[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs
[ https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392658#comment-17392658 ]

Xintong Song commented on FLINK-20329:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=2e426bf0-b717-56bb-ab62-d63086457354&l=12729

> Elasticsearch7DynamicSinkITCase hangs
> -------------------------------------
>
>                 Key: FLINK-20329
>                 URL: https://issues.apache.org/jira/browse/FLINK-20329
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / ElasticSearch
>    Affects Versions: 1.12.0, 1.13.0
>            Reporter: Dian Fu
>            Assignee: Yangze Guo
>            Priority: Major
>              Labels: pull-request-available, test-stability
>             Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
>
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z ==
> 2020-11-24T16:19:25.5484498Z ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Z    java.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Z    java.lang.Thread.State: TIMED_WAITING (on object monitor)
> 2020-11-24T16:19:26.0855379Z 	at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z 	at org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z 	- locked <0x8e2bd2d0> (a java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z 	at org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown Source)
> 2020-11-24T16:19:26.0858471Z 	at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z 	at org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z 	at org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown Source)
> 2020-11-24T16:19:26.0859788Z 	at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Z    java.lang.Thread.State: TIMED_WAITING (parking)
> 2020-11-24T16:19:26.0861387Z 	at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z 	- parking to wait for <0x8814bf30> (a java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z 	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z 	at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z 	at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z 	at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z
[GitHub] [flink] flinkbot commented on pull request #16696: Port FLINK-23529 to 1.13
flinkbot commented on pull request #16696:
URL: https://github.com/apache/flink/pull/16696#issuecomment-892315449

   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

   ## Automated Checks
   Last check on commit 21941f92e977b4fd1df115fef1122aab4428d196 (Wed Aug 04 02:41:10 UTC 2021)

   **Warnings:**
    * No documentation files were touched! Remember to keep the Flink docs up to date!

   Mention the bot in a comment to re-run the automated checks.

   ## Review Progress

   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.

   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

   Bot commands
   The @flinkbot bot supports the following commands:

   - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
   - `@flinkbot approve all` to approve all aspects
   - `@flinkbot approve-until architecture` to approve everything until `architecture`
   - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
   - `@flinkbot disapprove architecture` to remove an approval you gave earlier

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Commented] (FLINK-23556) SQLClientSchemaRegistryITCase fails with " Subject ... not found"
[ https://issues.apache.org/jira/browse/FLINK-23556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17392657#comment-17392657 ]

Xintong Song commented on FLINK-23556:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438&view=logs&j=4dd4dbdd-1802-5eb7-a518-6acd9d24d0fc&t=7c4a8fb8--5a77-f518-4176bfae300b&l=15539

> SQLClientSchemaRegistryITCase fails with " Subject ... not found"
> -----------------------------------------------------------------
>
>                 Key: FLINK-23556
>                 URL: https://issues.apache.org/jira/browse/FLINK-23556
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Ecosystem
>    Affects Versions: 1.14.0
>            Reporter: Dawid Wysakowicz
>            Priority: Major
>              Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=cc5499f8-bdde-5157-0d76-b6528ecd808e&l=25337
>
> {code}
> Jul 28 23:37:48 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 209.44 s <<< FAILURE! - in org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> Jul 28 23:37:48 [ERROR] testWriting(org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase)  Time elapsed: 81.146 s  <<< ERROR!
> Jul 28 23:37:48 io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Subject 'test-user-behavior-d18d4af2-3830-4620-9993-340c13f50cc2-value' not found.; error code: 40401
> Jul 28 23:37:48 	at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:292)
> Jul 28 23:37:48 	at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)
> Jul 28 23:37:48 	at io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:769)
> Jul 28 23:37:48 	at io.confluent.kafka.schemaregistry.client.rest.RestService.getAllVersions(RestService.java:760)
> Jul 28 23:37:48 	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getAllVersions(CachedSchemaRegistryClient.java:364)
> Jul 28 23:37:48 	at org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.getAllVersions(SQLClientSchemaRegistryITCase.java:230)
> Jul 28 23:37:48 	at org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testWriting(SQLClientSchemaRegistryITCase.java:195)
> Jul 28 23:37:48 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Jul 28 23:37:48 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 28 23:37:48 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 28 23:37:48 	at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 28 23:37:48 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 28 23:37:48 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 28 23:37:48 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 28 23:37:48 	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 28 23:37:48 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 28 23:37:48 	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 28 23:37:48 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 28 23:37:48 	at java.lang.Thread.run(Thread.java:748)
> Jul 28 23:37:48
> {code}

-- 
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (FLINK-23557) 'Resuming Externalized Checkpoint (hashmap, sync, no parallelism change) end-to-end test' fails on Azure
[ https://issues.apache.org/jira/browse/FLINK-23557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392656#comment-17392656 ]

Xintong Song commented on FLINK-23557:
--------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21438=logs=6caf31d6-847a-526e-9624-468e053467d6=1fdd9d50-31f7-5383-5578-49e27385b5f1=816

> 'Resuming Externalized Checkpoint (hashmap, sync, no parallelism change) end-to-end test' fails on Azure
> --------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23557
>                 URL: https://issues.apache.org/jira/browse/FLINK-23557
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.14.0
>            Reporter: Dawid Wysakowicz
>            Assignee: Robert Metzger
>            Priority: Blocker
>              Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21129=logs=6caf31d6-847a-526e-9624-468e053467d6=1fdd9d50-31f7-5383-5578-49e27385b5f1=785
> {code}
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
>     at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$9(RestClusterClient.java:405)
>     at java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:986)
>     at java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:970)
>     at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>     at org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$9(FutureUtils.java:373)
>     at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
>     at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
>     at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at java.base/java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:610)
>     at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1085)
>     at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: org.apache.flink.runtime.rest.util.RestClientException: [File upload failed.]
>     at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:486)
>     at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:466)
>     at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
[ https://issues.apache.org/jira/browse/FLINK-23608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392655#comment-17392655 ]

Jark Wu commented on FLINK-23608:
---------------------------------

[~zhangxiangbing] that's why I said the problem is the packaging.

> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23608
>                 URL: https://issues.apache.org/jira/browse/FLINK-23608
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.0
>            Reporter: 张祥兵
>            Priority: Blocker
>
> It runs fine in IDEA, but fails with the following error when run on Flink:
> org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed.
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
>     at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>     at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
>     at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
>     at org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toJobGraph(JarHandlerUtils.java:126)
>     at org.apache.flink.runtime.webmonitor.handlers.JarPlanHandler.lambda$handleRequest$1(JarPlanHandler.java:100)
>     at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
> Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54)
>     at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69)
>     at com.bing.flink.controller.TestKafkaFlink.main(TestKafkaFlink.java:45)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>     ... 9 more
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.properties.0.key=group.id
> connector.properties.0.value=test
> connector.properties.1.key=bootstrap.servers
> connector.properties.1.value=localhost:9092
> connector.property-version=1
> connector.topic=test
> connector.type=kafka
> connector.version=universal
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> schema.0.name=error_time
> schema.0.type=VARCHAR
> schema.1.name=error_id
> schema.1.type=VARCHAR
> schema.2.name=task_type
> schema.2.type=VARCHAR
> update-mode=append
> The following factories have been considered:
> org.apache.flink.table.catalog.GenericInMemoryCatalogFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.table.planner.delegation.BlinkPlannerFactory
> org.apache.flink.table.planner.delegation.BlinkExecutorFactory
> org.apache.flink.table.planner.StreamPlannerFactory
> org.apache.flink.table.executor.StreamExecutorFactory
>     at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283)
>     at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191)
>     at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144)
>     at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97)
>     at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64)
>     ... 17 more
> 2021-08-03 19:06:55,821 WARN  akka.remote.transport.netty.NettyTransport
[jira] [Assigned] (FLINK-22767) Optimize the initialization of LocalInputPreferredSlotSharingStrategy
[ https://issues.apache.org/jira/browse/FLINK-22767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhu Zhu reassigned FLINK-22767:
-------------------------------
    Assignee: Zhilong Hong

> Optimize the initialization of LocalInputPreferredSlotSharingStrategy
> ---------------------------------------------------------------------
>
>                 Key: FLINK-22767
>                 URL: https://issues.apache.org/jira/browse/FLINK-22767
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / Coordination
>    Affects Versions: 1.13.0
>            Reporter: Zhilong Hong
>            Assignee: Zhilong Hong
>            Priority: Major
>              Labels: pull-request-available
>
> Based on the scheduler benchmark introduced in FLINK-21731, we found that during the initialization of {{LocalInputPreferredSlotSharingStrategy}} there is a procedure with O(N^2) complexity: {{ExecutionSlotSharingGroupBuilder#tryFindAvailableProducerExecutionSlotSharingGroupFor}}, located in {{LocalInputPreferredSlotSharingStrategy}}.
> The original implementation is:
> {code:java}
> for all SchedulingExecutionVertex in DefaultScheduler:
>   for all consumed SchedulingResultPartition of the SchedulingExecutionVertex:
>     get the result partition's producer vertex and determine whether the ExecutionSlotSharingGroup where the producer vertex is located is available for the current vertex
> {code}
> This procedure has O(N^2) complexity.
> Since all result partitions in the same ConsumedPartitionGroup have the same producer vertex, we can iterate over the ConsumedPartitionGroups instead of over all the consumed partitions. This decreases the complexity from O(N^2) to O(N).
> Optimizing this procedure speeds up the initialization of DefaultScheduler and thus accelerates the submission of a new job, especially for OLAP jobs.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
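The O(N^2) to O(N) idea described in the ticket can be sketched outside of Flink. The class and method names below are simplified stand-ins, not the actual Flink internals; the point is only that every partition in a ConsumedPartitionGroup shares one producer vertex, so the producer-group lookup can be hoisted out of the per-partition loop:

```java
import java.util.List;

// Simplified stand-ins for the Flink scheduler types (hypothetical names).
public class SlotSharingSketch {
    record Partition(String producerVertex) {}

    // Every partition in a ConsumedPartitionGroup has the same producer vertex.
    record ConsumedPartitionGroup(List<Partition> partitions) {
        String producerVertex() { return partitions.get(0).producerVertex(); }
    }

    // O(N^2) shape: one producer lookup per consumed partition.
    static int naiveLookups(List<ConsumedPartitionGroup> groups) {
        int lookups = 0;
        for (ConsumedPartitionGroup group : groups) {
            for (Partition partition : group.partitions()) {
                partition.producerVertex(); // repeated lookup for the same producer
                lookups++;
            }
        }
        return lookups;
    }

    // O(N) shape: one lookup per group, since the producer is shared.
    static int groupedLookups(List<ConsumedPartitionGroup> groups) {
        int lookups = 0;
        for (ConsumedPartitionGroup group : groups) {
            group.producerVertex(); // a single lookup covers every partition in the group
            lookups++;
        }
        return lookups;
    }
}
```

With a group of k partitions, the naive loop performs k lookups where the grouped loop performs one, which is where the complexity drop comes from.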
[jira] [Closed] (FLINK-23599) Remove JobVertex#connectIdInput
[ https://issues.apache.org/jira/browse/FLINK-23599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhu Zhu closed FLINK-23599.
---------------------------
    Resolution: Done

Done via ec9ff1ee5e33529260d6a3adfad4b0b34efde55e

> Remove JobVertex#connectIdInput
> -------------------------------
>
>                 Key: FLINK-23599
>                 URL: https://issues.apache.org/jira/browse/FLINK-23599
>             Project: Flink
>          Issue Type: Technical Debt
>          Components: Runtime / Coordination
>    Affects Versions: 1.14.0
>            Reporter: Zhilong Hong
>            Assignee: Zhilong Hong
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.14.0
>
> {{JobVertex#connectIdInput}} is not used in production anymore. It's only used in the unit tests {{testAttachViaIds}} and {{testCannotConnectMissingId}} located in {{DefaultExecutionGraphConstructionTest}}. However, these two test cases are designed to test this method. Therefore, this method and its test cases can be removed.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] zhuzhurk closed pull request #16686: [FLINK-23599] Remove JobVertex#connectIdInput
zhuzhurk closed pull request #16686: URL: https://github.com/apache/flink/pull/16686 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] xintongsong closed pull request #16636: [FLINK-23529] Add Flink 1.13 MigrationVersion
xintongsong closed pull request #16636: URL: https://github.com/apache/flink/pull/16636 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
[ https://issues.apache.org/jira/browse/FLINK-23608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392651#comment-17392651 ]

张祥兵 commented on FLINK-23608:
-----------------------------

[~jark] I ran the job in IDEA; after packaging it, the error is reported when I submit the job to Flink.

> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23608
>                 URL: https://issues.apache.org/jira/browse/FLINK-23608
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.0
>            Reporter: 张祥兵
>            Priority: Blocker
>
> [quoted description and stack trace elided; identical to the report quoted above]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[GitHub] [flink] xintongsong commented on pull request #16636: [FLINK-23529] Add Flink 1.13 MigrationVersion
xintongsong commented on pull request #16636: URL: https://github.com/apache/flink/pull/16636#issuecomment-892309511 Thanks, @tsreaper. Merging this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] KarlManong commented on a change in pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
KarlManong commented on a change in pull request #16674:
URL: https://github.com/apache/flink/pull/16674#discussion_r682237085

##########
File path: flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactoryTest.java
##########
@@ -170,12 +177,9 @@ public void testPodSpec() throws IOException {
         KubernetesJobManagerFactory.buildKubernetesJobManagerSpecification(
                 flinkPod, kubernetesJobManagerParameters);
-        final PodSpec resultPodSpec =
-                this.kubernetesJobManagerSpecification
-                        .getDeployment()
-                        .getSpec()
-                        .getTemplate()
-                        .getSpec();
+        final Deployment deployment = this.kubernetesJobManagerSpecification.getDeployment();

Review comment:
       Well, I forgot to revert this change.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [flink] wangyang0918 commented on a change in pull request #16674: [FLINK-23587][deployment] Set Deployment's Annotation when using kubernetes
wangyang0918 commented on a change in pull request #16674:
URL: https://github.com/apache/flink/pull/16674#discussion_r682236420

##########
File path: flink-kubernetes/src/test/java/org/apache/flink/kubernetes/kubeclient/factory/KubernetesJobManagerFactoryTest.java
##########
@@ -170,12 +177,9 @@ public void testPodSpec() throws IOException {
         KubernetesJobManagerFactory.buildKubernetesJobManagerSpecification(
                 flinkPod, kubernetesJobManagerParameters);
-        final PodSpec resultPodSpec =
-                this.kubernetesJobManagerSpecification
-                        .getDeployment()
-                        .getSpec()
-                        .getTemplate()
-                        .getSpec();
+        final Deployment deployment = this.kubernetesJobManagerSpecification.getDeployment();

Review comment:
       nit: why do we need this change?

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Closed] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
[ https://issues.apache.org/jira/browse/FLINK-23608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jark Wu closed FLINK-23608.
---------------------------
    Resolution: Not A Problem

Please use English in JIRA. Regarding your exception stack: there is no Kafka factory in the classloader. That's usually because the connector resources were not transformed correctly when packaging; please see the documentation on how to do it: https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/#transform-table-connectorformat-resources

> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-23608
>                 URL: https://issues.apache.org/jira/browse/FLINK-23608
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.0
>            Reporter: 张祥兵
>            Priority: Blocker
>
> [quoted description and stack trace elided; identical to the report quoted above]
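The linked documentation boils down to merging the `META-INF/services` SPI files from all connector jars when building the fat jar, so the Kafka `TableFactory` entry survives shading. A sketch of the usual fix with the Maven Shade plugin's `ServicesResourceTransformer` (the surrounding plugin configuration is illustrative, not taken from the reporter's build):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merge META-INF/services files so the TableFactory SPI entries
               from every connector/format jar end up in the fat jar. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Without this transformer, the last connector jar's `META-INF/services/org.apache.flink.table.factories.TableFactory` file overwrites the others, which produces exactly the "No context matches" failure above.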
[GitHub] [flink] wsry commented on pull request #16465: [FLINK-22910][runtime] Refine ShuffleMaster lifecycle management for pluggable shuffle service framework
wsry commented on pull request #16465: URL: https://github.com/apache/flink/pull/16465#issuecomment-892306109 Rebased master and squashed fixup commits. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] tsreaper commented on pull request #16636: [FLINK-23529] Add Flink 1.13 MigrationVersion
tsreaper commented on pull request #16636: URL: https://github.com/apache/flink/pull/16636#issuecomment-892305767 > @flinkbot run azure It already succeeded in https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21395=results -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"
[ https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xiaojin.wy updated FLINK-23609:
-------------------------------
    Description:
{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)

CREATE TABLE database5_t3 (
  `c0` STRING,
  `c1` INTEGER,
  `c2` STRING,
  `c3` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)

INSERT OVERWRITE database5_t2(c0) VALUES(1969075679)

INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES
  ('yaW鉒', -943510659, '1970-01-20 09:49:24', 1941473165),
  ('2#融', 1174376063, '1969-12-21 09:54:49', 1941473165),
  ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', 1222780269)

SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0
FROM database5_t2, database5_t3
WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 AS DOUBLE ))) AS DOUBLE AND ((LN(CAST (-351648321 AS DOUBLE AS BOOLEAN)
GROUP BY database5_t2.c0
ORDER BY database5_t2.c0
{code}

Running the SQL above produces the following error:

{code:java}
java.lang.NumberFormatException: Infinite or NaN
    at java.math.BigDecimal.<init>(BigDecimal.java:898)
    at java.math.BigDecimal.<init>(BigDecimal.java:875)
    at org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
    at org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
    at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
    at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
    at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
    at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
    at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
    at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
    at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
{code}
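The failure mode at the top of that trace can be reproduced without Flink: `BigDecimal`'s `double` constructor throws `NumberFormatException` for non-finite input, which is presumably what `ExpressionReducer` hits when constant folding evaluates the `COSH(...)` call to Infinity and then converts the result to DECIMAL. The helper class below is only an illustration, not Flink code:

```java
import java.math.BigDecimal;

// BigDecimal(double) rejects Infinite and NaN inputs with a
// NumberFormatException; a planner that folds COSH of a huge argument
// into a double and then wraps it in a BigDecimal hits this exception.
public class BigDecimalOverflow {
    static String describe(double value) {
        try {
            return new BigDecimal(value).toPlainString();
        } catch (NumberFormatException e) {
            return "NumberFormatException";
        }
    }
}
```

`Math.cosh(1e6)` overflows to positive infinity, so any DECIMAL conversion of it takes the exception path.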
[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"
[ https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xiaojin.wy updated FLINK-23609:
-------------------------------
    Summary: Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"  (was: Codeine error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)")

> Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-23609
>                 URL: https://issues.apache.org/jira/browse/FLINK-23609
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Runtime
>    Affects Versions: 1.14.0
>        Environment: [stack trace elided; identical to the one quoted in the issue description above]
[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)"
[ https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xiaojin.wy updated FLINK-23609: --- Environment: (was: java.lang.NumberFormatException: Infinite or NaN at java.math.BigDecimal.(BigDecimal.java:898) at java.math.BigDecimal.(BigDecimal.java:875) at org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699) at org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152) at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127) at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58) at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at 
scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at scala.collection.immutable.List.foreach(List.scala:392) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77) at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282) at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165) at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702) at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781) at org.apache.flink.table.planner.utils.TestingStatementSet.execute(TableTestBase.scala:1509) at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
[jira] [Created] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.&lt;init&gt;(BigDecimal.java:898)"
xiaojin.wy created FLINK-23609: -- Summary: Codeine error of "Infinite or NaN at java.math.BigDecimal.(BigDecimal.java:898)" Key: FLINK-23609 URL: https://issues.apache.org/jira/browse/FLINK-23609 Project: Flink Issue Type: Bug Components: Table SQL / Runtime Affects Versions: 1.14.0 Environment: java.lang.NumberFormatException: Infinite or NaN at java.math.BigDecimal.(BigDecimal.java:898) at java.math.BigDecimal.(BigDecimal.java:875) at org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759) at org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699) at org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152) at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127) at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69) at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58) at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46) at scala.collection.immutable.List.foreach(List.scala:392) at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46) at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77) at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282) at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165) at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702) at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781) at org.apache.flink.table.planner.utils.TestingStatementSet.execute(TableTestBase.scala:1509) at org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at
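The stack traces in this report all bottom out in the `BigDecimal(double)` constructor, which rejects non-finite values. A minimal standalone sketch of that failure mode (plain Java, not Flink code; the overflowing literal is an assumption, since the SQL expression that triggered the reduction is not shown in the report):

```java
import java.math.BigDecimal;

public class BigDecimalNonFinite {
    public static void main(String[] args) {
        // A constant-folded expression can overflow double to Infinity,
        // for example 1e308 * 10.
        double overflowed = 1e308 * 10;

        try {
            // The BigDecimal(double) constructor throws
            // NumberFormatException("Infinite or NaN") for Infinity and NaN,
            // which matches the error ExpressionReducer surfaces above.
            new BigDecimal(overflowed);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // prints: Infinite or NaN
        }
    }
}
```

This suggests the codegen path should guard reduced DECIMAL literals against non-finite doubles before constructing a BigDecimal.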
[GitHub] [flink] flinkbot edited a comment on pull request #16640: [FLINK-22891][runtime] Using CompletableFuture to sync the scheduling…
flinkbot edited a comment on pull request #16640: URL: https://github.com/apache/flink/pull/16640#issuecomment-889598196 ## CI report: * 81126114e90f8947dba92e4bc1bf785f6fc86ac5 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21370) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21447) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #16636: [FLINK-23529] Add Flink 1.13 MigrationVersion
flinkbot edited a comment on pull request #16636: URL: https://github.com/apache/flink/pull/16636#issuecomment-889045676 ## CI report: * 46332f2c7aa2e4d4b6489f94eccdd866fbea79f1 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21395) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21446)
[jira] [Commented] (FLINK-18184) Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory'
[ https://issues.apache.org/jira/browse/FLINK-18184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392638#comment-17392638 ] 张祥兵 commented on FLINK-18184: - Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath. Reason: No context matches. The following properties are requested: connector.properties.0.key=group.id connector.properties.0.value=test connector.properties.1.key=bootstrap.servers connector.properties.1.value=localhost:9092 connector.property-version=1 connector.topic=test connector.type=kafka connector.version=universal format.derive-schema=true format.fail-on-missing-field=true format.property-version=1 format.type=json schema.0.name=error_time schema.0.type=VARCHAR schema.1.name=error_id schema.1.type=VARCHAR schema.2.name=task_type schema.2.type=VARCHAR update-mode=append Can you help me? > Could not find a suitable table factory for > 'org.apache.flink.table.factories.TableSourceFactory' > - > > Key: FLINK-18184 > URL: https://issues.apache.org/jira/browse/FLINK-18184 > Project: Flink > Issue Type: Bug > Components: Table SQL / API >Affects Versions: 1.9.1 > Environment: local:macos > flink1.9 > >Reporter: mzz >Priority: Major > > val env = StreamExecutionEnvironment.getExecutionEnvironment > env.enableCheckpointing(5000) // checkpoint every 5000 msecs > // Kafka configuration > val properties = new Properties() > properties.setProperty("bootstrap.servers", "172.16.30.207:9092") > properties.setProperty("group.id", "km_aggs_group") > val fsSettings = > EnvironmentSettings.newInstance().useOldPlanner().inStreamingMode().build() > val kafkaConsumer = new FlinkKafkaConsumer[String](TOPIC, new > SimpleStringSchema(), properties).setStartFromEarliest() > //val source = env.addSource(kafkaConsumer) > val streamTableEnvironment = StreamTableEnvironment.create(env,fsSettings) > streamTableEnvironment.connect(new Kafka() >
.topic(TOPIC) > .version(VERSION) > .startFromEarliest() > .property("bootstrap.servers", "172.16.30.207:9092") > .property("zookeeper.connect", "172.16.30.207:2181") > .property("group.id", "km_aggs_group_table") > // .properties(properties) > ) > .withFormat( > new Json() > .failOnMissingField(true) > .deriveSchema() > ) > .withSchema(new Schema() > .field("advs", Types.STRING()) > .field("devs", Types.STRING()) > .field("environment", Types.STRING()) > .field("events", Types.STRING()) > .field("identity", Types.STRING()) > .field("ip", Types.STRING()) > .field("launchs", Types.STRING()) > .field("ts", Types.STRING()) > ) > .inAppendMode() > .registerTableSource("aggs_test") > val tableResult = streamTableEnvironment.sqlQuery("select * from > aggs_test") > tableResult.printSchema() > //streamTableEnvironment.toAppendStream[Row](tableResult).print() > // start the program > env.execute("test_kafka") > > error message: > Exception in thread "main" org.apache.flink.table.api.TableException: > findAndCreateTableSource failed. > at > org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67) > at > org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54) > at > org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69) > at > KM.COM.KafakaHelper.FlinkTableConnKafka$.main(FlinkTableConnKafka.scala:70) > at > KM.COM.KafakaHelper.FlinkTableConnKafka.main(FlinkTableConnKafka.scala) > Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could > not find a suitable table factory for > 'org.apache.flink.table.factories.TableSourceFactory' in > the classpath. > Reason: No context matches.
> The following properties are requested: > connector.properties.0.key=zookeeper.connect > connector.properties.0.value=172.16.30.207:2181 > connector.properties.1.key=group.id > connector.properties.1.value=km_aggs_group_table > connector.properties.2.key=bootstrap.servers > connector.properties.2.value=172.16.30.207:9092 > connector.property-version=1 > connector.startup-mode=earliest-offset > connector.topic=aggs_topic > connector.type=kafka > connector.version=2.0 > format.derive-schema=true > format.fail-on-missing-field=true > format.property-version=1 > format.type=json > schema.0.name=advs > schema.0.type=VARCHAR > schema.1.name=devs > schema.1.type=VARCHAR >
[GitHub] [flink] flinkbot edited a comment on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
flinkbot edited a comment on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-753633967 ## CI report: * 7b92671c3ecb252643f1b27f9a9d12c8db1f0ccf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13318) * b1fe24bab5f3a3588e594ed41932c41cc87bd069 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21445)
[GitHub] [flink] xintongsong commented on pull request #16636: [FLINK-23529] Add Flink 1.13 MigrationVersion
xintongsong commented on pull request #16636: URL: https://github.com/apache/flink/pull/16636#issuecomment-892296492 @flinkbot run azure
[jira] [Commented] (FLINK-23590) StreamTaskTest#testProcessWithUnAvailableInput is flaky
[ https://issues.apache.org/jira/browse/FLINK-23590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392636#comment-17392636 ] Xintong Song commented on FLINK-23590: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21370=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=10777 > StreamTaskTest#testProcessWithUnAvailableInput is flaky > --- > > Key: FLINK-23590 > URL: https://issues.apache.org/jira/browse/FLINK-23590 > Project: Flink > Issue Type: Bug > Components: Runtime / Task >Affects Versions: 1.14.0 >Reporter: David Morávek >Assignee: Anton Kalashnikov >Priority: Critical > Fix For: 1.14.0 > > > [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21218=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb] > > {code:java} > java.lang.AssertionError: > Expected: a value equal to or greater than <22L> > but: <217391L> was less than <22L> at > org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > at org.junit.Assert.assertThat(Assert.java:964) > at org.junit.Assert.assertThat(Assert.java:930) > at > org.apache.flink.streaming.runtime.tasks.StreamTaskTest.testProcessWithUnAvailableInput(StreamTaskTest.java:1561) > at jdk.internal.reflect.GeneratedMethodAccessor3.invoke(Unknown Source) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) > at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) > at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) > at java.base/java.lang.Thread.run(Thread.java:829){code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
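The failing assertion enforces a lower bound on a measured duration, a pattern that is inherently timing-sensitive. A hypothetical standalone sketch of that pattern (plain Java, not the actual StreamTaskTest code; the 22 ms bound is taken from the report, everything else is assumed):

```java
public class TimingLowerBoundSketch {
    public static void main(String[] args) throws InterruptedException {
        // The test asserts the measured time is at least some minimum.
        long minExpectedMillis = 22L;

        long start = System.nanoTime();
        Thread.sleep(30); // stand-in for the work the real test measures
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Lower bounds on wall-clock measurements flake easily: scheduler
        // jitter, clock granularity, and unit mix-ups (ms vs. ns) all
        // produce sporadic failures like the one reported above.
        if (elapsedMillis < minExpectedMillis) {
            throw new AssertionError(elapsedMillis + " was less than " + minExpectedMillis);
        }
        System.out.println("elapsed >= " + minExpectedMillis + " ms");
    }
}
```

Note also that the quoted mismatch text ("&lt;217391L&gt; was less than &lt;22L&gt;") cannot hold for plain long comparison, which hints the two sides of the real assertion are measured in different units or swapped.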
[GitHub] [flink] xintongsong commented on pull request #16640: [FLINK-22891][runtime] Using CompletableFuture to sync the scheduling…
xintongsong commented on pull request #16640: URL: https://github.com/apache/flink/pull/16640#issuecomment-892296368 @flinkbot run azure
[jira] [Created] (FLINK-23608) org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory
张祥兵 created FLINK-23608: --- Summary: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory Key: FLINK-23608 URL: https://issues.apache.org/jira/browse/FLINK-23608 Project: Flink Issue Type: Bug Components: Connectors / Kafka Affects Versions: 1.9.0 Reporter: 张祥兵 It runs normally in IDEA, but fails with an error when run on Flink: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: findAndCreateTableSource failed. at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593) at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438) at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83) at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80) at org.apache.flink.runtime.webmonitor.handlers.utils.JarHandlerUtils$JarHandlerContext.toJobGraph(JarHandlerUtils.java:126) at org.apache.flink.runtime.webmonitor.handlers.JarPlanHandler.lambda$handleRequest$1(JarPlanHandler.java:100) at java.util.concurrent.CompletableFuture$AsyncSupply.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: org.apache.flink.table.api.TableException: findAndCreateTableSource failed.
at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:67) at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:54) at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.java:69) at com.bing.flink.controller.TestKafkaFlink.main(TestKafkaFlink.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576) ... 9 more Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath. Reason: No context matches. The following properties are requested: connector.properties.0.key=group.id connector.properties.0.value=test connector.properties.1.key=bootstrap.servers connector.properties.1.value=localhost:9092 connector.property-version=1 connector.topic=test connector.type=kafka connector.version=universal format.derive-schema=true format.fail-on-missing-field=true format.property-version=1 format.type=json schema.0.name=error_time schema.0.type=VARCHAR schema.1.name=error_id schema.1.type=VARCHAR schema.2.name=task_type schema.2.type=VARCHAR update-mode=append The following factories have been considered: org.apache.flink.table.catalog.GenericInMemoryCatalogFactory org.apache.flink.table.sources.CsvBatchTableSourceFactory org.apache.flink.table.sources.CsvAppendTableSourceFactory org.apache.flink.table.sinks.CsvBatchTableSinkFactory org.apache.flink.table.sinks.CsvAppendTableSinkFactory org.apache.flink.table.planner.delegation.BlinkPlannerFactory org.apache.flink.table.planner.delegation.BlinkExecutorFactory 
org.apache.flink.table.planner.StreamPlannerFactory org.apache.flink.table.executor.StreamExecutorFactory at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:283) at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:191) at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:144) at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:97) at org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSource(TableFactoryUtil.java:64) ... 17 more 2021-08-03 19:06:55,821 WARN akka.remote.transport.netty.NettyTransport - Remote connection to [/127.0.0.1:7513] failed with java.io.IOException: An existing connection was forcibly closed by the remote host 2021-08-03 19:06:55,828 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@127.0.0.1:7457] has failed, address is now gated for [50] ms. Reason: [Disassociated]
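The "following factories have been considered" list above contains only the built-in CSV and planner factories, which typically means the Kafka connector and JSON format jars were on the IDE classpath but not in the cluster's lib directory or bundled into the job jar. A sketch of the Maven dependencies that would provide the missing factories for Flink 1.9 with Scala 2.11 (artifact versions are illustrative and must match the cluster; this is an assumption about the cause, not confirmed in the report):

```xml
<!-- Illustrative only: versions must match the Flink/Scala version in use. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.9.1</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-json</artifactId>
    <version>1.9.1</version>
</dependency>
```

When submitting to a cluster, these dependencies either need to be shaded into the job jar or dropped into the cluster's lib directory so that TableFactoryService can discover a matching TableSourceFactory.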
[jira] [Commented] (FLINK-23096) HiveParser could not attach the sessionstate of hive
[ https://issues.apache.org/jira/browse/FLINK-23096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392631#comment-17392631 ] luoyuxia commented on FLINK-23096: -- [~leexu] Actually, path.getFileSystem will return HdfsFileSystem or LocalFileSystem depending on what the path is. It will get LocalFileSystem for the LocalSessionPath, so there should be an error when deleting the LocalSessionPath. > HiveParser could not attach the sessionstate of hive > > > Key: FLINK-23096 > URL: https://issues.apache.org/jira/browse/FLINK-23096 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive >Affects Versions: 1.13.1 >Reporter: shizhengchao >Assignee: shizhengchao >Priority: Major > Labels: pull-request-available > Fix For: 1.14.0, 1.13.2 > > > My sql code is as follows: > {code:java} > // code placeholder > CREATE CATALOG myhive WITH ( > 'type' = 'hive', > 'default-database' = 'default', > 'hive-conf-dir' = '/home/service/upload-job-file/1624269463008' > ); > use catalog hive; > set 'table.sql-dialect' = 'hive'; > create view if not exists view_test as > select > cast(goods_id as string) as goods_id, > cast(depot_id as string) as depot_id, > cast(product_id as string) as product_id, > cast(tenant_code as string) as tenant_code > from edw.dim_yezi_whse_goods_base_info/*+ > OPTIONS('streaming-source.consume-start-offset'='dayno=20210621') */; > {code} > and the exception is as follows: > {code:java} > // code placeholder > org.apache.flink.client.program.ProgramInvocationException: The main method > caused an error: Conf non-local session path expected to be non-null > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) > at > org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) > at > org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) > at > org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812) > at
org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246) > at > org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054) > at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132) > at > org.apache.flink.client.cli.CliFrontend$$Lambda$68/330382173.call(Unknown > Source) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext$$Lambda$69/680712932.run(Unknown > Source) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692) > at > org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132) > Caused by: java.lang.NullPointerException: Conf non-local session path > expected to be non-null > at > com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208) > at > org.apache.hadoop.hive.ql.session.SessionState.getHDFSSessionPath(SessionState.java:669) > at > org.apache.flink.table.planner.delegation.hive.HiveParser.clearSessionState(HiveParser.java:376) > at > org.apache.flink.table.planner.delegation.hive.HiveParser.parse(HiveParser.java:219) > at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:724) > at > com.shizhengchao.io.FlinkSqlStreamingPlatform.callFlinkSql(FlinkSqlStreamingPlatform.java:157) > at > com.shizhengchao.io.FlinkSqlStreamingPlatform.callCommand(FlinkSqlStreamingPlatform.java:129) > at > com.shizhengchao.io.FlinkSqlStreamingPlatform.run(FlinkSqlStreamingPlatform.java:91) > at > com.shizhengchao.io.FlinkSqlStreamingPlatform.main(FlinkSqlStreamingPlatform.java:66) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355) > ... 13 common frames omitted > {code} > My guess is that the SessionState is not set on the thread-local: > {code:java} > // code placeholder > // @see org.apache.hadoop.hive.ql.session.SessionState.setCurrentSessionState > public static void setCurrentSessionState(SessionState startSs) { > tss.get().attach(startSs); > } > {code}
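The NullPointerException in this report is raised by a Guava Preconditions.checkNotNull guard inside SessionState.getHDFSSessionPath, reached when the session path was never attached to the conf. A minimal standalone sketch of the same fail-fast pattern using java.util.Objects instead of Guava (the method below is a hypothetical stand-in, not the real Hive API):

```java
import java.util.Objects;

public class SessionPathGuard {
    // Hypothetical stand-in for SessionState.getHDFSSessionPath(conf):
    // fails fast with the same message seen in the stack trace whenever
    // the session path was never set.
    static String getHdfsSessionPath(String configuredPath) {
        return Objects.requireNonNull(
                configuredPath, "Conf non-local session path expected to be non-null");
    }

    public static void main(String[] args) {
        // Attached session state: the guard passes.
        System.out.println(getHdfsSessionPath("/tmp/hive/session"));

        // Session state never attached: the guard throws the NPE above.
        try {
            getHdfsSessionPath(null);
        } catch (NullPointerException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the guessed fix (calling SessionState.setCurrentSessionState before HiveParser's cleanup runs) would avoid the NPE: the guard only trips when the path was never populated.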
[GitHub] [flink] flinkbot edited a comment on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
flinkbot edited a comment on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-753633967 ## CI report: * 7b92671c3ecb252643f1b27f9a9d12c8db1f0ccf Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13318) * b1fe24bab5f3a3588e594ed41932c41cc87bd069 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-20845) Drop support for Scala 2.11
[ https://issues.apache.org/jira/browse/FLINK-20845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17392621#comment-17392621 ]

Nick Burkard commented on FLINK-20845:
--
[~eloisant] I just rebased and pushed to the same branch. :)

> Drop support for Scala 2.11
> ---
>
> Key: FLINK-20845
> URL: https://issues.apache.org/jira/browse/FLINK-20845
> Project: Flink
> Issue Type: Sub-task
> Components: API / Scala
> Reporter: Nick Burkard
> Priority: Major
> Labels: auto-unassigned, pull-request-available
>
> The first step to adding support for Scala 2.13 is to drop Scala 2.11. Community discussion can be found [here|https://lists.apache.org/thread.html/ra817c5b54e3de48d80e5b4e0ae67941d387ee25cf9779f5ae37d0486%40%3Cdev.flink.apache.org%3E].
> * Scala 2.11 was released in April 2014 and is quite old now. Most open-source libraries no longer build for it.
> * Upgrading libraries to support 2.13 will be much easier without 2.11. Many do not support 2.11, 2.12 and 2.13 at the same time, so this is basically required to get 2.13 support.
> Considerations:
> * The Flink Scala Shell submodule still does not support Scala 2.12. It isn't a strict dependency for dropping Scala 2.11, but would be nice to have before making the cut.
> * Stateful Functions previously needed Scala 2.11, but it looks like it [now supports 2.12|https://github.com/apache/flink-statefun/pull/149].

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot edited a comment on pull request #16645: [FLINK-23539][metrics-influxdb] InfluxDBReporter should filter charac…
flinkbot edited a comment on pull request #16645: URL: https://github.com/apache/flink/pull/16645#issuecomment-889643808 ## CI report: * cbbae90ac287c25df1d73ca53c6ece3a1032bf7d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21380) Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21444) Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=21418) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org