[GitHub] [flink-docker] iemejia commented on pull request #62: Go back to Flink 1.12.0
iemejia commented on pull request #62: URL: https://github.com/apache/flink-docker/pull/62#issuecomment-768123116 Hey, you evil guys, I NEED TO BE THERE!!! :P Just kidding, it makes total sense because you guys are now the real maintainers and doing a great job :+1: This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-21158) wrong jvm metaspace and overhead size show in taskmanager metric page
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] pengkangjing updated FLINK-21158: - Description: The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not the value derived from flink-conf.yaml but the default value. default values: JVM_METASPACE - 256M JVM_OVERHEAD_MAX - 1GB !image-2021-01-27-16-30-51-834.png! !image-2021-01-27-16-19-16-390.png! was: The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not the value derived from flink-conf.yaml but the default value. default values: JVM_METASPACE - 256M JVM_OVERHEAD_MAX - 1GB !image-2021-01-27-16-19-16-390.png! > wrong jvm metaspace and overhead size show in taskmanager metric page > -- > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png, > image-2021-01-27-16-30-51-834.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-30-51-834.png! > !image-2021-01-27-16-19-16-390.png! > -- This message was sent by Atlassian Jira (v8.3.4#803005)
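For context, the values the metrics page should derive come from the TaskManager memory options in flink-conf.yaml. A minimal sketch (the sizes below are illustrative, not from the ticket; option keys as documented for Flink 1.12):

```yaml
# Illustrative flink-conf.yaml fragment: the taskmanager metrics page
# should display values derived from these options rather than the
# hard-coded defaults (256m metaspace / 1gb overhead max).
taskmanager.memory.jvm-metaspace.size: 512m
taskmanager.memory.jvm-overhead.min: 192m
taskmanager.memory.jvm-overhead.max: 2gb
taskmanager.memory.jvm-overhead.fraction: 0.1
```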
[jira] [Updated] (FLINK-21158) wrong jvm metaspace and overhead size show in taskmanager metric page
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] pengkangjing updated FLINK-21158: - Attachment: image-2021-01-27-16-30-51-834.png > wrong jvm metaspace and overhead size show in taskmanager metric page > -- > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png, > image-2021-01-27-16-30-51-834.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-19-16-390.png! > --
[jira] [Updated] (FLINK-21158) wrong jvm metaspace and overhead size show in taskmanager metric page
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] pengkangjing updated FLINK-21158: - Summary: wrong jvm metaspace and overhead size show in taskmanager metric page (was: wrong jvm metaspace and overhead size show in taskmanager metric) > wrong jvm metaspace and overhead size show in taskmanager metric page > -- > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-19-16-390.png! > --
[jira] [Updated] (FLINK-21158) wrong jvm metaspace and overhead size show in taskmanager metric
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] pengkangjing updated FLINK-21158: - Summary: wrong jvm metaspace and overhead size show in taskmanager metric (was: wrong jvm metaspace and overhead size show in jobmanager metric) > wrong jvm metaspace and overhead size show in taskmanager metric > - > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-19-16-390.png! > --
[jira] [Commented] (FLINK-21158) wrong jvm metaspace and overhead size show in taskmanager metric
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272658#comment-17272658 ] Roc Marshal commented on FLINK-21158: - [~pengkangjing] Could you provide more information about this scenario, such as screenshots, the deployment mode, etc.? > wrong jvm metaspace and overhead size show in taskmanager metric > - > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-19-16-390.png! > --
[jira] [Updated] (FLINK-21158) wrong jvm metaspace and overhead size show in jobmanager metric
[ https://issues.apache.org/jira/browse/FLINK-21158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] pengkangjing updated FLINK-21158: - Attachment: image-2021-01-27-16-19-16-390.png Description: The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not the value derived from flink-conf.yaml but the default value. default values: JVM_METASPACE - 256M JVM_OVERHEAD_MAX - 1GB !image-2021-01-27-16-19-16-390.png! Remaining Estimate: 3h Original Estimate: 3h > wrong jvm metaspace and overhead size show in jobmanager metric > > > Key: FLINK-21158 > URL: https://issues.apache.org/jira/browse/FLINK-21158 > Project: Flink > Issue Type: Bug > Components: Runtime / Web Frontend > Affects Versions: 1.12.0, 1.12.1 > Reporter: pengkangjing > Priority: Minor > Attachments: image-2021-01-27-16-19-16-390.png > > Original Estimate: 3h > Remaining Estimate: 3h > > The size of JVM metaspace and JVM overhead shown on the taskmanager metrics page is not > the value derived from flink-conf.yaml but the default value. > default values: > JVM_METASPACE - 256M > JVM_OVERHEAD_MAX - 1GB > !image-2021-01-27-16-19-16-390.png! > --
[GitHub] [flink] flinkbot commented on pull request #14771: [FLINK-9683] Allow history server use default fs scheme
flinkbot commented on pull request #14771: URL: https://github.com/apache/flink/pull/14771#issuecomment-768117610 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit bb612e361b827ad77d756cc2257d30b5c70fa253 (Wed Jan 27 08:20:53 UTC 2021) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! * **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-9683).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work. Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. 
For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-9683) inconsistent behaviors when setting historyserver.archive.fs.dir
[ https://issues.apache.org/jira/browse/FLINK-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-9683: -- Labels: pull-request-available starter (was: starter) > inconsistent behaviors when setting historyserver.archive.fs.dir > > > Key: FLINK-9683 > URL: https://issues.apache.org/jira/browse/FLINK-9683 > Project: Flink > Issue Type: Bug > Components: Runtime / Coordination >Affects Versions: 1.4.2 >Reporter: Ethan Li >Priority: Major > Labels: pull-request-available, starter > > I am using release-1.4.2, > > With fs.default-scheme and fs.hdfs.hadoopconf set correctly, > when setting > {code:java} > historyserver.archive.fs.dir: /tmp/flink/cluster-name/jmarchive > {code} > I am seeing > {code:java} > 2018-06-27 18:51:12,692 WARN > org.apache.flink.runtime.webmonitor.history.HistoryServer - Failed to create > Path or FileSystem for directory '/tmp/flink/cluster-name/jmarchive'. > Directory will not be monitored. > java.lang.IllegalArgumentException: The scheme (hdfs://, file://, etc) is > null. Please specify the file system scheme explicitly in the URI. 
> at > org.apache.flink.runtime.webmonitor.WebMonitorUtils.validateAndNormalizeUri(WebMonitorUtils.java:300) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer.<init>(HistoryServer.java:168) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer.<init>(HistoryServer.java:132) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer$1.call(HistoryServer.java:113) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer$1.call(HistoryServer.java:110) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) > at > org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer.main(HistoryServer.java:110) > {code} > > And then if I set > {code:java} > historyserver.archive.fs.dir: hdfs:///tmp/flink/cluster-name/jmarchive{code} > I am seeing: > > {code:java} > java.io.IOException: The given file system URI > (hdfs:///tmp/flink/cluster-name/jmarchive) did not describe the authority > (like for example HDFS NameNode address/port or S3 host). 
The attempt to use > a configured default authority failed: Hadoop configuration for default file > system ('fs.default.name' or 'fs.defaultFS') contains no valid authority > component (like hdfs namenode, S3 host, etc) > at > org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:149) > at > org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401) > at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320) > at org.apache.flink.core.fs.Path.getFileSystem(Path.java:293) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer.<init>(HistoryServer.java:169) > at > org.apache.flink.runtime.webmonitor.history.HistoryServer.<init>(HistoryServer.java:132) > {code} > The only way it works is to provide the full HDFS path, including the authority, like: > {code:java} > historyserver.archive.fs.dir: > hdfs://<namenode-host>:<port>/tmp/flink/cluster-name/jmarchive > {code} > > The situations above arise because two parts of the code treat the "scheme" > differently: > https://github.com/apache/flink/blob/release-1.4.2/flink-runtime/src/main/java/org/apache/flink/runtime/webmonitor/WebMonitorUtils.java#L299-L302 > https://github.com/apache/flink/blob/release-1.4.2/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java#L335-L338 > I believe the first case should be supported if users have set > fs.default-scheme --
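The three behaviors reported in the ticket can be summarized in one sketch (paths as in the report; the namenode host and port are placeholders, not values from the ticket):

```yaml
# 1) No scheme: rejected by WebMonitorUtils.validateAndNormalizeUri,
#    even though fs.default-scheme is set.
# historyserver.archive.fs.dir: /tmp/flink/cluster-name/jmarchive

# 2) Scheme without authority: rejected by HadoopFsFactory because no
#    valid fs.defaultFS authority component is found.
# historyserver.archive.fs.dir: hdfs:///tmp/flink/cluster-name/jmarchive

# 3) Scheme plus explicit authority: works.
historyserver.archive.fs.dir: hdfs://<namenode-host>:<port>/tmp/flink/cluster-name/jmarchive
```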
[GitHub] [flink] jiangxin369 opened a new pull request #14771: [FLINK-9683] Allow history server use default fs scheme
jiangxin369 opened a new pull request #14771: URL: https://github.com/apache/flink/pull/14771 ## What is the purpose of the change *This pull request enables the history server to use the default filesystem scheme when no scheme is specified in `historyserver.archive.fs.dir`.* ## Brief change log - *Safely remove the `validateAndNormalizeUri` check of the archive dir* ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
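The idea behind the change can be sketched with a small, hypothetical helper (`withDefaultScheme` is illustrative only, not the actual Flink API): when the configured archive directory has no scheme, fall back to a configured default scheme instead of rejecting the URI.

```java
import java.net.URI;

public class SchemeDefaulting {

    // Hypothetical helper: apply a default scheme (e.g. the value of
    // fs.default-scheme) when the archive dir URI has none. Assumes the
    // directory is an absolute path, as in historyserver.archive.fs.dir.
    static URI withDefaultScheme(String dir, String defaultScheme) {
        URI uri = URI.create(dir);
        if (uri.getScheme() == null) {
            return URI.create(defaultScheme + "://" + dir);
        }
        return uri;
    }

    public static void main(String[] args) {
        // A scheme-less dir picks up the default scheme.
        System.out.println(withDefaultScheme("/tmp/flink/cluster-name/jmarchive", "hdfs"));
        // An explicit scheme is left untouched.
        System.out.println(withDefaultScheme("file:///tmp/flink/jmarchive", "hdfs"));
    }
}
```

Note this sketch only covers scheme defaulting; a real implementation would still need the authority handling that `HadoopFsFactory` performs for `hdfs://` URIs.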
[GitHub] [flink] flinkbot edited a comment on pull request #14770: [FLINK-21066] Fix hbase 1.4 tests on Hadoop 3.x
flinkbot edited a comment on pull request #14770: URL: https://github.com/apache/flink/pull/14770#issuecomment-768105021 ## CI report: * f515dd11a8c7bda0fd1386c9287edacec51492d2 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12547) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #14760: [FLINK-20994][python] Add public method to create TableEnvironment in PyFlink.
flinkbot edited a comment on pull request #14760: URL: https://github.com/apache/flink/pull/14760#issuecomment-767543514 ## CI report: * a2dbd928da387b47c4e7daf1b2bd7dc3c0bf032f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12541) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] godfreyhe merged pull request #14757: [FLINK-21105][table-planner-blink] Rename ExecEdge to InputProperty and rename RequiredShuffle to RequiredDistribution
godfreyhe merged pull request #14757: URL: https://github.com/apache/flink/pull/14757
[jira] [Comment Edited] (FLINK-20942) Digest of FLOAT literals throws UnsupportedOperationException
[ https://issues.apache.org/jira/browse/FLINK-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272647#comment-17272647 ] Danny Chen edited comment on FLINK-20942 at 1/27/21, 8:05 AM: -- I have filed a fix in CALCITE-4479, but I have no idea how to fix it quickly on the Flink side; should we copy {{RexLiteral}} then? The class is huge. was (Author: danny0405): I have filed a fix in CALCITE-4479, but I have no idea how to fix it quickly on the Flink side; should we copy {{RexLiteral }} then? The class is huge. > Digest of FLOAT literals throws UnsupportedOperationException > - > > Key: FLINK-20942 > URL: https://issues.apache.org/jira/browse/FLINK-20942 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner > Reporter: Timo Walther > Priority: Critical > Fix For: 1.13.0, 1.12.2 > > > The recent refactoring of Calcite's digests might have caused a regression > for FLOAT literals. {{org.apache.calcite.rex.RexLiteral#appendAsJava}} throws > an UnsupportedOperationException for the following query: > {code} > def main(args: Array[String]): Unit = { > val env = StreamExecutionEnvironment.getExecutionEnvironment > val source = env.fromElements( > (1.0f, 11.0f, 12.0f), > (2.0f, 21.0f, 22.0f), > (3.0f, 31.0f, 32.0f), > (4.0f, 41.0f, 42.0f), > (5.0f, 51.0f, 52.0f) > ) > val settings = EnvironmentSettings.newInstance() > .inStreamingMode() > .useBlinkPlanner() > .build() > val tEnv = StreamTableEnvironment.create(env, settings) > tEnv.createTemporaryView("myTable", source, $("id"), $("f1"), $("f2")) > val query = > """ > |select * from myTable where id in (1.0, 2.0, 3.0) > |""".stripMargin > tEnv.executeSql(query).print() > } > {code} > Stack trace: > {code} > Exception in thread "main" java.lang.UnsupportedOperationException: class > org.apache.calcite.sql.type.SqlTypeName: FLOAT > at org.apache.calcite.util.Util.needToImplement(Util.java:1075) > at org.apache.calcite.rex.RexLiteral.appendAsJava(RexLiteral.java:703) > at 
org.apache.calcite.rex.RexLiteral.toJavaString(RexLiteral.java:408) > at org.apache.calcite.rex.RexLiteral.computeDigest(RexLiteral.java:276) > at org.apache.calcite.rex.RexLiteral.<init>(RexLiteral.java:223) > at org.apache.calcite.rex.RexLiteral.toLiteral(RexLiteral.java:737) > at > org.apache.calcite.rex.RexLiteral.lambda$printSarg$4(RexLiteral.java:710) > at > org.apache.calcite.util.RangeSets$Printer.singleton(RangeSets.java:397) > at org.apache.calcite.util.RangeSets.forEach(RangeSets.java:237) > at org.apache.calcite.util.Sarg.lambda$printTo$0(Sarg.java:110) > at org.apache.calcite.linq4j.Ord.forEach(Ord.java:157) > at org.apache.calcite.util.Sarg.printTo(Sarg.java:106) > at org.apache.calcite.rex.RexLiteral.printSarg(RexLiteral.java:709) > at > org.apache.calcite.rex.RexLiteral.lambda$appendAsJava$1(RexLiteral.java:652) > at org.apache.calcite.util.Util.asStringBuilder(Util.java:2502) > at org.apache.calcite.rex.RexLiteral.appendAsJava(RexLiteral.java:651) > at org.apache.calcite.rex.RexLiteral.toJavaString(RexLiteral.java:408) > at org.apache.calcite.rex.RexLiteral.computeDigest(RexLiteral.java:276) > at org.apache.calcite.rex.RexLiteral.<init>(RexLiteral.java:223) > at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:971) > at > org.apache.calcite.rex.RexBuilder.makeSearchArgumentLiteral(RexBuilder.java:1066) > at > org.apache.calcite.rex.RexSimplify$SargCollector.fix(RexSimplify.java:2786) > at > org.apache.calcite.rex.RexSimplify.lambda$simplifyOrs$6(RexSimplify.java:1843) > at java.util.ArrayList.forEach(ArrayList.java:1257) > at org.apache.calcite.rex.RexSimplify.simplifyOrs(RexSimplify.java:1843) > at org.apache.calcite.rex.RexSimplify.simplifyOr(RexSimplify.java:1817) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:313) > at > org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:282) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:257) > at > 
org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:213) > at > org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63) > at > org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46) > at > org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) > at org.apache.calcite.plan.hep.HepPlanner.apply
[jira] [Commented] (FLINK-20942) Digest of FLOAT literals throws UnsupportedOperationException
[ https://issues.apache.org/jira/browse/FLINK-20942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272647#comment-17272647 ] Danny Chen commented on FLINK-20942: I have filed a fix in CALCITE-4479, but I have no idea how to fix it quickly on the Flink side; should we copy {{RexLiteral}} then? The class is huge. > Digest of FLOAT literals throws UnsupportedOperationException > - > > Key: FLINK-20942 > URL: https://issues.apache.org/jira/browse/FLINK-20942 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner > Reporter: Timo Walther > Priority: Critical > Fix For: 1.13.0, 1.12.2 > > > The recent refactoring of Calcite's digests might have caused a regression > for FLOAT literals. {{org.apache.calcite.rex.RexLiteral#appendAsJava}} throws > an UnsupportedOperationException for the following query: > {code} > def main(args: Array[String]): Unit = { > val env = StreamExecutionEnvironment.getExecutionEnvironment > val source = env.fromElements( > (1.0f, 11.0f, 12.0f), > (2.0f, 21.0f, 22.0f), > (3.0f, 31.0f, 32.0f), > (4.0f, 41.0f, 42.0f), > (5.0f, 51.0f, 52.0f) > ) > val settings = EnvironmentSettings.newInstance() > .inStreamingMode() > .useBlinkPlanner() > .build() > val tEnv = StreamTableEnvironment.create(env, settings) > tEnv.createTemporaryView("myTable", source, $("id"), $("f1"), $("f2")) > val query = > """ > |select * from myTable where id in (1.0, 2.0, 3.0) > |""".stripMargin > tEnv.executeSql(query).print() > } > {code} > Stack trace: > {code} > Exception in thread "main" java.lang.UnsupportedOperationException: class > org.apache.calcite.sql.type.SqlTypeName: FLOAT > at org.apache.calcite.util.Util.needToImplement(Util.java:1075) > at org.apache.calcite.rex.RexLiteral.appendAsJava(RexLiteral.java:703) > at org.apache.calcite.rex.RexLiteral.toJavaString(RexLiteral.java:408) > at org.apache.calcite.rex.RexLiteral.computeDigest(RexLiteral.java:276) > at org.apache.calcite.rex.RexLiteral.<init>(RexLiteral.java:223) 
> at org.apache.calcite.rex.RexLiteral.toLiteral(RexLiteral.java:737) > at > org.apache.calcite.rex.RexLiteral.lambda$printSarg$4(RexLiteral.java:710) > at > org.apache.calcite.util.RangeSets$Printer.singleton(RangeSets.java:397) > at org.apache.calcite.util.RangeSets.forEach(RangeSets.java:237) > at org.apache.calcite.util.Sarg.lambda$printTo$0(Sarg.java:110) > at org.apache.calcite.linq4j.Ord.forEach(Ord.java:157) > at org.apache.calcite.util.Sarg.printTo(Sarg.java:106) > at org.apache.calcite.rex.RexLiteral.printSarg(RexLiteral.java:709) > at > org.apache.calcite.rex.RexLiteral.lambda$appendAsJava$1(RexLiteral.java:652) > at org.apache.calcite.util.Util.asStringBuilder(Util.java:2502) > at org.apache.calcite.rex.RexLiteral.appendAsJava(RexLiteral.java:651) > at org.apache.calcite.rex.RexLiteral.toJavaString(RexLiteral.java:408) > at org.apache.calcite.rex.RexLiteral.computeDigest(RexLiteral.java:276) > at org.apache.calcite.rex.RexLiteral.<init>(RexLiteral.java:223) > at org.apache.calcite.rex.RexBuilder.makeLiteral(RexBuilder.java:971) > at > org.apache.calcite.rex.RexBuilder.makeSearchArgumentLiteral(RexBuilder.java:1066) > at > org.apache.calcite.rex.RexSimplify$SargCollector.fix(RexSimplify.java:2786) > at > org.apache.calcite.rex.RexSimplify.lambda$simplifyOrs$6(RexSimplify.java:1843) > at java.util.ArrayList.forEach(ArrayList.java:1257) > at org.apache.calcite.rex.RexSimplify.simplifyOrs(RexSimplify.java:1843) > at org.apache.calcite.rex.RexSimplify.simplifyOr(RexSimplify.java:1817) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:313) > at > org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:282) > at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:257) > at > org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:213) > at > org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63) > at > 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46) > at > org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) > at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) > at > org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) > at > org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) > at > org.apache.