[jira] [Resolved] (FLINK-4565) Support for SQL IN operator
[ https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra resolved FLINK-4565. --- Resolution: Resolved > Support for SQL IN operator > --- > > Key: FLINK-4565 > URL: https://issues.apache.org/jira/browse/FLINK-4565 > Project: Flink > Issue Type: Improvement > Components: Table API & SQL >Reporter: Timo Walther >Assignee: Dmytro Shkvyra > > It seems that Flink SQL supports the uncorrelated sub-query IN operator. But > it should also be available in the Table API and tested. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (FLINK-4303) Add CEP examples
[ https://issues.apache.org/jira/browse/FLINK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-4303: - Assignee: (was: Dmytro Shkvyra) > Add CEP examples > > > Key: FLINK-4303 > URL: https://issues.apache.org/jira/browse/FLINK-4303 > Project: Flink > Issue Type: Improvement > Components: CEP >Affects Versions: 1.1.0 >Reporter: Timo Walther > > Neither CEP Java nor CEP Scala contain a runnable example. The example on the > website is also not runnable without adding some additional code. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (FLINK-4303) Add CEP examples
[ https://issues.apache.org/jira/browse/FLINK-4303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-4303: - Assignee: Dmytro Shkvyra (was: Alexander Chermenin) > Add CEP examples > > > Key: FLINK-4303 > URL: https://issues.apache.org/jira/browse/FLINK-4303 > Project: Flink > Issue Type: Improvement > Components: CEP >Affects Versions: 1.1.0 >Reporter: Timo Walther >Assignee: Dmytro Shkvyra > > Neither CEP Java nor CEP Scala contain a runnable example. The example on the > website is also not runnable without adding some additional code. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (FLINK-6613) OOM during reading big messages from Kafka
[ https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015569#comment-16015569 ] Dmytro Shkvyra edited comment on FLINK-6613 at 5/18/17 11:08 AM: - Hi [~dernasherbrezon], first of all, the root cause of this issue is the use of ParallelGC. An OOM is normal behavior for a JVM with ParallelGC if the application creates too many objects (please see the ParallelGC algorithm). -XX:-UseGCOverheadLimit just hides the underlying lack of memory. {quote} 3) If you recommend G1, then default startup scripts should be changed. {quote} We don't need to change the startup scripts. You can {{export JVM_ARGS="$JVM_ARGS -XX:+UseG1GC"}}; you can also pass other JVM options this way (except memory size options). The JobManager and TaskManager use the same options from {{JVM_ARGS}}. BTW, if you use the {{-XX:-UseParNewGC -XX:+UseConcMarkSweepGC}} options (serial GC), Flink will not read too many messages from Kafka, because Flink's JVM will be suspended by "stop the world" pauses. was (Author: dshkvyra): Hi [~dernasherbrezon], first of all root cause of this issue is using ParallelGC. OOM is normal behavior for JVM with ParallelGC if application create too much objects (please explore ParallelGC algoritm). -XX:-UseGCOverheadLimit just hide problem with lack of memory. {quote} 3) If you recommend G1, then default startup scripts should be changed. {quote} We don't need change startup scripts. You can {{export JVM_ARGS="$JVM_ARGS -XX:+UseG1GC"}}, you also can pass other JVM options (except memory size options) JobManager and TaskManager use the same options from {{JVM_ARGS}} > OOM during reading big messages from Kafka > -- > > Key: FLINK-6613 > URL: https://issues.apache.org/jira/browse/FLINK-6613 > Project: Flink > Issue Type: Bug > Components: Kafka Connector >Affects Versions: 1.2.0 >Reporter: Andrey > > Steps to reproduce: > 1) Setup Task manager with 2G heap size > 2) Setup job that reads messages from Kafka 10 (i.e. 
FlinkKafkaConsumer010) > 3) Send 3300 messages each 635Kb. So total size is ~2G > 4) OOM in task manager. > According to heap dump: > 1) KafkaConsumerThread read messages with total size ~1G. > 2) Pass them to the next operator using > org.apache.flink.streaming.connectors.kafka.internal.Handover > 3) Then began to read another batch of messages. > 4) Task manager was able to read next batch of ~500Mb messages until OOM. > Expected: > 1) Either have constraint like "number of messages in-flight" OR > 2) Read next batch of messages only when previous batch processed OR > 3) Any other option which will solve OOM. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (FLINK-6613) OOM during reading big messages from Kafka
[ https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16015569#comment-16015569 ] Dmytro Shkvyra commented on FLINK-6613: --- Hi [~dernasherbrezon], first of all, the root cause of this issue is the use of ParallelGC. An OOM is normal behavior for a JVM with ParallelGC if the application creates too many objects (please see the ParallelGC algorithm). -XX:-UseGCOverheadLimit just hides the underlying lack of memory. {quote} 3) If you recommend G1, then default startup scripts should be changed. {quote} We don't need to change the startup scripts. You can {{export JVM_ARGS="$JVM_ARGS -XX:+UseG1GC"}}; you can also pass other JVM options this way (except memory size options). The JobManager and TaskManager use the same options from {{JVM_ARGS}}. > OOM during reading big messages from Kafka > -- > > Key: FLINK-6613 > URL: https://issues.apache.org/jira/browse/FLINK-6613 > Project: Flink > Issue Type: Bug > Components: Kafka Connector >Affects Versions: 1.2.0 >Reporter: Andrey > > Steps to reproduce: > 1) Setup Task manager with 2G heap size > 2) Setup job that reads messages from Kafka 10 (i.e. FlinkKafkaConsumer010) > 3) Send 3300 messages each 635Kb. So total size is ~2G > 4) OOM in task manager. > According to heap dump: > 1) KafkaConsumerThread read messages with total size ~1G. > 2) Pass them to the next operator using > org.apache.flink.streaming.connectors.kafka.internal.Handover > 3) Then began to read another batch of messages. > 4) Task manager was able to read next batch of ~500Mb messages until OOM. > Expected: > 1) Either have constraint like "number of messages in-flight" OR > 2) Read next batch of messages only when previous batch processed OR > 3) Any other option which will solve OOM. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
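The {{JVM_ARGS}} advice above can be verified at runtime. The following standalone sketch (plain {{java.lang.management}}, no Flink API; the class name {{GcCheck}} is made up for illustration) prints the collectors the current JVM is actually running with, so you can confirm that a flag such as {{-XX:+UseG1GC}} took effect:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class GcCheck {
    // Returns the names of the active collectors, e.g. "G1 Young Generation"
    // and "G1 Old Generation" when the JVM was started with -XX:+UseG1GC.
    public static List<String> activeCollectors() {
        List<String> names = new ArrayList<>();
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            names.add(gc.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        activeCollectors().forEach(System.out::println);
    }
}
```

Running this inside a TaskManager-sized test JVM is a quick way to rule out a misconfigured GC before chasing an OOM further.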
[jira] [Commented] (FLINK-6613) OOM during reading big messages from Kafka
[ https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014357#comment-16014357 ] Dmytro Shkvyra commented on FLINK-6613: --- Hi [~dernasherbrezon], I assume from your description that you are using ParallelGC. Please see https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html {quote} The parallel collector throws an OutOfMemoryError if too much time is being spent in garbage collection (GC): If more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, then an OutOfMemoryError is thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line. {quote} and if {quote} 3) Send 3300 messages each 635Kb. So total size is ~2G {quote} ParallelGC can't collect all the garbage in time. BTW, there are two parallel GC algorithms http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html and the old one also cleans the old generation. I think we can close this ticket, because it can be solved by using G1 and is out of scope for Flink. > OOM during reading big messages from Kafka > -- > > Key: FLINK-6613 > URL: https://issues.apache.org/jira/browse/FLINK-6613 > Project: Flink > Issue Type: Bug > Components: Kafka Connector >Affects Versions: 1.2.0 >Reporter: Andrey > > Steps to reproduce: > 1) Setup Task manager with 2G heap size > 2) Setup job that reads messages from Kafka 10 (i.e. FlinkKafkaConsumer010) > 3) Send 3300 messages each 635Kb. So total size is ~2G > 4) OOM in task manager. > According to heap dump: > 1) KafkaConsumerThread read messages with total size ~1G. 
> 2) Pass them to the next operator using > org.apache.flink.streaming.connectors.kafka.internal.Handover > 3) Then began to read another batch of messages. > 4) Task manager was able to read next batch of ~500Mb messages until OOM. > Expected: > 1) Either have constraint like "number of messages in-flight" OR > 2) Read next batch of messages only when previous batch processed OR > 3) Any other option which will solve OOM. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (FLINK-6613) OOM during reading big messages from Kafka
[ https://issues.apache.org/jira/browse/FLINK-6613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014155#comment-16014155 ] Dmytro Shkvyra commented on FLINK-6613: --- [~dernasherbrezon] What JVM option and GC do you use? > OOM during reading big messages from Kafka > -- > > Key: FLINK-6613 > URL: https://issues.apache.org/jira/browse/FLINK-6613 > Project: Flink > Issue Type: Bug > Components: Kafka Connector >Affects Versions: 1.2.0 >Reporter: Andrey > > Steps to reproduce: > 1) Setup Task manager with 2G heap size > 2) Setup job that reads messages from Kafka 10 (i.e. FlinkKafkaConsumer010) > 3) Send 3300 messages each 635Kb. So total size is ~2G > 4) OOM in task manager. > According to heap dump: > 1) KafkaConsumerThread read messages with total size ~1G. > 2) Pass them to the next operator using > org.apache.flink.streaming.connectors.kafka.internal.Handover > 3) Then began to read another batch of messages. > 4) Task manager was able to read next batch of ~500Mb messages until OOM. > Expected: > 1) Either have constraint like "number of messages in-flight" OR > 2) Read next batch of messages only when previous batch processed OR > 3) Any other option which will solve OOM. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (FLINK-3451) NettyServer port is determined via NetUtils.getAvailablePort()
[ https://issues.apache.org/jira/browse/FLINK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014134#comment-16014134 ] Dmytro Shkvyra edited comment on FLINK-3451 at 5/17/17 2:44 PM: [~uce] IMHO the problem is in {{NetUtils.getAvailablePort()}}. Letting {{NettyServer}} pick the port itself is not a solution to the problem; the picked port can also be busy. We can solve the problem for the current JVM if we declare {{NetUtils.getAvailablePort()}} as {{synchronized}} and pick the port randomly from several candidates. That will prevent different threads from capturing the same port number. Unfortunately, it can't help if the same port number is captured by another process. [~uce] What do you think? Is my solution acceptable for this issue? If yes, I will implement it. was (Author: dshkvyra): [~uce] IMHO the problem is in `NetUtils.getAvailablePort()`. Letting `NettyServer` pick the port itself is not solution of problem, picked port can be also busy. We can solve problem for current JVM if use `synchronized` declaration for `NetUtils.getAvailablePort()` and get port randomly from some variants. It will prevent capture of same port number from different threads. Unfortunately, it can't prevent if same port number was captured from another process. [~uce] What do you think? Is my solution acceptable for this issue. If yes I would implement it. > NettyServer port is determined via NetUtils.getAvailablePort() > -- > > Key: FLINK-3451 > URL: https://issues.apache.org/jira/browse/FLINK-3451 > Project: Flink > Issue Type: Bug > Components: Distributed Coordination >Reporter: Ufuk Celebi > > During start up, the {{NettyServer}} port is configured via > {{NetUtils.getAvailablePort()}}. In most cases, this works as expected, but > it is possible that another service grabs the port while it is available > (between the check and {{NettyServer}} initialization). > It is more robust to let {{NettyServer}} pick the port itself if no port is > specified. 
> In general the {{getAvailablePort-then-setConfig}} pattern is not recommended. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (FLINK-3451) NettyServer port is determined via NetUtils.getAvailablePort()
[ https://issues.apache.org/jira/browse/FLINK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014134#comment-16014134 ] Dmytro Shkvyra edited comment on FLINK-3451 at 5/17/17 2:43 PM: [~uce] IMHO the problem is in `NetUtils.getAvailablePort()`. Letting `NettyServer` pick the port itself is not a solution to the problem; the picked port can also be busy. We can solve the problem for the current JVM if we declare `NetUtils.getAvailablePort()` as `synchronized` and pick the port randomly from several candidates. That will prevent different threads from capturing the same port number. Unfortunately, it can't help if the same port number is captured by another process. [~uce] What do you think? Is my solution acceptable for this issue? If yes, I will implement it. was (Author: dshkvyra): [~uce] IMHO the problem is in {NetUtils.getAvailablePort()}. Letting {NettyServer} pick the port itself is not solution of problem, picked port can be also busy. We can solve problem for current JVM if use {synchronized} declaration for {NetUtils.getAvailablePort()} and get port randomly from some variants. It will prevent capture of same port number from different threads. Unfortunately, it can't prevent if same port number was captured from another process. [~uce] What do you think? Is my solution acceptable for this issue. If yes I would implement it. > NettyServer port is determined via NetUtils.getAvailablePort() > -- > > Key: FLINK-3451 > URL: https://issues.apache.org/jira/browse/FLINK-3451 > Project: Flink > Issue Type: Bug > Components: Distributed Coordination >Reporter: Ufuk Celebi > > During start up, the {{NettyServer}} port is configured via > {{NetUtils.getAvailablePort()}}. In most cases, this works as expected, but > it is possible that another service grabs the port while it is available > (between the check and {{NettyServer}} initialization). > It is more robust to let {{NettyServer}} pick the port itself if no port is > specified. 
> In general the {{getAvailablePort-then-setConfig}} pattern is not recommended. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (FLINK-3451) NettyServer port is determined via NetUtils.getAvailablePort()
[ https://issues.apache.org/jira/browse/FLINK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014134#comment-16014134 ] Dmytro Shkvyra commented on FLINK-3451: --- [~uce] IMHO the problem is in {{NetUtils.getAvailablePort()}}. Letting {{NettyServer}} pick the port itself is not a solution to the problem; the picked port can also be busy. We can solve the problem for the current JVM if we declare {{NetUtils.getAvailablePort()}} as {{synchronized}} and pick the port randomly from several candidates. That will prevent different threads from capturing the same port number. Unfortunately, it can't help if the same port number is captured by another process. [~uce] What do you think? Is my solution acceptable for this issue? If yes, I will implement it. > NettyServer port is determined via NetUtils.getAvailablePort() > -- > > Key: FLINK-3451 > URL: https://issues.apache.org/jira/browse/FLINK-3451 > Project: Flink > Issue Type: Bug > Components: Distributed Coordination >Reporter: Ufuk Celebi > > During start up, the {{NettyServer}} port is configured via > {{NetUtils.getAvailablePort()}}. In most cases, this works as expected, but > it is possible that another service grabs the port while it is available > (between the check and {{NettyServer}} initialization). > It is more robust to let {{NettyServer}} pick the port itself if no port is > specified. > In general the {{getAvailablePort-then-setConfig}} pattern is not recommended. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
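The {{synchronized}} picker proposed in the comment can be sketched as follows (class and method names are illustrative, not the actual Flink {{NetUtils}} code): binding a {{ServerSocket}} to port 0 lets the OS hand out a free ephemeral port, and {{synchronized}} serializes callers within one JVM, though, as noted above, another process can still grab the port after the socket is closed.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortPicker {
    // Serializes callers in this JVM only; it cannot guard against other
    // processes binding the same port between the check and actual use.
    public static synchronized int getAvailablePort() {
        for (int attempt = 0; attempt < 50; attempt++) {
            try (ServerSocket socket = new ServerSocket(0)) {
                int port = socket.getLocalPort();  // OS-assigned free port
                if (port > 0) {
                    return port;
                }
            } catch (IOException ignored) {
                // bind failed, try again
            }
        }
        throw new IllegalStateException("Could not find a free port");
    }
}
```

This illustrates why letting {{NettyServer}} bind port 0 itself is still the more robust fix: the race window disappears entirely when the component that checks the port is the one that keeps it.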
[jira] [Resolved] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins
[ https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra resolved FLINK-5256. --- Resolution: Resolved > Extend DataSetSingleRowJoin to support Left and Right joins > --- > > Key: FLINK-5256 > URL: https://issues.apache.org/jira/browse/FLINK-5256 > Project: Flink > Issue Type: Improvement > Components: Table API & SQL >Affects Versions: 1.2.0 >Reporter: Fabian Hueske >Assignee: Dmytro Shkvyra > > The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary > inner joins where one input is a single row. > I found that Calcite translates certain subqueries into non-equi left and > right joins with single input. These cases can be handled if the > {{DataSetSingleRowJoin}} is extended to support outer joins on the > non-single-row input, i.e., left joins if the right side is single input and > vice versa. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (FLINK-5476) Fail fast if trying to submit a job to a non-existing Flink cluster
[ https://issues.apache.org/jira/browse/FLINK-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-5476: - Assignee: Dmytro Shkvyra > Fail fast if trying to submit a job to a non-existing Flink cluster > --- > > Key: FLINK-5476 > URL: https://issues.apache.org/jira/browse/FLINK-5476 > Project: Flink > Issue Type: Improvement > Components: Client >Affects Versions: 1.2.0, 1.3.0 >Reporter: Till Rohrmann >Assignee: Dmytro Shkvyra >Priority: Minor > > In case of entering the wrong job manager address when submitting a job via > {{flink run}}, the {{JobClientActor}} waits per default {{60 s}} until a > {{JobClientActorConnectionException}}, indicating that the {{JobManager}} is > no longer reachable, is thrown. In order to fail fast in case of wrong > connection information, we could change it such that it uses initially a much > lower timeout and only increases the timeout if it had at least once > successfully connected to a {{JobManager}} before. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Resolved] (FLINK-5476) Fail fast if trying to submit a job to a non-existing Flink cluster
[ https://issues.apache.org/jira/browse/FLINK-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra resolved FLINK-5476. --- Resolution: Resolved > Fail fast if trying to submit a job to a non-existing Flink cluster > --- > > Key: FLINK-5476 > URL: https://issues.apache.org/jira/browse/FLINK-5476 > Project: Flink > Issue Type: Improvement > Components: Client >Affects Versions: 1.2.0, 1.3.0 >Reporter: Till Rohrmann >Assignee: Dmytro Shkvyra >Priority: Minor > > In case of entering the wrong job manager address when submitting a job via > {{flink run}}, the {{JobClientActor}} waits per default {{60 s}} until a > {{JobClientActorConnectionException}}, indicating that the {{JobManager}} is > no longer reachable, is thrown. In order to fail fast in case of wrong > connection information, we could change it such that it uses initially a much > lower timeout and only increases the timeout if it had at least once > successfully connected to a {{JobManager}} before. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (FLINK-5476) Fail fast if trying to submit a job to a non-existing Flink cluster
[ https://issues.apache.org/jira/browse/FLINK-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-5476: - Assignee: Dmytro Shkvyra (was: Dmitrii Kniazev) > Fail fast if trying to submit a job to a non-existing Flink cluster > --- > > Key: FLINK-5476 > URL: https://issues.apache.org/jira/browse/FLINK-5476 > Project: Flink > Issue Type: Improvement > Components: Client >Affects Versions: 1.2.0, 1.3.0 >Reporter: Till Rohrmann >Assignee: Dmytro Shkvyra >Priority: Minor > > In case of entering the wrong job manager address when submitting a job via > {{flink run}}, the {{JobClientActor}} waits per default {{60 s}} until a > {{JobClientActorConnectionException}}, indicating that the {{JobManager}} is > no longer reachable, is thrown. In order to fail fast in case of wrong > connection information, we could change it such that it uses initially a much > lower timeout and only increases the timeout if it had at least once > successfully connected to a {{JobManager}} before. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
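The fail-fast idea in the description can be sketched as a tiny state holder (hypothetical names, not the actual {{JobClientActor}} code): start with a short connection timeout and only switch to the long one after the first successful connection to the {{JobManager}}.

```java
public class ConnectTimeoutPolicy {
    private final long initialTimeoutMs;      // used until first success, keeps failures fast
    private final long establishedTimeoutMs;  // used once a JobManager was reachable
    private boolean everConnected = false;

    public ConnectTimeoutPolicy(long initialTimeoutMs, long establishedTimeoutMs) {
        this.initialTimeoutMs = initialTimeoutMs;
        this.establishedTimeoutMs = establishedTimeoutMs;
    }

    // Timeout to apply to the next connection attempt.
    public long currentTimeoutMs() {
        return everConnected ? establishedTimeoutMs : initialTimeoutMs;
    }

    // Call after a successful connection; later attempts get the long timeout.
    public void onConnected() {
        everConnected = true;
    }
}
```

With, say, a 5 s initial timeout, a wrong JobManager address fails in seconds instead of the default 60 s, while transient reconnects after a known-good connection still get the generous timeout.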
[jira] [Assigned] (FLINK-5786) Add support for GetClusterStatus message for standalone Flink cluster
[ https://issues.apache.org/jira/browse/FLINK-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-5786: - Assignee: Dmytro Shkvyra (was: Dmitrii Kniazev) > Add support for GetClusterStatus message for standalone Flink cluster > - > > Key: FLINK-5786 > URL: https://issues.apache.org/jira/browse/FLINK-5786 > Project: Flink > Issue Type: Bug > Components: Distributed Coordination >Affects Versions: 1.2.0, 1.3.0 >Reporter: Dmitrii Kniazev >Assignee: Dmytro Shkvyra >Priority: Minor > > Currently, invoking {{StandaloneClusterClient#getClusterStatus()}} > causes the failure of the whole Flink cluster, because {{JobManager}} has no > handler for the {{GetClusterStatus}} message. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Closed] (FLINK-6149) add additional flink logical relation nodes
[ https://issues.apache.org/jira/browse/FLINK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra closed FLINK-6149. - Resolution: Fixed Ok [~ykt836], I will close it. Thanks for the clarification. > add additional flink logical relation nodes > --- > > Key: FLINK-6149 > URL: https://issues.apache.org/jira/browse/FLINK-6149 > Project: Flink > Issue Type: Sub-task > Components: Table API & SQL >Reporter: Kurt Young >Assignee: Kurt Young > Fix For: 1.3.0 > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Reopened] (FLINK-6149) add additional flink logical relation nodes
[ https://issues.apache.org/jira/browse/FLINK-6149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reopened FLINK-6149: --- This change makes SQL statements with null-producing parts fail, like this: {code} val sqlQuery = "SELECT a, cnt " + "FROM" + " (SELECT cnt FROM (SELECT COUNT(*) AS cnt FROM B) WHERE cnt < 0) " + "RIGHT JOIN A " + "ON a < cnt" {code} Error: {code} org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query: LogicalProject(a=[$1], cnt=[$0]) LogicalJoin(condition=[<($1, $0)], joinType=[right]) LogicalProject(cnt=[$0]) LogicalFilter(condition=[<($0, 0)]) LogicalAggregate(group=[{}], cnt=[COUNT()]) LogicalProject($f0=[0]) LogicalTableScan(table=[[B]]) LogicalTableScan(table=[[A]]) This exception indicates that the query uses an unsupported SQL feature. Please check the documentation for the set of currently supported SQL features. at org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:267) at org.apache.flink.table.api.BatchTableEnvironment.optimize(BatchTableEnvironment.scala:235) at org.apache.flink.table.api.BatchTableEnvironment.translate(BatchTableEnvironment.scala:265) at org.apache.flink.table.api.scala.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:140) at org.apache.flink.table.api.scala.TableConversions.toDataSet(TableConversions.scala:40) {code} > add additional flink logical relation nodes > --- > > Key: FLINK-6149 > URL: https://issues.apache.org/jira/browse/FLINK-6149 > Project: Flink > Issue Type: Sub-task > Components: Table API & SQL >Reporter: Kurt Young >Assignee: Kurt Young > Fix For: 1.3.0 > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (FLINK-4604) Add support for standard deviation/variance
[ https://issues.apache.org/jira/browse/FLINK-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-4604: - Assignee: Dmytro Shkvyra (was: Anton Mushin) > Add support for standard deviation/variance > --- > > Key: FLINK-4604 > URL: https://issues.apache.org/jira/browse/FLINK-4604 > Project: Flink > Issue Type: New Feature > Components: Table API & SQL >Reporter: Timo Walther >Assignee: Dmytro Shkvyra > Attachments: 1.jpg > > > Calcite's {{AggregateReduceFunctionsRule}} can convert SQL {{AVG, STDDEV_POP, > STDDEV_SAMP, VAR_POP, VAR_SAMP}} to sum/count functions. We should add, test > and document this rule. > If we also want to add this aggregates to Table API is up for discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
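At its core, the rewrite that Calcite's {{AggregateReduceFunctionsRule}} performs for the population variance is the algebraic identity VAR_POP(x) = (SUM(x*x) - SUM(x)*SUM(x)/COUNT(x)) / COUNT(x), with STDDEV_POP as its square root. A small standalone check of that identity (plain Java, no Calcite; the class name is made up):

```java
public class VarianceRewrite {
    // Computes VAR_POP via the sum/count decomposition used by the rule:
    // (SUM(x*x) - SUM(x)*SUM(x)/COUNT(x)) / COUNT(x)
    public static double varPop(double[] xs) {
        double sum = 0.0;
        double sumSq = 0.0;
        long count = xs.length;
        for (double x : xs) {
            sum += x;
            sumSq += x * x;
        }
        return (sumSq - sum * sum / count) / count;
    }
}
```

For {1, 2, 3, 4} this yields 1.25, matching the textbook definition of population variance; the sample variants (VAR_SAMP, STDDEV_SAMP) divide by COUNT - 1 instead.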
[jira] [Created] (FLINK-6128) Optimize JVM options to improve test performance
Dmytro Shkvyra created FLINK-6128: - Summary: Optimize JVM options to improve test performance Key: FLINK-6128 URL: https://issues.apache.org/jira/browse/FLINK-6128 Project: Flink Issue Type: Improvement Components: Tests Environment: maven-surefire-plugin Reporter: Dmytro Shkvyra Assignee: Dmytro Shkvyra Tune the JVM options used to run tests with maven-surefire-plugin on Travis -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins
[ https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmytro Shkvyra reassigned FLINK-5256: - Assignee: Dmytro Shkvyra (was: Anton Mushin) > Extend DataSetSingleRowJoin to support Left and Right joins > --- > > Key: FLINK-5256 > URL: https://issues.apache.org/jira/browse/FLINK-5256 > Project: Flink > Issue Type: Improvement > Components: Table API & SQL >Affects Versions: 1.2.0 >Reporter: Fabian Hueske >Assignee: Dmytro Shkvyra > > The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary > inner joins where one input is a single row. > I found that Calcite translates certain subqueries into non-equi left and > right joins with single input. These cases can be handled if the > {{DataSetSingleRowJoin}} is extended to support outer joins on the > non-single-row input, i.e., left joins if the right side is single input and > vice versa. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
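The extension described above, padding with nulls when the single broadcast row fails the non-equi predicate, can be illustrated with a toy left outer join against a single row (made-up names and a hard-coded {{<}} predicate for illustration; this is not the Flink operator):

```java
import java.util.ArrayList;
import java.util.List;

public class SingleRowOuterJoin {
    // Left outer join where the right side is one (possibly absent) row and
    // the predicate is an arbitrary non-equi condition, here: left < right.
    // Every left row is emitted; the right side is "null" when the predicate
    // fails or the single row is absent.
    public static List<String> leftJoin(List<Integer> left, Integer singleRow) {
        List<String> out = new ArrayList<>();
        for (int l : left) {
            if (singleRow != null && l < singleRow) {
                out.add(l + "," + singleRow);
            } else {
                out.add(l + ",null");  // pad with null, as an outer join must
            }
        }
        return out;
    }
}
```

Since the right side is a single row, broadcasting it and mapping over the left input preserves outer-join semantics without any shuffle, which is exactly why {{DataSetSingleRowJoin}} is a good fit for these Calcite-generated plans.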