[jira] [Commented] (FLINK-6672) Support CAST(timestamp AS BIGINT)

2017-05-28 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16028074#comment-16028074
 ] 

sunjincheng commented on FLINK-6672:


Hi, [~twalthr] [~wheat9] I found a situation in 
[FLINK-6740|https://issues.apache.org/jira/browse/FLINK-6740], and I am not sure 
whether this JIRA will also deal with 
[FLINK-6740|https://issues.apache.org/jira/browse/FLINK-6740]. I would appreciate 
it if you could take a look at the description of 
[FLINK-6740|https://issues.apache.org/jira/browse/FLINK-6740].

Best,
SunJincheng

> Support CAST(timestamp AS BIGINT)
> -
>
> Key: FLINK-6672
> URL: https://issues.apache.org/jira/browse/FLINK-6672
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: Haohui Mai
>
> It is not possible to cast a TIMESTAMP, TIME, or DATE to BIGINT, INT, and INT, 
> respectively, in SQL. The Table API and the code generation support this, but 
> the SQL validation seems to prohibit it.
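
For context, a minimal sketch of the two paths the description contrasts (the 
table, field, and environment names are illustrative, not from the issue):

{code}
// (Table API Scala implicits and a registered table are assumed)

// Table API: the cast is already supported on this path
val result = table.select('ts.cast(Types.LONG))

// SQL: the form that the SQL validation currently rejects
tableEnv.sql("SELECT CAST(ts AS BIGINT) FROM MyTable")
{code}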





[jira] [Commented] (FLINK-6737) Fix over expression parse String error.

2017-05-28 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16028073#comment-16028073
 ] 

sunjincheng commented on FLINK-6737:


Hi [~fhueske], I think this JIRA only improves the `over` method, not the whole 
`select` clause. I have not met a case that is not supported by a Scala 
expression but only by a String expression.
I think this is not a release blocker. If you do not want to add this change, I 
am okay with that.

Best,
SunJincheng
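
For reference, a sketch of the same call written with Scala expressions instead 
of a String literal (assuming countFun is a registered aggregate function, as in 
the issue description):

{code}
.select('c, countFun('b) over 'w as 'mycount, weightAvgFun('a, 'b) over 'w as 'wAvg)
{code}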




> Fix over expression parse String error.
> ---
>
> Key: FLINK-6737
> URL: https://issues.apache.org/jira/browse/FLINK-6737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we run the TableAPI as follows:
> {code}
> val windowedTable = table
>   .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW 
> as 'w)
>   .select('c, "countFun(b)" over 'w as 'mycount, weightAvgFun('a, 'b) 
> over 'w as 'wAvg)
> {code}
> We get the error:
> {code}
> org.apache.flink.table.api.TableException: The over method can only using 
> with aggregation expression.
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionOperations$class.over(expressionDsl.scala:469)
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionConversions$LiteralStringExpression.over(expressionDsl.scala:756)
> {code}
> The reason is that the `over` method of `expressionDsl` does not parse the 
> String case.
> I think we should fix this before the 1.3 release.





[jira] [Commented] (FLINK-2055) Implement Streaming HBaseSink

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16028069#comment-16028069
 ] 

ASF GitHub Bot commented on FLINK-2055:
---

Github user ramkrish86 commented on the issue:

https://github.com/apache/flink/pull/2332
  
I can take this up and come up with a design doc. Reading through the comments 
here and the final decision points, I think only puts/deletes can be considered 
idempotent; increments/decrements cannot. We may then need two types of sink: 
one that supports puts/deletes, and another that supports non-idempotent ops.
Coming to @nielsbasjes' issue of not being able to flush the buffered mutator - 
should BufferedMutator#flush() be called on every checkpoint?
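
For the flush-on-checkpoint question, a minimal sketch of what that could look 
like, assuming a sink that buffers mutations in an HBase BufferedMutator (the 
class name and connection setup are illustrative, not the actual PR code):

{code}
import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
import org.apache.hadoop.hbase.client.{BufferedMutator, Mutation}

class BufferingHBaseSink extends RichSinkFunction[Mutation] with CheckpointedFunction {

  // created in open() from an HBase Connection; setup elided for brevity
  @transient private var mutator: BufferedMutator = _

  override def invoke(value: Mutation): Unit = {
    // puts/deletes are only buffered here, not yet durable in HBase
    mutator.mutate(value)
  }

  override def snapshotState(context: FunctionSnapshotContext): Unit = {
    // flush on every checkpoint so all buffered mutations reach HBase
    // before the checkpoint is acknowledged
    mutator.flush()
  }

  override def initializeState(context: FunctionInitializationContext): Unit = {
    // no operator state to restore in this sketch
  }
}
{code}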


> Implement Streaming HBaseSink
> -
>
> Key: FLINK-2055
> URL: https://issues.apache.org/jira/browse/FLINK-2055
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Erli Ding
>
> As per : 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Write-Stream-to-HBase-td1300.html





[GitHub] flink issue #2332: [FLINK-2055] Implement Streaming HBaseSink

2017-05-28 Thread ramkrish86
Github user ramkrish86 commented on the issue:

https://github.com/apache/flink/pull/2332
  
I can take this up and come up with a design doc. Reading through the comments 
here and the final decision points, I think only puts/deletes can be considered 
idempotent; increments/decrements cannot. We may then need two types of sink: 
one that supports puts/deletes, and another that supports non-idempotent ops.
Coming to @nielsbasjes' issue of not being able to flush the buffered mutator - 
should BufferedMutator#flush() be called on every checkpoint?




[jira] [Updated] (FLINK-6389) Upgrade hbase dependency to 1.3.1

2017-05-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-6389:
--
Description: 
hbase 1.3.1 has been released.

It fixes a compatibility issue in the 1.3.0 release, among other bug fixes.
We should upgrade to hbase 1.3.1.

  was:
hbase 1.3.1 has been released.

It fixes compatibility issue in 1.3.0 release, among other bug fixes.

We should upgrade to hbase 1.3.1


> Upgrade hbase dependency to 1.3.1
> -
>
> Key: FLINK-6389
> URL: https://issues.apache.org/jira/browse/FLINK-6389
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Priority: Minor
>
> hbase 1.3.1 has been released.
> It fixes a compatibility issue in the 1.3.0 release, among other bug fixes.
> We should upgrade to hbase 1.3.1.





[jira] [Commented] (FLINK-6433) Unreachable code in SqlToRelConverter#visitCall()

2017-05-28 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027957#comment-16027957
 ] 

Ted Yu commented on FLINK-6433:
---

http://search-hadoop.com/m/Calcite/FR3K9Rk8b2wH4lO1?subj=Re+Expected+release+date+of+1+13+0

Looks like Calcite 1.13 is coming out in June.

> Unreachable code in SqlToRelConverter#visitCall()
> -
>
> Key: FLINK-6433
> URL: https://issues.apache.org/jira/browse/FLINK-6433
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   SqlFunction histogramOp = !ENABLE_HISTOGRAM_AGG
> ? null
> : getHistogramOp(aggOp);
>   if (histogramOp != null) {
> {code}
> Since ENABLE_HISTOGRAM_AGG is hardcoded to false, histogramOp is always null 
> and the if block can never be executed.





[jira] [Updated] (FLINK-6422) Unreachable code in FileInputFormat#createInputSplits

2017-05-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-6422:
--
Description: 
Here is related code:
{code}
if (minNumSplits < 1) {
  throw new IllegalArgumentException("Number of input splits has to be at 
least 1.");
}
...
final long maxSplitSize = (minNumSplits < 1) ? Long.MAX_VALUE : 
(totalLength / minNumSplits +
  (totalLength % minNumSplits == 0 ? 0 : 1));
{code}
minNumSplits cannot be less than 1 by the time maxSplitSize is assigned, so the 
Long.MAX_VALUE branch is unreachable.

  was:
Here is related code:

{code}
if (minNumSplits < 1) {
  throw new IllegalArgumentException("Number of input splits has to be at 
least 1.");
}
...
final long maxSplitSize = (minNumSplits < 1) ? Long.MAX_VALUE : 
(totalLength / minNumSplits +
  (totalLength % minNumSplits == 0 ? 0 : 1));
{code}
minNumSplits wouldn't be less than 1 getting to the assignment of maxSplitSize.


> Unreachable code in FileInputFormat#createInputSplits
> -
>
> Key: FLINK-6422
> URL: https://issues.apache.org/jira/browse/FLINK-6422
> Project: Flink
>  Issue Type: Bug
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Priority: Minor
>
> Here is related code:
> {code}
> if (minNumSplits < 1) {
>   throw new IllegalArgumentException("Number of input splits has to be at 
> least 1.");
> }
> ...
> final long maxSplitSize = (minNumSplits < 1) ? Long.MAX_VALUE : 
> (totalLength / minNumSplits +
>   (totalLength % minNumSplits == 0 ? 0 : 1));
> {code}
> minNumSplits cannot be less than 1 by the time maxSplitSize is assigned, so 
> the Long.MAX_VALUE branch is unreachable.





[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027955#comment-16027955
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850085
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/FiniteRandomSpout.java
 ---
@@ -32,8 +32,8 @@
 import java.util.Random;
 
 /**
- * A Spout implementation that broadcast random numbers across a specified 
number of output streams, until a specified
- * count was reached.
+ * A Spout implementation that broadcast randoms numbers across a 
specified number of output streams, until a specified
--- End diff --

"broadcasts random"


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027954#comment-16027954
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850077
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/split/SplitBoltTopology.java
 ---
@@ -28,7 +28,7 @@
 import org.apache.storm.topology.TopologyBuilder;
 
 /**
- * A simple topology that splits a number stream based the numbers parity, 
and verifies the result.
+ * A simple topology that splits a numbers stream based the numbers 
parity, and verifies the result.
--- End diff --

What if we simplified this to "splits a number stream based on the parity"?


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027956#comment-16027956
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850095
  
--- Diff: 
flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/operators/VerifyAndEnrichBolt.java
 ---
@@ -30,7 +30,7 @@
 
 /**
  * Verifies that incoming numbers are either even or odd, controlled by 
the constructor argument. Emitted tuples are
- * enriched with a new string field containing either "even" or "odd", 
based on the numbers parity.
+ * enriched with a new string field containing either "even" or "odd", 
based on the numbers' parity.
--- End diff --

I think I got this wrong, too. Should probably be `number's`.


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850077
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/split/SplitBoltTopology.java
 ---
@@ -28,7 +28,7 @@
 import org.apache.storm.topology.TopologyBuilder;
 
 /**
- * A simple topology that splits a number stream based the numbers parity, 
and verifies the result.
+ * A simple topology that splits a numbers stream based the numbers 
parity, and verifies the result.
--- End diff --

What if we simplified this to "splits a number stream based on the parity"?




[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850085
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/FiniteRandomSpout.java
 ---
@@ -32,8 +32,8 @@
 import java.util.Random;
 
 /**
- * A Spout implementation that broadcast random numbers across a specified 
number of output streams, until a specified
- * count was reached.
+ * A Spout implementation that broadcast randoms numbers across a 
specified number of output streams, until a specified
--- End diff --

"broadcasts random"




[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118850095
  
--- Diff: 
flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/operators/VerifyAndEnrichBolt.java
 ---
@@ -30,7 +30,7 @@
 
 /**
  * Verifies that incoming numbers are either even or odd, controlled by 
the constructor argument. Emitted tuples are
- * enriched with a new string field containing either "even" or "odd", 
based on the numbers parity.
+ * enriched with a new string field containing either "even" or "odd", 
based on the numbers' parity.
--- End diff --

I think I got this wrong, too. Should probably be `number's`.




[jira] [Commented] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027950#comment-16027950
 ] 

ASF GitHub Bot commented on FLINK-6699:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4005
  
+1, the failing test is unrelated. Thanks for correcting this @zentol!


> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink issue #4005: [FLINK-6699] Add checkstyle plugin to flink-yarn-tests po...

2017-05-28 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4005
  
+1, the failing test is unrelated. Thanks for correcting this @zentol!




[jira] [Created] (FLINK-6753) Flaky SqlITCase

2017-05-28 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6753:
---

 Summary: Flaky SqlITCase
 Key: FLINK-6753
 URL: https://issues.apache.org/jira/browse/FLINK-6753
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL, Tests
Affects Versions: 1.4.0
Reporter: Chesnay Schepler


{code}
Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.674 sec <<< 
FAILURE! - in org.apache.flink.table.api.scala.stream.sql.SqlITCase
testUnnestArrayOfArrayFromTable(org.apache.flink.table.api.scala.stream.sql.SqlITCase)
  Time elapsed: 0.289 sec  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:933)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
cannot be compiled. This is a bug. Please file an issue.
at 
org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
at 
org.apache.flink.table.runtime.CRowCorrelateProcessRunner.compile(CRowCorrelateProcessRunner.scala:35)
at 
org.apache.flink.table.runtime.CRowCorrelateProcessRunner.open(CRowCorrelateProcessRunner.scala:59)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
at 
org.apache.flink.streaming.api.operators.ProcessOperator.open(ProcessOperator.java:56)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:377)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError: null
at org.codehaus.janino.IClass.isAssignableFrom(IClass.java:652)
at 
org.codehaus.janino.UnitCompiler.isWideningReferenceConvertible(UnitCompiler.java:10844)
at 
org.codehaus.janino.UnitCompiler.isMethodInvocationConvertible(UnitCompiler.java:9010)
at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8799)
at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8657)
at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539)
at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8441)
at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4609)
at org.codehaus.janino.UnitCompiler.access$8200(UnitCompiler.java:209)
at 
org.codehaus.janino.UnitCompiler$12.visitMethodInvocation(UnitCompiler.java:3969)
at 
org.codehaus.janino.UnitCompiler$12.visitMethodInvocation(UnitCompiler.java:3942)
at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:4874)
at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:3942)
at 
org.codehaus.janino.UnitCompiler.compileGetValue(UnitCompiler.java:5125)
at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:3343)
at org.codehaus.janino.UnitCompiler.access$5000(UnitCompiler.java:209)
at 
org.codehaus.janino.UnitCompiler$9.visitMethodInvocation(UnitCompiler.java:3322)
at 
org.codehaus.janino.UnitCompiler$9.visitMethodInvocation(UnitCompiler.java:3294)
at org.codehaus.janino.Java$MethodInvocation.accept(Java.java:4874)
at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:3294)
at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:2214)
at org.codehaus.janino.UnitCompiler.access$1700(UnitCompiler.java:209)
at 
org.codehaus.janino.UnitCompiler$6.visitExpressionStatement(UnitCompiler.java:1445)
at 
org.codehaus.janino.UnitCo

[GitHub] flink issue #4004: [FLINK-6695] Activate strict checkstyle in flink-contrib

2017-05-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4004
  
@greghogan fixed.




[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027897#comment-16027897
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4004
  
@greghogan fixed.


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027895#comment-16027895
 ] 

ASF GitHub Bot commented on FLINK-6699:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4005

[FLINK-6699] Add checkstyle plugin to flink-yarn-tests pom

Adds the checkstyle plugin to the `flink-yarn-tests` pom and fixes 
remaining issues.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6699csyt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4005.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4005


commit 93d2b2b6f610a72fe45d3b7c8ead24865dd8d2b7
Author: zentol 
Date:   2017-05-28T18:14:50Z

[FLINK-6699] Add checkstyle plugin to flink-yarn-tests pom




> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #4005: [FLINK-6699] Add checkstyle plugin to flink-yarn-t...

2017-05-28 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4005

[FLINK-6699] Add checkstyle plugin to flink-yarn-tests pom

Adds the checkstyle plugin to the `flink-yarn-tests` pom and fixes 
remaining issues.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6699csyt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4005.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4005


commit 93d2b2b6f610a72fe45d3b7c8ead24865dd8d2b7
Author: zentol 
Date:   2017-05-28T18:14:50Z

[FLINK-6699] Add checkstyle plugin to flink-yarn-tests pom






[jira] [Reopened] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-6699:
-

[~greghogan] You are correct, reopening the issue.

> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6699) Activate strict-checkstyle in flink-yarn-tests

2017-05-28 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027882#comment-16027882
 ] 

Greg Hogan commented on FLINK-6699:
---

[~Zentol] it looks like we forgot to activate the checkstyle in this module 
({{CliFrontendYarnAddressConfigurationTest}} has import errors).

> Activate strict-checkstyle in flink-yarn-tests
> --
>
> Key: FLINK-6699
> URL: https://issues.apache.org/jira/browse/FLINK-6699
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests, YARN
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027878#comment-16027878
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845656
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/VerifyMetaDataBolt.java
 ---
@@ -28,6 +27,11 @@
 import org.apache.storm.tuple.Tuple;
 import org.apache.storm.tuple.Values;
 
+import java.util.Map;
+
+/**
+ * A Bolt implementation that verifies meta data emitted by a {@link 
MetaDataSpout}.
--- End diff --

I think that this should be "metadata" but see that the class is camelcased 
`MetaData`.


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027879#comment-16027879
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845633
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/split/SplitBoltTopology.java
 ---
@@ -15,23 +15,28 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.flink.storm.split;
 
-import org.apache.storm.topology.TopologyBuilder;
 import org.apache.flink.storm.split.operators.RandomSpout;
 import org.apache.flink.storm.split.operators.VerifyAndEnrichBolt;
 import org.apache.flink.storm.util.BoltFileSink;
 import org.apache.flink.storm.util.BoltPrintSink;
 import org.apache.flink.storm.util.OutputFormatter;
 import org.apache.flink.storm.util.TupleOutputFormatter;
 
+import org.apache.storm.topology.TopologyBuilder;
+
+/**
+ * A simple topology that splits a number stream based the numbers parity, 
and verifies the result.
--- End diff --

numbers -> numbers'


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027880#comment-16027880
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845646
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/FiniteRandomSpout.java
 ---
@@ -30,6 +28,13 @@
 import org.apache.storm.tuple.Values;
 import org.apache.storm.utils.Utils;
 
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * A Spout implementation that broadcast random numbers across a specified 
number of output streams, until a specified
--- End diff --

"broadcast" -> "broadcasts"

"was" -> "is"?


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845656
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/VerifyMetaDataBolt.java
 ---
@@ -28,6 +27,11 @@
 import org.apache.storm.tuple.Tuple;
 import org.apache.storm.tuple.Values;
 
+import java.util.Map;
+
+/**
+ * A Bolt implementation that verifies meta data emitted by a {@link 
MetaDataSpout}.
--- End diff --

I think that this should be "metadata" but see that the class is camelcased 
`MetaData`.




[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845592
  
--- Diff: 
flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/operators/VerifyAndEnrichBolt.java
 ---
@@ -27,6 +26,12 @@
 import org.apache.storm.tuple.Tuple;
 import org.apache.storm.tuple.Values;
 
+import java.util.Map;
+
+/**
+ * Verifies that incoming numbers are either even or odd, controlled by 
the constructor argument. Emitted tuples are
+ * enriched with a new string field containing either "even" or "odd", 
based on the numbers parity.
--- End diff --

numbers parity -> numbers' parity




[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027877#comment-16027877
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845592
  
--- Diff: 
flink-contrib/flink-storm-examples/src/main/java/org/apache/flink/storm/split/operators/VerifyAndEnrichBolt.java
 ---
@@ -27,6 +26,12 @@
 import org.apache.storm.tuple.Tuple;
 import org.apache.storm.tuple.Values;
 
+import java.util.Map;
+
+/**
+ * Verifies that incoming numbers are either even or odd, controlled by 
the constructor argument. Emitted tuples are
+ * enriched with a new string field containing either "even" or "odd", 
based on the numbers parity.
--- End diff --

numbers parity -> numbers' parity


> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845646
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/tests/operators/FiniteRandomSpout.java
 ---
@@ -30,6 +28,13 @@
 import org.apache.storm.tuple.Values;
 import org.apache.storm.utils.Utils;
 
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * A Spout implementation that broadcast random numbers across a specified 
number of output streams, until a specified
--- End diff --

"broadcast" -> "broadcasts"

"was" -> "is"?




[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-c...

2017-05-28 Thread greghogan
Github user greghogan commented on a diff in the pull request:

https://github.com/apache/flink/pull/4004#discussion_r118845633
  
--- Diff: 
flink-contrib/flink-storm-examples/src/test/java/org/apache/flink/storm/split/SplitBoltTopology.java
 ---
@@ -15,23 +15,28 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.flink.storm.split;
 
-import org.apache.storm.topology.TopologyBuilder;
 import org.apache.flink.storm.split.operators.RandomSpout;
 import org.apache.flink.storm.split.operators.VerifyAndEnrichBolt;
 import org.apache.flink.storm.util.BoltFileSink;
 import org.apache.flink.storm.util.BoltPrintSink;
 import org.apache.flink.storm.util.OutputFormatter;
 import org.apache.flink.storm.util.TupleOutputFormatter;
 
+import org.apache.storm.topology.TopologyBuilder;
+
+/**
+ * A simple topology that splits a number stream based the numbers parity, 
and verifies the result.
--- End diff --

numbers -> numbers'




[jira] [Commented] (FLINK-5354) Split up Table API documentation into multiple pages

2017-05-28 Thread Shaoxuan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027842#comment-16027842
 ] 

Shaoxuan Wang commented on FLINK-5354:
--

[~fhueske], thanks for kicking this off. I would like to work on the page for 
UDFs, as it seems the major missing part is UDAGG.

> Split up Table API documentation into multiple pages 
> -
>
> Key: FLINK-5354
> URL: https://issues.apache.org/jira/browse/FLINK-5354
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table API & SQL
>Reporter: Timo Walther
>
> The Table API documentation page is quite large at the moment. We should 
> split it up into multiple pages:
> Here is my suggestion:
> - Overview (Datatypes, Config, Registering Tables, Examples)
> - TableSources and Sinks
> - Table API
> - SQL
> - Functions





[jira] [Assigned] (FLINK-6751) Table API / SQL Docs: UDFs Page

2017-05-28 Thread Shaoxuan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shaoxuan Wang reassigned FLINK-6751:


Assignee: Shaoxuan Wang

> Table API / SQL Docs: UDFs Page
> ---
>
> Key: FLINK-6751
> URL: https://issues.apache.org/jira/browse/FLINK-6751
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: Fabian Hueske
>Assignee: Shaoxuan Wang
>
> Update and refine {{./docs/dev/table/udfs.md}} in feature branch 
> https://github.com/apache/flink/tree/tableDocs





[jira] [Updated] (FLINK-6752) Activate strict checkstyle for flink-core

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-6752:

Description: 
Long term issue for incrementally introducing the strict checkstyle to 
flink-core.


> Activate strict checkstyle for flink-core
> -
>
> Key: FLINK-6752
> URL: https://issues.apache.org/jira/browse/FLINK-6752
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Long term issue for incrementally introducing the strict checkstyle to 
> flink-core.





[jira] [Created] (FLINK-6752) Activate strict checkstyle for flink-core

2017-05-28 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6752:
---

 Summary: Activate strict checkstyle for flink-core
 Key: FLINK-6752
 URL: https://issues.apache.org/jira/browse/FLINK-6752
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0








[jira] [Assigned] (FLINK-6752) Activate strict checkstyle for flink-core

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6752:
---

Assignee: (was: Chesnay Schepler)

> Activate strict checkstyle for flink-core
> -
>
> Key: FLINK-6752
> URL: https://issues.apache.org/jira/browse/FLINK-6752
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Chesnay Schepler
> Fix For: 1.4.0
>
>






[jira] [Closed] (FLINK-6743) .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't work

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6743.
---
Resolution: Duplicate

Please see FLINK-6214 for a workaround.

> .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't 
> work
> ---
>
> Key: FLINK-6743
> URL: https://issues.apache.org/jira/browse/FLINK-6743
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.2.0
> Environment: Flink 1.2.0.
>Reporter: Lix
>
> The tutorial on the official website says that we can use
> {quote}
> // daily tumbling event-time windows offset by -8 hours.
> input
> .keyBy(<key selector>)
> .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
> .<windowed transformation>(<window function>);
> {quote}
> when our timezone is UTC+8, which is in China. But when I tried to run this 
> code, it just reported an error:
> {quote}
> Exception in thread "main" java.lang.IllegalArgumentException: 
> TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.(TumblingProcessingTimeWindows.java:54)
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
> {quote}
> How then, should I write my code to make a tumbling window which clears every 
> day at 00:00, when my timezone is UTC+8?
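
For reference, the workaround tracked in FLINK-6214 amounts to shifting the 
offset into the required [0, size) range: window alignment depends only on the 
offset modulo the window size, so -8 hours is equivalent to +16 hours for a 
1-day window. A sketch under that assumption:

{code}
// equivalent daily windows starting at 00:00 in UTC+8, with a non-negative offset
input
.keyBy(<key selector>)
.window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(16)))
.<windowed transformation>(<window function>);
{code}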





[jira] [Commented] (FLINK-6075) Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027819#comment-16027819
 ] 

ASF GitHub Bot commented on FLINK-6075:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4003
  
Thanks for the update @rtudoran. 
I'm traveling for another week before I return to the office. I hope I'll 
find time to have a look at #3889 in the next few days. I think we should try to 
get that in first and later rebase this PR once the first part has been merged.

Thank you, Fabian


> Support Limit/Top(Sort) for Stream SQL
> --
>
> Key: FLINK-6075
> URL: https://issues.apache.org/jira/browse/FLINK-6075
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: radu
>  Labels: features
> Attachments: sort.png
>
>
> These will be split into 3 separate JIRA issues. However, the design is the 
> same for all of them; only the processing function differs in terms of the 
> output.
> Time target: Proc Time
> **SQL targeted query examples:**
> *Sort example*
> Q1) `SELECT a FROM stream1 GROUP BY HOP(proctime, INTERVAL '1' HOUR, INTERVAL 
> '3' HOUR) ORDER BY b`
> Comment: window is defined using GROUP BY
> Comment: ASC or DESC keywords can be placed to mark the ordering type
> *Limit example*
> Q2) `SELECT a FROM stream1 WHERE rowtime BETWEEN current_timestamp - INTERVAL 
> '1' HOUR AND current_timestamp ORDER BY b LIMIT 10`
> Comment: window is defined using time ranges in the WHERE clause
> Comment: window is row triggered
> *Top example*
> Q3) `SELECT sum(a) OVER (ORDER BY proctime RANGE INTERVAL '1' HOUR PRECEDING 
> LIMIT 10) FROM stream1`  
> Comment: limit over the contents of the sliding window
> General Comments:
> -All these SQL clauses are supported only over windows (bounded collections 
> of data). 
> -Each of the 3 operators will be supported with each of the ways of 
> expressing the windows. 
> **Description**
> The 3 operations (limit, top and sort) are similar in behavior, as they all 
> require a sorted collection of the data on which the logic will be applied 
> (i.e., select a subset of the items or the entire sorted set). These 
> functions make sense in a streaming context only within a window; without 
> a window the functions could never emit, as the sort operation would never 
> trigger. If an SQL query is provided without window bounds, an error will be 
> thrown (`SELECT a FROM stream1 TOP 10` -> ERROR). 
> Although not targeted by this JIRA, when working on event-time order, the 
> retraction mechanisms of windows and the lateness mechanisms can be used to 
> deal with out-of-order events and retractions/updates of results.
> **Functionality example**
> We exemplify with the query below for all 3 types of operators (sorting, 
> limit and top). Rowtime indicates when the HOP window will trigger – which 
> can be observed in the fact that outputs are generated only at those moments. 
> The HOP windows will trigger every hour (on the hour) and each event will 
> contribute to / be duplicated across 2 consecutive hour intervals. Proctime 
> indicates the processing time when a new event arrives in the system. Events 
> are of the type (a,b) with the ordering being applied on the b field.
> `SELECT a FROM stream1 HOP(proctime, INTERVAL '1' HOUR, INTERVAL '2' HOUR) 
> ORDER BY b (LIMIT 2 / TOP 2 / [ASC/DESC])`
> ||Rowtime||Proctime||Stream1||Limit 2||Top 2||Sort [ASC]||
> | |10:00:00|(aaa, 11)| | | |
> | |10:05:00|(aab, 7)| | | |
> |10-11|11:00:00| |aab,aaa|aab,aaa|aab,aaa|
> | |11:03:00|(aac,21)| | | |
> |11-12|12:00:00| |aab,aaa|aab,aaa|aab,aaa,aac|
> | |12:10:00|(abb,12)| | | |
> | |12:15:00|(abb,12)| | | |
> |12-13|13:00:00| |abb,abb|abb,abb|abb,abb,aac|
> |...|
> **Implementation option**
> Considering that the SQL operators will be associated with window boundaries, 
> the functionality will be implemented within the logic of the window as 
> follows.
> * Window assigner – selected based on the type of window used in SQL 
> (TUMBLING, SLIDING…)
> * Evictor/ Trigger – time or count evictor based on the definition of the 
> window boundaries
> * Apply – window function that sorts data and selects the output to trigger 
> (based on LIMIT/TOP parameters). All data will be sorted at once.
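
A minimal DataStream sketch of the "Apply" step outlined above: a window 
function that sorts the window contents on the ordering field and emits only 
the first `limit` records. The Event type, the key field, and `limit` are 
illustrative assumptions, not part of the proposal:

{code}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

case class Event(key: String, a: String, b: Int)

def topWithinWindow(input: DataStream[Event], limit: Int): DataStream[Event] = {
  input
    .keyBy(_.key)
    // HOP(1 hour slide, 2 hour size): windows of 2 hours, triggered every hour
    .window(SlidingProcessingTimeWindows.of(Time.hours(2), Time.hours(1)))
    .apply { (key: String, window: TimeWindow, events: Iterable[Event], out: Collector[Event]) =>
      // sort on the ordering field b and emit the top `limit` records
      events.toSeq.sortBy(_.b).take(limit).foreach(out.collect)
    }
}
{code}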

[GitHub] flink issue #4003: [FLINK-6075] - Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4003
  
Thanks for the update @rtudoran. 
I'm traveling for another week before I return to the office. I hope I'll 
find time to have a look at #3889 in the next few days. I think we should try to 
get that in first and later rebase this PR once the first part has been merged.

Thank you, Fabian




[jira] [Created] (FLINK-6751) Table API / SQL Docs: UDFs Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6751:


 Summary: Table API / SQL Docs: UDFs Page
 Key: FLINK-6751
 URL: https://issues.apache.org/jira/browse/FLINK-6751
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske


Update and refine {{./docs/dev/table/udfs.md}} in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6750) Table API / SQL Docs: Table Sources & Sinks Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6750:


 Summary: Table API / SQL Docs: Table Sources & Sinks Page
 Key: FLINK-6750
 URL: https://issues.apache.org/jira/browse/FLINK-6750
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske


Update and refine {{./docs/dev/table/sourceSinks.md}} in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6748) Table API / SQL Docs: Table API Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6748:


 Summary: Table API / SQL Docs: Table API Page
 Key: FLINK-6748
 URL: https://issues.apache.org/jira/browse/FLINK-6748
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske


Update and refine {{./docs/dev/table/tableApi.md}} in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6747) Table API / SQL Docs: Streaming Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6747:


 Summary: Table API / SQL Docs: Streaming Page
 Key: FLINK-6747
 URL: https://issues.apache.org/jira/browse/FLINK-6747
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske


Update and refine {{./docs/dev/table/streaming.md}} in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6749) Table API / SQL Docs: SQL Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6749:


 Summary: Table API / SQL Docs: SQL Page
 Key: FLINK-6749
 URL: https://issues.apache.org/jira/browse/FLINK-6749
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske


Update and refine {{./docs/dev/table/sql.md}} in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6745) Table API / SQL Docs: Overview Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6745:


 Summary: Table API / SQL Docs: Overview Page
 Key: FLINK-6745
 URL: https://issues.apache.org/jira/browse/FLINK-6745
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske


Update and refine ./docs/dev/tableApi.md in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Created] (FLINK-6746) Table API / SQL Docs: Common Page

2017-05-28 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6746:


 Summary: Table API / SQL Docs: Common Page
 Key: FLINK-6746
 URL: https://issues.apache.org/jira/browse/FLINK-6746
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table API & SQL
Affects Versions: 1.3.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske


Update and refine ./docs/dev/table/common.md in feature branch 
https://github.com/apache/flink/tree/tableDocs





[jira] [Commented] (FLINK-5354) Split up Table API documentation into multiple pages

2017-05-28 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027815#comment-16027815
 ] 

Fabian Hueske commented on FLINK-5354:
--

I created a proposal for a new structure based on Timo's initial suggestion:

https://docs.google.com/document/d/1ENY8tcPadZjoZ4AQ_lRRwWiVpScDkm_4rgxIGWGT5E0

and pushed a working branch to the Apache Flink repository: 

https://github.com/apache/flink/tree/tableDocs

> Split up Table API documentation into multiple pages 
> -
>
> Key: FLINK-5354
> URL: https://issues.apache.org/jira/browse/FLINK-5354
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table API & SQL
>Reporter: Timo Walther
>
> The Table API documentation page is quite large at the moment. We should 
> split it up into multiple pages:
> Here is my suggestion:
> - Overview (Datatypes, Config, Registering Tables, Examples)
> - TableSources and Sinks
> - Table API
> - SQL
> - Functions





[jira] [Commented] (FLINK-6695) Activate strict checkstyle in flink-contrib

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027813#comment-16027813
 ] 

ASF GitHub Bot commented on FLINK-6695:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4004

[FLINK-6695] Activate strict checkstyle in flink-contrib

This PR activates the strict checkstyle for most modules in flink-contrib; 
`flink-tweet-inputformat` is not included due to a proposal to remove it in 
FLINK-6710.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6695csctrb

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4004


commit f550ec204f5bab0a06b0ae8a078ee8a26b05a9e2
Author: zentol 
Date:   2017-05-23T19:40:35Z

[FLINK-6695] Activate strict checkstyle for flink-connector-wikiedits

commit 41460d1da0632aab10af181678c2a4542b182895
Author: zentol 
Date:   2017-05-23T20:05:19Z

[FLINK-6695] Activate strict checkstyle for flink-statebackend-rocksDB

commit 5789ac46c84e1329d6cf9c6ed7ff77b410242283
Author: zentol 
Date:   2017-05-23T20:41:09Z

[FLINK-6695] Activate strict checkstyle for flink-storm

commit 326b997351217190c449d460ea447276434f57ba
Author: zentol 
Date:   2017-05-23T21:47:00Z

[FLINK-6695] Activate strict checkstyle for flink-storm-examples

commit 62fc105e89334a27b5af5369948faa87f913120e
Author: zentol 
Date:   2017-05-24T08:51:23Z

[FLINK-6695] Activate strict checkstyle for flink-streaming-contrib




> Activate strict checkstyle in flink-contrib
> ---
>
> Key: FLINK-6695
> URL: https://issues.apache.org/jira/browse/FLINK-6695
> Project: Flink
>  Issue Type: Sub-task
>  Components: flink-contrib
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>






[GitHub] flink pull request #4004: [FLINK-6695] Activate strict checkstyle in flink-co...

2017-05-28 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4004

[FLINK-6695] Activate strict checkstyle in flink-contrib

This PR activates the strict checkstyle for most modules in flink-contrib; 
`flink-tweet-inputformat` is not included due to a proposal to remove it in 
FLINK-6710.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6695csctrb

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4004.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4004


commit f550ec204f5bab0a06b0ae8a078ee8a26b05a9e2
Author: zentol 
Date:   2017-05-23T19:40:35Z

[FLINK-6695] Activate strict checkstyle for flink-connector-wikiedits

commit 41460d1da0632aab10af181678c2a4542b182895
Author: zentol 
Date:   2017-05-23T20:05:19Z

[FLINK-6695] Activate strict checkstyle for flink-statebackend-rocksDB

commit 5789ac46c84e1329d6cf9c6ed7ff77b410242283
Author: zentol 
Date:   2017-05-23T20:41:09Z

[FLINK-6695] Activate strict checkstyle for flink-storm

commit 326b997351217190c449d460ea447276434f57ba
Author: zentol 
Date:   2017-05-23T21:47:00Z

[FLINK-6695] Activate strict checkstyle for flink-storm-examples

commit 62fc105e89334a27b5af5369948faa87f913120e
Author: zentol 
Date:   2017-05-24T08:51:23Z

[FLINK-6695] Activate strict checkstyle for flink-streaming-contrib




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6737) Fix over expression parse String error.

2017-05-28 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027811#comment-16027811
 ] 

Fabian Hueske commented on FLINK-6737:
--

I'm not sure if this should be supported. Either the whole {{select}} 
expression should be a String (as when using the Table API in Java) or we use 
Scala expressions.
If we do this, we could also parse any other String literal (e.g., in a 
condition like {{t.where('name === "Smith")}}, where {{t}} might have an 
attribute called {{Smith}}) if it corresponds to an Expression.

Is there a case that is not supported by a Scala expression but only by a 
String expression?
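
For reference, here is a minimal sketch of the two styles being discussed, 
written against the 1.3 Scala Table API (the input data and the 'mycount 
alias are illustrative; the String variant is the one that currently throws 
the TableException from the issue):

{code}
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object OverWindowStyles {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val table = env
      .fromElements(("a", 1L), ("a", 2L), ("b", 3L))
      .toTable(tEnv, 'c, 'b, 'proctime.proctime)

    // Scala-expression style: 'b.count is already an aggregation
    // expression, so `over 'w` is accepted.
    val scalaStyle = table
      .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
      .select('c, 'b.count over 'w as 'mycount)

    // String style from the bug report: the literal "count(b)" is not
    // parsed into an aggregation before `over` is applied, so this
    // line throws the TableException quoted in the issue.
    val stringStyle = table
      .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 'w)
      .select('c, "count(b)" over 'w as 'mycount)
  }
}
{code}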

> Fix over expression parse String error.
> ---
>
> Key: FLINK-6737
> URL: https://issues.apache.org/jira/browse/FLINK-6737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we run the TableAPI as follows:
> {code}
> val windowedTable = table
>   .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW 
> as 'w)
>   .select('c, "countFun(b)" over 'w as 'mycount, weightAvgFun('a, 'b) 
> over 'w as 'wAvg)
> {code}
> We get the error:
> {code}
> org.apache.flink.table.api.TableException: The over method can only using 
> with aggregation expression.
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionOperations$class.over(expressionDsl.scala:469)
>   at 
> org.apache.flink.table.api.scala.ImplicitExpressionConversions$LiteralStringExpression.over(expressionDsl.scala:756)
> {code}
> The reason is that the `over` method of `expressionDsl` does not parse the 
> String case.
> I think we should fix this before the 1.3 release.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6736) Fix UDTF codegen bug when window follow by join( UDTF)

2017-05-28 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027810#comment-16027810
 ] 

Fabian Hueske commented on FLINK-6736:
--

Yes, this looks like a bug that needs to be fixed, IMO.
If your analysis is correct, this would mean that we cannot use a window after 
a TableFunction has been applied.

Although this is a serious limitation of the Table API, I'm not sure if this 
qualifies as a release blocker.
It would be good to fix it ASAP nonetheless, so we can include it if there is 
another RC or in the first bugfix release of 1.3.x.

> Fix UDTF codegen bug when window follow by join( UDTF)
> --
>
> Key: FLINK-6736
> URL: https://issues.apache.org/jira/browse/FLINK-6736
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we run the Table API as follows:
> {code}
> val table = stream.toTable(tEnv, 'long.rowtime, 'int, 'double, 'float, 
> 'bigdec, 'date,'pojo, 'string)
> val windowedTable = table
>   .join(udtf2('string) as ('a, 'b))
>   .window(Slide over 5.milli every 2.milli on 'long as 'w)
>   .groupBy('w)
>   .select('int.count, agg1('pojo, 'bigdec, 'date, 'int), 'w.start, 'w.end)
> {code}
> We will get the error message:
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:933)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
>   at 
> org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
>   at 
> org.apache.flink.table.runtime.CRowCorrelateProcessRunner.compile(CRowCorrelateProcessRunner.scala:35)
>   at 
> org.apache.flink.table.runtime.CRowCorrelateProcessRunner.open(CRowCorrelateProcessRunner.scala:59)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
>   at 
> org.apache.flink.streaming.api.operators.ProcessOperator.open(ProcessOperator.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:377)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.codehaus.commons.compiler.CompileException: Line 77, Column 
> 62: Unknown variable or type "in2"
>   at 
> org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523)
>   at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6292)
>   at org.codehaus.janino.UnitCompiler.access$12900(UnitCompiler.java:209)
>   at 
> org.codehaus.janino.UnitCompiler$18.visitPackage(UnitCompiler.java:5904)
>   at 
> org.codehaus.janino.UnitCompiler$18.visitPackage(UnitCompiler.java:5901)
>   at org.codehaus.janino.Java$Package.accept(Java.java:4074)
>   at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:5901)
>   at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6287)
>   at org.codehaus.janino.UnitCompiler.access$13500(UnitCompiler.java:209)
> {code}
> The reason is that in {{val generator = new CodeGenerator(config, false, 
> inputSchema.physicalTypeInfo)}}, the `physicalTypeInfo` will remove the 
> TimeIndicator.
> I think we should fix this. What do you think, [~fhueske] [~twalthr]? I hope 
> for your suggestions. :)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

[jira] [Commented] (FLINK-6725) make requiresOver as a contracted method in udagg

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027808#comment-16027808
 ] 

ASF GitHub Bot commented on FLINK-6725:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3993
  
Hi @shaoxuan-wang and @sunjincheng121,

Ad 2) I think it is OK to have this as a method with a default 
implementation. If we choose the contract function design, we have the same 
default behavior as the default implementation if the method is not 
implemented, so IMO there is not really a difference. Due to the default 
implementation, a user does not need to implement the method unless it is 
necessary. However, in that case it can be overridden in a safe way with IDE 
support. A contract function does not offer this safety, because a user needs 
to look up the exact signature in the documentation.
Ad 3) I think it would indeed make sense to add `getAccumulatorType()` and 
`getResultType()` to `AggregateFunction` and return `null` by default. This 
would also be more consistent with `ScalarFunction` and `TableFunction`. This 
would also help with type and compile safety, because we can enforce the type 
of the returned `TypeInformation` as `T` and `ACC` without going through 
reflection magic.
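
To illustrate the default-implementation design, here is a sketch only 
(`SketchAggregateFunction` and `LastValue` are made-up names, not Flink's 
actual classes):

{code}
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.scala._

// Hypothetical base class mirroring the discussion: contract-style
// behavior expressed as overridable methods with safe defaults.
abstract class SketchAggregateFunction[T, ACC] extends Serializable {
  def createAccumulator(): ACC
  def getValue(accumulator: ACC): T

  // Defaults: most functions never need to touch these.
  def requiresOver: Boolean = false
  def getResultType: TypeInformation[T] = null
  def getAccumulatorType: TypeInformation[ACC] = null
}

// A function that opts in via IDE-checked overrides instead of a
// contract signature that must be looked up in the documentation.
class LastValue extends SketchAggregateFunction[Long, Array[Long]] {
  override def createAccumulator(): Array[Long] = Array(0L)
  override def getValue(acc: Array[Long]): Long = acc(0)
  def accumulate(acc: Array[Long], value: Long): Unit = acc(0) = value

  override def requiresOver: Boolean = true
  override def getResultType: TypeInformation[Long] = createTypeInformation[Long]
}
{code}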


> make requiresOver as a contracted method in udagg
> -
>
> Key: FLINK-6725
> URL: https://issues.apache.org/jira/browse/FLINK-6725
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Shaoxuan Wang
>
> I realized requiresOver is defined in the udagg interface when I wrote up the 
> udagg doc. I would like to put requiresOver as a contract method. This makes 
> the entire udagg interface consistently and clean.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3993: [FLINK-6725][table] make requiresOver as a contracted met...

2017-05-28 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3993
  
Hi @shaoxuan-wang and @sunjincheng121,

Ad 2) I think it is OK to have this as a method with a default 
implementation. If we choose the contract function design, we have the same 
default behavior as the default implementation if the method is not 
implemented, so IMO there is not really a difference. Due to the default 
implementation, a user does not need to implement the method unless it is 
necessary. However, in that case it can be overridden in a safe way with IDE 
support. A contract function does not offer this safety, because a user needs 
to look up the exact signature in the documentation.
Ad 3) I think it would indeed make sense to add `getAccumulatorType()` and 
`getResultType()` to `AggregateFunction` and return `null` by default. This 
would also be more consistent with `ScalarFunction` and `TableFunction`. This 
would also help with type and compile safety, because we can enforce the type 
of the returned `TypeInformation` as `T` and `ACC` without going through 
reflection magic.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6075) Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027801#comment-16027801
 ] 

ASF GitHub Bot commented on FLINK-6075:
---

Github user rtudoran commented on the issue:

https://github.com/apache/flink/pull/4003
  
@shijinkui @hongyuhong @stefanobortoli 
I added you to the issue so you can track the progress.


> Support Limit/Top(Sort) for Stream SQL
> --
>
> Key: FLINK-6075
> URL: https://issues.apache.org/jira/browse/FLINK-6075
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: radu
>  Labels: features
> Attachments: sort.png
>
>
> These will be split into 3 separate JIRA issues. The design is the same for 
> all of them; only the processing function differs in terms of the output.
> Time target: Proc Time
> **SQL targeted query examples:**
> *Sort example*
> Q1) `SELECT a FROM stream1 GROUP BY HOP(proctime, INTERVAL '1' HOUR, INTERVAL '3' HOUR) ORDER BY b`
> Comment: window is defined using GROUP BY
> Comment: ASC or DESC keywords can be placed to mark the ordering type
> *Limit example*
> Q2) `SELECT a FROM stream1 WHERE rowtime BETWEEN current_timestamp - INTERVAL 
> '1' HOUR AND current_timestamp ORDER BY b LIMIT 10`
> Comment: window is defined using time ranges in the WHERE clause
> Comment: window is row triggered
> *Top example*
> Q3) `SELECT sum(a) OVER (ORDER BY proctime RANGE INTERVAL '1' HOUR PRECEDING 
> LIMIT 10) FROM stream1`  
> Comment: limit over the contents of the sliding window
> General Comments:
> -All these SQL clauses are supported only over windows (bounded collections 
> of data). 
> -Each of the 3 operators will be supported with each of the types of 
> expressing the windows. 
> **Description**
> The 3 operations (limit, top and sort) are similar in behavior as they all 
> require a sorted collection of the data on which the logic will be applied 
> (i.e., select a subset of the items or the entire sorted set). These 
> functions make sense in a streaming context only within a window. Without 
> defining a window, the functions could never emit, as the sort operation 
> would never trigger. If an SQL query is provided without window limits, an 
> error will be thrown (`SELECT a FROM stream1 TOP 10` -> ERROR). 
> Although not targeted by this JIRA, in the case of working based on event 
> time order, the retraction mechanisms of windows and the lateness mechanisms 
> can be used to deal with out of order events and retraction/updates of 
> results.
> **Functionality example**
> We exemplify with the query below for all the 3 types of operators (sorting, 
> limit and top). Rowtime indicates when the HOP window will trigger – which 
> can be observed in the fact that outputs are generated only at those moments. 
> The HOP windows will trigger at every hour (fixed hour) and each event will 
> contribute/ be duplicated for 2 consecutive hour intervals. Proctime 
> indicates the processing time when a new event arrives in the system. Events 
> are of the type (a,b) with the ordering being applied on the b field.
> `SELECT a FROM stream1 HOP(proctime, INTERVAL '1' HOUR, INTERVAL '2' HOUR) ORDER BY b (LIMIT 2 / TOP 2 / [ASC/DESC])`
> ||Rowtime||Proctime||Stream1||Limit 2||Top 2||Sort [ASC]||
> | |10:00:00|(aaa, 11)| | | |
> | |10:05:00|(aab, 7)| | | |
> |10-11|11:00:00| |aab,aaa|aab,aaa|aab,aaa|
> | |11:03:00|(aac,21)| | | |
> |11-12|12:00:00| |aab,aaa|aab,aaa|aab,aaa,aac|
> | |12:10:00|(abb,12)| | | |
> | |12:15:00|(abb,12)| | | |
> |12-13|13:00:00| |abb,abb|abb,abb|abb,abb,aac|
> |...|
> **Implementation option**
> Considering that the SQL operators will be associated with window boundaries, 
> the functionality will be implemented within the logic of the window as 
> follows.
> * Window assigner – selected based on the type of window used in SQL 
> (TUMBLING, SLIDING…)
> * Evictor/ Trigger – time or count evictor based on the definition of the 
> window boundaries
> * Apply – window function that sorts data and selects the output to trigger 
> (based on LIMIT/TOP parameters). All data will be sorted at once and result 
> outputted when the window is triggered
> An alternative implementation can be to use a fold window function to sort 
> the elements as they arrive, one at a time, followed by a flatMap to filter
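
As a rough sketch of the window-function option described above (illustrative 
only, not this PR's implementation; it assumes a non-keyed (String, Int) 
stream ordered on the second field, with LIMIT 2):

{code}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.function.AllWindowFunction
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

object WindowedLimitSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // A real job would use a continuous source; fromElements only
    // illustrates the shape of the data.
    val stream: DataStream[(String, Int)] =
      env.fromElements(("aaa", 11), ("aab", 7), ("aac", 21))

    stream
      .windowAll(TumblingProcessingTimeWindows.of(Time.hours(1)))
      .apply(new AllWindowFunction[(String, Int), (String, Int), TimeWindow] {
        override def apply(window: TimeWindow,
                           input: Iterable[(String, Int)],
                           out: Collector[(String, Int)]): Unit = {
          // Sort all buffered elements once the window triggers and
          // emit only the first two (LIMIT 2).
          input.toSeq.sortBy(_._2).take(2).foreach(out.collect)
        }
      })
      .print()

    env.execute("windowed limit sketch")
  }
}
{code}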

[GitHub] flink issue #4003: [FLINK-6075] - Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread rtudoran
Github user rtudoran commented on the issue:

https://github.com/apache/flink/pull/4003
  
@shijinkui @hongyuhong @stefanobortoli 
I added you to the issue so you can track the progress.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4003: [FLINK-6075] - Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread rtudoran
Github user rtudoran commented on the issue:

https://github.com/apache/flink/pull/4003
  
@fhueske 
In the meantime I have also implemented support for offset and fetch for both 
rowtime and proctime. This PR includes all the modifications and review 
comments you made for #3889.
You can either go ahead with that one first, check whether the basis is OK, 
and merge it, and then we move to this one, or we can consider this PR 
directly. This PR will close the JIRA.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6075) Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027800#comment-16027800
 ] 

ASF GitHub Bot commented on FLINK-6075:
---

Github user rtudoran commented on the issue:

https://github.com/apache/flink/pull/4003
  
@fhueske 
In the meantime I have also implemented support for offset and fetch for both 
rowtime and proctime. This PR includes all the modifications and review 
comments you made for #3889.
You can either go ahead with that one first, check whether the basis is OK, 
and merge it, and then we move to this one, or we can consider this PR 
directly. This PR will close the JIRA.


> Support Limit/Top(Sort) for Stream SQL
> --
>
> Key: FLINK-6075
> URL: https://issues.apache.org/jira/browse/FLINK-6075
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: radu
>  Labels: features
> Attachments: sort.png
>
>
> These will be split into 3 separate JIRA issues. The design is the same for 
> all of them; only the processing function differs in terms of the output.
> Time target: Proc Time
> **SQL targeted query examples:**
> *Sort example*
> Q1) `SELECT a FROM stream1 GROUP BY HOP(proctime, INTERVAL '1' HOUR, INTERVAL '3' HOUR) ORDER BY b`
> Comment: window is defined using GROUP BY
> Comment: ASC or DESC keywords can be placed to mark the ordering type
> *Limit example*
> Q2) `SELECT a FROM stream1 WHERE rowtime BETWEEN current_timestamp - INTERVAL 
> '1' HOUR AND current_timestamp ORDER BY b LIMIT 10`
> Comment: window is defined using time ranges in the WHERE clause
> Comment: window is row triggered
> *Top example*
> Q3) `SELECT sum(a) OVER (ORDER BY proctime RANGE INTERVAL '1' HOUR PRECEDING 
> LIMIT 10) FROM stream1`  
> Comment: limit over the contents of the sliding window
> General Comments:
> -All these SQL clauses are supported only over windows (bounded collections 
> of data). 
> -Each of the 3 operators will be supported with each of the types of 
> expressing the windows. 
> **Description**
> The 3 operations (limit, top and sort) are similar in behavior as they all 
> require a sorted collection of the data on which the logic will be applied 
> (i.e., select a subset of the items or the entire sorted set). These 
> functions make sense in a streaming context only within a window. Without 
> defining a window, the functions could never emit, as the sort operation 
> would never trigger. If an SQL query is provided without window limits, an 
> error will be thrown (`SELECT a FROM stream1 TOP 10` -> ERROR). 
> Although not targeted by this JIRA, in the case of working based on event 
> time order, the retraction mechanisms of windows and the lateness mechanisms 
> can be used to deal with out of order events and retraction/updates of 
> results.
> **Functionality example**
> We exemplify with the query below for all the 3 types of operators (sorting, 
> limit and top). Rowtime indicates when the HOP window will trigger – which 
> can be observed in the fact that outputs are generated only at those moments. 
> The HOP windows will trigger at every hour (fixed hour) and each event will 
> contribute/ be duplicated for 2 consecutive hour intervals. Proctime 
> indicates the processing time when a new event arrives in the system. Events 
> are of the type (a,b) with the ordering being applied on the b field.
> `SELECT a FROM stream1 HOP(proctime, INTERVAL '1' HOUR, INTERVAL '2' HOUR) ORDER BY b (LIMIT 2 / TOP 2 / [ASC/DESC])`
> ||Rowtime||Proctime||Stream1||Limit 2||Top 2||Sort [ASC]||
> | |10:00:00|(aaa, 11)| | | |
> | |10:05:00|(aab, 7)| | | |
> |10-11|11:00:00| |aab,aaa|aab,aaa|aab,aaa|
> | |11:03:00|(aac,21)| | | |
> |11-12|12:00:00| |aab,aaa|aab,aaa|aab,aaa,aac|
> | |12:10:00|(abb,12)| | | |
> | |12:15:00|(abb,12)| | | |
> |12-13|13:00:00| |abb,abb|abb,abb|abb,abb,aac|
> |...|
> **Implementation option**
> Considering that the SQL operators will be associated with window boundaries, 
> the functionality will be implemented within the logic of the window as 
> follows.
> * Window assigner – selected based on the type of window used in SQL 
> (TUMBLING, SLIDING…)
> * Evictor/ Trigger – time or count evictor based on the definition of the 
> window boundaries
> * Apply – window function that sorts data and selects the output to trigger 
> (based on LIMIT/TOP parameters). All data will be sorted at once and the 
> result outputted when the window is triggered

[jira] [Updated] (FLINK-6743) .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't work

2017-05-28 Thread Lix (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lix updated FLINK-6743:
---
Description: 
The tutorial on the official website says that we can use

{quote}
// daily tumbling event-time windows offset by -8 hours.
input
    .keyBy(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
    .<windowed transformation>(<window function>);
{quote}

when our timezone is UTC+8, as it is in China. But when I tried to run this 
code, it just reported an error:

{quote}
Exception in thread "main" java.lang.IllegalArgumentException: 
TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.<init>(TumblingProcessingTimeWindows.java:54)
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
{quote}

How, then, should I write my code to make a tumbling window that clears every 
day at 00:00 when my timezone is UTC+8?


  was:
The tutorial on the official website says that we can use

```
// daily tumbling event-time windows offset by -8 hours.
input
.keyBy()
.window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
.();
```

when our timezone is UTC+8, which is in China. But when I tried to run this 
code, it just reported an error:

```
Exception in thread "main" java.lang.IllegalArgumentException: 
TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.(TumblingProcessingTimeWindows.java:54)
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
```

How then, should I write my code to make a tumbling window which clears every 
day at 00:00, when my timezone is UTC+8?



> .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't 
> work
> ---
>
> Key: FLINK-6743
> URL: https://issues.apache.org/jira/browse/FLINK-6743
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.2.0
> Environment: Flink 1.2.0.
>Reporter: Lix
>
> The tutorial on the official website says that we can use
> {quote}
> // daily tumbling event-time windows offset by -8 hours.
> input
>     .keyBy(<key selector>)
>     .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
>     .<windowed transformation>(<window function>);
> {quote}
> when our timezone is UTC+8, as it is in China. But when I tried to run this 
> code, it just reported an error:
> {quote}
> Exception in thread "main" java.lang.IllegalArgumentException: 
> TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.<init>(TumblingProcessingTimeWindows.java:54)
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
> {quote}
> How, then, should I write my code to make a tumbling window that clears every 
> day at 00:00 when my timezone is UTC+8?
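
A likely workaround (an assumption, not something stated in this issue 
thread): window starts are computed modulo the window size, so for a one-day 
window an offset of -8 hours is equivalent to +16 hours, and a positive 
offset satisfies the 0 <= offset < size check:

{code}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

object DailyWindowUtcPlus8 {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // In a real job, assign timestamps and watermarks before windowing.
    val input: DataStream[(String, Long)] = env.fromElements(("k", 1L))

    input
      .keyBy(0)
      // Time.hours(16) is congruent to Time.hours(-8) modulo the one-day
      // window size, so each window rolls over at 00:00 in UTC+8.
      .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(16)))
      .sum(1)
      .print()

    env.execute("daily window rolling over at UTC+8 midnight")
  }
}
{code}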



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6075) Support Limit/Top(Sort) for Stream SQL

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027799#comment-16027799
 ] 

ASF GitHub Bot commented on FLINK-6075:
---

GitHub user rtudoran opened a pull request:

https://github.com/apache/flink/pull/4003

[FLINK-6075] - Support Limit/Top(Sort) for Stream SQL

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huawei-flink/flink FLINK-6075-OFRe3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4003


commit 52acb914b8fbdb2f36d6b2f732cc161b97fa6f2c
Author: didi 
Date:   2017-05-28T12:13:41Z

Add full support for Order By with Fetch and Offset




> Support Limit/Top(Sort) for Stream SQL
> --
>
> Key: FLINK-6075
> URL: https://issues.apache.org/jira/browse/FLINK-6075
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: radu
>  Labels: features
> Attachments: sort.png
>
>
> These will be split into 3 separate JIRA issues. The design is the same for 
> all of them; only the processing function differs in terms of the output.
> Time target: Proc Time
> **SQL targeted query examples:**
> *Sort example*
> Q1) `SELECT a FROM stream1 GROUP BY HOP(proctime, INTERVAL '1' HOUR, INTERVAL '3' HOUR) ORDER BY b`
> Comment: window is defined using GROUP BY
> Comment: ASC or DESC keywords can be placed to mark the ordering type
> *Limit example*
> Q2) `SELECT a FROM stream1 WHERE rowtime BETWEEN current_timestamp - INTERVAL 
> '1' HOUR AND current_timestamp ORDER BY b LIMIT 10`
> Comment: window is defined using time ranges in the WHERE clause
> Comment: window is row triggered
> *Top example*
> Q3) `SELECT sum(a) OVER (ORDER BY proctime RANGE INTERVAL '1' HOUR PRECEDING 
> LIMIT 10) FROM stream1`  
> Comment: limit over the contents of the sliding window
> General Comments:
> -All these SQL clauses are supported only over windows (bounded collections 
> of data). 
> -Each of the 3 operators will be supported with each of the types of 
> expressing the windows. 
> **Description**
> The 3 operations (limit, top and sort) are similar in behavior as they all 
> require a sorted collection of the data on which the logic will be applied 
> (i.e., select a subset of the items or the entire sorted set). These 
> functions make sense in a streaming context only within a window. Without 
> defining a window, the functions could never emit, as the sort operation 
> would never trigger. If an SQL query is provided without window limits, an 
> error will be thrown (`SELECT a FROM stream1 TOP 10` -> ERROR). 
> Although not targeted by this JIRA, in the case of working based on event 
> time order, the retraction mechanisms of windows and the lateness mechanisms 
> can be used to deal with out of order events and retraction/updates of 
> results.
> **Functionality example**
> We exemplify with the query below for all the 3 types of operators (sorting, 
> limit and top). Rowtime indicates when the HOP window will trigger – which 
> can be observed in the fact that outputs are generated only at those moments. 
> The HOP windows will trigger at every hour (fixed hour) and each event will 
> contribute/ be duplicated for 2 consecutive hour intervals. Proctime 
> indicates the processing time when a new event arrives in the system. Events 
> are of the type (a,b) with the ordering being applied on the b field.

[GitHub] flink pull request #4003: [FLINK-6075] - Support Limit/Top(Sort) for Stream ...

2017-05-28 Thread rtudoran
GitHub user rtudoran opened a pull request:

https://github.com/apache/flink/pull/4003

[FLINK-6075] - Support Limit/Top(Sort) for Stream SQL

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huawei-flink/flink FLINK-6075-OFRe3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4003.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4003


commit 52acb914b8fbdb2f36d6b2f732cc161b97fa6f2c
Author: didi 
Date:   2017-05-28T12:13:41Z

Add full support for Order By with Fetch and Offset




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6744) Flaky ExecutionGraphSchedulingTest

2017-05-28 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6744:
---

 Summary: Flaky ExecutionGraphSchedulingTest
 Key: FLINK-6744
 URL: https://issues.apache.org/jira/browse/FLINK-6744
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.4.0
Reporter: Chesnay Schepler


{code}

Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.013 sec <<< 
FAILURE! - in 
org.apache.flink.runtime.executiongraph.ExecutionGraphSchedulingTest

testDeployPipelinedConnectedComponentsTogether(org.apache.flink.runtime.executiongraph.ExecutionGraphSchedulingTest)
  Time elapsed: 0.121 sec  <<< FAILURE!

org.mockito.exceptions.verification.WantedButNotInvoked: 

Wanted but not invoked:

taskManagerGateway.submitTask(<any>, <any>);

-> at 
org.apache.flink.runtime.executiongraph.ExecutionGraphSchedulingTest.testDeployPipelinedConnectedComponentsTogether(ExecutionGraphSchedulingTest.java:246)

Actually, there were zero interactions with this mock.

at 
org.apache.flink.runtime.executiongraph.ExecutionGraphSchedulingTest.testDeployPipelinedConnectedComponentsTogether(ExecutionGraphSchedulingTest.java:246)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6743) .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't work

2017-05-28 Thread Lix (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lix updated FLINK-6743:
---
Affects Version/s: 1.2.0
  Environment: Flink 1.2.0.  (was: Flink 1.2.0 and above.)

> .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't 
> work
> ---
>
> Key: FLINK-6743
> URL: https://issues.apache.org/jira/browse/FLINK-6743
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.2.0
> Environment: Flink 1.2.0.
>Reporter: Lix
>
> The tutorial on the official website says that we can use
> ```
> // daily tumbling event-time windows offset by -8 hours.
> input
>     .keyBy(<key selector>)
>     .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
>     .<windowed transformation>(<window function>);
> ```
> when our timezone is UTC+8, as it is in China. But when I tried to run this 
> code, it just reported an error:
> ```
> Exception in thread "main" java.lang.IllegalArgumentException: 
> TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.<init>(TumblingProcessingTimeWindows.java:54)
>   at 
> org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
> ```
> How, then, should I write my code to make a tumbling window that clears every 
> day at 00:00 when my timezone is UTC+8?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6743) .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8))) doesn't work

2017-05-28 Thread Lix (JIRA)
Lix created FLINK-6743:
--

 Summary: .window(TumblingEventTimeWindows.of(Time.days(1), 
Time.hours(-8))) doesn't work
 Key: FLINK-6743
 URL: https://issues.apache.org/jira/browse/FLINK-6743
 Project: Flink
  Issue Type: Bug
  Components: DataStream API
 Environment: Flink 1.2.0 and above.
Reporter: Lix


The tutorial on the official website says that we can use

```
// daily tumbling event-time windows offset by -8 hours.
input
    .keyBy(<key selector>)
    .window(TumblingEventTimeWindows.of(Time.days(1), Time.hours(-8)))
    .<windowed transformation>(<window function>);
```

when our timezone is UTC+8, as it is in China. But when I tried to run this 
code, it just reported an error:

```
Exception in thread "main" java.lang.IllegalArgumentException: 
TumblingProcessingTimeWindows parameters must satisfy  0 <= offset < size
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.<init>(TumblingProcessingTimeWindows.java:54)
at 
org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows.of(TumblingProcessingTimeWindows.java:111)
```

How, then, should I write my code to make a tumbling window that clears every 
day at 00:00 when my timezone is UTC+8?




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6137) Activate strict checkstyle for flink-cep

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6137.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: c9e574bf3206c16bd7e1be8a5672073d8846d7d7

> Activate strict checkstyle for flink-cep
> 
>
> Key: FLINK-6137
> URL: https://issues.apache.org/jira/browse/FLINK-6137
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Minor
> Fix For: 1.4.0
>
>
> Add a custom checkstyle.xml for `flink-cep` library as in [FLINK-6107]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3976: [FLINK-6137][cep] Activate strict checkstyle for f...

2017-05-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3976


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6137) Activate strict checkstyle for flink-cep

2017-05-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027749#comment-16027749
 ] 

ASF GitHub Bot commented on FLINK-6137:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3976


> Activate strict checkstyle for flink-cep
> 
>
> Key: FLINK-6137
> URL: https://issues.apache.org/jira/browse/FLINK-6137
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Minor
> Fix For: 1.4.0
>
>
> Add a custom checkstyle.xml for `flink-cep` library as in [FLINK-6107]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4002: [6730] Apply non-invasive checkstyle rules to flin...

2017-05-28 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4002

[6730] Apply non-invasive checkstyle rules to flink-optimizer

This PR applies 3 non-invasive checkstyle rules to flink-optimizer. It 
reorders imports, adds an empty line before the package declaration, and 
removes multiple subsequent empty lines.

Note that the actual checkstyle enforcement in flink-optimizer wasn't 
changed. The above checks should be added to the default checkstyle once we've 
applied PRs like this for the remaining modules.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6730csopt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4002


commit 4e3760e55580dd5e0ae195496a36e0d10c3104e1
Author: zentol 
Date:   2017-05-28T07:11:10Z

[FLINK-6739] Apply checkstyle import order to flink-optimizer

commit 9d39229e223f5f76554659d6572ad7677e997c3b
Author: zentol 
Date:   2017-05-28T07:13:41Z

[FLINK-6730] Apply empty line before package declaration in flink-optimizer

commit 3254805a390d0a7c7b43fae067b5fb2ffd3ed809
Author: zentol 
Date:   2017-05-28T07:32:26Z

[FLINK-6730] Remove multiple empty lines in flink-optimizer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-28 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-6740.
--
Resolution: Duplicate

> Fix "parameterTypeEquals" method error.
> ---
>
> Key: FLINK-6740
> URL: https://issues.apache.org/jira/browse/FLINK-6740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we define a UDTF as follows:
> {code}
> class TableFuncPojo extends TableFunction[TPojo] {
>   def eval(age: Int, name:String): Unit = {
> collect(new TPojo(age.toLong,name))
>   }
>   def eval(age: Date, name:String): Unit = {
>  collect(new 
> TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
>   }
> }
> {code}
> TableAPI:
> {code}
>  val table = stream.toTable(tEnv,
>   'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
> 'long.rowtime)
> val windowedTable = table
>   .join(udtf('date, 'string) as 'pojo2).select('pojo2)
> {code}
> We will get the following error:
> {code}
> org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
> which match the signature.
>   at 
> org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
>   at 
> org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
>   at org.apache.flink.table.api.Table.join(table.scala:539)
>   at org.apache.flink.table.api.Table.join(table.scala:328)
>   at 
> org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
> {code}
> The reason is the logic in the `parameterTypeEquals` method:
> {code}
> candidate == classOf[Date] && (expected == classOf[Int] || expected == 
> classOf[JInt]) 
> {code}
> I think we can modify the logic of the `parameterTypeEquals` method.
> What do you think? Any feedback is welcome...
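
One possible direction, sketched under the assumption that exact matches 
should win (`resolveEval` is a hypothetical helper, not Flink's actual code): 
resolve exact parameter matches before falling back to the relaxed Date/Int 
rule, so the two eval variants stay distinguishable:

{code}
import java.lang.reflect.Method
import java.sql.Date

object EvalResolutionSketch {
  // The relaxed rule from the issue, reproduced for context.
  def parameterTypeEquals(candidate: Class[_], expected: Class[_]): Boolean =
    candidate == expected ||
      (candidate == classOf[Date] &&
        (expected == classOf[Int] || expected == classOf[java.lang.Integer]))

  // Prefer a method whose parameter classes match the argument classes
  // exactly; use the relaxed rule only if it leaves a single candidate.
  def resolveEval(methods: Seq[Method], args: Seq[Class[_]]): Option[Method] = {
    val exact = methods.filter(_.getParameterTypes.toSeq == args)
    lazy val relaxed = methods.filter { m =>
      m.getParameterTypes.length == args.length &&
        m.getParameterTypes.zip(args).forall {
          case (expected, candidate) => parameterTypeEquals(candidate, expected)
        }
    }
    exact.headOption.orElse(if (relaxed.size == 1) relaxed.headOption else None)
  }
}
{code}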



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-28 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027732#comment-16027732
 ] 

sunjincheng commented on FLINK-6740:


I think this JIRA is not very important for release 1.3, and 
[FLINK-6672|https://issues.apache.org/jira/browse/FLINK-6672] can also deal 
with this problem, so I am closing this JIRA.


> Fix "parameterTypeEquals" method error.
> ---
>
> Key: FLINK-6740
> URL: https://issues.apache.org/jira/browse/FLINK-6740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we define a UDTF as follows:
> {code}
> class TableFuncPojo extends TableFunction[TPojo] {
>   def eval(age: Int, name:String): Unit = {
> collect(new TPojo(age.toLong,name))
>   }
>   def eval(age: Date, name:String): Unit = {
>  collect(new 
> TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
>   }
> }
> {code}
> TableAPI:
> {code}
>  val table = stream.toTable(tEnv,
>   'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
> 'long.rowtime)
> val windowedTable = table
>   .join(udtf('date, 'string) as 'pojo2).select('pojo2)
> {code}
> We will get the following error:
> {code}
> org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
> which match the signature.
>   at 
> org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
>   at 
> org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
>   at org.apache.flink.table.api.Table.join(table.scala:539)
>   at org.apache.flink.table.api.Table.join(table.scala:328)
>   at 
> org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
> {code}
> The reason is the logic in the `parameterTypeEquals` method:
> {code}
> candidate == classOf[Date] && (expected == classOf[Int] || expected == 
> classOf[JInt]) 
> {code}
> I think we can modify the logic of the `parameterTypeEquals` method.
> What do you think? Any feedback is welcome...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-28 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6740:
---
Description: 
When we define a UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
 collect(new 
TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We will get the following error:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is the logic in the `parameterTypeEquals` method:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
I think we can modify the logic of the `parameterTypeEquals` method.
What do you think? Any feedback is welcome...


  was:
When we define UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
 collect(new 
TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We will get the error as following:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is in {{ parameterTypeEquals }} method, logical as follows:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
But sometimes user really need two eval 
So, We should modify the logical of  {{ parameterTypeEquals }} method.
What do you think? Welcome anybody feedback...




> Fix "parameterTypeEquals" method error.
> ---
>
> Key: FLINK-6740
> URL: https://issues.apache.org/jira/browse/FLINK-6740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> When we define a UDTF as follows:
> {code}
> class TableFuncPojo extends TableFunction[TPojo] {
>   def eval(age: Int, name:String): Unit = {
> collect(new TPojo(age.toLong,name))
>   }
>   def eval(age: Date, name:String): Unit = {
>  collect(new 
> TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
>   }
> }
> {code}
> TableAPI:
> {code}
>  val table = stream.toTable(tEnv,
>   'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
> 'long.rowtime)
> val windowedTable = table
>   .join(udtf('date, 'string) as 'pojo2).select('pojo2)
> {code}
> We will get the following error:
> {code}
> org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
> which match the signature.
>   at 
> org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
>   at 
> org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
>   at org.apache.flink.table.api.Table.join(table.scala:539)
>   at org.apache.flink.table.api.Table.join(table.scala:328)
>   at 
> org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
> {code}
> The reason is the logic in the `parameterTypeEquals` method:
> {code}
> candidate == classOf[Date] && (expected == classOf[Int] || expected == 
> classOf[JInt]) 
> {code}
> I think we can modify the logic of the `parameterTypeEquals` method.
> What do you think? Any feedback is welcome...

[jira] [Assigned] (FLINK-6730) Activate strict checkstyle for flink-optimizer

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6730:
---

Assignee: (was: Chesnay Schepler)

> Activate strict checkstyle for flink-optimizer
> --
>
> Key: FLINK-6730
> URL: https://issues.apache.org/jira/browse/FLINK-6730
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Chesnay Schepler
>
> Long term issue for incrementally introducing the strict checkstyle to 
> flink-optimizer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6740) Fix "parameterTypeEquals" method error.

2017-05-28 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6740:
---
Description: 
When we define a UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
 collect(new 
TPojo(org.apache.calcite.runtime.SqlFunctions.toInt(age).toLong,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We will get the following error:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is the logic in the {{parameterTypeEquals}} method:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
But sometimes a user really needs two eval methods.
So, we should modify the logic of the {{parameterTypeEquals}} method.
What do you think? Any feedback is welcome...



  was:
When we define UDTF as follows:
{code}
class TableFuncPojo extends TableFunction[TPojo] {
  def eval(age: Int, name:String): Unit = {
collect(new TPojo(age.toLong,name))
  }
  def eval(age: Date, name:String): Unit = {
  collect(new TPojo(age.getTime,name))
  }
}
{code}

TableAPI:
{code}
 val table = stream.toTable(tEnv,
  'long2, 'int, 'double, 'float, 'bigdec, 'ts, 'date,'pojo, 'string, 
'long.rowtime)
val windowedTable = table
  .join(udtf('date, 'string) as 'pojo2).select('pojo2)
{code}
We will get the error as following:
{code}
org.apache.flink.table.api.ValidationException: Found multiple 'eval' methods 
which match the signature.

at 
org.apache.flink.table.functions.utils.UserDefinedFunctionUtils$.getUserDefinedMethod(UserDefinedFunctionUtils.scala:180)
at 
org.apache.flink.table.plan.logical.LogicalTableFunctionCall.validate(operators.scala:700)
at org.apache.flink.table.api.Table.join(table.scala:539)
at org.apache.flink.table.api.Table.join(table.scala:328)
at 
org.apache.flink.table.runtime.datastream.DataStreamAggregateITCase.test1(DataStreamAggregateITCase.scala:84)
{code}

The reason is in {{ parameterTypeEquals }} method, logical as follows:
{code}
candidate == classOf[Date] && (expected == classOf[Int] || expected == 
classOf[JInt]) 
{code}
TestData:
{code}
val data = List(
(1L, 1, 1d, 1f, new BigDecimal("1"),
  new Timestamp(200020200),new Date(100101010),new TPojo(1L, "XX"),"Hi"),
(2L, 2, 2d, 2f, new BigDecimal("2"),
  new Timestamp(200020200),new Date(100101010),new TPojo(1L, "XX"),"Hallo"),
(3L, 2, 2d, 2f, new BigDecimal("2"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, 
"XX"),"Hello"),
(4L, 5, 5d, 5f, new BigDecimal("5"),
  new Timestamp(200020200), new Date(2334234),new TPojo(2L, "YY"),"Hello"),
(7L, 3, 3d, 3f, new BigDecimal("3"),
  new Timestamp(200020200), new Date(66633),new TPojo(1L, 
"XX"),"Hello"),
(8L, 3, 3d, 3f, new BigDecimal("3"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, "XX"),"Hello 
world"),
(16L, 4, 4d, 4f, new BigDecimal("4"),
  new Timestamp(200020200), new Date(100101010),new TPojo(1L, "XX"),"Hello 
world"))
{code}
But when we only define one `eval` method, we got different result, as follows:

{code}
// for def eval(age: Int, name:String)
Pojo{id=0, name='Hello'}
Pojo{id=1, name='Hallo'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello world'}
Pojo{id=1, name='Hello'}
Pojo{id=1, name='Hi'}
Pojo{id=8, name='Hello'}

// for def eval(age: Date, name:String)
Pojo{id=-2880, name='Hello'}
Pojo{id=5760, name='Hallo'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello world'}
Pojo{id=5760, name='Hello'}
Pojo{id=5760, name='Hi'}
Pojo{id=66240, name='Hello'}
{code}

So, We should modify the logical of  {{ parameterTypeEquals }} method.
What do you think? Welcome anybody feedback...




> Fix "parameterTypeEquals" method error.
> ---
>
> Key: FLINK-6740
> URL: https://issues.apache.org/jira/browse/FLINK-6740
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL

[jira] [Assigned] (FLINK-6730) Activate strict checkstyle for flink-optimizer

2017-05-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6730:
---

Assignee: Chesnay Schepler

> Activate strict checkstyle for flink-optimizer
> --
>
> Key: FLINK-6730
> URL: https://issues.apache.org/jira/browse/FLINK-6730
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> Long term issue for incrementally introducing the strict checkstyle to 
> flink-optimizer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)