[GitHub] flink issue #2354: [FLINK-4366] Enforce parallelism=1 For AllWindowedStream

2016-08-11 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2354
  
Thanks for the advice. Do you have any ideas about naming the subclass?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2352: [FLINK-4370] Add an IntelliJ Inspections Profile

2016-08-11 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2352
  
I agree with @tillrohrmann. But can this actually work? The `.idea` 
folder is in the `.gitignore` list.




[GitHub] flink pull request #2354: [FLINK-4366] Enforce parallelism=1 For AllWindowed...

2016-08-11 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2354

[FLINK-4366] Enforce parallelism=1 For AllWindowedStream

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

This PR prevents users from setting a parallelism on an all-windowed stream. 

I added an `isAllWindow` field to mark a transformation as all-windowed. I 
would appreciate suggestions for a better solution. 
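A minimal sketch of the approach described above, using hypothetical class and field names (not Flink's actual internals): an all-windowed transformation rejects any parallelism other than 1.

```java
// Hypothetical sketch: `Transformation` and `isAllWindow` are illustrative names,
// not Flink's real classes. An all-windowed operation only accepts parallelism 1.
class Transformation {
    private final String name;
    private final boolean isAllWindow; // hypothetical marker field
    private int parallelism = 1;

    Transformation(String name, boolean isAllWindow) {
        this.name = name;
        this.isAllWindow = isAllWindow;
    }

    Transformation setParallelism(int p) {
        if (isAllWindow && p != 1) {
            throw new UnsupportedOperationException(
                "The parallelism of non-parallel operation '" + name + "' must be 1.");
        }
        this.parallelism = p;
        return this;
    }

    int getParallelism() {
        return parallelism;
    }
}
```

With such a flag, a later `setParallelism(n)` on the result of an all-window operation fails fast instead of silently producing an invalid graph.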


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4366

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2354.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2354


commit 79ac99a715ef9bf15de0e5f0cd8080ebc8d278c6
Author: Jark Wu 
Date:   2016-08-11T08:04:54Z

[FLINK-4366] Enforce parallelism=1 For AllWindowedStream






[GitHub] flink issue #2306: [FLINK-4270] [table] fix 'as' in front of join does not w...

2016-08-11 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2306
  
Hi @twalthr, it would be great if you could take a quick look at this. The 
issue has been pending for two weeks.




[GitHub] flink issue #2305: [FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and ...

2016-08-03 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2305
  
Hi @aljoscha @tillrohrmann , what do you think about this?





[GitHub] flink issue #2305: [FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and ...

2016-08-03 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2305
  
I think specifying the parallelism between `window(...)` and `apply(...)` is not 
nice. We need not only the `setParallelism` function but also `name`, `uid`, 
`slotSharingGroup`, and many other methods of `SingleOutputStreamOperator`. If 
we copy all of these into `WithWindow`, it will be very duplicative.




[GitHub] flink pull request #2282: [FLINK-3940] [table] Add support for ORDER BY OFFS...

2016-08-03 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r73292974
  
--- Diff: docs/apis/table.md ---
@@ -606,6 +606,28 @@ Table result = in.orderBy("a.asc");
   
 
 
+
+  Offset
+  
+Similar to a SQL OFFSET clause. Returns rows from offset 
position. It is technically part of the ORDER BY clause.
+{% highlight java %}
+Table in = tableEnv.fromDataSet(ds, "a, b, c");
+Table result = in.orderBy("a.asc").offset(3);
+{% endhighlight %}
+  
+
+
+
+  Fetch
+  
+Similar to a SQL FETCH clause. Returns a set number of rows. 
FETCH can’t be used by itself, it is used in conjunction with OFFSET.
--- End diff --

I prefer `orderBy().limit()` as it's cleaner. We could then (maybe) combine 
`Offset` and `Fetch` into a single `Limit` instead; the logic in `Offset` and 
`Fetch` is somewhat duplicated.




[GitHub] flink issue #2305: [FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and ...

2016-08-02 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2305
  
I agree with @aljoscha. Can we reduce the checker's sensitivity to let 
this change pass? And what do we do when we need to break compatibility?




[GitHub] flink issue #2305: [FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and ...

2016-07-28 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2305
  
The CI failed because of japicmp, as we changed the public API. 

I have no idea how to fix this... 




[GitHub] flink pull request #2306: [FLINK-4270] [table] fix 'as' in front of join doe...

2016-07-28 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2306

[FLINK-4270] [table] fix 'as' in front of join does not work

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


The `Project` constructs the RelNode in a wrong way. It pushes a new 
`LogicalProject` onto the stack but does not pop the previous RelNode. This 
causes a problem when joining two tables. 

When joining two tables (with all fields renamed), the stack will look like 
this before the join:

```
LogicalProject1
TableScan1
LogicalProject0
TableScan0
``` 

After that, the join will use `LogicalProject1` and `TableScan1`, which belong 
to the same table, to create the `LogicalJoin`. Then the exception is thrown.

The correct way to fix this is to make sure the previous TableScan is removed 
from the stack before we push a new LogicalProject:

```
LogicalProject1
LogicalProject0
``` 
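The fix described above can be modeled with a tiny stack sketch, where plain strings stand in for RelNodes: pushing a new project must replace its child on the stack rather than sit on top of it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the described fix (node names are plain strings here, not
// Calcite RelNodes): a project replaces the node it projects over.
class ProjectStack {
    static void pushProject(Deque<String> stack, String project) {
        stack.pop();         // drop the child (e.g. the previous TableScan)
        stack.push(project); // the new project now stands in for that child
    }
}
```

Applying this per input leaves exactly one node per table on the stack, so the join pairs `LogicalProject1` with `LogicalProject0` as intended.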



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4270

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2306.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2306


commit 20412c044103158dabd3de1f7a056018231f9275
Author: Jark Wu 
Date:   2016-07-28T09:33:53Z

[FLINK-4270] [table] fix 'as' in front of join does not work






[GitHub] flink pull request #2305: [FLINK-4271] [DataStreamAPI] Enable CoGroupedStrea...

2016-07-28 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2305

[FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and JoinedStreams to 
set parallelism

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

The CoGroupedStreams will construct the following graph:

```
source -> MAP ---
|->  WindowOp -> Sink
source -> MAP ---
```

Currently, the parallelism of the MAP and the WindowOp cannot be set. We can 
keep the MAP at the same parallelism as the previous operator (chaining). And 
we can change `CoGroupedStreams.apply` to return a `SingleOutputStreamOperator` 
instead of a `DataStream`, so that we can set the WindowOp's parallelism. The 
same can be done for `JoinedStreams`.

Then we can do the following:

```
DataStream result = one.coGroup(two)
    .where(new MyFirstKeySelector())
    .equalTo(new MyFirstKeySelector())
    .window(TumblingEventTimeWindows.of(Time.of(5, TimeUnit.SECONDS)))
    .apply(new MyCoGroupFunction())
    .setParallelism(10)
    .name("MyCoGroupWindow");
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink CoGroupStreams

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2305.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2305


commit 7b9594a175f33e62826a0cb51380f33dec5857b6
Author: Jark Wu 
Date:   2016-07-28T06:32:13Z

[FLINK-4271] [DataStreamAPI] Enable CoGroupedStreams and JoinedStreams to 
set parallelism.






[GitHub] flink issue #2274: [FLINK-4180] [FLINK-4181] [table] add Batch SQL and Strea...

2016-07-24 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2274
  
@smarthi  Thanks for reviewing. I have addressed the typo.




[GitHub] flink pull request #2282: [FLINK-3940] [table] Add support for ORDER BY OFFS...

2016-07-22 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898802
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
+super.validate(tableEnv)
+  }
+}
+
+case class Fetch(fetch: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+
+val newChild = child.asInstanceOf[Offset].child
+newChild.construct(relBuilder)
+val relNode = child.toRelNode(relBuilder).asInstanceOf[LogicalSort]
+relBuilder.limit(RexLiteral.intValue(relNode.offset), fetch)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Fetch on stream tables is currently not 
supported.")
+}
--- End diff --

I think we need to check here that the 'fetch' follows an 'orderBy' and an 
'offset'. Otherwise, the class cast in `construct` will throw an exception.




[GitHub] flink pull request #2282: [FLINK-3940] [table] Add support for ORDER BY OFFS...

2016-07-22 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2282#discussion_r71898818
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -150,6 +150,41 @@ case class Sort(order: Seq[Ordering], child: 
LogicalNode) extends UnaryNode {
   }
 }
 
+case class Offset(offset: Int, child: LogicalNode) extends UnaryNode {
+  override def output: Seq[Attribute] = child.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+child.construct(relBuilder)
+relBuilder.limit(offset, -1)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+if (tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  throw new TableException(s"Offset on stream tables is currently not 
supported.")
+}
--- End diff --

I think we should check here that the 'offset' follows an 'orderBy'.






[GitHub] flink pull request #2280: [FLINK-4244] [docs] Field names for union operator...

2016-07-21 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2280

[FLINK-4244] [docs] Field names for union operator do not have to be equal

We just merged FLINK-2985 but did not update the documentation. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4244

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2280.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2280


commit a3174fb89c7e2ab3e7bcb4f88c9ab3dbe7d47473
Author: Jark Wu 
Date:   2016-07-21T15:25:44Z

[FLINK-4244] [docs] Field names for union operator do not have to be equal






[GitHub] flink pull request #2274: [FLINK-4180] [FLINK-4181] [table] add Batch SQL an...

2016-07-20 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2274

[FLINK-4180] [FLINK-4181] [table] add Batch SQL and Stream SQL and Stream 
Table API examples


I added 4 examples for the flink-table module: 

1. `org.apache.flink.examples.java.JavaSQLExample`: Batch SQL in Java
2. `org.apache.flink.examples.scala.WordCountSQL`: Batch SQL in Scala
3. `org.apache.flink.examples.scala.StreamSQLExample`: Stream SQL in Scala
4. `org.apache.flink.examples.scala.StreamTableExample`: Stream Table API 
in Scala


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink table-example

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2274.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2274


commit 8b3c468adb574f229c6d24da58d4e52c4a025cec
Author: Jark Wu 
Date:   2016-07-20T16:31:01Z

[FLINK-4180] [FLINK-4181] [table] add Batch SQL and Stream SQL and Stream 
Table API examples






[GitHub] flink issue #2265: [FLINK-3097] [table] Add support for custom functions in ...

2016-07-19 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2265
  
Yes, I see. That's great!




[GitHub] flink issue #2265: [FLINK-3097] [table] Add support for custom functions in ...

2016-07-19 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2265
  
Yes, you are right. I'm just a little concerned about the class name 
`ScalarFunction`, haha..  

In addition, I think the Java Table API should be `table.select("hashCode(text)");`, 
which is better. Assuming the eval function takes two or more parameters, 
`"udf(a,b)"` would still work and be consistent with the Scala Table 
API and SQL syntax.




[GitHub] flink issue #2265: [FLINK-3097] [table] Add support for custom functions in ...

2016-07-18 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2265
  
Do we have any Google docs or FLIP discussing this design? 

I think `ScalarFunction` has too many internal functions and should 
not be exposed to users. Maybe we can create a new interface for custom 
functions, such as `UDF` or `UserDefinedFunction`. 







[GitHub] flink issue #2159: [FLINK-3942] [tableAPI] Add support for INTERSECT

2016-07-08 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2159
  
@twalthr   Thanks a lot. 




[GitHub] flink issue #2120: [FLINK-4070] [tableApi] Support literals on left side of ...

2016-07-04 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2120
  
Hi @twalthr, I find that `expr` in `expressionDsl.scala` already does what we 
want, since we can write things like `12.expr % 'f0` or `12 * 'f0.expr`.

So does this PR just need to rename `expr` to `toExpr`? 




[GitHub] flink issue #2078: [FLINK-2985] Allow different field names for unionAll() i...

2016-07-03 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2078
  
Yes, you are right. Now this PR look good to me. 👍 




[GitHub] flink issue #2078: [FLINK-2985] Allow different field names for unionAll() i...

2016-07-03 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2078
  
Hi @gallenvara, I debugged the `IndexOutOfBoundsException` in 
`testJoinWithDisjunctivePred` and found this line: [L526 in 
CodeGenerator](https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/CodeGenerator.scala#L526).
  It is the reason for the test failure. Because `==` in Scala is 
`equals`, when we visit the inputRef of `'d`, input1 and input2 have the same 
field types ([Int, Long, String]). So we get the wrong index (3), which 
causes the IOOB exception, whereas we want index (0). 

We just need to change L526 to `val index = if (input._2 == input1Term) {` 
to fix this problem.
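The root cause comes down to Scala's equality semantics: Scala's `==` delegates to Java's `equals()` (structural equality), so two distinct inputs with identical field types compare equal, while comparing a unique term name (a plain string) does not have that ambiguity. A minimal illustration, written here in Java:

```java
import java.util.Arrays;
import java.util.List;

// Demonstrates the two notions of equality involved: structural equality
// (what Scala's `==` evaluates) vs. reference identity (Java's `==`).
class EqualityDemo {
    static boolean structurallyEqual(List<String> a, List<String> b) {
        return a.equals(b); // what Scala's == does
    }

    static boolean sameReference(List<String> a, List<String> b) {
        return a == b; // reference identity
    }
}
```

Two separately built lists of the same field types are structurally equal but not the same reference, which is exactly why a type-based comparison picked the wrong input.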




[GitHub] flink pull request #2078: [FLINK-2985] Allow different field names for union...

2016-07-03 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2078#discussion_r69402729
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/typeutils/RowTypeInfo.scala
 ---
@@ -25,38 +25,25 @@ import 
org.apache.flink.api.scala.typeutils.CaseClassTypeInfo
 
 import scala.collection.mutable.ArrayBuffer
 import org.apache.flink.api.common.typeutils.TypeSerializer
-import org.apache.flink.api.table.{Row, TableException}
+import org.apache.flink.api.table.Row
 
 /**
  * TypeInformation for [[Row]].
  */
-class RowTypeInfo(fieldTypes: Seq[TypeInformation[_]], fieldNames: 
Seq[String])
+class RowTypeInfo(fieldTypes: Seq[TypeInformation[_]])
   extends CaseClassTypeInfo[Row](
 classOf[Row],
 Array(),
 fieldTypes,
-fieldNames)
+Nil)
--- End diff --

I would like to replace `Nil` with `for (i <- fieldTypes.indices) yield "f" 
+ i`, so that we can keep `fieldNames` as a `val`.
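A small sketch of what the suggested replacement produces, with plain strings standing in for `TypeInformation` instances: one default name `f0`, `f1`, ... per field type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Derive default field names f0, f1, ... from the field types, instead of
// leaving the name list empty (the `Nil` in the diff above).
class FieldNames {
    static List<String> defaultNames(List<?> fieldTypes) {
        List<String> names = new ArrayList<>();
        for (int i = 0; i < fieldTypes.size(); i++) {
            names.add("f" + i);
        }
        return names;
    }
}
```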




[GitHub] flink issue #2159: [FLINK-3942] [tableAPI] Add support for INTERSECT

2016-06-30 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2159
  
Thanks @fhueske for your review. I have addressed all the comments and 
squashed the commits. 




[GitHub] flink issue #2182: [Flink-4130] CallGenerator could generate illegal code wh...

2016-06-29 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2182
  
Yeah... the `isNull$17` is not declared if we do what I said above, and of 
course that will cause a compile error. 

The code looks good to me now, although it looks a little weird that 
`$nullTerm` is always `false`. 




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-29 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r69065352
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -236,6 +236,32 @@ case class Aggregate(
   }
 }
 
+case class SetMinus(left: LogicalNode, right: LogicalNode, all: Boolean) 
extends BinaryNode {
+  override def output: Seq[Attribute] = left.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+left.construct(relBuilder)
+right.construct(relBuilder)
+relBuilder.minus(all)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+val resolvedMinus = super.validate(tableEnv).asInstanceOf[SetMinus]
+if (left.output.length != right.output.length) {
+  failValidation(s"Set minus two table of different column sizes:" +
+s" ${left.output.size} and ${right.output.size}")
+}
+val sameSchema = left.output.zip(right.output).forall { case (l, r) =>
+  l.resultType == r.resultType && l.name == r.name }
--- End diff --

Yes, I was referring to the last case. I agree with @fhueske's opinion: we can 
remove the check of field names in `EXCEPT` and `INTERSECT` now, and remove the 
restriction in `UNION` in the future. 




[GitHub] flink pull request #2182: [Flink-4130] CallGenerator could generate illegal ...

2016-06-29 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2182#discussion_r68969793
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/calls/CallGenerator.scala
 ---
@@ -43,11 +43,16 @@ object CallGenerator {
 val nullTerm = newName("isNull")
 val resultTypeTerm = primitiveTypeTermForTypeInfo(returnType)
 val defaultValue = primitiveDefaultValue(returnType)
+val nullCheckTerms = if(operands.size > 0) {
+  operands.map(_.nullTerm).mkString(" || ")
+} else {
+  nullCheck + ""
+}
 
 val resultCode = if (nullCheck) {
--- End diff --

I think we just need to change this condition to `if (nullCheck && 
operands.nonEmpty)`, because if there are no operands, we do not care 
whether any operand is null.




[GitHub] flink pull request #2182: [Flink-4130] CallGenerator could generate illegal ...

2016-06-29 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2182#discussion_r68969768
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/codegen/calls/CallGenerator.scala
 ---
@@ -43,11 +43,16 @@ object CallGenerator {
 val nullTerm = newName("isNull")
 val resultTypeTerm = primitiveTypeTermForTypeInfo(returnType)
 val defaultValue = primitiveDefaultValue(returnType)
+val nullCheckTerms = if(operands.size > 0) {
+  operands.map(_.nullTerm).mkString(" || ")
+} else {
+  nullCheck + ""
+}
--- End diff --

It seems that when `operands.size == 0` and nullCheck is enabled, 
`$nullTerm` will always be true, which means `$resultTerm` will always be the 
default value. That may be wrong. 




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-28 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r68706865
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -236,6 +236,32 @@ case class Aggregate(
   }
 }
 
+case class SetMinus(left: LogicalNode, right: LogicalNode, all: Boolean) 
extends BinaryNode {
+  override def output: Seq[Attribute] = left.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+left.construct(relBuilder)
+right.construct(relBuilder)
+relBuilder.minus(all)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+val resolvedMinus = super.validate(tableEnv).asInstanceOf[SetMinus]
+if (left.output.length != right.output.length) {
+  failValidation(s"Set minus two table of different column sizes:" +
+s" ${left.output.size} and ${right.output.size}")
+}
+val sameSchema = left.output.zip(right.output).forall { case (l, r) =>
+  l.resultType == r.resultType && l.name == r.name }
--- End diff --

I think both tables must have the same number of fields with compatible data 
types, but they can have different field names. E.g. 
`testMinusDifferentFieldNames` should pass without an exception. Correct me if 
I'm wrong. 
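
A minimal, self-contained sketch of the relaxed validation (the `Attribute` type here is a simplified stand-in, with the field type as a string):

```scala
// Relaxed check: same arity and same field types; names may differ.
case class Attribute(name: String, resultType: String)

def minusCompatible(left: Seq[Attribute], right: Seq[Attribute]): Boolean =
  left.length == right.length &&
    left.zip(right).forall { case (l, r) => l.resultType == r.resultType }

val a = Seq(Attribute("a", "Int"), Attribute("b", "String"))
val b = Seq(Attribute("x", "Int"), Attribute("y", "String"))
assert(minusCompatible(a, b))          // different names, same types: accepted
assert(!minusCompatible(a, b.reverse)) // types in a different order: rejected
```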




[GitHub] flink pull request #2173: [FLINK-4109] [tableAPI] Change the name of ternary...

2016-06-27 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2173

[FLINK-4109] [tableAPI] Change the name of ternary condition operator

It's better to use "?" than "eval()" for the ternary condition operator in 
the Table API, since most people coming from Java/C/C++ know what "?" does, 
whereas "eval()" is ambiguous.

The condition operator looks like this now:

```
(42 > 5).?("A", "B")
(42 > 5) ? ("A", "B")
```

The document has been updated too. 

As it is a public API, I hope it can be reviewed soon and merged before 
release.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4109

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2173.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2173


commit 8c689ccf269e706895e26e8eb3638e226d2c783b
Author: Jark Wu 
Date:   2016-06-28T04:37:12Z

[FLINK-4109] [tableAPI] Change the name of ternary condition operator 
'eval' to '?'






[GitHub] flink issue #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-27 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2169
  
Hi @mushketyk, I think we should remove duplicate records in the CoGroup 
instead of using `distinct`. The rest looks good to me. 




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-27 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r68689676
  
--- Diff: docs/apis/table.md ---
@@ -873,7 +920,7 @@ val result = tableEnv.sql(
 
  Limitations
 
-The current version of streaming SQL only supports `SELECT`, `FROM`, 
`WHERE`, and `UNION` clauses. Aggregations or joins are not supported yet.
+The current version of streaming SQL only supports `SELECT`, `FROM`, 
`WHERE`, `UNION` and `EXCEPT` clauses. Aggregations or joins are not supported 
yet.
--- End diff --

It seems that streaming SQL does not support `EXCEPT` yet.




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-27 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r68689427
  
--- Diff: docs/apis/table.md ---
@@ -695,6 +718,30 @@ val result = left.unionAll(right);
 
 
 
+  Minus
+  
+Similar to a SQL EXCEPT clause. Returns elements from the first 
table that do not exist in the second table. Both tables must have identical 
schema(field names and types).
+{% highlight scala %}
+val left = ds1.toTable(tableEnv, 'a, 'b, 'c);
+val right = ds2.toTable(tableEnv, 'a, 'b, 'c);
+val result = left.minus(right);
+{% endhighlight %}
+  
+
+
+
+  UnionAll
--- End diff --

UnionAll  -> MinusAll




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-27 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r68687942
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/logical/operators.scala
 ---
@@ -236,6 +236,32 @@ case class Aggregate(
   }
 }
 
+case class SetMinus(left: LogicalNode, right: LogicalNode, all: Boolean) 
extends BinaryNode {
+  override def output: Seq[Attribute] = left.output
+
+  override protected[logical] def construct(relBuilder: RelBuilder): 
RelBuilder = {
+left.construct(relBuilder)
+right.construct(relBuilder)
+relBuilder.minus(all)
+  }
+
+  override def validate(tableEnv: TableEnvironment): LogicalNode = {
+val resolvedMinus = super.validate(tableEnv).asInstanceOf[SetMinus]
+if (left.output.length != right.output.length) {
+  failValidation(s"Set minus two table of different column sizes:" +
+s" ${left.output.size} and ${right.output.size}")
+}
+val sameSchema = left.output.zip(right.output).forall { case (l, r) =>
+  l.resultType == r.resultType && l.name == r.name }
--- End diff --

Set minus does not require the field names to be exactly the same, so I suggest 
removing the name constraint. Actually, the same applies to Union; however, for 
Union the problem is too complex to be resolved simply.




[GitHub] flink pull request #2169: [FLINK-3943] Add support for EXCEPT operator

2016-06-27 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2169#discussion_r68687402
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/nodes/dataset/DataSetMinus.scala
 ---
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.table.plan.nodes.dataset
+
+import org.apache.calcite.plan.{RelOptCluster, RelOptCost, RelOptPlanner, 
RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{BiRel, RelNode, RelWriter}
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.java.DataSet
+import org.apache.flink.api.table.BatchTableEnvironment
+
+import scala.collection.JavaConverters._
+import scala.collection.JavaConversions._
+
+/**
+  * Flink RelNode which matches along with DataSetOperator.
+  *
+  */
+class DataSetMinus(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+left: RelNode,
+right: RelNode,
+rowType: RelDataType,
+all: Boolean)
+  extends BiRel(cluster, traitSet, left, right)
+with DataSetRel {
+
+  override def deriveRowType() = rowType
+
+  override def copy(traitSet: RelTraitSet, inputs: 
java.util.List[RelNode]): RelNode = {
+new DataSetMinus(
+  cluster,
+  traitSet,
+  inputs.get(0),
+  inputs.get(1),
+  rowType,
+  all
+)
+  }
+
+  override def toString: String = {
+s"SetMinus(setMinus: 
(${rowType.getFieldNames.asScala.toList.mkString(", ")}))"
+  }
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+super.explainTerms(pw).item("setMinus", setMinusSelectionToString)
+  }
+
+  override def computeSelfCost (planner: RelOptPlanner, metadata: 
RelMetadataQuery): RelOptCost = {
+
+val children = this.getInputs
+val rowCnt = children.foldLeft(0D) { (rows, child) =>
+  rows + metadata.getRowCount(child)
+}
+
+planner.getCostFactory.makeCost(rowCnt, 0, 0)
+  }
+
+  override def translateToPlan(
+  tableEnv: BatchTableEnvironment,
+  expectedType: Option[TypeInformation[Any]]): DataSet[Any] = {
+
+var leftDataSet: DataSet[Any] = null
+var rightDataSet: DataSet[Any] = null
+
+expectedType match {
+  case None =>
+leftDataSet = 
left.asInstanceOf[DataSetRel].translateToPlan(tableEnv)
+rightDataSet =
+  right.asInstanceOf[DataSetRel].translateToPlan(tableEnv, 
Some(leftDataSet.getType))
+  case _ =>
+leftDataSet = 
left.asInstanceOf[DataSetRel].translateToPlan(tableEnv, expectedType)
+rightDataSet = 
right.asInstanceOf[DataSetRel].translateToPlan(tableEnv, expectedType)
+}
+
+val minusRes = leftDataSet.minus(rightDataSet)
+if (!all) {
+  minusRes.distinct()
--- End diff --

I think it's better to remove duplicate records in the CoGroup: when it is a 
minus without ALL, emit only one record instead of every record from the left 
data set. Then there is no need for a `distinct` afterwards. This is the more 
robust choice because it won't create a huge intermediate result in case of 
many duplicate records.
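
The proposed per-key behavior can be simulated on plain collections (a sketch of the semantics only, not Flink's `coGroup` API):

```scala
// Per key: MINUS emits the key once if it only occurs on the left;
// MINUS ALL emits the surplus left occurrences. No distinct needed afterwards.
def minus[T](left: Seq[T], right: Seq[T], all: Boolean): Seq[T] =
  (left ++ right).distinct.flatMap { key =>
    val l = left.count(_ == key)
    val r = right.count(_ == key)
    if (all) Seq.fill(math.max(l - r, 0))(key)
    else if (l > 0 && r == 0) Seq(key)
    else Seq.empty
  }

assert(minus(Seq(1, 1, 2, 3), Seq(1, 3), all = false) == Seq(2))
assert(minus(Seq(1, 1, 2, 3), Seq(1, 3), all = true) == Seq(1, 2))
```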




[GitHub] flink issue #2159: [FLINK-3942] [tableAPI] Add support for INTERSECT

2016-06-27 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2159
  
Hi @fhueske, using CoGroup instead of Join is a good idea. I updated 
the PR according to your advice. Meanwhile, I updated the documentation too 
(correct me if I described anything wrong). 

NOTE:

1. use CoGroup instead of Join, no code gen
2. add INTERSECT-related tests to UnionITCase, and rename it to 
`SetOperatorsITCase`
3. remove the INTERSECT Java API tests 
4. add the `intersectAll` function to `Table`
5. mark `testIntersectAll` as `@Ignore` in `sql/SetOperatorsITCase`, 
because the Calcite SQL parser doesn't support INTERSECT ALL; it throws the 
following exception:

```
java.lang.AssertionError: Internal error: set operator INTERSECT ALL not 
suported

  at org.apache.calcite.util.Util.newInternal(Util.java:777)
  at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSetOp(SqlToRelConverter.java:2920)
  at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:2885)
  at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:582)
  at 
org.apache.flink.api.table.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:114)
  at 
org.apache.flink.api.table.BatchTableEnvironment.sql(BatchTableEnvironment.scala:132)
  at 
org.apache.flink.api.scala.batch.sql.SetOperatorsITCase.testIntersectAll(SetOperatorsITCase.scala:169)
```






[GitHub] flink pull request #2159: [FLINK-3942] [tableAPI] Add support for INTERSECT

2016-06-24 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2159

[FLINK-3942] [tableAPI] Add support for INTERSECT

Internally, I translate INTERSECT into a Join on all fields followed by a 
distinct to remove duplicate records. 

As the Calcite SQL parser doesn't support `INTERSECT ALL`, I didn't add 
an `intersectAll()` function to `Table`.

I can add the corresponding documentation if needed. 
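
The translation can be sketched on plain collections (semantics only; the actual PR works on the DataSet API):

```scala
// INTERSECT: keep left rows that also occur on the right, then
// de-duplicate -- the collection analogue of a join on all fields
// followed by a distinct.
def intersect[T](left: Seq[T], right: Seq[T]): Seq[T] =
  left.filter(right.contains).distinct

assert(intersect(Seq(1, 1, 2, 3), Seq(1, 3, 3, 4)) == Seq(1, 3))
assert(intersect(Seq(1, 2), Seq(3, 4)) == Seq.empty)
```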

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink INTERSECT

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2159


commit a666a3aabb01090eb3c6205b28e54a0dbb2e8a05
Author: Jark Wu 
Date:   2016-06-24T11:57:09Z

[FLINK-3942] [tableAPI] Add support for INTERSECT






[GitHub] flink issue #2102: [FLINK-4068] [tableAPI] Move constant computations out of...

2016-06-17 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2102
  
@twalthr That sounds great ! Thank you . 




[GitHub] flink pull request #2120: [FLINK-4070] [tableApi] Support literals on left s...

2016-06-17 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2120

[FLINK-4070] [tableApi] Support literals on left side of binary expre…

The Table API does not support literals on the left side of binary expressions 
like `+, -, *, /, %, >, <`, because those methods already exist on the `Int` 
class, so the Scala compiler does not search for an implicit conversion.

So I added a `toExpr` function to `Expression`; now we can write 
`12.toExpr % 'f0`. It is a useful feature in some cases.
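
A self-contained sketch of why the explicit conversion is needed (these are simplified stand-ins for the Table API types; only the `toExpr` idea mirrors the actual change):

```scala
sealed trait Expression {
  def %(other: Expression): Expression = Mod(this, other)
}
case class Literal(value: Int) extends Expression
case class FieldRef(name: String) extends Expression
case class Mod(left: Expression, right: Expression) extends Expression

// The added conversion: lift an Int into an Expression explicitly,
// since `12 % expr` resolves against Int's own `%` methods first.
implicit class IntToExpr(value: Int) {
  def toExpr: Expression = Literal(value)
}

val expr = 12.toExpr % FieldRef("f0")
assert(expr == Mod(Literal(12), FieldRef("f0")))
```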


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2120


commit 8530408554aa6929a54be3d03b2b33f28bd01fd1
Author: Jark Wu 
Date:   2016-06-17T03:39:21Z

[FLINK-4070] [tableApi] Support literals on left side of binary expressions






[GitHub] flink issue #2102: [FLINK-4068] [tableAPI] Move constant computations out of...

2016-06-16 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/2102
  
After introducing a `RexExecutor`, which makes `ReduceExpressionsRule` take 
effect, many errors occurred.  

1. The `cannot translate call AS($t0, $t1)` error is a Calcite bug, I think, and I 
created a related issue: 
[CALCITE-1295](https://issues.apache.org/jira/browse/CALCITE-1295).

2. We should replace 
[L69&L73](https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/expressions/arithmetic.scala#L69-L73)
 with `relBuilder.call(SqlStdOperatorTable.CONCAT, l, cast)`, otherwise it will throw 
the following exception, because Calcite has no `plus(String, String)` method. 

```
java.lang.RuntimeException: while resolving method 'plus[class 
java.lang.String, class java.lang.String]' in class class 
org.apache.calcite.runtime.SqlFunctions

at org.apache.calcite.linq4j.tree.Types.lookupMethod(Types.java:345)
at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:442)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$BinaryImplementor.implement(RexImpTable.java:1640)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementCall(RexImpTable.java:854)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementNullSemantics(RexImpTable.java:843)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementNullSemantics0(RexImpTable.java:756)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.access$900(RexImpTable.java:181)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$3.implement(RexImpTable.java:411)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateCall(RexToLixTranslator.java:535)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate0(RexToLixTranslator.java:507)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:222)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate0(RexToLixTranslator.java:472)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:222)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:217)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateList(RexToLixTranslator.java:700)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateProjects(RexToLixTranslator.java:192)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:80)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:59)
at 
org.apache.calcite.rex.RexExecutorImpl.reduce(RexExecutorImpl.java:118)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:544)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:455)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:438)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$CalcReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:350)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:213)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:838)
at 
org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:334)
at 
org.apache.flink.api.table.BatchTableEnvironment.translate(BatchTableEnvironment.scala:250)
at 
org.apache.flink.api.java.table.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:146)
at org.apache.flink.api.java.batch.table.ExpressionsITCase.testCom


3. The following error occurs when we convert `Trim` to a `RexNode`: we use an 
`Integer` to represent "LEADING", "TRAILING", "BOTH". Instead we should use 
`SqlTrimFunction.Flag`, but I haven't found out how to write a `SqlTrimFunction.Flag` 
into a `RexNode`.

```
java.lang.ClassCastException: java.lang.Integer cannot be cast to 
org.apache.calcite.sql.fun.SqlTrimFunction$Flag

at 
org.apache.calcite.adapter.enumerable.RexImpTable$TrimImplementor.implement(RexImpTable.java:1448)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementCall(RexImpTable.java:854)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementNullSemantics(RexImpTable.java:843)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.implementNullSemantics0(RexImpTable.java:756)
at 
org.apache.calcite.adapter.enumerable.RexImpTable.access$900(RexImpTable.java:181)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$3.implement(RexImpTable.java:411)
at 
org.apache.calcite.ad

[GitHub] flink pull request #2102: [FLINK-4068] [tableAPI] Move constant computations...

2016-06-15 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/2102#discussion_r67276795
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/sql/SelectITCase.scala
 ---
@@ -146,4 +148,21 @@ class SelectITCase(
 tEnv.sql(sqlQuery)
   }
 
+  @Test
+  def testConstantReduce(): Unit = {
--- End diff --

👍  It's a good idea. I will try it later. Also, the CI throws a `cannot 
translate call AS...` error; I will figure it out today. 




[GitHub] flink pull request #2102: [FLINK-4068] [tableAPI] Move constant computations...

2016-06-15 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/2102

[FLINK-4068] [tableAPI] Move constant computations out of code-generated

The `ReduceExpressionsRule` rule can reduce constant expressions, 
replacing them with the corresponding constant. We have 
`ReduceExpressionsRule.CALC_INSTANCE` in `DATASET_OPT_RULES`, but it does not 
take effect, because it requires the planner to have an executor to evaluate 
the constant expressions. This PR adds one, resolving FLINK-4068.

And some tests added.
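
The effect of the rule can be illustrated with a tiny constant folder (a sketch of the idea only, not Calcite's `ReduceExpressionsRule`):

```scala
sealed trait Expr
case class Const(v: Int) extends Expr
case class Col(i: Int) extends Expr
case class Plus(l: Expr, r: Expr) extends Expr

// Fold subtrees whose operands are all constant into a single constant,
// so the generated function no longer recomputes them per record.
def reduce(e: Expr): Expr = e match {
  case Plus(l, r) =>
    (reduce(l), reduce(r)) match {
      case (Const(a), Const(b)) => Const(a + b)
      case (a, b)               => Plus(a, b)
    }
  case other => other
}

assert(reduce(Plus(Const(1), Plus(Const(2), Const(3)))) == Const(6))
assert(reduce(Plus(Col(0), Plus(Const(2), Const(3)))) == Plus(Col(0), Const(5)))
```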

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-4068

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2102


commit 786653b7be085c1e80481137fed3d47c5da2357a
Author: Jark Wu 
Date:   2016-06-15T09:19:35Z

[FLINK-4068] [tableAPI] Move constant computations out of code-generated  
functions.






[GitHub] flink pull request: [docs] fix the active tab caption is not displ...

2016-03-04 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/1763

[docs] fix the active tab caption is not displaying blue

and a very minor typo fix


![image](https://cloud.githubusercontent.com/assets/5378924/13525465/0c4c8544-e23c-11e5-9ea3-b3357b114adc.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink codetabs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1763.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1763


commit 3007344bd905824e45931efe4cc39d4c5fed3810
Author: Jark Wu 
Date:   2016-03-04T09:57:40Z

[docs] fix the active tab caption is not displaying blue






[GitHub] flink pull request: [FLINK-3577] [docs] Display anchor links when ...

2016-03-04 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/1762

[FLINK-3577] [docs] Display anchor links when hovering over headers.

Displaying anchor links when hovering over headers makes it easy to share a 
document URL. Currently we must scroll up to the TOC, find the section, click 
it, and then copy the URL.

Also adds AnchorJS to LICENSE.

![2016-03-04 7 03 
42](https://cloud.githubusercontent.com/assets/5378924/13525491/37c994be-e23c-11e5-9ed2-319a3fbb9cfb.png)





You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink FLINK-3577

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1762.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1762


commit a9f31855cb58c0f7e592f7c6b74862dafb71c093
Author: Jark Wu 
Date:   2016-03-04T10:53:10Z

[FLINK-3577] [docs] Display anchor links when hovering over headers.






[GitHub] flink pull request: [docs] fix javascript exception caused by disq...

2016-03-02 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/1756

[docs] fix javascript exception caused by disqus

As we comment out `` but not the Disqus 
JavaScript, it will cause an exception like this:


![image](https://cloud.githubusercontent.com/assets/5378924/13482083/40bd92ea-e125-11e5-90f4-2682b763a288.png)

And fix a minor typo in `cluster_execution.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1756.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1756


commit 713d418b6259ee94abe35edf710c5cabf978e1c2
Author: Jark Wu 
Date:   2016-03-03T01:43:38Z

[docs] fix javascript exception caused by disqus and fix typos in cluster 
execution.






[GitHub] flink pull request: [docs] fix typos in Basic Concepts documentati...

2016-02-27 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/1730

[docs] fix typos in Basic Concepts documentation

fix typos in Basic Concepts documentation

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink DOCS

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1730.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1730


commit 1de78fdbd4351d97b85ee0bebb7cd59057a1a2c2
Author: Jark Wu 
Date:   2016-02-28T03:02:33Z

[docs] fix typos in Basic Concepts documentation





