[jira] [Updated] (FLINK-5386) Refactoring Window Clause

2016-12-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-5386:
---
Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-4557

> Refactoring Window Clause
> -
>
> Key: FLINK-5386
> URL: https://issues.apache.org/jira/browse/FLINK-5386
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5386) Refactoring Window Clause

2016-12-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-5386:
--

 Summary: Refactoring Window Clause
 Key: FLINK-5386
 URL: https://issues.apache.org/jira/browse/FLINK-5386
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: sunjincheng
Assignee: sunjincheng






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-1166 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1946 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2119 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2157 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2220 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2319 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2363 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2399 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2428 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2472 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2480 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2609 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2823 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3155 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3331 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3964 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4653 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4717 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4829 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-5016 -> closed on github;

should be discussed before:
https://issues.apache.org/jira/browse/FLINK-1055 ;
https://issues.apache.org/jira/browse/FLINK-1098 -> create other issue to add a 
colectEach();
https://issues.apache.org/jira/browse/FLINK-1100 ;
https://issues.apache.org/jira/browse/FLINK-1146 ;
https://issues.apache.org/jira/browse/FLINK-1335 -> maybe rename?;
https://issues.apache.org/jira/browse/FLINK-1439 ;
https://issues.apache.org/jira/browse/FLINK-1447 -> firefox problem?;
https://issues.apache.org/jira/browse/FLINK-1521 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1538 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1541 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1723 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-1814 ;
https://issues.apache.org/jira/browse/FLINK-1858 -> is QA bot deleted?;
https://issues.apache.org/jira/browse/FLINK-1926 -> all subtasks done;

https://issues.apache.org/jira/browse/FLINK-2023 -> does not block Scala Graph 
API;
https://issues.apache.org/jira/browse/FLINK-2032 -> all subtasks done;
https://issues.apache.org/jira/browse/FLINK-2108 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-2309 -> maybe it's worth to merge 
with https://issues.apache.org/jira/browse/FLINK-2316 ? ;
https://issues.apache.org/jira/browse/FLINK-3109 -> its PR is stuck;
https://issues.apache.org/jira/browse/FLINK-3154 -> must be addressed as part 
of a bigger initiative;
https://issues.apache.org/jira/browse/FLINK-3297 -> solved as third party lib;

  was:
must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;

[jira] [Updated] (FLINK-5384) clean up jira issues

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-5384:
-
Description: 
must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-1166 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1946 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2119 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2157 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2220 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2319 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2363 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2399 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2428 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2472 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2480 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2609 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2823 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3155 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3331 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3964 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4653 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4717 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4829 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-5016 -> closed on github;

should be discussed before:
https://issues.apache.org/jira/browse/FLINK-1055 ;
https://issues.apache.org/jira/browse/FLINK-1098 -> create other issue to add a 
colectEach();
https://issues.apache.org/jira/browse/FLINK-1100 ;
https://issues.apache.org/jira/browse/FLINK-1146 ;
https://issues.apache.org/jira/browse/FLINK-1335 -> maybe rename?;
https://issues.apache.org/jira/browse/FLINK-1439 ;
https://issues.apache.org/jira/browse/FLINK-1447 -> firefox problem?;
https://issues.apache.org/jira/browse/FLINK-1521 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1538 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1541 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1723 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-1814 ;
https://issues.apache.org/jira/browse/FLINK-1858 -> is QA bot deleted?;
https://issues.apache.org/jira/browse/FLINK-1926 -> all subtasks done;

https://issues.apache.org/jira/browse/FLINK-2023 -> does not block Scala Graph 
API;
https://issues.apache.org/jira/browse/FLINK-2032 -> all subtasks done;
https://issues.apache.org/jira/browse/FLINK-2108 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-2309 -> maybe it's worth to merge 
with https://issues.apache.org/jira/browse/FLINK-2316 ? ;
https://issues.apache.org/jira/browse/FLINK-3109 -> its PR is stuck;
https://issues.apache.org/jira/browse/FLINK-3154 -> must be addressed as part 
of a bigger initiative;
https://issues.apache.org/jira/browse/FLINK-3297 -> solved as third party lib;

  was:
must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;

[jira] [Commented] (FLINK-5385) Add a help function to create Row

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769222#comment-15769222
 ] 

ASF GitHub Bot commented on FLINK-5385:
---

GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/3038

[FLINK-5385] [core] Add a help function to create Row

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

Currently, creating a Row is tedious, for example:

```java
Row row = new Row(3);
row.setField(0, "hello");
row.setField(1, true);
row.setField(2, 1L);
```

This PR introduces an `of` helper method to create a Row, such as:

```java
Row row = Row.of("hello", true, 1L);
```
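
A minimal sketch of what such a varargs helper could look like (an assumption for illustration; the PR's actual code may differ):

```java
// Hypothetical sketch of the proposed helper, not the PR's verbatim code:
public static Row of(Object... values) {
    Row row = new Row(values.length);           // arity = number of values
    for (int i = 0; i < values.length; i++) {
        row.setField(i, values[i]);             // copy each value into its slot
    }
    return row;
}
```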

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink rowof-FLINK-5385

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3038.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3038


commit 8213c4722b83e5e6573cb7d84940a09d0fe1211f
Author: Jark Wu 
Date:   2016-12-22T06:23:57Z

[FLINK-5385] [core] Add a help function to create Row




> Add a help function to create Row
> -
>
> Key: FLINK-5385
> URL: https://issues.apache.org/jira/browse/FLINK-5385
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, creating a Row is tedious, for example:
> {code:java}
> Row row = new Row(3);
> row.setField(0, "hello");
> row.setField(1, true);
> row.setField(2, 1L);
> {code}
> It would be nice to have a helper method {{of}} to create a Row, such as:
> {code:java}
> Row row = Row.of("hello", true, 1L);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3038: [FLINK-5385] [core] Add a help function to create ...

2016-12-21 Thread wuchong
GitHub user wuchong opened a pull request:

https://github.com/apache/flink/pull/3038

[FLINK-5385] [core] Add a help function to create Row

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

Currently, creating a Row is tedious, for example:

```java
Row row = new Row(3);
row.setField(0, "hello");
row.setField(1, true);
row.setField(2, 1L);
```

This PR introduces an `of` helper method to create a Row, such as:

```java
Row row = Row.of("hello", true, 1L);
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wuchong/flink rowof-FLINK-5385

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3038.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3038


commit 8213c4722b83e5e6573cb7d84940a09d0fe1211f
Author: Jark Wu 
Date:   2016-12-22T06:23:57Z

[FLINK-5385] [core] Add a help function to create Row




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-5385) Add a help function to create Row

2016-12-21 Thread Jark Wu (JIRA)
Jark Wu created FLINK-5385:
--

 Summary: Add a help function to create Row
 Key: FLINK-5385
 URL: https://issues.apache.org/jira/browse/FLINK-5385
 Project: Flink
  Issue Type: Improvement
  Components: Core
Reporter: Jark Wu
Assignee: Jark Wu


Currently, creating a Row is tedious, for example:

{code:java}
Row row = new Row(3);
row.setField(0, "hello");
row.setField(1, true);
row.setField(2, 1L);
{code}

It would be nice to have a helper method {{of}} to create a Row, such as:

{code:java}
Row row = Row.of("hello", true, 1L);
{code}






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769103#comment-15769103
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/3020
  
Thank you @fhueske, I think your points are very good. I have changed my 
code according to your suggestions.


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but 
> is forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).
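
For context, a hedged usage sketch based on the List-based constructor shown in the PR diff (the field names are illustrative):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

List<TypeInformation<?>> types = new ArrayList<>();
types.add(BasicTypeInfo.STRING_TYPE_INFO);
types.add(BasicTypeInfo.INT_TYPE_INFO);

// Fields can then be referenced as "name" and "age" instead of f0, f1.
RowTypeInfo rowType = new RowTypeInfo(types, Arrays.asList("name", "age"));
{code}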



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3020: [FLINK-5348] [core] Support custom field names for RowTyp...

2016-12-21 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/3020
  
Thank you @fhueske, I think your points are very good. I have changed my 
code according to your suggestions.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2439: [FLINK-4450] update storm version to 1.0.0 in flink-storm a...

2016-12-21 Thread liuyuzhong
Github user liuyuzhong commented on the issue:

https://github.com/apache/flink/pull/2439
  
@StephanEwen I used a new GitHub account to make a new pull request, #3037. 
Please help me review it. Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3037: Flink-4450 update storm version to 1.0

2016-12-21 Thread liuyuzhong7
GitHub user liuyuzhong7 opened a pull request:

https://github.com/apache/flink/pull/3037

Flink-4450 update storm version to 1.0

@StephanEwen @mxm 
The old pull request #2439 was wrong, so I am using this account to open a new 
pull request.

Please review this pull request.

Storm example Test:

![image](https://cloud.githubusercontent.com/assets/24708126/21415644/33789972-c847-11e6-9846-16bc623dce21.png)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liuyuzhong7/flink FLINK-4450

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3037.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3037


commit 02596f71f8c1ac77b0ade49113a80b07f15cd337
Author: yuzhongliu 
Date:   2016-12-22T03:36:37Z

#FLINK-4450 update storm version to 1.0

commit 2b15f5d4234c074c8764768a4f329363de077c09
Author: liuyuzhong7 
Date:   2016-12-22T04:52:28Z

#FLINK-4450 format pom.xml

commit 77f228041b5652dc1b2015e6ef4bcda3313c1ae6
Author: liuyuzhong7 
Date:   2016-12-22T05:09:49Z

#FLINK-4450 fix mvn clean verify error




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93566506
  
--- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);
+                } else {
+                    result.add(new FlatFieldDescriptor(offset, fieldType));
+                }
+            }

[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769075#comment-15769075
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93566506
  
--- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>)

[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769058#comment-15769058
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93566058
  
--- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
--- End diff --

It is almost entirely ported from `CaseClassTypeInfo.getFlatFields()`, except for the 
field indexing: `CaseClassTypeInfo` is 1-based, but `RowTypeInfo` is 0-based.
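
A small illustration of the indexing difference (hypothetical snippet; the variable name is illustrative):

{code:java}
// RowTypeInfo field references are 0-based: index 0 is the first Row field.
TypeInformation<?> first = rowType.getTypeAt(0);
// Scala tuples/case classes use 1-based accessors (_1 is the first field),
// which is why CaseClassTypeInfo's flat-field logic counts from 1.
{code}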


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but 
> is forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93566058
  
--- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
--- End diff --

It is almost entirely ported from `CaseClassTypeInfo.getFlatFields()`, except for the 
field indexing: `CaseClassTypeInfo` is 1-based, but `RowTypeInfo` is 0-based.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4450) update storm version to 1.0.0

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768746#comment-15768746
 ] 

ASF GitHub Bot commented on FLINK-4450:
---

Github user liuyuzhong commented on the issue:

https://github.com/apache/flink/pull/2439
  
@StephanEwen Sorry, I can't continue with this pull request. I will use 
another GitHub account to open a new pull request.


> update storm version to 1.0.0
> -
>
> Key: FLINK-4450
> URL: https://issues.apache.org/jira/browse/FLINK-4450
> Project: Flink
>  Issue Type: Improvement
>  Components: flink-contrib
>Reporter: yuzhongliu
> Fix For: 2.0.0
>
>
> The Storm package path was changed in the new version.
> Old version package:
> backtype.storm.*
> New version package:
> org.apache.storm.*
> Shall we update the flink/flink-storm code to the new Storm version?
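
For illustration, the rename amounts to an import change like the following (TopologyBuilder is picked as a representative class, not taken from the Flink code base):

{code:java}
// Storm 0.9.x:
import backtype.storm.topology.TopologyBuilder;
// Storm 1.0.x:
import org.apache.storm.topology.TopologyBuilder;
{code}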



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2439: [FLINK-4450] update storm version to 1.0.0 in flink-storm a...

2016-12-21 Thread liuyuzhong
Github user liuyuzhong commented on the issue:

https://github.com/apache/flink/pull/2439
  
@StephanEwen Sorry, I can't continue with this pull request. I will use 
another GitHub account to open a new pull request.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-5175) StreamExecutionEnvironment's set function return `this` instead of void

2016-12-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5175.

Resolution: Won't Fix

> StreamExecutionEnvironment's set function return `this` instead of void
> ---
>
> Key: FLINK-5175
> URL: https://issues.apache.org/jira/browse/FLINK-5175
> Project: Flink
>  Issue Type: Sub-task
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 2.0.0
>
>
> From FLINK-5167. For example:
> public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
> change to:
> public StreamExecutionEnvironment setNumberOfExecutionRetries(int numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }
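
A minimal sketch of the fluent chaining the proposal would enable, assuming the changed signatures (the second setter already returns the environment in the current API):

{code:java}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Compiles only once setNumberOfExecutionRetries returns `this` instead of void:
env.setNumberOfExecutionRetries(3)
   .setBufferTimeout(100);
{code}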



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5370:
-
Description: 
mvn clean package
head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
on windows:

Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/another_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0



head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
on osx:

Results :

Failed tests:
  BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at address 0.0.0.0/0.0.0.0:63065
  BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
  ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't assign requested address
Tests in error:
  CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign requ...
  CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign requested a...
  ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't assig...
  ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign requ...
  NettyConnectionManagerTest.testManualConfiguration » Bind Can't assign request...
  NettyConnectionManagerTest.testMatchingNumberOfArenasAndThreadsAsDefault » Bind
  NettyServerLowAndHighWatermarkTest.testLowAndHighWatermarks » Bind Can't assig...
  ServerTransportErrorHandlingTest.testRemoteClose » Bind Can't assign requested...
  QueryableStateClientTest.testIntegrationWithKvStateServer » Bind Can't assign ...
  KvStateClientTest.testClientServerIntegration » Bind Can't assign requested ad...
  KvStateClientTest.testConcurrentQueries » Bind Can't assign requested address
  KvStateClientTest.testFailureClosesChannel » Bind Can't assign requested addre...
  KvStateClientTest.testServerClosesChannel » Bind Can't assign requested addres...
  KvStateClientTest.testSimpleRequests » Bind Can't assign requested address
  KvStateServerTest.testSimpleRequest » Bind Can't assign requested address

Tests run: 1266, Failures: 3, Errors: 15, Skipped: 3


  was:
mvn clean package

head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd


Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/another_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0


> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> on windows:
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/another_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0
> 
> head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
> on osx:
> Results :
> Failed tests:
>   BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at address 0.0.0.0/0.0.0.0:63065
>   BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
>   ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't assign requested address
> Tests in error:
>   CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign requ...
>   CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign requested a...
>   ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't assig...
>   ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign requ...
>

[jira] [Commented] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins

2016-12-21 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767649#comment-15767649
 ] 

Fabian Hueske commented on FLINK-5256:
--

The feature can be tested with the following test case:

{code}
@Test
def testSingleRowOuterJoin(): Unit = {

  val env = ExecutionEnvironment.getExecutionEnvironment
  val tEnv = TableEnvironment.getTableEnvironment(env, config)

  val sqlQuery =
    "SELECT a, cnt FROM t1 LEFT JOIN (SELECT COUNT(*) AS cnt FROM t2) AS x ON a > cnt"

  val ds1 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv).as('a, 'b, 'c, 'd, 'e)
  val ds2 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv)
  tEnv.registerTable("t1", ds1)
  tEnv.registerTable("t2", ds2)

  val result = tEnv.sql(sqlQuery)

  val expected = Seq(
    "1,null",
    "2,null", "2,null",
    "3,null", "3,null", "3,null",
    "4,3", "4,3", "4,3", "4,3",
    "5,3", "5,3", "5,3", "5,3", "5,3").mkString("\n")

  val results = result.toDataSet[Row].collect()
  TestBaseUtils.compareResultAsText(results.asJava, expected)
}
{code}

> Extend DataSetSingleRowJoin to support Left and Right joins
> ---
>
> Key: FLINK-5256
> URL: https://issues.apache.org/jira/browse/FLINK-5256
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Anton Mushin
>
> The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary 
> inner joins where one input is a single row.
> I found that Calcite translates certain subqueries into non-equi left and 
> right joins with single input. These cases can be handled if the  
> {{DataSetSingleRowJoin}} is extended to support outer joins on the 
> non-single-row input, i.e., left joins if the right side is single input and 
> vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5384) clean up jira issues

2016-12-21 Thread Anton Solovev (JIRA)
Anton Solovev created FLINK-5384:


 Summary: clean up jira issues
 Key: FLINK-5384
 URL: https://issues.apache.org/jira/browse/FLINK-5384
 Project: Flink
  Issue Type: Improvement
Reporter: Anton Solovev
Assignee: Anton Solovev
Priority: Minor


must be closed:
https://issues.apache.org/jira/browse/FLINK-37 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-87 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-481 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-605 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-639 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-650 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-735 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-456 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-788 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-796 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-805 -> from stratosphere ;
https://issues.apache.org/jira/browse/FLINK-867 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-879 -> from stratosphere;
https://issues.apache.org/jira/browse/FLINK-1166 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1946 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2119 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2157 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2220 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2319 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2363 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2399 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2428 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2472 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2480 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2609 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-2823 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3155 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3331 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-3964 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4653 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4717 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-4829 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-5016 -> closed on github;

should be discussed before:
https://issues.apache.org/jira/browse/FLINK-1055 ;
https://issues.apache.org/jira/browse/FLINK-1098 -> create other issue to add a 
colectEach();
https://issues.apache.org/jira/browse/FLINK-1100 ;
https://issues.apache.org/jira/browse/FLINK-1146 ;
https://issues.apache.org/jira/browse/FLINK-1335 -> maybe rename?;
https://issues.apache.org/jira/browse/FLINK-1439 ;
https://issues.apache.org/jira/browse/FLINK-1447 -> firefox problem?;
https://issues.apache.org/jira/browse/FLINK-1521 -> closed on github;
https://issues.apache.org/jira/browse/FLINK-1538 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1541 -> gsoc2015, is it solved?;
https://issues.apache.org/jira/browse/FLINK-1723 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-1814 ;
https://issues.apache.org/jira/browse/FLINK-1858 -> is QA bot deleted?;
https://issues.apache.org/jira/browse/FLINK-1926 -> all subtasks done;

https://issues.apache.org/jira/browse/FLINK-2023 -> does not block Scala Graph 
API;
https://issues.apache.org/jira/browse/FLINK-2032 -> all subtasks done;
https://issues.apache.org/jira/browse/FLINK-2108 -> almost done? ;
https://issues.apache.org/jira/browse/FLINK-2309 -> maybe it's worth to merge 
with https://issues.apache.org/jira/browse/FLINK-2316 ? ;
https://issues.apache.org/jira/browse/FLINK-3109 -> its PR is stuck;
https://issues.apache.org/jira/browse/FLINK-3154 -> must be addressed as part 
of a bigger initiative;
https://issues.apache.org/jira/browse/FLINK-3297 -> solved as third party lib;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3033: [FLINK-5256] Extend DataSetSingleRowJoin to suppor...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3033#discussion_r93469241
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetJoinRule.scala ---
@@ -40,10 +41,17 @@ class DataSetJoinRule
     val joinInfo = join.analyzeCondition
 
     // joins require an equi-condition or a conjunctive predicate with at least one equi-condition
-    !joinInfo.pairs().isEmpty
+    !joinInfo.pairs().isEmpty || isOuterJoin(join)
--- End diff --

The `DataSetJoinRule` should not be modified for this issue.
Can you revert the changes on this class?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767567#comment-15767567
 ] 

ASF GitHub Bot commented on FLINK-5256:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3033#discussion_r93472610
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetSingleRowJoinRule.scala ---
@@ -35,16 +35,7 @@ class DataSetSingleRowJoinRule
 
   override def matches(call: RelOptRuleCall): Boolean = {
     val join = call.rel(0).asInstanceOf[LogicalJoin]
-
-    if (isInnerJoin(join)) {
-      isSingleRow(join.getRight) || isSingleRow(join.getLeft)
-    } else {
-      false
-    }
-  }
-
-  private def isInnerJoin(join: LogicalJoin) = {
-    join.getJoinType == JoinRelType.INNER
+    isSingleRow(join.getRight) || isSingleRow(join.getLeft)
--- End diff --

The check should be 
```
join.getJoinType match {
  case JoinRelType.INNER =>
isSingleRow(join.getRight) || isSingleRow(join.getLeft)
  case JoinRelType.LEFT => 
isSingleRow(join.getRight)
  case JoinRelType.RIGHT => 
isSingleRow(join.getLeft)
  case _ => false
}
```

We also need to pass the `JoinRelType` to the `DataSetSingleRowJoin`.


> Extend DataSetSingleRowJoin to support Left and Right joins
> ---
>
> Key: FLINK-5256
> URL: https://issues.apache.org/jira/browse/FLINK-5256
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Anton Mushin
>
> The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary 
> inner joins where one input is a single row.
> I found that Calcite translates certain subqueries into non-equi left and 
> right joins with single input. These cases can be handled if the  
> {{DataSetSingleRowJoin}} is extended to support outer joins on the 
> non-single-row input, i.e., left joins if the right side is single input and 
> vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5256) Extend DataSetSingleRowJoin to support Left and Right joins

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767566#comment-15767566
 ] 

ASF GitHub Bot commented on FLINK-5256:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3033#discussion_r93469241
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetJoinRule.scala ---
@@ -40,10 +41,17 @@ class DataSetJoinRule
     val joinInfo = join.analyzeCondition
 
     // joins require an equi-condition or a conjunctive predicate with at least one equi-condition
-    !joinInfo.pairs().isEmpty
+    !joinInfo.pairs().isEmpty || isOuterJoin(join)
--- End diff --

The `DataSetJoinRule` should not be modified for this issue.
Can you revert the changes on this class?


> Extend DataSetSingleRowJoin to support Left and Right joins
> ---
>
> Key: FLINK-5256
> URL: https://issues.apache.org/jira/browse/FLINK-5256
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Anton Mushin
>
> The {{DataSetSingleRowJoin}} is a broadcast-map join that supports arbitrary 
> inner joins where one input is a single row.
> I found that Calcite translates certain subqueries into non-equi left and 
> right joins with single input. These cases can be handled if the  
> {{DataSetSingleRowJoin}} is extended to support outer joins on the 
> non-single-row input, i.e., left joins if the right side is single input and 
> vice versa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3033: [FLINK-5256] Extend DataSetSingleRowJoin to suppor...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3033#discussion_r93472610
  
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/dataSet/DataSetSingleRowJoinRule.scala ---
@@ -35,16 +35,7 @@ class DataSetSingleRowJoinRule
 
   override def matches(call: RelOptRuleCall): Boolean = {
     val join = call.rel(0).asInstanceOf[LogicalJoin]
-
-    if (isInnerJoin(join)) {
-      isSingleRow(join.getRight) || isSingleRow(join.getLeft)
-    } else {
-      false
-    }
-  }
-
-  private def isInnerJoin(join: LogicalJoin) = {
-    join.getJoinType == JoinRelType.INNER
+    isSingleRow(join.getRight) || isSingleRow(join.getLeft)
--- End diff --

The check should be 
```
join.getJoinType match {
  case JoinRelType.INNER =>
isSingleRow(join.getRight) || isSingleRow(join.getLeft)
  case JoinRelType.LEFT => 
isSingleRow(join.getRight)
  case JoinRelType.RIGHT => 
isSingleRow(join.getLeft)
  case _ => false
}
```

We also need to pass the `JoinRelType` to the `DataSetSingleRowJoin`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-3150) Make YARN container invocation configurable

2016-12-21 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reassigned FLINK-3150:
--

Assignee: Nico Kruber  (was: Robert Metzger)

> Make YARN container invocation configurable
> ---
>
> Key: FLINK-3150
> URL: https://issues.apache.org/jira/browse/FLINK-3150
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>  Labels: qa
>
> Currently, the JVM invocation call of YARN containers is hardcoded.
> With this change, I would like to make the call configurable, using a string 
> such as
> "%java% %memopts% %jvmopts% ..."
> Also, we should respect the {{java.env.home}} if it is set.
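
A hypothetical flink-conf.yaml entry illustrating the idea (the key name and exact placeholder set are assumptions, not a final design):

{code}
yarn.container-start-command-template: "%java% %memopts% %jvmopts% %class% %args%"
{code}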



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4339) Implement Slot Pool Core

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-4339.


> Implement Slot Pool Core
> 
>
> Key: FLINK-4339
> URL: https://issues.apache.org/jira/browse/FLINK-4339
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
>Assignee: Kurt Young
>
> Implements the core slot structures and behavior of the {{SlotPool}}:
>   - pool of available slots
>   - request slots and response if slot is available in pool
>   - return / deallocate slots
> Detailed design here: 
> https://docs.google.com/document/d/1y4D-0KGiMNDFYOLRkJy-C04nl8fwJNdm9hoUfxce6zY/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-4987) Harden slot pool logic

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev closed FLINK-4987.


> Harden slot pool logic
> --
>
> Key: FLINK-4987
> URL: https://issues.apache.org/jira/browse/FLINK-4987
> Project: Flink
>  Issue Type: Sub-task
>  Components: Cluster Management
>Reporter: Till Rohrmann
>Assignee: Stephan Ewen
> Fix For: 1.2.0
>
>
> The {{SlotPool}} implementation can be further hardened.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93465353
  
--- Diff: flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java ---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);
+                } else {
+                    result.add(new FlatFieldDescriptor(offset, fieldType));
+                }
+            }

[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405538
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/RowTypeInfoTest.java
 ---
@@ -18,12 +18,86 @@
 package org.apache.flink.api.java.typeutils;
 
 import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class RowTypeInfoTest {
--- End diff --

Add tests for `getFlatFields()`?
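
For illustration, a minimal sketch of such a test (hypothetical field names and expected flat positions, assuming nested wildcard expressions are handled as for the tuple types; not part of the PR):

{code}
@Test
public void testGetFlatFields() {
    RowTypeInfo typeInfo = new RowTypeInfo(typeList, Arrays.asList("intField", "rowField", "stringField"));
    List<FlatFieldDescriptor> result = new ArrayList<>();
    // "rowField.*" should expand to the two flat fields nested in the inner row
    typeInfo.getFlatFields("rowField.*", 0, result);
    assertEquals(2, result.size());
    assertEquals(1, result.get(0).getPosition());
    assertEquals(2, result.get(1).getPosition());
}
{code}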


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405238
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);
+                } else {
+                    result.add(new FlatFieldDescriptor(offset, fieldType));
+                }
+            }

[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93466347
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);
+                } else {
+                    result.add(new FlatFieldDescriptor(offset, fieldType));
+                }
+            }

[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93404965
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/RowTypeInfoTest.java
 ---
@@ -18,12 +18,86 @@
 package org.apache.flink.api.java.typeutils;
 
 import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class RowTypeInfoTest {
+    private static List<TypeInformation<?>> typeList = new ArrayList<>();
+
+
+   @BeforeClass
+   public static void setUp() throws Exception {
+   typeList.add(BasicTypeInfo.INT_TYPE_INFO);
+   typeList.add(new RowTypeInfo(
+   BasicTypeInfo.SHORT_TYPE_INFO,
+   BasicTypeInfo.BIG_DEC_TYPE_INFO));
+   typeList.add(BasicTypeInfo.STRING_TYPE_INFO);
+   }
+
+
+   @Test
+   public void testDuplicateCustomFieldNames() {
--- End diff --

Split test and use `@Test(expected = IllegalArgumentException.class)`
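
A minimal sketch of the split (hypothetical method names, reusing the `typeList` fixture; not part of the PR):

{code}
@Test(expected = IllegalArgumentException.class)
public void testDuplicateCustomFieldNames() {
    // three names for three types, but "f1" appears twice -> uniqueness check fires
    new RowTypeInfo(typeList, Arrays.asList("f1", "f1", "f2"));
}

@Test(expected = IllegalArgumentException.class)
public void testWrongNumberOfFieldNames() {
    // three types but only two names -> size check fires
    new RowTypeInfo(typeList, Arrays.asList("f1", "f2"));
}
{code}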


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767434#comment-15767434
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93466144
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
--- End diff --

We could use `getFieldIndex()` here to translate the field name into an 
index and use a common code path with the int field index to compute offset and 
fetch type.
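
As a rough sketch of that idea (assuming {{getFieldIndex(String)}} returns -1 for unknown names, as declared on {{CompositeType}}):

{code}
int fieldIndex;
if (intFieldMatcher.matches()) {
    fieldIndex = Integer.valueOf(field);
} else {
    fieldIndex = getFieldIndex(field);
    if (fieldIndex < 0) {
        throw new InvalidFieldReferenceException(
            "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
    }
}
// common code path: compute the offset and fetch the type once
for (int i = 0; i < fieldIndex; i++) {
    offset += this.getTypeAt(i).getTotalFields();
}
TypeInformation<?> fieldType = this.getTypeAt(fieldIndex);
{code}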


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405114
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
--- End diff --

Can we use arrays instead of lists to be consistent with the other 
constructor?
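
For instance, a sketch mirroring the existing varargs constructor (not part of the PR):

{code}
public RowTypeInfo(TypeInformation<?>[] types, String[] fieldNames) {
    super(Row.class, types);
    checkNotNull(fieldNames, "FieldNames should not be null.");
    checkArgument(
        types.length == fieldNames.length,
        "Number of field types and names is different.");
    checkArgument(
        fieldNames.length == new HashSet<>(Arrays.asList(fieldNames)).size(),
        "Field names are not unique.");
    this.fieldNames = Arrays.copyOf(fieldNames, fieldNames.length);
}
{code}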


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767437#comment-15767437
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93404965
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/RowTypeInfoTest.java
 ---
@@ -18,12 +18,86 @@
 package org.apache.flink.api.java.typeutils;
 
 import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class RowTypeInfoTest {
+    private static List<TypeInformation<?>> typeList = new ArrayList<>();
+
+
+   @BeforeClass
+   public static void setUp() throws Exception {
+   typeList.add(BasicTypeInfo.INT_TYPE_INFO);
+   typeList.add(new RowTypeInfo(
+   BasicTypeInfo.SHORT_TYPE_INFO,
+   BasicTypeInfo.BIG_DEC_TYPE_INFO));
+   typeList.add(BasicTypeInfo.STRING_TYPE_INFO);
+   }
+
+
+   @Test
+   public void testDuplicateCustomFieldNames() {
--- End diff --

Split test and use `@Test(expected = IllegalArgumentException.class)`


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767432#comment-15767432
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405183
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
--- End diff --

This is the same logic as in `CaseClassTypeInfo.getFlatFields()`, only 
ported to Java, right?


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767431#comment-15767431
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93465353
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);

[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767435#comment-15767435
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93465385
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
--- End diff --

`TupleTypeBase.getFieldAt()` does check for index bounds as well and throws 
an `IndexOutOfBoundsException`.
So we could simplify this a bit here.
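
I.e., roughly (a sketch; the bounds error would then surface as an {{IndexOutOfBoundsException}} from {{getTypeAt()}} instead of the custom message):

{code}
// rely on getTypeAt() to reject out-of-range indexes
int fieldIndex = Integer.valueOf(field);
TypeInformation<?> fieldType = this.getTypeAt(fieldIndex);
for (int i = 0; i < fieldIndex; i++) {
    offset += this.getTypeAt(i).getTotalFields();
}
{code}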


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767433#comment-15767433
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405538
  
--- Diff: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/RowTypeInfoTest.java
 ---
@@ -18,12 +18,86 @@
 package org.apache.flink.api.java.typeutils;
 
 import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.CompositeType;
+import org.junit.BeforeClass;
 import org.junit.Test;
 
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class RowTypeInfoTest {
--- End diff --

Add tests for `getFlatFields()`?


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767438#comment-15767438
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405114
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
--- End diff --

Can we use arrays instead of lists to be consistent with the other 
constructor?


> Support custom field names for RowTypeInfo
> --
>
> Key: FLINK-5348
> URL: https://issues.apache.org/jira/browse/FLINK-5348
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> Currently, the RowTypeInfo doesn't support optional custom field names, but is 
> forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
> support custom names, which would benefit some cases (e.g. FLINK-5280).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93466144
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
--- End diff --

We could use `getFieldIndex()` here to translate the field name into an 
index and use a common code path with the int field index to compute offset and 
fetch type.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767436#comment-15767436
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93466347
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);

[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93465385
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
--- End diff --

`TupleTypeBase.getFieldAt()` does check for index bounds as well and throws 
an `IndexOutOfBoundsException`.
So we could simplify this a bit here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3020: [FLINK-5348] [core] Support custom field names for...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405183
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
--- End diff --

This is the same logic as in `CaseClassTypeInfo.getFlatFields()`, only 
ported to Java, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767430#comment-15767430
 ] 

ASF GitHub Bot commented on FLINK-5348:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/3020#discussion_r93405238
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/java/typeutils/RowTypeInfo.java 
---
@@ -54,6 +76,152 @@ public RowTypeInfo(TypeInformation<?>... types) {
         }
     }
 
+    public RowTypeInfo(List<TypeInformation<?>> types, List<String> fieldNames) {
+        super(Row.class, types == null ? null : types.toArray(new TypeInformation<?>[types.size()]));
+        checkNotNull(fieldNames, "FieldNames should not be null.");
+        checkArgument(
+            types.size() == fieldNames.size(),
+            "Number of field types and names is different.");
+        checkArgument(
+            types.size() == new HashSet<>(fieldNames).size(),
+            "Field names are not unique.");
+
+        this.fieldNames = new String[fieldNames.size()];
+
+        for (int i = 0; i < fieldNames.size(); i++) {
+            this.fieldNames[i] = fieldNames.get(i);
+        }
+    }
+
+    @Override
+    public void getFlatFields(String fieldExpression, int offset, List<FlatFieldDescriptor> result) {
+        Matcher matcher = PATTERN_NESTED_FIELDS_WILDCARD.matcher(fieldExpression);
+
+        if (!matcher.matches()) {
+            throw new InvalidFieldReferenceException(
+                "Invalid tuple field reference \"" + fieldExpression + "\".");
+        }
+
+        String field = matcher.group(0);
+
+        if ((field.equals(ExpressionKeys.SELECT_ALL_CHAR)) ||
+            (field.equals(ExpressionKeys.SELECT_ALL_CHAR_SCALA))) {
+            // handle select all
+            int keyPosition = 0;
+            for (TypeInformation<?> fType : types) {
+                if (fType instanceof CompositeType) {
+                    CompositeType<?> cType = (CompositeType<?>) fType;
+                    cType.getFlatFields(ExpressionKeys.SELECT_ALL_CHAR, offset + keyPosition, result);
+                    keyPosition += cType.getTotalFields() - 1;
+                } else {
+                    result.add(new FlatFieldDescriptor(offset + keyPosition, fType));
+                }
+                keyPosition++;
+            }
+        } else {
+            field = matcher.group(1);
+
+            Matcher intFieldMatcher = PATTERN_INT_FIELD.matcher(field);
+            TypeInformation<?> fieldType = null;
+            if (intFieldMatcher.matches()) {
+                // field expression is an integer
+                int fieldIndex = Integer.valueOf(field);
+                if (fieldIndex > this.getArity()) {
+                    throw new InvalidFieldReferenceException(
+                        "Row field expression \"" + field + "\" out of bounds of " + this.toString() + ".");
+                }
+                for (int i = 0; i < fieldIndex; i++) {
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                fieldType = this.getTypeAt(fieldIndex);
+            } else {
+                for (int i = 0; i < this.fieldNames.length; i++) {
+                    if (fieldNames[i].equals(field)) {
+                        // found field
+                        fieldType = this.getTypeAt(i);
+                        break;
+                    }
+                    offset += this.getTypeAt(i).getTotalFields();
+                }
+                if (fieldType == null) {
+                    throw new InvalidFieldReferenceException(
+                        "Unable to find field \"" + field + "\" in type " + this.toString() + ".");
+                }
+            }
+
+            String tail = matcher.group(3);
+
+            if (tail == null) {
+                // expression hasn't nested field
+                if (fieldType instanceof CompositeType) {
+                    ((CompositeType<?>) fieldType).getFlatFields("*", offset, result);

[jira] [Created] (FLINK-5383) TaskManager fails with SIGBUS when loading RocksDB

2016-12-21 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5383:
-

 Summary: TaskManager fails with SIGBUS when loading RocksDB
 Key: FLINK-5383
 URL: https://issues.apache.org/jira/browse/FLINK-5383
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Robert Metzger


While trying out Flink 1.2, my TaskManager died with the following error while 
deploying a job:

{code}
2016-12-21 15:57:50,080 INFO  org.apache.flink.runtime.taskmanager.Task - Map -> Sink: Unnamed (15/16) (50f527e4445479fb1fc9f34394d86d2f) switched from DEPLOYING to RUNNING.
2016-12-21 15:57:50,081 INFO  org.apache.flink.runtime.taskmanager.Task - Map -> Sink: Unnamed (16/16) (b4b3d3340de587d729fe83d65eac3e10) switched from DEPLOYING to RUNNING.
2016-12-21 15:57:50,081 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://nameservice1/shared/checkpoint-dir-rocks}.
2016-12-21 15:57:50,081 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask - Using user-defined state backend: RocksDB State Backend {isInitialized=false, configuredDbBasePaths=null, initializedDbBasePaths=null, checkpointStreamBackend=File State Backend @ hdfs://nameservice1/shared/checkpoint-dir-rocks}.
2016-12-21 15:57:50,223 INFO  org.apache.flink.contrib.streaming.state.RocksDBStateBackend - Attempting to load RocksDB native library and store it at '/yarn/nm/usercache/longrunning/appcache/application_1482156101125_0016'

LogType:taskmanager.out
Log Upload Time:Wed Dec 21 16:00:35 + 2016
LogLength:959
Log Contents:
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x7fe745fd596a, pid=7414, tid=140630801725184
#
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 1.7.0_67-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# C  [ld-linux-x86-64.so.2+0x1a96a]  realloc+0x2bfa
#
{code}

the error report file contained the following frames:

{code}
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  java.lang.ClassLoader$NativeLibrary.load(Ljava/lang/String;)V+0
j  java.lang.ClassLoader.loadLibrary1(Ljava/lang/Class;Ljava/io/File;)Z+302
j  java.lang.ClassLoader.loadLibrary0(Ljava/lang/Class;Ljava/io/File;)Z+2
j  java.lang.ClassLoader.loadLibrary(Ljava/lang/Class;Ljava/lang/String;Z)V+48
j  java.lang.Runtime.load0(Ljava/lang/Class;Ljava/lang/String;)V+57
j  java.lang.System.load(Ljava/lang/String;)V+7
j  org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(Ljava/lang/String;)V+14
j  org.rocksdb.NativeLibraryLoader.loadLibrary(Ljava/lang/String;)V+22
j  org.apache.flink.contrib.streaming.state.RocksDBStateBackend.ensureRocksDBIsLoaded(Ljava/lang/String;)V+62
j  org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createKeyedStateBackend(Lorg/apache/flink/runtime/execution/Environment;Lorg/apache/flink/api/common/JobID;Ljava/lang/String;Lorg/apache/flink/api/common/typeutils/TypeSerializer;ILorg/apache/flink/runtime/state/KeyGroupRange;Lorg/apache/flink/runtime/query/TaskKvStateRegistry;)Lorg/apache/flink/runtime/state/AbstractKeyedStateBackend;+16
j  org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(Lorg/apache/flink/api/common/typeutils/TypeSerializer;ILorg/apache/flink/runtime/state/KeyGroupRange;)Lorg/apache/flink/runtime/state/AbstractKeyedStateBackend;+137
{code}

I saw this error only once so far. I'll report again if it happens more 
frequently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5382) Taskmanager log download button causes 404

2016-12-21 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5382:
-

 Summary: Taskmanager log download button causes 404
 Key: FLINK-5382
 URL: https://issues.apache.org/jira/browse/FLINK-5382
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.2.0
Reporter: Robert Metzger


The "download logs" button when viewing the TaskManager logs in the web UI 
leads to a 404 page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5381) Scrolling in some web interface pages doesn't work (taskmanager details, jobmanager config)

2016-12-21 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5381:
-

 Summary: Scrolling in some web interface pages doesn't work 
(taskmanager details, jobmanager config)
 Key: FLINK-5381
 URL: https://issues.apache.org/jira/browse/FLINK-5381
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Robert Metzger


It seems that scrolling in the web interface doesn't work anymore on some pages 
in the 1.2 release branch.

Example pages: 
- When you click the "JobManager" tab
- The TaskManager logs page





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1439) Enable all YARN tests on Travis

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-1439:
-
Description: (was: Currently blocked by 
https://issues.apache.org/jira/browse/YARN-3086)

> Enable all YARN tests on Travis
> ---
>
> Key: FLINK-1439
> URL: https://issues.apache.org/jira/browse/FLINK-1439
> Project: Flink
>  Issue Type: Improvement
>  Components: YARN
>Reporter: Robert Metzger
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767311#comment-15767311
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93437895
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsPlanTest.scala
 ---
@@ -0,0 +1,350 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.common.operators.Order
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.TableEnvironment
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases by tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767308#comment-15767308
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440307
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SortValidationTest.scala
 ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.scala.{ExecutionEnvironment, _}
+import org.apache.flink.api.table.{Row, TableEnvironment, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class SortValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases by tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767309#comment-15767309
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440355
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcValidationTest.scala
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{Row, TableEnvironment, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class CalcValidationTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.
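
A minimal sketch of what such a validation test could look like without the
base class and without the runner (class and test names here are illustrative,
not taken from the PR):

{code}
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.scala.util.CollectionDataSets
import org.apache.flink.api.table.{TableEnvironment, ValidationException}
import org.junit.Test

// Plain JUnit test: no TableProgramsTestBase, no @RunWith, no constructor
// parameters, so no Flink MiniCluster is started.
class CalcValidationTest {

  @Test(expected = classOf[ValidationException])
  def testSelectInvalidField(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // Referencing a non-existent field should already fail during validation,
    // before any job is executed.
    CollectionDataSets.getSmall3TupleDataSet(env)
      .toTable(tEnv, 'a, 'b, 'c)
      .select('a, 'foo)
  }
}
{code}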




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767300#comment-15767300
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93437942
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsValidationTest.scala
 ---
@@ -0,0 +1,148 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767307#comment-15767307
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93436751
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcPlanTest.scala
 ---
@@ -0,0 +1,394 @@
+package org.apache.flink.api.scala.batch.table
+
+import java.sql.{Date, Time, Timestamp}
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, TableProgramsTestBase}
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table._
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class CalcPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

`CalcPlanTest` should not extend a class.
`TableProgramsTestBase` starts a Flink MiniCluster, which is quite expensive
and only required for ITCases.




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767306#comment-15767306
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440247
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinPlanTest.scala
 ---
@@ -0,0 +1,283 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, TableProgramsTestBase}
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class JoinPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767297#comment-15767297
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438092
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinPlanTest.scala
 ---
@@ -0,0 +1,283 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, TableProgramsTestBase}
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767315#comment-15767315
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438204
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SortValidationTest.scala
 ---
@@ -0,0 +1,55 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.scala.{ExecutionEnvironment, _}
+import org.apache.flink.api.table.{Row, TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767303#comment-15767303
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440220
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CastingPlanTest.scala
 ---
@@ -0,0 +1,130 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.api.table.Types._
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class CastingPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767305#comment-15767305
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438663
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcITCase.scala
 ---
@@ -424,6 +425,19 @@ class CalcITCase(
 val results = t.toDataSet[Row].collect()
 TestBaseUtils.compareResultAsText(results.asJava, expected)
   }
+
--- End diff --

Please remove from this file the tests that have been moved to
`CalcValidationTest`.




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767301#comment-15767301
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93437996
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcValidationTest.scala
 ---
@@ -0,0 +1,111 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{Row, TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767314#comment-15767314
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93439887
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsPlanTest.scala
 ---
@@ -0,0 +1,350 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.common.operators.Order
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, TableProgramsTestBase}
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class AggregationsPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase` or any other class.
The class also does not need any constructor parameters.
You can create a `TableEnvironment` without a `TableConfig`:
`TableEnvironment.getTableEnvironment(env)`.
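
For illustration, a plan test along these lines could be as small as the
following sketch, which reuses the plan-comparison idea from the issue
description (class and test names are again illustrative, not from the PR):

{code}
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.scala.util.CollectionDataSets
import org.apache.flink.api.table.TableEnvironment
import org.junit.{Assert, Test}

// No base class and no constructor parameters; the TableEnvironment is
// created without a TableConfig via TableEnvironment.getTableEnvironment(env).
class AggregationsPlanTest {

  @Test
  def testGroupedAggregationPlansMatch(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    val t = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 'c)

    // The same query, once via Scala expressions and once via the Java
    // String API; only the logical plans are compared, no job is executed.
    val scalaT = t.groupBy('b).select('b, 'a.sum)
    val javaT = t.groupBy("b").select("b, a.sum")

    Assert.assertEquals("Logical plans do not match",
      scalaT.logicalPlan, javaT.logicalPlan)
  }
}
{code}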




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767310#comment-15767310
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440071
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsValidationTest.scala
 ---
@@ -0,0 +1,148 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class AggregationsValidationTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase` or any other class.
The class also does not need any constructor parameters.
You can create a `TableEnvironment` without a `TableConfig`:
`TableEnvironment.getTableEnvironment(env)`.






[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440307
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SortValidationTest.scala
 ---
@@ -0,0 +1,55 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.scala.{ExecutionEnvironment, _}
+import org.apache.flink.api.table.{Row, TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class SortValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767312#comment-15767312
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438060
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CastingPlanTest.scala
 ---
@@ -0,0 +1,130 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.api.table.Types._
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767298#comment-15767298
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438117
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinValidationTest.scala
 ---
@@ -0,0 +1,197 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, TableException, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767313#comment-15767313
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438154
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SetOperatorsValidationTest.scala
 ---
@@ -0,0 +1,128 @@
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation






[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440071
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsValidationTest.scala
 ---
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class AggregationsValidationTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase` or any other class. 
The class also does not need any constructor parameters.
You can also create a `TableEnvironment` without a `TableConfig`: 
`TableEnvironment.getTableEnvironment(env)`.
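
For illustration, a minimal sketch of such a standalone validation test; the test
body is hypothetical, using an aggregation on a String field as an example of an
expression that should fail validation:

{code}
// Hypothetical sketch: no base class, no constructor parameters, no @RunWith.
package org.apache.flink.api.scala.batch.table

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.table.{TableEnvironment, ValidationException}
import org.junit.Test

class AggregationsValidationTest {

  @Test(expected = classOf[ValidationException])
  def testNonNumericAggregation(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tEnv = TableEnvironment.getTableEnvironment(env)

    // summing a String field should be rejected during validation
    env.fromElements(("hello", 1)).toTable(tEnv, 'a, 'b)
      .select('a.sum)
  }
}
{code}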


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767299#comment-15767299
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93448930
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsITCase.scala
 ---
@@ -400,5 +342,23 @@ class AggregationsITCase(
 TestBaseUtils.compareResultAsText(results.asJava, expected)
   }
 
+  @Test
+  def testPojoGrouping() {
--- End diff --

This test is not testing a Table API aggregation or grouping. I think it 
can be removed, along with the `MyPojo` class.


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases with tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767316#comment-15767316
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93436582
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcPlanTest.scala
 ---
@@ -0,0 +1,394 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import java.sql.{Date, Time, Timestamp}
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table._
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases with tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767302#comment-15767302
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440266
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinValidationTest.scala
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, TableException, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class JoinValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases with tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767304#comment-15767304
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440286
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SetOperatorsValidationTest.scala
 ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class SetOperatorsValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases with tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440247
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinPlanTest.scala
 ---
@@ -0,0 +1,283 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table.TableEnvironment
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class JoinPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440286
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SetOperatorsValidationTest.scala
 ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class SetOperatorsValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438092
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinPlanTest.scala
 ---
@@ -0,0 +1,283 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table.TableEnvironment
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438204
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SortValidationTest.scala
 ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.scala.{ExecutionEnvironment, _}
+import org.apache.flink.api.table.{Row, TableEnvironment, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93448930
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsITCase.scala
 ---
@@ -400,5 +342,23 @@ class AggregationsITCase(
 TestBaseUtils.compareResultAsText(results.asJava, expected)
   }
 
+  @Test
+  def testPojoGrouping() {
--- End diff --

This test is not testing a Table API aggregation or grouping. I think it 
can be removed, along with the `MyPojo` class.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440266
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinValidationTest.scala
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, TableException, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class JoinValidationTest(
+  mode: TestExecutionMode,
+  configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438117
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/JoinValidationTest.scala
 ---
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, TableException, 
ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438154
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/SetOperatorsValidationTest.scala
 ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93437942
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsValidationTest.scala
 ---
@@ -0,0 +1,148 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.{TableEnvironment, ValidationException}
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93436582
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcPlanTest.scala
 ---
@@ -0,0 +1,394 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import java.sql.{Date, Time, Timestamp}
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table._
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93440220
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CastingPlanTest.scala
 ---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.TableProgramsTestBase
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.table.TableEnvironment
+import org.apache.flink.api.table.Types._
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class CastingPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

Do not extend `TableProgramsTestBase`. This is only necessary for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93436751
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcPlanTest.scala
 ---
@@ -0,0 +1,394 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import java.sql.{Date, Time, Timestamp}
+
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.expressions.Literal
+import org.apache.flink.api.table._
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
+class CalcPlanTest(
+mode: TestExecutionMode,
+configMode: TableConfigMode)
+  extends TableProgramsTestBase(mode, configMode) {
--- End diff --

`CalcPlanTest` should not extend a class. 
`TableProgramsTestBase` starts a Flink MiniCluster, which is quite expensive 
and only required for ITCases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93437895
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/AggregationsPlanTest.scala
 ---
@@ -0,0 +1,350 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.scala.batch.table
+
+import org.apache.flink.api.common.operators.Order
+import org.apache.flink.api.scala._
+import org.apache.flink.api.scala.batch.utils.{LogicalPlanFormatUtils, 
TableProgramsTestBase}
+import 
org.apache.flink.api.scala.batch.utils.TableProgramsTestBase.TableConfigMode
+import org.apache.flink.api.scala.table._
+import org.apache.flink.api.scala.util.CollectionDataSets
+import org.apache.flink.api.table.TableEnvironment
+import 
org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.junit._
+import org.junit.runner.RunWith
+import org.junit.runners.Parameterized
+
+@RunWith(classOf[Parameterized])
--- End diff --

Remove `@RunWith` annotation


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #2977: [FLINK-5084] Replace Java Table API integration te...

2016-12-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2977#discussion_r93438663
  
--- Diff: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/batch/table/CalcITCase.scala
 ---
@@ -424,6 +425,19 @@ class CalcITCase(
 val results = t.toDataSet[Row].collect()
 TestBaseUtils.compareResultAsText(results.asJava, expected)
   }
+
--- End diff --

Please remove the tests which have been moved to `CalcValidationTest` from 
this file.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-5380) Number of outgoing records not reported in web interface

2016-12-21 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-5380:
--
Attachment: outRecordsNotreported.png

> Number of outgoing records not reported in web interface
> 
>
> Key: FLINK-5380
> URL: https://issues.apache.org/jira/browse/FLINK-5380
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
> Attachments: outRecordsNotreported.png
>
>
> The web frontend does not report any outgoing records.
> The amount of data in MB is reported correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5380) Number of outgoing records not reported in web interface

2016-12-21 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5380:
-

 Summary: Number of outgoing records not reported in web interface
 Key: FLINK-5380
 URL: https://issues.apache.org/jira/browse/FLINK-5380
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.2.0
Reporter: Robert Metzger


The web frontend does not report any outgoing records.
The amount of data in MB is reported correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5379) Flink CliFrontend does not return when not logged in with kerberos

2016-12-21 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-5379:
--
Description: 
In pre-1.2 versions, Flink fails immediately when deploying on YARN while the 
current user is not Kerberos-authenticated:

{code}
Error while deploying YARN cluster: Couldn't deploy Yarn cluster
java.lang.RuntimeException: Couldn't deploy Yarn cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:384)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:591)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:465)
Caused by: 
org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: In 
secure mode. Please provide Kerberos credentials in order to authenticate. You 
may use kinit to authenticate and request a TGT from the Kerberos server.
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:371)
... 2 more
{code}
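
(For illustration, a minimal sketch of such an up-front fail-fast check, assuming 
Hadoop's {{UserGroupInformation}} API; this is not Flink's actual code path:)

{code}
// Hypothetical fail-fast credential check; the error message mirrors the
// pre-1.2 exception quoted above.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

object KerberosGuard {
  def checkCredentials(hadoopConf: Configuration): Unit = {
    UserGroupInformation.setConfiguration(hadoopConf)
    val user = UserGroupInformation.getCurrentUser
    if (UserGroupInformation.isSecurityEnabled && !user.hasKerberosCredentials) {
      throw new RuntimeException(
        "In secure mode. Please provide Kerberos credentials in order to " +
          "authenticate. You may use kinit to authenticate and request a " +
          "TGT from the Kerberos server.")
    }
  }
}
{code}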

In 1.2, the following happens (the CLI frontend does not return; it seems to be 
stuck in a retry loop):
{code}
2016-12-21 13:51:29,925 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
my-cluster-2wv1.c.sorter-757.internal/10.240.0.24:8032
2016-12-21 13:51:30,153 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:51:30,154 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:51:30,154 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:52:00,171 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:52:00,172 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:52:00,172 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:52:30,188 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:52:30,189 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:52:30,189 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:53:00,203 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:53:00,204 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism 

[jira] [Created] (FLINK-5379) Flink CliFrontend does not return when not logged in with kerberos

2016-12-21 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5379:
-

 Summary: Flink CliFrontend does not return when not logged in with 
kerberos
 Key: FLINK-5379
 URL: https://issues.apache.org/jira/browse/FLINK-5379
 Project: Flink
  Issue Type: Bug
  Components: Client
Affects Versions: 1.2.0
Reporter: Robert Metzger


In pre-1.2 versions, Flink fails immediately when deploying on YARN while the 
current user is not Kerberos-authenticated:

{code}
Error while deploying YARN cluster: Couldn't deploy Yarn cluster
java.lang.RuntimeException: Couldn't deploy Yarn cluster
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:384)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:591)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:465)
Caused by: 
org.apache.flink.yarn.AbstractYarnClusterDescriptor$YarnDeploymentException: In 
secure mode. Please provide Kerberos credentials in order to authenticate. You 
may use kinit to authenticate and request a TGT from the Kerberos server.
at 
org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:371)
... 2 more
{code}

In 1.2, the following happens:
{code}
2016-12-21 13:51:29,925 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
my-cluster-2wv1.c.sorter-757.internal/10.240.0.24:8032
2016-12-21 13:51:30,153 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:51:30,154 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:51:30,154 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:52:00,171 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:52:00,172 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:52:00,172 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:52:30,188 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:52:30,189 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
2016-12-21 13:52:30,189 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
2016-12-21 13:53:00,203 WARN  org.apache.hadoop.security.UserGroupInformation   
- PriviledgedActionException as:longrunning (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
2016-12-21 13:53:00,204 WARN  org.apache.hadoop.ipc.Client  
- Exception encountered while connecting to the 

[jira] [Updated] (FLINK-1583) TaskManager reregistration in case of a restart

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-1583:
-
Description: 
Currently, the {{InstanceManager}} identifies {{Instance}}s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.
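
(For illustration, a minimal sketch of ID-based registration bookkeeping; all 
names are hypothetical and do not match Flink's actual {{InstanceManager}} API:)

{code}
// Hypothetical sketch: registrations keyed by a per-incarnation id instead
// of the connection info, so a restarted TaskManager is never mistaken for
// its previous, stale registration.
import java.util.UUID

final case class TaskManagerId(value: UUID = UUID.randomUUID())

class RegistrationBook {
  private var registered = Map.empty[TaskManagerId, String]

  def register(id: TaskManagerId, connectionInfo: String): Boolean =
    if (registered.contains(id)) {
      false // duplicate attempt from the same incarnation
    } else {
      registered += (id -> connectionInfo)
      true
    }
}
{code}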

  was:
Currently, the {{InstanceManager}} identifies {{Instance}}'s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.


> TaskManager reregistration in case of a restart
> ---
>
> Key: FLINK-1583
> URL: https://issues.apache.org/jira/browse/FLINK-1583
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>
> Currently, the {{InstanceManager}} identifies {{Instance}}s based on their 
> {{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which 
> tries to register newly at the {{JobManager}}, the {{InstanceManager}} can 
> mistake this {{TaskManager}} as already registered. This can lead to a 
> corrupted state.
> We should identify {{TaskManager}}s based on some ID to distinguish distinct 
> registration attempts of a restarted {{TaskManager}}. This will improve the 
> system's stability.
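
A minimal sketch of the proposed ID-based registration, assuming a fresh instance ID is generated at every TaskManager process start and attached to each registration attempt (the class below is hypothetical, not Flink's actual API):

```java
import java.util.UUID;

// Hypothetical sketch: the identity is a fresh UUID per process start rather
// than the host/port connection info, so a restarted TaskManager is clearly
// distinguishable from its previous, possibly still-registered incarnation.
public final class TaskManagerRegistration {
    final String connectionInfo; // host:port, used only for connecting
    final UUID instanceId;       // distinguishes restarts on the same host:port

    public TaskManagerRegistration(String connectionInfo) {
        this.connectionInfo = connectionInfo;
        this.instanceId = UUID.randomUUID(); // new identity on every start
    }

    public static void main(String[] args) {
        // Two registrations from the same host:port are now distinguishable:
        TaskManagerRegistration first = new TaskManagerRegistration("host-1:6121");
        TaskManagerRegistration afterRestart = new TaskManagerRegistration("host-1:6121");
        System.out.println(first.instanceId.equals(afterRestart.instanceId)); // false
    }
}
```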



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1583) TaskManager reregistration in case of a restart

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-1583:
-
Description: 
Currently, the {{InstanceManager}} identifies {{Instance}} s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.

  was:
Currently, the {{InstanceManager}} identifies {{Instance}}s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.


> TaskManager reregistration in case of a restart
> ---
>
> Key: FLINK-1583
> URL: https://issues.apache.org/jira/browse/FLINK-1583
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>
> Currently, the {{InstanceManager}} identifies {{Instance}} s based on their 
> {{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which 
> tries to register newly at the {{JobManager}}, the {{InstanceManager}} can 
> mistake this {{TaskManager}} as already registered. This can lead to a 
> corrupted state.
> We should identify {{TaskManager}}s based on some ID to distinguish distinct 
> registration attempts of a restarted {{TaskManager}}. This will improve the 
> system's stability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5364) Rework JAAS configuration to support user-supplied entries

2016-12-21 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767065#comment-15767065
 ] 

Till Rohrmann commented on FLINK-5364:
--

Hi [~eronwright], I think you're right, and we should definitely try to fix it 
for the final 1.2 release. Great that you're looking into it.

Do you want to open the PR against the Flink GitHub repository? I guess we can 
already do a first round of reviews.

> Rework JAAS configuration to support user-supplied entries
> --
>
> Key: FLINK-5364
> URL: https://issues.apache.org/jira/browse/FLINK-5364
> Project: Flink
>  Issue Type: Bug
>  Components: Cluster Management
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>Priority: Critical
>  Labels: kerberos, security
>
> Recent issues (see linked) have brought to light a critical deficiency in the 
> handling of JAAS configuration.   
> 1. the MapR distribution relies on an explicit JAAS conf, rather than the 
> in-memory conf used by stock Hadoop.
> 2. the ZK/Kafka/Hadoop security configuration is supposed to be independent 
> (one can enable each element separately) but isn't.
> Perhaps we should rework the JAAS conf code to merge any user-supplied 
> configuration with our defaults, rather than using an all-or-nothing 
> approach.   
> We should also address some recent regressions:
> 1. The HadoopSecurityContext should be installed regardless of auth mode, to 
> log in with UserGroupInformation, which:
> - handles the HADOOP_USER_NAME variable.
> - installs an OS-specific user principal (from UnixLoginModule etc.) 
> unrelated to Kerberos.
> - picks up the HDFS/HBASE delegation tokens.
> 2. Fix the use of alternative authentication methods - delegation tokens and 
> Kerberos ticket cache.
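
A minimal sketch of the merging approach, assuming Flink keeps its defaults as an in-memory config and consults a user-supplied one first; the class name is hypothetical, only {{javax.security.auth.login.Configuration}} is the real JDK API:

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Hypothetical sketch: delegate to the user-supplied JAAS configuration first
// (e.g. MapR's explicit conf file) and fall back to Flink's in-memory
// defaults only when the user did not define the requested login context.
public class MergingJaasConfiguration extends Configuration {
    private final Configuration userConfig;     // e.g. from java.security.auth.login.config
    private final Configuration flinkDefaults;  // Flink's programmatic entries

    public MergingJaasConfiguration(Configuration userConfig, Configuration flinkDefaults) {
        this.userConfig = userConfig;
        this.flinkDefaults = flinkDefaults;
    }

    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        AppConfigurationEntry[] user = userConfig.getAppConfigurationEntry(name);
        return (user != null && user.length > 0)
                ? user
                : flinkDefaults.getAppConfigurationEntry(name);
    }
}
```

Installing it via {{Configuration.setConfiguration(new MergingJaasConfiguration(Configuration.getConfiguration(), defaults))}} would preserve user-supplied entries instead of replacing them wholesale, which is exactly the opposite of the all-or-nothing behavior described above.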



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-1583) TaskManager reregistration in case of a restart

2016-12-21 Thread Anton Solovev (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Solovev updated FLINK-1583:
-
Description: 
Currently, the {{InstanceManager}} identifies {{Instance}}'s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.

  was:
Currently, the {{InstanceManager}} identifies {{Instance}} s based on their 
{{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which tries 
to register newly at the {{JobManager}}, the {{InstanceManager}} can mistake 
this {{TaskManager}} as already registered. This can lead to a corrupted state.

We should identify {{TaskManager}}s based on some ID to distinguish distinct 
registration attempts of a restarted {{TaskManager}}. This will improve the 
system's stability.


> TaskManager reregistration in case of a restart
> ---
>
> Key: FLINK-1583
> URL: https://issues.apache.org/jira/browse/FLINK-1583
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>
> Currently, the {{InstanceManager}} identifies {{Instance}}'s based on their 
> {{InstanceConnectionInfo}}. In case of a restarted {{TaskManager}} which 
> tries to register newly at the {{JobManager}}, the {{InstanceManager}} can 
> mistake this {{TaskManager}} as already registered. This can lead to a 
> corrupted state.
> We should identify {{TaskManager}}s based on some ID to distinguish distinct 
> registration attempts of a restarted {{TaskManager}}. This will improve the 
> system's stability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5086) Clean dead snapshot files produced by the tasks failing to acknowledge checkpoints

2016-12-21 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767058#comment-15767058
 ] 

Till Rohrmann commented on FLINK-5086:
--

Hi [~roman_maier], what's your plan to solve this problem?

> Clean dead snapshot files produced by the tasks failing to acknowledge 
> checkpoints
> --
>
> Key: FLINK-5086
> URL: https://issues.apache.org/jira/browse/FLINK-5086
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Xiaogang Shi
>Assignee: Roman Maier
>
> A task may fail when performing checkpoints. In that case, the task may have 
> already copied some data to external storage. But since the task fails to 
> send the state handle to {{CheckpointCoordinator}}, the copied data will not 
> be deleted by {{CheckpointCoordinator}}. 
> I think we must find a method to clean up such dead snapshot data to avoid 
> unbounded usage of external storage. 
> One possible method is to clean these dead files when the task recovers. When 
> a task recovers, {{CheckpointCoordinator}} will tell the task all the 
> retained checkpoints. The task can then scan the external storage and delete 
> all snapshots that are not part of these retained checkpoints.
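
A minimal sketch of that recovery-time cleanup, assuming the task's snapshots live under one directory and the retained checkpoints are known as a set of paths (the helper below is hypothetical, not existing Flink code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.stream.Stream;

public final class DeadSnapshotCleaner {

    // Delete every file under taskSnapshotDir that does not belong to a
    // retained checkpoint; intended to run once when the task recovers.
    public static void cleanDeadSnapshots(Path taskSnapshotDir, Set<Path> retained)
            throws IOException {
        try (Stream<Path> files = Files.walk(taskSnapshotDir)) {
            files.filter(Files::isRegularFile)
                 // keep a file only if no retained checkpoint path is its prefix
                 .filter(f -> retained.stream().noneMatch(f::startsWith))
                 .forEach(f -> {
                     try {
                         Files.deleteIfExists(f); // best effort; retried on next recovery
                     } catch (IOException ignored) {
                         // a leftover file is picked up again on the next recovery
                     }
                 });
        }
    }
}
```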



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2016-12-21 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766974#comment-15766974
 ] 

Jark Wu commented on FLINK-5280:


Thank you [~fhueske] for summarizing this, makes sense to me :)

> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface currently only supports the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported, such as Avro, JSON, Parquet, and ORC. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.
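
A minimal sketch of what a nested return type could look like, assuming the extended {{TableSource}} would expose its schema as a (possibly nested) {{RowTypeInfo}}; the composition below is illustrative only, the exact interface changes are still open:

```java
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class NestedSchemaExample {
    public static void main(String[] args) {
        // Nested "user" row with fields user.name and user.age
        TypeInformation<?> userType = new RowTypeInfo(
                new TypeInformation<?>[] {
                        BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO},
                new String[] {"name", "age"});

        // Top-level row: a flat field plus the nested row
        RowTypeInfo rowType = new RowTypeInfo(
                new TypeInformation<?>[] {BasicTypeInfo.LONG_TYPE_INFO, userType},
                new String[] {"id", "user"});

        // e.g. Row(id: Long, user: Row(name: String, age: Integer))
        System.out.println(rowType);
    }
}
```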



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5368) Let Kafka consumer show something when it fails to read one topic out of topic list

2016-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766884#comment-15766884
 ] 

ASF GitHub Bot commented on FLINK-5368:
---

Github user DieBauer commented on the issue:

https://github.com/apache/flink/pull/3036
  
Looks like the build job ran out of memory:
```
Running org.apache.flink.api.scala.ScalaShellITCase
Running org.apache.flink.api.scala.ScalaShellLocalStartupITCase
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.355 sec 
- in org.apache.flink.api.scala.ScalaShellLocalStartupITCase
java.lang.OutOfMemoryError: Java heap space
```


> Let Kafka consumer show something when it fails to read one topic out of 
> topic list
> ---
>
> Key: FLINK-5368
> URL: https://issues.apache.org/jira/browse/FLINK-5368
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Reporter: Sendoh
>Assignee: Sendoh
>Priority: Critical
>
> As a developer reading data from many topics, I want the Kafka consumer to 
> report something if any topic is not available. The motivation is that we 
> read many topics as a list at one time, and sometimes we fail to recognize 
> that one or two topic names have been changed or deprecated, and the Flink 
> Kafka connector doesn't show the error.
> My proposed change would be either to throw a RuntimeException or to log 
> LOG.error(topic + " doesn't have any partition") if partitionsForTopic is 
> null in this function: 
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer09.java#L208
> Any suggestion is welcome.
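
A minimal sketch of the proposed guard, assuming the partition metadata comes from Kafka's {{KafkaConsumer#partitionsFor}}; the surrounding method is illustrative, not the actual connector code:

```java
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class TopicPartitionCheck {
    private static final Logger LOG = LoggerFactory.getLogger(TopicPartitionCheck.class);

    // Fail loudly when a topic from the subscribed list has no partitions,
    // e.g. because it was renamed or deprecated in the meantime.
    static List<PartitionInfo> partitionsOrFail(KafkaConsumer<?, ?> consumer, String topic) {
        List<PartitionInfo> partitions = consumer.partitionsFor(topic);
        if (partitions == null || partitions.isEmpty()) {
            LOG.error("{} doesn't have any partition", topic);
            throw new RuntimeException("Unable to retrieve partitions for topic " + topic);
        }
        return partitions;
    }
}
```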



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

