[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036500#comment-16036500
 ] 

ASF GitHub Bot commented on FLINK-6494:
---

Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4075
  
@zentol In addition to the ResourceManager, I have migrated the configuration 
options of YARN and Mesos to their class files. Would you please review this 
when you have time? Thanks :)


> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4075: [FLINK-6494] Migrate ResourceManager/Yarn/Mesos configura...

2017-06-04 Thread zjureel
Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4075
  
@zentol In addition to the ResourceManager, I have migrated the configuration 
options of YARN and Mesos to their class files. Would you please review this 
when you have time? Thanks :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036498#comment-16036498
 ] 

ASF GitHub Bot commented on FLINK-6494:
---

GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4075

[FLINK-6494] Migrate ResourceManager/Yarn/Mesos configuration options

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-6494

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4075.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4075


commit 8b911d2a8799e3fb64b5f79306e3a629e890952a
Author: zjureel 
Date:   2017-06-05T03:51:31Z

[FLINK-6494] Migrate ResourceManager/Yarn/Mesos configuration options




> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>






[GitHub] flink pull request #4075: [FLINK-6494] Migrate ResourceManager/Yarn/Mesos co...

2017-06-04 Thread zjureel
GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4075

[FLINK-6494] Migrate ResourceManager/Yarn/Mesos configuration options

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-6494

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4075.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4075


commit 8b911d2a8799e3fb64b5f79306e3a629e890952a
Author: zjureel 
Date:   2017-06-05T03:51:31Z

[FLINK-6494] Migrate ResourceManager/Yarn/Mesos configuration options






[jira] [Commented] (FLINK-6045) FLINK_CONF_DIR has to be set even though specifying --configDir

2017-06-04 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036472#comment-16036472
 ] 

mingleizhang commented on FLINK-6045:
-

Are there more messages, such as logs, from that user's report?

> FLINK_CONF_DIR has to be set even though specifying --configDir
> ---
>
> Key: FLINK-6045
> URL: https://issues.apache.org/jira/browse/FLINK-6045
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Till Rohrmann
>Priority: Minor
>
> A user reported that {{FLINK_CONF_DIR}} has to be set in addition to 
> specifying --configDir. Otherwise the {{JobManager}} and the {{TaskManagers}} 
> fail silently trying to read from {{fs.hdfs.hadoopconf}}. Specifying one of 
> the two configuration options should be enough to successfully run Flink.
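
The fallback the issue asks for (either setting alone suffices) can be sketched as follows; the class and parameter names below are hypothetical, not Flink's actual startup code:

```java
// Hypothetical sketch: resolve the configuration directory from either the
// --configDir argument or the FLINK_CONF_DIR environment variable, so that
// specifying just one of the two is enough.
public class ConfDirResolver {
    public static String resolve(String cliConfigDir, String envConfDir) {
        if (cliConfigDir != null && !cliConfigDir.isEmpty()) {
            return cliConfigDir;           // --configDir wins when given
        }
        if (envConfDir != null && !envConfDir.isEmpty()) {
            return envConfDir;             // otherwise fall back to the environment
        }
        // Fail loudly instead of silently reading from fs.hdfs.hadoopconf.
        throw new IllegalArgumentException(
            "No configuration directory given: set --configDir or FLINK_CONF_DIR");
    }
}
```

With such a fallback, neither the JobManager nor the TaskManagers would need both settings to locate their configuration.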





[jira] [Commented] (FLINK-6488) Remove 'start-local.sh' script

2017-06-04 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036455#comment-16036455
 ] 

mingleizhang commented on FLINK-6488:
-

Hey, [~StephanEwen], please help review this modification. Thanks.

> Remove 'start-local.sh' script
> --
>
> Key: FLINK-6488
> URL: https://issues.apache.org/jira/browse/FLINK-6488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Startup Shell Scripts
>Reporter: Stephan Ewen
>Assignee: mingleizhang
>
> The {{start-cluster.sh}} scripts work locally now, without needing SSH setup.
> We can remove {{start-local.sh}} without any loss of functionality.





[jira] [Commented] (FLINK-6488) Remove 'start-local.sh' script

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036454#comment-16036454
 ] 

ASF GitHub Bot commented on FLINK-6488:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4074

[FLINK-6488] [scripts] Remove 'start-local.sh' script

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6488

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4074.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4074


commit be3fffc8adcebcfc24f0c50de2abcf69354be8dd
Author: zhangminglei 
Date:   2017-06-05T01:38:04Z

[FLINK-6488] [scripts] Remove 'start-local.sh' script




> Remove 'start-local.sh' script
> --
>
> Key: FLINK-6488
> URL: https://issues.apache.org/jira/browse/FLINK-6488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Startup Shell Scripts
>Reporter: Stephan Ewen
>Assignee: mingleizhang
>
> The {{start-cluster.sh}} scripts work locally now, without needing SSH setup.
> We can remove {{start-local.sh}} without any loss of functionality.





[GitHub] flink pull request #4074: [FLINK-6488] [scripts] Remove 'start-local.sh' scr...

2017-06-04 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4074

[FLINK-6488] [scripts] Remove 'start-local.sh' script

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6488

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4074.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4074


commit be3fffc8adcebcfc24f0c50de2abcf69354be8dd
Author: zhangminglei 
Date:   2017-06-05T01:38:04Z

[FLINK-6488] [scripts] Remove 'start-local.sh' script






[jira] [Assigned] (FLINK-6488) Remove 'start-local.sh' script

2017-06-04 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-6488:
---

Assignee: mingleizhang

> Remove 'start-local.sh' script
> --
>
> Key: FLINK-6488
> URL: https://issues.apache.org/jira/browse/FLINK-6488
> Project: Flink
>  Issue Type: Sub-task
>  Components: Startup Shell Scripts
>Reporter: Stephan Ewen
>Assignee: mingleizhang
>
> The {{start-cluster.sh}} scripts work locally now, without needing SSH setup.
> We can remove {{start-local.sh}} without any loss of functionality.





[jira] [Commented] (FLINK-6817) Fix NPE when preceding is not set in OVER window

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036427#comment-16036427
 ] 

ASF GitHub Bot commented on FLINK-6817:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120023997
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -127,3 +127,21 @@ class PartitionedOver(private val partitionByExpr: 
Array[Expression]) {
 new OverWindowWithOrderBy(partitionByExpr, orderByExpr)
   }
 }
+
+
+class OverWindowWithOrderBy(
+  private val partitionByExpr: Array[Expression],
--- End diff --

I suggest changing `partitionByExpr` to `partitionBy` to keep the parameter name 
consistent with the Scala API. What do you think?


> Fix NPE when preceding is not set in OVER window
> 
>
> Key: FLINK-6817
> URL: https://issues.apache.org/jira/browse/FLINK-6817
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Minor
> Fix For: 1.4.0
>
>
> When preceding is not set in an OVER window, an NPE will be thrown:
> {code}
> val result = table
>   .window(Over orderBy 'rowtime as 'w)
>   .select('c, 'a.count over 'w)
> {code}
> {code}
> java.lang.NullPointerException
>   at org.apache.flink.table.api.OverWindowWithOrderBy.as(windows.scala:97)
> {code}
> Preceding must be set in an OVER window, so a more explicit exception should 
> be thrown instead of an NPE.
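
A minimal sketch of the intended behaviour, with hypothetical class and method names (the real fix belongs in Flink's `windows.scala`): validate the preceding clause eagerly and raise a descriptive error instead of dereferencing null.

```java
// Hypothetical sketch: fail with an explicit message when preceding is missing,
// instead of letting a null field surface as a NullPointerException in as().
public class OverWindowSketch {
    private final String preceding; // stays null if preceding(...) was never called

    public OverWindowSketch(String preceding) {
        this.preceding = preceding;
    }

    public String as(String alias) {
        if (preceding == null) {
            // Explicit, actionable error instead of an NPE
            throw new IllegalStateException(
                "An OVER window requires a preceding clause; call preceding(...) before as(...).");
        }
        return alias;
    }
}
```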





[jira] [Commented] (FLINK-6817) Fix NPE when preceding is not set in OVER window

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036429#comment-16036429
 ] 

ASF GitHub Bot commented on FLINK-6817:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120024091
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -127,3 +127,21 @@ class PartitionedOver(private val partitionByExpr: 
Array[Expression]) {
 new OverWindowWithOrderBy(partitionByExpr, orderByExpr)
--- End diff --

I suggest changing `partitionByExpr` to `partitionBy` to keep the parameter name 
consistent with the Scala API. What do you think?


> Fix NPE when preceding is not set in OVER window
> 
>
> Key: FLINK-6817
> URL: https://issues.apache.org/jira/browse/FLINK-6817
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Minor
> Fix For: 1.4.0
>
>
> When preceding is not set in an OVER window, an NPE will be thrown:
> {code}
> val result = table
>   .window(Over orderBy 'rowtime as 'w)
>   .select('c, 'a.count over 'w)
> {code}
> {code}
> java.lang.NullPointerException
>   at org.apache.flink.table.api.OverWindowWithOrderBy.as(windows.scala:97)
> {code}
> Preceding must be set in an OVER window, so a more explicit exception should 
> be thrown instead of an NPE.





[jira] [Commented] (FLINK-6817) Fix NPE when preceding is not set in OVER window

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036428#comment-16036428
 ] 

ASF GitHub Bot commented on FLINK-6817:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120023733
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -98,7 +98,7 @@ object Over {
 */
   def orderBy(orderBy: String): OverWindowWithOrderBy = {
 val orderByExpr = ExpressionParser.parseExpression(orderBy)
-new OverWindowWithOrderBy(Seq[Expression](), orderByExpr)
+new OverWindowWithOrderBy(Array[Expression](), orderByExpr)
--- End diff --

Good catch!
And I suggest changing `java/windows.scala` as well. What do you think?


> Fix NPE when preceding is not set in OVER window
> 
>
> Key: FLINK-6817
> URL: https://issues.apache.org/jira/browse/FLINK-6817
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Minor
> Fix For: 1.4.0
>
>
> When preceding is not set in an OVER window, an NPE will be thrown:
> {code}
> val result = table
>   .window(Over orderBy 'rowtime as 'w)
>   .select('c, 'a.count over 'w)
> {code}
> {code}
> java.lang.NullPointerException
>   at org.apache.flink.table.api.OverWindowWithOrderBy.as(windows.scala:97)
> {code}
> Preceding must be set in an OVER window, so a more explicit exception should 
> be thrown instead of an NPE.





[GitHub] flink pull request #4055: [FLINK-6817] [table] Fix NPE when preceding is not...

2017-06-04 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120024091
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -127,3 +127,21 @@ class PartitionedOver(private val partitionByExpr: 
Array[Expression]) {
 new OverWindowWithOrderBy(partitionByExpr, orderByExpr)
--- End diff --

I suggest changing `partitionByExpr` to `partitionBy` to keep the parameter name 
consistent with the Scala API. What do you think?




[GitHub] flink pull request #4055: [FLINK-6817] [table] Fix NPE when preceding is not...

2017-06-04 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120023997
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -127,3 +127,21 @@ class PartitionedOver(private val partitionByExpr: 
Array[Expression]) {
 new OverWindowWithOrderBy(partitionByExpr, orderByExpr)
   }
 }
+
+
+class OverWindowWithOrderBy(
+  private val partitionByExpr: Array[Expression],
--- End diff --

I suggest changing `partitionByExpr` to `partitionBy` to keep the parameter name 
consistent with the Scala API. What do you think?




[GitHub] flink pull request #4055: [FLINK-6817] [table] Fix NPE when preceding is not...

2017-06-04 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4055#discussion_r120023733
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ---
@@ -98,7 +98,7 @@ object Over {
 */
   def orderBy(orderBy: String): OverWindowWithOrderBy = {
 val orderByExpr = ExpressionParser.parseExpression(orderBy)
-new OverWindowWithOrderBy(Seq[Expression](), orderByExpr)
+new OverWindowWithOrderBy(Array[Expression](), orderByExpr)
--- End diff --

Good catch!
And I suggest changing `java/windows.scala` as well. What do you think?




[jira] [Commented] (FLINK-6804) Inconsistent state migration behaviour between different state backends

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036413#comment-16036413
 ] 

ASF GitHub Bot commented on FLINK-6804:
---

GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/4073

[FLINK-6804] [state] Consistent state migration behaviour across state 
backends

This PR is based on #4044, in which @tillrohrmann added the ITCases for 
upgrading POJO types w.r.t. state migration.

This PR collects several more follow-ups that together reach one goal: 
unify the state migration behaviour across all state backends so that it is consistent.

The extra commits added on top of #4044 are as follows:

- f568252 and 02a360d: fix failing tests caused by the changes in #4044. Also 
enhance the ITCases of #4044 to include equivalent tests for registering POJOs 
as operator state (also disabled because they do not pass yet).

- ba00f8e and dd81295: fix the issues in the `PojoSerializer` that caused 
the tests to fail. The deserialization of `PojoSerializer` and 
`PojoSerializerConfigSnapshot` is now resilient to missing fields.

- 36d87a0: adds compatibility check code paths for 
`DefaultOperatorStateBackend` and `HeapKeyedStateBackend`. Note that these 
checks are not strictly required, since for the memory backends all state is 
read into objects on restore and the job can always just use the new serializer 
to continue. The additional checks make the behaviour of state migration 
consistent across all backends.

- 0a045b3: fully enables the ITCases from #4044 and 02a360d to verify 
the fixes.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink pojoserializer-fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4073.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4073


commit 5d825c1b560c56e1a8f137b8c939e6ded16c5505
Author: Till Rohrmann 
Date:   2017-05-31T13:14:11Z

[FLINK-6796] [tests] Use Environment's class loader in 
AbstractStreamOperatorTestHarness

Generalize KeyedOneInputStreamOperatorTestHarness

Generalize AbstractStreamOperatorTestHarness

commit 30b43bf81a45131ddf5137d33d47265cc69713f8
Author: Till Rohrmann 
Date:   2017-05-31T16:37:12Z

[FLINK-6803] [tests] Add test for PojoSerializer state upgrade

The added PojoSerializerUpgradeTest tests the state migration behaviour when the
underlying POJO type changes and one tries to recover from old state. Currently
not all tests could be activated, because there are still some pending issues to be
fixed first. We should enable these tests once the issues have been fixed.

commit f568252169ecbf07b76a227a596bb148804b6741
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T13:51:12Z

[hotfix] [tests] Fix failing tests in AsyncWaitOperatorTest and 
StateBackendTestBase

commit 02a360d98b6dc56a1d9a505411328ba405c78999
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T17:32:53Z

[FLINK-6803] [tests] Enhancements to PojoSerializerUpgradeTest

1. Allow tests to ignore missing fields.
2. Add equivalent tests which use POJOs as managed operator state.

For 2, all tests have to be ignored for now until FLINK-6804 is fixed.

commit ba00f8e34b4bb0651e63b84d13aaac3b5b3d3faa
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T10:30:58Z

[FLINK-6801] [core] Relax missing fields check when reading 
PojoSerializerConfigSnapshot

Prior to this commit, when reading the PojoSerializerConfigSnapshot, if
the underlying POJO type has a missing field, the read would fail.
Failing the deserialization of the config snapshot is too severe,
because it would leave no opportunity to restore the checkpoint at
all, whereas we should be able to restore the config and provide it to
the new PojoSerializer for the chance of getting a convert deserializer.

This commit changes this by only restoring the field names when reading
the PojoSerializerConfigSnapshot. In PojoSerializer.ensureCompatibility,
the field name is used to look up the fields of the new PojoSerializer.
This change does not alter the serialization format of the
PojoSerializerConfigSnapshot.

commit dd8129569de749b8173b7fa3de5eee0c46b9ebe4
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T18:41:59Z

[FLINK-6801] [core] Allow deserialized PojoSerializer to have removed fields

Prior to this commit, deserializing the PojoSerializer would fail when
we encountered a missing field that existed in the POJO type before. It is
actually perfectly fine to have a missing field; the deserialized
PojoSerializer should simply skip reading the removed field's previously
serialized values, much like Java Object Serialization does.
[GitHub] flink pull request #4073: [FLINK-6804] [state] Consistent state migration be...

2017-06-04 Thread tzulitai
GitHub user tzulitai opened a pull request:

https://github.com/apache/flink/pull/4073

[FLINK-6804] [state] Consistent state migration behaviour across state 
backends

This PR is based on #4044, in which @tillrohrmann added the ITCases for 
upgrading POJO types w.r.t. state migration.

This PR collects several more follow-ups that together reach one goal: 
unify the state migration behaviour across all state backends so that it is consistent.

The extra commits added on top of #4044 are as follows:

- f568252 and 02a360d: fix failing tests caused by the changes in #4044. Also 
enhance the ITCases of #4044 to include equivalent tests for registering POJOs 
as operator state (also disabled because they do not pass yet).

- ba00f8e and dd81295: fix the issues in the `PojoSerializer` that caused 
the tests to fail. The deserialization of `PojoSerializer` and 
`PojoSerializerConfigSnapshot` is now resilient to missing fields.

- 36d87a0: adds compatibility check code paths for 
`DefaultOperatorStateBackend` and `HeapKeyedStateBackend`. Note that these 
checks are not strictly required, since for the memory backends all state is 
read into objects on restore and the job can always just use the new serializer 
to continue. The additional checks make the behaviour of state migration 
consistent across all backends.

- 0a045b3: fully enables the ITCases from #4044 and 02a360d to verify 
the fixes.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzulitai/flink pojoserializer-fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4073.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4073


commit 5d825c1b560c56e1a8f137b8c939e6ded16c5505
Author: Till Rohrmann 
Date:   2017-05-31T13:14:11Z

[FLINK-6796] [tests] Use Environment's class loader in 
AbstractStreamOperatorTestHarness

Generalize KeyedOneInputStreamOperatorTestHarness

Generalize AbstractStreamOperatorTestHarness

commit 30b43bf81a45131ddf5137d33d47265cc69713f8
Author: Till Rohrmann 
Date:   2017-05-31T16:37:12Z

[FLINK-6803] [tests] Add test for PojoSerializer state upgrade

The added PojoSerializerUpgradeTest tests the state migration behaviour when the
underlying POJO type changes and one tries to recover from old state. Currently
not all tests could be activated, because there are still some pending issues to be
fixed first. We should enable these tests once the issues have been fixed.

commit f568252169ecbf07b76a227a596bb148804b6741
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T13:51:12Z

[hotfix] [tests] Fix failing tests in AsyncWaitOperatorTest and 
StateBackendTestBase

commit 02a360d98b6dc56a1d9a505411328ba405c78999
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T17:32:53Z

[FLINK-6803] [tests] Enhancements to PojoSerializerUpgradeTest

1. Allow tests to ignore missing fields.
2. Add equivalent tests which use POJOs as managed operator state.

For 2, all tests have to be ignored for now until FLINK-6804 is fixed.

commit ba00f8e34b4bb0651e63b84d13aaac3b5b3d3faa
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T10:30:58Z

[FLINK-6801] [core] Relax missing fields check when reading 
PojoSerializerConfigSnapshot

Prior to this commit, when reading the PojoSerializerConfigSnapshot, if
the underlying POJO type has a missing field, the read would fail.
Failing the deserialization of the config snapshot is too severe,
because it would leave no opportunity to restore the checkpoint at
all, whereas we should be able to restore the config and provide it to
the new PojoSerializer for the chance of getting a convert deserializer.

This commit changes this by only restoring the field names when reading
the PojoSerializerConfigSnapshot. In PojoSerializer.ensureCompatibility,
the field name is used to look up the fields of the new PojoSerializer.
This change does not alter the serialization format of the
PojoSerializerConfigSnapshot.

commit dd8129569de749b8173b7fa3de5eee0c46b9ebe4
Author: Tzu-Li (Gordon) Tai 
Date:   2017-06-04T18:41:59Z

[FLINK-6801] [core] Allow deserialized PojoSerializer to have removed fields

Prior to this commit, deserializing the PojoSerializer would fail when
we encountered a missing field that existed in the POJO type before. It is
actually perfectly fine to have a missing field; the deserialized
PojoSerializer should simply skip reading the removed field's previously
serialized values, i.e. much like how Java Object Serialization works.

This commit relaxes the deserialization of the PojoSerializer, so that a
null will be used as a placeholder value to indicate a removed field
that pre

[jira] [Assigned] (FLINK-6804) Inconsistent state migration behaviour between different state backends

2017-06-04 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-6804:
--

Assignee: Tzu-Li (Gordon) Tai

> Inconsistent state migration behaviour between different state backends
> ---
>
> Key: FLINK-6804
> URL: https://issues.apache.org/jira/browse/FLINK-6804
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Type Serialization System
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Critical
>
> The {{MemoryStateBackend}}, {{FsStateBackend}} and {{RocksDBStateBackend}} 
> show a different behaviour when it comes to recovery from old state and state 
> migration. For example, using the {{MemoryStateBackend}} it is possible to 
> recover pojos which now have additional fields (at recovery time). The only 
> caveat is that the recovered {{PojoSerializer}} will silently drop the added 
> fields when writing the new Pojo. In contrast, the {{RocksDBStateBackend}} 
> correctly recognizes that a state migration is necessary and thus fails with 
> a {{StateMigrationException}}. The same applies to the case where Pojo field 
> types change. The {{MemoryStateBackend}} and the {{FsStateBackend}} accept 
> such a change as long as the fields still have the same length. The 
> {{RocksDBStateBackend}} correctly fails with a {{StateMigrationException}}.
> I think that all state backends should behave similarly and give the user the 
> same recovery and state migration guarantees. Otherwise, it could happen that 
> jobs run with one state backend but not with another (wrt semantic behaviour).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6816) Fix wrong usage of Scala string interpolation in Table API

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036298#comment-16036298
 ] 

ASF GitHub Bot commented on FLINK-6816:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4050
  
Hi @wuchong Thanks for the update. 
+1 to merge.


> Fix wrong usage of Scala string interpolation in Table API
> --
>
> Key: FLINK-6816
> URL: https://issues.apache.org/jira/browse/FLINK-6816
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Minor
> Fix For: 1.4.0
>
>
> This issue is to fix some wrong usage of Scala string interpolation, such as 
> missing the "s" prefix.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4050: [FLINK-6816] [table] Fix wrong usage of Scala string inte...

2017-06-04 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4050
  
Hi @wuchong Thanks for the update. 
+1 to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4072: [FLINK-6848] Update managed state docs

2017-06-04 Thread Fokko
GitHub user Fokko opened a pull request:

https://github.com/apache/flink/pull/4072

[FLINK-6848] Update managed state docs

Hi guys,

I would like to add an example of how to work with managed state in Scala. 
The code is tested locally and might be a nice addition to the docs.

Cheers,
Fokko Driesprong

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Fokko/flink 
fd-update-raw-and-managed-state-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4072.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4072


commit 8103fc28d10d131eb1273dba4b477c25ac278bf0
Author: Fokko Driesprong 
Date:   2017-06-04T14:08:44Z

Update managed state docs

Add an example of how to work with managed state in Scala




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6848) Extend the managed state docs with a Scala example

2017-06-04 Thread Fokko Driesprong (JIRA)
Fokko Driesprong created FLINK-6848:
---

 Summary: Extend the managed state docs with a Scala example
 Key: FLINK-6848
 URL: https://issues.apache.org/jira/browse/FLINK-6848
 Project: Flink
  Issue Type: Bug
Reporter: Fokko Driesprong


Hi all,

It would be nice to add a Scala example code snippet in the Managed state docs. 
This makes it a bit easier to start using managed state in Scala. The code is 
tested and works.

Kind regards,
Fokko



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6848) Extend the managed state docs with a Scala example

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036288#comment-16036288
 ] 

ASF GitHub Bot commented on FLINK-6848:
---

GitHub user Fokko opened a pull request:

https://github.com/apache/flink/pull/4072

[FLINK-6848] Update managed state docs

Hi guys,

I would like to add an example of how to work with managed state in Scala. 
The code is tested locally and might be a nice addition to the docs.

Cheers,
Fokko Driesprong

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Fokko/flink 
fd-update-raw-and-managed-state-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4072.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4072


commit 8103fc28d10d131eb1273dba4b477c25ac278bf0
Author: Fokko Driesprong 
Date:   2017-06-04T14:08:44Z

Update managed state docs

Add an example of how to work with managed state in Scala




> Extend the managed state docs with a Scala example
> --
>
> Key: FLINK-6848
> URL: https://issues.apache.org/jira/browse/FLINK-6848
> Project: Flink
>  Issue Type: Bug
>Reporter: Fokko Driesprong
>
> Hi all,
> It would be nice to add a Scala example code snippet in the Managed state 
> docs. This makes it a bit easier to start using managed state in Scala. The 
> code is tested and works.
> Kind regards,
> Fokko



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6803) Add test for PojoSerializer when Pojo changes

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036283#comment-16036283
 ] 

ASF GitHub Bot commented on FLINK-6803:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4044
  
I'll merge this (will address my own comments) together with the other 
pending `PojoSerializer` fixes.


> Add test for PojoSerializer when Pojo changes
> -
>
> Key: FLINK-6803
> URL: https://issues.apache.org/jira/browse/FLINK-6803
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> We should add test cases for the {{PojoSerializer}} when the underlying Pojo 
> type changes in order to test the proper behaviour of the serializer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4044: [FLINK-6803] [tests] Add test for PojoSerializer state up...

2017-06-04 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4044
  
I'll merge this (will address my own comments) together with the other 
pending `PojoSerializer` fixes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6803) Add test for PojoSerializer when Pojo changes

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036282#comment-16036282
 ] 

ASF GitHub Bot commented on FLINK-6803:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4044#discussion_r120008474
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java
 ---
@@ -1804,53 +1805,44 @@ public void testKeyGroupSnapshotRestore() throws Exception {
	}

	@Test
-	public void testRestoreWithWrongKeySerializer() {
-		try {
-			CheckpointStreamFactory streamFactory = createStreamFactory();
+	public void testRestoreWithWrongKeySerializer() throws Exception {
+		CheckpointStreamFactory streamFactory = createStreamFactory();

-			// use an IntSerializer at first
-			AbstractKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);
+		// use an IntSerializer at first
+		AbstractKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);

-			ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>("id", String.class);
+		ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>("id", String.class);

-			ValueState<String> state = backend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, kvId);
+		ValueState<String> state = backend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, kvId);

-			// write some state
-			backend.setCurrentKey(1);
-			state.update("1");
-			backend.setCurrentKey(2);
-			state.update("2");
+		// write some state
+		backend.setCurrentKey(1);
+		state.update("1");
+		backend.setCurrentKey(2);
+		state.update("2");

-			// draw a snapshot
-			KeyedStateHandle snapshot1 = FutureUtil.runIfNotDoneAndGet(backend.snapshot(682375462378L, 2, streamFactory, CheckpointOptions.forFullCheckpoint()));
+		// draw a snapshot
+		KeyedStateHandle snapshot1 = FutureUtil.runIfNotDoneAndGet(backend.snapshot(682375462378L, 2, streamFactory, CheckpointOptions.forFullCheckpoint()));

-			backend.dispose();
+		backend.dispose();

-			// restore with the wrong key serializer
-			try {
-				restoreKeyedBackend(DoubleSerializer.INSTANCE, snapshot1);
+		// restore with the wrong key serializer
+		try {
+			restoreKeyedBackend(DoubleSerializer.INSTANCE, snapshot1);

-				fail("should recognize wrong key serializer");
-			} catch (RuntimeException e) {
-				if (!e.getMessage().contains("The new key serializer is not compatible")) {
-					fail("wrong exception " + e);
-				}
-				// expected
-			}
-		}
-		catch (Exception e) {
-			e.printStackTrace();
-			fail(e.getMessage());
+			fail("should recognize wrong key serializer");
+		} catch (StateMigrationException ignored) {
--- End diff --

This change is failing because the `RocksDBKeyedStateBackend` is not 
throwing the new exception when checking key serializers.


> Add test for PojoSerializer when Pojo changes
> -
>
> Key: FLINK-6803
> URL: https://issues.apache.org/jira/browse/FLINK-6803
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> We should add test cases for the {{PojoSerializer}} when the underlying Pojo 
> type changes in order to test the proper behaviour of the serializer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6803) Add test for PojoSerializer when Pojo changes

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036281#comment-16036281
 ] 

ASF GitHub Bot commented on FLINK-6803:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4044#discussion_r120007888
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/util/AbstractStreamOperatorTestHarness.java
 ---
@@ -192,7 +187,7 @@ public StreamStatus getStreamStatus() {

	when(mockTask.getTaskConfiguration()).thenReturn(underlyingConfig);
	when(mockTask.getEnvironment()).thenReturn(environment);
	when(mockTask.getExecutionConfig()).thenReturn(executionConfig);
-	when(mockTask.getUserCodeClassLoader()).thenReturn(this.getClass().getClassLoader());
+	when(mockTask.getUserCodeClassLoader()).thenReturn(environment.getUserClassLoader());
--- End diff --

Some tests are failing because of this change.

I think the problem is because the given environment may also be a mock 
whose stubbing isn't completed yet, leading to a 
`org.mockito.exceptions.misusing.UnfinishedStubbingException`.

We can avoid that by doing this:
```
ClassLoader cl = environment.getUserClassLoader();
when(mockTask.getUserCodeClassLoader()).thenReturn(cl);
```


> Add test for PojoSerializer when Pojo changes
> -
>
> Key: FLINK-6803
> URL: https://issues.apache.org/jira/browse/FLINK-6803
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> We should add test cases for the {{PojoSerializer}} when the underlying Pojo 
> type changes in order to test the proper behaviour of the serializer.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4044: [FLINK-6803] [tests] Add test for PojoSerializer s...

2017-06-04 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4044#discussion_r120008474
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java
 ---
@@ -1804,53 +1805,44 @@ public void testKeyGroupSnapshotRestore() throws Exception {
	}

	@Test
-	public void testRestoreWithWrongKeySerializer() {
-		try {
-			CheckpointStreamFactory streamFactory = createStreamFactory();
+	public void testRestoreWithWrongKeySerializer() throws Exception {
+		CheckpointStreamFactory streamFactory = createStreamFactory();

-			// use an IntSerializer at first
-			AbstractKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);
+		// use an IntSerializer at first
+		AbstractKeyedStateBackend<Integer> backend = createKeyedBackend(IntSerializer.INSTANCE);

-			ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>("id", String.class);
+		ValueStateDescriptor<String> kvId = new ValueStateDescriptor<>("id", String.class);

-			ValueState<String> state = backend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, kvId);
+		ValueState<String> state = backend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, kvId);

-			// write some state
-			backend.setCurrentKey(1);
-			state.update("1");
-			backend.setCurrentKey(2);
-			state.update("2");
+		// write some state
+		backend.setCurrentKey(1);
+		state.update("1");
+		backend.setCurrentKey(2);
+		state.update("2");

-			// draw a snapshot
-			KeyedStateHandle snapshot1 = FutureUtil.runIfNotDoneAndGet(backend.snapshot(682375462378L, 2, streamFactory, CheckpointOptions.forFullCheckpoint()));
+		// draw a snapshot
+		KeyedStateHandle snapshot1 = FutureUtil.runIfNotDoneAndGet(backend.snapshot(682375462378L, 2, streamFactory, CheckpointOptions.forFullCheckpoint()));

-			backend.dispose();
+		backend.dispose();

-			// restore with the wrong key serializer
-			try {
-				restoreKeyedBackend(DoubleSerializer.INSTANCE, snapshot1);
+		// restore with the wrong key serializer
+		try {
+			restoreKeyedBackend(DoubleSerializer.INSTANCE, snapshot1);

-				fail("should recognize wrong key serializer");
-			} catch (RuntimeException e) {
-				if (!e.getMessage().contains("The new key serializer is not compatible")) {
-					fail("wrong exception " + e);
-				}
-				// expected
-			}
-		}
-		catch (Exception e) {
-			e.printStackTrace();
-			fail(e.getMessage());
+			fail("should recognize wrong key serializer");
+		} catch (StateMigrationException ignored) {
--- End diff --

This change is failing because the `RocksDBKeyedStateBackend` is not 
throwing the new exception when checking key serializers.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4044: [FLINK-6803] [tests] Add test for PojoSerializer s...

2017-06-04 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4044#discussion_r120007888
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/util/AbstractStreamOperatorTestHarness.java
 ---
@@ -192,7 +187,7 @@ public StreamStatus getStreamStatus() {

	when(mockTask.getTaskConfiguration()).thenReturn(underlyingConfig);
	when(mockTask.getEnvironment()).thenReturn(environment);
	when(mockTask.getExecutionConfig()).thenReturn(executionConfig);
-	when(mockTask.getUserCodeClassLoader()).thenReturn(this.getClass().getClassLoader());
+	when(mockTask.getUserCodeClassLoader()).thenReturn(environment.getUserClassLoader());
--- End diff --

Some tests are failing because of this change.

I think the problem is because the given environment may also be a mock 
whose stubbing isn't completed yet, leading to a 
`org.mockito.exceptions.misusing.UnfinishedStubbingException`.

We can avoid that by doing this:
```
ClassLoader cl = environment.getUserClassLoader();
when(mockTask.getUserCodeClassLoader()).thenReturn(cl);
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-6837) Fix a small error message bug, And improve some message info.

2017-06-04 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-6837.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed in d42225f1d08fbfc3badf1e840f1d94a873229c53

> Fix a small error message bug, And improve some message info.
> -
>
> Key: FLINK-6837
> URL: https://issues.apache.org/jira/browse/FLINK-6837
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
>
> Fix a variable reference error, and improve some error message info.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6837) Fix a small error message bug, And improve some message info.

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036277#comment-16036277
 ] 

ASF GitHub Bot commented on FLINK-6837:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4060


> Fix a small error message bug, And improve some message info.
> -
>
> Key: FLINK-6837
> URL: https://issues.apache.org/jira/browse/FLINK-6837
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
> Fix For: 1.4.0
>
>
> Fix a variable reference error, and improve some error message info.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #4060: [FLINK-6837][table]Fix a test case name error, a s...

2017-06-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4060


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Summary: Add Some built-in Scalar Function supported  (was: Add Some 
build-in Scalar Function)

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>
> In this JIRA, we will create sub-tasks to add specific scalar functions, 
> such as the mathematical function {{LOG}}, the date function {{DATEADD}}, 
> and the string function {{LPAD}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6811) Add TIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Description: 
* Syntax
timestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT timestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]

  was:
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]


> Add TIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> timestampAdd (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a time.
> * Example
> SELECT timestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
> 00:00:00.000
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
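The month arithmetic in the example above (2017-05-31 plus one month yielding 2017-06-30) can be reproduced with plain `java.time`, which applies the same clamp-to-last-valid-day rule. This is only an illustration of the intended semantics, not Flink or Calcite code:

```java
import java.time.LocalDate;

// Illustrates TIMESTAMPADD(month, 1, '2017-05-31') semantics with java.time:
// adding a month to May 31 clamps to June 30, the last valid day of June.
public class TimestampAddSketch {
    public static void main(String[] args) {
        LocalDate result = LocalDate.parse("2017-05-31").plusMonths(1);
        System.out.println(result); // 2017-06-30
    }
}
```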


[jira] [Commented] (FLINK-6845) Cleanup "println(StreamITCase.testResults)" call in test case

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036268#comment-16036268
 ] 

ASF GitHub Bot commented on FLINK-6845:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4071
  
+1 to merge


> Cleanup  "println(StreamITCase.testResults)" call in test case
> --
>
> Key: FLINK-6845
> URL: https://issues.apache.org/jira/browse/FLINK-6845
> Project: Flink
>  Issue Type: Test
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Cleanup  "println(StreamITCase.testResults)" call in test case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #4071: [FLINK-6845][table] Cleanup "println(StreamITCase.testRes...

2017-06-04 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4071
  
+1 to merge


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-6811) Add TIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Summary: Add TIMESTAMPADD supported in SQL  (was: Add TESTTIMESTAMPADD 
supported in SQL)

> Add TIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> testTimestampAdd (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a time.
> * Example
> SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
> 00:00:00.000
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6846) Add TIMESTAMPADD supported in TableAPI

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6846:
---
Summary: Add TIMESTAMPADD supported in TableAPI  (was: Add TESTTIMESTAMPADD 
supported in TableAPI)

> Add TIMESTAMPADD supported in TableAPI
> --
>
> Key: FLINK-6846
> URL: https://issues.apache.org/jira/browse/FLINK-6846
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> See FLINK-6811



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6847) Add TIMESTAMPDIFF supported in TableAPI

2017-06-04 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6847:
--

 Summary: Add TIMESTAMPDIFF supported in TableAPI
 Key: FLINK-6847
 URL: https://issues.apache.org/jira/browse/FLINK-6847
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


see FLINK-6813



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Description: 
* Syntax
TIMESTAMPDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary 
crossed.
-startdate
Is an expression that can be resolved to a time, date.
-enddate
Same as startdate.
* Example
SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 
00:00:00.000')  from tab; --> 2

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]

  was:
* Syntax
DATEDIFF ( datepart , startdate , enddate )
-datepart
Is the part of startdate and enddate that specifies the type of boundary 
crossed.
-startdate
Is an expression that can be resolved to a time, date.
-enddate
Same with startdate.
* Example
SELECT DATEDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 
00:00:00.000')  from tab; --> 2


> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> TIMESTAMPDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary 
> crossed.
> -startdate
> Is an expression that can be resolved to a time, date.
> -enddate
> Same as startdate.
> * Example
> SELECT TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 
> 00:00:00.000')  from tab; --> 2
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
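The example above returns 2 because TIMESTAMPDIFF counts datepart boundaries crossed, not full intervals elapsed. The distinction can be illustrated with plain `java.time` (illustrative only, not Flink or Calcite code):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Illustrates TIMESTAMPDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 00:00:00.000'):
// two year boundaries (2016 and 2017) are crossed, even though barely one
// full year elapses between the two instants.
public class TimestampDiffSketch {
    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.parse("2015-12-31T23:59:59.999");
        LocalDateTime end = LocalDateTime.parse("2017-01-01T00:00:00.000");

        int boundariesCrossed = end.getYear() - start.getYear();
        System.out.println(boundariesCrossed);                // 2 (boundary-crossing count)
        System.out.println(ChronoUnit.YEARS.between(start, end)); // 1 (full years elapsed)
    }
}
```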


[jira] [Updated] (FLINK-6811) Add TESTTIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Description: 
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000

See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]

  was:
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000
See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]


> Add TESTTIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> testTimestampAdd (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a date or time value.
> * Example
> SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
> 00:00:00.000
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]
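A detail worth noting in the example above is the day clamp: adding one month to 2017-05-31 yields 2017-06-30 because June has only 30 days. A small Python sketch of that clamping behavior, offered as an illustration of the intended semantics rather than the Flink/Calcite implementation:

```python
import calendar
from datetime import date

def timestamp_add_months(d, n):
    # Advance d by n months, clamping the day-of-month to the last
    # valid day of the target month (e.g. May 31 + 1 month -> Jun 30).
    months = d.month - 1 + n
    year = d.year + months // 12
    month = months % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(timestamp_add_months(date(2017, 5, 31), 1))  # 2017-06-30
```

The same clamp handles leap years, e.g. adding one month to 2017-01-31 gives 2017-02-28.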





[jira] [Updated] (FLINK-6811) Add TESTTIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Description: 
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a date or time value.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000
See more: 
[https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]

  was:
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000


> Add TESTTIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> testTimestampAdd (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a date or time value.
> * Example
> SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
> 00:00:00.000
> See more: 
> [https://issues.apache.org/jira/browse/CALCITE-1827|https://issues.apache.org/jira/browse/CALCITE-1827]





[jira] [Updated] (FLINK-6813) Add TIMESTAMPDIFF supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Summary: Add TIMESTAMPDIFF supported in SQL  (was: Add DATEDIFF supported 
in SQL)

> Add TIMESTAMPDIFF supported in SQL
> --
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> DATEDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary 
> crossed.
> -startdate
> Is an expression that can be resolved to a time or date value.
> -enddate
> Same as startdate.
> * Example
> SELECT DATEDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 
> 00:00:00.000')  from tab; --> 2





[jira] [Updated] (FLINK-6811) Add TESTTIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Description: 
* Syntax
testTimestampAdd (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a date or time value.

* Example
SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
00:00:00.000

  was:
* Syntax
DATEADD (datepart , number , date )  
-datepart
Is the part of date to which an integer number is added. 
-number
Is an expression that can be resolved to an int that is added to a datepart of 
date
-date
Is an expression that can be resolved to a time.

* Example
SELECT DATEADD(month, 1, '2017-05-31')  from tab; --> 2017-06-30 00:00:00.000


> Add TESTTIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> testTimestampAdd (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a date or time value.
> * Example
> SELECT testTimestampAdd(month, 1, '2017-05-31')  from tab; --> 2017-06-30 
> 00:00:00.000





[jira] [Updated] (FLINK-6811) Add TESTTIMESTAMPADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Summary: Add TESTTIMESTAMPADD supported in SQL  (was: Add DATEADD supported 
in SQL)

> Add TESTTIMESTAMPADD supported in SQL
> -
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> DATEADD (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a date or time value.
> * Example
> SELECT DATEADD(month, 1, '2017-05-31')  from tab; --> 2017-06-30 00:00:00.000





[jira] [Created] (FLINK-6846) Add TESTTIMESTAMPADD supported in TableAPI

2017-06-04 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6846:
--

 Summary: Add TESTTIMESTAMPADD supported in TableAPI
 Key: FLINK-6846
 URL: https://issues.apache.org/jira/browse/FLINK-6846
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


See FLINK-6811





[jira] [Updated] (FLINK-6813) Add DATEDIFF supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6813:
---
Summary: Add DATEDIFF supported in SQL  (was: Add DATEDIFF as build-in 
scalar function)

> Add DATEDIFF supported in SQL
> -
>
> Key: FLINK-6813
> URL: https://issues.apache.org/jira/browse/FLINK-6813
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> DATEDIFF ( datepart , startdate , enddate )
> -datepart
> Is the part of startdate and enddate that specifies the type of boundary 
> crossed.
> -startdate
> Is an expression that can be resolved to a time or date value.
> -enddate
> Same as startdate.
> * Example
> SELECT DATEDIFF(year, '2015-12-31 23:59:59.999', '2017-01-01 
> 00:00:00.000')  from tab; --> 2





[jira] [Updated] (FLINK-6811) Add DATEADD supported in SQL

2017-06-04 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6811:
---
Summary: Add DATEADD supported in SQL  (was: Add DATEADD as build-in scalar 
function)

> Add DATEADD supported in SQL
> 
>
> Key: FLINK-6811
> URL: https://issues.apache.org/jira/browse/FLINK-6811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> * Syntax
> DATEADD (datepart , number , date )  
> -datepart
> Is the part of date to which an integer number is added. 
> -number
> Is an expression that can be resolved to an int that is added to a datepart 
> of date
> -date
> Is an expression that can be resolved to a date or time value.
> * Example
> SELECT DATEADD(month, 1, '2017-05-31')  from tab; --> 2017-06-30 00:00:00.000





[jira] [Commented] (FLINK-6845) Cleanup "println(StreamITCase.testResults)" call in test case

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036265#comment-16036265
 ] 

ASF GitHub Bot commented on FLINK-6845:
---

GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4071

[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call …

- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call in test 
case")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6845-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4071.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4071


commit bd88fe729256b396b505666a1ff7d3df31f0ec05
Author: sunjincheng121 
Date:   2017-06-04T12:56:31Z

[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call in 
test case




> Cleanup  "println(StreamITCase.testResults)" call in test case
> --
>
> Key: FLINK-6845
> URL: https://issues.apache.org/jira/browse/FLINK-6845
> Project: Flink
>  Issue Type: Test
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Cleanup  "println(StreamITCase.testResults)" call in test case.





[GitHub] flink pull request #4071: [FLINK-6845][table] Cleanup "println(StreamITCase....

2017-06-04 Thread sunjincheng121
GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4071

[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call …

- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call in test 
case")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6845-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4071.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4071


commit bd88fe729256b396b505666a1ff7d3df31f0ec05
Author: sunjincheng121 
Date:   2017-06-04T12:56:31Z

[FLINK-6845][table] Cleanup "println(StreamITCase.testResults)" call in 
test case




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-6845) Cleanup "println(StreamITCase.testResults)" call in test case

2017-06-04 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6845:
--

 Summary: Cleanup  "println(StreamITCase.testResults)" call in test 
case
 Key: FLINK-6845
 URL: https://issues.apache.org/jira/browse/FLINK-6845
 Project: Flink
  Issue Type: Test
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng
Assignee: sunjincheng


Cleanup  "println(StreamITCase.testResults)" call in test case.





[jira] [Commented] (FLINK-6073) Support for SQL inner queries for proctime

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036251#comment-16036251
 ] 

ASF GitHub Bot commented on FLINK-6073:
---

Github user rtudoran commented on a diff in the pull request:

https://github.com/apache/flink/pull/3609#discussion_r120007611
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamJoin.scala
 ---
@@ -0,0 +1,241 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.datastream
+
+import org.apache.calcite.plan.{ RelOptCluster, RelTraitSet }
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.core.AggregateCall
+import org.apache.calcite.rel.{ RelNode, RelWriter, BiRel }
+import org.apache.flink.api.java.tuple.Tuple
+import org.apache.flink.streaming.api.datastream.{ AllWindowedStream, 
DataStream, KeyedStream, WindowedStream }
+import org.apache.flink.streaming.api.windowing.assigners._
+import org.apache.flink.streaming.api.windowing.time.Time
+import org.apache.flink.streaming.api.windowing.windows.{ Window => 
DataStreamWindow }
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.calcite.FlinkRelBuilder.NamedWindowProperty
+import org.apache.flink.table.calcite.FlinkTypeFactory
+import org.apache.flink.table.expressions._
+import org.apache.flink.table.plan.logical._
+import org.apache.flink.table.plan.nodes.CommonAggregate
+import org.apache.flink.table.plan.nodes.datastream.DataStreamAggregate._
+import org.apache.flink.table.runtime.aggregate.AggregateUtil._
+import org.apache.flink.table.runtime.aggregate._
+import org.apache.flink.table.typeutils.TypeCheckUtils.isTimeInterval
+import org.apache.flink.table.typeutils.{ RowIntervalTypeInfo, 
TimeIntervalTypeInfo }
+import org.apache.flink.types.Row
+import org.apache.calcite.rel.logical.LogicalJoin
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.flink.table.api.TableException
+import org.apache.flink.api.java.functions.KeySelector
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.streaming.api.windowing.triggers.Trigger
+import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
+import 
org.apache.flink.streaming.api.windowing.triggers.Trigger.TriggerContext
+import org.apache.flink.streaming.api.windowing.triggers.TriggerResult
+import 
org.apache.flink.streaming.api.datastream.CoGroupedStreams.TaggedUnion
+import org.apache.flink.streaming.api.windowing.evictors.Evictor
+import 
org.apache.flink.streaming.api.windowing.evictors.Evictor.EvictorContext
+import java.lang.Iterable
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord
+import 
org.apache.flink.streaming.runtime.operators.windowing.TimestampedValue
+import org.apache.flink.api.common.functions.RichFlatJoinFunction
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.api.common.state.ValueState
+import org.apache.flink.api.common.state.ValueStateDescriptor
+import org.apache.flink.util.Collector
+
+class DataStreamJoin(
+  calc: LogicalJoin,
+  cluster: RelOptCluster,
+  traitSet: RelTraitSet,
+  inputLeft: RelNode,
+  inputRight: RelNode,
+  rowType: RelDataType,
+  description: String)
+extends BiRel(cluster, traitSet, inputLeft, inputRight) with 
DataStreamRel {
+
+  override def deriveRowType(): RelDataType = rowType
+
+  override def copy(traitSet: RelTraitSet, inputs: 
java.util.List[RelNode]): RelNode = {
+new DataStreamJoin(
+  calc,
+  cluster,
+  traitSet,
+  inputs.get(0),
+  inputs.get(1),
+  rowType,
+  description + calc.getId())
+  }
+
+  override def toString: String = {
+s"Join(${
+  if (!calc.getCon

[GitHub] flink pull request #3609: [FLINK-6073] - Support for SQL inner queries for p...

2017-06-04 Thread rtudoran
Github user rtudoran commented on a diff in the pull request:

https://github.com/apache/flink/pull/3609#discussion_r120007611
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamJoin.scala
 ---
@@ -0,0 +1,241 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.nodes.datastream
+
+import org.apache.calcite.plan.{ RelOptCluster, RelTraitSet }
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.core.AggregateCall
+import org.apache.calcite.rel.{ RelNode, RelWriter, BiRel }
+import org.apache.flink.api.java.tuple.Tuple
+import org.apache.flink.streaming.api.datastream.{ AllWindowedStream, 
DataStream, KeyedStream, WindowedStream }
+import org.apache.flink.streaming.api.windowing.assigners._
+import org.apache.flink.streaming.api.windowing.time.Time
+import org.apache.flink.streaming.api.windowing.windows.{ Window => 
DataStreamWindow }
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.calcite.FlinkRelBuilder.NamedWindowProperty
+import org.apache.flink.table.calcite.FlinkTypeFactory
+import org.apache.flink.table.expressions._
+import org.apache.flink.table.plan.logical._
+import org.apache.flink.table.plan.nodes.CommonAggregate
+import org.apache.flink.table.plan.nodes.datastream.DataStreamAggregate._
+import org.apache.flink.table.runtime.aggregate.AggregateUtil._
+import org.apache.flink.table.runtime.aggregate._
+import org.apache.flink.table.typeutils.TypeCheckUtils.isTimeInterval
+import org.apache.flink.table.typeutils.{ RowIntervalTypeInfo, 
TimeIntervalTypeInfo }
+import org.apache.flink.types.Row
+import org.apache.calcite.rel.logical.LogicalJoin
+import org.apache.calcite.rel.core.JoinRelType
+import org.apache.flink.table.api.TableException
+import org.apache.flink.api.java.functions.KeySelector
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.streaming.api.windowing.triggers.Trigger
+import org.apache.flink.streaming.api.windowing.windows.GlobalWindow
+import 
org.apache.flink.streaming.api.windowing.triggers.Trigger.TriggerContext
+import org.apache.flink.streaming.api.windowing.triggers.TriggerResult
+import 
org.apache.flink.streaming.api.datastream.CoGroupedStreams.TaggedUnion
+import org.apache.flink.streaming.api.windowing.evictors.Evictor
+import 
org.apache.flink.streaming.api.windowing.evictors.Evictor.EvictorContext
+import java.lang.Iterable
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord
+import 
org.apache.flink.streaming.runtime.operators.windowing.TimestampedValue
+import org.apache.flink.api.common.functions.RichFlatJoinFunction
+import org.apache.flink.configuration.Configuration
+import org.apache.flink.api.common.state.ValueState
+import org.apache.flink.api.common.state.ValueStateDescriptor
+import org.apache.flink.util.Collector
+
+class DataStreamJoin(
+  calc: LogicalJoin,
+  cluster: RelOptCluster,
+  traitSet: RelTraitSet,
+  inputLeft: RelNode,
+  inputRight: RelNode,
+  rowType: RelDataType,
+  description: String)
+extends BiRel(cluster, traitSet, inputLeft, inputRight) with 
DataStreamRel {
+
+  override def deriveRowType(): RelDataType = rowType
+
+  override def copy(traitSet: RelTraitSet, inputs: 
java.util.List[RelNode]): RelNode = {
+new DataStreamJoin(
+  calc,
+  cluster,
+  traitSet,
+  inputs.get(0),
+  inputs.get(1),
+  rowType,
+  description + calc.getId())
+  }
+
+  override def toString: String = {
+s"Join(${
+  if (!calc.getCondition.isAlwaysTrue()) {
+s"condition: (${calc.getCondition}), "
+  } else {
+""
+  }
+}left: ($inputLeft), right($inputRight))"
+  }
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
 

[jira] [Assigned] (FLINK-6802) PojoSerializer does not create ConvertDeserializer for removed/added fields

2017-06-04 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-6802:
--

Assignee: Tzu-Li (Gordon) Tai

> PojoSerializer does not create ConvertDeserializer for removed/added fields
> ---
>
> Key: FLINK-6802
> URL: https://issues.apache.org/jira/browse/FLINK-6802
> Project: Flink
>  Issue Type: Improvement
>  Components: Type Serialization System
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>Assignee: Tzu-Li (Gordon) Tai
>
> When calling {{PojoSerializer#ensureCompatibility}}, the PojoSerializer 
> checks for compatibility. Currently, the method only construct a 
> ConvertDeserializer if the number of old and new pojo fields is exactly the 
> same. However, given the {{TypeSerializerConfigurationSnapshots}} and the 
> current set of fields, it should also be possible to construct a 
> ConvertDeserializer if new fields were added or old fields removed from the 
> Pojo. I think that we should add this functionality.





[jira] [Commented] (FLINK-6837) Fix a small error message bug, And improve some message info.

2017-06-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036200#comment-16036200
 ] 

ASF GitHub Bot commented on FLINK-6837:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4060
  
The error in Travis CI does not seem to be related to this PR; it is filed in 
https://issues.apache.org/jira/browse/FLINK-6836. So I will merge this...


> Fix a small error message bug, And improve some message info.
> -
>
> Key: FLINK-6837
> URL: https://issues.apache.org/jira/browse/FLINK-6837
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Fix a variable reference error, and improve some error message info.





[GitHub] flink issue #4060: [FLINK-6837][table]Fix a test case name error, a small er...

2017-06-04 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4060
  
The error in Travis CI does not seem to be related to this PR; it is filed in 
https://issues.apache.org/jira/browse/FLINK-6836. So I will merge this...

