[jira] [Commented] (FLINK-6379) Implement FLIP-6 Mesos Resource Manager

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067846#comment-16067846
 ] 

ASF GitHub Bot commented on FLINK-6379:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3942


> Implement FLIP-6 Mesos Resource Manager
> ---
>
> Key: FLINK-6379
> URL: https://issues.apache.org/jira/browse/FLINK-6379
> Project: Flink
>  Issue Type: Sub-task
>  Components: Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>
> Given the new ResourceManager of FLIP-6, implement a new MesosResourceManager.
> The minimal effort would be to implement a new resource manager while 
> continuing to use the existing local actors (launch coordinator, task monitor, 
> etc.) that implement the various FSMs associated with Mesos scheduling. 
> The Fenzo library would continue to solve the packing problem of matching 
> resource offers to slot requests.
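
To make the packing problem concrete, here is a self-contained toy sketch in Java of matching offers to slot requests with first-fit (an illustration only; Fenzo's actual API and placement logic are not shown):

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy first-fit matcher for the packing problem described above.
public class OfferMatchingSketch {

    static class Offer {
        final String id; final double cpus; final int memMb;
        Offer(String id, double cpus, int memMb) { this.id = id; this.cpus = cpus; this.memMb = memMb; }
    }

    static class SlotRequest {
        final String id; final double cpus; final int memMb;
        SlotRequest(String id, double cpus, int memMb) { this.id = id; this.cpus = cpus; this.memMb = memMb; }
    }

    /** Assigns each slot request to the first remaining offer large enough to hold it. */
    static Map<String, String> matchFirstFit(List<Offer> offers, List<SlotRequest> requests) {
        Map<String, String> assignment = new LinkedHashMap<>(); // request id -> offer id
        List<Offer> remaining = new ArrayList<>(offers);
        for (SlotRequest r : requests) {
            for (Iterator<Offer> it = remaining.iterator(); it.hasNext(); ) {
                Offer o = it.next();
                if (o.cpus >= r.cpus && o.memMb >= r.memMb) {
                    assignment.put(r.id, o.id);
                    it.remove(); // each offer is consumed by a single request in this toy model
                    break;
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Offer> offers = new ArrayList<>();
        offers.add(new Offer("offer-1", 2.0, 4096));
        offers.add(new Offer("offer-2", 1.0, 1024));
        List<SlotRequest> requests = new ArrayList<>();
        requests.add(new SlotRequest("slot-1", 1.0, 2048));
        requests.add(new SlotRequest("slot-2", 1.0, 1024));
        System.out.println(matchFirstFit(offers, requests)); // {slot-1=offer-1, slot-2=offer-2}
    }
}
{code}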





[jira] [Resolved] (FLINK-6379) Implement FLIP-6 Mesos Resource Manager

2017-06-28 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-6379.
--
Resolution: Fixed

Fixed via 3fe27ac0796bf7fffce0901954bc2eec3eee02be

> Implement FLIP-6 Mesos Resource Manager
> ---
>
> Key: FLINK-6379
> URL: https://issues.apache.org/jira/browse/FLINK-6379
> Project: Flink
>  Issue Type: Sub-task
>  Components: Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>
> Given the new ResourceManager of FLIP-6, implement a new MesosResourceManager.
> The minimal effort would be to implement a new resource manager while 
> continuing to use the existing local actors (launch coordinator, task monitor, 
> etc.) that implement the various FSMs associated with Mesos scheduling. 
> The Fenzo library would continue to solve the packing problem of matching 
> resource offers to slot requests.





[GitHub] flink pull request #3942: FLINK-6379 Mesos ResourceManager (FLIP-6)

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3942




[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS support in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067811#comment-16067811
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4138
  
+1 to merge


> Add CONCAT/CONCAT_WS support in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) returns the string that results from concatenating the 
> arguments. It may have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...)
> * Arguments
> ** str1,str2,... - the strings to concatenate
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL|https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments; 
> it is added between the strings to be concatenated. The separator can be a 
> string, as can the rest of the arguments. If the separator is NULL, the 
> result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator - the separator string
> ** str1,str2,... - the strings to concatenate
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second name,Last Name'
> * See more:
> ** [MySQL|https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]
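
To pin down the NULL semantics described above, a minimal Java sketch (an illustration of the intended behavior, not Flink's implementation; note that, per the MySQL docs, CONCAT_WS skips NULL arguments rather than propagating them):

{code}
// Illustration of the NULL semantics described above; not Flink's implementation.
public class ConcatSemanticsSketch {

    /** CONCAT: NULL if any argument is NULL, otherwise the concatenation. */
    static String concat(String... args) {
        StringBuilder sb = new StringBuilder();
        for (String a : args) {
            if (a == null) {
                return null;
            }
            sb.append(a);
        }
        return sb.toString();
    }

    /** CONCAT_WS: NULL only if the separator is NULL; NULL arguments are skipped. */
    static String concatWs(String separator, String... args) {
        if (separator == null) {
            return null;
        }
        StringBuilder sb = new StringBuilder();
        for (String a : args) {
            if (a == null) {
                continue; // skipped, not propagated
            }
            if (sb.length() > 0) {
                sb.append(separator);
            }
            sb.append(a);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("F", "lin", "k"));                        // Flink
        System.out.println(concat("M", null, "L"));                         // null
        System.out.println(concatWs(",", "First name", null, "Last Name")); // First name,Last Name
    }
}
{code}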





[GitHub] flink issue #4138: [FLINK-6925][table] Add CONCAT/CONCAT_WS support in SQL

2017-06-28 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4138
  
+1 to merge




[jira] [Commented] (FLINK-6842) Uncomment or remove code in HadoopFileSystem

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067799#comment-16067799
 ] 

ASF GitHub Bot commented on FLINK-6842:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4219

[FLINK-6842] [runtime] Uncomment and activate code in HadoopFileSystem

Uncomment and activate the ```getFileSystemClass``` method in HadoopFileSystem.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6842

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4219.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4219


commit 91a7e6a14e827fe1c071fedd6ba3a37c7402b1f4
Author: zhangminglei 
Date:   2017-06-29T05:50:51Z

[FLINK-6842] [runtime] Uncomment and activate code in HadoopFileSystem




> Uncomment or remove code in HadoopFileSystem
> 
>
> Key: FLINK-6842
> URL: https://issues.apache.org/jira/browse/FLINK-6842
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Minor
>
> I've found the following code in 
> {{HadoopFileSystem#getHadoopWrapperClassNameForFileSystem}}:
> {code}
> Configuration hadoopConf = getHadoopConfiguration();
> Class clazz;
> // We can activate this block once we drop Hadoop1 support (only hd2 has the getFileSystemClass-method)
> //try {
> //    clazz = org.apache.hadoop.fs.FileSystem.getFileSystemClass(scheme, hadoopConf);
> //} catch (IOException e) {
> //    LOG.info("Flink could not load the Hadoop File system implementation for scheme "+scheme);
> //    return null;
> //}
> clazz = hadoopConf.getClass("fs." + scheme + ".impl", null, org.apache.hadoop.fs.FileSystem.class);
> if (clazz != null && LOG.isDebugEnabled()) {
>     LOG.debug("Flink supports {} with the Hadoop file system wrapper, impl {}", scheme, clazz);
> }
> return clazz;
> {code}
> Since we don't support Hadoop 1 anymore, the commented code should either be 
> activated or removed.
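
For reference, activating the commented block would amount to roughly the following (a sketch derived only from the commented code quoted above; the merged change may differ):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch of the activated lookup, derived from the commented block quoted above.
class HadoopFileSystemClassLookup {

    static Class<? extends FileSystem> lookupFileSystemClass(String scheme, Configuration hadoopConf) {
        try {
            // getFileSystemClass is available since Hadoop 2, which is why
            // the block was commented out while Hadoop 1 was still supported.
            return FileSystem.getFileSystemClass(scheme, hadoopConf);
        } catch (IOException e) {
            // The commented block logged and returned null on failure.
            return null;
        }
    }
}
{code}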





[GitHub] flink pull request #4219: [FLINK-6842] [runtime] Uncomment and activate code...

2017-06-28 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4219

[FLINK-6842] [runtime] Uncomment and activate code in HadoopFileSystem

Uncomment and activate the ```getFileSystemClass``` method in HadoopFileSystem.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6842

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4219.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4219


commit 91a7e6a14e827fe1c071fedd6ba3a37c7402b1f4
Author: zhangminglei 
Date:   2017-06-29T05:50:51Z

[FLINK-6842] [runtime] Uncomment and activate code in HadoopFileSystem






[GitHub] flink pull request #4218: [FLINK-5842] [runtime] Uncomment and activate code...

2017-06-28 Thread zhangminglei
Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4218




[jira] [Commented] (FLINK-5842) Wrong 'since' version for ElasticSearch 5.x connector

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067794#comment-16067794
 ] 

ASF GitHub Bot commented on FLINK-5842:
---

Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4218


> Wrong 'since' version for ElasticSearch 5.x connector
> -
>
> Key: FLINK-5842
> URL: https://issues.apache.org/jira/browse/FLINK-5842
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Streaming Connectors
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Patrick Lucas
> Fix For: 1.3.0
>
>
> The documentation claims that ElasticSearch 5.x is supported since Flink 
> 1.2.0 which is not true, as the support was merged after 1.2.0.





[jira] [Commented] (FLINK-5842) Wrong 'since' version for ElasticSearch 5.x connector

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067793#comment-16067793
 ] 

ASF GitHub Bot commented on FLINK-5842:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4218

[FLINK-5842] [runtime] Uncomment and activate code in HadoopFileSystem

Uncomment and activate the Hadoop 2 ```getFileSystemClass``` method.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6842

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4218


commit 1749224055b0af9f3046780159c45063c081d659
Author: zhangminglei 
Date:   2017-06-29T05:50:51Z

[FLINK-5842] [runtime] Uncomment and activate code in HadoopFileSystem




> Wrong 'since' version for ElasticSearch 5.x connector
> -
>
> Key: FLINK-5842
> URL: https://issues.apache.org/jira/browse/FLINK-5842
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Streaming Connectors
>Affects Versions: 1.3.0
>Reporter: Dawid Wysakowicz
>Assignee: Patrick Lucas
> Fix For: 1.3.0
>
>
> The documentation claims that ElasticSearch 5.x is supported since Flink 
> 1.2.0 which is not true, as the support was merged after 1.2.0.





[GitHub] flink pull request #4218: [FLINK-5842] [runtime] Uncomment and activate code...

2017-06-28 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4218

[FLINK-5842] [runtime] Uncomment and activate code in HadoopFileSystem

Uncomment and activate the Hadoop 2 ```getFileSystemClass``` method.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6842

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4218


commit 1749224055b0af9f3046780159c45063c081d659
Author: zhangminglei 
Date:   2017-06-29T05:50:51Z

[FLINK-5842] [runtime] Uncomment and activate code in HadoopFileSystem






[jira] [Closed] (FLINK-6890) flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)

2017-06-28 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-6890.
--
Resolution: Not A Problem

> flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)
> --
>
> Key: FLINK-6890
> URL: https://issues.apache.org/jira/browse/FLINK-6890
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Tzu-Li (Gordon) Tai
>
> See original discussion on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Guava-version-conflict-td13561.html.
> Running {{mvn dependency:tree}} for {{flink-dist}} did not reveal any Guava 
> dependencies, yet the built jar contains the following unshaded Guava classes.
> This was tested with Maven 3.0.5.
> {code}
> com/google/common/util/concurrent/Futures$CombinedFuture.class
> com/google/common/util/concurrent/Futures$CombinerFuture.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture.class
> com/google/common/util/concurrent/Futures$FutureCombiner.class
> com/google/common/util/concurrent/Futures$ImmediateCancelledFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulFuture.class
> com/google/common/util/concurrent/Futures$MappingCheckedFuture.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture$1.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture.class
> com/google/common/util/concurrent/Futures$WrappedCombiner.class
> com/google/common/util/concurrent/Futures.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter$1.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter.class
> com/google/common/util/concurrent/JdkFutureAdapters.class
> com/google/common/util/concurrent/ListenableFuture.class
> com/google/common/util/concurrent/ListenableFutureTask.class
> com/google/common/util/concurrent/ListenableScheduledFuture.class
> com/google/common/util/concurrent/ListenerCallQueue$Callback.class
> com/google/common/util/concurrent/ListenerCallQueue.class
> com/google/common/util/concurrent/ListeningExecutorService.class
> com/google/common/util/concurrent/ListeningScheduledExecutorService.class
> com/google/common/util/concurrent/Monitor$Guard.class
> com/google/common/util/concurrent/Monitor.class
> com/google/common/util/concurrent/MoreExecutors$1.class
> com/google/common/util/concurrent/MoreExecutors$2.class
> com/google/common/util/concurrent/MoreExecutors$3.class
> com/google/common/util/concurrent/MoreExecutors$4.class
> com/google/common/util/concurrent/MoreExecutors$Application$1.class
> ...
> {code}
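
One way to reproduce such a listing is to scan the fat jar directly (the jar path below is an assumption; adjust it to your build):

{code}
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Lists unshaded Guava classes contained in a fat jar.
public class FindUnshadedGuava {
    public static void main(String[] args) throws Exception {
        String jarPath = args.length > 0 ? args[0] : "flink-dist.jar"; // assumed path
        try (JarFile jar = new JarFile(jarPath)) {
            jar.stream()
               .map(JarEntry::getName)
               .filter(name -> name.startsWith("com/google/common/"))
               .forEach(System.out::println);
        }
    }
}
{code}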





[jira] [Commented] (FLINK-6890) flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)

2017-06-28 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067790#comment-16067790
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6890:


I can also confirm that this isn't a problem anymore. It could have been a 
hiccup with Maven.
Closing this.

> flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)
> --
>
> Key: FLINK-6890
> URL: https://issues.apache.org/jira/browse/FLINK-6890
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Tzu-Li (Gordon) Tai
>
> See original discussion on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Guava-version-conflict-td13561.html.
> Running {{mvn dependency:tree}} for {{flink-dist}} did not reveal any Guava 
> dependencies, yet the built jar contains the following unshaded Guava classes.
> This was tested with Maven 3.0.5.
> {code}
> com/google/common/util/concurrent/Futures$CombinedFuture.class
> com/google/common/util/concurrent/Futures$CombinerFuture.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture.class
> com/google/common/util/concurrent/Futures$FutureCombiner.class
> com/google/common/util/concurrent/Futures$ImmediateCancelledFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulFuture.class
> com/google/common/util/concurrent/Futures$MappingCheckedFuture.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture$1.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture.class
> com/google/common/util/concurrent/Futures$WrappedCombiner.class
> com/google/common/util/concurrent/Futures.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter$1.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter.class
> com/google/common/util/concurrent/JdkFutureAdapters.class
> com/google/common/util/concurrent/ListenableFuture.class
> com/google/common/util/concurrent/ListenableFutureTask.class
> com/google/common/util/concurrent/ListenableScheduledFuture.class
> com/google/common/util/concurrent/ListenerCallQueue$Callback.class
> com/google/common/util/concurrent/ListenerCallQueue.class
> com/google/common/util/concurrent/ListeningExecutorService.class
> com/google/common/util/concurrent/ListeningScheduledExecutorService.class
> com/google/common/util/concurrent/Monitor$Guard.class
> com/google/common/util/concurrent/Monitor.class
> com/google/common/util/concurrent/MoreExecutors$1.class
> com/google/common/util/concurrent/MoreExecutors$2.class
> com/google/common/util/concurrent/MoreExecutors$3.class
> com/google/common/util/concurrent/MoreExecutors$4.class
> com/google/common/util/concurrent/MoreExecutors$Application$1.class
> ...
> {code}





[jira] [Commented] (FLINK-6954) Flink 1.3 checkpointing failing with KeyedCEPPatternOperator

2017-06-28 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067784#comment-16067784
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-6954:


I'll close this JIRA now, as the discussions conclude that this is no longer an 
issue in 1.3.1.

> Flink 1.3 checkpointing failing with KeyedCEPPatternOperator
> 
>
> Key: FLINK-6954
> URL: https://issues.apache.org/jira/browse/FLINK-6954
> Project: Flink
>  Issue Type: Bug
>  Components: CEP, DataStream API, State Backends, Checkpointing
>Affects Versions: 1.3.0
> Environment: yarn, flink 1.3, HDFS
>Reporter: Shashank Agarwal
>Priority: Blocker
> Fix For: 1.3.1
>
>
> After upgrading to Flink 1.3, checkpointing is not working; it fails again 
> and again while snapshotting operator state. I have checked with both the 
> RocksDB state backend and the FS state backend. See the stack trace:
> {code}
> java.lang.Exception: Could not perform checkpoint 1 for operator KeyedCEPPatternOperator -> Map (6/6).
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:550)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:378)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:281)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:183)
>   at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:213)
>   at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:262)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator KeyedCEPPatternOperator -> Map (6/6).
>   at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:406)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:542)
>   ... 8 more
> Caused by: java.lang.UnsupportedOperationException
>   at org.apache.flink.api.scala.typeutils.TraversableSerializer.snapshotConfiguration(TraversableSerializer.scala:155)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer$OptionSerializerConfigSnapshot.<init>(OptionSerializer.scala:139)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:104)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:28)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerConfigSnapshot.<init>(TupleSerializerConfigSnapshot.java:45)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:132)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:39)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.common.typeutils.base.CollectionSerializerConfigSnapshot.<init>(CollectionSerializerConfigSnapshot.java:39)
>   at org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:183)
>   at org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:47)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.common.typeutils.base.MapSerializerConfigSnapshot.<init>(MapSerializerConfigSnapshot.java:38)
>   at org.apache.flink.runtime.state.HashMapSerializer.snapshotConfiguration(HashMapSerializer.java:210)
> {code}
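
The bottom-most frames show the root cause: a composite serializer's config snapshot eagerly asks every nested serializer for its own snapshot, so a single serializer that does not support snapshotting (here the Scala TraversableSerializer in 1.3.0) fails the whole checkpoint. A self-contained toy model of that propagation (hypothetical names, not Flink's API):

{code}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Toy model of the propagation seen in the trace above.
public class SnapshotPropagationSketch {

    interface Serializer {
        Object snapshotConfiguration();
    }

    // Analogous to a composite config snapshot: every nested serializer is
    // snapshotted eagerly, so one failure aborts the whole operation.
    static List<Object> snapshotAll(List<Serializer> nested) {
        return nested.stream()
                     .map(Serializer::snapshotConfiguration)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Serializer supported = () -> "config-snapshot";
        Serializer unsupported = () -> {
            throw new UnsupportedOperationException(); // what TraversableSerializer did in 1.3.0
        };
        // Throws UnsupportedOperationException: the single unsupported
        // serializer fails the composite snapshot, and hence the checkpoint.
        System.out.println(snapshotAll(Arrays.asList(supported, unsupported)));
    }
}
{code}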

[jira] [Commented] (FLINK-6494) Migrate ResourceManager configuration options

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067781#comment-16067781
 ] 

ASF GitHub Bot commented on FLINK-6494:
---

Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4075
  
@zentol Thank you for your suggestion, I have fixed the problems you 
mentioned. Thanks


> Migrate ResourceManager configuration options
> -
>
> Key: FLINK-6494
> URL: https://issues.apache.org/jira/browse/FLINK-6494
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, ResourceManager
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>
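
For context, the migration this sub-task tracks follows Flink's {{ConfigOption}} pattern; a minimal sketch (the key name and default below are placeholders, not the actual options being migrated):

{code}
import org.apache.flink.configuration.ConfigOption;

import static org.apache.flink.configuration.ConfigOptions.key;

// Sketch of the ConfigOption migration pattern with a placeholder option.
public class ResourceManagerOptionsSketch {

    public static final ConfigOption<Integer> EXAMPLE_OPTION =
        key("resourcemanager.example-setting") // placeholder key
            .defaultValue(1);
}
{code}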






[jira] [Closed] (FLINK-6954) Flink 1.3 checkpointing failing with KeyedCEPPatternOperator

2017-06-28 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-6954.
--
   Resolution: Fixed
Fix Version/s: 1.3.1 (was: 1.3.2)

> Flink 1.3 checkpointing failing with KeyedCEPPatternOperator
> 
>
> Key: FLINK-6954
> URL: https://issues.apache.org/jira/browse/FLINK-6954
> Project: Flink
>  Issue Type: Bug
>  Components: CEP, DataStream API, State Backends, Checkpointing
>Affects Versions: 1.3.0
> Environment: yarn, flink 1.3, HDFS
>Reporter: Shashank Agarwal
>Priority: Blocker
> Fix For: 1.3.1
>
>
> After upgrading to Flink 1.3, checkpointing is not working; it fails again 
> and again while snapshotting operator state. I have checked with both the 
> RocksDB state backend and the FS state backend. See the stack trace:
> {code}
> java.lang.Exception: Could not perform checkpoint 1 for operator KeyedCEPPatternOperator -> Map (6/6).
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:550)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.notifyCheckpoint(BarrierBuffer.java:378)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.processBarrier(BarrierBuffer.java:281)
>   at org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:183)
>   at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:213)
>   at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:262)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Could not complete snapshot 1 for operator KeyedCEPPatternOperator -> Map (6/6).
>   at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:406)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1089)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:653)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:589)
>   at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier(StreamTask.java:542)
>   ... 8 more
> Caused by: java.lang.UnsupportedOperationException
>   at org.apache.flink.api.scala.typeutils.TraversableSerializer.snapshotConfiguration(TraversableSerializer.scala:155)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer$OptionSerializerConfigSnapshot.<init>(OptionSerializer.scala:139)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:104)
>   at org.apache.flink.api.scala.typeutils.OptionSerializer.snapshotConfiguration(OptionSerializer.scala:28)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerConfigSnapshot.<init>(TupleSerializerConfigSnapshot.java:45)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:132)
>   at org.apache.flink.api.java.typeutils.runtime.TupleSerializerBase.snapshotConfiguration(TupleSerializerBase.java:39)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.common.typeutils.base.CollectionSerializerConfigSnapshot.<init>(CollectionSerializerConfigSnapshot.java:39)
>   at org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:183)
>   at org.apache.flink.api.common.typeutils.base.ListSerializer.snapshotConfiguration(ListSerializer.java:47)
>   at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
>   at org.apache.flink.api.common.typeutils.base.MapSerializerConfigSnapshot.<init>(MapSerializerConfigSnapshot.java:38)
>   at org.apache.flink.runtime.state.HashMapSerializer.snapshotConfiguration(HashMapSerializer.java:210)
>   at org.apa

[GitHub] flink issue #4075: [FLINK-6494] Migrate ResourceManager/Yarn/Mesos configura...

2017-06-28 Thread zjureel
Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4075
  
@zentol Thank you for your suggestion, I have fixed the problems you 
mentioned. Thanks




[jira] [Assigned] (FLINK-6842) Uncomment or remove code in HadoopFileSystem

2017-06-28 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-6842:
---

Assignee: mingleizhang

> Uncomment or remove code in HadoopFileSystem
> 
>
> Key: FLINK-6842
> URL: https://issues.apache.org/jira/browse/FLINK-6842
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Minor
>
> I've found the following code in 
> {{HadoopFileSystem#getHadoopWrapperClassNameForFileSystem}}:
> {code}
> Configuration hadoopConf = getHadoopConfiguration();
> Class clazz;
> // We can activate this block once we drop Hadoop1 support (only hd2 has the getFileSystemClass-method)
> //try {
> //    clazz = org.apache.hadoop.fs.FileSystem.getFileSystemClass(scheme, hadoopConf);
> //} catch (IOException e) {
> //    LOG.info("Flink could not load the Hadoop File system implementation for scheme "+scheme);
> //    return null;
> //}
> clazz = hadoopConf.getClass("fs." + scheme + ".impl", null, org.apache.hadoop.fs.FileSystem.class);
> if (clazz != null && LOG.isDebugEnabled()) {
>     LOG.debug("Flink supports {} with the Hadoop file system wrapper, impl {}", scheme, clazz);
> }
> return clazz;
> {code}
> Since we don't support Hadoop 1 anymore, the commented code should either be 
> activated or removed.





[jira] [Commented] (FLINK-7024) Add support for selecting window proctime/rowtime on row-based Tumble/Slide window

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067755#comment-16067755
 ] 

ASF GitHub Bot commented on FLINK-7024:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4210#discussion_r124713891
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/logical/operators.scala
 ---
@@ -654,17 +654,23 @@ case class WindowAggregate(
 }
 
 // validate property
-if (propertyExpressions.nonEmpty) {
-  resolvedWindowAggregate.window match {
-case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Tumbling window.")
+propertyExpressions.foreach {
+  _.child match {
+case WindowEnd(_) | WindowStart(_) =>
+  resolvedWindowAggregate.window match {
+case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Tumbling window.")
 
-case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Sliding window.")
+case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Sliding window.")
 
-case _ => // ok
+case _ => // ok
--- End diff --

Sure, good idea.


> Add support for selecting window proctime/rowtime on row-based Tumble/Slide window
> --
>
> Key: FLINK-7024
> URL: https://issues.apache.org/jira/browse/FLINK-7024
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> We get a validation exception when selecting window.proctime/rowtime on a 
> row-based group window.
> {code}
>  table
>   .window(Tumble over 2.rows on 'proctime as 'w)
>   .groupBy('w, 'string)
>   .select('string, countFun('string) as 'cnt, 'w.rowtime as 'proctime)
>   .window(Over partitionBy 'string orderBy 'proctime preceding UNBOUNDED_RANGE following CURRENT_RANGE as 'w2)
>   .select('string, 'cnt.sum over 'w2 as 'cnt)
> {code}
> Exception:
> {code}
> org.apache.flink.table.api.ValidationException: Window start and Window end cannot be selected for a row-count Tumbling window.
>   at org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
>   at org.apache.flink.table.plan.logical.WindowAggregate.validate(operators.scala:660)
> {code}
> We should add a window.proctime/rowtime check in the `validate` method.





[GitHub] flink pull request #4210: [FLINK-7024][table] Add support for selecting win...

2017-06-28 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4210#discussion_r124713891
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/logical/operators.scala
 ---
@@ -654,17 +654,23 @@ case class WindowAggregate(
 }
 
 // validate property
-if (propertyExpressions.nonEmpty) {
-  resolvedWindowAggregate.window match {
-case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Tumbling window.")
+propertyExpressions.foreach {
+  _.child match {
+case WindowEnd(_) | WindowStart(_) =>
+  resolvedWindowAggregate.window match {
+case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Tumbling window.")
 
-case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Sliding window.")
+case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Sliding window.")
 
-case _ => // ok
+case _ => // ok
--- End diff --

Sure, good idea.




[jira] [Commented] (FLINK-6888) Can not determine TypeInformation of ACC type of AggregateFunction when ACC is a Scala case/tuple class

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067747#comment-16067747
 ] 

ASF GitHub Bot commented on FLINK-6888:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4105
  
Comments addressed


> Can not determine TypeInformation of ACC type of AggregateFunction when ACC 
> is a Scala case/tuple class
> ---
>
> Key: FLINK-6888
> URL: https://issues.apache.org/jira/browse/FLINK-6888
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently the {{ACC}} TypeInformation of 
> {{org.apache.flink.table.functions.AggregateFunction[T, ACC]}} is extracted 
> using {{TypeInformation.of(Class)}}. When {{ACC}} is a Scala case class or 
> tuple class, the TypeInformation will fall back to {{GenericType}}, which 
> results in bad performance during state de/serialization. 
> I suggest extracting the ACC TypeInformation when 
> {{TableEnvironment.registerFunction()}} is called.
> Here is an example:
> {code}
> case class Accumulator(sum: Long, count: Long)
> class MyAgg extends AggregateFunction[Long, Accumulator] {
>   //Overloaded accumulate method
>   def accumulate(acc: Accumulator, value: Long): Unit = {
>   }
>   override def createAccumulator(): Accumulator = Accumulator(0, 0)
>   override def getValue(accumulator: Accumulator): Long = 1
> }
> {code}
> The {{Accumulator}} will be recognized as {{GenericType}}.
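
One direction discussed in the PR thread is to resolve the ACC type from the function's generic type hierarchy at registration time rather than from the raw class; a hedged Java sketch (the helper is hypothetical, and Scala case classes would additionally need the Scala type analyzer to avoid {{GenericType}}):

{code}
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.table.functions.AggregateFunction;

// Hypothetical helper: resolve the ACC TypeInformation from the generic
// signature of the concrete AggregateFunction subclass (index 1 = ACC).
class AccTypeExtractionSketch {

    static <T, ACC> TypeInformation<ACC> extractAccType(AggregateFunction<T, ACC> function) {
        return TypeExtractor.createTypeInfo(
            function, AggregateFunction.class, function.getClass(), 1);
    }
}
{code}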





[GitHub] flink issue #4105: [FLINK-6888] [table] Can not determine TypeInformation of...

2017-06-28 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4105
  
Comments addressed




[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS support in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067746#comment-16067746
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4138
  
@wuchong thanks very much for your review. I have updated the PR. Please 
have a look at it again. ;)


> Add CONCAT/CONCAT_WS support in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) returns the string that results from concatenating the 
> arguments. It may have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...)
> * Arguments
> ** str1,str2,... - the strings to concatenate
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL|https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments; 
> it is added between the strings to be concatenated. The separator can be a 
> string, as can the rest of the arguments. If the separator is NULL, the 
> result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator - the separator string
> ** str1,str2,... - the strings to concatenate
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second name,Last Name'
> * See more:
> ** [MySQL|https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]





[GitHub] flink issue #4138: [FLINK-6925][table] Add CONCAT/CONCAT_WS support in SQL

2017-06-28 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4138
  
@wuchong thanks very much for your review. I have updated the PR. Please 
have a look at it again. ;)




[jira] [Commented] (FLINK-7035) Flink Kinesis connector forces hard coding the AWS Region

2017-06-28 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067705#comment-16067705
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-7035:


+1
We should probably use {{Regions.getCurrentRegion()}} as the default, and only 
throw an exception if it doesn't return anything (e.g., if the user is testing 
locally).

> Flink Kinesis connector forces hard coding the AWS Region
> -
>
> Key: FLINK-7035
> URL: https://issues.apache.org/jira/browse/FLINK-7035
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.2.0, 1.3.1
> Environment: All AWS Amazon Linux nodes (including EMR) and AWS 
> Lambda functions.
>Reporter: Matt Pouttu-clarke
>
> Hard-coding the region frequently causes cross-region network access, which 
> in the worst case can cause brown-outs of AWS services and nodes, violations 
> of per-region availability requirements, security and compliance issues, and 
> extremely poor performance for the end user's jobs. All AWS nodes and 
> services are aware of the region they are running in; please see: 
> https://aws.amazon.com/blogs/developer/determining-an-applications-current-region/
> The following line of code needs to be changed to use Regions.getCurrentRegion() 
> rather than throwing an exception. Also, the code examples should be changed to 
> reflect correct practices.
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/KinesisConfigUtil.java#L174
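
A minimal sketch of the proposed fallback, using the AWS SDK's {{Regions.getCurrentRegion()}} (the surrounding validation method is an assumption, not the actual KinesisConfigUtil code):

{code}
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;

// Sketch of the proposed region fallback (method shape is an assumption).
class RegionDefaultingSketch {

    static String resolveRegion(String configuredRegion) {
        if (configuredRegion != null) {
            return configuredRegion;
        }
        Region current = Regions.getCurrentRegion(); // null outside EC2/EMR/Lambda
        if (current == null) {
            throw new IllegalArgumentException(
                "No AWS region configured and none could be determined from the environment.");
        }
        return current.getName();
    }
}
{code}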





[jira] [Commented] (FLINK-6934) Consider moving LRUCache class

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067703#comment-16067703
 ] 

ASF GitHub Bot commented on FLINK-6934:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4217

[FLINK-6934] [util] remove unused LRUCache class

Remove the unused ```LRUCache``` class; it has been unused since it was created in 2014.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6934

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4217


commit a4884c3f7bf0564dc2469c1609e57798d646d503
Author: zhangminglei 
Date:   2017-06-29T04:13:22Z

[FLINK-6934] [util] remove unused LRUCache class




> Consider moving LRUCache class
> --
>
> Key: FLINK-6934
> URL: https://issues.apache.org/jira/browse/FLINK-6934
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Reporter: mingleizhang
>Assignee: mingleizhang
>
> The LRUCache class is not used anymore, so I suggest removing it.





[GitHub] flink pull request #4217: [FLINK-6934] [util] remove unused LRUCache class

2017-06-28 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4217

[FLINK-6934] [util] remove unused LRUCache class

Remove the unused ```LRUCache``` class; it has been unused since it was created in 2014.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6934

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4217.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4217


commit a4884c3f7bf0564dc2469c1609e57798d646d503
Author: zhangminglei 
Date:   2017-06-29T04:13:22Z

[FLINK-6934] [util] remove unused LRUCache class






[jira] [Commented] (FLINK-6789) Remove duplicated test utility reducer in optimizer

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067694#comment-16067694
 ] 

ASF GitHub Bot commented on FLINK-6789:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4216

[FLINK-6789] [optimizer] Remove duplicated test utility reducer in op…

Removed the ```DummyReducer``` class, keeping ```SelectOneReducer``` instead.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6789

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4216


commit 1426b14198f33ef886498bc46964485b14afb8d0
Author: zhangminglei 
Date:   2017-06-29T03:49:30Z

[FLINK-6789] [optimizer] Remove duplicated test utility reducer in optimizer




> Remove duplicated test utility reducer in optimizer
> ---
>
> Key: FLINK-6789
> URL: https://issues.apache.org/jira/browse/FLINK-6789
> Project: Flink
>  Issue Type: Improvement
>  Components: Optimizer, Tests
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> The {{DummyReducer}} and {{SelectOneReducer}} in 
> {{org.apache.flink.optimizer.testfunctions}} are identical; we could remove 
> one of them.
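
For context, such a test reducer is essentially a one-liner (reconstructed from the name for illustration, not copied from the Flink sources):

{code}
import org.apache.flink.api.common.functions.ReduceFunction;

// A "select one" test reducer: always keeps the first of the two values.
public class SelectOneReducerSketch<T> implements ReduceFunction<T> {
    @Override
    public T reduce(T value1, T value2) {
        return value1;
    }
}
{code}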





[GitHub] flink pull request #4216: [FLINK-6789] [optimizer] Remove duplicated test ut...

2017-06-28 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4216

[FLINK-6789] [optimizer] Remove duplicated test utility reducer in op…

Removed the ```DummyReducer``` class, keeping ```SelectOneReducer``` instead.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6789

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4216.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4216


commit 1426b14198f33ef886498bc46964485b14afb8d0
Author: zhangminglei 
Date:   2017-06-29T03:49:30Z

[FLINK-6789] [optimizer] Remove duplicated test utility reducer in optimizer






[jira] [Commented] (FLINK-6789) Remove duplicated test utility reducer in optimizer

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067690#comment-16067690
 ] 

ASF GitHub Bot commented on FLINK-6789:
---

Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4215


> Remove duplicated test utility reducer in optimizer
> ---
>
> Key: FLINK-6789
> URL: https://issues.apache.org/jira/browse/FLINK-6789
> Project: Flink
>  Issue Type: Improvement
>  Components: Optimizer, Tests
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> The {{DummyReducer}} and {{SelectOneReducer}} in 
> {{org.apache.flink.optimizer.testfunctions}} are identical; we could remove 
> one of them.





[GitHub] flink pull request #4215: [FLINK-6789] [optimizer] Remove duplicated test ut...

2017-06-28 Thread zhangminglei
Github user zhangminglei closed the pull request at:

https://github.com/apache/flink/pull/4215




[GitHub] flink pull request #4215: [FLINK] [optimizer] Remove duplicated test utility...

2017-06-28 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/4215

[FLINK] [optimizer] Remove duplicated test utility reducer in optimizer

Removed the ```DummyReducer``` class, keeping ```SelectOneReducer``` instead.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-6789

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4215.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4215


commit 392335413cdcbf5cc35a9d949c432605f14d17da
Author: zhangminglei 
Date:   2017-06-29T03:49:30Z

[FLINK] [optimizer] Remove duplicated test utility reducer in optimizer






[jira] [Commented] (FLINK-7024) Add support for selecting window proctime/rowtime on row-based Tumble/Slide window

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067675#comment-16067675
 ] 

ASF GitHub Bot commented on FLINK-7024:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4210#discussion_r124705183
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/logical/operators.scala
 ---
@@ -654,17 +654,23 @@ case class WindowAggregate(
 }
 
 // validate property
-if (propertyExpressions.nonEmpty) {
-  resolvedWindowAggregate.window match {
-case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Tumbling window.")
+propertyExpressions.foreach {
+  _.child match {
+case WindowEnd(_) | WindowStart(_) =>
+  resolvedWindowAggregate.window match {
+case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Tumbling window.")
 
-case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Sliding window.")
+case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Sliding window.")
 
-case _ => // ok
+case _ => // ok
--- End diff --

Can you add a comment here noting that `RowtimeAttribute` and 
`ProctimeAttribute` should pass?


> Add support for selecting window proctime/rowtime on row-based Tumble/Slide window
> --
>
> Key: FLINK-7024
> URL: https://issues.apache.org/jira/browse/FLINK-7024
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> We get a validation exception when selecting window.proctime/rowtime on a 
> row-based group window.
> {code}
>  table
>   .window(Tumble over 2.rows on 'proctime as 'w)
>   .groupBy('w, 'string)
>   .select('string, countFun('string) as 'cnt, 'w.rowtime as 'proctime)
>   .window(Over partitionBy 'string orderBy 'proctime preceding UNBOUNDED_RANGE following CURRENT_RANGE as 'w2)
>   .select('string, 'cnt.sum over 'w2 as 'cnt)
> {code}
> Exception:
> {code}
> org.apache.flink.table.api.ValidationException: Window start and Window end cannot be selected for a row-count Tumbling window.
>   at org.apache.flink.table.plan.logical.LogicalNode.failValidation(LogicalNode.scala:143)
>   at org.apache.flink.table.plan.logical.WindowAggregate.validate(operators.scala:660)
> {code}
> We should add a window.proctime/rowtime check in the `validate` method.





[GitHub] flink pull request #4210: [FLINK-7024][table] Add support for selecting win...

2017-06-28 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4210#discussion_r124705183
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/logical/operators.scala
 ---
@@ -654,17 +654,23 @@ case class WindowAggregate(
 }
 
 // validate property
-if (propertyExpressions.nonEmpty) {
-  resolvedWindowAggregate.window match {
-case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Tumbling window.")
+propertyExpressions.foreach {
+  _.child match {
+case WindowEnd(_) | WindowStart(_) =>
+  resolvedWindowAggregate.window match {
+case TumblingGroupWindow(_, _, size) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Tumbling window.")
 
-case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
-  failValidation("Window start and Window end cannot be selected " +
-   "for a row-count Sliding window.")
+case SlidingGroupWindow(_, _, size, _) if isRowCountLiteral(size) =>
+  failValidation(
+"Window start and Window end cannot be selected " +
+  "for a row-count Sliding window.")
 
-case _ => // ok
+case _ => // ok
--- End diff --

Can you add a comment here noting that `RowtimeAttribute` and 
`ProctimeAttribute` should pass?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-6789) Remove duplicated test utility reducer in optimizer

2017-06-28 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-6789:
---

Assignee: mingleizhang

> Remove duplicated test utility reducer in optimizer
> ---
>
> Key: FLINK-6789
> URL: https://issues.apache.org/jira/browse/FLINK-6789
> Project: Flink
>  Issue Type: Improvement
>  Components: Optimizer, Tests
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>
> The {{DummyReducer}} and {{SelectOneReducer}} in 
> {{org.apache.flink.optimizer.testfunctions}} are identical; we could remove 
> one of them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6888) Can not determine TypeInformation of ACC type of AggregateFunction when ACC is a Scala case/tuple class

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067649#comment-16067649
 ] 

ASF GitHub Bot commented on FLINK-6888:
---

Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r124704209
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/utils/UserDefinedFunctionUtils.scala
 ---
@@ -320,12 +348,20 @@ object UserDefinedFunctionUtils {
 } else if(extractedType != null) {
   extractedType
 } else {
-  TypeExtractor
+  try {
+TypeExtractor
--- End diff --

I think it is safer than `TypeInformation.of(Any)`, because 
`TypeInformation.of(Any)` only works for non-generic types. 

For example: 

```
public class UDAF extends AggregateFunction<Tuple2<String, Long>, Long> {
    public Long createAccumulator() {...}
    public Tuple2<String, Long> getValue(Long acc) {...}
}
```
For the given UDAF, the return type is a generic `Tuple2`. We can't extract 
the concrete return type from an instance of `Tuple2` because of Java type 
erasure, but we can extract it from the type hierarchy of the UDAF class.
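
A small, self-contained Java sketch of that difference, using Flink's `TypeHint` (the anonymous subclass records the type parameters in its hierarchy, analogous to a concrete UDAF class):

```
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.tuple.Tuple2;

public class ErasureDemo {

    public static void main(String[] args) {
        Tuple2<String, Long> instance = Tuple2.of("a", 1L);

        // From an instance only the raw class is visible; the type
        // parameters are erased at runtime.
        System.out.println(instance.getClass().getName());

        // An (anonymous) subclass keeps its generic superclass, which is why
        // the parameters can be recovered from a concrete UDAF's hierarchy.
        TypeHint<Tuple2<String, Long>> hint = new TypeHint<Tuple2<String, Long>>() {};
        System.out.println(hint.getClass().getGenericSuperclass());

        // TypeExtractor-based APIs exploit exactly that mechanism.
        TypeInformation<Tuple2<String, Long>> info = TypeInformation.of(hint);
        System.out.println(info);
    }
}
```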

Am I right, @twalthr?









> Can not determine TypeInformation of ACC type of AggregateFunction when ACC 
> is a Scala case/tuple class
> ---
>
> Key: FLINK-6888
> URL: https://issues.apache.org/jira/browse/FLINK-6888
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently the {{ACC}} TypeInformation of 
> {{org.apache.flink.table.functions.AggregateFunction[T, ACC]}} is extracted 
> using {{TypeInformation.of(Class)}}. When {{ACC}} is a Scala case class or 
> tuple class, the TypeInformation falls back to {{GenericType}}, which 
> results in bad performance during state de/serialization. 
> I suggest extracting the ACC TypeInformation when 
> {{TableEnvironment.registerFunction()}} is called.
> Here is an example:
> {code}
> case class Accumulator(sum: Long, count: Long)
> class MyAgg extends AggregateFunction[Long, Accumulator] {
>   //Overloaded accumulate method
>   def accumulate(acc: Accumulator, value: Long): Unit = {
>   }
>   override def createAccumulator(): Accumulator = Accumulator(0, 0)
>   override def getValue(accumulator: Accumulator): Long = 1
> }
> {code}
> The {{Accumulator}} will be recognized as {{GenericType}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4105: [FLINK-6888] [table] Can not determine TypeInforma...

2017-06-28 Thread wuchong
Github user wuchong commented on a diff in the pull request:

https://github.com/apache/flink/pull/4105#discussion_r124704209
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/utils/UserDefinedFunctionUtils.scala
 ---
@@ -320,12 +348,20 @@ object UserDefinedFunctionUtils {
 } else if(extractedType != null) {
   extractedType
 } else {
-  TypeExtractor
+  try {
+TypeExtractor
--- End diff --

I think it is safer than `TypeInformation.of(Any)`, because 
`TypeInformation.of(Any)` only works for non-generic types. 

For example: 

```
public class UDAF extends AggregateFunction<Tuple2<String, Long>, Long> {
    public Long createAccumulator() {...}
    public Tuple2<String, Long> getValue(Long acc) {...}
}
```
For the given UDAF, the return type is a generic `Tuple2`. We can't extract 
the concrete return type from an instance of `Tuple2` because of Java type 
erasure, but we can extract it from the type hierarchy of the UDAF class.

Am I right, @twalthr?









---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7014) Expose isDeterministic interface to ScalarFunction and TableFunction

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067625#comment-16067625
 ] 

ASF GitHub Bot commented on FLINK-7014:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4200


> Expose isDeterministic interface to ScalarFunction and TableFunction
> 
>
> Key: FLINK-7014
> URL: https://issues.apache.org/jira/browse/FLINK-7014
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Ruidong Li
>Assignee: Ruidong Li
> Fix For: 1.4.0
>
>
> Currently, the `isDeterministic` method of `SqlFunction` implementations 
> always returns true, which causes inappropriate optimizations in Calcite, 
> such as treating a user's stateful UDF as a pure function. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7014) Expose isDeterministic interface to ScalarFunction and TableFunction

2017-06-28 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-7014.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed in b59148cf7d206951191a24e6a6ce43937db5f045

> Expose isDeterministic interface to ScalarFunction and TableFunction
> 
>
> Key: FLINK-7014
> URL: https://issues.apache.org/jira/browse/FLINK-7014
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Ruidong Li
>Assignee: Ruidong Li
> Fix For: 1.4.0
>
>
> Currently, the `isDeterministic` method of `SqlFunction` implementations 
> always returns true, which causes inappropriate optimizations in Calcite, 
> such as treating a user's stateful UDF as a pure function. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4200: [FLINK-7014] Expose isDeterministic interface to S...

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4200


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6969) Add support for deferred computation for group window aggregates

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067618#comment-16067618
 ] 

ASF GitHub Bot commented on FLINK-6969:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4183
  
Regarding early firing, I think we are not on the same page and need to 
discuss what early firing is, how to configure it, and how to implement it. 

IMO, early firing configures an update rate for a window, e.g. a one-hour 
window whose result we want to see every minute. So it is very similar to 
[FLINK-6649](https://issues.apache.org/jira/browse/FLINK-6649) (an update rate 
for a non-windowed group), but here it is an update-rate/early-interval 
configuration on a window group. If you agree with that, then it is not a time 
offset for the first result, because an offset cannot describe the rate.
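
For intuition only: the DataStream API's `ContinuousEventTimeTrigger` already implements exactly these update-rate semantics. A minimal runnable sketch (the sample data and the summed field are illustrative; the Table API knob we are discussing does not exist yet):

```
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.ContinuousEventTimeTrigger;

public class EarlyFiringExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        env.fromElements(Tuple2.of("a", 1000L), Tuple2.of("a", 2000L))
            .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                @Override
                public long extractAscendingTimestamp(Tuple2<String, Long> element) {
                    return element.f1;
                }
            })
            .keyBy(0)
            // a one-hour tumbling window ...
            .window(TumblingEventTimeWindows.of(Time.hours(1)))
            // ... whose (partial) result is re-emitted every minute of event time
            .trigger(ContinuousEventTimeTrigger.of(Time.minutes(1)))
            .sum(1)
            .print();

        env.execute("early firing example");
    }
}
```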

Maybe we should move this discussion to the early firing issue FLINK-6961?


> Add support for deferred computation for group window aggregates
> 
>
> Key: FLINK-6969
> URL: https://issues.apache.org/jira/browse/FLINK-6969
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: sunjincheng
>
> Deferred computation is a strategy to deal with late arriving data and avoid 
> updates of previous results. Instead of computing a result as soon as possible 
> (i.e., when a corresponding watermark is received), deferred computation adds 
> a configurable amount of slack time in which late data is accepted before the 
> result is computed. For example, instead of computing a 1-hour tumbling window 
> at each full hour, we can add a deferred computation interval of 15 minutes 
> and compute the result at quarter past each full hour.
> This approach adds latency but can reduce the number of updates, esp. in use 
> cases where the user cannot influence the generation of watermarks. It is 
> also useful if the data is emitted to a system that cannot update results 
> (files or Kafka). The deferred computation interval should be configured via 
> the {{QueryConfig}}.
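
At the DataStream level, the deferred firing itself can be sketched with a custom trigger (illustrative only: it would need to be paired with an equal {{allowedLateness()}} so late records still reach the window, and the Table API integration via the {{QueryConfig}} is still open):

{code:java}
import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

/** Fires a window only after a configurable slack interval past its end. */
public class DeferredEventTimeTrigger extends Trigger<Object, TimeWindow> {

    private final long slackMs;

    public DeferredEventTimeTrigger(long slackMs) {
        this.slackMs = slackMs;
    }

    @Override
    public TriggerResult onElement(
            Object element, long timestamp, TimeWindow window, TriggerContext ctx) throws Exception {
        // Register the firing timer at window end + slack instead of window end.
        ctx.registerEventTimeTimer(window.maxTimestamp() + slackMs);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
        return time == window.maxTimestamp() + slackMs ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) throws Exception {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) throws Exception {
        ctx.deleteEventTimeTimer(window.maxTimestamp() + slackMs);
    }
}
{code}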



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4183: [FLINK-6969][table]Add support for deferred computation f...

2017-06-28 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4183
  
Regarding to early firing, I think we are not on the same page and need to 
discuss about what is early firing, how to configure it, how to implement it. 

IMO, the early firing is to config an update rate of window. Such as an 
one-hour window, but we want to see the result every minutes. So it is very 
similar to 
[FLINK-6649](https://issues.apache.org/jira/browse/FLINK-6649)(update rate for 
non-windowed group), but this is a 
update-rate/early-interval configuration on window group. If you agree with 
that, then it is not a time offset of first result, because it can't describe 
the rate.

Maybe we should move this discussion to the early firing issue FLINK-6961 ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7008) Update NFA state only when the NFA changes.

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067583#comment-16067583
 ] 

ASF GitHub Bot commented on FLINK-7008:
---

Github user dianfu commented on a diff in the pull request:

https://github.com/apache/flink/pull/4195#discussion_r124697385
  
--- Diff: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CEPOperatorTest.java
 ---
@@ -297,6 +297,85 @@ public void testKeyedAdvancingTimeWithoutElements() 
throws Exception {
}
 
@Test
+   public void testKeyedCEPOperatorNFAChanged() throws Exception {
--- End diff --

Got it. Makes sense; I have updated the PR.


> Update NFA state only when the NFA changes.
> ---
>
> Key: FLINK-7008
> URL: https://issues.apache.org/jira/browse/FLINK-7008
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Affects Versions: 1.3.1
>Reporter: Kostas Kloudas
>Assignee: Dian Fu
> Fix For: 1.4.0
>
>
> Currently in the {{AbstractKeyedCEPPatternOperator.updateNFA()}} method we 
> update the NFA state every time the NFA is touched. This leads to redundant 
> puts/gets to the state when there are no changes to the NFA itself.
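
The shape of the optimization, as a sketch against a generic {{ValueState}} (the {{Versioned}} interface is a made-up stand-in for the NFA, not the actual CEP code):

{code:java}
import org.apache.flink.api.common.state.ValueState;

/** Sketch: skip the state write-back when the computation changed nothing. */
public final class StateWriteBack {

    /** Stand-in for the NFA: mutates itself and reports whether it changed. */
    public interface Versioned {
        boolean process(Object event);
    }

    public static <S extends Versioned> void processAndMaybeUpdate(
            ValueState<S> state, Object event) throws Exception {
        S value = state.value();
        // Only pay for serialization and the state put when the value
        // was actually modified.
        if (value.process(event)) {
            state.update(value);
        }
    }
}
{code}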



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4195: [FLINK-7008] [cep] Update NFA state only when the ...

2017-06-28 Thread dianfu
Github user dianfu commented on a diff in the pull request:

https://github.com/apache/flink/pull/4195#discussion_r124697385
  
--- Diff: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/operator/CEPOperatorTest.java
 ---
@@ -297,6 +297,85 @@ public void testKeyedAdvancingTimeWithoutElements() 
throws Exception {
}
 
@Test
+   public void testKeyedCEPOperatorNFAChanged() throws Exception {
--- End diff --

Got it. Makes sense; I have updated the PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7035) Flink Kinesis connector forces hard coding the AWS Region

2017-06-28 Thread Matt Pouttu-clarke (JIRA)
Matt Pouttu-clarke created FLINK-7035:
-

 Summary: Flink Kinesis connector forces hard coding the AWS Region
 Key: FLINK-7035
 URL: https://issues.apache.org/jira/browse/FLINK-7035
 Project: Flink
  Issue Type: Bug
  Components: Kinesis Connector
Affects Versions: 1.3.1, 1.2.0
 Environment: All AWS Amazon Linux nodes (including EMR) and AWS Lambda 
functions.
Reporter: Matt Pouttu-clarke


Hard-coding the region frequently causes cross-region network access, which in 
the worst case could cause brown-outs of AWS services and nodes, violations of 
per-region availability requirements, security and compliance issues, and 
extremely poor performance for the end user's jobs. All AWS nodes and services 
are aware of the region they are running in; please see: 
https://aws.amazon.com/blogs/developer/determining-an-applications-current-region/

We need to change the following line of code to use Regions.getCurrentRegion() 
rather than throwing an exception. The code examples should also be changed to 
reflect correct practices.
https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/KinesisConfigUtil.java#L174
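
A minimal sketch of the suggested fallback (the AWS SDK's {{Regions.getCurrentRegion()}} is a real API; the helper class and the way the configuration key is passed in are illustrative, not the connector's actual code):

{code:java}
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;

import java.util.Properties;

public final class RegionFallback {

    /** Resolves the region from config, falling back to the node's own region. */
    public static String resolveRegion(Properties config, String regionKey) {
        String configured = config.getProperty(regionKey);
        if (configured != null) {
            return configured;
        }
        // Queries the EC2 instance metadata service; returns null when not
        // running on AWS infrastructure.
        Region current = Regions.getCurrentRegion();
        if (current != null) {
            return current.getName();
        }
        throw new IllegalArgumentException(
                "No AWS region configured and none could be detected from the environment.");
    }
}
{code}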



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7032) Intellij is constantly changing language level of sub projects back to 1.6

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067375#comment-16067375
 ] 

ASF GitHub Bot commented on FLINK-7032:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4213
  
Are you using the Maven bundled with IntelliJ?


> Intellij is constantly changing language level of sub projects back to 1.6 
> ---
>
> Key: FLINK-7032
> URL: https://issues.apache.org/jira/browse/FLINK-7032
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> Every time I reimport the Maven projects, Intellij switches back to the 1.6 
> language level. I tracked this issue down to a misconfiguration in our pom.xml 
> file. It correctly configures maven-compiler-plugin:
> {code:xml}
>   <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-compiler-plugin</artifactId>
>     <version>3.1</version>
>     <configuration>
>       <source>${java.version}</source>
>       <target>${java.version}</target>
>       <compilerArgs>
>         <arg>-Xlint:all</arg>
>       </compilerArgs>
>     </configuration>
>   </plugin>
> {code}
> where ${java.version} is set to 1.7 in the properties, but it forgets to 
> overwrite the following properties from apache-18.pom:
> {code:xml}
>   <properties>
>     <maven.compiler.source>1.6</maven.compiler.source>
>     <maven.compiler.target>1.6</maven.compiler.target>
>   </properties>
> {code}
> It seems like compiling from console using maven ignores those values, but 
> they are confusing Intellij.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4213: [FLINK-7032] Overwrite inherited properties from parent p...

2017-06-28 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4213
  
Are you using the Maven bundled with IntelliJ?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6998) Kafka connector needs to expose metrics for failed/successful offset commits in the Kafka Consumer callback

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067369#comment-16067369
 ] 

ASF GitHub Bot commented on FLINK-6998:
---

Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4187
  
How about just "commits-succeeded" and "commits-failed" as metric names?
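
A hedged sketch of how such counters could be wired into the consumer's offset-commit callback (the class name and constructor wiring are illustrative; `MetricGroup#counter` and Kafka's `OffsetCommitCallback` are the real APIs):

```
import java.util.Map;

import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.consumer.OffsetCommitCallback;
import org.apache.kafka.common.TopicPartition;

/** Counts offset-commit outcomes; a sketch, not the connector's actual wiring. */
public class CommitMetricsCallback implements OffsetCommitCallback {

    private final Counter commitsSucceeded;
    private final Counter commitsFailed;

    public CommitMetricsCallback(MetricGroup consumerMetricGroup) {
        this.commitsSucceeded = consumerMetricGroup.counter("commits-succeeded");
        this.commitsFailed = consumerMetricGroup.counter("commits-failed");
    }

    @Override
    public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
        if (exception == null) {
            commitsSucceeded.inc();
        } else {
            commitsFailed.inc();
        }
    }
}
```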


> Kafka connector needs to expose metrics for failed/successful offset commits 
> in the Kafka Consumer callback
> ---
>
> Key: FLINK-6998
> URL: https://issues.apache.org/jira/browse/FLINK-6998
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Reporter: Zhenzhong Xu
>Assignee: Zhenzhong Xu
>
> Propose to add "kafkaCommitsSucceeded" and "kafkaCommitsFailed" metrics in 
> KafkaConsumerThread class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4187: [FLINK-6998][Kafka Connector] Add kafka offset commit met...

2017-06-28 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4187
  
How about just "commits-succeeded" and "commits-failed" as metric names?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3551) Sync Scala and Java Streaming Examples

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067314#comment-16067314
 ] 

ASF GitHub Bot commented on FLINK-3551:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2761
  
Thanks for the update @ch33hau. 
I made a quick pass over the PR and it looks quite good. I will have a more 
detailed look in the next few days and probably merge it.

Thanks for porting the examples and refactoring the tests!


> Sync Scala and Java Streaming Examples
> --
>
> Key: FLINK-3551
> URL: https://issues.apache.org/jira/browse/FLINK-3551
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Affects Versions: 1.0.0
>Reporter: Stephan Ewen
>Assignee: Lim Chee Hau
>
> The Scala Examples lack behind the Java Examples



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #2761: [FLINK-3551] [examples] Sync Scala Streaming Examples

2017-06-28 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2761
  
Thanks for the update @ch33hau. 
I made a quick pass over the PR and it looks quite good. I will have a more 
detailed look in the next few days and probably merge it.

Thanks for porting the examples and refactoring the tests!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7021) Flink Task Manager hangs on startup if one Zookeeper node is unresolvable

2017-06-28 Thread Scott Kidder (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067189#comment-16067189
 ] 

Scott Kidder commented on FLINK-7021:
-

Logs from Flink Task Manager during startup when an unresolvable Zookeeper 
hostname is given:

{noformat}
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: recovery.zookeeper.quorum, foo.bar:2181
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: recovery.zookeeper.storageDir, 
hdfs://hdfs:8020/flink/recovery
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: recovery.jobmanager.port, 6123
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: taskmanager.rpc.port, 6122
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: taskmanager.data.port, 6121
2017-06-28 20:28:22,674 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: taskmanager.hostname, 10.2.45.10
2017-06-28 20:28:24,249 INFO  org.apache.flink.runtime.blob.FileSystemBlobStore 
- Creating highly available BLOB storage directory at 
hdfs://hdfs:8020/flink/recovery//default/blob
2017-06-28 20:28:24,380 WARN  org.apache.flink.configuration.Configuration  
- Config uses deprecated configuration key 
'recovery.zookeeper.quorum' instead of proper key 
'high-availability.zookeeper.quorum'
2017-06-28 20:28:24,448 INFO  org.apache.flink.runtime.util.ZooKeeperUtils  
- Enforcing default ACL for ZK connections
2017-06-28 20:28:24,449 INFO  org.apache.flink.runtime.util.ZooKeeperUtils  
- Using '/flink/default' as Zookeeper namespace.

==> 
/usr/local/flink/log/flink--taskmanager-0-flink-taskmanager-3923888361-c6fdw.out
 <==
tail: unrecognized file system type 0x794c7630 for 
‘/usr/local/flink/log/flink--taskmanager-0-flink-taskmanager-3923888361-c6fdw.log’.
 please report this to bug-coreut...@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for 
‘/usr/local/flink/log/flink--taskmanager-0-flink-taskmanager-3923888361-c6fdw.out’.
 please report this to bug-coreut...@gnu.org. reverting to polling

==> 
/usr/local/flink/log/flink--taskmanager-0-flink-taskmanager-3923888361-c6fdw.log
 <==
2017-06-28 20:28:24,564 INFO  
org.apache.flink.shaded.org.apache.curator.framework.imps.CuratorFrameworkImpl  
- Starting
2017-06-28 20:28:24,569 INFO  org.apache.zookeeper.ZooKeeper
- Client 
environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, 
built on 03/23/2017 10:13 GMT
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:host.name=flink-taskmanager-3923888361-c6fdw
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:java.version=1.8.0_131
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:java.vendor=Oracle Corporation
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client 
environment:java.class.path=/usr/local/flink-1.3.1/lib/egads-0.1.jar:/usr/local/flink-1.3.1/lib/flink-connector-kinesis_2.11-1.3.1.jar:/usr/local/flink-1.3.1/lib/flink-connector-rabbitmq_2.11-1.3.1.jar:/usr/local/flink-1.3.1/lib/flink-metrics-statsd-1.3.1.jar:/usr/local/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/usr/local/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/usr/local/flink-1.3.1/lib/log4j-1.2.17.jar:/usr/local/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/usr/local/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client 
environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:java.io.tmpdir=/tmp
2017-06-28 20:28:24,570 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:java.compiler=
2017-06-28 20:28:24,571 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:os.name=Linux
2017-06-28 20:28:24,571 INFO  org.apache.zookeeper.ZooKeeper
- Client environment:os.arch=amd64
2017-

[jira] [Commented] (FLINK-7021) Flink Task Manager hangs on startup if one Zookeeper node is unresolvable

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067187#comment-16067187
 ] 

ASF GitHub Bot commented on FLINK-7021:
---

GitHub user skidder opened a pull request:

https://github.com/apache/flink/pull/4214

[FLINK-7021]

Fixes issue FLINK-7021 by adding an `UnhandledErrorListener` implementation 
to the Task Manager that will shut down the Task Manager if a non-retryable 
exception is raised while retrieving the Zookeeper leader.
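
For reference, a rough sketch of the mechanism (Curator's `UnhandledErrorListener` and `getUnhandledErrorListenable()` are real APIs; the class name and listener body are illustrative, not the PR's exact code):

```
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.UnhandledErrorListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class FailFastZooKeeperClient {

    public static CuratorFramework createClient(String quorum) {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                quorum, new ExponentialBackoffRetry(1000, 3));

        // Curator routes background exceptions (e.g. an UnknownHostException
        // while resolving the quorum) to unhandled-error listeners instead of
        // the caller; without one the process can hang silently.
        UnhandledErrorListener listener = (message, e) -> {
            throw new RuntimeException("Unrecoverable ZooKeeper error: " + message, e);
        };
        client.getUnhandledErrorListenable().addListener(listener);

        client.start();
        return client;
    }
}
```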

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/muxinc/flink FLINK-7021

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4214


commit 02cd39e5b5c8b7e1936776476566b5363b40004f
Author: Scott Kidder 
Date:   2017-06-28T14:48:33Z

[FLINK-7021] [core] Handle Zookeeper leader retrieval error in TaskManager 
and throw RuntimeException




> Flink Task Manager hangs on startup if one Zookeeper node is unresolvable
> -
>
> Key: FLINK-7021
> URL: https://issues.apache.org/jira/browse/FLINK-7021
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 1.2.0, 1.3.0, 1.2.1, 1.3.1
> Environment: Kubernetes cluster running:
> * Flink 1.3.0 Job Manager & Task Manager on Java 8u131
> * Zookeeper 3.4.10 cluster with 3 nodes
>Reporter: Scott Kidder
>
> h2. Problem
> Flink Task Manager will hang during startup if one of the Zookeeper nodes in 
> the Zookeeper connection string is unresolvable.
> h2. Expected Behavior
> Flink should retry name resolution & connection to Zookeeper nodes with 
> exponential back-off.
> h2. Environment Details
> We're running Flink and Zookeeper in Kubernetes on CoreOS. CoreOS can run in 
> a configuration that automatically detects and applies operating system 
> updates. We have a Zookeeper node running on the same CoreOS instance as 
> Flink. It's possible that the Zookeeper node will not yet be started when the 
> Flink components are started. This could cause hostname resolution of the 
> Zookeeper nodes to fail.
> h3. Flink Task Manager Logs
> {noformat}
> 2017-06-27 15:38:51,713 INFO  
> org.apache.flink.runtime.taskmanager.TaskManager  - Using 
> configured hostname/address for TaskManager: 10.2.45.11
> 2017-06-27 15:38:51,714 INFO  
> org.apache.flink.runtime.taskmanager.TaskManager  - Starting 
> TaskManager
> 2017-06-27 15:38:51,714 INFO  
> org.apache.flink.runtime.taskmanager.TaskManager  - Starting 
> TaskManager actor system at 10.2.45.11:6122.
> 2017-06-27 15:38:52,950 INFO  akka.event.slf4j.Slf4jLogger
>   - Slf4jLogger started
> 2017-06-27 15:38:53,079 INFO  Remoting
>   - Starting remoting
> 2017-06-27 15:38:53,573 INFO  Remoting
>   - Remoting started; listening on addresses 
> :[akka.tcp://flink@10.2.45.11:6122]
> 2017-06-27 15:38:53,576 INFO  
> org.apache.flink.runtime.taskmanager.TaskManager  - Starting 
> TaskManager actor
> 2017-06-27 15:38:53,660 INFO  
> org.apache.flink.runtime.io.network.netty.NettyConfig - NettyConfig 
> [server address: /10.2.45.11, server port: 6121, ssl enabled: false, memory 
> segment size (bytes): 32768, transport type: NIO, number of server threads: 2 
> (manual), number of client threads: 2 (manual), server connect backlog: 0 
> (use Netty's default), client connect timeout (sec): 120, send/receive buffer 
> size (bytes): 0 (use Netty's default)]
> 2017-06-27 15:38:53,682 INFO  
> org.apache.flink.runtime.taskexecutor.TaskManagerConfiguration  - Messages 
> have a max timeout of 1 ms
> 2017-06-27 15:38:53,688 INFO  
> org.apache.flink.runtime.taskexecutor.TaskManagerServices - Temporary 
> file directory '/tmp': total 49 GB, usable 42 GB (85.71% usable)
> 2017-06-27 15:38:54,071 INFO  
> org.apache.flink.runtime.io.network.buffer.NetworkBufferPool  - Allocated 96 
> MB for network buffer pool (number of memory segments: 3095, bytes per 
> segment: 32768).
> 2017-06-27 15:38:54,564 INFO  
> org.apache.flink.runtime.io.network.NetworkEnvironment- Starting the 
> network environment and its components.
> 2017-06-27 15:38:54,576 INFO  
> org.apache.flink.runtime.io.network.netty.NettyClient - Successful 
> initialization (took 4 ms).
> 2017-06-27 15:38:54,677 INFO  
> org.apache.flink.runtime.io.network.netty.NettyServer - Successful 
> initialization (took 101 ms). Listening on SocketAddress /10.2.45.11

[GitHub] flink pull request #4214: [FLINK-7021]

2017-06-28 Thread skidder
GitHub user skidder opened a pull request:

https://github.com/apache/flink/pull/4214

[FLINK-7021]

Fixes issue FLINK-7021 by adding an `UnhandledErrorListener` implementation 
to the Task Manager that will shut down the Task Manager if a non-retryable 
exception is raised while retrieving the Zookeeper leader.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/muxinc/flink FLINK-7021

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4214


commit 02cd39e5b5c8b7e1936776476566b5363b40004f
Author: Scott Kidder 
Date:   2017-06-28T14:48:33Z

[FLINK-7021] [core] Handle Zookeeper leader retrieval error in TaskManager 
and throw RuntimeException




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7025) Using NullByteKeySelector for Unbounded ProcTime NonPartitioned Over

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067160#comment-16067160
 ] 

ASF GitHub Bot commented on FLINK-7025:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4212
  
+1 to merge


> Using NullByteKeySelector for Unbounded ProcTime NonPartitioned Over
> 
>
> Key: FLINK-7025
> URL: https://issues.apache.org/jira/browse/FLINK-7025
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1, 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> We recently added the `Cleanup State` feature, but it does not work if we 
> enable state cleaning on an Unbounded ProcTime NonPartitioned Over window, 
> because `ProcessFunctionWithCleanupState` uses keyed state.
> So in this JIRA I'll change the `Unbounded ProcTime NonPartitioned Over` 
> to a `partitioned Over` by using a NullByteKeySelector, OR create a 
> `NonKeyedProcessFunctionWithCleanupState`. But I think the first way is 
> simpler. What do you think? [~fhueske]
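
For illustration, a standalone sketch of such a selector (Flink ships an equivalent {{NullByteKeySelector}}; the point is simply that every record maps to the same key, so keyed state becomes usable):

{code:java}
import org.apache.flink.api.java.functions.KeySelector;

/** Routes every record to the same dummy key so keyed state can be used. */
public class NullByteKeySelector<T> implements KeySelector<T, Byte> {

    @Override
    public Byte getKey(T value) {
        return (byte) 0;
    }
}
{code}

A non-partitioned over window can then run as {{stream.keyBy(new NullByteKeySelector<>())...}}, at the price of processing all records in a single parallel subtask.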



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4212: [FLINK-7025][table]Using NullByteKeySelector for Unbounde...

2017-06-28 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4212
  
+1 to merge


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7034) GraphiteReporter cannot recover from lost connection

2017-06-28 Thread Aleksandr (JIRA)
Aleksandr created FLINK-7034:


 Summary: GraphiteReporter cannot recover from lost connection
 Key: FLINK-7034
 URL: https://issues.apache.org/jira/browse/FLINK-7034
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Affects Versions: 1.3.0
Reporter: Aleksandr
Priority: Blocker


Flink currently uses version 3.1.0 of the Dropwizard metrics library, in which 
there is a [Bug|https://github.com/dropwizard/metrics/issues/694]. I think we 
should upgrade to version 3.1.1 or higher



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6584) Support multiple consecutive windows in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067133#comment-16067133
 ] 

ASF GitHub Bot commented on FLINK-6584:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4199#discussion_r124638860
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/common/WindowStartEndPropertiesRule.scala
 ---
@@ -54,34 +56,75 @@ class WindowStartEndPropertiesRule
 val project = call.rel(0).asInstanceOf[LogicalProject]
 val innerProject = call.rel(1).asInstanceOf[LogicalProject]
 val agg = call.rel(2).asInstanceOf[LogicalWindowAggregate]
+val window = agg.getWindow
 
-// Retrieve window start and end properties
+val isRowtime = isRowtimeAttribute(window.timeAttribute)
+val isProctime = isProctimeAttribute(window.timeAttribute)
+
+val startEndProperties = Seq(
+  NamedWindowProperty("w$start", WindowStart(window.aliasAttribute)),
+  NamedWindowProperty("w$end", WindowEnd(window.aliasAttribute)))
+
+// allow rowtime/proctime for rowtime windows and proctime for 
proctime windows
+val timeProperties = if (isRowtime) {
+  Seq(
+NamedWindowProperty("w$rowtime", 
RowtimeAttribute(window.aliasAttribute)),
+NamedWindowProperty("w$proctime", 
ProctimeAttribute(window.aliasAttribute)))
--- End diff --

It doesn't need it, but we don't forbid specifying a processing-time window 
after an event-time window.


> Support multiple consecutive windows in SQL
> ---
>
> Key: FLINK-6584
> URL: https://issues.apache.org/jira/browse/FLINK-6584
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Right now, the Table API supports multiple consecutive windows as follows:
> {code}
> val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 
> 'bigdec, 'string)
> val t = table
>   .window(Tumble over 2.millis on 'rowtime as 'w)
>   .groupBy('w)
>   .select('w.rowtime as 'rowtime, 'int.count as 'int)
>   .window(Tumble over 4.millis on 'rowtime as 'w2)
>   .groupBy('w2)
>   .select('w2.rowtime, 'w2.end, 'int.count)
> {code}
> Similar behavior should be supported by the SQL API as well. We need to 
> introduce a new auxiliary group function, but this should happen in sync with 
> Apache Calcite.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6584) Support multiple consecutive windows in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067132#comment-16067132
 ] 

ASF GitHub Bot commented on FLINK-6584:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4199#discussion_r124630671
  
--- Diff: 
flink-libraries/flink-table/src/main/java/org/apache/calcite/sql/fun/SqlGroupFunction.java
 ---
@@ -50,7 +50,8 @@
  * GROUP BY HOP(rowtime, INTERVAL '1' HOUR), productId
  * 
  */
-class SqlGroupFunction extends SqlFunction {
+// FLINK QUICK FIX
+public class SqlGroupFunction extends SqlFunction {
--- End diff --

Calcite 1.13 was released and we are planning to update the dependency. 
I guess we cannot simply drop this file and use Calcite's version, if we 
start modifying the code. 

Do we have plans to contribute the changes back or do we want to keep this 
file?


> Support multiple consecutive windows in SQL
> ---
>
> Key: FLINK-6584
> URL: https://issues.apache.org/jira/browse/FLINK-6584
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Right now, the Table API supports multiple consecutive windows as follows:
> {code}
> val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 
> 'bigdec, 'string)
> val t = table
>   .window(Tumble over 2.millis on 'rowtime as 'w)
>   .groupBy('w)
>   .select('w.rowtime as 'rowtime, 'int.count as 'int)
>   .window(Tumble over 4.millis on 'rowtime as 'w2)
>   .groupBy('w2)
>   .select('w2.rowtime, 'w2.end, 'int.count)
> {code}
> Similar behavior should be supported by the SQL API as well. We need to 
> introduce a new auxiliary group function, but this should happen in sync with 
> Apache Calcite.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4199: [FLINK-6584] [table] Support multiple consecutive ...

2017-06-28 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4199#discussion_r124630671
  
--- Diff: 
flink-libraries/flink-table/src/main/java/org/apache/calcite/sql/fun/SqlGroupFunction.java
 ---
@@ -50,7 +50,8 @@
  * GROUP BY HOP(rowtime, INTERVAL '1' HOUR), productId
  * 
  */
-class SqlGroupFunction extends SqlFunction {
+// FLINK QUICK FIX
+public class SqlGroupFunction extends SqlFunction {
--- End diff --

Calcite 1.13 was released and we are planning to update the dependency. 
I guess we cannot simply drop this file and use Calcite's version, if we 
start modifying the code. 

Do we have plans to contribute the changes back or do we want to keep this 
file?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4199: [FLINK-6584] [table] Support multiple consecutive ...

2017-06-28 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4199#discussion_r124638860
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/common/WindowStartEndPropertiesRule.scala
 ---
@@ -54,34 +56,75 @@ class WindowStartEndPropertiesRule
 val project = call.rel(0).asInstanceOf[LogicalProject]
 val innerProject = call.rel(1).asInstanceOf[LogicalProject]
 val agg = call.rel(2).asInstanceOf[LogicalWindowAggregate]
+val window = agg.getWindow
 
-// Retrieve window start and end properties
+val isRowtime = isRowtimeAttribute(window.timeAttribute)
+val isProctime = isProctimeAttribute(window.timeAttribute)
+
+val startEndProperties = Seq(
+  NamedWindowProperty("w$start", WindowStart(window.aliasAttribute)),
+  NamedWindowProperty("w$end", WindowEnd(window.aliasAttribute)))
+
+// allow rowtime/proctime for rowtime windows and proctime for 
proctime windows
+val timeProperties = if (isRowtime) {
+  Seq(
+NamedWindowProperty("w$rowtime", 
RowtimeAttribute(window.aliasAttribute)),
+NamedWindowProperty("w$proctime", 
ProctimeAttribute(window.aliasAttribute)))
--- End diff --

It doesn't need it, but we don't forbid specifying a processing-time window 
after an event-time window.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-7026) Add shaded asm dependency

2017-06-28 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7026.
---
Resolution: Fixed

Fixed in 2bae90ea80f335cb55cddc754aeeb33168430937

> Add shaded asm dependency
> -
>
> Key: FLINK-7026
> URL: https://issues.apache.org/jira/browse/FLINK-7026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7026) Add shaded asm dependency

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067120#comment-16067120
 ] 

ASF GitHub Bot commented on FLINK-7026:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink-shaded/pull/5


> Add shaded asm dependency
> -
>
> Key: FLINK-7026
> URL: https://issues.apache.org/jira/browse/FLINK-7026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7030) Build with scala-2.11 by default

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067114#comment-16067114
 ] 

ASF GitHub Bot commented on FLINK-7030:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4209
  
will merge this once travis passes.


> Build with scala-2.11 by default
> 
>
> Key: FLINK-7030
> URL: https://issues.apache.org/jira/browse/FLINK-7030
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As proposed recently on the dev mailing list.
> I propose to switch to Scala 2.11 as a default and to have a Scala 2.10 build 
> profile. Now it is the other way around. The reason for that is poor support 
> for build profiles in Intellij; I was unable to make it work after I added 
> the Kafka 0.11 dependency (Kafka 0.11 dropped support for Scala 2.10).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4209: [FLINK-7030] Build with scala-2.11 by default

2017-06-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4209
  
will merge this once travis passes.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6526) BlobStore files might become orphans in case of recovery

2017-06-28 Thread Gustavo Anatoly (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067104#comment-16067104
 ] 

Gustavo Anatoly commented on FLINK-6526:


Hi, Till...

I've been trying to reproduce this bug with this gist: 
[https://gist.github.com/gustavoanatoly/b7a29062a45201168362401346cede61]
Could you please review it?

> BlobStore files might become orphans in case of recovery
> 
>
> Key: FLINK-6526
> URL: https://issues.apache.org/jira/browse/FLINK-6526
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>
> The {{BlobStore}} is used to store {{BlobServer}} files persistently if HA is 
> enabled. The {{BlobLibraryCacheManager}} is responsible for keeping track of 
> a reference count for each file. Once the count is {{0}} the 
> {{BlobLibraryCacheManager}} will eventually delete this file from the 
> {{BlobServer}} and also the {{BlobStore}}. In case of recovery, the 
> {{BlobLibraryCacheManager}} will only recover those files which are actively 
> asked for (e.g. jar files of new job submission or job recovery). All other 
> files which might have had a reference count of {{0}} and were supposed to be 
> eventually deleted, won't be reregistered on the {{BlobLibraryCacheManager}}. 
> Consequently, these files will never be deleted and remain on the BlobStore 
> for all eternity.
> I think upon recovery, all files currently being held in the {{BlobStore}} 
> should be re-registered with the {{BlobLibraryCacheManager}} such that they 
> will be eventually deleted once they timed out with a reference count of 
> {{0}}.
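
A toy model of the proposed fix, assuming nothing about the real {{BlobLibraryCacheManager}} internals (class and method names are made up):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy model: re-register every blob found in the HA store on recovery. */
public class BlobRefCounts {

    private final Map<String, Integer> refCounts = new ConcurrentHashMap<>();

    public void register(String blobKey) {
        refCounts.merge(blobKey, 1, Integer::sum);
    }

    public void release(String blobKey) {
        // A count of 0 makes the blob eligible for eventual deletion.
        refCounts.merge(blobKey, -1, Integer::sum);
    }

    /** Seeds each stored blob with a count of 0 so that the regular
     *  timeout-based cleanup can eventually remove orphaned files. */
    public void recover(Iterable<String> blobsInStore) {
        for (String blobKey : blobsInStore) {
            refCounts.putIfAbsent(blobKey, 0);
        }
    }
}
{code}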



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7030) Build with scala-2.11 by default

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067070#comment-16067070
 ] 

ASF GitHub Bot commented on FLINK-7030:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4209
  
rebased and fixed conflict


> Build with scala-2.11 by default
> 
>
> Key: FLINK-7030
> URL: https://issues.apache.org/jira/browse/FLINK-7030
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As proposed recently on the dev mailing list.
> I propose to switch to Scala 2.11 as a default and to have a Scala 2.10 build 
> profile. Now it is the other way around. The reason for that is poor support 
> for build profiles in Intellij; I was unable to make it work after I added 
> the Kafka 0.11 dependency (Kafka 0.11 dropped support for Scala 2.10).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4209: [FLINK-7030] Build with scala-2.11 by default

2017-06-28 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4209
  
rebased and fixed conflict


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6725) make requiresOver as a contracted method in udagg

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067063#comment-16067063
 ] 

ASF GitHub Bot commented on FLINK-6725:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3993
  
How do we continue with this PR @shaoxuan-wang? 


> make requiresOver as a contracted method in udagg
> -
>
> Key: FLINK-6725
> URL: https://issues.apache.org/jira/browse/FLINK-6725
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: Shaoxuan Wang
>
> I realized requiresOver is defined in the udagg interface when I wrote up the 
> udagg doc. I would like to make requiresOver a contract method. This makes 
> the entire udagg interface consistent and clean.
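
One way to read "contract method" is a reflection-based lookup with a default, sketched below (purely illustrative, not the actual Table API code):

{code:java}
import java.lang.reflect.Method;

/** Sketch: treat requiresOver() as an optional contract method found by reflection. */
public final class ContractMethods {

    public static boolean requiresOver(Object aggFunction) {
        try {
            Method m = aggFunction.getClass().getMethod("requiresOver");
            return (boolean) m.invoke(aggFunction);
        } catch (NoSuchMethodException e) {
            // Contract method not implemented: fall back to the default.
            return false;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Failed to invoke requiresOver()", e);
        }
    }
}
{code}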



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3993: [FLINK-6725][table] make requiresOver as a contracted met...

2017-06-28 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3993
  
How do we continue with this PR @shaoxuan-wang? 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7032) Intellij is constantly changing language level of sub projects back to 1.6

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067061#comment-16067061
 ] 

ASF GitHub Bot commented on FLINK-7032:
---

GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4213

[FLINK-7032] Overwrite inherited properties from parent pom



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink java17

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4213.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4213


commit 257fc92a98dc1d28014bccc9bb3171e9a38062ab
Author: Piotr Nowojski 
Date:   2017-06-28T18:30:08Z

[FLINK-7032] Overwrite inherited properties from parent pom

Default values for compiler version are 1.6 and were causing Intellij to
constantly switch language level to 1.6, which in turn was causing
compilation errors. It worked fine when compiling from the console using
Maven, because those values are separately set in the maven-compiler-plugin
configuration.




> Intellij is constantly changing language level of sub projects back to 1.6 
> ---
>
> Key: FLINK-7032
> URL: https://issues.apache.org/jira/browse/FLINK-7032
> Project: Flink
>  Issue Type: Improvement
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> Every time I reimport the Maven projects, Intellij switches back to the 1.6 
> language level. I tracked this issue down to a misconfiguration in our pom.xml 
> file. It correctly configures maven-compiler-plugin:
> {code:xml}
>   <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-compiler-plugin</artifactId>
>     <version>3.1</version>
>     <configuration>
>       <source>${java.version}</source>
>       <target>${java.version}</target>
>       <compilerArgs>
>         <arg>-Xlint:all</arg>
>       </compilerArgs>
>     </configuration>
>   </plugin>
> {code}
> where ${java.version} is set to 1.7 in the properties, but it forgets to 
> overwrite the following properties from apache-18.pom:
> {code:xml}
>   <properties>
>     <maven.compiler.source>1.6</maven.compiler.source>
>     <maven.compiler.target>1.6</maven.compiler.target>
>   </properties>
> {code}
> It seems like compiling from console using maven ignores those values, but 
> they are confusing Intellij.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4213: [FLINK-7032] Overwrite inherited properties from p...

2017-06-28 Thread pnowojski
GitHub user pnowojski opened a pull request:

https://github.com/apache/flink/pull/4213

[FLINK-7032] Overwrite inherited properties from parent pom



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pnowojski/flink java17

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4213.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4213


commit 257fc92a98dc1d28014bccc9bb3171e9a38062ab
Author: Piotr Nowojski 
Date:   2017-06-28T18:30:08Z

[FLINK-7032] Overwrite inherited properties from parent pom

Default values for compiler version are 1.6 and were causing Intellij to
constantly switch language level to 1.6, which in turn was causing
compilation errors. It worked fine when compiling from the console using
Maven, because those values are separately set in the maven-compiler-plugin
configuration.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7033) Update Flink's MapR documentation to include default trust store for secure servers

2017-06-28 Thread Aniket (JIRA)
Aniket created FLINK-7033:
-

 Summary: Update Flink's MapR documentation to include default 
trust store for secure servers
 Key: FLINK-7033
 URL: https://issues.apache.org/jira/browse/FLINK-7033
 Project: Flink
  Issue Type: Improvement
Reporter: Aniket
Priority: Minor


As discussed at 
[http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/MapR-libraries-shading-issue-td13988.html]
 and at 
[https://community.mapr.com/message/60673-re-flink-with-mapr-shading-issues], 
we will need to update Flink's MapR documentation to include an extra JVM 
argument: {{-Djavax.net.ssl.trustStore=$JAVA_HOME/jre/lib/security/cacerts}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7032) Intellij is constantly changing language level of sub projects back to 1.6

2017-06-28 Thread Piotr Nowojski (JIRA)
Piotr Nowojski created FLINK-7032:
-

 Summary: Intellij is constantly changing language level of sub 
projects back to 1.6 
 Key: FLINK-7032
 URL: https://issues.apache.org/jira/browse/FLINK-7032
 Project: Flink
  Issue Type: Improvement
Reporter: Piotr Nowojski
Assignee: Piotr Nowojski


Every time I reimport the Maven projects, Intellij switches back to the 1.6 
language level. I tracked this issue down to a misconfiguration in our pom.xml 
file. It correctly configures maven-compiler-plugin:

{code:xml}
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.1</version>
    <configuration>
      <source>${java.version}</source>
      <target>${java.version}</target>
      <compilerArgs>
        <arg>-Xlint:all</arg>
      </compilerArgs>
    </configuration>
  </plugin>
{code}

where ${java.version} is set to 1.7 in the properties, but it forgets to 
overwrite the following properties from apache-18.pom:

{code:xml}
  <properties>
    <maven.compiler.source>1.6</maven.compiler.source>
    <maven.compiler.target>1.6</maven.compiler.target>
  </properties>
{code}

It seems like compiling from console using maven ignores those values, but they 
are confusing Intellij.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7031) Document Gelly examples

2017-06-28 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-7031:
-

 Summary: Document Gelly examples
 Key: FLINK-7031
 URL: https://issues.apache.org/jira/browse/FLINK-7031
 Project: Flink
  Issue Type: New Feature
  Components: Documentation
Affects Versions: 1.4.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Minor
 Fix For: 1.4.0


The components comprising the Gelly examples runner (inputs, outputs, drivers, 
and soon transforms) were initially developed for internal Gelly use. As such, 
the Gelly documentation covers execution of the drivers but does not document 
the design and structure. The runner has become sufficiently advanced and 
integral to the development of new Gelly algorithms to warrant a page of 
documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7030) Build with scala-2.11 by default

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066921#comment-16066921
 ] 

ASF GitHub Bot commented on FLINK-7030:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4209
  
+1, but a rebase is needed.


> Build with scala-2.11 by default
> 
>
> Key: FLINK-7030
> URL: https://issues.apache.org/jira/browse/FLINK-7030
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> As proposed recently on the dev mailing list.
> I propose to switch to Scala 2.11 as the default and to have a Scala 2.10 build 
> profile; now it is the other way around. The reason is poor support for build 
> profiles in IntelliJ: I was unable to make it work after I added the Kafka 0.11 
> dependency (Kafka 0.11 dropped support for Scala 2.10).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4209: [FLINK-7030] Build with scala-2.11 by default

2017-06-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4209
  
+1, but a rebase is needed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066901#comment-16066901
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4208
  
Will merge once travis is done.


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4208: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4208
  
Will merge once travis is done.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066900#comment-16066900
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4207
  
Will merge once travis is done.


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4207: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4207
  
Will merge once travis is done.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066898#comment-16066898
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4182


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4182


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066886#comment-16066886
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4182
  
Will address issues while merging.


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4182
  
Will address issues while merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7026) Add shaded asm dependency

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066817#comment-16066817
 ] 

ASF GitHub Bot commented on FLINK-7026:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink-shaded/pull/5
  
+1 to merge


> Add shaded asm dependency
> -
>
> Key: FLINK-7026
> URL: https://issues.apache.org/jira/browse/FLINK-7026
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6674) Update release 1.3 docs

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066814#comment-16066814
 ] 

ASF GitHub Bot commented on FLINK-6674:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4211#discussion_r124589466
  
--- Diff: docs/dev/migration.md ---
@@ -25,6 +25,62 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
+## Migrating from Flink 1.2 to Flink 1.3
+
+A few APIs have changed since Flink 1.2. Most of the changes are documented in
+their specific sections of the documentation. The following is a consolidated
+list of API changes, with links to migration details for upgrading to Flink 1.3.
+
+### `TypeSerializer` interface changes
+
+This is mostly relevant for users implementing custom `TypeSerializer`s for
+their state.
+
+Since Flink 1.3, two additional methods have been added that are related 
to serializer compatibility
+across savepoint restores. Please see
+[Handling serializer upgrades and 
compatibility](../stream/state.html#handling-serializer-upgrades-and-compatibility)
+for further details on how to implement these methods.
+
+### `ProcessFunction` is always a `RichFunction`
+
+In Flink 1.2, `ProcessFunction` and its rich variant `RichProcessFunction` were
+introduced. In Flink 1.3, `RichProcessFunction` was removed; `ProcessFunction`
+is now always a `RichFunction` with access to the lifecycle methods and the
+runtime context.
+
+### Flink CEP library API changes
+
+The CEP library in Flink 1.3 ships with a number of new features which 
have led to some changes in the API.
+Please visit the [CEP Migration 
docs](../libs/cep.html#migrating-from-an-older-flink-version) for details.
+
+### Table API Changes
+
+*TBA*
+
+### Queryable State client construction changes
+
+*TBA*
--- End diff --

I don't think these TBA sections are a good idea in the documentation.

Maybe we should comment them out before committing, so that they don't show up 
on the page.


> Update release 1.3 docs
> ---
>
> Key: FLINK-6674
> URL: https://issues.apache.org/jira/browse/FLINK-6674
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
> Fix For: 1.3.0
>
>
> Umbrella issue to track required updates to the documentation for the 1.3 
> release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
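
To make the `TypeSerializer` change above concrete, here is a minimal Scala 
sketch (not the actual Flink code; `MyPojo` and its snapshot class are 
hypothetical, and the serializer methods unchanged since 1.2 are omitted):

{code:scala}
import org.apache.flink.api.common.typeutils.{CompatibilityResult, TypeSerializer, TypeSerializerConfigSnapshot}

case class MyPojo(id: Long, name: String) // hypothetical state type

// Hypothetical snapshot of the serializer's configuration; a real snapshot
// would also persist whatever is needed to judge compatibility later.
class MyPojoSerializerConfigSnapshot extends TypeSerializerConfigSnapshot {
  override def getVersion: Int = 1
  override def equals(obj: Any): Boolean = obj.isInstanceOf[MyPojoSerializerConfigSnapshot]
  override def hashCode(): Int = getClass.hashCode
}

// Sketch only: serialize/deserialize/copy/... are unchanged from 1.2 and are
// omitted here, so the class is left abstract.
abstract class MyPojoSerializer extends TypeSerializer[MyPojo] {

  // New in 1.3: write a snapshot of this serializer's configuration into savepoints.
  override def snapshotConfiguration(): TypeSerializerConfigSnapshot =
    new MyPojoSerializerConfigSnapshot

  // New in 1.3: on restore, decide whether state written under the snapshotted
  // configuration can be read as-is or must be migrated.
  override def ensureCompatibility(
      configSnapshot: TypeSerializerConfigSnapshot): CompatibilityResult[MyPojo] =
    configSnapshot match {
      case _: MyPojoSerializerConfigSnapshot => CompatibilityResult.compatible()
      case _ => CompatibilityResult.requiresMigration()
    }
}
{code}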


[GitHub] flink pull request #4211: [FLINK-6674] [FLINK-6680] [docs] Update migration ...

2017-06-28 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4211#discussion_r124589466
  
--- Diff: docs/dev/migration.md ---
@@ -25,6 +25,62 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
+## Migrating from Flink 1.2 to Flink 1.3
+
+A few APIs have changed since Flink 1.2. Most of the changes are documented in
+their specific sections of the documentation. The following is a consolidated
+list of API changes, with links to migration details for upgrading to Flink 1.3.
+
+### `TypeSerializer` interface changes
+
+This is mostly relevant for users implementing custom `TypeSerializer`s for
+their state.
+
+Since Flink 1.3, two additional methods have been added that are related 
to serializer compatibility
+across savepoint restores. Please see
+[Handling serializer upgrades and 
compatibility](../stream/state.html#handling-serializer-upgrades-and-compatibility)
+for further details on how to implement these methods.
+
+### `ProcessFunction` is always a `RichFunction`
+
+In Flink 1.2, `ProcessFunction` and its rich variant `RichProcessFunction` were
+introduced. In Flink 1.3, `RichProcessFunction` was removed; `ProcessFunction`
+is now always a `RichFunction` with access to the lifecycle methods and the
+runtime context.
+
+### Flink CEP library API changes
+
+The CEP library in Flink 1.3 ships with a number of new features which 
have led to some changes in the API.
+Please visit the [CEP Migration 
docs](../libs/cep.html#migrating-from-an-older-flink-version) for details.
+
+### Table API Changes
+
+*TBA*
+
+### Queryable State client construction changes
+
+*TBA*
--- End diff --

I don't think these TBA sections are a good idea in the documentation.

Maybe we should comment them out before committing, so that they don't show up 
on the page.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
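
Relating to the `ProcessFunction` note in the migration diff above, a short 
sketch (assuming the Flink 1.3 streaming Scala API; the function itself is 
made up for illustration):

{code:scala}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

// Since ProcessFunction is now always a RichFunction, lifecycle methods such
// as open() are available directly -- no separate RichProcessFunction needed.
class PrefixingProcessFunction extends ProcessFunction[String, String] {

  @transient private var prefix: String = _

  override def open(parameters: Configuration): Unit = {
    // e.g. initialize from the runtime context, which is also available now
    prefix = s"subtask ${getRuntimeContext.getIndexOfThisSubtask}: "
  }

  override def processElement(
      value: String,
      ctx: ProcessFunction[String, String]#Context,
      out: Collector[String]): Unit = {
    out.collect(prefix + value)
  }
}
{code}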


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066811#comment-16066811
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4207
  
+1 to merge once travis has passed


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066812#comment-16066812
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4208
  
+1 to merge once travis has passed


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4208: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4208
  
+1 to merge once travis has passed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4207: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4207
  
+1 to merge once travis has passed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4182#discussion_r124588442
  
--- Diff: pom.xml ---
@@ -1021,6 +1021,7 @@ under the License.


				<exclude>tools/artifacts/**</exclude>
				<exclude>tools/flink*/**</exclude>
+				<exclude>apache-maven-3.2.5/**</exclude>
--- End diff --

Maybe add a comment why this exclusion is needed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066808#comment-16066808
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4182#discussion_r124588442
  
--- Diff: pom.xml ---
@@ -1021,6 +1021,7 @@ under the License.


				<exclude>tools/artifacts/**</exclude>
				<exclude>tools/flink*/**</exclude>
+				<exclude>apache-maven-3.2.5/**</exclude>
--- End diff --

Maybe add a comment why this exclusion is needed.


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066810#comment-16066810
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4182
  
+1 to merge once my concerns are addressed


> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/4182
  
+1 to merge once my concerns are addressed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066807#comment-16066807
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r124588270
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * Built-in scalar runtime functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
+  def concat(args: String*): String = {
+    val sb = new JStringBuffer
--- End diff --

Yes, we do not need the `synchronized` `append` method. Good catch!


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...)Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
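
As a standalone illustration of the semantics quoted above, a minimal Scala 
sketch (names are illustrative; this is not the actual flink-table 
implementation):

{code:scala}
import scala.annotation.varargs

object ConcatSketch {

  /** CONCAT: returns NULL if any argument is NULL, otherwise concatenates. */
  @varargs
  def concat(args: String*): String = {
    if (args.contains(null)) {
      null
    } else {
      val sb = new StringBuilder
      args.foreach(s => sb.append(s))
      sb.toString
    }
  }

  /** CONCAT_WS: a NULL separator yields NULL; NULL arguments after the
    * separator are skipped (the MySQL behavior referenced above). */
  @varargs
  def concatWs(separator: String, args: String*): String =
    if (separator == null) null
    else args.filter(_ != null).mkString(separator)

  def main(unused: Array[String]): Unit = {
    println(concat("F", "lin", "k"))                        // Flink
    println(concat("M", null, "L"))                         // null
    println(concatWs(",", "First name", null, "Last Name")) // First name,Last Name
  }
}
{code}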


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066806#comment-16066806
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r124588196
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * Built-in scalar runtime functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
+  def concat(args: String*): String = {
+    val sb = new JStringBuffer
--- End diff --

Yes, we do not need the `synchronized` `append` method. Good catch!


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...)Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-28 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r124588270
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * Built-in scalar runtime functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
+  def concat(args: String*): String = {
+    val sb = new JStringBuffer
--- End diff --

Yes, we do not need the `synchronized` `append` method. Good catch!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-28 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/4182#discussion_r124588048
  
--- Diff: .travis.yml ---
@@ -1,7 +1,8 @@
 # s3 deployment based on 
http://about.travis-ci.org/blog/2012-12-18-travis-artifacts/
 
 # send to container based infrastructure: 
http://docs.travis-ci.com/user/workers/container-based-infrastructure/
--- End diff --

This comment is actually invalid now. It's the "SUDO-ENABLED" infra.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

