[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737298#comment-15737298
 ] 

ASF GitHub Bot commented on FLINK-5084:
---

Github user mtunique commented on the issue:

https://github.com/apache/flink/pull/2977
  
Thanks @fhueske.
I fully agree: it makes sense to split the tests into a validation test class 
and a plan test class.
I will follow the suggestions.


> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases by tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query, 
> once with the Scala Expression API and once with the Java String API, and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}
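The comparison pattern above can be illustrated without a Flink dependency. The sketch below (all names are illustrative, not Flink's) derives one representation of a projection from a parsed Java-style expression string and one from typed values, then asserts that the two are equal, mirroring the logical-plan assertion:

```java
import java.util.Arrays;
import java.util.List;

// Minimal, Flink-free sketch of the test pattern above: build one
// representation from a parsed String expression and one from a typed
// API, then assert both are equal. All names here are illustrative.
public class PlanEqualitySketch {

    // Parse a comma-separated projection such as "c, g" into field names.
    static List<String> parseProjection(String expr) {
        return Arrays.asList(expr.replace(" ", "").split(","));
    }

    public static void main(String[] args) {
        List<String> fromStringApi = parseProjection("c, g");  // Java String API style
        List<String> fromTypedApi = Arrays.asList("c", "g");   // Scala expression style
        if (!fromStringApi.equals(fromTypedApi)) {
            throw new AssertionError("Logical Plans do not match");
        }
        System.out.println("plans match");
    }
}
```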



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Assigned] (FLINK-5225) Add interface to override parameter types of UDTF's eval method

2016-12-09 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-5225:
--

Assignee: Jark Wu

> Add interface to override parameter types of UDTF's eval method
> ---
>
> Key: FLINK-5225
> URL: https://issues.apache.org/jira/browse/FLINK-5225
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>
> {{ScalarFunction}} provides {{getParameterTypes()}}, which can be overridden 
> when the parameter types of the eval method cannot be determined 
> automatically. {{TableFunction}} is missing such a hook.
> Supporting it requires implementing a custom Calcite {{TableFunction}} and 
> overriding {{getParameters()}}. Currently, however, {{FlinkTableFunctionImpl}} 
> extends {{ReflectiveFunctionBase}}, and {{ReflectiveFunctionBase}} determines 
> the parameter types of the eval method from its signature.
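As a rough, Flink-free illustration of the issue (class and method names below are made up, not Flink's actual classes), a reflective base class infers the eval parameter types from the method signature, and an overridable hook lets a UDTF declare them explicitly when reflection falls short, e.g. for Object parameters:

```java
import java.lang.reflect.Method;

// Sketch only: a ReflectiveFunctionBase-style base class infers eval()
// parameter types by reflection; a subclass overrides the hook when
// reflection cannot determine the real types.
abstract class ReflectiveUdtfSketch {
    // Default: report the parameter types of the first eval method found.
    public Class<?>[] getParameterTypes() {
        for (Method m : getClass().getMethods()) {
            if (m.getName().equals("eval")) {
                return m.getParameterTypes();
            }
        }
        throw new IllegalStateException("no eval method found");
    }
}

class MyUdtf extends ReflectiveUdtfSketch {
    public void eval(Object value) { /* emit rows */ }

    // Reflection would report Object; override to declare the real type.
    @Override
    public Class<?>[] getParameterTypes() {
        return new Class<?>[] { String.class };
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new MyUdtf().getParameterTypes()[0].getSimpleName());
    }
}
```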





[jira] [Assigned] (FLINK-5223) Add documentation of UDTF in Table API & SQL

2016-12-09 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-5223:
--

Assignee: Jark Wu

> Add documentation of UDTF in Table API & SQL
> 
>
> Key: FLINK-5223
> URL: https://issues.apache.org/jira/browse/FLINK-5223
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
>






[jira] [Commented] (FLINK-4460) Side Outputs in Flink

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737206#comment-15737206
 ] 

ASF GitHub Bot commented on FLINK-4460:
---

Github user chenqin commented on the issue:

https://github.com/apache/flink/pull/2982
  
cc @aljoscha 


> Side Outputs in Flink
> -
>
> Key: FLINK-4460
> URL: https://issues.apache.org/jira/browse/FLINK-4460
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataStream API
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Chen Qin
>  Labels: latearrivingevents, sideoutput
>
> https://docs.google.com/document/d/1vg1gpR8JL4dM07Yu4NyerQhhVvBlde5qdqnuJv4LcV4/edit?usp=sharing







[jira] [Commented] (FLINK-4693) Add session group-windows for batch tables

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737026#comment-15737026
 ] 

ASF GitHub Bot commented on FLINK-4693:
---

GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/2983

[FLINK-4693][tableApi] Add session group-windows for batch tables

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following checklist into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [×] General
  - The pull request references the related JIRA issue ("[FLINK-4693] Add 
session group-windows for batch tables")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [×] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink 
FLIP11-Batch-FLINK-4693-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2983.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2983


commit 7c57b8c8e52380f6b08dcc152a32d0e018e39cb0
Author: Jincheng Sun 
Date:   2016-12-01T09:04:44Z

[FLINK-4693][tableApi] Add session group-windows for batch tables

commit 2b9fb1e9948e9de78e8dbafdf2fc1a87c7614d45
Author: Jincheng Sun 
Date:   2016-12-10T02:12:25Z

[FLINK-4693][tableApi] Repair expiration methods.




> Add session group-windows for batch tables
> ---
>
> Key: FLINK-4693
> URL: https://issues.apache.org/jira/browse/FLINK-4693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: sunjincheng
>
> Add Session group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  
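As a rough illustration of session group-window semantics (independent of the actual Flink implementation in this PR), a session window merges events into the same group while the gap between consecutive timestamps stays below a session timeout:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of session-window grouping: sorted event timestamps belong to
// the same session while consecutive gaps are smaller than `gap`.
public class SessionWindowSketch {

    static List<List<Long>> sessionize(List<Long> sortedTimestamps, long gap) {
        List<List<Long>> sessions = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        for (long ts : sortedTimestamps) {
            if (!current.isEmpty() && ts - current.get(current.size() - 1) >= gap) {
                sessions.add(current);          // gap exceeded: close the session
                current = new ArrayList<>();
            }
            current.add(ts);
        }
        if (!current.isEmpty()) {
            sessions.add(current);
        }
        return sessions;
    }

    public static void main(String[] args) {
        // With a gap of 10, {1, 5} and {20, 25} form two sessions.
        System.out.println(sessionize(List.of(1L, 5L, 20L, 25L), 10));
    }
}
```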







[jira] [Commented] (FLINK-4693) Add session group-windows for batch tables

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15737014#comment-15737014
 ] 

ASF GitHub Bot commented on FLINK-4693:
---

Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/2942


> Add session group-windows for batch tables
> ---
>
> Key: FLINK-4693
> URL: https://issues.apache.org/jira/browse/FLINK-4693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: sunjincheng
>
> Add Session group-windows for batch tables as described in 
> [FLIP-11|https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations].
>  







[jira] [Commented] (FLINK-4460) Side Outputs in Flink

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736976#comment-15736976
 ] 

ASF GitHub Bot commented on FLINK-4460:
---

GitHub user chenqin opened a pull request:

https://github.com/apache/flink/pull/2982

[FLINK-4460] Side Outputs in Flink


[FLIP-13](https://cwiki.apache.org/confluence/display/FLINK/FLIP-13+Side+Outputs+in+Flink)
 exposes side outputs with `OutputTag`.

For user functions that take a `Collector collector` parameter:
 - a utility class `CollectorWrapper wrapper = new 
CollectorWrapper(collector);` can emit side-output elements via 
`wrapper.collect(OutputTag tag, sideout)`, and `getSideOutput(OutputTag tag)` 
on `SingleOutputStreamOperator` returns the side-output DataStream.
 - OutputTags of the same type can carry different values; getSideOutput 
only exposes elements whose OutputTag matches in both type and value.

An element is sent to the late-arriving side output if
- the time characteristic is set to eventTime,
- isLate returns true for all assigned windows, and
- the event timestamp is no later than currentWatermark + allowedLateness.
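A self-contained sketch of the tag-and-value matching described above (plain Java, not Flink's actual classes) could look like:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the OutputTag/CollectorWrapper idea (not Flink's real API):
// elements collected under a tag are only visible through getSideOutput
// with a tag that compares equal, i.e. matches in value.
public class SideOutputSketch {

    static final class OutputTag<T> {
        final String id;
        OutputTag(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof OutputTag && ((OutputTag<?>) o).id.equals(id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }

    static final class CollectorWrapper {
        private final Map<OutputTag<?>, List<Object>> sideOutputs = new HashMap<>();

        <T> void collect(OutputTag<T> tag, T element) {
            sideOutputs.computeIfAbsent(tag, k -> new ArrayList<>()).add(element);
        }

        @SuppressWarnings("unchecked")
        <T> List<T> getSideOutput(OutputTag<T> tag) {
            return (List<T>) sideOutputs.getOrDefault(tag, new ArrayList<>());
        }
    }

    public static void main(String[] args) {
        CollectorWrapper wrapper = new CollectorWrapper();
        OutputTag<String> late = new OutputTag<>("late");
        OutputTag<String> other = new OutputTag<>("other");
        wrapper.collect(late, "event-1");
        wrapper.collect(other, "event-2");  // same element type, different tag value
        System.out.println(wrapper.getSideOutput(late));  // only tag-matching elements
    }
}
```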

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenqin/flink flip

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2982.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2982


commit de674f19fcbe9955cb4208ef0938fe5b0f7adc90
Author: Chen Qin 
Date:   2016-10-21T19:38:04Z

allow multiple output streams

commit 3d91e6c69dbfbcb2c73dcc37ac2d8ed637a374eb
Author: Chen Qin 
Date:   2016-11-29T21:24:09Z

Merge branch 'master' into flip

commit 977b2d7fc54e1f9663a5ceb8a62ed2af5a955ca6
Author: Chen Qin 
Date:   2016-12-01T22:19:56Z

allow multiple OutputTag with same type
implement windowoperator late arriving events
add unit/integration tests




> Side Outputs in Flink
> -
>
> Key: FLINK-4460
> URL: https://issues.apache.org/jira/browse/FLINK-4460
> Project: Flink
>  Issue Type: New Feature
>  Components: Core, DataStream API
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Chen Qin
>  Labels: latearrivingevents, sideoutput
>
> https://docs.google.com/document/d/1vg1gpR8JL4dM07Yu4NyerQhhVvBlde5qdqnuJv4LcV4/edit?usp=sharing









[jira] [Commented] (FLINK-5298) TaskManager crashes when TM log nonexistent

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736736#comment-15736736
 ] 

ASF GitHub Bot commented on FLINK-5298:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2974
  
@zentol I am fine with the 'error' behavior.


> TaskManager crashes when TM log nonexistent
> 
>
> Key: FLINK-5298
> URL: https://issues.apache.org/jira/browse/FLINK-5298
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager, Webfrontend
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Mischa Krüger
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {code}
> java.io.FileNotFoundException: flink-taskmanager.out (No such file or 
> directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.(FileInputStream.java:138)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
> at akka.dispatch.Mailbox.run(Mailbox.scala:221)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Stopping 
> TaskManager akka://flink/user/taskmanager#1361882659.
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - 
> Disassociating from JobManager
> 2016-12-08 16:45:14,997 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - I/O manager 
> removed spill file directory 
> /tmp/flink-io-e61f717b-630c-4a2a-b3e3-62ccb40aa2f9
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.network.NetworkEnvironment- Shutting down 
> the network environment and its components.
> 2016-12-08 16:45:15,008 INFO  
> org.apache.flink.runtime.io.network.netty.NettyClient - Successful 
> shutdown (took 1 ms).
> 2016-12-08 16:45:15,009 INFO  
> org.apache.flink.runtime.io.network.netty.NettyServer - Successful 
> shutdown (took 0 ms).
> 2016-12-08 16:45:15,020 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Task 
> manager akka://flink/user/taskmanager is completely shut down.
> 2016-12-08 16:45:15,023 ERROR 
> org.apache.flink.runtime.taskmanager.TaskManager  - Actor 
> akka://flink/user/taskmanager#1361882659 terminated, stopping process...
> {code}





[jira] [Commented] (FLINK-2254) Add Bipartite Graph Support for Gelly

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736510#comment-15736510
 ] 

ASF GitHub Bot commented on FLINK-2254:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2564
  
Hi @vasia, thank you for merging my PR.
Thank you for the reminder about the documentation. I've created the JIRA 
for it: https://issues.apache.org/jira/browse/FLINK-5311



> Add Bipartite Graph Support for Gelly
> -
>
> Key: FLINK-2254
> URL: https://issues.apache.org/jira/browse/FLINK-2254
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Andra Lungu
>Assignee: Ivan Mushketyk
>  Labels: requires-design-doc
>
> A bipartite graph is a graph whose set of vertices can be divided into two 
> disjoint sets such that every edge has its source vertex in the first set 
> and its target vertex in the second set. We would like to support efficient 
> operations for this type of graph, along with a set of metrics 
> (http://jponnela.com/web_documents/twomode.pdf). 
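The bipartite property described above can be checked with a standard two-coloring: a graph is bipartite iff its vertices can be colored with two colors so that every edge connects the two color classes. A plain-Java BFS sketch (independent of Gelly's API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// BFS two-coloring: returns true iff the (undirected) graph is bipartite.
public class BipartiteCheck {

    static boolean isBipartite(Map<Integer, List<Integer>> adj) {
        Map<Integer, Integer> color = new HashMap<>();
        for (Integer start : adj.keySet()) {
            if (color.containsKey(start)) continue;
            color.put(start, 0);
            Deque<Integer> queue = new ArrayDeque<>(List.of(start));
            while (!queue.isEmpty()) {
                int v = queue.poll();
                for (int w : adj.getOrDefault(v, List.of())) {
                    if (!color.containsKey(w)) {
                        color.put(w, 1 - color.get(v));  // put on the opposite side
                        queue.add(w);
                    } else if (color.get(w).equals(color.get(v))) {
                        return false;  // edge within one side: not bipartite
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A 4-cycle is bipartite. Undirected edges are listed both ways.
        Map<Integer, List<Integer>> square = Map.of(
            1, List.of(2, 4), 2, List.of(1, 3),
            3, List.of(2, 4), 4, List.of(3, 1));
        System.out.println(isBipartite(square));
    }
}
```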







[jira] [Created] (FLINK-5311) Write user documentation for BipartiteGraph

2016-12-09 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5311:
-

 Summary: Write user documentation for BipartiteGraph
 Key: FLINK-5311
 URL: https://issues.apache.org/jira/browse/FLINK-5311
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Reporter: Ivan Mushketyk
Assignee: Ivan Mushketyk


We need to add user documentation. The progress on BipartiteGraph can be 
tracked in the following JIRA:
https://issues.apache.org/jira/browse/FLINK-2254





[jira] [Commented] (FLINK-2254) Add Bipartite Graph Support for Gelly

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736459#comment-15736459
 ] 

ASF GitHub Bot commented on FLINK-2254:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2564
  
Thanks for the reminder @vasia. The separate JIRA sub-task does allow for a 
discussion of how best to document the full set of proposed bipartite 
functionality.


> Add Bipartite Graph Support for Gelly
> -
>
> Key: FLINK-2254
> URL: https://issues.apache.org/jira/browse/FLINK-2254
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Andra Lungu
>Assignee: Ivan Mushketyk
>  Labels: requires-design-doc
>
> A bipartite graph is a graph whose set of vertices can be divided into two 
> disjoint sets such that every edge has its source vertex in the first set 
> and its target vertex in the second set. We would like to support efficient 
> operations for this type of graph, along with a set of metrics 
> (http://jponnela.com/web_documents/twomode.pdf). 







[jira] [Commented] (FLINK-5298) TaskManager crashes when TM log nonexistent

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736346#comment-15736346
 ] 

ASF GitHub Bot commented on FLINK-5298:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2974
  
It should display "Fetching TaskManager log failed." and log the 
exception (see `TaskManagerLogHandler#respondAsLeader()`: 
`logPathFuture.exceptionally(...)`).

This isn't a case that can only happen on Mesos. If the log is deleted 
while the TM is running we have the exact same problem, except in that case it 
is in fact an error and should be displayed as such. The same applies if the 
logging is broken.

I agree that we should display something different if we know that no log 
file should exist; how, or whether, we can find that out, however, I simply 
don't know. That's maybe something that you could weigh in on.
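The guard being discussed could be sketched (independently of Flink's actual handler code, with made-up method names) as an existence check that turns a missing file into an error message instead of an uncaught FileNotFoundException:

```java
import java.io.File;

// Sketch of the discussed behavior: check that the log file exists before
// opening it, and report a failure message instead of crashing.
public class LogFetchSketch {

    static String fetchLog(File logFile) {
        if (!logFile.exists()) {
            // Missing file: report instead of throwing FileNotFoundException.
            return "Fetching TaskManager log failed.";
        }
        return "serving " + logFile.getName();
    }

    public static void main(String[] args) {
        // A deliberately nonexistent path to exercise the error branch.
        System.out.println(fetchLog(new File("/nonexistent/flink-taskmanager.out")));
    }
}
```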



> TaskManager crashes when TM log nonexistent
> 
>
> Key: FLINK-5298
> URL: https://issues.apache.org/jira/browse/FLINK-5298
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager, Webfrontend
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Mischa Krüger
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {code}
> java.io.FileNotFoundException: flink-taskmanager.out (No such file or 
> directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.(FileInputStream.java:138)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
> at akka.dispatch.Mailbox.run(Mailbox.scala:221)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Stopping 
> TaskManager akka://flink/user/taskmanager#1361882659.
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - 
> Disassociating from JobManager
> 2016-12-08 16:45:14,997 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - I/O manager 
> removed spill file directory 
> /tmp/flink-io-e61f717b-630c-4a2a-b3e3-62ccb40aa2f9
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.network.NetworkEnvironment- Shutting down 
> the network environment and its components.
> 2016-12-08 16:45:15,008 INFO  
> org.apache.flink.runtime.io.network.netty.NettyClient - Successful 
> shutdown (took 1 ms).
> 2016-12-08 16:45:15,009 INFO  
> org.apache.flink.runtime.io.network.netty.NettyServer - Successful 
> shutdown (took 0 ms).
> 2016-12-08 16:45:15,020 INFO  
> 

[GitHub] flink issue #2974: [FLINK-5298] TM checks that log file exists

2016-12-09 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2974
  
It should display "Fetching TaskManager log failed.", and log the 
exception. (see `TaskManagerLogHandler#respondAsLeader(): 
logPathFuture.exceptionally(...))

This isn't a case that can only happen on Mesos. If the log was is deleted 
while the TM is running we have the exact same problem, except in this case it 
is in fact an error and should be displayed as such. Same if the logging is 
broken.

I agree that we should display something different if we know that no log 
file should exist; how or whether we can find that out, however, I simply don't 
know. That's maybe something you could weigh in on.
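The `exceptionally(...)` fallback referenced above can be sketched in isolation. The future below is a hypothetical stand-in for the handler's `logPathFuture`, not the actual Flink code, and `failedFuture` requires Java 9+:

```java
import java.io.FileNotFoundException;
import java.util.concurrent.CompletableFuture;

public class LogPathFallback {
    public static void main(String[] args) {
        // Hypothetical stand-in for the handler's logPathFuture: it fails
        // because the log file is missing.
        CompletableFuture<String> logPathFuture = CompletableFuture.failedFuture(
            new FileNotFoundException("flink-taskmanager.out (No such file or directory)"));

        // exceptionally(...) maps the failure to a user-facing message
        // instead of letting the exception propagate to the web frontend.
        String response = logPathFuture
            .exceptionally(err -> "Fetching TaskManager log failed.")
            .join();

        System.out.println(response); // prints "Fetching TaskManager log failed."
    }
}
```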



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2974: [FLINK-5298] TM checks that log file exists

2016-12-09 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2974
  
This doesn't really address the 'root cause' here: that the .out file is 
missing (for Mesos deployments). While we could change `mesos-taskmanager.sh` 
to redirect the output, I hesitate to, because Mesos already redirects the 
output to 'stdout' and 'stderr' and has log-rolling features too. Therefore I 
think it would be a step backwards to redirect to `flink-taskmanager.out`.

So, I think Flink should treat the lack of a log as a 'not applicable' 
situation, not an 'error' situation.





[jira] [Commented] (FLINK-5298) TaskManager crashes when TM log not existant

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736276#comment-15736276
 ] 

ASF GitHub Bot commented on FLINK-5298:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2974
  
This doesn't really address the 'root cause' here: that the .out file is 
missing (for Mesos deployments). While we could change `mesos-taskmanager.sh` 
to redirect the output, I hesitate to, because Mesos already redirects the 
output to 'stdout' and 'stderr' and has log-rolling features too. Therefore I 
think it would be a step backwards to redirect to `flink-taskmanager.out`.

So, I think Flink should treat the lack of a log as a 'not applicable' 
situation, not an 'error' situation.



> TaskManager crashes when TM log not existant
> 
>
> Key: FLINK-5298
> URL: https://issues.apache.org/jira/browse/FLINK-5298
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager, Webfrontend
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Mischa Krüger
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {code}
> java.io.FileNotFoundException: flink-taskmanager.out (No such file or 
> directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
> at akka.dispatch.Mailbox.run(Mailbox.scala:221)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Stopping 
> TaskManager akka://flink/user/taskmanager#1361882659.
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - 
> Disassociating from JobManager
> 2016-12-08 16:45:14,997 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - I/O manager 
> removed spill file directory 
> /tmp/flink-io-e61f717b-630c-4a2a-b3e3-62ccb40aa2f9
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.network.NetworkEnvironment- Shutting down 
> the network environment and its components.
> 2016-12-08 16:45:15,008 INFO  
> org.apache.flink.runtime.io.network.netty.NettyClient - Successful 
> shutdown (took 1 ms).
> 2016-12-08 16:45:15,009 INFO  
> org.apache.flink.runtime.io.network.netty.NettyServer - Successful 
> shutdown (took 0 ms).
> 2016-12-08 16:45:15,020 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Task 
> manager akka://flink/user/taskmanager is completely 

[jira] [Commented] (FLINK-2254) Add Bipartite Graph Support for Gelly

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736260#comment-15736260
 ] 

ASF GitHub Bot commented on FLINK-2254:
---

Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2564
  
Thank you both for your work @mushketyk and @greghogan!
Please keep in mind that we should always add documentation for every new 
feature, especially a big one such as support for a new graph type. We've added 
the checklist template for each new PR so that we don't forget about it :)
Can you please open a JIRA to track that the documentation for bipartite graphs 
is missing? Thank you!


> Add Bipartite Graph Support for Gelly
> -
>
> Key: FLINK-2254
> URL: https://issues.apache.org/jira/browse/FLINK-2254
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Andra Lungu
>Assignee: Ivan Mushketyk
>  Labels: requires-design-doc
>
> A bipartite graph is a graph for which the set of vertices can be divided 
> into two disjoint sets such that each edge having a source vertex in the 
> first set, will have a target vertex in the second set. We would like to 
> support efficient operations for this type of graphs along with a set of 
> metrics(http://jponnela.com/web_documents/twomode.pdf). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2564: [FLINK-2254] Add BipartiateGraph class

2016-12-09 Thread vasia
Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2564
  
Thank you both for your work @mushketyk and @greghogan!
Please keep in mind that we should always add documentation for every new 
feature, especially a big one such as support for a new graph type. We've added 
the checklist template for each new PR so that we don't forget about it :)
Can you please open a JIRA to track that the documentation for bipartite graphs 
is missing? Thank you!




[jira] [Closed] (FLINK-4646) Add BipartiteGraph class

2016-12-09 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-4646.
-
Resolution: Implemented

Thanks for your contribution [~ivan.mushketyk]!

Implemented in 365cd987cc90fa9b399acbb4fe0af3f995f604e3

PR #2564 is attached to FLINK-2254.

> Add BipartiteGraph class
> 
>
> Key: FLINK-4646
> URL: https://issues.apache.org/jira/browse/FLINK-4646
> Project: Flink
>  Issue Type: Sub-task
>  Components: Gelly
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> Implement a class to represent a bipartite graph in Flink Gelly. Design 
> discussions can be found in the parent task.





[jira] [Commented] (FLINK-2254) Add Bipartite Graph Support for Gelly

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736234#comment-15736234
 ] 

ASF GitHub Bot commented on FLINK-2254:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2564


> Add Bipartite Graph Support for Gelly
> -
>
> Key: FLINK-2254
> URL: https://issues.apache.org/jira/browse/FLINK-2254
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Andra Lungu
>Assignee: Ivan Mushketyk
>  Labels: requires-design-doc
>
> A bipartite graph is a graph for which the set of vertices can be divided 
> into two disjoint sets such that each edge having a source vertex in the 
> first set, will have a target vertex in the second set. We would like to 
> support efficient operations for this type of graphs along with a set of 
> metrics(http://jponnela.com/web_documents/twomode.pdf). 





[GitHub] flink pull request #2564: [FLINK-2254] Add BipartiateGraph class

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2564




[jira] [Commented] (FLINK-5298) TaskManager crashes when TM log not existant

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736215#comment-15736215
 ] 

ASF GitHub Bot commented on FLINK-5298:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2974
  
@zentol what is the new behavior?  Does the webui show the IOException?


> TaskManager crashes when TM log not existant
> 
>
> Key: FLINK-5298
> URL: https://issues.apache.org/jira/browse/FLINK-5298
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager, Webfrontend
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Mischa Krüger
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.2.0
>
>
> {code}
> java.io.FileNotFoundException: flink-taskmanager.out (No such file or 
> directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleRequestTaskManagerLog(TaskManager.scala:833)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:340)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:44)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
> at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
> at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
> at akka.dispatch.Mailbox.run(Mailbox.scala:221)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Stopping 
> TaskManager akka://flink/user/taskmanager#1361882659.
> 2016-12-08 16:45:14,995 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - 
> Disassociating from JobManager
> 2016-12-08 16:45:14,997 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - I/O manager 
> removed spill file directory 
> /tmp/flink-io-e61f717b-630c-4a2a-b3e3-62ccb40aa2f9
> 2016-12-08 16:45:15,006 INFO  
> org.apache.flink.runtime.io.network.NetworkEnvironment- Shutting down 
> the network environment and its components.
> 2016-12-08 16:45:15,008 INFO  
> org.apache.flink.runtime.io.network.netty.NettyClient - Successful 
> shutdown (took 1 ms).
> 2016-12-08 16:45:15,009 INFO  
> org.apache.flink.runtime.io.network.netty.NettyServer - Successful 
> shutdown (took 0 ms).
> 2016-12-08 16:45:15,020 INFO  
> org.apache.flink.mesos.runtime.clusterframework.MesosTaskManager  - Task 
> manager akka://flink/user/taskmanager is completely shut down.
> 2016-12-08 16:45:15,023 ERROR 
> org.apache.flink.runtime.taskmanager.TaskManager  - Actor 
> akka://flink/user/taskmanager#1361882659 terminated, stopping process...
> {code}
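A minimal sketch of what the PR title describes, i.e. checking for the log file before opening it; the class and method names below are illustrative, not the actual Flink change:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;

public class LogFileCheck {
    // Hypothetical sketch: verify the log file exists before opening it, so
    // the handler can answer the web request with an error message instead of
    // letting the FileNotFoundException crash the TaskManager actor.
    static FileInputStream openLog(File logFile) throws FileNotFoundException {
        if (!logFile.isFile()) {
            throw new FileNotFoundException(
                "TaskManager log file " + logFile.getName() + " does not exist.");
        }
        return new FileInputStream(logFile);
    }

    public static void main(String[] args) {
        File missing = new File(System.getProperty("java.io.tmpdir"),
            "no-such-log-" + System.nanoTime() + ".out");
        try {
            openLog(missing);
        } catch (FileNotFoundException e) {
            // The caller reports the failure instead of terminating the process.
            System.out.println(e.getMessage());
        }
    }
}
```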





[GitHub] flink issue #2974: [FLINK-5298] TM checks that log file exists

2016-12-09 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/2974
  
@zentol what is the new behavior?  Does the webui show the IOException?




[jira] [Commented] (FLINK-5147) StreamingOperatorsITCase.testGroupedFoldOperation failed on Travis

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15736060#comment-15736060
 ] 

ASF GitHub Bot commented on FLINK-5147:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2859
  
@rmetzger ty for review, merging.


> StreamingOperatorsITCase.testGroupedFoldOperation failed on Travis
> --
>
> Key: FLINK-5147
> URL: https://issues.apache.org/jira/browse/FLINK-5147
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.1.3
> Environment: https://travis-ci.org/apache/flink/jobs/177675906
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> The test failed with the following exception:
> {code}
> testGroupedFoldOperation(org.apache.flink.test.streaming.api.StreamingOperatorsITCase)
>   Time elapsed: 0.573 sec  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:905)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.flink.core.fs.local.LocalFileSystem.delete(LocalFileSystem.java:187)
>   at 
> org.apache.flink.core.fs.FileSystem.initOutPathLocalFS(FileSystem.java:632)
>   at 
> org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:239)
>   at 
> org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:78)
>   at 
> org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction.open(OutputFormatSinkFunction.java:60)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:154)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:383)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:259)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:650)
>   at java.lang.Thread.run(Thread.java:745)
> testGroupedFoldOperation(org.apache.flink.test.streaming.api.StreamingOperatorsITCase)
>   Time elapsed: 0.573 sec  <<< FAILURE!
> java.lang.AssertionError: Different number of lines in expected and obtained 
> result. expected:<4> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:316)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:302)
>   at 
> org.apache.flink.test.streaming.api.StreamingOperatorsITCase.after(StreamingOperatorsITCase.java:63)
> {code}
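The NPE in `LocalFileSystem.delete()` is consistent with `File.listFiles()` returning null; the sketch below shows the kind of guard such a fix would add. The method name and error handling are assumptions, not the actual patch:

```java
import java.io.File;
import java.io.IOException;

public class SafeDelete {
    // File.listFiles() returns null -- not an empty array -- when the path is
    // not a directory or an I/O error occurs, so the result must be checked
    // before iterating to avoid a NullPointerException.
    static void deleteRecursively(File dir) throws IOException {
        File[] children = dir.listFiles();
        if (children == null) {
            // Directory vanished concurrently or could not be read
            throw new IOException("Could not list contents of " + dir);
        }
        for (File child : children) {
            if (child.isDirectory()) {
                deleteRecursively(child);
            } else {
                child.delete();
            }
        }
        dir.delete();
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"),
            "safe-delete-demo-" + System.nanoTime());
        new File(tmp, "sub").mkdirs();
        deleteRecursively(tmp);
        System.out.println(tmp.exists()); // prints "false"
    }
}
```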





[GitHub] flink issue #2859: [FLINK-5147] Prevent NPE in LocalFS#delete()

2016-12-09 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2859
  
@rmetzger ty for review, merging.




[GitHub] flink pull request #2981: [docker] improve Dockerfile host configuration

2016-12-09 Thread mxm
GitHub user mxm opened a pull request:

https://github.com/apache/flink/pull/2981

[docker] improve Dockerfile host configuration

- configure job manager address for both operation modes
- introduce argument to specify the external job manager address
- replace ARG with ENV for backwards-compatibility
- EXPOSE web port and RPC port

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mxm/flink docker

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2981.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2981


commit b0e19d6c2d5bea45e2b87ff63c01c07996ef665c
Author: Maximilian Michels 
Date:   2016-12-09T16:58:30Z

[docker] improve Dockerfile host configuration

- configure job manager address for both operation modes
- introduce argument to specify the external job manager address
- replace ARG with ENV for backwards-compatibility
- EXPOSE web port and RPC port






[jira] [Commented] (FLINK-5147) StreamingOperatorsITCase.testGroupedFoldOperation failed on Travis

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735939#comment-15735939
 ] 

ASF GitHub Bot commented on FLINK-5147:
---

Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2859
  
+1 to merge


> StreamingOperatorsITCase.testGroupedFoldOperation failed on Travis
> --
>
> Key: FLINK-5147
> URL: https://issues.apache.org/jira/browse/FLINK-5147
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.1.3
> Environment: https://travis-ci.org/apache/flink/jobs/177675906
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> The test failed with the following exception:
> {code}
> testGroupedFoldOperation(org.apache.flink.test.streaming.api.StreamingOperatorsITCase)
>   Time elapsed: 0.573 sec  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:905)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:848)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.flink.core.fs.local.LocalFileSystem.delete(LocalFileSystem.java:187)
>   at 
> org.apache.flink.core.fs.FileSystem.initOutPathLocalFS(FileSystem.java:632)
>   at 
> org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:239)
>   at 
> org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:78)
>   at 
> org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction.open(OutputFormatSinkFunction.java:60)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:154)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:383)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:259)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:650)
>   at java.lang.Thread.run(Thread.java:745)
> testGroupedFoldOperation(org.apache.flink.test.streaming.api.StreamingOperatorsITCase)
>   Time elapsed: 0.573 sec  <<< FAILURE!
> java.lang.AssertionError: Different number of lines in expected and obtained 
> result. expected:<4> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:316)
>   at 
> org.apache.flink.test.util.TestBaseUtils.compareResultsByLinesInMemory(TestBaseUtils.java:302)
>   at 
> org.apache.flink.test.streaming.api.StreamingOperatorsITCase.after(StreamingOperatorsITCase.java:63)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2859: [FLINK-5147] Prevent NPE in LocalFS#delete()

2016-12-09 Thread rmetzger
Github user rmetzger commented on the issue:

https://github.com/apache/flink/pull/2859
  
+1 to merge




[jira] [Commented] (FLINK-5299) DataStream support for arrays as keys

2016-12-09 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735870#comment-15735870
 ] 

Chesnay Schepler commented on FLINK-5299:
-

Well, I missed a small thing. We also have to modify the KeySelectors in 
org.apache.flink.streaming.util.keys.KeySelectorUtil to call 
"extractHashStableKeys" instead of "extractKeys".

> DataStream support for arrays as keys
> -
>
> Key: FLINK-5299
> URL: https://issues.apache.org/jira/browse/FLINK-5299
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>  Labels: star
>
> It is currently not possible to use an array as a key in the DataStream API, 
> as it relies on hash codes, which aren't stable for arrays.
> One way to implement this would be to check for the key type and inject a 
> KeySelector that calls "Arrays.hashCode(values)".





[jira] [Commented] (FLINK-5299) DataStream support for arrays as keys

2016-12-09 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735864#comment-15735864
 ] 

Chesnay Schepler commented on FLINK-5299:
-

That's a good question. I've looked into it a bit more and now think that my 
original idea was actually kinda bad :/ (since I pretty much only accounted for 
the case of a single key which is some array).

What we could maybe do is the following (note that this is purely theoretical, I 
haven't tried anything out):

Add a new method to the TypeComparator that extracts a hash-stable key:

{code}
public int extractHashStableKeys(Object record, Object[] target, int index) {
    // default implementation delegates, so existing comparators keep working
    return extractKeys(record, target, index);
}
{code}

The TupleComparator implementation would look like this (it is identical to 
extractKeys, except that it recursively calls extractHashStableKeys):
{code}
@Override
public int extractHashStableKeys(Object record, Object[] target, int index) {
    int localIndex = index;
    for (int i = 0; i < comparators.length; i++) {
        localIndex += comparators[i].extractHashStableKeys(
            ((Tuple) record).getField(keyPositions[i]), target, localIndex);
    }
    return localIndex - index;
}
{code}

Finally, we add the following method to the primitive array comparator:

{code}
@Override
public int extractHashStableKeys(Object record, Object[] target, int index) {
    // cast to the comparator's concrete array type (e.g. int[] here), since
    // Arrays.hashCode is only defined for array types, not for Object
    target[index] = Arrays.hashCode((int[]) record);
    return 1;
}
{code}

There you go.
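The motivation for a content-based hash can be seen in plain Java; this standalone snippet is not Flink code:

```java
import java.util.Arrays;

public class ArrayKeyHashes {
    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {1, 2, 3};
        // Object.hashCode() on arrays is identity-based, so two equal-valued
        // arrays (e.g. created on different TaskManagers) almost always hash
        // to different values.
        System.out.println(a.hashCode() == b.hashCode());
        // Arrays.hashCode() is computed from the contents and is stable.
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // prints "true"
    }
}
```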

> DataStream support for arrays as keys
> -
>
> Key: FLINK-5299
> URL: https://issues.apache.org/jira/browse/FLINK-5299
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>  Labels: star
>
> It is currently not possible to use an array as a key in the DataStream API, 
> as it relies on hash codes, which aren't stable for arrays.
> One way to implement this would be to check for the key type and inject a 
> KeySelector that calls "Arrays.hashCode(values)".





[GitHub] flink pull request #2890: [hotfix] properly encapsulate the original excepti...

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2890




[GitHub] flink pull request #2964: [backport] [FLINK-5285] Abort checkpoint only once...

2016-12-09 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2964




[jira] [Commented] (FLINK-5158) Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735774#comment-15735774
 ] 

ASF GitHub Bot commented on FLINK-5158:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2873


> Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator
> 
>
> Key: FLINK-5158
> URL: https://issues.apache.org/jira/browse/FLINK-5158
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> The checkpoint coordinator does not properly handle exceptions when trying to
> store completed checkpoints. As a result, completed checkpoints are not
> properly cleaned up and, even worse, the {{CheckpointCoordinator}} might get
> stuck and stop triggering checkpoints.





[jira] [Commented] (FLINK-5193) Recovering all jobs fails completely if a single recovery fails

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735778#comment-15735778
 ] 

ASF GitHub Bot commented on FLINK-5193:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2910


> Recovering all jobs fails completely if a single recovery fails
> ---
>
> Key: FLINK-5193
> URL: https://issues.apache.org/jira/browse/FLINK-5193
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> In the HA case, where the {{JobManager}} tries to recover all submitted job 
> graphs, e.g. when regaining leadership, it can happen that none of the 
> submitted jobs are recovered if a single recovery fails. Instead of failing 
> the complete recovery procedure, the {{JobManager}} should still try to 
> recover the remaining (non-failing) jobs and print a proper error message for 
> the failed recoveries.
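A per-job fault-isolation loop of the kind described above can be sketched as follows; `recoverJob` is a hypothetical callback standing in for the actual JobManager recovery logic, not Flink's real code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: a failure in one job's recovery is logged and collected instead of
// aborting the whole recovery procedure.
public class RecoveryLoop {

    /** Recovers each job in turn; returns the ids of jobs whose recovery failed. */
    public static List<String> recoverAll(List<String> jobIds, Consumer<String> recoverJob) {
        List<String> failed = new ArrayList<>();
        for (String jobId : jobIds) {
            try {
                recoverJob.accept(jobId);
            } catch (Exception e) {
                // Keep going: report the failure and continue with the remaining jobs.
                System.err.println("Could not recover job " + jobId + ": " + e.getMessage());
                failed.add(jobId);
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<String> failed = recoverAll(
            List.of("job-a", "job-b", "job-c"),
            jobId -> { if (jobId.equals("job-b")) throw new RuntimeException("corrupt job graph"); });
        System.out.println(failed); // prints [job-b]
    }
}
```

Each failed recovery is logged and collected, so a single corrupt job graph no longer aborts the recovery of the remaining jobs.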





[GitHub] flink issue #2977: [FLINK-5084] Replace Java Table API integration tests by ...

2016-12-09 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/2977
  
Thanks for working on this @mtunique!

I would like to suggest the following. All test classes in 
`./test/java/org/apache/flink/api/java/batch/table` should be split into a 
validation and a plan test class. All tests should be implemented in Scala and 
moved to the `./test/scala/` directory.

- The validation tests contain the test methods which check for failing 
validation. These tests are all unit tests and are named like 
`CalcValidationTest`.
- The plan tests compare the logical plans of the string Table API and the 
expression Table API. These are also unit tests and named like `CalcPlanTest`. 
I would not merge these tests with the execution tests of the expression Table 
API but keep them separate.

The file `./test/java/org/apache/flink/api/java/batch/ExplainTest` can be 
removed. It checks exactly the same as the Scala version of this test.

What do you think?

Best, Fabian




[jira] [Commented] (FLINK-5084) Replace Java Table API integration tests by unit tests

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735776#comment-15735776
 ] 

ASF GitHub Bot commented on FLINK-5084:
---



> Replace Java Table API integration tests by unit tests
> --
>
> Key: FLINK-5084
> URL: https://issues.apache.org/jira/browse/FLINK-5084
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Priority: Minor
>
> The Java Table API is a wrapper on top of the Scala Table API. 
> Instead of operating directly with Expressions like the Scala API, the Java 
> API accepts a String parameter which is parsed into Expressions.
> We could therefore replace the Java Table API ITCases by tests that check 
> that the parsing step produces a valid logical plan.
> This could be done by creating two {{Table}} objects for an identical query 
> once with the Scala Expression API and one with the Java String API and 
> comparing the logical plans of both {{Table}} objects. Basically something 
> like the following:
> {code}
> val ds1 = CollectionDataSets.getSmall3TupleDataSet(env).toTable(tEnv, 'a, 'b, 
> 'c)
> val ds2 = CollectionDataSets.get5TupleDataSet(env).toTable(tEnv, 'd, 'e, 'f, 
> 'g, 'h)
> val joinT1 = ds1.join(ds2).where('b === 'e).select('c, 'g)
> val joinT2 = ds1.join(ds2).where("b = e").select("c, g")
> val lPlan1 = joinT1.logicalPlan
> val lPlan2 = joinT2.logicalPlan
> Assert.assertEquals("Logical Plans do not match", lPlan1, lPlan2)
> {code}





[jira] [Closed] (FLINK-5158) Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator

2016-12-09 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-5158.

Resolution: Fixed

Fixed in 1.2.0 via 0c42d258e9d9d30e7d0f1487ef0ac8b90fa4
Fixed in 1.1.4 via 4b734d7b8726200e5293c32f2cb9e8c77db4d378

> Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator
> 
>
> Key: FLINK-5158
> URL: https://issues.apache.org/jira/browse/FLINK-5158
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> The checkpoint coordinator does not properly handle exceptions when trying to
> store completed checkpoints. As a result, completed checkpoints are not
> properly cleaned up and, even worse, the {{CheckpointCoordinator}} might get
> stuck and stop triggering checkpoints.





[jira] [Closed] (FLINK-5278) Improve Task and checkpoint logging

2016-12-09 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-5278.

Resolution: Fixed

Fixed in 1.2.0 via ea7080712f2dcbdf125b806007c80aa3d120f30a
Fixed in 1.1.4 via b046038ae11f7662b6d788c1f005a9a61a45393b

> Improve Task and checkpoint logging 
> 
>
> Key: FLINK-5278
> URL: https://issues.apache.org/jira/browse/FLINK-5278
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing, TaskManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> The logging of task and checkpoint logic could be improved to contain more 
> information relevant for debugging.





[jira] [Closed] (FLINK-5285) CancelCheckpointMarker flood when using at least once mode

2016-12-09 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-5285.

Resolution: Fixed

Fixed in 1.2.0 via d3f19a5bead1d0709da733b75d729afa9341c250
Fixed in 1.1.4 via afaa27e9faeb0352a49f30de90e719572caa97c5

> CancelCheckpointMarker flood when using at least once mode
> --
>
> Key: FLINK-5285
> URL: https://issues.apache.org/jira/browse/FLINK-5285
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> When using at-least-once mode ({{BarrierTracker}}), an interleaved arrival of
> cancellation barriers at the {{BarrierTracker}} from two consecutive
> checkpoints can trigger a flood of {{CancelCheckpointMarkers}}.
> The following sequence is problematic:
> {code}
> Cancel(1, 0),
> Cancel(2, 0),
> Cancel(1, 1),
> Cancel(2, 1),
> Cancel(1, 2),
> Cancel(2, 2)
> {code}
> with {{Cancel(checkpointId, channelId)}}
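The dedup idea behind the fix ("abort a checkpoint only once") can be sketched without any Flink classes; the real {{BarrierTracker}} additionally bounds the number of remembered checkpoint ids, which this illustration omits:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: forward a CancelCheckpointMarker downstream only the first time a
// given checkpoint id is cancelled, so interleaved cancellation barriers from
// several input channels do not multiply into a marker flood.
public class CancelDedup {
    private final Set<Long> abortedCheckpoints = new HashSet<>();
    private int forwardedMarkers = 0;

    /** Returns true if a marker was forwarded (first cancellation of this id). */
    public boolean onCancelBarrier(long checkpointId) {
        if (abortedCheckpoints.add(checkpointId)) {
            forwardedMarkers++; // forward exactly one marker per checkpoint
            return true;
        }
        return false; // already aborted: swallow the barrier
    }

    public int getForwardedMarkers() { return forwardedMarkers; }

    public static void main(String[] args) {
        CancelDedup tracker = new CancelDedup();
        // The interleaved sequence from the issue: Cancel(1,0), Cancel(2,0), ...
        long[] checkpointIds = {1, 2, 1, 2, 1, 2};
        for (long id : checkpointIds) tracker.onCancelBarrier(id);
        System.out.println(tracker.getForwardedMarkers()); // prints 2
    }
}
```

With this dedup, the six cancellation barriers above produce only two downstream markers, one per checkpoint, instead of six.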





[jira] [Commented] (FLINK-5285) CancelCheckpointMarker flood when using at least once mode

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735781#comment-15735781
 ] 

ASF GitHub Bot commented on FLINK-5285:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2964


> CancelCheckpointMarker flood when using at least once mode
> --
>
> Key: FLINK-5285
> URL: https://issues.apache.org/jira/browse/FLINK-5285
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> When using at-least-once mode ({{BarrierTracker}}), an interleaved arrival of
> cancellation barriers at the {{BarrierTracker}} from two consecutive
> checkpoints can trigger a flood of {{CancelCheckpointMarkers}}.
> The following sequence is problematic:
> {code}
> Cancel(1, 0),
> Cancel(2, 0),
> Cancel(1, 1),
> Cancel(2, 1),
> Cancel(1, 2),
> Cancel(2, 2)
> {code}
> with {{Cancel(checkpointId, channelId)}}





[jira] [Closed] (FLINK-5193) Recovering all jobs fails completely if a single recovery fails

2016-12-09 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-5193.

Resolution: Fixed

Fixed in 1.2.0 via add3765d1626a04fb98b8f36cb725eb32806d8b6
Fixed in 1.1.4 via d314bc5235e2573ff77f45d327bc62f521063b71

> Recovering all jobs fails completely if a single recovery fails
> ---
>
> Key: FLINK-5193
> URL: https://issues.apache.org/jira/browse/FLINK-5193
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> In the HA case, where the {{JobManager}} tries to recover all submitted job 
> graphs, e.g. when regaining leadership, it can happen that none of the 
> submitted jobs are recovered if a single recovery fails. Instead of failing 
> the complete recovery procedure, the {{JobManager}} should still try to 
> recover the remaining (non-failing) jobs and print a proper error message for 
> the failed recoveries.





[GitHub] flink pull request #2910: [backport] [FLINK-5193] [jm] Harden job recovery i...

2016-12-09 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2910




[GitHub] flink pull request #2873: [backport] [FLINK-5158] [ckPtCoord] Handle excepti...

2016-12-09 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/2873




[jira] [Created] (FLINK-5310) Harden the RocksDB JNI library loading

2016-12-09 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-5310:
---

 Summary: Harden the RocksDB JNI library loading
 Key: FLINK-5310
 URL: https://issues.apache.org/jira/browse/FLINK-5310
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.2.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.2.0


Currently, the RocksDB JNI library is automatically and implicitly loaded by 
RocksDB upon initialization. If the loading fails, there is little information 
about why the loading failed.

We should explicitly load the JNI library with retries and log better error 
information.
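A retry wrapper for explicit library loading might look like the following sketch; the `LoadAction` interface, attempt count, and messages are illustrative, not Flink's actual loader:

```java
// Sketch of "load explicitly, retry, and log useful errors" for a JNI library.
// A real caller would wrap e.g. System.loadLibrary("rocksdbjni") in the action;
// the simulated failure in main() is for demonstration only.
public class RetryingLoader {

    /** A loading action; declared to throw Throwable because JNI link failures
     *  surface as UnsatisfiedLinkError, which is an Error, not an Exception. */
    interface LoadAction<T> {
        T load() throws Throwable;
    }

    public static <T> T loadWithRetries(LoadAction<T> action, int attempts) {
        Throwable last = null;
        for (int i = 1; i <= attempts; i++) {
            try {
                return action.load();
            } catch (Throwable t) {
                last = t;
                // Log which attempt failed and why, instead of failing silently.
                System.err.println("Load attempt " + i + "/" + attempts + " failed: " + t);
            }
        }
        throw new IllegalStateException("Could not load library after " + attempts + " attempts", last);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulate a library that only links successfully on the third attempt.
        String handle = loadWithRetries(() -> {
            if (++calls[0] < 3) throw new UnsatisfiedLinkError("no rocksdbjni in java.library.path");
            return "loaded";
        }, 5);
        System.out.println(handle + " after " + calls[0] + " attempts"); // prints loaded after 3 attempts
    }
}
```

The chained cause on the final failure preserves the original linker error, which addresses the "little information about why the loading failed" complaint.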





[jira] [Commented] (FLINK-5285) CancelCheckpointMarker flood when using at least once mode

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735761#comment-15735761
 ] 

ASF GitHub Bot commented on FLINK-5285:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2963


> CancelCheckpointMarker flood when using at least once mode
> --
>
> Key: FLINK-5285
> URL: https://issues.apache.org/jira/browse/FLINK-5285
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> When using at-least-once mode ({{BarrierTracker}}), an interleaved arrival of
> cancellation barriers at the {{BarrierTracker}} from two consecutive
> checkpoints can trigger a flood of {{CancelCheckpointMarkers}}.
> The following sequence is problematic:
> {code}
> Cancel(1, 0),
> Cancel(2, 0),
> Cancel(1, 1),
> Cancel(2, 1),
> Cancel(1, 2),
> Cancel(2, 2)
> {code}
> with {{Cancel(checkpointId, channelId)}}





[GitHub] flink pull request #2872: [FLINK-5158] [ckPtCoord] Handle exceptions from Co...

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2872




[GitHub] flink pull request #2963: [FLINK-5285] Abort checkpoint only once in Barrier...

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2963




[jira] [Commented] (FLINK-5193) Recovering all jobs fails completely if a single recovery fails

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735762#comment-15735762
 ] 

ASF GitHub Bot commented on FLINK-5193:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2909


> Recovering all jobs fails completely if a single recovery fails
> ---
>
> Key: FLINK-5193
> URL: https://issues.apache.org/jira/browse/FLINK-5193
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> In the HA case, where the {{JobManager}} tries to recover all submitted job 
> graphs, e.g. when regaining leadership, it can happen that none of the 
> submitted jobs are recovered if a single recovery fails. Instead of failing 
> the complete recovery procedure, the {{JobManager}} should still try to 
> recover the remaining (non-failing) jobs and print a proper error message for 
> the failed recoveries.





[jira] [Commented] (FLINK-5278) Improve Task and checkpoint logging

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735760#comment-15735760
 ] 

ASF GitHub Bot commented on FLINK-5278:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2959


> Improve Task and checkpoint logging 
> 
>
> Key: FLINK-5278
> URL: https://issues.apache.org/jira/browse/FLINK-5278
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing, TaskManager
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.2.0, 1.1.4
>
>
> The logging of task and checkpoint logic could be improved to contain more 
> information relevant for debugging.





[jira] [Commented] (FLINK-5158) Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735763#comment-15735763
 ] 

ASF GitHub Bot commented on FLINK-5158:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2872


> Handle ZooKeeperCompletedCheckpointStore exceptions in CheckpointCoordinator
> 
>
> Key: FLINK-5158
> URL: https://issues.apache.org/jira/browse/FLINK-5158
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
> Fix For: 1.2.0, 1.1.4
>
>
> The checkpoint coordinator does not properly handle exceptions when trying to
> store completed checkpoints. As a result, completed checkpoints are not
> properly cleaned up and, even worse, the {{CheckpointCoordinator}} might get
> stuck and stop triggering checkpoints.





[jira] [Created] (FLINK-5309) documentation links on the home page point to 1.2-SNAPSHOT

2016-12-09 Thread Nico Kruber (JIRA)
Nico Kruber created FLINK-5309:
--

 Summary: documentation links on the home page point to 1.2-SNAPSHOT
 Key: FLINK-5309
 URL: https://issues.apache.org/jira/browse/FLINK-5309
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Nico Kruber


The main website at https://flink.apache.org/ has several links to the
documentation, but despite advertising a stable release download, all of those
links point to the 1.2 branch.
They should point to the stable version's documentation instead.





[GitHub] flink pull request #2909: [FLINK-5193] [jm] Harden job recovery in case of r...

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2909




[GitHub] flink pull request #2959: [FLINK-5278] Improve task and checkpoint related l...

2016-12-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2959




[jira] [Commented] (FLINK-3133) Introduce collect()/count()/print() methods in DataStream API

2016-12-09 Thread Alexander Shoshin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735742#comment-15735742
 ] 

Alexander Shoshin commented on FLINK-3133:
--

Sorry, I wanted to take this issue but I am not able to work on it at the moment. 
Nevertheless, I am sure our discussion will be very helpful for the next 
assignee.

> Introduce collect()/count()/print() methods in DataStream API
> -
>
> Key: FLINK-3133
> URL: https://issues.apache.org/jira/browse/FLINK-3133
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API, Streaming
>Affects Versions: 0.10.0, 1.0.0, 0.10.1
>Reporter: Maximilian Michels
> Fix For: 1.0.0
>
>
> The DataSet API's methods {{collect()}}, {{count()}}, and {{print()}} should 
> be mirrored to the DataStream API. 
> The semantics of the calls are different. We need to be able to sample parts 
> of a stream, e.g. by supplying a time period in the arguments to the methods. 
> Users should use the {{JobClient}} to retrieve the results.
> {code:java}
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> DataStream streamData = env.addSource(..).map(..);
> JobClient jobClient = env.executeWithControl();
> Iterable sampled = jobClient.sampleStream(streamData, 
> Time.seconds(5));
> {code}





[GitHub] flink pull request #2968: [FLINK-5187] [core] Create analog of Row and RowTy...

2016-12-09 Thread tonycox
Github user tonycox commented on a diff in the pull request:

https://github.com/apache/flink/pull/2968#discussion_r91748535
  
--- Diff: flink-core/src/main/java/org/apache/flink/types/Row.java ---
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.types;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+
+import java.io.Serializable;
+import java.util.Arrays;
+
+/**
+ * A Row has a variable length and contains a set of fields, which may all be of
+ * different types. Because Row is not strongly typed, Flink's type extraction
+ * mechanism cannot extract the correct field types. Therefore, users should
+ * manually provide Flink with the type information by creating a
+ * {@link RowTypeInfo}.
+ *
+ * The fields in the Row may be accessed by position (zero-based) via
+ * {@link #getField(int)} and set via {@link #setField(int, Object)}.
+ *
+ * Row is in principle serializable. However, it may contain non-serializable
+ * fields, in which case serialization will fail.
+ */
+@PublicEvolving
+public class Row implements Serializable {
+
+   private static final long serialVersionUID = 1L;
+
+   /** The array to store actual values. */
+   private final Object[] fields;
+
+   /**
+* Creates a new Row instance.
+* @param arity The number of fields in the Row.
+*/
+   public Row(int arity) {
+   this.fields = new Object[arity];
+   }
+
+   /**
+* Gets the number of fields in the Row.
+* @return The number of fields in the Row.
+*/
+   public int getArity() {
+   return fields.length;
+   }
+
+   /**
+* Gets the field at the specified position.
+* @param pos The position of the field, 0-based.
+* @return The field at the specified position.
+* @throws IndexOutOfBoundsException Thrown if the position is negative, or equal to or larger than the number of fields.
+*/
+   public Object getField(int pos) {
+   return fields[pos];
+   }
+
+   /**
+* Sets the field at the specified position.
+*
+* @param pos The position of the field, 0-based.
+* @param value The value to be assigned to the field at the specified position.
+* @throws IndexOutOfBoundsException Thrown if the position is negative, or equal to or larger than the number of fields.
+*/
+   public void setField(int pos, Object value) {
+   fields[pos] = value;
+   }
+
+   @Override
+   public String toString() {
+   return Arrays.deepToString(fields);
--- End diff --

What do you think about removing all '[' and ']' from the string?
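For context, a small standalone demo of what `Arrays.deepToString` produces for a typical field array, and what stripping the brackets as suggested would yield; `Row` itself is not needed:

```java
import java.util.Arrays;

// Demo: Arrays.deepToString keeps nested brackets, including those of nested
// arrays; the suggestion above would strip them for a flatter Row rendering.
// Purely illustrative.
public class RowToStringDemo {
    public static void main(String[] args) {
        Object[] fields = {1, "hello", new int[]{2, 3}};
        String deep = Arrays.deepToString(fields);
        System.out.println(deep); // prints [1, hello, [2, 3]]

        // Naively stripping the brackets also flattens the nested array,
        // losing the field boundaries.
        System.out.println(deep.replace("[", "").replace("]", "")); // prints 1, hello, 2, 3
    }
}
```

Note that stripping brackets makes a nested array indistinguishable from additional top-level fields, which may be an argument for keeping them.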




[jira] [Commented] (FLINK-5187) Create analog of Row in core

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735722#comment-15735722
 ] 

ASF GitHub Bot commented on FLINK-5187:
---



> Create analog of Row in core
> 
>
> Key: FLINK-5187
> URL: https://issues.apache.org/jira/browse/FLINK-5187
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core, Table API & SQL
>Reporter: Anton Solovev
>Assignee: Jark Wu
>






[jira] [Commented] (FLINK-5299) DataStream support for arrays as keys

2016-12-09 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735675#comment-15735675
 ] 

Greg Hogan commented on FLINK-5299:
---

How would the {{KeySelector}} be typed for arrays?

> DataStream support for arrays as keys
> -
>
> Key: FLINK-5299
> URL: https://issues.apache.org/jira/browse/FLINK-5299
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>  Labels: star
>
> It is currently not possible to use an array as a key in the DataStream API,
> as it relies on hash codes, which aren't stable for arrays.
> One way to implement this would be to check for the key type and inject a
> KeySelector that calls "Arrays.hashCode(values)".
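A sketch of that workaround; `KeySelector` below is a minimal stand-in for Flink's `org.apache.flink.api.java.functions.KeySelector` so the example is self-contained and runnable:

```java
import java.util.Arrays;

// Sketch of the workaround described above: inject a key selector that derives
// a stable, content-based key for an array via Arrays.hashCode, since the
// array's own hashCode() is identity-based and differs between equal arrays.
public class ArrayKeyDemo {

    // Minimal stand-in for Flink's KeySelector interface.
    interface KeySelector<IN, KEY> {
        KEY getKey(IN value);
    }

    // One possible answer to the typing question above: for int[] keys the
    // selector is typed KeySelector<int[], Integer>.
    static final KeySelector<int[], Integer> INT_ARRAY_KEY = Arrays::hashCode;

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {1, 2, 3};
        // Content-equal arrays map to the same key.
        System.out.println(INT_ARRAY_KEY.getKey(a).equals(INT_ARRAY_KEY.getKey(b))); // prints true
    }
}
```

A caveat worth noting: `Arrays.hashCode` collisions would key-group unequal arrays together, so a production implementation would still need content-based equality, not just the hash.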





[jira] [Commented] (FLINK-5308) download links to previous releases are incomplete

2016-12-09 Thread Nico Kruber (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735659#comment-15735659
 ] 

Nico Kruber commented on FLINK-5308:


I was thinking about this too: only linking them from the newest dot-release. 
Let's hear what the others have to say...

> download links to previous releases are incomplete
> --
>
> Key: FLINK-5308
> URL: https://issues.apache.org/jira/browse/FLINK-5308
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> The list of all releases under 
> https://flink.apache.org/downloads.html#all-releases
> does not contain several previous releases.





[jira] [Commented] (FLINK-5299) DataStream support for arrays as keys

2016-12-09 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735652#comment-15735652
 ] 

Chesnay Schepler commented on FLINK-5299:
-

Do you mean that instead of "return Arrays.hashCode(values);" we use "return 
Integer.valueOf(Arrays.hashCode(values));"?

> DataStream support for arrays as keys
> -
>
> Key: FLINK-5299
> URL: https://issues.apache.org/jira/browse/FLINK-5299
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>  Labels: star
>
> It is currently not possible to use an array as a key in the DataStream API, 
> as it relies on hash codes, which aren't stable for arrays.
> One way to implement this would be to check for the key type and inject a 
> KeySelector that calls "Arrays.hashCode(values)".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2961: [FLINK-5266] [table] eagerly project unused fields...

2016-12-09 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2961#discussion_r91738368
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/table.scala
 ---
@@ -881,24 +883,21 @@ class GroupWindowedTable(
 * }}}
 */
   def select(fields: Expression*): Table = {
--- End diff --

At the moment there is no way to specify watermarks inside of a Table API 
or SQL query. This can only be done on a DataStream before it is converted into 
a Table. Therefore, watermarks and timestamps are already assigned before the 
first Table or SQL operator can remove anything. In case of a TableSource which 
assigns timestamps, the TableSource needs to take care that the assignment 
happens before a pushed-down projection is applied.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5266) Eagerly project unused fields when selecting aggregation fields

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735625#comment-15735625
 ] 

ASF GitHub Bot commented on FLINK-5266:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/2961#discussion_r91738368
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/table.scala
 ---
@@ -881,24 +883,21 @@ class GroupWindowedTable(
 * }}}
 */
   def select(fields: Expression*): Table = {
--- End diff --

At the moment there is no way to specify watermarks inside of a Table API 
or SQL query. This can only be done on a DataStream before it is converted into 
a Table. Therefore, watermarks and timestamps are already assigned before the 
first Table or SQL operator can remove anything. In case of a TableSource which 
assigns timestamps, the TableSource needs to take care that the assignment 
happens before a pushed-down projection is applied.


> Eagerly project unused fields when selecting aggregation fields
> ---
>
> Key: FLINK-5266
> URL: https://issues.apache.org/jira/browse/FLINK-5266
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> When we call a table's {{select}} method and it contains some aggregations, 
> we project fields after the aggregation. It would be better to project 
> unused fields before the aggregation, which would furthermore leave the 
> opportunity to push the projection into the scan.
> For example, the current logical plan of a simple query:
> {code}
> table.select('a.sum as 's, 'a.max)
> {code}
> is
> {code}
> LogicalProject(s=[$0], TMP_2=[$1])
>   LogicalAggregate(group=[{}], TMP_0=[SUM($5)], TMP_1=[MAX($5)])
> LogicalTableScan(table=[[supplier]])
> {code}
> It would be better if we could project unused fields right after the scan, 
> so that the plan looks like this:
> {code}
> LogicalProject(s=[$0], EXPR$1=[$0])
>   LogicalAggregate(group=[{}], EXPR$1=[SUM($0)])
> LogicalProject(a=[$5])
>   LogicalTableScan(table=[[supplier]])
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5300) FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete non-empty directory

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735612#comment-15735612
 ] 

ASF GitHub Bot commented on FLINK-5300:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2971
  
Thanks for the review @uce. I think we can include this also in a possible 
1.1.5 later. Who knows which other issues will still come up.


> FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete 
> non-empty directory
> -
>
> Key: FLINK-5300
> URL: https://issues.apache.org/jira/browse/FLINK-5300
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>
> Flink's behaviour of deleting {{FileStateHandles}} and closing 
> {{FsCheckpointStateOutputStream}} always triggers a delete operation on the 
> parent directory. Often this call will fail because the directory still 
> contains some other files.
> A user reported that the SRE of their Hadoop cluster noticed this behaviour 
> in the logs. It might be more system friendly if we first checked whether the 
> directory is empty or not. This would prevent many error messages from 
> appearing in the Hadoop logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2971: [backport] [FLINK-5300] Add more gentle file deletion pro...

2016-12-09 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2971
  
Thanks for the review @uce. I think we can include this also in a possible 
1.1.5 later. Who knows which other issues will still come up.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5300) FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete non-empty directory

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735607#comment-15735607
 ] 

ASF GitHub Bot commented on FLINK-5300:
---

Github user uce commented on the issue:

https://github.com/apache/flink/pull/2971
  
Looks good to be merged imo. I just kicked off RC2... If you would like to 
have this in 1.1.4, I can re-trigger the build.


> FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete 
> non-empty directory
> -
>
> Key: FLINK-5300
> URL: https://issues.apache.org/jira/browse/FLINK-5300
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>
> Flink's behaviour of deleting {{FileStateHandles}} and closing 
> {{FsCheckpointStateOutputStream}} always triggers a delete operation on the 
> parent directory. Often this call will fail because the directory still 
> contains some other files.
> A user reported that the SRE of their Hadoop cluster noticed this behaviour 
> in the logs. It might be more system friendly if we first checked whether the 
> directory is empty or not. This would prevent many error messages from 
> appearing in the Hadoop logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2971: [backport] [FLINK-5300] Add more gentle file deletion pro...

2016-12-09 Thread uce
Github user uce commented on the issue:

https://github.com/apache/flink/pull/2971
  
Looks good to be merged imo. I just kicked off RC2... If you would like to 
have this in 1.1.4, I can re-trigger the build.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #2971: [backport] [FLINK-5300] Add more gentle file deletion pro...

2016-12-09 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2971
  
I updated the PR wrt the results of the discussion in #2970.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5300) FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete non-empty directory

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735586#comment-15735586
 ] 

ASF GitHub Bot commented on FLINK-5300:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2971
  
I updated the PR wrt the results of the discussion in #2970.


> FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete 
> non-empty directory
> -
>
> Key: FLINK-5300
> URL: https://issues.apache.org/jira/browse/FLINK-5300
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>
> Flink's behaviour of deleting {{FileStateHandles}} and closing 
> {{FsCheckpointStateOutputStream}} always triggers a delete operation on the 
> parent directory. Often this call will fail because the directory still 
> contains some other files.
> A user reported that the SRE of their Hadoop cluster noticed this behaviour 
> in the logs. It might be more system friendly if we first checked whether the 
> directory is empty or not. This would prevent many error messages from 
> appearing in the Hadoop logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5299) DataStream support for arrays as keys

2016-12-09 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735576#comment-15735576
 ] 

Greg Hogan commented on FLINK-5299:
---

Could the {{KeySelector}} return {{Integer}} which has a self-referencing 
{{hashCode()}}?

> DataStream support for arrays as keys
> -
>
> Key: FLINK-5299
> URL: https://issues.apache.org/jira/browse/FLINK-5299
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Chesnay Schepler
>  Labels: star
>
> It is currently not possible to use an array as a key in the DataStream API, 
> as it relies on hash codes, which aren't stable for arrays.
> One way to implement this would be to check for the key type and inject a 
> KeySelector that calls "Arrays.hashCode(values)".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91730804
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/jar/CheckpointedStreamingProgram.java
 ---
@@ -72,34 +74,27 @@ public void run(SourceContext ctx) throws 
Exception {
public void cancel() {
running = false;
}
-
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return null;
-   }
-
-   @Override
-   public void restoreState(Integer state) {
-
-   }
}
 
-   public static class StatefulMapper implements MapFunction, Checkpointed, CheckpointListener {
+   public static class StatefulMapper implements MapFunction, ListCheckpointed, CheckpointListener {
 
private String someState;
private boolean atLeastOneSnapshotComplete = false;
private boolean restored = false;
 
@Override
-   public StatefulMapper snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return this;
+   public List snapshotState(long checkpointId, 
long timestamp) throws Exception {
+   return Collections.singletonList(this);
}
 
@Override
-   public void restoreState(StatefulMapper state) {
-   restored = true;
-   this.someState = state.someState;
-   this.atLeastOneSnapshotComplete = 
state.atLeastOneSnapshotComplete;
+   public void restoreState(List state) throws 
Exception {
+   if (!state.isEmpty()) {
--- End diff --

Alright, I figured out why we can't fail here immediately. It still seems 
odd though that we do not explicitly differentiate between a call to restore 
before any state was snapshotted and a broken snapshotting that doesn't return 
a state, although this applies to all other tests as well.

If the test verifies that we are getting the state that we snapshotted, we 
should also have a failure condition in case this does not happen; currently 
we simply enter undefined territory.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
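For context, the migration discussed in this review moves the test functions from the old Checkpointed interface to ListCheckpointed. The pattern can be sketched with a self-contained stand-in (the real `ListCheckpointed<T>` lives in Flink's streaming API; the interface, class, and field names below are invented for illustration), including the stricter empty-state check suggested in the review:

```java
import java.util.Collections;
import java.util.List;

public class ListCheckpointedSketch {
    // Stand-in for Flink's ListCheckpointed<T>, reproduced here only to keep
    // the example self-contained; not the real Flink type.
    interface ListCheckpointed<T> {
        List<T> snapshotState(long checkpointId, long timestamp) throws Exception;
        void restoreState(List<T> state) throws Exception;
    }

    static class StatefulCounter implements ListCheckpointed<Long> {
        long count = 42L;

        @Override
        public List<Long> snapshotState(long checkpointId, long timestamp) {
            // New interface: state is handed back as a list of pieces.
            return Collections.singletonList(count);
        }

        @Override
        public void restoreState(List<Long> state) {
            // Per the review comment: fail loudly on empty state instead of
            // silently entering "undefined territory".
            if (state.isEmpty()) {
                throw new IllegalStateException("restore called with empty state");
            }
            count = state.get(0);
        }
    }

    public static void main(String[] args) throws Exception {
        StatefulCounter original = new StatefulCounter();
        List<Long> snapshot = original.snapshotState(1L, 0L);

        StatefulCounter restored = new StatefulCounter();
        restored.count = 0L;
        restored.restoreState(snapshot);
        System.out.println(restored.count); // prints 42
    }
}
```

The list-based signature is what allows state to be split and redistributed on rescaling, which the old single-object `snapshotState`/`restoreState` pair could not express.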


[jira] [Commented] (FLINK-5300) FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete non-empty directory

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735539#comment-15735539
 ] 

ASF GitHub Bot commented on FLINK-5300:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2970
  
I've updated this PR @StephanEwen. Unfortunately, I couldn't use Hadoop's 
`FileSystem#getContentSummary` because it will first request the status for the 
given path, then list all files and directories if the path is a directory. For 
each file it will aggregate the `FileStatus` and then recursively descend into 
each directory. Thus, I think that this method is not faster.

I've refactored the code to contain a method `FileUtils#deletePathIfEmpty` 
to delete the path if it does not contain any files/directories.
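The check-then-delete pattern behind such a helper can be illustrated with `java.nio` (Flink's own `FileUtils` operates on its `FileSystem` abstraction instead; this is only a sketch of the idea, and the method name mirrors the one mentioned above):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class DeletePathIfEmptySketch {
    // Delete the directory only if it contains no entries; otherwise leave it
    // alone, so no spurious "directory not empty" errors land in the logs.
    static boolean deletePathIfEmpty(Path dir) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            if (entries.iterator().hasNext()) {
                return false; // still contains files or subdirectories
            }
        } catch (NoSuchFileException e) {
            return false; // someone else already deleted it
        }
        return Files.deleteIfExists(dir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("checkpoints");
        Path file = Files.createFile(dir.resolve("state-file"));
        System.out.println(deletePathIfEmpty(dir)); // prints false: not empty yet
        Files.delete(file);
        System.out.println(deletePathIfEmpty(dir)); // prints true: now empty
    }
}
```

Note the listing-then-deleting sequence is not atomic, so a concurrent writer can still race the delete; the goal here is only to avoid the common-case error, not to guarantee cleanup.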


> FileStateHandle#discard & FsCheckpointStateOutputStream#close tries to delete 
> non-empty directory
> -
>
> Key: FLINK-5300
> URL: https://issues.apache.org/jira/browse/FLINK-5300
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>
> Flink's behaviour of deleting {{FileStateHandles}} and closing 
> {{FsCheckpointStateOutputStream}} always triggers a delete operation on the 
> parent directory. Often this call will fail because the directory still 
> contains some other files.
> A user reported that the SRE of their Hadoop cluster noticed this behaviour 
> in the logs. It might be more system friendly if we first checked whether the 
> directory is empty or not. This would prevent many error messages from 
> appearing in the Hadoop logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735541#comment-15735541
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91730804
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/jar/CheckpointedStreamingProgram.java
 ---
@@ -72,34 +74,27 @@ public void run(SourceContext ctx) throws 
Exception {
public void cancel() {
running = false;
}
-
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return null;
-   }
-
-   @Override
-   public void restoreState(Integer state) {
-
-   }
}
 
-   public static class StatefulMapper implements MapFunction, Checkpointed, CheckpointListener {
+   public static class StatefulMapper implements MapFunction, ListCheckpointed, CheckpointListener {
 
private String someState;
private boolean atLeastOneSnapshotComplete = false;
private boolean restored = false;
 
@Override
-   public StatefulMapper snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return this;
+   public List snapshotState(long checkpointId, 
long timestamp) throws Exception {
+   return Collections.singletonList(this);
}
 
@Override
-   public void restoreState(StatefulMapper state) {
-   restored = true;
-   this.someState = state.someState;
-   this.atLeastOneSnapshotComplete = 
state.atLeastOneSnapshotComplete;
+   public void restoreState(List state) throws 
Exception {
+   if (!state.isEmpty()) {
--- End diff --

Alright, I figured out why we can't fail here immediately. It still seems 
odd though that we do not explicitly differentiate between a call to restore 
before any state was snapshotted and a broken snapshotting that doesn't return 
a state, although this applies to all other tests as well.

If the test verifies that we are getting the state that we snapshotted, we 
should also have a failure condition in case this does not happen; currently 
we simply enter undefined territory.


> Make all Testing Functions implement CheckpointedFunction Interface.
> 
>
> Key: FLINK-5113
> URL: https://issues.apache.org/jira/browse/FLINK-5113
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
> Fix For: 1.2.0
>
>
> Currently, stateful functions implement the (old) Checkpointed interface.
> This issue aims at porting all these functions to the new 
> CheckpointedFunction interface, so that they can leverage the new 
> capabilities it offers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2970: [FLINK-5300] Add more gentle file deletion procedure

2016-12-09 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/2970
  
I've updated this PR @StephanEwen. Unfortunately, I couldn't use Hadoop's 
`FileSystem#getContentSummary` because it will first request the status for the 
given path, then list all files and directories if the path is a directory. For 
each file it will aggregate the `FileStatus` and then recursively descend into 
each directory. Thus, I think that this method is not faster.

I've refactored the code to contain a method `FileUtils#deletePathIfEmpty` 
to delete the path if it does not contain any files/directories.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5308) download links to previous releases are incomplete

2016-12-09 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735535#comment-15735535
 ] 

Greg Hogan commented on FLINK-5308:
---

While updating the list of releases, should we reorganize these to only link 
once to each docs / javadocs / scaladocs since these are reused across patch 
updates?

> download links to previous releases are incomplete
> --
>
> Key: FLINK-5308
> URL: https://issues.apache.org/jira/browse/FLINK-5308
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> The list of all releases under 
> https://flink.apache.org/downloads.html#all-releases
> does not contain several previous releases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735515#comment-15735515
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91724133
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/jar/CheckpointedStreamingProgram.java
 ---
@@ -72,34 +74,27 @@ public void run(SourceContext ctx) throws 
Exception {
public void cancel() {
running = false;
}
-
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return null;
-   }
-
-   @Override
-   public void restoreState(Integer state) {
-
-   }
}
 
-   public static class StatefulMapper implements MapFunction, Checkpointed, CheckpointListener {
+   public static class StatefulMapper implements MapFunction, ListCheckpointed, CheckpointListener {
 
private String someState;
private boolean atLeastOneSnapshotComplete = false;
private boolean restored = false;
 
@Override
-   public StatefulMapper snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return this;
+   public List snapshotState(long checkpointId, 
long timestamp) throws Exception {
+   return Collections.singletonList(this);
}
 
@Override
-   public void restoreState(StatefulMapper state) {
-   restored = true;
-   this.someState = state.someState;
-   this.atLeastOneSnapshotComplete = 
state.atLeastOneSnapshotComplete;
+   public void restoreState(List state) throws 
Exception {
+   if (!state.isEmpty()) {
--- End diff --

If the state is empty we should fail immediately; currently (I think) this 
would cause us to fail with the RuntimeException saying "Intended failure, to 
trigger restore", which is a bit inaccurate.


> Make all Testing Functions implement CheckpointedFunction Interface.
> 
>
> Key: FLINK-5113
> URL: https://issues.apache.org/jira/browse/FLINK-5113
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
> Fix For: 1.2.0
>
>
> Currently, stateful functions implement the (old) Checkpointed interface.
> This issue aims at porting all these functions to the new 
> CheckpointedFunction interface, so that they can leverage the new 
> capabilities it offers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5206) Flakey PythonPlanBinderTest

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735502#comment-15735502
 ] 

ASF GitHub Bot commented on FLINK-5206:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2973
  
@StephanEwen Thank you for the review, merging.


> Flakey PythonPlanBinderTest
> ---
>
> Key: FLINK-5206
> URL: https://issues.apache.org/jira/browse/FLINK-5206
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.2.0
> Environment: in TravisCI
>Reporter: Nico Kruber
>Assignee: Chesnay Schepler
>  Labels: test-stability
>
> {code:none}
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> Job execution failed.
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:903)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:846)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.io.IOException: Output path '/tmp/flink/result2' could not be 
> initialized. Canceling task...
>   at 
> org.apache.flink.api.common.io.FileOutputFormat.open(FileOutputFormat.java:233)
>   at 
> org.apache.flink.api.java.io.TextOutputFormat.open(TextOutputFormat.java:78)
>   at 
> org.apache.flink.runtime.operators.DataSinkTask.invoke(DataSinkTask.java:178)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:654)
>   at java.lang.Thread.run(Thread.java:745)
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.891 sec 
> <<< FAILURE! - in org.apache.flink.python.api.PythonPlanBinderTest
> testJobWithoutObjectReuse(org.apache.flink.python.api.PythonPlanBinderTest)  
> Time elapsed: 11.53 sec  <<< FAILURE!
> java.lang.AssertionError: Error while calling the test program: Job execution 
> failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> 

[jira] [Commented] (FLINK-5266) Eagerly project unused fields when selecting aggregation fields

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735523#comment-15735523
 ] 

ASF GitHub Bot commented on FLINK-5266:
---

Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/2961#discussion_r91729382
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/table.scala
 ---
@@ -881,24 +883,21 @@ class GroupWindowedTable(
 * }}}
 */
   def select(fields: Expression*): Table = {
--- End diff --

What if the user uses a customized watermark extractor which uses some fields 
from the element? For example, we have an original table source containing 4 
fields: a, b, c, d, and the user used field "a" to extract the timestamp and 
watermark. But in the later query on the table, only "b" and "c" are used. If 
we do a projection on "b" and "c", and the projection is further pushed into 
the table source, we will no longer get field "a", which was used to produce 
the timestamp.


> Eagerly project unused fields when selecting aggregation fields
> ---
>
> Key: FLINK-5266
> URL: https://issues.apache.org/jira/browse/FLINK-5266
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Kurt Young
>Assignee: Kurt Young
>
> When we call a table's {{select}} method and it contains some aggregations, 
> we project fields after the aggregation. It would be better to project 
> unused fields before the aggregation, which would furthermore leave the 
> opportunity to push the projection into the scan.
> For example, the current logical plan of a simple query:
> {code}
> table.select('a.sum as 's, 'a.max)
> {code}
> is
> {code}
> LogicalProject(s=[$0], TMP_2=[$1])
>   LogicalAggregate(group=[{}], TMP_0=[SUM($5)], TMP_1=[MAX($5)])
> LogicalTableScan(table=[[supplier]])
> {code}
> It would be better if we could project unused fields right after the scan, 
> so that the plan looks like this:
> {code}
> LogicalProject(s=[$0], EXPR$1=[$0])
>   LogicalAggregate(group=[{}], EXPR$1=[SUM($0)])
> LogicalProject(a=[$5])
>   LogicalTableScan(table=[[supplier]])
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2961: [FLINK-5266] [table] eagerly project unused fields...

2016-12-09 Thread KurtYoung
Github user KurtYoung commented on a diff in the pull request:

https://github.com/apache/flink/pull/2961#discussion_r91729382
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/table.scala
 ---
@@ -881,24 +883,21 @@ class GroupWindowedTable(
 * }}}
 */
   def select(fields: Expression*): Table = {
--- End diff --

What if the user uses a customized watermark extractor which uses some fields 
from the element? For example, we have an original table source containing 4 
fields: a, b, c, d, and the user used field "a" to extract the timestamp and 
watermark. But in the later query on the table, only "b" and "c" are used. If 
we do a projection on "b" and "c", and the projection is further pushed into 
the table source, we will no longer get field "a", which was used to produce 
the timestamp.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735508#comment-15735508
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91719698
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
 ---
@@ -199,11 +200,13 @@ public Serializable snapshotState(long checkpointId, 
long checkpointTimestamp) t
Assert.fail("Count is different at start end 
end of snapshot.");
}
semaphore.release();
-   return sum;
+   return Collections.singletonList((Serializable) sum);
}
 
@Override
-   public void restoreState(Serializable state) {}
+   public void restoreState(List&lt;Serializable&gt; state) throws 
Exception {
+
--- End diff --

can you remove this empty line?


> Make all Testing Functions implement CheckpointedFunction Interface.
> 
>
> Key: FLINK-5113
> URL: https://issues.apache.org/jira/browse/FLINK-5113
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.2.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
> Fix For: 1.2.0
>
>
> Currently, stateful functions implement the (old) Checkpointed interface.
> This issue aims at porting all these functions to the new 
> CheckpointedFunction interface, so that they can leverage the new 
> capabilities it offers. 
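The shape of the migration can be sketched as follows; the interface below is a stub for illustration and does not reproduce Flink's exact signatures. The old Checkpointed interface snapshots a single value, while ListCheckpointed snapshots a list so that state can be redistributed when the operator is rescaled:

```java
import java.util.Collections;
import java.util.List;

// Stubbed stand-in for Flink's ListCheckpointed<T>, for illustration only:
// state is snapshotted as a list rather than a single value.
interface StubListCheckpointed<T> {
    List<T> snapshotState(long checkpointId, long timestamp) throws Exception;
    void restoreState(List<T> state) throws Exception;
}

class CountingFunction implements StubListCheckpointed<Long> {
    long count;

    @Override
    public List<Long> snapshotState(long checkpointId, long timestamp) {
        return Collections.singletonList(count); // wrap the single value in a list
    }

    @Override
    public void restoreState(List<Long> state) {
        for (Long partial : state) {
            count += partial; // merge partial states after redistribution
        }
    }
}
```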





[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91720508
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
 ---
@@ -199,11 +200,13 @@ public Serializable snapshotState(long checkpointId, 
long checkpointTimestamp) t
Assert.fail("Count is different at start end 
end of snapshot.");
}
semaphore.release();
-   return sum;
+   return Collections.singletonList((Serializable) sum);
--- End diff --

You don't need to cast here; instead use `return 
Collections.singletonList(sum);`
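A minimal sketch of why the cast is unnecessary (names are illustrative): with Java 8 target typing, the type argument of `Collections.singletonList` is inferred from the declared return type, so the value boxes and widens to `Serializable` without an explicit cast.

```java
import java.io.Serializable;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: the type argument of singletonList is inferred from
// the target type List<Serializable>, so no cast is needed.
class TargetTypingSketch {

    static List<Serializable> snapshot(long sum) {
        return Collections.singletonList(sum); // inferred as List<Serializable>
    }

    public static void main(String[] args) {
        System.out.println(snapshot(42L)); // [42]
    }
}
```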




[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735512#comment-15735512
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91719644
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
 ---
@@ -1,4 +1,4 @@
-/**
+/*
--- End diff --

unrelated change







[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91720690
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/AbstractEventTimeWindowCheckpointingITCase.java
 ---
@@ -566,23 +568,25 @@ public void notifyCheckpointComplete(long 
checkpointId) {
numSuccessfulCheckpoints++;
}
 
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) {
-   return numElementsEmitted;
+   public static void reset() {
--- End diff --

please move this method to the bottom of the class again.




[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735514#comment-15735514
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721509
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/StateCheckpointedITCase.java
 ---
@@ -47,7 +47,7 @@
  * A simple test that runs a streaming topology with checkpointing enabled.
  *
  * The test triggers a failure after a while and verifies that, after 
completion, the
- * state defined with either the {@link ValueState} or the {@link 
Checkpointed}
+ * state defined with either the {@link ValueState} or the {@link 
org.apache.flink.streaming.api.checkpoint.ListCheckpointed}
--- End diff --

do not use the fully qualified class name here. (it's not required since 
you import the class anyway ;) )







[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721756
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/StreamCheckpointNotifierITCase.java
 ---
@@ -447,5 +434,23 @@ public void notifyCheckpointComplete(long 
checkpointId) {

GeneratingSourceFunction.numPostFailureNotifications.incrementAndGet();
}
}
+
+   @Override
+   public List snapshotState(long checkpointId, long 
timestamp) throws Exception {
+   if (!hasFailed && count >= failurePos && 
getRuntimeContext().getIndexOfThisSubtask() == 0) {
--- End diff --

please move these methods up again to reduce the diff. The methods are 
identical apart from the signature and return statement; the diff should 
reflect that.




[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91724133
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/classloading/jar/CheckpointedStreamingProgram.java
 ---
@@ -72,34 +74,27 @@ public void run(SourceContext ctx) throws 
Exception {
public void cancel() {
running = false;
}
-
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return null;
-   }
-
-   @Override
-   public void restoreState(Integer state) {
-
-   }
}
 
-   public static class StatefulMapper implements MapFunction&lt;String, String&gt;, Checkpointed&lt;StatefulMapper&gt;, CheckpointListener {
+   public static class StatefulMapper implements MapFunction&lt;String, String&gt;, ListCheckpointed&lt;StatefulMapper&gt;, CheckpointListener {
 
private String someState;
private boolean atLeastOneSnapshotComplete = false;
private boolean restored = false;
 
@Override
-   public StatefulMapper snapshotState(long checkpointId, long 
checkpointTimestamp) throws Exception {
-   return this;
+   public List&lt;StatefulMapper&gt; snapshotState(long checkpointId, 
long timestamp) throws Exception {
+   return Collections.singletonList(this);
}
 
@Override
-   public void restoreState(StatefulMapper state) {
-   restored = true;
-   this.someState = state.someState;
-   this.atLeastOneSnapshotComplete = 
state.atLeastOneSnapshotComplete;
+   public void restoreState(List&lt;StatefulMapper&gt; state) throws 
Exception {
+   if (!state.isEmpty()) {
--- End diff --

If the state is empty we should fail immediately; currently (I think) this 
would cause us to fail with the RuntimeException saying "Intended failure, to 
trigger restore", which is a bit inaccurate.
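The fail-fast restore suggested here can be sketched as follows; the class and field names are illustrative, not the actual test code. An empty restored state is reported directly, with an accurate message, instead of surfacing later as an unrelated failure:

```java
import java.util.List;

// Illustrative sketch of a fail-fast restore: an empty state list is an
// error in itself and is reported immediately.
class FailFastRestore {
    String someState;
    boolean restored;

    void restoreState(List<FailFastRestore> state) {
        if (state.isEmpty()) {
            // fail immediately with an accurate message
            throw new IllegalStateException("Restored state must not be empty");
        }
        this.someState = state.get(0).someState;
        this.restored = true;
    }
}
```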




[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735516#comment-15735516
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721182
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/CoStreamCheckpointingITCase.java
 ---
@@ -362,19 +367,21 @@ public void flatMap2(String value, Collector 
out) {
}
 
@Override
-   public Long snapshotState(long checkpointId, long 
checkpointTimestamp) {
--- End diff --

Please retain the order of the original methods; snapshot -> restore -> 
close







[GitHub] flink pull request #2939: [FLINK-5113] Ports all functions in the tests to t...

2016-12-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721182
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/CoStreamCheckpointingITCase.java
 ---
@@ -362,19 +367,21 @@ public void flatMap2(String value, Collector 
out) {
}
 
@Override
-   public Long snapshotState(long checkpointId, long 
checkpointTimestamp) {
--- End diff --

Please retain the order of the original methods; snapshot -> restore -> 
close




[GitHub] flink issue #2973: [FLINK-5206] [py] Use random file names in tests

2016-12-09 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2973
  
@StephanEwen Thank you for the review, merging.




[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735510#comment-15735510
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721756
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/StreamCheckpointNotifierITCase.java
 ---
@@ -447,5 +434,23 @@ public void notifyCheckpointComplete(long 
checkpointId) {

GeneratingSourceFunction.numPostFailureNotifications.incrementAndGet();
}
}
+
+   @Override
+   public List snapshotState(long checkpointId, long 
timestamp) throws Exception {
+   if (!hasFailed && count >= failurePos && 
getRuntimeContext().getIndexOfThisSubtask() == 0) {
--- End diff --

please move these methods up again to reduce the diff. The methods are 
identical apart from the signature and return statement; the diff should 
reflect that.







[jira] [Commented] (FLINK-4906) Use constants for the name of system-defined metrics

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735503#comment-15735503
 ] 

ASF GitHub Bot commented on FLINK-4906:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2980
  
@StephanEwen Thank you for the review, merging.


> Use constants for the name of system-defined metrics
> 
>
> Key: FLINK-4906
> URL: https://issues.apache.org/jira/browse/FLINK-4906
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.3
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.2.0
>
>






[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735505#comment-15735505
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721848
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/WindowCheckpointingITCase.java
 ---
@@ -372,23 +374,25 @@ public void notifyCheckpointComplete(long 
checkpointId) {
numSuccessfulCheckpoints++;
}
 
-   @Override
-   public Integer snapshotState(long checkpointId, long 
checkpointTimestamp) {
-   return numElementsEmitted;
+   public static void reset() {
+   failedBefore = false;
--- End diff --

method order, snapshot -> restore -> reset







[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735509#comment-15735509
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91719518
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/InterruptSensitiveRestoreTest.java
 ---
@@ -128,11 +131,16 @@ private static Task createTask(

when(networkEnvironment.createKvStateTaskRegistry(any(JobID.class), 
any(JobVertexID.class)))
.thenReturn(mock(TaskKvStateRegistry.class));
 
-   ChainedStateHandle operatorState = new 
ChainedStateHandle<>(Collections.singletonList(state));
+   ChainedStateHandle operatorState = null;
List keyGroupStateFromBackend = 
Collections.emptyList();
List keyGroupStateFromStream = 
Collections.emptyList();
-   List operatorStateBackend = 
Collections.emptyList();
-   List operatorStateStream = 
Collections.emptyList();
+
+   Map testState = new HashMap<>();
+   testState.put("test", new long[] {0, 10});
+
+   Collection handle = 
Collections.singletonList(new OperatorStateHandle(testState, state));
+   List operatorStateBackend = 
Collections.singletonList(handle);
+   List operatorStateStream = 
Collections.singletonList(handle);
--- End diff --

Can this be an empty list as well?







[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735506#comment-15735506
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91721132
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/checkpointing/CoStreamCheckpointingITCase.java
 ---
@@ -323,26 +329,25 @@ public PrefixCount map(String value) {
}
 
@Override
-   public Long snapshotState(long checkpointId, long 
checkpointTimestamp) {
-   return count;
+   public void close() throws IOException {
+   counts[getRuntimeContext().getIndexOfThisSubtask()] = 
count;
--- End diff --

Please retain the order of the original methods; snapshot -> restore -> 
close







[jira] [Commented] (FLINK-5113) Make all Testing Functions implement CheckpointedFunction Interface.

2016-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735511#comment-15735511
 ] 

ASF GitHub Bot commented on FLINK-5113:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/2939#discussion_r91720508
  
--- Diff: 
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTaskTest.java
 ---
@@ -199,11 +200,13 @@ public Serializable snapshotState(long checkpointId, 
long checkpointTimestamp) t
Assert.fail("Count is different at start end 
end of snapshot.");
}
semaphore.release();
-   return sum;
+   return Collections.singletonList((Serializable) sum);
--- End diff --

You don't to cast here, instead use `return 
Collections.singletonList(sum);`






