[jira] [Commented] (FLINK-2178) groupReduceOnNeighbors throws NoSuchElementException

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585030#comment-14585030
 ] 

ASF GitHub Bot commented on FLINK-2178:
---

Github user andralungu commented on the pull request:

https://github.com/apache/flink/pull/799#issuecomment-111811473
  
PR updated. 


 groupReduceOnNeighbors throws NoSuchElementException
 

 Key: FLINK-2178
 URL: https://issues.apache.org/jira/browse/FLINK-2178
 Project: Flink
  Issue Type: Bug
  Components: Gelly
Affects Versions: 0.9
Reporter: Andra Lungu
Assignee: Andra Lungu

 In the ALL EdgeDirection case, ApplyCoGroupFunctionOnAllNeighbors does not 
 check whether the vertex iterator has elements causing the aforementioned 
 exception. 
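
The missing guard can be sketched in isolation (this is a simplified stand-in, not Gelly's actual ApplyCoGroupFunctionOnAllNeighbors; names and types are illustrative):

```java
import java.util.Iterator;

public class NeighborGuard {
    // Sketch of the fix: check hasNext() on the vertex side of the co-group
    // before calling next(). With an empty vertex iterator the unguarded code
    // threw NoSuchElementException; here the group is skipped instead.
    static Integer sumVertexAndNeighbors(Iterator<Integer> vertex,
                                         Iterator<Integer> neighbors) {
        if (!vertex.hasNext()) {
            return null; // no vertex for this group: emit nothing
        }
        int sum = vertex.next();
        while (neighbors.hasNext()) {
            sum += neighbors.next();
        }
        return sum;
    }
}
```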



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2149][gelly] Simplified Jaccard Example

2015-06-14 Thread andralungu
Github user andralungu commented on the pull request:

https://github.com/apache/flink/pull/770#issuecomment-111807874
  
PR updated.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2208) Build error for Java IBM

2015-06-14 Thread Felix Neutatz (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585045#comment-14585045
 ] 

Felix Neutatz commented on FLINK-2208:
--

Thanks, that solved the issue :) I will push the fix soon ;)

 Build error for Java IBM
 

 Key: FLINK-2208
 URL: https://issues.apache.org/jira/browse/FLINK-2208
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 0.9
Reporter: Felix Neutatz
Priority: Minor

 Using IBM Java 7 will break the build:
 {code:xml}
 [INFO] --- scala-maven-plugin:3.1.4:compile (scala-compile-first) @ 
 flink-runtime ---
 [INFO] 
 /share/flink/flink-0.9-SNAPSHOT-wo-Yarn/flink-runtime/src/main/java:-1: info: 
 compiling
 [INFO] 
 /share/flink/flink-0.9-SNAPSHOT-wo-Yarn/flink-runtime/src/main/scala:-1: 
 info: compiling
 [INFO] Compiling 461 source files to 
 /share/flink/flink-0.9-SNAPSHOT-wo-Yarn/flink-runtime/target/classes at 
 1434059956648
 [ERROR] 
 /share/flink/flink-0.9-SNAPSHOT-wo-Yarn/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala:1768:
  error: type OperatingSystemMXBean is not a member of package 
 com.sun.management
 [ERROR] asInstanceOf[com.sun.management.OperatingSystemMXBean]).
 [ERROR] ^
 [ERROR] 
 /share/flink/flink-0.9-SNAPSHOT-wo-Yarn/flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala:1787:
  error: type OperatingSystemMXBean is not a member of package 
 com.sun.management
 [ERROR] val methodsList = 
 classOf[com.sun.management.OperatingSystemMXBean].getMethods()
 [ERROR]  ^
 [ERROR] two errors found
 [INFO] 
 
 [INFO] Reactor Summary:
 [INFO] 
 [INFO] flink .. SUCCESS [ 14.447 
 s]
 [INFO] flink-shaded-hadoop  SUCCESS [  2.548 
 s]
 [INFO] flink-shaded-include-yarn .. SUCCESS [ 36.122 
 s]
 [INFO] flink-shaded-include-yarn-tests  SUCCESS [ 36.980 
 s]
 [INFO] flink-core . SUCCESS [ 21.887 
 s]
 [INFO] flink-java . SUCCESS [ 16.023 
 s]
 [INFO] flink-runtime .. FAILURE [ 20.241 
 s]
 [INFO] flink-optimizer  SKIPPED
 [hadoop@ibm-power-1 /]$ java -version
 java version "1.7.0"
 Java(TM) SE Runtime Environment (build pxp6470_27sr1fp1-20140708_01(SR1 FP1))
 IBM J9 VM (build 2.7, JRE 1.7.0 Linux ppc64-64 Compressed References 
 20140707_205525 (JIT enabled, AOT enabled)
 J9VM - R27_Java727_SR1_20140707_1408_B205525
 JIT  - tr.r13.java_20140410_61421.07
 GC   - R27_Java727_SR1_20140707_1408_B205525_CMPRSS
 J9CL - 20140707_205525)
 JCL - 20140707_01 based on Oracle 7u65-b16
 {code}





[jira] [Commented] (FLINK-1818) Provide API to cancel running job

2015-06-14 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585064#comment-14585064
 ] 

Matthias J. Sax commented on FLINK-1818:


Nice! :)

 Provide API to cancel running job
 -

 Key: FLINK-1818
 URL: https://issues.apache.org/jira/browse/FLINK-1818
 Project: Flink
  Issue Type: Improvement
  Components: Java API
Affects Versions: 0.9
Reporter: Robert Metzger
Assignee: niraj rai
  Labels: starter

 http://apache-flink-incubator-mailing-list-archive.1008284.n3.nabble.com/Canceling-a-Cluster-Job-from-Java-td4897.html





[jira] [Created] (FLINK-2210) Compute aggregations by ignoring null values

2015-06-14 Thread Shiti Saxena (JIRA)
Shiti Saxena created FLINK-2210:
---

 Summary: Compute aggregations by ignoring null values
 Key: FLINK-2210
 URL: https://issues.apache.org/jira/browse/FLINK-2210
 Project: Flink
  Issue Type: Bug
Reporter: Shiti Saxena
Assignee: Shiti Saxena
Priority: Minor
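
The issue carries no description, but the summary suggests semantics along these lines (a plain-Java illustration of null-ignoring aggregation, not actual Flink/Table API code):

```java
import java.util.List;

public class NullIgnoringAgg {
    // Average that skips null entries instead of failing or propagating null;
    // returns null only when no non-null values exist. Illustrative only.
    static Double avgIgnoringNulls(List<Double> values) {
        double sum = 0;
        int count = 0;
        for (Double v : values) {
            if (v != null) {
                sum += v;
                count++;
            }
        }
        return count == 0 ? null : sum / count;
    }
}
```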








[jira] [Commented] (FLINK-2152) Provide zipWithIndex utility in flink-contrib

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585155#comment-14585155
 ] 

ASF GitHub Bot commented on FLINK-2152:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/832#issuecomment-111856162
  
+1 to merge.


 Provide zipWithIndex utility in flink-contrib
 -

 Key: FLINK-2152
 URL: https://issues.apache.org/jira/browse/FLINK-2152
 Project: Flink
  Issue Type: Improvement
  Components: Java API
Reporter: Robert Metzger
Assignee: Andra Lungu
Priority: Trivial
  Labels: starter

 We should provide a simple utility method for zipping elements in a data set 
 with a dense index.
 It's up for discussion whether we want it directly in the API or if we should 
 provide it only as a utility from {{flink-contrib}}.
 I would put it in {{flink-contrib}}.
 See my answer on SO: 
 http://stackoverflow.com/questions/30596556/zipwithindex-on-apache-flink
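
The dense-index technique such a utility is typically based on can be shown without Flink: count elements per partition first, then assign offsets. This is a standalone sketch with plain lists standing in for DataSet partitions, not the actual flink-contrib code:

```java
import java.util.ArrayList;
import java.util.List;

public class ZipWithIndexSketch {
    // Two-pass dense indexing: pass 1 computes each partition's starting
    // offset from the partition sizes; pass 2 assigns consecutive indices
    // (rendered here as "index:element" strings for simplicity).
    static <T> List<String> zipWithIndex(List<List<T>> partitions) {
        long[] offsets = new long[partitions.size()];
        long total = 0;
        for (int p = 0; p < partitions.size(); p++) {
            offsets[p] = total;
            total += partitions.get(p).size();
        }
        List<String> indexed = new ArrayList<>();
        for (int p = 0; p < partitions.size(); p++) {
            long index = offsets[p];
            for (T element : partitions.get(p)) {
                indexed.add((index++) + ":" + element);
            }
        }
        return indexed;
    }
}
```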





[jira] [Commented] (FLINK-2208) Build error for Java IBM

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585149#comment-14585149
 ] 

ASF GitHub Bot commented on FLINK-2208:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/833#discussion_r32382190
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
 ---
@@ -1765,7 +1765,7 @@ object TaskManager {
   override def getValue: Double = {
 try{
   
fetchCPULoad.map(_.invoke(ManagementFactory.getOperatingSystemMXBean().
-asInstanceOf[com.sun.management.OperatingSystemMXBean]).
+asInstanceOf[java.lang.management.OperatingSystemMXBean]).
--- End diff --

I think the change makes the cast unnecessary because that's exactly the 
type returned by the method.


 Build error for Java IBM
 

 Key: FLINK-2208
 URL: https://issues.apache.org/jira/browse/FLINK-2208
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 0.9
Reporter: Felix Neutatz
Assignee: Felix Neutatz
Priority: Minor






[GitHub] flink pull request: [FLINK-2208] Fix IBM Java integration bug

2015-06-14 Thread FelixNeutatz
GitHub user FelixNeutatz opened a pull request:

https://github.com/apache/flink/pull/833

[FLINK-2208] Fix IBM Java integration bug



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/FelixNeutatz/incubator-flink FLINK_2208_IBM_JAVA

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/833.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #833


commit feb7ab113f5bdeb4449c2c1c1bf3ca6b38e64e89
Author: FelixNeutatz neut...@googlemail.com
Date:   2015-06-14T12:59:21Z

[FLINK-2208] Fix IBM Java integration bug






[jira] [Commented] (FLINK-2208) Build error for Java IBM

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585085#comment-14585085
 ] 

ASF GitHub Bot commented on FLINK-2208:
---

GitHub user FelixNeutatz opened a pull request:

https://github.com/apache/flink/pull/833

[FLINK-2208] Fix IBM Java integration bug



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/FelixNeutatz/incubator-flink FLINK_2208_IBM_JAVA

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/833.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #833


commit feb7ab113f5bdeb4449c2c1c1bf3ca6b38e64e89
Author: FelixNeutatz neut...@googlemail.com
Date:   2015-06-14T12:59:21Z

[FLINK-2208] Fix IBM Java integration bug




 Build error for Java IBM
 

 Key: FLINK-2208
 URL: https://issues.apache.org/jira/browse/FLINK-2208
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 0.9
Reporter: Felix Neutatz
Assignee: Felix Neutatz
Priority: Minor






[GitHub] flink pull request: Flink Storm compatibility

2015-06-14 Thread mbalassi
Github user mbalassi commented on the pull request:

https://github.com/apache/flink/pull/573#issuecomment-111848332
  
Hey @mjsax, thanks for the cleanup. I was focused on the release during the 
week, I'm doing the merge this evening.




[jira] [Created] (FLINK-2211) Generalize ALS API

2015-06-14 Thread JIRA
Ronny Bräunlich created FLINK-2211:
--

 Summary: Generalize ALS API
 Key: FLINK-2211
 URL: https://issues.apache.org/jira/browse/FLINK-2211
 Project: Flink
  Issue Type: Bug
  Components: Machine Learning Library
Affects Versions: 0.9
Reporter: Ronny Bräunlich
Priority: Minor


At the moment, predict() and fit() require DataSet[(Int, Int)] or 
DataSet[(Int, Int, Double)], respectively.
This should be changed to Long to accept more values, or to something more 
general.
See also 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Apache-Flink-0-9-ALS-API-td6424.html





[GitHub] flink pull request: [FLINK-2208] Fix IBM Java integration bug

2015-06-14 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/833#issuecomment-111855825
  
I think this pull request is disabling the CPU utilization monitoring, 
because the check whether `getProcessCpuLoad()` exists will always be false 
(even though we are on a Sun JVM).
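
A portable way to keep the feature on Sun/Oracle JVMs while surviving on IBM's is to resolve the vendor-specific class reflectively instead of referencing it at compile time. This is a sketch of that idea, not the actual Flink code:

```java
import java.lang.reflect.Method;
import java.util.Optional;

public class CpuLoadProbe {
    // Look up getProcessCpuLoad() on a class resolved by name, so compiling
    // does not require com.sun.management on the classpath. On JVMs lacking
    // the class (e.g. IBM J9) this returns Optional.empty() instead of
    // causing a build failure.
    static Optional<Method> findCpuLoadMethod(String mxBeanClassName) {
        try {
            Class<?> clazz = Class.forName(mxBeanClassName);
            return Optional.of(clazz.getMethod("getProcessCpuLoad"));
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return Optional.empty();
        }
    }
}
```

Called with "com.sun.management.OperatingSystemMXBean", this would find the method on HotSpot-style JVMs and quietly degrade elsewhere.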





[GitHub] flink pull request: [FLINK-2208] Fix IBM Java integration bug

2015-06-14 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/833#discussion_r32382190
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
 ---
@@ -1765,7 +1765,7 @@ object TaskManager {
   override def getValue: Double = {
 try{
   
fetchCPULoad.map(_.invoke(ManagementFactory.getOperatingSystemMXBean().
-asInstanceOf[com.sun.management.OperatingSystemMXBean]).
+asInstanceOf[java.lang.management.OperatingSystemMXBean]).
--- End diff --

I think the change makes the cast unnecessary because that's exactly the 
type returned by the method.




[GitHub] flink pull request: [FLINK-2208] Fix IBM Java integration bug

2015-06-14 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/833#discussion_r32382202
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
 ---
@@ -1784,7 +1784,7 @@ object TaskManager {
* @return
*/
   private def getMethodToFetchCPULoad(): Option[Method] = {
-val methodsList = 
classOf[com.sun.management.OperatingSystemMXBean].getMethods()
+val methodsList = 
classOf[java.lang.management.OperatingSystemMXBean].getMethods()
--- End diff --

getMethods() will never contain the `getProcessCpuLoad` method, so the 
feature is basically disabled




[jira] [Commented] (FLINK-2208) Build error for Java IBM

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585150#comment-14585150
 ] 

ASF GitHub Bot commented on FLINK-2208:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/833#discussion_r32382202
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
 ---
@@ -1784,7 +1784,7 @@ object TaskManager {
* @return
*/
   private def getMethodToFetchCPULoad(): Option[Method] = {
-val methodsList = 
classOf[com.sun.management.OperatingSystemMXBean].getMethods()
+val methodsList = 
classOf[java.lang.management.OperatingSystemMXBean].getMethods()
--- End diff --

getMethods() will never contain the `getProcessCpuLoad` method, so the 
feature is basically disabled


 Build error for Java IBM
 

 Key: FLINK-2208
 URL: https://issues.apache.org/jira/browse/FLINK-2208
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 0.9
Reporter: Felix Neutatz
Assignee: Felix Neutatz
Priority: Minor






[jira] [Commented] (FLINK-2209) Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a cluster

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585257#comment-14585257
 ] 

ASF GitHub Bot commented on FLINK-2209:
---

GitHub user mbalassi opened a pull request:

https://github.com/apache/flink/pull/835

[FLINK-2209] Document linking with jars not in the binary dist



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mbalassi/flink flink-2209

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #835


commit 5abcb8b90da9e20f1ca0b4da7fd7f139435c3e10
Author: mbalassi mbala...@apache.org
Date:   2015-06-14T20:21:43Z

[FLINK-2209] [docs] Document linking with jars not in the binary dist

commit c8d526da18ca0a882d9eb37ad9282c1152effb06
Author: mbalassi mbala...@apache.org
Date:   2015-06-14T20:24:52Z

[docs] Update obsolate cluster execution guide




 Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a 
 cluster
 -

 Key: FLINK-2209
 URL: https://issues.apache.org/jira/browse/FLINK-2209
 Project: Flink
  Issue Type: Improvement
Reporter: Till Rohrmann
Assignee: Márton Balassi

 Currently the TableAPI, Gelly, FlinkML and StreamingConnectors are not part 
 of the Flink dist module. Therefore they are not included in the binary 
 distribution. As a consequence, if you want to use one of these libraries the 
 corresponding jar and all their dependencies have to be either manually put 
 on the cluster or the user has to include them in the user code jar.
 Usually a fat jar is built if one uses the quickstart archetypes. However, 
 if one sets the project up manually this is not necessarily the case. 
 Therefore, it should be well documented how to run programs using one of 
 these libraries.
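
For the manual-setup case, a fat jar can be produced with the Maven Shade Plugin; a minimal sketch (plugin version and placement in the pom's build/plugins section are illustrative):

```xml
<!-- Bundle library dependencies (e.g. Gelly, FlinkML, connectors)
     into the user-code jar so the cluster needs no extra jars -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```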





[GitHub] flink pull request: Flink Storm compatibility

2015-06-14 Thread mbalassi
Github user mbalassi commented on the pull request:

https://github.com/apache/flink/pull/573#issuecomment-111878533
  
Did a final pass and a rebase, code now looks good. There is one little 
issue with the tests that I need to check again tomorrow morning, then I'm 
merging it. :+1: 




[jira] [Closed] (FLINK-2002) Iterative test fails when ran with other tests in the same environment

2015-06-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/FLINK-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Márton Balassi closed FLINK-2002.
-
   Resolution: Fixed
Fix Version/s: 0.9

 Iterative test fails when ran with other tests in the same environment
 --

 Key: FLINK-2002
 URL: https://issues.apache.org/jira/browse/FLINK-2002
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Péter Szabó
Assignee: Márton Balassi
 Fix For: 0.9


 I run tests in the same StreamExecutionEnvironment with 
 MultipleProgramsTestBase. One of the tests uses an iterative data stream. It 
 fails as well as all tests after that. (When I put the iterative test in a 
 separate environment, all tests pass.) For me it seems that it is a 
 state-related issue but there is also some problem with the broker slots.
 The iterative test throws:
 java.lang.Exception: TaskManager sent illegal state update: CANCELING
   at 
 org.apache.flink.runtime.executiongraph.ExecutionGraph.updateState(ExecutionGraph.java:618)
   at 
 org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1$$anonfun$applyOrElse$2.apply$mcV$sp(JobManager.scala:222)
   at 
 org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1$$anonfun$applyOrElse$2.apply(JobManager.scala:221)
   at 
 org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1$$anonfun$applyOrElse$2.apply(JobManager.scala:221)
   at 
 scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
   at 
 scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
   at 
 akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
   at 
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
   at 
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
   at 
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
   at 
 org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1.applyOrElse(JobManager.scala:314)
   at 
 scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
   at 
 scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
   at 
 scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
   at 
 org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:36)
   at 
 org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:29)
   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
   at 
 org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:29)
   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
   at 
 org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:95)
   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
   at 
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
   at 
 scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
   at 
 scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
   at 
 scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 Caused by: 
 org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: 
 Not enough free slots available to run the job. You can decrease the operator 
 parallelism or increase the number of slots per TaskManager in the 
 configuration. Task to schedule:  Attempt #0 (GroupedActiveDiscretizer 
 (2/4)) @ (unassigned) - [SCHEDULED]  with groupID  
 e8f7c9c85e64403962648bc7e2aead8b  in sharing group  SlotSharingGroup 
 [5e62f1cc5cae2c088430ef935470a8d5, 5bc227941969d1daa1ebb1ba070b55ce, 
 d999ee6c10730775a8fef1c6f1af1dbd, 45b73caa75424d84adbb7bb92671591d, 
 5c94c54d9316b827c6eba6c721329549, 794d6c56bee347dcdd62ffdf189de267, 
 4c3b72e17a4acecde4241fe6e63355b8, f6a6028c224a7b81e4802eeaf9c8487e, 
 989c68790fc7c5e2f8b8c150a33fef89, db93daa1f9e5194f0079df2629b08efb, 
 

[GitHub] flink pull request: [FLINK-2209] Document linking with jars not in...

2015-06-14 Thread mbalassi
GitHub user mbalassi opened a pull request:

https://github.com/apache/flink/pull/835

[FLINK-2209] Document linking with jars not in the binary dist



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mbalassi/flink flink-2209

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/835.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #835


commit 5abcb8b90da9e20f1ca0b4da7fd7f139435c3e10
Author: mbalassi mbala...@apache.org
Date:   2015-06-14T20:21:43Z

[FLINK-2209] [docs] Document linking with jars not in the binary dist

commit c8d526da18ca0a882d9eb37ad9282c1152effb06
Author: mbalassi mbala...@apache.org
Date:   2015-06-14T20:24:52Z

[docs] Update obsolate cluster execution guide






[GitHub] flink pull request: [scripts] remove quickstart scripts

2015-06-14 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/822#issuecomment-111852542
  
Right you are. I have pushed a fix to the release-0.8 branch. Like in the 
current master, the script will always point to the website's script.




[jira] [Commented] (FLINK-2209) Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a cluster

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585458#comment-14585458
 ] 

ASF GitHub Bot commented on FLINK-2209:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/835#discussion_r32392995
  
--- Diff: docs/apis/cluster_execution.md ---
@@ -80,67 +80,73 @@ Note that the program contains custom user code and 
hence requires a JAR file wi
 the classes of the code attached. The constructor of the remote environment
 takes the path(s) to the JAR file(s).
 
-## Remote Executor
+## Linking with modules not contained in the binary distribution
 
-Similar to the RemoteEnvironment, the RemoteExecutor lets you execute
-Flink programs on a cluster directly. The remote executor accepts a
-*Plan* object, which describes the program as a single executable unit.
+The binary distribution contains jar packages in the `lib` folder that are 
automatically
+provided to the classpath of your distrbuted programs. Almost all of Flink 
classes are
+located there with a few exceptions, for example the streaming connectors 
and some freshly
+added modules. To run code depending on these modules you need to make 
them accessible
+during runtime, for which we suggest two options:
 
-### Maven Dependency
-
-If you are developing your program in a Maven project, you have to add the
-`flink-clients` module using this dependency:
-
-~~~xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients</artifactId>
-  <version>{{ site.version }}</version>
-</dependency>
-~~~
-
-### Example
-
-The following illustrates the use of the `RemoteExecutor` with the Scala 
API:
-
-~~~scala
-def main(args: Array[String]) {
-    val input = TextFile("hdfs://path/to/file")
+1. Either copy the required jar files to the `lib` folder onto all of your 
TaskManagers.
+2. Or package them with your usercode.
 
-    val words = input flatMap { _.toLowerCase().split("\\W+") filter { _ != "" } }
-    val counts = words groupBy { x => x } count()
+The latter version is recommended as it respects the classloader 
management in Flink.
 
-    val output = counts.write("wordsOutput", CsvOutputFormat())
-
-    val plan = new ScalaPlan(Seq(output), "Word Count")
-    val executor = new RemoteExecutor("strato-master", 7881, "/path/to/jarfile.jar")
-    executor.executePlan(p);
-}
-~~~
+### Packaging dependencies with your usercode with Maven
 
-The following illustrates the use of the `RemoteExecutor` with the Java 
API (as
-an alternative to the RemoteEnvironment):
+To provide these dependencies not included by Flink we suggest two options 
with Maven.
 
-~~~java
-public static void main(String[] args) throws Exception {
-    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+1. The maven assembly plugin builds a so called fat jar cointaining all 
your dependencies.
+Easy to configure, but is an overkill in many cases. See 
+[usage](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html).
+2. The maven unpack plugin, for unpacking the relevant parts of the 
dependencies and
+then package it with your code.
 
-    DataSet<String> data = env.readTextFile("hdfs://path/to/file");
+To the the latter for example for the streaming Kafka connector, 
`flink-connector-kafka`
--- End diff --

the the 


 Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a 
 cluster
 -

 Key: FLINK-2209
 URL: https://issues.apache.org/jira/browse/FLINK-2209
 Project: Flink
  Issue Type: Improvement
Reporter: Till Rohrmann
Assignee: Márton Balassi

 Currently the TableAPI, Gelly, FlinkML and StreamingConnectors are not part 
 of the Flink dist module. Therefore they are not included in the binary 
 distribution. As a consequence, if you want to use one of these libraries the 
 corresponding jar and all their dependencies have to be either manually put 
 on the cluster or the user has to include them in the user code jar.
 Usually a fat jar is built if one uses the quickstart archetypes. However, 
 if one sets the project up manually this is not necessarily the case. 
 Therefore, it should be well documented how to run programs using one of 
 these libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2209] Document linking with jars not in...

2015-06-14 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/835#discussion_r32392995
  
--- Diff: docs/apis/cluster_execution.md ---
@@ -80,67 +80,73 @@ Note that the program contains custom user code and 
hence requires a JAR file wi
 the classes of the code attached. The constructor of the remote environment
 takes the path(s) to the JAR file(s).
 
-## Remote Executor
+## Linking with modules not contained in the binary distribution
 
-Similar to the RemoteEnvironment, the RemoteExecutor lets you execute
-Flink programs on a cluster directly. The remote executor accepts a
-*Plan* object, which describes the program as a single executable unit.
+The binary distribution contains jar packages in the `lib` folder that are 
automatically
+provided to the classpath of your distrbuted programs. Almost all of Flink 
classes are
+located there with a few exceptions, for example the streaming connectors 
and some freshly
+added modules. To run code depending on these modules you need to make 
them accessible
+during runtime, for which we suggest two options:
 
-### Maven Dependency
-
-If you are developing your program in a Maven project, you have to add the
-`flink-clients` module using this dependency:
-
-~~~xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients</artifactId>
-  <version>{{ site.version }}</version>
-</dependency>
-~~~
-
-### Example
-
-The following illustrates the use of the `RemoteExecutor` with the Scala 
API:
-
-~~~scala
-def main(args: Array[String]) {
-    val input = TextFile("hdfs://path/to/file")
+1. Either copy the required jar files to the `lib` folder onto all of your 
TaskManagers.
+2. Or package them with your usercode.
 
-    val words = input flatMap { _.toLowerCase().split("\\W+") filter { _ != "" } }
-    val counts = words groupBy { x => x } count()
+The latter version is recommended as it respects the classloader 
management in Flink.
 
-    val output = counts.write("wordsOutput", CsvOutputFormat())
-
-    val plan = new ScalaPlan(Seq(output), "Word Count")
-    val executor = new RemoteExecutor("strato-master", 7881, "/path/to/jarfile.jar")
-    executor.executePlan(p);
-}
-~~~
+### Packaging dependencies with your usercode with Maven
 
-The following illustrates the use of the `RemoteExecutor` with the Java 
API (as
-an alternative to the RemoteEnvironment):
+To provide these dependencies not included by Flink we suggest two options 
with Maven.
 
-~~~java
-public static void main(String[] args) throws Exception {
-    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+1. The maven assembly plugin builds a so called fat jar cointaining all 
your dependencies.
+Easy to configure, but is an overkill in many cases. See 
+[usage](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html).
+2. The maven unpack plugin, for unpacking the relevant parts of the 
dependencies and
+then package it with your code.
 
-    DataSet<String> data = env.readTextFile("hdfs://path/to/file");
+To the the latter for example for the streaming Kafka connector, 
`flink-connector-kafka`
--- End diff --

the the 
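
Editor's note: for readers following the diff above, option 1 (a fat jar via the maven-assembly-plugin) is commonly wired up with the plugin's built-in `jar-with-dependencies` descriptor. The fragment below is a minimal illustration of that setup, not the exact configuration from the PR:

```xml
<!-- Illustrative only: bind the assembly plugin's "single" goal to the package
     phase so `mvn package` also produces a fat jar with all dependencies. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The resulting `*-jar-with-dependencies.jar` bundles the usercode together with the modules not shipped in Flink's `lib` folder.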




[jira] [Commented] (FLINK-2210) Table API aggregate by ignoring null values

2015-06-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14585201#comment-14585201
 ] 

ASF GitHub Bot commented on FLINK-2210:
---

GitHub user Shiti opened a pull request:

https://github.com/apache/flink/pull/834

[FLINK-2210] Table API support for aggregation on columns with null 

When a value is null, the AggregationFunction's aggregate method is not 
called.
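
Editor's note: a minimal, self-contained sketch of the idea behind this fix — skip null cells so the aggregate step is never invoked on them. The class and method names below are illustrative stand-ins, not Flink's actual Table API internals:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for the null-ignoring aggregation described in FLINK-2210:
// null values are filtered out before the aggregate step runs.
public class NullIgnoringSum {

    // Simplified analogue of an aggregation function (names are hypothetical).
    interface AggregationFunction<T> {
        void aggregate(T value);

        T getAggregate();
    }

    static class SumAggregator implements AggregationFunction<Integer> {
        private int sum = 0;

        @Override
        public void aggregate(Integer value) {
            sum += value;
        }

        @Override
        public Integer getAggregate() {
            return sum;
        }
    }

    // Feed only non-null values to the aggregator, avoiding the
    // NullPointerException that naive aggregation over a nullable column hits.
    static <T> T aggregateIgnoringNulls(List<T> column, AggregationFunction<T> agg) {
        for (T value : column) {
            if (value != null) {
                agg.aggregate(value);
            }
        }
        return agg.getAggregate();
    }

    public static void main(String[] args) {
        List<Integer> column = Arrays.asList(1, null, 2, null, 3);
        System.out.println(aggregateIgnoringNulls(column, new SumAggregator())); // prints 6
    }
}
```

A real fix lives inside Flink's aggregation operators; this only demonstrates the skip-nulls contract.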

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Shiti/flink FLINK-2210

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/834.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #834


commit f37d4c04ee585c91e6f865eb58b893c863cf4df9
Author: Shiti ssaxena@gmail.com
Date:   2015-06-14T18:59:02Z

[FLINK-2210] Table API support for aggregation on columns with null values




 Table API aggregate by ignoring null values
 ---

 Key: FLINK-2210
 URL: https://issues.apache.org/jira/browse/FLINK-2210
 Project: Flink
  Issue Type: Bug
Reporter: Shiti Saxena
Assignee: Shiti Saxena
Priority: Minor

 Attempting to aggregate on columns which may have null values results in 
 NullPointerException.





[jira] [Commented] (FLINK-2209) Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a cluster

2015-06-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/FLINK-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14585246#comment-14585246
 ] 

Márton Balassi commented on FLINK-2209:
---

I think this is more cluster execution than best practices. All the parts 
having this issue should have a link to the documentation anyway. :)

 Document how to use TableAPI, Gelly and FlinkML, StreamingConnectors on a 
 cluster
 -

 Key: FLINK-2209
 URL: https://issues.apache.org/jira/browse/FLINK-2209
 Project: Flink
  Issue Type: Improvement
Reporter: Till Rohrmann
Assignee: Márton Balassi

 Currently the TableAPI, Gelly, FlinkML and StreamingConnectors are not part 
 of the Flink dist module. Therefore they are not included in the binary 
 distribution. As a consequence, if you want to use one of these libraries the 
 corresponding jar and all their dependencies have to be either manually put 
 on the cluster or the user has to include them in the user code jar.
 Usually a fat jar is built if one uses the quickstart archetypes. However, 
 if one sets the project up manually this is not necessarily the case. 
 Therefore, it should be well documented how to run programs using one of 
 these libraries.


