[jira] [Updated] (FLINK-1785) Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA

2015-03-25 Thread Henry Saputra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Saputra updated FLINK-1785:
-
Description: 
The master build fails in the flink-tachyon tests when running {{mvn test}}:

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.flink.tachyon.HDFSTest
Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
java.lang.NoSuchFieldError: IBM_JAVA
at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:348)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)

...

Results :

Failed tests:
  HDFSTest.createHDFS:76 Test failed IBM_JAVA
  HDFSTest.createHDFS:76 Test failed Could not initialize class org.apache.hadoop.security.UserGroupInformation

Tests in error:
  HDFSTest.destroyHDFS:83 NullPointer
  HDFSTest.destroyHDFS:83 NullPointer
  TachyonFileSystemWrapperTest.testHadoopLoadability:116 » NoClassDefFound Could...

Tests run: 6, Failures: 3, Errors: 3, Skipped: 0

{code}
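A `NoSuchFieldError` like this typically means two different versions of a class ended up on the test classpath: the `IBM_JAVA` constant was added to Hadoop's `PlatformName` in later Hadoop versions, so an older jar shadowing a newer one would lack it (an assumption worth verifying here, not an established diagnosis). A minimal reflection sketch for checking which version of a class actually got loaded; `FieldCheck` and `hasField` are hypothetical names, not Flink or Hadoop code:

```java
public class FieldCheck {
    // Returns true only when the class resolves AND exposes the public field.
    // Calling this on a suspect class from inside the failing module shows
    // whether the version on the classpath carries the field in question.
    public static boolean hasField(String className, String fieldName) {
        try {
            Class.forName(className).getField(fieldName);
            return true;
        } catch (ClassNotFoundException | NoSuchFieldException e) {
            return false;
        }
    }
}
```

Running `mvn dependency:tree` on the flink-tachyon module is the usual companion step for spotting which transitive dependencies pull in the conflicting Hadoop artifacts.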

[jira] [Created] (FLINK-1785) Master tests in flink-tachyon fail with java.lang.NoSuchFieldError: IBM_JAVA

2015-03-25 Thread Henry Saputra (JIRA)
Henry Saputra created FLINK-1785:


 Summary: Master tests in flink-tachyon fail with 
java.lang.NoSuchFieldError: IBM_JAVA
 Key: FLINK-1785
 URL: https://issues.apache.org/jira/browse/FLINK-1785
 Project: Flink
  Issue Type: Bug
  Components: test
Reporter: Henry Saputra


The master build fails in the flink-tachyon tests when running {{mvn test}}:

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.flink.tachyon.HDFSTest
Running org.apache.flink.tachyon.TachyonFileSystemWrapperTest
java.lang.NoSuchFieldError: IBM_JAVA
at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:303)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:348)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:807)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:266)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:122)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:775)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:642)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:334)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:316)
at org.apache.flink.tachyon.HDFSTest.createHDFS(HDFSTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)

...

Results :

Failed tests:
  HDFSTest.createHDFS:76 Test failed IBM_JAVA
  HDFSTest.createHDFS:76 Test failed Could not initialize class org.apache.hadoop.security.UserGroupInformation
  TachyonFileSystemWrapperTest.testTachyon:149 Test failed with exception: Cannot initialize task 'DataSink (CsvOutputFormat (path: tachyon://x1carbon:18998/result, delimiter:  ))': Could not initialize class org.apache.hadoop.security.UserGroupInformation

Tests in error:
  HDFSTest.destroyHDFS:83 NullPointer
  HDFSTest.destroyHDFS:83 NullPointer
  TachyonFileSystemWrapperTest.testHadoopLoadability:116 » NoClassDefFound Could...

Tests run: 6, Failures: 3, Errors: 3, Skipped: 0

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-703) Use complete element as join key.

2015-03-25 Thread Chiwan Park (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381219#comment-14381219
 ] 

Chiwan Park commented on FLINK-703:
---

Hi, I have a question about this issue. I think we can already use a complete
element as a join, group, or coGroup key. Is this issue still unresolved? If so,
could you explain it in more detail?

I attached an example: https://gist.github.com/chiwanpark/755e34561d30df0014b9

> Use complete element as join key.
> -
>
> Key: FLINK-703
> URL: https://issues.apache.org/jira/browse/FLINK-703
> Project: Flink
>  Issue Type: Improvement
>Reporter: GitHub Import
>Assignee: Chiwan Park
>Priority: Trivial
>  Labels: github-import
> Fix For: pre-apache
>
>
> In some situations such as semi-joins it could make sense to use a complete 
> element as join key. 
> Currently this can be done using a key-selector function, but we could offer 
> a shortcut for that.
> This is not an urgent issue, but might be helpful.
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/703
> Created by: [fhueske|https://github.com/fhueske]
> Labels: enhancement, java api, user satisfaction, 
> Milestone: Release 0.6 (unplanned)
> Created at: Thu Apr 17 23:40:00 CEST 2014
> State: open
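The shortcut described in the issue can be pictured with a plain-Java sketch (this is not Flink's DataSet API; `semiJoin` is a hypothetical helper for illustration): passing the identity function as the key selector makes the complete element the join key, which is exactly the semi-join use case mentioned above.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

public class WholeElementJoin {
    // Semi-join keyed by an arbitrary selector; Function.identity() uses the
    // complete element as the key.
    public static <T, K> List<T> semiJoin(List<T> left, List<T> right, Function<T, K> keySelector) {
        Set<K> rightKeys = new HashSet<>();
        for (T r : right) {
            rightKeys.add(keySelector.apply(r));  // collect keys from the right side
        }
        List<T> out = new ArrayList<>();
        for (T l : left) {
            if (rightKeys.contains(keySelector.apply(l))) {  // keep matching left elements
                out.add(l);
            }
        }
        return out;
    }
}
```

For example, `semiJoin(List.of("a", "b", "c"), List.of("b", "c", "d"), Function.identity())` yields `["b", "c"]` — the proposed API shortcut would simply spare users writing the identity selector by hand.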





[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread Chiwan Park (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381156#comment-14381156
 ] 

Chiwan Park commented on FLINK-1512:


Thanks for merging it down :)

> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
> Fix For: 0.9
>
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.
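As an illustration of the requested feature, CSV-to-POJO mapping can be sketched with reflection in plain Java. This is not Flink's actual CsvReader implementation; the `Person` POJO, the field-name array convention, and the int/String-only type handling are assumptions made for the example:

```java
import java.lang.reflect.Field;

public class PojoCsv {
    // Map one CSV line onto the public fields of a POJO, in the column order
    // given by fieldNames. Only int and String fields are handled in this sketch.
    public static <T> T parseLine(String line, Class<T> type, String[] fieldNames) {
        try {
            T obj = type.getDeclaredConstructor().newInstance();
            String[] cols = line.split(",");
            for (int i = 0; i < fieldNames.length; i++) {
                Field f = type.getField(fieldNames[i]);
                if (f.getType() == int.class) {
                    f.setInt(obj, Integer.parseInt(cols[i].trim()));
                } else {
                    f.set(obj, cols[i].trim());  // everything else treated as String
                }
            }
            return obj;
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("cannot map line to " + type, e);
        }
    }

    // Illustrative POJO with public fields and a default constructor.
    public static class Person {
        public String name;
        public int age;
    }
}
```

A real implementation would additionally need to handle the full set of basic types, nested fields, and getter/setter-style POJOs.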





[GitHub] flink pull request: Fix issue where Windows paths were not recogni...

2015-03-25 Thread balidani
Github user balidani commented on the pull request:

https://github.com/apache/flink/pull/491#issuecomment-86219748
  
Thanks, I did that, hopefully the tests will pass now.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: Fix issue where Windows paths were not recogni...

2015-03-25 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/491#issuecomment-86199581
  
I had a look at the Travis logs, and it seems the build failures are not related 
to your change.
We have had a few issues with build stability on the master branch over the last 
few days. Can you rebase your change onto the current master and update the PR? 
That will trigger the build tests again.




[jira] [Resolved] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske resolved FLINK-1512.
--
   Resolution: Implemented
Fix Version/s: 0.9

Implemented in 43ac967acb589790f5b3befd6f932e325d4ba681

Thanks for the contribution!

> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
> Fix For: 0.9
>
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/426




[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380612#comment-14380612
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/426


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[jira] [Commented] (FLINK-1775) BarrierBuffers don't correctly handle end of stream events

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380332#comment-14380332
 ] 

ASF GitHub Bot commented on FLINK-1775:
---

Github user gyfora commented on the pull request:

https://github.com/apache/flink/pull/534#issuecomment-86140841
  
Should we merge this anyway? The record-writer issue seems to be independent 
of this, and this commit fixes some other bugs.


> BarrierBuffers don't correctly handle end of stream events
> --
>
> Key: FLINK-1775
> URL: https://issues.apache.org/jira/browse/FLINK-1775
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> The current implementation causes deadlocks when the end of stream event 
> comes from a currently blocked channel.





[GitHub] flink pull request: [FLINK-1775] BarrierBuffer fix to avoid end of...

2015-03-25 Thread gyfora
Github user gyfora commented on the pull request:

https://github.com/apache/flink/pull/534#issuecomment-86140841
  
Should we merge this anyway? The record-writer issue seems to be independent 
of this, and this commit fixes some other bugs.




[jira] [Updated] (FLINK-1784) KafkaITCase

2015-03-25 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-1784:
-
Description: 
I observed a non-deterministic failure of the {{KafkaITCase}} on Travis:

https://travis-ci.org/fhueske/flink/jobs/55815808

{code}
Running org.apache.flink.streaming.connectors.kafka.KafkaITCase
03/25/2015 16:08:15 Job execution switched to status RUNNING.
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to SCHEDULED
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to DEPLOYING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to SCHEDULED
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to DEPLOYING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to RUNNING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to RUNNING
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to FAILED

java.util.NoSuchElementException: next on empty iterator
at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
at scala.collection.LinearSeqLike$$anon$1.next(LinearSeqLike.scala:62)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
at org.apache.flink.streaming.connectors.kafka.api.simple.KafkaTopicUtils.getLeaderBrokerAddressForTopic(KafkaTopicUtils.java:83)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.open(KafkaSink.java:118)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:33)
at org.apache.flink.streaming.api.invokable.StreamInvokable.open(StreamInvokable.java:158)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.openOperator(StreamVertex.java:202)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.invoke(StreamVertex.java:165)
at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:209)
at java.lang.Thread.run(Thread.java:701)

03/25/2015 16:08:17 Job execution switched to status FAILING.
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to CANCELING
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to FAILED

java.util.NoSuchElementException: next on empty iterator
at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
at scala.collection.LinearSeqLike$$anon$1.next(LinearSeqLike.scala:62)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
at org.apache.flink.streaming.connectors.kafka.api.simple.KafkaTopicUtils.getLeaderBrokerAddressForTopic(KafkaTopicUtils.java:83)
at org.apache.flink.streaming.connectors.kafka.api.simple.PersistentKafkaSource.open(PersistentKafkaSource.java:159)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:33)
at org.apache.flink.streaming.api.invokable.StreamInvokable.open(StreamInvokable.java:158)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.openOperator(StreamVertex.java:198)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.invoke(StreamVertex.java:165)
at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:209)
at java.lang.Thread.run(Thread.java:701)

03/25/2015 16:08:17 Job execution switched to status FAILED.

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.774 sec <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.KafkaITCase

test(org.apache.flink.streaming.connectors.kafka.KafkaITCase) Time elapsed: 10.604 sec <<< FAILURE!
java.lang.AssertionError: Test failed with: null
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase.test(KafkaITCase.java:104)
{code}
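The root failure is a bare `next()` call on an empty iterator in `KafkaTopicUtils.getLeaderBrokerAddressForTopic` (presumably no leader broker was registered yet when the sink opened). A generic sketch of the defensive pattern, independent of Kafka — `firstOrEmpty` is a hypothetical helper, not the actual fix:

```java
import java.util.Iterator;
import java.util.Optional;

public class IteratorGuard {
    // Guarding with hasNext() turns "next on empty iterator" into an explicit
    // absent value the caller can retry on or report cleanly.
    public static <T> Optional<T> firstOrEmpty(Iterator<T> it) {
        return it.hasNext() ? Optional.of(it.next()) : Optional.empty();
    }
}
```

In the non-deterministic test setting above, an absent result would give the caller a chance to wait for leader election and retry instead of failing the job.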


[jira] [Created] (FLINK-1784) KafkaITCase

2015-03-25 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-1784:


 Summary: KafkaITCase
 Key: FLINK-1784
 URL: https://issues.apache.org/jira/browse/FLINK-1784
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Affects Versions: 0.9
Reporter: Fabian Hueske
Priority: Minor


I observed a non-deterministic failure of the {{KafkaITCase}} on Travis:

https://travis-ci.org/fhueske/flink/jobs/55815808

{code}
Running org.apache.flink.streaming.connectors.kafka.KafkaITCase
03/25/2015 16:08:15 Job execution switched to status RUNNING.
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to SCHEDULED
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to DEPLOYING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to SCHEDULED
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to DEPLOYING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to RUNNING
03/25/2015 16:08:15 Custom Source -> Stream Sink(1/1) switched to RUNNING
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to FAILED

java.util.NoSuchElementException: next on empty iterator
at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
at scala.collection.LinearSeqLike$$anon$1.next(LinearSeqLike.scala:62)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
at org.apache.flink.streaming.connectors.kafka.api.simple.KafkaTopicUtils.getLeaderBrokerAddressForTopic(KafkaTopicUtils.java:83)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.open(KafkaSink.java:118)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:33)
at org.apache.flink.streaming.api.invokable.StreamInvokable.open(StreamInvokable.java:158)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.openOperator(StreamVertex.java:202)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.invoke(StreamVertex.java:165)
at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:209)
at java.lang.Thread.run(Thread.java:701)

03/25/2015 16:08:17 Job execution switched to status FAILING.
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to CANCELING
03/25/2015 16:08:17 Custom Source -> Stream Sink(1/1) switched to FAILED

java.util.NoSuchElementException: next on empty iterator
at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
at scala.collection.LinearSeqLike$$anon$1.next(LinearSeqLike.scala:62)
at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
at org.apache.flink.streaming.connectors.kafka.api.simple.KafkaTopicUtils.getLeaderBrokerAddressForTopic(KafkaTopicUtils.java:83)
at org.apache.flink.streaming.connectors.kafka.api.simple.PersistentKafkaSource.open(PersistentKafkaSource.java:159)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:33)
at org.apache.flink.streaming.api.invokable.StreamInvokable.open(StreamInvokable.java:158)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.openOperator(StreamVertex.java:198)
at org.apache.flink.streaming.api.streamvertex.StreamVertex.invoke(StreamVertex.java:165)
at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:209)
at java.lang.Thread.run(Thread.java:701)

03/25/2015 16:08:17 Job execution switched to status FAILED.

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.774 sec <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.KafkaITCase

test(org.apache.flink.streaming.connectors.kafka.KafkaITCase) Time elapsed: 10.604 sec <<< FAILURE!
java.lang.AssertionError: Test failed with: null
at org.junit.Assert.fail(Assert.java:88)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase.test(KafkaITCase.java:104)
{code}





[GitHub] flink pull request: Make Expression API available to Java, Rename ...

2015-03-25 Thread aljoscha
Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/503#issuecomment-86112294
  
I fixed @rmetzger's remarks. Still waiting for a solution to the naming 
issue.




[jira] [Closed] (FLINK-1780) Rename FlatCombineFunction to GroupCombineFunction

2015-03-25 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels closed FLINK-1780.
-
Resolution: Fixed

https://github.com/apache/flink/pull/530

Merged as of 033c69f9477c6352865e7e0da01296dd778ffe59

> Rename FlatCombineFunction to GroupCombineFunction
> --
>
> Key: FLINK-1780
> URL: https://issues.apache.org/jira/browse/FLINK-1780
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API, Scala API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Suneel Marthi
>Priority: Minor
>  Labels: starter
> Fix For: 0.9
>
>
> The recently added {{GroupCombineOperator}} requires a 
> {{FlatCombineFunction}}, however a {{GroupReduceOperator}} requires a 
> {{GroupReduceFunction}}. {{FlatCombineFunction}} and {{GroupReduceFunction}} 
> work on the same types of parameters (Iterable and Collector).
> Therefore, I propose to change the name {{FlatCombineFunction}} to 
> {{GroupCombineFunction}}. Since the function could not be independently used 
> until recently, this API breaking change should be OK.
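The argument that both functions "work on the same types of parameters" can be sketched with simplified signatures. These are illustrative only — the real Flink interfaces use Flink's `Collector<OUT>` rather than `java.util.function.Consumer`, and `RenameSketch`/`combineSum` are hypothetical names:

```java
import java.util.function.Consumer;

// Simplified sketches of the two interfaces: both consume an Iterable of
// grouped inputs and emit through a collector, hence the shared Group* naming.
interface GroupCombineFunction<IN, OUT> { void combine(Iterable<IN> values, Consumer<OUT> out); }
interface GroupReduceFunction<IN, OUT> { void reduce(Iterable<IN> values, Consumer<OUT> out); }

public class RenameSketch {
    // A partial-aggregation combiner: sums a group of ints and emits one result.
    public static int combineSum(Iterable<Integer> values) {
        GroupCombineFunction<Integer, Integer> sum = (vals, out) -> {
            int s = 0;
            for (int v : vals) s += v;
            out.accept(s);
        };
        int[] result = new int[1];
        sum.combine(values, r -> result[0] = r);  // capture the single emitted value
        return result[0];
    }
}
```

Swapping `combine` for `reduce` in the lambda would be the only change needed to implement the reduce-side interface, which is the symmetry the rename makes visible.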





[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/530#issuecomment-86103055
  
Thank you for the pull request!




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/530




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread smarthi
Github user smarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27135717
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/RegularPactTask.java ---
@@ -1465,7 +1467,9 @@ public static void cancelChainedTasks(List> tasks) {
for (int i = 0; i < tasks.size(); i++) {
try {
tasks.get(i).cancelTask();
-   } catch (Throwable t) {}
+   } catch (Throwable t) {
+   // do nothing
+   }
}
--- End diff --

Same as above :) :) 




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread smarthi
Github user smarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27135877
  
--- Diff: flink-java/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java ---
@@ -1425,11 +1425,9 @@ public static boolean isClassType(Type t) {
}
 
private static boolean sameTypeVars(Type t1, Type t2) {
-   if (!(t1 instanceof TypeVariable) || !(t2 instanceof TypeVariable)) {
-   return false;
-   }
-   return ((TypeVariable) t1).getName().equals(((TypeVariable)t2).getName())
-   && ((TypeVariable) t1).getGenericDeclaration().equals(((TypeVariable)t2).getGenericDeclaration());
+   return !(!(t1 instanceof TypeVariable) || !(t2 instanceof TypeVariable)) &&
+   ((TypeVariable) t1).getName().equals(((TypeVariable) t2).getName()) &&
+   ((TypeVariable) t1).getGenericDeclaration().equals(((TypeVariable) t2).getGenericDeclaration());
--- End diff --

Agreed, please feel free to revert this change.
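For the record, the two formulations are equivalent by De Morgan's law (`!(!a || !b)` is just `a && b`), so the disagreement is purely about readability. A quick exhaustive check over illustrative boolean flags (not the TypeExtractor code itself; names are hypothetical):

```java
public class DeMorganCheck {
    // The condensed form from the diff: !(!a || !b) && c.
    static boolean condensed(boolean a, boolean b, boolean c) {
        return !(!a || !b) && c;
    }

    // The original early-return form, generally considered easier to read.
    static boolean earlyReturn(boolean a, boolean b, boolean c) {
        if (!a || !b) {
            return false;
        }
        return c;
    }

    // Exhaustively compare both forms over all 8 input combinations.
    public static boolean allAgree() {
        for (boolean a : new boolean[]{false, true})
            for (boolean b : new boolean[]{false, true})
                for (boolean c : new boolean[]{false, true})
                    if (condensed(a, b, c) != earlyReturn(a, b, c)) return false;
        return true;
    }
}
```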




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread smarthi
Github user smarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27135686
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/RegularPactTask.java ---
@@ -525,7 +525,9 @@ protected void run() throws Exception {
try {
FunctionUtils.closeFunction(this.stub);
}
-   catch (Throwable t) {}
+   catch (Throwable t) {
+   // do nothing
+   }
}
--- End diff --

The Javadoc comment describes why we catch all and suppress 
exceptions/errors. :)




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27135343
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/AllGroupCombineDriver.java ---
@@ -35,20 +35,20 @@
 * Like @org.apache.flink.runtime.operators.GroupCombineDriver but without grouping and sorting. May emit partially
 * reduced results.
 *
-* @see org.apache.flink.api.common.functions.FlatCombineFunction
+* @see GroupCombineFunction
--- End diff --

You're right :)




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread smarthi
Github user smarthi commented on the pull request:

https://github.com/apache/flink/pull/530#issuecomment-86091921
  
@mxm please go ahead




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread smarthi
Github user smarthi commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27133810
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/AllGroupCombineDriver.java
 ---
@@ -35,20 +35,20 @@
 * Like @org.apache.flink.runtime.operators.GroupCombineDriver but without 
grouping and sorting. May emit partially
 * reduced results.
 *
-* @see org.apache.flink.api.common.functions.FlatCombineFunction
+* @see GroupCombineFunction
--- End diff --

Yes, since the path is in the imports, it's redundant to add it here.




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/530#issuecomment-86090016
  
The documentation also needs to be adapted to use the new interface name.

@smarthi If you don't mind, I will merge your commit with my remarks 
addressed.




[jira] [Updated] (FLINK-1729) Assess performance of classification algorithms

2015-03-25 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-1729:
-
Description: 
In order to validate Flink's classification algorithms (in terms of performance 
and accuracy), we should run them on publicly available classification data 
sets. This will not only serve as a proof for the correctness of the 
implementations but will also show how easy the machine learning library can be 
used.

Bottou [1] published some results for the RCV1 dataset using SVMs for 
classification. The SVMs are trained using stochastic gradient descent. Thus, 
they would be a good comparison for the CoCoA trained SVMs.

Some more benchmark results and publicly available data sets can be found here 
[2].

Resources:
[1] [http://leon.bottou.org/projects/sgd]
[2] [https://github.com/BIDData/BIDMach/wiki/Benchmarks]

  was:
In order to validate Flink's classification algorithms (in terms of performance 
and accuracy), we should run them on publicly available classification data 
sets. This will not only serve as a proof for the correctness of the 
implementations but will also show how easy the machine learning library can be 
used.

Bottou [1] published some results for the RCV1 dataset using SVMs for 
classification. The SVMs are trained using stochastic gradient descent. Thus, 
they would be a good comparison for the CoCoA trained SVMs.

Resources:
[1] [http://leon.bottou.org/projects/sgd]


> Assess performance of classification algorithms
> ---
>
> Key: FLINK-1729
> URL: https://issues.apache.org/jira/browse/FLINK-1729
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Till Rohrmann
>  Labels: ML
>
> In order to validate Flink's classification algorithms (in terms of 
> performance and accuracy), we should run them on publicly available 
> classification data sets. This will not only serve as a proof for the 
> correctness of the implementations but will also show how easy the machine 
> learning library can be used.
> Bottou [1] published some results for the RCV1 dataset using SVMs for 
> classification. The SVMs are trained using stochastic gradient descent. Thus, 
> they would be a good comparison for the CoCoA trained SVMs.
> Some more benchmark results and publicly available data sets can be found here 
> [2].
> Resources:
> [1] [http://leon.bottou.org/projects/sgd]
> [2] [https://github.com/BIDData/BIDMach/wiki/Benchmarks]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380037#comment-14380037
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86079595
  
@chiwanpark excellent job, thanks!
Will merge it after a final round of Travis tests passed.




> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.
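What "reading into POJOs" would mean can be illustrated with a small reflection-based sketch. This is not Flink's actual `CsvReader` API; the `PojoCsv` helper and the `Person` class are made up for the example:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: maps each CSV column to a same-named public field
// of the target POJO, which is the behavior the issue asks CsvReader
// to support (only String and int fields are handled in this sketch).
public final class PojoCsv {

    // Hypothetical POJO used in the usage example below.
    public static final class Person {
        public String name;
        public int age;
    }

    public static <T> List<T> parse(List<String> lines, String[] fields, Class<T> type) {
        List<T> result = new ArrayList<>();
        try {
            for (String line : lines) {
                String[] cols = line.split(",");
                T pojo = type.getDeclaredConstructor().newInstance();
                for (int i = 0; i < fields.length; i++) {
                    Field f = type.getField(fields[i]);
                    if (f.getType() == int.class) {
                        f.setInt(pojo, Integer.parseInt(cols[i].trim()));
                    } else {
                        f.set(pojo, cols[i].trim());
                    }
                }
                result.add(pojo);
            }
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("cannot map CSV line to " + type, e);
        }
        return result;
    }
}
```

Usage: `PojoCsv.parse(List.of("Alice,30"), new String[]{"name", "age"}, PojoCsv.Person.class)` yields a list of populated `Person` objects, analogous to what a POJO-aware `CsvReader` would produce instead of tuples.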





[jira] [Commented] (FLINK-1694) Change the split between create/run of a vertex-centric iteration

2015-03-25 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14380041#comment-14380041
 ] 

Fabian Hueske commented on FLINK-1694:
--

+1 for the second option with the configuration object.

> Change the split between create/run of a vertex-centric iteration
> -
>
> Key: FLINK-1694
> URL: https://issues.apache.org/jira/browse/FLINK-1694
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Reporter: Vasia Kalavri
>
> Currently, the vertex-centric API in Gelly looks like this:
> {code:java}
> Graph inputGraph = ... // create graph
> VertexCentricIteration iteration = inputGraph.createVertexCentricIteration();
> ... // configure the iteration
> Graph newGraph = inputGraph.runVertexCentricIteration(iteration);
> {code}
> We have this create/run split, in order to expose the iteration object and be 
> able to call the public methods of VertexCentricIteration.
> However, this is not very nice and might lead to errors if create and run 
> are mistakenly called on different graph objects.
> One suggestion is to change this to the following:
> {code:java}
> VertexCentricIteration iteration = inputGraph.createVertexCentricIteration();
> ... // configure the iteration
> Graph newGraph = iteration.result();
> {code}
> or to go with a single run call, where we add an IterationConfiguration 
> object as a parameter and we don't expose the iteration object to the user at 
> all:
> {code:java}
> IterationConfiguration parameters  = ...
> Graph newGraph = inputGraph.runVertexCentricIteration(parameters);
> {code}
> and we can also have a simplified method where no configuration is passed.
> What do you think?
> Personally, I like the second option a bit more.
> -Vasia.
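The configuration-object option favored above can be realized with a small fluent parameter class. The names below (`IterationConfiguration`, `setMaxIterations`, `setName`) are illustrative, not necessarily Gelly's final API:

```java
// Sketch of a configuration object for the "single run call" option:
// the user never touches the iteration object itself, only a bag of
// parameters passed to runVertexCentricIteration(...).
public final class IterationConfiguration {

    private int maxIterations = 10;
    private String name = "vertex-centric iteration";

    public IterationConfiguration setMaxIterations(int maxIterations) {
        this.maxIterations = maxIterations;
        return this;   // fluent setters keep configuration in one expression
    }

    public IterationConfiguration setName(String name) {
        this.name = name;
        return this;
    }

    public int getMaxIterations() {
        return maxIterations;
    }

    public String getName() {
        return name;
    }
}
```

Usage would read `inputGraph.runVertexCentricIteration(new IterationConfiguration().setMaxIterations(30))`, with an overload taking no configuration for the default case.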





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86079595
  
@chiwanpark excellent job, thanks!
Will merge it after a final round of Travis tests passed.






[GitHub] flink pull request: Make Expression API available to Java, Rename ...

2015-03-25 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/503#discussion_r27127405
  
--- Diff: 
flink-staging/flink-table/src/test/java/org/apache/flink/api/java/table/test/AggregationsITCase.java
 ---
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.api.java.table.test;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import org.apache.flink.api.table.ExpressionException;
+import org.apache.flink.api.table.Table;
+import org.apache.flink.api.table.Row;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.table.TableUtil;
+import org.apache.flink.api.java.operators.DataSource;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.tuple.Tuple7;
+import org.apache.flink.api.java.table.JavaBatchTranslator;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.test.javaApiOperators.util.CollectionDataSets;
+import org.apache.flink.test.util.MultipleProgramsTestBase;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+@RunWith(Parameterized.class)
+public class AggregationsITCase extends MultipleProgramsTestBase {
+
+
+   public AggregationsITCase(TestExecutionMode mode){
+   super(mode);
+   }
+
+   private String resultPath;
+   private String expected = "";
+
+   @Rule
+   public TemporaryFolder tempFolder = new TemporaryFolder();
+
+   @Before
+   public void before() throws Exception{
+   resultPath = tempFolder.newFile().toURI().toString();
+   }
+
+   @After
+   public void after() throws Exception{
+   compareResultsByLinesInMemory(expected, resultPath);
+   }
+
+   @Test
+   public void testAggregationTypes() throws Exception {
+   ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
+
+   Table table =
+   
TableUtil.from(CollectionDataSets.get3TupleDataSet(env));
+
+   Table result =
+   table.select("f0.sum, f0.min, f0.max, f0.count, 
f0.avg");
+
+   DataSet ds = TableUtil.toSet(result, Row.class);
+   ds.writeAsText(resultPath, FileSystem.WriteMode.OVERWRITE);
+
+   env.execute();
+
+   expected = "231,1,21,21,11";
+   }
+
+   @Test(expected = ExpressionException.class)
+   public void testAggregationOnNonExistingField() throws Exception {
+   ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
+
+   Table table =
+   
TableUtil.from(CollectionDataSets.get3TupleDataSet(env));
+
+   Table result =
+   table.select("'foo.avg");
--- End diff --

No, it is not optional. And it is not allowed there so I'll fix it.


-

[jira] [Commented] (FLINK-1650) Suppress Akka's Netty Shutdown Errors through the log config

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379926#comment-14379926
 ] 

ASF GitHub Bot commented on FLINK-1650:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/518#issuecomment-86050697
  
LGTM.


> Suppress Akka's Netty Shutdown Errors through the log config
> 
>
> Key: FLINK-1650
> URL: https://issues.apache.org/jira/browse/FLINK-1650
> Project: Flink
>  Issue Type: Bug
>  Components: other
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 0.9
>
>
> I suggest setting the logging for 
> `org.jboss.netty.channel.DefaultChannelPipeline` to ERROR, in order to get 
> rid of the misleading stack trace caused by an Akka/Netty hiccup on shutdown.
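In log4j.properties syntax, the suggested suppression would look roughly like this (assuming the deployment actually uses log4j; other logging backends need the equivalent logger entry):

```properties
# Raise the threshold for Akka's Netty pipeline so the shutdown-time
# stack trace is suppressed; all other loggers are unaffected.
log4j.logger.org.jboss.netty.channel.DefaultChannelPipeline=ERROR
```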





[GitHub] flink pull request: [FLINK-1650] Configure Netty (akka) to use Slf...

2015-03-25 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/518#issuecomment-86050697
  
LGTM.




[GitHub] flink pull request: [FLINK-1766]Fix the bug of equals function of ...

2015-03-25 Thread matadorhong
Github user matadorhong commented on the pull request:

https://github.com/apache/flink/pull/511#issuecomment-86050415
  
@StephanEwen I have fixed the code style issue and changed the PR to an 
"Improvement" one in JIRA.
Finally, many thanks for your help, and I will take more time to 
contribute to Flink.




[jira] [Commented] (FLINK-1766) Fix the bug of equals function of FSKey

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379924#comment-14379924
 ] 

ASF GitHub Bot commented on FLINK-1766:
---

Github user matadorhong commented on the pull request:

https://github.com/apache/flink/pull/511#issuecomment-86050415
  
@StephanEwen I have fixed the code style issue and changed the PR to an 
"Improvement" one in JIRA.
Finally, many thanks for your help, and I will take more time to 
contribute to Flink.


> Fix the bug of equals function of FSKey
> ---
>
> Key: FLINK-1766
> URL: https://issues.apache.org/jira/browse/FLINK-1766
> Project: Flink
>  Issue Type: Improvement
>Reporter: Sibao Hong
>Assignee: Sibao Hong
>Priority: Minor
>
> The equals function in org.apache.flink.core.fs.FileSystem.FSKey should first 
> check whether obj == this; if obj is the same object, it should 
> return true.
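The fix described above is the standard reflexivity fast path of `equals()`. A minimal self-contained sketch follows; the `scheme` and `authority` fields are assumed here for illustration and the real `FSKey` may differ:

```java
import java.util.Objects;

// Sketch of an equals() with the reference-equality fast path: comparing
// an object to itself returns true in O(1), before any field comparison.
public final class FSKey {

    private final String scheme;
    private final String authority;

    public FSKey(String scheme, String authority) {
        this.scheme = scheme;
        this.authority = authority;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) {
            return true;   // fast path: same object, no field comparison needed
        }
        if (!(obj instanceof FSKey)) {
            return false;  // also covers obj == null
        }
        FSKey other = (FSKey) obj;
        return Objects.equals(scheme, other.scheme)
                && Objects.equals(authority, other.authority);
    }

    @Override
    public int hashCode() {
        return Objects.hash(scheme, authority);
    }
}
```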





[jira] [Updated] (FLINK-1766) Fix the bug of equals function of FSKey

2015-03-25 Thread Sibao Hong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sibao Hong updated FLINK-1766:
--
Issue Type: Improvement  (was: Bug)

> Fix the bug of equals function of FSKey
> ---
>
> Key: FLINK-1766
> URL: https://issues.apache.org/jira/browse/FLINK-1766
> Project: Flink
>  Issue Type: Improvement
>Reporter: Sibao Hong
>Assignee: Sibao Hong
>Priority: Minor
>
> The equals function in org.apache.flink.core.fs.FileSystem.FSKey should first 
> check whether obj == this; if obj is the same object, it should 
> return true.





[GitHub] flink pull request: [FLINK-1769] Fix deploy bug caused by ScalaDoc...

2015-03-25 Thread aljoscha
GitHub user aljoscha opened a pull request:

https://github.com/apache/flink/pull/535

[FLINK-1769] Fix deploy bug caused by ScalaDoc aggregation

I added a new profile that should be used when building the combined 
JavaDoc. I will add a jira issue for changing this in the nightly javadoc 
deployment.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aljoscha/flink fix-deploy

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/535.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #535


commit 3f805ac3f22bb8060ddf8f7cfe90e4f95dc7d7a9
Author: Aljoscha Krettek 
Date:   2015-03-25T13:50:40Z

[FLINK-1769] Fix deploy bug caused by ScalaDoc aggregation






[jira] [Commented] (FLINK-1769) Maven deploy is broken (build artifacts are cleaned in docs-and-sources profile)

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379886#comment-14379886
 ] 

ASF GitHub Bot commented on FLINK-1769:
---

GitHub user aljoscha opened a pull request:

https://github.com/apache/flink/pull/535

[FLINK-1769] Fix deploy bug caused by ScalaDoc aggregation

I added a new profile that should be used when building the combined 
JavaDoc. I will add a jira issue for changing this in the nightly javadoc 
deployment.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aljoscha/flink fix-deploy

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/535.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #535


commit 3f805ac3f22bb8060ddf8f7cfe90e4f95dc7d7a9
Author: Aljoscha Krettek 
Date:   2015-03-25T13:50:40Z

[FLINK-1769] Fix deploy bug caused by ScalaDoc aggregation




> Maven deploy is broken (build artifacts are cleaned in docs-and-sources 
> profile)
> 
>
> Key: FLINK-1769
> URL: https://issues.apache.org/jira/browse/FLINK-1769
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Aljoscha Krettek
>Priority: Critical
>
> The issue has been introduced by FLINK-1720.
> This change broke the deployment to maven snapshots / central.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:2.5.1:install (default-install) 
> on project flink-shaded-include-yarn: Failed to install artifact 
> org.apache.flink:flink-shaded-include-yarn:pom:0.9-SNAPSHOT: 
> /home/robert/incubator-flink/flink-shaded-hadoop/flink-shaded-include-yarn/target/dependency-reduced-pom.xml
>  (No such file or directory) -> [Help 1]
> {code}
> The issue is that Maven is now executing {{clean}} after {{shade}}, and then 
> {{install}} cannot install the result of {{shade}} anymore (because it has 
> been deleted).





[jira] [Assigned] (FLINK-1781) Quickstarts broken due to Scala Version Variables

2015-03-25 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-1781:
-

Assignee: Robert Metzger

> Quickstarts broken due to Scala Version Variables
> -
>
> Key: FLINK-1781
> URL: https://issues.apache.org/jira/browse/FLINK-1781
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Robert Metzger
>Priority: Blocker
> Fix For: 0.9
>
>
> The quickstart archetype resources refer to the scala version variables.
> When creating a maven project standalone, these variables are not defined, 
> and the pom is invalid.
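The broken state can be pictured with a pom fragment: the archetype-generated pom references properties like the ones below, which are defined only in Flink's parent pom, so a standalone project cannot resolve them (the property names and values here are illustrative):

```xml
<!-- Properties the quickstart pom refers to; unless they are copied
     into the standalone project, references such as ${scala.version}
     stay unresolved and the build fails. -->
<properties>
  <scala.version>2.10.4</scala.version>
  <scala.binary.version>2.10</scala.binary.version>
</properties>
```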





[jira] [Commented] (FLINK-1781) Quickstarts broken due to Scala Version Variables

2015-03-25 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379866#comment-14379866
 ] 

Robert Metzger commented on FLINK-1781:
---

I'll look into this.

> Quickstarts broken due to Scala Version Variables
> -
>
> Key: FLINK-1781
> URL: https://issues.apache.org/jira/browse/FLINK-1781
> Project: Flink
>  Issue Type: Bug
>  Components: Quickstarts
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Priority: Blocker
> Fix For: 0.9
>
>
> The quickstart archetype resources refer to the scala version variables.
> When creating a maven project standalone, these variables are not defined, 
> and the pom is invalid.





[jira] [Commented] (FLINK-1783) Quickstart shading should not created shaded jar and dependency reduced pom

2015-03-25 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379859#comment-14379859
 ] 

Robert Metzger commented on FLINK-1783:
---

To my understanding, the new quickstarts are not relocating any classes; they only 
create a fat jar (and filter out some unnecessary files).
You are right, there is no need to create a dependency-reduced pom. But that 
will probably not affect many users, because the dependency-reduced pom is only 
used when installing (= depending on) or deploying the project created by the 
quickstart.

> Quickstart shading should not created shaded jar and dependency reduced pom
> ---
>
> Key: FLINK-1783
> URL: https://issues.apache.org/jira/browse/FLINK-1783
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Affects Versions: 0.9
>Reporter: Stephan Ewen
> Fix For: 0.9
>
>






[jira] [Commented] (FLINK-1782) Change Quickstart Java version to 1.7

2015-03-25 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379865#comment-14379865
 ] 

Robert Metzger commented on FLINK-1782:
---

I agree. Users should explicitly downgrade to 1.6 if they want to use an 
insecure, unmaintained Java version.

> Change Quickstart Java version to 1.7
> -
>
> Key: FLINK-1782
> URL: https://issues.apache.org/jira/browse/FLINK-1782
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Affects Versions: 0.9
>Reporter: Stephan Ewen
> Fix For: 0.9
>
>
> The quickstarts refer to the outdated Java 1.6 source and bin version. We 
> should upgrade this to 1.7.





[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379815#comment-14379815
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011782
  
@fhueske You can check it now :)


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread chiwanpark
Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011782
  
@fhueske You can check it now :)




[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379813#comment-14379813
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011531
  
Sure, no problem :-) 
Can I check it now or do you need a bit more time?


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011531
  
Sure, no problem :-) 
Can I check it now or do you need a bit more time?




[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379809#comment-14379809
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011099
  
Oops, I pushed an intermediate commit a8a5c37. I will fix it.


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread chiwanpark
Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86011099
  
Oops, I pushed an intermediate commit a8a5c37. I will fix it.




[jira] [Commented] (FLINK-1512) Add CsvReader for reading into POJOs.

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379805#comment-14379805
 ] 

ASF GitHub Bot commented on FLINK-1512:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86010852
  
@chiwanpark Thanks, the code looks good to me!
Will try and merge it if everything works.


> Add CsvReader for reading into POJOs.
> -
>
> Key: FLINK-1512
> URL: https://issues.apache.org/jira/browse/FLINK-1512
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Robert Metzger
>Assignee: Chiwan Park
>Priority: Minor
>  Labels: starter
>
> Currently, the {{CsvReader}} supports only TupleXX types. 
> It would be nice if users were also able to read into POJOs.





[GitHub] flink pull request: [FLINK-1512] Add CsvReader for reading into PO...

2015-03-25 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/426#issuecomment-86010852
  
@chiwanpark Thanks, the code looks good to me!
Will try and merge it if everything works.




[GitHub] flink pull request: [FLINK-1726][gelly] Added Community Detection ...

2015-03-25 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/505#issuecomment-86007328
  
Thanks a lot for the updates @andralungu and for testing this on a cluster 
environment :-) 
The example code looks much nicer now ^-^. 
The only minor issue is that the file input format description should be in the 
example, not the library method, but I can take care of this change before 
merging.
Once Mr. Travis turns green, I'll merge this.




[jira] [Assigned] (FLINK-1769) Maven deploy is broken (build artifacts are cleaned in docs-and-sources profile)

2015-03-25 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-1769:
---

Assignee: Aljoscha Krettek

> Maven deploy is broken (build artifacts are cleaned in docs-and-sources 
> profile)
> 
>
> Key: FLINK-1769
> URL: https://issues.apache.org/jira/browse/FLINK-1769
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Robert Metzger
>Assignee: Aljoscha Krettek
>Priority: Critical
>
> The issue has been introduced by FLINK-1720.
> This change broke the deployment to maven snapshots / central.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:2.5.1:install (default-install) 
> on project flink-shaded-include-yarn: Failed to install artifact 
> org.apache.flink:flink-shaded-include-yarn:pom:0.9-SNAPSHOT: 
> /home/robert/incubator-flink/flink-shaded-hadoop/flink-shaded-include-yarn/target/dependency-reduced-pom.xml
>  (No such file or directory) -> [Help 1]
> {code}
> The issue is that Maven now executes {{clean}} after {{shade}}, so {{install}} 
> can no longer find the output of {{shade}} (because it has been deleted).





[jira] [Commented] (FLINK-1726) Add Community Detection Library and Example

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379794#comment-14379794
 ] 

ASF GitHub Bot commented on FLINK-1726:
---

Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/505#issuecomment-86007328
  
Thanks a lot for the updates @andralungu and for testing this on a cluster 
environment :-) 
The example code looks much nicer now ^-^. 
The only minor issue is that the file input format description should be in the 
example, not the library method, but I can take care of this change before 
merging.
Once Mr. Travis turns green, I'll merge this.


> Add Community Detection Library and Example
> ---
>
> Key: FLINK-1726
> URL: https://issues.apache.org/jira/browse/FLINK-1726
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.9
>Reporter: Andra Lungu
>Assignee: Andra Lungu
>
> Community detection paper: http://arxiv.org/pdf/0808.2633.pdf





[jira] [Commented] (FLINK-1756) Rename Stream Monitoring to Stream Checkpointing

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379792#comment-14379792
 ] 

ASF GitHub Bot commented on FLINK-1756:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/531


> Rename Stream Monitoring to Stream Checkpointing
> 
>
> Key: FLINK-1756
> URL: https://issues.apache.org/jira/browse/FLINK-1756
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Márton Balassi
> Fix For: 0.9
>
>
> Currently, to enable streaming checkpointing, you have to turn "monitoring" 
> on. I vote to call it "checkpointing", because that describes it better and is 
> more intuitive.





[GitHub] flink pull request: [FLINK-1756] [streaming] Rename Stream Monitor...

2015-03-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/531




[GitHub] flink pull request: [FLINK-1775] BarrierBuffer fix to avoid end of...

2015-03-25 Thread gyfora
Github user gyfora commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27115550
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java
 ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + 
StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " 
processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

but I am trying to sort all these out with ufuk




[jira] [Commented] (FLINK-1775) BarrierBuffers don't correctly handle end of stream events

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379787#comment-14379787
 ] 

ASF GitHub Bot commented on FLINK-1775:
---

Github user gyfora commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27115550
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java
 ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + 
StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " 
processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

but I am trying to sort all these out with ufuk


> BarrierBuffers don't correctly handle end of stream events
> --
>
> Key: FLINK-1775
> URL: https://issues.apache.org/jira/browse/FLINK-1775
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> The current implementation causes deadlocks when the end of stream event 
> comes from a currently blocked channel.
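The deadlock pattern can be sketched outside Flink: barrier alignment waits for a checkpoint barrier from every input channel, so a channel that ends its stream while others are blocked must be counted as aligned rather than waited on forever. This is a minimal illustration, not Flink's `BarrierBuffer`; the class and method names are invented.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of barrier alignment across input channels. The fix for the
// deadlock: an end-of-stream channel counts toward alignment instead of the
// buffer waiting forever for a barrier that will never arrive.
public class BarrierAlignment {
    private final int numChannels;
    private final Set<Integer> barrierSeen = new HashSet<>();
    private final Set<Integer> closed = new HashSet<>();

    public BarrierAlignment(int numChannels) {
        this.numChannels = numChannels;
    }

    /** A barrier arrived on a channel; returns true once alignment completes. */
    public boolean onBarrier(int channel) {
        barrierSeen.add(channel);
        return isAligned();
    }

    /** End-of-stream on a channel: treat the closed channel as aligned. */
    public boolean onEndOfStream(int channel) {
        closed.add(channel);
        return isAligned();
    }

    private boolean isAligned() {
        Set<Integer> done = new HashSet<>(barrierSeen);
        done.addAll(closed);
        return done.size() == numChannels;
    }
}
```

Without the `onEndOfStream` path, a two-channel buffer that saw a barrier only on channel 0 would block forever once channel 1 finished.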





[jira] [Commented] (FLINK-1775) BarrierBuffers don't correctly handle end of stream events

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379782#comment-14379782
 ] 

ASF GitHub Bot commented on FLINK-1775:
---

Github user gyfora commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27115257
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java
 ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + 
StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " 
processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

I had some weird exceptions after the streams finished, and in any case 
this shouldn't affect the recovery logic


> BarrierBuffers don't correctly handle end of stream events
> --
>
> Key: FLINK-1775
> URL: https://issues.apache.org/jira/browse/FLINK-1775
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> The current implementation causes deadlocks when the end of stream event 
> comes from a currently blocked channel.





[GitHub] flink pull request: [FLINK-1775] BarrierBuffer fix to avoid end of...

2015-03-25 Thread gyfora
Github user gyfora commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27115257
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java
 ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + 
StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " 
processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

I had some weird exceptions after the streams finished, and in any case 
this shouldn't affect the recovery logic




[jira] [Issue Comment Deleted] (FLINK-1319) Add static code analysis for UDFs

2015-03-25 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-1319:

Comment: was deleted

(was: Is there a reason why Semantic Properties are not available in the Scala 
API? Does it make sense to also move it to core / org.apache.flink.api.common?)

> Add static code analysis for UDFs
> -
>
> Key: FLINK-1319
> URL: https://issues.apache.org/jira/browse/FLINK-1319
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Stephan Ewen
>Assignee: Timo Walther
>Priority: Minor
>
> Flink's Optimizer takes information that tells it for UDFs which fields of 
> the input elements are accessed, modified, or forwarded/copied. This 
> information frequently helps to reuse partitionings, sorts, etc. It may speed 
> up programs significantly, as it can frequently eliminate sorts and shuffles, 
> which are costly.
> Right now, users can add lightweight annotations to UDFs to provide this 
> information (such as adding {{@ConstantFields("0->3, 1, 2->1")}}).
> We worked with static code analysis of UDFs before, to determine this 
> information automatically. This is an incredible feature, as it "magically" 
> makes programs faster.
> For record-at-a-time operations (Map, Reduce, FlatMap, Join, Cross), this 
> works surprisingly well in many cases. We used the "Soot" toolkit for the 
> static code analysis. Unfortunately, Soot is LGPL licensed and thus we did 
> not include any of the code so far.
> I propose to add this functionality to Flink, in the form of a drop-in 
> addition, to work around the LGPL incompatibility with ASL 2.0. Users could 
> simply download a special "flink-code-analysis.jar" and drop it into the 
> "lib" folder to enable this functionality. We may even add a script to 
> "tools" that downloads that library automatically into the lib folder. This 
> should be legally fine, since we do not redistribute LGPL code and only 
> dynamically link it (the incompatibility with ASL 2.0 is mainly in the 
> patentability, if I remember correctly).
> Prior work on this has been done by [~aljoscha] and [~skunert], which could 
> provide a code base to start with.
> *Appendix*
> Homepage of the Soot static analysis toolkit: http://www.sable.mcgill.ca/soot/
> Papers on static analysis and for optimization: 
> http://stratosphere.eu/assets/papers/EnablingOperatorReorderingSCA_12.pdf and 
> http://stratosphere.eu/assets/papers/openingTheBlackBoxes_12.pdf
> Quick introduction to the Optimizer: 
> http://stratosphere.eu/assets/papers/2014-VLDBJ_Stratosphere_Overview.pdf 
> (Section 6)
> Optimizer for Iterations: 
> http://stratosphere.eu/assets/papers/spinningFastIterativeDataFlows_12.pdf 
> (Sections 4.3 and 5.3)
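The annotation string format quoted above ("0->3, 1, 2->1") maps source field positions to target positions, with a bare index meaning the field stays in place. A hedged sketch of such a parser follows; it is illustrative only, and `ForwardedFields.parse` is an invented name, not Flink's actual semantic-properties parser.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parse a forwarded-fields spec like "0->3, 1, 2->1" into a
// source-position -> target-position map ("1" alone means 1 -> 1).
public class ForwardedFields {
    public static Map<Integer, Integer> parse(String spec) {
        Map<Integer, Integer> mapping = new HashMap<>();
        for (String part : spec.split(",")) {
            String p = part.trim();
            if (p.contains("->")) {
                String[] sides = p.split("->");
                mapping.put(Integer.parseInt(sides[0].trim()),
                            Integer.parseInt(sides[1].trim()));
            } else {
                int pos = Integer.parseInt(p);
                mapping.put(pos, pos); // field kept at the same position
            }
        }
        return mapping;
    }
}
```

A static code analyzer would derive exactly this kind of mapping from UDF bytecode instead of requiring the user to write the annotation by hand.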





[jira] [Commented] (FLINK-1319) Add static code analysis for UDFs

2015-03-25 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379764#comment-14379764
 ] 

Timo Walther commented on FLINK-1319:
-

Is there a reason why Semantic Properties are not available in the Scala API? 
Does it make sense to also move it to core / org.apache.flink.api.common?

> Add static code analysis for UDFs
> -
>
> Key: FLINK-1319
> URL: https://issues.apache.org/jira/browse/FLINK-1319
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Stephan Ewen
>Assignee: Timo Walther
>Priority: Minor
>
> Flink's Optimizer takes information that tells it for UDFs which fields of 
> the input elements are accessed, modified, or forwarded/copied. This 
> information frequently helps to reuse partitionings, sorts, etc. It may speed 
> up programs significantly, as it can frequently eliminate sorts and shuffles, 
> which are costly.
> Right now, users can add lightweight annotations to UDFs to provide this 
> information (such as adding {{@ConstantFields("0->3, 1, 2->1")}}).
> We worked with static code analysis of UDFs before, to determine this 
> information automatically. This is an incredible feature, as it "magically" 
> makes programs faster.
> For record-at-a-time operations (Map, Reduce, FlatMap, Join, Cross), this 
> works surprisingly well in many cases. We used the "Soot" toolkit for the 
> static code analysis. Unfortunately, Soot is LGPL licensed and thus we did 
> not include any of the code so far.
> I propose to add this functionality to Flink, in the form of a drop-in 
> addition, to work around the LGPL incompatibility with ASL 2.0. Users could 
> simply download a special "flink-code-analysis.jar" and drop it into the 
> "lib" folder to enable this functionality. We may even add a script to 
> "tools" that downloads that library automatically into the lib folder. This 
> should be legally fine, since we do not redistribute LGPL code and only 
> dynamically link it (the incompatibility with ASL 2.0 is mainly in the 
> patentability, if I remember correctly).
> Prior work on this has been done by [~aljoscha] and [~skunert], which could 
> provide a code base to start with.
> *Appendix*
> Homepage of the Soot static analysis toolkit: http://www.sable.mcgill.ca/soot/
> Papers on static analysis and for optimization: 
> http://stratosphere.eu/assets/papers/EnablingOperatorReorderingSCA_12.pdf and 
> http://stratosphere.eu/assets/papers/openingTheBlackBoxes_12.pdf
> Quick introduction to the Optimizer: 
> http://stratosphere.eu/assets/papers/2014-VLDBJ_Stratosphere_Overview.pdf 
> (Section 6)
> Optimizer for Iterations: 
> http://stratosphere.eu/assets/papers/spinningFastIterativeDataFlows_12.pdf 
> (Sections 4.3 and 5.3)





[jira] [Commented] (FLINK-1694) Change the split between create/run of a vertex-centric iteration

2015-03-25 Thread Vasia Kalavri (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379694#comment-14379694
 ] 

Vasia Kalavri commented on FLINK-1694:
--

Any more opinions on this?
I know it's a trivial change, but I'd like to change it once and for all and 
not make a mistake.
So, I would very much appreciate your input :-) Thanks!

> Change the split between create/run of a vertex-centric iteration
> -
>
> Key: FLINK-1694
> URL: https://issues.apache.org/jira/browse/FLINK-1694
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Reporter: Vasia Kalavri
>
> Currently, the vertex-centric API in Gelly looks like this:
> {code:java}
> Graph inputGraph = ... // create graph
> VertexCentricIteration iteration = inputGraph.createVertexCentricIteration();
> ... // configure the iteration
> Graph newGraph = inputGraph.runVertexCentricIteration(iteration);
> {code}
> We have this create/run split, in order to expose the iteration object and be 
> able to call the public methods of VertexCentricIteration.
> However, this is not very nice and might lead to errors, if create and run 
> are  mistakenly called on different graph objects.
> One suggestion is to change this to the following:
> {code:java}
> VertexCentricIteration iteration = inputGraph.createVertexCentricIteration();
> ... // configure the iteration
> Graph newGraph = iteration.result();
> {code}
> or to go with a single run call, where we add an IterationConfiguration 
> object as a parameter and we don't expose the iteration object to the user at 
> all:
> {code:java}
> IterationConfiguration parameters  = ...
> Graph newGraph = inputGraph.runVertexCentricIteration(parameters);
> {code}
> and we can also have a simplified method where no configuration is passed.
> What do you think?
> Personally, I like the second option a bit more.
> -Vasia.





[jira] [Commented] (FLINK-1319) Add static code analysis for UDFs

2015-03-25 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379653#comment-14379653
 ] 

Timo Walther commented on FLINK-1319:
-

Thanks for your comments!

The analyzer also supports group-at-a-time functions.

Supported:
CoGroupFunction
CrossFunction
FlatCombineFunction
FlatJoinFunction
FlatMapFunction
GroupReduceFunction
JoinFunction
MapFunction
MapPartitionFunction
ReduceFunction

Not supported yet:
FilterFunction
CombineFunction

I will implement Ufuk's summary.

> Add static code analysis for UDFs
> -
>
> Key: FLINK-1319
> URL: https://issues.apache.org/jira/browse/FLINK-1319
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API, Scala API
>Reporter: Stephan Ewen
>Assignee: Timo Walther
>Priority: Minor
>
> Flink's Optimizer takes information that tells it for UDFs which fields of 
> the input elements are accessed, modified, or forwarded/copied. This 
> information frequently helps to reuse partitionings, sorts, etc. It may speed 
> up programs significantly, as it can frequently eliminate sorts and shuffles, 
> which are costly.
> Right now, users can add lightweight annotations to UDFs to provide this 
> information (such as adding {{@ConstantFields("0->3, 1, 2->1")}}).
> We worked with static code analysis of UDFs before, to determine this 
> information automatically. This is an incredible feature, as it "magically" 
> makes programs faster.
> For record-at-a-time operations (Map, Reduce, FlatMap, Join, Cross), this 
> works surprisingly well in many cases. We used the "Soot" toolkit for the 
> static code analysis. Unfortunately, Soot is LGPL licensed and thus we did 
> not include any of the code so far.
> I propose to add this functionality to Flink, in the form of a drop-in 
> addition, to work around the LGPL incompatibility with ASL 2.0. Users could 
> simply download a special "flink-code-analysis.jar" and drop it into the 
> "lib" folder to enable this functionality. We may even add a script to 
> "tools" that downloads that library automatically into the lib folder. This 
> should be legally fine, since we do not redistribute LGPL code and only 
> dynamically link it (the incompatibility with ASL 2.0 is mainly in the 
> patentability, if I remember correctly).
> Prior work on this has been done by [~aljoscha] and [~skunert], which could 
> provide a code base to start with.
> *Appendix*
> Homepage of the Soot static analysis toolkit: http://www.sable.mcgill.ca/soot/
> Papers on static analysis and for optimization: 
> http://stratosphere.eu/assets/papers/EnablingOperatorReorderingSCA_12.pdf and 
> http://stratosphere.eu/assets/papers/openingTheBlackBoxes_12.pdf
> Quick introduction to the Optimizer: 
> http://stratosphere.eu/assets/papers/2014-VLDBJ_Stratosphere_Overview.pdf 
> (Section 6)
> Optimizer for Iterations: 
> http://stratosphere.eu/assets/papers/spinningFastIterativeDataFlows_12.pdf 
> (Sections 4.3 and 5.3)





[GitHub] flink pull request: Mesos integration of Apache Flink

2015-03-25 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/251#issuecomment-85970300
  
There seems to be a happy Flink user deploying Flink on Yarn on Myriad on 
Mesos ;)
https://twitter.com/crypt_tech/status/580605081625726976




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/530#issuecomment-85954079
  
+1 This rename makes a lot of sense. Looks good to me apart from the minor 
comments.




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27104945
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/RegularPactTask.java
 ---
@@ -1465,7 +1467,9 @@ public static void 
cancelChainedTasks(List> tasks) {
for (int i = 0; i < tasks.size(); i++) {
try {
tasks.get(i).cancelTask();
-   } catch (Throwable t) {}
+   } catch (Throwable t) {
+   // do nothing
+   }
}
--- End diff --

Same as above.




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27104851
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/RegularPactTask.java
 ---
@@ -525,7 +525,9 @@ protected void run() throws Exception {
try {
FunctionUtils.closeFunction(this.stub);
}
-   catch (Throwable t) {}
+   catch (Throwable t) {
+   // do nothing
+   }
}
--- End diff --

A comment could be useful here to explain why we do a catch-all.




[jira] [Commented] (FLINK-1756) Rename Stream Monitoring to Stream Checkpointing

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379549#comment-14379549
 ] 

ASF GitHub Bot commented on FLINK-1756:
---

Github user mbalassi commented on the pull request:

https://github.com/apache/flink/pull/531#issuecomment-85951702
  
Thanks for spotting it, it was my bad. LGTM, merging.


> Rename Stream Monitoring to Stream Checkpointing
> 
>
> Key: FLINK-1756
> URL: https://issues.apache.org/jira/browse/FLINK-1756
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Stephan Ewen
>Assignee: Márton Balassi
> Fix For: 0.9
>
>
> Currently, to enable streaming checkpointing, you have to turn "monitoring" 
> on. I vote to call it "checkpointing", because that describes it better and is 
> more intuitive.





[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27104741
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/AllGroupCombineDriver.java
 ---
@@ -35,20 +35,20 @@
 * Like @org.apache.flink.runtime.operators.GroupCombineDriver but without 
grouping and sorting. May emit partially
 * reduced results.
 *
-* @see org.apache.flink.api.common.functions.FlatCombineFunction
+* @see GroupCombineFunction
--- End diff --

Path to class missing here.




[GitHub] flink pull request: [FLINK-1756] [streaming] Rename Stream Monitor...

2015-03-25 Thread mbalassi
Github user mbalassi commented on the pull request:

https://github.com/apache/flink/pull/531#issuecomment-85951702
  
Thanks for spotting it, it was my bad. LGTM, merging.




[GitHub] flink pull request: Flink-1780: Rename FlatCombineFunction to Grou...

2015-03-25 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/530#discussion_r27104700
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/TypeExtractor.java 
---
@@ -1425,11 +1425,9 @@ public static boolean isClassType(Type t) {
}
 
private static boolean sameTypeVars(Type t1, Type t2) {
-   if (!(t1 instanceof TypeVariable) || !(t2 instanceof 
TypeVariable)) {
-   return false;
-   }
-   return ((TypeVariable) 
t1).getName().equals(((TypeVariable)t2).getName())
-   && ((TypeVariable) 
t1).getGenericDeclaration().equals(((TypeVariable)t2).getGenericDeclaration());
+   return !(!(t1 instanceof TypeVariable) || !(t2 instanceof 
TypeVariable)) &&
+   ((TypeVariable) 
t1).getName().equals(((TypeVariable) t2).getName()) &&
+   ((TypeVariable) 
t1).getGenericDeclaration().equals(((TypeVariable) 
t2).getGenericDeclaration());
--- End diff --

Not sure if this change improves the readability of the code.
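A guard-clause version keeps the behavior of the diff's single boolean expression while separating the "not both type variables" case, which may be what the reviewer has in mind. This is a sketch only; `TypeVars` and `Holder` are invented names, not Flink classes.

```java
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;

// Guard-clause formulation: same result as the combined boolean expression,
// but the early return keeps the non-type-variable case visually separate.
public class TypeVars {
    public static boolean sameTypeVars(Type t1, Type t2) {
        if (!(t1 instanceof TypeVariable) || !(t2 instanceof TypeVariable)) {
            return false;
        }
        TypeVariable<?> v1 = (TypeVariable<?>) t1;
        TypeVariable<?> v2 = (TypeVariable<?>) t2;
        return v1.getName().equals(v2.getName())
                && v1.getGenericDeclaration().equals(v2.getGenericDeclaration());
    }

    public static class Holder<A, B> {}  // example generic class supplying type variables

    public static void main(String[] args) {
        TypeVariable<?>[] vars = Holder.class.getTypeParameters();
        System.out.println(sameTypeVars(vars[0], vars[0])); // true
        System.out.println(sameTypeVars(vars[0], vars[1])); // false
    }
}
```

Casting once to local variables also avoids repeating the `(TypeVariable)` cast three times, which is most of the readability cost in both versions of the diff.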




[jira] [Resolved] (FLINK-1544) Extend streaming aggregation tests to include POJOs

2015-03-25 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/FLINK-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Márton Balassi resolved FLINK-1544.
---
   Resolution: Implemented
Fix Version/s: 0.9

Implemented via 597d8b8

> Extend streaming aggregation tests to include POJOs
> ---
>
> Key: FLINK-1544
> URL: https://issues.apache.org/jira/browse/FLINK-1544
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Péter Szabó
>  Labels: starter
> Fix For: 0.9
>
>
> Currently, the streaming aggregation tests don't cover POJO aggregations, 
> which makes newly introduced bugs harder to detect.
> New tests should be added.
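What such a POJO test checks can be sketched without the streaming API: aggregate a list of POJOs by a public field and assert the result. `PojoAggregation` and `Event` are hypothetical names standing in for the streaming aggregation under test, not Flink classes.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of the assertion a POJO aggregation test makes: aggregating over a
// named public field ("count") of a POJO, here with plain Java in place of
// the streaming API so the example stays self-contained.
public class PojoAggregation {
    public static class Event {
        public String key;
        public int count;

        public Event(String key, int count) {
            this.key = key;
            this.count = count;
        }
    }

    /** maxBy over the POJO field the aggregation would be keyed on. */
    public static Event maxByCount(List<Event> events) {
        return events.stream().max(Comparator.comparingInt(e -> e.count)).get();
    }
}
```

The streaming tests would feed such POJOs through the aggregation operator and assert the emitted element, which is exactly the coverage the issue asks for.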





[jira] [Commented] (FLINK-1544) Extend streaming aggregation tests to include POJOs

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379541#comment-14379541
 ] 

ASF GitHub Bot commented on FLINK-1544:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/517


> Extend streaming aggregation tests to include POJOs
> ---
>
> Key: FLINK-1544
> URL: https://issues.apache.org/jira/browse/FLINK-1544
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Péter Szabó
>  Labels: starter
>
> Currently, the streaming aggregation tests don't cover POJO aggregations, 
> which makes newly introduced bugs harder to detect.
> New tests should be added.





[GitHub] flink pull request: [FLINK-1544] [streaming] POJO types added to A...

2015-03-25 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/517




[jira] [Commented] (FLINK-1775) BarrierBuffers don't correctly handle end of stream events

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379468#comment-14379468
 ] 

ASF GitHub Bot commented on FLINK-1775:
---

Github user uce commented on the pull request:

https://github.com/apache/flink/pull/534#issuecomment-85909585
  
Hey gyula,

What is the problem with the stream writer? And how can I reproduce the 
record writer broadcast problem? If you give me some instructions to reproduce 
the problem, I could also look into it.


> BarrierBuffers don't correctly handle end of stream events
> --
>
> Key: FLINK-1775
> URL: https://issues.apache.org/jira/browse/FLINK-1775
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> The current implementation causes deadlocks when the end of stream event 
> comes from a currently blocked channel.





[GitHub] flink pull request: [FLINK-1775] BarrierBuffer fix to avoid end of...

2015-03-25 Thread uce
Github user uce commented on the pull request:

https://github.com/apache/flink/pull/534#issuecomment-85909585
  
Hey Gyula,

What is the problem with the stream writer? And how can I reproduce the 
record writer broadcast problem? If you give me some instructions to reproduce 
the problem, I could also look into it.




[GitHub] flink pull request: [FLINK-1775] BarrierBuffer fix to avoid end of...

2015-03-25 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27100141
  
--- Diff: flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

Why are you swallowing the exception here silently?
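The question above concerns the empty catch block in the diff. As a minimal, hypothetical Java sketch of one alternative (the class name `BarrierSketch` and the stubbed `broadcastBarrier`/`confirmBarrier` are illustrative stand-ins, not Flink's actual API): keep the `isRunning` guard and the `synchronized` method from the diff, but rethrow the exception instead of swallowing it so the failure surfaces to the caller.

```java
// Hedged sketch: stand-alone class mirroring the diff's structure.
// The real StreamVertex, outputHandler, and LOG are stubbed out here.
public class BarrierSketch {
    private volatile boolean isRunning = true;
    private long lastConfirmed = -1L;

    // Stand-in for outputHandler.broadcastBarrier(id).
    private void broadcastBarrier(long id) throws Exception {
        if (id < 0) {
            throw new Exception("invalid barrier id: " + id);
        }
    }

    // Stand-in for confirmBarrier(id): remember the last confirmed superstep.
    private void confirmBarrier(long id) {
        lastConfirmed = id;
    }

    public synchronized void actOnBarrier(long id) {
        if (!isRunning) {
            return; // barriers arriving after shutdown are ignored
        }
        try {
            broadcastBarrier(id);
            // TODO checkpoint state here (as in the original diff)
            confirmBarrier(id);
        } catch (Exception e) {
            // Instead of swallowing: surface the failure to the caller.
            throw new RuntimeException("Superstep " + id + " failed", e);
        }
    }

    public void stop() {
        isRunning = false;
    }

    public long lastConfirmedBarrier() {
        return lastConfirmed;
    }

    public static void main(String[] args) {
        BarrierSketch v = new BarrierSketch();
        v.actOnBarrier(7);
        v.stop();
        v.actOnBarrier(8); // no effect after stop()
        System.out.println("last confirmed barrier: " + v.lastConfirmedBarrier());
        // prints "last confirmed barrier: 7"
    }
}
```

Whether rethrowing, logging, or failing the task is the right policy is exactly the open design question flagged by the `TODO` in the diff; this sketch only shows that the exception need not disappear silently.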




[jira] [Commented] (FLINK-1775) BarrierBuffers don't correctly handle end of stream events

2015-03-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14379449#comment-14379449
 ] 

ASF GitHub Bot commented on FLINK-1775:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/534#discussion_r27100141
  
--- Diff: flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/streamvertex/StreamVertex.java ---
@@ -283,16 +288,18 @@ public StreamConfig getConfig() {
 * 
 * @param id
 */
-   private void actOnBarrier(long id) {
-   try {
-   outputHandler.broadcastBarrier(id);
-   // TODO checkpoint state here
-   confirmBarrier(id);
-   if (LOG.isDebugEnabled()) {
-   LOG.debug("Superstep " + id + " processed: " + StreamVertex.this);
+   private synchronized void actOnBarrier(long id) {
+   if (this.isRunning) {
+   try {
+   outputHandler.broadcastBarrier(id);
+   // TODO checkpoint state here
+   confirmBarrier(id);
+   if (LOG.isDebugEnabled()) {
+   LOG.debug("Superstep " + id + " processed: " + StreamVertex.this);
+   }
+   } catch (Exception e) {
+   // TODO:Figure this out properly
--- End diff --

Why are you swallowing the exception here silently?


> BarrierBuffers don't correctly handle end of stream events
> --
>
> Key: FLINK-1775
> URL: https://issues.apache.org/jira/browse/FLINK-1775
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> The current implementation causes deadlocks when the end of stream event 
> comes from a currently blocked channel.


