[jira] [Commented] (FLINK-2622) Scala DataStream API does not have writeAsText method which supports WriteMode

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876916#comment-14876916
 ] 

ASF GitHub Bot commented on FLINK-2622:
---

Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/1098#issuecomment-141629844
  
There is a checkstyle error:

```
[INFO] There is 1 error reported by Checkstyle 6.2 with 
/tools/maven/checkstyle.xml ruleset.
[ERROR] 
src/main/java/org/apache/flink/streaming/api/datastream/DataStream.java[992] 
(regexp) RegexpSinglelineJava: Line has leading space characters; indentation 
should be performed with tabs only.
```


> Scala DataStream API does not have writeAsText method which supports WriteMode
> --
>
> Key: FLINK-2622
> URL: https://issues.apache.org/jira/browse/FLINK-2622
> Project: Flink
>  Issue Type: Bug
>  Components: Scala API, Streaming
>Reporter: Till Rohrmann
>
> The Scala DataStream API, unlike the Java DataStream API, does not support a 
> {{writeAsText}} method which takes the {{WriteMode}} as a parameter. In order 
> to make the two APIs consistent, it should be added to the Scala DataStream 
> API.
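
To illustrate the semantics such a {{WriteMode}} parameter controls, here is a minimal, self-contained sketch using only the JDK. This is not Flink's actual implementation; the class, enum, and method names are stand-ins that mirror the behavior the issue asks for (OVERWRITE replaces an existing file, NO_OVERWRITE refuses to).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Stand-in sketch (not Flink code) of the behavior a WriteMode parameter
// controls: OVERWRITE replaces an existing file, NO_OVERWRITE fails fast
// when the target already exists.
public class WriteModeSketch {
    public enum WriteMode { NO_OVERWRITE, OVERWRITE }

    public static void writeAsText(Path path, String text, WriteMode mode) throws IOException {
        if (mode == WriteMode.NO_OVERWRITE && Files.exists(path)) {
            throw new IOException("File " + path + " exists and WriteMode is NO_OVERWRITE");
        }
        Files.write(path, text.getBytes()); // default options truncate/overwrite
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("out", ".txt");
        writeAsText(p, "first", WriteMode.OVERWRITE);      // succeeds, file replaced
        boolean refused = false;
        try {
            writeAsText(p, "second", WriteMode.NO_OVERWRITE);
        } catch (IOException e) {
            refused = true;                                // refused: file exists
        }
        if (!refused) throw new AssertionError();
        if (!new String(Files.readAllBytes(p)).equals("first")) throw new AssertionError();
        Files.delete(p);
    }
}
```

The Scala API method would simply forward both arguments to the underlying Java `DataStream` implementation.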



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (FLINK-2622) Scala DataStream API does not have writeAsText method which supports WriteMode

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876915#comment-14876915
 ] 

ASF GitHub Bot commented on FLINK-2622:
---

Github user HuangWHWHW commented on the pull request:

https://github.com/apache/flink/pull/1098#issuecomment-141629775
  
Hi, could anyone tell me why the last CI run failed?
Thanks :)






[jira] [Commented] (FLINK-2576) Add outer joins to API and Optimizer

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876425#comment-14876425
 ] 

ASF GitHub Bot commented on FLINK-2576:
---

Github user jkovacs commented on the pull request:

https://github.com/apache/flink/pull/1138#issuecomment-141569673
  
To partly answer my own question: One big drawback of downgrading the tuple 
field types to `GenericTypeInfo` is that for (de)serialization and comparison 
the generic Kryo serializers will be used, which are significantly slower than 
the native Flink serializers and comparators for basic types such as Integer 
(according to [this blog 
post](http://flink.apache.org/news/2015/05/11/Juggling-with-Bits-and-Bytes.html)).

One obvious way to work around this is to only downgrade the fields that 
are actually nullable, and keep the original types of the definitely non-null 
fields (i.e. the types from the outer side of a left or right outer join). This 
way the user can still group/join/sort efficiently on the non-null fields, 
while preserving null safety for the other fields.

I pushed another commit for this to my temporary branch for review, if this 
makes sense: 
https://github.com/jkovacs/flink/compare/feature/FLINK-2576...jkovacs:feature/FLINK-2576-projection-types

As you can see, I was really hoping to make the projection joins work 
properly :-) but if you feel that the effort isn't worth it or I'm missing 
something else entirely, we can of course simply scrap that and throw an 
`InvalidProgramException` when the user tries to do a project outer join 
instead of defining their own join UDF. Opinions on that are welcome.
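
The idea described above — keeping exact types on the definitely non-null side and tolerating null only on the other side — can be sketched without any Flink classes. This is an illustrative stand-in, not Flink's operator code: a left outer join over two maps where only the right-hand field may be null.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative sketch (not Flink's implementation): in a left outer join,
// only the right-hand fields can be null, so only those need a null-tolerant
// representation; left-hand fields keep their exact type and stay safe to
// group/sort on.
public class LeftOuterJoinSketch {
    /** Left outer join of left(id -> name) with right(id -> score). */
    public static List<Object[]> leftOuterJoin(SortedMap<Integer, String> left,
                                               Map<Integer, Integer> right) {
        List<Object[]> rows = new ArrayList<>();
        for (Map.Entry<Integer, String> l : left.entrySet()) {
            Integer score = right.get(l.getKey()); // null only for unmatched right side
            rows.add(new Object[] { l.getKey(), l.getValue(), score });
        }
        return rows;
    }

    public static void main(String[] args) {
        SortedMap<Integer, String> left = new TreeMap<>();
        left.put(1, "a");
        left.put(2, "b");
        Map<Integer, Integer> right = Collections.singletonMap(1, 10);
        List<Object[]> rows = leftOuterJoin(left, right);
        // id and name (the outer side) are always non-null and keep their types;
        // only score can be null, so only it needs the "downgraded" treatment.
        if (!rows.get(0)[2].equals(10)) throw new AssertionError();
        if (rows.get(1)[2] != null) throw new AssertionError();
    }
}
```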


> Add outer joins to API and Optimizer
> 
>
> Key: FLINK-2576
> URL: https://issues.apache.org/jira/browse/FLINK-2576
> Project: Flink
>  Issue Type: Sub-task
>  Components: Java API, Optimizer, Scala API
>Reporter: Ricky Pogalz
>Priority: Minor
> Fix For: pre-apache
>
>
> Add left/right/full outer join methods to the DataSet APIs (Java, Scala) and 
> to the optimizer of Flink.
> Initially, the execution strategy should be a sort-merge outer join 
> (FLINK-2105) but can later be extended to hash joins for left/right outer 
> joins.







[jira] [Created] (FLINK-2709) line editing in scala shell

2015-09-18 Thread Matthew Farrellee (JIRA)
Matthew Farrellee created FLINK-2709:


 Summary: line editing in scala shell
 Key: FLINK-2709
 URL: https://issues.apache.org/jira/browse/FLINK-2709
 Project: Flink
  Issue Type: New Feature
  Components: Scala Shell
Reporter: Matthew Farrellee


It would be very helpful to be able to edit lines in the shell, for instance, 
up/down arrows to navigate history and left/right arrows to move within a line.

Bonus points for history search and advanced single-line editing (e.g. Emacs bindings).





[jira] [Commented] (FLINK-2525) Add configuration support in Storm-compatibility

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876240#comment-14876240
 ] 

ASF GitHub Bot commented on FLINK-2525:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1046#issuecomment-141550369
  
Hi @ffbin,
not sure if you followed the discussion on the mailing list, but we 
discussed using the ExecutionConfig instead of the JobConfig. The reason is 
that the ExecutionConfig is user-facing, while the JobConfig is used for 
system-internal configuration.

See the discussion 
[here](https://mail-archives.apache.org/mod_mbox/flink-dev/201509.mbox/%3CCANC1h_tVL1NHpoYrB9LaGRUP=tsyn2cpd2zfjoop2bhuddp...@mail.gmail.com%3E).

It would be nice if you could update the PR to use the ExecutionConfig. 
Thanks a lot, Fabian


> Add configuration support in Storm-compatibility
> 
>
> Key: FLINK-2525
> URL: https://issues.apache.org/jira/browse/FLINK-2525
> Project: Flink
>  Issue Type: New Feature
>  Components: Storm Compatibility
>Reporter: fangfengbin
>Assignee: fangfengbin
>
> Spouts and Bolts are initialized by a call to `Spout.open(...)` and 
> `Bolt.prepare()`, respectively. Both methods have a config `Map` as their first 
> parameter. This map is currently not populated, so Spouts and Bolts cannot 
> be configured with user-defined parameters. To support this feature, the 
> spout and bolt wrapper classes need to be extended to create a proper `Map` 
> object. Furthermore, the clients need to be extended to take a `Map` and 
> translate it into a Flink `Configuration` that is forwarded to the wrappers 
> for proper initialization of the map.
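
The round trip the description asks for can be sketched with plain JDK maps. The class and method names below are illustrative stand-ins, not Flink's or Storm's actual API: the client flattens the user `Map` into string-keyed configuration entries, and the wrapper rebuilds the `Map` handed to `open()`/`prepare()`.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the mechanism described in FLINK-2525. Names are
// illustrative: "toConfiguration" stands in for building a Flink
// Configuration, "toStormConf" for what the spout/bolt wrapper does.
public class StormConfigSketch {
    /** Client side: flatten the user Map into string-keyed config entries. */
    static Map<String, String> toConfiguration(Map<String, Object> userConf) {
        Map<String, String> conf = new HashMap<>();
        for (Map.Entry<String, Object> e : userConf.entrySet()) {
            conf.put(e.getKey(), String.valueOf(e.getValue()));
        }
        return conf;
    }

    /** Wrapper side: rebuild the Map passed to Spout.open() / Bolt.prepare(). */
    static Map<String, Object> toStormConf(Map<String, String> conf) {
        return new HashMap<>(conf);
    }

    public static void main(String[] args) {
        Map<String, Object> userConf = new HashMap<>();
        userConf.put("topology.workers", 4);
        // values survive the round trip (as strings, in this simple sketch)
        Map<String, Object> seenByBolt = toStormConf(toConfiguration(userConf));
        if (!"4".equals(seenByBolt.get("topology.workers"))) throw new AssertionError();
    }
}
```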







[jira] [Closed] (FLINK-2708) Potential null pointer dereference in JoinWithSolutionSetFirstDriver#run()

2015-09-18 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-2708.

Resolution: Not A Problem

Thanks for opening this issue.
However, null values are actually fine in this case, because joining with a 
solution set has outer-join semantics, and the join function must be able to 
handle null values.

Thanks, Fabian

> Potential null pointer dereference in JoinWithSolutionSetFirstDriver#run()
> --
>
> Key: FLINK-2708
> URL: https://issues.apache.org/jira/browse/FLINK-2708
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> while (this.running && ((probeSideRecord = probeSideInput.next(probeSideRecord)) != null)) {
>     IT1 matchedRecord = prober.getMatchFor(probeSideRecord, buildSideRecord);
>     joinFunction.join(matchedRecord, probeSideRecord, collector);
> }
> {code}
> The return value from getMatchFor() should be checked to avoid NPE.
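
Per the resolution above, the fix is not a null guard in the driver but a join function that tolerates a null match. A minimal stand-in sketch (plain strings instead of Flink's record types, hypothetical names) of a null-tolerant join function:

```java
// Stand-in sketch (not Flink's driver code): with outer-join semantics,
// getMatchFor() may legitimately return null, and the join function
// itself must accept that instead of the caller guarding against it.
public class SolutionSetJoinSketch {
    /** A join function with outer-join semantics: it must accept a null match. */
    static String join(String matchedRecord, String probeRecord) {
        return probeRecord + " -> " + (matchedRecord == null ? "<no match>" : matchedRecord);
    }

    public static void main(String[] args) {
        // matched case
        if (!join("state", "probe").equals("probe -> state")) throw new AssertionError();
        // unmatched case: the null from the lookup is fine by contract
        if (!join(null, "probe").equals("probe -> <no match>")) throw new AssertionError();
    }
}
```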





[jira] [Created] (FLINK-2708) Potential null pointer dereference in JoinWithSolutionSetFirstDriver#run()

2015-09-18 Thread Ted Yu (JIRA)
Ted Yu created FLINK-2708:
-

 Summary: Potential null pointer dereference in 
JoinWithSolutionSetFirstDriver#run()
 Key: FLINK-2708
 URL: https://issues.apache.org/jira/browse/FLINK-2708
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
while (this.running && ((probeSideRecord = probeSideInput.next(probeSideRecord)) != null)) {
    IT1 matchedRecord = prober.getMatchFor(probeSideRecord, buildSideRecord);
    joinFunction.join(matchedRecord, probeSideRecord, collector);
}
{code}
The return value from getMatchFor() should be checked to avoid NPE.





[GitHub] flink pull request: Windows

2015-09-18 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/1147

Windows

Adds new dedicated implementations for grouped windows on aligned 
processing time (tumbling and sliding time windows). A lot of code is written 
to be easily adaptable to event-time implementations.

The pull request currently includes only the operator implementations - the 
operators are not yet integrated with the APIs and stream graph builder.

The new operators fix many problems that the current time operators have, 
including cleanup of stale keys, deadlocks, resource leaks, incorrect time 
alignment, and occasional wrong results.

The pull request also adds quite a few tests for the operators and data 
structures, including checks whether data structures are released and whether 
threads are properly disposed.

One can try out the implementation by running the 
`GroupedProcessingTimeWindowExample` that I added to the examples project for 
experimental purposes (it should be removed before merging).

Speed

The new implementations are also much faster.

The setup is a streaming program that generates (Long, Long) tuples, groups 
them by field 0 in a window of 2500 ms sliding by 500 ms, and aggregates 
field 1. I use 10k unique keys in the example.

Times:
 - current implementation: 50K / core / sec (gets slower over time, high GC 
overhead)
 - new implementation w/o pre-aggregation: 800K / sec / core (moderate GC 
overhead)
 - new implementation w/ pre-aggregation: 3M / sec / core (low GC 
overhead)
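
The pre-aggregation speedup comes from the pane idea: with a 2500 ms window sliding by 500 ms, elements are first aggregated into 500 ms panes, and each emitted window combines the 5 most recent pane aggregates, so every element is touched once instead of once per overlapping window. A hedged, self-contained sketch of that arithmetic (not the PR's operator code, which keeps pane state incrementally rather than recomputing it):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of pane-based pre-aggregation for sliding sum windows.
// Window = 2500 ms sliding by 500 ms => each window covers 5 panes.
public class PaneAggregationSketch {
    static final long PANE_MS = 500, WINDOW_MS = 2500;

    /** events: {timestampMs, value} pairs; returns the sum for the window ending at windowEndMs. */
    static long windowSum(List<long[]> events, long windowEndMs) {
        Map<Long, Long> paneSums = new HashMap<>();
        for (long[] e : events) {
            paneSums.merge(e[0] / PANE_MS, e[1], Long::sum); // pre-aggregate into its pane
        }
        long sum = 0;
        for (long pane = (windowEndMs - WINDOW_MS) / PANE_MS; pane < windowEndMs / PANE_MS; pane++) {
            sum += paneSums.getOrDefault(pane, 0L);          // combine the 5 panes of this window
        }
        return sum;
    }

    public static void main(String[] args) {
        List<long[]> events = Arrays.asList(
            new long[] { 100, 1 }, new long[] { 600, 2 }, new long[] { 2600, 4 });
        if (windowSum(events, 2500) != 3) throw new AssertionError(); // panes 0-4: 1 + 2
        if (windowSum(events, 5000) != 4) throw new AssertionError(); // panes 5-9: 4
    }
}
```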

This builds on #1133 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1147.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1147


commit 3c113064826612d64bc6902733511a5f332543cd
Author: Stephan Ewen 
Date:   2015-09-15T12:55:41Z

[FLINK-2675] [streaming] Add utilities for scheduled triggers.

commit 1654189e772e55c04175374ec878ba398e2c9fbf
Author: Stephan Ewen 
Date:   2015-09-15T13:00:17Z

[FLINK-2683] [FLINK-2682] [runtime] Add dedicated operator for aligned 
processing time windows.

Also add utilities for heap-backed keyed state in panes (dedicated tailored 
hash table)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-2707) Set state checkpointer before default state for PartitionedStreamOperatorState

2015-09-18 Thread Gyula Fora (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora resolved FLINK-2707.
---
Resolution: Fixed

Fixed + added a test case

https://github.com/apache/flink/commit/3e233a3894a2701cc16c6cb3bc8778fc84482e20

> Set state checkpointer before default state for PartitionedStreamOperatorState
> --
>
> Key: FLINK-2707
> URL: https://issues.apache.org/jira/browse/FLINK-2707
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>
> Currently, the default state is set before the passed StateCheckpointer 
> instance for operator states.
> Because of this, the default value is serialized with Java serialization and 
> then deserialized on the opstate.value() call using the StateCheckpointer, 
> most likely causing a failure.
> This can be trivially fixed by swapping the order of the two calls.
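
The order dependence can be shown with a tiny stand-in (none of these classes are Flink's; the two "checkpointers" just tag values with a prefix in place of real serialization): the default value is encoded with whatever checkpointer is installed when it is set, so installing the custom checkpointer afterwards makes the later decode fail.

```java
// Stand-in sketch of the FLINK-2707 ordering bug, not Flink's state classes.
public class StateOrderSketch {
    interface Checkpointer {
        String snapshot(String value);
        String restore(String snapshot);
    }

    // Stand-in for default Java serialization.
    static final Checkpointer JAVA = new Checkpointer() {
        public String snapshot(String v) { return "java:" + v; }
        public String restore(String s) {
            if (!s.startsWith("java:")) throw new IllegalStateException("bad snapshot: " + s);
            return s.substring(5);
        }
    };
    // Stand-in for a user-provided StateCheckpointer.
    static final Checkpointer CUSTOM = new Checkpointer() {
        public String snapshot(String v) { return "custom:" + v; }
        public String restore(String s) {
            if (!s.startsWith("custom:")) throw new IllegalStateException("bad snapshot: " + s);
            return s.substring(7);
        }
    };

    private Checkpointer checkpointer = JAVA;
    private String encodedDefault;

    void setCheckpointer(Checkpointer c) { checkpointer = c; }
    void setDefaultState(String v) { encodedDefault = checkpointer.snapshot(v); } // encodes NOW
    String value() { return checkpointer.restore(encodedDefault); }

    public static void main(String[] args) {
        // Buggy order: default set first, custom checkpointer installed after.
        StateOrderSketch bad = new StateOrderSketch();
        bad.setDefaultState("x");
        bad.setCheckpointer(CUSTOM);
        boolean failed = false;
        try { bad.value(); } catch (IllegalStateException e) { failed = true; }
        if (!failed) throw new AssertionError();

        // Fixed order: checkpointer first, then the default state.
        StateOrderSketch good = new StateOrderSketch();
        good.setCheckpointer(CUSTOM);
        good.setDefaultState("x");
        if (!good.value().equals("x")) throw new AssertionError();
    }
}
```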





[jira] [Assigned] (FLINK-2706) Add support for streaming RollingFileSink to truncate / append on UNIX file systems

2015-09-18 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-2706:
---

Assignee: Aljoscha Krettek

> Add support for streaming RollingFileSink to truncate / append on UNIX file 
> systems
> ---
>
> Key: FLINK-2706
> URL: https://issues.apache.org/jira/browse/FLINK-2706
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>
> Efficient exactly-once behavior needs the file system to support appending and 
> truncating files.
> Since the UNIX file system API allows appending to and truncating files, we 
> can support perfect exactly-once behavior efficiently on all file systems 
> that expose a UNIX / POSIX-style interface (local FS, NFS, MapR FS).
> Without this support, only Hadoop 2.7+ versions support proper exactly-once 
> behavior.





[jira] [Commented] (FLINK-2706) Add support for streaming RollingFileSink to truncate / append on UNIX file systems

2015-09-18 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875948#comment-14875948
 ] 

Aljoscha Krettek commented on FLINK-2706:
-

This should not be too hard. The Java Filesystem API can truncate.
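
Both primitives the sink needs are available in standard Java NIO (`java.nio.file` / `FileChannel`) on POSIX-style file systems. A minimal sketch (the method names here are illustrative, not the connector's API): append records, then truncate back to the last known-valid length after a failure.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the two file-system primitives the rolling sink needs:
// append to an existing file, and truncate it to a known valid length.
public class TruncateAppendSketch {
    static void append(Path p, String data) throws IOException {
        Files.write(p, data.getBytes(), StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    static void truncateTo(Path p, long validLength) throws IOException {
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.WRITE)) {
            ch.truncate(validLength); // discard everything written past the last checkpoint
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("rolling", ".txt");
        append(p, "hello");
        append(p, "-uncheckpointed");  // data written after the last checkpoint
        truncateTo(p, 5);              // roll back to the checkpointed length
        if (!new String(Files.readAllBytes(p)).equals("hello")) throw new AssertionError();
        Files.delete(p);
    }
}
```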



[jira] [Commented] (FLINK-2576) Add outer joins to API and Optimizer

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875879#comment-14875879
 ] 

ASF GitHub Bot commented on FLINK-2576:
---

Github user jkovacs commented on the pull request:

https://github.com/apache/flink/pull/1138#issuecomment-141497981
  
Thanks @fhueske, that's a good point I haven't considered. 

Another idea that occurred to me was to convert the result tuple types to 
`GenericTypeInfo<T>` (instead of a plain `GenericTypeInfo`), where `T` is the 
original type of the tuple field (e.g. `String` or `Integer`). This would be 
null safe _and_ would allow the user to group by those fields, assuming of 
course they are sure that the fields are non-null (e.g. on a left or right 
outer join).
Although I'm not sure of all the consequences of using, say, 
`GenericTypeInfo<T>` instead of `BasicTypeInfo<T>` for serialization 
and comparison.

I pushed this change as 
https://github.com/jkovacs/flink/commit/f682baa50137e0a54bae091ba60ba85fdb8f4c1b
 to a different branch to test it 

Also rebased the branch onto the current master and resolved conflicts (the 
failing test is some YARN integration test).






[jira] [Assigned] (FLINK-2697) Deadlock in StreamDiscretizer

2015-09-18 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-2697:
---

Assignee: Aljoscha Krettek

> Deadlock in StreamDiscretizer
> -
>
> Key: FLINK-2697
> URL: https://issues.apache.org/jira/browse/FLINK-2697
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>
> Encountered a deadlock in the {{StreamDiscretizer}}
> {code}
> Found one Java-level deadlock:
> =
> "Thread-11":
>   waiting to lock monitor 0x7f9d081e1ab8 (object 0xff6b4590, a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer),
>   which is held by "StreamDiscretizer -> TumblingGroupedPreReducer -> 
> (Filter, ExtractParts) (3/4)"
> "StreamDiscretizer -> TumblingGroupedPreReducer -> (Filter, ExtractParts) 
> (3/4)":
>   waiting to lock monitor 0x7f9d081e20e8 (object 0xff75fd88, a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy),
>   which is held by "Thread-11"
> Java stack information for the threads listed above:
> ===
> "Thread-11":
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.triggerOnFakeElement(StreamDiscretizer.java:121)
>   - waiting to lock <0xff6b4590> (a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer$WindowingCallback.sendFakeElement(StreamDiscretizer.java:203)
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy.activeFakeElementEmission(TimeTriggerPolicy.java:117)
>   - locked <0xff75fd88> (a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy)
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy$TimeCheck.run(TimeTriggerPolicy.java:144)
>   at java.lang.Thread.run(Thread.java:745)
> "StreamDiscretizer -> TumblingGroupedPreReducer -> (Filter, ExtractParts) 
> (3/4)":
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy.preNotifyTrigger(TimeTriggerPolicy.java:74)
>   - waiting to lock <0xff75fd88> (a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.processRealElement(StreamDiscretizer.java:91)
>   - locked <0xff6b4590> (a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.processElement(StreamDiscretizer.java:73)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:162)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:171)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>   at java.lang.Thread.run(Thread.java:745)
> Found 1 deadlock.
> {code}
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/80770719/log.txt
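
The trace above is a classic lock-order inversion: the timer thread locks the TimeTriggerPolicy monitor and then waits for the StreamDiscretizer monitor, while the task thread holds the discretizer monitor and waits for the policy monitor. A hedged stand-in sketch (these are not the actual Flink operators) showing the standard fix of acquiring both monitors in one fixed order on every path:

```java
// Stand-in sketch of the fix for a lock-order inversion like the one in the
// trace: both the timer path and the task path acquire discretizerLock
// first, then policyLock, so a cycle can never form.
public class LockOrderSketch {
    private final Object discretizerLock = new Object();
    private final Object policyLock = new Object();
    private int triggers = 0;

    void onFakeElement() {          // timer-thread path
        synchronized (discretizerLock) {
            synchronized (policyLock) { triggers++; }
        }
    }

    void onRealElement() {          // task-thread path: same lock order
        synchronized (discretizerLock) {
            synchronized (policyLock) { triggers++; }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockOrderSketch s = new LockOrderSketch();
        Thread timer = new Thread(() -> { for (int i = 0; i < 1000; i++) s.onFakeElement(); });
        Thread task  = new Thread(() -> { for (int i = 0; i < 1000; i++) s.onRealElement(); });
        timer.start(); task.start();
        timer.join(); task.join();  // completes: no deadlock is possible
        if (s.triggers != 2000) throw new AssertionError();
    }
}
```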





[jira] [Commented] (FLINK-2697) Deadlock in StreamDiscretizer

2015-09-18 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875857#comment-14875857
 ] 

Aljoscha Krettek commented on FLINK-2697:
-

I've seen this before as well. But this is going away with the windowing 
rewrite.



[jira] [Commented] (FLINK-2697) Deadlock in StreamDiscretizer

2015-09-18 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875858#comment-14875858
 ] 

Aljoscha Krettek commented on FLINK-2697:
-

Should we keep the issue around until the new windowing stuff is in and then 
mark it as fixed?

> Deadlock in StreamDiscretizer
> -
>
> Key: FLINK-2697
> URL: https://issues.apache.org/jira/browse/FLINK-2697
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Reporter: Till Rohrmann
>
> Encountered a deadlock in the {{StreamDiscretizer}}
> {code}
> Found one Java-level deadlock:
> =
> "Thread-11":
>   waiting to lock monitor 0x7f9d081e1ab8 (object 0xff6b4590, a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer),
>   which is held by "StreamDiscretizer -> TumblingGroupedPreReducer -> 
> (Filter, ExtractParts) (3/4)"
> "StreamDiscretizer -> TumblingGroupedPreReducer -> (Filter, ExtractParts) 
> (3/4)":
>   waiting to lock monitor 0x7f9d081e20e8 (object 0xff75fd88, a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy),
>   which is held by "Thread-11"
> Java stack information for the threads listed above:
> ===
> "Thread-11":
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.triggerOnFakeElement(StreamDiscretizer.java:121)
>   - waiting to lock <0xff6b4590> (a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer$WindowingCallback.sendFakeElement(StreamDiscretizer.java:203)
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy.activeFakeElementEmission(TimeTriggerPolicy.java:117)
>   - locked <0xff75fd88> (a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy)
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy$TimeCheck.run(TimeTriggerPolicy.java:144)
>   at java.lang.Thread.run(Thread.java:745)
> "StreamDiscretizer -> TumblingGroupedPreReducer -> (Filter, ExtractParts) 
> (3/4)":
>   at 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy.preNotifyTrigger(TimeTriggerPolicy.java:74)
>   - waiting to lock <0xff75fd88> (a 
> org.apache.flink.streaming.api.windowing.policy.TimeTriggerPolicy)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.processRealElement(StreamDiscretizer.java:91)
>   - locked <0xff6b4590> (a 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer)
>   at 
> org.apache.flink.streaming.api.operators.windowing.StreamDiscretizer.processElement(StreamDiscretizer.java:73)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:162)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:56)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:171)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>   at java.lang.Thread.run(Thread.java:745)
> Found 1 deadlock.
> {code}
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/80770719/log.txt
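
The two stack traces above show a textbook lock-order inversion: the fake-element path locks the TimeTriggerPolicy monitor before the StreamDiscretizer monitor, while the real-element path acquires them in the opposite order. As a hedged illustration (the class and method names below are hypothetical stand-ins, not actual Flink code), the conventional fix is to make every path acquire the two monitors in a single global order:

```java
// Hypothetical sketch of the lock pattern behind the deadlock above.
// Both paths below take discretizerLock first, then triggerLock, so the
// two threads can never each hold the lock the other is waiting for.
public class LockOrderSketch {
    static final Object discretizerLock = new Object(); // stands in for StreamDiscretizer
    static final Object triggerLock = new Object();     // stands in for TimeTriggerPolicy

    static void fakeElementPath() {
        synchronized (discretizerLock) {
            synchronized (triggerLock) {
                // emit fake element
            }
        }
    }

    static void realElementPath() {
        synchronized (discretizerLock) {
            synchronized (triggerLock) {
                // process real element
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Hammer both paths concurrently; with a consistent lock order
        // this always terminates instead of deadlocking.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) fakeElementPath(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) realElementPath(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("no deadlock");
    }
}
```

This only illustrates the lock-ordering discipline; the actual resolution chosen for Flink's windowing code may differ.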



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2600) Failing ElasticsearchSinkITCase.testNodeClient test case

2015-09-18 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875850#comment-14875850
 ] 

Aljoscha Krettek commented on FLINK-2600:
-

But that's different, isn't it? The other time the test failed with an 
exception in Elasticsearch; here it's a deadlock.

> Failing ElasticsearchSinkITCase.testNodeClient test case
> 
>
> Key: FLINK-2600
> URL: https://issues.apache.org/jira/browse/FLINK-2600
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Assignee: Aljoscha Krettek
>  Labels: test-stability
>
> I observed that the {{ElasticsearchSinkITCase.testNodeClient}} test case 
> fails on Travis. The stack trace is
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:414)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.testingUtils.TestingJobManager$$anonfun$handleTestingMessage$1.applyOrElse(TestingJobManager.scala:285)
>   at scala.PartialFunction$OrElse.apply(PartialFunction.scala:162)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:104)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.RuntimeException: An error occured in ElasticsearchSink.
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSink.close(ElasticsearchSink.java:307)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:40)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.close(AbstractUdfStreamOperator.java:75)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:243)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:185)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: IndexMissingException[[my-index] 
> missing]
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSink$1.afterBulk(ElasticsearchSink.java:240)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.execute(BulkProcessor.java:316)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.executeIfNeeded(BulkProcessor.java:299)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:281)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:264)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:260)
>   at 
> org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:246)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSink.invoke(ElasticsearchSink.java:286)
>   at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:37)
>   at 
> org.apache.flink.stream

[jira] [Created] (FLINK-2707) Set state checkpointer before default state for PartitionedStreamOperatorState

2015-09-18 Thread Gyula Fora (JIRA)
Gyula Fora created FLINK-2707:
-

 Summary: Set state checkpointer before default state for 
PartitionedStreamOperatorState
 Key: FLINK-2707
 URL: https://issues.apache.org/jira/browse/FLINK-2707
 Project: Flink
  Issue Type: Bug
  Components: Streaming
Reporter: Gyula Fora
Assignee: Gyula Fora


Currently the default state is set before the passed StateCheckpointer instance 
for operator states.

What currently happens because of this is that the default value is serialized 
with Java serialization and then deserialized on the opstate.value() call using 
the StateCheckpointer, most likely causing a failure.

This can be trivially fixed by swapping the order of the two calls.
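
A minimal sketch of the ordering bug, using hypothetical stand-ins (a String-based "checkpointer" in place of Flink's StateCheckpointer and state backend): if the default value is written with Java serialization before the custom checkpointer is registered, reading it back through that checkpointer yields garbage, while registering the checkpointer first keeps the two formats consistent.

```java
// Illustrative only: JAVA_SER mimics default Java serialization, CUSTOM
// mimics a user-supplied StateCheckpointer. The prefixes model two
// incompatible on-disk formats.
public class CheckpointerOrderSketch {
    interface Checkpointer { String snapshot(String s); String restore(String s); }

    static final Checkpointer JAVA_SER = new Checkpointer() {
        public String snapshot(String s) { return "java:" + s; }
        public String restore(String s) { return s.substring("java:".length()); }
    };
    static final Checkpointer CUSTOM = new Checkpointer() {
        public String snapshot(String s) { return "custom:" + s; }
        public String restore(String s) { return s.substring("custom:".length()); }
    };

    static Checkpointer active = JAVA_SER;
    static String stored;

    static void setDefaultState(String v) { stored = active.snapshot(v); }
    static void setCheckpointer(Checkpointer c) { active = c; }
    static String value() { return active.restore(stored); }

    public static void main(String[] args) {
        // Buggy order: default written with Java serialization, then read
        // through the custom checkpointer -> mangled value (a real backend
        // would typically throw during deserialization instead).
        setDefaultState("default");
        setCheckpointer(CUSTOM);
        System.out.println(value()); // prints "fault"

        // Fixed order: register the checkpointer first, then set the default.
        active = JAVA_SER;
        setCheckpointer(CUSTOM);
        setDefaultState("default");
        System.out.println(value()); // prints "default"
    }
}
```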





[jira] [Created] (FLINK-2706) Add support for streaming RollingFileSink to truncate / append on UNIX file systems

2015-09-18 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-2706:
---

 Summary: Add support for streaming RollingFileSink to truncate / 
append on UNIX file systems
 Key: FLINK-2706
 URL: https://issues.apache.org/jira/browse/FLINK-2706
 Project: Flink
  Issue Type: Improvement
  Components: Streaming Connectors
Affects Versions: 0.10
Reporter: Stephan Ewen


Efficient exactly-once behavior needs the filesystem to support appending and 
truncating files.

Since the UNIX file system API allows appending to and truncating files, we 
can support perfect exactly-once behavior efficiently on all file systems that 
expose a UNIX / POSIX-style interface (local FS, NFS, MapR FS).

Without this support, only Hadoop 2.7+ versions support proper exactly-once 
behavior.
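
The recovery idea behind this can be sketched with standard java.nio (a hedged illustration, not the RollingFileSink implementation): on restore, the sink truncates the file back to the length recorded in the last completed checkpoint and then keeps appending, so replayed records land exactly once.

```java
// Sketch of truncate-then-append recovery on a POSIX-style file system.
// The temp file and record contents are illustrative.
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TruncateAppendSketch {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("rolling-sink", ".txt");

        // Records durably written before the checkpoint.
        Files.write(file, "a\nb\n".getBytes(StandardCharsets.UTF_8));
        long checkpointedLength = Files.size(file); // recorded in the checkpoint

        // Records written after the checkpoint; on failure these must not
        // end up duplicated once the source replays them.
        Files.write(file, "c\n".getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND);

        // Recovery: cut the file back to the checkpointed length...
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ch.truncate(checkpointedLength);
        }
        // ...then re-append the replayed records exactly once.
        Files.write(file, "c\n".getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND);

        System.out.println(new String(Files.readAllBytes(file), StandardCharsets.UTF_8));
        Files.delete(file);
    }
}
```

On file systems without truncate (e.g. older HDFS), the same effect needs a workaround such as a validity-length marker, which is why the issue calls out Hadoop 2.7+.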





[jira] [Closed] (FLINK-2583) Add Stream Sink For Rolling HDFS Files

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2583.
---

> Add Stream Sink For Rolling HDFS Files
> --
>
> Key: FLINK-2583
> URL: https://issues.apache.org/jira/browse/FLINK-2583
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>
> In addition to having configurable file-rolling behavior the Sink should also 
> integrate with checkpointing to make it possible to have exactly-once 
> semantics throughout the topology.





[jira] [Commented] (FLINK-2643) Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0

2015-09-18 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875753#comment-14875753
 ] 

Stephan Ewen commented on FLINK-2643:
-

First version by [~aljoscha] merged in b234b0b16d01c0f843ae6031337a8a2a3ba16d5b

> Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0
> 
>
> Key: FLINK-2643
> URL: https://issues.apache.org/jira/browse/FLINK-2643
> Project: Flink
>  Issue Type: Improvement
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Minor
>
> In discussion on the mailing list we reached consensus to change the Hadoop 
> versions that we build Flink with on Travis.





[jira] [Resolved] (FLINK-2583) Add Stream Sink For Rolling HDFS Files

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2583.
-
Resolution: Fixed

Done in 35dcceb9ebc3a8a47b8b8aeb2c4e1e2d453767f4

> Add Stream Sink For Rolling HDFS Files
> --
>
> Key: FLINK-2583
> URL: https://issues.apache.org/jira/browse/FLINK-2583
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>
> In addition to having configurable file-rolling behavior the Sink should also 
> integrate with checkpointing to make it possible to have exactly-once 
> semantics throughout the topology.





[jira] [Commented] (FLINK-2685) TaskManager deadlock on NetworkBufferPool

2015-09-18 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875748#comment-14875748
 ] 

Stephan Ewen commented on FLINK-2685:
-

Thanks for posting this, Greg!

We'll have a look at this very soon...

> TaskManager deadlock on NetworkBufferPool
> -
>
> Key: FLINK-2685
> URL: https://issues.apache.org/jira/browse/FLINK-2685
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Ufuk Celebi
>
> This deadlock occurs intermittently. I have a {{join}} followed by a 
> {{chain}} followed by a {{reduceGroup}}. Stack traces and local 
> variables from one each of the {{join}} threads below.
> The {{join}}'s are waiting on a buffer to become available 
> ({{networkBufferPool.availableMemorySegments.count=0}}). Both 
> {{LocalBufferPool}}'s have been given extra capacity ({{currentPoolSize=60 > 
> numberOfRequiredMemorySegments=32}}). The first {{join}} is at full capacity 
> ({{currentPoolSize=numberOfRequestedMemorySegments=60}}) yet the second 
> {{join}} has not acquired any ({{numberOfRequestedMemorySegments=0}}).
> {{LocalBufferPool.returnExcessMemorySegments}} only recycles 
> {{MemorySegment}}'s from its {{availableMemorySegments}}, so any requested 
> {{Buffer}}'s will only be released when explicitly recycled.
> First join stack trace and variable values from 
> {{LocalBufferPool.requestBuffer}}:
> {noformat}
> owns: SpanningRecordSerializer  (id=723)   
> waiting for: ArrayDeque  (id=724)  
> Object.wait(long) line: not available [native method] 
> LocalBufferPool.requestBuffer(boolean) line: 163  
> LocalBufferPool.requestBufferBlocking() line: 133 
> RecordWriter.emit(T) line: 92  
> OutputCollector.collect(T) line: 65
> JoinOperator$ProjectFlatJoinFunction.join(T1, T2, Collector) 
> line: 1088   
> ReusingBuildSecondHashMatchIterator.callWithNextKey(FlatJoinFunction,
>  Collector) line: 137   
> JoinDriver.run() line: 208
> RegularPactTask.run() line: 489 
> RegularPactTask.invoke() line: 354  
> Task.run() line: 581  
> Thread.run() line: 745
> {noformat}
> {noformat}
> this  LocalBufferPool  (id=403)   
>   availableMemorySegments ArrayDeque  (id=398) 
>   elementsObject[16]  (id=422)
>   head14  
>   tail14  
>   currentPoolSize 60  
>   isDestroyed false   
>   networkBufferPool   NetworkBufferPool  (id=354) 
>   allBufferPools  HashSet  (id=424)
>   availableMemorySegments ArrayBlockingQueue  (id=427) 
>   count   0   
>   items   Object[10240]  (id=674) 
>   itrsnull
>   lockReentrantLock  (id=675) 
>   notEmpty
> AbstractQueuedSynchronizer$ConditionObject  (id=678)
>   notFull AbstractQueuedSynchronizer$ConditionObject  
> (id=679)
>   putIndex6954
>   takeIndex   6954
>   factoryLock Object  (id=430)
>   isDestroyed false   
>   managedBufferPools  HashSet  (id=431)
>   memorySegmentSize   32768   
>   numTotalRequiredBuffers 3226
>   totalNumberOfMemorySegments 10240   
>   numberOfRequestedMemorySegments 60  
>   numberOfRequiredMemorySegments  32  
>   owner   null
>   registeredListeners ArrayDeque  (id=421) 
>   elementsObject[16]  (id=685)
>   head0   
>   tail0   
> askToRecycle  false   
> isBlockingtrue
> {noformat}
> Second join stack trace and variable values from 
> {{SingleInputGate.getNextBufferOrEvent}}:
> {noformat}
> Unsafe.park(boolean, long) line: not available [native method]
> LockSupport.parkNanos(Object, long) line: 215 
> AbstractQueuedSynchronizer$ConditionObject.awaitNanos(long) line: 2078
> LinkedBlockingQueue.poll(long, TimeUnit) line: 467 
> SingleInputGate.getNextBufferOrEvent() line: 414  
> MutableRecordReader(AbstractRecordReader).getNextRecord(T) line: 79 
> MutableRecordReader.next(T) line: 34   
> ReaderIterator.next(T) line: 59
> MutableHashTable$ProbeIterator.next() line: 1581  
> MutableHashTable.processProbeIter() line: 457  
> MutableHashTable.nextRecord() line: 555
> ReusingBuildSecondHashMatchIterator.callWithNextKey(FlatJoinFunction,
>  Collector) line: 110   
> JoinDriver.run() line: 208
> RegularPactTask.run() line: 489 
> RegularPactTask.invoke() line: 354  
> Task.run() line: 581  
> Threa

[jira] [Commented] (FLINK-2671) Instable Test StreamCheckpointNotifierITCase

2015-09-18 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875740#comment-14875740
 ] 

Till Rohrmann commented on FLINK-2671:
--

The maven log does not really help to find out the cause of the problem.

> Instable Test StreamCheckpointNotifierITCase
> 
>
> Key: FLINK-2671
> URL: https://issues.apache.org/jira/browse/FLINK-2671
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Failed tests: 
>   
> StreamCheckpointNotifierITCase>StreamFaultToleranceTestBase.runCheckpointedProgram:105->postSubmit:115
>  No checkpoint notification was received.{noformat}
> https://travis-ci.org/apache/flink/jobs/80344489





[jira] [Commented] (FLINK-2652) Failing PartitionRequestClientFactoryTest

2015-09-18 Thread Ufuk Celebi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875729#comment-14875729
 ] 

Ufuk Celebi commented on FLINK-2652:


Will look into it next week. But I didn't change anything yet.

> Failing PartitionRequestClientFactoryTest
> -
>
> Key: FLINK-2652
> URL: https://issues.apache.org/jira/browse/FLINK-2652
> Project: Flink
>  Issue Type: Bug
>Reporter: Ufuk Celebi
>Priority: Minor
>  Labels: test-stability
>
> PartitionRequestClientFactoryTest fails when running {{mvn 
> -Dhadoop.version=2.6.0 clean verify}}.





[jira] [Commented] (FLINK-2030) Implement discrete and continuous histograms

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875702#comment-14875702
 ] 

ASF GitHub Bot commented on FLINK-2030:
---

Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/861#issuecomment-141471679
  
Rebased to reflect the changes in the scala utility functions.
Travis failure on an unrelated error. Reported at jira id 2700.

This has already undergone three reviews, and been open for almost four 
months (as part of #710 too). If this functionality is not required, I can 
close the PR; that'd however automatically mean closing #710 too since this is 
essential to implementing a Decision Tree. I'd prefer not to though as I've 
spent a lot of time optimizing this.

@chiwanpark @tillrohrmann 


> Implement discrete and continuous histograms
> 
>
> Key: FLINK-2030
> URL: https://issues.apache.org/jira/browse/FLINK-2030
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Sachin Goel
>Assignee: Sachin Goel
>Priority: Minor
>
> For the implementation of the decision tree in 
> https://issues.apache.org/jira/browse/FLINK-1727, we need to implement an 
> histogram with online updates, merging and equalization features. A reference 
> implementation is provided in [1]
> [1].http://www.jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf





[GitHub] flink pull request: [FLINK-2030][FLINK-2274][core][utils]Histogram...

2015-09-18 Thread sachingoel0101
Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/861#issuecomment-141471679
  
Rebased to reflect the changes in the scala utility functions.
Travis failure on an unrelated error. Reported at jira id 2700.

This has already undergone three reviews, and been open for almost four 
months (as part of #710 too). If this functionality is not required, I can 
close the PR; that'd however automatically mean closing #710 too since this is 
essential to implementing a Decision Tree. I'd prefer not to though as I've 
spent a lot of time optimizing this.

@chiwanpark @tillrohrmann 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2643) Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875696#comment-14875696
 ] 

ASF GitHub Bot commented on FLINK-2643:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1146#issuecomment-141470722
  
+1 to merge


> Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0
> 
>
> Key: FLINK-2643
> URL: https://issues.apache.org/jira/browse/FLINK-2643
> Project: Flink
>  Issue Type: Improvement
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Minor
>
> In discussion on the mailing list we reached consensus to change the Hadoop 
> versions that we build Flink with on Travis.





[GitHub] flink pull request: [FLINK-2643] [build] Update Travis build matri...

2015-09-18 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1146#issuecomment-141470722
  
+1 to merge




[jira] [Commented] (FLINK-2652) Failing PartitionRequestClientFactoryTest

2015-09-18 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875691#comment-14875691
 ] 

Till Rohrmann commented on FLINK-2652:
--

I couldn't reproduce the problem locally. [~uce], could you verify if this 
problem still exists?

> Failing PartitionRequestClientFactoryTest
> -
>
> Key: FLINK-2652
> URL: https://issues.apache.org/jira/browse/FLINK-2652
> Project: Flink
>  Issue Type: Bug
>Reporter: Ufuk Celebi
>Priority: Minor
>  Labels: test-stability
>
> PartitionRequestClientFactoryTest fails when running {{mvn 
> -Dhadoop.version=2.6.0 clean verify}}.





[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875680#comment-14875680
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141465883
  
Hmm, it does not seem to solve the problem. I will further investigate the 
problem and reopen a PR which will hopefully fix it.


> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799





[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875681#comment-14875681
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/1143


> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799





[GitHub] flink pull request: [FLINK-2616] [test-stability] Hardens ZooKeepe...

2015-09-18 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/1143




[GitHub] flink pull request: [FLINK-2616] [test-stability] Hardens ZooKeepe...

2015-09-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141465883
  
Hmm, it does not seem to solve the problem. I will further investigate the 
problem and reopen a PR which will hopefully fix it.




[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875668#comment-14875668
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

Github user uce commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141462318
  
Changes look good. Good debug messages as well :)


> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799





[GitHub] flink pull request: [FLINK-2616] [test-stability] Hardens ZooKeepe...

2015-09-18 Thread uce
Github user uce commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141462318
  
Changes look good. Good debug messages as well :)




[jira] [Commented] (FLINK-2643) Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875669#comment-14875669
 ] 

ASF GitHub Bot commented on FLINK-2643:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/1146

[FLINK-2643] [build] Update Travis build matrix and change snapshot deploy 
profiles

Changes Travis build profiles to test the following Hadoop versions:

  - 1.0
  - 2.3
  - 2.4
  - 2.5
  - 2.6

Maven uploads and binary build uploads happen for versions 1.0 and 2.3.

Removes the "include-yarn" profile, as now all Hadoop2 profiles work with 
YARN.
Also drops the YARN-tests for Hadoop 2.3, because of an apparent bug in the 
hostname resolution affecting Hadoop 2.3.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink travis_matrix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1146


commit 27d4e77b5b1c2d0e8e77b9924ddecbc9b0b3808c
Author: Stephan Ewen 
Date:   2015-09-18T10:47:07Z

[FLINK-2643] [build] Update Travis build matrix and change snapshot deploy 
profiles




> Change Travis Build Profile to Exclude Hadoop 2.0.0-alpha, Include 2.7.0
> 
>
> Key: FLINK-2643
> URL: https://issues.apache.org/jira/browse/FLINK-2643
> Project: Flink
>  Issue Type: Improvement
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Minor
>
> In discussion on the mailing list we reached consensus to change the Hadoop 
> versions that we build Flink with on Travis.





[GitHub] flink pull request: [FLINK-2643] [build] Update Travis build matri...

2015-09-18 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/1146

[FLINK-2643] [build] Update Travis build matrix and change snapshot deploy 
profiles

Changes Travis build profiles to test the following Hadoop versions:

  - 1.0
  - 2.3
  - 2.4
  - 2.5
  - 2.6

Maven uploads and binary build uploads happen for versions 1.0 and 2.3.

Removes the "include-yarn" profile, as now all Hadoop2 profiles work with 
YARN.
Also drops the YARN-tests for Hadoop 2.3, because of an apparent bug in the 
hostname resolution affecting Hadoop 2.3.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink travis_matrix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1146


commit 27d4e77b5b1c2d0e8e77b9924ddecbc9b0b3808c
Author: Stephan Ewen 
Date:   2015-09-18T10:47:07Z

[FLINK-2643] [build] Update Travis build matrix and change snapshot deploy 
profiles






[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875664#comment-14875664
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141461360
  
Let's merge this and see if it fixes the problem...


> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2616] [test-stability] Hardens ZooKeepe...

2015-09-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141461360
  
Let's merge this and see if it fixes the problem...




[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875663#comment-14875663
 ] 

ASF GitHub Bot commented on FLINK-2488:
---

Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-141461094
  
Is it possible to get this in soon? I need access to the task manager 
configuration for something I'm working on. @StephanEwen 


> Expose attemptNumber in RuntimeContext
> --
>
> Key: FLINK-2488
> URL: https://issues.apache.org/jira/browse/FLINK-2488
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, TaskManager
>Affects Versions: 0.10
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>Priority: Minor
>
> It would be nice to expose the attemptNumber of a task in the 
> {{RuntimeContext}}. 
> This would allow user code to behave differently in restart scenarios.





[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...

2015-09-18 Thread sachingoel0101
Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-141461094
  
Is it possible to get this in soon? I need access to the task manager 
configuration for something I'm working on. @StephanEwen 




[jira] [Commented] (FLINK-2696) ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on Travis

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875661#comment-14875661
 ] 

ASF GitHub Bot commented on FLINK-2696:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1144#issuecomment-141460904
  
+1 to merge


> ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on 
> Travis
> 
>
> Key: FLINK-2696
> URL: https://issues.apache.org/jira/browse/FLINK-2696
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The {{ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest}} 
> failed on Travis with 
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Running 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 2
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createZookeeperClient(KafkaTestBase.java:278)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest(ZookeeperOffsetHandlerTest.java:44)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 43.695 sec 
> <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest
> runOffsetManipulationinZooKeeperTest(org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest)
>   Time elapsed: 21.258 sec  <<< FAILURE!
> java.lang.AssertionError: Unable to connect to zookeeper server within 
> timeout: 2
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.a

[GitHub] flink pull request: [FLINK-2696] [test-stability] Hardens Zookeepe...

2015-09-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1144#issuecomment-141460904
  
+1 to merge




[jira] [Commented] (FLINK-2583) Add Stream Sink For Rolling HDFS Files

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875653#comment-14875653
 ] 

ASF GitHub Bot commented on FLINK-2583:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1084


> Add Stream Sink For Rolling HDFS Files
> --
>
> Key: FLINK-2583
> URL: https://issues.apache.org/jira/browse/FLINK-2583
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>
> In addition to having configurable file-rolling behavior the Sink should also 
> integrate with checkpointing to make it possible to have exactly-once 
> semantics throughout the topology.





[jira] [Commented] (FLINK-1789) Allow adding of URLs to the usercode class loader

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875654#comment-14875654
 ] 

ASF GitHub Bot commented on FLINK-1789:
---

Github user aalexandrov commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141459804
  
@rmetzger the final 0.10. This and the Scala 2.11 compatibility are the two 
pending issues that make the current Emma master incompatible with vanilla 
Flink.


> Allow adding of URLs to the usercode class loader
> -
>
> Key: FLINK-1789
> URL: https://issues.apache.org/jira/browse/FLINK-1789
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Minor
>
> Currently, there is no option to add custom classpath URLs to the 
> FlinkUserCodeClassLoader. JARs always need to be shipped to the cluster even 
> if they are already present on all nodes.
> It would be great if RemoteEnvironment also accepted valid classpath URLs and 
> forwarded them to BlobLibraryCacheManager.
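In plain JDK terms, the requested capability corresponds to building a user-code class loader over extra URLs. A minimal sketch (the jar path is hypothetical, and this is an illustration of the mechanism, not Flink's actual FlinkUserCodeClassLoader):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ExtraClasspathExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical jar path; in the Flink scenario this would be a
        // jar already present on every node, so it need not be shipped.
        URL[] extra = { new URL("file:///opt/libs/already-on-nodes.jar") };
        try (URLClassLoader userCodeLoader = new URLClassLoader(
                extra, ExtraClasspathExample.class.getClassLoader())) {
            // Classes resolved through this loader see the extra URLs
            // in addition to everything the parent loader provides.
            System.out.println("extra URLs: " + userCodeLoader.getURLs().length);
        }
    }
}
```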





[GitHub] flink pull request: [FLINK-1789] [core] [runtime] [java-api] Allow...

2015-09-18 Thread aalexandrov
Github user aalexandrov commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141459804
  
@rmetzger the final 0.10. This and the Scala 2.11 compatibility are the two 
pending issues that make the current Emma master incompatible with vanilla 
Flink.




[GitHub] flink pull request: [FLINK-2583] Add Stream Sink For Rolling HDFS ...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1084




[jira] [Commented] (FLINK-2694) JobManagerProcessReapingTest.testReapProcessOnFailure failed on Travis

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875644#comment-14875644
 ] 

ASF GitHub Bot commented on FLINK-2694:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/1145

[FLINK-2694] [test-stability] Hardens the 
JobManagerProcessReapingTest.testReapProcessOnFailure test case

The `JobManagerProcessReapingTest.testReapProcessOnFailure` predetermines 
the port for the `JobManager` it starts by using 
`NetUtils.getAvailablePort`. This can cause problems if another process has, in 
the meantime, bound a socket to this port. In order to harden this test 
case, the `JobManager` is now allowed to pick a free port itself. The selected 
port is extracted from the logging output of the `JobManager`.
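The fix described above — letting the server bind port 0 and discovering the chosen port afterwards — can be sketched with plain JDK sockets. This is an illustration of the technique, not Flink's actual code:

```java
import java.net.ServerSocket;

public class PortZeroExample {
    public static void main(String[] args) throws Exception {
        // Binding to port 0 asks the OS for a free ephemeral port
        // atomically: there is no window between "find a free port"
        // and "bind it" in which another process could steal it.
        try (ServerSocket server = new ServerSocket(0)) {
            int chosenPort = server.getLocalPort();
            System.out.println("server bound to port " + chosenPort);
        }
    }
}
```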

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
fixJobManagerProcessReapingTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1145.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1145


commit 4332b8e8aafd43f766be5357e505297322d109c7
Author: Till Rohrmann 
Date:   2015-09-18T13:49:24Z

[FLINK-2694] [test-stability] Hardens the 
JobManagerProcessReapingTest.testReapProcessOnFailure test case by letting the 
JobManager choose its port instead of predetermining it via the 
NetUtils.getAvailablePort.




> JobManagerProcessReapingTest.testReapProcessOnFailure failed on Travis
> --
>
> Key: FLINK-2694
> URL: https://issues.apache.org/jira/browse/FLINK-2694
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> I observed a failing 
> {{JobManagerProcessReapingTest.testReapProcessOnFailure}} test case on 
> Travis. The reason for the test failure seems to be that the {{JobManager}} 
> could not be started. The reason for this was that Netty could not bind to 
> the specified port.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/80642036/log.txt





[jira] [Commented] (FLINK-1789) Allow adding of URLs to the usercode class loader

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875643#comment-14875643
 ] 

ASF GitHub Bot commented on FLINK-1789:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141457169
  
Do you mean the 0.10-milestone-1 release, or the final 0.10 release?

Fabian is managing the 0.10-milestone-1 release. As far as I know, he has not 
started the release activities yet, so we could still include it, given 
that we have consensus to merge the PR 


> Allow adding of URLs to the usercode class loader
> -
>
> Key: FLINK-1789
> URL: https://issues.apache.org/jira/browse/FLINK-1789
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Minor
>
> Currently, there is no option to add custom classpath URLs to the 
> FlinkUserCodeClassLoader. JARs always need to be shipped to the cluster even 
> if they are already present on all nodes.
> It would be great if RemoteEnvironment also accepted valid classpath URLs and 
> forwarded them to BlobLibraryCacheManager.





[GitHub] flink pull request: [FLINK-2694] [test-stability] Hardens the JobM...

2015-09-18 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/1145

[FLINK-2694] [test-stability] Hardens the 
JobManagerProcessReapingTest.testReapProcessOnFailure test case

The `JobManagerProcessReapingTest.testReapProcessOnFailure` predetermines 
the port for the `JobManager` it starts by using 
`NetUtils.getAvailablePort`. This can cause problems if another process has, in 
the meantime, bound a socket to this port. In order to harden this test 
case, the `JobManager` is now allowed to pick a free port itself. The selected 
port is extracted from the logging output of the `JobManager`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
fixJobManagerProcessReapingTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1145.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1145


commit 4332b8e8aafd43f766be5357e505297322d109c7
Author: Till Rohrmann 
Date:   2015-09-18T13:49:24Z

[FLINK-2694] [test-stability] Hardens the 
JobManagerProcessReapingTest.testReapProcessOnFailure test case by letting the 
JobManager choose its port instead of predetermining it via the 
NetUtils.getAvailablePort.






[GitHub] flink pull request: [FLINK-1789] [core] [runtime] [java-api] Allow...

2015-09-18 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141457169
  
Do you mean the 0.10-milestone-1 release, or the final 0.10 release?

Fabian is managing the 0.10-milestone-1 release. As far as I know, he has not 
started the release activities yet, so we could still include it, given 
that we have consensus to merge the PR 




[jira] [Commented] (FLINK-1789) Allow adding of URLs to the usercode class loader

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875637#comment-14875637
 ] 

ASF GitHub Bot commented on FLINK-1789:
---

Github user aalexandrov commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141455406
  
@twalthr @rmetzger is there a chance to include this PR in the 0.10 release?


> Allow adding of URLs to the usercode class loader
> -
>
> Key: FLINK-1789
> URL: https://issues.apache.org/jira/browse/FLINK-1789
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Minor
>
> Currently, there is no option to add custom classpath URLs to the 
> FlinkUserCodeClassLoader. JARs always need to be shipped to the cluster even 
> if they are already present on all nodes.
> It would be great if RemoteEnvironment also accepted valid classpath URLs and 
> forwarded them to BlobLibraryCacheManager.





[GitHub] flink pull request: [FLINK-1789] [core] [runtime] [java-api] Allow...

2015-09-18 Thread aalexandrov
Github user aalexandrov commented on the pull request:

https://github.com/apache/flink/pull/593#issuecomment-141455406
  
@twalthr @rmetzger is there a chance to include this PR in the 0.10 release?




[jira] [Created] (FLINK-2705) Yarn fails with NoSuchMethodError when log level is set to DEBUG

2015-09-18 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-2705:


 Summary: Yarn fails with NoSuchMethodError when log level is set 
to DEBUG
 Key: FLINK-2705
 URL: https://issues.apache.org/jira/browse/FLINK-2705
 Project: Flink
  Issue Type: Bug
Reporter: Till Rohrmann


The Yarn tests fail with a {{NoSuchMethodError}} when the log level is set to 
DEBUG. Apparently, a wrong version of {{org.apache.commons.codec}} is on the 
classpath when Yarn is executed. 

{code}
Exception in thread "LocalizerRunner for 
container_1442415732615_0009_01_01" java.lang.NoSuchMethodError: 
org.apache.commons.codec.binary.Base64.<init>(I[BZ)V
at org.apache.hadoop.security.token.Token.encodeWritable(Token.java:236)
at 
org.apache.hadoop.security.token.Token.encodeToUrlString(Token.java:263)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.writeCredentials(ResourceLocalizationService.java:1016)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:972)
Error while deploying YARN cluster: The YARN application unexpectedly switched 
to state FAILED during deployment. 
{code}

https://s3.amazonaws.com/archive.travis-ci.org/jobs/80641990/log.txt
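The constructor signature `Base64.<init>(I[BZ)V`, i.e. `Base64(int, byte[], boolean)`, was introduced in commons-codec 1.4, so this error pattern typically means an older codec (1.3 or earlier) shadows the version Hadoop's token encoding needs. A typical remedy, sketched as a hypothetical Maven fragment (the exact version pin is an assumption, not the fix actually applied):

```xml
<!-- Hypothetical dependencyManagement pin: Base64(int, byte[], boolean)
     exists only in commons-codec >= 1.4, which Hadoop's Token encoding
     relies on. Pinning prevents an older transitive 1.3 from winning. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Running `mvn dependency:tree -Dincludes=commons-codec` is the usual way to find which module drags in the conflicting version.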





[GitHub] flink pull request: [FLINK-2696] [test-stability] Hardens Zookeepe...

2015-09-18 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1144#issuecomment-141443252
  
+1 to merge




[jira] [Commented] (FLINK-2696) ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on Travis

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14852725#comment-14852725
 ] 

ASF GitHub Bot commented on FLINK-2696:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1144#issuecomment-141443252
  
+1 to merge


> ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on 
> Travis
> 
>
> Key: FLINK-2696
> URL: https://issues.apache.org/jira/browse/FLINK-2696
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The {{ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest}} 
> failed on Travis with 
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Running 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 2
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createZookeeperClient(KafkaTestBase.java:278)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest(ZookeeperOffsetHandlerTest.java:44)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 43.695 sec 
> <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest
> runOffsetManipulationinZooKeeperTest(org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest)
>   Time elapsed: 21.258 sec  <<< FAILURE!
> java.lang.AssertionError: Unable to connect to zookeeper server within 
> timeout: 2
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apac

[jira] [Commented] (FLINK-2696) ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on Travis

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14848542#comment-14848542
 ] 

ASF GitHub Bot commented on FLINK-2696:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/1144

[FLINK-2696] [test-stability] Hardens 
ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest

The test case 
`ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest` failed 
because the ZooKeeper client could not connect to the ZooKeeper server. I 
suspect that this might be caused by multiple test cases contending for the 
same port to bind to. Internally, Curator's `TestingServer` is started with a 
port determined via `NetUtils.getAvailablePort`. Since the 
port is not immediately bound, another process may take the 
determined port, thus preventing the `TestingServer` from starting. 
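The check-then-bind race described above can be illustrated with plain JDK sockets; this is a sketch of the failure mode, not Flink's or Curator's actual code:

```java
import java.net.ServerSocket;

public class PortRaceExample {
    public static void main(String[] args) throws Exception {
        // Step 1: probe a "free" port, NetUtils.getAvailablePort-style.
        int probed;
        try (ServerSocket probe = new ServerSocket(0)) {
            probed = probe.getLocalPort();
        }
        // Step 2: the probe socket is now closed, so between this point
        // and the real bind below any other process may grab `probed`.
        // That window is exactly what makes such tests flaky.
        try (ServerSocket server = new ServerSocket(probed)) {
            System.out.println("re-bound probed port " + probed);
        } // may instead throw java.net.BindException if the race is lost
    }
}
```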

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
fixZookeeperOffsetHandlerTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1144


commit 5889624ab6d345ced0be59bb71ca3dc3c84de0e1
Author: Till Rohrmann 
Date:   2015-09-18T12:28:56Z

[FLINK-2696] [test-stability] Hardens ZookeeperOffsetHandlerTest by letting 
Curator's TestingServer select the port to bind to instead of using 
NetUtils.getAvailablePort.




> ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest failed on 
> Travis
> 
>
> Key: FLINK-2696
> URL: https://issues.apache.org/jira/browse/FLINK-2696
> Project: Flink
>  Issue Type: Bug
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The {{ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest}} 
> failed on Travis with 
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerPartitionAssignmentTest
> Running 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 2
>   at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
>   at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createZookeeperClient(KafkaTestBase.java:278)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest(ZookeeperOffsetHandlerTest.java:44)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(

[GitHub] flink pull request: [FLINK-2696] [test-stability] Hardens Zookeepe...

2015-09-18 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/1144

[FLINK-2696] [test-stability] Hardens 
ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest

The test case
`ZookeeperOffsetHandlerTest.runOffsetManipulationinZooKeeperTest` failed
because the ZooKeeper client could not connect to the ZooKeeper server. I
suspect that this is caused by multiple test cases contending for the same
port. Internally, Curator's `TestingServer` is started with a port which has
been determined using `NetUtils.getAvailablePort`. Since the port is not
bound right away, another process can take the determined port and thus
prevent the `TestingServer` from starting.
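The pick-then-bind race described above can be sketched with plain Java sockets. This is an illustration under stated assumptions, not Flink or Curator code; the class and method names are made up:

```java
import java.io.IOException;
import java.net.ServerSocket;

/**
 * Sketch of why "find a free port, close it, bind it later" is racy:
 * between close() and the later bind, any other process may take the port.
 */
public class PortRaceDemo {

    /** Mimics the getAvailablePort idea: ask the OS for a free port, then release it. */
    static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } // the port is released here -- nothing reserves it for us anymore
    }

    public static void main(String[] args) throws IOException {
        int port = findFreePort();
        // Simulate "another process" grabbing the port before our server binds it.
        try (ServerSocket intruder = new ServerSocket(port)) {
            try (ServerSocket server = new ServerSocket(port)) {
                System.out.println("bound " + port); // never reached: the port is taken
            } catch (IOException e) {
                System.out.println("bind failed: port " + port + " already taken");
            }
        }
    }
}
```

Letting the server itself bind port 0 (as `TestingServer` can do) removes the window entirely, because the OS assigns and binds the port in a single step.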

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
fixZookeeperOffsetHandlerTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1144


commit 5889624ab6d345ced0be59bb71ca3dc3c84de0e1
Author: Till Rohrmann 
Date:   2015-09-18T12:28:56Z

[FLINK-2696] [test-stability] Hardens ZookeeperOffsetHandlerTest by letting 
Curator's TestingServer select the port to bind to instead of using 
NetUtils.getAvailablePort.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-2701) Getter for wrapped Java StreamExecutionEnvironment in the Scala Api

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2701.
-
Resolution: Fixed

Fixed in 6dcc38d532e68a08b0ba04d0d58f652e4140a68c

Thank you for the contribution!

> Getter for wrapped Java StreamExecutionEnvironment in the Scala Api
> ---
>
> Key: FLINK-2701
> URL: https://issues.apache.org/jira/browse/FLINK-2701
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Priority: Minor
> Fix For: 0.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1831) runtime.taskmanager.RegistrationTests fails spuriously

2015-09-18 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847317#comment-14847317
 ] 

Till Rohrmann commented on FLINK-1831:
--

The posted log is not really helpful in tracking down the problem. Maybe the 
timeout for the task manager registration is too low. We can increase it to see 
whether the problem occurs again.

> runtime.taskmanager.RegistrationTests fails spuriously
> --
>
> Key: FLINK-1831
> URL: https://issues.apache.org/jira/browse/FLINK-1831
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: 0.9
>Reporter: Márton Balassi
>Assignee: Stephan Ewen
> Fix For: 0.9
>
>
> This fails in a decent chunk of Travis builds; I could also reproduce it locally:
> akka.actor.InvalidActorNameException: actor name [FAKE_JOB_MANAGER] is not unique!
>   at akka.actor.dungeon.ChildrenContainer$TerminatingChildrenContainer.reserve(ChildrenContainer.scala:192)
>   at akka.actor.dungeon.Children$class.reserveChild(Children.scala:77)
>   at akka.actor.ActorCell.reserveChild(ActorCell.scala:369)
>   at akka.actor.dungeon.Children$class.makeChild(Children.scala:202)
>   at akka.actor.dungeon.Children$class.attachChild(Children.scala:42)
>   at akka.actor.ActorCell.attachChild(ActorCell.scala:369)
>   at akka.actor.ActorSystemImpl.actorOf(ActorSystem.scala:553)
>   at org.apache.flink.runtime.taskmanager.RegistrationTest$5.<init>(RegistrationTest.java:308)
>   at org.apache.flink.runtime.taskmanager.RegistrationTest.testTaskManagerResumesConnectAfterJobManagerFailure(RegistrationTest.java:266)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:74)
>   at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:211)
>   at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
> 09:46:39.486 [flink-akka.actor.default-dispatcher-2] [flink-akka.actor.default-dispatcher-3 - akka://flink/user/FAKE_JOB_MANAGER] ERROR akka.actor.OneForOneStrategy - Kill
> akka.actor.ActorKilledException: Kill
> java.lang.AssertionError: actor name [FAKE_JOB_MANAGER] is not unique!
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.apache.flink.runtime.taskmanager.RegistrationTest$5.<init>(RegistrationTest.java:328)
>   at org.apache.flink.runtime.taskmanager.RegistrationTest.testTaskManagerResumesConnectAfterJobManagerFailure(RegistrationTest.java:266)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect

[jira] [Resolved] (FLINK-2627) Make Scala Data Set utils easier to access

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2627.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed in 0c5ebc8b9361e48c32425f1235a86e5e1bf34976

Thank you for the contribution!

> Make Scala Data Set utils easier to access
> --
>
> Key: FLINK-2627
> URL: https://issues.apache.org/jira/browse/FLINK-2627
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala API
>Reporter: Sachin Goel
>Assignee: Sachin Goel
>Priority: Trivial
> Fix For: 0.10
>
>
> Currently, to use the Scala Data Set utility functions, one needs to import
> {{import org.apache.flink.api.scala.DataSetUtils.utilsToDataSet}}
> This is counter-intuitive, needlessly complicated, and should be more in sync
> with how the Java utils are imported. I propose a package object which allows
> importing the utils like
> {{import org.apache.flink.api.scala.utils._}}





[jira] [Closed] (FLINK-2702) Add an implementation of distributed copying utility using Flink

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2702.
---

> Add an implementation of distributed copying utility using Flink
> 
>
> Key: FLINK-2702
> URL: https://issues.apache.org/jira/browse/FLINK-2702
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 0.10
>Reporter: Stephan Ewen
> Fix For: 0.10
>
>
> Add the DistCP example proposed in this pull request: 
> https://github.com/apache/flink/pull/1090





[jira] [Closed] (FLINK-2627) Make Scala Data Set utils easier to access

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2627.
---

> Make Scala Data Set utils easier to access
> --
>
> Key: FLINK-2627
> URL: https://issues.apache.org/jira/browse/FLINK-2627
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala API
>Reporter: Sachin Goel
>Assignee: Sachin Goel
>Priority: Trivial
> Fix For: 0.10
>
>
> Currently, to use the Scala Data Set utility functions, one needs to import
> {{import org.apache.flink.api.scala.DataSetUtils.utilsToDataSet}}
> This is counter-intuitive, needlessly complicated, and should be more in sync
> with how the Java utils are imported. I propose a package object which allows
> importing the utils like
> {{import org.apache.flink.api.scala.utils._}}





[jira] [Resolved] (FLINK-2698) Add trailing newline to flink-conf.yaml

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2698.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via db49a3f097e1c1b9f87f5ec0dfba23d9a5faab1a

Thank you for the contribution!

> Add trailing newline to flink-conf.yaml
> ---
>
> Key: FLINK-2698
> URL: https://issues.apache.org/jira/browse/FLINK-2698
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 0.10
>
>
> The distributed flink-conf.yaml does not contain a trailing newline. This interferes with 
> [bdutil|https://github.com/GoogleCloudPlatform/bdutil/blob/master/extensions/flink/install_flink.sh#L64], 
> which appends extra/override configuration parameters with a heredoc.
> There are many other files without trailing newlines, but this looks to be 
> the only detrimental effect.
> {code}
> for i in $(find * -type f) ; do if diff /dev/null "$i" | tail -1 | grep '^\\ No newline' > /dev/null; then echo $i; fi; done
> {code}





[jira] [Resolved] (FLINK-2702) Add an implementation of distributed copying utility using Flink

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2702.
-
Resolution: Fixed

Added in b9148b667d24d4d30a0fd877848c1178e4be647a

> Add an implementation of distributed copying utility using Flink
> 
>
> Key: FLINK-2702
> URL: https://issues.apache.org/jira/browse/FLINK-2702
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 0.10
>Reporter: Stephan Ewen
> Fix For: 0.10
>
>
> Add the DistCP example proposed in this pull request: 
> https://github.com/apache/flink/pull/1090





[jira] [Resolved] (FLINK-2557) Manual type information via "returns" fails in DataSet API

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2557.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed in 3fe9145df37fd353229d1de297cc134f3a553c07

Thank you for the contribution!

> Manual type information via "returns" fails in DataSet API
> --
>
> Key: FLINK-2557
> URL: https://issues.apache.org/jira/browse/FLINK-2557
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Reporter: Matthias J. Sax
>Assignee: Chesnay Schepler
> Fix For: 0.10
>
>
> I changed the WordCount example as below and get an exception:
> Tokenizer is changed to this (removed generics and added a cast to String):
> {code:java}
> public static final class Tokenizer implements FlatMapFunction {
> 	public void flatMap(Object value, Collector out) {
> 		String[] tokens = ((String) value).toLowerCase().split("\\W+");
> 		for (String token : tokens) {
> 			if (token.length() > 0) {
> 				out.collect(new Tuple2(token, 1));
> 			}
> 		}
> 	}
> }
> {code}
> I added a call to "returns()" here:
> {code:java}
> DataSet<Tuple2<String, Integer>> counts =
> 	text.flatMap(new Tokenizer()).returns("Tuple2<String, Integer>")
> 	.groupBy(0).sum(1);
> {code}
> The exception is:
> {noformat}
> Exception in thread "main" java.lang.IllegalArgumentException: The types of the interface org.apache.flink.api.common.functions.FlatMapFunction could not be inferred. Support for synthetic interfaces, lambdas, and generic types is limited at this point.
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:686)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterTypeFromGenericType(TypeExtractor.java:710)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:673)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:365)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:279)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getFlatMapReturnTypes(TypeExtractor.java:120)
>   at org.apache.flink.api.java.DataSet.flatMap(DataSet.java:262)
>   at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:69)
> {noformat}
> Fix:
> This should not fail immediately, but instead only produce a "MissingTypeInfo" so that type hints would work.
> The error message is also wrong, btw: it should state that raw types are not supported.
> The issue has been reported here: 
> http://stackoverflow.com/questions/32122495/stuck-with-type-hints-in-clojure-for-generic-class
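Why a type extractor has nothing to work with in this situation can be seen with plain reflection. The interface below is a stand-in for Flink's `FlatMapFunction`, not the real one; the point is only that a raw implementation records no type arguments in the class file:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Arrays;

/**
 * Shows that reflection only sees generic type arguments when an interface
 * is implemented with concrete parameters, never for a raw implementation.
 */
public class RawTypeDemo {
    interface MyFlatMapFunction<I, O> { void flatMap(I value, O out); }

    // Raw implementation, like the Tokenizer in the report: no type arguments.
    static class RawImpl implements MyFlatMapFunction {
        public void flatMap(Object value, Object out) { }
    }

    // Typed implementation: the arguments are recorded and recoverable.
    static class TypedImpl implements MyFlatMapFunction<String, Integer> {
        public void flatMap(String value, Integer out) { }
    }

    static String describe(Class<?> clazz) {
        Type iface = clazz.getGenericInterfaces()[0];
        return (iface instanceof ParameterizedType)
                ? Arrays.toString(((ParameterizedType) iface).getActualTypeArguments())
                : "raw -- no type arguments available";
    }

    public static void main(String[] args) {
        System.out.println(describe(RawImpl.class));   // raw -- no type arguments available
        System.out.println(describe(TypedImpl.class)); // [class java.lang.String, class java.lang.Integer]
    }
}
```

This is exactly the gap a type hint via `returns()` is meant to fill, which is why failing before the hint can be applied is the bug.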





[jira] [Resolved] (FLINK-2704) Clean up naming of State/Checkpoint Interfaces

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2704.
-
Resolution: Fixed

Merged in 0571094885da2766dfb52a6fb38020cab7602114

> Clean up naming of State/Checkpoint Interfaces
> --
>
> Key: FLINK-2704
> URL: https://issues.apache.org/jira/browse/FLINK-2704
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>Priority: Minor
> Fix For: 0.10
>
>
> Add the name cleanups proposed in https://github.com/apache/flink/pull/671
> They refer to an internal interface that is implemented by stateful tasks 
> that are checkpointed. It consolidates three interfaces into one, because all 
> stateful tasks need to implement all three of them together anyways.
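The consolidation pattern described above can be sketched as follows. The interface and method names here are invented for illustration and are not Flink's actual interfaces:

```java
/**
 * Hypothetical sketch of merging three always-co-implemented interfaces
 * into a single one that stateful tasks implement.
 */
public class InterfaceConsolidationDemo {
    // Before: three separate concerns, always implemented together.
    interface Checkpointed { byte[] snapshotState(long checkpointId); }
    interface StateRestorable { void restoreState(byte[] state); }
    interface CheckpointNotified { void notifyCheckpointComplete(long checkpointId); }

    // After: one interface covering all three concerns.
    interface StatefulTask extends Checkpointed, StateRestorable, CheckpointNotified { }

    static class CountingTask implements StatefulTask {
        long count;
        public byte[] snapshotState(long id) { return Long.toString(count).getBytes(); }
        public void restoreState(byte[] state) { count = Long.parseLong(new String(state)); }
        public void notifyCheckpointComplete(long id) { /* e.g. discard older snapshots */ }
    }

    public static void main(String[] args) {
        CountingTask task = new CountingTask();
        task.count = 42;
        byte[] snapshot = task.snapshotState(1);
        CountingTask restored = new CountingTask();
        restored.restoreState(snapshot);
        System.out.println(restored.count); // 42
    }
}
```

The benefit is that framework code only needs to check for and call through one interface instead of three.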





[jira] [Closed] (FLINK-2704) Clean up naming of State/Checkpoint Interfaces

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2704.
---

> Clean up naming of State/Checkpoint Interfaces
> --
>
> Key: FLINK-2704
> URL: https://issues.apache.org/jira/browse/FLINK-2704
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
>Priority: Minor
> Fix For: 0.10
>
>
> Add the name cleanups proposed in https://github.com/apache/flink/pull/671
> They refer to an internal interface that is implemented by stateful tasks 
> that are checkpointed. It consolidates three interfaces into one, because all 
> stateful tasks need to implement all three of them together anyways.





[GitHub] flink pull request: Clean up naming of State/Checkpoint Interfaces

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/671




[jira] [Commented] (FLINK-2698) Add trailing newline to flink-conf.yaml

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847301#comment-14847301
 ] 

ASF GitHub Bot commented on FLINK-2698:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1142


> Add trailing newline to flink-conf.yaml
> ---
>
> Key: FLINK-2698
> URL: https://issues.apache.org/jira/browse/FLINK-2698
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> The distributed flink-conf.yaml does not contain a trailing newline. This interferes with 
> [bdutil|https://github.com/GoogleCloudPlatform/bdutil/blob/master/extensions/flink/install_flink.sh#L64], 
> which appends extra/override configuration parameters with a heredoc.
> There are many other files without trailing newlines, but this looks to be 
> the only detrimental effect.
> {code}
> for i in $(find * -type f) ; do if diff /dev/null "$i" | tail -1 | grep '^\\ No newline' > /dev/null; then echo $i; fi; done
> {code}





[GitHub] flink pull request: Getter for wrapped StreamExecutionEnvironment ...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1120




[jira] [Closed] (FLINK-2557) Manual type information via "returns" fails in DataSet API

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2557.
---

> Manual type information via "returns" fails in DataSet API
> --
>
> Key: FLINK-2557
> URL: https://issues.apache.org/jira/browse/FLINK-2557
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Reporter: Matthias J. Sax
>Assignee: Chesnay Schepler
> Fix For: 0.10
>
>
> I changed the WordCount example as below and get an exception:
> Tokenizer is changed to this (removed generics and added a cast to String):
> {code:java}
> public static final class Tokenizer implements FlatMapFunction {
> 	public void flatMap(Object value, Collector out) {
> 		String[] tokens = ((String) value).toLowerCase().split("\\W+");
> 		for (String token : tokens) {
> 			if (token.length() > 0) {
> 				out.collect(new Tuple2(token, 1));
> 			}
> 		}
> 	}
> }
> {code}
> I added a call to "returns()" here:
> {code:java}
> DataSet<Tuple2<String, Integer>> counts =
> 	text.flatMap(new Tokenizer()).returns("Tuple2<String, Integer>")
> 	.groupBy(0).sum(1);
> {code}
> The exception is:
> {noformat}
> Exception in thread "main" java.lang.IllegalArgumentException: The types of the interface org.apache.flink.api.common.functions.FlatMapFunction could not be inferred. Support for synthetic interfaces, lambdas, and generic types is limited at this point.
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:686)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterTypeFromGenericType(TypeExtractor.java:710)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:673)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:365)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:279)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getFlatMapReturnTypes(TypeExtractor.java:120)
>   at org.apache.flink.api.java.DataSet.flatMap(DataSet.java:262)
>   at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:69)
> {noformat}
> Fix:
> This should not fail immediately, but instead only produce a "MissingTypeInfo" so that type hints would work.
> The error message is also wrong, btw: it should state that raw types are not supported.
> The issue has been reported here: 
> http://stackoverflow.com/questions/32122495/stuck-with-type-hints-in-clojure-for-generic-class





[jira] [Closed] (FLINK-2698) Add trailing newline to flink-conf.yaml

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2698.
---

> Add trailing newline to flink-conf.yaml
> ---
>
> Key: FLINK-2698
> URL: https://issues.apache.org/jira/browse/FLINK-2698
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 0.10
>
>
> The distributed flink-conf.yaml does not contain a trailing newline. This interferes with 
> [bdutil|https://github.com/GoogleCloudPlatform/bdutil/blob/master/extensions/flink/install_flink.sh#L64], 
> which appends extra/override configuration parameters with a heredoc.
> There are many other files without trailing newlines, but this looks to be 
> the only detrimental effect.
> {code}
> for i in $(find * -type f) ; do if diff /dev/null "$i" | tail -1 | grep '^\\ No newline' > /dev/null; then echo $i; fi; done
> {code}





[jira] [Closed] (FLINK-2701) Getter for wrapped Java StreamExecutionEnvironment in the Scala Api

2015-09-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2701.
---

> Getter for wrapped Java StreamExecutionEnvironment in the Scala Api
> ---
>
> Key: FLINK-2701
> URL: https://issues.apache.org/jira/browse/FLINK-2701
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Priority: Minor
> Fix For: 0.10
>
>






[GitHub] flink pull request: [FLINK-2698] Add trailing newline to flink-con...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1142




[jira] [Commented] (FLINK-2627) Make Scala Data Set utils easier to access

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847300#comment-14847300
 ] 

ASF GitHub Bot commented on FLINK-2627:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1099


> Make Scala Data Set utils easier to access
> --
>
> Key: FLINK-2627
> URL: https://issues.apache.org/jira/browse/FLINK-2627
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala API
>Reporter: Sachin Goel
>Assignee: Sachin Goel
>Priority: Trivial
>
> Currently, to use the Scala Data Set utility functions, one needs to import
> {{import org.apache.flink.api.scala.DataSetUtils.utilsToDataSet}}
> This is counter-intuitive, needlessly complicated, and should be more in sync
> with how the Java utils are imported. I propose a package object which allows
> importing the utils like
> {{import org.apache.flink.api.scala.utils._}}





[GitHub] flink pull request: [FLINK-2627][utils]Make Scala Data Set utils e...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1099




[jira] [Commented] (FLINK-2557) Manual type information via "returns" fails in DataSet API

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847302#comment-14847302
 ] 

ASF GitHub Bot commented on FLINK-2557:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1045


> Manual type information via "returns" fails in DataSet API
> --
>
> Key: FLINK-2557
> URL: https://issues.apache.org/jira/browse/FLINK-2557
> Project: Flink
>  Issue Type: Bug
>  Components: Java API
>Reporter: Matthias J. Sax
>Assignee: Chesnay Schepler
>
> I changed the WordCount example as below and get an exception:
> Tokenizer is changed to this (removed generics and added a cast to String):
> {code:java}
> public static final class Tokenizer implements FlatMapFunction {
> 	public void flatMap(Object value, Collector out) {
> 		String[] tokens = ((String) value).toLowerCase().split("\\W+");
> 		for (String token : tokens) {
> 			if (token.length() > 0) {
> 				out.collect(new Tuple2(token, 1));
> 			}
> 		}
> 	}
> }
> {code}
> I added a call to "returns()" here:
> {code:java}
> DataSet<Tuple2<String, Integer>> counts =
> 	text.flatMap(new Tokenizer()).returns("Tuple2<String, Integer>")
> 	.groupBy(0).sum(1);
> {code}
> The exception is:
> {noformat}
> Exception in thread "main" java.lang.IllegalArgumentException: The types of the interface org.apache.flink.api.common.functions.FlatMapFunction could not be inferred. Support for synthetic interfaces, lambdas, and generic types is limited at this point.
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:686)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterTypeFromGenericType(TypeExtractor.java:710)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getParameterType(TypeExtractor.java:673)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:365)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:279)
>   at org.apache.flink.api.java.typeutils.TypeExtractor.getFlatMapReturnTypes(TypeExtractor.java:120)
>   at org.apache.flink.api.java.DataSet.flatMap(DataSet.java:262)
>   at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:69)
> {noformat}
> Fix:
> This should not fail immediately, but instead only produce a "MissingTypeInfo" so that type hints would work.
> The error message is also wrong, btw: it should state that raw types are not supported.
> The issue has been reported here: 
> http://stackoverflow.com/questions/32122495/stuck-with-type-hints-in-clojure-for-generic-class





[GitHub] flink pull request: Implementation of distributed copying utility ...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1090




[GitHub] flink pull request: [FLINK-2557] TypeExtractor properly returns Mi...

2015-09-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1045




[jira] [Commented] (FLINK-2200) Flink API with Scala 2.11 - Maven Repository

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847296#comment-14847296
 ] 

ASF GitHub Bot commented on FLINK-2200:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/885#issuecomment-141425832
  
Sure, I don't have time over the weekend to work on this anyways ;)


> Flink API with Scala 2.11 - Maven Repository
> 
>
> Key: FLINK-2200
> URL: https://issues.apache.org/jira/browse/FLINK-2200
> Project: Flink
>  Issue Type: Wish
>  Components: Build System, Scala API
>Reporter: Philipp Götze
>Assignee: Chiwan Park
>Priority: Trivial
>  Labels: maven
>
> It would be nice if you could upload a pre-built version of the Flink API 
> with Scala 2.11 to the maven repository.





[GitHub] flink pull request: [FLINK-2200] Add Flink with Scala 2.11 in Mave...

2015-09-18 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/885#issuecomment-141425832
  
Sure, I don't have time over the weekend to work on this anyways ;)




[GitHub] flink pull request: [FLINK-2200] Add Flink with Scala 2.11 in Mave...

2015-09-18 Thread chiwanpark
Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/885#issuecomment-141416793
  
I hope to create a shell script and set up deployment of the `_2.11` 
artifact. Could you wait a while for me to update this? Maybe I can update 
this PR by this weekend.




[jira] [Commented] (FLINK-2200) Flink API with Scala 2.11 - Maven Repository

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14824511#comment-14824511
 ] 

ASF GitHub Bot commented on FLINK-2200:
---

Github user chiwanpark commented on the pull request:

https://github.com/apache/flink/pull/885#issuecomment-141416793
  
I hope to create a shell script and set up deployment of the `_2.11` 
artifact. Could you wait a while for me to update this? Maybe I can update 
this PR by this weekend.
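For illustration, here is a sketch of the kind of rewriting such a deploy script would need to do. The POM snippet and module names below are invented for the example; a real script would operate on the actual Flink POMs in the same spirit, giving every Scala-dependent artifactId a `_2.11` suffix before deploying.

```shell
# Hypothetical sketch only: the POM snippet and module names are invented.
cat > pom-snippet.xml <<'EOF'
<artifactId>flink-scala</artifactId>
<artifactId>flink-streaming-scala</artifactId>
EOF

# Give each Scala-dependent artifactId a Scala-version suffix.
sed -i 's#<artifactId>\(flink-[a-z-]*\)</artifactId>#<artifactId>\1_2.11</artifactId>#' pom-snippet.xml

cat pom-snippet.xml  # each artifactId now ends in _2.11
```

The same substitution would be applied across all POMs before running the deploy, so that the `_2.11` artifacts can coexist in the repository with the default Scala 2.10 ones.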


> Flink API with Scala 2.11 - Maven Repository
> 
>
> Key: FLINK-2200
> URL: https://issues.apache.org/jira/browse/FLINK-2200
> Project: Flink
>  Issue Type: Wish
>  Components: Build System, Scala API
>Reporter: Philipp Götze
>Assignee: Chiwan Park
>Priority: Trivial
>  Labels: maven
>
> It would be nice if you could upload a pre-built version of the Flink API 
> with Scala 2.11 to the maven repository.





[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14818819#comment-14818819
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1143#issuecomment-141416101
  
I tried hard to reproduce the failing test case 
`ZooKeeperLeaderElectionTest.testMultipleLeaders`, but without success. I 
think the test case fails because of too much load on Travis. Therefore, I 
reduced the resources used by the `ZooKeeperLeaderElectionTest` and 
increased the timeouts in order to mitigate the problem.
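The timeout pattern being tuned here can be shown with a small, self-contained sketch. It mirrors (it does not reproduce) what a listener's `waitForLeader` does: block until a leader callback fires or a deadline passes, which is why generous timeouts matter on loaded CI machines. All class and method names below are invented for the example.

```java
import java.util.concurrent.TimeoutException;

public class LeaderWait {
    private final Object lock = new Object();
    private String leaderAddress; // null until a leader is announced

    public void notifyLeader(String address) {
        synchronized (lock) {
            leaderAddress = address;
            lock.notifyAll();
        }
    }

    /** Blocks until a leader is announced or timeoutMillis elapses. */
    public String waitForLeader(long timeoutMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (lock) {
            while (leaderAddress == null) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    throw new TimeoutException(
                        "Listener was not notified about a leader within "
                            + timeoutMillis + "ms");
                }
                lock.wait(remaining);
            }
            return leaderAddress;
        }
    }

    public static void main(String[] args) throws Exception {
        final LeaderWait listener = new LeaderWait();
        // Announce a leader from another thread after a short delay,
        // as an election service would on a callback thread.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            listener.notifyLeader("leader-address");
        }).start();
        // A generous deadline leaves headroom for slow, loaded CI machines.
        System.out.println(listener.waitForLeader(200_000L)); // prints "leader-address"
    }
}
```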


> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799







[jira] [Commented] (FLINK-2616) Failing Test: ZooKeeperLeaderElectionTest

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14810734#comment-14810734
 ] 

ASF GitHub Bot commented on FLINK-2616:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/1143

[FLINK-2616] [test-stability] Hardens ZooKeeperLeaderElectionTest

Replaces Curator's TestingCluster with TestingServer in 
ZooKeeperElection/RetrievalTests. This should be more lightweight. Increased 
the timeout to 200s in ZooKeeperLeaderElectionTest.

Adds logging statements to ZooKeeperLeaderElection/RetrievalService.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink 
fixZooKeeperLeaderElectionTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1143


commit dd26ffc0688c89ec67ef738d050eb3311514d717
Author: Till Rohrmann 
Date:   2015-09-16T14:01:13Z

[FLINK-2616] [test-stability] Replaces Curator's TestingCluster with 
TestingServer in ZooKeeperElection/RetrievalTests. Increased the timeout to 
200s in ZooKeeperLeaderElectionTest.

Adds logging statements to ZooKeeperLeaderElection/RetrievalService




> Failing Test: ZooKeeperLeaderElectionTest
> -
>
> Key: FLINK-2616
> URL: https://issues.apache.org/jira/browse/FLINK-2616
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Matthias J. Sax
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {noformat}
> Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 146.262 sec 
> <<< FAILURE! - in 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest
> testMultipleLeaders(org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest)
>  Time elapsed: 22.329 sec <<< ERROR!
> java.util.concurrent.TimeoutException: Listener was not notified about a 
> leader within 2ms
> at 
> org.apache.flink.runtime.leaderelection.TestingListener.waitForLeader(TestingListener.java:69)
> at 
> org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionTest.testMultipleLeaders(ZooKeeperLeaderElectionTest.java:334)
> {noformat}
> https://travis-ci.org/mjsax/flink/jobs/78553799







[GitHub] flink pull request: Clean up naming of State/Checkpoint Interfaces

2015-09-18 Thread aljoscha
Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/671#issuecomment-141413305
  
ok, I'm merging it


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (FLINK-2583) Add Stream Sink For Rolling HDFS Files

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14805348#comment-14805348
 ] 

ASF GitHub Bot commented on FLINK-2583:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1084#issuecomment-141411612
  
Looks good, will merge this!


> Add Stream Sink For Rolling HDFS Files
> --
>
> Key: FLINK-2583
> URL: https://issues.apache.org/jira/browse/FLINK-2583
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>
> In addition to having configurable file-rolling behavior the Sink should also 
> integrate with checkpointing to make it possible to have exactly-once 
> semantics throughout the topology.
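The file-rolling behaviour described in the issue can be sketched in plain Java. This illustrates only the rolling idea, not the Flink connector's actual API or its checkpoint integration; all names are invented for the example: append records to a part file and start a new one once a size threshold is crossed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RollingWriter {
    private final Path dir;
    private final long maxBytes;
    private int partIndex = 0;

    RollingWriter(Path dir, long maxBytes) {
        this.dir = dir;
        this.maxBytes = maxBytes;
    }

    Path currentPart() {
        return dir.resolve("part-" + partIndex);
    }

    /** Appends one record, rolling to a new part file past the size threshold. */
    void write(String record) throws IOException {
        Path part = currentPart();
        if (Files.exists(part) && Files.size(part) >= maxBytes) {
            partIndex++; // roll: subsequent records go to a fresh part file
            part = currentPart();
        }
        Files.write(part, (record + "\n").getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rolling");
        RollingWriter writer = new RollingWriter(dir, 16); // tiny threshold for the demo
        for (int i = 0; i < 5; i++) {
            writer.write("record-" + i); // 9 bytes each, so the file rolls twice
        }
        System.out.println(Files.list(dir).count()); // prints 3
    }
}
```

An exactly-once sink additionally has to coordinate rolling with checkpoints (e.g. truncating or discarding in-progress files on recovery), which is what the issue asks for beyond this sketch.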





[jira] [Created] (FLINK-2704) Clean up naming of State/Checkpoint Interfaces

2015-09-18 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-2704:
---

 Summary: Clean up naming of State/Checkpoint Interfaces
 Key: FLINK-2704
 URL: https://issues.apache.org/jira/browse/FLINK-2704
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Affects Versions: 0.10
Reporter: Stephan Ewen
Assignee: Aljoscha Krettek
Priority: Minor
 Fix For: 0.10


Add the name cleanups proposed in https://github.com/apache/flink/pull/671

They refer to an internal interface that is implemented by stateful tasks that 
are checkpointed. It consolidates three interfaces into one, because all 
stateful tasks need to implement all three of them together anyway.





[jira] [Created] (FLINK-2703) Remove log4j classes from fat jar / document how to use Flink with logback

2015-09-18 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-2703:
-

 Summary: Remove log4j classes from fat jar / document how to use 
Flink with logback
 Key: FLINK-2703
 URL: https://issues.apache.org/jira/browse/FLINK-2703
 Project: Flink
  Issue Type: Task
  Components: Build System
Affects Versions: 0.10
Reporter: Robert Metzger
Assignee: Robert Metzger


Flink uses slf4j as the logging interface and log4j as the logging backend.
I got requests from users who want to use a different logging backend 
(logback) with Flink. Currently, it's quite hard for them, because they have to 
do a custom Flink build with log4j excluded.

The purpose of this JIRA is to ship a Flink build that is prepared for users to 
use a different logging backend.
I'm also going to document how to use logback with Flink.
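Until such a build ships, one stopgap, sketched here with placeholder artifact names and versions that should be checked against the Flink version in use, is to exclude the log4j binding on the application side and add logback instead:

```xml
<!-- Hedged sketch: artifactId and versions below are placeholders. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-core</artifactId>
  <version>${flink.version}</version>
  <exclusions>
    <!-- Keep the slf4j interface, drop the log4j backend binding. -->
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.1.3</version>
</dependency>
```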






[jira] [Created] (FLINK-2702) Add an implementation of distributed copying utility using Flink

2015-09-18 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-2702:
---

 Summary: Add an implementation of distributed copying utility 
using Flink
 Key: FLINK-2702
 URL: https://issues.apache.org/jira/browse/FLINK-2702
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Affects Versions: 0.10
Reporter: Stephan Ewen
 Fix For: 0.10


Add the DistCP example proposed in this pull request: 
https://github.com/apache/flink/pull/1090
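The idea behind a distributed copy can be sketched in plain Java, independent of the proposed Flink example (which would distribute the work across a cluster rather than a local thread pool). All names below are invented for the illustration: treat each source file as one unit of parallel work.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ParallelCopy {
    /** Copies every regular file in srcDir to dstDir, parallelism files at a time. */
    static void copyAll(Path srcDir, Path dstDir, int parallelism) throws Exception {
        List<Path> files;
        try (Stream<Path> listing = Files.list(srcDir)) {
            files = listing.filter(Files::isRegularFile).collect(Collectors.toList());
        }
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            List<Future<Object>> tasks = files.stream()
                    .map(f -> pool.submit((Callable<Object>) () -> {
                        Files.copy(f, dstDir.resolve(f.getFileName()),
                                StandardCopyOption.REPLACE_EXISTING);
                        return null;
                    }))
                    .collect(Collectors.toList());
            for (Future<Object> t : tasks) {
                t.get(); // surfaces any copy failure as an exception
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        Files.write(src.resolve("a.txt"), "hello".getBytes());
        Files.write(src.resolve("b.txt"), "world".getBytes());
        copyAll(src, dst, 2);
        System.out.println(new String(Files.readAllBytes(dst.resolve("a.txt")))); // prints hello
    }
}
```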





[jira] [Commented] (FLINK-2312) Random Splits

2015-09-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14805323#comment-14805323
 ] 

ASF GitHub Bot commented on FLINK-2312:
---

Github user sachingoel0101 closed the pull request at:

https://github.com/apache/flink/pull/921


> Random Splits
> -
>
> Key: FLINK-2312
> URL: https://issues.apache.org/jira/browse/FLINK-2312
> Project: Flink
>  Issue Type: Wish
>  Components: Machine Learning Library
>Reporter: Maximilian Alber
>Assignee: pietro pinoli
>Priority: Minor
>
> In machine learning applications it is common to split data sets into, e.g., 
> training and test sets.
> To the best of my knowledge there is at the moment no nice way in Flink to 
> split a data set randomly into several partitions according to some ratio.
> The wished-for semantics would be the same as those of Spark's RDD.randomSplit.
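The wished-for semantics can be sketched on a plain Java list (illustration only, not a Flink API; in a DataSet this assignment would run per element inside a map/filter): each element is assigned to exactly one partition with probability proportional to its weight.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class RandomSplit {
    /** Assigns each element to one partition with probability proportional to its weight. */
    static <T> List<List<T>> randomSplit(List<T> data, double[] weights, long seed) {
        double total = Arrays.stream(weights).sum();
        Random rnd = new Random(seed);
        List<List<T>> parts = new ArrayList<>();
        for (int i = 0; i < weights.length; i++) {
            parts.add(new ArrayList<>());
        }
        for (T element : data) {
            double r = rnd.nextDouble() * total;
            double acc = 0.0;
            for (int i = 0; i < weights.length; i++) {
                acc += weights[i];
                if (r < acc || i == weights.length - 1) {
                    parts.get(i).add(element); // each element lands in exactly one partition
                    break;
                }
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            data.add(i);
        }
        List<List<Integer>> split = randomSplit(data, new double[]{0.8, 0.2}, 42L);
        // Roughly an 80/20 split; together the parts cover the whole data set.
        System.out.println(split.get(0).size() + split.get(1).size()); // prints 1000
    }
}
```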




