[jira] [Updated] (FLINK-11420) Serialization of case classes containing a Map[String, Any] sometimes throws ArrayIndexOutOfBoundsException

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11420:
---
Labels: pull-request-available  (was: )

> Serialization of case classes containing a Map[String, Any] sometimes throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: FLINK-11420
> URL: https://issues.apache.org/jira/browse/FLINK-11420
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> We frequently run into random ArrayIndexOutOfBounds exceptions when Flink 
> tries to serialize Scala case classes containing a Map[String, Any] (Any 
> being String, Long, Int, or Boolean) with the FsStateBackend. (This probably 
> happens with any case class containing a type requiring Kryo, see this thread 
> for instance: 
> [http://mail-archives.apache.org/mod_mbox/flink-user/201710.mbox/%3cCANNGFpjX4gjV=df6tlfeojsb_rhwxs_ruoylqcqv2gvwqtt...@mail.gmail.com%3e])
> Disabling asynchronous snapshots seems to work around the problem, so maybe 
> something is not thread-safe in CaseClassSerializer.
> Our objects look like this:
> {code}
> case class Event(timestamp: Long, [...], content: Map[String, Any])
> case class EnrichedEvent(event: Event, additionalInfo: Map[String, Any])
> {code}
> I've looked at a few of the exceptions in a debugger. It always happens when 
> serializing the right-hand side of a tuple from EnrichedEvent -> Event -> 
> content, e.g. 13 from ("foo", 13) or false from ("bar", false).
> Stacktrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 0
>  at com.esotericsoftware.kryo.util.IntArray.pop(IntArray.java:157)
>  at com.esotericsoftware.kryo.Kryo.reference(Kryo.java:822)
>  at com.esotericsoftware.kryo.Kryo.copy(Kryo.java:863)
>  at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:217)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
>  at 
> org.apache.flink.api.scala.typeutils.TraversableSerializer.$anonfun$copy$1(TraversableSerializer.scala:69)
>  at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:234)
>  at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:465)
>  at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:465)
>  at 
> org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:69)
>  at 
> org.apache.flink.api.scala.typeutils.TraversableSerializer.copy(TraversableSerializer.scala:33)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
>  at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
>  at 
> org.apache.flink.api.common.typeutils.base.ListSerializer.copy(ListSerializer.java:99)
>  at 
> org.apache.flink.api.common.typeutils.base.ListSerializer.copy(ListSerializer.java:42)
>  at 
> org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:287)
>  at 
> org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:311)
>  at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:95)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:391)
>  at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
>  at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
>  at java.base/java.lang.Thread.run(Thread.java:834){code}
>  
>  
>  
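The suspected concurrency failure can be sketched with a deliberately simplified model (the `ToyCopier` class below is hypothetical, not Flink's or Kryo's actual code): a copier that tracks copy nesting with an internal stack, like Kryo's reference-tracking `IntArray`, breaks when two threads interleave on one shared instance, while per-thread `duplicate()` instances stay consistent.

```java
import java.util.ArrayDeque;

/** Toy copier mimicking Kryo's reference-tracking stack (hypothetical, for illustration). */
class ToyCopier {
    private final ArrayDeque<String> refStack = new ArrayDeque<>();

    void beginCopy(String obj) { refStack.push(obj); }

    String endCopy() {
        if (refStack.isEmpty()) {
            // Analogue of Kryo's ArrayIndexOutOfBoundsException on IntArray.pop()
            throw new IllegalStateException("pop on empty reference stack");
        }
        return refStack.pop();
    }

    /** A correct duplicate() returns an independent instance with its own stack. */
    ToyCopier duplicate() { return new ToyCopier(); }
}

public class SharedSerializerDemo {
    public static void main(String[] args) {
        ToyCopier shared = new ToyCopier();
        // Interleaving of two logical threads sharing one copier:
        shared.beginCopy("a");   // thread 1 enters a copy
        shared.endCopy();        // thread 2 pops thread 1's frame
        boolean failed = false;
        try {
            shared.endCopy();    // thread 1's own pop now underflows
        } catch (IllegalStateException e) {
            failed = true;
        }
        System.out.println("shared copier underflowed: " + failed);

        // With per-thread duplicates, each stack is private and stays balanced:
        ToyCopier t1 = shared.duplicate();
        ToyCopier t2 = shared.duplicate();
        t1.beginCopy("a");
        t2.beginCopy("b");
        System.out.println(t1.endCopy());  // a
        System.out.println(t2.endCopy());  // b
    }
}
```

This is why disabling asynchronous snapshots hides the problem: with synchronous snapshots, only one thread ever touches the copier at a time.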



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dawidwys opened a new pull request #7873: [FLINK-11420][core][bp1.8] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys opened a new pull request #7873: [FLINK-11420][core][bp1.8] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7873
 
 
   ## What is the purpose of the change
   
   The duplicate method of TypeSerializer used an equality check rather
   than a reference check on the element serializer to decide whether a
   deep copy is needed. This commit uses a proper reference comparison.
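   The distinction matters because serializers often implement `equals` structurally: a freshly created deep copy of a stateful child serializer can still be `equals` to the original, so an equality check wrongly concludes that the enclosing serializer can be shared. A minimal sketch of the corrected pattern (hypothetical class names, not Flink's actual code):

```java
import java.util.Objects;

/** Hypothetical child serializer with structural equality. */
class ChildSerializer {
    final String type;
    final boolean stateless;
    ChildSerializer(String type, boolean stateless) { this.type = type; this.stateless = stateless; }

    /** Stateless serializers may share themselves; stateful ones return a fresh copy. */
    ChildSerializer duplicate() { return stateless ? this : new ChildSerializer(type, false); }

    @Override public boolean equals(Object o) {
        return o instanceof ChildSerializer && ((ChildSerializer) o).type.equals(type);
    }
    @Override public int hashCode() { return Objects.hash(type); }
}

class OuterSerializer {
    final ChildSerializer child;
    OuterSerializer(ChildSerializer child) { this.child = child; }

    OuterSerializer duplicate() {
        ChildSerializer dup = child.duplicate();
        // Correct: reference comparison. An equals() check would be true even
        // for a fresh stateful copy, wrongly sharing the outer serializer.
        return (dup == child) ? this : new OuterSerializer(dup);
    }
}

public class DuplicateDemo {
    public static void main(String[] args) {
        OuterSerializer statelessOuter = new OuterSerializer(new ChildSerializer("long", true));
        OuterSerializer statefulOuter  = new OuterSerializer(new ChildSerializer("kryo", false));
        System.out.println(statelessOuter.duplicate() == statelessOuter); // true: safe to share
        System.out.println(statefulOuter.duplicate() == statefulOuter);   // false: fresh instance
    }
}
```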
   
   ## Brief change log
   
 - enabled additional tests in SerializerTestInstance
 - fixed duplicate method of TraversableSerializer
   
   
   ## Verifying this change
   
   * enabled additional tests (including a `duplicate` method test) in 
`SerializerTestInstance`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (**yes** / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7873: [FLINK-11420][core][bp1.8] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
flinkbot commented on issue #7873: [FLINK-11420][core][bp1.8] Fixed duplicate 
method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7873#issuecomment-468577984
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] libenchao closed pull request #7868: [hotfix][datastream] Fix typo in JoinedStreams

2019-03-01 Thread GitBox
libenchao closed pull request #7868: [hotfix][datastream] Fix typo in 
JoinedStreams
URL: https://github.com/apache/flink/pull/7868
 
 
   




[jira] [Updated] (FLINK-11420) Serialization of case classes containing a Map[String, Any] sometimes throws ArrayIndexOutOfBoundsException

2019-03-01 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-11420:
-
Priority: Blocker  (was: Critical)

> Serialization of case classes containing a Map[String, Any] sometimes throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: FLINK-11420
> URL: https://issues.apache.org/jira/browse/FLINK-11420
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[GitHub] libenchao closed pull request #7869: [hotfix][datastream] Fix typo in WindowedStream

2019-03-01 Thread GitBox
libenchao closed pull request #7869: [hotfix][datastream] Fix typo in 
WindowedStream
URL: https://github.com/apache/flink/pull/7869
 
 
   




[GitHub] libenchao closed pull request #7870: [hotfix][datastream] Fix wrong javadoc for OneInputTransformation.getOperator()

2019-03-01 Thread GitBox
libenchao closed pull request #7870: [hotfix][datastream] Fix wrong javadoc for 
OneInputTransformation.getOperator()
URL: https://github.com/apache/flink/pull/7870
 
 
   




[GitHub] libenchao closed pull request #7871: [hotfix][datastream] Fix wrong javadoc for TwoInputTransformation

2019-03-01 Thread GitBox
libenchao closed pull request #7871: [hotfix][datastream] Fix wrong javadoc for 
TwoInputTransformation
URL: https://github.com/apache/flink/pull/7871
 
 
   




[jira] [Updated] (FLINK-8390) Refactor Hadoop kerberos integration test code

2019-03-01 Thread Robert Metzger (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-8390:
--
Component/s: Deployment / YARN

> Refactor Hadoop kerberos integration test code
> --
>
> Key: FLINK-8390
> URL: https://issues.apache.org/jira/browse/FLINK-8390
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.5.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>
> As suggested in 
> [FLINK-8270|https://issues.apache.org/jira/browse/FLINK-8270] and 
> [FLINK-8275|https://issues.apache.org/jira/browse/FLINK-8275], we want to 
> refactor, and possibly remove, the Hadoop Kerberos integration test code from 
> the main code.





[jira] [Commented] (FLINK-11420) Serialization of case classes containing a Map[String, Any] sometimes throws ArrayIndexOutOfBoundsException

2019-03-01 Thread Dawid Wysakowicz (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781413#comment-16781413
 ] 

Dawid Wysakowicz commented on FLINK-11420:
--

I found out that there is a bug in the {{TraversableSerializer#duplicate}} method.

> Serialization of case classes containing a Map[String, Any] sometimes throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: FLINK-11420
> URL: https://issues.apache.org/jira/browse/FLINK-11420
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.7.1
>Reporter: Jürgen Kreileder
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>





[GitHub] aljoscha commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
aljoscha commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261509197
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   Do you know why these were not called before? Also, why do we need the 
`try-catch` around this now?




[GitHub] dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261510127
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   I guess they were added later to the base class, and the person who added 
those cases was probably not aware of the existence of this class.
   
   The try-catch is needed because a few of the new test cases throw exceptions, 
whereas `testAll` doesn't declare any. I did not want to change every place 
where `testAll` is called. In the long run, though, I think we should revisit 
those places and get rid of this method altogether.
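   The pattern under discussion can be sketched as follows (hypothetical method names): a legacy entry point with no `throws` clause delegating to newer test methods that declare checked exceptions must wrap them, typically rethrowing as an unchecked error so existing callers stay unchanged.

```java
public class TestAllSketch {
    // Newer-style test method that declares a checked exception.
    static void testSerializeIndividually() throws Exception {
        // pretend a serialization round-trip happens here
    }

    // Legacy entry point without a throws clause: callers need no changes.
    public static void testAll() {
        try {
            testSerializeIndividually();
        } catch (Exception e) {
            throw new AssertionError("serializer test failed", e);
        }
    }

    public static void main(String[] args) {
        testAll();
        System.out.println("ok");
    }
}
```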




[jira] [Commented] (FLINK-11335) Kafka consumer can not commit offset at checkpoint

2019-03-01 Thread andy hoang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781469#comment-16781469
 ] 

andy hoang commented on FLINK-11335:


[~dawidwys], yep, I sent it to the mailing list. Thanks for your help. About the 
reproducible example: does it need to include the jar file with a local env 
(Kafka env) so that anyone who has a local cluster and a local Kafka server can 
use it? Do I need to include the script that pumps messages into Kafka (so that 
the Flink app can read them)?

In the description, I included all the code that is in the main file.

> Kafka consumer can not commit offset at checkpoint
> --
>
> Key: FLINK-11335
> URL: https://issues.apache.org/jira/browse/FLINK-11335
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.6.2
> Environment: AWS EMR 5.20: hadoop, flink plugin
> flink: 1.62
> run under yarn-cluster
> Kafka cluster: 1.0
>  
>Reporter: andy hoang
>Priority: Critical
> Attachments: repeated.log
>
>
> When trying to commit offsets to Kafka, I always get this warning:
> {noformat}
> 2019-01-15 11:18:55,405 WARN  
> org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher  - 
> Committing offsets to Kafka takes longer than the checkpoint interval. 
> Skipping commit of previous offsets because newer complete checkpoint offsets 
> are available. This does not compromise Flink's checkpoint integrity.
> {noformat}
> The result is that no offsets are committed to Kafka.
> The code was simplified by removing the business logic:
> {code:java}
>     val env = StreamExecutionEnvironment.getExecutionEnvironment
>     env.setStateBackend(new FsStateBackend("s3://pp-andy-test/checkpoint"))
>     env.enableCheckpointing(6000, CheckpointingMode.AT_LEAST_ONCE)
> env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE)
>     env.getCheckpointConfig.setMinPauseBetweenCheckpoints(500)
>     env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
>   
> env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
>     val properties = new Properties()
>     properties.setProperty("group.id", "my_groupid")
>     //properties.setProperty("enable.auto.commit", "false")
>     val consumer = new FlinkKafkaConsumer011[ObjectNode]("my_topic",
>   new JSONKeyValueDeserializationSchema(true),
>   
> properties).setStartFromGroupOffsets().setCommitOffsetsOnCheckpoints(true)
>     val stream = env.addSource(consumer)
>     
>     stream.map(new MapFunction[ObjectNode, Either[(Exception, ObjectNode), 
> (Int, ujson.Value)]] {
>   override def map(node:ObjectNode): scala.Either[(Exception, 
> ObjectNode), (Int, ujson.Value)] = {
>   logger.info("## 
> %s".format(node.get("metadata").toString))
>   Thread.sleep(3000)
>   return Right(200, writeJs(node.toString))
>   }
>     }).print()
>     env.execute("pp_convoy_flink")
>   }
> {code}
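The warning quoted above describes a skip policy: offset commits are issued asynchronously on checkpoint completion, and if a commit is still in flight when the next checkpoint completes, the older request is dropped rather than queued. A simplified model of that behaviour (the `OffsetCommitter` class is hypothetical, not the actual `Kafka09Fetcher` code) shows why a commit that always outlasts the checkpoint interval means offsets rarely reach Kafka:

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Hypothetical sketch of the "skip while a commit is in flight" behaviour. */
class OffsetCommitter {
    private final AtomicBoolean commitInProgress = new AtomicBoolean(false);
    long lastRequested = -1;   // newest checkpointed offset asked to commit
    long lastCommitted = -1;   // offset actually sent to Kafka
    int skipped = 0;           // commits skipped (the warning is logged here)

    /** Called on checkpoint completion with the offset to commit. */
    void onCheckpointComplete(long offset) {
        lastRequested = offset;
        if (!commitInProgress.compareAndSet(false, true)) {
            skipped++;          // previous commit still running -> skip this one
            return;
        }
        lastCommitted = offset; // pretend the async commit was issued
    }

    /** Called when the in-flight commit finishes. */
    void onCommitFinished() { commitInProgress.set(false); }
}

public class CommitSkipDemo {
    public static void main(String[] args) {
        OffsetCommitter c = new OffsetCommitter();
        c.onCheckpointComplete(100);   // commit of offset 100 starts
        c.onCheckpointComplete(200);   // 100 still in flight -> 200 skipped
        c.onCommitFinished();
        c.onCheckpointComplete(300);   // commits normally again
        System.out.println(c.lastCommitted + " " + c.skipped); // 300 1
    }
}
```

Note that skipped commits do not affect Flink's exactly-once/at-least-once guarantees, since restore positions come from checkpoints, not from the offsets committed to Kafka.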





[GitHub] zentol commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
zentol commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261521908
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   so this isn't even compiling at the moment?




[GitHub] dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261522541
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   it does




[GitHub] dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261522541
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   it is compiling




[jira] [Updated] (FLINK-3745) TimestampITCase testWatermarkPropagationNoFinalWatermarkOnStop failing intermittently

2019-03-01 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3745:

Labels: test-stability  (was: flaky-test)

> TimestampITCase testWatermarkPropagationNoFinalWatermarkOnStop failing 
> intermittently
> -
>
> Key: FLINK-3745
> URL: https://issues.apache.org/jira/browse/FLINK-3745
> Project: Flink
>  Issue Type: Bug
>Reporter: Todd Lisonbee
>Assignee: Stephan Ewen
>Priority: Minor
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> Test failed randomly in Travis,
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/122624297/log.txt
> {noformat}
> java.lang.Exception: Stopping the job with ID 
> ef892dfdf31b74a9a3da991d2240716e failed.
>   at 
> org.apache.flink.runtime.minicluster.LocalFlinkMiniCluster.stopJob(LocalFlinkMiniCluster.scala:283)
>   at 
> org.apache.flink.streaming.timestamp.TimestampITCase$1.run(TimestampITCase.java:213)
> Caused by: java.lang.IllegalStateException: Job with ID 
> ef892dfdf31b74a9a3da991d2240716e is in state FAILING but stopping is only 
> allowed in state RUNNING.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:577)
>   at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>   at 
> org.apache.flink.runtime.testingUtils.TestingJobManagerLike$$anonfun$handleTestingMessage$1.applyOrElse(TestingJobManagerLike.scala:90)
>   at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:113)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 15.087 sec 
> <<< FAILURE! - in org.apache.flink.streaming.timestamp.TimestampITCase
> testWatermarkPropagationNoFinalWatermarkOnStop(org.apache.flink.streaming.timestamp.TimestampITCase)
>   Time elapsed: 0.792 sec  <<< ERROR!
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:381)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:355)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:348)
>   at 
> org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.executeRemotely(RemoteStreamEnvironment.java:206)
>   at 
> org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.execute(RemoteStreamEnvironment.java:172)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1170)
>   at 
> org.apache.flink.streaming.timestamp.TimestampITCase.testWatermarkPropagationNoFinalWatermarkOnStop(TimestampITCase.java:223)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:805)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:751)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:751)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurr

[jira] [Updated] (FLINK-3744) LocalFlinkMiniClusterITCase times out occasionally when building locally

2019-03-01 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3744:

Labels: build test-stability  (was: build flaky-test)

> LocalFlinkMiniClusterITCase times out occasionally when building locally
> 
>
> Key: FLINK-3744
> URL: https://issues.apache.org/jira/browse/FLINK-3744
> Project: Flink
>  Issue Type: Bug
>Reporter: Todd Lisonbee
>Priority: Trivial
>  Labels: build, test-stability
>
> When building locally, 
> LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers 
> timed out.
> Out of many local builds this has only happened to me once. The test 
> immediately passed when I ran `mvn verify` a second time.
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 23.139 sec 
> <<< FAILURE! - in 
> org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase
> testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)
>   Time elapsed: 23.087 sec  <<< ERROR!
> java.util.concurrent.TimeoutException: Futures timed out after [1 
> milliseconds]
>   at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>   at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153)
>   at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:86)
>   at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:86)
>   at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>   at scala.concurrent.Await$.ready(package.scala:86)
>   at 
> org.apache.flink.runtime.minicluster.FlinkMiniCluster.waitForTaskManagersToBeRegistered(FlinkMiniCluster.scala:455)
>   at 
> org.apache.flink.runtime.minicluster.FlinkMiniCluster.waitForTaskManagersToBeRegistered(FlinkMiniCluster.scala:439)
>   at 
> org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:330)
>   at 
> org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:269)
>   at 
> org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:73)
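The stack trace above is the generic "bounded wait on a future" failure: Scala's `Await.ready` throws a `TimeoutException` when task manager registration does not complete within the deadline. A minimal Java sketch of the same pattern (illustrative only, not Flink code; `BoundedWait` is a hypothetical name):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative sketch, not Flink code: wait on a future with a deadline
// and report whether it completed in time.
public final class BoundedWait {

    public static boolean completedWithin(CompletableFuture<?> f, long millis) {
        try {
            f.get(millis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            // Deadline elapsed, analogous to the test failure above.
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A never-completing future makes `completedWithin` return false once the deadline elapses, which is how the mini-cluster startup surfaced the failure here.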



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261524907
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   I just realized, though, that there are some test failures for different 
serializers; investigating...




[jira] [Updated] (FLINK-3746) WebRuntimeMonitorITCase.testNoCopyFromJar failing intermittently

2019-03-01 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-3746:

Labels: test-stability  (was: flaky-test)

> WebRuntimeMonitorITCase.testNoCopyFromJar failing intermittently
> 
>
> Key: FLINK-3746
> URL: https://issues.apache.org/jira/browse/FLINK-3746
> Project: Flink
>  Issue Type: Bug
>Reporter: Todd Lisonbee
>Assignee: Boris Osipov
>Priority: Minor
>  Labels: test-stability
>
> Test failed randomly in Travis,
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/122624299/log.txt
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.127 sec 
> <<< FAILURE! - in org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
> testNoCopyFromJar(org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase)
>   Time elapsed: 0.124 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<200 OK> but was:<503 Service Unavailable>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase.testNoCopyFromJar(WebRuntimeMonitorITCase.java:456)
> Results :
> Failed tests: 
>   WebRuntimeMonitorITCase.testNoCopyFromJar:456 expected:<200 OK> but 
> was:<503 Service Unavailable>





[GitHub] dianfu commented on issue #7642: [FLINK-11516][table] Port and move some Descriptor classes to flink-table-common

2019-03-01 Thread GitBox
dianfu commented on issue #7642: [FLINK-11516][table] Port and move some 
Descriptor classes to flink-table-common
URL: https://github.com/apache/flink/pull/7642#issuecomment-468595671
 
 
   Hi @godfreyhe Thanks a lot for the review. I have addressed your comments 
except the BigDecimal/BigInteger one, as it seems to introduce a new feature 
to support BigDecimal/BigInteger as the min/max value. What about addressing 
that in a separate PR?




[GitHub] aljoscha commented on a change in pull request #7872: [FLINK-11420][core] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
aljoscha commented on a change in pull request #7872: [FLINK-11420][core] Fixed 
duplicate method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7872#discussion_r261530337
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/common/typeutils/SerializerTestInstance.java
 ##
 @@ -62,19 +62,27 @@ protected int getLength() {
}

// 

-   
+
public void testAll() {
-   testInstantiate();
-   testGetLength();
-   testCopy();
-   testCopyIntoNewElements();
-   testCopyIntoReusedElements();
-   testSerializeIndividually();
-   testSerializeIndividuallyReusingValues();
-   testSerializeAsSequenceNoReuse();
-   testSerializeAsSequenceReusingValues();
-   testSerializedCopyIndividually();
-   testSerializedCopyAsSequence();
-   testSerializabilityAndEquals();
+   try {
 
 Review comment:
   I think it would probably be cleaner to not catch the exception and simply 
declare that it throws, which might lead to other places where we need to 
declare it, as you said.
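A minimal sketch of that alternative (hypothetical names, not the actual `SerializerTestInstance`): `testAll()` declares `throws Exception` instead of catching, so failures propagate with their original stack trace and every caller must declare the exception in turn.

```java
// Hedged sketch of the "declare instead of catch" alternative discussed
// above. Class and method names are illustrative stand-ins, not Flink's
// actual SerializerTestInstance.
public class SerializerTestSketch {

    // Stand-ins for the individual serializer tests; the real ones
    // exercise a TypeSerializer and may throw IOException etc.
    protected void testInstantiate() throws Exception { }
    protected void testCopy() throws Exception { }
    protected void testSerializeIndividually() throws Exception { }

    // No try/catch here: the checked exception propagates to callers,
    // which is the ripple effect mentioned in the review thread.
    public void testAll() throws Exception {
        testInstantiate();
        testCopy();
        testSerializeIndividually();
    }
}
```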




[jira] [Updated] (FLINK-11379) "java.lang.OutOfMemoryError: Direct buffer memory" when TM loads a large size TDD

2019-03-01 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-11379:
-
Fix Version/s: 1.8.0

> "java.lang.OutOfMemoryError: Direct buffer memory" when TM loads a large size 
> TDD
> -
>
> Key: FLINK-11379
> URL: https://issues.apache.org/jira/browse/FLINK-11379
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When the TM loads an offloaded TDD of large size, it may throw a 
> "java.lang.OutOfMemoryError: Direct buffer memory" error. The loading uses 
> NIO's _Files.readAllBytes()_ to read the serialized TDD. In the call stack of 
> _Files.readAllBytes()_, a direct memory buffer is allocated whose size equals 
> the length of the file. This causes an OutOfMemoryError when direct memory is 
> insufficient.
> If the length of the file is larger than some maximum buffer size, a direct 
> buffer of at most that maximum size should be used to read the file's bytes, 
> avoiding the direct-memory OutOfMemoryError. The maximum buffer size could be 
> 8K, for example.
> The exception stack is as follows (this stack is from an old Flink version, 
> but the master branch has the same problem).
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
>    at java.nio.Bits.reserveMemory(Bits.java:706)
>    at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
>    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>    at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
>    at sun.nio.ch.IOUtil.read(IOUtil.java:195)
>    at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:182)
>    at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:65)
>    at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109)
>    at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>    at java.nio.file.Files.read(Files.java:3105)
>    at java.nio.file.Files.readAllBytes(Files.java:3158)
>    at 
> org.apache.flink.runtime.deployment.TaskDeploymentDescriptor.loadBigData(TaskDeploymentDescriptor.java:338)
>    at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.submitTask(TaskExecutor.java:397)
>    at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>    at java.lang.reflect.Method.invoke(Method.java:498)
>    at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:211)
>    at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:155)
>    at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:133)
>    at akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
>    at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>    at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>    ... 9 more
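The fix direction described in the issue can be sketched as follows. This is a hedged illustration under the issue's own assumption (a fixed 8K buffer), with hypothetical names (`BoundedFileRead`, `readAllBytesBounded`); it is not the actual Flink patch.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative sketch: read a file through one small, reusable direct
// buffer so direct-memory usage stays bounded regardless of file size,
// unlike Files.readAllBytes(), whose FileChannel read path grabs a
// temporary direct buffer as large as the file.
public final class BoundedFileRead {

    private static final int CHUNK = 8 * 1024; // 8K, as suggested in the issue

    public static byte[] readAllBytesBounded(Path file) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer buf = ByteBuffer.allocateDirect(CHUNK);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (ch.read(buf) != -1) {
                buf.flip();
                byte[] chunk = new byte[buf.remaining()];
                buf.get(chunk);
                out.write(chunk);
                buf.clear(); // reuse the same 8K buffer for the next read
            }
        }
        return out.toByteArray();
    }
}
```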





[jira] [Reopened] (FLINK-11379) "java.lang.OutOfMemoryError: Direct buffer memory" when TM loads a large size TDD

2019-03-01 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-11379:
--

> "java.lang.OutOfMemoryError: Direct buffer memory" when TM loads a large size 
> TDD
> -
>
> Key: FLINK-11379
> URL: https://issues.apache.org/jira/browse/FLINK-11379
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> (Issue description and stack trace identical to the first FLINK-11379 
> message above.)





[jira] [Updated] (FLINK-11379) OutOfMemoryError when loading large TaskDeploymentDescriptor

2019-03-01 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-11379:
-
Summary: OutOfMemoryError when loading large TaskDeploymentDescriptor  
(was: "java.lang.OutOfMemoryError: Direct buffer memory" when TM loads a large 
size TDD)

> OutOfMemoryError when loading large TaskDeploymentDescriptor
> 
>
> Key: FLINK-11379
> URL: https://issues.apache.org/jira/browse/FLINK-11379
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> (Issue description and stack trace identical to the first FLINK-11379 
> message above.)





[jira] [Closed] (FLINK-11379) OutOfMemoryError when loading large TaskDeploymentDescriptor

2019-03-01 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-11379.

Resolution: Fixed

> OutOfMemoryError when loading large TaskDeploymentDescriptor
> 
>
> Key: FLINK-11379
> URL: https://issues.apache.org/jira/browse/FLINK-11379
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0, 1.7.1
>Reporter: Haibo Sun
>Assignee: Haibo Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> (Issue description and stack trace identical to the first FLINK-11379 
> message above.)





[jira] [Assigned] (FLINK-11446) FlinkKafkaProducer011ITCase.testRecoverCommittedTransaction failed on Travis

2019-03-01 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-11446:
--

Assignee: Kostas Kloudas

> FlinkKafkaProducer011ITCase.testRecoverCommittedTransaction failed on Travis
> 
>
> Key: FLINK-11446
> URL: https://issues.apache.org/jira/browse/FLINK-11446
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>  Labels: test-stability
>
> The {{FlinkKafkaProducer011ITCase.testRecoverCommittedTransaction}} failed on 
> Travis with producing no output for 10 minutes: 
> https://api.travis-ci.org/v3/job/485771998/log.txt





[jira] [Assigned] (FLINK-11717) Translate the "Material" page into Chinese

2019-03-01 Thread Leonard Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-11717:
--

Assignee: Leonard Xu  (was: Jark Wu)

> Translate the "Material" page into Chinese
> --
>
> Key: FLINK-11717
> URL: https://issues.apache.org/jira/browse/FLINK-11717
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Major
>
> The markdown file is not created yet. So please copy the markdown file from 
> flink-web/material.md and translate it.
> The url link is: https://flink.apache.org/zh/material.html





[GitHub] dawidwys commented on issue #7873: [FLINK-11420][core][bp1.8] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys commented on issue #7873: [FLINK-11420][core][bp1.8] Fixed duplicate 
method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7873#issuecomment-468605438
 
 
   It requires some improvements, closing for now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] dawidwys closed pull request #7873: [FLINK-11420][core][bp1.8] Fixed duplicate method of TraversableSerializer

2019-03-01 Thread GitBox
dawidwys closed pull request #7873: [FLINK-11420][core][bp1.8] Fixed duplicate 
method of TraversableSerializer
URL: https://github.com/apache/flink/pull/7873
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] azagrebin commented on issue #7186: [FLINK-10941] Keep slots which contain unconsumed result partitions

2019-03-01 Thread GitBox
azagrebin commented on issue #7186: [FLINK-10941] Keep slots which contain 
unconsumed result partitions
URL: https://github.com/apache/flink/pull/7186#issuecomment-468605747
 
 
   @zhijiangW 
   Thanks for your answer. The consumer will close the connection on its side. 
   
   If the consumer fails, or somehow realises that the `CloseRequest` failed, 
then I think the job and producer will be canceled by the `JobMaster`, and the 
producer will get `sendFailIntermediateResultPartitionsRpcCall` to release the 
partitions.
   
   My question is: if the `CloseRequest` is not received by the producer, can 
there be a scenario where the consumer still treats the close operation as a 
success (if I understood you correctly) and reports success to the 
`JobMaster`? I might be missing a code path that would prevent the partition 
from lingering. How does the producer become aware that the consumer is done?




[jira] [Updated] (FLINK-10988) Improve debugging / visibility of job state

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-10988:
--
Component/s: (was: Runtime / Coordination)
 Runtime / Operators

> Improve debugging / visibility of job state
> ---
>
> Key: FLINK-10988
> URL: https://issues.apache.org/jira/browse/FLINK-10988
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Scott Sue
>Priority: Major
>
> When a Flink job is running and encounters an unexpected exception, either 
> through processing an unexpected message, or a message that may be well 
> formed but whose processing, given the current job state, triggers an 
> exception, it can be difficult to diagnose the cause of the issue. For 
> example, I would get an NPE in one of the operators:
> 2018-11-13 10:10:26,332 INFO 
> org.apache.flink.runtime.executiongraph.ExecutionGraph - 
> Co-Process-Broadcast-Keyed -> Map -> Map -> Sin
> k: Unnamed (1/1) (9a8f3b970570742b7b174a01a9bb1405) switched from RUNNING to 
> FAILED.
> java.lang.NullPointerException
>  at 
> com.celertech.analytics.flink.topology.marketimpact.PriceUtils.findPriceForEntryType(PriceUtils.java:28)
>  at 
> com.celertech.analytics.flink.topology.marketimpact.PriceUtils.getPriceForMarketDataEntryType(PriceUtils.java:18)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateBroadcaster.processBroadcastElement(MidRateBroadcaster.java:77)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateTagKeyedBroadcastProcessFunction.processBroadcastElement(MidRateTagKeyedBroa
> dcastProcessFunction.java:36)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateTagKeyedBroadcastProcessFunction.processBroadcastElement(MidRateTagKeyedBroa
> dcastProcessFunction.java:12)
>  at 
> org.apache.flink.streaming.api.operators.co.CoBroadcastWithKeyedOperator.processElement2(CoBroadcastWithKeyedOperator.java:121)
>  
> An improvement would be to allow printing the incoming message so the 
> developer can check whether that message was correct. Printing the state of 
> the job would be nice as well, in case an incorrect job state led to the 
> exception.





[jira] [Commented] (FLINK-10988) Improve debugging / visibility of job state

2019-03-01 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781516#comment-16781516
 ] 

Till Rohrmann commented on FLINK-10988:
---

Thanks for opening this issue [~scottsue]. Wouldn't it be possible to do this 
outside of Flink, as user code? You could catch exceptions and then print the 
state and the message you were processing at that moment. I would therefore 
like to close this issue.
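Till's suggestion can be sketched generically: wrap the user transformation in a try/catch that attaches the offending input to the rethrown exception. `DiagnosingWrapper` is a hypothetical helper using plain `java.util.function.Function`; in a real job the same pattern would go inside the body of `processBroadcastElement` or a `MapFunction`.

```java
import java.util.function.Function;

// Hypothetical helper (not a Flink API): wraps a transformation so that,
// on failure, the input element that triggered the exception is attached
// to the rethrown error for easier diagnosis.
public final class DiagnosingWrapper {

    public static <I, O> Function<I, O> withInputLogging(Function<I, O> f) {
        return input -> {
            try {
                return f.apply(input);
            } catch (RuntimeException e) {
                // Surface the offending element alongside the original cause.
                throw new RuntimeException(
                        "Failed processing element: " + input, e);
            }
        };
    }
}
```

The resulting exception carries the element that was being processed, so the NPE in the report above would arrive with its triggering input attached.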

> Improve debugging / visibility of job state
> ---
>
> Key: FLINK-10988
> URL: https://issues.apache.org/jira/browse/FLINK-10988
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Operators
>Reporter: Scott Sue
>Priority: Major
>
> When a running Flink job encounters an unexpected exception, whether from 
> processing an unexpected message or from a well-formed message that triggers 
> an exception because of the job's state, it can be difficult to diagnose the 
> cause of the issue.  For example, I would get an NPE in one of the 
> Operators:
> 2018-11-13 10:10:26,332 INFO 
> org.apache.flink.runtime.executiongraph.ExecutionGraph - 
> Co-Process-Broadcast-Keyed -> Map -> Map -> Sin
> k: Unnamed (1/1) (9a8f3b970570742b7b174a01a9bb1405) switched from RUNNING to 
> FAILED.
> java.lang.NullPointerException
>  at 
> com.celertech.analytics.flink.topology.marketimpact.PriceUtils.findPriceForEntryType(PriceUtils.java:28)
>  at 
> com.celertech.analytics.flink.topology.marketimpact.PriceUtils.getPriceForMarketDataEntryType(PriceUtils.java:18)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateBroadcaster.processBroadcastElement(MidRateBroadcaster.java:77)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateTagKeyedBroadcastProcessFunction.processBroadcastElement(MidRateTagKeyedBroadcastProcessFunction.java:36)
>  at 
> com.celertech.analytics.flink.function.midrate.MidRateTagKeyedBroadcastProcessFunction.processBroadcastElement(MidRateTagKeyedBroadcastProcessFunction.java:12)
>  at 
> org.apache.flink.streaming.api.operators.co.CoBroadcastWithKeyedOperator.processElement2(CoBroadcastWithKeyedOperator.java:121)
>  
> An improvement would be to allow printing of the incoming message so the 
> developer can diagnose whether that message was correct.  Printing the state 
> of the job would be useful as well, in case incorrect job state led to the 
> exception.
>  





[GitHub] uce opened a new pull request #7874: [FLINK-11752] [dist] Move flink-python to opt (backport)

2019-03-01 Thread GitBox
uce opened a new pull request #7874: [FLINK-11752] [dist] Move flink-python to 
opt (backport)
URL: https://github.com/apache/flink/pull/7874
 
 
   Backport of #7843 to `release-1.8`. Cherry picked d637d8613d without any 
conflicts.
   
   I talked to @aljoscha (1.8 release manager) before opening this PR and he 
agrees that this should go into 1.8.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] flinkbot commented on issue #7874: [FLINK-11752] [dist] Move flink-python to opt (backport)

2019-03-01 Thread GitBox
flinkbot commented on issue #7874: [FLINK-11752] [dist] Move flink-python to 
opt (backport)
URL: https://github.com/apache/flink/pull/7874#issuecomment-468607399
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[jira] [Closed] (FLINK-10683) Error while executing BLOB connection. java.io.IOException: Unknown operation

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-10683.
-
Resolution: Won't Do

The stack trace line numbers indicate that the reporter uses an old Flink 
version which we no longer support. Please re-open this issue if it also 
occurs in the latest Flink version.

> Error while executing BLOB connection. java.io.IOException: Unknown operation
> -
>
> Key: FLINK-10683
> URL: https://issues.apache.org/jira/browse/FLINK-10683
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Yee
>Priority: Major
>
> ERROR org.apache.flink.runtime.blob.BlobServerConnection- Error 
> while executing BLOB connection.
> java.io.IOException: Unknown operation 5
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:35,247 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 18
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:35,550 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- PUT operation 
> failed
> java.io.IOException: Unknown type of BLOB addressing.
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:347)
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:127)
> 2018-10-26 01:49:35,854 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 3
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:36,159 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- PUT operation 
> failed
> java.io.IOException: Unexpected number of incoming bytes: 50353152
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:368)
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:127)
> 2018-10-26 01:49:36,463 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 105
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:36,765 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 71
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:37,069 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 128
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:37,373 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- PUT operation 
> failed
> java.io.IOException: Unexpected number of incoming bytes: 4302592
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:368)
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:127)
> 2018-10-26 01:49:37,676 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 115
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
> 2018-10-26 01:49:37,980 ERROR 
> org.apache.flink.runtime.blob.BlobServerConnection- Error while 
> executing BLOB connection.
> java.io.IOException: Unknown operation 71
>   at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:136)
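The rejected operation codes in the log above (5, 18, 71, 105, 115, 128) look like arbitrary bytes rather than a consistent protocol error; 71, 105, and 115 are ASCII 'G', 'i', and 's', which suggests non-Flink traffic (e.g. an HTTP probe) hitting the BLOB server port. A minimal sketch of how a server of this shape rejects an unknown first byte follows; the operation constants here are hypothetical, not Flink's actual protocol values.

```java
import java.io.IOException;

// Sketch of first-byte dispatch as done by a BLOB-style connection handler.
// The constants are hypothetical placeholders, not Flink's real protocol.
public class BlobOpCheck {
    static final int PUT_OPERATION = 0;
    static final int GET_OPERATION = 1;

    /** Validates the first byte of a connection; unknown bytes are rejected. */
    static void dispatch(int operation) throws IOException {
        switch (operation) {
            case PUT_OPERATION:
            case GET_OPERATION:
                return; // would hand off to the PUT/GET handler
            default:
                throw new IOException("Unknown operation " + operation);
        }
    }

    /** Returns the error message produced for a stray byte, or null if accepted. */
    public static String probe(int firstByte) {
        try {
            dispatch(firstByte);
            return null;
        } catch (IOException e) {
            return e.getMessage();
        }
    }
}
```

Probing this with the first byte of an HTTP `GET` request ('G') reproduces the "Unknown operation 71" message seen in the log.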





[GitHub] uce opened a new pull request #7875: [FLINK-11533] [container] Scan class path for entry class (backport)

2019-03-01 Thread GitBox
uce opened a new pull request #7875: [FLINK-11533] [container] Scan class path 
for entry class (backport)
URL: https://github.com/apache/flink/pull/7875
 
 
   Backport of #7717 to `release-1.8`. Cherry picked bd60104dc1, b54fb2e626, 
34aa3f5ac9, 1f1cc86a82, 753e0c618a without any conflicts.
   
   I talked to @aljoscha (1.8 release manager) before opening this PR and he 
agrees that this should go into 1.8.
   




[jira] [Closed] (FLINK-9029) Getting write permission from HDFS after updating flink-1.40 to flink-1.4.2

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-9029.

Resolution: Won't Do

Flink 1.4.x is no longer supported. Please try out whether this problem also 
occurs with the latest Flink version and re-open this issue if it's still 
present.

> Getting write permission from HDFS after updating flink-1.40 to flink-1.4.2
> ---
>
> Key: FLINK-9029
> URL: https://issues.apache.org/jira/browse/FLINK-9029
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.4.1, 1.4.2
> Environment: * Flink-1.4.2 (Flink-1.4.1)
>  * Hadoop 2.6.0-cdh5.13.0 with 4 nodes in service and {{Security is off.}}
>  * Ubuntu 16.04.3 LTS
>  * Java 8
>Reporter: Mohammad Abareghi
>Priority: Major
>
> *Environment*
>  * Flink-1.4.2
>  * Hadoop 2.6.0-cdh5.13.0 with 4 nodes in service and {{Security is off.}}
>  * Ubuntu 16.04.3 LTS
>  * Java 8
>  
> *Description*
> I have a Java job in flink-1.4.0 which writes to HDFS to a specific path. 
> After updating to flink-1.4.2 I'm getting the following error from Hadoop 
> complaining that the user doesn't have write permission to the given path:
> {code:java}
> WARN org.apache.hadoop.security.UserGroupInformation: 
> PriviledgedActionException as:xng (auth:SIMPLE) 
> cause:org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=user1, access=WRITE, inode="/user":hdfs:hadoop:drwxr-xr-x
> {code}
> *NOTE*:
>  * If I run the same job on flink-1.4.0, the error disappears regardless of 
> which version of Flink (1.4.0 or 1.4.2) the job's dependencies use.
>  * Also, if I run the job's main method from my IDE and pass the same 
> parameters, I don't get the above error.
> *NOTE*:
> It seems the problem somehow is in 
> {{flink-1.4.2/lib/flink-shaded-hadoop2-uber-1.4.2.jar}}. If I replace that 
> with {{flink-1.4.0/lib/flink-shaded-hadoop2-uber-1.4.0.jar}}, restart the 
> cluster and run my job (flink topology) then the error doesn't appear.





[GitHub] flinkbot commented on issue #7875: [FLINK-11533] [container] Scan class path for entry class (backport)

2019-03-01 Thread GitBox
flinkbot commented on issue #7875: [FLINK-11533] [container] Scan class path 
for entry class (backport)
URL: https://github.com/apache/flink/pull/7875#issuecomment-468608532
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   




[GitHub] zentol commented on a change in pull request #7820: [FLINK-11742][Metrics]Push metrics to Pushgateway without "instance"

2019-03-01 Thread GitBox
zentol commented on a change in pull request #7820: [FLINK-11742][Metrics]Push 
metrics to Pushgateway without "instance"
URL: https://github.com/apache/flink/pull/7820#discussion_r261541662
 
 

 ##
 File path: 
flink-metrics/flink-metrics-prometheus/src/main/java/org/apache/flink/metrics/prometheus/PrometheusPushGatewayReporter.java
 ##
 @@ -73,7 +77,7 @@ public void open(MetricConfig config) {
@Override
public void report() {
try {
-   pushGateway.push(CollectorRegistry.defaultRegistry, 
jobName);
+   pushGateway.push(CollectorRegistry.defaultRegistry, 
jobName, instance);
 
 Review comment:
   We had a similar discussion back when we added the pushgateway reporter.
   
   The gist is that 2)/3) are not desirable since they make the internals of the 
reporter rather complex (additionally, for 2) the question remains what to do 
with clusters that aren't running a job), and 1) requires additional work 
outside of the metric system first to actually identify clusters. As of right 
now there's no concept of a cluster ID _anywhere_, and it doesn't make sense to 
introduce one for the Prometheus reporter specifically. The current 
implementation for `jobName` was very much just a compromise.
   
   The thing is the following: Ultimately a user can already group metrics in 
any way he/she desires. The jobName/instance stuff is, for our purposes, 
exclusively used to prevent conflicts between different containers. That's it, 
there's no semantic value in them since they (will, after FLINK-9543) only 
duplicate the container id label that we attach anyway for consistency with 
other reporters.
   As such it isn't necessary to introduce additional complexity or significant 
reporter-specific logic.




[GitHub] zentol commented on issue #7874: [FLINK-11752] [dist] Move flink-python to opt (backport)

2019-03-01 Thread GitBox
zentol commented on issue #7874: [FLINK-11752] [dist] Move flink-python to opt 
(backport)
URL: https://github.com/apache/flink/pull/7874#issuecomment-468610182
 
 
   Please remember to set the `fixVersion` to `1.8.0`.




[jira] [Commented] (FLINK-10658) org.apache.flink.util.FlinkException: Releasing shared slot parent.

2019-03-01 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781527#comment-16781527
 ] 

Till Rohrmann commented on FLINK-10658:
---

Could you share the logs with us to see what's going on on your cluster 
[~Xingbin]?

> org.apache.flink.util.FlinkException: Releasing shared slot parent.
> ---
>
> Key: FLINK-10658
> URL: https://issues.apache.org/jira/browse/FLINK-10658
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.5.4
>Reporter: chauncy
>Priority: Major
>
> I don't know when this exception is thrown. Could someone tell me? Thanks.





[GitHub] zentol closed pull request #2412: FLINK-4462 Add RUN_TIME retention policy for all the flink annotations

2019-03-01 Thread GitBox
zentol closed pull request #2412: FLINK-4462 Add RUN_TIME retention policy for 
all the flink annotations
URL: https://github.com/apache/flink/pull/2412
 
 
   




[jira] [Closed] (FLINK-4462) Add RUN_TIME retention policy for all the flink annotations

2019-03-01 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-4462.
---
Resolution: Won't Do

No clear use-case has been provided.

> Add RUN_TIME retention policy for all the flink annotations
> ---
>
> Key: FLINK-4462
> URL: https://issues.apache.org/jira/browse/FLINK-4462
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
>
> It is better to add a RUNTIME retention policy to the Flink annotations, so 
> that utilities/tests can be added to ensure the classes/interfaces are all 
> tagged with proper annotations. 





[GitHub] tillrohrmann closed pull request #7290: [FLINK-11137] [runtime] Fix unexpected RegistrationTimeoutException of TaskExecutor

2019-03-01 Thread GitBox
tillrohrmann closed pull request #7290: [FLINK-11137] [runtime] Fix unexpected 
RegistrationTimeoutException of TaskExecutor
URL: https://github.com/apache/flink/pull/7290
 
 
   




[GitHub] zentol closed pull request #3554: Detached (Remote)StreamEnvironment execution

2019-03-01 Thread GitBox
zentol closed pull request #3554: Detached (Remote)StreamEnvironment execution
URL: https://github.com/apache/flink/pull/3554
 
 
   




[GitHub] tillrohrmann commented on issue #7290: [FLINK-11137] [runtime] Fix unexpected RegistrationTimeoutException of TaskExecutor

2019-03-01 Thread GitBox
tillrohrmann commented on issue #7290: [FLINK-11137] [runtime] Fix unexpected 
RegistrationTimeoutException of TaskExecutor
URL: https://github.com/apache/flink/pull/7290#issuecomment-468614535
 
 
   Closing because this PR is superseded by #7808.




[jira] [Closed] (FLINK-11137) Unexpected RegistrationTimeoutException of TaskExecutor

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-11137.
-
Resolution: Duplicate

> Unexpected RegistrationTimeoutException of TaskExecutor
> ---
>
> Key: FLINK-11137
> URL: https://issues.apache.org/jira/browse/FLINK-11137
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.7.0
>Reporter: Biao Liu
>Assignee: Biao Liu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There is a race condition in {{TaskExecutor}} between starting the 
> registration with the RM and checking the registration timeout. Currently we 
> start the RM leader retriever first, and then start the registration timeout 
> check. If registration is fast enough, it may finish before the timeout check 
> is started, and the timeout check will then fail later.
> A stack trace of the exception is below:
> {quote}2018-11-05 14:16:52,464 ERROR 
> org.apache.flink.runtime.taskexecutor.TaskExecutor - Fatal error occurred in 
> TaskExecutor akka.tcp://flink@/user/taskmanager_0.
>  
> org.apache.flink.runtime.taskexecutor.exceptions.RegistrationTimeoutException:
>  Could not register at the ResourceManager within the specified maximum 
> registration duration 30 ms. This indicates a problem with this instance. 
> Terminating now.
>  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.registrationTimeout(TaskExecutor.java:1110)
>  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.lambda$startRegistrationTimeout$4(TaskExecutor.java:1096)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:332)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:158)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
>  at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>  at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:224)
> {quote}
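The race described above can be sketched in isolation: the timeout action must consult the current registration state when it fires, rather than failing unconditionally because it was armed after registration had already completed. This is a simplified illustration with hypothetical names, not the real TaskExecutor code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of registration-timeout bookkeeping. The safe check consults the
// current registration flag at fire time, so a registration that completed
// before the timeout check was armed is still observed.
public class RegistrationTimeoutSketch {
    private final AtomicBoolean registered = new AtomicBoolean(false);

    /** Called when registration with the ResourceManager succeeds. */
    public void completeRegistration() {
        registered.set(true);
    }

    /** Invoked when the registration timeout fires. */
    public boolean timeoutIsFatal() {
        // Only fatal if the TaskExecutor is still unregistered right now.
        return !registered.get();
    }
}
```

With this check, the ordering of "arm timeout" versus "finish registration" no longer matters: a fast registration simply makes the later timeout firing a no-op.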





[jira] [Updated] (FLINK-4462) Add RUN_TIME retention policy for all the flink annotations

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-4462:
--
Labels: pull-request-available  (was: )

> Add RUN_TIME retention policy for all the flink annotations
> ---
>
> Key: FLINK-4462
> URL: https://issues.apache.org/jira/browse/FLINK-4462
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
>  Labels: pull-request-available
>
> It is better to add a RUNTIME retention policy to the Flink annotations, so 
> that utilities/tests can be added to ensure the classes/interfaces are all 
> tagged with proper annotations. 





[GitHub] zentol commented on issue #4717: [FLINK-3831][build][WIP] POM Cleanup flink-streaming-java

2019-03-01 Thread GitBox
zentol commented on issue #4717: [FLINK-3831][build][WIP] POM Cleanup  
flink-streaming-java
URL: https://github.com/apache/flink/pull/4717#issuecomment-468615643
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol commented on issue #4715: [FLINK-3830][build][WIP] POM Cleanup flink-scala

2019-03-01 Thread GitBox
zentol commented on issue #4715: [FLINK-3830][build][WIP] POM Cleanup 
flink-scala 
URL: https://github.com/apache/flink/pull/4715#issuecomment-468615620
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol commented on issue #4716: [FLINK-3833][build][WIP] POM Cleanup flink-test-utils

2019-03-01 Thread GitBox
zentol commented on issue #4716: [FLINK-3833][build][WIP] POM Cleanup 
flink-test-utils
URL: https://github.com/apache/flink/pull/4716#issuecomment-468615626
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol commented on issue #4713: [FLINK-3828][build][WIP] POM Cleanup flink-runtime

2019-03-01 Thread GitBox
zentol commented on issue #4713: [FLINK-3828][build][WIP] POM Cleanup 
flink-runtime
URL: https://github.com/apache/flink/pull/4713#issuecomment-468615595
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol commented on issue #4714: [FLINK-7577][build][WIP] POM Cleanup flink-core

2019-03-01 Thread GitBox
zentol commented on issue #4714: [FLINK-7577][build][WIP] POM Cleanup flink-core
URL: https://github.com/apache/flink/pull/4714#issuecomment-468615611
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol closed pull request #4717: [FLINK-3831][build][WIP] POM Cleanup flink-streaming-java

2019-03-01 Thread GitBox
zentol closed pull request #4717: [FLINK-3831][build][WIP] POM Cleanup  
flink-streaming-java
URL: https://github.com/apache/flink/pull/4717
 
 
   




[GitHub] zentol closed pull request #4715: [FLINK-3830][build][WIP] POM Cleanup flink-scala

2019-03-01 Thread GitBox
zentol closed pull request #4715: [FLINK-3830][build][WIP] POM Cleanup 
flink-scala 
URL: https://github.com/apache/flink/pull/4715
 
 
   




[GitHub] zentol commented on issue #4719: [FLINK-3829][build][WIP] POM Cleanup flink-java

2019-03-01 Thread GitBox
zentol commented on issue #4719: [FLINK-3829][build][WIP] POM Cleanup flink-java
URL: https://github.com/apache/flink/pull/4719#issuecomment-468615664
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol closed pull request #4714: [FLINK-7577][build][WIP] POM Cleanup flink-core

2019-03-01 Thread GitBox
zentol closed pull request #4714: [FLINK-7577][build][WIP] POM Cleanup 
flink-core
URL: https://github.com/apache/flink/pull/4714
 
 
   




[GitHub] zentol commented on issue #4718: [FLINK-3832][build][WIP] POM Cleanup flink-streaming-scala

2019-03-01 Thread GitBox
zentol commented on issue #4718: [FLINK-3832][build][WIP] POM Cleanup 
flink-streaming-scala
URL: https://github.com/apache/flink/pull/4718#issuecomment-468615653
 
 
   We will have to redo this from scratch since too much time has passed.




[GitHub] zentol closed pull request #4716: [FLINK-3833][build][WIP] POM Cleanup flink-test-utils

2019-03-01 Thread GitBox
zentol closed pull request #4716: [FLINK-3833][build][WIP] POM Cleanup 
flink-test-utils
URL: https://github.com/apache/flink/pull/4716
 
 
   




[GitHub] zentol closed pull request #4713: [FLINK-3828][build][WIP] POM Cleanup flink-runtime

2019-03-01 Thread GitBox
zentol closed pull request #4713: [FLINK-3828][build][WIP] POM Cleanup 
flink-runtime
URL: https://github.com/apache/flink/pull/4713
 
 
   




[GitHub] zentol closed pull request #4718: [FLINK-3832][build][WIP] POM Cleanup flink-streaming-scala

2019-03-01 Thread GitBox
zentol closed pull request #4718: [FLINK-3832][build][WIP] POM Cleanup 
flink-streaming-scala
URL: https://github.com/apache/flink/pull/4718
 
 
   




[jira] [Updated] (FLINK-3830) Remove unused dependencies from flink-scala

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3830:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-scala
> ---
>
> Key: FLINK-3830
> URL: https://issues.apache.org/jira/browse/FLINK-3830
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-scala_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-shaded-hadoop2:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-test-utils-junit:jar:1.4-SNAPSHOT:test
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.commons:commons-lang3:jar:3.3.2:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]org.powermock:powermock-module-junit4:jar:1.6.5:test
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.mockito:mockito-all:jar:1.10.19:test
> [WARNING]org.powermock:powermock-api-mockito:jar:1.6.5:test
> [WARNING]
> org.apache.flink:flink-test-utils_2.11:test-jar:tests:1.4-SNAPSHOT:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
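
The dependency warnings quoted in these sub-tasks come from the Maven dependency plugin's analyze goal. As a sketch of how such cleanup can be enforced afterwards (the plugin configuration below is illustrative, not taken from Flink's actual POMs), the check can be bound to the build so regressions fail fast:

```xml
<!-- Illustrative snippet for a module pom.xml: run the analysis during
     the `verify` phase and fail the build on used-undeclared or
     unused-declared dependencies. Version 2.10 matches the output
     quoted above. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.10</version>
  <executions>
    <execution>
      <id>analyze-dependencies</id>
      <goals>
        <goal>analyze-only</goal>
      </goals>
      <configuration>
        <failOnWarning>true</failOnWarning>
        <!-- logging bindings are used only at runtime and routinely trip
             the analyzer; exclusions like this one are often needed -->
        <ignoredUnusedDeclaredDependencies>
          <ignoredUnusedDeclaredDependency>org.slf4j:slf4j-log4j12</ignoredUnusedDeclaredDependency>
        </ignoredUnusedDeclaredDependencies>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The same report can be produced ad hoc with `mvn dependency:analyze`, as in the quoted output.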


[jira] [Updated] (FLINK-3832) Remove unused dependencies from flink-streaming-scala

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3832:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-streaming-scala
> -
>
> Key: FLINK-3832
> URL: https://issues.apache.org/jira/browse/FLINK-3832
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-streaming-scala_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-core:jar:1.4-SNAPSHOT:compile
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]org.apache.flink:flink-test-utils-junit:jar:1.4-SNAPSHOT:test
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-runtime_2.11:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-java:jar:1.4-SNAPSHOT:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.powermock:powermock-api-mockito:jar:1.6.5:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> [WARNING]org.scala-lang:scala-compiler:jar:2.11.11:compile
> [WARNING]org.powermock:powermock-module-junit4:jar:1.6.5:test
> [WARNING]org.mockito:mockito-all:jar:1.10.19:test
> [WARNING]org.scala-lang:scala-reflect:jar:2.11.11:compile
> [WARNING]org.slf4j:slf4j-api:jar:1.7.7:compile
> [WARNING]
> org.apache.flink:flink-runtime_2.11:test-jar:tests:1.4-SNAPSHOT:test





[GitHub] zentol closed pull request #4719: [FLINK-3829][build][WIP] POM Cleanup flink-java

2019-03-01 Thread GitBox
zentol closed pull request #4719: [FLINK-3829][build][WIP] POM Cleanup 
flink-java
URL: https://github.com/apache/flink/pull/4719
 
 
   




[jira] [Updated] (FLINK-3831) Remove unused dependencies from flink-streaming-java

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3831:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-streaming-java
> 
>
> Key: FLINK-3831
> URL: https://issues.apache.org/jira/browse/FLINK-3831
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> {noformat}
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-streaming-java_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]org.scala-lang:scala-library:jar:2.11.11:compile
> [WARNING]commons-io:commons-io:jar:2.4:compile
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]com.fasterxml.jackson.core:jackson-databind:jar:2.7.4:compile
> [WARNING]org.apache.flink:flink-java:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-shaded-hadoop2:jar:1.4-SNAPSHOT:compile
> [WARNING]com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
> [WARNING]org.apache.flink:flink-optimizer_2.11:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.commons:commons-lang3:jar:3.3.2:compile
> [WARNING]org.powermock:powermock-core:jar:1.6.5:test
> [WARNING] Unused declared dependencies found:
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}





[jira] [Updated] (FLINK-3828) Remove unused dependencies from flink-runtime

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3828:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-runtime
> -
>
> Key: FLINK-3828
> URL: https://issues.apache.org/jira/browse/FLINK-3828
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> {noformat}
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-runtime_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]io.netty:netty:jar:3.8.0.Final:compile
> [WARNING]com.google.code.findbugs:annotations:jar:2.0.1:test
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]com.typesafe:config:jar:1.2.1:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]commons-io:commons-io:jar:2.4:compile
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]org.powermock:powermock-api-support:jar:1.6.5:test
> [WARNING]javax.xml.bind:jaxb-api:jar:2.2.2:compile
> [WARNING]commons-collections:commons-collections:jar:3.2.2:compile
> [WARNING]com.fasterxml.jackson.core:jackson-annotations:jar:2.7.4:compile
> [WARNING]org.powermock:powermock-core:jar:1.6.5:test
> [WARNING]org.powermock:powermock-reflect:jar:1.6.5:test
> [WARNING] Unused declared dependencies found:
> [WARNING]com.data-artisans:flakka-slf4j_2.11:jar:2.3-custom:compile
> [WARNING]org.reflections:reflections:jar:0.9.10:test
> [WARNING]org.javassist:javassist:jar:3.18.2-GA:compile
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]com.twitter:chill_2.11:jar:0.7.4:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}





[jira] [Updated] (FLINK-7577) Remove unused dependencies from flink-core

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-7577:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-core
> --
>
> Key: FLINK-7577
> URL: https://issues.apache.org/jira/browse/FLINK-7577
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> {noformat}
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ flink-core ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]org.powermock:powermock-core:jar:1.6.5:test
> [WARNING]org.powermock:powermock-reflect:jar:1.6.5:test
> [WARNING]org.objenesis:objenesis:jar:2.1:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.xerial.snappy:snappy-java:jar:1.1.1.3:compile
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.joda:joda-convert:jar:1.7:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}





[jira] [Updated] (FLINK-3833) Remove unused dependencies from flink-test-utils

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3833:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-test-utils
> 
>
> Key: FLINK-3833
> URL: https://issues.apache.org/jira/browse/FLINK-3833
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-test-utils_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-core:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-shaded-hadoop2:jar:1.4-SNAPSHOT:compile
> [WARNING]io.netty:netty:jar:3.8.0.Final:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]org.scala-lang:scala-library:jar:2.11.11:compile
> [WARNING]commons-io:commons-io:jar:2.4:compile
> [WARNING]com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
> [WARNING]org.apache.flink:flink-shaded-netty:jar:4.0.27.Final-1.0:compile
> [WARNING]org.apache.flink:flink-optimizer_2.11:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-java:jar:1.4-SNAPSHOT:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.apache.flink:flink-clients_2.11:jar:1.4-SNAPSHOT:compile
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]org.powermock:powermock-module-junit4:jar:1.6.5:test
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.mockito:mockito-all:jar:1.10.19:test
> [WARNING]org.powermock:powermock-api-mockito:jar:1.6.5:test
> [WARNING]org.apache.curator:curator-test:jar:2.12.0:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test





[jira] [Updated] (FLINK-3829) Remove unused dependencies from flink-java

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-3829:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-java
> --
>
> Key: FLINK-3829
> URL: https://issues.apache.org/jira/browse/FLINK-3829
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> {noformat}
> [WARNING] Used undeclared dependencies found:
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]commons-cli:commons-cli:jar:1.3.1:compile
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.powermock:powermock-module-junit4:jar:1.6.5:test
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.powermock:powermock-api-mockito:jar:1.6.5:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}





[GitHub] zentol commented on issue #4724: [FLINK-7690] [Cluster Management] Do not call actorSystem.awaitTermination from akka's main message han…

2019-03-01 Thread GitBox
zentol commented on issue #4724: [FLINK-7690] [Cluster Management] Do not call 
actorSystem.awaitTermination from akka's main message han…
URL: https://github.com/apache/flink/pull/4724#issuecomment-468616139
 
 
   JIRA has been closed.




[GitHub] zentol closed pull request #4724: [FLINK-7690] [Cluster Management] Do not call actorSystem.awaitTermination from akka's main message han…

2019-03-01 Thread GitBox
zentol closed pull request #4724: [FLINK-7690] [Cluster Management] Do not call 
actorSystem.awaitTermination from akka's main message han…
URL: https://github.com/apache/flink/pull/4724
 
 
   




[jira] [Updated] (FLINK-7574) Remove unused dependencies from flink-clients

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-7574:
--
Labels: pull-request-available  (was: )

> Remove unused dependencies from flink-clients
> -
>
> Key: FLINK-7574
> URL: https://issues.apache.org/jira/browse/FLINK-7574
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.3.2
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
>  Labels: pull-request-available
>
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-clients_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.scala-lang:scala-library:jar:2.11.11:compile
> [WARNING]com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]org.powermock:powermock-module-junit4:jar:1.6.5:test
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.powermock:powermock-api-mockito:jar:1.6.5:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test





[GitHub] zentol closed pull request #5076: [FLINK-7574][build] POM Cleanup flink-clients

2019-03-01 Thread GitBox
zentol closed pull request #5076: [FLINK-7574][build] POM Cleanup flink-clients
URL: https://github.com/apache/flink/pull/5076
 
 
   




[GitHub] zentol commented on issue #5076: [FLINK-7574][build] POM Cleanup flink-clients

2019-03-01 Thread GitBox
zentol commented on issue #5076: [FLINK-7574][build] POM Cleanup flink-clients
URL: https://github.com/apache/flink/pull/5076#issuecomment-468616292
 
 
   We will have to redo this from scratch since too much time has passed.




[jira] [Updated] (FLINK-7690) Do not call actorSystem.awaitTermination from the main akka message handling thread

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-7690:
--
Labels: flip-6 pull-request-available  (was: flip-6)

> Do not call actorSystem.awaitTermination from the main akka message handling 
> thread
> ---
>
> Key: FLINK-7690
> URL: https://issues.apache.org/jira/browse/FLINK-7690
> Project: Flink
>  Issue Type: Bug
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: flip-6, pull-request-available
>
> In flip-6, this bug causes the yarn job to hang forever with RUNNING status 
> when the enclosing flink job has already failed/finished.
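
The deadlock described in FLINK-7690 is not specific to Akka: any code that blocks waiting for the termination of the very execution context it is running on can never observe that termination. As a language-level illustration only (this is not Flink's actual code), the same effect can be reproduced with a plain `ExecutorService`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class AwaitTerminationDeadlock {

    /**
     * Calls awaitTermination from a task running on the pool's only thread.
     * Returns what awaitTermination observed: always false, because the pool
     * cannot terminate while the waiting task itself is still running.
     */
    static boolean awaitFromInside() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Boolean> observedTermination = pool.submit(() -> {
            pool.shutdown(); // stop accepting work; running tasks may finish
            // Blocks for the timeout, then reports false: termination would
            // require this very task to have finished already.
            return pool.awaitTermination(500, TimeUnit.MILLISECONDS);
        });
        return observedTermination.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("terminated from inside own task: " + awaitFromInside());
    }
}
```

With an unbounded wait instead of a timeout, the task hangs forever, which is the hypothesized mechanism behind the YARN job staying in RUNNING state.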





[jira] [Commented] (FLINK-10756) TaskManagerProcessFailureBatchRecoveryITCase did not finish on time

2019-03-01 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781538#comment-16781538
 ] 

Till Rohrmann commented on FLINK-10756:
---

The second issue should not be a problem if we allow queued scheduling.

The first problem could indicate that one combiner hasn't read all of its data 
(maybe it received an outdated partition descriptor).

> TaskManagerProcessFailureBatchRecoveryITCase did not finish on time
> ---
>
> Key: FLINK-10756
> URL: https://issues.apache.org/jira/browse/FLINK-10756
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Bowen Li
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.8.0
>
>
> {code:java}
> Failed tests: 
>   
> TaskManagerProcessFailureBatchRecoveryITCase>AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure:207
>  The program did not finish in time
> {code}
> https://travis-ci.org/apache/flink/jobs/449439623





[jira] [Updated] (FLINK-10737) FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis

2019-03-01 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-10737:
-
Priority: Critical  (was: Blocker)

> FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis
> 
>
> Key: FLINK-10737
> URL: https://issues.apache.org/jira/browse/FLINK-10737
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.7.0, 1.8.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.8.0
>
>
> The {{FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint}} failed on 
> Travis:
> https://api.travis-ci.org/v3/job/448781612/log.txt





[GitHub] zentol commented on issue #6493: [FLINK-10058] Add getStateBackend/setStateBackend API for PythonStreamExecutionEnvironment

2019-03-01 Thread GitBox
zentol commented on issue #6493: [FLINK-10058] Add 
getStateBackend/setStateBackend API for PythonStreamExecutionEnvironment
URL: https://github.com/apache/flink/pull/6493#issuecomment-468617533
 
 
   flink-streaming-python is in a limbo state and I can't get a clear answer 
whether we want to continue maintaining it. After the 1.8 release I'll start a 
mailing list thread about this very topic.
   
   For the time being I will close this PR, I apologize for wasting your time.




[GitHub] zentol commented on issue #6475: [FLINK-10012] Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2019-03-01 Thread GitBox
zentol commented on issue #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475#issuecomment-468617559
 
 
   flink-streaming-python is in a limbo state and I can't get a clear answer 
whether we want to continue maintaining it. After the 1.8 release I'll start a 
mailing list thread about this very topic.
   
   For the time being I will close this PR, I apologize for wasting your time.




[GitHub] zentol commented on issue #6476: [FLINK-10025] Add getCheckpointConfig API for PythonStreamExecutionEnvironment

2019-03-01 Thread GitBox
zentol commented on issue #6476: [FLINK-10025] Add getCheckpointConfig API for 
PythonStreamExecutionEnvironment
URL: https://github.com/apache/flink/pull/6476#issuecomment-468617542
 
 
   flink-streaming-python is in a limbo state and I can't get a clear answer 
whether we want to continue maintaining it. After the 1.8 release I'll start a 
mailing list thread about this very topic.
   
   For the time being I will close this PR, I apologize for wasting your time.




[GitHub] zentol closed pull request #6476: [FLINK-10025] Add getCheckpointConfig API for PythonStreamExecutionEnvironment

2019-03-01 Thread GitBox
zentol closed pull request #6476: [FLINK-10025] Add getCheckpointConfig API for 
PythonStreamExecutionEnvironment
URL: https://github.com/apache/flink/pull/6476
 
 
   




[GitHub] zentol closed pull request #6475: [FLINK-10012] Add setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API

2019-03-01 Thread GitBox
zentol closed pull request #6475: [FLINK-10012] Add 
setStreamTimeCharacteristic/getStreamTimeCharacteristic for Python API
URL: https://github.com/apache/flink/pull/6475
 
 
   




[jira] [Commented] (FLINK-10756) TaskManagerProcessFailureBatchRecoveryITCase did not finish on time

2019-03-01 Thread Till Rohrmann (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781544#comment-16781544
 ] 

Till Rohrmann commented on FLINK-10756:
---

I checked and the {{JobGraph}} is submitted with {{allowQueuedScheduling == 
true}}. 

> TaskManagerProcessFailureBatchRecoveryITCase did not finish on time
> ---
>
> Key: FLINK-10756
> URL: https://issues.apache.org/jira/browse/FLINK-10756
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Bowen Li
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.8.0
>
>
> {code:java}
> Failed tests: 
>   
> TaskManagerProcessFailureBatchRecoveryITCase>AbstractTaskManagerProcessFailureRecoveryTest.testTaskManagerProcessFailure:207
>  The program did not finish in time
> {code}
> https://travis-ci.org/apache/flink/jobs/449439623





[GitHub] zentol closed pull request #6493: [FLINK-10058] Add getStateBackend/setStateBackend API for PythonStreamExecutionEnvironment

2019-03-01 Thread GitBox
zentol closed pull request #6493: [FLINK-10058] Add 
getStateBackend/setStateBackend API for PythonStreamExecutionEnvironment
URL: https://github.com/apache/flink/pull/6493
 
 
   




[GitHub] zentol closed pull request #7325: [FLINK-11162][rest] Provide a rest API to list all logical operators

2019-03-01 Thread GitBox
zentol closed pull request #7325: [FLINK-11162][rest] Provide a rest API to 
list all logical operators
URL: https://github.com/apache/flink/pull/7325
 
 
   




[GitHub] zentol commented on issue #7325: [FLINK-11162][rest] Provide a rest API to list all logical operators

2019-03-01 Thread GitBox
zentol commented on issue #7325: [FLINK-11162][rest] Provide a rest API to list 
all logical operators
URL: https://github.com/apache/flink/pull/7325#issuecomment-468618728
 
 
   As outlined in the JIRA, I would like to have this, but the required changes 
for a proper implementation (one that covers metrics, backpressure, the web UI, 
etc.) are so significant that I don't believe anyone has time for this right 
now.




[jira] [Updated] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-11336:
--
Issue Type: Bug  (was: Improvement)

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove its ZK metadata.
> For example, in the ZooKeeper CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.





[GitHub] zentol closed pull request #6962: change accumulator value to be formatted with digit group separator

2019-03-01 Thread GitBox
zentol closed pull request #6962: change accumulator value to be formatted with 
digit group separator
URL: https://github.com/apache/flink/pull/6962
 
 
   




[GitHub] zentol closed pull request #7305: [FLINK-11161]Unable to import java packages in scala-shell

2019-03-01 Thread GitBox
zentol closed pull request #7305: [FLINK-11161]Unable to import java packages 
in scala-shell
URL: https://github.com/apache/flink/pull/7305
 
 
   




[GitHub] zentol commented on issue #7305: [FLINK-11161]Unable to import java packages in scala-shell

2019-03-01 Thread GitBox
zentol commented on issue #7305: [FLINK-11161]Unable to import java packages in 
scala-shell
URL: https://github.com/apache/flink/pull/7305#issuecomment-468619871
 
 
   JIRA has been closed.




[jira] [Assigned] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-03-01 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-11336:
-

Assignee: Till Rohrmann

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: shengjk1
>Assignee: Till Rohrmann
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove its ZK metadata.
> For example, in the ZooKeeper CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.





[GitHub] aljoscha opened a new pull request #7876: [FLINK-11751] Extend release notes for Flink 1.8

2019-03-01 Thread GitBox
aljoscha opened a new pull request #7876: [FLINK-11751] Extend release notes 
for Flink 1.8
URL: https://github.com/apache/flink/pull/7876
 
 
   




[GitHub] flinkbot commented on issue #7876: [FLINK-11751] Extend release notes for Flink 1.8

2019-03-01 Thread GitBox
flinkbot commented on issue #7876: [FLINK-11751] Extend release notes for Flink 
1.8
URL: https://github.com/apache/flink/pull/7876#issuecomment-468623662
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
 ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve the 1st aspect (similarly, it 
also supports the `consensus`, `architecture` and `quality` keywords)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11751) Extend release notes for Flink 1.8

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11751:
---
Labels: pull-request-available  (was: )

> Extend release notes for Flink 1.8
> --
>
> Key: FLINK-11751
> URL: https://issues.apache.org/jira/browse/FLINK-11751
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[GitHub] zhijiangW edited a comment on issue #7186: [FLINK-10941] Keep slots which contain unconsumed result partitions

2019-03-01 Thread GitBox
zhijiangW edited a comment on issue #7186: [FLINK-10941] Keep slots which 
contain unconsumed result partitions
URL: https://github.com/apache/flink/pull/7186#issuecomment-468624034
 
 
   @azagrebin , `sendFailIntermediateResultPartitionsRpcCall` is mainly used 
for the scenario where the producer task is no longer in the `TaskManager` 
(e.g. already FINISHED) but its `ResultPartition` might still exist there; in 
that case we can cancel its `ResultPartition` instead of cancelling the `Task`.
   
   I think I see your confusion about how the producer releases its partition 
when it never receives `ClosePartition` and the consumer exits successfully 
without a failover. The key point is detecting the inactive channel on the 
producer side. Once the consumer (TCP client) exits and closes the TCP channel 
on its side, the producer (TCP server) becomes aware of the inactive channel 
within a short time (via the TCP mechanism and netty), and then releases all 
the partitions and finally closes the TCP channel on the server side.
   
   I once ran into a scenario where, after the TCP client had closed, it took 
about two hours for the TCP server to notice and close, because of a hardware 
issue and the socket settings. We then added an idle handler in netty to 
detect the closed client in real time. In the common case, this detection is 
nearly real time. 
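
   The close-detection behaviour described above does not depend on any 
Flink-specific API; it can be sketched with plain sockets. This is a minimal 
Python illustration of the TCP mechanism only, not Flink's actual netty code, 
and the `producer`/"consumer" naming is made up for the example:

   ```python
   import socket
   import threading

   result = {}
   addr = {}
   ready = threading.Event()

   def producer():
       # The "producer" (TCP server). When the peer performs an orderly
       # shutdown, recv() returns b"" promptly -- that is how the server
       # learns the channel has become inactive.
       srv = socket.socket()
       srv.bind(("127.0.0.1", 0))
       srv.listen(1)
       addr["value"] = srv.getsockname()
       ready.set()
       conn, _ = srv.accept()
       result["data"] = conn.recv(1024)   # the record sent by the client
       result["eof"] = conn.recv(1024)    # b"" => client closed its side
       conn.close()
       srv.close()

   t = threading.Thread(target=producer)
   t.start()
   ready.wait()

   # The "consumer" (TCP client) sends one record and exits cleanly.
   cli = socket.create_connection(addr["value"])
   cli.sendall(b"record")
   cli.close()   # orderly shutdown -> the server observes EOF

   t.join()
   print(result["data"], result["eof"])  # b'record' b''
   ```

   The pathological case mentioned in the comment (a peer that dies without 
ever sending a FIN, e.g. because of a hardware failure) is exactly what this 
EOF-based detection cannot catch, which is why an idle handler with a timeout 
was added on the netty side.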


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aljoscha commented on issue #7156: [FLINK-10967] Update kafka dependency to 2.1.1

2019-03-01 Thread GitBox
aljoscha commented on issue #7156: [FLINK-10967] Update kafka dependency to 
2.1.1
URL: https://github.com/apache/flink/pull/7156#issuecomment-468626863
 
 
   Could you try and rebase to latest master?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tarmazakov commented on issue #7858: [FLINK-11787] Update Kubernetes resources: workaround to make TM reachable from JM in Kubernetes

2019-03-01 Thread GitBox
tarmazakov commented on issue #7858: [FLINK-11787] Update Kubernetes resources: 
workaround to make TM reachable from JM in Kubernetes
URL: https://github.com/apache/flink/pull/7858#issuecomment-468629356
 
 
   Hi, thank you for this workaround.
   
   I downloaded the flink.tgz  from 
   
http://apache-mirror.rbc.ru/pub/apache/flink/flink-1.7.2/flink-1.7.2-bin-scala_2.12.tgz
 
   which corresponds to the docker image flink:latest (flink:1.7).
   
   You pass 2 args to the docker-entrypoint.sh:
   ```yaml
   args:
   - taskmanager
   - "-Dtaskmanager.host=$(K8S_POD_IP)"
   ```
   
   
   But in 
[docker-entrypoint.sh](https://github.com/docker-flink/docker-flink/blob/master/1.7/scala_2.12-debian/docker-entrypoint.sh)
 you didn't pass the `-Dtaskmanager.host..` argument on to taskmanager.sh:
   `exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start-foreground`
   
   Could you tell me how this is supposed to work?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tillrohrmann commented on issue #7858: [FLINK-11787] Update Kubernetes resources: workaround to make TM reachable from JM in Kubernetes

2019-03-01 Thread GitBox
tillrohrmann commented on issue #7858: [FLINK-11787] Update Kubernetes 
resources: workaround to make TM reachable from JM in Kubernetes
URL: https://github.com/apache/flink/pull/7858#issuecomment-468638351
 
 
   The script needs to be updated to call `exec $(drop_privs_cmd) 
"$FLINK_HOME/bin/taskmanager.sh" start-foreground "$@"`. Do you want to open a 
PR for this fix, @tarmazakov?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

