[jira] [Commented] (FLINK-10555) Port AkkaSslITCase to new code base

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679224#comment-16679224
 ] 

ASF GitHub Bot commented on FLINK-10555:


TisonKun commented on issue #6849: [FLINK-10555] [test] Port AkkaSslITCase to 
new code base
URL: https://github.com/apache/flink/pull/6849#issuecomment-436855977
 
 
   re-pushed to resolve a conflict.




> Port AkkaSslITCase to new code base
> ---
>
> Key: FLINK-10555
> URL: https://issues.apache.org/jira/browse/FLINK-10555
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Port {{AkkaSslITCase}} to new code base, as {{MiniClusterSslITCase}}.





[jira] [Commented] (FLINK-9875) Add concurrent creation of execution job vertex

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679221#comment-16679221
 ] 

ASF GitHub Bot commented on FLINK-9875:
---

TisonKun closed pull request #6353: [FLINK-9875][runtime] Add concurrent 
creation of execution job vertex
URL: https://github.com/apache/flink/pull/6353
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
index acb1e16fe71..42440182293 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
@@ -88,6 +88,7 @@
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
@@ -808,6 +809,55 @@ public Executor getFutureExecutor() {
 	//  Actions
 	// 
 
+	private void createExecutionJobVertex(List<JobVertex> topologiallySorted) throws JobException {
+		final List<CompletableFuture<JobException>> futures = new LinkedList<>();
+		final long createTimestamp = System.currentTimeMillis();
+
+		for (JobVertex jobVertex: topologiallySorted) {
+			futures.add(CompletableFuture.supplyAsync(() -> {
+				try {
+					ExecutionJobVertex ejv = new ExecutionJobVertex(
+						this,
+						jobVertex,
+						1,
+						rpcTimeout,
+						globalModVersion,
+						createTimestamp);
+					ExecutionJobVertex previousTask = tasks.putIfAbsent(jobVertex.getID(), ejv);
+					if (previousTask != null) {
+						throw new JobException(
+							String.format(
+								"Encountered two job vertices with ID %s : previous=[%s] / new=[%s]",
+								jobVertex.getID(), ejv, previousTask));
+					}
+					return null;
+				} catch (JobException e) {
+					return e;
+				}
+			}, futureExecutor));
+		}
+
+		try {
+			// wait for all futures done
+			Collection<JobException> exceptions = FutureUtils.combineAll(futures).get();
+
+			// suppress all optional exceptions
+			Exception suppressedException = null;
+			for (Exception exception : exceptions) {
+				if (exception != null) {
+					suppressedException = ExceptionUtils.firstOrSuppressed(exception, suppressedException);
+				}
+			}
+
+			if (suppressedException != null) {
+				throw suppressedException;
+			}
+		} catch (Exception e) {
+			throw new JobException("Could not create execution job vertex.", e);
+		}
+	}
+
+
 	public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobException {
 
 		LOG.debug("Attaching {} topologically sorted vertices to existing job graph with {} " +
@@ -815,7 +865,8 @@ public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobExcepti
 			topologiallySorted.size(), tasks.size(), intermediateResults.size());
 
 		final ArrayList<ExecutionJobVertex> newExecJobVertices = new ArrayList<>(topologiallySorted.size());
-		final long createTimestamp = System.currentTimeMillis();
+
+		createExecutionJobVertex(topologiallySorted);
 
 		for (JobVertex jobVertex : topologiallySorted) {
 
@@ -823,23 +874,10 @@ public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobExcepti
 			this.isStoppable = false;

[jira] [Commented] (FLINK-9875) Add concurrent creation of execution job vertex

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679220#comment-16679220
 ] 

ASF GitHub Bot commented on FLINK-9875:
---

TisonKun commented on issue #6353: [FLINK-9875][runtime] Add concurrent 
creation of execution job vertex
URL: https://github.com/apache/flink/pull/6353#issuecomment-436855410
 
 
   closed as no consensus was reached; further discussion happened as described above.




> Add concurrent creation of execution job vertex
> ---
>
> Key: FLINK-9875
> URL: https://issues.apache.org/jira/browse/FLINK-9875
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Affects Versions: 1.5.1
>Reporter: TisonKun
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
>
> In some cases, such as input format vertices, creation of the execution job
> vertex is time-consuming. This PR adds concurrent creation of execution job
> vertices to accelerate it.
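As a rough illustration of the approach the PR takes (building each expensive vertex concurrently on an executor via CompletableFuture and surfacing the first failure), a minimal, self-contained sketch; the names here (ConcurrentVertexCreation, buildVertex, integer ids) are hypothetical, not Flink's API:

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.IntFunction;
import java.util.stream.Collectors;

final class ConcurrentVertexCreation {

	// Build one vertex per id on the given executor, in parallel.
	static <V> List<V> createAll(List<Integer> ids, IntFunction<V> buildVertex, Executor executor) {
		List<CompletableFuture<V>> futures = ids.stream()
			.map(id -> CompletableFuture.supplyAsync(() -> buildVertex.apply(id), executor))
			.collect(Collectors.toList());

		// join() blocks until each future completes and rethrows the first
		// failure as a CompletionException, mirroring the "collect exceptions,
		// then fail" pattern in the diff above.
		return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
	}
}
{code}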





[jira] [Commented] (FLINK-10706) Update the Flink Dashboard and its components

2018-11-07 Thread Yadong Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679189#comment-16679189
 ] 

Yadong Xie commented on FLINK-10706:


[~Zentol] [~wolli] thank you!

I will start my work soon.

> Update the Flink Dashboard and its components
> -
>
> Key: FLINK-10706
> URL: https://issues.apache.org/jira/browse/FLINK-10706
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 1.6.2
>Reporter: Fabian Wollert
>Assignee: Yadong Xie
>Priority: Major
> Fix For: 1.8.0
>
>
> The Flink Dashboard currently uses Angular 1, whose successor came out two 
> years ago. It is expected that Angular 1 (now called AngularJS) will cease to 
> exist in the future, since its successors (Angular 2-7) and React are the more 
> actively developed platforms.
> We should move to Angular 7 or React.





[jira] [Commented] (FLINK-9002) Add operators with input type that goes through Avro serialization (with schema/generic)

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679345#comment-16679345
 ] 

ASF GitHub Bot commented on FLINK-9002:
---

tzulitai commented on issue #7041: [FLINK-9002] [e2e] Add operators with input 
type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#issuecomment-436888088
 
 
   @zentol Thanks, will remember to fix the typo when merging.
   
   @dawidwys I addressed your comments. Could you have another look?




> Add operators with input type that goes through Avro serialization (with 
> schema/generic) 
> -
>
> Key: FLINK-9002
> URL: https://issues.apache.org/jira/browse/FLINK-9002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>






[jira] [Commented] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Shimin Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679185#comment-16679185
 ] 

Shimin Yang commented on FLINK-10815:
-

Thanks [~dawidwys]. I will close this issue.

> Rethink the rescale operation, can we do it async
> -
>
> Key: FLINK-10815
> URL: https://issues.apache.org/jira/browse/FLINK-10815
> Project: Flink
>  Issue Type: Improvement
>  Components: ResourceManager, Scheduler
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>
> Currently, the rescale operation stops the whole job and restarts it with a 
> different parallelism. But the rescale operation costs a lot and can take a long 
> time to recover if the state size is quite big. 
> A long-running rescale might also cause other problems, such as increased 
> latency and back pressure. In some circumstances, such as a streaming computing 
> cloud service, users may be very sensitive to latency and resource usage, so it 
> would be better to make rescaling a cheaper operation.
> I wonder if we could make it an async operation, just like checkpointing, but 
> how to deal with the keyed state would be a pain. For now, I just 
> want to make some assumptions to keep things simpler: the async rescale 
> operation can only double the parallelism or halve it.
> When scaling up, we can copy the state to the newly created 
> worker and change the partitioner of the upstream. The best timing might be 
> when we get notified that a checkpoint completed. But we still need to change the 
> partitioner of the upstream, so the upstream should buffer the results or block 
> the computation until the state copy finishes. Then the partitioner sends 
> different elements with the same key to the same downstream operator.
> When scaling down, we can merge the keyed state of two operators 
> and likewise change the partitioner of the upstream.
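To make the doubling/halving assumption concrete: if channel selection is derived from a stable hash of the key, doubling the parallelism can only move a key from channel c to c or c + oldParallelism, which is what makes splitting one operator's keyed state between exactly two successors plausible. A minimal, illustrative sketch (plain hashing, not Flink's actual key-group mechanics; all names hypothetical):

{code:java}
final class DoublingRouter {

	// Hypothetical channel selection from a stable key hash.
	static int channel(Object key, int parallelism) {
		return Math.floorMod(key.hashCode(), parallelism);
	}

	public static void main(String[] args) {
		int oldParallelism = 4;
		Object key = "user-42";
		int before = channel(key, oldParallelism);
		int after = channel(key, 2 * oldParallelism);
		// 'after' is always either 'before' or 'before + oldParallelism'.
		System.out.printf("before=%d, after=%d%n", before, after);
	}
}
{code}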





[jira] [Commented] (FLINK-9002) Add operators with input type that goes through Avro serialization (with schema/generic)

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679342#comment-16679342
 ] 

ASF GitHub Bot commented on FLINK-9002:
---

tzulitai commented on a change in pull request #7041: [FLINK-9002] [e2e] Add 
operators with input type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#discussion_r231775682
 
 

 ##
 File path: flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java
 ##
 @@ -455,4 +457,15 @@ static boolean isSimulateFailures(ParameterTool pt) {
listStateGenerator,
listStateDescriptor);
}
+
+	static <IN, KEY> DataStream<Event> testSpecificOperatorInputTypeSerialization(
+			DataStream<Event> streamToConvert,
+			MapFunction<Event, IN> convertToInputType,
+			MapFunction<IN, Event> convertFromInputType,
+			KeySelector<IN, KEY> inputTypeKeySelector,
+			String inputTypeId) {
+
+		DataStream<IN> convertedStream = streamToConvert.map(convertToInputType);
+		return convertedStream.keyBy(inputTypeKeySelector).map(convertFromInputType).name("InputTypeSerializationTestOperator-" + inputTypeId);
 
 Review comment:
   Yes, that's the idea.




> Add operators with input type that goes through Avro serialization (with 
> schema/generic) 
> -
>
> Key: FLINK-9002
> URL: https://issues.apache.org/jira/browse/FLINK-9002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>






[jira] [Closed] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Shimin Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shimin Yang closed FLINK-10815.
---
Resolution: Workaround

> Rethink the rescale operation, can we do it async
> -
>
> Key: FLINK-10815
> URL: https://issues.apache.org/jira/browse/FLINK-10815
> Project: Flink
>  Issue Type: Improvement
>  Components: ResourceManager, Scheduler
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>
> Currently, the rescale operation stops the whole job and restarts it with a 
> different parallelism. But the rescale operation costs a lot and can take a long 
> time to recover if the state size is quite big. 
> A long-running rescale might also cause other problems, such as increased 
> latency and back pressure. In some circumstances, such as a streaming computing 
> cloud service, users may be very sensitive to latency and resource usage, so it 
> would be better to make rescaling a cheaper operation.
> I wonder if we could make it an async operation, just like checkpointing, but 
> how to deal with the keyed state would be a pain. For now, I just 
> want to make some assumptions to keep things simpler: the async rescale 
> operation can only double the parallelism or halve it.
> When scaling up, we can copy the state to the newly created 
> worker and change the partitioner of the upstream. The best timing might be 
> when we get notified that a checkpoint completed. But we still need to change the 
> partitioner of the upstream, so the upstream should buffer the results or block 
> the computation until the state copy finishes. Then the partitioner sends 
> different elements with the same key to the same downstream operator.
> When scaling down, we can merge the keyed state of two operators 
> and likewise change the partitioner of the upstream.





[GitHub] tzulitai commented on issue #7041: [FLINK-9002] [e2e] Add operators with input type that goes through Avro serialization

2018-11-07 Thread GitBox
tzulitai commented on issue #7041: [FLINK-9002] [e2e] Add operators with input 
type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#issuecomment-436888088
 
 
   @zentol Thanks, will remember to fix the typo when merging.
   
   @dawidwys I addressed your comments. Could you have another look?




[GitHub] tzulitai commented on a change in pull request #7041: [FLINK-9002] [e2e] Add operators with input type that goes through Avro serialization

2018-11-07 Thread GitBox
tzulitai commented on a change in pull request #7041: [FLINK-9002] [e2e] Add 
operators with input type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#discussion_r231775682
 
 

 ##
 File path: flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestJobFactory.java
 ##
 @@ -455,4 +457,15 @@ static boolean isSimulateFailures(ParameterTool pt) {
listStateGenerator,
listStateDescriptor);
}
+
+	static <IN, KEY> DataStream<Event> testSpecificOperatorInputTypeSerialization(
+			DataStream<Event> streamToConvert,
+			MapFunction<Event, IN> convertToInputType,
+			MapFunction<IN, Event> convertFromInputType,
+			KeySelector<IN, KEY> inputTypeKeySelector,
+			String inputTypeId) {
+
+		DataStream<IN> convertedStream = streamToConvert.map(convertToInputType);
+		return convertedStream.keyBy(inputTypeKeySelector).map(convertFromInputType).name("InputTypeSerializationTestOperator-" + inputTypeId);
 
 Review comment:
   Yes, that's the idea.




[GitHub] tzulitai commented on a change in pull request #7041: [FLINK-9002] [e2e] Add operators with input type that goes through Avro serialization

2018-11-07 Thread GitBox
tzulitai commented on a change in pull request #7041: [FLINK-9002] [e2e] Add 
operators with input type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#discussion_r231775701
 
 

 ##
 File path: flink-end-to-end-tests/flink-datastream-allround-test/src/main/java/org/apache/flink/streaming/tests/DataStreamAllroundTestProgram.java
 ##
 @@ -143,6 +148,30 @@ public void apply(Integer integer, TimeWindow window, Iterable<Event> input, Col
 			}
 		}).name(TIME_WINDOW_OPER_NAME);
 
+		// test operator input serialization with Avro specific records as the input type
+		eventStream3 = testSpecificOperatorInputTypeSerialization(
+			eventStream3,
+			in -> new AvroEvent(in.getKey(), in.getEventTime(), in.getSequenceNumber(), in.getPayload()),
+			in -> new Event(in.getKey(), in.getEventTime(), in.getSequenceNumber(), in.getPayload()),
+			AvroEvent::getKey,
+			"Avro (Specific)");
+
+		// test operator input serialization with Avro generic records as the input type
+		eventStream3 = testSpecificOperatorInputTypeSerialization(
+			eventStream3,
+			in -> {
+				GenericRecord record = new GenericData.Record(AvroEvent.SCHEMA$);
 
 Review comment:
   Will change.




[GitHub] zhijiangW opened a new pull request #7051: [FLINK-10820][network] Simplify the RebalancePartitioner implementation

2018-11-07 Thread GitBox
zhijiangW opened a new pull request #7051: [FLINK-10820][network] Simplify the 
RebalancePartitioner implementation
URL: https://github.com/apache/flink/pull/7051
 
 
   ## What is the purpose of the change
   
   *The current `RebalancePartitioner` implementation seems a little hacky: it 
selects a random number as the first channel index, and subsequent selections 
proceed from this random index in round-robin fashion.*
   
   *We can define a constant as the first channel index to make the 
implementation simple and readable. Doing so does not change the rebalance 
semantics.*
   
   *Performance-wise, it removes some overhead from the previous condition 
check, but the effect should not be significant. The key motivation is not 
performance; we just need to confirm there is no regression.*
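A minimal sketch of the simplification described above, assuming a plain per-record channel selection hook (illustrative only, not the actual `StreamPartitioner` subclass): the starting channel is a constant instead of a random number, and selection advances round-robin.

{code:java}
final class RoundRobinChannelSelector {

	// Constant starting point; the first selected channel is always 0.
	private int nextChannel = -1;

	int selectChannel(int numberOfChannels) {
		nextChannel = (nextChannel + 1) % numberOfChannels;
		return nextChannel;
	}
}
{code}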
   
   ## Brief change log
   
 - *Simplify the implementation of `RebalancePartitioner` in round-robin 
way.*
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*RebalancePartitionerTest*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[GitHub] asfgit closed pull request #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
asfgit closed pull request #6949: [FLINK-10676][table] Add 'as' method for 
OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/dev/table/tableApi.md b/docs/dev/table/tableApi.md
index 76bd5b28ef2..ed3a97d8def 100644
--- a/docs/dev/table/tableApi.md
+++ b/docs/dev/table/tableApi.md
@@ -1571,13 +1571,15 @@ The `OverWindow` defines a range of rows over which aggregates are computed. `Ov
 
 
   preceding
-  Required
+  Optional
   
 Defines the interval of rows that are included in the window and precede the current row. The interval can either be specified as time or row-count interval.
 
 Bounded over windows are specified with the size of the interval, e.g., 10.minutes for a time interval or 10.rows for a row-count interval.
 
 Unbounded over windows are specified using a constant, i.e., UNBOUNDED_RANGE for a time interval or UNBOUNDED_ROW for a row-count interval. Unbounded over windows start with the first row of a partition.
+
+If the preceding clause is omitted, UNBOUNDED_RANGE and CURRENT_RANGE are used as the default preceding and following for the window.
   
 
 
diff --git a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
index f326f6f6a7c..121aab8f776 100644
--- a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
+++ b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
@@ -18,7 +18,8 @@
 
 package org.apache.flink.table.api.java
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
+import org.apache.flink.table.api.scala.{CURRENT_RANGE, UNBOUNDED_RANGE}
+import org.apache.flink.table.api.{OverWindow, TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
 import org.apache.flink.table.expressions.{Expression, ExpressionParser}
 
 /**
@@ -144,4 +145,21 @@ class OverWindowWithOrderBy(
     new OverWindowWithPreceding(partitionByExpr, orderByExpr, precedingExpr)
   }
 
+  /**
+    * Assigns an alias for this window that the following `select()` clause can refer to.
+    *
+    * @param alias alias for this over window
+    * @return over window
+    */
+  def as(alias: String): OverWindow = as(ExpressionParser.parseExpression(alias))
+
+  /**
+    * Assigns an alias for this window that the following `select()` clause can refer to.
+    *
+    * @param alias alias for this over window
+    * @return over window
+    */
+  def as(alias: Expression): OverWindow = {
+    OverWindow(alias, partitionByExpr, orderByExpr, UNBOUNDED_RANGE, CURRENT_RANGE)
+  }
 }
diff --git a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala
index 91bf1a6c739..2f88248a7f1 100644
--- a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala
+++ b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala
@@ -18,8 +18,8 @@
 
 package org.apache.flink.table.api.scala
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
-import org.apache.flink.table.expressions.Expression
+import org.apache.flink.table.api.{OverWindow, TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
+import org.apache.flink.table.expressions.{Expression, ExpressionParser}
 
 /**
   * Helper object for creating a tumbling window. Tumbling windows are consecutive, non-overlapping
@@ -127,7 +127,6 @@ case class PartitionedOver(partitionBy: Array[Expression]) {
 
 case class OverWindowWithOrderBy(partitionBy: Seq[Expression], orderBy: Expression) {
 
-
   /**
     * Set the preceding offset (based on time or row-count intervals) for over window.
     *
@@ -138,4 +137,21 @@ case class OverWindowWithOrderBy(partitionBy: Seq[Expression], orderBy: Expressi
     new OverWindowWithPreceding(partitionBy, orderBy, preceding)
   }
 
+  /**
+    * Assigns an alias for this window that the following `select()` clause can refer to.
+    *
+    * @param alias alias for this over window
+    * @return over window
+    */
+  def as(alias: String): OverWindow = as(ExpressionParser.parseExpression(alias))
+
+  /**
+    * Assigns an alias for this window that the following `select()` clause can refer to.
+    *
+    * @param alias alias for this over window
+    * @return over window
+    */
+  def as(alias: Expression): OverWindow = {
+    OverWindow(alias, partitionBy, orderBy, UNBOUNDED_RANGE, CURRENT_RANGE)
+  }
 }
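For illustration, a hedged usage sketch in the Java Table API based on the docs change above: with `as()` available directly on `OverWindowWithOrderBy`, the `preceding` clause can be omitted and defaults to `UNBOUNDED_RANGE`/`CURRENT_RANGE`. Table and column names (`orders`, `a`, `b`, `rowtime`) are placeholders.

{code:java}
Table result = orders
	// no .preceding(...) here; UNBOUNDED_RANGE / CURRENT_RANGE are the defaults
	.window(Over.partitionBy("a").orderBy("rowtime").as("w"))
	.select("a, b.sum over w");
{code}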

[GitHub] sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method 
for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436862717
 
 
   After the rerun, the error disappeared. No specific reason was found, so I 
created a JIRA issue, 
[FLINK-10819](https://issues.apache.org/jira/browse/FLINK-10819), and will 
continue to pay attention.
   
   Merging this PR...
   




[GitHub] TisonKun commented on issue #6946: [FLINK-10700] [cluster managerment] Remove LegacyStandaloneClusterDes…

2018-11-07 Thread GitBox
TisonKun commented on issue #6946: [FLINK-10700] [cluster managerment] Remove 
LegacyStandaloneClusterDes…
URL: https://github.com/apache/flink/pull/6946#issuecomment-436856580
 
 
   cc @StefanRRichter @GJL 




[GitHub] TisonKun commented on issue #6872: [FLINK-10436] Add ConfigOption#withFallbackKeys

2018-11-07 Thread GitBox
TisonKun commented on issue #6872: [FLINK-10436] Add 
ConfigOption#withFallbackKeys
URL: https://github.com/apache/flink/pull/6872#issuecomment-436856289
 
 
   ping @tillrohrmann as we have cut the release branch.




[GitHub] TisonKun commented on issue #6849: [FLINK-10555] [test] Port AkkaSslITCase to new code base

2018-11-07 Thread GitBox
TisonKun commented on issue #6849: [FLINK-10555] [test] Port AkkaSslITCase to 
new code base
URL: https://github.com/apache/flink/pull/6849#issuecomment-436855977
 
 
   re-pushed to resolve a conflict.




[GitHub] TisonKun closed pull request #6353: [FLINK-9875][runtime] Add concurrent creation of execution job vertex

2018-11-07 Thread GitBox
TisonKun closed pull request #6353: [FLINK-9875][runtime] Add concurrent 
creation of execution job vertex
URL: https://github.com/apache/flink/pull/6353
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
index acb1e16fe71..42440182293 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
@@ -88,6 +88,7 @@
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
+import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
 import java.util.NoSuchElementException;
@@ -808,6 +809,55 @@ public Executor getFutureExecutor() {
 	//  Actions
 	// 
 
+	private void createExecutionJobVertex(List<JobVertex> topologiallySorted) throws JobException {
+		final List<CompletableFuture<JobException>> futures = new LinkedList<>();
+		final long createTimestamp = System.currentTimeMillis();
+
+		for (JobVertex jobVertex: topologiallySorted) {
+			futures.add(CompletableFuture.supplyAsync(() -> {
+				try {
+					ExecutionJobVertex ejv = new ExecutionJobVertex(
+						this,
+						jobVertex,
+						1,
+						rpcTimeout,
+						globalModVersion,
+						createTimestamp);
+					ExecutionJobVertex previousTask = tasks.putIfAbsent(jobVertex.getID(), ejv);
+					if (previousTask != null) {
+						throw new JobException(
+							String.format(
+								"Encountered two job vertices with ID %s : previous=[%s] / new=[%s]",
+								jobVertex.getID(), ejv, previousTask));
+					}
+					return null;
+				} catch (JobException e) {
+					return e;
+				}
+			}, futureExecutor));
+		}
+
+		try {
+			// wait for all futures done
+			Collection<JobException> exceptions = FutureUtils.combineAll(futures).get();
+
+			// suppress all optional exceptions
+			Exception suppressedException = null;
+			for (Exception exception : exceptions) {
+				if (exception != null) {
+					suppressedException = ExceptionUtils.firstOrSuppressed(exception, suppressedException);
+				}
+			}
+
+			if (suppressedException != null) {
+				throw suppressedException;
+			}
+		} catch (Exception e) {
+			throw new JobException("Could not create execution job vertex.", e);
+		}
+	}
+
+
 	public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobException {
 
 		LOG.debug("Attaching {} topologically sorted vertices to existing job graph with {} " +
@@ -815,7 +865,8 @@ public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobExcepti
 			topologiallySorted.size(), tasks.size(), intermediateResults.size());
 
 		final ArrayList<ExecutionJobVertex> newExecJobVertices = new ArrayList<>(topologiallySorted.size());
-		final long createTimestamp = System.currentTimeMillis();
+
+		createExecutionJobVertex(topologiallySorted);
 
 		for (JobVertex jobVertex : topologiallySorted) {
 
@@ -823,23 +874,10 @@ public void attachJobGraph(List<JobVertex> topologiallySorted) throws JobExcepti
 			this.isStoppable = false;
 		}
 
-		// create the execution job vertex and attach it to the graph
-		ExecutionJobVertex ejv = new ExecutionJobVertex(
-			this,
-			jobVertex,

[GitHub] TisonKun commented on issue #6353: [FLINK-9875][runtime] Add concurrent creation of execution job vertex

2018-11-07 Thread GitBox
TisonKun commented on issue #6353: [FLINK-9875][runtime] Add concurrent 
creation of execution job vertex
URL: https://github.com/apache/flink/pull/6353#issuecomment-436855410
 
 
   closed as no consensus was reached; further discussion happened as described above.




[GitHub] TisonKun closed pull request #6345: [FLINK-9869][runtime] Send PartitionInfo in batch to Improve perfornance

2018-11-07 Thread GitBox
TisonKun closed pull request #6345: [FLINK-9869][runtime] Send PartitionInfo in 
batch to Improve perfornance
URL: https://github.com/apache/flink/pull/6345
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/_includes/generated/job_manager_configuration.html b/docs/_includes/generated/job_manager_configuration.html
index 0458af24c06..83d8abb7d27 100644
--- a/docs/_includes/generated/job_manager_configuration.html
+++ b/docs/_includes/generated/job_manager_configuration.html
@@ -42,6 +42,11 @@
 6123
 The config parameter defining the network port to connect to for communication with the job manager. Like jobmanager.rpc.address, this value is only interpreted in setups where a single JobManager with static name/address and port exists (simple standalone setups, or container setups with dynamic service name resolution). This config option is not used in many high-availability setups, when a leader-election service (like ZooKeeper) is used to elect and discover the JobManager leader from potentially multiple standby JobManagers.
 
+
+jobmanager.update-partition-info.send-interval
+10
+The interval of send update-partition-info message.
+
 
 jobstore.cache-size
 52428800
diff --git a/flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java b/flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java
index 1666f213d18..43091a256b2 100644
--- a/flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java
+++ b/flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java
@@ -154,6 +154,11 @@
 		.defaultValue(60L * 60L)
 		.withDescription("The time in seconds after which a completed job expires and is purged from the job store.");
 
+	public static final ConfigOption<Long> UPDATE_PARTITION_INFO_SEND_INTERVAL =
+		key("jobmanager.update-partition-info.send-interval")
+		.defaultValue(10L)
+		.withDescription("The interval of send update-partition-info message.");
+
 	/**
 	 * The timeout in milliseconds for requesting a slot from Slot Pool.
 	 */
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
index 801f35a41dc..4a157f9cb60 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
@@ -27,18 +27,13 @@
 import org.apache.flink.runtime.checkpoint.CheckpointOptions;
 import org.apache.flink.runtime.checkpoint.JobManagerTaskRestore;
 import org.apache.flink.runtime.clusterframework.types.AllocationID;
-import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
 import org.apache.flink.runtime.clusterframework.types.SlotProfile;
 import org.apache.flink.runtime.concurrent.FutureUtils;
-import org.apache.flink.runtime.deployment.InputChannelDeploymentDescriptor;
 import org.apache.flink.runtime.deployment.PartialInputChannelDeploymentDescriptor;
-import org.apache.flink.runtime.deployment.ResultPartitionLocation;
 import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
 import org.apache.flink.runtime.execution.ExecutionState;
 import org.apache.flink.runtime.instance.SlotSharingGroupId;
-import org.apache.flink.runtime.io.network.ConnectionID;
-import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
 import org.apache.flink.runtime.jobmanager.scheduler.CoLocationConstraint;
 import org.apache.flink.runtime.jobmanager.scheduler.LocationPreferenceConstraint;
 import org.apache.flink.runtime.jobmanager.scheduler.ScheduledUnit;
@@ -69,6 +64,8 @@
 import java.util.concurrent.CompletionException;
 import java.util.concurrent.ConcurrentLinkedQueue;
 import java.util.concurrent.Executor;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
 import java.util.stream.Collectors;
@@ -178,6 +175,10 @@
 
 	// 
 
+	private final Object updatePartitionLock = new Object();
+
+	private ScheduledFuture<?> updatePartitionFuture;
+
 	/**
 	 * Creates a new Execution attempt.
 	 *
@@ -588,24 +589,27 @@ public void deploy() throws JobException {
 
 		final TaskM
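The (truncated) diff above wires a `ScheduledFuture` and a lock into `Execution` so that update-partition-info messages can be sent on an interval rather than one RPC per partition. A self-contained sketch of that batching idea, with all names hypothetical:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

final class BatchedUpdateSender<T> {

	private final List<T> pending = new ArrayList<>();
	private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

	BatchedUpdateSender(Consumer<List<T>> sendBatch, long intervalMillis) {
		// Flush all buffered updates in a single call per interval.
		scheduler.scheduleWithFixedDelay(() -> {
			List<T> batch;
			synchronized (pending) {
				if (pending.isEmpty()) {
					return;
				}
				batch = new ArrayList<>(pending);
				pending.clear();
			}
			sendBatch.accept(batch);
		}, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
	}

	void enqueue(T update) {
		synchronized (pending) {
			pending.add(update);
		}
	}
}
{code}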

[GitHub] TisonKun commented on issue #6345: [FLINK-9869][runtime] Send PartitionInfo in batch to Improve perfornance

2018-11-07 Thread GitBox
TisonKun commented on issue #6345: [FLINK-9869][runtime] Send PartitionInfo in 
batch to Improve perfornance
URL: https://github.com/apache/flink/pull/6345#issuecomment-436854489
 
 
   closed as there is no longer interest in this change.




[GitHub] TisonKun opened a new pull request #7050: [hotfix] Follow up FLINK-6046 to clean unused variable

2018-11-07 Thread GitBox
TisonKun opened a new pull request #7050: [hotfix] Follow up FLINK-6046 to 
clean unused variable
URL: https://github.com/apache/flink/pull/7050
 
 
   ## What is the purpose of the change
   
   After FLINK-6046 (5ff07e63d1e9a98959e5edf6687b847d23d5) was resolved, we 
replaced `serializedTaskInformation` and `taskInformationBlobKey` with 
`taskInformationOrBlobKey`. This pull request acts as a follow-up to clean up the 
unused variables and update the documentation.
   
   ## Verifying this change
   
   This change is a code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**no**)
 - The public API, i.e., is any changed class annotated with `(Evolving)`: 
(**no**)
 - The serializers: (**no**)
 - The runtime per-record code paths (performance sensitive): (**no**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**no**)
 - The S3 file system connector: (**no**)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**no**)
 - If yes, how is the feature documented? (**not applicable**)
   
   cc @tillrohrmann @zentol 




[GitHub] sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method 
for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436838520
 
 
   CI found a 
`JobManagerHAProcessFailureRecoveryITCase.testDispatcherProcessFailure:331 » 
IllegalArgument` test failure which looks unrelated to this PR. I'll check it, 
then merge this PR.




[jira] [Updated] (FLINK-10817) Upgrade presto dependency to support path-style access

2018-11-07 Thread Adam Lamar (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Lamar updated FLINK-10817:
---
Description: 
In order to use any given non-AWS s3 implementation backed by the presto s3 
filesystem, it is necessary to set at least one configuration parameter in 
flink-conf.yaml:
 * presto.s3.endpoint: https://example.com

This appears to work as expected for hosted s3 alternatives.

In order to use a bring-your-own, self-hosted s3 alternative like 
[minio|https://www.minio.io/], at least two configuration parameters are 
required:
 * presto.s3.endpoint: https://example.com
 * presto.s3.path-style-access: true

However, the second path-style-access parameter doesn't work because the 0.185 
version of presto doesn't support passing through that configuration option to 
the hive s3 client.

To work around the issue, path-style-access can be forced on the s3 client by 
using an IP address for the endpoint (instead of a hostname). Without this 
workaround, flink attempts to use the virtualhost-style at 
bucketname.example.com, which fails unless the expected DNS records exist.

To solve this problem and enable non-IP endpoints, upgrade the 
[pom|https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-presto/pom.xml#L36]
 to at least 0.186 which includes [this 
commit|https://github.com/prestodb/presto/commit/0707f2f21a96d2fd30953fb3fa9a9a03f03d88bd]
 Note that 0.213 is the latest presto release.

  was:
In order to use any given non-AWS s3 implementation backed by the presto s3 
filesystem, it is necessary to set at least one configuration parameter in 
flink-conf.yaml:
 * presto.s3.endpoint: https://example.com

This appears to work as expected for hosted s3 alternatives.

In order to use a bring-your-own, self-hosted s3 alternative like 
[minio|https://www.minio.io/], at least two configuration parameters are 
required:
 * presto.s3.endpoint: https://example.com
 * presto.s3.path-style-access: true

However, the second path-style-access parameter doesn't work because the 0.185 
version of presto doesn't support passing through that configuration option to 
the hive s3 client.

To work around the issue, path-style-access can be forced on the s3 client by 
using an IP address for the endpoint (instead of a hostname). Without this 
workaround, flink attempts to use the virtualhost-style at 
bucketname.example.com, which fails unless the expected DNS records exist.

To solve this problem and enable non-IP endpoints, upgrade the 
[pom|https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-presto/pom.xml#L36]
 to at least 0.186 which includes [this 
commit|[https://github.com/prestodb/presto/commit/0707f2f21a96d2fd30953fb3fa9a9a03f03d88bd.]


> Upgrade presto dependency to support path-style access
> --
>
> Key: FLINK-10817
> URL: https://issues.apache.org/jira/browse/FLINK-10817
> Project: Flink
>  Issue Type: Improvement
>Reporter: Adam Lamar
>Priority: Major
>
> In order to use any given non-AWS s3 implementation backed by the presto s3 
> filesystem, it is necessary to set at least one configuration parameter in 
> flink-conf.yaml:
>  * presto.s3.endpoint: https://example.com
> This appears to work as expected for hosted s3 alternatives.
> In order to use a bring-your-own, self-hosted s3 alternative like 
> [minio|https://www.minio.io/], at least two configuration parameters are 
> required:
>  * presto.s3.endpoint: https://example.com
>  * presto.s3.path-style-access: true
> However, the second path-style-access parameter doesn't work because the 
> 0.185 version of presto doesn't support passing through that configuration 
> option to the hive s3 client.
> To work around the issue, path-style-access can be forced on the s3 client by 
> using an IP address for the endpoint (instead of a hostname). Without this 
> workaround, flink attempts to use the virtualhost-style at 
> bucketname.example.com, which fails unless the expected DNS records exist.
> To solve this problem and enable non-IP endpoints, upgrade the 
> [pom|https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-presto/pom.xml#L36]
>  to at least 0.186 which includes [this 
> commit|https://github.com/prestodb/presto/commit/0707f2f21a96d2fd30953fb3fa9a9a03f03d88bd]
>  Note that 0.213 is the latest presto release.





[jira] [Created] (FLINK-10817) Upgrade presto dependency to support path-style access

2018-11-07 Thread Adam Lamar (JIRA)
Adam Lamar created FLINK-10817:
--

 Summary: Upgrade presto dependency to support path-style access
 Key: FLINK-10817
 URL: https://issues.apache.org/jira/browse/FLINK-10817
 Project: Flink
  Issue Type: Improvement
Reporter: Adam Lamar


In order to use any given non-AWS s3 implementation backed by the presto s3 
filesystem, it is necessary to set at least one configuration parameter in 
flink-conf.yaml:
 * presto.s3.endpoint: https://example.com

This appears to work as expected for hosted s3 alternatives.

In order to use a bring-your-own, self-hosted s3 alternative like 
[minio|https://www.minio.io/], at least two configuration parameters are 
required:
 * presto.s3.endpoint: https://example.com
 * presto.s3.path-style-access: true

However, the second path-style-access parameter doesn't work because the 0.185 
version of presto doesn't support passing through that configuration option to 
the hive s3 client.

To work around the issue, path-style-access can be forced on the s3 client by 
using an IP address for the endpoint (instead of a hostname). Without this 
workaround, flink attempts to use the virtualhost-style at 
bucketname.example.com, which fails unless the expected DNS records exist.

To solve this problem and enable non-IP endpoints, upgrade the 
[pom|https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-presto/pom.xml#L36]
 to at least 0.186, which includes [this 
commit|https://github.com/prestodb/presto/commit/0707f2f21a96d2fd30953fb3fa9a9a03f03d88bd].
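For reference, the two settings from the description as they would appear together in flink-conf.yaml (the endpoint host is a placeholder):

{code}
presto.s3.endpoint: https://minio.example.com
presto.s3.path-style-access: true
{code}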





[jira] [Closed] (FLINK-10682) EOFException occurs during deserialization of Avro class

2018-11-07 Thread Ben La Monica (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben La Monica closed FLINK-10682.
-
   Resolution: Resolved
Fix Version/s: 1.5.5

[~till.rohrmann] that did the trick! It no longer has those exceptions. I think 
I must have also been running into FLINK-10469. I'll close out this ticket. 

> EOFException occurs during deserialization of Avro class
> 
>
> Key: FLINK-10682
> URL: https://issues.apache.org/jira/browse/FLINK-10682
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.5.4
> Environment: AWS EMR 5.17 (upgraded to Flink 1.5.4)
> 3 task managers, 1 job manager running in YARN in Hadoop
> Running on Amazon Linux with OpenJDK 1.8
>Reporter: Ben La Monica
>Priority: Critical
> Fix For: 1.5.5
>
>
> I'm having trouble (which usually occurs after an hour of processing in a 
> StreamExecutionEnvironment) where I get this failure message. I'm at a loss 
> for what is causing it. I'm running this in AWS on EMR 5.17 with 3 task 
> managers and a job manager running in a YARN cluster and I've upgraded my 
> flink libraries to 1.5.4 to bypass another serialization issue and the 
> kerberos auth issues.
> The avro classes that are being deserialized were generated with avro 1.8.2.
> {code:java}
> 2018-10-22 16:12:10,680 [INFO ] class=o.a.flink.runtime.taskmanager.Task 
> thread="Calculate Estimated NAV -> Split into single messages (3/10)" 
> Calculate Estimated NAV -> Split into single messages (3/10) 
> (de7d8fa7784903a475391d0168d56f2e) switched from RUNNING to FAILED.
> java.io.EOFException: null
> at 
> org.apache.flink.core.memory.DataInputDeserializer.readLong(DataInputDeserializer.java:219)
> at 
> org.apache.flink.core.memory.DataInputDeserializer.readDouble(DataInputDeserializer.java:138)
> at 
> org.apache.flink.formats.avro.utils.DataInputDecoder.readDouble(DataInputDecoder.java:70)
> at org.apache.avro.io.ResolvingDecoder.readDouble(ResolvingDecoder.java:190)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:186)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
> at 
> org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:116)
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
> at 
> org.apache.avro.generic.GenericDatumReader.readArray(GenericDatumReader.java:266)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:177)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
> at 
> org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:116)
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
> at 
> org.apache.flink.formats.avro.typeutils.AvroSerializer.deserialize(AvroSerializer.java:172)
> at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:208)
> at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:49)
> at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
> at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:140)
> at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:208)
> at 
> org.apache.flink.streaming.runtime.tasks.TwoInputStreamTask.run(TwoInputStreamTask.java:116)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
> at java.lang.Thread.run(Thread.java:748){code}
> Do you have any ideas on how I could further troubleshoot this issue?





[jira] [Commented] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678568#comment-16678568
 ] 

ASF GitHub Bot commented on FLINK-10816:


zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231608543
 
 

 ##
 File path: flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/sharedbuffer/LockableTypeSerializerTest.java
 ##
 @@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cep.nfa.sharedbuffer;
+
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.runtime.state.heap.CopyOnWriteStateTableTest;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests for {@link 
org.apache.flink.cep.nfa.sharedbuffer.Lockable.LockableTypeSerializer}.
+ */
+public class LockableTypeSerializerTest {
+
+	/**
+	 * This tests that {@link Lockable.LockableTypeSerializer#duplicate()} works as expected.
+	 */
+	@Test
+	public void testDuplicate() {
+		IntSerializer nonDuplicatingInnerSerializer = IntSerializer.INSTANCE;
+		Assert.assertSame(nonDuplicatingInnerSerializer, nonDuplicatingInnerSerializer.duplicate());
+		Lockable.LockableTypeSerializer<Integer> candidateTestShallowDuplicate =
+			new Lockable.LockableTypeSerializer<>(nonDuplicatingInnerSerializer);
+		Assert.assertSame(candidateTestShallowDuplicate, candidateTestShallowDuplicate.duplicate());
+
+		CopyOnWriteStateTableTest.TestDuplicateSerializer duplicatingInnerSerializer =
 Review comment:
   This is more of a personal gripe but I'm not a huge fan of re-using inner 
classes from other test-cases directly just because they happen to fit the 
use-case.




> Fix LockableTypeSerializer.duplicate() 
> ---
>
> Key: FLINK-10816
> URL: https://issues.apache.org/jira/browse/FLINK-10816
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> {{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
> consider duplicating the element serializer.
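A sketch of what the fix plausibly looks like (not necessarily the merged code): `duplicate()` must duplicate the element serializer, and may return `this` only when the inner serializer is itself safe to share.

{code:java}
@Override
public LockableTypeSerializer<E> duplicate() {
	TypeSerializer<E> duplicatedElementSerializer = elementSerializer.duplicate();
	// Stateless serializers return themselves from duplicate(); only then is
	// sharing this wrapper instance safe across threads.
	return duplicatedElementSerializer == elementSerializer
		? this
		: new LockableTypeSerializer<>(duplicatedElementSerializer);
}
{code}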





[GitHub] zentol commented on issue #7041: [FLINK-9002] [e2e] Add operators with input type that goes through Avro serialization

2018-11-07 Thread GitBox
zentol commented on issue #7041: [FLINK-9002] [e2e] Add operators with input 
type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#issuecomment-436715885
 
 
   There's a typo in the commit message: `tyoe`-> `type`




[jira] [Commented] (FLINK-9002) Add operators with input type that goes through Avro serialization (with schema/generic)

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678574#comment-16678574
 ] 

ASF GitHub Bot commented on FLINK-9002:
---

zentol commented on issue #7041: [FLINK-9002] [e2e] Add operators with input 
type that goes through Avro serialization
URL: https://github.com/apache/flink/pull/7041#issuecomment-436715885
 
 
   There's a typo in the commit message: `tyoe`-> `type`




> Add operators with input type that goes through Avro serialization (with 
> schema/generic) 
> -
>
> Key: FLINK-9002
> URL: https://issues.apache.org/jira/browse/FLINK-9002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10803) Add documentation about S3 support by the StreamingFileSink

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678571#comment-16678571
 ] 

ASF GitHub Bot commented on FLINK-10803:


zentol commented on a change in pull request #7046: [FLINK-10803] Update the 
documentation to include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046#discussion_r231609764
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -24,16 +24,24 @@ under the License.
 -->
 
 This connector provides a Sink that writes partitioned files to filesystems
-supported by the Flink `FileSystem` abstraction. Since in streaming the input
-is potentially infinite, the streaming file sink writes data into buckets. The
-bucketing behaviour is configurable but a useful default is time-based
+supported by the [Flink `FileSystem` abstraction]({{ site.baseurl}}/ops/filesystems.html).
+
+Important Note: For S3, the `StreamingFileSink`
+supports only Hadoop (not Presto). In case your job uses the `StreamingFileSink` to write to S3 but
 
 Review comment:
   does this page contain a link to presto somewhere? If not, please add one.
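
   As a hedged aside (not part of this PR; the bucket and path below are placeholders), writing to S3 with the `StreamingFileSink` would look roughly like the following, assuming the Hadoop-based `flink-s3-fs-hadoop` filesystem is on the classpath as the docs change requires:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class S3SinkSketch {
	public static void attachS3Sink(DataStream<String> stream) {
		// "s3://my-bucket/output" is a placeholder bucket/path.
		StreamingFileSink<String> sink = StreamingFileSink
			.forRowFormat(new Path("s3://my-bucket/output"), new SimpleStringEncoder<String>("UTF-8"))
			.build();
		stream.addSink(sink);
	}
}
{code}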


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add documentation about S3 support by the StreamingFileSink
> ---
>
> Key: FLINK-10803
> URL: https://issues.apache.org/jira/browse/FLINK-10803
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #7046: [FLINK-10803] Update the documentation to include changes to the S3 connector.

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7046: [FLINK-10803] Update the 
documentation to include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046#discussion_r231609764
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -24,16 +24,24 @@ under the License.
 -->
 
 This connector provides a Sink that writes partitioned files to filesystems
-supported by the Flink `FileSystem` abstraction. Since in streaming the input
-is potentially infinite, the streaming file sink writes data into buckets. The
-bucketing behaviour is configurable but a useful default is time-based
+supported by the [Flink `FileSystem` abstraction]({{ site.baseurl}}/ops/filesystems.html).
+
+Important Note: For S3, the `StreamingFileSink`
+supports only Hadoop (not Presto). In case your job uses the `StreamingFileSink` to write to S3 but
 
 Review comment:
   does this page contain a link to presto somewhere? If not, please add one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678557#comment-16678557
 ] 

ASF GitHub Bot commented on FLINK-10816:


zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231607022
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/Lockable.java
 ##
 @@ -219,5 +222,10 @@ public boolean canEqual(Object obj) {
: CompatibilityResult.compatible();
}
}
+
+   @VisibleForTesting
+	public TypeSerializer<E> getElementSerializer() {
 
 Review comment:
   package-private since it's only used for testing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix LockableTypeSerializer.duplicate() 
> ---
>
> Key: FLINK-10816
> URL: https://issues.apache.org/jira/browse/FLINK-10816
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> {{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
> consider duplicating the element serializer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678567#comment-16678567
 ] 

ASF GitHub Bot commented on FLINK-10816:


zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231609118
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/Lockable.java
 ##
 @@ -109,8 +110,10 @@ public boolean isImmutableType() {
}
 
@Override
-	public TypeSerializer<Lockable<E>> duplicate() {
-		return new LockableTypeSerializer<>(elementSerializer);
+	public LockableTypeSerializer<E> duplicate() {
 
 Review comment:
   why are you changing the signature?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix LockableTypeSerializer.duplicate() 
> ---
>
> Key: FLINK-10816
> URL: https://issues.apache.org/jira/browse/FLINK-10816
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> {{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
> consider duplicating the element serializer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix LockableTypeSerializer.duplicate() to consider…

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231609118
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/Lockable.java
 ##
 @@ -109,8 +110,10 @@ public boolean isImmutableType() {
}
 
@Override
-	public TypeSerializer<Lockable<E>> duplicate() {
-		return new LockableTypeSerializer<>(elementSerializer);
+	public LockableTypeSerializer<E> duplicate() {
 
 Review comment:
   why are you changing the signature?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix LockableTypeSerializer.duplicate() to consider…

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231608543
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/test/java/org/apache/flink/cep/nfa/sharedbuffer/LockableTypeSerializerTest.java
 ##
 @@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cep.nfa.sharedbuffer;
+
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.runtime.state.heap.CopyOnWriteStateTableTest;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests for {@link org.apache.flink.cep.nfa.sharedbuffer.Lockable.LockableTypeSerializer}.
+ */
+public class LockableTypeSerializerTest {
+
+	/**
+	 * This tests that {@link Lockable.LockableTypeSerializer#duplicate()} works as expected.
+	 */
+	@Test
+	public void testDuplicate() {
+		IntSerializer nonDuplicatingInnerSerializer = IntSerializer.INSTANCE;
+		Assert.assertSame(nonDuplicatingInnerSerializer, nonDuplicatingInnerSerializer.duplicate());
+		Lockable.LockableTypeSerializer<Integer> candidateTestShallowDuplicate =
+			new Lockable.LockableTypeSerializer<>(nonDuplicatingInnerSerializer);
+		Assert.assertSame(candidateTestShallowDuplicate, candidateTestShallowDuplicate.duplicate());
+
+		CopyOnWriteStateTableTest.TestDuplicateSerializer duplicatingInnerSerializer =
 
 Review comment:
   This is more of a personal gripe but I'm not a huge fan of re-using inner 
classes from other test-cases directly just because they happen to fit the 
use-case.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TisonKun closed pull request #6947: [hotfix] [tests] refactor YARNSessionCapacitySchedulerITCase

2018-11-07 Thread GitBox
TisonKun closed pull request #6947: [hotfix] [tests] refactor 
YARNSessionCapacitySchedulerITCase
URL: https://github.com/apache/flink/pull/6947
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java
index 1009fbbf529..98a937780b0 100644
--- a/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java
+++ b/flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java
@@ -22,7 +22,6 @@
 import org.apache.flink.configuration.GlobalConfiguration;
 import org.apache.flink.configuration.JobManagerOptions;
 import org.apache.flink.configuration.ResourceManagerOptions;
-import org.apache.flink.runtime.client.JobClient;
 import org.apache.flink.runtime.taskexecutor.TaskManagerServices;
 import org.apache.flink.runtime.webmonitor.WebMonitorUtils;
 import org.apache.flink.test.testdata.WordCountData;
@@ -123,7 +122,6 @@ public void testClientStartup() throws IOException {
 	@Test
 	public void perJobYarnCluster() throws IOException {
 		LOG.info("Starting perJobYarnCluster()");
-		addTestAppender(JobClient.class, Level.INFO);
 		File exampleJarLocation = getTestJarPath("BatchWordCount.jar");
 		runWithArgs(new String[]{"run", "-m", "yarn-cluster",
 			"-yj", flinkUberjar.getAbsolutePath(), "-yt", flinkLibFolder.getAbsolutePath(),
@@ -136,7 +134,7 @@ public void perJobYarnCluster() throws IOException {
 			/* prohibited strings: (to verify the parallelism) */
 			// (we should see "DataSink (...) (1/2)" and "DataSink (...) (2/2)" instead)
 			new String[]{"DataSink \\(.*\\) \\(1/1\\) switched to FINISHED"},
-			RunTypes.CLI_FRONTEND, 0, true);
+			RunTypes.CLI_FRONTEND, 0, false);
 		LOG.info("Finished perJobYarnCluster()");
 	}
 
@@ -151,7 +149,6 @@ public void perJobYarnCluster() throws IOException {
 	@Test
 	public void perJobYarnClusterOffHeap() throws IOException {
 		LOG.info("Starting perJobYarnCluster()");
-		addTestAppender(JobClient.class, Level.INFO);
 		File exampleJarLocation = getTestJarPath("BatchWordCount.jar");
 
 		// set memory constraints (otherwise this is the same test as perJobYarnCluster() above)
@@ -182,7 +179,7 @@ public void perJobYarnClusterOffHeap() throws IOException {
 			/* prohibited strings: (to verify the parallelism) */
 			// (we should see "DataSink (...) (1/2)" and "DataSink (...) (2/2)" instead)
 			new String[]{"DataSink \\(.*\\) \\(1/1\\) switched to FINISHED"},
-			RunTypes.CLI_FRONTEND, 0, true);
+			RunTypes.CLI_FRONTEND, 0, false);
 		LOG.info("Finished perJobYarnCluster()");
 	}
 
@@ -392,7 +389,6 @@ public void perJobYarnClusterWithParallelism() throws IOException {
 		LOG.info("Starting perJobYarnClusterWithParallelism()");
 		// write log messages to stdout as well, so that the runWithArgs() method
 		// is catching the log output
-		addTestAppender(JobClient.class, Level.INFO);
 		File exampleJarLocation = getTestJarPath("BatchWordCount.jar");
 		runWithArgs(new String[]{"run",
 			"-p", "2", //test that the job is executed with a DOP of 2
@@ -407,7 +403,7 @@ public void perJobYarnClusterWithParallelism() throws IOException {
 			"Program execution finished",
 			/* prohibited strings: (we want to see "DataSink (...) (2/2) switched to FINISHED") */
 			new String[]{"DataSink \\(.*\\) \\(1/1\\) switched to FINISHED"},
-			RunTypes.CLI_FRONTEND, 0, true);
+			RunTypes.CLI_FRONTEND, 0, false);
 		LOG.info("Finished perJobYarnClusterWithParallelism()");
 	}
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix LockableTypeSerializer.duplicate() to consider…

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049#discussion_r231607022
 
 

 ##
 File path: 
flink-libraries/flink-cep/src/main/java/org/apache/flink/cep/nfa/sharedbuffer/Lockable.java
 ##
 @@ -219,5 +222,10 @@ public boolean canEqual(Object obj) {
: CompatibilityResult.compatible();
}
}
+
+   @VisibleForTesting
+	public TypeSerializer<E> getElementSerializer() {
 
 Review comment:
   package-private since it's only used for testing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678517#comment-16678517
 ] 

ASF GitHub Bot commented on FLINK-10816:


StefanRRichter opened a new pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049
 
 
   … deep duplication of element serializer
   
   ## What is the purpose of the change
   
   `LockableTypeSerializer.duplicate()` did not duplicate the nested element 
serializers. This is fixed by the PR.
   
   I introduced `LockableTypeSerializerTest` to test the change.
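   
   A minimal sketch of the deep duplication described above (field and constructor names are assumed from the surrounding code, not necessarily the PR's exact diff):
   
{code:java}
@Override
public LockableTypeSerializer<E> duplicate() {
	// Duplicate the nested element serializer; stateful serializers
	// must not be shared across threads.
	TypeSerializer<E> elementSerializerCopy = elementSerializer.duplicate();
	// Stateless serializers typically return themselves from duplicate(),
	// in which case this serializer can also be reused as-is.
	return elementSerializerCopy == elementSerializer
		? this
		: new LockableTypeSerializer<>(elementSerializerCopy);
}
{code}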
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fix LockableTypeSerializer.duplicate() 
> ---
>
> Key: FLINK-10816
> URL: https://issues.apache.org/jira/browse/FLINK-10816
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> {{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
> consider duplicating the element serializer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10816:
---
Labels: pull-request-available  (was: )

> Fix LockableTypeSerializer.duplicate() 
> ---
>
> Key: FLINK-10816
> URL: https://issues.apache.org/jira/browse/FLINK-10816
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.6.2, 1.7.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> {{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
> consider duplicating the element serializer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] StefanRRichter opened a new pull request #7049: [FLINK-10816][cep] Fix LockableTypeSerializer.duplicate() to consider…

2018-11-07 Thread GitBox
StefanRRichter opened a new pull request #7049: [FLINK-10816][cep] Fix 
LockableTypeSerializer.duplicate() to consider…
URL: https://github.com/apache/flink/pull/7049
 
 
   … deep duplication of element serializer
   
   ## What is the purpose of the change
   
   `LockableTypeSerializer.duplicate()` did not duplicate the nested element 
serializers. This is fixed by the PR.
   
   I introduced `LockableTypeSerializerTest` to test the change.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #7045: [hotfix] Update nightly master cron jobs

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7045: [hotfix] Update nightly 
master cron jobs
URL: https://github.com/apache/flink/pull/7045#discussion_r231588464
 
 

 ##
 File path: splits/split_kubernetes_e2e.sh
 ##
 @@ -44,7 +44,6 @@ echo "Flink distribution directory: $FLINK_DIR"
 # run_test "" "$END_TO_END_DIR/test-scripts/"
 
 run_test "Run kubernetes test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_embedded_job.sh"
-run_test "Run docker test" 
"$END_TO_END_DIR/test-scripts/test_docker_embedded_job.sh"
 
 Review comment:
   what's up here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #7045: [hotfix] Update nightly master cron jobs

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7045: [hotfix] Update nightly 
master cron jobs
URL: https://github.com/apache/flink/pull/7045#discussion_r231588464
 
 

 ##
 File path: splits/split_kubernetes_e2e.sh
 ##
 @@ -44,7 +44,6 @@ echo "Flink distribution directory: $FLINK_DIR"
 # run_test "" "$END_TO_END_DIR/test-scripts/"
 
 run_test "Run kubernetes test" 
"$END_TO_END_DIR/test-scripts/test_kubernetes_embedded_job.sh"
-run_test "Run docker test" 
"$END_TO_END_DIR/test-scripts/test_docker_embedded_job.sh"
 
 Review comment:
   what's up here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zentol commented on a change in pull request #7045: [hotfix] Update nightly master cron jobs

2018-11-07 Thread GitBox
zentol commented on a change in pull request #7045: [hotfix] Update nightly 
master cron jobs
URL: https://github.com/apache/flink/pull/7045#discussion_r231588225
 
 

 ##
 File path: splits/split_misc.sh
 ##
 @@ -43,12 +43,16 @@ echo "Flink distribution directory: $FLINK_DIR"
 
 # run_test "" "$END_TO_END_DIR/test-scripts/"
 
+run_test "Flink CLI end-to-end test" "$END_TO_END_DIR/test-scripts/test_cli.sh"
 
 Review comment:
   please also update `split_misc_hadoopfree` to include all tests independent 
of hadoop. I've copied `split_misc.sh` for now since it's the easiest & fastest 
option.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-10816) Fix LockableTypeSerializer.duplicate()

2018-11-07 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-10816:
--

 Summary: Fix LockableTypeSerializer.duplicate() 
 Key: FLINK-10816
 URL: https://issues.apache.org/jira/browse/FLINK-10816
 Project: Flink
  Issue Type: Bug
  Components: CEP
Affects Versions: 1.6.2, 1.7.0
Reporter: Stefan Richter
Assignee: Stefan Richter


{{LockableTypeSerializer.duplicate()}} is not properly implemented and must 
consider duplicating the element serializer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Dawid Wysakowicz (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678468#comment-16678468
 ] 

Dawid Wysakowicz commented on FLINK-10815:
--

I think the correct way to proceed with such a proposal would be to post it on 
the dev mailing list with a [DISCUSS] tag.

> Rethink the rescale operation, can we do it async
> -
>
> Key: FLINK-10815
> URL: https://issues.apache.org/jira/browse/FLINK-10815
> Project: Flink
>  Issue Type: Improvement
>  Components: ResourceManager, Scheduler
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>
> Currently, the rescale operation is to stop the whole job and restart it with 
> different parallelism. But the rescale operation costs a lot and takes a long 
> time to recover if the state size is quite big. 
> And a long-running rescale might cause other problems like latency increase and 
> back pressure. For some circumstances like a streaming computing cloud 
> service, users may be very sensitive to latency and resource usage. So it 
> would be better to make the rescale a cheaper operation.
> I wonder if we could make it an async operation just like a checkpoint. But how 
> to deal with the keyed state would be a pain in the ass. Currently I just 
> want to make some assumptions to keep things simpler: the async rescale 
> operation can only double the parallelism or halve it.
> In the scale-up circumstance, we can copy the state to the newly created 
> worker and change the partitioner of the upstream. The best timing might be 
> when we get notified that a checkpoint completed. But we still need to change the 
> partitioner of the upstream. So the upstream should buffer the result or block 
> the computation until the state copy has finished, then make the partitioner 
> send different elements with the same key to the same downstream operator.
> In the scale-down circumstance, we can merge the keyed state of two operators 
> and also change the partitioner of the upstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678462#comment-16678462
 ] 

ASF GitHub Bot commented on FLINK-10676:


sunjincheng121 edited a comment on issue #6949: [FLINK-10676][table] Add 'as' 
method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436683222
 
 
   @hequn8128 Thanks for the update!
   +1 to merge.
   
   Thanks,
   Jincheng


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The preceding clause of an OVER window in traditional databases is optional. 
> The default is UNBOUNDED. So we can add the "as" method to 
> OverWindowWithOrderBy. This way the OVER window is written more easily, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  
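
A hedged illustration (not part of the ticket text) of the shortened call in the Java String-expression Table API; `table` and the field names are placeholders:

{code:java}
Table result = table
	// with the new "as" method, preceding defaults to UNBOUNDED
	.window(Over.partitionBy("c").orderBy("proctime").as("w"))
	.select("c, b.count over w");
{code}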



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] sunjincheng121 edited a comment on issue #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 edited a comment on issue #6949: [FLINK-10676][table] Add 'as' 
method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436683222
 
 
   @hequn8128 Thanks for the update!
   +1 to merge.
   
   Thanks,
   Jincheng


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method 
for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436683222
 
 
   @hequn8128 Thanks for the update!
   + 1 to merge.
   
   Thanks,
   Jincheng


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678461#comment-16678461
 ] 

ASF GitHub Bot commented on FLINK-10676:


sunjincheng121 commented on issue #6949: [FLINK-10676][table] Add 'as' method 
for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436683222
 
 
   @hequn8128 Thanks for the update!
   + 1 to merge.
   
   Thanks,
   Jincheng


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The preceding clause of an OVER window in traditional databases is optional. 
> The default is UNBOUNDED. So we can add the "as" method to 
> OverWindowWithOrderBy. This way the OVER window is written more easily, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Shimin Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678456#comment-16678456
 ] 

Shimin Yang edited comment on FLINK-10815 at 11/7/18 4:19 PM:
--

[~till.rohrmann], I would like to hear your opinion before diving into more 
detail, since I am not very sure whether this is a nice feature and whether it 
is worth working on. Currently, it's more like a proposal.


was (Author: dangdangdang):
[~till.rohrmann], I would like to hear your opinion before diving into more 
detail, since I am not very sure whether this is a nice feature and whether it 
is worth working on.

> Rethink the rescale operation, can we do it async
> -
>
> Key: FLINK-10815
> URL: https://issues.apache.org/jira/browse/FLINK-10815
> Project: Flink
>  Issue Type: Improvement
>  Components: ResourceManager, Scheduler
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>
> Currently, the rescale operation is to stop the whole job and restart it with 
> different parallelism. But the rescale operation costs a lot and takes a long 
> time to recover if the state size is quite big. 
> And a long-running rescale might cause other problems like latency increase and 
> back pressure. For some circumstances like a streaming computing cloud 
> service, users may be very sensitive to latency and resource usage. So it 
> would be better to make the rescale a cheaper operation.
> I wonder if we could make it an async operation just like a checkpoint. But how 
> to deal with the keyed state would be a pain in the ass. Currently I just 
> want to make some assumptions to keep things simpler: the async rescale 
> operation can only double the parallelism or halve it.
> In the scale-up circumstance, we can copy the state to the newly created 
> worker and change the partitioner of the upstream. The best timing might be 
> when we get notified that a checkpoint completed. But we still need to change the 
> partitioner of the upstream. So the upstream should buffer the result or block 
> the computation until the state copy has finished, then make the partitioner 
> send different elements with the same key to the same downstream operator.
> In the scale-down circumstance, we can merge the keyed state of two operators 
> and also change the partitioner of the upstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10809) Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state after restore

2018-11-07 Thread Stefan Richter (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter reassigned FLINK-10809:
--

Assignee: Stefan Richter

> Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state 
> after restore
> ---
>
> Key: FLINK-10809
> URL: https://issues.apache.org/jira/browse/FLINK-10809
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing
>Affects Versions: 1.7.0
>Reporter: Dawid Wysakowicz
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
>
> I've tried using {{DataStreamUtils.reinterpretAsKeyedStream}} for results of 
> windowed aggregation:
> {code}
> DataStream<Tuple2<Integer, List<Event>>> eventStream4 = eventStream2.keyBy(Event::getKey)
> 	.window(SlidingEventTimeWindows.of(Time.milliseconds(150 * 3), Time.milliseconds(150)))
> 	.apply(new WindowFunction<Event, Tuple2<Integer, List<Event>>, Integer, TimeWindow>() {
> 		private static final long serialVersionUID = 3166250579972849440L;
> 		@Override
> 		public void apply(
> 				Integer key, TimeWindow window, Iterable<Event> input,
> 				Collector<Tuple2<Integer, List<Event>>> out) throws Exception {
> 			out.collect(Tuple2.of(key, StreamSupport.stream(input.spliterator(), false).collect(Collectors.toList())));
> 		}
> 	});
> DataStreamUtils.reinterpretAsKeyedStream(eventStream4, events -> events.f0)
> 	.flatMap(createSlidingWindowCheckMapper(pt))
> 	.addSink(new PrintSinkFunction<>());
> {code}
> and then in the createSlidingWindowCheckMapper I verify that each event 
> belongs to 3 consecutive windows, for which I keep the contents of the last 
> window in ValueState. In a non-failure setup this check runs fine, but it 
> misses a few windows after restore at the beginning.
> {code}
> public class SlidingWindowCheckMapper extends RichFlatMapFunction<Tuple2<Integer, List<Event>>, String> {
> 	private static final long serialVersionUID = -744070793650644485L;
> 	/** This value state tracks previously seen events with the number of windows they appeared in. */
> 	private transient ValueState<List<Tuple2<Event, Integer>>> previousWindow;
> 	private final int slideFactor;
> 	SlidingWindowCheckMapper(int slideFactor) {
> 		this.slideFactor = slideFactor;
> 	}
> 	@Override
> 	public void open(Configuration parameters) throws Exception {
> 		ValueStateDescriptor<List<Tuple2<Event, Integer>>> previousWindowDescriptor =
> 			new ValueStateDescriptor<>("previousWindow",
> 				new ListTypeInfo<>(new TupleTypeInfo<>(TypeInformation.of(Event.class), BasicTypeInfo.INT_TYPE_INFO)));
> 		previousWindow = getRuntimeContext().getState(previousWindowDescriptor);
> 	}
> 	@Override
> 	public void flatMap(Tuple2<Integer, List<Event>> value, Collector<String> out) throws Exception {
> 		List<Tuple2<Event, Integer>> previousWindowValues = Optional.ofNullable(previousWindow.value()).orElseGet(
> 			Collections::emptyList);
> 		List<Event> newValues = value.f1;
> 		newValues.stream().reduce(new BinaryOperator<Event>() {
> 			@Override
> 			public Event apply(Event event, Event event2) {
> 				if (event2.getSequenceNumber() - 1 != event.getSequenceNumber()) {
> 					out.collect("Alert: events in window out of order!");
> 				}
> 				return event2;
> 			}
> 		});
> 		List<Tuple2<Event, Integer>> newWindow = new ArrayList<>();
> 		for (Tuple2<Event, Integer> windowValue : previousWindowValues) {
> 			if (!newValues.contains(windowValue.f0)) {
> 				out.collect(String.format("Alert: event %s did not belong to %d consecutive windows. Event seen so far %d times. Current window: %s",
> 					windowValue.f0,
> 					slideFactor,
> 					windowValue.f1,
> 					value.f1));
> 			} else {
> 				newValues.remove(windowValue.f0);
> 				if (windowValue.f1 + 1 != slideFactor) {
> 					newWindow.add(Tuple2.of(windowValue.f0, windowValue.f1 + 1));
> 				}
> 			}
>  

[jira] [Commented] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Shimin Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678456#comment-16678456
 ] 

Shimin Yang commented on FLINK-10815:
-

[~till.rohrmann], I would like to hear your opinion before diving into more 
detail, since I am not very sure whether this is a nice feature and whether it 
is worth working on.

> Rethink the rescale operation, can we do it async
> -
>
> Key: FLINK-10815
> URL: https://issues.apache.org/jira/browse/FLINK-10815
> Project: Flink
>  Issue Type: Improvement
>  Components: ResourceManager, Scheduler
>Reporter: Shimin Yang
>Assignee: Shimin Yang
>Priority: Major
>
> Currently, the rescale operation is to stop the whole job and restart it with 
> different parallelism. But the rescale operation costs a lot and takes a long 
> time to recover if the state size is quite big. 
> And a long-running rescale might cause other problems like latency increase and 
> back pressure. For some circumstances like a streaming computing cloud 
> service, users may be very sensitive to latency and resource usage. So it 
> would be better to make the rescale a cheaper operation.
> I wonder if we could make it an async operation just like a checkpoint. But how 
> to deal with the keyed state would be a pain in the ass. Currently I just 
> want to make some assumptions to keep things simpler: the async rescale 
> operation can only double the parallelism or halve it.
> In the scale-up circumstance, we can copy the state to the newly created 
> worker and change the partitioner of the upstream. The best timing might be 
> when we get notified that a checkpoint completed. But we still need to change the 
> partitioner of the upstream. So the upstream should buffer the result or block 
> the computation until the state copy has finished, then make the partitioner 
> send different elements with the same key to the same downstream operator.
> In the scale-down circumstance, we can merge the keyed state of two operators 
> and also change the partitioner of the upstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10809) Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state after restore

2018-11-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10809:
---
Labels: pull-request-available  (was: )

> Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state 
> after restore
> ---
>
> Key: FLINK-10809
> URL: https://issues.apache.org/jira/browse/FLINK-10809
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing
>Affects Versions: 1.7.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> I've tried using {{DataStreamUtils.reinterpretAsKeyedStream}} for results of 
> windowed aggregation:
> {code}
> DataStream<Tuple2<Integer, List<Event>>> eventStream4 = eventStream2.keyBy(Event::getKey)
> 	.window(SlidingEventTimeWindows.of(Time.milliseconds(150 * 3), Time.milliseconds(150)))
> 	.apply(new WindowFunction<Event, Tuple2<Integer, List<Event>>, Integer, TimeWindow>() {
> 		private static final long serialVersionUID = 3166250579972849440L;
> 		@Override
> 		public void apply(
> 				Integer key, TimeWindow window, Iterable<Event> input,
> 				Collector<Tuple2<Integer, List<Event>>> out) throws Exception {
> 			out.collect(Tuple2.of(key, StreamSupport.stream(input.spliterator(), false).collect(Collectors.toList())));
> 		}
> 	});
> DataStreamUtils.reinterpretAsKeyedStream(eventStream4, events -> events.f0)
> 	.flatMap(createSlidingWindowCheckMapper(pt))
> 	.addSink(new PrintSinkFunction<>());
> {code}
> and then in the createSlidingWindowCheckMapper I verify that each event 
> belongs to 3 consecutive windows, for which I keep the contents of the last 
> window in ValueState. In a non-failure setup this check runs fine, but it 
> misses a few windows after restore at the beginning.
> {code}
> public class SlidingWindowCheckMapper extends RichFlatMapFunction<Tuple2<Integer, List<Event>>, String> {
> 	private static final long serialVersionUID = -744070793650644485L;
> 	/** This value state tracks previously seen events with the number of windows they appeared in. */
> 	private transient ValueState<List<Tuple2<Event, Integer>>> previousWindow;
> 	private final int slideFactor;
> 	SlidingWindowCheckMapper(int slideFactor) {
> 		this.slideFactor = slideFactor;
> 	}
> 	@Override
> 	public void open(Configuration parameters) throws Exception {
> 		ValueStateDescriptor<List<Tuple2<Event, Integer>>> previousWindowDescriptor =
> 			new ValueStateDescriptor<>("previousWindow",
> 				new ListTypeInfo<>(new TupleTypeInfo<>(TypeInformation.of(Event.class), BasicTypeInfo.INT_TYPE_INFO)));
> 		previousWindow = getRuntimeContext().getState(previousWindowDescriptor);
> 	}
> 	@Override
> 	public void flatMap(Tuple2<Integer, List<Event>> value, Collector<String> out) throws Exception {
> 		List<Tuple2<Event, Integer>> previousWindowValues = Optional.ofNullable(previousWindow.value()).orElseGet(
> 			Collections::emptyList);
> 		List<Event> newValues = value.f1;
> 		newValues.stream().reduce(new BinaryOperator<Event>() {
> 			@Override
> 			public Event apply(Event event, Event event2) {
> 				if (event2.getSequenceNumber() - 1 != event.getSequenceNumber()) {
> 					out.collect("Alert: events in window out of order!");
> 				}
> 				return event2;
> 			}
> 		});
> 		List<Tuple2<Event, Integer>> newWindow = new ArrayList<>();
> 		for (Tuple2<Event, Integer> windowValue : previousWindowValues) {
> 			if (!newValues.contains(windowValue.f0)) {
> 				out.collect(String.format("Alert: event %s did not belong to %d consecutive windows. Event seen so far %d times. Current window: %s",
> 					windowValue.f0,
> 					slideFactor,
> 					windowValue.f1,
> 					value.f1));
> 			} else {
> 				newValues.remove(windowValue.f0);
> 				if (windowValue.f1 + 1 != slideFactor) {
> 					newWindow.add(Tuple2.of(windowValue.f0, windowValue.f1 + 1));
> 				}
> 			}
> 		}
> 	}
> new

[jira] [Commented] (FLINK-10809) Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state after restore

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678454#comment-16678454
 ] 

ASF GitHub Bot commented on FLINK-10809:


StefanRRichter opened a new pull request #7048: [FLINK-10809][state] Include 
keyed state that is not from head operat…
URL: https://github.com/apache/flink/pull/7048
 
 
   …ors in state assignment
   
   ## What is the purpose of the change
   
   This PR includes keyed state that was not from a head operator (head of 
operator chain) in the state assignment. This fixes problems with restoring 
keyed state for operators after `DataStreamUtils.reinterpretAsKeyedStream`.
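   
   A hedged usage sketch of the affected API (placeholder names, not code from the PR): the map() after keyBy() keeps the partitioning, and the reinterpreted stream then carries keyed state in a non-head operator, which is the restore case this PR fixes:
   
{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReinterpretSketch {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// Already partitioned exactly as keyBy(w -> w) would partition it.
		DataStream<String> partitioned = env.fromElements("a", "b", "a")
			.keyBy(w -> w)
			.map(w -> w);

		// Reinterpret without a new shuffle; downstream operators may use keyed state.
		KeyedStream<String, String> reinterpreted =
			DataStreamUtils.reinterpretAsKeyedStream(partitioned, w -> w);

		reinterpreted.print();
		env.execute("reinterpretAsKeyedStream sketch");
	}
}
{code}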
   
   
   ## Brief change log
   
   Remove a check if keyed state is from a head operator in the state 
assignment algorithm. This was an optimization from the times where Flink only 
allowed keyed state in the head operators (like what happens after every 
`keyBy`).
   
   
   ## Verifying this change
   
   Extended `ReinterpretDataStreamAsKeyedStreamITCase` with a recovery cycle to 
test proper state restore of non-head operators.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Using DataStreamUtils.reinterpretAsKeyedStream produces corrupted keyed state 
> after restore
> ---
>
> Key: FLINK-10809
> URL: https://issues.apache.org/jira/browse/FLINK-10809
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, State Backends, Checkpointing
>Affects Versions: 1.7.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
>
> I've tried using {{DataStreamUtils.reinterpretAsKeyedStream}} for results of 
> windowed aggregation:
> {code}
> DataStream<Tuple2<Integer, List<Event>>> eventStream4 = eventStream2.keyBy(Event::getKey)
> 	.window(SlidingEventTimeWindows.of(Time.milliseconds(150 * 3), Time.milliseconds(150)))
> 	.apply(new WindowFunction<Event, Tuple2<Integer, List<Event>>, Integer, TimeWindow>() {
> 		private static final long serialVersionUID = 3166250579972849440L;
> 		@Override
> 		public void apply(
> 				Integer key, TimeWindow window, Iterable<Event> input,
> 				Collector<Tuple2<Integer, List<Event>>> out) throws Exception {
> 			out.collect(Tuple2.of(key, StreamSupport.stream(input.spliterator(), false).collect(Collectors.toList())));
> 		}
> 	});
> DataStreamUtils.reinterpretAsKeyedStream(eventStream4, events -> events.f0)
> 	.flatMap(createSlidingWindowCheckMapper(pt))
> 	.addSink(new PrintSinkFunction<>());
> {code}
> and then in the createSlidingWindowCheckMapper I verify that each event 
> belongs to 3 consecutive windows, for which I keep the contents of the last 
> window in ValueState. In a non-failure setup this check runs fine, but it 
> misses a few windows after restore at the beginning.
> {code}
> public class SlidingWindowCheckMapper extends RichFlatMapFunction<Tuple2<Integer, List<Event>>, String> {
> 	private static final long serialVersionUID = -744070793650644485L;
> 	/** This value state tracks previously seen events with the number of windows they appeared in. */
> 	private transient ValueState<List<Tuple2<Event, Integer>>> previousWindow;
> 	private final int slideFactor;
> 	SlidingWindowCheckMapper(int slideFactor) {
> 		this.slideFactor = slideFactor;
> 	}
> 	@Override
> 	public void open(Configuration parameters) throws Exception {
> 		ValueStateDescriptor<List<Tuple2<Event, Integer>>> previousWindowDescriptor =
> 			new ValueStateDescriptor<>("previousWindow",
> 				new ListTypeInfo<>(new TupleTypeInf

[GitHub] StefanRRichter opened a new pull request #7048: [FLINK-10809][state] Include keyed state that is not from head operat…

2018-11-07 Thread GitBox
StefanRRichter opened a new pull request #7048: [FLINK-10809][state] Include 
keyed state that is not from head operat…
URL: https://github.com/apache/flink/pull/7048
 
 
   …ors in state assignment
   
   ## What is the purpose of the change
   
   This PR includes keyed state that was not from a head operator (head of 
operator chain) in the state assignment. This fixes problems with restoring 
keyed state for operators after `DataStreamUtils.reinterpretAsKeyedStream`.
   
   
   ## Brief change log
   
   Remove a check if keyed state is from a head operator in the state 
assignment algorithm. This was an optimization from the times where Flink only 
allowed keyed state in the head operators (like what happens after every 
`keyBy`).
   
   
   ## Verifying this change
   
   Extended `ReinterpretDataStreamAsKeyedStreamITCase` with a recovery cycle to 
test proper state restore of non-head operators.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678451#comment-16678451
 ] 

ASF GitHub Bot commented on FLINK-10676:


hequn8128 commented on issue #6949: [FLINK-10676][table] Add 'as' method for 
OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436680634
 
 
   Hi @sunjincheng121 , thanks a lot for your review. I think all of the 
suggestions are good! I have addressed the comments and updated the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The preceding clause of an OVER window in traditional databases is optional. 
> The default is UNBOUNDED. So we can add the "as" method to 
> OverWindowWithOrderBy. This way the OVER window is written more easily, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hequn8128 commented on issue #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
hequn8128 commented on issue #6949: [FLINK-10676][table] Add 'as' method for 
OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#issuecomment-436680634
 
 
   Hi @sunjincheng121 , thanks a lot for your review. I think all of the 
suggestions are good! I have addressed the comments and updated the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-10815) Rethink the rescale operation, can we do it async

2018-11-07 Thread Shimin Yang (JIRA)
Shimin Yang created FLINK-10815:
---

 Summary: Rethink the rescale operation, can we do it async
 Key: FLINK-10815
 URL: https://issues.apache.org/jira/browse/FLINK-10815
 Project: Flink
  Issue Type: Improvement
  Components: ResourceManager, Scheduler
Reporter: Shimin Yang
Assignee: Shimin Yang


Currently, the rescale operation is to stop the whole job and restart it with 
different parallelism. But the rescale operation costs a lot and takes a long 
time to recover if the state size is quite big. 

And a long-running rescale might cause other problems like latency increase and 
back pressure. For some circumstances like a streaming computing cloud service, 
users may be very sensitive to latency and resource usage. So it would be 
better to make the rescale a cheaper operation.

I wonder if we could make it an async operation just like a checkpoint. But how 
to deal with the keyed state would be a pain in the ass. Currently I just want 
to make some assumptions to keep things simpler: the async rescale operation can 
only double the parallelism or halve it.

In the scale-up circumstance, we can copy the state to the newly created worker 
and change the partitioner of the upstream. The best timing might be when we get 
notified that a checkpoint completed. But we still need to change the partitioner 
of the upstream. So the upstream should buffer the result or block the computation 
until the state copy has finished, then make the partitioner send different 
elements with the same key to the same downstream operator.

In the scale-down circumstance, we can merge the keyed state of two operators 
and also change the partitioner of the upstream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] kl0u commented on issue #7047: [FLINK-10522] Check if RecoverableWriter supportsResume() and act accordingly.

2018-11-07 Thread GitBox
kl0u commented on issue #7047: [FLINK-10522] Check if RecoverableWriter 
supportsResume() and act accordingly.
URL: https://github.com/apache/flink/pull/7047#issuecomment-436675707
 
 
   R @azagrebin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10522) Check if RecoverableWriter supportsResume and act accordingly.

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678437#comment-16678437
 ] 

ASF GitHub Bot commented on FLINK-10522:


kl0u commented on issue #7047: [FLINK-10522] Check if RecoverableWriter 
supportsResume() and act accordingly.
URL: https://github.com/apache/flink/pull/7047#issuecomment-436675707
 
 
   R @azagrebin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Check if RecoverableWriter supportsResume and act accordingly.
> --
>
> Key: FLINK-10522
> URL: https://issues.apache.org/jira/browse/FLINK-10522
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> So far we assumed that all `RecoverableWriters` support "resuming", i.e. 
> after recovering from a failure or from a savepoint they could keep writing 
> to the previously "in-progress" file. This assumption holds for all current 
> writers, but in order to also accommodate filesystems that may not 
> support this operation, we should check upon initialization whether the writer 
> supports resuming: if yes, we proceed as before; if not, we recover for commit 
> and commit the previously in-progress file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10522) Check if RecoverableWriter supportsResume and act accordingly.

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678420#comment-16678420
 ] 

ASF GitHub Bot commented on FLINK-10522:


kl0u opened a new pull request #7047: [FLINK-10522] Check if RecoverableWriter 
supportsResume() and act accordingly.
URL: https://github.com/apache/flink/pull/7047
 
 
   ## What is the purpose of the change
   
   So far, the `StreamingFileSink` was always resuming when recovering from a 
checkpoint. This is a valid approach, as all implementations of 
`RecoverableWriter` support resuming.
   
   Given that the `RecoverableWriter` also allows querying whether the writer 
supports resuming, and that in the future we may introduce writers that 
do not support resume (e.g. Hadoop versions that do not allow `truncate`), this 
PR leverages this method and, depending on the outcome, either resumes writing to 
the pre-failure in-progress file, or simply commits the file and opens a new 
in-progress one.
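   
   A minimal sketch of that branch (`fsWriter`, `resumable`, `inProgressPart`, and the resume helper are placeholder names, not the PR's exact code):
   
{code:java}
if (fsWriter.supportsResume()) {
	// Keep writing to the pre-failure in-progress file.
	inProgressPart = resumeInProgressFile(resumable); // placeholder helper
} else {
	// No resume support: commit the recovered in-progress file and
	// let a fresh in-progress file be opened on the next write.
	fsWriter.recoverForCommit(resumable).commitAfterRecovery();
	inProgressPart = null;
}
{code}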
   
   
   ## Verifying this change
   
   Added unit tests under the `BucketTest` class. 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (**yes** / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Check if RecoverableWriter supportsResume and act accordingly.
> --
>
> Key: FLINK-10522
> URL: https://issues.apache.org/jira/browse/FLINK-10522
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> So far we assumed that all `RecoverableWriters` support "resuming", i.e. 
> after recovering from a failure or from a savepoint they could keep writing 
> to the previously "in-progress" file. This assumption holds for all current 
> writers, but in order to also accommodate file systems that may not support 
> this operation, we should check upon initialization whether the writer 
> supports resuming; if so, we proceed as before, and if not, we recover for 
> commit and commit the previously in-progress file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10522) Check if RecoverableWriter supportsResume and act accordingly.

2018-11-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10522:
---
Labels: pull-request-available  (was: )

> Check if RecoverableWriter supportsResume and act accordingly.
> --
>
> Key: FLINK-10522
> URL: https://issues.apache.org/jira/browse/FLINK-10522
> Project: Flink
>  Issue Type: Sub-task
>  Components: filesystem-connector
>Affects Versions: 1.6.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> So far we assumed that all `RecoverableWriters` support "resuming", i.e. 
> after recovering from a failure or from a savepoint they could keep writing 
> to the previously "in-progress" file. This assumption holds for all current 
> writers, but in order to also accommodate file systems that may not support 
> this operation, we should check upon initialization whether the writer 
> supports resuming; if so, we proceed as before, and if not, we recover for 
> commit and commit the previously in-progress file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] kl0u opened a new pull request #7047: [FLINK-10522] Check if RecoverableWriter supportsResume() and act accordingly.

2018-11-07 Thread GitBox
kl0u opened a new pull request #7047: [FLINK-10522] Check if RecoverableWriter 
supportsResume() and act accordingly.
URL: https://github.com/apache/flink/pull/7047
 
 
   ## What is the purpose of the change
   
   So far, the `StreamingFileSink` always resumed when recovering from a 
checkpoint. This is a valid approach, as all implementations of a 
`RecoverableWriter` support resuming.
   
   Given that the `RecoverableWriter` also allows querying whether the writer 
supports resuming, and that in the future we may introduce writers that do not 
support resume (e.g. Hadoop versions that do not allow `truncate`), this PR 
leverages this method and, depending on the outcome, either resumes writing to 
the pre-failure in-progress file or simply commits that file and opens a new 
in-progress one.
   
   
   ## Verifying this change
   
   Added unit tests under the `BucketTest` class. 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (**yes** / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10803) Add documentation about S3 support by the StreamingFileSink

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678409#comment-16678409
 ] 

ASF GitHub Bot commented on FLINK-10803:


kl0u commented on issue #7046: [FLINK-10803] Update the documentation to 
include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046#issuecomment-436667078
 
 
   R @GJL , @twalthr , @aljoscha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add documentation about S3 support by the StreamingFileSink
> ---
>
> Key: FLINK-10803
> URL: https://issues.apache.org/jira/browse/FLINK-10803
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-10361) Elasticsearch (v6.3.1) sink end-to-end test instable

2018-11-07 Thread Timo Walther (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reopened FLINK-10361:
--
  Assignee: (was: Biao Liu)

Apparently, this issue is not fixed entirely. I got the same exception in 
Travis today:

{code}
2018-11-07 15:19:03,440 INFO  org.apache.flink.runtime.taskmanager.Task 
- Source: Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
(4436f1f70cbac0f66173f75bf215d1d3) switched from RUNNING to FAILED.
java.io.IOException: Connection refused
at 
org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:728)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
at 
org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
at 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
at 
org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
at 
org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:81)
at 
org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:47)
at 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:296)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at 
org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:424)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:290)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171)
at 
org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145)
at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348)
at 
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192)
at 
org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64)
... 1 more
{code}

> Elasticsearch (v6.3.1) sink end-to-end test instable
> 
>
> Key: FLINK-10361
> URL: https://issues.apache.org/jira/browse/FLINK-10361
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.6.0, 1.7.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.7.0
>
> Attachments: flink-elasticsearch-logs.tgz
>
>
> The Elasticsearch (v6.3.1) sink end-to-end test is instable. Running it on an 
> Amazon instance it failed with the following exception in the logs:
> {code}
> 2018-09-17 20:46:04,856 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
> (cb23fdd9df0d4e09270b2ae9970efbac) switched from RUNNING to FAILED.
> java.io.IOException: Connection refused
>   at 
> org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:728)
>   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
>   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:198)
>   at 
> org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:522)
>   at 
> org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:81)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.createClient(Elasticsearch6ApiCallBridge.java:47)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:296)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator

[GitHub] kl0u commented on issue #7046: [FLINK-10803] Update the documentation to include changes to the S3 connector.

2018-11-07 Thread GitBox
kl0u commented on issue #7046: [FLINK-10803] Update the documentation to 
include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046#issuecomment-436667078
 
 
   R @GJL , @twalthr , @aljoscha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10803) Add documentation about S3 support by the StreamingFileSink

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678406#comment-16678406
 ] 

ASF GitHub Bot commented on FLINK-10803:


kl0u opened a new pull request #7046: [FLINK-10803] Update the documentation to 
include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046
 
 
   This PR simply documents already introduced changes.
   
   To build the docs and verify the change, go to the `FLINK_DIR/docs/` 
directory and run `./build_docs.sh -p`. The documentation will be available at 
`localhost:4000`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add documentation about S3 support by the StreamingFileSink
> ---
>
> Key: FLINK-10803
> URL: https://issues.apache.org/jira/browse/FLINK-10803
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10803) Add documentation about S3 support by the StreamingFileSink

2018-11-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-10803:
---
Labels: pull-request-available  (was: )

> Add documentation about S3 support by the StreamingFileSink
> ---
>
> Key: FLINK-10803
> URL: https://issues.apache.org/jira/browse/FLINK-10803
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] kl0u opened a new pull request #7046: [FLINK-10803] Update the documentation to include changes to the S3 connector.

2018-11-07 Thread GitBox
kl0u opened a new pull request #7046: [FLINK-10803] Update the documentation to 
include changes to the S3 connector.
URL: https://github.com/apache/flink/pull/7046
 
 
   This PR simply documents already introduced changes.
   
   To build the docs and verify the change, go to the `FLINK_DIR/docs/` 
directory and run `./build_docs.sh -p`. The documentation will be available at 
`localhost:4000`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678405#comment-16678405
 ] 

ASF GitHub Bot commented on FLINK-10676:


hequn8128 commented on a change in pull request #6949: [FLINK-10676][table] Add 
'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231555804
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -1571,13 +1571,15 @@ The `OverWindow` defines a range of rows over which 
aggregates are computed. `Ov
 
 
   preceding
-  Required
+  Optional
   
 Defines the interval of rows that are included in the window and 
precede the current row. The interval can either be specified as time or 
row-count interval.
 
 Bounded over 
windows are specified with the size of the interval, e.g., 
10.minutes for a time interval or 10.rows for a 
row-count interval.
 
 Unbounded over 
windows are specified using a constant, i.e., UNBOUNDED_RANGE 
for a time interval or UNBOUNDED_ROW for a row-count interval. 
Unbounded over windows start with the first row of a partition.
+
+If the preceding and following clause 
both are omitted, RANGE UNBOUNDED PRECEDING AND CURRENT ROW is used as default 
for window.
 
 Review comment:
   Makes sense!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> In traditional databases, the preceding clause of an OVER window is optional; 
> the default is UNBOUNDED. So we can add an "as" method to 
> OverWindowWithOrderBy, which makes an OVER window easier to write, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] hequn8128 commented on a change in pull request #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
hequn8128 commented on a change in pull request #6949: [FLINK-10676][table] Add 
'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231555804
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -1571,13 +1571,15 @@ The `OverWindow` defines a range of rows over which 
aggregates are computed. `Ov
 
 
   preceding
-  Required
+  Optional
   
 Defines the interval of rows that are included in the window and 
precede the current row. The interval can either be specified as time or 
row-count interval.
 
 Bounded over 
windows are specified with the size of the interval, e.g., 
10.minutes for a time interval or 10.rows for a 
row-count interval.
 
 Unbounded over 
windows are specified using a constant, i.e., UNBOUNDED_RANGE 
for a time interval or UNBOUNDED_ROW for a row-count interval. 
Unbounded over windows start with the first row of a partition.
+
+If the preceding and following clause 
both are omitted, RANGE UNBOUNDED PRECEDING AND CURRENT ROW is used as default 
for window.
 
 Review comment:
   Makes sense!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] twalthr opened a new pull request #7045: [hotfix] Update nightly master cron jobs

2018-11-07 Thread GitBox
twalthr opened a new pull request #7045: [hotfix] Update nightly master cron 
jobs
URL: https://github.com/apache/flink/pull/7045
 
 
   ## What is the purpose of the change
   
   This PR updates the nightly master cron jobs to be in sync with the 
available end-to-end tests.
   
   ## Brief change log
   
   - Missing end-to-end tests added
   - Redundant end-to-end tests removed
   
   
   ## Verifying this change
   
   Not verified yet, as builds are failing. We might need to further stabilize 
the tests and further split the `misc` tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678374#comment-16678374
 ] 

ASF GitHub Bot commented on FLINK-10676:


sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231500525
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -1571,13 +1571,15 @@ The `OverWindow` defines a range of rows over which 
aggregates are computed. `Ov
 
 
   preceding
-  Required
+  Optional
   
 Defines the interval of rows that are included in the window and 
precede the current row. The interval can either be specified as time or 
row-count interval.
 
 Bounded over 
windows are specified with the size of the interval, e.g., 
10.minutes for a time interval or 10.rows for a 
row-count interval.
 
 Unbounded over 
windows are specified using a constant, i.e., UNBOUNDED_RANGE 
for a time interval or UNBOUNDED_ROW for a row-count interval. 
Unbounded over windows start with the first row of a partition.
+
+If the preceding and following clause 
both are omitted, RANGE UNBOUNDED PRECEDING AND CURRENT ROW is used as default 
for window.
 
 Review comment:
   
   UNBOUNDED_RANGE and CURRENT_RANGE appear in pairs, so the following 
description is recommended: "If the preceding clause is omitted, 
UNBOUNDED_RANGE and CURRENT_RANGE are used as the default for the window." 
What do you think?
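   
   For readers following along, a hypothetical sketch of the simplified Java 
API usage this ticket aims for (the table and field names and the select 
expression are illustrative assumptions, not code from the PR):
   
   ```java
   import org.apache.flink.table.api.Table;
   import org.apache.flink.table.api.java.Over;
   
   public class OverWindowAsSketch {
   
       // 'table' is assumed to be an existing Table with fields a, c, proctime.
       static Table simplified(Table table) {
           // Before this change, the preceding clause had to be spelled out:
           //   Over.partitionBy("c").orderBy("proctime")
           //       .preceding("unbounded_range").as("w")
           // With the new 'as' method, omitting preceding falls back to the
           // UNBOUNDED_RANGE / CURRENT_RANGE pair discussed above.
           return table
               .window(Over.partitionBy("c").orderBy("proctime").as("w"))
               .select("c, a.count over w");
       }
   }
   ```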


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> In traditional databases, the preceding clause of an OVER window is optional; 
> the default is UNBOUNDED. So we can add an "as" method to 
> OverWindowWithOrderBy, which makes an OVER window easier to write, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678376#comment-16678376
 ] 

ASF GitHub Bot commented on FLINK-10676:


sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231500755
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala
 ##
 @@ -18,8 +18,8 @@
 
 package org.apache.flink.table.api.scala
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, 
SlideWithSize, SessionWithGap}
-import org.apache.flink.table.expressions.Expression
+import org.apache.flink.table.api._
 
 Review comment:
   Same as java/windows.scala comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> In traditional databases, the preceding clause of an OVER window is optional; 
> the default is UNBOUNDED. So we can add an "as" method to 
> OverWindowWithOrderBy, which makes an OVER window easier to write, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10676) Add 'as' method for OverWindowWithOrderBy in Java API

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678375#comment-16678375
 ] 

ASF GitHub Bot commented on FLINK-10676:


sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231493947
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ##
 @@ -18,7 +18,8 @@
 
 package org.apache.flink.table.api.java
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, 
SlideWithSize, SessionWithGap}
+import org.apache.flink.table.api.scala.{CURRENT_RANGE, UNBOUNDED_RANGE}
+import org.apache.flink.table.api._
 
 Review comment:
   I think using `import org.apache.flink.table.api.{OverWindow, 
TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
   ` is better than using a wildcard, because there are not many classes 
imported here. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add 'as' method for OverWindowWithOrderBy in Java API
> -
>
> Key: FLINK-10676
> URL: https://issues.apache.org/jira/browse/FLINK-10676
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.7.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> In traditional databases, the preceding clause of an OVER window is optional; 
> the default is UNBOUNDED. So we can add an "as" method to 
> OverWindowWithOrderBy, which makes an OVER window easier to write, e.g.:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime preceding UNBOUNDED_ROW as 
> 'w){code}
> Can be simplified as follows:
> {code:java}
> .window(Over partitionBy 'c orderBy 'proctime as 'w){code}
> What do you think?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] sunjincheng121 commented on a change in pull request #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231493947
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/java/windows.scala
 ##
 @@ -18,7 +18,8 @@
 
 package org.apache.flink.table.api.java
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, 
SlideWithSize, SessionWithGap}
+import org.apache.flink.table.api.scala.{CURRENT_RANGE, UNBOUNDED_RANGE}
+import org.apache.flink.table.api._
 
 Review comment:
   I think using `import org.apache.flink.table.api.{OverWindow, 
TumbleWithSize, OverWindowWithPreceding, SlideWithSize, SessionWithGap}
   ` is better than using a wildcard, because there are not many classes 
imported here. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sunjincheng121 commented on a change in pull request #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231500755
 
 

 ##
 File path: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/windows.scala
 ##
 @@ -18,8 +18,8 @@
 
 package org.apache.flink.table.api.scala
 
-import org.apache.flink.table.api.{TumbleWithSize, OverWindowWithPreceding, 
SlideWithSize, SessionWithGap}
-import org.apache.flink.table.expressions.Expression
+import org.apache.flink.table.api._
 
 Review comment:
   Same as java/windows.scala comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sunjincheng121 commented on a change in pull request #6949: [FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy

2018-11-07 Thread GitBox
sunjincheng121 commented on a change in pull request #6949: 
[FLINK-10676][table] Add 'as' method for OverWindowWithOrderBy
URL: https://github.com/apache/flink/pull/6949#discussion_r231500525
 
 

 ##
 File path: docs/dev/table/tableApi.md
 ##
 @@ -1571,13 +1571,15 @@ The `OverWindow` defines a range of rows over which 
aggregates are computed. `Ov
 
 
   preceding
-  Required
+  Optional
   
 Defines the interval of rows that are included in the window and 
precede the current row. The interval can either be specified as time or 
row-count interval.
 
 Bounded over 
windows are specified with the size of the interval, e.g., 
10.minutes for a time interval or 10.rows for a 
row-count interval.
 
 Unbounded over 
windows are specified using a constant, i.e., UNBOUNDED_RANGE 
for a time interval or UNBOUNDED_ROW for a row-count interval. 
Unbounded over windows start with the first row of a partition.
+
+If the preceding and following clause 
both are omitted, RANGE UNBOUNDED PRECEDING AND CURRENT ROW is used as default 
for window.
 
 Review comment:
   
   UNBOUNDED_RANGE and CURRENT_RANGE appear in pairs, so the following 
description is recommended: "If the preceding clause is omitted, 
UNBOUNDED_RANGE and CURRENT_RANGE are used as the default for the window." 
What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10213) Task managers cache a negative DNS lookup of the blob server indefinitely

2018-11-07 Thread TisonKun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TisonKun updated FLINK-10213:
-
Fix Version/s: 1.8.0

> Task managers cache a negative DNS lookup of the blob server indefinitely
> -
>
> Key: FLINK-10213
> URL: https://issues.apache.org/jira/browse/FLINK-10213
> Project: Flink
>  Issue Type: Bug
>  Components: TaskManager
>Affects Versions: 1.5.0
>Reporter: Joey Echeverria
>Assignee: Joey Echeverria
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0
>
>
> When the task manager establishes a connection with the resource manager, it 
> gets the hostname and port of the blob server and uses that to create an 
> instance of an {{InetSocketAddress}}. Per the documentation of the 
> constructor:
> {quote}An attempt will be made to resolve the hostname into an InetAddress. 
> If that attempt fails, the address will be flagged as _unresolved_{quote}
> Flink never checks to see if the address was unresolved. Later when executing 
> a task that needs to download from the blob server, it will use that same 
> {{InetSocketAddress}} instance to attempt to connect a {{Socket}}. This will 
> result in an exception similar to:
> {noformat}
> java.io.IOException: Failed to fetch BLOB 
> 97799b827ef073e04178a99f0f40b00e/p-6d8ec2ad31337110819c7c3641fdb18d3793a7fb-72bf00066308f4b4d2a9c5aea593b41f
>  from jobmanager:6124 and store it under 
> /tmp/blobStore-d135961a-03cb-4542-af6d-cf378ff83c12/incoming/temp-00018669
>   at 
> org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:191)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.blob.AbstractBlobCache.getFileInternal(AbstractBlobCache.java:181)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.blob.PermanentBlobCache.getFile(PermanentBlobCache.java:206)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerTask(BlobLibraryCacheManager.java:120)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:863)
>  [flink-dist_2.11-1.5.0.jar:1.5.0]
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:579) 
> [flink-dist_2.11-1.5.0.jar:1.5.0]
>   at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> flink-jobmanager:6124
>   at org.apache.flink.runtime.blob.BlobClient.(BlobClient.java:124) 
> ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:165)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   ... 6 more
> Caused by: java.net.UnknownHostException: jobmanager
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) 
> ~[?:1.8.0_171]
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
> ~[?:1.8.0_171]
>   at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_171]
>   at java.net.Socket.connect(Socket.java:538) ~[?:1.8.0_171]
>   at org.apache.flink.runtime.blob.BlobClient.(BlobClient.java:118) 
> ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   at 
> org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:165)
>  ~[flink-dist_2.11-1.5.0.jar:1.5.0]
>   ... 6 more
> {noformat}
> Since the {{InetSocketAddress}} is re-used, you'll have repeated failures of 
> any tasks that are executed on that task manager and the only current 
> workaround is to manually restart the task manager.
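
A minimal illustration of the failure mode described above (not Flink's fix; 
the host and port are taken from the stack trace):

{code:java}
import java.net.InetSocketAddress;

public class UnresolvedAddressDemo {

	public static void main(String[] args) {
		// An InetSocketAddress resolves the hostname exactly once, at construction.
		InetSocketAddress address = new InetSocketAddress("jobmanager", 6124);
		if (address.isUnresolved()) {
			// Re-creating the address retries the DNS lookup; reusing the old
			// instance pins the negative result indefinitely.
			address = new InetSocketAddress("jobmanager", 6124);
		}
		System.out.println(address);
	}
}
{code}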



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-10812) javadoc-plugin fails for flink-e2e-test-utils

2018-11-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-10812:
-
Summary: javadoc-plugin fails for flink-e2e-test-utils  (was: Snapshot 
deployment broken for master)

> javadoc-plugin fails for flink-e2e-test-utils
> -
>
> Key: FLINK-10812
> URL: https://issues.apache.org/jira/browse/FLINK-10812
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-e2e-test-utils: MavenReportException: Error while creating 
> archive:
> [ERROR] Exit code: 1 - javadoc: error - No public or protected classes found 
> to document.
> [ERROR] 
> [ERROR] Command line was: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/../bin/javadoc @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/home/jenkins/jenkins-slave/workspace/flink-snapshot-deployment/flink-end-to-end-tests/flink-e2e-test-utils/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10813) Automatically search for missing scala suffixes

2018-11-07 Thread Chesnay Schepler (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678352#comment-16678352
 ] 

Chesnay Schepler commented on FLINK-10813:
--

Not as simple, unfortunately. Basically, what the script does is search the 
transitive dependencies of every module for {{scala.lang}} and, if found, 
ensure that the module has a suffix.

It does not account for the scope, however, and rejects multiple modules (like 
flink-metrics-dropwizard). It also doesn't care about provided/optional 
dependencies. Will have to spend some extra time on this.

[~aljoscha] Is it correct to say that "a module needs a scala-suffix if it 
contains Scala code and/or has a compile/runtime (i.e. transitive) dependency 
on an artifact with a scala-suffix"?
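
A minimal sketch of that rule, for the record (illustrative only; this is not 
the actual script, and the names are made up):

{code:java}
import java.util.Set;

final class ScalaSuffixRule {

	// A module needs a scala-suffix iff it contains Scala code or any of its
	// compile/runtime (i.e. transitive) dependencies carries a suffix.
	static boolean needsScalaSuffix(boolean containsScalaCode, Set<String> compileRuntimeDeps) {
		return containsScalaCode
			|| compileRuntimeDeps.stream().anyMatch(a -> a.matches(".*_2\\.1[0-2]$"));
	}
}
{code}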

> Automatically search for missing scala suffixes
> --
>
> Key: FLINK-10813
> URL: https://issues.apache.org/jira/browse/FLINK-10813
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.8.0
>
>
> To ensure issues like FLINK-10811 cannot pop up again we should look into 
> automating the scala-suffix requirements check.
> We already have a script to do this check (tools/verify_scala_suffixes.sh); 
> supposedly we just have to add it to Travis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10803) Add documentation about S3 support by the StreamingFileSink

2018-11-07 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-10803:
--

Assignee: Kostas Kloudas

> Add documentation about S3 support by the StreamingFileSink
> ---
>
> Key: FLINK-10803
> URL: https://issues.apache.org/jira/browse/FLINK-10803
> Project: Flink
>  Issue Type: Bug
>  Components: filesystem-connector
>Affects Versions: 1.7.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10812) javadoc-plugin fails for flink-e2e-test-utils

2018-11-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10812.

Resolution: Fixed

> javadoc-plugin fails for flink-e2e-test-utils
> -
>
> Key: FLINK-10812
> URL: https://issues.apache.org/jira/browse/FLINK-10812
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-e2e-test-utils: MavenReportException: Error while creating 
> archive:
> [ERROR] Exit code: 1 - javadoc: error - No public or protected classes found 
> to document.
> [ERROR] 
> [ERROR] Command line was: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/../bin/javadoc @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/home/jenkins/jenkins-slave/workspace/flink-snapshot-deployment/flink-end-to-end-tests/flink-e2e-test-utils/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10812) Snapshot deployment broken for master

2018-11-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-10812.

Resolution: Fixed

master: 3abf676911ab782ed7dcfcb4f7c3b4bf06a6b2e4
1.7: ef4394188988568f805f40848f420173f36a4b12

> Snapshot deployment broken for master
> -
>
> Key: FLINK-10812
> URL: https://issues.apache.org/jira/browse/FLINK-10812
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-e2e-test-utils: MavenReportException: Error while creating 
> archive:
> [ERROR] Exit code: 1 - javadoc: error - No public or protected classes found 
> to document.
> [ERROR] 
> [ERROR] Command line was: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/../bin/javadoc @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/home/jenkins/jenkins-slave/workspace/flink-snapshot-deployment/flink-end-to-end-tests/flink-e2e-test-utils/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-10812) Snapshot deployment broken for master

2018-11-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-10812:
--

> Snapshot deployment broken for master
> -
>
> Key: FLINK-10812
> URL: https://issues.apache.org/jira/browse/FLINK-10812
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-e2e-test-utils: MavenReportException: Error while creating 
> archive:
> [ERROR] Exit code: 1 - javadoc: error - No public or protected classes found 
> to document.
> [ERROR] 
> [ERROR] Command line was: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/../bin/javadoc @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/home/jenkins/jenkins-slave/workspace/flink-snapshot-deployment/flink-end-to-end-tests/flink-e2e-test-utils/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10625) Add MATCH_RECOGNIZE documentation

2018-11-07 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reassigned FLINK-10625:


Assignee: Dawid Wysakowicz

> Add MATCH_RECOGNIZE documentation
> -
>
> Key: FLINK-10625
> URL: https://issues.apache.org/jira/browse/FLINK-10625
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Assignee: Dawid Wysakowicz
>Priority: Major
> Fix For: 1.7.0
>
>
> The newly added {{MATCH_RECOGNIZE}} functionality needs to be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10814) Kafka examples modules need scala suffix

2018-11-07 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-10814:


Assignee: (was: Chesnay Schepler)

> Kafka examples modules need scala suffix
> 
>
> Key: FLINK-10814
> URL: https://issues.apache.org/jira/browse/FLINK-10814
> Project: Flink
>  Issue Type: Bug
>  Components: Examples, Kafka Connector
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.7.0
>
>
> The kafka examples need scala suffixes just like {{flink-examples-batch}} and 
> {{flink-examples-streaming}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10419) ClassNotFoundException while deserializing user exceptions from checkpointing

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678327#comment-16678327
 ] 

ASF GitHub Bot commented on FLINK-10419:


NicoK commented on issue #7033: [FLINK-10419][checkpoint] fix 
ClassNotFoundException while deserializing user exceptions
URL: https://github.com/apache/flink/pull/7033#issuecomment-436647987
 
 
   True, this only helps with the old non-FLIP-6 code. I guess the 
`declineCheckpoint()` RPC method should instead have a `DeclineCheckpoint` 
parameter, which then properly wraps the exception using this code. At the 
moment I don't have time to follow up on this, though. Closing for now in the 
hope that someone else can take over, maybe re-using this code (or going 
another way).
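   
   For reference, a minimal sketch of the wrapping this refers to (assuming 
Flink's `SerializedThrowable`; its package has moved between releases, so 
treat the import as approximate):
   
   ```java
   import org.apache.flink.util.SerializedThrowable;
   
   final class DeclineCheckpointSketch {
   
       // Capture the user exception's message and stack trace in a JDK-only
       // representation, so the receiving side can deserialize it without the
       // user-code classloader.
       static Throwable makeRpcSafe(Throwable userFailure) {
           return new SerializedThrowable(userFailure);
       }
   }
   ```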


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ClassNotFoundException while deserializing user exceptions from checkpointing
> -
>
> Key: FLINK-10419
> URL: https://issues.apache.org/jira/browse/FLINK-10419
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, State Backends, Checkpointing
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.6.0, 1.6.1, 1.7.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> It seems that somewhere in the operator's failure handling, we hand a 
> user-code exception to the checkpoint coordinator via Java serialization, but 
> deserialization then fails because the class is not available. This results 
> in the following error shadowing the real one:
> {code}
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafka011Exception
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher.loadClass(Launcher.java:338)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.resolveClass(InstantiationUtil.java:76)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1859)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2033)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
> at 
> java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:557)
> at java.lang.Throwable.readObject(Throwable.java:914)
> at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.readObject(RemoteRpcInvocation.java:222)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:502)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:489)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeO

[jira] [Comment Edited] (FLINK-10674) DistinctAccumulator.remove lead to NPE

2018-11-07 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678335#comment-16678335
 ] 

Timo Walther edited comment on FLINK-10674 at 11/7/18 2:55 PM:
---

[~winipanda] have you started working on this already? It would be great if we 
could fix this for Flink 1.7. If not, I would be happy to fix this bug this 
week.


was (Author: twalthr):
[~winipanda] have you started working on this already? It would be great if we 
could fix this for Flink 1.7. If not, I would be happy to fix this bug.

> DistinctAccumulator.remove lead to NPE
> --
>
> Key: FLINK-10674
> URL: https://issues.apache.org/jira/browse/FLINK-10674
> Project: Flink
>  Issue Type: Bug
>  Components: flink-contrib
>Affects Versions: 1.6.1
> Environment: Flink 1.6.0
>Reporter: ambition
>Assignee: winifredtang
>Priority: Minor
> Attachments: image-2018-10-25-14-46-03-373.png
>
>
> Our online Flink job had been running for about a week; the job contains this SQL:
> {code:java}
> select  `time`,  
> lower(trim(os_type)) as os_type, 
> count(distinct feed_id) as feed_total_view  
> from  my_table 
> group by `time`, lower(trim(os_type)){code}
>  
>   Then this NPE occurred: 
>  
> {code:java}
> java.lang.NullPointerException
> at scala.Predef$.Long2long(Predef.scala:363)
> at 
> org.apache.flink.table.functions.aggfunctions.DistinctAccumulator.remove(DistinctAccumulator.scala:109)
> at NonWindowedAggregationHelper$894.retract(Unknown Source)
> at 
> org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:124)
> at 
> org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:39)
> at 
> org.apache.flink.streaming.api.operators.LegacyKeyedProcessOperator.processElement(LegacyKeyedProcessOperator.java:88)
> at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
> at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:745)
> {code}
>  
>  
> See DistinctAccumulator.remove:
> !image-2018-10-25-14-46-03-373.png!
>  
> This NPE is presumably caused by currentCnt being null, so we can simply handle it like:
> {code:java}
> def remove(params: Row): Boolean = {
>   if(!distinctValueMap.contains(params)){
> true
>   }else{
> val currentCnt = distinctValueMap.get(params)
> // a null count is treated like the last reference
> if (currentCnt == null || currentCnt == 1) {
>   distinctValueMap.remove(params)
>   true
> } else {
>   var value = currentCnt - 1L
>   if(value < 0){
> value = 1
>   }
>   distinctValueMap.put(params, value)
>   false
> }
>   }
> }{code}
>  
>  
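
A null-safe variant of the same reference-count decrement, as a Java sketch 
(illustrative only; the actual fix belongs in DistinctAccumulator.scala):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class RefCountSketch {

	// Decrement the count for 'key'. Map.compute removes the mapping when the
	// remapping function returns null, so a missing, null, or exhausted count
	// deletes the entry and the method reports true.
	static <K> boolean remove(Map<K, Long> counts, K key) {
		return counts.compute(key, (k, c) -> (c == null || c <= 1L) ? null : c - 1L) == null;
	}

	public static void main(String[] args) {
		Map<String, Long> counts = new HashMap<>();
		counts.put("feed", 2L);
		System.out.println(remove(counts, "feed")); // false: one reference left
		System.out.println(remove(counts, "feed")); // true: entry removed
	}
}
{code}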



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-10797) "IntelliJ Setup" link is broken in Readme.md

2018-11-07 Thread TisonKun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TisonKun resolved FLINK-10797.
--
Resolution: Fixed

master(1.8-SNAPSHOT) at 
https://github.com/apache/flink/commit/1a0a005eea2247ce2628d99a8bc66e142544594e

1.7 at 
https://github.com/apache/flink/commit/9db96301f22403fe84bb279df1f348dff7a2974f

> "IntelliJ Setup" link is broken in Readme.md
> 
>
> Key: FLINK-10797
> URL: https://issues.apache.org/jira/browse/FLINK-10797
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.0
>Reporter: Xiening Dai
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> The link points to 
> https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#intellij-idea
>  which is a 404 not found.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10814) Kafka examples modules need scala suffix

2018-11-07 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-10814:


 Summary: Kafka examples modules need scala suffix
 Key: FLINK-10814
 URL: https://issues.apache.org/jira/browse/FLINK-10814
 Project: Flink
  Issue Type: Bug
  Components: Examples, Kafka Connector
Affects Versions: 1.7.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.7.0


The kafka examples need scala suffixes just like {{flink-examples-batch}} and 
{{flink-examples-streaming}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10674) DistinctAccumulator.remove lead to NPE

2018-11-07 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678335#comment-16678335
 ] 

Timo Walther commented on FLINK-10674:
--

[~winipanda] have you started working on this already? It would be great if we 
could fix this for Flink 1.7. If not, I would be happy to fix this bug.

> DistinctAccumulator.remove lead to NPE
> --
>
> Key: FLINK-10674
> URL: https://issues.apache.org/jira/browse/FLINK-10674
> Project: Flink
>  Issue Type: Bug
>  Components: flink-contrib
>Affects Versions: 1.6.1
> Environment: Flink 1.6.0
>Reporter: ambition
>Assignee: winifredtang
>Priority: Minor
> Attachments: image-2018-10-25-14-46-03-373.png
>
>
> Our online Flink job had been running for about a week; the job contains this SQL:
> {code:java}
> select  `time`,  
> lower(trim(os_type)) as os_type, 
> count(distinct feed_id) as feed_total_view  
> from  my_table 
> group by `time`, lower(trim(os_type)){code}
>  
>   Then this NPE occurred: 
>  
> {code:java}
> java.lang.NullPointerException
> at scala.Predef$.Long2long(Predef.scala:363)
> at 
> org.apache.flink.table.functions.aggfunctions.DistinctAccumulator.remove(DistinctAccumulator.scala:109)
> at NonWindowedAggregationHelper$894.retract(Unknown Source)
> at 
> org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:124)
> at 
> org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:39)
> at 
> org.apache.flink.streaming.api.operators.LegacyKeyedProcessOperator.processElement(LegacyKeyedProcessOperator.java:88)
> at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
> at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:745)
> {code}
>  
>  
> See DistinctAccumulator.remove:
> !image-2018-10-25-14-46-03-373.png!
>  
> This NPE is presumably caused by currentCnt being null, so we can simply handle it like:
> {code:java}
> def remove(params: Row): Boolean = {
>   if(!distinctValueMap.contains(params)){
> true
>   }else{
> val currentCnt = distinctValueMap.get(params)
> // a null count is treated like the last reference
> if (currentCnt == null || currentCnt == 1) {
>   distinctValueMap.remove(params)
>   true
> } else {
>   var value = currentCnt - 1L
>   if(value < 0){
> value = 1
>   }
>   distinctValueMap.put(params, value)
>   false
> }
>   }
> }{code}
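> For comparison, the same null-safe reference-count removal in plain Java (a minimal sketch with a plain {{java.util.Map}} standing in for the accumulator's internal map; names are illustrative, not Flink's actual API):
> {code:java}
> // Illustrative sketch only: null-safe decrement of a distinct-value
> // reference count, mirroring the Scala fix above.
> static <K> boolean removeDistinct(java.util.Map<K, Long> counts, K key) {
>   Long currentCnt = counts.get(key);   // may be null if the key was never added
>   if (currentCnt == null || currentCnt <= 1L) {
>     counts.remove(key);                // last reference (or missing): drop the entry
>     return true;                       // the value is now fully retracted
>   }
>   counts.put(key, currentCnt - 1L);    // more references remain: just decrement
>   return false;
> }{code}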
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10812) Snapshot deployment broken for master

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678330#comment-16678330
 ] 

ASF GitHub Bot commented on FLINK-10812:


zentol closed pull request #7044: [FLINK-10812][build] Skip javadoc plugin for 
e2e-test-utils
URL: https://github.com/apache/flink/pull/7044
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml 
b/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
index 67988b3ce6b..f3de836d509 100644
--- a/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
+++ b/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
@@ -65,6 +65,15 @@ under the License.



+
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-javadoc-plugin</artifactId>
+				<configuration>
+					<skip>true</skip>
+				</configuration>
+			</plugin>


 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Snapshot deployment broken for master
> -
>
> Key: FLINK-10812
> URL: https://issues.apache.org/jira/browse/FLINK-10812
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-e2e-test-utils: MavenReportException: Error while creating 
> archive:
> [ERROR] Exit code: 1 - javadoc: error - No public or protected classes found 
> to document.
> [ERROR] 
> [ERROR] Command line was: 
> /usr/local/asfpackages/java/jdk1.8.0_191/jre/../bin/javadoc @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/home/jenkins/jenkins-slave/workspace/flink-snapshot-deployment/flink-end-to-end-tests/flink-e2e-test-utils/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}
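> To verify the fix locally, one can run the javadoc goal for just this module (a hedged example using standard Maven flags; module path as in the diff above):
> {code}
> mvn javadoc:jar -pl flink-end-to-end-tests/flink-e2e-test-utils
> {code}
> With {{<skip>true</skip>}} in place, the plugin should report javadoc generation as skipped instead of failing.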



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-10419) ClassNotFoundException while deserializing user exceptions from checkpointing

2018-11-07 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reassigned FLINK-10419:
---

Assignee: (was: Nico Kruber)

> ClassNotFoundException while deserializing user exceptions from checkpointing
> -
>
> Key: FLINK-10419
> URL: https://issues.apache.org/jira/browse/FLINK-10419
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, State Backends, Checkpointing
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.6.0, 1.6.1, 1.7.0
>Reporter: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> It seems that somewhere in the operator's failure handling, we hand a 
> user-code exception to the checkpoint coordinator via Java serialization but 
> it will then fail during the de-serialization because the class is not 
> available. This will result in the following error shadowing the real one:
> {code}
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafka011Exception
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher.loadClass(Launcher.java:338)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.resolveClass(InstantiationUtil.java:76)
> at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1859)
> at 
> java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1745)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2033)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
> at 
> java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:557)
> at java.lang.Throwable.readObject(Throwable.java:914)
> at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.readObject(RemoteRpcInvocation.java:222)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> at 
> java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:502)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:489)
> at 
> org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:477)
> at 
> org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:58)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.deserializeMethodInvocation(RemoteRpcInvocation.java:118)
> at 
> org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation.getMethodName(RemoteRpcInvocation.java:59)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:214)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:162)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:70)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.onReceive(AkkaRpcActor.java:142)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.onReceive(FencedAkkaRpcActor.java:40)
> at 
> akka.actor.UntypedActor$$anonfun

[GitHub] zentol closed pull request #7044: [FLINK-10812][build] Skip javadoc plugin for e2e-test-utils

2018-11-07 Thread GitBox
zentol closed pull request #7044: [FLINK-10812][build] Skip javadoc plugin for 
e2e-test-utils
URL: https://github.com/apache/flink/pull/7044
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml 
b/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
index 67988b3ce6b..f3de836d509 100644
--- a/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
+++ b/flink-end-to-end-tests/flink-e2e-test-utils/pom.xml
@@ -65,6 +65,15 @@ under the License.



+
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-javadoc-plugin</artifactId>
+				<configuration>
+					<skip>true</skip>
+				</configuration>
+			</plugin>


 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10419) ClassNotFoundException while deserializing user exceptions from checkpointing

2018-11-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678328#comment-16678328
 ] 

ASF GitHub Bot commented on FLINK-10419:


NicoK closed pull request #7033: [FLINK-10419][checkpoint] fix 
ClassNotFoundException while deserializing user exceptions
URL: https://github.com/apache/flink/pull/7033
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
index 7b0b55c9a1b..f8f3c0117bd 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
@@ -19,12 +19,6 @@
 package org.apache.flink.runtime.messages.checkpoint;
 
 import org.apache.flink.api.common.JobID;
-import 
org.apache.flink.runtime.checkpoint.decline.AlignmentLimitExceededException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineOnCancellationBarrierException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineSubsumedException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineTaskNotCheckpointingException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineTaskNotReadyException;
-import org.apache.flink.runtime.checkpoint.decline.InputEndOfStreamException;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 import org.apache.flink.util.SerializedThrowable;
 
@@ -38,7 +32,7 @@
 
private static final long serialVersionUID = 2094094662279578953L;
 
-   /** The reason why the checkpoint was declined */
+   /** The reason why the checkpoint was declined. */
private final Throwable reason;
 
public DeclineCheckpoint(JobID job, ExecutionAttemptID taskExecutionId, 
long checkpointId) {
@@ -47,19 +41,12 @@ public DeclineCheckpoint(JobID job, ExecutionAttemptID 
taskExecutionId, long che
 
public DeclineCheckpoint(JobID job, ExecutionAttemptID taskExecutionId, 
long checkpointId, Throwable reason) {
super(job, taskExecutionId, checkpointId);
-   
-   if (reason == null ||
-   reason.getClass() == 
AlignmentLimitExceededException.class ||
-   reason.getClass() == 
CheckpointDeclineOnCancellationBarrierException.class ||
-   reason.getClass() == 
CheckpointDeclineSubsumedException.class ||
-   reason.getClass() == 
CheckpointDeclineTaskNotCheckpointingException.class ||
-   reason.getClass() == 
CheckpointDeclineTaskNotReadyException.class ||
-   reason.getClass() == InputEndOfStreamException.class)
-   {
-   // null or known common exceptions that cannot 
reference any dynamically loaded code
+
+   if (reason == null) {
this.reason = reason;
} else {
-   // some other exception. replace with a serialized 
throwable, to be on the safe side
+   // exceptions may reference dynamically loaded code 
(exception itself, cause, suppressed)
+   // -> replace with a serialized throwable, to be on the 
safe side
this.reason = new SerializedThrowable(reason);
}
}
@@ -68,7 +55,7 @@ public DeclineCheckpoint(JobID job, ExecutionAttemptID 
taskExecutionId, long che
 
/**
 * Gets the reason why the checkpoint was declined.
-* 
+*
 * @return The reason why the checkpoint was declined
 */
public Throwable getReason() {


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ClassNotFoundException while deserializing user exceptions from checkpointing
> -
>
> Key: FLINK-10419
> URL: https://issues.apache.org/jira/browse/FLINK-10419
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination, State Backends, Checkpointing
>Affects Versions: 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.6.0, 1.6.1, 1.7.0
>Reporter: Nico Kruber

[GitHub] NicoK commented on issue #7033: [FLINK-10419][checkpoint] fix ClassNotFoundException while deserializing user exceptions

2018-11-07 Thread GitBox
NicoK commented on issue #7033: [FLINK-10419][checkpoint] fix 
ClassNotFoundException while deserializing user exceptions
URL: https://github.com/apache/flink/pull/7033#issuecomment-436647987
 
 
   True, this only helps with the old non-Flip6 code. I guess the 
`declineCheckpoint()` RPC method should instead take a `DeclineCheckpoint` 
parameter, which would then properly wrap the exception using this code. At the 
moment I don't have time to follow up on this, though, so I'm closing for now in 
the hope that someone else can take over, perhaps reusing this code (or taking 
another approach).
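
   For reference, the wrapping pattern the change applies, as a minimal sketch
(assuming only `org.apache.flink.util.SerializedThrowable` from the diff above;
the class and method names here are illustrative):

    import org.apache.flink.util.SerializedThrowable;

    final class RpcSafeFailures {
        // Illustrative helper: make a failure cause safe to ship over RPC.
        // A SerializedThrowable keeps the original message and stack trace but
        // can be deserialized without the user's classes on the receiving side.
        static Throwable wrapForRpc(Throwable reason) {
            // the exception itself, its cause, and its suppressed exceptions may
            // all reference dynamically loaded user code, so wrap anything non-null
            return reason == null ? null : new SerializedThrowable(reason);
        }
    }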


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] NicoK closed pull request #7033: [FLINK-10419][checkpoint] fix ClassNotFoundException while deserializing user exceptions

2018-11-07 Thread GitBox
NicoK closed pull request #7033: [FLINK-10419][checkpoint] fix 
ClassNotFoundException while deserializing user exceptions
URL: https://github.com/apache/flink/pull/7033
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
 
b/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
index 7b0b55c9a1b..f8f3c0117bd 100644
--- 
a/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
+++ 
b/flink-runtime/src/main/java/org/apache/flink/runtime/messages/checkpoint/DeclineCheckpoint.java
@@ -19,12 +19,6 @@
 package org.apache.flink.runtime.messages.checkpoint;
 
 import org.apache.flink.api.common.JobID;
-import 
org.apache.flink.runtime.checkpoint.decline.AlignmentLimitExceededException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineOnCancellationBarrierException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineSubsumedException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineTaskNotCheckpointingException;
-import 
org.apache.flink.runtime.checkpoint.decline.CheckpointDeclineTaskNotReadyException;
-import org.apache.flink.runtime.checkpoint.decline.InputEndOfStreamException;
 import org.apache.flink.runtime.executiongraph.ExecutionAttemptID;
 import org.apache.flink.util.SerializedThrowable;
 
@@ -38,7 +32,7 @@
 
private static final long serialVersionUID = 2094094662279578953L;
 
-   /** The reason why the checkpoint was declined */
+   /** The reason why the checkpoint was declined. */
private final Throwable reason;
 
public DeclineCheckpoint(JobID job, ExecutionAttemptID taskExecutionId, 
long checkpointId) {
@@ -47,19 +41,12 @@ public DeclineCheckpoint(JobID job, ExecutionAttemptID 
taskExecutionId, long che
 
public DeclineCheckpoint(JobID job, ExecutionAttemptID taskExecutionId, 
long checkpointId, Throwable reason) {
super(job, taskExecutionId, checkpointId);
-   
-   if (reason == null ||
-   reason.getClass() == 
AlignmentLimitExceededException.class ||
-   reason.getClass() == 
CheckpointDeclineOnCancellationBarrierException.class ||
-   reason.getClass() == 
CheckpointDeclineSubsumedException.class ||
-   reason.getClass() == 
CheckpointDeclineTaskNotCheckpointingException.class ||
-   reason.getClass() == 
CheckpointDeclineTaskNotReadyException.class ||
-   reason.getClass() == InputEndOfStreamException.class)
-   {
-   // null or known common exceptions that cannot 
reference any dynamically loaded code
+
+   if (reason == null) {
this.reason = reason;
} else {
-   // some other exception. replace with a serialized 
throwable, to be on the safe side
+   // exceptions may reference dynamically loaded code 
(exception itself, cause, suppressed)
+   // -> replace with a serialized throwable, to be on the 
safe side
this.reason = new SerializedThrowable(reason);
}
}
@@ -68,7 +55,7 @@ public DeclineCheckpoint(JobID job, ExecutionAttemptID 
taskExecutionId, long che
 
/**
 * Gets the reason why the checkpoint was declined.
-* 
+*
 * @return The reason why the checkpoint was declined
 */
public Throwable getReason() {


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-10750) SocketClientSinkTest.testRetry fails on Travis

2018-11-07 Thread Till Rohrmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-10750.
---------------------------------
    Resolution: Fixed
Fix Version/s: 1.6.3
               1.5.6

Fixed via
master: dfe7be7cb014b1ac27b39a2c9e24769322fd9102
1.7.0: 600869e1f26c5b43e68e66023c95fc1ea0267c37
1.6.3: 035365220ce039d3bb04e101ba8c253850108599
1.5.6: 58d71b8709e5d6739986a55564681294ac2e7b8c

> SocketClientSinkTest.testRetry fails on Travis
> --
>
> Key: FLINK-10750
> URL: https://issues.apache.org/jira/browse/FLINK-10750
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.7.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.5.6, 1.6.3, 1.7.0
>
>
> The {{SocketClientSinkTest.testRetry}} fails on Travis because of a 
> {{BindException}}: https://api.travis-ci.org/v3/job/448907069/log.txt
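>
> A common remedy for this class of flakiness (a hedged sketch, not necessarily the fix applied here) is to let the OS assign a free ephemeral port instead of binding a hard-coded one:
> {code:java}
> import java.io.IOException;
> import java.net.ServerSocket;
>
> final class FreePort {
>     // Bind to port 0 so the OS assigns a currently free ephemeral port,
>     // release it immediately, and hand the number to the test.
>     static int pick() throws IOException {
>         try (ServerSocket socket = new ServerSocket(0)) {
>             return socket.getLocalPort();
>         }
>     }
> }{code}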



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10682) EOFException occurs during deserialization of Avro class

2018-11-07 Thread Ben La Monica (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16678306#comment-16678306
 ] 

Ben La Monica commented on FLINK-10682:
---

Thank you, I'll give it a try.

> EOFException occurs during deserialization of Avro class
> 
>
> Key: FLINK-10682
> URL: https://issues.apache.org/jira/browse/FLINK-10682
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.5.4
> Environment: AWS EMR 5.17 (upgraded to Flink 1.5.4)
> 3 task managers, 1 job manager running in YARN in Hadoop
> Running on Amazon Linux with OpenJDK 1.8
>Reporter: Ben La Monica
>Priority: Critical
>
> I'm having trouble (which usually occurs after an hour of processing in a 
> StreamExecutionEnvironment) where I get this failure message. I'm at a loss 
> for what is causing it. I'm running this in AWS on EMR 5.17 with 3 task 
> managers and a job manager running in a YARN cluster and I've upgraded my 
> flink libraries to 1.5.4 to bypass another serialization issue and the 
> kerberos auth issues.
> The avro classes that are being deserialized were generated with avro 1.8.2.
> {code:java}
> 2018-10-22 16:12:10,680 [INFO ] class=o.a.flink.runtime.taskmanager.Task 
> thread="Calculate Estimated NAV -> Split into single messages (3/10)" 
> Calculate Estimated NAV -> Split into single messages (3/10) 
> (de7d8fa7784903a475391d0168d56f2e) switched from RUNNING to FAILED.
> java.io.EOFException: null
> at 
> org.apache.flink.core.memory.DataInputDeserializer.readLong(DataInputDeserializer.java:219)
> at 
> org.apache.flink.core.memory.DataInputDeserializer.readDouble(DataInputDeserializer.java:138)
> at 
> org.apache.flink.formats.avro.utils.DataInputDecoder.readDouble(DataInputDecoder.java:70)
> at org.apache.avro.io.ResolvingDecoder.readDouble(ResolvingDecoder.java:190)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:186)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
> at 
> org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:116)
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
> at 
> org.apache.avro.generic.GenericDatumReader.readArray(GenericDatumReader.java:266)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:177)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
> at 
> org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:116)
> at 
> org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
> at 
> org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
> at 
> org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
> at 
> org.apache.flink.formats.avro.typeutils.AvroSerializer.deserialize(AvroSerializer.java:172)
> at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:208)
> at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:49)
> at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
> at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:140)
> at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:208)
> at 
> org.apache.flink.streaming.runtime.tasks.TwoInputStreamTask.run(TwoInputStreamTask.java:116)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:712)
> at java.lang.Thread.run(Thread.java:748){code}
> Do you have any ideas on how I could further troubleshoot this issue?
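>
> One way to narrow this down (a minimal sketch; {{MyAvroRecord}} is a stand-in for the Avro-generated class, and this exercises only the serializer in isolation, not the network path):
> {code:java}
> import org.apache.flink.core.memory.DataInputDeserializer;
> import org.apache.flink.core.memory.DataOutputSerializer;
> import org.apache.flink.formats.avro.typeutils.AvroSerializer;
>
> final class AvroRoundTrip {
>     // Round-trips one record through Flink's AvroSerializer. If this succeeds
>     // for representative records, the EOFException more likely originates in
>     // the spanning-record/network deserialization path than in the serializer.
>     static MyAvroRecord roundTrip(MyAvroRecord record) throws java.io.IOException {
>         AvroSerializer<MyAvroRecord> serializer = new AvroSerializer<>(MyAvroRecord.class);
>         DataOutputSerializer out = new DataOutputSerializer(4096);
>         serializer.serialize(record, out);
>         return serializer.deserialize(new DataInputDeserializer(out.getCopyOfBuffer()));
>     }
> }{code}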



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

