[jira] [Commented] (FLINK-7943) OptionalDataException when launching Flink jobs concurrently

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16253001#comment-16253001
 ] 

ASF GitHub Bot commented on FLINK-7943:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4921
  
LGTM +1


> OptionalDataException when launching Flink jobs concurrently
> 
>
> Key: FLINK-7943
> URL: https://issues.apache.org/jira/browse/FLINK-7943
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> A user reported getting an {{OptionalDataException}} when launching multiple 
> Flink jobs from the same program concurrently. The problem seems to appear if 
> one sets the {{GlobalJobParameters}}. The stack trace can be found below:
> {code}
> Failed to submit job 60f4da5cf76836fe52ceba5cebdae412 (Union4a:14:15)
> java.io.OptionalDataException
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1588)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:428)
> at java.util.HashMap.readObject(HashMap.java:1407)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1158)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2173)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2282)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2206)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2064)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1568)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:428)
> at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:290)
> at org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:58)
> at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1283)
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:495)
> at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
> at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
> at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
> at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
> at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:125)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
> at akka.dispatch.Mailbox.run(Mailbox.scala:220)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}
> The user code causing the problem is:
> {code}
> @SuppressWarnings("serial")
> public class UnionThreaded {
> static int ThreadPoolSize = 3;
> static int JobsPerThread = 2;
> static ParameterTool params;
> public stati
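The user code above is truncated, but the failure mode is a shared, mutable parameter map being handed to Java serialization while other threads still mutate it. The following is a minimal, hypothetical sketch of the defensive pattern (class and field names are illustrative, not Flink's actual fix): snapshot the map under a lock into an immutable copy before serializing.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not Flink's actual fix: copy a shared, mutable
// parameter map into an unmodifiable snapshot before Java serialization,
// so a concurrent writer cannot corrupt the serialized stream.
public class SnapshotParams implements Serializable {
    private static final long serialVersionUID = 1L;

    private final Map<String, String> data;

    public SnapshotParams(Map<String, String> shared) {
        // Copy under a lock on the shared map; after this, the snapshot
        // is immutable and safe to serialize from any thread.
        synchronized (shared) {
            this.data = Collections.unmodifiableMap(new HashMap<>(shared));
        }
    }

    public String get(String key) {
        return data.get(key);
    }

    // Round-trip through Java serialization, analogous to the JobManager
    // deserializing the SerializedValue holding the GlobalJobParameters.
    public static SnapshotParams roundTrip(SnapshotParams p) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(p);
            oos.flush();
            ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
            return (SnapshotParams) ois.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, String> shared = new HashMap<>();
        shared.put("parallelism", "4");
        SnapshotParams copy = roundTrip(new SnapshotParams(shared));
        shared.put("parallelism", "8"); // later mutation cannot affect the snapshot
        System.out.println(copy.get("parallelism")); // prints 4
    }
}
```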

[jira] [Commented] (FLINK-7856) Port JobVertexBackPressureHandler to REST endpoint

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252985#comment-16252985
 ] 

ASF GitHub Bot commented on FLINK-7856:
---

Github user zjureel commented on the issue:

https://github.com/apache/flink/pull/4893
  
@tillrohrmann Thank you for your review. I have fixed the problems in this 
PR, thanks


> Port JobVertexBackPressureHandler to REST endpoint
> --
>
> Key: FLINK-7856
> URL: https://issues.apache.org/jira/browse/FLINK-7856
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, REST, Webfrontend
>Reporter: Fang Yong
>Assignee: Fang Yong
>
> Port JobVertexBackPressureHandler to REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (FLINK-8069) Support empty watermark strategy for TableSources

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252931#comment-16252931
 ] 

ASF GitHub Bot commented on FLINK-8069:
---

GitHub user xccui opened a pull request:

https://github.com/apache/flink/pull/5016

[FLINK-8069] [table] Support empty watermark strategy for TableSources

## What is the purpose of the change

This PR enables an empty watermark strategy for 
`RowtimeAttributeDescriptor`.

## Brief change log

  - Add a default `null` value for the watermark strategy in 
`RowtimeAttributeDescriptor`.
  - Add a case in `StreamTableSourceScan` for empty watermark strategy.
  - Add a test case and update related docs.

## Verifying this change

The change can be verified by the newly added 
`testRowtimeTableSourceWithoutWMStrategy()`.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (yes)
  - If yes, how is the feature documented? (docs)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xccui/flink FLINK-8069

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5016.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5016


commit ba0fc2777f2c81c582032124d301f5ee301ae009
Author: Xingcan Cui 
Date:   2017-11-15T03:01:13Z

[FLINK-8069] [table] Support empty watermark strategy for TableSources




> Support empty watermark strategy for TableSources
> -
>
> Key: FLINK-8069
> URL: https://issues.apache.org/jira/browse/FLINK-8069
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>
> In case the underlying data stream source emits watermarks, it should be 
> possible to define an empty watermark strategy for rowtime attributes in the 
> {{RowtimeAttributeDescriptor}}.





[jira] [Commented] (FLINK-8076) Upgrade KinesisProducer to 0.10.6 to set properties approperiately

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252932#comment-16252932
 ] 

ASF GitHub Bot commented on FLINK-8076:
---

GitHub user bowenli86 opened a pull request:

https://github.com/apache/flink/pull/5017

[FLINK-8076] Upgrade KinesisProducer to 0.10.6 to set properties 
approperiately

## What is the purpose of the change

https://github.com/awslabs/amazon-kinesis-producer/issues/124 has been 
resolved in KPL 0.10.6, so we no longer need to explicitly set a few default 
configs.

## Brief change log

- upgraded kpl from 0.10.5 to 0.10.6
- removed some legacy code
- updated unit tests

## Verifying this change


This change added tests and can be verified as follows:

- testRateLimitInProducerConfiguration()
- testThreadingModelInProducerConfiguration()
- testThreadPoolSizeInProducerConfiguration()

## Does this pull request potentially affect one of the following parts:

none

## Documentation

none

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bowenli86/flink FLINK-8076

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5017.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5017


commit 0a292d6e0467e8db02f112575f257b3819697c24
Author: Bowen Li 
Date:   2017-11-15T04:31:03Z

[FLINK-8076] Upgrade KinesisProducer to 0.10.6 to set properties 
approperiately




> Upgrade KinesisProducer to 0.10.6 to set properties approperiately
> --
>
> Key: FLINK-8076
> URL: https://issues.apache.org/jira/browse/FLINK-8076
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>








[jira] [Created] (FLINK-8076) Upgrade KinesisProducer to 0.10.6 to set properties approperiately

2017-11-14 Thread Bowen Li (JIRA)
Bowen Li created FLINK-8076:
---

 Summary: Upgrade KinesisProducer to 0.10.6 to set properties 
approperiately
 Key: FLINK-8076
 URL: https://issues.apache.org/jira/browse/FLINK-8076
 Project: Flink
  Issue Type: Improvement
  Components: Kinesis Connector
Affects Versions: 1.4.0
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.5.0








[jira] [Commented] (FLINK-7923) SQL parser exception when accessing subfields of a Composite element in an Object Array type column

2017-11-14 Thread Shuyi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252876#comment-16252876
 ] 

Shuyi Chen commented on FLINK-7923:
---

Already sent out a PR for CALCITE-2016 to modify the Calcite parser. If it 
makes it into Calcite's 1.15 release, I can update the dependency and 
integrate the feature into Flink SQL.

> SQL parser exception when accessing subfields of a Composite element in an 
> Object Array type column
> ---
>
> Key: FLINK-7923
> URL: https://issues.apache.org/jira/browse/FLINK-7923
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Rong Rong
>Assignee: Shuyi Chen
>
> An access expression such as:
> {code:SQL}
> SELECT 
>   a[1].f0 
> FROM 
>   MyTable
> {code}
> will cause a problem. 
> See the following test sample for more details:
> https://github.com/walterddr/flink/commit/03c93bcb0fb30bd2d327e35b5e244322d449b06a





[jira] [Assigned] (FLINK-7923) SQL parser exception when accessing subfields of a Composite element in an Object Array type column

2017-11-14 Thread Shuyi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen reassigned FLINK-7923:
-

Assignee: Shuyi Chen

> SQL parser exception when accessing subfields of a Composite element in an 
> Object Array type column
> ---
>
> Key: FLINK-7923
> URL: https://issues.apache.org/jira/browse/FLINK-7923
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Rong Rong
>Assignee: Shuyi Chen
>
> An access expression such as:
> {code:SQL}
> SELECT 
>   a[1].f0 
> FROM 
>   MyTable
> {code}
> will cause a problem. 
> See the following test sample for more details:
> https://github.com/walterddr/flink/commit/03c93bcb0fb30bd2d327e35b5e244322d449b06a





[jira] [Assigned] (FLINK-7934) Upgrade Calcite dependency to 1.15

2017-11-14 Thread Shuyi Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuyi Chen reassigned FLINK-7934:
-

Assignee: Shuyi Chen

> Upgrade Calcite dependency to 1.15
> --
>
> Key: FLINK-7934
> URL: https://issues.apache.org/jira/browse/FLINK-7934
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Rong Rong
>Assignee: Shuyi Chen
>
> Umbrella issue for all related issues for Apache Calcite 1.15 release.





[jira] [Commented] (FLINK-7962) Add built-in support for min/max aggregation for Timestamp

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252867#comment-16252867
 ] 

ASF GitHub Bot commented on FLINK-7962:
---

Github user dianfu commented on the issue:

https://github.com/apache/flink/pull/4936
  
@wuchong Thanks a lot for your review. 
@fhueske It would be great if you can also give some feedback.



> Add built-in support for min/max aggregation for Timestamp
> --
>
> Key: FLINK-7962
> URL: https://issues.apache.org/jira/browse/FLINK-7962
> Project: Flink
>  Issue Type: Task
>Reporter: Dian Fu
>Assignee: Dian Fu
>
> This JIRA adds the built-in support for min/max aggregation for Timestamp.
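As an illustrative sketch of the idea (not Flink's actual built-in aggregate functions): a min/max accumulator written against `Comparable` covers `Timestamp` for free, since `java.sql.Timestamp` implements `Comparable<Timestamp>`.

```java
import java.sql.Timestamp;

// Illustrative sketch, not Flink's actual MaxAggFunction: a generic max
// accumulator over Comparable works for Timestamp because
// java.sql.Timestamp implements Comparable<Timestamp>.
public class MaxAcc<T extends Comparable<T>> {
    private T max; // null until the first value arrives

    public void add(T value) {
        // Ignore nulls; keep the larger of the current max and the value.
        if (value != null && (max == null || value.compareTo(max) > 0)) {
            max = value;
        }
    }

    public T result() {
        return max;
    }

    public static void main(String[] args) {
        MaxAcc<Timestamp> acc = new MaxAcc<>();
        acc.add(Timestamp.valueOf("2017-11-14 10:00:00"));
        acc.add(Timestamp.valueOf("2017-11-15 04:31:03"));
        System.out.println(acc.result()); // prints the later timestamp
    }
}
```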






[jira] [Comment Edited] (FLINK-7490) UDF Agg throws Exception when flink-table is loaded with AppClassLoader

2017-11-14 Thread Colin Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252789#comment-16252789
 ] 

Colin Williams edited comment on FLINK-7490 at 11/15/17 1:44 AM:
-

I'm affected by this also, but for the Streaming Table API:

java.io.IOException: Exception while applying AggregateFunction in aggregating state
at org.apache.flink.runtime.state.heap.HeapAggregatingState.add(HeapAggregatingState.java:91)
at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:442)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:206)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
at org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
at org.apache.flink.table.runtime.aggregate.AggregateAggFunction.compile(AggregateAggFunction.scala:33)
at org.apache.flink.table.runtime.aggregate.AggregateAggFunction.initFunction(AggregateAggFunction.scala:72)
at org.apache.flink.table.runtime.aggregate.AggregateAggFunction.createAccumulator(AggregateAggFunction.scala:41)
at org.apache.flink.table.runtime.aggregate.AggregateAggFunction.createAccumulator(AggregateAggFunction.scala:33)
at org.apache.flink.runtime.state.heap.HeapAggregatingState$AggregateTransformation.apply(HeapAggregatingState.java:115)
at org.apache.flink.runtime.state.heap.NestedMapsStateTable.transform(NestedMapsStateTable.java:298)
at org.apache.flink.runtime.state.heap.HeapAggregatingState.add(HeapAggregatingState.java:89)
... 6 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 6, Column 14: Cannot determine simple type name "com"
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11672)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6416)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6177)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6190)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6190)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6190)
at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6156)
at org.codehaus.janino.UnitCompiler.access$13300(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$18$1.visitReferenceType(UnitCompiler.java:6064)
at org.codehaus.janino.UnitCompiler$18$1.visitReferenceType(UnitCompiler.java:6059)
at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3754)
at org.codehaus.janino.UnitCompiler$18.visitType(UnitCompiler.java:6059)
at org.codehaus.janino.UnitCompiler$18.visitType(UnitCompiler.java:6052)
at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3753)
at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6052)
at org.codehaus.janino.UnitCompiler.access$1200(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$21.getType(UnitCompiler.java:7844)
at org.codehaus.janino.IClass$IField.getDescriptor(IClass.java:1299)
at org.codehaus.janino.UnitCompiler.getfield(UnitCompiler.java:11439)
at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4118)
at org.codehaus.janino.UnitCompiler.access$6800(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$12$1.visitFieldAccess(UnitCompiler.java:4053)
at org.codehaus.janino.UnitCompiler$12$1.visitFieldAccess(UnitCompiler.java:4048)
at org.codehaus.janino.Java$FieldAccess.accept(Java.java:4136)
at org.codehaus.janino.UnitCompiler$12.visitLvalue(UnitCompiler.java:4048)
at org.codehaus.janino.UnitCompiler$12.visitLvalue(UnitCompiler.java:4044)
at org.codehaus.janino.Java$Lvalue.accept(Java.java:3974)
at org.codehaus.janino.UnitCompiler.compileGet(UnitCompiler.java:4044)
at org.codehaus.janino.UnitCompiler.compileGet2(UnitCompiler.java:4109)
at org.codehaus.janino.UnitCompiler.access$6600(UnitCompiler.java:212)
at org.codehaus.janino.UnitCompiler$12$1.visitAmbiguousName(UnitCompiler.java:4051)
at org.codehaus.janino.UnitCompiler$12$1.visitAmbiguousName(UnitCompiler.java:4048)
at org.codehaus.janino.Java$AmbiguousName.accept(Java.java:4050)
at o

[jira] [Commented] (FLINK-8069) Support empty watermark strategy for TableSources

2017-11-14 Thread Xingcan Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252794#comment-16252794
 ] 

Xingcan Cui commented on FLINK-8069:


Thanks for the explanation [~twalthr] 

> Support empty watermark strategy for TableSources
> -
>
> Key: FLINK-8069
> URL: https://issues.apache.org/jira/browse/FLINK-8069
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>
> In case the underlying data stream source emits watermarks, it should be 
> possible to define an empty watermark strategy for rowtime attributes in the 
> {{RowtimeAttributeDescriptor}}.






[jira] [Updated] (FLINK-7795) Utilize error-prone to discover common coding mistakes

2017-11-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7795:
--
Description: 
http://errorprone.info/ is a tool which detects common coding mistakes.


We should incorporate into Flink build process.

  was:
http://errorprone.info/ is a tool which detects common coding mistakes.

We should incorporate into Flink build process.


> Utilize error-prone to discover common coding mistakes
> --
>
> Key: FLINK-7795
> URL: https://issues.apache.org/jira/browse/FLINK-7795
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>
> http://errorprone.info/ is a tool which detects common coding mistakes.
> We should incorporate it into the Flink build process.
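One way to wire error-prone into a Maven build is through the compiler plugin. The fragment below is an illustrative sketch based on error-prone's documented Maven setup (the version is a placeholder), not Flink's actual build configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <arg>-XDcompilePolicy=simple</arg>
      <arg>-Xplugin:ErrorProne</arg>
    </compilerArgs>
    <annotationProcessorPaths>
      <path>
        <groupId>com.google.errorprone</groupId>
        <artifactId>error_prone_core</artifactId>
        <!-- placeholder; pin to a real release -->
        <version>2.x</version>
      </path>
    </annotationProcessorPaths>
  </configuration>
</plugin>
```

With this in place, `mvn compile` reports error-prone findings as regular compiler diagnostics.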





[jira] [Updated] (FLINK-7917) The return of taskInformationOrBlobKey should be placed inside synchronized in ExecutionJobVertex

2017-11-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7917:
--
Description: 
Currently:

{code}
}

return taskInformationOrBlobKey;
{code}
The return should be placed inside synchronized block.

  was:
Currently:
{code}
}

return taskInformationOrBlobKey;
{code}
The return should be placed inside synchronized block.


> The return of taskInformationOrBlobKey should be placed inside synchronized 
> in ExecutionJobVertex
> -
>
> Key: FLINK-7917
> URL: https://issues.apache.org/jira/browse/FLINK-7917
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Currently:
> {code}
> }
> return taskInformationOrBlobKey;
> {code}
> The return should be placed inside synchronized block.
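The concern behind this can be sketched as follows (hypothetical, heavily simplified class, not the actual ExecutionJobVertex code): when a field is lazily initialized under a lock, the read that feeds the return statement should also happen inside the synchronized block; an unsynchronized read after the block has no visibility guarantee.

```java
// Hypothetical sketch of the "return inside synchronized" pattern the
// issue asks for; field name borrowed from the report for illustration.
public class LazyHolder {
    private Object taskInformationOrBlobKey; // guarded by "this"

    public Object getTaskInformationOrBlobKey() {
        synchronized (this) {
            if (taskInformationOrBlobKey == null) {
                taskInformationOrBlobKey = new Object();
            }
            // Returning inside the block means the value read here is the
            // one written under the same lock.
            return taskInformationOrBlobKey;
        }
    }
}
```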





[jira] [Updated] (FLINK-8075) Lack of synchronization calling clone() in Configuration ctor

2017-11-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-8075:
--
Summary: Lack of synchronization calling clone() in Configuration ctor  
(was: Redundant clone() in Configuration ctor)

> Lack of synchronization calling clone() in Configuration ctor
> -
>
> Key: FLINK-8075
> URL: https://issues.apache.org/jira/browse/FLINK-8075
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> In flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/hadoop/conf/Configuration.java, at line 703:
> {code}
>   public Configuration(Configuration other) {
> this.resources = (ArrayList) other.resources.clone();
> synchronized(other) {
>   if (other.properties != null) {
> this.properties = (Properties)other.properties.clone();
>   }
> {code}
> The first clone() call is without synchronization and without null check.
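A sketch of the suggested fix (hypothetical, heavily simplified class, not Hadoop's actual Configuration): move both `clone()` calls inside the synchronized block, and null-check `resources` the same way `properties` is checked.

```java
import java.util.ArrayList;
import java.util.Properties;

// Hypothetical, simplified sketch of the copy-constructor fix the issue
// suggests: both clones happen under the lock, both fields are null-checked.
public class Config {
    private ArrayList<String> resources = new ArrayList<>();
    private Properties properties = new Properties();

    public Config() {
    }

    @SuppressWarnings("unchecked")
    public Config(Config other) {
        synchronized (other) { // both clone() calls are now under the lock
            if (other.resources != null) {
                this.resources = (ArrayList<String>) other.resources.clone();
            }
            if (other.properties != null) {
                this.properties = (Properties) other.properties.clone();
            }
        }
    }

    public ArrayList<String> getResources() {
        return resources;
    }
}
```

The copy is independent of the original, so later mutations of one do not leak into the other.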



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8075) Redundant clone() in Configuration ctor

2017-11-14 Thread Ted Yu (JIRA)
Ted Yu created FLINK-8075:
-

 Summary: Redundant clone() in Configuration ctor
 Key: FLINK-8075
 URL: https://issues.apache.org/jira/browse/FLINK-8075
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


In  
flink-filesystems/flink-s3-fs-presto/src/main/java/org/apache/hadoop/conf/Configuration.java
 , at line 703:
{code}
  public Configuration(Configuration other) {
this.resources = (ArrayList) other.resources.clone();
synchronized(other) {
  if (other.properties != null) {
this.properties = (Properties)other.properties.clone();
  }
{code}
The first clone() call is without synchronization and without null check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
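The fix the ticket implies can be sketched as follows: hold the lock for *all* clone() calls and null-check each field before cloning. Names mirror the quoted Hadoop Configuration copy constructor; this is an illustrative sketch, not the actual patch:

```java
import java.util.ArrayList;
import java.util.Properties;

// Illustrative copy constructor for FLINK-8075: both clone() calls are
// moved under the lock on "other", and both fields are null-checked, so a
// concurrent mutation of "other" cannot be observed mid-copy.
class ConfigurationSketch {
    ArrayList<Object> resources = new ArrayList<>();
    Properties properties;

    ConfigurationSketch() {}

    @SuppressWarnings("unchecked")
    ConfigurationSketch(ConfigurationSketch other) {
        synchronized (other) {
            if (other.resources != null) {
                this.resources = (ArrayList<Object>) other.resources.clone();
            }
            if (other.properties != null) {
                this.properties = (Properties) other.properties.clone();
            }
        }
    }
}
```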


[jira] [Commented] (FLINK-7475) support update() in ListState

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252190#comment-16252190
 ] 

ASF GitHub Bot commented on FLINK-7475:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4963
  
@yunfan123 @aljoscha @StefanRRichter  I chose the "shallow" simulation. 
What do you guys think? 

The build failure seems to be because one build profile timed out.


> support update() in ListState
> -
>
> Key: FLINK-7475
> URL: https://issues.apache.org/jira/browse/FLINK-7475
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, DataStream API, State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: yf
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>
> If I want to update the list. 
> I have to do two steps: 
> listState.clear() 
> for (Element e : myList) { 
> listState.add(e); 
> } 
> Why not I update the state by: 
> listState.update(myList) ?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
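The proposed update() convenience can be sketched with a toy stand-in for ListState (the real interface lives in Flink's state API; this class is only illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for Flink's ListState illustrating the proposed update():
// one call replaces the whole list, instead of clear() plus a loop of add().
class ToyListState<T> {
    private final List<T> elements = new ArrayList<>();

    void clear() {
        elements.clear();
    }

    void add(T value) {
        elements.add(value);
    }

    // Proposed convenience: semantically clear() followed by add() per element.
    void update(List<T> values) {
        elements.clear();
        elements.addAll(values);
    }

    List<T> get() {
        return elements;
    }
}
```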


[GitHub] flink issue #4963: [FLINK-7475] [core][DataStream API] support update() in L...

2017-11-14 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4963
  
@yunfan123 @aljoscha @StefanRRichter  I chose the "shallow" simulation. 
What do you guys think? 

The build failure seems to be because one build profile timed out.


---


[jira] [Closed] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8071.
---
Resolution: Fixed

1.4: 195e3da8a78b80116ef00e65d771daa5a0a1d7a3
1.5: e16472953cef75743791c91f17e8114f2f045054

> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> *Important:* This only occurs if a relocation is performed in flink-dist.
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[9]) is not assignable to 
> 'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
>   Current Frame:
> bci: @131
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable' }
> stack: { integer }
>   Stackmap Frame:
> bci: @152
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable', top, top, 
> 'akka/dispatch/sysmsg/SystemMessage' }
> stack: { }
>   Bytecode:
> 0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
> 0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
> 0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
> 0x030: 0052 2db6 014b b801 0999 000e bb00 e759
> 0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
> 0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
> 0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
> 0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
> 0x080: c100 e799 0015 1906 c000 e73a 0719 074c
> 0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
> 0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
> 0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
> 0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
> 0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
> 0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
> 0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
> 0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
> 0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
> 0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
> 0x130: a7ff d62b c600 09b8 0109 572b bfb1
>   Exception Handler Table:
> bci [290, 307] => handler: 120
>   Stackmap Table:
> append_frame(@13,Object[#231],Object[#177])
> append_frame(@71,Object[#177])
> chop_frame(@102,1)
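The plugin version bump mentioned in the issue would, as a sketch, amount to a pom fragment along these lines (illustrative only; the actual Flink pom may configure the plugin differently):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <!-- 3.0.0 ships asm 5.1, which avoids the stackmap-frame corruption -->
  <version>3.0.0</version>
</plugin>
```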
> 

[jira] [Closed] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7419.
---
Resolution: Fixed

1.4: 5c6eaabfcd541ffb35d911c6d9d2f6f12f207cd8
1.5: fe98cb77c4fcf2ff0e2840c2254fb8b517274917

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehaus.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehaus.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252082#comment-16252082
 ] 

ASF GitHub Bot commented on FLINK-7419:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4981


> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehaus.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehaus.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252083#comment-16252083
 ] 

ASF GitHub Bot commented on FLINK-8071:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5014


> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> *Important:* This only occurs if a relocation is performed in flink-dist.
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[9]) is not assignable to 
> 'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
>   Current Frame:
> bci: @131
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable' }
> stack: { integer }
>   Stackmap Frame:
> bci: @152
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable', top, top, 
> 'akka/dispatch/sysmsg/SystemMessage' }
> stack: { }
>   Bytecode:
> 0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
> 0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
> 0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
> 0x030: 0052 2db6 014b b801 0999 000e bb00 e759
> 0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
> 0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
> 0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
> 0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
> 0x080: c100 e799 0015 1906 c000 e73a 0719 074c
> 0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
> 0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
> 0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
> 0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
> 0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
> 0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
> 0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
> 0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
> 0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
> 0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
> 0x130: a7ff d62b c600 09b8 0109 572b bfb1
>   Exception Handler Table:
> bci [290, 307] => handler: 120
>   Stackmap Table:
> append_frame(@13,Object[#231],Object[#177])
> append_frame(@71,Object[#1

[GitHub] flink pull request #4981: [FLINK-7419][build][avro] Relocate jackson in flin...

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4981


---


[GitHub] flink pull request #5014: [FLINK-8071][build] Bump shade-plugin asm version ...

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5014


---


[jira] [Commented] (FLINK-8038) Support MAP value constructor

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251994#comment-16251994
 ] 

ASF GitHub Bot commented on FLINK-8038:
---

GitHub user walterddr opened a pull request:

https://github.com/apache/flink/pull/5015

[FLINK-8038][Table API] Support MAP value constructor

## What is the purpose of the change

This pull request adds MAP value constructor support for the Table and 
SQL APIs. 
This enables creating Map literals or combinations of fields, such as:
```
MAP('a', '1', 'b', f4, 'c', intField.cast(STRING)) // Table API
MAP['a', '1', 'b', stringField, 'c', CAST(intField AS VARCHAR(65536))] // 
SQL API
```
It also supports accessing a particular value within a MAP object, such as:
```
MAP('foo', 'bar').getValue('foo') // Table API
MAP['foo', 'bar']['foo'] // SQL API, field access is already supported in 
FLINK-6377 
```

## Brief change log

Changes include:
  - Created map case class in flink table expression to support map 
generation and get value operation
  - Created `getValue` and `map` in ExpressionDsl
  - added `generateMap` in CodeGenerator
  - define `generateOperator` and `generateMap` impl in ScalarOperators
  - added in expression parsing logic to feed type information in 
ExpressionParser

## Verifying this change

Added in various Map operator tests in MapTypeTest and SqlExpressionTest.

## Does this pull request potentially affect one of the following parts:

Not that I know of.

## Documentation

  - Does this pull request introduce a new feature? yes
  - If yes, how is the feature documented? (not documented, please advise)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/walterddr/flink FLINK-8038

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5015


commit 64f37583b9fda9ecd691a7245be250f4f7531c04
Author: Rong Rong 
Date:   2017-11-14T18:48:16Z

initial support for Map literals, there are several literal operations 
not supported, such as
MAP('a', 1, 'b', 2).getValue('a') is supported but .get('a') is not 
supported as MapTypeInfo is not a compositeType
MAP('a', 1, 'b', 2).cardinality() is not supported as cardinality now 
is only supported by ObjectArrayTypeInfo
MAP('a', 1, 'b', 2).keySet/valueSet is not supported yet
implicit type casting is not available yet as it has not been 
supported on ObjectArrayType either




> Support MAP value constructor
> -
>
> Key: FLINK-8038
> URL: https://issues.apache.org/jira/browse/FLINK-8038
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Rong Rong
>Assignee: Rong Rong
>
> Similar to https://issues.apache.org/jira/browse/FLINK-4554
> We want to support Map value constructor which is supported by Calcite:
> https://calcite.apache.org/docs/reference.html#value-constructors
> {code:sql}
> SELECT
>   MAP['key1', f0, 'key2', f1] AS stringKeyedMap,
>   MAP['key', 'value'] AS literalMap,
>   MAP[f0, f1] AS fieldMap
> FROM
>   table
> {code}
> This should enable users to construct MapTypeInfo, one of the CompositeType.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5015: [FLINK-8038][Table API] Support MAP value construc...

2017-11-14 Thread walterddr
GitHub user walterddr opened a pull request:

https://github.com/apache/flink/pull/5015

[FLINK-8038][Table API] Support MAP value constructor

## What is the purpose of the change

This pull request adds MAP value constructor support for the Table and 
SQL APIs. 
This enables creating Map literals or combinations of fields, such as:
```
MAP('a', '1', 'b', f4, 'c', intField.cast(STRING)) // Table API
MAP['a', '1', 'b', stringField, 'c', CAST(intField AS VARCHAR(65536))] // 
SQL API
```
It also supports accessing a particular value within a MAP object, such as:
```
MAP('foo', 'bar').getValue('foo') // Table API
MAP['foo', 'bar']['foo'] // SQL API, field access is already supported in 
FLINK-6377 
```

## Brief change log

Changes include:
  - Created map case class in flink table expression to support map 
generation and get value operation
  - Created `getValue` and `map` in ExpressionDsl
  - added `generateMap` in CodeGenerator
  - define `generateOperator` and `generateMap` impl in ScalarOperators
  - added in expression parsing logic to feed type information in 
ExpressionParser

## Verifying this change

Added in various Map operator tests in MapTypeTest and SqlExpressionTest.

## Does this pull request potentially affect one of the following parts:

Not that I know of.

## Documentation

  - Does this pull request introduce a new feature? yes
  - If yes, how is the feature documented? (not documented, please advise)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/walterddr/flink FLINK-8038

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5015


commit 64f37583b9fda9ecd691a7245be250f4f7531c04
Author: Rong Rong 
Date:   2017-11-14T18:48:16Z

initial support for Map literals, there are several literal operations 
not supported, such as
MAP('a', 1, 'b', 2).getValue('a') is supported but .get('a') is not 
supported as MapTypeInfo is not a compositeType
MAP('a', 1, 'b', 2).cardinality() is not supported as cardinality now 
is only supported by ObjectArrayTypeInfo
MAP('a', 1, 'b', 2).keySet/valueSet is not supported yet
implicit type casting is not available yet as it has not been 
supported on ObjectArrayType either




---


[jira] [Created] (FLINK-8074) Launch Flink jobs using Maven coordinates

2017-11-14 Thread Ron Crocker (JIRA)
Ron Crocker created FLINK-8074:
--

 Summary: Launch Flink jobs using Maven coordinates
 Key: FLINK-8074
 URL: https://issues.apache.org/jira/browse/FLINK-8074
 Project: Flink
  Issue Type: Improvement
Reporter: Ron Crocker
Priority: Minor


As a Flink user, I want to be able to submit my job using the Maven coordinates 
(see https://maven.apache.org/pom.html#Maven_Coordinates) of its jar instead of 
a path to a local copy of that jar. 

For example, instead of submitting my job using: 
{{bin/flink run /local/path/to/word-count-1.0.1.jar }}

I would specify its Maven coordinates:
{{bin/flink run com.newrelic:word-count:1.0.1 }}

This latter form would contact known Maven repositories to acquire the jar at 
the specified coordinates and submit that to the cluster. 

Considerations:
* No transitive dependencies should be included - the target, specified either 
as a jar file in the local file system or by its maven coordinates, should be a 
complete Flink job.
* Maven repositories need to be specified _somewhere_. It's reasonable to 
expect that these repositories are independent of the cluster configurations.
* Specified repositories must meet the Maven API, but don't need to be Maven - 
artifactory, for example, is a valid repository as long as it meets the Maven 
API.
* _minor point_: _Indeterminate versions_ should be prohibited - that is, 
consider {{com.newrelic:word-count:+}} an invalid coordinate specification.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
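As an illustration of the coordinate handling the ticket proposes, a parser for the `group:artifact:version` argument that rejects indeterminate versions could look like this (entirely hypothetical; not part of Flink):

```java
// Hypothetical parser for the proposed "bin/flink run group:artifact:version"
// argument. Rejects indeterminate versions such as "com.newrelic:word-count:+",
// as the ticket suggests.
final class MavenCoordinate {
    final String groupId;
    final String artifactId;
    final String version;

    private MavenCoordinate(String groupId, String artifactId, String version) {
        this.groupId = groupId;
        this.artifactId = artifactId;
        this.version = version;
    }

    static MavenCoordinate parse(String spec) {
        String[] parts = spec.split(":");
        if (parts.length != 3) {
            throw new IllegalArgumentException(
                "expected group:artifact:version, got: " + spec);
        }
        if (parts[2].contains("+")) {
            throw new IllegalArgumentException(
                "indeterminate version not allowed: " + spec);
        }
        return new MavenCoordinate(parts[0], parts[1], parts[2]);
    }
}
```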


[jira] [Commented] (FLINK-8072) Travis job marked as failed for no apparent reason

2017-11-14 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251816#comment-16251816
 ] 

Aljoscha Krettek commented on FLINK-8072:
-

[~Zentol] There is this snippet in the log:

{code}
Found exception in log files:
2017-11-14 12:45:08,780 INFO  org.apache.flink.client.CliFrontend   
- 

2017-11-14 12:45:08,782 INFO  org.apache.flink.client.CliFrontend   
-  Starting Command Line Client (Version: 1.5-SNAPSHOT, 
Rev:1ab029e, Date:14.11.2017 @ 10:11:10 UTC)
2017-11-14 12:45:08,782 INFO  org.apache.flink.client.CliFrontend   
-  OS current user: travis
2017-11-14 12:45:09,167 INFO  org.apache.flink.client.CliFrontend   
-  Current Hadoop/Kerberos user: travis
2017-11-14 12:45:09,168 INFO  org.apache.flink.client.CliFrontend   
-  JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 
1.8/25.141-b15
2017-11-14 12:45:09,168 INFO  org.apache.flink.client.CliFrontend   
-  Maximum heap size: 1662 MiBytes
2017-11-14 12:45:09,168 INFO  org.apache.flink.client.CliFrontend   
-  JAVA_HOME: /usr/lib/jvm/java-8-openjdk-amd64
2017-11-14 12:45:09,170 INFO  org.apache.flink.client.CliFrontend   
-  Hadoop version: 2.4.1
2017-11-14 12:45:09,170 INFO  org.apache.flink.client.CliFrontend   
-  JVM Options:
2017-11-14 12:45:09,170 INFO  org.apache.flink.client.CliFrontend   
- 
-Dlog.file=/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/log/flink-travis-client-travis-job-3ea0a296-db89-42a6-b995-b62b79c9d422.log
2017-11-14 12:45:09,170 INFO  org.apache.flink.client.CliFrontend   
- 
-Dlog4j.configuration=file:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/conf/log4j-cli.properties
2017-11-14 12:45:09,170 INFO  org.apache.flink.client.CliFrontend   
- 
-Dlogback.configurationFile=file:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/conf/logback.xml
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
-  Program Arguments:
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- run
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- -d
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- build-target/examples/streaming/Kafka010Example.jar
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --input-topic
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- test-input
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --output-topic
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- test-output
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --prefix=PREFIX
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --bootstrap.servers
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- localhost:9092
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --zookeeper.connect
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- localhost:2181
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --group.id
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- myconsumer
2017-11-14 12:45:09,171 INFO  org.apache.flink.client.CliFrontend   
- --auto.offset.reset
2017-11-14 12:45:09,172 INFO  org.apache.flink.client.CliFrontend   
- earliest
2017-11-14 12:45:09,172 INFO  org.apache.flink.client.CliFrontend   
-  Classpath: 
:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/lib/flink-dist_2.11-1.5-SNAPSHOT.jar:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/lib/flink-python_2.11-1.5-SNAPSHOT.jar:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/lib/flink-shaded-hadoop2-uber-1.5-SNAPSHOT.jar:/home/travis/build/zentol/flink/flink-dist/target/flink-1.5-SNAPSHOT-bin/flink-1.5-SNAPSHOT/lib/log4j-1.2.17.jar:/home/travis/build/zentol/flink/flink

[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251751#comment-16251751
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150893389
  
--- Diff: 
flink-queryable-state/flink-queryable-state-runtime/src/main/java/org/apache/flink/queryablestate/server/KvStateServerImpl.java
 ---
@@ -106,6 +108,12 @@ public InetSocketAddress getServerAddress() {
 
@Override
public void shutdown() {
-   super.shutdown();
+   try {
--- End diff --

Why is this not also returning a future?


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and unstable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
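The proposal in FLINK-7974, having shutdown() return a termination future, can be sketched roughly as follows (heavily simplified; the real AbstractServerBase tears down Netty resources asynchronously):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Idempotent shutdown that exposes completion as a future, so callers
// (and tests) can wait for teardown instead of racing it.
class ServerBaseSketch {
    private final AtomicReference<CompletableFuture<Void>> shutdownFuture =
            new AtomicReference<>();

    CompletableFuture<Void> shutdown() {
        CompletableFuture<Void> future = new CompletableFuture<>();
        if (shutdownFuture.compareAndSet(null, future)) {
            // Only the first caller performs the teardown; resources
            // would be released here, then completion is signalled.
            future.complete(null);
        }
        // Either the future we just installed or the one installed by an
        // earlier call; both complete when shutdown actually finishes.
        return shutdownFuture.get();
    }
}
```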


[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251749#comment-16251749
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150890435
  
--- Diff: 
flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/Client.java
 ---
@@ -208,9 +239,15 @@ public void shutdown() {
/** The established connection after the connect succeeds. */
private EstablishedConnection established;
 
+   /** Atomic shut down future. */
+   private final AtomicReference> 
connectionShutdownFuture = new AtomicReference<>(null);
+
/** Closed flag. */
private boolean closed;
 
+// /** Shut down future. */
--- End diff --

?


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and unstable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4993: [FLINK-7974][FLINK-7975][QS] Wait for shutdown in ...

2017-11-14 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150890435
  
--- Diff: 
flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/Client.java
 ---
@@ -208,9 +239,15 @@ public void shutdown() {
/** The established connection after the connect succeeds. */
private EstablishedConnection established;
 
+   /** Atomic shut down future. */
+   private final AtomicReference> 
connectionShutdownFuture = new AtomicReference<>(null);
+
/** Closed flag. */
private boolean closed;
 
+// /** Shut down future. */
--- End diff --

?


---


[GitHub] flink pull request #4993: [FLINK-7974][FLINK-7975][QS] Wait for shutdown in ...

2017-11-14 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150899000
  
--- Diff: 
flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/Client.java
 ---
@@ -419,20 +472,31 @@ void close() {
 * @param cause The cause to close the channel with.
 * @return Channel close future
 */
-   private boolean close(Throwable cause) {
-   if (failureCause.compareAndSet(null, cause)) {
-   channel.close();
-   stats.reportInactiveConnection();
+   private CompletableFuture close(Throwable cause) {
+   final CompletableFuture shutdownFuture = new 
CompletableFuture<>();
 
-   for (long requestId : pendingRequests.keySet()) 
{
-   TimestampedCompletableFuture pending = 
pendingRequests.remove(requestId);
-   if (pending != null && 
pending.completeExceptionally(cause)) {
-   stats.reportFailedRequest();
+   if (connectionShutdownFuture.compareAndSet(null, 
shutdownFuture) &&
+   failureCause.compareAndSet(null, 
cause)) {
+
+   channel.close().addListener(finished -> {
+   stats.reportInactiveConnection();
+   for (long requestId : 
pendingRequests.keySet()) {
+   TimestampedCompletableFuture 
pending = pendingRequests.remove(requestId);
+   if (pending != null && 
pending.completeExceptionally(cause)) {
+   
stats.reportFailedRequest();
+   }
}
-   }
-   return true;
+
+   if (finished.isSuccess()) {
--- End diff --

This seems weird at first sight but I'm guessing it's correct. I.e. we 
never finish the returned Future with the `cause` that was handed in. We only 
fail it exceptionally if anything in closing the channel went wrong, right? 


---


[GitHub] flink pull request #4993: [FLINK-7974][FLINK-7975][QS] Wait for shutdown in ...

2017-11-14 Thread aljoscha
Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150893389
  
--- Diff: flink-queryable-state/flink-queryable-state-runtime/src/main/java/org/apache/flink/queryablestate/server/KvStateServerImpl.java ---
@@ -106,6 +108,12 @@ public InetSocketAddress getServerAddress() {
 
@Override
public void shutdown() {
-   super.shutdown();
+   try {
--- End diff --

Why is this not also returning a future?


---


[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251750#comment-16251750
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user aljoscha commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150899000
  
--- Diff: flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/Client.java ---
@@ -419,20 +472,31 @@ void close() {
 * @param cause The cause to close the channel with.
 * @return Channel close future
 */
-	private boolean close(Throwable cause) {
-		if (failureCause.compareAndSet(null, cause)) {
-			channel.close();
-			stats.reportInactiveConnection();
+	private CompletableFuture<Void> close(Throwable cause) {
+		final CompletableFuture<Void> shutdownFuture = new CompletableFuture<>();
 
-			for (long requestId : pendingRequests.keySet()) {
-				TimestampedCompletableFuture pending = pendingRequests.remove(requestId);
-				if (pending != null && pending.completeExceptionally(cause)) {
-					stats.reportFailedRequest();
+		if (connectionShutdownFuture.compareAndSet(null, shutdownFuture) &&
+				failureCause.compareAndSet(null, cause)) {
+
+			channel.close().addListener(finished -> {
+				stats.reportInactiveConnection();
+				for (long requestId : pendingRequests.keySet()) {
+					TimestampedCompletableFuture pending = pendingRequests.remove(requestId);
+					if (pending != null && pending.completeExceptionally(cause)) {
+						stats.reportFailedRequest();
+					}
 				}
-			}
-			return true;
+
+				if (finished.isSuccess()) {
--- End diff --

This seems weird at first sight but I'm guessing it's correct. I.e. we 
never finish the returned Future with the `cause` that was handed in. We only 
fail it exceptionally if anything in closing the channel went wrong, right? 


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and unstable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.
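A minimal sketch of the proposed termination-future pattern (component names here are illustrative, not Flink's actual fields): the server's shutdown returns a single future that completes only when all sub-components have finished shutting down.

```java
import java.util.concurrent.CompletableFuture;

// Hedged sketch of a shutdown that returns a termination future.
// 'handlerShutdown' and 'executorShutdown' stand in for whatever
// per-component termination signals the real server tracks.
public class ServerShutdownSketch {
    final CompletableFuture<Void> handlerShutdown = new CompletableFuture<>();
    final CompletableFuture<Void> executorShutdown = new CompletableFuture<>();

    CompletableFuture<Void> shutdownServer() {
        // Real code would trigger the component shutdowns here; this sketch
        // only combines their termination futures into one.
        return CompletableFuture.allOf(handlerShutdown, executorShutdown);
    }
}
```

Callers can then either block on the returned future with a timeout or chain follow-up actions, instead of returning before shutdown has actually completed.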



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8073) Test instability FlinkKafkaProducer011ITCase.testScaleDownBeforeFirstCheckpoint()

2017-11-14 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-8073:
-

 Summary: Test instability 
FlinkKafkaProducer011ITCase.testScaleDownBeforeFirstCheckpoint()
 Key: FLINK-8073
 URL: https://issues.apache.org/jira/browse/FLINK-8073
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector, Tests
Affects Versions: 1.4.0
Reporter: Kostas Kloudas


Travis log: https://travis-ci.org/kl0u/flink/jobs/301985988



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251707#comment-16251707
 ] 

ASF GitHub Bot commented on FLINK-7419:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4981
  
I've rebased the PR on top of #5014. Once travis gives a green light I will 
merge this PR.


> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehouse.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehouse.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4981: [FLINK-7419][build][avro] Relocate jackson in flink-dist

2017-11-14 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4981
  
I've rebased the PR on top of #5014. Once travis gives a green light I will 
merge this PR.


---


[jira] [Updated] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8071:

Description: 
On 2 separate occasions on separate machines I hit the exception below when 
starting a cluster. Once it happened in the yarn tests, another time after 
starting a standalone cluster with flink-dist.

The issue appears to be related to some asm bug that affects both sbt-assembly 
and the maven-shade-plugin.

References:
* https://github.com/akka/akka/issues/21596
* https://github.com/sbt/sbt-assembly/issues/205

From what I have found this should be fixable by bumping the asm version of the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the plugin version to 3.0.0 (which already uses 5.1).
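Based on the description above, the proposed fix would amount to a shade-plugin configuration along these lines (a sketch only; the exact placement and coordinates in Flink's root pom may differ):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <!-- 3.0.0 already ships asm 5.1, which avoids the stack-map corruption -->
      <version>3.0.0</version>
    </plugin>
  </plugins>
</build>
```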

*Important:* This only occurs if a relocation is performed in flink-dist.

{code}
java.lang.Exception: Could not create actor system
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch target 
152
Exception Details:
  Location:
akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
  Reason:
Type top (current frame, locals[9]) is not assignable to 
'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
  Current Frame:
bci: @131
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable' }
stack: { integer }
  Stackmap Frame:
bci: @152
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable', top, top, 
'akka/dispatch/sysmsg/SystemMessage' }
stack: { }
  Bytecode:
0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
0x030: 0052 2db6 014b b801 0999 000e bb00 e759
0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
0x080: c100 e799 0015 1906 c000 e73a 0719 074c
0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
0x130: a7ff d62b c600 09b8 0109 572b bfb1
  Exception Handler Table:
bci [290, 307] => handler: 120
  Stackmap Table:
append_frame(@13,Object[#231],Object[#177])
append_frame(@71,Object[#177])
chop_frame(@102,1)

full_frame(@120,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#177]},{Object[#223]})

full_frame(@152,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#223],Object[#223],Top,Top,Object[#177]},{})
append_frame(@173,Object[#357])
full_frame(@262,{Object[#2],Object[#231],Object[#177],Top,Object[#2]},{})
same_frame(@307)
same_frame(@317)

at akka.dispatch.Mailboxes.<init>(Mailboxes.scala:33)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:800)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:245)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:288)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:263)
at akka.actor.ActorSystem$.create(Actor

[jira] [Updated] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8071:

Description: 
On 2 separate occasions on separate machines I hit the exception below when 
starting a cluster. Once it happened in the yarn tests, another time after 
starting a standalone cluster with flink-dist.

The issue appears to be related to some asm bug that affects both sbt-assembly 
and the maven-shade-plugin.

References:
* https://github.com/akka/akka/issues/21596
* https://github.com/sbt/sbt-assembly/issues/205

From what I have found this should be fixable by bumping the asm version of the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the plugin version to 3.0.0 (which already uses 5.1).

Important:

{code}
java.lang.Exception: Could not create actor system
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch target 
152
Exception Details:
  Location:
akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
  Reason:
Type top (current frame, locals[9]) is not assignable to 
'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
  Current Frame:
bci: @131
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable' }
stack: { integer }
  Stackmap Frame:
bci: @152
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable', top, top, 
'akka/dispatch/sysmsg/SystemMessage' }
stack: { }
  Bytecode:
0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
0x030: 0052 2db6 014b b801 0999 000e bb00 e759
0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
0x080: c100 e799 0015 1906 c000 e73a 0719 074c
0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
0x130: a7ff d62b c600 09b8 0109 572b bfb1
  Exception Handler Table:
bci [290, 307] => handler: 120
  Stackmap Table:
append_frame(@13,Object[#231],Object[#177])
append_frame(@71,Object[#177])
chop_frame(@102,1)

full_frame(@120,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#177]},{Object[#223]})

full_frame(@152,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#223],Object[#223],Top,Top,Object[#177]},{})
append_frame(@173,Object[#357])
full_frame(@262,{Object[#2],Object[#231],Object[#177],Top,Object[#2]},{})
same_frame(@307)
same_frame(@317)

at akka.dispatch.Mailboxes.<init>(Mailboxes.scala:33)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:800)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:245)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:288)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:263)
at akka.actor.ActorSystem$.create(ActorSystem.scala:191)
at 
org.apache.flink.runtime.akka.Akk

[jira] [Reopened] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-7419:
-

jackson is currently not relocated

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehouse.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehouse.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251697#comment-16251697
 ] 

ASF GitHub Bot commented on FLINK-8071:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5014
  
Will rebase the PR and merge it once travis is green.


> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[9]) is not assignable to 
> 'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
>   Current Frame:
> bci: @131
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable' }
> stack: { integer }
>   Stackmap Frame:
> bci: @152
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable', top, top, 
> 'akka/dispatch/sysmsg/SystemMessage' }
> stack: { }
>   Bytecode:
> 0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
> 0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
> 0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
> 0x030: 0052 2db6 014b b801 0999 000e bb00 e759
> 0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
> 0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
> 0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
> 0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
> 0x080: c100 e799 0015 1906 c000 e73a 0719 074c
> 0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
> 0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
> 0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
> 0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
> 0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
> 0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
> 0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
> 0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
> 0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
> 0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
> 0x130: a7ff d62b c600 09b8 0109 572b bfb1
>   Exception Handler Table:
> bci [290, 307] => handler: 120
>   Stackmap Table:
> append_frame(@13,Object[#231],Object[#177])
> append_frame(@71,Object[#177])
> chop_fra

[GitHub] flink issue #5014: [FLINK-8071][build] Bump shade-plugin asm version to 5.1

2017-11-14 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5014
  
Will rebase the PR and merge it once travis is green.


---


[jira] [Commented] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251693#comment-16251693
 ] 

ASF GitHub Bot commented on FLINK-8071:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5014
  
Some more info that I just stumbled on:

This doesn't actually happen on master, or on release-1.4 for that matter.

For FLINK-7419 we're adding another relocation pass to flink-dist for 
jackson, and this appears to trigger the error.

master: succeeds
+ relocation in flink-dist: fails, _always_
+ asm dependency bump: succeeds


> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[9]) is not assignable to 
> 'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
>   Current Frame:
> bci: @131
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable' }
> stack: { integer }
>   Stackmap Frame:
> bci: @152
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable', top, top, 
> 'akka/dispatch/sysmsg/SystemMessage' }
> stack: { }
>   Bytecode:
> 0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
> 0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
> 0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
> 0x030: 0052 2db6 014b b801 0999 000e bb00 e759
> 0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
> 0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
> 0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
> 0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
> 0x080: c100 e799 0015 1906 c000 e73a 0719 074c
> 0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
> 0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
> 0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
> 0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
> 0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
> 0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
> 0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
> 0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
> 0x110: 2c3a 09b2 013e 2cb6 0145 4d1

[GitHub] flink issue #5014: [FLINK-8071][build] Bump shade-plugin asm version to 5.1

2017-11-14 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5014
  
Some more info that I just stumbled on:

This doesn't actually happen on master, or on release-1.4 for that matter.

For FLINK-7419 we're adding another relocation pass to flink-dist for 
jackson, and this appears to trigger the error.

master: succeeds
+ relocation in flink-dist: fails, _always_
+ asm dependency bump: succeeds


---


[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251613#comment-16251613
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150878782
  
--- Diff: flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/AbstractServerBase.java ---
@@ -260,25 +262,47 @@ private boolean attemptToBind(final int port) throws Throwable {
/**
 * Shuts down the server and all related thread pools.
 */
-	public void shutdown() {
-		LOG.info("Shutting down server {} @ {}", serverName, serverAddress);
-
-		if (handler != null) {
-			handler.shutdown();
-			handler = null;
-		}
-
-		if (queryExecutor != null) {
-			queryExecutor.shutdown();
-		}
+	public CompletableFuture<Void> shutdownServer(Time timeout) throws InterruptedException {
--- End diff --

I agree, but we need the `timeout` in order to shut down the `executors`.
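The timeout-based executor shutdown being referred to can be sketched with the standard `ExecutorService` API (a hedged sketch, not the PR's actual code): shutdown is first graceful, then forced once the timeout elapses.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Hedged sketch of why a timeout is needed when shutting down executors:
// awaitTermination() blocks for at most the given timeout, after which
// lingering tasks are interrupted via shutdownNow().
public class ExecutorShutdownSketch {
    static boolean shutdownWithTimeout(ExecutorService executor, long timeout, TimeUnit unit)
            throws InterruptedException {
        executor.shutdown();                    // reject new tasks, let queued ones finish
        if (executor.awaitTermination(timeout, unit)) {
            return true;
        }
        executor.shutdownNow();                 // interrupt whatever is still running
        return executor.awaitTermination(timeout, unit);
    }
}
```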


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and unstable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4993: [FLINK-7974][FLINK-7975][QS] Wait for shutdown in ...

2017-11-14 Thread kl0u
Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150878782
  
--- Diff: flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/AbstractServerBase.java ---
@@ -260,25 +262,47 @@ private boolean attemptToBind(final int port) throws Throwable {
/**
 * Shuts down the server and all related thread pools.
 */
-	public void shutdown() {
-		LOG.info("Shutting down server {} @ {}", serverName, serverAddress);
-
-		if (handler != null) {
-			handler.shutdown();
-			handler = null;
-		}
-
-		if (queryExecutor != null) {
-			queryExecutor.shutdown();
-		}
+	public CompletableFuture<Void> shutdownServer(Time timeout) throws InterruptedException {
--- End diff --

I agree, but we need the `timeout` in order to shut down the `executors`.


---


[jira] [Updated] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8071:

Description: 
On 2 separate occasions on separate machines I hit the exception below when 
starting a cluster. Once it happened in the yarn tests, another time after 
starting a standalone cluster with flink-dist.

The issue appears to be related to some asm bug that affects both sbt-assembly 
and the maven-shade-plugin.

References:
* https://github.com/akka/akka/issues/21596
* https://github.com/sbt/sbt-assembly/issues/205

From what I have found this should be fixable by bumping the asm version of the maven-shade-plugin to 5.1 (our version uses 5.0.2), or just incrementing the plugin version to 3.0.0 (which already uses 5.1).
{code}
java.lang.Exception: Could not create actor system
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
at 
org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch target 
152
Exception Details:
  Location:
akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
  Reason:
Type top (current frame, locals[9]) is not assignable to 
'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
  Current Frame:
bci: @131
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable' }
stack: { integer }
  Stackmap Frame:
bci: @152
flags: { }
locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
'java/lang/Throwable', 'java/lang/Throwable', top, top, 
'akka/dispatch/sysmsg/SystemMessage' }
stack: { }
  Bytecode:
0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
0x030: 0052 2db6 014b b801 0999 000e bb00 e759
0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
0x080: c100 e799 0015 1906 c000 e73a 0719 074c
0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
0x130: a7ff d62b c600 09b8 0109 572b bfb1
  Exception Handler Table:
bci [290, 307] => handler: 120
  Stackmap Table:
append_frame(@13,Object[#231],Object[#177])
append_frame(@71,Object[#177])
chop_frame(@102,1)

full_frame(@120,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#177]},{Object[#223]})

full_frame(@152,{Object[#2],Object[#231],Object[#177],Top,Object[#2],Object[#223],Object[#223],Top,Top,Object[#177]},{})
append_frame(@173,Object[#357])
full_frame(@262,{Object[#2],Object[#231],Object[#177],Top,Object[#2]},{})
same_frame(@307)
same_frame(@317)

at akka.dispatch.Mailboxes.(Mailboxes.scala:33)
at akka.actor.ActorSystemImpl.(ActorSystem.scala:800)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:245)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:288)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:263)
at akka.actor.ActorSystem$.create(ActorSystem.scala:191)
at 
org.apache.flink.runtime.akka.AkkaUtils$.creat

[jira] [Commented] (FLINK-8063) Client blocks indefinitely when querying a non-existing state

2017-11-14 Thread Kostas Kloudas (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251569#comment-16251569
 ] 

Kostas Kloudas commented on FLINK-8063:
---

I am not sure about this, since in 1.3 we already had this:

{code}
@SuppressWarnings("unchecked")
public Future<byte[]> getKvState(
		final JobID jobId,
		final String queryableStateName,
		final int keyHashCode,
		final byte[] serializedKeyAndNamespace) {

	return getKvState(jobId, queryableStateName, keyHashCode, serializedKeyAndNamespace, false)
		.recoverWith(new Recover<Future<byte[]>>() {
			@Override
			public Future<byte[]> recover(Throwable failure) throws Throwable {
				if (failure instanceof UnknownKvStateID ||
						failure instanceof UnknownKvStateKeyGroupLocation ||
						failure instanceof UnknownKvStateLocation ||
						failure instanceof ConnectException) {
					// These failures are likely to be caused by out-of-sync
					// KvStateLocation. Therefore we retry this query and
					// force look up the location.
					return getKvState(
						jobId,
						queryableStateName,
						keyHashCode,
						serializedKeyAndNamespace,
						true);
				} else {
					return Futures.failed(failure);
				}
			}
		}, executionContext);
}
{code}

> Client blocks indefinitely when querying a non-existing state
> -
>
> Key: FLINK-8063
> URL: https://issues.apache.org/jira/browse/FLINK-8063
> Project: Flink
>  Issue Type: Improvement
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Kostas Kloudas
>Priority: Critical
> Fix For: 1.4.0
>
>
> When querying for a non-existing state (as in, no state was registered under 
> queryableStateName) the client blocks indefinitely.
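Until such queries fail eagerly, a caller can at least bound the wait on its side. A minimal sketch using a plain `CompletableFuture` as a hypothetical stand-in for the query future (this is not Flink's actual client API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class QueryTimeoutGuard {
    public static void main(String[] args) throws Exception {
        // Stand-in for a query against a non-existing state: a future
        // that the server never completes.
        CompletableFuture<byte[]> query = new CompletableFuture<>();
        try {
            // Bound the wait instead of blocking indefinitely.
            query.get(100, TimeUnit.MILLISECONDS);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("timed out");
        }
    }
}
```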



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7973) Fix service shading relocation for S3 file systems

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251564#comment-16251564
 ] 

ASF GitHub Bot commented on FLINK-7973:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/5013
  
This would be difficult to test. The most reasonable test would be to put 
the Hadoop JNI library into the classpath and then run the end-to-end test on it, 
but I suppose we don't want to include the platform-dependent JNI library.


> Fix service shading relocation for S3 file systems
> --
>
> Key: FLINK-7973
> URL: https://issues.apache.org/jira/browse/FLINK-7973
> Project: Flink
>  Issue Type: Bug
>Reporter: Stephan Ewen
>Assignee: Nico Kruber
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The shade plugin relocates services incorrectly currently, applying 
> relocation patterns multiple times.






[jira] [Commented] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251546#comment-16251546
 ] 

ASF GitHub Bot commented on FLINK-8071:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/5014
  
I don't think we have a choice other than trusting your analysis and 
loop-testing. Looks good to merge! 👍 


> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found, this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or by incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[9]) is not assignable to 
> 'akka/dispatch/sysmsg/SystemMessage' (stack map, locals[9])
>   Current Frame:
> bci: @131
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable' }
> stack: { integer }
>   Stackmap Frame:
> bci: @152
> flags: { }
> locals: { 'akka/dispatch/Mailbox', 'java/lang/InterruptedException', 
> 'akka/dispatch/sysmsg/SystemMessage', top, 'akka/dispatch/Mailbox', 
> 'java/lang/Throwable', 'java/lang/Throwable', top, top, 
> 'akka/dispatch/sysmsg/SystemMessage' }
> stack: { }
>   Bytecode:
> 0x000: 014c 2ab2 0132 b601 35b6 0139 4db2 013e
> 0x010: 2cb6 0142 9900 522a b600 c69a 004b 2c4e
> 0x020: b201 3e2c b601 454d 2db9 0148 0100 2ab6
> 0x030: 0052 2db6 014b b801 0999 000e bb00 e759
> 0x040: 1301 4db7 010f 4cb2 013e 2cb6 0150 99ff
> 0x050: bf2a b600 c69a ffb8 2ab2 0132 b601 35b6
> 0x060: 0139 4da7 ffaa 2ab6 0052 b600 56b6 0154
> 0x070: b601 5a3a 04a7 0091 3a05 1905 3a06 1906
> 0x080: c100 e799 0015 1906 c000 e73a 0719 074c
> 0x090: b200 f63a 08a7 0071 b201 5f19 06b6 0163
> 0x0a0: 3a0a 190a b601 6899 0006 1905 bf19 0ab6
> 0x0b0: 016c c000 df3a 0b2a b600 52b6 0170 b601
> 0x0c0: 76bb 000f 5919 0b2a b600 52b6 017a b601
> 0x0d0: 80b6 0186 2ab6 018a bb01 8c59 b701 8e13
> 0x0e0: 0190 b601 9419 09b6 0194 1301 96b6 0194
> 0x0f0: 190b b601 99b6 0194 b601 9ab7 019d b601
> 0x100: a3b2 00f6 3a08 b201 3e2c b601 4299 0026
> 0x110: 2c3a 09b2 013e 2cb6 0145 4d19 09b9 0148
> 0x120: 0100 1904 2ab6 0052 b601 7a19 09b6 01a7
> 0x130: a7ff d62b c600 09b8 0109 572b bfb1
>   Exception Handler Table:
> bci [290, 307] => handler: 120
>   Stackmap Table:
> append_frame(@13,Object[#231],Object[#1



[jira] [Commented] (FLINK-7973) Fix service shading relocation for S3 file systems

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251544#comment-16251544
 ] 

ASF GitHub Bot commented on FLINK-7973:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5013
  
Changes look reasonable to me. I suppose we can't guard this with a test?


> Fix service shading relocation for S3 file systems
> --
>
> Key: FLINK-7973
> URL: https://issues.apache.org/jira/browse/FLINK-7973
> Project: Flink
>  Issue Type: Bug
>Reporter: Stephan Ewen
>Assignee: Nico Kruber
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The shade plugin relocates services incorrectly currently, applying 
> relocation patterns multiple times.







[jira] [Commented] (FLINK-8071) Akka shading sometimes produces invalid code

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251532#comment-16251532
 ] 

ASF GitHub Bot commented on FLINK-8071:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5014

[FLINK-8071][build] Bump shade-plugin asm version to 5.1

## What is the purpose of the change

This PR bumps the ASM dependency of the `maven-shade-plugin` to 5.1 as a 
workaround for a bug in ASM. This bug produces invalid bytecode during the shading of 
akka, making the shaded classes unusable.

## Verifying this change

I can't guarantee 100% that this fixes the issue, since the shading issue 
only manifests sometimes. This means a simple recompilation of flink-runtime 
_can_ resolve the issue temporarily.

I've compiled flink-runtime and flink-dist and ran the flink-yarn-tests in a 
loop while switching the version bump on and off. It only failed with the old ASM 
version.

To verify that the `maven-shade-plugin` uses ASM 5.1, run `mvn -X package` 
in an arbitrary module and check the dependency tree.
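For reference, pinning the plugin's ASM dependency looks roughly like this in the parent `pom.xml` (a sketch; the exact shade-plugin version used by Flink's build may differ):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.1</version>
  <dependencies>
    <!-- Override the plugin's bundled ASM (5.0.2) with 5.1 to work
         around the stackmap-frame bug hit while shading akka. -->
    <dependency>
      <groupId>org.ow2.asm</groupId>
      <artifactId>asm</artifactId>
      <version>5.1</version>
    </dependency>
  </dependencies>
</plugin>
```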

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8071

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5014


commit 2544cc4078e85673767e8d2ff1a5966f0b07baa7
Author: zentol 
Date:   2017-11-14T13:47:00Z

[FLINK-8071][build] Bump shade-plugin asm version to 5.1




> Akka shading sometimes produces invalid code
> 
>
> Key: FLINK-8071
> URL: https://issues.apache.org/jira/browse/FLINK-8071
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Local Runtime
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> On 2 separate occasions on separate machines I hit the exception below when 
> starting a cluster. Once it happened in the yarn tests, another time after 
> starting a standalone cluster with flink-dist.
> The issue appears to be related to some asm bug that affects both 
> sbt-assembly and the maven-shade-plugin.
> References:
> * https://github.com/akka/akka/issues/21596
> * https://github.com/sbt/sbt-assembly/issues/205
> From what I have found, this should be fixable by bumping the asm version of 
> the maven-shade-plugin to 5.1 (our version uses 5.0.2), or by incrementing the 
> plugin version to 3.0.0 (which already uses 5.1).
> {code}
> java.lang.Exception: Could not create actor system
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:171)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:115)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.runApplicationMaster(YarnApplicationMasterRunner.java:313)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:199)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner$1.call(YarnApplicationMasterRunner.java:196)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
> at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.run(YarnApplicationMasterRunner.java:196)
> at 
> org.apache.flink.yarn.YarnApplicationMasterRunner.main(YarnApplicationMasterRunner.java:123)
> Caused by: java.lang.VerifyError: Inconsistent stackmap frames at branch 
> target 152
> Exception Details:
>   Location:
> akka/dispatch/Mailbox.processAllSystemMessages()V @152: getstatic
>   Reason:
> Type top (current frame, locals[



[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251521#comment-16251521
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user kl0u commented on the issue:

https://github.com/apache/flink/pull/4993
  
Thanks for the comments, @tillrohrmann. I addressed them.


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and unstable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.







[jira] [Commented] (FLINK-7845) Netty Exception when submitting batch job repeatedly

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251503#comment-16251503
 ] 

ASF GitHub Bot commented on FLINK-7845:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/5007
  
Thanks!


> Netty Exception when submitting batch job repeatedly
> 
>
> Key: FLINK-7845
> URL: https://issues.apache.org/jira/browse/FLINK-7845
> Project: Flink
>  Issue Type: Bug
>  Components: Core, Network
>Affects Versions: 1.3.2
>Reporter: Flavio Pompermaier
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
> Attachments: Screen Shot 2017-11-13 at 14.54.38.png
>
>
> We had some problems with Flink and Netty so we wrote a small unit test to 
> reproduce the memory issues we have in production. It happens that we have to 
> restart the Flink cluster because the memory is always increasing from job to 
> job. 
> The github project is https://github.com/okkam-it/flink-memory-leak and the 
> JUnit test is contained in the MemoryLeakTest class (within src/main/test).
> I don't know if this is the root of our problems but at some point, usually 
> around the 28th loop, the job fails with the following exception (actually we 
> never faced that in production, but maybe it is related to the memory issue 
> somehow...):
> {code:java}
> Caused by: java.lang.IllegalAccessError: 
> org/apache/flink/runtime/io/network/netty/NettyMessage
>   at 
> io.netty.util.internal.__matchers__.org.apache.flink.runtime.io.network.netty.NettyMessageMatcher.match(NoOpTypeParameterMatcher.java)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.acceptInboundMessage(SimpleChannelInboundHandler.java:95)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:102)
>   ... 16 more
> {code}







[jira] [Commented] (FLINK-7974) AbstractServerBase#shutdown does not wait for shutdown completion

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251497#comment-16251497
 ] 

ASF GitHub Bot commented on FLINK-7974:
---

Github user kl0u commented on a diff in the pull request:

https://github.com/apache/flink/pull/4993#discussion_r150856825
  
--- Diff: 
flink-queryable-state/flink-queryable-state-client-java/src/main/java/org/apache/flink/queryablestate/network/Client.java
 ---
@@ -483,27 +511,31 @@ private boolean close(Throwable cause) {
@Override
public void onRequestResult(long requestId, RESP response) {
TimestampedCompletableFuture pending = 
pendingRequests.remove(requestId);
-   if (pending != null && pending.complete(response)) {
+   if (pending != null && !pending.isDone()) {
long durationMillis = (System.nanoTime() - 
pending.getTimestamp()) / 1_000_000L;
stats.reportSuccessfulRequest(durationMillis);
--- End diff --

This was creating test instabilities in the 
`AbstractServerTest.testPortRangeSuccess()`. This is why I decided to complete 
the future after increasing the counter. Given that we first check 
`pending.isDone()`, we do not leave much room for false increases. 
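The trade-off discussed here hinges on `CompletableFuture.complete()` returning whether this particular call transitioned the future, while `isDone()` is only a (possibly racy) pre-check. A standalone illustration with the plain JDK class (not the Flink types from the diff):

```java
import java.util.concurrent.CompletableFuture;

public class CompleteOnce {
    public static void main(String[] args) {
        CompletableFuture<String> pending = new CompletableFuture<>();
        // Only the first complete() wins and returns true; later
        // attempts are no-ops that return false.
        System.out.println(pending.complete("response")); // true
        System.out.println(pending.complete("late"));     // false
        System.out.println(pending.isDone());             // true
    }
}
```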


> AbstractServerBase#shutdown does not wait for shutdown completion
> -
>
> Key: FLINK-7974
> URL: https://issues.apache.org/jira/browse/FLINK-7974
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Kostas Kloudas
>Priority: Critical
>
> The {{AbstractServerBase}} does not wait for the completion of its shutdown 
> when calling {{AbstractServerBase#shutdown}}. This is problematic since it 
> leads to resource leaks and instable tests such as the 
> {{AbstractServerTest}}. I propose to let the {{AbstractServerBase#shutdown}} 
> return a termination future which is completed upon shutdown completion.







[jira] [Commented] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251492#comment-16251492
 ] 

ASF GitHub Bot commented on FLINK-7998:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4959


> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Assignee: Keren Zhu
>Priority: Minor
>  Labels: easyfix
> Fix For: 1.4.0
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException because of the reflection 
> access check during dynamic class instantiation. Making them public resolves 
> the problem (which is what is implicitly suggested by the _case class_ 
> declarations in TPCHQuery3.scala).
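The access failure is easy to reproduce with plain reflection. A minimal sketch with hypothetical stand-in classes (not the actual example's POJOs): a private nested class also gets a private implicit constructor, so reflective instantiation from another class fails, while the public variant works.

```java
// Holder mimics an example class with nested POJOs (hypothetical names).
class Holder {
    // Private nested class: its implicit no-arg constructor is private too.
    private static class PrivateLineitem {}

    // The fix: a public nested class gets a public implicit constructor.
    public static class PublicLineitem {}
}

public class ReflectionAccessDemo {
    public static void main(String[] args) throws Exception {
        try {
            // Instantiating the private class from outside its enclosing
            // class fails the reflection access check.
            Class.forName("Holder$PrivateLineitem")
                 .getDeclaredConstructor().newInstance();
            System.out.println("private: instantiated");
        } catch (IllegalAccessException e) {
            System.out.println("private: IllegalAccessException");
        }
        // The public variant instantiates fine.
        Object o = Class.forName("Holder$PublicLineitem")
                        .getDeclaredConstructor().newInstance();
        System.out.println("public: " + (o != null));
    }
}
```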





[jira] [Closed] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7998.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: 2774335983bf83b68ff061388d0bd69d4346bb07
1.5: 408e186995f4e6bacc0904b8dd6b6f9b7e28e60e

> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Assignee: Keren Zhu
>Priority: Minor
>  Labels: easyfix
> Fix For: 1.4.0
>
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException because of the reflection 
> access check during dynamic class instantiation. Making them public resolves 
> the problem (which is what is implicitly suggested by the _case class_ 
> declarations in TPCHQuery3.scala).







[jira] [Reopened] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-7998:
-

> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Assignee: Keren Zhu
>Priority: Minor
>  Labels: easyfix
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException because of the reflection 
> access check during dynamic class instantiation. Making them public resolves 
> the problem (which is what is implicitly suggested by the _case class_ 
> declarations in TPCHQuery3.scala).





[jira] [Commented] (FLINK-7973) Fix service shading relocation for S3 file systems

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251486#comment-16251486
 ] 

ASF GitHub Bot commented on FLINK-7973:
---

GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/5013

[FLINK-7973] disable JNI bridge for relocated hadoop classes in s3-fs-*

## What is the purpose of the change

If Hadoop's JNI library is in the classpath, it will also be loaded by our 
shaded, relocated Hadoop classes in the `flink-s3-fs-*` filesystems. 
Then, however, `NativeCodeLoader#isNativeCodeLoaded` will return `true` and 
native code paths will be tried even though our relocated namespaces have no 
JNI mapping, leading to errors like `java.lang.UnsatisfiedLinkError: 
org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V`.

## Brief change log

- disable native code loading (there are more users than the shown 
`JniBasedUnixGroupsMapping`) via copies of the respective `NativeCodeLoader` 
class
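Conceptually, the shaded copy boils down to a `NativeCodeLoader` whose probe always answers "not loaded", so every caller takes the pure-Java fallback. A hypothetical sketch of that idea (not the actual shaded class):

```java
// Hypothetical sketch: a relocated NativeCodeLoader that never reports a
// loaded native library, so Hadoop code paths guarded by it always fall
// back to pure-Java implementations instead of hitting unmapped JNI stubs.
public class NativeCodeLoaderStub {

    // The real Hadoop class probes System.loadLibrary("hadoop"); the stub
    // skips the probe entirely.
    public static boolean isNativeCodeLoaded() {
        return false;
    }

    public static void main(String[] args) {
        // Callers branch the way Hadoop does: JNI path vs. Java fallback.
        System.out.println(isNativeCodeLoaded() ? "jni" : "java-fallback");
    }
}
```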

## Verifying this change

This change can be verified as follows:

  - Manually verified the change by running a 3-node cluster with 1 
JobManager and 2 TaskManagers on EMR, executing the `WordCount` example with an 
S3 input source:
  ```
cp ./opt/flink-s3-fs-hadoop-1.4-SNAPSHOT.jar ./lib/
./bin/flink run -m yarn-cluster -yn 2 -ys 1 -yjm 768 -ytm 1024 
./examples/batch/WordCount.jar --input s3:///
```

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): **no**
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
  - The serializers: **no**
  - The runtime per-record code paths (performance sensitive): **yes** -- 
actually, the shaded and relocated Hadoop classes may not use (potentially 
faster) JNI implementations for certain functions; depending on their use, this 
may be per record but since this only applies to the S3 filesystem access, 
performance penalties should be hidden by its access times anyway
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
  - The S3 file system connector: **yes**

## Documentation

  - Does this pull request introduce a new feature? **no**
  - If yes, how is the feature documented? **docs**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-7973-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5013


commit 95f533d004e7373e9de03245a7984b6355209c22
Author: Nico Kruber 
Date:   2017-11-14T13:36:22Z

[FLINK-7973] disable JNI bridge for relocated hadoop classes in s3-fs-*




> Fix service shading relocation for S3 file systems
> --
>
> Key: FLINK-7973
> URL: https://issues.apache.org/jira/browse/FLINK-7973
> Project: Flink
>  Issue Type: Bug
>Reporter: Stephan Ewen
>Assignee: Nico Kruber
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The shade plugin relocates services incorrectly currently, applying 
> relocation patterns multiple times.





[GitHub] flink pull request #5013: [FLINK-7973] disable JNI bridge for relocated hado...

2017-11-14 Thread NicoK
GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/5013

[FLINK-7973] disable JNI bridge for relocated hadoop classes in s3-fs-*

## What is the purpose of the change

If one of Hadoop's JNI libraries is in the classpath, it will also be loaded by our 
shaded, relocated Hadoop classes in the `flink-s3-fs-*` filesystems. 
Then, however, `NativeCodeLoader#isNativeCodeLoaded` will return `true` and 
native code paths will be tried even though our relocated namespaces have no 
JNI mapping, leading to errors like `java.lang.UnsatisfiedLinkError: 
org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V`.

## Brief change log

- disable native code loading (there are more users than the shown 
`JniBasedUnixGroupsMapping`) via copies of the respective `NativeCodeLoader` 
class
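A minimal sketch of what such a replacement class could look like (hypothetical and heavily simplified; the actual copy in the PR mirrors Hadoop's full `NativeCodeLoader` surface):

```java
// Hypothetical, simplified stand-in for the relocated copy of Hadoop's
// NativeCodeLoader: it never attempts System.loadLibrary, so callers such
// as JniBasedUnixGroupsMapping fall back to their pure-Java code paths.
final class NativeCodeLoader {

    private NativeCodeLoader() {}

    // Always report that no native library was loaded.
    static boolean isNativeCodeLoaded() {
        return false;
    }

    // Native compression helpers are likewise reported as unavailable.
    static boolean buildSupportsSnappy() {
        return false;
    }
}
```

Because every native-capability check funnels through this class, disabling it here covers all JNI users at once, not just the `JniBasedUnixGroupsMapping` shown in the stack trace above.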

## Verifying this change

This change added tests and can be verified as follows:

  - Manually verified the change by running a 3-node cluster with 1 
JobManager and 2 TaskManagers on EMR, executing the `WordCount` example with an 
S3 input source:
  ```
cp ./opt/flink-s3-fs-hadoop-1.4-SNAPSHOT.jar ./lib/
./bin/flink run -m yarn-cluster -yn 2 -ys 1 -yjm 768 -ytm 1024 
./examples/batch/WordCount.jar --input s3:///
```

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): **no**
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
  - The serializers: **no**
  - The runtime per-record code paths (performance sensitive): **yes** -- 
actually, the shaded and relocated Hadoop classes may not use (potentially 
faster) JNI implementations for certain functions; depending on their use, this 
may be per record but since this only applies to the S3 filesystem access, 
performance penalties should be hidden by its access times anyway
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
  - The S3 file system connector: **yes**

## Documentation

  - Does this pull request introduce a new feature? **no**
  - If yes, how is the feature documented? **docs**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-7973-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5013


commit 95f533d004e7373e9de03245a7984b6355209c22
Author: Nico Kruber 
Date:   2017-11-14T13:36:22Z

[FLINK-7973] disable JNI bridge for relocated hadoop classes in s3-fs-*




---


[jira] [Commented] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251483#comment-16251483
 ] 

ASF GitHub Bot commented on FLINK-7998:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4959
  
Thank you for fixing this, merging.


> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Assignee: Keren Zhu
>Priority: Minor
>  Labels: easyfix
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException because of the 
> reflection check during dynamic class instantiation. Making them public resolves 
> the problem (which is what is implicitly suggested by _case class_ in 
> TPCHQuery3.scala).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4959: [FLINK-7998] private scope is changed to public to resolv...

2017-11-14 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4959
  
Thank you for fixing this, merging.


---


[jira] [Closed] (FLINK-8056) Default flink-conf.yaml uses deprecated keys

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8056.
---
Resolution: Fixed

1.4: 7c7f24ec84dc73b59c8523b4c7af8d89f1a17eb5
1.5: ed90379e3b96b03e24414169f92e6cb8371a6250

> Default flink-conf.yaml uses deprecated keys
> 
>
> Key: FLINK-8056
> URL: https://issues.apache.org/jira/browse/FLINK-8056
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> {code}
> Config uses deprecated configuration key 'jobmanager.web.port' instead of 
> proper key 'web.port'
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8056) Default flink-conf.yaml uses deprecated keys

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251476#comment-16251476
 ] 

ASF GitHub Bot commented on FLINK-8056:
---

Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5010


> Default flink-conf.yaml uses deprecated keys
> 
>
> Key: FLINK-8056
> URL: https://issues.apache.org/jira/browse/FLINK-8056
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> {code}
> Config uses deprecated configuration key 'jobmanager.web.port' instead of 
> proper key 'web.port'
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8069) Support empty watermark strategy for TableSources

2017-11-14 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251475#comment-16251475
 ] 

Timo Walther commented on FLINK-8069:
-

Given you have a TableSource like this: 
https://github.com/dataArtisans/flink-training-exercises/blob/master/src/main/java/com/dataartisans/flinktraining/exercises/table_java/sources/TaxiRideTableSource.java
And the custom TaxiRideSource already emits watermarks and assigns timestamps.

We need to define a {{RowtimeAttributeDescriptor("eventTime", new 
StreamRecordTimestamp(), WatermarkStrategy???)}} when implementing 
{{DefinedRowtimeAttributes}}. So an explicit empty watermark strategy, one that 
does basically nothing, is needed in this use case.
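
A hedged sketch of what such a no-op strategy might look like (the interface and class names here are hypothetical; the eventual Table API may differ):

```java
// Hypothetical "preserve watermarks" strategy: the table source declares a
// rowtime attribute but emits no watermarks of its own, deferring entirely
// to the watermarks already generated by the underlying DataStream source.
interface WatermarkStrategy {
    // Returns the watermark timestamp to emit for a record, or null for none.
    Long getWatermark(long rowtimeMillis);
}

final class PreserveWatermarks implements WatermarkStrategy {
    @Override
    public Long getWatermark(long rowtimeMillis) {
        return null; // never emit: keep the source's own watermarks
    }
}
```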

> Support empty watermark strategy for TableSources
> -
>
> Key: FLINK-8069
> URL: https://issues.apache.org/jira/browse/FLINK-8069
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>
> In case the underlying data stream source emits watermarks, it should be 
> possible to define an empty watermark strategy for rowtime attributes in the 
> {{RowtimeAttributeDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5010: [FLINK-8056][dist] Use 'web.port' instead of 'jobm...

2017-11-14 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5010


---


[jira] [Updated] (FLINK-7652) Port CurrentJobIdsHandler to new REST endpoint

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7652:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Port CurrentJobIdsHandler to new REST endpoint
> --
>
> Key: FLINK-7652
> URL: https://issues.apache.org/jira/browse/FLINK-7652
> Project: Flink
>  Issue Type: Sub-task
>  Components: REST, Webfrontend
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>  Labels: flip-6
> Fix For: 1.5.0
>
>
> Port existing {{CurrentJobIdsHandler}} to new REST endpoint



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7608) LatencyGauge change to histogram metric

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7608:

Fix Version/s: (was: 1.4.0)
   1.5.0

> LatencyGauge change to  histogram metric
> 
>
> Key: FLINK-7608
> URL: https://issues.apache.org/jira/browse/FLINK-7608
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
> Fix For: 1.5.0
>
>
> I used slf4jReporter[https://issues.apache.org/jira/browse/FLINK-4831]  to 
> export metrics the log file.
> I found:
> {noformat}
> -- Gauges 
> -
> ..
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Map.0.latency:
>  value={LatencySourceDescriptor{vertexID=1, subtaskIndex=-1}={p99=116.0, 
> p50=59.5, min=11.0, max=116.0, p95=116.0, mean=61.836}}
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Sink- Unnamed.0.latency: 
> value={LatencySourceDescriptor{vertexID=1, subtaskIndex=0}={p99=195.0, 
> p50=163.5, min=115.0, max=195.0, p95=195.0, mean=161.0}}
> ..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7727) Extend logging in file server handlers

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7727:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Extend logging in file server handlers
> --
>
> Key: FLINK-7727
> URL: https://issues.apache.org/jira/browse/FLINK-7727
> Project: Flink
>  Issue Type: Improvement
>  Components: REST, Webfrontend
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.5.0
>
>
> The file server handlers check several failure conditions but don't log 
> anything (like the path), making debugging difficult.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7692) Support user-defined variables in Metrics

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7692:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Support user-defined variables in Metrics
> -
>
> Key: FLINK-7692
> URL: https://issues.apache.org/jira/browse/FLINK-7692
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Wei-Che Wei
>Priority: Minor
> Fix For: 1.5.0
>
>
> Reporters that identify metrics with a set of key-value pairs are currently 
> limited to the variables defined by Flink, like the taskmanager ID, with 
> users not being able to supply their own.
> This is inconsistent with reporters that use metric identifiers that freely 
> include user-defined groups constructed via {{MetricGroup#addGroup(String 
> name)}}.
> I propose adding a new method {{MetricGroup#addGroup(String key, String 
> name)}} that adds a new key-value pair to the {{variables}} map in its 
> constructor.
> included as well, resulting in the same result as when constructing the 
> metric groups tree via {{group.addGroup(key).addGroup(value)}}.
> For this a new {{KeyedGenericMetricGroup}} should be created that resembles 
> the unkeyed version, with slight modifications to the constructor and 
> {{getScopeComponents}} method.
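
The equivalence described above can be sketched with a toy model (this is an illustration of the proposal, not Flink's actual `MetricGroup` implementation; the `<key>` variable format is an assumption):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the proposed MetricGroup#addGroup(String key, String value):
// the pair extends both the variables map and the scope components, so the
// identifier matches group.addGroup(key).addGroup(value).
final class ToyMetricGroup {
    final Map<String, String> variables = new LinkedHashMap<>();
    final List<String> scopeComponents = new ArrayList<>();

    ToyMetricGroup addGroup(String name) {
        scopeComponents.add(name);
        return this;
    }

    ToyMetricGroup addGroup(String key, String value) {
        variables.put("<" + key + ">", value);
        scopeComponents.add(key);
        scopeComponents.add(value);
        return this;
    }

    String identifier() {
        return String.join(".", scopeComponents);
    }
}
```

Key-value reporters can then pick up the user-defined pair from `variables`, while identifier-based reporters see the same scope they would get from two nested unkeyed groups.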



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-4812) Report Watermark metrics in all operators

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-4812:

Fix Version/s: (was: 1.4.0)
   1.5.0

> Report Watermark metrics in all operators
> -
>
> Key: FLINK-4812
> URL: https://issues.apache.org/jira/browse/FLINK-4812
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Reporter: Robert Metzger
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.5.0
>
>
> As reported by a user, Flink currently does not export the current low 
> watermark for sources 
> (http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/currentLowWatermark-metric-not-reported-for-all-tasks-td13770.html).
> This JIRA is for adding such a metric for the sources as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8069) Support empty watermark strategy for TableSources

2017-11-14 Thread Xingcan Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251458#comment-16251458
 ] 

Xingcan Cui commented on FLINK-8069:


Hi [~twalthr], could you give a little more explanation about the use case? 
That would be helpful. Thanks.

> Support empty watermark strategy for TableSources
> -
>
> Key: FLINK-8069
> URL: https://issues.apache.org/jira/browse/FLINK-8069
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>
> In case the underlying data stream source emits watermarks, it should be 
> possible to define an empty watermark strategy for rowtime attributes in the 
> {{RowtimeAttributeDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8006.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: fcc79c0ed76818e147381117ea735f5796be6066
1.5: b98a4aa8b8ba1881f938125d0765620a4289a3c7

> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
> Fix For: 1.4.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem by replacing $pid with "$pid" in lines 79 and 103.
> Should I make a PR to the repo?
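
The quoting bug can be reproduced in isolation; a sketch (the path with a space is illustrative, not the reporter's actual path):

```shell
# Unquoted $pid breaks `[ -f $pid ]` when the value contains a space,
# because word splitting hands `[` too many arguments -- the same
# "binary operator expected" failure seen in flink-daemon.sh.
pid="/tmp/flink-demo dir/flink.pid"   # illustrative path containing a space

mkdir -p "$(dirname "$pid")"
touch "$pid"

# Broken form: splits into `[ -f /tmp/flink-demo dir/flink.pid ]` and errors out.
if [ -f $pid ] 2>/dev/null; then
  echo "unquoted: file found"
else
  echo "unquoted: test errored or file not found"
fi

# Fixed form from the patch: quoting keeps the path as a single word.
if [ -f "$pid" ]; then
  echo "quoted: file found"
fi
```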



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7845) Netty Exception when submitting batch job repeatedly

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251433#comment-16251433
 ] 

ASF GitHub Bot commented on FLINK-7845:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5007


> Netty Exception when submitting batch job repeatedly
> 
>
> Key: FLINK-7845
> URL: https://issues.apache.org/jira/browse/FLINK-7845
> Project: Flink
>  Issue Type: Bug
>  Components: Core, Network
>Affects Versions: 1.3.2
>Reporter: Flavio Pompermaier
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
> Attachments: Screen Shot 2017-11-13 at 14.54.38.png
>
>
> We had some problems with Flink and Netty so we wrote a small unit test to 
> reproduce the memory issues we have in production. It happens that we have to 
> restart the Flink cluster because the memory is always increasing from job to 
> job. 
> The github project is https://github.com/okkam-it/flink-memory-leak and the 
> JUnit test is contained in the MemoryLeakTest class (within src/main/test).
> I don't know if this is the root of our problems but at some point, usually 
> around the 28th loop, the job fails with the following exception (actually we 
> never faced that in production but maybe it is related to the memory issue 
> somehow...):
> {code:java}
> Caused by: java.lang.IllegalAccessError: 
> org/apache/flink/runtime/io/network/netty/NettyMessage
>   at 
> io.netty.util.internal.__matchers__.org.apache.flink.runtime.io.network.netty.NettyMessageMatcher.match(NoOpTypeParameterMatcher.java)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.acceptInboundMessage(SimpleChannelInboundHandler.java:95)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:102)
>   ... 16 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7811) Add support for Scala 2.12

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251435#comment-16251435
 ] 

ASF GitHub Bot commented on FLINK-7811:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4995


> Add support for Scala 2.12
> --
>
> Key: FLINK-7811
> URL: https://issues.apache.org/jira/browse/FLINK-7811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251437#comment-16251437
 ] 

ASF GitHub Bot commented on FLINK-8006:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4968


> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem by replacing $pid with "$pid" in lines 79 and 103.
> Should I make a PR to the repo?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7845) Netty Exception when submitting batch job repeatedly

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7845.
---
Resolution: Fixed

1.4: 6f9ab7217266f5458263fdd976214c1b4c552576
1.5: 16107cf67416391ccf1493bda747a1083a20

> Netty Exception when submitting batch job repeatedly
> 
>
> Key: FLINK-7845
> URL: https://issues.apache.org/jira/browse/FLINK-7845
> Project: Flink
>  Issue Type: Bug
>  Components: Core, Network
>Affects Versions: 1.3.2
>Reporter: Flavio Pompermaier
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
> Attachments: Screen Shot 2017-11-13 at 14.54.38.png
>
>
> We had some problems with Flink and Netty so we wrote a small unit test to 
> reproduce the memory issues we have in production. It happens that we have to 
> restart the Flink cluster because the memory is always increasing from job to 
> job. 
> The github project is https://github.com/okkam-it/flink-memory-leak and the 
> JUnit test is contained in the MemoryLeakTest class (within src/main/test).
> I don't know if this is the root of our problems but at some point, usually 
> around the 28th loop, the job fails with the following exception (actually we 
> never faced that in production but maybe it is related to the memory issue 
> somehow...):
> {code:java}
> Caused by: java.lang.IllegalAccessError: 
> org/apache/flink/runtime/io/network/netty/NettyMessage
>   at 
> io.netty.util.internal.__matchers__.org.apache.flink.runtime.io.network.netty.NettyMessageMatcher.match(NoOpTypeParameterMatcher.java)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.acceptInboundMessage(SimpleChannelInboundHandler.java:95)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:102)
>   ... 16 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4500) Cassandra sink can lose messages

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251434#comment-16251434
 ] 

ASF GitHub Bot commented on FLINK-4500:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5002


> Cassandra sink can lose messages
> 
>
> Key: FLINK-4500
> URL: https://issues.apache.org/jira/browse/FLINK-4500
> Project: Flink
>  Issue Type: Bug
>  Components: Cassandra Connector
>Affects Versions: 1.1.0
>Reporter: Elias Levy
>Assignee: Michael Fong
> Fix For: 1.4.0
>
>
> The problem is the same as I pointed out with the Kafka producer sink 
> (FLINK-4027).  The CassandraTupleSink's send() and CassandraPojoSink's send() 
> both send data asynchronously to Cassandra and record whether an error occurs 
> via a future callback.  But CassandraSinkBase does not implement 
> Checkpointed, so it can't stop a checkpoint from happening even though there are 
> Cassandra queries in flight from the checkpoint that may fail.  If they do 
> fail, they would subsequently not be replayed when the job recovered, and 
> would thus be lost.
> In addition, 
> CassandraSinkBase's close should check whether there is a pending exception 
> and throw it, rather than silently close.  It should also wait for any 
> pending async queries to complete and check their status before closing.
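
The fix direction described above can be sketched as a small pending-write tracker (a toy model, not the real `CassandraSinkBase`, which works with Cassandra's result futures):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Toy sketch: count in-flight async writes, remember the first failure,
// and have the checkpoint/close path wait for the counter to drain and
// rethrow any recorded failure instead of silently succeeding.
final class PendingWritesTracker {
    private final AtomicInteger pending = new AtomicInteger();
    private final AtomicReference<Throwable> failure = new AtomicReference<>();

    void onSendStarted() {
        pending.incrementAndGet();
    }

    // Invoked from the async callback; error == null means success.
    void onSendCompleted(Throwable error) {
        if (error != null) {
            failure.compareAndSet(null, error);
        }
        synchronized (this) {
            pending.decrementAndGet();
            notifyAll();
        }
    }

    // Called before checkpoint completion and on close(): block until all
    // writes have finished, then surface any asynchronous failure.
    synchronized void flush() {
        while (pending.get() > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while flushing", e);
            }
        }
        Throwable t = failure.get();
        if (t != null) {
            throw new IllegalStateException("async write failed", t);
        }
    }
}
```

With such a tracker, a checkpoint only completes once all in-flight queries have succeeded, so failed writes are replayed on recovery rather than lost.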



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8011) Set dist flink-python dependency to provided

2017-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16251436#comment-16251436
 ] 

ASF GitHub Bot commented on FLINK-8011:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4973


> Set dist flink-python dependency to provided
> 
>
> Key: FLINK-8011
> URL: https://issues.apache.org/jira/browse/FLINK-8011
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>
> We can simplify the flink-dist pom by setting the flink-python dependency to 
> provided, which allows us to remove an exclusion from the shade plugin 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-8011) Set dist flink-python dependency to provided

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-8011.
---
Resolution: Fixed

1.4: 8d8c52f876b79524393d2604b5bedc54c0764ae7
1.5: 119006752e190334fbf4f90fa53f6dfa9374e81b

> Set dist flink-python dependency to provided
> 
>
> Key: FLINK-8011
> URL: https://issues.apache.org/jira/browse/FLINK-8011
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
> Fix For: 1.4.0
>
>
> We can simplify the flink-dist pom by setting the flink-python dependency to 
> provided, which allows us to remove an exclusion from the shade plugin 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5002: [hotfix][docs] Remove the caveat about Cassandra c...

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5002


---


[GitHub] flink pull request #4973: [FLINK-8011][dist] Set flink-python to provided

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4973


---


[jira] [Reopened] (FLINK-7845) Netty Exception when submitting batch job repeatedly

2017-11-14 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reopened FLINK-7845:
-

> Netty Exception when submitting batch job repeatedly
> 
>
> Key: FLINK-7845
> URL: https://issues.apache.org/jira/browse/FLINK-7845
> Project: Flink
>  Issue Type: Bug
>  Components: Core, Network
>Affects Versions: 1.3.2
>Reporter: Flavio Pompermaier
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
> Attachments: Screen Shot 2017-11-13 at 14.54.38.png
>
>
> We had some problems with Flink and Netty so we wrote a small unit test to 
> reproduce the memory issues we have in production. It happens that we have to 
> restart the Flink cluster because the memory is always increasing from job to 
> job. 
> The github project is https://github.com/okkam-it/flink-memory-leak and the 
> JUnit test is contained in the MemoryLeakTest class (within src/main/test).
> I don't know if this is the root of our problems but at some point, usually 
> around the 28th loop, the job fails with the following exception (actually we 
> never faced that in production but maybe is related to the memory issue 
> somehow...):
> {code:java}
> Caused by: java.lang.IllegalAccessError: 
> org/apache/flink/runtime/io/network/netty/NettyMessage
>   at 
> io.netty.util.internal.__matchers__.org.apache.flink.runtime.io.network.netty.NettyMessageMatcher.match(NoOpTypeParameterMatcher.java)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.acceptInboundMessage(SimpleChannelInboundHandler.java:95)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:102)
>   ... 16 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5007: [FLINK-7845] Make NettyMessage public

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5007


---


[GitHub] flink pull request #5000: [hotfix][docs] Fix typos in deployment AWS documen...

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5000


---


[GitHub] flink pull request #4995: [hotfix] [docs] Fix broken link to FLINK-7811

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4995


---


[GitHub] flink pull request #4968: [FLINK-8006] [Startup Shell Scripts] Enclosing $pi...

2017-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4968


---

