[GitHub] flink issue #4459: [FLINK-7221] [jdbc] Throw exception if execution of last ...

2017-08-07 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/4459
  
merging


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7362) CheckpointProperties are not correctly set when restoring savepoint with HA enabled

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116645#comment-16116645
 ] 

ASF GitHub Bot commented on FLINK-7362:
---

GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/4491

[FLINK-7362] [checkpoints] Savepoint property is lost after de/serial…

…ization of CheckpointProperties

## What is the purpose of the change

This PR fixes lost information about the savepoint property when 
de/serializing `CheckpointProperties`. The problem was caused by serialization in 
connection with a singleton that was checked for referential equality to derive 
the savepoint property.
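
A minimal, self-contained sketch (hypothetical class name) of why referential
equality against a singleton breaks across Java de/serialization:

```java
import java.io.*;

public class SingletonSerializationDemo {

    // Stand-in for a singleton-based property marker.
    static class Props implements Serializable {
        static final Props SAVEPOINT = new Props();
    }

    public static void main(String[] args) throws Exception {
        // Serialize the singleton...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(Props.SAVEPOINT);
        }
        // ...and deserialize it: without a readResolve(), Java creates a fresh
        // instance, so an identity check against the singleton now fails.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Props restored = (Props) ois.readObject();
            System.out.println(restored == Props.SAVEPOINT); // prints: false
        }
    }
}
```

Making the property an explicit, serialized boolean field sidesteps the
identity comparison entirely.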


## Brief change log

This makes the savepoint property explicit as a boolean field in 
`CheckpointProperty`. 


## Verifying this change

This change is a trivial rework / code cleanup. Nevertheless, I added a small test.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink 
serializable-savepoint-property

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4491.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4491


commit 64eb3abb55f515252069371f7dd8f874b9c73c7e
Author: Stefan Richter 
Date:   2017-08-07T14:22:32Z

[FLINK-7362] [checkpoints] Savepoint property is lost after 
de/serialization of CheckpointProperties




> CheckpointProperties are not correctly set when restoring savepoint with HA 
> enabled
> ---
>
> Key: FLINK-7362
> URL: https://issues.apache.org/jira/browse/FLINK-7362
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
> Fix For: 1.4.0
>
>
> When restoring a savepoint on a HA setup (with ZooKeeper) the web frontend 
> incorrectly says "Type: Checkpoint" in the information box about the latest 
> restore event.
> The information that this uses is set here: 
> https://github.com/apache/flink/blob/09caa9ffdc8168610c7d0260360c034ea87f904c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1101
> It seems that the {{CheckpointProperties}} of a restored savepoint somehow 
> get lost, maybe because of the recovery step that the 
> {{ZookeeperCompletedCheckpointStore}} is going through.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4491: [FLINK-7362] [checkpoints] Savepoint property is l...

2017-08-07 Thread StefanRRichter
GitHub user StefanRRichter opened a pull request:

https://github.com/apache/flink/pull/4491

[FLINK-7362] [checkpoints] Savepoint property is lost after de/serial…

…ization of CheckpointProperties

## What is the purpose of the change

This PR fixes lost information about the savepoint property when 
de/serializing `CheckpointProperties`. The problem was caused by serialization in 
connection with a singleton that was checked for referential equality to derive 
the savepoint property.


## Brief change log

This makes the savepoint property explicit as a boolean field in 
`CheckpointProperty`. 


## Verifying this change

This change is a trivial rework / code cleanup. Nevertheless, I added a small test.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StefanRRichter/flink 
serializable-savepoint-property

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4491.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4491


commit 64eb3abb55f515252069371f7dd8f874b9c73c7e
Author: Stefan Richter 
Date:   2017-08-07T14:22:32Z

[FLINK-7362] [checkpoints] Savepoint property is lost after 
de/serialization of CheckpointProperties






[GitHub] flink issue #4491: [FLINK-7362] [checkpoints] Savepoint property is lost aft...

2017-08-07 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4491
  
CC @aljoscha 




[jira] [Commented] (FLINK-7362) CheckpointProperties are not correctly set when restoring savepoint with HA enabled

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116647#comment-16116647
 ] 

ASF GitHub Bot commented on FLINK-7362:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4491
  
CC @aljoscha 


> CheckpointProperties are not correctly set when restoring savepoint with HA 
> enabled
> ---
>
> Key: FLINK-7362
> URL: https://issues.apache.org/jira/browse/FLINK-7362
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Aljoscha Krettek
> Fix For: 1.4.0
>
>
> When restoring a savepoint on a HA setup (with ZooKeeper) the web frontend 
> incorrectly says "Type: Checkpoint" in the information box about the latest 
> restore event.
> The information that this uses is set here: 
> https://github.com/apache/flink/blob/09caa9ffdc8168610c7d0260360c034ea87f904c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1101
> It seems that the {{CheckpointProperties}} of a restored savepoint somehow 
> get lost, maybe because of the recovery step that the 
> {{ZookeeperCompletedCheckpointStore}} is going through.





[GitHub] flink pull request #4492: [FLINK-7381] [web] Decouple WebRuntimeMonitor from...

2017-08-07 Thread tillrohrmann
GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4492

[FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorGateway

## What is the purpose of the change

This PR decouples the `WebRuntimeMonitor` from the `ActorGateway` by 
introducing the `JobManagerGateway` interface which can have multiple 
implementations. This is a preliminary step for the integration of the existing 
`WebRuntimeMonitor` with the Flip-6 `JobMaster`.

This PR is based on #4486.
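
A hedged, minimal sketch (hypothetical method signatures) of the decoupling
described above: the monitor depends only on an interface, and the Akka-based
gateway is just one implementation behind it.

```java
import java.util.concurrent.CompletableFuture;

// Abstraction the monitor talks to; no Akka types leak through.
interface JobManagerGateway {
    CompletableFuture<String> requestJobDetails(String jobId);
}

// Akka-backed implementation; the underlying ActorGateway call is elided.
class AkkaJobManagerGateway implements JobManagerGateway {
    @Override
    public CompletableFuture<String> requestJobDetails(String jobId) {
        return CompletableFuture.completedFuture("details for " + jobId);
    }
}

// The monitor works against the interface, so a Flip-6 gateway can be
// swapped in later without touching monitor code.
class WebRuntimeMonitorSketch {
    private final JobManagerGateway gateway;

    WebRuntimeMonitorSketch(JobManagerGateway gateway) {
        this.gateway = gateway;
    }

    CompletableFuture<String> jobDetails(String jobId) {
        return gateway.requestJobDetails(jobId);
    }
}
```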

## Brief change log

- Extending the `JobManagerGateway` with methods to cover the requirements 
of the `WebRuntimeMonitor`
- Change `JobManagerRetriever` to return `JobManagerGateway` instead of 
`Tuple2`
- Adapt handlers to use the `JobManagerRetriever`
- Introduce `MetricQueryServiceRetriever` to retrieve `MetricQueryService` 
implementations running alongside with the JM and TM
- Introduce `MetricQueryServiceGateway` to abstract the implementation 
details of the `MetricQueryService` (e.g. Akka based)
- Pass `MetricQueryServiceRetriever` to `WebRuntimeMonitor` and 
`MetricFetcher`
- Adapt test classes to work with newly introduced interfaces
  - Add `web.timeout` configuration option to control the WebRuntimeMonitor 
timeouts (see `WebMonitorOptions#WEB_TIMEOUT`)

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink webMonitorFlip6

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4492


commit 50d304647e65be90937809d71cef87608f60b9ce
Author: Till Rohrmann 
Date:   2017-08-04T22:28:15Z

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In order to get rid of this dependency for 
future Flip-6 changes, this commit retrieves the BlobServer's address 
beforehand and directly passes it to this method.

commit e4596e060b471464064de142d16d86c0a52ca078
Author: Till Rohrmann 
Date:   2017-08-06T15:56:41Z

[FLINK-7375] Replace ActorGateway with JobManagerGateway in JobClient

In order to make the JobClient code independent of Akka, this PR replaces the 
ActorGateway parameters by JobManagerGateway. AkkaJobManagerGateway is the 
respective implementation of the JobManagerGateway for Akka. Moreover, this 
PR introduces useful ExceptionUtils methods for handling of Future exceptions. 
Additionally, the SerializedThrowable has been moved to flink-core.

commit c88b83db49d0f6f45578e7563e3dd7e28c3a24d3
Author: Till Rohrmann 
Date:   2017-08-02T16:43:00Z

[FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorGateway






[GitHub] flink issue #4470: [FLINK-7343] Simulate network failures in kafka at-least-...

2017-08-07 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4470
  
Yes, I also didn't like adding this new flag, but didn't have enough 
motivation to change it. I have done some refactoring, extracting the fields 
that were dynamically set in the `prepare` method into a `Config` class. 

However, this helps only a little bit. Those tests will need a more 
comprehensive refactoring in the future. I particularly don't like that this 
`prepare` method exists; everything should be configured in the constructor 
and all of those fields should be final.
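
A hedged sketch (hypothetical names) of the direction described above:
configuration fixed at construction time as final fields rather than mutated
in a `prepare()` step.

```java
// All configuration is set once in the constructor; nothing can be
// re-assigned by a later prepare() call.
class KafkaTestConfig {
    final String topic;
    final int parallelism;
    final boolean secureMode;

    KafkaTestConfig(String topic, int parallelism, boolean secureMode) {
        this.topic = topic;
        this.parallelism = parallelism;
        this.secureMode = secureMode;
    }
}
```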




[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116655#comment-16116655
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4470
  
Yes, I also didn't like adding this new flag, but didn't have enough 
motivation to change it. I have done some refactoring, extracting the fields 
that were dynamically set in the `prepare` method into a `Config` class. 

However, this helps only a little bit. Those tests will need a more 
comprehensive refactoring in the future. I particularly don't like that this 
`prepare` method exists; everything should be configured in the constructor 
and all of those fields should be final.


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data loss when killing Kafka brokers 
> in the tests.
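
For context, a hedged illustration: {{log.flush.interval.messages}} is the
standard Kafka broker property controlling how many messages accumulate before
a flush to disk; forcing eager flushes narrows the data-loss window when a
broker is killed mid-test (whether the tests tune it is an assumption here).

{code}
# Force a flush to disk after every message (standard Kafka broker property).
log.flush.interval.messages=1
{code}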





[jira] [Commented] (FLINK-7381) Decouple WebRuntimeMonitor from ActorGateway

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116654#comment-16116654
 ] 

ASF GitHub Bot commented on FLINK-7381:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/4492

[FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorGateway

## What is the purpose of the change

This PR decouples the `WebRuntimeMonitor` from the `ActorGateway` by 
introducing the `JobManagerGateway` interface which can have multiple 
implementations. This is a preliminary step for the integration of the existing 
`WebRuntimeMonitor` with the Flip-6 `JobMaster`.

This PR is based on #4486.

## Brief change log

- Extending the `JobManagerGateway` with methods to cover the requirements 
of the `WebRuntimeMonitor`
- Change `JobManagerRetriever` to return `JobManagerGateway` instead of 
`Tuple2`
- Adapt handlers to use the `JobManagerRetriever`
- Introduce `MetricQueryServiceRetriever` to retrieve `MetricQueryService` 
implementations running alongside with the JM and TM
- Introduce `MetricQueryServiceGateway` to abstract the implementation 
details of the `MetricQueryService` (e.g. Akka based)
- Pass `MetricQueryServiceRetriever` to `WebRuntimeMonitor` and 
`MetricFetcher`
- Adapt test classes to work with newly introduced interfaces
  - Add `web.timeout` configuration option to control the WebRuntimeMonitor 
timeouts (see `WebMonitorOptions#WEB_TIMEOUT`)

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink webMonitorFlip6

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4492


commit 50d304647e65be90937809d71cef87608f60b9ce
Author: Till Rohrmann 
Date:   2017-08-04T22:28:15Z

[FLINK-7372] [JobGraph] Remove ActorGateway from JobGraph

The JobGraph has an unnecessary dependency on the ActorGateway via its 
JobGraph#uploadUserJars method. In order to get rid of this dependency for 
future Flip-6 changes, this commit retrieves the BlobServer's address 
beforehand and directly passes it to this method.

commit e4596e060b471464064de142d16d86c0a52ca078
Author: Till Rohrmann 
Date:   2017-08-06T15:56:41Z

[FLINK-7375] Replace ActorGateway with JobManagerGateway in JobClient

In order to make the JobClient code independent of Akka, this PR replaces the 
ActorGateway parameters by JobManagerGateway. AkkaJobManagerGateway is the 
respective implementation of the JobManagerGateway for Akka. Moreover, this 
PR introduces useful ExceptionUtils methods for handling of Future exceptions. 
Additionally, the SerializedThrowable has been moved to flink-core.

commit c88b83db49d0f6f45578e7563e3dd7e28c3a24d3
Author: Till Rohrmann 
Date:   2017-08-02T16:43:00Z

[FLINK-7381] [web] Decouple WebRuntimeMonitor from ActorGateway




> Decouple WebRuntimeMonitor from ActorGateway
> 
>
> Key: FLINK-7381
> URL: https://issues.apache.org/jira/browse/FLINK-7381
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> The {{WebRuntimeMonitor}} has a hard wired dependency on the {{ActorGateway}} 
> in order to communicate with the {{JobManager}}. In order to make it work 
> with the {{JobMaster}} (Flip-6), we have to abstract this dependency away. I 
> propose to add a {{JobManagerGateway}} interface which can be implemented 
> using Akka for the old {{JobManager}} code. The Flip-6 {{JobMasterGateway}} 
> can then directly inherit from this interface.





[jira] [Assigned] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-07 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-7240:


Assignee: Till Rohrmann

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}
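
The trace shows {{requestCheckpoint}} at {{TestingCluster.scala:394}} invoking
itself over and over. A minimal, hypothetical Java illustration of that failure
pattern, where a retry path accidentally re-enters the same method instead of
delegating, growing the stack until it overflows:

{code}
public class RecursionDemo {

    static int requestCheckpoint(int attempt) {
        // Intended: forward to another component or retry with backoff.
        // Actual: unconditional self-call, so every "retry" adds a stack frame.
        return requestCheckpoint(attempt + 1);
    }

    public static void main(String[] args) {
        requestCheckpoint(0); // eventually throws java.lang.StackOverflowError
    }
}
{code}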





[jira] [Updated] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-07 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-7240:
-
Affects Version/s: 1.4.0

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.<init>(PrintWriter.java:116)
>   at java.io.PrintWriter.<init>(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}





[GitHub] flink issue #4470: [FLINK-7343] Simulate network failures in kafka at-least-...

2017-08-07 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4470
  
Merging.




[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1611#comment-1611
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/4470
  
Merging.


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data loss when killing Kafka brokers 
> in the tests.





[GitHub] flink pull request #4481: [FLINK-7316][network] always use off-heap network ...

2017-08-07 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131672655
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Actually, this was intentional since this was marked as a "dangling 
javadoc" by IntelliJ (it is just a longer inline comment). I don't have too 
strong feelings about it though, so we could either keep it or revert it.




[jira] [Commented] (FLINK-7316) always use off-heap network buffers

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116670#comment-16116670
 ] 

ASF GitHub Bot commented on FLINK-7316:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131672655
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Actually, this was intentional since this was marked as a "dangling 
javadoc" by IntelliJ (it is just a longer inline comment). I don't have too 
strong feelings about it though, so we could either keep it or revert it.


> always use off-heap network buffers
> ---
>
> Key: FLINK-7316
> URL: https://issues.apache.org/jira/browse/FLINK-7316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core, Network
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> In order to send Flink buffers through Netty into the network, we need to 
> make the buffers use off-heap memory. Otherwise, there will be a hidden copy 
> happening in the NIO stack.
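
A minimal sketch of the distinction: when a heap buffer is written to a socket,
the JDK's NIO stack first copies it into a temporary direct buffer, whereas a
direct (off-heap) buffer is handed to the OS without that hidden copy.

{code}
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(4096);          // backed by a byte[] on the heap
        ByteBuffer offHeap = ByteBuffer.allocateDirect(4096); // native, off-heap memory

        System.out.println(heap.isDirect());    // false -> copied on socket writes
        System.out.println(offHeap.isDirect()); // true  -> written directly
    }
}
{code}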





[GitHub] flink pull request #4481: [FLINK-7316][network] always use off-heap network ...

2017-08-07 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131673257
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Ok, then just leave it :D


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7316) always use off-heap network buffers

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116672#comment-16116672
 ] 

ASF GitHub Bot commented on FLINK-7316:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131673257
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Ok, then just leave it :D


> always use off-heap network buffers
> ---
>
> Key: FLINK-7316
> URL: https://issues.apache.org/jira/browse/FLINK-7316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core, Network
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> In order to send Flink buffers through Netty into the network, we need to 
> make the buffers use off-heap memory. Otherwise, there will be a hidden copy 
> happening in the NIO stack.





[jira] [Commented] (FLINK-4499) Introduce findbugs maven plugin

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116323#comment-16116323
 ] 

ASF GitHub Bot commented on FLINK-4499:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4367
  
Looks like the build times increase by 5-10 minutes, but I'd say this is 
worth it.
+1 from my side.


> Introduce findbugs maven plugin
> ---
>
> Key: FLINK-4499
> URL: https://issues.apache.org/jira/browse/FLINK-4499
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: Suneel Marthi
>
> As suggested by Stephan in FLINK-4482, this issue is to add 
> findbugs-maven-plugin into the build process so that we can detect lack of 
> proper locking and other defects automatically.
> We can begin with small set of rules.





[GitHub] flink issue #4367: [FLINK-4499] [build] Add spotbugs plugin

2017-08-07 Thread NicoK
Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/4367
  
Looks like the build times increase by 5-10 minutes, but I'd say this is 
worth it.
+1 from my side.




[jira] [Updated] (FLINK-7354) test instability in LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-7354:
---
Labels: test-stability  (was: )

> test instability in 
> LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers
> -
>
> Key: FLINK-7354
> URL: https://issues.apache.org/jira/browse/FLINK-7354
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.1.4, 1.2.1, 1.4.0, 1.3.2
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Critical
>  Labels: test-stability
>
> During {{mvn clean install}} on the 1.3.2 RC2, I found an inconsistently 
> failing test at 
> {{LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers}}:
> {code}
> testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)
>   Time elapsed: 34.978 sec  <<< FAILURE!
> java.lang.AssertionError: Thread 
> Thread[initialSeedUniquifierGenerator,5,main] was started by the mini 
> cluster, but not shut down
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:168)
> {code}
> Searching the web for that error yields one previous thread on the dev-list, 
> so this seems to affect quite old versions of Flink as well, but apparently 
> it was never solved:
> https://lists.apache.org/thread.html/07ce439bf6d358bd3139541b52ef6b8e8af249a27e09ae10b6698f81@%3Cdev.flink.apache.org%3E
> Test environment: Debian 9, openjdk 1.8.0_141-8u141-b15-1~deb9u1-b15





[jira] [Commented] (FLINK-7300) End-to-end tests are instable on Travis

2017-08-07 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116353#comment-16116353
 ] 

Till Rohrmann commented on FLINK-7300:
--

This does not seem to fully solve the problem: 
https://travis-ci.org/apache/flink/jobs/261076162

> End-to-end tests are instable on Travis
> ---
>
> Key: FLINK-7300
> URL: https://issues.apache.org/jira/browse/FLINK-7300
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Aljoscha Krettek
> Fix For: 1.4.0
>
>
> It seems like the end-to-end tests are unstable, causing the {{misc}} build 
> profile to sporadically fail.
> Incorrectly matched output:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258569408/log.txt?X-Amz-Expires=30=20170731T060526Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4ef9ff5e60fe06db53a84be8d73775a46cb595a8caeb806b05dbbf824d3b69e8
> Another failure example with a different cause than the above, also on the 
> end-to-end tests:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/258841693/log.txt?X-Amz-Expires=30=20170731T060007Z=AWS4-HMAC-SHA256=AKIAJRYRXRSVGNKPKO5A/20170731/us-east-1/s3/aws4_request=host=4a106b3990228b7628c250cc15407bc2c131c8332e1a94ad68d649fe8d32d726





[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116359#comment-16116359
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/4470#discussion_r131608304
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/networking/NetworkFailureHandler.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.networking;
+
+import org.jboss.netty.bootstrap.ClientBootstrap;
+import org.jboss.netty.buffer.ChannelBuffer;
+import org.jboss.netty.buffer.ChannelBuffers;
+import org.jboss.netty.channel.Channel;
+import org.jboss.netty.channel.ChannelFuture;
+import org.jboss.netty.channel.ChannelFutureListener;
+import org.jboss.netty.channel.ChannelHandlerContext;
+import org.jboss.netty.channel.ChannelStateEvent;
+import org.jboss.netty.channel.ExceptionEvent;
+import org.jboss.netty.channel.MessageEvent;
+import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
+import org.jboss.netty.channel.socket.ClientSocketChannelFactory;
+
+import java.net.InetSocketAddress;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.Consumer;
+
+/**
+ * Handler that forwards inbound traffic from the source channel to the
+ * target channel on remoteHost:remotePort and the responses in the opposite
+ * direction. All of the network traffic can be blocked at any time using
+ * the blocked flag.
+ */
+class NetworkFailureHandler extends SimpleChannelUpstreamHandler {
+   private static final String TARGET_CHANNEL_HANDLER_NAME = 
"target_channel_handler";
+
+   // mapping between source and target channels, used for finding the
+   // correct target channel to use for a given source.
+   private final Map<Channel, Channel> sourceToTargetChannels = new ConcurrentHashMap<>();
+   private final Consumer<NetworkFailureHandler> onClose;
+   private final ClientSocketChannelFactory channelFactory;
+   private final String remoteHost;
+   private final int remotePort;
+
+   private final AtomicBoolean blocked;
+
+   public NetworkFailureHandler(
+   AtomicBoolean blocked,
+   Consumer<NetworkFailureHandler> onClose,
+   ClientSocketChannelFactory channelFactory,
+   String remoteHost,
+   int remotePort) {
+   this.blocked = blocked;
+   this.onClose = onClose;
+   this.channelFactory = channelFactory;
+   this.remoteHost = remoteHost;
+   this.remotePort = remotePort;
+   }
+
+   /**
+* Closes the specified channel after all queued write requests are 
flushed.
+*/
+   static void closeOnFlush(Channel channel) {
+   if (channel.isConnected()) {
+   
channel.write(ChannelBuffers.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
+   }
+   }
+
+   public void closeConnections() {
+   for (Map.Entry<Channel, Channel> entry : sourceToTargetChannels.entrySet()) {
+   // target channel is closed on the source channel's channelClosed event
+   entry.getKey().close();
+   }
+   }
+
+   @Override
+   public void channelOpen(ChannelHandlerContext context, 
ChannelStateEvent event) throws Exception {
+   // Suspend incoming traffic until connected to the remote host.
+   final Channel sourceChannel = event.getChannel();
+   sourceChannel.setReadable(false);
+
+   if (blocked.get()) {
+   sourceChannel.close();
+   return;
+   }
+
+   // Start the connection attempt.
+   

[jira] [Assigned] (FLINK-5509) Replace QueryableStateClient keyHashCode argument

2017-08-07 Thread Kostas Kloudas (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-5509:
-

Assignee: Kostas Kloudas

> Replace QueryableStateClient keyHashCode argument
> -
>
> Key: FLINK-5509
> URL: https://issues.apache.org/jira/browse/FLINK-5509
> Project: Flink
>  Issue Type: Sub-task
>  Components: Queryable State
>Reporter: Ufuk Celebi
>Assignee: Kostas Kloudas
>Priority: Minor
>
> When going over the low level QueryableStateClient with [~NicoK] we noticed 
> that the key hashCode argument can be confusing to users:
> {code}
> Future<byte[]> getKvState(
>   JobID jobId,
>   String name,
>   int keyHashCode,
>   byte[] serializedKeyAndNamespace)
> {code}
> The {{keyHashCode}} argument is the result of calling {{hashCode()}} on the 
> key to look up. This is what is sent to the JobManager in order to look up 
> the location of the key. While pretty straightforward, it is repetitive and 
> possibly confusing.
> As an alternative, we suggest making the method generic and simply calling 
> hashCode on the object ourselves. This way the user just provides the key 
> object.
> Since there are some early users of the queryable state API already, we would 
> suggest renaming the method in order to provoke a compilation error after 
> upgrading to the actually released 1.2 version.
> (This would also work without renaming since the hashCode of Integer (what 
> users currently provide) is the same number, but it would be confusing why it 
> actually works.)
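
A hedged sketch (hypothetical signatures, not the actual Flink API) of the
proposed direction: the client derives the hash internally, so callers pass
the key object itself.

{code}
import java.util.concurrent.CompletableFuture;

class QueryableStateClientSketch {

    <K> CompletableFuture<byte[]> getKvState(
            String jobId, String name, K key, byte[] serializedKeyAndNamespace) {
        // The hash is computed here instead of being supplied by the caller.
        int keyHashCode = key.hashCode();
        return lookupByHash(jobId, name, keyHashCode, serializedKeyAndNamespace);
    }

    private CompletableFuture<byte[]> lookupByHash(
            String jobId, String name, int hash, byte[] keyAndNamespace) {
        return CompletableFuture.completedFuture(new byte[0]); // placeholder lookup
    }
}
{code}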





[GitHub] flink issue #4460: [FLINK-7349] [travis] Only execute checkstyle in misc pro...

2017-08-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4460
  
merging.




[jira] [Commented] (FLINK-7349) Only execute checkstyle in one build profile

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116391#comment-16116391
 ] 

ASF GitHub Bot commented on FLINK-7349:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4460
  
merging.


> Only execute checkstyle in one build profile
> 
>
> Key: FLINK-7349
> URL: https://issues.apache.org/jira/browse/FLINK-7349
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle, Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> We can save some time in 4/5 build profiles by skipping checkstyle. One of 
> the build profiles builds Flink completely and would suffice as a check.
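
For reference, a hedged example of how such a skip is commonly wired: the
maven-checkstyle-plugin honors the standard {{checkstyle.skip}} property, so
the faster profiles could pass it on the command line (whether Flink's build
uses exactly this mechanism is an assumption here).

{code}
# Skip checkstyle for this build (standard maven-checkstyle-plugin property).
mvn clean install -Dcheckstyle.skip=true
{code}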





[jira] [Created] (FLINK-7379) Remove `HighAvailabilityServices` from client constructor.

2017-08-07 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-7379:
-

 Summary: Remove `HighAvailabilityServices` from client constructor.
 Key: FLINK-7379
 URL: https://issues.apache.org/jira/browse/FLINK-7379
 Project: Flink
  Issue Type: Sub-task
  Components: Queryable State
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
 Fix For: 1.4.0








[GitHub] flink issue #4456: [FLINK-7343][kafka] Increase Xmx for tests

2017-08-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4456
  
merging.




[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116399#comment-16116399
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4456
  
merging.


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data loss when killing Kafka brokers 
> in the tests.





[GitHub] flink issue #4453: [FLINK-6982] [guava] Reduce guava dependency usages

2017-08-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4453
  
merging.




[GitHub] flink pull request #4490: [FLINK-7382][docs] Fix some broken links in docs

2017-08-07 Thread yew1eb
GitHub user yew1eb opened a pull request:

https://github.com/apache/flink/pull/4490

[FLINK-7382][docs] Fix some broken links in docs

## What is the purpose of the change
Fix some broken links in docs.

## Brief change log


## Verifying this change



## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/flink FLINK-7382

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4490.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4490


commit c975b2e13b000c09746a0bb7830c5ae710c7c9db
Author: zhouhai02 
Date:   2017-08-07T12:48:08Z

fix some broken links in docs






[jira] [Commented] (FLINK-7382) Broken links in `Apache Flink Documentation` page

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116538#comment-16116538
 ] 

ASF GitHub Bot commented on FLINK-7382:
---

GitHub user yew1eb opened a pull request:

https://github.com/apache/flink/pull/4490

[FLINK-7382][docs] Fix some broken links in docs

## What is the purpose of the change
Fix some broken links in docs.

## Brief change log


## Verifying this change



## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/flink FLINK-7382

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4490.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4490


commit c975b2e13b000c09746a0bb7830c5ae710c7c9db
Author: zhouhai02 
Date:   2017-08-07T12:48:08Z

fix some broken links in docs




> Broken links in `Apache Flink Documentation`  page
> --
>
> Key: FLINK-7382
> URL: https://issues.apache.org/jira/browse/FLINK-7382
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Hai Zhou
>Priority: Minor
>
> Some links in the *External Resources* section are broken.





[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116539#comment-16116539
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4456
  
Thanks :)


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data loss when killing Kafka brokers 
> in the tests.





[GitHub] flink issue #4456: [FLINK-7343][kafka] Increase Xmx for tests

2017-08-07 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4456
  
Thanks :)




[GitHub] flink pull request #4470: [FLINK-7343] Simulate network failures in kafka at...

2017-08-07 Thread pnowojski
Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/4470#discussion_r131648638
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/networking/NetworkFailureHandler.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.networking;
+
+import org.jboss.netty.bootstrap.ClientBootstrap;
+import org.jboss.netty.buffer.ChannelBuffer;
+import org.jboss.netty.buffer.ChannelBuffers;
+import org.jboss.netty.channel.Channel;
+import org.jboss.netty.channel.ChannelFuture;
+import org.jboss.netty.channel.ChannelFutureListener;
+import org.jboss.netty.channel.ChannelHandlerContext;
+import org.jboss.netty.channel.ChannelStateEvent;
+import org.jboss.netty.channel.ExceptionEvent;
+import org.jboss.netty.channel.MessageEvent;
+import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
+import org.jboss.netty.channel.socket.ClientSocketChannelFactory;
+
+import java.net.InetSocketAddress;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.Consumer;
+
+/**
+ * Handler that forwards inbound traffic from the source channel to the
+ * target channel on remoteHost:remotePort and the responses in the opposite
+ * direction. All of the network traffic can be blocked at any time using
+ * the blocked flag.
+ */
+class NetworkFailureHandler extends SimpleChannelUpstreamHandler {
+   private static final String TARGET_CHANNEL_HANDLER_NAME = 
"target_channel_handler";
+
+   // mapping between source and target channels, used for finding the
+   // correct target channel to use for a given source.
+   private final Map<Channel, Channel> sourceToTargetChannels = new ConcurrentHashMap<>();
+   private final Consumer<NetworkFailureHandler> onClose;
+   private final ClientSocketChannelFactory channelFactory;
+   private final String remoteHost;
+   private final int remotePort;
+
+   private final AtomicBoolean blocked;
+
+   public NetworkFailureHandler(
+   AtomicBoolean blocked,
+   Consumer<NetworkFailureHandler> onClose,
+   ClientSocketChannelFactory channelFactory,
+   String remoteHost,
+   int remotePort) {
+   this.blocked = blocked;
+   this.onClose = onClose;
+   this.channelFactory = channelFactory;
+   this.remoteHost = remoteHost;
+   this.remotePort = remotePort;
+   }
+
+   /**
+* Closes the specified channel after all queued write requests are 
flushed.
+*/
+   static void closeOnFlush(Channel channel) {
+   if (channel.isConnected()) {
+   
channel.write(ChannelBuffers.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
+   }
+   }
+
+   public void closeConnections() {
+   for (Map.Entry<Channel, Channel> entry : sourceToTargetChannels.entrySet()) {
+   // target channel is closed on the source channel's channelClosed event
+   entry.getKey().close();
+   }
+   }
+
+   @Override
+   public void channelOpen(ChannelHandlerContext context, 
ChannelStateEvent event) throws Exception {
+   // Suspend incoming traffic until connected to the remote host.
+   final Channel sourceChannel = event.getChannel();
+   sourceChannel.setReadable(false);
+
+   if (blocked.get()) {
+   sourceChannel.close();
+   return;
+   }
+
+   // Start the connection attempt.
+   ClientBootstrap targetConnectionBootstrap = new 
ClientBootstrap(channelFactory);
+   
targetConnectionBootstrap.getPipeline().addLast(TARGET_CHANNEL_HANDLER_NAME, 
new TargetChannelHandler(event.getChannel(), blocked));
+   

[jira] [Commented] (FLINK-7376) Cleanup options class and test classes in flink-clients

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116551#comment-16116551
 ] 

ASF GitHub Bot commented on FLINK-7376:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4484
  
Yes! I hope I didn't discourage you from contributing. It's just that 
lately we have become careful with purely cosmetic changes because they can 
introduce subtle bugs and cause overhead when reviewing.


> Cleanup options class and test classes in flink-clients 
> 
>
> Key: FLINK-7376
> URL: https://issues.apache.org/jira/browse/FLINK-7376
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Reporter: Hai Zhou
>Priority: Critical
>  Labels: cleanup, test
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4484: [FLINK-7376][Client] Cleanup options class and test class...

2017-08-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4484
  
Yes! I hope I didn't discourage you from contributing. It's just that 
lately we have become careful with purely cosmetic changes because they can 
introduce subtle bugs and cause overhead when reviewing.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4489: [FLINK-7383] Remove ConfigurationUtil

2017-08-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4489
  
👍 LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4457: [hotfix] [gelly] Explicit type can be replaced wit...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4457


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4367: [FLINK-4499] [build] Add spotbugs plugin

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4367


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4461: [FLINK-7350] [travis] Only execute japicmp in misc...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4461


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4452: [FLINK-7013] [shading] Introduce flink-shaded-nett...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4452


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4456: [FLINK-7343][kafka] Increase Xmx for tests

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4456


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4460: [FLINK-7349] [travis] Only execute checkstyle in m...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4460


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-7348) Allow redundant modifiers on methods

2017-08-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7348.
---
Resolution: Fixed

1.4: 614c18dcd9b6424d87ea836e08ddf5a84cc53894

> Allow redundant modifiers on methods
> 
>
> Key: FLINK-7348
> URL: https://issues.apache.org/jira/browse/FLINK-7348
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> As per the discussion in https://github.com/apache/flink/pull/4447 we should 
> allow redundant modifiers on methods, and revert changes that removed 
> {{final}} modifiers from methods.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116568#comment-16116568
 ] 

Chesnay Schepler commented on FLINK-7343:
-

Xmx increase in 1.4 in 4406d4868320c72ce7c744748cbd7528ff4bc642

> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data losses when Kafka brokers are killed 
> in the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-4499) Introduce findbugs maven plugin

2017-08-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-4499.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

1.4: a1644076ee0b1771777ffc9e5634e5b2ece89d00

> Introduce findbugs maven plugin
> ---
>
> Key: FLINK-4499
> URL: https://issues.apache.org/jira/browse/FLINK-4499
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: Suneel Marthi
> Fix For: 1.4.0
>
>
> As suggested by Stephan in FLINK-4482, this issue is to add 
> findbugs-maven-plugin into the build process so that we can detect lack of 
> proper locking and other defects automatically.
> We can begin with a small set of rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4499) Introduce findbugs maven plugin

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116561#comment-16116561
 ] 

ASF GitHub Bot commented on FLINK-4499:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4367


> Introduce findbugs maven plugin
> ---
>
> Key: FLINK-4499
> URL: https://issues.apache.org/jira/browse/FLINK-4499
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: Suneel Marthi
>
> As suggested by Stephan in FLINK-4482, this issue is to add 
> findbugs-maven-plugin into the build process so that we can detect lack of 
> proper locking and other defects automatically.
> We can begin with a small set of rules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116559#comment-16116559
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4456


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to log.flush intervals in Kafka, which delay flushing 
> the data to files, potentially causing data losses when Kafka brokers are killed 
> in the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7350) only execute japicmp in one build profile

2017-08-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7350.
---
Resolution: Fixed

1.4: be8eb1a6000ad7fb97c254a1bd46f5b058e04d45

> only execute japicmp in one build profile
> -
>
> Key: FLINK-7350
> URL: https://issues.apache.org/jira/browse/FLINK-7350
> Project: Flink
>  Issue Type: Improvement
>  Components: Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Similarly to FLINK-7349, we improve build times (and stability!) by only 
> executing the japicmp plugin in the build profile that builds all of 
> Flink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116594#comment-16116594
 ] 

ASF GitHub Bot commented on FLINK-4565:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4404#discussion_r131655114
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/subquery.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.rex.{RexNode, RexSubQuery}
+import org.apache.calcite.sql.fun.SqlStdOperatorTable
+import org.apache.calcite.tools.RelBuilder
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.typeutils.TypeCheckUtils._
+import org.apache.flink.table.validate.{ValidationFailure, 
ValidationResult, ValidationSuccess}
+
+case class In(expression: Expression, elements: Seq[Expression]) extends 
Expression  {
+
+  override def toString = s"$expression.in(${elements.mkString(", ")})"
+
+  override private[flink] def children: Seq[Expression] = expression +: 
elements.distinct
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): 
RexNode = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+RexSubQuery.in(table.getRelNode, 
ImmutableList.of(expression.toRexNode))
+
+  case _ =>
+relBuilder.call(SqlStdOperatorTable.IN, children.map(_.toRexNode): 
_*)
+}
+  }
+
+  override private[flink] def validateInput(): ValidationResult = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+if (elements.length != 1) {
+  return ValidationFailure("IN operator supports only one table 
reference.")
+}
+if (table.tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  return ValidationFailure(
+"Sub-query IN operator on stream tables is currently not 
supported.")
+}
+val tableOutput = table.logicalPlan.output
+if (tableOutput.length > 1) {
+  return ValidationFailure(
+s"The sub-query table '$name' must not have more than one 
column.")
+}
+(expression.resultType, tableOutput.head.resultType) match {
+  case (lType, rType) if isNumeric(lType) && isNumeric(rType) => 
ValidationSuccess
+  case (lType, rType) if lType == rType => ValidationSuccess
--- End diff --

I think a hotfix is enough. This part is not performance critical anyway, 
because it happens before runtime.


> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.4.0
>
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4404: [FLINK-4565] [table] Support for SQL IN operator

2017-08-07 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4404#discussion_r131655114
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/subquery.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.rex.{RexNode, RexSubQuery}
+import org.apache.calcite.sql.fun.SqlStdOperatorTable
+import org.apache.calcite.tools.RelBuilder
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.typeutils.TypeCheckUtils._
+import org.apache.flink.table.validate.{ValidationFailure, 
ValidationResult, ValidationSuccess}
+
+case class In(expression: Expression, elements: Seq[Expression]) extends 
Expression  {
+
+  override def toString = s"$expression.in(${elements.mkString(", ")})"
+
+  override private[flink] def children: Seq[Expression] = expression +: 
elements.distinct
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): 
RexNode = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+RexSubQuery.in(table.getRelNode, 
ImmutableList.of(expression.toRexNode))
+
+  case _ =>
+relBuilder.call(SqlStdOperatorTable.IN, children.map(_.toRexNode): 
_*)
+}
+  }
+
+  override private[flink] def validateInput(): ValidationResult = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+if (elements.length != 1) {
+  return ValidationFailure("IN operator supports only one table 
reference.")
+}
+if (table.tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  return ValidationFailure(
+"Sub-query IN operator on stream tables is currently not 
supported.")
+}
+val tableOutput = table.logicalPlan.output
+if (tableOutput.length > 1) {
+  return ValidationFailure(
+s"The sub-query table '$name' must not have more than one 
column.")
+}
+(expression.resultType, tableOutput.head.resultType) match {
+  case (lType, rType) if isNumeric(lType) && isNumeric(rType) => 
ValidationSuccess
+  case (lType, rType) if lType == rType => ValidationSuccess
--- End diff --

I think a hotfix is enough. This part is not performance critical anyway, 
because it happens before runtime.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7316) always use off-heap network buffers

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116598#comment-16116598
 ] 

ASF GitHub Bot commented on FLINK-7316:
---

Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131648828
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Revert please.


> always use off-heap network buffers
> ---
>
> Key: FLINK-7316
> URL: https://issues.apache.org/jira/browse/FLINK-7316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core, Network
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> In order to send flink buffers through netty into the network, we need to 
> make the buffers use off-heap memory. Otherwise, there will be a hidden copy 
> happening in the NIO stack.
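An illustration of that distinction in plain JDK NIO (not Flink code):

{code:java}
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        // Heap buffer: the NIO stack copies its contents into a temporary
        // direct buffer before every socket write, i.e. the hidden copy.
        ByteBuffer heap = ByteBuffer.allocate(32 * 1024);

        // Direct buffer: lives outside the Java heap and can be handed to
        // the OS without that extra copy.
        ByteBuffer direct = ByteBuffer.allocateDirect(32 * 1024);

        System.out.println(heap.isDirect() + " / " + direct.isDirect()); // false / true
    }
}
{code}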



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7245) Enhance the operators to support holding back watermarks

2017-08-07 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116601#comment-16116601
 ] 

Fabian Hueske commented on FLINK-7245:
--

Hi [~xccui],

I think this is a very good observation:

bq. The delay could be either static or dynamic.

My thinking was much too complicated. We can implement a windowed stream join 
with a static watermark delay that is defined by the window bounds (possibly 
adjusted by a late-data interval) and does not need to materialize state. 
Basically, we can delay the watermark by the time at which we discard a row 
from the state. Once a row is discarded, we cannot emit it anymore. So there is no 
immediate need to implement the more complex variant that keeps track of future 
record timestamps and checkpoints them.

Regarding your questions:
1. Only the first one would be feasible. A parallel task instance handles one 
or more key groups, each with many keys. So we would need to iterate over all 
keys, which would be very expensive.
2. Looking up the lowest timestamp for each key would mean querying the keyed 
state. So yes, we would not need additional state but could query the existing 
state; however, that would also be expensive.
3. Watermarks are handled per task instance. So we would need to coordinate the 
watermark across all keys processed by the same instance (see 1. and 2.). There 
would be no need for global (distributed) coordination across task instances.

I would suggest restricting this JIRA to implementing an operator with a constant 
watermark delay. When a watermark is received, the delay is subtracted and the 
resulting watermark is forwarded. This will enable the implementation of a 
windowed stream join.
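
For concreteness, a minimal sketch of the constant delay, modeled on the 
{{processWatermark()}} snippet in the issue description (the {{watermarkDelay}} 
field is hypothetical):

{code:java}
public void processWatermark(Watermark mark) throws Exception {
    // subtract a constant delay before advancing timers and forwarding
    Watermark delayed = new Watermark(mark.getTimestamp() - watermarkDelay);
    if (timeServiceManager != null) {
        timeServiceManager.advanceWatermark(delayed);
    }
    output.emitWatermark(delayed);
}
{code}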

What do you think, [~xccui]?

> Enhance the operators to support holding back watermarks
> 
>
> Key: FLINK-7245
> URL: https://issues.apache.org/jira/browse/FLINK-7245
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>
> Currently the watermarks are applied and emitted by the 
> {{AbstractStreamOperator}} instantly. 
> {code:java}
> public void processWatermark(Watermark mark) throws Exception {
>   if (timeServiceManager != null) {
>   timeServiceManager.advanceWatermark(mark);
>   }
>   output.emitWatermark(mark);
> }
> {code}
> Some calculation results (with timestamp fields) triggered by these 
> watermarks (e.g., join or aggregate results) may be regarded as late by 
> the downstream operators since their timestamps must be less than or equal to 
> the corresponding triggers. 
> This issue aims to add another "working mode", which supports holding back 
> watermarks, to the current operators. These watermarks should be blocked and 
> stored by the operators until all the corresponding newly generated results are 
> emitted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLINK-6305) flink-ml tests are executed in all flink-fast-test profiles

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber resolved FLINK-6305.

Resolution: Fixed

> flink-ml tests are executed in all flink-fast-test profiles
> ---
>
> Key: FLINK-6305
> URL: https://issues.apache.org/jira/browse/FLINK-6305
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0
>Reporter: Nico Kruber
>Priority: Minor
>
> The {{flink-fast-tests-\*}} profiles partition the unit tests based on their 
> starting letter. However, this does not affect the Scala tests run via the 
> ScalaTest plugin, and therefore {{flink-ml}} tests are executed in all three 
> currently existing profiles. While this may not be that grave, these tests run 
> for about 2.5 minutes on Travis CI, which could be saved in two of the three 
> profiles there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-6571) InfiniteSource in SourceStreamOperatorTest should deal with InterruptedExceptions

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-6571:
---
Labels: test-stability  (was: )

> InfiniteSource in SourceStreamOperatorTest should deal with 
> InterruptedExceptions
> -
>
> Key: FLINK-6571
> URL: https://issues.apache.org/jira/browse/FLINK-6571
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: test-stability
>
> So this is a new one: i got a failing test 
> ({{testNoMaxWatermarkOnAsyncStop}}) due to an uncatched InterruptedException.
> {code}
> [00:28:15] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.828 sec <<< FAILURE! - in 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest
> [00:28:15] 
> testNoMaxWatermarkOnAsyncStop(org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest)
>   Time elapsed: 0 sec  <<< ERROR!
> [00:28:15] java.lang.InterruptedException: sleep interrupted
> [00:28:15]at java.lang.Thread.sleep(Native Method)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest$InfiniteSource.run(StreamSourceOperatorTest.java:343)
> [00:28:15]at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
> [00:28:15]at 
> org.apache.flink.streaming.runtime.operators.StreamSourceOperatorTest.testNoMaxWatermarkOnAsyncStop(StreamSourceOperatorTest.java:176)
> {code}
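
A minimal sketch of how the test source could tolerate the interrupt 
(hypothetical; the {{running}} flag and element type are assumptions based on 
the stack trace above):

{code:java}
public void run(SourceContext<String> ctx) throws Exception {
    while (running) {
        try {
            Thread.sleep(20);
        } catch (InterruptedException e) {
            if (!running) {
                return; // interrupted as part of an async stop/cancel: exit quietly
            }
            throw e;    // unexpected interrupt: let the test fail
        }
    }
}
{code}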



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-6567) ExecutionGraphMetricsTest fails on Windows CI

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-6567:
---
Labels: test-stability  (was: )

> ExecutionGraphMetricsTest fails on Windows CI
> -
>
> Key: FLINK-6567
> URL: https://issues.apache.org/jira/browse/FLINK-6567
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: test-stability
>
> The {{testExecutionGraphRestartTimeMetric}} fails every time I run it on 
> AppVeyor. It also very rarely failed for me locally.
> The test fails at line 235 if the RUNNING timestamp is equal to the 
> RESTARTING timestamp, which may happen when combining a fast test with a 
> low-resolution clock.
> A simple fix would be to increase the gap between the RUNNING and 
> RESTARTING timestamps by adding a 50ms sleep into the 
> {{TestingRestartStrategy#canRestart()}} method, as this one is called before 
> transitioning to the RESTARTING state.
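
A sketch of that suggested fix (hypothetical, derived from the description above):

{code:java}
@Override
public boolean canRestart() {
    try {
        // ensure the RESTARTING timestamp is strictly later than the
        // RUNNING one, even on a low-resolution clock
        Thread.sleep(50);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    return true;
}
{code}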



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116547#comment-16116547
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user pnowojski commented on a diff in the pull request:

https://github.com/apache/flink/pull/4470#discussion_r131648638
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/networking/NetworkFailureHandler.java
 ---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.networking;
+
+import org.jboss.netty.bootstrap.ClientBootstrap;
+import org.jboss.netty.buffer.ChannelBuffer;
+import org.jboss.netty.buffer.ChannelBuffers;
+import org.jboss.netty.channel.Channel;
+import org.jboss.netty.channel.ChannelFuture;
+import org.jboss.netty.channel.ChannelFutureListener;
+import org.jboss.netty.channel.ChannelHandlerContext;
+import org.jboss.netty.channel.ChannelStateEvent;
+import org.jboss.netty.channel.ExceptionEvent;
+import org.jboss.netty.channel.MessageEvent;
+import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
+import org.jboss.netty.channel.socket.ClientSocketChannelFactory;
+
+import java.net.InetSocketAddress;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.function.Consumer;
+
+/**
+ * Handler that forwards inbound traffic from the source channel to the target channel
+ * on remoteHost:remotePort and the responses in the opposite direction. All network
+ * traffic can be blocked at any time using the blocked flag.
+ */
+class NetworkFailureHandler extends SimpleChannelUpstreamHandler {
+   private static final String TARGET_CHANNEL_HANDLER_NAME = 
"target_channel_handler";
+
+   // mapping between source and target channels, used for finding the correct target channel for a given source.
+   private final Map<Channel, Channel> sourceToTargetChannels = new ConcurrentHashMap<>();
+   private final Consumer onClose;
+   private final ClientSocketChannelFactory channelFactory;
+   private final String remoteHost;
+   private final int remotePort;
+
+   private final AtomicBoolean blocked;
+
+   public NetworkFailureHandler(
+   AtomicBoolean blocked,
+   Consumer onClose,
+   ClientSocketChannelFactory channelFactory,
+   String remoteHost,
+   int remotePort) {
+   this.blocked = blocked;
+   this.onClose = onClose;
+   this.channelFactory = channelFactory;
+   this.remoteHost = remoteHost;
+   this.remotePort = remotePort;
+   }
+
+   /**
+* Closes the specified channel after all queued write requests are 
flushed.
+*/
+   static void closeOnFlush(Channel channel) {
+   if (channel.isConnected()) {
+   
channel.write(ChannelBuffers.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
+   }
+   }
+
+   public void closeConnections() {
+   for (Map.Entry<Channel, Channel> entry : sourceToTargetChannels.entrySet()) {
+   // the target channel is closed on the source channel's channelClosed event
+   entry.getKey().close();
+   }
+   }
+
+   @Override
+   public void channelOpen(ChannelHandlerContext context, 
ChannelStateEvent event) throws Exception {
+   // Suspend incoming traffic until connected to the remote host.
+   final Channel sourceChannel = event.getChannel();
+   sourceChannel.setReadable(false);
+
+   if (blocked.get()) {
+   sourceChannel.close();
+   return;
+   }
+
+   // Start the connection attempt.
+   

[jira] [Updated] (FLINK-6838) RescalingITCase fails in master branch

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-6838:
---
Labels: test-stability  (was: )

> RescalingITCase fails in master branch
> --
>
> Key: FLINK-6838
> URL: https://issues.apache.org/jira/browse/FLINK-6838
> Project: Flink
>  Issue Type: Test
>  Components: State Backends, Checkpointing, Tests
>Reporter: Ted Yu
>Priority: Minor
>  Labels: test-stability
>
> {code}
> Tests in error:
>   RescalingITCase.testSavepointRescalingInKeyedState[1] » JobExecution Job 
> execu...
>   RescalingITCase.testSavepointRescalingWithKeyedAndNonPartitionedState[1] » 
> JobExecution
> {code}
> Both failed with similar cause:
> {code}
> testSavepointRescalingInKeyedState[1](org.apache.flink.test.checkpointing.RescalingITCase)
>   Time elapsed: 4.813 sec  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:933)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:876)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: org.apache.flink.streaming.runtime.tasks.AsynchronousException: 
> java.lang.Exception: Could not materialize checkpoint 4 for operator Flat Map 
> -> Sink: Unnamed (1/2).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:967)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.Exception: Could not materialize checkpoint 4 for 
> operator Flat Map -> Sink: Unnamed (1/2).
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:967)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: 
> Cannot register Closeable, registry is already closed. Closing argument.
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:43)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:894)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Cannot register Closeable, registry is 
> already closed. Closing argument.
>   at 
> org.apache.flink.util.AbstractCloseableRegistry.registerClosable(AbstractCloseableRegistry.java:66)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBFullSnapshotOperation.openCheckpointStream(RocksDBKeyedStateBackend.java:495)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.openIOHandle(RocksDBKeyedStateBackend.java:394)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$3.openIOHandle(RocksDBKeyedStateBackend.java:390)
>   at 
> 

[jira] [Commented] (FLINK-7383) Remove ConfigurationUtil

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116555#comment-16116555
 ] 

ASF GitHub Bot commented on FLINK-7383:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4489
  
👍 LGTM


> Remove ConfigurationUtil
> 
>
> Key: FLINK-7383
> URL: https://issues.apache.org/jira/browse/FLINK-7383
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> The {{ConfigurationUtil}} can be removed because it is no longer used and is 
> basically subsumed by Flink's {{ConfigOption}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7348) Allow redundant modifiers on methods

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116562#comment-16116562
 ] 

ASF GitHub Bot commented on FLINK-7348:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4458


> Allow redundant modifiers on methods
> 
>
> Key: FLINK-7348
> URL: https://issues.apache.org/jira/browse/FLINK-7348
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> As per the discussion in https://github.com/apache/flink/pull/4447 we should 
> allow redundant modifiers on methods, and revert changes that removed 
> {{final}} modifiers from methods.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7349) Only execute checkstyle in one build profile

2017-08-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7349.
---
Resolution: Fixed

1.4: c4a5dd85a92fcc5d4561bada609e0385acde381f

> Only execute checkstyle in one build profile
> 
>
> Key: FLINK-7349
> URL: https://issues.apache.org/jira/browse/FLINK-7349
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle, Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> We can save some time in 4/5 build profiles by skipping checkstyle. One of 
> the build profiles builds flink completely and would suffice as a check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7350) only execute japicmp in one build profile

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116558#comment-16116558
 ] 

ASF GitHub Bot commented on FLINK-7350:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4461


> only execute japicmp in one build profile
> -
>
> Key: FLINK-7350
> URL: https://issues.apache.org/jira/browse/FLINK-7350
> Project: Flink
>  Issue Type: Improvement
>  Components: Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> Similarly to FLINK-7349, we improve build times (and stability!) by only 
> executing the japicmp plugin in the build profile that builds all of 
> Flink.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4453: [FLINK-6982] [guava] Reduce guava dependency usage...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4453


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4458: [FLINK-7348] [checkstyle] Allow redundant modifier...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4458


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6982) Replace guava dependencies

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116560#comment-16116560
 ] 

ASF GitHub Bot commented on FLINK-6982:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4453


> Replace guava dependencies
> --
>
> Key: FLINK-6982
> URL: https://issues.apache.org/jira/browse/FLINK-6982
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7349) Only execute checkstyle in one build profile

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116557#comment-16116557
 ] 

ASF GitHub Bot commented on FLINK-7349:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4460


> Only execute checkstyle in one build profile
> 
>
> Key: FLINK-7349
> URL: https://issues.apache.org/jira/browse/FLINK-7349
> Project: Flink
>  Issue Type: Improvement
>  Components: Checkstyle, Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> We can save some time in 4/5 build profiles by skipping checkstyle. One of 
> the build profiles builds flink completely and would suffice as a check.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7013) Add shaded netty dependency

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116563#comment-16116563
 ] 

ASF GitHub Bot commented on FLINK-7013:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4452


> Add shaded netty dependency
> ---
>
> Key: FLINK-7013
> URL: https://issues.apache.org/jira/browse/FLINK-7013
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7027) Replace netty dependencies

2017-08-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7027.
---
Resolution: Fixed

1.4: 1ceb89a979d560944e1099fa4700e46c09a79484

> Replace netty dependencies
> --
>
> Key: FLINK-7027
> URL: https://issues.apache.org/jira/browse/FLINK-7027
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Network
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6982) Replace guava dependencies

2017-08-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116573#comment-16116573
 ] 

Chesnay Schepler commented on FLINK-6982:
-

Several guava dependencies were removed for 1.4 in 
0910bc537207895414a2e853f62c15ce4d0d91ae

> Replace guava dependencies
> --
>
> Key: FLINK-6982
> URL: https://issues.apache.org/jira/browse/FLINK-6982
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116585#comment-16116585
 ] 

ASF GitHub Bot commented on FLINK-4565:
---

Github user tedyu commented on a diff in the pull request:

https://github.com/apache/flink/pull/4404#discussion_r131654308
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/subquery.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.rex.{RexNode, RexSubQuery}
+import org.apache.calcite.sql.fun.SqlStdOperatorTable
+import org.apache.calcite.tools.RelBuilder
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.typeutils.TypeCheckUtils._
+import org.apache.flink.table.validate.{ValidationFailure, 
ValidationResult, ValidationSuccess}
+
+case class In(expression: Expression, elements: Seq[Expression]) extends 
Expression  {
+
+  override def toString = s"$expression.in(${elements.mkString(", ")})"
+
+  override private[flink] def children: Seq[Expression] = expression +: 
elements.distinct
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): 
RexNode = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+RexSubQuery.in(table.getRelNode, 
ImmutableList.of(expression.toRexNode))
+
+  case _ =>
+relBuilder.call(SqlStdOperatorTable.IN, children.map(_.toRexNode): 
_*)
+}
+  }
+
+  override private[flink] def validateInput(): ValidationResult = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+if (elements.length != 1) {
+  return ValidationFailure("IN operator supports only one table 
reference.")
+}
+if (table.tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  return ValidationFailure(
+"Sub-query IN operator on stream tables is currently not 
supported.")
+}
+val tableOutput = table.logicalPlan.output
+if (tableOutput.length > 1) {
+  return ValidationFailure(
+s"The sub-query table '$name' must not have more than one 
column.")
+}
+(expression.resultType, tableOutput.head.resultType) match {
+  case (lType, rType) if isNumeric(lType) && isNumeric(rType) => 
ValidationSuccess
+  case (lType, rType) if lType == rType => ValidationSuccess
--- End diff --

Should I open a new JIRA?


> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
> Fix For: 1.4.0
>
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4404: [FLINK-4565] [table] Support for SQL IN operator

2017-08-07 Thread tedyu
Github user tedyu commented on a diff in the pull request:

https://github.com/apache/flink/pull/4404#discussion_r131654308
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/subquery.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import com.google.common.collect.ImmutableList
+import org.apache.calcite.rex.{RexNode, RexSubQuery}
+import org.apache.calcite.sql.fun.SqlStdOperatorTable
+import org.apache.calcite.tools.RelBuilder
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo._
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.table.api.StreamTableEnvironment
+import org.apache.flink.table.typeutils.TypeCheckUtils._
+import org.apache.flink.table.validate.{ValidationFailure, 
ValidationResult, ValidationSuccess}
+
+case class In(expression: Expression, elements: Seq[Expression]) extends 
Expression  {
+
+  override def toString = s"$expression.in(${elements.mkString(", ")})"
+
+  override private[flink] def children: Seq[Expression] = expression +: 
elements.distinct
+
+  override private[flink] def toRexNode(implicit relBuilder: RelBuilder): 
RexNode = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+RexSubQuery.in(table.getRelNode, 
ImmutableList.of(expression.toRexNode))
+
+  case _ =>
+relBuilder.call(SqlStdOperatorTable.IN, children.map(_.toRexNode): 
_*)
+}
+  }
+
+  override private[flink] def validateInput(): ValidationResult = {
+// check if this is a sub-query expression or an element list
+elements.head match {
+
+  case TableReference(name, table) =>
+if (elements.length != 1) {
+  return ValidationFailure("IN operator supports only one table 
reference.")
+}
+if (table.tableEnv.isInstanceOf[StreamTableEnvironment]) {
+  return ValidationFailure(
+"Sub-query IN operator on stream tables is currently not 
supported.")
+}
+val tableOutput = table.logicalPlan.output
+if (tableOutput.length > 1) {
+  return ValidationFailure(
+s"The sub-query table '$name' must not have more than one 
column.")
+}
+(expression.resultType, tableOutput.head.resultType) match {
+  case (lType, rType) if isNumeric(lType) && isNumeric(rType) => 
ValidationSuccess
+  case (lType, rType) if lType == rType => ValidationSuccess
--- End diff --

Should I open a new JIRA?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4481: [FLINK-7316][network] always use off-heap network ...

2017-08-07 Thread StefanRRichter
Github user StefanRRichter commented on a diff in the pull request:

https://github.com/apache/flink/pull/4481#discussion_r131648828
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/buffer/NetworkBufferPool.java
 ---
@@ -274,7 +260,7 @@ private void redistributeBuffers() throws IOException {
return;
}
 
-   /**
--- End diff --

Revert please.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7240) Externalized RocksDB can fail with stackoverflow

2017-08-07 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116716#comment-16116716
 ] 

Till Rohrmann commented on FLINK-7240:
--

The underlying problem is the following: The {{ExternalizedCheckpointITCase}} 
executes multiple jobs per test on the same cluster. The individual jobs are 
stopped via a {{CancelJob}} message. The problem is that we don't wait 
until the jobs have been completely cancelled; this is only loosely enforced by a 
{{Thread.sleep}} call. If this timeout is not enough (e.g. on Travis), 
the next job may simply fail because it does not have 
enough resources available. If we then try to request a new checkpoint, it will 
always fail. Combined with the recursive retry, this then leads to the 
{{StackOverflowError}}.
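
In illustrative Java (the actual code is in TestingCluster.scala; the helper 
{{triggerCheckpoint}} is hypothetical), the failure mode and a bounded 
alternative look roughly like this:

{code:java}
// Recursion as in the stack trace: every failed checkpoint request
// immediately retries, with no depth limit or backoff.
String requestCheckpoint(JobID jobId) {
    try {
        return triggerCheckpoint(jobId); // keeps failing while the job lacks resources
    } catch (Exception e) {
        return requestCheckpoint(jobId); // unbounded recursion -> StackOverflowError
    }
}

// A bounded, iterative retry avoids the overflow.
String requestCheckpointSafely(JobID jobId, int maxAttempts) throws Exception {
    Exception last = null;
    for (int i = 0; i < maxAttempts; i++) {
        try {
            return triggerCheckpoint(jobId);
        } catch (Exception e) {
            last = e;
        }
    }
    throw last;
}
{code}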

> Externalized RocksDB can fail with stackoverflow
> 
>
> Key: FLINK-7240
> URL: https://issues.apache.org/jira/browse/FLINK-7240
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing, Tests
>Affects Versions: 1.3.1, 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/255760513
>Reporter: Chesnay Schepler
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> {code}
> testExternalizedFullRocksDBCheckpointsStandalone(org.apache.flink.test.checkpointing.ExternalizedCheckpointITCase)
>   Time elapsed: 146.894 sec  <<< ERROR!
> java.lang.StackOverflowError: null
>   at java.util.Hashtable.get(Hashtable.java:363)
>   at java.util.Properties.getProperty(Properties.java:969)
>   at java.lang.System.getProperty(System.java:720)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:84)
>   at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.PrintWriter.(PrintWriter.java:116)
>   at java.io.PrintWriter.(PrintWriter.java:100)
>   at 
> org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
>   at 
> org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
>   at 
> org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:381)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:392)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
>   at 
> org.apache.flink.runtime.testingUtils.TestingCluster.requestCheckpoint(TestingCluster.scala:394)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116112#comment-16116112
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4473
  
One other thing:
Also need to add validation for these new configurations.
That should be placed in `KinesisConfigUtil.validateProducerConfigs`
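
As a rough illustration — a hedged sketch, not the actual `KinesisConfigUtil` 
code — such validation could verify that every numeric producer setting, when 
present, parses as a positive long:

{code:java}
import java.util.Properties;

// Hedged sketch of the requested validation; the real checks in
// KinesisConfigUtil.validateProducerConfigs may differ.
public class ProducerConfigValidation {

    public static void validateOptionalPositiveLong(Properties config, String key) {
        if (config.containsKey(key)) {
            try {
                long value = Long.parseLong(config.getProperty(key));
                if (value <= 0) {
                    throw new IllegalArgumentException(
                        "Invalid value for " + key + ": must be a positive long");
                }
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                    "Invalid value for " + key + ": must be a positive long", e);
            }
        }
    }

    public static void validateProducerConfigs(Properties config) {
        // Key strings follow the "aws.producer." convention visible in the PR diff.
        validateOptionalPositiveLong(config, "aws.producer.aggregationMaxCount");
        validateOptionalPositiveLong(config, "aws.producer.collectionMaxCount");
    }
}
{code}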


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> Well, according to [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more of them to make the most of KinesisProducer, and to 
> make it fault-tolerant (e.g. by increasing the timeout).
> I select a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params, in 
> order to cater to our needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116109#comment-16116109
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131578056
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,30 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
-   /** Maximum number of items to pack into an PutRecords request. **/
+   /** Maximum number of KPL user records to store in a single Kinesis 
Streams record (an aggregated record). */
+   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+
+   /** Maximum number of Kinesis Streams records to pack into an 
PutRecords request. */
public static final String COLLECTION_MAX_COUNT = 
"aws.producer.collectionMaxCount";
 
-   /** Maximum number of items to pack into an aggregated record. **/
-   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+   /** Maximum number of connections to open to the backend. HTTP requests 
are
+* sent in parallel over multiple connections */
--- End diff --

Style consistency: missing period at the end.


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> Well, according to [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more of them to make the most of KinesisProducer, and to 
> make it fault-tolerant (e.g. by increasing the timeout).
> I select a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params, in 
> order to cater to our needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4473: [FLINK-7367][kinesis connector] Parameterize more configs...

2017-08-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4473
  
One other thing:
Also need to add validation for these new configurations.
That should be placed in `KinesisConfigUtil.validateProducerConfigs`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4469: [hotfix][docs] Updated required Java version for s...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4469


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7210) Add TwoPhaseCommitSinkFunction (implementing exactly-once semantic in generic way)

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116184#comment-16116184
 ] 

ASF GitHub Bot commented on FLINK-7210:
---

Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4368
  
Thanks! :)


> Add TwoPhaseCommitSinkFunction (implementing exactly-once semantic in generic 
> way)
> --
>
> Key: FLINK-7210
> URL: https://issues.apache.org/jira/browse/FLINK-7210
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming Connectors
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
> Fix For: 1.4.0
>
>
> To implement an exactly-once sink there is a recurring pattern for doing it: 
> the two-phase commit algorithm. It is used both in `BucketingSink` and in the 
> `Pravega` sink, and it will be used in the `Kafka 0.11` connector. It would be 
> good to extract this common logic into one class, both to improve existing 
> implementations (for example, `Pravega`'s sink doesn't abort interrupted 
> transactions) and to make it easier for the users to implement their own 
> custom exactly-once sinks.
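
As a purely conceptual sketch of the pattern described above — not the API 
introduced by this PR; names and signatures are illustrative assumptions — a 
generic two-phase-commit sink boils down to a few transaction hooks plus two 
checkpoint callbacks:

{code:java}
// Conceptual skeleton only; this is NOT the actual TwoPhaseCommitSinkFunction API.
public abstract class TwoPhaseCommitSketch<IN, TXN> {

    private TXN currentTransaction;
    private TXN pendingTransaction;

    protected abstract TXN beginTransaction() throws Exception;
    protected abstract void write(TXN transaction, IN value) throws Exception;
    protected abstract void preCommit(TXN transaction) throws Exception;
    protected abstract void commit(TXN transaction);
    protected abstract void abort(TXN transaction);

    public void open() throws Exception {
        currentTransaction = beginTransaction();
    }

    public void invoke(IN value) throws Exception {
        write(currentTransaction, value);
    }

    /** Phase 1: on the checkpoint barrier, flush and remember the pending transaction. */
    public void snapshotState() throws Exception {
        preCommit(currentTransaction);
        pendingTransaction = currentTransaction; // would go into checkpointed state
        currentTransaction = beginTransaction();
    }

    /** Phase 2: once the checkpoint is globally complete, commit the pending transaction. */
    public void notifyCheckpointComplete() {
        if (pendingTransaction != null) {
            commit(pendingTransaction);
            pendingTransaction = null;
        }
    }

    /** On restore after a crash, uncommitted transactions are aborted or re-committed. */
    public void restore(TXN recovered, boolean checkpointCompleted) {
        if (checkpointCompleted) {
            commit(recovered);
        } else {
            abort(recovered);
        }
    }
}
{code}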



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7374) Type Erasure in GraphUtils$MapTo

2017-08-07 Thread Matthias Kricke (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Kricke closed FLINK-7374.
--
Resolution: Fixed

> Type Erasure in GraphUtils$MapTo
> 
>
> Key: FLINK-7374
> URL: https://issues.apache.org/jira/browse/FLINK-7374
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1
> Environment: Ubuntu 17.04
> java-8-oracle
>Reporter: Matthias Kricke
>Priority: Minor
>
> I got the following exception when executing ConnectedComponents or 
> GSAConnectedComponents algorithm:
> {code:java}
> org.apache.flink.api.common.functions.InvalidTypesException: Type of 
> TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' 
> could not be determined. This is most likely a type erasure problem. The type 
> extraction currently supports types with generic variables only in cases 
> where all variables in the return type can be deduced from the input type(s).
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
>   at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
>   at org.apache.flink.graph.Graph.run(Graph.java:1792)
> {code}
> I copied code that is used to test the ConnectedComponents algorithm from 
> flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
>  to try it on another path, because my own code which converts a Gradoop 
> Graph into a Gelly graph and executes the algorithm leads to the 
> aforementioned exception.
> However, even the test code gave me the exception.
> Any ideas?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7374) Type Erasure in GraphUtils$MapTo

2017-08-07 Thread Matthias Kricke (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116197#comment-16116197
 ] 

Matthias Kricke commented on FLINK-7374:


Not sure what helped in the end. I upgraded the version to 1.3.2 and updated 
some dependencies which might have used older 1.1.x Flink code. 
The problem is gone now. 

Sorry for any inconvenience. 
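
For cases where this extraction error is genuine rather than a version mix-up, 
the usual workaround is to hand Flink the type information explicitly. A 
hedged, generic sketch (not Gelly-specific; the pipeline below is made up):

{code:java}
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

// Hedged sketch of the usual workaround when Flink's type extraction cannot
// resolve a TypeVariable: supply the result type explicitly via returns().
public class TypeHintExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Long> doubled = env.fromElements(1L, 2L, 3L)
            .map(v -> v * 2)
            // the explicit type information defeats erasure of the return type
            .returns(TypeInformation.of(new TypeHint<Long>() {}));
        doubled.print();
    }
}
{code}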

> Type Erasure in GraphUtils$MapTo
> 
>
> Key: FLINK-7374
> URL: https://issues.apache.org/jira/browse/FLINK-7374
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 1.3.1
> Environment: Ubuntu 17.04
> java-8-oracle
>Reporter: Matthias Kricke
>Priority: Minor
>
> I got the following exception when executing ConnectedComponents or 
> GSAConnectedComponents algorithm:
> {code:java}
> org.apache.flink.api.common.functions.InvalidTypesException: Type of 
> TypeVariable 'O' in 'class org.apache.flink.graph.utils.GraphUtils$MapTo' 
> could not be determined. This is most likely a type erasure problem. The type 
> extraction currently supports types with generic variables only in cases 
> where all variables in the return type can be deduced from the input type(s).
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:915)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:836)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfo(TypeExtractor.java:802)
>   at org.apache.flink.graph.Graph.mapEdges(Graph.java:544)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:76)
>   at 
> org.apache.flink.graph.library.ConnectedComponents.run(ConnectedComponents.java:51)
>   at org.apache.flink.graph.Graph.run(Graph.java:1792)
> {code}
> I copied code that is used to test the ConnectedComponents algorithm from 
> flink/flink-libraries/flink-gelly/src/test/java/org/apache/flink/graph/library/ConnectedComponentsWithRandomisedEdgesITCase.java
>  to try it on another path, because my own code which converts a Gradoop 
> Graph into a Gelly graph and executes the algorithm leads to the 
> aforementioned exception.
> However, even the test code gave me the exception.
> Any ideas?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4469: [hotfix][docs] Updated required Java version for standalo...

2017-08-07 Thread dawidwys
Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4469
  
merging


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116099#comment-16116099
 ] 

ASF GitHub Bot commented on FLINK-6995:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4480
  
LGTM, I'll merge this to `release-1.2`.
It seems like we need to add this true -> false toggle somewhere in our 
release procedure.


> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>Assignee: mingleizhang
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. But each of those packages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.
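
For context: the documentation is built with Jekyll, so the toggle discussed in 
this ticket amounts to a per-branch site attribute. A hedged sketch — the 
attribute name is taken from the PR title, everything else is illustrative:

{code}
# docs/_config.yml on an old release branch (hedged sketch)
is_latest: false
{code}

which a layout can then use to gate a warning banner, e.g.:

{code}
{% if site.is_latest == false %}
  <div class="alert alert-warning">
    This documentation is for an out-of-date version of Apache Flink.
  </div>
{% endif %}
{code}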



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4480: [FLINK-6995] [docs] Enable is_latest attribute to false

2017-08-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4480
  
LGTM, I'll merge this to `release-1.2`.
It seems like we need to add this true -> false toggle somewhere in our 
release procedure.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4480: [FLINK-6995] [docs] Enable is_latest attribute to false

2017-08-07 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4480
  
If we release Flink 1.4, then we will also mark true -> false in the Flink 1.3 
release.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116106#comment-16116106
 ] 

ASF GitHub Bot commented on FLINK-6995:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/4480
  
If we release Flink 1.4, then we will also mark true -> false in the Flink 1.3 
release.


> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>Assignee: mingleizhang
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. But each of those packages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4456: [FLINK-7343][kafka] Increase Xmx for tests

2017-08-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4456
  
+1, LGTM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7343) Kafka010ProducerITCase instability

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116176#comment-16116176
 ] 

ASF GitHub Bot commented on FLINK-7343:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4456
  
+1, LGTM.


> Kafka010ProducerITCase instability
> --
>
> Key: FLINK-7343
> URL: https://issues.apache.org/jira/browse/FLINK-7343
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>  Labels: test-stability
>
> As reported by [~till.rohrmann] in 
> https://issues.apache.org/jira/browse/FLINK-6996 there seems to be a test 
> instability with 
> `Kafka010ProducerITCase>KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink`
> https://travis-ci.org/tillrohrmann/flink/jobs/258538641
> It is probably related to the log.flush intervals in Kafka, which delay 
> flushing the data to files and can potentially cause data losses when killing 
> Kafka brokers in the tests.
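
If the flush intervals are indeed the culprit, the embedded test brokers can be 
forced to flush eagerly. A hedged sketch of such broker overrides (the property 
names are standard Kafka broker configs; how the test wires them in is assumed):

{code}
# Force the broker to flush to disk (almost) immediately, so that killing a
# broker in the test cannot lose recently acknowledged records.
log.flush.interval.messages=1
log.flush.interval.ms=10
{code}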



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4473: [FLINK-7367][kinesis connector] Parameterize more ...

2017-08-07 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131578056
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/config/ProducerConfigConstants.java
 ---
@@ -24,10 +24,30 @@
  */
 public class ProducerConfigConstants extends AWSConfigConstants {
 
-   /** Maximum number of items to pack into an PutRecords request. **/
+   /** Maximum number of KPL user records to store in a single Kinesis 
Streams record (an aggregated record). */
+   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+
+   /** Maximum number of Kinesis Streams records to pack into an 
PutRecords request. */
public static final String COLLECTION_MAX_COUNT = 
"aws.producer.collectionMaxCount";
 
-   /** Maximum number of items to pack into an aggregated record. **/
-   public static final String AGGREGATION_MAX_COUNT = 
"aws.producer.aggregationMaxCount";
+   /** Maximum number of connections to open to the backend. HTTP requests 
are
+* sent in parallel over multiple connections */
--- End diff --

Style consistency: missing period at the end.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4473: [FLINK-7367][kinesis connector] Parameterize more ...

2017-08-07 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131577895
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
 ---
@@ -169,14 +169,27 @@ public void open(Configuration parameters) throws 
Exception {
 

producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));

producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
-   if 
(configProps.containsKey(ProducerConfigConstants.COLLECTION_MAX_COUNT)) {
-   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
-   }
-   if 
(configProps.containsKey(ProducerConfigConstants.AGGREGATION_MAX_COUNT)) {
-   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
-   }
+
+   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
+
+   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
+
+   
producerConfig.setMaxConnections(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.MAX_CONNECTIONS, 
producerConfig.getMaxConnections(), LOG));
+
+   producerConfig.setRateLimit(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.RATE_LIMIT, 
producerConfig.getRateLimit(), LOG));
--- End diff --

Starting from this line, the indentation is not consistent.
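
For context, the user-facing side of this diff is plain producer properties. A 
hedged sketch (values are illustrative; credentials configuration is omitted; 
the constants beyond the two pre-existing ones are those this PR adds):

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.ProducerConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

// Hedged sketch of how a user would pass the new tunables; they are plain
// properties that the producer reads in open().
public class KinesisProducerConfigExample {
    public static FlinkKinesisProducer<String> createProducer() {
        Properties config = new Properties();
        config.setProperty(ProducerConfigConstants.AWS_REGION, "us-east-1");
        config.setProperty(ProducerConfigConstants.AGGREGATION_MAX_COUNT, "4294967295");
        config.setProperty(ProducerConfigConstants.COLLECTION_MAX_COUNT, "1000");
        // The constants below are introduced by this PR; values are made up here.
        config.setProperty(ProducerConfigConstants.MAX_CONNECTIONS, "24");
        config.setProperty(ProducerConfigConstants.RATE_LIMIT, "100");

        FlinkKinesisProducer<String> producer =
            new FlinkKinesisProducer<>(new SimpleStringSchema(), config);
        producer.setDefaultStream("example-stream");
        producer.setDefaultPartition("0");
        return producer;
    }
}
{code}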


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7367) Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, MaxConnections, RequestTimeout, etc)

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116108#comment-16116108
 ] 

ASF GitHub Bot commented on FLINK-7367:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4473#discussion_r131577895
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisProducer.java
 ---
@@ -169,14 +169,27 @@ public void open(Configuration parameters) throws 
Exception {
 

producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));

producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
-   if 
(configProps.containsKey(ProducerConfigConstants.COLLECTION_MAX_COUNT)) {
-   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
-   }
-   if 
(configProps.containsKey(ProducerConfigConstants.AGGREGATION_MAX_COUNT)) {
-   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
-   
ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
-   }
+
+   
producerConfig.setAggregationMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.AGGREGATION_MAX_COUNT, 
producerConfig.getAggregationMaxCount(), LOG));
+
+   
producerConfig.setCollectionMaxCount(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.COLLECTION_MAX_COUNT, 
producerConfig.getCollectionMaxCount(), LOG));
+
+   
producerConfig.setMaxConnections(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.MAX_CONNECTIONS, 
producerConfig.getMaxConnections(), LOG));
+
+   producerConfig.setRateLimit(PropertiesUtil.getLong(configProps,
+   ProducerConfigConstants.RATE_LIMIT, 
producerConfig.getRateLimit(), LOG));
--- End diff --

Starting from this line, the indentation is not consistent.


> Parameterize more configs for FlinkKinesisProducer (RecordMaxBufferedTime, 
> MaxConnections, RequestTimeout, etc)
> ---
>
> Key: FLINK-7367
> URL: https://issues.apache.org/jira/browse/FLINK-7367
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.3.3
>
>
> Right now, FlinkKinesisProducer only exposes two configs for the underlying 
> KinesisProducer:
> - AGGREGATION_MAX_COUNT
> - COLLECTION_MAX_COUNT
> Well, according to [AWS 
> doc|http://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-config.html] 
> and [their sample on 
> github|https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/default_config.properties],
>  developers can set more of them to make the most of KinesisProducer, and to 
> make it fault-tolerant (e.g. by increasing the timeout).
> I select a few more configs that we need when using Flink with Kinesis:
> - MAX_CONNECTIONS
> - RATE_LIMIT
> - RECORD_MAX_BUFFERED_TIME
> - RECORD_TIME_TO_LIVE
> - REQUEST_TIMEOUT
> We need to parameterize FlinkKinesisProducer to pass in the above params, in 
> order to cater to our needs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6988) Add Apache Kafka 0.11 connector

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116171#comment-16116171
 ] 

ASF GitHub Bot commented on FLINK-6988:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4239
  
Now that the prerequisite PRs are merged, we can rebase this :)


> Add Apache Kafka 0.11 connector
> ---
>
> Key: FLINK-6988
> URL: https://issues.apache.org/jira/browse/FLINK-6988
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.3.1
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> Kafka 0.11 (it will be released very soon) adds support for transactions. 
> Thanks to that, Flink might be able to implement a Kafka sink supporting the 
> "exactly-once" semantic. The API changes and the whole transaction support are 
> described in 
> [KIP-98|https://cwiki.apache.org/confluence/display/KAFKA/KIP-98+-+Exactly+Once+Delivery+and+Transactional+Messaging].
> The goal is to mimic the implementation of the existing BucketingSink. The new 
> FlinkKafkaProducer011 would 
> * upon creation begin a transaction, store the transaction identifiers into 
> the state, and write all incoming data to an output Kafka topic using that 
> transaction
> * on the `snapshotState` call, flush the data and write into the state the 
> information that the current transaction is pending to be committed
> * on `notifyCheckpointComplete`, commit this pending transaction
> * in case of a crash between `snapshotState` and `notifyCheckpointComplete`, 
> either abort this pending transaction (if not every participant successfully 
> saved the snapshot) or restore and commit it. 
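
For reference, the lifecycle sketched above maps onto the transactional API of 
the plain Kafka 0.11 client. A hedged illustration with the raw 
{{KafkaProducer}} — this is not the FlinkKafkaProducer011 code itself; topic, 
ids and values are made up:

{code:java}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hedged sketch: the raw Kafka 0.11 transactional API that the lifecycle
// above would be built on.
public class KafkaTxnSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("transactional.id", "flink-sink-txn-1"); // stable id per sink instance
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();  // register the transactional id

        producer.beginTransaction();  // "upon creation begin a transaction"
        producer.send(new ProducerRecord<>("output-topic", "key", "value"));
        producer.flush();             // snapshotState: flush pending data

        // notifyCheckpointComplete: commit the pending transaction; on a
        // failure in between, abortTransaction() or a recovered commit instead.
        producer.commitTransaction();
        producer.close();
    }
}
{code}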



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4239: [FLINK-6988] flink-connector-kafka-0.11 with exactly-once...

2017-08-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4239
  
Now that the prerequisite PRs are merged, we can rebase this :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4368: [FLINK-7210] Introduce TwoPhaseCommitSinkFunction

2017-08-07 Thread pnowojski
Github user pnowojski commented on the issue:

https://github.com/apache/flink/pull/4368
  
Thanks! :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7365) excessive warning logs of attempt to override final parameter: fs.s3.buffer.dir

2017-08-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116264#comment-16116264
 ] 

Aljoscha Krettek commented on FLINK-7365:
-

I'm not sure we can change anything here, because that logging is coming from 
Hadoop code. The only ways I see of getting rid of those log entries are to 
actually change the offending parts in the config files or to change our code 
so that it doesn't instantiate the {{Configuration}} that often. The latter 
would be a bigger change, I think. 
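
A third option, if neither is feasible, would be to silence just that logger in 
Flink's log4j configuration. A hedged workaround sketch (logger name taken from 
the log lines quoted below):

{code}
# Suppress only the noisy Hadoop Configuration warnings; keep everything else.
log4j.logger.org.apache.hadoop.conf.Configuration=ERROR
{code}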

> excessive warning logs of attempt to override final parameter: 
> fs.s3.buffer.dir
> ---
>
> Key: FLINK-7365
> URL: https://issues.apache.org/jira/browse/FLINK-7365
> Project: Flink
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>
> I'm seeing hundreds of lines of the following log in my JobManager log file:
> {code:java}
> 2017-08-03 19:48:45,330 WARN  org.apache.hadoop.conf.Configuration
>   - /usr/lib/hadoop/etc/hadoop/core-site.xml:an attempt to 
> override final parameter: fs.s3.buffer.dir;  Ignoring.
> 2017-08-03 19:48:45,485 WARN  org.apache.hadoop.conf.Configuration
>   - /etc/hadoop/conf/core-site.xml:an attempt to override final 
> parameter: fs.s3.buffer.dir;  Ignoring.
> 2017-08-03 19:48:45,486 WARN  org.apache.hadoop.conf.Configuration
>   - /usr/lib/hadoop/etc/hadoop/core-site.xml:an attempt to 
> override final parameter: fs.s3.buffer.dir;  Ignoring.
> 2017-08-03 19:48:45,626 WARN  org.apache.hadoop.conf.Configuration
>   - /etc/hadoop/conf/core-site.xml:an attempt to override final 
> parameter: fs.s3.buffer.dir;  Ignoring
> ..
> {code}
> Info of my Flink cluster:
> - Running on EMR with emr-5.6.0
> - Using FSStateBackend, writing checkpointing data files to s3
> - Configured s3 with S3AFileSystem according to 
> https://ci.apache.org/projects/flink/flink-docs-release-1.4/setup/aws.html#set-s3-filesystem
> - AWS forbids resetting the 'fs.s3.buffer.dir' value (it has a <final> tag on 
> this property in core-site.xml), so I set 'fs.s3a.buffer.dir' as '/tmp'
> Here's my core-site.xml file:
> {code:xml}
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>fs.s3.buffer.dir</name>
>     <value>/mnt/s3,/mnt1/s3</value>
>     <final>true</final>
>   </property>
>   <property>
>     <name>fs.s3.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
>   <property>
>     <name>fs.s3n.impl</name>
>     <value>com.amazon.ws.emr.hadoop.fs.EmrFileSystem</value>
>   </property>
>   <property>
>     <name>ipc.client.connect.max.retries.on.timeouts</name>
>     <value>5</value>
>   </property>
>   <property>
>     <name>hadoop.security.key.default.bitlength</name>
>     <value>256</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.hadoop.groups</name>
>     <value>*</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/mnt/var/lib/hadoop/tmp</value>
>   </property>
>   <property>
>     <name>hadoop.proxyuser.hadoop.hosts</name>
>     <value>*</value>
>   </property>
>   <property>
>     <name>io.file.buffer.size</name>
>     <value>65536</value>
>   </property>
>   <property>
>     <name>fs.AbstractFileSystem.s3.impl</name>
>     <value>org.apache.hadoop.fs.s3.EMRFSDelegate</value>
>   </property>
>   <property>
>     <name>fs.s3a.buffer.dir</name>
>     <value>/tmp</value>
>   </property>
>   <property>
>     <name>fs.s3bfs.impl</name>
>     <value>org.apache.hadoop.fs.s3.S3FileSystem</value>
>   </property>
> </configuration>
> {code}
> This bug is about excessive logging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7284) Verify compile time and runtime version of Hadoop

2017-08-07 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116312#comment-16116312
 ] 

Till Rohrmann commented on FLINK-7284:
--

I think we have to use something similar to the {{git-commit-id-plugin}}, which 
allows us to store build information in the generated jar. Not sure if 
something like that exists for this purpose. I think we have to do some 
research there.

> Verify compile time and runtime version of Hadoop
> -
>
> Key: FLINK-7284
> URL: https://issues.apache.org/jira/browse/FLINK-7284
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>
> In order to detect a potential version conflict when running a Flink cluster, 
> built with Hadoop {{x}}, in an environment which provides Hadoop {{y}}, we 
> should automatically check if {{x == y}}. If {{x != y}}, we should terminate 
> with an appropriate error message. This behaviour should also be 
> disengageable if one wants to run Flink explicitly in a different Hadoop 
> environment.
> The check could be done at cluster start up using Hadoops {{VersionInfo}} and 
> the build time Hadoop version info. The latter has to be included in the 
> Flink binaries.
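
A hedged sketch of what such a startup check could look like — the resource 
name and property key are assumptions that the build would have to provide 
(e.g. via resource filtering or a plugin in the spirit of 
{{git-commit-id-plugin}}):

{code:java}
import java.io.InputStream;
import java.util.Properties;

import org.apache.hadoop.util.VersionInfo;

// Hedged sketch of the proposed compile-time vs. runtime Hadoop version check.
public class HadoopVersionCheck {

    public static void verifyHadoopVersion(boolean checkEnabled) throws Exception {
        if (!checkEnabled) {
            return; // the check must be possible to disable
        }
        Properties buildInfo = new Properties();
        try (InputStream in = HadoopVersionCheck.class
                .getResourceAsStream("/flink-build-info.properties")) {
            if (in == null) {
                return; // build info not packaged; nothing to compare against
            }
            buildInfo.load(in);
        }
        String compiledAgainst = buildInfo.getProperty("hadoop.version");
        String runtimeVersion = VersionInfo.getVersion(); // Hadoop's own version info

        if (!runtimeVersion.equals(compiledAgainst)) {
            throw new IllegalStateException("Flink was built against Hadoop "
                + compiledAgainst + " but is running against Hadoop "
                + runtimeVersion + ".");
        }
    }
}
{code}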



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7053) improve code quality in some tests

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116348#comment-16116348
 ] 

ASF GitHub Bot commented on FLINK-7053:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4234
  
Changes look good to me. Thanks for the work @NicoK. Merging this PR.


> improve code quality in some tests
> --
>
> Key: FLINK-7053
> URL: https://issues.apache.org/jira/browse/FLINK-7053
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, Network
>Affects Versions: 1.4.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>
> * {{BlobClientTest}} and {{BlobClientSslTest}} share a lot of common code
> * the received buffers there are currently not verified for being equal to 
> the expected one
> * {{TemporaryFolder}} should be used throughout blob store tests



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4234: [FLINK-7053][blob] improve code quality in some tests

2017-08-07 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/4234
  
Changes look good to me. Thanks for the work @NicoK. Merging this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-2759) TaskManagerProcessReapingTest fails on Windows

2017-08-07 Thread Nico Kruber (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-2759.
--
Resolution: Fixed

closing (fix version already set, issue is also quite old with no new content)

> TaskManagerProcessReapingTest fails on Windows
> --
>
> Key: FLINK-2759
> URL: https://issues.apache.org/jira/browse/FLINK-2759
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
> Environment: Windows 10, Cygwin, "mvn clean install"
>Reporter: Fabian Hueske
>
> The following test of {{TaskManagerProcessReapingTest}} fails when building 
> Flink with {{mvn clean install}} in Cygwin on Windows:
> {code}
> TaskManagerProcessReapingTest.testReapProcessOnFailure:133 TaskManager 
> process did not launch the TaskManager properly. Failed to look up 
> akka.tcp://flink@127.0.0.1:51269/user/taskmanager
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7013) Add shaded netty dependency

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116414#comment-16116414
 ] 

ASF GitHub Bot commented on FLINK-7013:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4452
  
merging.


> Add shaded netty dependency
> ---
>
> Key: FLINK-7013
> URL: https://issues.apache.org/jira/browse/FLINK-7013
> Project: Flink
>  Issue Type: Sub-task
>  Components: Network
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4368: [FLINK-7210] Introduce TwoPhaseCommitSinkFunction

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4368


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

