[jira] [Commented] (FLINK-8275) Flink YARN deployment with Kerberos enabled not working

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294621#comment-16294621
 ] 

ASF GitHub Bot commented on FLINK-8275:
---

GitHub user suez1224 opened a pull request:

https://github.com/apache/flink/pull/5172

[FLINK-8275] [Security] fix keytab local path in YarnTaskManagerRunner

## Brief change log

  - Set the local keytab path in YarnTaskManagerRunner to the correct local 
path.

## Verifying this change
  - Manually verified the change by running it on a production secure cluster 
with 1 JobManager and 2 TaskManagers. Both the JobManager and the TaskManagers 
start successfully, as verified through Kerberos metrics.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/suez1224/flink keytab-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5172.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5172


commit e40bef03f18d80f423150c8b94f875d9edb5ef24
Author: Shuyi Chen 
Date:   2017-12-18T07:50:07Z

fix keytab local path in YarnTaskManagerRunner




> Flink YARN deployment with Kerberos enabled not working 
> 
>
> Key: FLINK-8275
> URL: https://issues.apache.org/jira/browse/FLINK-8275
> Project: Flink
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 1.4.0
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Blocker
> Fix For: 1.5.0
>
>
> The local keytab path in YarnTaskManagerRunner is incorrectly set to the 
> ApplicationMaster's local keytab path.
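The intended behavior can be sketched as follows; the file name, class, and method below are assumptions for illustration, not the actual YarnTaskManagerRunner code. Each YARN container gets its own working directory with its own localized copy of the shipped keytab, so the TaskManager must resolve the keytab there rather than reuse the absolute path that was only valid inside the ApplicationMaster's container:

```java
import java.nio.file.Paths;

// Illustrative sketch only, not the actual Flink code: resolve the shipped
// keytab against the *current* container's working directory, where YARN
// localizes resources for this TaskManager.
public class KeytabPathSketch {

    // Assumed name under which the keytab is shipped to containers.
    static final String KEYTAB_FILE_NAME = "krb5.keytab";

    static String localKeytabPath(String containerWorkingDir) {
        return Paths.get(containerWorkingDir, KEYTAB_FILE_NAME).toString();
    }

    public static void main(String[] args) {
        // Each container process starts in its own YARN-assigned directory.
        String cwd = System.getProperty("user.dir");
        System.out.println(localKeytabPath(cwd));
    }
}
```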



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #5172: [FLINK-8275] [Security] fix keytab local path in Y...

2017-12-17 Thread suez1224
GitHub user suez1224 opened a pull request:

https://github.com/apache/flink/pull/5172

[FLINK-8275] [Security] fix keytab local path in YarnTaskManagerRunner

## Brief change log

  - Set the local keytab path in YarnTaskManagerRunner to the correct local 
path.

## Verifying this change
  - Manually verified the change by running it on a production secure cluster 
with 1 JobManager and 2 TaskManagers. Both the JobManager and the TaskManagers 
start successfully, as verified through Kerberos metrics.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/suez1224/flink keytab-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5172.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5172


commit e40bef03f18d80f423150c8b94f875d9edb5ef24
Author: Shuyi Chen 
Date:   2017-12-18T07:50:07Z

fix keytab local path in YarnTaskManagerRunner




---


[jira] [Created] (FLINK-8275) Flink YARN deployment with Kerberos enabled not working

2017-12-17 Thread Shuyi Chen (JIRA)
Shuyi Chen created FLINK-8275:
-

 Summary: Flink YARN deployment with Kerberos enabled not working 
 Key: FLINK-8275
 URL: https://issues.apache.org/jira/browse/FLINK-8275
 Project: Flink
  Issue Type: Bug
  Components: Security
Affects Versions: 1.4.0
Reporter: Shuyi Chen
Assignee: Shuyi Chen
Priority: Blocker
 Fix For: 1.5.0


The local keytab path in YarnTaskManagerRunner is incorrectly set to the 
ApplicationMaster's local keytab path.





[jira] [Resolved] (FLINK-8216) Unify test utils in flink-connector-kinesis

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-8216.

Resolution: Fixed

> Unify test utils in flink-connector-kinesis
> ---
>
> Key: FLINK-8216
> URL: https://issues.apache.org/jira/browse/FLINK-8216
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> Currently there are a few ways to get a Properties object with the required 
> fields (AWS access key, AWS secret key, AWS region) for KinesisConsumer and 
> KinesisProducer in unit tests. We should unify them and provide a single util 
> that builds that Properties object.
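Such a unified util can be sketched with plain `java.util.Properties`; the class name, method name, and property-key strings below are assumptions for illustration, not the actual connector API:

```java
import java.util.Properties;

// Hypothetical single test util replacing the scattered Properties builders.
public class KinesisTestUtilsSketch {

    // Assumed property keys, mirroring the AWSConfigConstants naming scheme.
    public static final String AWS_REGION = "aws.region";
    public static final String AWS_ACCESS_KEY_ID =
            "aws.credentials.provider.basic.accesskeyid";
    public static final String AWS_SECRET_ACCESS_KEY =
            "aws.credentials.provider.basic.secretkey";

    // One place for every consumer/producer unit test to obtain the
    // required configuration, instead of each test building its own.
    public static Properties getStandardProperties() {
        Properties config = new Properties();
        config.setProperty(AWS_REGION, "us-east-1");
        config.setProperty(AWS_ACCESS_KEY_ID, "accessKeyId");
        config.setProperty(AWS_SECRET_ACCESS_KEY, "secretKey");
        return config;
    }

    public static void main(String[] args) {
        System.out.println(getStandardProperties().getProperty(AWS_REGION));
    }
}
```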





[jira] [Commented] (FLINK-8216) Unify test utils in flink-connector-kinesis

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294603#comment-16294603
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8216:


Merged for 1.5.0: c57e56f183bd923e6947c70f533a2919c888565b

> Unify test utils in flink-connector-kinesis
> ---
>
> Key: FLINK-8216
> URL: https://issues.apache.org/jira/browse/FLINK-8216
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> Currently there are a few ways to get a Properties object with the required 
> fields (AWS access key, AWS secret key, AWS region) for KinesisConsumer and 
> KinesisProducer in unit tests. We should unify them and provide a single util 
> that builds that Properties object.





[jira] [Commented] (FLINK-8218) move flink-connector-kinesis examples from /src to /test

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294600#comment-16294600
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8218:


Merged for 1.5.0: a7465f04ff2afa3774d7e3f746faadf9a5500fed.

> move flink-connector-kinesis examples from /src to /test
> 
>
> Key: FLINK-8218
> URL: https://issues.apache.org/jira/browse/FLINK-8218
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> I just saw there are runnable examples in the src tree and traced them back to this 
> PR. I feel it's really bad to have runnable examples (with a Java main method) 
> in a lib jar that Flink distributes.
> Having chatted with [~tzulitai], we agreed that we should move those examples from 
> /src to /test.





[jira] [Resolved] (FLINK-8218) move flink-connector-kinesis examples from /src to /test

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-8218.

Resolution: Fixed

> move flink-connector-kinesis examples from /src to /test
> 
>
> Key: FLINK-8218
> URL: https://issues.apache.org/jira/browse/FLINK-8218
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> I just saw there are runnable examples in the src tree and traced them back to this 
> PR. I feel it's really bad to have runnable examples (with a Java main method) 
> in a lib jar that Flink distributes.
> Having chatted with [~tzulitai], we agreed that we should move those examples from 
> /src to /test.





[jira] [Commented] (FLINK-8216) Unify test utils in flink-connector-kinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294597#comment-16294597
 ] 

ASF GitHub Bot commented on FLINK-8216:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5130


> Unify test utils in flink-connector-kinesis
> ---
>
> Key: FLINK-8216
> URL: https://issues.apache.org/jira/browse/FLINK-8216
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> Currently there are a few ways to get a Properties object with the required 
> fields (AWS access key, AWS secret key, AWS region) for KinesisConsumer and 
> KinesisProducer in unit tests. We should unify them and provide a single util 
> that builds that Properties object.





[jira] [Commented] (FLINK-8249) Kinesis Producer didn't configure region

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294595#comment-16294595
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8249:


Merged:

1.5.0: 3397bd66abdc529a2d5940a09f9feee035fd0b90
1.4.1: dbbaa9a4a761a22e402f08745775ce357dddac06

> Kinesis Producer didn't configure region
> ---
>
> Key: FLINK-8249
> URL: https://issues.apache.org/jira/browse/FLINK-8249
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Joao Boto
> Fix For: 1.5.0, 1.4.1
>
>
> Hi,
> setting these configurations on FlinkKinesisProducer:
> {code}
> properties.put(AWSConfigConstants.AWS_REGION, "eu-west-1");
> properties.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "accessKey");
> properties.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "secretKey");
> {code}
> throws this error:
> {code}
> 17/12/13 10:50:11 ERROR LogInputStreamReader: [2017-12-13 10:50:11.290786] 
> [0x57ba][0x7f31cbce5780] [error] [main.cc:266] Could not configure 
> the region. It was not given in the config and we were unable to retrieve it 
> from EC2 metadata.
> 17/12/13 10:50:12 ERROR KinesisProducer: Error in child process
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.IrrecoverableError:
>  Child process exited with code 1
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:525)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:497)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:475)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700441] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: ERROR](CurlHttpClient)Curl 
> returned error code 28
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700521] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: 
> ERROR](EC2MetadataClient)Http request to Ec2MetadataService failed.
> {code}
> After some investigation, the region is never set, and I think this is the 
> reason:
> in this commit: 
> https://github.com/apache/flink/commit/9ed5d9a180dcd871e33bf8982434e3afd90ed295#diff-f3c6c35f3b045df8408b310f8f8a6bc7
> {code}
> - KinesisProducerConfiguration producerConfig = new 
> KinesisProducerConfiguration();
> - 
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> + // check and pass the configuration properties
> + KinesisProducerConfiguration producerConfig = 
> KinesisConfigUtil.validateProducerConfiguration(configProps);
>   
> producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
> {code}
> this line was removed
> {code}
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> {code}
> cc [~tzulitai], [~phoenixjiangnan]
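The fix amounts to carrying the region over explicitly after validation, since validating the properties alone does not transfer the region onto the producer configuration. A minimal sketch of the idea, using a stand-in config class and an assumed property key rather than the real KPL API:

```java
import java.util.Properties;

// Stand-in for KinesisProducerConfiguration, for illustration only.
class ProducerConfigSketch {
    private String region;
    public void setRegion(String region) { this.region = region; }
    public String getRegion() { return region; }
}

public class RegionFixSketch {

    // Assumed property key, mirroring AWSConfigConstants.AWS_REGION.
    static final String AWS_REGION = "aws.region";

    // Sketch of the fix: after building/validating the configuration,
    // explicitly set the region from the user-supplied properties.
    static ProducerConfigSketch buildConfig(Properties configProps) {
        ProducerConfigSketch producerConfig = new ProducerConfigSketch();
        String region = configProps.getProperty(AWS_REGION);
        if (region != null) {
            producerConfig.setRegion(region); // the line the refactor dropped
        }
        return producerConfig;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AWS_REGION, "eu-west-1");
        System.out.println(buildConfig(props).getRegion()); // eu-west-1
    }
}
```

Without that explicit call, the KPL child process falls back to EC2 metadata lookup, which fails outside EC2 with exactly the "Could not configure the region" error quoted above.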





[jira] [Resolved] (FLINK-8249) Kinesis Producer didn't configure region

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-8249.

Resolution: Fixed

> Kinesis Producer didn't configure region
> ---
>
> Key: FLINK-8249
> URL: https://issues.apache.org/jira/browse/FLINK-8249
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Joao Boto
> Fix For: 1.5.0, 1.4.1
>
>
> Hi,
> setting these configurations on FlinkKinesisProducer:
> {code}
> properties.put(AWSConfigConstants.AWS_REGION, "eu-west-1");
> properties.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "accessKey");
> properties.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "secretKey");
> {code}
> throws this error:
> {code}
> 17/12/13 10:50:11 ERROR LogInputStreamReader: [2017-12-13 10:50:11.290786] 
> [0x57ba][0x7f31cbce5780] [error] [main.cc:266] Could not configure 
> the region. It was not given in the config and we were unable to retrieve it 
> from EC2 metadata.
> 17/12/13 10:50:12 ERROR KinesisProducer: Error in child process
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.IrrecoverableError:
>  Child process exited with code 1
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:525)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:497)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:475)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700441] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: ERROR](CurlHttpClient)Curl 
> returned error code 28
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700521] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: 
> ERROR](EC2MetadataClient)Http request to Ec2MetadataService failed.
> {code}
> After some investigation, the region is never set, and I think this is the 
> reason:
> in this commit: 
> https://github.com/apache/flink/commit/9ed5d9a180dcd871e33bf8982434e3afd90ed295#diff-f3c6c35f3b045df8408b310f8f8a6bc7
> {code}
> - KinesisProducerConfiguration producerConfig = new 
> KinesisProducerConfiguration();
> - 
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> + // check and pass the configuration properties
> + KinesisProducerConfiguration producerConfig = 
> KinesisConfigUtil.validateProducerConfiguration(configProps);
>   
> producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
> {code}
> this line was removed
> {code}
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> {code}
> cc [~tzulitai], [~phoenixjiangnan]





[jira] [Commented] (FLINK-8218) move flink-connector-kinesis examples from /src to /test

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294598#comment-16294598
 ] 

ASF GitHub Bot commented on FLINK-8218:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5131


> move flink-connector-kinesis examples from /src to /test
> 
>
> Key: FLINK-8218
> URL: https://issues.apache.org/jira/browse/FLINK-8218
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> I just saw there are runnable examples in the src tree and traced them back to this 
> PR. I feel it's really bad to have runnable examples (with a Java main method) 
> in a lib jar that Flink distributes.
> Having chatted with [~tzulitai], we agreed that we should move those examples from 
> /src to /test.





[jira] [Commented] (FLINK-8249) Kinesis Producer didn't configure region

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294596#comment-16294596
 ] 

ASF GitHub Bot commented on FLINK-8249:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5160


> Kinesis Producer didn't configure region
> ---
>
> Key: FLINK-8249
> URL: https://issues.apache.org/jira/browse/FLINK-8249
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Joao Boto
> Fix For: 1.5.0, 1.4.1
>
>
> Hi,
> setting these configurations on FlinkKinesisProducer:
> {code}
> properties.put(AWSConfigConstants.AWS_REGION, "eu-west-1");
> properties.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "accessKey");
> properties.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "secretKey");
> {code}
> throws this error:
> {code}
> 17/12/13 10:50:11 ERROR LogInputStreamReader: [2017-12-13 10:50:11.290786] 
> [0x57ba][0x7f31cbce5780] [error] [main.cc:266] Could not configure 
> the region. It was not given in the config and we were unable to retrieve it 
> from EC2 metadata.
> 17/12/13 10:50:12 ERROR KinesisProducer: Error in child process
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.IrrecoverableError:
>  Child process exited with code 1
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:525)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:497)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:475)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700441] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: ERROR](CurlHttpClient)Curl 
> returned error code 28
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700521] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: 
> ERROR](EC2MetadataClient)Http request to Ec2MetadataService failed.
> {code}
> After some investigation, the region is never set, and I think this is the 
> reason:
> in this commit: 
> https://github.com/apache/flink/commit/9ed5d9a180dcd871e33bf8982434e3afd90ed295#diff-f3c6c35f3b045df8408b310f8f8a6bc7
> {code}
> - KinesisProducerConfiguration producerConfig = new 
> KinesisProducerConfiguration();
> - 
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> + // check and pass the configuration properties
> + KinesisProducerConfiguration producerConfig = 
> KinesisConfigUtil.validateProducerConfiguration(configProps);
>   
> producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
> {code}
> this line was removed
> {code}
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> {code}
> cc [~tzulitai], [~phoenixjiangnan]





[GitHub] flink pull request #5164: [hotfix][javadoc] fix typo in StreamExecutionEnvir...

2017-12-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5164


---


[GitHub] flink pull request #5130: [FLINK-8216] [Kinesis connector] Unify test utils ...

2017-12-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5130


---


[GitHub] flink pull request #5135: [hotfix] [doc] Fix typo in TaskManager and Environ...

2017-12-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5135


---


[GitHub] flink pull request #5131: [FLINK-8218] [Kinesis connector] move flink-connec...

2017-12-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5131


---


[GitHub] flink pull request #5160: [FLINK-8249] [KinesisConnector] [hotfix] aws regio...

2017-12-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5160


---


[jira] [Commented] (FLINK-8199) Annotation for Elasticsearch connector

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294564#comment-16294564
 ] 

ASF GitHub Bot commented on FLINK-8199:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5124
  
Thanks @tzulitai . I will change all ```@Public``` to ```@PublicEvolving``` 
soon enough.


> Annotation for Elasticsearch connector
> --
>
> Key: FLINK-8199
> URL: https://issues.apache.org/jira/browse/FLINK-8199
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector
>Reporter: mingleizhang
>Assignee: mingleizhang
> Fix For: 1.5.0
>
>






[GitHub] flink issue #5124: [FLINK-8199] [elasticsearch connector] Properly annotate ...

2017-12-17 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5124
  
Thanks @tzulitai . I will change all ```@Public``` to ```@PublicEvolving``` 
soon enough.


---


[jira] [Commented] (FLINK-8270) TaskManagers do not use correct local path for shipped Keytab files in Yarn deployment modes

2017-12-17 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294553#comment-16294553
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-8270:


[~dongxiao.yang] of course, contributions are always welcome.
Please ping [~eronwright] and me for review once you open the PR.

> TaskManagers do not use correct local path for shipped Keytab files in Yarn 
> deployment modes
> 
>
> Key: FLINK-8270
> URL: https://issues.apache.org/jira/browse/FLINK-8270
> Project: Flink
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.5.0, 1.4.1
>
>
> Reported on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-4-0-keytab-is-unreadable-td17292.html
> This is a "recurrence" of FLINK-5580. The TMs in Yarn deployment modes are 
> again not using the correct local paths for shipped Keytab files.
> The breakage was introduced accidentally by this change: 
> https://github.com/apache/flink/commit/7f1c23317453859ce3b136b6e13f698d3fee34a1#diff-a81afdf5ce0872836ac6fadb603d483e.
> Things to consider:
> 1) The above accidental breaking change was actually targeting a minor 
> refactor on the "integration test scenario" code block in 
> {{YarnTaskManagerRunner}}. It would be best if we can remove that test case 
> code block from the main code.
> 2) Unit test coverage is apparently not enough. As this incidence shows, any 
> slight changes can cause this issue to easily resurface again.





[jira] [Commented] (FLINK-8271) upgrade from deprecated classes to AmazonKinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294552#comment-16294552
 ] 

ASF GitHub Bot commented on FLINK-8271:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5171
  
A good point about testing this in cluster mode. I don't have a 1.5 cluster, so 
I only tested in local mode. The upgrade is unnecessary; I'll revert it.

Which commit message in PR submission are you referring to? The "What is 
the purpose of the change ..." part, or the one for each commit?


> upgrade from deprecated classes to AmazonKinesis
> 
>
> Key: FLINK-8271
> URL: https://issues.apache.org/jira/browse/FLINK-8271
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0, 1.4.1
>
>






[GitHub] flink issue #5171: [FLINK-8271][Kinesis connector] upgrade from deprecated c...

2017-12-17 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5171
  
A good point about testing this in cluster mode. I don't have a 1.5 cluster, so 
I only tested in local mode. The upgrade is unnecessary; I'll revert it.

Which commit message in PR submission are you referring to? The "What is 
the purpose of the change ..." part, or the one for each commit?


---


[jira] [Commented] (FLINK-8217) Properly annotate APIs of flink-connector-kinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294541#comment-16294541
 ] 

ASF GitHub Bot commented on FLINK-8217:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5138
  
A good point from Gordon. This PR's main purpose is actually reminding 
users which class to use and which not. `PublicEvolving` should be good enough.

Will update the PR


> Properly annotate APIs of flink-connector-kinesis
> -
>
> Key: FLINK-8217
> URL: https://issues.apache.org/jira/browse/FLINK-8217
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>






[GitHub] flink issue #5138: [FLINK-8217] [Kinesis connector] Properly annotate APIs o...

2017-12-17 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5138
  
A good point from Gordon. This PR's main purpose is actually reminding 
users which class to use and which not. `PublicEvolving` should be good enough.

Will update the PR


---


[GitHub] flink issue #5135: [hotfix] [doc] Fix typo in TaskManager and EnvironmentInf...

2017-12-17 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5135
  
Thanks! LGTM.
Merging this ...


---


[jira] [Commented] (FLINK-8199) Annotation for Elasticsearch connector

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294520#comment-16294520
 ] 

ASF GitHub Bot commented on FLINK-8199:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5124
  
The PR has correct identification of what should be public and what is 
internal, IMO.

However, as I also explained in #5138, I think we should not go with the 
`@Public` annotation now, and only `@PublicEvolving`. The intention of 
FLINK-8199 focuses only on separating internal and public APIs. Whether or not 
we want to guarantee supporting a public API can be discussed separately.

@zhangminglei can you change all `@Public` annotations in the PR to 
`@PublicEvolving`?


> Annotation for Elasticsearch connector
> --
>
> Key: FLINK-8199
> URL: https://issues.apache.org/jira/browse/FLINK-8199
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector
>Reporter: mingleizhang
>Assignee: mingleizhang
> Fix For: 1.5.0
>
>






[GitHub] flink issue #5124: [FLINK-8199] [elasticsearch connector] Properly annotate ...

2017-12-17 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5124
  
The PR has correct identification of what should be public and what is 
internal, IMO.

However, as I also explained in #5138, I think we should not go with the 
`@Public` annotation now, and only `@PublicEvolving`. The intention of 
FLINK-8199 focuses only on separating internal and public APIs. Whether or not 
we want to guarantee supporting a public API can be discussed separately.

@zhangminglei can you change all `@Public` annotations in the PR to 
`@PublicEvolving`?
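The internal/public separation discussed here can be illustrated with stand-in annotations modeled after Flink's stability markers; the annotation and class definitions below are illustrative sketches, not Flink's actual `org.apache.flink.annotation` types or connector classes:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in stability markers, modeled loosely after Flink's annotations.
@Documented @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME)
@interface PublicEvolving {}

@Documented @Target(ElementType.TYPE) @Retention(RetentionPolicy.RUNTIME)
@interface Internal {}

// User-facing entry point: intended for use, but the API may still evolve.
@PublicEvolving
class ElasticsearchSinkSketch {}

// Implementation detail: no compatibility guarantees, do not depend on it.
@Internal
class BulkProcessorIndexerSketch {}

public class AnnotationDemo {
    public static void main(String[] args) {
        // RUNTIME retention makes the marker visible via reflection.
        System.out.println(
            ElasticsearchSinkSketch.class.isAnnotationPresent(PublicEvolving.class));
    }
}
```

Marking connector entry points `@PublicEvolving` rather than `@Public` keeps the door open for API changes while still signaling which classes are safe for users, which is exactly the compromise proposed in the comment above.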


---


[jira] [Commented] (FLINK-8216) Unify test utils in flink-connector-kinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294517#comment-16294517
 ] 

ASF GitHub Bot commented on FLINK-8216:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5130
  
Merging this ..


> Unify test utils in flink-connector-kinesis
> ---
>
> Key: FLINK-8216
> URL: https://issues.apache.org/jira/browse/FLINK-8216
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> Currently there are a few ways to get a Properties object with the required 
> fields (AWS access key, AWS secret key, AWS region) for KinesisConsumer and 
> KinesisProducer in unit tests. We should unify them and provide a single util 
> that builds that Properties object.





[GitHub] flink issue #5130: [FLINK-8216] [Kinesis connector] Unify test utils in flin...

2017-12-17 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5130
  
Merging this ..


---


[jira] [Commented] (FLINK-8218) move flink-connector-kinesis examples from /src to /test

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294515#comment-16294515
 ] 

ASF GitHub Bot commented on FLINK-8218:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5131
  
Thanks, +1 LGTM.
Merging this ...


> move flink-connector-kinesis examples from /src to /test
> 
>
> Key: FLINK-8218
> URL: https://issues.apache.org/jira/browse/FLINK-8218
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.5.0
>
>
> I just saw there are runnable examples in the src tree and traced them back 
> to this PR. I feel it's really bad to have runnable examples (with a Java 
> main method) in a lib jar that Flink distributes.
> I chatted with [~tzulitai], and we agreed that we should move those examples 
> from /src to /test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8217) Properly annotate APIs of flink-connector-kinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294513#comment-16294513
 ] 

ASF GitHub Bot commented on FLINK-8217:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5138
  
My intention in opening FLINK-8217 was mainly to clearly identify what is 
`@Internal` and should not be used. I can agree that, for now, we at least 
separate APIs into either `@Internal` or `@PublicEvolving`. Whether or not we 
want to elevate certain APIs to `@Public` can probably be a separate issue.




> Properly annotate APIs of flink-connector-kinesis
> -
>
> Key: FLINK-8217
> URL: https://issues.apache.org/jira/browse/FLINK-8217
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kinesis Connector
>Affects Versions: 1.5.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (FLINK-8249) Kinesis Producer didnt configure region

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294507#comment-16294507
 ] 

ASF GitHub Bot commented on FLINK-8249:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5160#discussion_r157398835
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/KinesisConfigUtil.java
 ---
@@ -191,6 +191,7 @@ public static KinesisProducerConfiguration 
getValidatedProducerConfiguration(Pro
}
 
KinesisProducerConfiguration kpc = 
KinesisProducerConfiguration.fromProperties(config);
+   
kpc.setRegion(config.getProperty(AWSConfigConstants.AWS_REGION));
--- End diff --

I don't think so; it is already safeguarded by 
https://github.com/apache/flink/pull/5160/files/72be9da2048d4e2cb6c855ce16c9652556e1f762#diff-16d72c003634fdee6e6efa5ece15c385R186,
which checks that the region property is not null.




> Kinesis Producer didnt configure region
> ---
>
> Key: FLINK-8249
> URL: https://issues.apache.org/jira/browse/FLINK-8249
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Joao Boto
> Fix For: 1.5.0, 1.4.1
>
>
> Hi,
> setting this configurations to FlinkKinesisProducer:
> {code}
> properties.put(AWSConfigConstants.AWS_REGION, "eu-west-1");
> properties.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "accessKey");
> properties.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "secretKey");
> {code}
> is throwing this error:
> {code}
> 17/12/13 10:50:11 ERROR LogInputStreamReader: [2017-12-13 10:50:11.290786] 
> [0x57ba][0x7f31cbce5780] [error] [main.cc:266] Could not configure 
> the region. It was not given in the config and we were unable to retrieve it 
> from EC2 metadata.
> 17/12/13 10:50:12 ERROR KinesisProducer: Error in child process
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.IrrecoverableError:
>  Child process exited with code 1
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:525)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.fatalError(Daemon.java:497)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.startChildProcess(Daemon.java:475)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon.access$100(Daemon.java:63)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$1.run(Daemon.java:133)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700441] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: ERROR](CurlHttpClient)Curl 
> returned error code 28
> 17/12/13 10:50:15 ERROR LogInputStreamReader: [2017-12-13 10:50:15.700521] 
> [0x57c4][0x7ffb152b5780] [error] [AWS Log: 
> ERROR](EC2MetadataClient)Http request to Ec2MetadataService failed.
> {code}
> After some investigation: the region is never set, and I think this is the 
> reason:
> in this commit: 
> https://github.com/apache/flink/commit/9ed5d9a180dcd871e33bf8982434e3afd90ed295#diff-f3c6c35f3b045df8408b310f8f8a6bc7
> {code}
> - KinesisProducerConfiguration producerConfig = new 
> KinesisProducerConfiguration();
> - 
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> + // check and pass the configuration properties
> + KinesisProducerConfiguration producerConfig = 
> KinesisConfigUtil.validateProducerConfiguration(configProps);
>   
> producerConfig.setCredentialsProvider(AWSUtil.getCredentialsProvider(configProps));
> {code}
> this line was removed
> {code}
> producerConfig.setRegion(configProps.getProperty(ProducerConfigConstants.AWS_REGION));
> {code}
> cc [~tzulitai], [~phoenixjiangnan]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
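The fix under discussion boils down to re-applying the region from the validated properties after building the KPL configuration. Below is a minimal, self-contained sketch of the safeguard; `validatedRegion` is a hypothetical stand-in for the check in `KinesisConfigUtil`, not Flink's actual code.

```java
import java.util.Properties;

public class RegionCheck {
    // Hypothetical stand-in for the validation step: the producer config
    // must carry an explicit region, since the KPL cannot always fall back
    // to EC2 metadata (the "Could not configure the region" error above).
    static String validatedRegion(Properties props) {
        String region = props.getProperty("aws.region");
        if (region == null) {
            throw new IllegalArgumentException("aws.region must be set");
        }
        return region;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("aws.region", "eu-west-1");
        // The validated region would then be passed on to
        // KinesisProducerConfiguration.setRegion(...), as in the PR.
        System.out.println(validatedRegion(props));
    }
}
```

With the region missing from the properties, `validatedRegion` throws instead of silently producing a config without a region.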


[jira] [Commented] (FLINK-8271) upgrade from deprecated classes to AmazonKinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294502#comment-16294502
 ] 

ASF GitHub Bot commented on FLINK-8271:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5171
  
Please also keep in mind that it would be a good contribution practice to 
have meaningful commit messages in PR submissions.
Ideally, they should be as you would like them to be when they are merged.


> upgrade from deprecated classes to AmazonKinesis
> 
>
> Key: FLINK-8271
> URL: https://issues.apache.org/jira/browse/FLINK-8271
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0, 1.4.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #5164: [hotfix][javadoc] fix typo in StreamExecutionEnvironment ...

2017-12-17 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/5164
  
+1, merging this ..


---


[jira] [Commented] (FLINK-8271) upgrade from deprecated classes to AmazonKinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294499#comment-16294499
 ] 

ASF GitHub Bot commented on FLINK-8271:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5171#discussion_r157396375
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/AWSUtil.java
 ---
@@ -30,37 +30,43 @@
 import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;
 import com.amazonaws.auth.SystemPropertiesCredentialsProvider;
 import com.amazonaws.auth.profile.ProfileCredentialsProvider;
-import com.amazonaws.regions.Region;
+import com.amazonaws.client.builder.AwsClientBuilder;
 import com.amazonaws.regions.Regions;
-import com.amazonaws.services.kinesis.AmazonKinesisClient;
+import com.amazonaws.services.kinesis.AmazonKinesis;
+import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
 
 import java.util.Properties;
 
 /**
  * Some utilities specific to Amazon Web Service.
  */
 public class AWSUtil {
+   private static final String USER_AGENT_FORMAT = "Apache Flink %s (%s) 
Kinesis Connector";
--- End diff --

Should we add a comment on what the user agent string is used for?


> upgrade from deprecated classes to AmazonKinesis
> 
>
> Key: FLINK-8271
> URL: https://issues.apache.org/jira/browse/FLINK-8271
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0, 1.4.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8271) upgrade from deprecated classes to AmazonKinesis

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294498#comment-16294498
 ] 

ASF GitHub Bot commented on FLINK-8271:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/5171#discussion_r157396637
  
--- Diff: flink-connectors/flink-connector-kinesis/pom.xml ---
@@ -33,7 +33,7 @@ under the License.
flink-connector-kinesis_${scala.binary.version}
flink-connector-kinesis

-   1.11.171
+   1.11.250
--- End diff --

AFAIK, the exact `aws.sdk.version` was picked to match the KCL / KPL 
dependency versions, i.e., the AWS Java SDK version used by both the KCL and 
the KPL must match `aws.sdk.version`; otherwise there may be conflicts.

Was `1.11.250` confirmed to also be compatible with the current KCL / KPL 
versions?


> upgrade from deprecated classes to AmazonKinesis
> 
>
> Key: FLINK-8271
> URL: https://issues.apache.org/jira/browse/FLINK-8271
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
> Fix For: 1.5.0, 1.4.1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8270) TaskManagers do not use correct local path for shipped Keytab files in Yarn deployment modes

2017-12-17 Thread dongxiao.yang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294489#comment-16294489
 ] 

dongxiao.yang commented on FLINK-8270:
--

[~tzulitai]  
Hi Gordon, I'm the reporter of the mail 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-4-0-keytab-is-unreadable-td17292.html
 . May I do some work to help solve this issue?

> TaskManagers do not use correct local path for shipped Keytab files in Yarn 
> deployment modes
> 
>
> Key: FLINK-8270
> URL: https://issues.apache.org/jira/browse/FLINK-8270
> Project: Flink
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.5.0, 1.4.1
>
>
> Reported on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-4-0-keytab-is-unreadable-td17292.html
> This is a "recurrence" of FLINK-5580. The TMs in Yarn deployment modes are 
> again not using the correct local paths for shipped Keytab files.
> The cause was accidental due to this change: 
> https://github.com/apache/flink/commit/7f1c23317453859ce3b136b6e13f698d3fee34a1#diff-a81afdf5ce0872836ac6fadb603d483e.
> Things to consider:
> 1) The above accidental breaking change was actually targeting a minor 
> refactor on the "integration test scenario" code block in 
> {{YarnTaskManagerRunner}}. It would be best if we can remove that test case 
> code block from the main code.
> 2) Unit test coverage is apparently not enough. As this incidence shows, any 
> slight changes can cause this issue to easily resurface again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7775) Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs

2017-12-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7775:
--
Description: 
{code}
  public int getNumberOfCachedJobs() {
return jobRefCounters.size();
  }
{code}

The method of PermanentBlobCache is not used.
We should remove it.

  was:
{code}
  public int getNumberOfCachedJobs() {
return jobRefCounters.size();
  }
{code}

The method is not used.
We should remove it.


> Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs
> ---
>
> Key: FLINK-7775
> URL: https://issues.apache.org/jira/browse/FLINK-7775
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   public int getNumberOfCachedJobs() {
> return jobRefCounters.size();
>   }
> {code}
> The method of PermanentBlobCache is not used.
> We should remove it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7917) The return of taskInformationOrBlobKey should be placed inside synchronized in ExecutionJobVertex

2017-12-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7917:
--
Description: 
Currently in ExecutionJobVertex#getTaskInformationOrBlobKey:

{code}
}

return taskInformationOrBlobKey;
{code}
The return should be placed inside synchronized block.

  was:
Currently:

{code}
}

return taskInformationOrBlobKey;
{code}
The return should be placed inside synchronized block.


> The return of taskInformationOrBlobKey should be placed inside synchronized 
> in ExecutionJobVertex
> -
>
> Key: FLINK-7917
> URL: https://issues.apache.org/jira/browse/FLINK-7917
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> Currently in ExecutionJobVertex#getTaskInformationOrBlobKey:
> {code}
> }
> return taskInformationOrBlobKey;
> {code}
> The return should be placed inside synchronized block.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
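The point of the issue can be illustrated with a minimal lazy-initialization sketch; `LazyHolder` is a hypothetical class, not the actual `ExecutionJobVertex`. Returning inside the synchronized block ensures the field is read under the same lock that guards its initialization.

```java
public class LazyHolder {
    private Object taskInformationOrBlobKey;

    // Sketch of the corrected pattern: the read in the return statement
    // happens while still holding the lock, so it sees the value written
    // by whichever thread performed the lazy initialization.
    public Object getTaskInformationOrBlobKey() {
        synchronized (this) {
            if (taskInformationOrBlobKey == null) {
                taskInformationOrBlobKey = "initialized";
            }
            return taskInformationOrBlobKey;
        }
    }

    public static void main(String[] args) {
        System.out.println(new LazyHolder().getTaskInformationOrBlobKey());
    }
}
```

If the return were moved after the synchronized block, the field read would happen without the lock's visibility guarantees.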


[jira] [Created] (FLINK-8274) Fix Java 64K method compiling limitation for CommonCalc

2017-12-17 Thread Ruidong Li (JIRA)
Ruidong Li created FLINK-8274:
-

 Summary: Fix Java 64K method compiling limitation for CommonCalc
 Key: FLINK-8274
 URL: https://issues.apache.org/jira/browse/FLINK-8274
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Reporter: Ruidong Li
Assignee: Ruidong Li


For complex SQL queries, the generated code for {code}DataStreamCalc{code} and 
{code}DataSetCalc{code} may exceed Java's 64KB method length limitation.
 
This issue will split long methods into several sub-method calls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
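The splitting strategy can be sketched with a toy code generator; this is illustrative only and not Flink's actual CommonCalc codegen. The generated statements are chunked into sub-methods, and a driver method calls them in order, so no single method body approaches the 64KB limit.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitGenerator {
    // Toy illustration of the splitting idea: emit the generated statements
    // as ceil(N / chunkSize) sub-methods plus one driver method, so no
    // single method body grows past the JVM's 64KB bytecode limit.
    static String generate(List<String> statements, int chunkSize) {
        StringBuilder out = new StringBuilder();
        List<String> calls = new ArrayList<>();
        for (int i = 0; i < statements.size(); i += chunkSize) {
            String name = "calc" + (i / chunkSize);
            calls.add(name + "();");
            out.append("void ").append(name).append("() { ");
            int end = Math.min(i + chunkSize, statements.size());
            for (String s : statements.subList(i, end)) {
                out.append(s).append(' ');
            }
            out.append("}\n");
        }
        // Driver method invokes the chunks in their original order.
        out.append("void calc() { ");
        calls.forEach(c -> out.append(c).append(' '));
        out.append("}");
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> stmts = List.of("a = 1;", "b = 2;", "c = 3;");
        System.out.println(generate(stmts, 2));
    }
}
```

The real code generator must also thread local variables and the input record through the sub-methods, which the toy above glosses over.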


[jira] [Commented] (FLINK-8270) TaskManagers do not use correct local path for shipped Keytab files in Yarn deployment modes

2017-12-17 Thread Eron Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294306#comment-16294306
 ] 

Eron Wright  commented on FLINK-8270:
-

Please add me as a reviewer, thanks.

+1 for eliminating the special-case code.  

> TaskManagers do not use correct local path for shipped Keytab files in Yarn 
> deployment modes
> 
>
> Key: FLINK-8270
> URL: https://issues.apache.org/jira/browse/FLINK-8270
> Project: Flink
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 1.4.0
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.5.0, 1.4.1
>
>
> Reported on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-4-0-keytab-is-unreadable-td17292.html
> This is a "recurrence" of FLINK-5580. The TMs in Yarn deployment modes are 
> again not using the correct local paths for shipped Keytab files.
> The cause was accidental due to this change: 
> https://github.com/apache/flink/commit/7f1c23317453859ce3b136b6e13f698d3fee34a1#diff-a81afdf5ce0872836ac6fadb603d483e.
> Things to consider:
> 1) The above accidental breaking change was actually targeting a minor 
> refactor on the "integration test scenario" code block in 
> {{YarnTaskManagerRunner}}. It would be best if we can remove that test case 
> code block from the main code.
> 2) Unit test coverage is apparently not enough. As this incidence shows, any 
> slight changes can cause this issue to easily resurface again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8156) Bump commons-beanutils version to 1.9.3

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294229#comment-16294229
 ] 

ASF GitHub Bot commented on FLINK-8156:
---

Github user yew1eb commented on the issue:

https://github.com/apache/flink/pull/5113
  
Thanks @StephanEwen for the suggestion. I will update the PR accordingly.


> Bump commons-beanutils version to 1.9.3
> ---
>
> Key: FLINK-8156
> URL: https://issues.apache.org/jira/browse/FLINK-8156
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
> Fix For: 1.5.0
>
>
> Commons-beanutils v1.8.0 dependency is not security compliant. See 
> [CVE-2014-0114|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0114]:
> {code:java}
> Apache Commons BeanUtils, as distributed in lib/commons-beanutils-1.8.0.jar 
> in Apache Struts 1.x through 1.3.10 and in other products requiring 
> commons-beanutils through 1.9.2, does not suppress the class property, which 
> allows remote attackers to "manipulate" the ClassLoader and execute arbitrary 
> code via the class parameter, as demonstrated by the passing of this 
> parameter to the getClass method of the ActionForm object in Struts 1.
> {code}
> Note that current version commons-beanutils 1.9.2 in turn has a CVE in its 
> dependency commons-collections (CVE-2015-6420, see BEANUTILS-488), which is 
> fixed in 1.9.3.
> We should upgrade {{commons-beanutils}} from 1.8.3 to 1.9.3 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8273) Configs shown on JobManager WebUI maybe not the real runtime value

2017-12-17 Thread Lynch Lee (JIRA)
Lynch Lee created FLINK-8273:


 Summary: Configs shown on JobManager WebUI maybe not the real 
runtime value 
 Key: FLINK-8273
 URL: https://issues.apache.org/jira/browse/FLINK-8273
 Project: Flink
  Issue Type: Bug
  Components: Configuration, YARN
Affects Versions: 1.3.2
Reporter: Lynch Lee


Some configurations are given like this:
cmd: /data/apps/opt/flink/bin/yarn-session.sh -n 5 -jm 8192 -tm 8192 -s 8 -d
file (flink-conf.yaml):
taskmanager.numberOfTaskSlots: 65

At runtime each TaskManager ends up with 8 slots, but the config 
taskmanager.numberOfTaskSlots shown on the JobManager WebUI is 65.

I think we should show the real runtime value on the WebUI...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8227) Optimize the performance of SharedBufferSerializer

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294174#comment-16294174
 ] 

ASF GitHub Bot commented on FLINK-8227:
---

Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/5142
  
I think it is a very good and needed change.

As for @StephanEwen questions:

Ad 1. I see no problem with this `entryId` being a primitive `int`. 
Ad 2. I think this field should be transient; it is used only during the 
serialization process to reflect the order in which we serialize the 
SharedBufferEntries and restore the links between them. It does not play any 
role outside of serialization.
Ad 3. I agree it should be at least a `long`; an `int` would limit the number 
of intermediate states of the pattern graph.

After changing `int` to `long`, I think it is OK to be merged.


> Optimize the performance of SharedBufferSerializer
> --
>
> Key: FLINK-8227
> URL: https://issues.apache.org/jira/browse/FLINK-8227
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Reporter: Dian Fu
>Assignee: Dian Fu
>
> Currently {{SharedBufferSerializer.serialize()}} will create a HashMap and 
> put all the {{SharedBufferEntry}} into it. Usually this is not a problem. But 
> we obverse that in some cases the calculation of hashCode may become the 
> bottleneck. The performance will decrease as the number of 
> {{SharedBufferEdge}} increases. For looping pattern {{A*}}, if the number of 
> {{SharedBufferEntry}} is {{N}}, the number of {{SharedBufferEdge}} is about 
> {{N * N}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
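The optimization being reviewed can be sketched as follows; this is a simplified stand-in, not the real `SharedBufferSerializer`. Each entry gets a transient sequential `entryId` assigned in serialization order, so edges can be written as plain ids without computing the increasingly expensive value-based hashCode of the entries.

```java
import java.util.ArrayList;
import java.util.List;

public class EntryIdSketch {
    static final class Entry {
        final List<Entry> edges = new ArrayList<>();
        // Transient sequential id, assigned only during serialization
        // (mirrors the transient entryId discussed above).
        int entryId = -1;
    }

    // Serialize entries as "edge target ids per entry" without relying on a
    // HashMap of entries: ids are simply assigned in iteration order, so
    // resolving an edge target is an O(1) field read.
    static List<int[]> serialize(List<Entry> entries) {
        for (int i = 0; i < entries.size(); i++) {
            entries.get(i).entryId = i;
        }
        List<int[]> out = new ArrayList<>();
        for (Entry e : entries) {
            int[] edgeIds = new int[e.edges.size()];
            for (int j = 0; j < edgeIds.length; j++) {
                edgeIds[j] = e.edges.get(j).entryId;
            }
            out.add(edgeIds);
        }
        return out;
    }

    public static void main(String[] args) {
        Entry a = new Entry(), b = new Entry();
        a.edges.add(b);
        List<int[]> result = serialize(List.of(a, b));
        // Edge from entry 0 points at entry 1.
        System.out.println(result.get(0)[0]);
    }
}
```

With a looping pattern {{A*}} producing roughly N * N edges, avoiding a per-edge hash lookup is exactly where the win comes from.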


[jira] [Closed] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Miguel Cejuela closed FLINK-8272.
--
   Resolution: Fixed
Fix Version/s: 1.4.0

> 1.4 & Scala: Exception when querying the state
> --
>
> Key: FLINK-8272
> URL: https://issues.apache.org/jira/browse/FLINK-8272
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
> Environment: mac
>Reporter: Juan Miguel Cejuela
> Fix For: 1.4.0
>
>
> With Flink 1.3.2 and basically the same code except for the changes in the 
> API that happened in 1.4.0, I could query the state fine.
> With 1.4.0 (& scala), I get the exception written below. Most important line: 
> {{java.util.concurrent.CompletionException: 
> java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
> writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)}}
> Somehow it seems that the connection to the underlying buffer fails. Perhaps 
> most telling is that, if I give a wrong {{JobId}} (i.e. one that is not 
> running), I still get the same error. So I guess no connection succeeds. 
> Also, to avoid possible serialization errors, I am just querying with a key 
> of type {{String}} and, for testing, a dummy & trivial {{case class}}.
> I create the necessary types such as:
> {code}
> implicit val typeInformationString = createTypeInformation[String]
> implicit val typeInformationDummy = createTypeInformation[Dummy]
> implicit val valueStateDescriptorDummy =
> new ValueStateDescriptor("state", typeInformationDummy)}}
> {code}
> And then query it like so:
> {code}
> val javaCompletableFuture = client
> .getKvState(this.flinkJobIdObject, this.queryableState, key, 
> typeInformationString, valueStateDescriptorDummy)
> {code}
> Does anybody have any clue on this? Let me know what other information I 
> should provide.
> ---
> Full stack trace:
> {code}
> java.util.concurrent.CompletionException: 
> java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
> writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.queryablestate.network.Client$PendingConnection.lambda$handInChannel$0(Client.java:289)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.queryablestate.network.Client$EstablishedConnection.close(Client.java:432)
>   at 
> org.apache.flink.queryablestate.network.Client$EstablishedConnection.onFailure(Client.java:505)
>   at 
> org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:92)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> 

[jira] [Commented] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294165#comment-16294165
 ] 

Juan Miguel Cejuela commented on FLINK-8272:


Correction: 

* I did see the log message: "Started Queryable State Proxy Server" (without 
the "the")
* Inspecting that log, I realized I had wrongly configured the port for the 
queryable client (if I recall correctly, in 1.3.2 the port was the same as 
Flink's JobManager's, but now the port must be that of the "Queryable State 
Proxy Server" and the host must be that of one of the TaskManagers)

Bottom line: my bad, no issue! You can close this issue. Sorry!
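
For anyone who hits the same symptom, a minimal sketch of a 1.4 client setup, 
assuming a hypothetical TaskManager host {{localhost}}, the default proxy port 
9069, and a state name {{dummy-state}} (all illustrative, not from this issue):

{code:scala}
import org.apache.flink.api.common.JobID
import org.apache.flink.api.common.state.ValueStateDescriptor
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.queryablestate.client.QueryableStateClient

case class Dummy(n: Int)

// In 1.4 the client must target a TaskManager's Queryable State Proxy,
// not the JobManager port that worked in 1.3.x.
val client = new QueryableStateClient("localhost", 9069)

val resultFuture = client.getKvState(
  JobID.fromHexString("<job-id>"),   // id of the running job
  "dummy-state",                     // name passed to .asQueryableState(...)
  "some-key",                        // key to look up
  createTypeInformation[String],
  new ValueStateDescriptor("state", createTypeInformation[Dummy]))
{code}

The returned future completes with the state for that key once the proxy 
forwards the request to the TaskManager holding it.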

> 1.4 & Scala: Exception when querying the state
> --
>
> Key: FLINK-8272
> URL: https://issues.apache.org/jira/browse/FLINK-8272
> Project: Flink
>  Issue Type: Bug
>  Components: Queryable State
>Affects Versions: 1.4.0
> Environment: mac
>Reporter: Juan Miguel Cejuela
>
> With Flink 1.3.2 and basically the same code, except for the API changes in 
> 1.4.0, I could query the state fine.
> With 1.4.0 (& Scala), I get the exception below. Most important line: 
> {{java.util.concurrent.CompletionException: 
> java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
> writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)}}
> Somehow it seems that the connection to the underlying buffer fails. Perhaps 
> most telling is that, if I give a wrong {{JobId}} (i.e. one that is not 
> running), I still get the same error. So I guess no connection succeeds. 
> Also, to avoid possible serialization errors, I am just querying with a key 
> of type {{String}} and, for testing, a dummy & trivial {{case class}}.
> I create the necessary types such as:
> {code}
> implicit val typeInformationString = createTypeInformation[String]
> implicit val typeInformationDummy = createTypeInformation[Dummy]
> implicit val valueStateDescriptorDummy =
> new ValueStateDescriptor("state", typeInformationDummy)
> {code}
> And then query the state like so:
> {code}
> val javaCompletableFuture = client
> .getKvState(this.flinkJobIdObject, this.queryableState, key, 
> typeInformationString, valueStateDescriptorDummy)
> {code}
> Does anybody have any clue on this? Let me know what other information I 
> should provide.
> ---
> Full stack trace:
> {code}
> java.util.concurrent.CompletionException: 
> java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
> writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
>   at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>   at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.queryablestate.network.Client$PendingConnection.lambda$handInChannel$0(Client.java:289)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>   at 
> org.apache.flink.queryablestate.network.Client$EstablishedConnection.close(Client.java:432)
>   at 
> org.apache.flink.queryablestate.network.Client$EstablishedConnection.onFailure(Client.java:505)
>   at 
> org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:92)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)

[jira] [Commented] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294155#comment-16294155
 ] 

Juan Miguel Cejuela commented on FLINK-8272:


More info: I never receive the message "Started the Queryable State Proxy 
Server" as described in: 
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html#activating-queryable-state
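
That log line only appears when the queryable-state runtime is actually on the 
classpath; per the 1.4 docs linked above, activation is a file copy (the jar 
name here assumes the Scala 2.11 build of 1.4.0):

{code}
# Run from the Flink distribution root on every TaskManager machine.
cp opt/flink-queryable-state-runtime_2.11-1.4.0.jar lib/
# After a restart, the TaskManager log should contain
# "Started Queryable State Proxy Server".
{code}

Without that jar in lib/ the proxy never starts, and every client request 
fails at connection time.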


[jira] [Updated] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Miguel Cejuela updated FLINK-8272:
---
Description: 
With Flink 1.3.2 and basically the same code, except for the API changes in 
1.4.0, I could query the state fine.

With 1.4.0 (& Scala), I get the exception below. Most important line: 
{{java.util.concurrent.CompletionException: 
java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)}}

Somehow it seems that the connection to the underlying buffer fails. Perhaps 
most telling is that, if I give a wrong {{JobId}} (i.e. one that is not 
running), I still get the same error. So I guess no connection succeeds. Also, 
to avoid possible serialization errors, I am just querying with a key of type 
{{String}} and, for testing, a dummy & trivial {{case class}}.

I create the necessary types such as:

{code}
implicit val typeInformationString = createTypeInformation[String]
implicit val typeInformationDummy = createTypeInformation[Dummy]

implicit val valueStateDescriptorDummy =
new ValueStateDescriptor("state", typeInformationDummy)
{code}

And then query the state like so:

{code}
val javaCompletableFuture = client
.getKvState(this.flinkJobIdObject, this.queryableState, key, 
typeInformationString, valueStateDescriptorDummy)
{code}

Does anybody have any clue on this? Let me know what other information I should 
provide.

---

Full stack trace:

{code}
java.util.concurrent.CompletionException: java.lang.IndexOutOfBoundsException: 
readerIndex(0) + length(4) exceeds writerIndex(0): 
PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.queryablestate.network.Client$PendingConnection.lambda$handInChannel$0(Client.java:289)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.queryablestate.network.Client$EstablishedConnection.close(Client.java:432)
at 
org.apache.flink.queryablestate.network.Client$EstablishedConnection.onFailure(Client.java:505)
at 
org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:92)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)

[jira] [Updated] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Miguel Cejuela updated FLINK-8272:
---
Description: 
With Flink 1.3.2 and essentially the same code (apart from the API changes introduced in 1.4.0), I could query the state fine.

With 1.4.0 (& Scala), I get the exception below. The most important line is: 
{{java.util.concurrent.CompletionException: 
java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)}}
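
That line is Netty's ByteBuf bounds check failing: a read of 4 bytes (a length field) was attempted on a buffer into which nothing was ever written (widx 0). A minimal sketch of that invariant in plain Java (hypothetical names; not Netty or Flink code):

```java
public class BufDemo {
    // Mirrors the guard a Netty ByteBuf applies before a read: reading
    // `length` bytes is legal only while readerIndex + length <= writerIndex.
    static String checkReadable(int readerIndex, int writerIndex, int capacity, int length) {
        if (readerIndex + length > writerIndex) {
            return "readerIndex(" + readerIndex + ") + length(" + length
                    + ") exceeds writerIndex(" + writerIndex + "): Buf(ridx: "
                    + readerIndex + ", widx: " + writerIndex + ", cap: " + capacity + ")";
        }
        return "ok";
    }

    public static void main(String[] args) {
        // The buffer in the report is completely empty (ridx 0, widx 0, cap 0),
        // so reading a 4-byte length prefix must fail.
        System.out.println(checkReadable(0, 0, 0, 4));
        // A buffer holding 4 readable bytes passes the same check.
        System.out.println(checkReadable(0, 4, 8, 4));
    }
}
```

In other words, the response buffer handed to the client was empty, which is consistent with the suspicion below that no connection ever succeeded.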

It seems that reading from the underlying buffer fails. Perhaps most telling: 
if I pass a wrong {{JobId}} (i.e. one that is not running), I still get the 
same error, so I suspect no connection succeeds at all. Also, to rule out 
serialization errors, I am querying only with a {{String}} key and, for 
testing, a trivial dummy {{case class}}.

I create the necessary type information as follows:

{code:scala}
implicit val typeInformationString = createTypeInformation[String]
implicit val typeInformationDummy = createTypeInformation[Dummy]

implicit val valueStateDescriptorDummy =
  new ValueStateDescriptor("state", typeInformationDummy)
{code}

And then query the state like so:

{code:scala}
val javaCompletableFuture = client
  .getKvState(this.flinkJobIdObject, this.queryableState, key,
    typeInformationString, valueStateDescriptorDummy)
{code}

Does anybody have any clue about this? Let me know what other information I 
should provide.

---

Full stack trace:

{code}
java.util.concurrent.CompletionException: java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
    at org.apache.flink.queryablestate.network.Client$PendingConnection.lambda$handInChannel$0(Client.java:289)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
    at org.apache.flink.queryablestate.network.Client$EstablishedConnection.close(Client.java:432)
    at org.apache.flink.queryablestate.network.Client$EstablishedConnection.onFailure(Client.java:505)
    at org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:92)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at 
{code}


[jira] [Created] (FLINK-8272) 1.4 & Scala: Exception when querying the state

2017-12-17 Thread Juan Miguel Cejuela (JIRA)
Juan Miguel Cejuela created FLINK-8272:
--

 Summary: 1.4 & Scala: Exception when querying the state
 Key: FLINK-8272
 URL: https://issues.apache.org/jira/browse/FLINK-8272
 Project: Flink
  Issue Type: Bug
  Components: Queryable State
Affects Versions: 1.4.0
 Environment: mac
Reporter: Juan Miguel Cejuela


With Flink 1.3.2 and essentially the same code (apart from the API changes introduced in 1.4.0), I could query the state fine.

With 1.4.0 (& Scala), I get the exception below. The most important line is: 
{{java.util.concurrent.CompletionException: 
java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds 
writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)}}

It seems that reading from the underlying buffer fails. Perhaps most telling: 
if I pass a wrong {{JobId}} (i.e. one that is not running), I still get the 
same error, so I suspect no connection succeeds at all. Also, to rule out 
serialization errors, I am querying only with a {{String}} key and, for 
testing, a trivial dummy {{case class}}.

I create the necessary type information as follows:

{code:scala}
implicit val typeInformationString = createTypeInformation[String]
implicit val typeInformationDummy = createTypeInformation[Dummy]

implicit val valueStateDescriptorDummy =
  new ValueStateDescriptor("state", typeInformationDummy)
{code}

And then query the state like so:

{code:scala}
val javaCompletableFuture = client
  .getKvState(this.flinkJobIdObject, this.queryableState, key,
    typeInformationString, valueStateDescriptorDummy)
{code}

Does anybody have any clue about this? Let me know what other information I 
should provide.

---

Full stack trace:

{code}
java.util.concurrent.CompletionException: java.lang.IndexOutOfBoundsException: readerIndex(0) + length(4) exceeds writerIndex(0): PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 0)
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
    at org.apache.flink.queryablestate.network.Client$PendingConnection.lambda$handInChannel$0(Client.java:289)
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
    at org.apache.flink.queryablestate.network.Client$EstablishedConnection.close(Client.java:432)
    at org.apache.flink.queryablestate.network.Client$EstablishedConnection.onFailure(Client.java:505)
    at org.apache.flink.queryablestate.network.ClientHandler.channelRead(ClientHandler.java:92)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
    at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
    at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at 
{code}

[jira] [Commented] (FLINK-7736) Fix some of the alerts raised by lgtm.com

2017-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294045#comment-16294045
 ] 

ASF GitHub Bot commented on FLINK-7736:
---

Github user 1m2c3t4 commented on the issue:

https://github.com/apache/flink/pull/4784
  
This has been reviewed and approved, but not committed. Can one of the 
committers take a look, please?


> Fix some of the alerts raised by lgtm.com
> -
>
> Key: FLINK-7736
> URL: https://issues.apache.org/jira/browse/FLINK-7736
> Project: Flink
>  Issue Type: Improvement
>Reporter: Malcolm Taylor
>Assignee: Malcolm Taylor
>
> lgtm.com has identified a number of issues giving scope for improvement in 
> the code: [https://lgtm.com/projects/g/apache/flink/alerts/?mode=list]
> This issue is to address some of the simpler ones. Some of these are quite 
> clear bugs such as off-by-one errors. Others are areas where the code might 
> be made clearer, such as use of a variable name which shadows another 
> variable.
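
As a hedged illustration (hypothetical Java, not taken from the Flink codebase or from the lgtm.com report), the two simpler alert classes mentioned above look like this, each with its fix noted in a comment:

```java
public class AlertExamples {
    // Off-by-one: a loop bound of `i <= a.length` walks one element past
    // the end and throws ArrayIndexOutOfBoundsException; the fix is `<`.
    static int sumFixed(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) { // was: i <= a.length
            sum += a[i];
        }
        return sum;
    }

    // Shadowing: an inner declaration re-using an outer name hides it;
    // renaming the local variable makes the code unambiguous.
    static int total = 0;
    static void addAll(int[] a) {
        int subtotal = 0; // was: `int total = 0;`, which shadowed the field
        for (int v : a) {
            subtotal += v;
        }
        total += subtotal;
    }
}
```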



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4784: FLINK-7736: fix some lgtm.com alerts

2017-12-17 Thread 1m2c3t4
Github user 1m2c3t4 commented on the issue:

https://github.com/apache/flink/pull/4784
  
This has been reviewed and approved, but not committed. Can one of the 
committers take a look, please?


---


[GitHub] flink issue #5133: [hotfix] Fix typo in AkkaUtils method

2017-12-17 Thread casidiablo
Github user casidiablo commented on the issue:

https://github.com/apache/flink/pull/5133
  
OK


---