[ https://issues.apache.org/jira/browse/FLINK-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15221142#comment-15221142 ]

ASF GitHub Bot commented on FLINK-3541:
---------------------------------------

GitHub user skyahead opened a pull request:

    https://github.com/apache/flink/pull/1846

    [FLINK-3541] [Kafka Connector] Clean up workaround in FlinkKafkaConsumer09

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/skyahead/flink FLINK-3541

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/1846.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #1846

----
commit 49c291a5bd06f1468bcee40f03a7bbea3bb1be29
Author: Tianji Li <skyah...@gmail.com>
Date:   2016-04-01T04:35:39Z

    [FLINK-3541] [Kafka Connector] Clean up workaround in FlinkKafkaConsumer09

----

> Clean up workaround in FlinkKafkaConsumer09
> -------------------------------------------
>
>                 Key: FLINK-3541
>                 URL: https://issues.apache.org/jira/browse/FLINK-3541
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>    Affects Versions: 1.0.0
>            Reporter: Till Rohrmann
>            Priority: Minor
>
> In the current {{FlinkKafkaConsumer09}} implementation, we repeatedly start a
> new {{KafkaConsumer}} if the method {{KafkaConsumer.partitionsFor}} throws an
> NPE. This is due to a bug in Kafka version 0.9.0.0; see
> https://issues.apache.org/jira/browse/KAFKA-2880. The code can be found in
> the constructor of {{FlinkKafkaConsumer09}} ({{FlinkKafkaConsumer09.java:208}}).
> However, the problem is marked as fixed for version 0.9.0.1, which is also
> the version flink-connector-kafka builds against. Therefore, we should be
> able to get rid of the workaround.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
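For context, the KAFKA-2880 workaround that this PR removes boils down to a retry loop: keep constructing a fresh {{KafkaConsumer}} and calling {{partitionsFor}} until it stops throwing a spurious NPE. Below is a minimal standalone sketch of that retry-on-NPE pattern using only the JDK; the class and method names are illustrative and are not the actual Flink code.

```java
import java.util.function.Supplier;

public class RetryOnNpe {
    /**
     * Sketch of the KAFKA-2880 workaround pattern: retry a call that may
     * spuriously throw NullPointerException (as KafkaConsumer.partitionsFor
     * could under Kafka 0.9.0.0). Names here are illustrative only.
     */
    static <T> T retryOnNpe(Supplier<T> call, int maxAttempts) {
        NullPointerException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return call.get();
            } catch (NullPointerException npe) {
                // Spurious failure: remember it and try again with a fresh call.
                last = npe;
            }
        }
        // All attempts failed: surface the last NPE to the caller.
        throw last;
    }
}
```

With the underlying bug fixed in Kafka 0.9.0.1, `partitionsFor` can simply be called once, which is why the loop can be deleted rather than adjusted.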