[ https://issues.apache.org/jira/browse/FLINK-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14699594#comment-14699594 ]
ASF GitHub Bot commented on FLINK-2386:
---------------------------------------
Github user rmetzger commented on the pull request:
https://github.com/apache/flink/pull/1028#issuecomment-131843222
How about dropping the backported Kafka code and relying completely on our
own implementation against the SimpleConsumer API?
We would need to implement the `KafkaConsumer.partitionsFor()` method
ourselves, but I think that's doable.
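For reference, a minimal sketch of what such a partition lookup against the 0.8 low-level SimpleConsumer API could look like, using a TopicMetadataRequest sent to a single broker. The class name, client id, timeouts, and the single-broker assumption are illustrative only; a real implementation would iterate over a list of bootstrap brokers and retry on failure:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class PartitionDiscovery {

    /**
     * Fetches the partition metadata for a topic by asking one broker
     * via a TopicMetadataRequest (SimpleConsumer / low-level API).
     */
    public static List<PartitionMetadata> partitionsFor(String topic, String brokerHost, int brokerPort) {
        // host/port, timeouts, buffer size and client id are placeholder values
        SimpleConsumer consumer = new SimpleConsumer(brokerHost, brokerPort, 30000, 64 * 1024, "partition-lookup");
        try {
            TopicMetadataRequest request = new TopicMetadataRequest(Collections.singletonList(topic));
            TopicMetadataResponse response = consumer.send(request);

            List<PartitionMetadata> partitions = new ArrayList<PartitionMetadata>();
            for (TopicMetadata tm : response.topicsMetadata()) {
                if (tm.errorCode() != 0) {
                    throw new RuntimeException("Error fetching metadata for topic " + topic
                            + ", error code " + tm.errorCode());
                }
                partitions.addAll(tm.partitionsMetadata());
            }
            return partitions;
        } finally {
            consumer.close();
        }
    }
}
```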
> Implement Kafka connector using the new Kafka Consumer API
> ----------------------------------------------------------
>
> Key: FLINK-2386
> URL: https://issues.apache.org/jira/browse/FLINK-2386
> Project: Flink
> Issue Type: Improvement
> Components: Kafka Connector
> Reporter: Robert Metzger
> Assignee: Robert Metzger
>
> Once Kafka has released its new consumer API, we should provide a connector
> for that version.
> The release will probably be called 0.9 or 0.8.3.
> The connector will be mostly compatible with Kafka 0.8.2.x, except for
> committing offsets to the broker (the new consumer API expects a coordinator to
> be available on the Kafka broker). To work around that, we can provide a
> configuration option to commit offsets to ZooKeeper, managed by Flink code
> (see the sketch below).
> For 0.9/0.8.3 it will be fully compatible.
> It will not be compatible with 0.8.1 because of a mismatch in the Kafka
> message format.
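A rough sketch of what committing offsets to ZooKeeper from Flink code might look like, here using Apache Curator. The znode layout `/consumers/<group>/offsets/<topic>/<partition>` is the one read by Kafka's 0.8 high-level consumer; the class and method names are illustrative, not the connector's actual API:

```java
import java.nio.charset.StandardCharsets;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkOffsetCommitter implements AutoCloseable {

    private final CuratorFramework client;
    private final String groupId;

    public ZkOffsetCommitter(String zkConnect, String groupId) {
        // retry policy values are placeholders
        this.client = CuratorFrameworkFactory.newClient(zkConnect, new ExponentialBackoffRetry(1000, 3));
        this.client.start();
        this.groupId = groupId;
    }

    /** Writes an offset to the path used by Kafka's 0.8 high-level consumer. */
    public void commitOffset(String topic, int partition, long offset) throws Exception {
        String path = "/consumers/" + groupId + "/offsets/" + topic + "/" + partition;
        byte[] data = Long.toString(offset).getBytes(StandardCharsets.UTF_8);
        if (client.checkExists().forPath(path) == null) {
            client.create().creatingParentsIfNeeded().forPath(path, data);
        } else {
            client.setData().forPath(path, data);
        }
    }

    @Override
    public void close() {
        client.close();
    }
}
```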