@yanghua I believe you can't use a modern (>= 0.10.2.0) Kafka client to talk to 
a 0.8 or 0.9 broker.  That's why I said we would probably have to keep the 0.8 
and 0.9 Kafka connectors, unless we decide we can drop them because few in the 
community still use those versions of Kafka.  They are quite old: 0.8 came out 
5 years ago, and 0.9 three years ago.

But AFAIK there is no performance penalty for using a newer Kafka client (e.g. 
2.0.0) with an older KIP-35 supporting broker (e.g. 0.10.2.0).

As for Spark, you will note that they have yet to support any Kafka version 
greater than 0.10, which is the first version line that can be used with any 
modern client.

@pnowojski rather than releasing `flink-connector-kafka-1.0`, I would just 
release `flink-connector-kafka` or `flink-connector-kafka-new` or some such, 
without embedding the Kafka version into the connector artifact id, as it will 
be updated with every new Kafka version.  And I would drop 
`flink-connector-kafka-0.11` and `flink-connector-kafka-0.10`.
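If the connector is renamed as suggested, migration would be a one-line dependency change. A hedged sketch of what that might look like in a user's `pom.xml` (the exact artifact ids, Scala suffix, and version numbers here are illustrative, not decided):

```xml
<!-- Before: a Kafka-version-specific connector artifact -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
  <version>1.6.0</version>
</dependency>

<!-- After: a hypothetical versionless artifact that tracks
     the latest Kafka client going forward -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.7.0</version>
</dependency>
```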

I would not add a proxy.  It would just add complexity.  If the API doesn't 
change much, then best for folks to just change their dependencies to point to 
the new artifact and be done with it.  

Some testing against older brokers would be useful, particularly for any 
features we use that may not be available across all broker versions.

[ Full content available at: https://github.com/apache/flink/pull/6577 ]