Hi Vino,

I would rather go a different path. I talked to some Kafka pros and they sort 
of confirmed my gut feeling.
The greatest changes to Kafka have been in the layers behind the API itself. 
The API seems to have been designed with backward compatibility in mind.
That means you can generally use a newer API with an older broker as well as 
a newer broker with an older API (the latter is probably even the safer 
direction). As soon as you try to do something with the API that your broker 
doesn't support, you get error messages.

https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix
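
To illustrate the point (just a rough sketch, not code from our connector): 
with the 0.10+/1.x client on the classpath, the consumer below would still 
talk to an older broker for the basic consume path, and anything the broker 
can't do surfaces as an explicit error such as UnsupportedVersionException. 
The broker address, group id and topic name ("broker:9092", 
"edgent-compat-check", "sensor-events") are made up for the example.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.errors.UnsupportedVersionException;

    public class CompatCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker address and group id, just for illustration.
            props.put("bootstrap.servers", "broker:9092");
            props.put("group.id", "edgent-compat-check");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("sensor-events"));
                // Basic consume loop (single poll shown); unsupported
                // features fail loudly instead of silently misbehaving.
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
            } catch (UnsupportedVersionException e) {
                // The broker is too old for something the client requested.
                System.err.println("Broker does not support this API: " + e.getMessage());
            }
        }
    }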

I would rather update the existing connector to a newer Kafka version ... 
0.8.2.2 is quite old and we should update to at least 0.10.0 (I would prefer 
a 1.x) and stick with that. I doubt many people are still using the ancient 
0.8.2 release (from 09.09.2015), and everything starting with 0.10.x should 
be interchangeable.

I wouldn't like to have yet another project maintaining a zoo of adapters for 
Kafka.

Eventually a Kafka Streams client would make sense though ... to sort of extend 
the Edgent streams from the edge into the Kafka cluster.
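
Just to sketch what I mean (again only an illustration, not a design; the 
topic names and broker address are invented): a small Kafka Streams app on 
the cluster side that picks up a topic an Edgent job publishes to and keeps 
processing it there, using the 1.x kafka-streams API.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class EdgeToClusterApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical application id and broker address.
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "edgent-continuation");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Continue the edge stream on the cluster: read, filter, republish.
            KStream<String, String> events = builder.stream("edgent-events");
            events.filter((key, value) -> value != null && !value.isEmpty())
                  .to("edgent-events-filtered");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }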

Chris



On 12.03.18, 03:41, "vino yang" <yanghua1...@gmail.com> wrote:

    Hi guys,
    
    How about this idea? I think we should support Kafka's new client API.
    
    2018-03-04 15:10 GMT+08:00 vino yang <yanghua1...@gmail.com>:
    
    > The reason is that Kafka 0.9+ provided a new consumer API which has more
    > features and better performance.
    >
    > Just like Flink's implementation: https://github.com/apache/flink/tree/master/flink-connectors.
    >
    > vinoyang
    > Thanks.
    >
    >
    
