[ 
https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15381654#comment-15381654
 ] 

ASF GitHub Bot commented on FLINK-4035:
---------------------------------------

Github user tzulitai commented on the issue:

    https://github.com/apache/flink/pull/2231
  
    Thank you for the description, @radekg .
    
    I think the problems you mentioned should be solvable by making the 0.9 
connector just a bit more general, so that users can simply swap in the 0.10 
jars manually. However, you also have a point about the possible confusion. 
IMHO, it is redundant to have two connector modules with almost identical code, 
and it also doesn't seem feasible, maintainability-wise, to keep adding 
modules for new Kafka versions even when they don't change the API.
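    As a rough sketch of what "manually using the 0.10 jars" could look like 
in a user's `pom.xml` (the artifact name and versions here are illustrative 
assumptions, not something decided in this thread; the exclusion may not even 
be strictly necessary, since an explicit `kafka-clients` dependency usually 
wins Maven's nearest-wins resolution anyway):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.9_2.10</artifactId>
    <version>1.1-SNAPSHOT</version>
    <exclusions>
      <!-- Drop the transitive 0.9.x Kafka client... -->
      <exclusion>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <!-- ...and pin the 0.10 client explicitly instead. -->
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
  </dependency>
</dependencies>
```

    This only works, of course, if the 0.9 connector code compiles and runs 
against the 0.10 client API, which is exactly the generalization discussed 
above.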
    
    I think we'll need to loop in @rmetzger and @aljoscha to decide how we 
can proceed with this. The solutions I currently see are to fix the above 
problems in the 0.9 connector so that it is compatible with the 0.10 API, and 
then either rename the module to `flink-connector-kafka-0.10` (which doesn't 
seem good because it would break users' POMs), or add information to the 
documentation on how to work with Kafka 0.10. Either way, in the long run, 
we'll probably still need to sort out a better way to manage the connector 
code when new versions of external systems appear like this.


> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
>                 Key: FLINK-4035
>                 URL: https://issues.apache.org/jira/browse/FLINK-4035
>             Project: Flink
>          Issue Type: Bug
>          Components: Kafka Connector
>    Affects Versions: 1.0.3
>            Reporter: Elias Levy
>            Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.  
> Published messages now include timestamps, and compressed messages now 
> include relative offsets.  As it stands, brokers must decompress 
> producer-compressed messages, assign offsets to them, and recompress them, 
> which is wasteful and makes it less likely that compression will be used at 
> all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
