[ https://issues.apache.org/jira/browse/FLINK-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749907#comment-16749907 ]

Elias Levy commented on FLINK-11249:
------------------------------------

I believe the open transactions are maintained.  Transaction data is 
recorded in the transaction log, an internal Kafka topic that is replicated 
three ways.  When a broker is restarted, another broker's transaction 
coordinator becomes the leader for the transaction log partitions that were 
managed by the restarted broker.  The new coordinator reads those partitions, 
rebuilds the in-memory transaction state, and resumes serving producers.
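
The three-way replication described above corresponds to the brokers' settings for the internal transaction-state topic. A sketch of the relevant broker configuration (these are the Kafka defaults; adjust per deployment):

```properties
# Replication factor of the internal __transaction_state topic
# (the "transaction log" referred to above).
transaction.state.log.replication.factor=3
# Minimum in-sync replicas the transaction log needs to accept writes,
# so a coordinator failover can always recover committed transaction state.
transaction.state.log.min.isr=2
```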

> FlinkKafkaProducer011 can not be migrated to FlinkKafkaProducer
> ---------------------------------------------------------------
>
>                 Key: FLINK-11249
>                 URL: https://issues.apache.org/jira/browse/FLINK-11249
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Kafka Connector
>    Affects Versions: 1.7.0, 1.7.1
>            Reporter: Piotr Nowojski
>            Assignee: vinoyang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.7.2, 1.8.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> As reported by a user on the mailing list in "How to migrate Kafka Producer ?" 
> (18th December 2018), {{FlinkKafkaProducer011}} cannot be migrated to 
> {{FlinkKafkaProducer}}, and the same problem can recur in future Kafka 
> producer versions/refactorings.
> The issue is that the {{ListState<FlinkKafkaProducer.NextTransactionalIdHint> 
> FlinkKafkaProducer#nextTransactionalIdHintState}} field is serialized using 
> Java serialization, which causes collisions between 
> {{FlinkKafkaProducer011.NextTransactionalIdHint}} and 
> {{FlinkKafkaProducer.NextTransactionalIdHint}}.
> To fix that, we probably need to release new versions of those classes that 
> rewrite/upgrade this state field to a new one that does not rely on 
> Java serialization. After this, we could drop support for the old field, 
> which in turn will allow users to upgrade from the 0.11 connector to the 
> universal one.
> One bright side is that, technically speaking, {{FlinkKafkaProducer011}} 
> has the same compatibility matrix as the universal connector (it is also forward 
> and backward compatible with the same Kafka versions), so for the time being 
> users can stick with {{FlinkKafkaProducer011}}.
> FYI [~tzulitai] [~yanghua]
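
The class-name coupling at the heart of the issue can be illustrated with plain Java serialization. The sketch below uses hypothetical stand-in classes (not Flink's actual `NextTransactionalIdHint` types, and a simplified failure mode) to show that a Java-serialized stream is bound to the writer's class, so state bytes written for one class cannot be restored as the other:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical stand-ins for FlinkKafkaProducer011.NextTransactionalIdHint
// and FlinkKafkaProducer.NextTransactionalIdHint: same fields, different class.
class HintV011 implements Serializable {
    long lastParallelism = 4;
    long nextFreeTransactionalId = 42;
}

class HintUniversal implements Serializable {
    long lastParallelism;
    long nextFreeTransactionalId;
}

public class JavaSerializationDemo {

    // Serialize a HintV011, then try to read it back as a HintUniversal.
    static String roundTrip() throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new HintV011());
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            // Java serialization embeds the writer's fully-qualified class name,
            // so the stream deserializes as HintV011; the cast to the new type fails.
            HintUniversal hint = (HintUniversal) in.readObject();
            return "restored: " + hint.nextFreeTransactionalId;
        } catch (ClassCastException e) {
            return "restore failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

A serializer-defined state format (as proposed in the fix above) avoids this, because the restored bytes are interpreted by the serializer rather than tied to a Java class name.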



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
