[ 
https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17916:
--------------------------------

    Assignee: Yuan Mei

> Provide API to separate KafkaShuffle's Producer and Consumer to different jobs
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-17916
>                 URL: https://issues.apache.org/jira/browse/FLINK-17916
>             Project: Flink
>          Issue Type: Improvement
>          Components: API / DataStream, Connectors / Kafka
>    Affects Versions: 1.11.0
>            Reporter: Yuan Mei
>            Assignee: Yuan Mei
>            Priority: Major
>             Fix For: 1.11.0
>
>
> Follow-up of FLINK-15670
> *Separate sink (producer) and source (consumer) to different jobs*
>  * Within the same job, the sink and the source are recovered independently 
> under regional failover. However, they share the same checkpoint coordinator 
> and, correspondingly, the same global checkpoint snapshot.
>  * This means that if the consumer fails, the producer cannot commit written 
> data because of the two-phase commit setup: the producer needs a 
> checkpoint-complete signal to finish the second phase.
>  * The same applies when the producer fails (see the API sketch below).
>  
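> A minimal sketch of how running the producer and the consumer as two 
> independent jobs might look. The method names {{writeKeyBy}} and {{readKeyBy}} 
> follow the FlinkKafkaShuffle naming from FLINK-15670, but the exact signatures 
> below are assumptions for illustration, not the final API of this ticket:
> {code:java}
> import java.util.Properties;
>
> import org.apache.flink.api.common.typeinfo.TypeHint;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.java.tuple.Tuple2;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.datastream.KeyedStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle;
>
> public class KafkaShuffleSplitSketch {
>
>     private static final String TOPIC = "kafka-shuffle-topic";
>
>     // Job 1: producer side. Keys the stream and writes it to the shuffle
>     // topic. Running it as its own job gives it its own checkpoint
>     // coordinator, so its two-phase commits no longer wait on the
>     // consumer's checkpoints.
>     public static void producerJob(Properties kafkaProps) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(10_000L);
>
>         DataStream<Tuple2<Integer, Long>> input =
>                 env.fromElements(Tuple2.of(1, 10L), Tuple2.of(2, 20L), Tuple2.of(1, 30L));
>
>         // Assumed signature: writeKeyBy(stream, topic, kafkaProperties, keySelector).
>         // kafkaProps is assumed to carry bootstrap servers plus any
>         // shuffle-specific settings (e.g. number of partitions).
>         FlinkKafkaShuffle.writeKeyBy(input, TOPIC, kafkaProps, t -> t.f0);
>
>         env.execute("kafka-shuffle-producer");
>     }
>
>     // Job 2: consumer side. Reads the keyed partitions back from the shuffle
>     // topic and recovers independently of the producer job.
>     public static void consumerJob(Properties kafkaProps) throws Exception {
>         StreamExecutionEnvironment env =
>                 StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(10_000L);
>
>         // Assumed signature: readKeyBy(topic, env, typeInfo, kafkaProperties, keySelector).
>         KeyedStream<Tuple2<Integer, Long>, Integer> keyed =
>                 FlinkKafkaShuffle.readKeyBy(
>                         TOPIC,
>                         env,
>                         TypeInformation.of(new TypeHint<Tuple2<Integer, Long>>() {}),
>                         kafkaProps,
>                         t -> t.f0);
>
>         keyed.sum(1).print();
>
>         env.execute("kafka-shuffle-consumer");
>     }
> }
> {code}
> With the two jobs split this way, a failure and restart of the consumer job 
> leaves the producer job's checkpoints, and hence its transaction commits, 
> unaffected, and vice versa.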



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
