Hi Harshvardhan,

Flink won't buffer all the events between checkpoints. Flink uses
Kafka's transactions, which are committed only on checkpoints, so the
data is persisted on the Kafka side as it is written, but it only
becomes visible to consumers (those reading with read_committed
isolation) once the transaction is committed.
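As a rough sketch of the setup (not a drop-in program — the topic name,
schema, and timeout values below are placeholders, and the exact
constructor overloads depend on your connector version):

    // Producer side: open a Kafka transaction per checkpoint interval.
    Properties producerProps = new Properties();
    producerProps.setProperty("bootstrap.servers", "localhost:9092");
    // The transaction timeout must exceed the maximum checkpoint
    // interval, or the broker may abort the transaction before Flink
    // commits it on checkpoint completion. The broker's
    // transaction.max.timeout.ms must allow this value.
    producerProps.setProperty("transaction.timeout.ms",
            String.valueOf(16 * 60 * 1000));

    stream.addSink(new FlinkKafkaProducer011<>(
            "output-topic",
            new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
            producerProps,
            FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));

    // Consumer side: only committed records are visible when reading with:
    Properties consumerProps = new Properties();
    consumerProps.setProperty("isolation.level", "read_committed");

So with a 15-minute checkpoint interval, records are written to Kafka
continuously during those 15 minutes (nothing is held in Flink's
memory), but a read_committed consumer will only see them after the
checkpoint completes and the transaction is committed.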

I've cc'ed Piotr, who implemented the Kafka 0.11 connector, in case he
wants to correct me or add something to the answer.

Best,

Dawid


On 23/09/18 17:48, Harshvardhan Agrawal wrote:
> Hi,
>
> Can someone please help me understand how does the exactly once
> semantic work with Kafka 11 in Flink?
>
> Thanks,
> Harsh
>
> On Tue, Sep 11, 2018 at 10:54 AM Harshvardhan Agrawal
> <harshvardhan.ag...@gmail.com <mailto:harshvardhan.ag...@gmail.com>>
> wrote:
>
>     Hi,
>
>     I was going through the blog post on how TwoPhaseCommitSink
>     function works with Kafka 11. One of the things I don’t understand
>     is: What is the behavior of the Kafka 11 Producer between two
>     checkpoints? Say that the time interval between two checkpoints is
>     set to 15 minutes. Will Flink buffer all records in memory in that
>     case and start writing to Kafka when the next checkpoint starts?
>
>     Thanks!
>     -- 
>     Regards,
>     Harshvardhan
>
>
>
> -- 
> *Regards,
> Harshvardhan Agrawal*
> *267.991.6618 | LinkedIn <https://www.linkedin.com/in/harshvardhanagr/>*
