Hello,

savepoint & cancel is, in general, not an atomic operation; it only guarantees that no other checkpoint will be completed between the savepoint and the job cancellation.

You can only guarantee that no duplicate messages are sent out if you use a sink that supports exactly-once semantics, which, as far as I know, in the case of Kafka is only possible with the upcoming 0.11 connector (PR): <https://github.com/apache/flink/pull/4239>
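For reference, the savepoint-and-cancel step itself is a single CLI call in Flink; a sketch of the workflow is below (the savepoint directory, job ID, and jar path are placeholders, and the exact savepoint path is printed by the cancel command):

```shell
# Take a savepoint and cancel the job in one step; this only guarantees
# that no other checkpoint completes between the savepoint and cancellation.
bin/flink cancel -s hdfs:///flink/savepoints <jobId>

# Later, start the upgraded job from the savepoint path printed above.
bin/flink run -s hdfs:///flink/savepoints/savepoint-<id> path/to/upgraded-job.jar
```

Note that this alone does not prevent the sink from re-emitting records produced after the savepoint; that requires an exactly-once (transactional) sink.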

On 29.08.2017 14:11, Or Sher wrote:
Hi,

I'm a bit new to Flink, and I'm trying to figure out the best way to upgrade my currently running topology without duplicate messages being sent by the sink (once prior to the upgrade and once after).

I thought that the "atomic" part of savepoint & cancel meant that I could take a savepoint and cancel the job at the same time, later start from that savepoint, and that would be it.

Having tried that, it seems that many duplicate messages were sent by the Kafka producer sink after the restore from the savepoint.

Is that supposed to happen?
Did I misunderstand the meaning of "atomic" here?

Thanks,
Or.
