Hi Team,
Does Flink actually discard replayed duplicate records via record sequence numbers, as
the paper "Lightweight Asynchronous Snapshots for Distributed Dataflows"
states: "To achieve this we can follow a similar scheme to SDGs [5] and mark
records with sequence numbers from the sources, thus, every downstream node can
discard records with sequence numbers less than what they have processed
already"? If so, which part of the Flink source code implements this?
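For clarity, here is a minimal sketch of the scheme the paper describes, as I understand it. This is only an illustration of the quoted passage, not a claim about what Flink's source actually does (the operator and names below are hypothetical):

```python
# Hypothetical sketch of the SDG-style dedup scheme from the paper, NOT
# Flink's actual implementation: each source stamps records with a
# monotonically increasing sequence number, and every downstream node
# discards records whose sequence number it has already processed.

class DedupOperator:
    def __init__(self):
        # highest sequence number processed so far, tracked per upstream source
        self.last_seq = {}

    def process(self, source_id, seq_no, record):
        if seq_no <= self.last_seq.get(source_id, -1):
            return None  # replayed duplicate: discard
        self.last_seq[source_id] = seq_no
        return record  # first time seen: process normally

op = DedupOperator()
op.process("src-0", 0, "a")   # processed
op.process("src-0", 1, "b")   # processed
# after recovery, the source replays records 0 and 1:
op.process("src-0", 0, "a")   # discarded as a duplicate
op.process("src-0", 2, "c")   # new record, processed
```

My question is whether anything equivalent to this per-source sequence-number tracking exists in Flink, or whether exactly-once is achieved purely through the snapshot/rollback mechanism without per-record dedup.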
