We have an event-time pipeline that uses a ProcessFunction to accept events with an allowed lateness of a number of days. We use a BoundedOutOfOrdernessTimestampExtractor, and our event stream has a long tail that occasionally exceeds the allowed lateness, in which case we drop those events.
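For context, this is roughly how the timestamps and watermarks are assigned. This is only a minimal sketch, not our actual code: MyEvent and getEventTimeMillis() are placeholder names, and the 10-minute out-of-orderness value is illustrative.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

// MyEvent is a stand-in for our actual event class.
static DataStream<MyEvent> withEventTime(DataStream<MyEvent> events) {
    return events.assignTimestampsAndWatermarks(
            // the watermark trails the maximum observed event time by the bounded out-of-orderness
            new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.minutes(10)) {
                @Override
                public long extractTimestamp(MyEvent event) {
                    return event.getEventTimeMillis();
                }
            });
}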
The logic is simple (a rough sketch of the ProcessFunction is included at the end of this mail):

1. OnElement, we compare (element's event time + allowed lateness) against the current watermark.
2. If the element is within bounds, we register a timer for (element's event time + allowed lateness). We call this "endTime".
3. During the above window we collect and assimilate all the data for the key and regularly UPSERT it to a data store.
4. OnTimer for the above "endTime", we clear state for the key.

This has worked well for the past 1-2 years. Last week, however, we had a bug that introduced DEBUG logging into the job config, and this caused several failures/restarts (S3 DEBUG logging is extremely verbose!). Within a day or two, our monitoring system restarted the pipeline several times, sometimes from a Savepoint more than an hour or two old. During this period we noticed that a few long-tail events that should have been dropped made it into our data store. These records did not contain the assimilated Flink state, meaning they passed through after the endTime key purge (step 4) and ended up compromising the data store by replacing assimilated values with tail-end values.

I'm wondering how this could be possible. The only explanation I can think of is:

4. On the "endTime" timer, key state is purged.
5. --- job fails ---
6. The job is restarted from a 2.5-hour-old Savepoint.
7. The watermark regresses (?) from the "endTime" watermark.
8. A long-tail event squeaks through under the temporarily backdated watermark.
9. The data store's data for the key is replaced with the long-tail data.

Is the above possible, or is there perhaps another possible scenario? Any opinions appreciated!

-Cliff
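P.S. For reference, here is a rough, simplified sketch of the ProcessFunction described in steps 1-4 above. It is not our production code: MyEvent, Accumulator, and the 3-day lateness are placeholders, and the actual UPSERT happens in a downstream sink that consumes the emitted accumulators.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class AssimilatingFunction
        extends KeyedProcessFunction<String, MyEvent, Accumulator> {

    // illustrative value; the real job uses a lateness of several days
    private static final long ALLOWED_LATENESS_MS = Time.days(3).toMilliseconds();

    private transient ValueState<Accumulator> accState;

    @Override
    public void open(Configuration parameters) {
        accState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("acc", Accumulator.class));
    }

    @Override
    public void processElement(MyEvent event, Context ctx, Collector<Accumulator> out)
            throws Exception {
        long endTime = ctx.timestamp() + ALLOWED_LATENESS_MS;

        // (1) drop elements whose lateness window has already closed
        if (endTime <= ctx.timerService().currentWatermark()) {
            return;
        }

        // (2) register the "endTime" timer for this element
        ctx.timerService().registerEventTimeTimer(endTime);

        // (3) assimilate into keyed state and emit downstream, where it is UPSERTed
        Accumulator acc = accState.value();
        if (acc == null) {
            acc = new Accumulator();
        }
        acc.merge(event);
        accState.update(acc);
        out.collect(acc);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Accumulator> out) {
        // (4) lateness window is over: purge state for the key
        accState.clear();
    }
}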