Thanks for reporting this issue, Chris. It does indeed look as if FLINK-10455
has not been fully fixed. I've reopened it and linked this mailing list
thread. If you want, you could comment on the JIRA ticket as well. What
would be super helpful is if you could create a reproducing example for
further investigation.
This is a blocker for exactly-once support in the Flink Kafka producer.
The issue was reported and closed, but it is still reproducible:
https://issues.apache.org/jira/browse/FLINK-10455
On Mon, May 6, 2019 at 10:20 AM Slotterback, Chris <
chris_slotterb...@comcast.com> wrote:
Hey Flink users,
We are currently using Flink 1.7.2 with a job that uses FlinkKafkaProducer with its
write semantic set to Semantic.EXACTLY_ONCE. When there is a job failure and restart
(in our case from a checkpoint timeout), it enters a failure loop that requires a
cancellation and resubmission to fix. Th
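
For reference, here is a minimal sketch of the producer setup being described, based on the Flink 1.7 universal Kafka connector API. The topic name, broker address, and serialization schema are placeholders, not taken from the original job; the transaction timeout value is an illustrative choice:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // EXACTLY_ONCE delivery requires checkpointing to be enabled,
        // since Kafka transactions are committed on checkpoint completion.
        env.enableCheckpointing(60_000L);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
        // transaction.timeout.ms must not exceed the broker's
        // transaction.max.timeout.ms, or producer initialization fails.
        props.setProperty("transaction.timeout.ms", "900000");

        DataStream<String> stream = env.socketTextStream("localhost", 9999); // placeholder source

        stream.addSink(new FlinkKafkaProducer<>(
                "output-topic",                                            // placeholder topic
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE));

        env.execute("exactly-once sketch");
    }
}
```

With EXACTLY_ONCE, each checkpoint interval maps to a Kafka transaction, so a restart from checkpoint must be able to resume or abort the producer's pending transactions; that recovery path is where the failure loop described above appears.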