Hi all,

I am quite new to Flink and Kafka, so I might mix something up here.
We have a Flink application (1.14.5 with Scala 2.12) that runs fine for a
few hours to a few days, then suddenly stops working and can no longer
publish to Kafka.
I then noticed this message showing up twice and thought "this does not
look right":
> Created new transactional producer prefix-2-9447
The timestamp of the second message appears to match the moment when the
application stops publishing to Kafka properly and checkpoints start
failing.
We also see this error message:
> Producer attempted an operation with an old epoch. Either there is a
newer producer with the same transactionalId, or the producer's transaction
has been expired by the broker.
Am I mistaken in thinking that this should be impossible when Flink
manages the sinks?
I would have thought that, with checkpointing enabled and Flink in control
of the output, it should never run into this situation.
We are using an exactly-once delivery guarantee for Kafka and have set the
Flink sink parallelism to 4; the sink is wired up roughly like the sketch
below.
We are also running on the Flink Kubernetes operator, version 1.1.0.
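
For context, here is a minimal sketch of our sink setup. The broker
address, topic name, checkpoint interval, and timeout value are
placeholders, not our real settings:

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.base.DeliveryGuarantee
import org.apache.flink.connector.kafka.sink.{KafkaRecordSerializationSchema, KafkaSink}
import org.apache.flink.streaming.api.scala._

object KafkaSinkSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Kafka transactions are committed when a checkpoint completes.
    env.enableCheckpointing(60000L) // placeholder interval

    // My understanding is that transaction.timeout.ms must exceed the
    // maximum time between checkpoints, otherwise the broker expires the
    // transaction (placeholder value).
    val producerProps = new java.util.Properties()
    producerProps.setProperty("transaction.timeout.ms", "900000")

    val sink = KafkaSink.builder[String]()
      .setBootstrapServers("kafka:9092") // placeholder
      .setRecordSerializer(
        KafkaRecordSerializationSchema.builder[String]()
          .setTopic("output-topic") // placeholder
          .setValueSerializationSchema(new SimpleStringSchema())
          .build())
      .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
      .setTransactionalIdPrefix("prefix") // as in "prefix-2-9447" above
      .setKafkaProducerConfig(producerProps)
      .build()

    env.fromElements("example").sinkTo(sink).setParallelism(4)
    env.execute("kafka-exactly-once-sink")
  }
}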
Any hints on what to check/change are highly appreciated.

best,
Sebastian S.