Hey Alexander,
Thanks for the feedback and apologies for my late reply.
This validates my understanding of AT_LEAST_ONCE with respect to the
Kafka producer. I tried to reproduce the issue, but came back
empty-handed. As you pointed out, the culprit could be a call to an
external, non-idempotent API.
I'll f
* To clarify: by "different output" I mean that, for the same input
message, the output message could be slightly smaller due to the
above-mentioned factors and fall within the allowed size range without
causing any failures.
On Thu, 26 Oct 2023 at 21:52, Alexander Fedulov wrote:
Your expectations are correct. With AT_LEAST_ONCE, Flink will wait
for all outstanding records in the Kafka buffers to be acknowledged before
marking the checkpoint successful (= also recording the offsets of the
sources). That said, there might be other factors involved that could lead
to a different output.
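For reference, this is roughly how the guarantee is selected when building the sink in PyFlink 1.17. A minimal sketch only: the broker address, topic name, and serialization schema below are placeholders, and the exact import path of DeliveryGuarantee can vary slightly between Flink versions.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.base import DeliveryGuarantee
from pyflink.datastream.connectors.kafka import (
    KafkaSink,
    KafkaRecordSerializationSchema,
)

# Placeholder broker and topic; adjust to your environment.
sink = (
    KafkaSink.builder()
    .set_bootstrap_servers("kafka:9092")
    .set_record_serializer(
        KafkaRecordSerializationSchema.builder()
        .set_topic("output-topic")
        .set_value_serialization_schema(SimpleStringSchema())
        .build()
    )
    # AT_LEAST_ONCE: a checkpoint only completes after all
    # records buffered in the producer are acknowledged by Kafka.
    .set_delivery_guarantee(DeliveryGuarantee.AT_LEAST_ONCE)
    .build()
)
```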
Hey folks,
We currently run (py)Flink 1.17 on k8s (managed by the Flink k8s
operator), with HA and checkpointing (fixed-retries restart policy). We
produce into Kafka with the AT_LEAST_ONCE delivery guarantee.
Our application failed when trying to produce a message larger than
Kafka's maximum message size.
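One way this could be handled defensively (a sketch, not what our job currently does) is to check the serialized payload size before handing records to the sink. The 1 MiB figure below is Kafka's default producer limit (`max.request.size`; the broker-side `message.max.bytes` is similar); the effective limit in any given cluster may differ, and the overhead allowance is an assumption.

```python
# Sketch: pre-sink guard against oversized Kafka records.
# 1 MiB is the default producer `max.request.size`; the real
# limit depends on the cluster configuration.
MAX_MESSAGE_BYTES = 1024 * 1024


def fits_in_kafka(value: bytes, limit: int = MAX_MESSAGE_BYTES) -> bool:
    """Return True if the serialized record fits under the limit.

    Reserves a rough allowance for the record's key, headers, and
    protocol framing, which also count toward the limit.
    """
    overhead = 512  # assumed allowance, not an exact Kafka figure
    return len(value) + overhead <= limit


small = b"x" * 1000
huge = b"x" * (2 * 1024 * 1024)
print(fits_in_kafka(small))  # True
print(fits_in_kafka(huge))   # False
```

Records that fail the check could be dropped, truncated, or routed to a dead-letter topic instead of failing the job.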