Was any progress ever made on this? We have seen the same issue
in the past. What I do remember is that the job crashes after whatever
interval I set max.block.ms to.
I am going to attempt to reproduce the issue again and will report
back.
On 3/28/19
Hi Marc,
the Kafka Producer should be able to create backpressure. Could you try to
increase max.block.ms to Long.MAX_VALUE?
The exceptions you shared for the failure case don't look like the root
cause of the problem. Could you share the full stack traces, or even the full
logs, for this time frame?
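For reference, a minimal sketch of how that producer property could be supplied when building the Kafka sink (the broker address is a placeholder, and the exact sink constructor varies by Flink version; only the Properties handling is shown here):

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Builds the producer configuration that would be passed to the
    // Flink Kafka sink. Placeholder broker address; assumption for
    // illustration only.
    public static Properties kafkaProducerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka-broker:9092");
        // Make the producer block (and thus let backpressure propagate
        // into the Flink job) instead of timing out and failing when
        // Kafka cannot keep up; Long.MAX_VALUE effectively disables
        // the timeout, as suggested above.
        props.setProperty("max.block.ms", String.valueOf(Long.MAX_VALUE));
        return props;
    }

    public static void main(String[] args) {
        Properties props = kafkaProducerProps();
        System.out.println("max.block.ms = " + props.getProperty("max.block.ms"));
    }
}
```

These properties would then be handed to the Flink Kafka producer when the sink is created.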
Hi
We’ve got a job producing to a Kafka sink. The Kafka topics have a retention of
2 weeks. When doing a complete replay, it seems like Flink isn’t able to
apply backpressure or throttle the number of messages going to Kafka, causing
the following error: