I have a Flink job that writes data into Kafka. The Kafka topic has its
maximum message size (max.message.bytes) set to 5 MB, so any record larger
than 5 MB causes the producer to throw the following exception and bring the
job down.

(Screenshot of the exception:
<http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/t1476/Screen_Shot_2018-09-13_at_1.png>)
Now I have configured checkpointing in my job, so when the job fails, it
restarts. The problem is that every restart fails on the same record, so the
job goes into an infinite loop of failures and restarts. Is there a way to
handle this Kafka exception in my code so that it doesn't bring down the
entire job?
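
One workaround I'm considering is to drop (or side-output) any record whose
serialized size exceeds the limit before it ever reaches the Kafka sink. A
minimal sketch of that idea, assuming records are plain UTF-8 strings; the
names here (DropOversizedRecords, withinLimit, MAX_RECORD_BYTES) are just
placeholders, not anything from my actual job:

    import org.apache.flink.streaming.api.datastream.DataStream;
    import java.nio.charset.StandardCharsets;

    public class DropOversizedRecords {
        // Stay a little below the broker-side 5 MB cap to leave room for
        // protocol/record overhead.
        private static final int MAX_RECORD_BYTES = 5 * 1024 * 1024 - 1024;

        // Keep only records that fit under the topic's size limit, so the
        // producer never sees a record it can't send.
        public static DataStream<String> withinLimit(DataStream<String> input) {
            return input.filter(record ->
                record.getBytes(StandardCharsets.UTF_8).length <= MAX_RECORD_BYTES);
        }
    }

This would avoid the restart loop, but it silently loses oversized records,
so routing them to a side output for logging might be better in practice. Is
there a cleaner way to catch the exception at the sink itself?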


