[
https://issues.apache.org/jira/browse/FLINK-38529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18033752#comment-18033752
]
Barak Ben-Nathan commented on FLINK-38529:
------------------------------------------
[~arvid] Thank you!
> Silent record dropping when exceeding Kafka max.request.size
> ------------------------------------------------------------
>
> Key: FLINK-38529
> URL: https://issues.apache.org/jira/browse/FLINK-38529
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Kafka
> Affects Versions: 1.18.1
> Reporter: Barak Ben-Nathan
> Priority: Major
>
> +Environment:+
> Flink 1.18.1
> flink-connector-kafka 3.2.0-1.18
> +Issue:+
> When our Flink jobs process records that exceed the configured Kafka producer
> `max.request.size` parameter, the records are silently dropped and processing
> continues without any warning or error logs.
>
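As context for the setting mentioned above: `max.request.size` is a standard Kafka producer property (default 1048576 bytes, i.e. 1 MB); sends larger than it are rejected by the producer with a `RecordTooLargeException`. A minimal sketch of raising it as a mitigation (the 5 MB value is illustrative only, and the broker-side `message.max.bytes` limit would also need to allow it; this does not fix the silent-drop behavior reported here):

```java
import java.util.Properties;

public class ProducerSizeConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        // Default max.request.size is 1048576 bytes (1 MB). A send larger than
        // this fails client-side with RecordTooLargeException. The 5 MB value
        // below is an illustrative assumption, not a recommendation.
        props.setProperty("max.request.size", String.valueOf(5 * 1024 * 1024));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("max.request.size"));
    }
}
```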
> While debugging, I discovered that the
> `org.apache.kafka.common.errors.RecordTooLargeException` is caught in
> `WriterCallback#onCompletion`, but it is not propagated upward to fail
> the job as expected.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)