[ https://issues.apache.org/jira/browse/FLINK-38529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18031073#comment-18031073 ]

Arvid Heise commented on FLINK-38529:
-------------------------------------

This has been solved by https://issues.apache.org/jira/browse/FLINK-35749, so 
please upgrade to a later connector version. If you can't upgrade your Flink 
version for some reason, you could try using 3.3.0-1.19.
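In a Maven build that would look like the snippet below (assuming the published 
Maven Central coordinates; the version suffix tracks the Flink minor version the 
connector was built against):

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka</artifactId>
        <version>3.3.0-1.19</version>
    </dependency>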

> Silent record dropping when exceeding Kafka max.request.size
> ------------------------------------------------------------
>
>                 Key: FLINK-38529
>                 URL: https://issues.apache.org/jira/browse/FLINK-38529
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>    Affects Versions: 1.18.1
>            Reporter: Barak Ben-Nathan
>            Priority: Major
>
> Environment:
> Flink 1.18.1
> flink-connector-kafka 3.2.0-1.18
> Issue:
> When our Flink jobs process records that exceed the configured Kafka producer 
> `max.request.size` parameter, the records are silently dropped and processing 
> continues without any warning or error logs.
>  
> While debugging, I discovered that the 
> `org.apache.kafka.common.errors.RecordTooLargeException` is caught in 
> `WriterCallback#onCompletion`, but it is not propagated upward to fail 
> the job as expected.
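
For reference, the failure mode described above comes down to the producer 
reporting the oversized record asynchronously, via the send callback, rather 
than from send() itself. The general fix pattern is to remember the first 
callback exception and rethrow it on the next write or flush. The class below 
is an illustrative sketch of that pattern against the plain Kafka producer 
API, not Flink's actual KafkaWriter code:

    import java.util.concurrent.atomic.AtomicReference;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Illustrative sketch: remember the first async send failure
    // (e.g. RecordTooLargeException) and surface it later instead of
    // silently dropping the record.
    class FailFastProducer {
        private final KafkaProducer<byte[], byte[]> producer;
        private final AtomicReference<Exception> asyncError = new AtomicReference<>();

        FailFastProducer(KafkaProducer<byte[], byte[]> producer) {
            this.producer = producer;
        }

        void send(ProducerRecord<byte[], byte[]> record) {
            rethrowIfFailed(); // fail fast on an earlier async error
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // keep only the first error; later calls rethrow it
                    asyncError.compareAndSet(null, exception);
                }
            });
        }

        void flush() {
            producer.flush();
            rethrowIfFailed(); // flush must not succeed after a failed send
        }

        private void rethrowIfFailed() {
            Exception e = asyncError.get();
            if (e != null) {
                throw new RuntimeException(
                        "Kafka send failed; failing instead of dropping records", e);
            }
        }
    }

Per the report, the 3.2.0 connector caught the exception in the callback but 
never performed the equivalent rethrow, so processing continued as if the 
record had been written.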



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
