[ https://issues.apache.org/jira/browse/KAFKA-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18054949#comment-18054949 ]
Uladzislau Blok commented on KAFKA-19430:
-----------------------------------------
That's what I thought; thanks for confirming.
It makes sense to surface {{RecordCorruptedException}} to the client (which
currently surfaces as a generic {{KafkaException}}). This is tracked in KAFKA-19613.
I also suggest adding a Kafka Streams handler for consumer errors. My proposed
solution above, propagating errors to the deserialization handler, would serve
as a more general handler for the entire read path.
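For reference, the existing hook this would generalize is the
{{DeserializationExceptionHandler}} interface. A minimal sketch (class name
hypothetical, using the long-standing {{handle}} overload) that logs and skips
the bad record, much like the built-in {{LogAndContinueExceptionHandler}}:
{code:java}
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

public class SkipBadRecordsHandler implements DeserializationExceptionHandler {

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        // Log the poison pill and keep the stream thread alive.
        System.err.printf("Skipping bad record at %s-%d offset %d: %s%n",
                record.topic(), record.partition(), record.offset(), exception);
        return DeserializationHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }
}
{code}
It is registered via {{StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG}}.
The point above is that consumer-side errors such as {{RecordCorruptedException}}
never reach this hook today, because they are thrown from {{poll()}} before
deserialization starts.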
cc [~mjsax]
> Don't fail on RecordCorruptedException
> --------------------------------------
>
> Key: KAFKA-19430
> URL: https://issues.apache.org/jira/browse/KAFKA-19430
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Matthias J. Sax
> Assignee: Uladzislau Blok
> Priority: Major
>
> From [https://github.com/confluentinc/kafka-streams-examples/issues/524]
> Currently, the existing `DeserializationExceptionHandler` is applied when
> deserializing the record key/value byte[] inside Kafka Streams. This implies
> that a `RecordCorruptedException`, which the consumer raises while fetching
> and validating the record (before the Streams deserialization step runs), is
> not handled.
> We should explore not letting Kafka Streams crash, but instead maybe retrying
> this error automatically (as `RecordCorruptedException` extends
> `RetriableException`), and find a way to route the error into the existing
> exception handler.
> If the error is transient, users can still use `REPLACE_THREAD` in the
> uncaught exception handler, but this is a rather heavyweight approach
> (sketched below).
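For context, the {{REPLACE_THREAD}} workaround mentioned in the last paragraph
of the description looks roughly like this; a minimal sketch against the public
{{KafkaStreams}} API, assuming the corruption is transient (application id,
broker address, and topic names are hypothetical):
{code:java}
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionHandlerResponse;

public class ReplaceThreadExample {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "corrupt-record-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class);

        // Trivial pass-through topology, just to have something to run.
        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Replace the failed stream thread instead of letting the application
        // die. Heavyweight: the thread is torn down and its tasks are revoked
        // and re-assigned before processing resumes.
        streams.setUncaughtExceptionHandler(exception ->
                StreamThreadExceptionHandlerResponse.REPLACE_THREAD);

        streams.start();
    }
}
{code}
Because the whole thread is recreated on every occurrence, a targeted retry or
a handler callback on the read path would be preferable.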
--
This message was sent by Atlassian Jira
(v8.20.10#820010)