> > >>> Hi Damien, Sébastien, and Loïc,
> > >>>
> > >>> Thanks for the KIP!
> > >>>
> > >>> +1 (binding)
> > >>>
> > >>> Best,
> > >>> Bruno
Thanks for the feedback. I think we should keep two separate callbacks
for the serialization and processing error handlers. That makes sense
for type safety (ProducerRecord vs POJO) and for backward
compatibility. On top of that, all metadata provided in the #handle
method would need to be held in memory
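To illustrate the type-safety argument above, here is a minimal, self-contained sketch. The real interfaces are org.apache.kafka.streams.errors.ProductionExceptionHandler and the ProcessingExceptionHandler proposed by KIP-1033; the types below are simplified stand-ins for illustration, not the actual Kafka API.

```java
// Stand-in for a serialized producer record (what the production handler sees).
class SerializedRecord {
    final byte[] key;
    final byte[] value;
    SerializedRecord(byte[] key, byte[] value) { this.key = key; this.value = value; }
}

// Stand-in for an in-flight POJO record (what the processing handler sees).
class PojoRecord<K, V> {
    final K key;
    final V value;
    PojoRecord(K key, V value) { this.key = key; this.value = value; }
}

enum HandlerResponse { CONTINUE, FAIL }

// Two separate callbacks keep each signature type-safe: one works on raw
// bytes, the other on the original key/value objects, so neither side has
// to down-cast, and the existing production handler keeps its contract.
interface ProductionHandler {
    HandlerResponse onSerializationError(SerializedRecord record, Exception e);
}

interface ProcessingHandler {
    HandlerResponse onProcessingError(PojoRecord<?, ?> record, Exception e);
}
```

A single merged callback would have to accept `Object` for the record and force every implementation to inspect and cast, which is exactly what the two signatures above avoid.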
Hi all,
We would like to start a vote for KIP-1033: Add Kafka Streams
exception handler for exceptions occurring during processing
The KIP is available on
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1033%3A+Add+Kafka+Streams+exception+handler+for+exceptions+occurring+during+processing
KIP to get
> this done? I would rather include timestamp extraction issue in the DLQ
> KIP from day one on. The interface is quite different though, so we
> would need to think a little bit about it in more details how to do
> this. Right now, the contract is that returning `-1` as extra
xception
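The `-1` contract mentioned above can be sketched as follows. The real interface is org.apache.kafka.streams.processor.TimestampExtractor; this stand-in only mirrors its shape for illustration, and the drop-on-negative behavior mirrors the built-in LogAndSkipOnInvalidTimestamp extractor.

```java
// Minimal stand-in for the timestamp-extractor contract: returning a
// negative timestamp (e.g. -1) marks the record's timestamp as invalid.
interface Extractor {
    long extract(String payload, long partitionTime);
}

class DropOnInvalid {
    // Records whose extractor returns a negative timestamp are skipped
    // rather than processed, mirroring LogAndSkipOnInvalidTimestamp.
    static boolean shouldProcess(Extractor extractor, String payload, long partitionTime) {
        return extractor.extract(payload, partitionTime) >= 0;
    }
}
```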
>
>
> (6)
> I am wondering where the implementation of ProcessingMetadata gets the
> sourceRawKey/Value from. Do we need additional changes in
> ProcessingContext and implementations?
>
>
> Best,
> Bruno
>
>
> On 4/21/24 2:23 PM, Damien Gasparina wrote:
"dlq-topic")
>
> A DLQ topic name is currently required for the last two response types.
> I am wondering if it could benefit users to ease the use of the default DLQ
> topic "errors.deadletterqueue.topic.name" when implementing custom handlers,
> with such kind o
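The fallback suggested above could be sketched as follows: when a custom handler does not name a DLQ topic, fall back to the application-wide default configured under "errors.deadletterqueue.topic.name" (the config key quoted in this thread). The resolver below is a simplified illustration, not the KIP's API.

```java
import java.util.Optional;

// Resolve the destination DLQ topic: prefer the topic the handler named,
// otherwise fall back to the configured default DLQ topic.
class DlqTopicResolver {
    static String resolve(Optional<String> handlerTopic, String defaultTopic) {
        return handlerTopic.orElse(defaultTopic);
    }
}
```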
he
> ProcessingContext thing is the only open question in my mind
>
> On Thu, Apr 11, 2024 at 5:41 AM Damien Gasparina
> wrote:
>
> > Hi Matthias, Bruno,
> >
> > 1.a During my previous comment, by Processor Node ID, I meant
> > Processor name. This is i
ually implement
> forwarding to a dead-letter-queue via the handlers.
>
> Lastly, two super small things:
>
> S6:
> We use camel case in Streams, so it should be rawSourceKey/Value rather
> than raw_source_key/value
>
> S7:
> Can you add javadocs for the #withDeadLette
the topic and the topic should either be
automatically created, or pre-created.
17. If a DLQ record cannot be sent, the exception should go to the
uncaughtExceptionHandler. Let me state that clearly in the KIP.
On Fri, 12 Apr 2024 at 17:25, Damien Gasparina wrote:
>
> Hi Nick,
>
> 1. G
source record. Is there a way these can be included? In particular, I'm
> concerned with "schema pointer" headers (like those set by Schema
> Registry), that may need to be propagated, especially if the records are
> fed back into the source topics for re-processing by the user.
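The header-propagation concern above could be addressed along these lines: copy the source record's headers (including Schema Registry "schema pointer" headers) onto the DLQ record, then add error context under separate keys. The header name below is a hypothetical example, not a name defined by the KIP, and the map-based types are simplified stand-ins for Kafka's Headers API.

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Build DLQ headers: propagate all source headers as-is, then attach error
// context without overwriting anything the source record carried.
class DlqRecordBuilder {
    static Map<String, byte[]> buildDlqHeaders(Map<String, byte[]> sourceHeaders,
                                               String exceptionMessage) {
        Map<String, byte[]> headers = new LinkedHashMap<>(sourceHeaders);
        // Hypothetical error-context header name, for illustration only.
        headers.put("__streams.errors.exception",
                    exceptionMessage.getBytes(StandardCharsets.UTF_8));
        return headers;
    }
}
```

Because the source headers survive intact, a record pulled off the DLQ can be fed back into the source topic for re-processing without losing its schema pointer.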
at it can't be
> > used to forward records.
> >
> > 4.
> > You mention the KIP-1033 ProcessingExceptionHandler, but what's the plan
> > if KIP-1033 is not adopted, or if KIP-1034 lands before 1033?
> >
> > Regards,
> >
> > Nick
> >
> > On F
r 12, 2024 at 9:45 AM Damien Gasparina
> wrote:
>
> > Hi Claude,
> >
> > In this KIP, the Dead Letter Queue is materialized by a standard and
> > independent topic, thus normal ACLs apply to it like any other topic.
> > This should not introduce any
queue? Without
> such a guarantee at the start it seems that managing dead letter queues
> will be fraught with security issues.
>
>
> On Wed, Apr 10, 2024 at 10:34 AM Damien Gasparina
> wrote:
>
> > Hi everyone,
> >
> > To continue our effort to improve Kafka S
andler.ProcessingHandlerResponse.FAIL) {
> >> > throw new StreamsException("Processing exception handler is set to
> >> fail upon" +
> >> > " a processing error. If you would rather have the streaming
> >> pipeline" +
> >> > " continue after a
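The FAIL/CONTINUE branch quoted above can be sketched in a self-contained form. The real enum is ProcessingExceptionHandler.ProcessingHandlerResponse and the real exception is org.apache.kafka.streams.errors.StreamsException; the stand-ins below only mirror the control flow.

```java
enum ProcessingHandlerResponse { CONTINUE, FAIL }

class ProcessingErrorPolicy {
    // Returns true when processing should keep going; throws to mirror the
    // StreamsException path when the handler is configured to FAIL.
    static boolean apply(ProcessingHandlerResponse response, RuntimeException cause) {
        if (response == ProcessingHandlerResponse.FAIL) {
            throw new RuntimeException(
                "Processing exception handler is set to fail upon a processing error.",
                cause);
        }
        return true; // CONTINUE: skip the failed record and move on
    }
}
```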
Damien Gasparina created KAFKA-16505:
Summary: KIP-1034: Dead letter queue in Kafka Streams
Key: KAFKA-16505
URL: https://issues.apache.org/jira/browse/KAFKA-16505
Project: Kafka
Issue
Hi everyone,
To continue our effort to improve Kafka Streams error handling, we
propose a new KIP to add out-of-the-box support for a Dead Letter Queue.
The goal of this KIP is to provide a default implementation that
should be suitable for most applications and allow users to override
it if
tadata and useful info of the ProcessorContext but
> without
> > >> the
> > >> forwarding APIs. This would also let us sidestep the following issue:
> > >> 2b. If you *do* want the ability to forward records, setting aside
> > >> whether
>
Damien Gasparina created KAFKA-16448:
Summary: Add Kafka Streams exception handler for exceptions
occurring during processing (KIP-1033)
Key: KAFKA-16448
URL: https://issues.apache.org/jira/browse/KAFKA-16448
Hi everyone,
After writing quite a few Kafka Streams applications, my colleagues and I
just created KIP-1033 to introduce a new Exception Handler in Kafka Streams
to simplify error handling.
This feature would allow defining an exception handler to automatically
catch exceptions occurring during
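Such a handler would presumably be wired up via configuration, in the same style as the existing deserialization and production exception handlers. The config key and handler class name below follow that convention but are assumptions about KIP-1033, not its final API.

```java
import java.util.Properties;

// Sketch of registering a processing exception handler through Streams
// configuration; the handler config key is an assumption, not the final API.
class HandlerConfigExample {
    static Properties streamsConfig() {
        Properties props = new Properties();
        props.put("application.id", "my-app");             // standard Streams configs
        props.put("bootstrap.servers", "localhost:9092");
        // Proposed by KIP-1033 (exact key name assumed here):
        props.put("processing.exception.handler",
                  "com.example.MyProcessingExceptionHandler");
        return props;
    }
}
```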
Hi team,
I would like permission to contribute to Kafka.
My wiki ID is "d.gasparina" and my Jira ID is "Dabz".
I would like to propose a KIP to improve Kafka Streams error and exception
handling.
Cheers,
Damien
Damien Gasparina created KAFKA-14302:
Summary: Infinite probing rebalance if a changelog topic got
emptied
Key: KAFKA-14302
URL: https://issues.apache.org/jira/browse/KAFKA-14302
Project: Kafka
Damien Gasparina created KAFKA-13636:
Summary: Committed offsets could be deleted during a rebalance if
a group did not commit for a while
Key: KAFKA-13636
URL: https://issues.apache.org/jira/browse/KAFKA
Damien Gasparina created KAFKA-13109:
Summary: WorkerSourceTask is not enforcing the
errors.retry.timeout and errors.retry.delay.max.ms parameters in case of a
RetriableException during task.poll()
Key: KAFKA-13109
Damien Gasparina created KAFKA-13024:
Summary: Kafka Streams is dropping messages with null key during
repartition
Key: KAFKA-13024
URL: https://issues.apache.org/jira/browse/KAFKA-13024
Project
Damien Gasparina created KAFKA-12951:
Summary: Infinite loop while restoring a GlobalKTable
Key: KAFKA-12951
URL: https://issues.apache.org/jira/browse/KAFKA-12951
Project: Kafka
Issue
Damien Gasparina created KAFKA-12272:
Summary: Kafka Streams metric commit-latency-max and
commit-latency-avg is always 0
Key: KAFKA-12272
URL: https://issues.apache.org/jira/browse/KAFKA-12272
Damien Gasparina created KAFKA-7129:
---
Summary: Dynamic default value for number of thread configuration
Key: KAFKA-7129
URL: https://issues.apache.org/jira/browse/KAFKA-7129
Project: Kafka