I am new to the Kafka codebase so please excuse any ignorance on my part.

When a dead letter queue is established, is there a process to ensure
that it is defined with at least the same ACL as the original topic?
Without such a guarantee from the start, it seems that managing dead
letter queues will be fraught with security issues.
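For illustration, one way an operator can approximate ACL parity by hand
today is to copy the relevant grants onto the DLQ topic with the stock
kafka-acls tool. This is only a sketch: the broker address, principal,
and topic names below are made-up placeholders, not anything defined by
the KIP.

```shell
# Inspect the ACLs currently attached to the source topic
# (all names here are placeholders for the example).
kafka-acls.sh --bootstrap-server localhost:9092 \
  --list --topic orders

# Mirror a Read/Write grant onto the dead letter topic
# for the same principal that uses the source topic.
kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:streams-app \
  --operation Read --operation Write \
  --topic orders-dlq
```

Doing this manually for every DLQ topic is exactly the kind of toil the
question above is getting at, since nothing enforces that the two
topics stay in sync.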


On Wed, Apr 10, 2024 at 10:34 AM Damien Gasparina <d.gaspar...@gmail.com>
wrote:

> Hi everyone,
>
> To continue our effort to improve Kafka Streams error handling, we
> propose a new KIP to add out-of-the-box support for Dead Letter
> Queues. The goal of this KIP is to provide a default implementation
> that should be suitable for most applications and allow users to
> override it if they have specific requirements.
>
> In order to build a suitable payload, some additional changes are
> included in this KIP:
>   1. extend the ProcessingContext to hold, when available, the source
> node's raw key/value byte[]
>   2. expose the ProcessingContext to the ProductionExceptionHandler,
> as it is currently not available in the handle() parameters.
>
> Regarding point 2., to expose the ProcessingContext to the
> ProductionExceptionHandler, we considered two options:
>   1. exposing the ProcessingContext as a parameter in the handle()
> method. That is the cleanest way IMHO, but we would need to deprecate
> the old method.
>   2. exposing the ProcessingContext as an attribute of the interface.
> This way, no method is deprecated, but we would not be consistent with
> the other ExceptionHandlers.
>
> In the KIP, we chose option 1 (a new handle() signature, with the
> old one deprecated), but we would welcome other opinions on this part.
> More information is available directly on the KIP.
>
> KIP link:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1034%3A+Dead+letter+queue+in+Kafka+Streams
>
> Feedback and suggestions are welcome,
>
> Cheers,
> Damien, Sebastien and Loic
>
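To make the option-1 shape concrete: the idea is that the handler gains
an overload that receives the processing context (and through it the raw
source key/value bytes) alongside the record and exception. The sketch
below uses local stand-in types throughout; none of these classes or
signatures are the actual Kafka Streams API, they only illustrate the
deprecate-old/add-new pattern the KIP describes.

```java
import java.nio.charset.StandardCharsets;

// Self-contained sketch of option 1 (new handle() taking a context).
// All types here are local stand-ins, NOT the real Kafka Streams API.
public class DlqSketch {
    // Stand-in for a subset of ProcessingContext, including the raw
    // source-node key/value bytes the KIP proposes to carry.
    public record ProcessingContext(String taskId, byte[] sourceRawKey, byte[] sourceRawValue) {}

    // Stand-in for the record that failed to be produced.
    public record FailedRecord(String topic, byte[] key, byte[] value) {}

    public enum Response { CONTINUE, FAIL }

    public interface ProductionExceptionHandlerSketch {
        // Old signature: no context available (would be deprecated).
        @Deprecated
        Response handle(FailedRecord record, Exception exception);

        // New signature: the context is now a parameter, so a DLQ
        // implementation can access the raw source key/value.
        Response handle(ProcessingContext context, FailedRecord record, Exception exception);
    }

    // Example handler that logs the raw source payload before continuing,
    // roughly what a default DLQ handler would forward to the DLQ topic.
    public static class LoggingHandler implements ProductionExceptionHandlerSketch {
        @Override @Deprecated
        public Response handle(FailedRecord record, Exception exception) {
            // Delegate the deprecated path to the new method with no context.
            return handle(null, record, exception);
        }

        @Override
        public Response handle(ProcessingContext context, FailedRecord record, Exception exception) {
            if (context != null && context.sourceRawValue() != null) {
                System.out.println("dead-letter payload: "
                        + new String(context.sourceRawValue(), StandardCharsets.UTF_8));
            }
            return Response.CONTINUE; // keep processing after routing to the DLQ
        }
    }

    public static void main(String[] args) {
        ProcessingContext ctx = new ProcessingContext("0_0",
                "k".getBytes(StandardCharsets.UTF_8),
                "bad-value".getBytes(StandardCharsets.UTF_8));
        ProductionExceptionHandlerSketch handler = new LoggingHandler();
        Response r = handler.handle(ctx,
                new FailedRecord("out", null, null),
                new RuntimeException("serialization failed"));
        System.out.println(r);
    }
}
```

The delegation from the deprecated overload to the new one is the usual
migration trick: existing implementations keep compiling, while new ones
only need to implement the context-aware method.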
