We run Kafka Streams (Java) apps on Kubernetes to *consume*, *process*, and
*produce* real-time data in our Kafka cluster (Confluent Community Edition
v7.0 / Kafka v3.0). How can we deploy our apps in a way that limits downtime
in record consumption? Our initial target is roughly *2 seconds* of downtime,
incurred at most once per task.

We aim to deploy changes to the production environment continuously, but each
deployment is too disruptive: it causes downtime in record consumption in our
apps, which in turn adds latency to the real-time records we produce.
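For context, the availability-related knobs we are experimenting with look
roughly like the sketch below. This is not our final configuration: the
helper method, the instance-id scheme, and all concrete values are
placeholders of ours, though the config keys themselves are standard Kafka
Streams / consumer settings (standby replicas, KIP-441 warm-up replicas, and
KIP-345 static membership).

```java
import java.util.Properties;

public class StreamsHaConfigSketch {

    // Sketch only: values are placeholders, not measured recommendations.
    static Properties haProps(String instanceId) {
        Properties p = new Properties();
        // Keep a warm copy of each task's state on another instance, so a
        // restarted pod's tasks can move without rebuilding state stores.
        p.put("num.standby.replicas", "1");
        // KIP-441: allow assigning a task to a replica that lags by at most
        // this many records instead of blocking on full restoration.
        p.put("acceptable.recovery.lag", "10000");
        p.put("max.warmup.replicas", "2");
        // KIP-345 static membership: a restart that completes within the
        // session timeout should not trigger a rebalance. The
        // "main.consumer." prefix routes the setting to the Streams main
        // consumer; the id must be stable and unique per pod.
        p.put("main.consumer.group.instance.id", instanceId);
        p.put("main.consumer.session.timeout.ms", "30000");
        return p;
    }

    public static void main(String[] args) {
        Properties p = haProps("orders-app-pod-0");
        System.out.println(p.getProperty("num.standby.replicas"));
        System.out.println(p.getProperty("main.consumer.group.instance.id"));
    }
}
```

We would pass these properties into the `KafkaStreams` constructor alongside
the usual `application.id` and `bootstrap.servers`; the open question for us
is whether this combination can realistically keep per-task downtime near the
2-second target during a rolling restart.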

Since this question is already described in detail on Stack Overflow (
https://stackoverflow.com/questions/71222496/how-to-achieve-high-availability-in-a-kafka-streams-app-during-deployment)
but has not yet been answered, we refer to it there rather than copy/pasting
the content into this mailing list.

Please let me know if you prefer to have the complete question in the
mailing list instead.
