Hi Pushkar,
In addition to Matthias's and Guozhang's answers and clear explanations,
I think there's still one thing you should focus on:

> I could see that 2 of the 3 brokers restarted at the same time.

It's a 3-broker cluster, and suddenly 2 of them restarted at once. You
should try to find out the root cause of those simultaneous restarts.
The `Producer#send()` call is actually not covered by the KIP, because
handling the timeout directly could result in data loss: Kafka Streams
does not keep a copy of the data in the producer's send buffer, and thus
cannot simply retry the `send()`. Instead, it's necessary to re-process
the input records to regenerate the output.
As the error message suggests, you can increase `max.block.ms` for this
case: if a broker is down, it may take some time for the producer to
fail over to a different broker (before the producer can fail over, the
cluster must elect a new partition leader, and only afterward can it
inform the producer about the new leader).
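As a minimal sketch of that tuning, assuming you pass producer overrides
through the Kafka Streams config with the `producer.` prefix (the value
shown is only an illustrative choice, not a recommendation):

```properties
# Give the embedded producer more time to block while the cluster
# elects new partition leaders after a broker restart.
# Streams forwards configs with the "producer." prefix to its
# internal producer clients.
producer.max.block.ms=120000
```

In Java you can build the same key with
`StreamsConfig.producerPrefix("max.block.ms")` instead of hard-coding
the prefix.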

Hello Pushkar,

I'm assuming you have the same Kafka version (2.5.1) on the Streams
client side here: in those old versions, Kafka Streams relies on the
embedded producer clients to handle timeouts, which requires users to
configure such values correctly.

In newer versions (2.8+), we have made Kafka Streams handle these
timeouts itself, retrying at the Streams layer rather than relying
solely on the embedded clients' configuration.
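For illustration, on 2.8+ the Streams-level retry window is bounded by
the `task.timeout.ms` config (a sketch; the value shown is the default,
and you could raise it if your brokers need longer to recover):

```properties
# Kafka Streams (2.8+) retries a task that hit a client-side
# TimeoutException until this much wall-clock time has passed,
# and only then surfaces the error to the application.
task.timeout.ms=300000
```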

Hi All,

I am getting the below issue in a Streams application. The Kafka cluster
is a 3-broker cluster (v2.5.1), and I could see that 2 of the 3 brokers
restarted at the same time when the exception below occurred in the
Streams application, so I can relate the exception to those broker
restarts. However, what is