Bryan,
Did you take down any brokers in your cluster while hitting KAFKA-1028? If
so, you may also be hitting KAFKA-1647.
Guozhang
On Mon, Oct 20, 2014 at 1:18 PM, Bryan Baugher bjb...@gmail.com wrote:
Hi everyone,
We run a 3-broker Kafka cluster using 0.8.1.1 with all topics having a
replication factor of 3.
Yes, the cluster was restarted in a rolling fashion, more or less, but due to
some other events that left the brokers in a confused state, the ISR for a
number of partitions became empty and a new controller was elected.
KAFKA-1647 sounds exactly like the problem I encountered. Thank you.
On Tue, Oct
Hi everyone,
We run a 3-broker Kafka cluster using 0.8.1.1, with all topics having a
replication factor of 3, meaning every broker has a replica of every partition.
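For reference, a topic with that layout can be created with the stock 0.8.x
admin tool; the ZooKeeper address, topic name, and partition count below are
placeholders, not values from the cluster described here:

```shell
# Hypothetical example: create a topic whose replication factor equals the
# broker count (3), so every broker ends up holding a replica of every
# partition. zk1:2181, example-topic, and 8 partitions are assumptions.
bin/kafka-topics.sh --create \
  --zookeeper zk1:2181 \
  --topic example-topic \
  --partitions 8 \
  --replication-factor 3
```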
We recently ran into this issue (
https://issues.apache.org/jira/browse/KAFKA-1028) and saw data loss within
Kafka. We understand why it