Thank you very much, Stanislav,

We managed to fix it by deleting everything from the Kafka data dir and the
ZooKeeper data dir.
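
For anyone hitting the same thing, a rough sketch of the commands we ran; the
data paths below are assumptions (check log.dirs in server.properties and
dataDir in zookeeper.properties for the real locations), and wiping them
destroys all topics and offsets:

# stop Kafka and ZooKeeper on every node
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh
# wipe the data directories (paths are an assumption, adjust to your setup)
rm -rf /var/lib/kafka/data/*
rm -rf /var/lib/zookeeper/data/*
# start ZooKeeper first, then the brokers
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties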

Regards,
Vitaliy.

On Fri, Nov 10, 2017 at 5:41 PM, Stas Chizhov <schiz...@gmail.com> wrote:
> Hi, it looks like https://issues.apache.org/jira/browse/KAFKA-5970. Try
> restarting broker 1.
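>
> A restart of just that broker should be enough; a sketch, assuming the
> standard scripts and install layout that ship with the Kafka distribution:
>
> # on the host running broker 1
> bin/kafka-server-stop.sh
> bin/kafka-server-start.sh -daemon config/server.properties
> # then check that the missing replicas rejoin the ISR
> bin/kafka-topics.sh --describe --topic test --zookeeper 127.0.0.1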
>
> Best regards,
> Stanislav.
>
> 2017-11-10 14:00 GMT+01:00 Vitaliy Semochkin <vitaliy...@gmail.com>:
>
>> Hi,
>>
>> I have a cluster with 3 brokers (0.11). When I create a topic with
>> min.insync.replicas=2 and replication-factor 2, the number of in-sync
>> replicas for some partitions of the created topic is less than
>> min.insync.replicas.
>> Why do some partitions have fewer than 2 in-sync replicas? How can I prevent it?
>>
>> Here are the commands I used and the output:
>> kafka-topics.sh --create --topic test --replication-factor 2 --config min.insync.replicas=2 --partitions 30 --zookeeper 127.0.0.1
>> kafka-topics.sh --describe --topic test --zookeeper 127.0.0.1
>>
>> Topic:test    PartitionCount:30    ReplicationFactor:2    Configs:min.insync.replicas=2
>>     Topic: highload    Partition: 0    Leader: 2    Replicas: 2,3    Isr: 2,3
>>     Topic: highload    Partition: 1    Leader: 3    Replicas: 3,1    Isr: 3,1
>>     Topic: highload    Partition: 2    Leader: 1    Replicas: 1,2    Isr: 1,2
>>     Topic: highload    Partition: 3    Leader: 2    Replicas: 2,1    Isr: 2,1
>>     Topic: highload    Partition: 4    Leader: 3    Replicas: 3,2    Isr: 3,2
>>     Topic: highload    Partition: 5    Leader: 1    Replicas: 1,3    Isr: 1
>>     Topic: highload    Partition: 6    Leader: 2    Replicas: 2,3    Isr: 2,3
>>     Topic: highload    Partition: 7    Leader: 3    Replicas: 3,1    Isr: 3,1
>>     Topic: highload    Partition: 8    Leader: 1    Replicas: 1,2    Isr: 1,2
>>     Topic: highload    Partition: 9    Leader: 2    Replicas: 2,1    Isr: 2,1
>>     Topic: highload    Partition: 10    Leader: 3    Replicas: 3,2    Isr: 3,2
>>     Topic: highload    Partition: 11    Leader: 1    Replicas: 1,3    Isr: 1
>>     Topic: highload    Partition: 12    Leader: 2    Replicas: 2,3    Isr: 2,3
>>     Topic: highload    Partition: 13    Leader: 3    Replicas: 3,1    Isr: 3,1
>>     Topic: highload    Partition: 14    Leader: 1    Replicas: 1,2    Isr: 1,2
>>     Topic: highload    Partition: 15    Leader: 2    Replicas: 2,1    Isr: 2,1
>>     Topic: highload    Partition: 16    Leader: 3    Replicas: 3,2    Isr: 3,2
>>     Topic: highload    Partition: 17    Leader: 1    Replicas: 1,3    Isr: 1
>>     Topic: highload    Partition: 18    Leader: 2    Replicas: 2,3    Isr: 2,3
>>     Topic: highload    Partition: 19    Leader: 3    Replicas: 3,1    Isr: 3,1
>>     Topic: highload    Partition: 20    Leader: 1    Replicas: 1,2    Isr: 1,2
>>     Topic: highload    Partition: 21    Leader: 2    Replicas: 2,1    Isr: 2,1
>>     Topic: highload    Partition: 22    Leader: 3    Replicas: 3,2    Isr: 3,2
>>     Topic: highload    Partition: 23    Leader: 1    Replicas: 1,3    Isr: 1
>>     Topic: highload    Partition: 24    Leader: 2    Replicas: 2,3    Isr: 2,3
>>     Topic: highload    Partition: 25    Leader: 3    Replicas: 3,1    Isr: 3,1
>>     Topic: highload    Partition: 26    Leader: 1    Replicas: 1,2    Isr: 1,2
>>     Topic: highload    Partition: 27    Leader: 2    Replicas: 2,1    Isr: 2,1
>>     Topic: highload    Partition: 28    Leader: 3    Replicas: 3,2    Isr: 3,2
>>     Topic: highload    Partition: 29    Leader: 1    Replicas: 1,3    Isr: 1
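>>
>> FWIW, the same describe can be narrowed to only the partitions whose ISR is
>> smaller than the replica set, which makes the affected ones easier to spot:
>> kafka-topics.sh --describe --topic test --under-replicated-partitions --zookeeper 127.0.0.1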
>>
