Hi,
I have a Kafka 0.8 cluster with two nodes connected to three ZKs, both with the
same configuration except for the brokerId (one is 0 and the other is 1). I created
three topics, A, B and C, each with 4 partitions and a replication factor of 1. My
idea was to have 2 partitions per topic on each broker. However, when I
connect a producer, I can't get both brokers writing at the same time, and
I don't know what's going on.
My server.properties has the following entries:
auto.create.topics.enable=true
num.partitions=2
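(As a side note, in case auto-creation is part of the problem: if I remember the 0.8
tooling correctly, the topics could also be created explicitly with something like the
line below, though the exact flag names may be slightly off:
bin/kafka-create-topic.sh --zookeeper localhost:2181 --topic A --partition 4 --replica 1 )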
When I run bin/kafka-list-topic.sh --zookeeper localhost:2181 I get the
following partition leader assignments:
topic: A partition: 0 leader: 1 replicas: 1 isr: 1
topic: A partition: 1 leader: 0 replicas: 0 isr: 0
topic: A partition: 2 leader: 1 replicas: 1 isr: 1
topic: A partition: 3 leader: 0 replicas: 0 isr: 0
topic: B partition: 0 leader: 0 replicas: 0 isr: 0
topic: B partition: 1 leader: 1 replicas: 1 isr: 1
topic: B partition: 2 leader: 0 replicas: 0 isr: 0
topic: B partition: 3 leader: 1 replicas: 1 isr: 1
topic: C partition: 0 leader: 0 replicas: 0 isr: 0
topic: C partition: 1 leader: 1 replicas: 1 isr: 1
topic: C partition: 2 leader: 0 replicas: 0 isr: 0
topic: C partition: 3 leader: 1 replicas: 1 isr: 1
I've forced reassignment using the kafka-reassign-partitions tool with the
following JSON:
{"partitions": [
{"topic": "A", "partition": 1, "replicas": [0] },
{"topic": "A", "partition": 3, "replicas": [0] },
{"topic": "A", "partition": 0, "replicas": [1] },
{"topic": "A", "partition": 2, "replicas": [1] },
{"topic": "B", "partition": 1, "replicas": [0] },
{"topic": "B", "partition": 3, "replicas": [0] },
{"topic": "B", "partition": 0, "replicas": [1] },
{"topic": "B", "partition": 2, "replicas": [1] },
{"topic": "C", "partition": 0, "replicas": [0] },
{"topic": "C", "partition": 1, "replicas": [1] },
{"topic": "C", "partition": 2, "replicas": [0] },
{"topic": "C", "partition": 3, "replicas": [1] }
]}
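For reference, I ran the tool roughly like this (the JSON above is saved to a file I'm
calling reassign.json here; the flag names are from memory, so they may not be exact):
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --path-to-json-file reassign.json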
After the reassignment, I restarted the producer and nothing changed. I've also
tried restarting both brokers and the producer, with no luck.
The producer logs contain entries like these:
[2013-06-12 14:48:46,467] WARN Error while fetching metadata partition 0
leader: none replicas: isr: isUnderReplicated: false for
topic partition [C,0]: [class kafka.common.LeaderNotAvailableException]
(kafka.producer.BrokerPartitionInfo)
[2013-06-12 14:48:46,467] WARN Error while fetching metadata partition 0
leader: none replicas: isr: isUnderReplicated: false for
topic partition [C,0]: [class kafka.common.LeaderNotAvailableException]
(kafka.producer.BrokerPartitionInfo)
[2013-06-12 14:48:46,468] WARN Error while fetching metadata partition 2
leader: none replicas: isr: isUnderReplicated: false for
topic partition [C,2]: [class kafka.common.LeaderNotAvailableException]
(kafka.producer.BrokerPartitionInfo)
[2013-06-12 14:48:46,468] WARN Error while fetching metadata partition 2
leader: none replicas: isr: isUnderReplicated: false for
topic partition [C,2]: [class kafka.common.LeaderNotAvailableException]
(kafka.producer.BrokerPartitionInfo)
And sometimes lines like this:
[2013-06-12 14:55:37,339] WARN Error while fetching metadata
[{TopicMetadata for topic B ->
No partition metadata for topic B due to
kafka.common.LeaderNotAvailableException}] for topic [B]: class
kafka.common.LeaderNotAvailableException
(kafka.producer.BrokerPartitionInfo)
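If it helps with diagnosing this, I assume the leader/ISR state can also be read
straight out of ZooKeeper; assuming the standard 0.8 znode layout, something like
this from zkCli.sh should show the registered brokers and the state of one of the
affected partitions:
ls /brokers/ids
get /brokers/topics/C/partitions/0/state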
Do you guys have any idea what's going on?
Thanks in advance,
Alex