Yes, I gave it several minutes.
On Sat, Jun 14, 2014 at 2:18 PM, Michael G. Noll
wrote:
> Have you given Kafka some time to re-elect a new leader for the
> "missing" partition when you re-try steps 1-5?
>
> See here:
> > If you do, you should be able to go through steps
> > 1-8 without seeing L
Have you given Kafka some time to re-elect a new leader for the
"missing" partition when you re-try steps 1-5?
See here:
> If you do, you should be able to go through steps
> 1-8 without seeing LeaderNotAvailableExceptions (you may need to give
> Kafka some time to re-elect the remaining, second b
So if we go back to the 2-broker case, I tried your suggestion with
replication-factor 2:
./kafka-topics.sh --topic test2 --create --partitions 3 --zookeeper
localhost:2181 --replication-factor 2
When I repeat steps 1-5 I still see the exception. When I go to step 8
(back to 2 brokers), I don't s
In your second case (1-broker cluster and putting your laptop to sleep) these
exceptions should be transient and disappear after a while.
In the logs you should see ZK session expirations (hence the initial/transient
exceptions, which in this case are expected and ok), followed by new ZK
sessio
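The "transient, disappears after a while" behavior described above can be sketched as a plain retry loop. This is a minimal, hypothetical sketch: `fake_send` stands in for a real producer send call, and `LeaderNotAvailableError` is just a local stand-in for Kafka's LeaderNotAvailableException, not an import from any Kafka client.

```python
import time

class LeaderNotAvailableError(Exception):
    """Local stand-in for Kafka's LeaderNotAvailableException."""

def send_with_retry(send, retries=5, backoff_s=0.5):
    """Retry `send` while the partition leader is being re-elected."""
    for attempt in range(retries):
        try:
            return send()
        except LeaderNotAvailableError:
            if attempt == retries - 1:
                raise  # gave the cluster enough time; give up
            time.sleep(backoff_s * (attempt + 1))  # linear backoff

# Simulated broker: fails twice (election in progress), then succeeds.
state = {"calls": 0}
def fake_send():
    state["calls"] += 1
    if state["calls"] < 3:
        raise LeaderNotAvailableError("leader election in progress")
    return "ack"

print(send_with_retry(fake_send, backoff_s=0.01))  # prints: ack
```

The point is only that the exception clears on its own once a new leader exists; a real producer that retries (as the stock clients do) rides it out the same way.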
Thanks for your response, Michael.
In step 3, I am actually stopping the entire cluster and restarting it
without the 2nd broker. But I see your point. When I look in
/tmp/kafka-logs-2 (which is the log dir for the 2nd broker) I see it
holds test2-1 (i.e., the 1st partition of the test2 topic).
For /tmp/k
Prakash,
you have configured the topic with a replication factor of only 1, i.e. no
additional replica beyond "the original one". This replication setting
of 1 means that only one of the two brokers will ever host the (single)
replica -- which is implied to also be the leader in-sync replica -- of
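To make the point concrete: with 3 partitions, replication factor 1, and 2 brokers, each partition lives on exactly one broker, so killing a broker leaves some partitions with no replica at all and hence no possible leader. The sketch below uses a simple round-robin placement as an approximation of Kafka's actual assignment algorithm (an assumption for illustration, not Kafka's exact code):

```python
def assign_replicas(num_partitions, brokers, replication_factor):
    """Round-robin replica placement (simplified model of Kafka's assignment)."""
    return {p: [brokers[(p + r) % len(brokers)]
                for r in range(replication_factor)]
            for p in range(num_partitions)}

def leaderless_partitions(assignment, live_brokers):
    """Partitions whose every replica is on a dead broker -> no leader possible."""
    return [p for p, replicas in assignment.items()
            if not any(b in live_brokers for b in replicas)]

rf1 = assign_replicas(3, brokers=[0, 1], replication_factor=1)
print(rf1)                              # {0: [0], 1: [1], 2: [0]}
print(leaderless_partitions(rf1, {0}))  # broker 1 down -> partition 1 stuck: [1]

rf2 = assign_replicas(3, brokers=[0, 1], replication_factor=2)
print(leaderless_partitions(rf2, {0}))  # every partition has a live replica: []
```

With replication-factor 2 every partition keeps a replica on the surviving broker, which is why the re-election eventually succeeds in that configuration.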
Yes,
here are the steps:
Create the topic as: ./kafka-topics.sh --topic test2 --create --partitions 3
--zookeeper localhost:2181 --replication-factor 1
1) Start cluster with 2 brokers, 3 consumers.
2) Don't start any producer.
3) Shutdown cluster and disable one broker from starting.
4) Restart clust
Is this what you want from kafka-topics? I took this script dump now, while
the exception is occurring.
./kafka-topics.sh --describe --topic test2 --zookeeper localhost:2181
Topic:test2 PartitionCount:3 ReplicationFactor:1 Configs:
Topic: test2 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test2 Parti
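In output like the above, a partition with no available leader shows `Leader: -1`. A small parser (a hypothetical helper, not part of Kafka; the sample text is illustrative, not the poster's actual dump) can flag such partitions:

```python
import re

def unavailable_leaders(describe_output):
    """Return partition ids whose 'Leader:' field is -1 in
    `kafka-topics.sh --describe` output."""
    bad = []
    for line in describe_output.splitlines():
        m = re.search(r"Partition:\s*(\d+)\s+Leader:\s*(-?\d+)", line)
        if m and int(m.group(2)) == -1:
            bad.append(int(m.group(1)))
    return bad

# Illustrative dump: partition 1's only replica is on the dead broker.
sample = """\
Topic:test2 PartitionCount:3 ReplicationFactor:1 Configs:
Topic: test2 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test2 Partition: 1 Leader: -1 Replicas: 1 Isr:
Topic: test2 Partition: 2 Leader: 0 Replicas: 0 Isr: 0
"""
print(unavailable_leaders(sample))  # [1]
```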
Could you use the kafka-topic command to describe test2 and see if the
leader is available?
Thanks,
Jun
On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> Hi,
>
> I am running a cluster with a single broker, the performance producer
> script and 3 con
Do you by any chance have steps to reproduce this issue easily?
On Tue, Jun 10, 2014 at 02:23:20PM -0700, Prakash Gowri Shankor wrote:
> No, I have not used the delete topic feature. I have been manually deleting
> the topics from zookeeper and removing the topic from the kafka and zk logs.
> I've
Any exceptions you saw on the controller log?
On Tue, Jun 10, 2014 at 2:23 PM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> No, I have not used the delete topic feature. I have been manually deleting
> the topics from zookeeper and removing the topic from the kafka and zk
> logs.
>
No, I have not used the delete topic feature. I have been manually deleting
the topics from zookeeper and removing the topic from the kafka and zk logs.
I've experimented a bit more. It seems this occurs when I have a
single broker running. When I restart with 2 brokers, the problem goes away.
Did you use the delete topic command?
That was an experimental feature in the 0.8.1 release with several
bugs. The fixes are all on trunk, but those fixes did not make it into
0.8.1.1 - except for a config option to disable delete-topic support
on the broker.
Joel
On Tue, Jun 10, 2014 at 01:07:4
From the moment it starts occurring, it is persistent. Restarts don't seem to
make it go away. The only thing that makes it go away is following the
steps listed in this stackoverflow thread:
http://stackoverflow.com/questions/23228222/running-into-leadernotavailableexception-when-using-kafka-0-8-1
Hello Prakash,
Is this exception transient or persistent on broker startup?
Guozhang
On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
prakash.shan...@gmail.com> wrote:
> Hi,
>
> I am running a cluster with a single broker, the performance producer
> script and 3 consumers.
> On a fres
Hi,
I am running a cluster with a single broker, the performance producer
script and 3 consumers.
On a fresh start of the cluster, the producer throws this exception.
I was able to run this cluster successfully on the same topic (test2)
the first time.
The solution (from stackover