No, I have not used the delete topic feature. I have been manually deleting
the topics from ZooKeeper and removing the topic data from the Kafka and ZooKeeper logs.
I've experimented a bit more. It seems this occurs when I have a
single broker running. When I restart with 2 brokers, the problem goes away.
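
For reference, the manual cleanup is roughly the following (a sketch only;
the znode paths assume no ZooKeeper chroot and the topic test2, and the data
directory assumes the default log.dirs from server.properties):

    # stop the broker first, then remove the topic's znodes
    bin/zkCli.sh -server localhost:2181
      rmr /brokers/topics/test2
      rmr /config/topics/test2
    # then remove the topic's partition directories from the broker's log dir
    rm -rf /tmp/kafka-logs/test2-*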


On Tue, Jun 10, 2014 at 2:09 PM, Joel Koshy <jjkosh...@gmail.com> wrote:

> Did you use the delete topic command?
>
> That was an experimental feature in the 0.8.1 release with several
> bugs. The fixes are all on trunk, but those fixes did not make it into
> 0.8.1.1 - except for a config option to disable delete-topic support
> on the broker.
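>
> As far as I remember, the setting is delete.topic.enable in
> server.properties; it defaults to false, so delete-topic stays disabled
> unless you explicitly turn it on:
>
>     # assumed property name; defaults to false in 0.8.1.1
>     delete.topic.enable=false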
>
> Joel
>
> On Tue, Jun 10, 2014 at 01:07:45PM -0700, Prakash Gowri Shankor wrote:
> > From the moment it starts occurring, it is persistent. Restarts don't
> > seem to make it go away. The only thing that makes it go away is
> > following the steps listed in this Stack Overflow thread:
> >
> > http://stackoverflow.com/questions/23228222/running-into-leadernotavailableexception-when-using-kafka-0-8-1-with-zookeeper-3
> >
> >
> > On Tue, Jun 10, 2014 at 12:47 PM, Guozhang Wang <wangg...@gmail.com> wrote:
> >
> > > Hello Prakash,
> > >
> > > Is this exception transient or persistent on broker startup?
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
> > > prakash.shan...@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > I am running a cluster with a single broker, the performance producer
> > > > script, and 3 consumers.
> > > > On a fresh start of the cluster, the producer throws this exception.
> > > > I was able to run this cluster successfully on the same topic (test2)
> > > > the first time.
> > > >
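> > > > For reference, the cluster is started roughly like this (a sketch;
> > > > flag names are from memory, so check each script's --help output):
> > > >
> > > >     bin/kafka-server-start.sh config/server.properties
> > > >     bin/kafka-producer-perf-test.sh --broker-list localhost:9092 \
> > > >         --topics test2 --messages 100000
> > > >     bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test2
> > > >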
> > > > The solution (from Stack Overflow) seems to be to delete the topic
> > > > data in the broker and in ZooKeeper. This doesn't seem to be a viable
> > > > production solution to me. Is there a way to solve this without losing
> > > > topic data?
> > > >
> > > > Also, does the incidence of this problem decrease if I run more
> > > > brokers/servers?
> > > >
> > > > I see logs like these in the server.log
> > > >
> > > > [2014-06-10 10:45:35,194] WARN [KafkaApi-0] Offset request with
> > > > correlation id 0 from client  on partition [test2,0] failed due to
> > > > Topic test2 either doesn't exist or is in the process of being deleted
> > > > (kafka.server.KafkaApis)
> > > >
> > > > [2014-06-10 10:45:35,211] WARN [KafkaApi-0] Offset request with
> > > > correlation id 0 from client  on partition [test2,1] failed due to
> > > > Topic test2 either doesn't exist or is in the process of being deleted
> > > > (kafka.server.KafkaApis)
> > > >
> > > > [2014-06-10 10:45:35,221] WARN [KafkaApi-0] Offset request with
> > > > correlation id 0 from client  on partition [test2,2] failed due to
> > > > Topic test2 either doesn't exist or is in the process of being deleted
> > > > (kafka.server.KafkaApis)
> > > >
> > > > The exception trace is:
> > > >
> > > > [2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events
> > > > (kafka.producer.async.ProducerSendThread)
> > > > kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
> > > >   at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> > > >   at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
> > > >   at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
> > > >   at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
> > > >   at scala.collection.immutable.Stream.foreach(Stream.scala:526)
> > > >   at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
> > > >   at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> > > >
> > > > [2014-06-10 10:45:32,464] ERROR Failed to send requests for topics test2
> > > > with correlation ids in [41,48] (kafka.producer.async.DefaultEventHandler)
> > > >
> > > > [2014-06-10 10:45:32,464] WARN Error while fetching metadata
> > > > [{TopicMetadata for topic test2 ->
> > > > No partition metadata for topic test2 due to
> > > > kafka.common.LeaderNotAvailableException}] for topic [test2]: class
> > > > kafka.common.LeaderNotAvailableException
> > > > (kafka.producer.BrokerPartitionInfo)
> > > >
> > > > [2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events
> > > > (kafka.producer.async.ProducerSendThread)
> > > > kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
> > > >   at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> > > >   at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
> > > >   at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
> > > >   at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
> > > >   at scala.collection.immutable.Stream.foreach(Stream.scala:526)
> > > >   at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
> > > >   at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
>
>
