Is this what you want from kafka-topics? I took this dump just now, while the exception was occurring.
./kafka-topics.sh --describe test2 --zookeeper localhost:2181
Topic:test2  PartitionCount:3  ReplicationFactor:1  Configs:
    Topic: test2  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: test2  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
    Topic: test2  Partition: 2  Leader: 0  Replicas: 0  Isr: 0

On Tue, Jun 10, 2014 at 9:22 PM, Jun Rao <jun...@gmail.com> wrote:

> Could you use the kafka-topics command to describe test2 and see if the
> leader is available?
>
> Thanks,
>
> Jun
>
>
> On Tue, Jun 10, 2014 at 11:04 AM, Prakash Gowri Shankor <
> prakash.shan...@gmail.com> wrote:
>
> > Hi,
> >
> > I am running a cluster with a single broker, the performance producer
> > script, and 3 consumers.
> > On a fresh start of the cluster, the producer throws this exception.
> > I was able to run this cluster successfully on the same topic (test2)
> > the first time.
> >
> > The solution (from Stack Overflow) seems to be to delete the topic
> > data in the broker and in ZooKeeper. That doesn't seem like a viable
> > production solution to me. Is there a way to solve this without losing
> > topic data?
> >
> > Also, does the incidence of this problem decrease if I run more
> > brokers/servers?
> >
> > I see logs like these in server.log:
> >
> > [2014-06-10 10:45:35,194] WARN [KafkaApi-0] Offset request with
> > correlation id 0 from client on partition [test2,0] failed due to
> > Topic test2 either doesn't exist or is in the process of being deleted
> > (kafka.server.KafkaApis)
> >
> > [2014-06-10 10:45:35,211] WARN [KafkaApi-0] Offset request with
> > correlation id 0 from client on partition [test2,1] failed due to
> > Topic test2 either doesn't exist or is in the process of being deleted
> > (kafka.server.KafkaApis)
> >
> > [2014-06-10 10:45:35,221] WARN [KafkaApi-0] Offset request with
> > correlation id 0 from client on partition [test2,2] failed due to
> > Topic test2 either doesn't exist or is in the process of being deleted
> > (kafka.server.KafkaApis)
> >
> > The exception trace is:
> >
> > [2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events
> > (kafka.producer.async.ProducerSendThread)
> >
> > kafka.common.FailedToSendMessageException: Failed to send messages
> > after 3 tries.
> >     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> >     at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
> >     at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
> >     at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
> >     at scala.collection.immutable.Stream.foreach(Stream.scala:526)
> >     at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
> >     at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> >
> > [2014-06-10 10:45:32,464] ERROR Failed to send requests for topics
> > test2 with correlation ids in [41,48]
> > (kafka.producer.async.DefaultEventHandler)
> >
> > [2014-06-10 10:45:32,464] WARN Error while fetching metadata
> > [{TopicMetadata for topic test2 ->
> > No partition metadata for topic test2 due to
> > kafka.common.LeaderNotAvailableException}] for topic [test2]: class
> > kafka.common.LeaderNotAvailableException
> > (kafka.producer.BrokerPartitionInfo)
> >
> > [2014-06-10 10:45:32,464] ERROR Error in handling batch of 200 events
> > (kafka.producer.async.ProducerSendThread)
> >
> > kafka.common.FailedToSendMessageException: Failed to send messages
> > after 3 tries.
> >     at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> >     at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
> >     at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
> >     at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
> >     at scala.collection.immutable.Stream.foreach(Stream.scala:526)
> >     at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
> >     at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
> >
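FWIW, the leader check Jun suggested can be scripted if you need to run it repeatedly. The sketch below is mine, not anything shipped with Kafka: it just parses `kafka-topics.sh --describe` output and reports partitions whose `Leader` field is `-1`, which is what kafka-topics prints when no leader is currently elected. Function name and regex are my own.

```python
import re

def partitions_without_leader(describe_output):
    """Return partition ids from `kafka-topics.sh --describe` output
    whose Leader field is -1 (no leader currently elected)."""
    missing = []
    for line in describe_output.splitlines():
        # Match e.g. "Topic: test2  Partition: 1  Leader: -1  Replicas: 1  Isr: 1"
        m = re.search(r"Partition:\s*(\d+)\s+Leader:\s*(-?\d+)", line)
        if m and int(m.group(2)) == -1:
            missing.append(int(m.group(1)))
    return missing

sample = """\
Topic:test2  PartitionCount:3  ReplicationFactor:1  Configs:
    Topic: test2  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: test2  Partition: 1  Leader: -1  Replicas: 1  Isr: 1
    Topic: test2  Partition: 2  Leader: 0  Replicas: 0  Isr: 0
"""
print(partitions_without_leader(sample))  # [1]
```

In your dump every partition shows a live leader id, so the metadata error is more likely the transient window right after a fresh start, before the broker has re-registered and metadata has propagated.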
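For context on the "Failed to send messages after 3 tries" line in the trace: the 0.8 producer retries a failed batch a few times (governed by the `message.send.max.retries` and `retry.backoff.ms` producer properties) before giving up. This is a simplified model of that behaviour, not the actual DefaultEventHandler code; raising the retry count or backoff gives metadata more time to settle after a restart:

```python
import time

class FailedToSendMessageException(Exception):
    """Stand-in for kafka.common.FailedToSendMessageException."""

def send_with_retries(send_fn, batch, max_retries=3, backoff_ms=100):
    """Simplified model of the 0.8 producer's retry loop.

    send_fn takes a list of messages and returns the ones that failed;
    we retry only the outstanding messages, sleeping between attempts,
    and raise once max_retries attempts are exhausted.
    """
    outstanding = batch
    for _ in range(max_retries):
        outstanding = send_fn(outstanding)
        if not outstanding:
            return  # everything acknowledged
        time.sleep(backoff_ms / 1000.0)  # models retry.backoff.ms
    raise FailedToSendMessageException(
        "Failed to send messages after %d tries." % max_retries)
```

With a send function that keeps failing (e.g. no leader the whole time), this raises after exactly `max_retries` attempts, which matches the "after 3 tries" wording in the log.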