It seems the ZkSerializer in use has to be aligned with the KafkaProducer's configured key.serializer.class.
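To illustrate why the serializer choice matters: kafka.utils.ZKStringSerializer writes znode data as raw UTF-8 bytes, while ZkClient's default, org.I0Itec.zkclient.serialize.SerializableSerializer, uses Java object serialization, so metadata written with one encoding cannot be read back with the other. The sketch below uses hypothetical stand-in helpers (not the real ZkClient classes) just to show that the two byte encodings of the same string differ:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SerializerMismatch {

    // Stand-in mimicking SerializableSerializer: Java object serialization.
    static byte[] javaSerialize(String s) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // Stand-in mimicking ZKStringSerializer: plain UTF-8 bytes.
    static byte[] utf8Serialize(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] javaBytes = javaSerialize("{\"version\":1}");
        byte[] utf8Bytes = utf8Serialize("{\"version\":1}");
        // The encodings differ, so a reader expecting UTF-8 JSON cannot
        // parse znode data that was written via Java serialization.
        System.out.println(Arrays.equals(javaBytes, utf8Bytes)); // false
    }
}
```

This is why Kafka tooling that reads topic metadata from ZooKeeper as UTF-8 strings fails to see topics registered through a ZkClient left on its default serializer.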
On Thu, Oct 23, 2014 at 1:13 AM, Stevo Slavić <ssla...@gmail.com> wrote:

> Still have to understand what is going on, but when I set
> kafka.utils.ZKStringSerializer to be the ZkSerializer for the ZkClient
> used in AdminUtils calls, KafkaProducer could see the created topic...
> The default ZkSerializer is
> org.I0Itec.zkclient.serialize.SerializableSerializer.
>
> Kind regards,
> Stevo Slavic.
>
> On Wed, Oct 22, 2014 at 10:03 PM, Stevo Slavić <ssla...@gmail.com> wrote:
>
>> Output on trunk is clean too, after a clean build:
>>
>> ~/git/oss/kafka [trunk|✔]
>> 22:00 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
>> 059915e6-56ef-4b8e-8e95-9f676313a01c --describe
>> Error while executing topic command next on empty iterator
>> java.util.NoSuchElementException: next on empty iterator
>>     at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>>     at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>>     at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>>     at scala.collection.AbstractIterable.head(Iterable.scala:54)
>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:137)
>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:127)
>>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>>     at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:127)
>>     at kafka.admin.TopicCommand$.main(TopicCommand.scala:56)
>>     at kafka.admin.TopicCommand.main(TopicCommand.scala)
>>
>> On Wed, Oct 22, 2014 at 9:45 PM, Stevo Slavić <ssla...@gmail.com> wrote:
>>
>>> kafka-topics.sh execution, from latest trunk:
>>>
>>> ~/git/oss/kafka [trunk|✔]
>>> 21:00 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
>>> 059915e6-56ef-4b8e-8e95-9f676313a01c --describe
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in
>>> [jar:file:/Users/d062007/git/oss/kafka/core/build/dependant-libs-2.10.1/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in
>>> [jar:file:/Users/d062007/git/oss/kafka/core/build/dependant-libs-2.10.1/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> Error while executing topic command next on empty iterator
>>> java.util.NoSuchElementException: next on empty iterator
>>>     at scala.collection.Iterator$$anon$2.next(Iterator.scala:39)
>>>     at scala.collection.Iterator$$anon$2.next(Iterator.scala:37)
>>>     at scala.collection.IterableLike$class.head(IterableLike.scala:91)
>>>     at scala.collection.AbstractIterable.head(Iterable.scala:54)
>>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:170)
>>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:160)
>>>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>>>     at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:160)
>>>     at kafka.admin.TopicCommand$.main(TopicCommand.scala:60)
>>>     at kafka.admin.TopicCommand.main(TopicCommand.scala)
>>>
>>> Output from the same command on the 0.8.1 branch is cleaner, but shows
>>> the same exception:
>>>
>>> ~/git/oss/kafka [0.8.1|✔]
>>> 21:12 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
>>> 059915e6-56ef-4b8e-8e95-9f676313a01c --describe
>>> Error while executing topic command null
>>> java.util.NoSuchElementException
>>>     at scala.collection.IterableLike$class.head(IterableLike.scala:101)
>>>     at scala.collection.immutable.Map$EmptyMap$.head(Map.scala:73)
>>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:137)
>>>     at kafka.admin.TopicCommand$$anonfun$describeTopic$1.apply(TopicCommand.scala:127)
>>>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>>>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>>>     at kafka.admin.TopicCommand$.describeTopic(TopicCommand.scala:127)
>>>     at kafka.admin.TopicCommand$.main(TopicCommand.scala:56)
>>>     at kafka.admin.TopicCommand.main(TopicCommand.scala)
>>>
>>> On Wed, Oct 22, 2014 at 5:30 PM, Guozhang Wang <wangg...@gmail.com> wrote:
>>>
>>>> Hello Stevo,
>>>>
>>>> Your understanding of the configs is correct, and it is indeed weird
>>>> that the producer gets the exception after the topic is created. Could
>>>> you use the kafka-topics command to check whether the leaders exist?
>>>>
>>>> kafka-topics.sh --zookeeper XXX --topic [topic-name] --describe
>>>>
>>>> Guozhang
>>>>
>>>> On Wed, Oct 22, 2014 at 5:57 AM, Stevo Slavić <ssla...@gmail.com> wrote:
>>>>
>>>> > Hello Apache Kafka users,
>>>> >
>>>> > Using Kafka 0.8.1.1 (a single instance with a single ZK 3.4.6 running
>>>> > locally), with auto topic creation disabled, in a test I have a topic
>>>> > created with AdminUtils.createTopic (AdminUtils.topicExists returns
>>>> > true), but KafkaProducer keeps throwing
>>>> > UnknownTopicOrPartitionException on send requests even after 100
>>>> > retries, both when topic.metadata.refresh.interval.ms and
>>>> > retry.backoff.ms are left at their defaults and when customized.
>>>> >
>>>> > Am I doing something wrong or is this a known bug?
>>>> >
>>>> > How long does it typically take for metadata to be refreshed?
>>>> > How long does it take for a leader to be elected?
>>>> >
>>>> > Documentation for retry.backoff.ms states:
>>>> > "Before each retry, the producer refreshes the metadata of relevant
>>>> > topics to see if a new leader has been elected. Since leader election
>>>> > takes a bit of time, this property specifies the amount of time that
>>>> > the producer waits before refreshing the metadata."
>>>> >
>>>> > Do I understand these docs correctly - on failure to send a message,
>>>> > such as an unknown topic, if retries are configured the producer will
>>>> > wait for the configured retry.backoff.ms, then initiate and wait for
>>>> > a metadata refresh to complete, and only then retry sending?
>>>> >
>>>> > Kind regards,
>>>> > Stevo Slavic.
>>>>
>>>> --
>>>> -- Guozhang
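For reference, the retry-related settings discussed in the original question would be set in the producer's properties roughly as below. This is a sketch assuming the 0.8.1-era Scala producer's config names as quoted in the thread; the values are illustrative, not recommendations:

```properties
# Retries of a failed send before giving up
message.send.max.retries=3
# Wait before refreshing metadata and retrying a failed send
retry.backoff.ms=100
# Periodic metadata refresh interval (default 600000 ms = 10 minutes)
topic.metadata.refresh.interval.ms=600000
```

Note that per the quoted docs, retry.backoff.ms governs the pause before the pre-retry metadata refresh, while topic.metadata.refresh.interval.ms only controls the periodic background refresh.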