On Thu, Feb 5, 2015 at 12:37 PM, Sumit Rangwala <sumitrangw...@gmail.com>
wrote:

>
>
> On Wed, Feb 4, 2015 at 9:23 PM, Jun Rao <j...@confluent.io> wrote:
>
>> Could you try the 0.8.2.0 release? It fixed one issue related to topic
>> creation.
>>
>
Jun,

If you need more info, let me know. It seems that TopicMetadataResponse is
expecting more fields than are present in the response.
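
To illustrate the shape of the failure (a minimal sketch, NOT Kafka's actual
decoding code; the function and field names are hypothetical): if a response
decoder trusts a count field in the payload but fewer entries are actually
present, the reader walks off the end of the data, which is exactly the form
of the ArrayIndexOutOfBoundsException seen in TopicMetadata.readFrom:

```python
import struct

def read_partition_metadata(buf):
    """Hypothetical decoder (not Kafka's code): a 4-byte big-endian
    count followed by that many 4-byte partition ids."""
    (count,) = struct.unpack_from(">i", buf, 0)
    partitions = []
    for i in range(count):
        # If the payload holds fewer entries than `count` claims, this
        # read runs past the end of the buffer -- the Python analogue of
        # the ArrayIndexOutOfBoundsException above.
        (pid,) = struct.unpack_from(">i", buf, 4 + 4 * i)
        partitions.append(pid)
    return partitions

# A consistent response: count=2 and two entries present.
ok = struct.pack(">iii", 2, 0, 1)
# A mismatched response: the header claims 8 entries but only 2 follow.
bad = struct.pack(">iii", 8, 0, 1)
```

Decoding `ok` succeeds; decoding `bad` fails partway through the loop, just
as the stack trace fails inside the Range.foreach over partitions.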


Sumit



>
> Jun,
>
> Tried with 0.8.2.0 and I still see the same error.
>
> I see the error given below almost incessantly on the client side for
> topic LAX1-GRIFFIN-r47-1423165897627. Things look fine on the broker side:
> Topic:LAX1-GRIFFIN-r47-1423165897627 PartitionCount:8 ReplicationFactor:1 Configs:
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 0 Leader: 49817 Replicas: 49817 Isr: 49817
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 1 Leader: 49818 Replicas: 49818 Isr: 49818
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 2 Leader: 49814 Replicas: 49814 Isr: 49814
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 3 Leader: 49817 Replicas: 49817 Isr: 49817
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 4 Leader: 49818 Replicas: 49818 Isr: 49818
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 5 Leader: 49814 Replicas: 49814 Isr: 49814
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 6 Leader: 49817 Replicas: 49817 Isr: 49817
> Topic: LAX1-GRIFFIN-r47-1423165897627 Partition: 7 Leader: 49818 Replicas: 49818 Isr: 49818
>
>
> Broker logs are at http://d.pr/f/1jLOf/5H8uWPqu in case you need them.
>
> ERROR:
>  2015-02-05 12:02:03,764 (LAX1-GRIFFIN-r47-1423165897627-pf6898-lax1-GriffinDownloader-1423166442395_f27c6501f3d5-1423166444213-d572da8f-leader-finder-thread) ClientUtils$ WARN: Fetching topic metadata with correlation id 152 for topics [Set(LAX1-GRIFFIN-r47-1423165897627)] from broker [id:49814,host:172.16.204.44,port:49814] failed
>  java.lang.ArrayIndexOutOfBoundsException: 7
>    at kafka.api.TopicMetadata$$anonfun$readFrom$1.apply$mcVI$sp(TopicMetadata.scala:38)
>    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:78)
>    at kafka.api.TopicMetadata$.readFrom(TopicMetadata.scala:36)
>    at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
>    at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
>    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>    at scala.collection.immutable.Range.foreach(Range.scala:81)
>    at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
>    at scala.collection.immutable.Range.map(Range.scala:46)
>    at kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:31)
>    at kafka.producer.SyncProducer.send(SyncProducer.scala:114)
>    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
>    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
>    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
>    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
>
>
>
>
>
>
>>
>> Thanks,
>>
>> Jun
>>
>> On Wed, Feb 4, 2015 at 12:58 AM, Sumit Rangwala <sumitrangw...@gmail.com>
>> wrote:
>>
>> > I am observing the following exception with kafka client:
>> >
>> > 2015-02-04 00:17:27,345 (LAX1-GRIFFIN-r8-1423037468055-pf13797-lax1-GriffinDownloader-1423037818264_c7b1e843ff51-1423037822122-eb7afca7-leader-finder-thread) ClientUtils$ WARN: Fetching topic metadata with correlation id 112 for topics [Set(LAX1-GRIFFIN-r8-1423037468055)] from broker [id:49649,host:172.16.204.44,port:49649] failed
>> > java.lang.ArrayIndexOutOfBoundsException: 7
>> >   at kafka.api.TopicMetadata$$anonfun$readFrom$1.apply$mcVI$sp(TopicMetadata.scala:38)
>> >   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:78)
>> >   at kafka.api.TopicMetadata$.readFrom(TopicMetadata.scala:36)
>> >   at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
>> >   at kafka.api.TopicMetadataResponse$$anonfun$3.apply(TopicMetadataResponse.scala:31)
>> >   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>> >   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
>> >   at scala.collection.immutable.Range.foreach(Range.scala:81)
>> >   at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
>> >   at scala.collection.immutable.Range.map(Range.scala:46)
>> >   at kafka.api.TopicMetadataResponse$.readFrom(TopicMetadataResponse.scala:31)
>> >   at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
>> >   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
>> >   at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
>> >   at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
>> >   at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
>> >
>> >
>> >
>> > # Background on what my code is doing:
>> >
>> > In my setup the Kafka brokers are set for auto topic creation. In the
>> > scenario above, a node informs the other nodes (currently 5 in total)
>> > about ~50 new (non-existent) topics, and all the nodes almost
>> > simultaneously open a consumer for each of these topics. This triggers
>> > topic creation for all the topics on the Kafka brokers. Most of the
>> > topics are created fine, but there are almost always a few topics that
>> > throw the above exception, and a Kafka producer is unable to send any
>> > data to such a topic (LAX1-GRIFFIN-r8-1423037468055 in the above case).
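>> > As a client-side mitigation sketch while the root cause is tracked
>> > down (the names here are hypothetical; this is not the Kafka client
>> > API): retrying the metadata fetch with backoff gives a freshly
>> > auto-created topic time to settle before giving up:

```python
import time

def fetch_with_retry(fetch_metadata, topic, retries=10, backoff_s=0.5):
    """Retry wrapper around a metadata fetch for a freshly auto-created
    topic. `fetch_metadata` is a hypothetical callable standing in for
    the client's metadata request; it may raise while the topic settles."""
    last_err = None
    for attempt in range(retries):
        try:
            return fetch_metadata(topic)
        except Exception as err:  # e.g. the deserialization failure above
            last_err = err
            time.sleep(backoff_s * (attempt + 1))  # linear backoff
    raise last_err
```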
>> >
>> >
>> > # Logs
>> > All Kafka broker logs (3 brokers) are available at
>> > http://d.pr/f/1eOGM/5UPMPfg5
>> > For these logs only LAX1-GRIFFIN-r8-1423037468055 had an issue; all
>> > other topics were fine.
>> >
>> >
>> > # Setup
>> > Zookeeper: 3.4.6
>> > Kafka broker: 0.8.2-beta
>> > Kafka clients: 0.8.2-beta
>> >
>> > # Kafka broker settings (all other settings are default 0.8.2-beta
>> > settings)
>> > kafka.controlled.shutdown.enable: 'FALSE'
>> > kafka.auto.create.topics.enable: 'TRUE'
>> > kafka.num.partitions: 8
>> > kafka.default.replication.factor: 1
>> > kafka.rebalance.backoff.ms: 3000
>> > kafka.rebalance.max.retries: 10
>> > kafka.log.retention.minutes: 1200
>> > kafka.delete.topic.enable: 'TRUE'
>> >
>> >
>> >
>> > Sumit
>> >
>>
>
>
