[ https://issues.apache.org/jira/browse/KAFKA-3892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15345400#comment-15345400 ]

Noah Sloan commented on KAFKA-3892:
-----------------------------------

I can say that all producers and consumers ended up with metadata for all 
topics (according to the heap dump), not just the ones that might not have had 
any subscriptions yet. So there is something pathological about it, since the 
condition never corrects itself. Also, when I was debugging, it was never the 
first metadata response that contained all topics. 
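Something like the following standalone check could be used to watch per-client heap growth against a large cluster without taking a heap dump. The broker address and topic name are placeholders, and it only exercises producers rather than the mixed producer/consumer workload described in the issue:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class MetadataFootprint {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at a cluster with thousands
        // of topics/partitions to make the overhead visible.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        List<KafkaProducer<byte[], byte[]>> producers = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
            // Touch a single (placeholder) topic so the client performs a metadata fetch.
            producer.partitionsFor("some-topic");
            producers.add(producer);

            // Print approximate heap usage after each additional client.
            System.gc();
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("producers=" + (i + 1) + ", heap used ~" + usedMb + " MB");
        }

        for (KafkaProducer<byte[], byte[]> producer : producers) {
            producer.close();
        }
    }
}
{code}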

> Clients retain metadata for non-subscribed topics
> -------------------------------------------------
>
>                 Key: KAFKA-3892
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3892
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.1
>            Reporter: Noah Sloan
>
> After upgrading to 0.9.0.1 from 0.8.2 (and adopting the new consumer and 
> producer classes), we noticed services with small heaps crashing due to 
> OutOfMemoryErrors. These services contained many producers and consumers (~20 
> total) and were connected to brokers with >2000 topics and over 10k 
> partitions. Heap dumps revealed that each client had 3.3MB of Metadata 
> retained in its Cluster, with references to topics that were not being 
> produced or subscribed to. While the services were running with 128MB of heap 
> prior to the upgrade, we had to increase max heap to 200MB to accommodate 
> all the extra data. 
> While this is not technically a memory leak, it does impose a significant 
> overhead on clients when connected to a large cluster.



