[ https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130123#comment-16130123 ]

jang commented on KAFKA-5734:
-----------------------------

Hello [~omkreddy],

In my tests over the last two days, the number of "array of char" ([C) objects
grew steadily as the old generation (OU) usage grew, as shown below.
Right after Kafka started, "array of char" was not the largest consumer of
bytes, but it kept growing.

num     #instances         #bytes  class name
----------------------------------------------
   1:       2329801      289205160  [C
   2:           626      135334760  [B
   3:       1155090       97186560  [Ljava.util.HashMap$Node;
   4:       2329743       55913832  java.lang.String
   5:       2138720       51329280  javax.management.ObjectName$Property
   6:       2137625       51313696  [Ljavax.management.ObjectName$Property;
   7:       1068598       51292704  java.util.HashMap
   8:       1068813       42752520  javax.management.ObjectName
   9:       1202446       38478272  java.util.HashMap$Node
  10:       1068079       25633896  org.apache.kafka.common.metrics.JmxReporter$KafkaMbean
  11:       1068092       17089472  java.util.HashMap$EntrySet
  12:        264851       12050544  [Ljava.lang.Object;
  13:        129789        7268184  java.util.LinkedHashMap
  14:        173217        6928680  java.util.LinkedHashMap$Entry
  15:        259664        6231936  java.util.ArrayList
  16:        181048        5793536  java.util.concurrent.ConcurrentHashMap$Node
  17:         86480        5534720  org.apache.kafka.common.metrics.Sensor
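Most of the retained bytes above sit under org.apache.kafka.common.metrics.JmxReporter$KafkaMbean
and its javax.management.ObjectName / HashMap children, so one way to watch the growth is to count
the Kafka MBeans registered in the JVM over time. A minimal sketch of such a check (the class name
KafkaMbeanCount and running it with local platform-MBeanServer access are my assumptions; the same
query should also work over the remote JMX connection enabled on the broker):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Minimal sketch: counts MBeans in kafka.* domains (kafka.server, kafka.network, ...)
// so their number can be sampled periodically and compared with OU growth.
public class KafkaMbeanCount {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Domain pattern; assumed to cover the growing JmxReporter beans.
        ObjectName pattern = new ObjectName("kafka.*:*");
        int count = server.queryNames(pattern, null).size();
        System.out.println("Registered Kafka MBeans: " + count);
    }
}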


I use the Java Kafka producer like this (messages are sent as byte arrays):

producer.send(new ProducerRecord(tm.getTopic().toString(), toBytes(tm.getMessage())), new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                        exception.printStackTrace();
                } else {
                        logger.debug("Topic :" + metadata.topic() + " Offset :" + metadata.offset());
                }
        }
});
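For reference, a self-contained sketch of the same send pattern with the 0.10.x producer API
(the bootstrap address, topic name, payload, and the single long-lived producer closed on
shutdown are placeholders, not details from the actual application above):

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ByteArrayProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        // One long-lived producer instance, closed when the application shuts down.
        Producer<String, byte[]> producer = new KafkaProducer<>(props);
        try {
            byte[] payload = "example message".getBytes();  // assumed payload
            producer.send(new ProducerRecord<String, byte[]>("test-topic", payload), new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("Topic: " + metadata.topic()
                                + " Offset: " + metadata.offset());
                    }
                }
            });
        } finally {
            producer.close();  // releases metrics and JMX resources registered by the client
        }
    }
}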


> Heap (Old generation space) gradually increases
> -----------------------------------------------
>
>                 Key: KAFKA-5734
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5734
>             Project: Kafka
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 0.10.2.0
>         Environment: ubuntu 14.04 / java 1.7.0
>            Reporter: jang
>         Attachments: heap-log.xlsx
>
>
> I set up a Kafka server on Ubuntu with 4GB of RAM.
> Heap (old generation space) size increases gradually, as shown in the attached
> Excel file, which records GC info at 1-minute intervals.
> Eventually OU occupies 2.6GB and GC takes too much time (and an out-of-memory
> exception occurs).
> The Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_


