[ https://issues.apache.org/jira/browse/KAFKA-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17304593#comment-17304593 ]

Wenbing Shen commented on KAFKA-12507:
--------------------------------------

What values do you set for these parameters:
-XX:MaxDirectMemorySize, num.network.threads, socket.request.max.bytes

We have encountered the same problem in one of our customers' environments. We
had set -XX:MaxDirectMemorySize=2G and num.network.threads=120. I later worked
around the problem by measuring the real-time traffic and reducing
num.network.threads accordingly.
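
For a rough sense of why that combination can overflow, here is a
back-of-envelope sketch (my own estimate, not broker code): in the worst case,
each network thread holds a temporary direct buffer as large as the biggest
request it reads, bounded by socket.request.max.bytes (default 104857600
bytes):

{code:java}
// Back-of-envelope sketch (an estimate, not the broker's actual accounting):
// assume every network thread can hold one temporary direct buffer as large
// as the biggest request it reads, bounded by socket.request.max.bytes.
public class DirectMemoryEstimate {
    public static void main(String[] args) {
        long socketRequestMaxBytes = 104857600L; // default socket.request.max.bytes (100 MiB)
        int numNetworkThreads = 120;             // num.network.threads from our setup
        long worstCaseBytes = numNetworkThreads * socketRequestMaxBytes;
        System.out.printf("worst-case direct memory: %d bytes (~%.1f GiB)%n",
                worstCaseBytes, worstCaseBytes / (1024.0 * 1024.0 * 1024.0));
        // prints ~11.7 GiB, far above -XX:MaxDirectMemorySize=2G
    }
}
{code}

Reducing num.network.threads lowers that bound, which is consistent with why
our workaround helped.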

> java.lang.OutOfMemoryError: Direct buffer memory
> ------------------------------------------------
>
>                 Key: KAFKA-12507
>                 URL: https://issues.apache.org/jira/browse/KAFKA-12507
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>         Environment: kafka version: 2.0.1
> java version: 
> java version "1.8.0_211"
> Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
> the command we use to start kafka broker:
> java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -Djava.awt.headless=true 
> -XX:+ExplicitGCInvokesConcurrent 
>            Reporter: diehu
>            Priority: Major
>
> Hi, we have three brokers in our Kafka cluster, and we use scripts to send
> data to Kafka at a rate of about 36,000 events per second. After about one
> month, we got this OOM error:
> {code:java}
> [2021-01-09 17:12:24,750] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.OutOfMemoryError: Direct buffer memory
>     at java.nio.Bits.reserveMemory(Bits.java:694)
>     at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>     at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>     at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:241)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:195)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
>     at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:104)
>     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:117)
>     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335)
>     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296)
>     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:562)
>     at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:498)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
>     at kafka.network.Processor.poll(SocketServer.scala:679)
>     at kafka.network.Processor.run(SocketServer.scala:584)
>     at java.lang.Thread.run(Thread.java:748){code}
> The Kafka server does not shut down, but it keeps logging this error. While
> it is in this state, data cannot be produced to the cluster and consumers
> cannot consume from it.
> We used the recommended Java parameter -XX:+ExplicitGCInvokesConcurrent, but
> it does not seem to help.
> Only a restart of the Kafka cluster fixes the problem.
>  
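
The stack trace quoted above allocates through
sun.nio.ch.Util.getTemporaryDirectBuffer, the JDK's per-thread cache of
temporary direct buffers, so direct memory usage grows with the number of
network threads and the size of the largest reads they serve. On JDK 8u102 and
later, the per-entry cache size can be capped with -Djdk.nio.maxCachedBufferSize
(JDK-8147468); whether that fully mitigates this case is an assumption on my
part. As a minimal sketch (not from the ticket), the direct pool can be watched
so growth is visible before -XX:MaxDirectMemorySize is reached; the same values
are exposed over JMX as java.nio:type=BufferPool,name=direct:

{code:java}
// Minimal sketch (assumed monitoring approach, not part of the ticket):
// poll the JVM's direct buffer pool and print its size periodically.
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectPoolWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (BufferPoolMXBean pool
                    : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                if ("direct".equals(pool.getName())) {
                    System.out.printf("direct pool: count=%d used=%d bytes capacity=%d bytes%n",
                            pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
                }
            }
            Thread.sleep(10_000L); // sample every 10 seconds
        }
    }
}
{code}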



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
