Re: getting an error about: java.lang.OutOfMemoryError: Direct buffer memory

2016-11-16 Thread Wei Li
A java.lang.OutOfMemoryError is not necessarily directly related to memory
usage.  In your config the broker requests only 1G of heap.  If your system
is not memory-stressed, I would suggest checking the ulimit settings for the
Kafka runtime user, in particular the maximum number of open file
descriptors and the maximum number of processes (nofile and nproc).  If the
problem is not easily reproduced, resource limits are likely to be the issue.
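
For what it's worth, here is a minimal Linux-only sketch that prints those
two limits as the kernel sees them for the running JVM (assumes a Java 8+
runtime; the class name is made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Prints the nofile/nproc limits in effect for this JVM process,
// read from the Linux /proc interface.
public class ProcessLimits {
    public static void main(String[] args) throws IOException {
        Files.lines(Paths.get("/proc/self/limits"))
             .filter(line -> line.contains("Max open files")
                          || line.contains("Max processes"))
             .forEach(System.out::println);
    }
}

The output should agree with ulimit -n and ulimit -u when run in the shell
that launches the broker.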

On Tue, Nov 15, 2016 at 10:27 AM, Vytenis Silgalis <
vytenis.silga...@riseinteractive.com.invalid> wrote:

> kafka version - 0.9.0.0
> JVM flags:
> -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC
> -Djava.awt.headless=true
>
> It's not easily reproducible, but it has caused some issues for us; any
> insight would be appreciated.
>
> Thanks,
> Vytenis
>
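
If the limits look fine, it may also be worth watching the direct buffer
pool itself: unless -XX:MaxDirectMemorySize is set, the JVM caps direct
buffer memory at roughly the heap size, so this broker has about 1G of it.
Something like the following (untested sketch, Java 7+, class name made up)
prints the pool's usage:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Prints usage of the JVM's buffer pools; the "direct" pool is the one
// that threw the OutOfMemoryError in the stack trace below.
public class DirectPoolStats {
    public static void main(String[] args) {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}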


getting an error about: java.lang.OutOfMemoryError: Direct buffer memory

2016-11-15 Thread Vytenis Silgalis
kafka version - 0.9.0.0
JVM flags:
-Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC
-Djava.awt.headless=true

It's not easily reproducible, but it has caused some issues for us; any
insight would be appreciated.

Full stack:
[2016-11-15 07:27:47,850] ERROR [Replica Manager on Broker 2]: Error processing append operation on partition [fancy_topic_name,1] (kafka.server.ReplicaManager)
java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:631)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
        at sun.nio.ch.IOUtil.write(IOUtil.java:58)
        at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:205)
        at kafka.message.ByteBufferMessageSet.writeTo(ByteBufferMessageSet.scala:160)
        at kafka.log.FileMessageSet.append(FileMessageSet.scala:229)
        at kafka.log.LogSegment.append(LogSegment.scala:85)
        at kafka.log.Log.append(Log.scala:360)
        at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
        at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
        at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
        at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
        at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
        at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
        at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
        at java.lang.Thread.run(Thread.java:745)

Thanks,
Vytenis