yurmix opened a new issue, #18560:
URL: https://github.com/apache/druid/issues/18560

   When ingesting 10+ DataSketches HLL metrics with index_kafka, we are getting OOM errors (`java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer memory`) for tasks, regardless of peon off-heap size.
   The same behaviour is observed with cardinality metrics.
   When we remove the approximation metrics, the issue is no longer observed.
   
   ### Affected Version
   
   33.0.0 + JVM 17
   
   ### Description
   
   We are having an issue with memory allocation for index_kafka tasks on a datasource heavy on cardinality / DataSketches HLL metrics.
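   For context, the aggregators involved are of roughly this shape (names and parameters below are illustrative, not our actual spec):
   ```
   "metricsSpec": [
     { "type": "HLLSketchBuild", "name": "user_id_hll", "fieldName": "user_id", "lgK": 12 },
     { "type": "cardinality", "name": "user_id_card", "fields": ["user_id"], "byRow": false }
   ]
   ```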
   We keep getting this: `java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer memory (allocated: 68719453817, limit: 68719476736)`. We keep increasing the peon maxDirectMemorySize, but the issue remains (increased so far to 64 GiB).
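   For what it's worth, the limit reported in the error matches the configured 64 GiB exactly, so the maxDirectMemorySize setting is definitely being applied:
   ```
   64 GiB = 64 * 1024^3 bytes = 68,719,476,736 bytes   (the "limit: 68719476736" in the error)
   ```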
   
   In the ingestion spec, taskDuration is 30 minutes; everything else is left at the defaults.
   
   I have progressively increased the peon maxDirectMemorySize up to 64 GiB and `druid.processing.buffer.sizeBytes` up to 1 GiB.
   But I'm not sure whether I should increase it further, try to limit bytes/rows in memory, or try something else.
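   For reference, this is roughly where those knobs sit in the supervisor spec (values below are illustrative defaults; in our spec only taskDuration is set explicitly, and dataSchema is omitted here):
   ```
   {
     "type": "kafka",
     "ioConfig": {
       "topic": "myacmetopic",
       "taskDuration": "PT30M"
     },
     "tuningConfig": {
       "type": "kafka",
       "maxRowsInMemory": 150000,
       "maxBytesInMemory": 0,
       "intermediatePersistPeriod": "PT10M",
       "maxRowsPerSegment": 5000000
     }
   }
   ```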
   
   Full log: https://drive.google.com/file/d/1IgjsNJlIN1UuUUt3njQEK6XsqX6VdLw8
   
   Log snippets:
   
   Status:
   ```
   2025-09-22T02:01:47,559 INFO [task-runner-0-priority-0] 
org.apache.druid.indexing.worker.executor.ExecutorLifecycle - Task completed 
with status: {
     "id" : "index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm",
     "status" : "FAILED",
     "duration" : 854415,
     "errorMsg" : "java.lang.RuntimeException: java.lang.OutOfMemoryError: 
Cannot reserve 32768 bytes of direct buffer memory (allocated: 68719451105, 
limit: 68719476736)\n\tat 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.runInternal(SeekableStreamIndexTaskRunner.java:631)\n\tat
 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.run(SeekableStreamIndexTaskRunner.java:295)\n\tat
 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTask.runTask(SeekableStreamIndexTask.java:152)\n\tat...29
 characters omitted...nt.IndexMergerV9.merge(IndexMergerV9.java:1374)\n\tat 
org.apache.druid.segment.IndexMergerV9.multiphaseMerge(IndexMergerV9.java:1192)\n\tat
 org.apache.druid.segment.IndexMergerV9.persist(IndexMergerV9.java:1096)\n\tat 
org.apache.druid.segment.IndexMerger.persist(IndexMerger.java:237)\n\tat 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator.persistHydrant(StreamAppenderator.java:1658)\n\tat
 org.apache.druid.segment.realtime.appenderator.StreamAppenderator$2.call(StreamAppenderator.java:695)\n\t...
 6 more\n",
     "location" : {
       "host" : null,
       "port" : -1,
       "tlsPort" : -1
     }
   }
   ```
   
   Log:
   The task runs for ~12 minutes with about 2,500 persist events:
   ```
   2025-09-22T01:47:33,573 INFO [main] 
org.apache.druid.java.util.common.lifecycle.Lifecycle - Starting lifecycle 
[module] stage [ANNOUNCEMENTS]
   2025-09-22T01:47:33,585 INFO [main] 
org.apache.druid.java.util.common.lifecycle.Lifecycle - Successfully started 
lifecycle [module]
   2025-09-22T01:47:33,789 INFO [task-runner-0-priority-0] 
org.apache.kafka.clients.Metadata - [Consumer clientId=acme-consumer, 
groupId=acme-consumer-group] Cluster ID: 9at7IwYrQEePshmjHB9Dpg
   2025-09-22T01:47:34,285 INFO [task-runner-0-priority-0] 
org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - Announcing 
segment[myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_33]
 at new 
path[/druid-cluster/acme-cluster-dev/segments/druid-acme-cluster-dev-middlemanagers-6.druid-acme-cluster-dev-middlemanagers.acme-cluster-dev.svc.cluster.local:8100/druid-acme-cluster-dev-middlemanagers-6.druid-acme-cluster-dev-middlemanagers.acme-cluster-dev.svc.cluster.local:8100_indexer-executor__default_tier_2025-09-22T01:47:34.284Z_5d9d537b5dda40f0bd73e22f4f2988651]
   2025-09-22T01:47:35,452 INFO [task-runner-0-priority-0] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Marking 
ready for non-incremental async persist due to reasons[[No more rows can be 
appended to sink, (estimated) bytesCurrentlyInMemory[178970180] is greater than 
maxBytesInMemory[178956970]]].
   2025-09-22T01:47:35,462 INFO [task-runner-0-priority-0] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted 
rows[4,111] and (estimated) bytes[178,965,180]
   2025-09-22T01:47:36,117 INFO 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed 
in-memory data for 
segment[myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_33]
 spill[0] to disk in [654] ms (4,111 rows).
   2025-09-22T01:47:36,201 INFO 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed 
in-memory data with commit metadata 
[AppenderatorDriverMetadata{segments={index_kafka_myacmedatasource_4696788663ba69f_0=[SegmentWithState{segmentIdentifier=myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_33,
 state=APPENDING}]}, 
lastSegmentIds={index_kafka_myacmedatasource_4696788663ba69f_0=myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_33},
 
callerMetadata={nextPartitions=SeekableStreamEndSequenceNumbers{stream='myacmetopic',
 partitionSequenceNumberMap={KafkaTopicPartition{partition=151, topic='null', 
multiTopicPartition=false}=73773108988, .....}}}}] for segments: 
myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_33
   2025-09-22T01:47:36,202 INFO 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted 
stats: processed rows: [7644], persisted rows[4111], sinks: [1], total 
fireHydrants (across sinks): [1], persisted fireHydrants (across sinks): [1]
   ...
   ...
   2025-09-22T02:01:33,658 INFO [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Hit 
the row limit updating sequenceToCheckpoint, SequenceToCheckpoint: 
[SequenceMetadata{sequenceId=0, 
sequenceName='index_kafka_myacmedatasource_4696788663ba69f_0', 
assignments=[KafkaTopicPartition{partition=283, topic='null', 
multiTopicPartition=false},...., sentinel=false, checkpointed=false}], 
rowInSegment: [5000280], TotalRows: [5000280]
   2025-09-22T02:01:33,675 INFO [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Received pause command, pausing ingestion until resumed.
   2025-09-22T02:01:33,679 INFO [qtp207366788-415] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Sequence[index_kafka_myacmedatasource_4696788663ba69f_1] created with start 
offsets [{KafkaTopicPartition{partition=283, topic='null', 
multiTopicPartition=false}=68038841056, ......}] and end offsets [{.....}].
   2025-09-22T02:01:33,681 INFO [qtp207366788-415] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Saved 
sequence metadata to disk: [SequenceMetadata{sequenceId=0, 
sequenceName='index_kafka_myacmedatasource_4696788663ba69f_0', 
assignments=[.....
   2025-09-22T02:01:33,681 INFO [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Received resume command, resuming ingestion.
   2025-09-22T02:01:33,681 INFO [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Adding 
partition.....
   ....
   2025-09-22T02:01:33,929 INFO 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted 
stats: processed rows:.....
   ```
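   The maxBytesInMemory[178956970] above looks like the default (one-sixth of the maximum JVM heap, as far as I understand), which with the 1 GiB peon heap would explain the small ~4,100-row spills:
   ```
   1 GiB / 6 = 1,073,741,824 / 6 ≈ 178,956,970 bytes per in-memory batch
   ```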
   
   Then:
   ```
   2025-09-22T02:01:46,868 ERROR 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Persist failed, dying
   2025-09-22T02:01:46,868 INFO [task-runner-0-priority-0] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Persisted 
rows[4,113] and (estimated) bytes[178,986,076]
   2025-09-22T02:01:46,902 ERROR 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-publish] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - Error 
while publishing segments for sequenceNumber[SequenceMetadata{sequenceId=0, 
sequenceName='index_kafka_myacmedatasource_4696788663ba69f_0', assignments=[], 
startOffsets={KafkaTopicPartition{partition=283, topic='null', 
multiTopicPartition=false}=68038760444....., sentinel=false, checkpointed=true}]
   java.lang.OutOfMemoryError: Cannot reserve 65536 bytes of direct buffer 
memory (allocated: 68719451105, limit: 68719476736)
        at java.base/java.nio.Bits.reserveMemory(Bits.java:178) ~[?:?]
        at 
java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121) ~[?:?]
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332) 
~[?:?]
        at 
org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:102) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:95) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.collections.StupidPool.makeObjectWithHandler(StupidPool.java:184)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at org.apache.druid.collections.StupidPool.take(StupidPool.java:156) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.CompressedPools.getByteBuf(CompressedPools.java:110) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.DecompressingByteBufferObjectStrategy.fromByteBuffer(DecompressingByteBufferObjectStrategy.java:70)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.DecompressingByteBufferObjectStrategy.fromByteBuffer(DecompressingByteBufferObjectStrategy.java:30)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.GenericIndexed$BufferIndexed.get(GenericIndexed.java:598)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.CompressedColumnarIntsSupplier$CompressedColumnarInts.loadBuffer(CompressedColumnarIntsSupplier.java:309)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.CompressedColumnarIntsSupplier$1.get(CompressedColumnarIntsSupplier.java:86)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.CompressedVSizeColumnarMultiIntsSupplier$CompressedVSizeColumnarMultiInts.get(CompressedVSizeColumnarMultiIntsSupplier.java:186)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.column.StringUtf8DictionaryEncodedColumn$1MultiValueDimensionSelector.getRow(StringUtf8DictionaryEncodedColumn.java:219)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.selector.settable.SettableDimensionValueSelector.setValueFrom(SettableDimensionValueSelector.java:47)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.QueryableIndexIndexableAdapter$RowIteratorImpl.setRowPointerValues(QueryableIndexIndexableAdapter.java:431)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.QueryableIndexIndexableAdapter$RowIteratorImpl.moveToNext(QueryableIndexIndexableAdapter.java:410)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.ForwardingRowIterator.moveToNext(ForwardingRowIterator.java:62)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.MergingRowIterator.lambda$new$0(MergingRowIterator.java:84)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
java.base/java.util.stream.IntPipeline$10$1.accept(IntPipeline.java:392) ~[?:?]
        at 
java.base/java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:104)
 ~[?:?]
        at 
java.base/java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:711) 
~[?:?]
        at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) 
~[?:?]
        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
 ~[?:?]
        at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575) 
~[?:?]
        at 
java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
 ~[?:?]
        at 
java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
 ~[?:?]
        at 
org.apache.druid.segment.MergingRowIterator.<init>(MergingRowIterator.java:92) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.RowCombiningTimeAndDimsIterator.<init>(RowCombiningTimeAndDimsIterator.java:108)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.lambda$merge$6(IndexMergerV9.java:1369) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.makeMergedTimeAndDimsIterator(IndexMergerV9.java:1427)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:252) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.merge(IndexMergerV9.java:1374) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.multiphaseMerge(IndexMergerV9.java:1192) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.mergeQueryableIndex(IndexMergerV9.java:1134)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator.mergeAndPush(StreamAppenderator.java:958)
 ~[druid-server-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator.lambda$push$1(StreamAppenderator.java:827)
 ~[druid-server-33.0.0.jar:33.0.0]
        at 
com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:252)
 ~[guava-32.0.1-jre.jar:?]
        at 
com.google.common.util.concurrent.AbstractTransformFuture$TransformFuture.doTransform(AbstractTransformFuture.java:242)
 ~[guava-32.0.1-jre.jar:?]
        at 
com.google.common.util.concurrent.AbstractTransformFuture.run(AbstractTransformFuture.java:123)
 [guava-32.0.1-jre.jar:?]
        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
 [?:?]
        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 [?:?]
        at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
   2025-09-22T02:01:46,928 ERROR [task-runner-0-priority-0] 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner - 
Encountered exception in run() before persisting.
   java.lang.RuntimeException: java.lang.OutOfMemoryError: Cannot reserve 32768 
bytes of direct buffer memory (allocated: 68719451105, limit: 68719476736)
        at 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.runInternal(SeekableStreamIndexTaskRunner.java:631)
 [druid-indexing-service-33.0.0.jar:33.0.0]
        at 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskRunner.run(SeekableStreamIndexTaskRunner.java:295)
 [druid-indexing-service-33.0.0.jar:33.0.0]
        at 
org.apache.druid.indexing.seekablestream.SeekableStreamIndexTask.runTask(SeekableStreamIndexTask.java:152)
 [druid-indexing-service-33.0.0.jar:33.0.0]
        at 
org.apache.druid.indexing.common.task.AbstractTask.run(AbstractTask.java:179) 
[druid-indexing-service-33.0.0.jar:33.0.0]
        at 
org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:477)
 [druid-indexing-service-33.0.0.jar:33.0.0]
        at 
org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:449)
 [druid-indexing-service-33.0.0.jar:33.0.0]
        at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
 [guava-32.0.1-jre.jar:?]
        at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75)
 [guava-32.0.1-jre.jar:?]
        at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
 [guava-32.0.1-jre.jar:?]
        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
 [?:?]
        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 [?:?]
        at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
   Caused by: java.lang.OutOfMemoryError: Cannot reserve 32768 bytes of direct 
buffer memory (allocated: 68719451105, limit: 68719476736)
        at java.base/java.nio.Bits.reserveMemory(Bits.java:178) ~[?:?]
        at 
java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121) ~[?:?]
        at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332) 
~[?:?]
        at 
org.apache.druid.segment.writeout.FileWriteOutBytes.<init>(FileWriteOutBytes.java:48)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.writeout.TmpFileSegmentWriteOutMedium.lambda$makeWriteOutBytes$3(TmpFileSegmentWriteOutMedium.java:92)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.writeout.LazilyAllocatingHeapWriteOutBytes.ensureBytes(LazilyAllocatingHeapWriteOutBytes.java:235)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.writeout.LazilyAllocatingHeapWriteOutBytes.write(LazilyAllocatingHeapWriteOutBytes.java:166)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.writeout.LazilyAllocatingHeapWriteOutBytes.write(LazilyAllocatingHeapWriteOutBytes.java:158)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at java.base/java.io.OutputStream.write(OutputStream.java:127) ~[?:?]
        at 
org.apache.druid.segment.data.ObjectStrategy.writeTo(ObjectStrategy.java:97) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.data.GenericIndexedWriter.write(GenericIndexedWriter.java:268)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.DictionaryEncodedColumnMerger.mergeBitmaps(DictionaryEncodedColumnMerger.java:548)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.DictionaryEncodedColumnMerger.writeIndexes(DictionaryEncodedColumnMerger.java:414)
 ~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:288) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.merge(IndexMergerV9.java:1374) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.multiphaseMerge(IndexMergerV9.java:1192) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.IndexMergerV9.persist(IndexMergerV9.java:1096) 
~[druid-processing-33.0.0.jar:33.0.0]
        at org.apache.druid.segment.IndexMerger.persist(IndexMerger.java:237) 
~[druid-processing-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator.persistHydrant(StreamAppenderator.java:1658)
 ~[druid-server-33.0.0.jar:33.0.0]
        at 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator$2.call(StreamAppenderator.java:695)
 ~[druid-server-33.0.0.jar:33.0.0]
        ... 6 more
   2025-09-22T02:01:47,121 INFO 
[[index_kafka_myacmedatasource_4696788663ba69f_ndbmdddm]-appenderator-persist] 
org.apache.druid.segment.realtime.appenderator.StreamAppenderator - Flushed 
in-memory data for 
segment[myacmedatasource_2025-09-22T01:00:00.000Z_2025-09-22T02:00:00.000Z_2025-09-22T01:03:26.536Z_43]
 spill[16] to disk in [252] ms (4,111 rows).
   2025-09-22T02:01:47,172 WARN [Cleaner-0] 
org.apache.druid.collections.StupidPool - Not closed! Object leaked from 
StupidPool{name=littleEndByteBufPool, objectsCacheMaxCount=2147483647, 
poolSize=0}. Allowing gc to prevent leak.
   2025-09-22T02:01:47,172 WARN [Cleaner-0] 
org.apache.druid.collections.StupidPool - Not closed! Object leaked from 
StupidPool{name=littleEndByteBufPool, objectsCacheMaxCount=2147483647, 
poolSize=0}. Allowing gc to prevent leak.
   .... 
   (about 6,300 of these leak warnings follow)
   ```
   
   
   This is a pre-production test environment running on Kubernetes.  
Middlemanager sizing:
   ```
   middlemanagers:
     use: true
     replicas: 12
     workerCapacity: 1
     baseTaskDirPath: /druid/data/baseTaskDir
     jvmOptions:
       minHeapSize: 1G
       maxHeapSize: 1G
       maxDirectMemorySize: 1G
     resources:
   #    limits:
   #      cpu: 13
   #      memory: 80Gi
       requests:
         cpu: 5
         memory: 40Gi
     peon:
       jvmOptions:
         maxDirectMemorySize: 64G
         maxHeapSize: 1G
       forkProperties:
         druid.processing.numThreads: 2
         druid.processing.numMergeBuffers: 2
         druid.realtime.cache.populateCache: true
         druid.processing.buffer.sizeBytes: 1GiB
     preStop:
       enabled: false
     ephemeral:
       enabled: true
       name: druid-middlemanager-ephemeral
       storageClassName: longhorn-strict-local
       storage: 50Gi
   ```
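   For reference, my understanding of the documented peon direct-memory sizing for the processing buffers with these settings is only:
   ```
   druid.processing.buffer.sizeBytes * (numMergeBuffers + numThreads + 1)
   = 1 GiB * (2 + 2 + 1) = 5 GiB
   ```
   so the bulk of the 64 GiB of direct memory is apparently being consumed by something other than the processing buffers (the stack traces point at the persist/merge path).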

