韩建东 created KAFKA-19375:
---------------------------
Summary: Some topic partitions on one Kafka broker in the cluster cannot be
cleaned up automatically, leading to disk space exhaustion
Key: KAFKA-19375
URL: https://issues.apache.org/jira/browse/KAFKA-19375
Project: Kafka
Issue Type: Bug
Components: core, log cleaner
Affects Versions: 3.5.0
Reporter: 韩建东
We have not been able to determine the root cause, but the process logs on the
faulty broker keep showing the following two errors: one during log retention
and one during log append.
[2025-03-15 00:45:21,743] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (org.apache.kafka.server.util.KafkaScheduler)
java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:533)
at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:796)
at org.apache.kafka.storage.internals.log.TimeIndex.maybeAppend(TimeIndex.java:206)
at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:527)
at kafka.log.LocalLog.$anonfun$roll$9(LocalLog.scala:531)
at kafka.log.LocalLog.$anonfun$roll$9$adapted(LocalLog.scala:531)
at scala.Option.foreach(Option.scala:407)
at kafka.log.LocalLog.$anonfun$roll$2(LocalLog.scala:531)
at kafka.log.LocalLog.roll(LocalLog.scala:713)
at kafka.log.UnifiedLog.roll(UnifiedLog.scala:1498)
at kafka.log.UnifiedLog.$anonfun$deleteSegments$2(UnifiedLog.scala:1351)
at kafka.log.UnifiedLog.deleteSegments(UnifiedLog.scala:1733)
at kafka.log.UnifiedLog.deleteRetentionMsBreachedSegments(UnifiedLog.scala:1337)
at kafka.log.UnifiedLog.deleteOldSegments(UnifiedLog.scala:1383)
at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1299)
at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1296)
at scala.collection.immutable.List.foreach(List.scala:431)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:1296)
at kafka.log.LogManager.$anonfun$startupWithConfigOverrides$2(LogManager.scala:594)
at org.apache.kafka.server.util.KafkaScheduler.lambda$schedule$1(KafkaScheduler.java:150)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
[2025-03-15 00:47:03,726] ERROR [ReplicaManager broker=6] Error processing append operation on partition ODAEDATASET._DEFAULT.topic_data_res_3020_0._DEFAULT-2 (kafka.server.ReplicaManager)
java.nio.BufferOverflowException
at java.nio.Buffer.nextPutIndex(Buffer.java:533)
at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:796)
at org.apache.kafka.storage.internals.log.TimeIndex.maybeAppend(TimeIndex.java:206)
at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:527)
at kafka.log.LocalLog.$anonfun$roll$9(LocalLog.scala:531)
at kafka.log.LocalLog.$anonfun$roll$9$adapted(LocalLog.scala:531)
at scala.Option.foreach(Option.scala:407)
at kafka.log.LocalLog.$anonfun$roll$2(LocalLog.scala:531)
at kafka.log.LocalLog.roll(LocalLog.scala:713)
at kafka.log.UnifiedLog.roll(UnifiedLog.scala:1498)
at kafka.log.UnifiedLog.maybeRoll(UnifiedLog.scala:1484)
at kafka.log.UnifiedLog.$anonfun$append$2(UnifiedLog.scala:824)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:1733)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:665)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1281)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1269)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1026)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1014)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:672)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:694)
at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:76)
at java.lang.Thread.run(Thread.java:750)
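For context on what both traces have in common: the failing frame is TimeIndex.maybeAppend writing an 8-byte timestamp into the segment's memory-mapped time index, and DirectByteBuffer.putLong throws BufferOverflowException when the buffer's position has already reached its limit, i.e. the index has no free entry slots left. A minimal standalone sketch of that failure mode (the 12-byte entry layout matches Kafka's time index format of an 8-byte timestamp plus a 4-byte relative offset; the demo class itself is ours, not Kafka code):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class TimeIndexOverflowDemo {
    public static void main(String[] args) {
        // A time index entry is 12 bytes: 8-byte timestamp + 4-byte relative offset.
        // Allocate room for exactly one entry to simulate a full index.
        ByteBuffer index = ByteBuffer.allocateDirect(12);
        index.putLong(System.currentTimeMillis()); // entry 1: timestamp
        index.putInt(42);                          // entry 1: relative offset

        // The buffer is now full (position == limit). Attempting to append a
        // second entry fails on the timestamp write, exactly the call that
        // fails in TimeIndex.maybeAppend in the traces above.
        try {
            index.putLong(System.currentTimeMillis());
        } catch (BufferOverflowException e) {
            System.out.println("BufferOverflowException");
        }
    }
}
```

This only reproduces the mechanics of the exception, not how the broker's index buffer ended up with no remaining capacity in the first place, which is the open question in this report.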
--
This message was sent by Atlassian Jira
(v8.20.10#820010)