Hi team,

We're having an issue compacting the __consumer_offsets topic in our
cluster. log-cleaner.log shows:

[2017-05-16 11:56:28,993] INFO Cleaner 0: Building offset map for log
__consumer_offsets-15 for 349 segments in offset range [0, 619265471).
(kafka.log.LogCleaner)
[2017-05-16 11:56:29,014] ERROR [kafka-log-cleaner-thread-0], Error due to
 (kafka.log.LogCleaner)
java.lang.IllegalArgumentException: requirement failed: 306088059 messages
in segment __consumer_offsets-15/00000000000000000000.log but offset map
can fit only 74999999. You can increase log.cleaner.dedupe.buffer.size or
decrease log.cleaner.threads
at scala.Predef$.require(Predef.scala:219)
at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:584)
at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:580)
at
scala.collection.immutable.Stream$StreamWithFilter.foreach(Stream.scala:570)
at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:580)
at kafka.log.Cleaner.clean(LogCleaner.scala:322)
at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:230)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:208)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-05-16 11:56:29,016] INFO [kafka-log-cleaner-thread-0], Stopped
 (kafka.log.LogCleaner)

We have log.cleaner.dedupe.buffer.size=2000000000, which is just under 2G,
but the map can still only fit 74,999,999 messages. The segment has
306,088,059 messages, about four times what the buffer can hold. We tried
setting log.cleaner.dedupe.buffer.size even larger, but then the log says:
[2017-05-16 11:52:16,238] WARN [kafka-log-cleaner-thread-0], Cannot use
more than 2G of cleaner buffer space per cleaner thread, ignoring excess
buffer space... (kafka.log.LogCleaner)
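For what it's worth, if we're reading the 0.9 cleaner code right, the dedupe map stores a 16-byte MD5 of each key plus an 8-byte offset (24 bytes per entry), and the usable capacity is the slot count times the load factor, which defaults to 0.9 (log.cleaner.io.buffer.load.factor). A quick back-of-envelope check (just a sketch under those assumptions) reproduces the 74,999,999 figure:

```python
# Sketch of how the cleaner appears to size its dedupe offset map
# (assumes Kafka 0.9.x internals: 16-byte MD5 key hash + 8-byte offset).
ENTRY_SIZE = 16 + 8                  # bytes per map entry
buffer_size = 2_000_000_000          # log.cleaner.dedupe.buffer.size
load_factor = 0.9                    # log.cleaner.io.buffer.load.factor default

slots = buffer_size // ENTRY_SIZE    # total slots in the map
capacity = int(slots * load_factor)  # usable entries before the map is "full"
print(capacity)                      # 74999999, matching the error message
```

So even at the 2G per-thread cap, the map tops out around 75M keys, well short of the ~306M messages in that one segment.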

The 00000000000000000000.log segment is 100 MB on disk, and
log.cleaner.threads=1. We're running Kafka 0.9.0.1.
How can we get past this?

Thanks,
Jun
