[ 
https://issues.apache.org/jira/browse/KAFKA-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KahnCheny updated KAFKA-12860:
------------------------------
    Description: 
We encountered an issue with Kafka after running out of heap space: several 
brokers then halted on startup with the following error:

{code:java}
2021-05-27 14:00:47 main ERROR KafkaServer:159 - [KafkaServer id=1] Fatal error 
during KafkaServer startup. Prepare to shutdown
kafka.common.InvalidOffsetException: Attempt to append an offset (1125422119) 
to position 6553 no larger than the last offset appended (1125422119) to 
/dockerdata/kafka_data12/R_sh_level1_3_596_133-1/00000000001124738758.index.
        at 
kafka.log.OffsetIndex$$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:149)
        at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
        at kafka.log.OffsetIndex$$anonfun$append$1.apply(OffsetIndex.scala:139)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
        at kafka.log.OffsetIndex.append(OffsetIndex.scala:139)
        at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:290)
        at kafka.log.LogSegment$$anonfun$recover$1.apply(LogSegment.scala:278)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.log.LogSegment.recover(LogSegment.scala:278)
        at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:372)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:350)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:322)
        at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at kafka.log.Log.loadSegmentFiles(Log.scala:322)
        at kafka.log.Log.loadSegments(Log.scala:405)
        at kafka.log.Log.<init>(Log.scala:218)
        at kafka.log.Log$.apply(Log.scala:1776)
        at 
kafka.log.LogManager.kafka$log$LogManager$$loadLog(LogManager.scala:294)
        at 
kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$11$$anonfun$apply$15$$anonfun$apply$2.apply$mcV$sp(LogManager.scala:374)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2021-05-27 14:00:48 main ERROR KafkaServerStartable:143 - Exiting Kafka.
{code}
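For context, recovery fails because the offset index requires strictly increasing offsets: {{OffsetIndex.append}} throws {{InvalidOffsetException}} when the incoming offset is not larger than the last one appended. A minimal sketch of that invariant (illustrative only, not the actual Kafka source; the class and exception type here are hypothetical stand-ins):

{code:java}
// Simplified illustration of the invariant enforced by kafka.log.OffsetIndex.append.
// NOT the real Kafka implementation; names are illustrative.
final class OffsetIndexSketch {
    private long lastOffset = -1L;   // last offset appended to the index

    void append(long offset) {
        // Kafka rejects any offset that is not strictly larger than the
        // previously appended one, failing segment recovery.
        if (offset <= lastOffset) {
            throw new IllegalStateException(
                "Attempt to append an offset (" + offset + ") no larger than "
                + "the last offset appended (" + lastOffset + ")");
        }
        lastOffset = offset;
    }
}
{code}

With the segment dump below, recovery appends 1125422119 twice, which trips exactly this check.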

Dumping the log segment shows that offset 1125422119 appears as the base 
offset of two batches, i.e. the log contains non-monotonically incrementing 
offsets:

{code:java}
baseOffset: 1125421806 lastOffset: 1125421958 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532548379 CreateTime: 1622078435585 isvalid: true size: 110260 magic: 
2 compresscodec: GZIP crc: 4024531289
baseOffset: 1125421959 lastOffset: 1125422027 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532658639 CreateTime: 1622078442831 isvalid: true size: 55250 magic: 
2 compresscodec: GZIP crc: 1867381940
baseOffset: 1125422028 lastOffset: 1125422118 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532713889 CreateTime: 1622078457410 isvalid: true size: 68577 magic: 
2 compresscodec: GZIP crc: 3993802638
baseOffset: 1125422119 lastOffset: 1125422257 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532782466 CreateTime: 1622078471656 isvalid: true size: 107229 magic: 
2 compresscodec: GZIP crc: 3510625081
baseOffset: 1125422119 lastOffset: 1125422138 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532889695 CreateTime: 1622078471124 isvalid: true size: 15556 magic: 
2 compresscodec: GZIP crc: 2377977722
baseOffset: 1125422139 lastOffset: 1125422173 baseSequence: -1 lastSequence: -1 
producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false 
position: 532905251 CreateTime: 1622078466094 isvalid: true size: 29834 magic: 
2 compresscodec: GZIP crc: 322023138
{code}
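The overlap is visible above: the batch at baseOffset 1125422119 repeats, and the next batch (baseOffset 1125422139) still falls inside the first 1125422119 batch's range (lastOffset 1125422257). A small hypothetical helper (not a Kafka tool) that flags such batches in {{DumpLogSegments}} output:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: flags batches in a DumpLogSegments dump whose
// baseOffset does not advance past the highest lastOffset seen so far.
final class NonMonotonicScan {
    private static final Pattern BATCH =
        Pattern.compile("baseOffset: (\\d+) lastOffset: (\\d+)");

    static List<Long> findOverlaps(String dump) {
        List<Long> bad = new ArrayList<>();
        long prevLast = -1L;
        Matcher m = BATCH.matcher(dump);
        while (m.find()) {
            long base = Long.parseLong(m.group(1));
            long last = Long.parseLong(m.group(2));
            if (base <= prevLast) {
                bad.add(base);          // batch overlaps a predecessor
            }
            prevLast = Math.max(prevLast, last);
        }
        return bad;
    }
}
{code}

Run against the dump above, it flags both the repeated 1125422119 batch and the 1125422139 batch.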

> Partition offset due to non-monotonically incrementing offsets in logs
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-12860
>                 URL: https://issues.apache.org/jira/browse/KAFKA-12860
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.1.1
>            Reporter: KahnCheny
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
