Rajendra Jangir created KAFKA-6518:
--------------------------------------

             Summary: Kafka broker stops after configured log retention time and throws I/O exception in append to log
                 Key: KAFKA-6518
                 URL: https://issues.apache.org/jira/browse/KAFKA-6518
             Project: Kafka
          Issue Type: Bug
          Components: controller
    Affects Versions: 0.10.2.0
         Environment: Windows 7, Java 9.0.4, Kafka 0.10.2.0, ZooKeeper 3.3.6
            Reporter: Rajendra Jangir
             Fix For: 0.10.0.2
         Attachments: AppendToLogError.PNG

I am facing a serious issue with Kafka. I have multiple Kafka brokers, and I am producing approximately 100-200 messages per second through a Python script. Initially everything works fine, with no exceptions. Once the configured log retention time is reached (5 minutes in my case), the broker throws an I/O exception in append to log and then halts.
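For reference, a minimal sketch of the retention configuration assumed here (the actual server.properties was not attached, so the values below only illustrate how a 5-minute retention would typically be set):

 # server.properties (illustrative; only retention-related keys shown)
 # Delete log segments older than 5 minutes.
 log.retention.minutes=5
 # How often the log cleanup scheduler checks for deletable segments
 # (the broker default is 300000 ms, i.e. 5 minutes).
 log.retention.check.interval.ms=60000

The broker then logs the following fatal error and halts: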

[2018-02-01 17:18:39,168] FATAL [Replica Manager on Broker 3]: Halting due to unrecoverable I/O error while handling produce request: (kafka.server.ReplicaManager)
kafka.common.KafkaStorageException: I/O exception in append to log 'rjangir1-22'
 at kafka.log.Log.append(Log.scala:349)
 at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:443)
 at kafka.cluster.Partition$$anonfun$10.apply(Partition.scala:429)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:240)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:407)
 at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:393)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:393)
 at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:330)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:416)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
 at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
 at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
 at java.base/java.io.RandomAccessFile.setLength(Native Method)
 at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:125)
 at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:116)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.log.AbstractIndex.resize(AbstractIndex.scala:116)
 at kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(AbstractIndex.scala:175)
 at kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:175)
 at kafka.log.AbstractIndex$$anonfun$trimToValidSize$1.apply(AbstractIndex.scala:175)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
 at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:174)
 at kafka.log.Log.roll(Log.scala:774)
 at kafka.log.Log.maybeRoll(Log.scala:745)
 at kafka.log.Log.append(Log.scala:405)
 ... 22 more
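
The nested java.io.IOException points at the root cause: on Windows, a file that still has an active memory mapping (a "user-mapped section") cannot be truncated. Kafka memory-maps its index files, and when a segment rolls, AbstractIndex.resize calls RandomAccessFile.setLength on an index whose old mapping is still open, which Windows rejects. A minimal, self-contained Java sketch of that OS restriction (an illustration of the failure mode, not Kafka code):

 import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;
 import java.nio.MappedByteBuffer;
 import java.nio.channels.FileChannel;

 public class MappedTruncateRepro {
     public static void main(String[] args) throws IOException {
         File f = File.createTempFile("index-repro", ".idx");
         try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
             raf.setLength(1024);
             // Map the file, mirroring how the broker mmaps its offset indexes.
             MappedByteBuffer mmap =
                 raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);
             mmap.putInt(0, 42); // keep the mapping live
             // On Windows this throws java.io.IOException: "The requested
             // operation cannot be performed on a file with a user-mapped
             // section open"; on Linux/macOS the truncation succeeds.
             raf.setLength(512);
         }
     }
 }

Because the append path treats this as an unrecoverable storage error, ReplicaManager halts the broker rather than risk corrupting the log. This looks like the long-standing Windows mmap problem tracked in KAFKA-1194; running the broker on a non-Windows host avoids it.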
