[
https://issues.apache.org/jira/browse/KAFKA-677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jun Rao updated KAFKA-677:
--------------------------
Attachment: kafka_677-cleanup.patch
Not sure what's causing this. However, in Log.truncateTo(), there are a couple of
error cases where we only log an error and continue. Those error cases leave the
log in an inconsistent state. So, it's probably better to throw a
KafkaStorageException instead (the exception will propagate all the way up to
KafkaApis.handleLeaderAndIsrRequest() and cause the broker to shut down).
Attached a patch to clean this up.
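The change described above amounts to replacing a log-and-continue error path with a thrown KafkaStorageException. A minimal sketch of that pattern in Scala (the simplified Log class, field names, and truncateTo bodies here are illustrative stand-ins, not the actual Kafka code from the patch):

```scala
// Hypothetical, simplified illustration of the error-handling change.
// The real Log.truncateTo() does much more; only the pattern is shown.

class KafkaStorageException(msg: String) extends RuntimeException(msg)

class Log(var logEndOffset: Long) {

  // Before: log the problem and continue. The caller believes the
  // truncation happened, so the log is now silently inconsistent.
  def truncateToLenient(targetOffset: Long): Unit = {
    if (targetOffset > logEndOffset) {
      println(s"ERROR: cannot truncate to $targetOffset beyond log end offset $logEndOffset")
      return
    }
    logEndOffset = targetOffset
  }

  // After: fail fast. In the broker, the exception would propagate up
  // (ultimately to KafkaApis.handleLeaderAndIsrRequest()) and shut the
  // broker down rather than letting it run with a corrupt log.
  def truncateToStrict(targetOffset: Long): Unit = {
    if (targetOffset > logEndOffset)
      throw new KafkaStorageException(
        s"Cannot truncate to $targetOffset: beyond log end offset $logEndOffset")
    logEndOffset = targetOffset
  }
}
```

The trade-off is availability versus consistency: shutting the broker down is drastic, but it stops a known-bad log from serving reads or being replicated further.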
> Retention process gives exception if an empty segment is chosen for collection
> ------------------------------------------------------------------------------
>
> Key: KAFKA-677
> URL: https://issues.apache.org/jira/browse/KAFKA-677
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 0.8
> Reporter: Jay Kreps
> Assignee: Jay Kreps
> Fix For: 0.8
>
> Attachments: kafka_677-cleanup.patch
>
>
> java.io.FileNotFoundException: /mnt/u001/kafka_08_long_running_test/kafka-logs/NewsActivityEvent-3/00000000000000000000.index (No such file or directory)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
>         at kafka.log.OffsetIndex.resize(OffsetIndex.scala:244)
>         at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:233)
>         at kafka.log.Log.rollToOffset(Log.scala:459)
>         at kafka.log.Log.roll(Log.scala:443)
>         at kafka.log.Log.markDeletedWhile(Log.scala:395)
>         at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:241)
>         at kafka.log.LogManager$$anonfun$cleanupLogs$2.apply(LogManager.scala:277)
>         at kafka.log.LogManager$$anonfun$cleanupLogs$2.apply(LogManager.scala:275)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>         at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>         at scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:275)
>         at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
>         at kafka.utils.Utils$$anon$2.run(Utils.scala:66)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira