Apart from the fact that the file rename is failing (the API itself notes that the rename can fail), it looks like the implementation of FileMessageSet's rename can cause a couple of issues, one of them being a file-handle leak.

The implementation is here: https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/log/FileMessageSet.scala#L268. Notice that the reference held in the file member variable is swapped for a new one, but the (old) FileChannel held by that FileMessageSet is never closed. That, I think, explains the leak. Furthermore, a new FileChannel isn't opened for the new File instance either, which is a separate issue.
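To illustrate the point, here's a small standalone Java sketch (not Kafka code; the file names are invented). On a POSIX filesystem, renaming a file neither closes nor invalidates an already-open FileChannel, so code that only swaps the File reference leaves the old descriptor open until someone explicitly closes it:

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

public class RenameLeakDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical file standing in for a Kafka log segment.
        File original = File.createTempFile("00000000000068170670", ".log");
        FileChannel channel = FileChannel.open(original.toPath(),
                StandardOpenOption.READ, StandardOpenOption.WRITE);

        File renamed = new File(original.getPath() + ".deleted");
        // File.renameTo can return false; callers must check the result.
        boolean ok = original.renameTo(renamed);

        // The old channel is still open after the rename: if the caller only
        // swaps its File reference, this descriptor is never released.
        System.out.println("rename succeeded: " + ok);
        System.out.println("old channel still open: " + channel.isOpen());

        channel.close(); // the close that the code path above appears to skip
        Files.deleteIfExists(renamed.toPath());
    }
}
```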

P.S.: I'm not very familiar with the Kafka code yet. The explanation above is based on a quick look at that piece of code and doesn't take into account any other context there might be.

-Jaikiran


On Thursday 08 January 2015 11:19 AM, Yonghui Zhao wrote:
CentOS release 6.3 (Final)


2015-01-07 22:18 GMT+08:00 Harsha <ka...@harsha.io>:

Yonghui,
            Which OS you are running.
-Harsha

On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
Yes, and I found the cause: the rename during deletion fails. It seems the files are deleted during the rename, and the resulting exception then prevents Kafka from closing the files.
But I don't know how the rename failure can happen.

[2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 70781650
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
        at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:636)
        at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:627)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
        at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:415)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at kafka.log.Log.deleteOldSegments(Log.scala:415)
        at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:325)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:356)
        at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:354)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at kafka.log.LogManager.cleanupLogs(LogManager.scala:354)
        at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
        at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


2015-01-07 13:56 GMT+08:00 Jun Rao <j...@confluent.io>:

Do you mean that the Kafka broker still holds a file handle on a deleted
file? Do you see those files being deleted in the Kafka log4j log?

Thanks,

Jun

On Tue, Jan 6, 2015 at 4:46 AM, Yonghui Zhao <zhaoyong...@gmail.com>
wrote:

Hi,

We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert
and found that many Kafka data files had been deleted but were still held
open by Kafka.

such as:

_yellowpageV2-0/00000000000068170670.log (deleted)
java  8446  root  724u  REG  253,2  536937911  26087362  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000068818668.log (deleted)
java  8446  root  725u  REG  253,2  536910838  26087364  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000069457098.log (deleted)
java  8446  root  726u  REG  253,2  536917902  26087368  /home/work/data/soft/kafka-0.8/data/_oakbay_v2_search_topic_ypgsearch_yellowpageV2-0/00000000000070104914.log (deleted)


Is there anything wrong, or something misconfigured?
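For what it's worth, the "(deleted)" entries in lsof output like the above are the expected POSIX behavior when a process still holds an open descriptor: unlinking removes the directory entry, but the disk space is not reclaimed until the last handle is closed. A small standalone Java sketch (temp file names invented) that reproduces the situation:

```java
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

public class DeletedButOpenDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical file standing in for a Kafka log segment.
        File f = File.createTempFile("segment", ".log");
        FileChannel ch = FileChannel.open(f.toPath(),
                StandardOpenOption.READ, StandardOpenOption.WRITE);
        ch.write(ByteBuffer.wrap(new byte[]{1, 2, 3}));

        // Unlink the file while the channel is still open (POSIX systems).
        Files.delete(f.toPath());

        // The directory entry is gone, but the open channel (and the disk
        // blocks behind it) survive; lsof would list it as "(deleted)".
        System.out.println("exists on disk: " + f.exists());
        System.out.println("channel open:   " + ch.isOpen());
        System.out.println("bytes held:     " + ch.size());

        ch.close(); // only now is the space actually reclaimed
    }
}
```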

