[ https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15705267#comment-15705267 ]
Abhi commented on KAFKA-1194:
-----------------------------
[~soumyajitsahu]
I tried the build that you shared on OneDrive.
I am testing locally on my machine with one ZooKeeper instance plus three
Kafka brokers, so I have three server.properties files, each with its own
log directory, e.g. C:\kafka_2.11-0.10.2.0\kafka-logs,
C:\kafka_2.11-0.10.2.0\kafka-logs1 and C:\kafka_2.11-0.10.2.0\kafka-logs2.
I adapted your start.bat command accordingly and ran:
kafka-run-class.bat kafka.Kafka config/server-0.properties
One question: do I need to create three start.bat files, one for each
server.properties file?
For now I created only one, and when I started the Kafka broker
I got this error:
C:\kafka_2.11-0.10.2.0>.\bin\windows\kafka-server-start.bat
config\server-1.properties
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: \server.log (Access is denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
at org.apache.kafka.common.utils.Utils.<clinit>(Utils.java:58)
> The kafka broker cannot delete the old log files after the configured time
> --------------------------------------------------------------------------
>
> Key: KAFKA-1194
> URL: https://issues.apache.org/jira/browse/KAFKA-1194
> Project: Kafka
> Issue Type: Bug
> Components: log
> Affects Versions: 0.8.1
> Environment: Windows
> Reporter: Tao Qin
> Assignee: Jay Kreps
> Labels: features, patch
> Fix For: 0.10.2.0
>
> Attachments: KAFKA-1194.patch, Untitled.jpg, kafka-1194-v1.patch,
> kafka-1194-v2.patch
>
> Original Estimate: 72h
> Remaining Estimate: 72h
>
> We tested it in a Windows environment, with log.retention.hours set to 24
> hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files,
> and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from
> to .deleted for log segment 1516723
> at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
> at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
> at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
> at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
> at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
> at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
> at scala.collection.immutable.List.foreach(List.scala:76)
> at kafka.log.Log.deleteOldSegments(Log.scala:418)
> at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
> at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
> at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
> at scala.collection.Iterator$class.foreach(Iterator.scala:772)
> at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
> at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
> at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
> at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
> at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while
> it is still open, so we should close the file before renaming it.
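The close-before-rename idea can be sketched in plain Java. This is only a hypothetical illustration of the technique, not the actual Kafka patch; the class and file names are invented:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class RenameAfterClose {
    // On Windows a rename fails with "Access is denied" while any handle
    // to the file is still open, so close the stream before renaming.
    static boolean closeThenRename(FileOutputStream out, File src, File dst) throws IOException {
        out.close();
        return src.renameTo(dst);
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("segment", ".log");
        FileOutputStream out = new FileOutputStream(src);
        out.write("segment data".getBytes());
        // Rename the segment to its ".deleted" name only after the handle is closed.
        File dst = new File(src.getPath() + ".deleted");
        System.out.println("renamed: " + closeThenRename(out, src, dst));
        dst.delete(); // clean up the temp file
    }
}
```

On POSIX systems the rename would succeed even with the handle open, which is why this bug only shows up on Windows.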
> The index file uses a special data structure, a MappedByteBuffer. The
> Javadoc describes it as follows:
> A mapped byte buffer and the file mapping that it represents remain valid
> until the buffer itself is garbage-collected.
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it
> can be used to free the MappedByteBuffer.
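A minimal sketch of what such a forceUnmap could look like. This is hypothetical code, not Kafka's actual implementation: it uses the well-known cleaner reflection trick, trying the Java 9+ Unsafe.invokeCleaner path first and falling back to the Java 8 DirectByteBuffer.cleaner() route:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ForceUnmap {
    // Release the OS file mapping behind a MappedByteBuffer without waiting
    // for garbage collection, so the underlying file can be renamed or
    // deleted on Windows. The buffer must not be touched after this call.
    static void forceUnmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
        try {
            // Java 9+: sun.misc.Unsafe exposes invokeCleaner(ByteBuffer).
            Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
            Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true);
            Object unsafe = theUnsafe.get(null);
            Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
            invokeCleaner.invoke(unsafe, buffer);
        } catch (NoSuchMethodException e) {
            // Java 8: call the non-public DirectByteBuffer.cleaner().clean().
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            cleaner.getClass().getMethod("clean").invoke(cleaner);
        }
    }

    public static void main(String[] args) throws Exception {
        File index = File.createTempFile("offset", ".index");
        try (RandomAccessFile raf = new RandomAccessFile(index, "rw")) {
            raf.setLength(64);
            MappedByteBuffer buffer = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 64);
            forceUnmap(buffer); // mapping released; rename/delete can now succeed on Windows
        }
        System.out.println("deleted: " + index.delete());
    }
}
```

Accessing the buffer after unmapping is undefined behavior (it can crash the JVM), so any real fix must guarantee no further reads of the index once the mapping is released.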
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)