Using Kafka 0.7.2, I have the following in server.properties:

log.retention.hours=48
log.retention.size=107374182400
log.file.size=536870912

My interpretation of this is:
a) a single log segment file over 48hrs old will be deleted
b) the total combined size of *all* logs is 100GB
c) a single log segment file is limited to 512MB (536870912 bytes) in size, 
after which a new segment file is spawned
d) a "log file" can be composed of many "log segment files"
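
For reference, the raw byte values above work out as follows (simple 
arithmetic, shown here as a Python sketch):

```python
# Sanity-check the configured byte values from server.properties.
retention_size = 107374182400  # log.retention.size
segment_size = 536870912       # log.file.size

GIB = 1024 ** 3
MIB = 1024 ** 2

print(retention_size / GIB)  # 100.0 -> a 100GB retention limit
print(segment_size / MIB)    # 512.0 -> segments roll at 512MB
```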

But, even after setting the above, I find that the total combined size of all 
Kafka logs on disk is 200GB right now.  Isn't log.retention.size supposed to 
limit it to 100GB?  Am I missing something?  The docs are not really clear, 
especially when it comes to distinguishing between a "log file" and a "log 
segment file".

I have disk monitoring.  But like anything else in software, even monitoring 
can fail.  Via configuration, I'd like to make sure that Kafka does not write 
more than the available disk space.  Or I'd like something like log4j's 
approach, where I can set a max number of log files and a max size per file, 
which effectively lets me cap the aggregate size across all logs.
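
For comparison, this is the kind of log4j setup I mean (a typical 
RollingFileAppender configuration; the file path and appender name here are 
just illustrative):

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=/var/log/app.log
log4j.appender.R.MaxFileSize=100MB
log4j.appender.R.MaxBackupIndex=10
# aggregate cap = MaxFileSize * (MaxBackupIndex + 1) = ~1.1GB

With that, the worst-case disk usage is bounded no matter what the 
application logs.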

Thanks,
-Vinh
