We don't have this global setting right now, so the guideline is to
capacity-plan: set your log retention settings and set alerts on disk
space accordingly.
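A minimal sketch of such a disk-space alert check (the function names and the 80% threshold are illustrative, not anything Kafka ships):

```python
# Hypothetical disk-space alert helper; uses the standard library's
# shutil.disk_usage to compute how full a volume is.
import shutil

def disk_used_percent(path="/"):
    """Return the percentage of the volume at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def should_alert(path="/", threshold=80.0):
    """True when disk usage at `path` has crossed the alert threshold."""
    return disk_used_percent(path) >= threshold
```

In practice you would run a check like this from cron or a monitoring agent and page before the broker's volume actually fills up.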
To recover, you would normally change your retention settings, but the
log cleaner is not called on startup, so that won't work.
So, I've hit this issue with Kafka: when the disk is full, Kafka stops
with an exception:
FATAL [KafkaApi-1] Halting due to unrecoverable I/O error while
handling produce request: (kafka.server.KafkaApis)
kafka.common.KafkaStorageException: I/O exception in append to log 'perf1-2'
I think it will be useful to have such a global limit.
I've checked it; it's per partition. What I'm talking about is more of a
global log size limit: if I have only 200 GB, I want to set a global
limit on log size, not a per-partition one, so I won't have to change it
later if I add topics/partitions.
Thanks.
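Since the setting is per partition, one workaround is to derive the per-partition value from a broker-wide disk budget. A sketch (this helper is illustrative, not a Kafka feature; the 20% headroom is an assumption):

```python
# Illustrative: split a broker-wide disk budget across partitions to get
# a per-partition log.retention.bytes value, keeping some headroom for
# open segments, indexes, and other files.
def per_partition_retention_bytes(total_budget_bytes, num_partitions, headroom=0.8):
    """Per-partition retention size given a total disk budget in bytes."""
    if num_partitions <= 0:
        raise ValueError("need at least one partition")
    return int(total_budget_bytes * headroom // num_partitions)

# e.g. a 200 GB disk with 50 partitions and 20% headroom
# -> 3,200,000,000 bytes (~3.2 GB) per partition
```

The obvious downside, as noted above, is that you have to recompute and update the setting whenever the partition count changes.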
On Tue, Nov 5, 2013 at 8:37 PM, Neha Narkhede wrote:
You are probably looking for log.retention.bytes. Refer to
http://kafka.apache.org/documentation.html#brokerconfigs
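For example, in server.properties (the values below are illustrative; note that log.retention.bytes applies to each partition's log, not to the broker as a whole):

```properties
# Keep at most ~1 GiB per partition (size-based retention, per partition)
log.retention.bytes=1073741824
# And/or delete segments older than 7 days (time-based retention)
log.retention.hours=168
```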
On Tue, Nov 5, 2013 at 3:10 PM, Kane Kane wrote:
> Hello,
>
> What would happen if the disk is full? Does it make sense to have an
> additional variable to set the maximum size for all logs combined?
Hello,
What would happen if the disk is full? Does it make sense to have an
additional variable to set the maximum size for all logs combined?
Thanks.