> …the log has only 1 segment left and yet the total size still exceeds the limit.
> Do we roll log segments early?
>
> Thanks,
>
> Jun
>
>
> On Sun, May 4, 2014 at 4:31 AM, vinh wrote:
>
>> Thanks Jun. So if I understand this correctly, there really is no master
>> …
>
> …each log segment in the log dir.
>
> Thanks,
>
> Jun
>
>
> On Thu, May 1, 2014 at 9:31 PM, vinh wrote:
>
>> In the 0.7 docs, the descriptions for log.retention.size and log.file.size
>> sound very much the same. In particular, that they apply to a single log…
…I can add more disk space to the
pool, but I'd like to avoid that and see if I can simply configure Kafka to not
write more than 1TB aggregate. Else, Kafka will OOM and kill itself, and
possibly crash the node itself because the disk is full.
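
For concreteness, something like this is what I'd hope to be able to express (the
numbers are made up, and this assumes log.retention.size is enforced per partition
log, so the aggregate bound comes from multiplying by the number of partition logs
on the broker):

  # hypothetical broker settings for ~1TB of usable disk and ~20 partition logs
  log.dir=/data/kafka-logs
  log.retention.size=48318382080   # ~45GB per partition log
  log.file.size=1073741824         # 1GB per segment file
  # worst case: 20 * (45GB + 1GB active segment) ~= 920GB, under 1TB with headroom
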
On May 1, 2014, at 9:21 PM, vinh wrote:
>
…in software, even monitoring
can fail. Via configuration, I'd like to make sure that Kafka does not write
more than the available disk space. Or something like log4j, where I can set a
max number of log files and the max size per file, which essentially allows me
to set a max aggregate size limit across all logs.
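
Roughly what I'm after, mapping the log4j knobs onto the 0.7 settings (approximate,
since Kafka deletes a whole segment at a time):

  log4j:  MaxFileSize * (MaxBackupIndex + 1)            ~= cap per appender
  Kafka:  log.retention.size + up to one log.file.size  ~= cap per partition log
          ... * number of partition logs on the broker  ~= aggregate cap per broker
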
Thanks,
-Vinh
In that case, is there a way to detect that a consumer instance is no longer
usable, so that we can recreate the instance on the fly and have it reconnect,
without having to restart our app?
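
Something along these lines is what I have in mind (a rough sketch against the
0.8-era high-level consumer API; the topic, group, and ZooKeeper address are
placeholders): rebuild the connector whenever the iterator dies.

  import java.util.Collections;
  import java.util.List;
  import java.util.Map;
  import java.util.Properties;

  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.consumer.ConsumerIterator;
  import kafka.consumer.KafkaStream;
  import kafka.javaapi.consumer.ConsumerConnector;
  import kafka.message.MessageAndMetadata;

  public class ReconnectingConsumer {
      public static void main(String[] args) throws InterruptedException {
          while (true) {
              ConsumerConnector connector = null;
              try {
                  Properties props = new Properties();
                  props.put("zookeeper.connect", "zk1:2181");   // placeholder
                  props.put("group.id", "my-group");            // placeholder
                  // With consumer.timeout.ms set, hasNext() throws
                  // ConsumerTimeoutException if nothing arrives in time, so a
                  // stalled instance also surfaces as an exception.
                  props.put("consumer.timeout.ms", "60000");
                  connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

                  Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                          connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
                  ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
                  while (it.hasNext()) {
                      MessageAndMetadata<byte[], byte[]> record = it.next();
                      // ... process record.message() ...
                  }
              } catch (Exception e) {
                  // Treat any failure as "this instance is no longer usable".
                  e.printStackTrace();
              } finally {
                  if (connector != null) {
                      connector.shutdown();
                  }
              }
              Thread.sleep(5000);  // back off, then recreate the connector and reconnect
          }
      }
  }
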
Thanks,
-Vinh
On Feb 11, 2014, at 7:45 AM, Jun Rao wrote:
> We do catch the exception. However…