Re: log.retention.size

2014-05-16 Thread vinh
> …log has only 1 segment left and yet the total size still exceeds the limit. Do we roll log segments early?
>
> Thanks,
>
> Jun
>
> On Sun, May 4, 2014 at 4:31 AM, vinh wrote:
>> Thanks Jun. So if I understand this correctly, there really is no master…
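The behavior Jun is probing follows from retention working on whole segments: deletion only ever removes old, rolled segments, never the active one. A minimal sketch of how the two 0.7 settings from this thread interact (values are illustrative assumptions, not recommendations):

    log.file.size=536870912        # roll to a new segment file at 512 MB
    log.retention.size=1073741824  # delete oldest whole segments once the
                                   # partition's total passes 1 GB

Under that behavior a partition can hold up to roughly log.retention.size plus one active segment (about 1.5 GB here), and a log consisting of a single segment has nothing eligible for deletion at all, which would explain a total that sits above the limit.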

Re: log.retention.size

2014-05-04 Thread vinh
> …each log segment in the log dir.
>
> Thanks,
>
> Jun
>
> On Thu, May 1, 2014 at 9:31 PM, vinh wrote:
>> In the 0.7 docs, the descriptions for log.retention.size and log.file.size sound very much the same. In particular, that they apply to a single log…
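Jun's "each log segment in the log dir" is easier to picture against the on-disk layout. A sketch with assumed paths and 0.7-style .kafka segment files (illustrative only):

    /var/kafka-logs/my-topic-0/        <- one partition's log
        00000000000000000000.kafka     <- old segment   } log.retention.size caps
        00000000000536870912.kafka     <- old segment   } the per-partition total
        00000000001073741824.kafka     <- active segment; log.file.size caps the
                                          size of each individual segment file

Both settings are therefore per partition log: log.file.size decides when to roll a new segment file, log.retention.size decides how much rolled history to keep, and neither caps the broker-wide aggregate.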

Re: log.retention.size

2014-05-01 Thread vinh
…to add more disk space to the pool, but I'd like to avoid that and see if I can simply configure Kafka to not write more than 1 TB aggregate. Otherwise, Kafka will OOM and kill itself, and possibly crash the node itself because the disk is full.

On May 1, 2014, at 9:21 PM, vinh wrote:
> …
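Since neither setting is an aggregate limit, keeping a broker under 1 TB means budgeting by hand. A back-of-envelope sketch, with partition count and sizes made up for illustration:

    log.file.size=536870912        # 512 MB segments
    log.retention.size=8589934592  # ~8 GB of retained history per partition
    # Worst case per partition: retained history + one active segment ~ 8.5 GB.
    # With 100 partitions on the broker: ~850 GB worst case, under a 1 TB volume.
    # Nothing enforces this budget; adding partitions or topics grows it silently.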

log.retention.size

2014-05-01 Thread vinh
…in software, even monitoring can fail. Via configuration, I'd like to make sure that Kafka does not write more than the available disk space. Or something like log4j, where I can set a max number of log files and a max size per file, which essentially lets me set an aggregate size limit across all logs.

Thanks,
-Vinh
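The closest 0.7-era analog of that log4j scheme appears to be the two settings from this thread, with the caveat that the cap is per partition log rather than truly aggregate. A rough mapping (values illustrative):

    # log4j: MaxFileSize=512MB, MaxBackupIndex=16 => ~8 GB cap for that logger
    # Kafka, per partition log:
    log.file.size=536870912        # plays the role of MaxFileSize
    log.retention.size=8589934592  # plays the role of MaxFileSize * MaxBackupIndex

An aggregate cap across all partitions still has to be derived by multiplying by the partition count, as in the sizing sketch above.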

Re: 0.72 Consumer: message is invalid, compression codec: NoCompressionCodec

2014-02-11 Thread vinh
In that case, is there a way to detect that a consumer instance is no longer usable, so that we can recreate the instance on the fly and have it reconnect, without having to restart our app?

Thanks,
-Vinh

On Feb 11, 2014, at 7:45 AM, Jun Rao wrote:
> We do catch the exception. However…
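Vinh's recreate-on-the-fly idea can be sketched with the old high-level consumer: treat any exception that escapes the stream iterator as "this instance is no longer usable", shut the connector down, and build a fresh one. The sketch below is written against the 0.8-style javaapi (the 0.7.2 API in this thread differs in details such as property names), so it is an assumption-laden illustration, not the list's confirmed answer:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ResilientConsumer {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // assumed address
            props.put("group.id", "my-group");                // assumed group

            while (true) {
                ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
                try {
                    Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                        connector.createMessageStreams(
                            Collections.singletonMap("my-topic", 1));
                    ConsumerIterator<byte[], byte[]> it =
                        streams.get("my-topic").get(0).iterator();
                    while (it.hasNext()) {
                        process(it.next().message()); // blocks for new messages
                    }
                } catch (RuntimeException e) {
                    // e.g. an invalid-message error surfacing through the
                    // iterator; whether 0.7.2 propagates it this way is exactly
                    // what this thread is asking about.
                    System.err.println("consumer died, recreating: " + e);
                } finally {
                    connector.shutdown(); // drop ZK registration before retrying
                }
                Thread.sleep(5000); // arbitrary back-off before rebuilding
            }
        }

        static void process(byte[] message) { /* application logic */ }
    }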