Thank you!
On Mon, Jul 27, 2015 at 1:43 PM, Ewen Cheslack-Postava
wrote:
As I mentioned, adjusting any settings such that files are small enough
that you don't get the benefits of append-only writes, or such that file
creation/deletion becomes a bottleneck, might affect performance. It looks
like the default setting for log.segment.bytes is 1GB, so given fast enough
cleanup of old l
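For concreteness, a sketch of the segment-size settings being discussed, as they would appear in the broker's server.properties (the values are the documented defaults as I understand them; treat them as illustrative, not recommendations):

```properties
# server.properties -- illustrative sketch, not tuned recommendations
# Default segment size is 1GB. Smaller segments roll more often,
# producing more files for the broker and cleaner to manage.
log.segment.bytes=1073741824
# Segments can also be rolled by time; the default is 7 days.
log.roll.hours=168
```

Shrinking log.segment.bytes trades larger sequential writes for faster segment turnover, which is only worth it if retention needs to act on a finer granularity than 1GB per partition.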
Thank you! What performance impact will there be if I change
log.segment.bytes? Thanks.
On Mon, Jul 27, 2015 at 1:25 PM, Ewen Cheslack-Postava
wrote:
I think log.cleanup.interval.mins was removed in the first 0.8 release. It
sounds like you're looking at outdated docs. Search for
log.retention.check.interval.ms here:
http://kafka.apache.org/documentation.html
As for setting the values too low hurting performance, I'd guess it's
probably only an
If I want higher throughput, should I increase log.segment.bytes?
I don't see log.retention.check.interval.ms, but there is
log.cleanup.interval.mins. Is that what you mean?
If I set log.roll.ms or log.cleanup.interval.mins too small, will it hurt
throughput? Thanks.
On Fri, Jul 2
You can configure that in the broker configs by setting the log retention options:
http://kafka.apache.org/07/configuration.html
Thanks,
Mayuresh
Sent from my iPhone
> On Jul 24, 2015, at 12:49 PM, Yuheng Du wrote:
You'll want to set the log retention policy via
log.retention.{ms,minutes,hours} or log.retention.bytes. If you want really
aggressive collection (e.g., on the order of seconds, as you specified),
you might also need to adjust log.segment.bytes/log.roll.{ms,hours} and
log.retention.check.interval.ms.
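Putting that advice into a concrete server.properties sketch for the seconds-scale retention described in the question (the specific values below are illustrative assumptions, not tuned recommendations):

```properties
# server.properties -- aggressive retention sketch (illustrative only)
# Delete data once the log exceeds ~50GB or is older than ~10 seconds.
log.retention.bytes=53687091200
log.retention.ms=10000
# Retention only acts on closed segments, so segments must roll
# quickly for a 10-second retention window to have any effect.
log.segment.bytes=104857600
log.roll.ms=10000
# How often the broker checks for segments eligible for deletion.
log.retention.check.interval.ms=5000
```

Note that log.retention.bytes is enforced per partition, so the effective topic-wide cap is that value multiplied by the partition count.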
Hi,
I am testing the Kafka producer performance, so I created a queue and am
writing a large amount of data to it.
Is there a way to delete the data automatically after some time? Say,
whenever the data size reaches 50GB or the retention time exceeds 10
seconds, it would be deleted so my disk won't fill up.