Opened https://issues.apache.org/jira/browse/KAFKA-1489 .
Regards,
András
On 6/11/2014 6:19 AM, Jun Rao wrote:
Could you file a jira to track this?
Thanks,
Jun
On Tue, Jun 10, 2014 at 8:22 AM, András Serény wrote:
> Hi Kafka devs,
>
> are there currently any plans to implement the global threshold feature?
> Is there a JIRA about it?
>
> We are considering implementing a solution for this issue (either inside
> or outside of Kafka).
Hi Kafka devs,
are there currently any plans to implement the global threshold feature?
Is there a JIRA about it?
We are considering implementing a solution for this issue (either inside
or outside of Kafka).
Thanks a lot,
András
On 5/30/2014 11:45 AM, András Serény wrote:
Sorry for the delay on this.
Yes, that's right -- it'd be just another term in the chain of 'or'
conditions. Currently it's (log.retention.bytes.per.topic violated) OR
(log.retention.hours.per.topic violated). With the global condition, it would be
(log.retention.bytes.per.topic violated) OR (log.retention.hours.per.topic
violated) OR (log.retention.bytes.global violated).
In my view, that's fairly simple and intuitive, hence a fine piece of logic.
Regards,
András
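The chain of 'or' conditions described above could be sketched roughly as follows. This is a hypothetical illustration, not Kafka code: the function name, parameter names, and the semantics of the proposed log.retention.bytes.global setting are all assumptions for the sake of the sketch.

```python
# Hypothetical sketch of the proposed deletion predicate: a rolled segment
# becomes eligible for deletion if ANY configured threshold is violated.
# None of these names come from the actual Kafka code base.

def should_delete_segment(topic_size_bytes, segment_age_hours,
                          total_log_size_bytes,
                          retention_bytes_per_topic=None,
                          retention_hours_per_topic=None,
                          retention_bytes_global=None):
    size_violated = (retention_bytes_per_topic is not None
                     and topic_size_bytes > retention_bytes_per_topic)
    time_violated = (retention_hours_per_topic is not None
                     and segment_age_hours > retention_hours_per_topic)
    # The new term proposed in this thread: a broker-wide aggregate limit.
    global_violated = (retention_bytes_global is not None
                       and total_log_size_bytes > retention_bytes_global)
    return size_violated or time_violated or global_violated
```

Under this sketch, a topic that is within its own per-topic limits can still lose segments once the broker-wide total exceeds the global threshold, which is exactly why the per-topic thresholds would no longer be tight bounds.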
On 5/27/2014 4:34
For log.retention.bytes.per.topic and log.retention.hours.per.topic, the
current interpretation is that those are tight bounds. In other words, only
when those thresholds are violated, a segment is deleted. To further
satisfy log.retention.bytes.global, the per topic thresholds may no longer
be tight.
No, I think more specific settings should get a chance first. I'm
suggesting that provided that there is a segment rolled for a topic,
*any* of log.retention.bytes.per.topic, log.retention.hours.per.topic,
and a future log.retention.bytes.global violation would cause segments
to be deleted.
Yes, that's possible. There is a default log.retention.bytes for every
topic. By introducing a global threshold, we may have to delete data from
logs whose size is smaller than log.retention.bytes. So, are you saying
that the global threshold has precedence?
Thanks,
Jun
On Fri, May 23, 2014 at
Hi Kafka users,
this feature would also be very useful for us. With lots of topics of
different volume (and as they grow in number) it could become tedious to
maintain topic level settings.
As a start, I think uniform reduction is a good idea. Logs wouldn't be
retained as long as you want.
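The "uniform reduction" idea mentioned here could look something like the sketch below. This is purely illustrative: the proportional scale-down policy, the function name, and its inputs are assumptions, not an actual Kafka mechanism.

```python
def uniform_reduction(topic_sizes, global_limit_bytes):
    """Given per-topic log sizes (bytes), return how many bytes each topic
    should shed so the aggregate fits under a global limit, scaling every
    topic down by the same factor. Hypothetical sketch only."""
    total = sum(topic_sizes.values())
    if total <= global_limit_bytes:
        # Under the global limit: nothing needs to be deleted.
        return {topic: 0 for topic in topic_sizes}
    factor = global_limit_bytes / total
    # Every topic keeps the same fraction of its data, so larger topics
    # shed more bytes in absolute terms.
    return {topic: int(size - size * factor)
            for topic, size in topic_sizes.items()}
```

For example, with topics of 600 and 400 bytes and a 500-byte global limit, each topic keeps half its data. The tradeoff discussed in the thread applies: retention becomes a function of aggregate load, so logs may be kept for less time than their per-topic settings promise.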
Agreed, a global knob is a bit tricky for exactly the reason you've identified.
Perhaps the problem could be simplified though by considering the context and
purpose of Kafka. I would use a persistent message queue because I want to
guarantee that data/messages don't get lost. But, since Kafka
Yes, your understanding is correct. A global knob that controls aggregate
log size may make sense. What would be the expected behavior when that
limit is reached? Would you reduce the retention uniformly across all
topics? Then, it just means that some of the logs may not be retained as
long as you want.
Thanks Jun. So if I understand this correctly, there really is no master
property to control the total aggregate size of all Kafka data files on a
broker.
log.retention.size and log.file.size are great for managing data at the
application level. In our case, application needs change frequently.
log.retention.size controls the total size in a log dir (per partition).
log.file.size controls the size of each log segment in the log dir.
Thanks,
Jun
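To make the distinction concrete, a 0.7-style broker configuration using these two settings might look like the fragment below (the values are made-up examples, not recommendations):

```properties
# Per-partition retention cap: once a partition's log exceeds this many
# bytes, old segments become eligible for deletion.
log.retention.size=1073741824

# Segment roll size: a new segment file is started once the current
# segment reaches this many bytes.
log.file.size=536870912
```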
On Thu, May 1, 2014 at 9:31 PM, vinh wrote:
> In the 0.7 docs, the description for log.retention.size and log.file.size
> sound very much the same.
In the 0.7 docs, the description for log.retention.size and log.file.size sound
very much the same. In particular, that they apply to a single log file (or
log segment file).
http://kafka.apache.org/07/configuration.html
I'm beginning to think there is no setting to control the max aggregate size
of all Kafka data files on a broker.