Cross-posting from SO. The exact same question. See my answer there:
http://stackoverflow.com/questions/41048041/kafka-deletes-segments-even-before-segment-size-is-reached/41065100#comment69338104_41065100
-Matthias
On 12/8/16 8:43 PM, Rodrigo Sandoval wrote:
This is what Todd said:
"Retention is going to be based on a combination of both the retention and
segment size settings (as a side note, it's recommended to use
log.retention.ms and log.segment.ms, not the hours config. That's there for
legacy reasons, but the ms configs are more consistent). …"
Your understanding of segment.bytes and retention.ms is correct. But
Todd Palino said a segment is deleted only after the segment size has been
reached, that is, when the segment is "closed", PLUS all messages within the
closed segment are older than the retention policy defined (in this case
retention.ms).
I think segment.bytes defines the size of a single log file before creating a
new one.
retention.ms defines the number of ms to wait on a log file before deleting it.
So it is working as defined in the docs.
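A minimal sketch of how the two settings interact, in plain Python. This is a deliberately simplified model, not Kafka's actual implementation; the names (`append`, `enforce_retention`, the dict fields) are illustrative only. It captures the point made above: segment.bytes caps the active segment and forces a roll, while retention.ms is checked against closed segments.

```python
# Hypothetical, simplified model of Kafka log segment rolling and retention.
SEGMENT_BYTES = 1024   # stand-in for segment.bytes: roll when the active segment hits this
RETENTION_MS = 60_000  # stand-in for retention.ms: closed segments older than this go away

def append(log, record, now_ms):
    """Append a record; roll a new segment when segment.bytes would be exceeded."""
    active = log[-1]
    if active["size"] + len(record) > SEGMENT_BYTES:
        log.append({"size": 0, "last_ts": now_ms})  # roll: previous segment is now closed
        active = log[-1]
    active["size"] += len(record)
    active["last_ts"] = now_ms

def enforce_retention(log, now_ms):
    """Delete closed segments past retention; the active (last) segment is kept."""
    closed, active = log[:-1], log[-1]
    kept = [s for s in closed if now_ms - s["last_ts"] <= RETENTION_MS]
    return kept + [active]

log = [{"size": 0, "last_ts": 0}]
append(log, b"x" * 800, now_ms=0)
append(log, b"x" * 800, now_ms=1_000)         # 1600 > 1024, so a second segment is rolled
log = enforce_retention(log, now_ms=120_000)  # first segment is closed and old: deleted
```

After retention runs, only the active segment survives, which matches the description above: size triggers the roll, age triggers the deletion of what was rolled.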
On Fri, Dec 9, 2016 at 2:42 AM, Rodrigo Sandoval wrote:
What about the claim that when the segment size is reached, plus every single
message inside the segment is older than the retention time, then the segment
will be deleted?
I have been playing with Kafka and I have the following:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic1 con…
One caveat. If you are relying on log.segment.ms to roll the current log
segment, it will not roll until both the time elapses and something new
arrives for the log.
In other words, if your topic/log segment is idle, no rolling will happen.
The theoretically ineligible log will still be the curre…
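That caveat can be sketched in a few lines of illustrative Python. The assumption, as described above, is that the roll check only ever runs when a record arrives, so an idle topic never rolls its active segment no matter how much time has passed:

```python
SEGMENT_MS = 60_000  # stand-in for log.segment.ms

def maybe_roll(segment, now_ms):
    """Roll check that only runs on append (a toy model, not Kafka internals).

    An idle topic never calls this, so the active segment stays open
    indefinitely even after log.segment.ms has elapsed.
    """
    if now_ms - segment["created_ms"] >= SEGMENT_MS:
        return {"created_ms": now_ms, "records": []}  # time elapsed: start a new segment
    return segment

active = {"created_ms": 0, "records": []}

# 90 seconds pass with no traffic: nothing triggers a roll during that time.
# Only when a record finally arrives does the check fire and the segment roll:
active = maybe_roll(active, now_ms=90_000)
active["records"].append(b"first record after the idle period")
```

Until that first post-idle record showed up, the old segment would have remained the current one, and therefore ineligible for deletion.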
All of the information Todd posted is important to know. There was also a
jira related to this that has been committed to trunk:
https://issues.apache.org/jira/browse/KAFKA-2436
Before that patch, log.retention.hours was used to calculate
KafkaConfig.logRetentionTimeMillis, but it was not used in LogMa…
Retention is going to be based on a combination of both the retention and
segment size settings (as a side note, it's recommended to use
log.retention.ms and log.segment.ms, not the hours config. That's there for
legacy reasons, but the ms configs are more consistent). As messages are
received by K…
I guess that kind of makes sense.
The following section in the config is what confused me:
"# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size
# has accumulated.
# A segment will be deleted …"
"minimum age of a log file to be eligible for deletion": the key word is
minimum. If you only have 1k of logs, Kafka doesn't need to delete anything.
Push more data through, and when it needs to, it will start deleting
old logs.
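Put differently: the active segment is never deleted, so a topic that only ever received 1k of data keeps it regardless of age. A toy illustration in plain Python (a deliberately simplified model, not Kafka internals; names are made up for the example):

```python
RETENTION_MS = 60_000  # stand-in for retention.ms

def deletable_segments(segments, now_ms):
    """Only *closed* segments past retention are eligible for deletion.

    The active (last) segment is never considered, however old its data is.
    """
    return [s for s in segments[:-1] if now_ms - s["last_ts"] > RETENTION_MS]

# A topic that received only 1k of data: a single, still-active segment.
segments = [{"size": 1024, "last_ts": 0}]
deletable = deletable_segments(segments, now_ms=10**9)  # far past retention.ms
# Nothing is eligible, because the lone segment is still the active one.
```

Once enough data arrives to roll that segment, it becomes closed and the retention clock can actually delete it.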
On Mon, Sep 21, 2015 at 8:58 PM allen chan wrote:
Hi,
Just brought up a new Kafka cluster for testing.
I was able to use the console producer to send 1k of logs and received them on
the console consumer side.
The one issue that I have right now is that the retention period does not
seem to be working.
"# The minimum age of a log file to be eligible …"