[ https://issues.apache.org/jira/browse/KAFKA-70?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13097606#comment-13097606 ]

Prashanth Menon commented on KAFKA-70:
--------------------------------------

Hi there, I'm currently looking into this issue (and the larger project as a 
whole) and have a few questions:

- In the general case, either the time limit is reached and we trim, or the 
size limit is reached and we trim. What happens when we clean up expired 
segments but still exceed the size constraint? Do we continue by trimming 
down to the size limit, or wait for the next cycle to check? I'd presume we 
continue trimming, but wanted to make sure.
- In the config, since this would be a new optional field, what would be a 
reasonable default for the retention size? I'm not sure what log file sizes 
the system typically deals with, so any suggestions would be welcome.
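To make the first question concrete, here is a rough sketch of the behaviour I'd presume: expired segments are trimmed first, and if the log is still over the size limit in the same cleanup pass, the oldest segments keep getting trimmed. The segment representation, method name, and the leave-at-least-one-segment rule are my own assumptions for illustration, not Kafka's actual implementation:

```java
public class RetentionSketch {
    // Each segment is {sizeBytes, lastModifiedMs}, ordered oldest-first.
    // Returns how many of the oldest segments a single cleanup pass would
    // delete when time-based retention runs first, then size-based retention.
    public static int segmentsToDelete(long[][] segments, long nowMs,
                                       long retentionMs, long retentionSizeBytes) {
        long totalSize = 0;
        for (long[] s : segments) totalSize += s[0];

        int deleted = 0;
        // Pass 1: expire segments whose last-modified time is past the limit.
        while (deleted < segments.length
                && nowMs - segments[deleted][1] > retentionMs) {
            totalSize -= segments[deleted][0];
            deleted++;
        }
        // Pass 2: still over the size limit? Keep trimming the oldest
        // segments, but (assumption) always leave at least one segment
        // so the log can continue to accept new messages.
        while (deleted < segments.length - 1 && totalSize > retentionSizeBytes) {
            totalSize -= segments[deleted][0];
            deleted++;
        }
        return deleted;
    }
}
```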

Thanks everyone!

- Prashanth

> Introduce retention setting that depends on space
> -------------------------------------------------
>
>                 Key: KAFKA-70
>                 URL: https://issues.apache.org/jira/browse/KAFKA-70
>             Project: Kafka
>          Issue Type: New Feature
>
> Currently there is this setting: 
> log.retention.hours 
> Introduce: 
> log.retention.size 
> The semantics would be: when either the size limit or the time limit is 
> reached, the oldest messages will be deleted. 
> This does not break backward compatibility and would make the system robust 
> in scenarios where message size is not deterministic over time.
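The proposed pair of settings might look like this in a broker config. The values are purely illustrative (the issue does not specify a default), and a sentinel such as -1 meaning "no size limit" is one way the new field could stay optional and backward compatible:

```properties
# Existing time-based retention
log.retention.hours=168
# Proposed size-based retention in bytes (illustrative value: 1 GiB);
# a sentinel like -1 could mean "unlimited" for backward compatibility
log.retention.size=1073741824
```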

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
