No, there's no control over that. The right way to do this is to keep up
with the head of the topic and decide on "old" yourself in the consumer.

Deletion can happen at different times on the different replicas of the
log, and to different messages. Whilst a consumer will only be reading from
the lead broker for any log at any one time, the leader can and will change
to handle broker failure.
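
As a minimal sketch of that consumer-side approach (the names here are
hypothetical and the actual Kafka client API is omitted), staleness can be
decided from each record's timestamp, with stale records routed to a
"parking" topic for later investigation:

```python
import time

# The thread mentions timeouts as low as a minute or two; 120s is an
# assumed threshold for illustration.
MAX_AGE_SECONDS = 120

def is_stale(record_timestamp_ms, now_ms=None, max_age_s=MAX_AGE_SECONDS):
    """Return True if the record is older than max_age_s seconds."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return (now_ms - record_timestamp_ms) > max_age_s * 1000

def route(record_timestamp_ms, now_ms):
    """Decide where a record goes: normal processing, or a parking topic
    (hypothetical name) that a separate service can drain and inspect."""
    if is_stale(record_timestamp_ms, now_ms):
        return "parking-topic"
    return "process"

# A record produced 3 minutes before "now" is parked; a 30s-old one is not.
now = 1_000_000_000_000
print(route(now - 180_000, now))  # -> parking-topic
print(route(now - 30_000, now))   # -> process
```

In a real consumer loop you would apply this check to each fetched record's
timestamp before doing any work on it, since the broker itself gives no
per-message expiry signal.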

On Thu, Jun 23, 2016 at 4:37 PM, Krish <krishnan.k.i...@gmail.com> wrote:

> Thanks Tom.
> Is there any way a consumer can be triggered when the message is about to
> be deleted by Kafka?
>
>
>
> --
> κρισhναν
>
> On Thu, Jun 23, 2016 at 6:16 PM, Tom Crayford <tcrayf...@heroku.com>
> wrote:
>
>> Hi,
>>
>> A pretty reasonable thing to do here would be to have a consumer that
>> moved "old" events to another topic.
>>
>> Kafka has no concept of an expired queue; the only thing it can do once a
>> message has aged out is delete it. The deletion is done in bulk, and the
>> retention period is typically set to 24h or even higher (LinkedIn uses 4
>> days; the default is 7 days).
>>
>> Thanks
>>
>> Tom Crayford
>> Heroku Kafka
>>
>> On Thu, Jun 23, 2016 at 10:45 AM, Krish <krishnan.k.i...@gmail.com>
>> wrote:
>>
>>> Hi,
>>> I am trying to design a real-time application where the message timeout
>>> can be as low as a minute or two (messages can go stale very fast).
>>>
>>> In the rare chance that the consumers lag too far behind in processing
>>> messages from the broker, is there a concept of expired message queue in
>>> Kafka?
>>>
>>> I would like to know when a message has expired, and then park it in
>>> some topic until a service can dequeue it and process and/or investigate
>>> it.
>>>
>>> Thanks.
>>>
>>> Best,
>>> Krish
>>>
>>
>>
>
