Hello Elias,

I would love to solicit more feedback from the community on how commonly a
TTL persistent KV store would be used. Maybe you can share your use cases
first here in this thread?

As for its implementation, I think leveraging RocksDB's TTL feature would
be a good option. One tricky part, though, is how we can insert the
corresponding "tombstone" messages into the changelogs in an efficient way
(this is also the main reason we did not add this feature in the first
release of Kafka Streams). I remember RocksDB's TTL feature does have a
compaction listener interface, but I am not sure if it is available via
JNI. That may be worth exploring.
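Just to sketch what I mean (purely illustrative, not a proposed API --
the class and method names below are made up): the key requirement is that
when an entry expires, some listener fires so we can forward a tombstone
to the changelog, similar to what a RocksDB compaction listener would give
us:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical sketch of a TTL key-value cache. The onExpiry callback
// stands in for "emit a tombstone to the changelog" when an entry ages out,
// analogous to hooking RocksDB's compaction to observe dropped keys.
public class TtlKeyValueCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long ttlMs;
    private final BiConsumer<K, V> onExpiry; // e.g. write a changelog tombstone

    public TtlKeyValueCache(long ttlMs, BiConsumer<K, V> onExpiry) {
        this.ttlMs = ttlMs;
        this.onExpiry = onExpiry;
    }

    public void put(K key, V value, long nowMs) {
        map.put(key, new Entry<>(value, nowMs + ttlMs));
    }

    // Expired entries read as absent even before they are physically removed.
    public V get(K key, long nowMs) {
        Entry<V> e = map.get(key);
        return (e == null || e.expiresAt <= nowMs) ? null : e.value;
    }

    // Analogous to a compaction pass: drop expired entries and notify the
    // listener for each one, so the changelog can be tombstoned in step.
    public void expire(long nowMs) {
        Iterator<Map.Entry<K, Entry<V>>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<K, Entry<V>> e = it.next();
            if (e.getValue().expiresAt <= nowMs) {
                onExpiry.accept(e.getKey(), e.getValue().value);
                it.remove();
            }
        }
    }
}
```

The hard part in practice is that with RocksDB's TTL the physical deletion
happens inside compaction, so we would need a callback at that point rather
than driving expiry ourselves as above.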


Guozhang


On Mon, Feb 6, 2017 at 8:17 PM, Elias Levy <fearsome.lucid...@gmail.com>
wrote:

> We have a use case within a Streams application that requires a persistent
> TTL key-value cache store.  As Streams does not currently offer such a
> store, I've implemented it by abusing WindowStore, as it allows for a
> configurable retention period.  Nonetheless, it is not an ideal solution,
> as you can only iterate forward on the iterator returned by fetch(),
> whereas the use case calls for a reverse iterator, as we are only
> interested in the latest value for an entry.
>
> I am curious as to the appetite for a KIP to add such a TTL caching store.
> KAFKA-4212 is the issue I opened requesting such a store.  Do others have a
> need for them?  If there is interest in such a KIP, I can get one started.
>
> If there is interest, there are two ways such a store could be
> implemented.  It could make use of RocksDB's TTL feature, or it could
> mirror WindowStore and make use of multiple segmented RocksDBs, possibly
> reusing the RocksDBSegmentedBytesStore from the latest refactoring of the
> stores.  The former delegates most of the work to RocksDB compaction,
> although likely at the expense of greater write amplification.  The
> latter is more efficient at dropping expired entries, but potentially
> less space efficient.
>
> Thoughts?
>



-- 
-- Guozhang
