Hi Vishal,

FIFO compaction with RocksDB can result in data loss, because it discards the
oldest SST file based on a size trigger or on a TTL (an internal RocksDB
option, unrelated to the TTL setting in Flink). So use it with caution.

I'm not sure why you observed SST file compactions; did you
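
For reference, if you still want to experiment with FIFO compaction, it can be
enabled per column family through a custom options factory. A minimal sketch,
assuming a Flink version that exposes the RocksDBOptionsFactory interface
(1.10+); the class name and the 1 GB size trigger are placeholders:

import java.util.Collection;

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;

import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.DBOptions;

public class FifoOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // DB-level options are left untouched; FIFO is a column-family setting.
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        // Once the total SST size exceeds 1 GB, the OLDEST files are simply
        // dropped, not compacted. That is exactly the data-loss hazard above.
        CompactionOptionsFIFO fifo = new CompactionOptionsFIFO()
                .setMaxTableFilesSize(1024L * 1024 * 1024);
        handlesToClose.add(fifo);
        return currentOptions
                .setCompactionStyle(CompactionStyle.FIFO)
                .setCompactionOptionsFIFO(fifo);
    }
}

The factory would then be registered on the state backend with
backend.setRocksDBOptions(new FifoOptionsFactory()).
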
Hi Vishal,

I am not sure I get what you mean by question #2:

> 2. SST files get created each time a checkpoint is triggered. At this
> point, does the data for a given key get merged in case the initial data
> was read from an SST file while the update must have happened in memory?

Could you
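
If the question is whether the on-disk and in-memory versions of a key get
merged: RocksDB does not rewrite the SST entry. A read consults the memtable
before the SST files, so the newer in-memory value simply shadows the older
on-disk one until a compaction eventually rewrites it. A small RocksJava
sketch illustrating this (the path and key are placeholders):

import org.rocksdb.FlushOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class MemtableShadowing {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, "/tmp/rocksdb-shadow-demo");
             FlushOptions flush = new FlushOptions().setWaitForFlush(true)) {
            db.put("key".getBytes(), "v1".getBytes());
            db.flush(flush);                            // "v1" now lives in an SST file on disk
            db.put("key".getBytes(), "v2".getBytes());  // "v2" lives only in the memtable
            // The lookup checks the memtable first, so the newer value wins:
            System.out.println(new String(db.get("key".getBytes()))); // prints "v2"
        }
    }
}
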
In my load tests, I've found FIFO compaction to offer the best performance,
as my job only needs state for a bounded period. However, this particular
statement in the RocksDB documentation concerns me:
"Since we never rewrite the key-value pair, we also don't ever apply the
compaction filter on the keys."
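
If that statement holds, it matters for Flink's state TTL: the RocksDB-backed
cleanup is implemented as a compaction filter, so under FIFO compaction that
filter would never run, and expired entries would only disappear when their
whole SST file is dropped. A sketch of the TTL configuration in question,
assuming a Flink version with cleanupInRocksdbCompactFilter (1.8+); the
six-hour TTL and the state name are hypothetical:

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlStateExample {
    public static ValueStateDescriptor<String> descriptorWithTtl() {
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.hours(6)) // hypothetical retention period
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                // Cleanup that relies on RocksDB's compaction filter; under FIFO
                // compaction the filter is never applied, so it has no effect there.
                .cleanupInRocksdbCompactFilter(1000)
                .build();

        ValueStateDescriptor<String> descriptor =
                new ValueStateDescriptor<>("myState", String.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}

Reads still hide expired entries (NeverReturnExpired), so the concern is
unbounded storage growth rather than expired values becoming visible.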