[ https://issues.apache.org/jira/browse/FLINK-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16413567#comment-16413567 ]
Truong Duc Kien commented on FLINK-9070:
----------------------------------------

Yeah, that bug with DeleteRange was fixed. I'm just concerned about the overall stability of that method, since the bug's discovery was quite recent. There is also this [comment|https://github.com/facebook/rocksdb/blob/master/include/rocksdb/db.h#L284-L299] in RocksDB's source. I'll put an implementation of RocksDBMapState.clear() using DeleteRange up for review soon.

> Improve performance of RocksDBMapState.clear()
> ----------------------------------------------
>
>                 Key: FLINK-9070
>                 URL: https://issues.apache.org/jira/browse/FLINK-9070
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.6.0
>            Reporter: Truong Duc Kien
>            Priority: Minor
>
> Currently, RocksDBMapState.clear() is implemented by iterating over all the keys and dropping them one by one. This iteration can be quite slow for:
> * Large maps
> * High-churn maps with a lot of tombstones
>
> There are a few methods to speed up deletion of a range of keys, each with its own caveats:
> * DeleteRange: still experimental, likely buggy
> * DeleteFilesInRange + CompactRange: only good for large ranges
>
> Flink could also keep a list of inserted keys in memory, then delete them directly without having to iterate over the RocksDB database again.
>
> Reference:
> * [RocksDB article about range deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
> * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
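A DeleteRange-based clear() needs an exclusive upper bound covering every key that shares the map state's key prefix. A minimal sketch of that bound computation, assuming a hypothetical helper (`PrefixRange.upperBound` is not Flink or RocksDB code; the deleteRange call shown in the comment is the RocksJava API one would pass the bound to):

```java
import java.util.Arrays;

public class PrefixRange {
    /**
     * Returns the smallest byte array strictly greater than every key that
     * starts with {@code prefix}, or null if no such bound exists (i.e. the
     * prefix is all 0xFF bytes, so the range runs to the end of the database).
     */
    public static byte[] upperBound(byte[] prefix) {
        byte[] bound = Arrays.copyOf(prefix, prefix.length);
        // Walk backwards to the last byte that can be incremented,
        // dropping trailing 0xFF bytes (they cannot carry the increment).
        for (int i = bound.length - 1; i >= 0; i--) {
            if (bound[i] != (byte) 0xFF) {
                bound[i]++;
                return Arrays.copyOf(bound, i + 1);
            }
        }
        return null;
    }
}
```

With the RocksJava bindings, the range could then be dropped in one call, e.g. `db.deleteRange(columnFamily, prefix, PrefixRange.upperBound(prefix))` (handling the null all-0xFF case separately), instead of iterating and deleting key by key.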