sryanyuan commented on PR #3375:
URL: https://github.com/apache/kvrocks/pull/3375#issuecomment-4053360706

   You're correct that a sudden burst of massive deletes can increase 
compaction pressure in RocksDB.
   
   In our implementation, FLUSHSLOTS uses RocksDB's DeleteRange API, which 
marks an entire key range for deletion with a single range tombstone rather 
than writing a tombstone for every key. This reduces write amplification 
compared to issuing many individual Delete operations.
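   To illustrate the difference, here is a minimal C++ sketch of dropping a 
slot's keys with a single DeleteRange call instead of per-key Deletes. The key 
encoding (`slot_prefix` / `slot_prefix_end`) is illustrative only and does not 
reflect kvrocks' actual slot-key layout; the RocksDB calls themselves 
(`DB::DeleteRange`, `WriteBatch`) are real API.

   ```cpp
   #include <rocksdb/db.h>
   #include <string>

   // Hypothetical helper: build the half-open key range [begin, end) that
   // covers one slot. The real kvrocks encoding differs.
   void FlushOneSlot(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
                     const std::string& slot_prefix,
                     const std::string& slot_prefix_end) {
     // One range tombstone covers the whole slot, regardless of how many
     // keys it contains -- no per-key Delete is written.
     rocksdb::Status s = db->DeleteRange(rocksdb::WriteOptions(), cf,
                                         slot_prefix, slot_prefix_end);
     // In production code, check s.ok() and surface the error to the caller.
   }
   ```

   The half-open range `[begin, end)` convention matches DeleteRange's 
semantics: the end key itself is not deleted.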
   
   However, DeleteRange still leaves range tombstones and the now-invalidated 
data in the LSM tree; the space is only reclaimed during subsequent 
compactions, which rewrite the affected SST files and can add extra compaction 
load.
   
   This command is intended for special operational scenarios — for example, 
when a cluster is not serving live traffic and an operator needs to bulk-clear 
data for specific slots. In such cases, temporary compaction pressure is 
acceptable, and we avoid running it during normal high-load periods.
   
   To mitigate impact, we recommend:
   
   * Running FLUSHSLOTS during off-peak hours.
   * Limiting the number of slots cleared in a single operation (e.g., only a 
small fraction of the 16,384 slots at a time) to reduce the volume of deleted 
data and spread GC load over time.
   * Adjusting compaction options (e.g., background threads, delete range 
thresholds) if necessary.
   * Optionally forcing manual compaction on affected ranges after deletion.
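   The last two points can be sketched together in C++. This is a hedged 
example, not kvrocks' implementation: the option values are placeholders an 
operator would tune, and the key range bounds are assumed to come from the same 
slot encoding used at deletion time. `max_background_jobs`, 
`CompactRangeOptions`, and `BottommostLevelCompaction` are real RocksDB 
knobs.

   ```cpp
   #include <rocksdb/db.h>
   #include <rocksdb/options.h>
   #include <string>

   // Sketch: after a bulk FLUSHSLOTS-style deletion, force a manual
   // compaction over the deleted range so tombstones are dropped promptly
   // instead of waiting for background compaction to reach them.
   void CompactDeletedRange(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
                            const std::string& begin_key,
                            const std::string& end_key) {
     rocksdb::CompactRangeOptions opts;
     // Rewrite bottommost files too, so range tombstones themselves are
     // removed rather than merely pushed down a level.
     opts.bottommost_level_compaction =
         rocksdb::BottommostLevelCompaction::kForce;

     rocksdb::Slice begin(begin_key), end(end_key);
     rocksdb::Status s = db->CompactRange(opts, cf, &begin, &end);
     // Check s.ok() in real code; manual compaction is I/O-heavy, so run it
     // off-peak, as recommended above.
   }

   // Sketch: give compaction more background threads before a planned bulk
   // delete. The value 8 is an arbitrary example, not a recommendation.
   void TuneForBulkDelete(rocksdb::Options* options) {
     options->max_background_jobs = 8;  // placeholder value; tune per host
   }
   ```

   Forcing bottommost-level compaction trades an immediate burst of I/O for 
prompt space reclamation, which is usually the right trade in the offline 
scenario described above.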


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
