Hi Mark,

>> In v17.2.7 we enabled a feature that automatically performs a compaction
>> if too many tombstones are present during iteration in RocksDB.  It
>> might be worth upgrading to see if it helps (you might have to try
>> tweaking the settings if the defaults aren't helping enough).  The PR is
>> here:
>>
>> https://github.com/ceph/ceph/pull/50893
We upgraded Ceph to v17.2.7 yesterday. Unfortunately, I still see growing
latency on the OSDs hosting the index pool. I will try tuning the
rocksdb_cf_compact_on_deletion options as you suggested.

I started by decreasing rocksdb_cf_compact_on_deletion_trigger from 16384
to 512 with:

# ceph tell 'osd.*' injectargs '--rocksdb_cf_compact_on_deletion_trigger 512'
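
To double-check that an injected value is actually live on a given daemon,
I believe the running value can be queried with ceph tell as well, e.g.:

# ceph tell 'osd.435' config get rocksdb_cf_compact_on_deletion_trigger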

At first glance, nothing has changed in the per-OSD latency graphs. To
force compactions, I then decreased the trigger to 32 deletions per window
on a single OSD where I see increasing latency, but after approximately 40
minutes the graphs still show no change.

# ceph tell 'osd.435' injectargs '--rocksdb_cf_compact_on_deletion_trigger 32'
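
As a sanity check, I may also force a full compaction on that OSD, to see
whether a compaction by itself brings the latency down (as far as I know,
ceph tell can trigger one directly):

# ceph tell 'osd.435' compact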

I haven't touched rocksdb_cf_compact_on_deletion_sliding_window yet; it is
still at its default of 32768 entries. If I understand the mechanics
correctly, this means a compaction should now be scheduled on osd.435
whenever 32 or more tombstones are encountered within any window of 32768
consecutive keys during iteration.

Do you know whether rocksdb_cf_compact_on_deletion_trigger and
rocksdb_cf_compact_on_deletion_sliding_window can be changed at runtime,
without an OSD restart?
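
In the meantime, I will also set the values via the mon config store so
they survive an OSD restart either way (assuming these options can be set
there, both cluster-wide and per daemon):

# ceph config set osd rocksdb_cf_compact_on_deletion_trigger 512
# ceph config set osd.435 rocksdb_cf_compact_on_deletion_trigger 32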

--
Thank you,
Roman