Some of it is transferable to RocksDB on mons nonetheless.
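
For what it's worth, the monitors' RocksDB options can be overridden through the `mon_rocksdb_options` setting, which is how some of the OSD-side tuning carries over. A sketch of what that might look like in ceph.conf — the values below are illustrative examples, not tested recommendations for this workload:

```ini
# Illustrative only: override the RocksDB options used by the mon store.
# These values mirror the kind of knobs discussed for OSD RocksDB tuning;
# they are examples, not recommendations for any specific cluster.
[mon]
mon_rocksdb_options = write_buffer_size=33554432,compression=kNoCompression,level_compaction_dynamic_level_bytes=true
```

As far as I can tell the mons need a restart to pick up a changed value, so roll it through the quorum one mon at a time.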

> but their specs exceed Ceph hardware recommendations by a good margin

Please point me to such recommendations; if they're on docs.ceph.com, I'll get them updated.
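
In the meantime, to put a number on how much is actually being written to the boot drive, one option is to sample /proc/diskstats and convert the sectors-written delta into a rate. A minimal sketch — the device name "sda" is a placeholder, and the 512-byte sector unit is per the kernel's iostats documentation:

```python
#!/usr/bin/env python3
"""Rough sketch: estimate write throughput to a block device from /proc/diskstats.

Assumptions (for illustration only): the device is named "sda", and sectors
are counted in 512-byte units as documented in the kernel's iostats.rst.
"""
import time

SECTOR_BYTES = 512  # /proc/diskstats reports sectors in 512-byte units


def sectors_written(diskstats_text: str, device: str) -> int:
    """Return the cumulative sectors-written counter (field 10) for `device`."""
    for line in diskstats_text.splitlines():
        fields = line.split()
        # Layout: major minor name reads reads_merged sectors_read ms_reading
        #         writes writes_merged sectors_written ...  -> index 9
        if len(fields) > 9 and fields[2] == device:
            return int(fields[9])
    raise ValueError(f"device {device!r} not found in diskstats")


def write_rate_bytes_per_sec(device: str, interval: float = 5.0) -> float:
    """Sample the counter twice and return the average write rate in bytes/s."""
    with open("/proc/diskstats") as f:
        before = sectors_written(f.read(), device)
    time.sleep(interval)
    with open("/proc/diskstats") as f:
        after = sectors_written(f.read(), device)
    return (after - before) * SECTOR_BYTES / interval


if __name__ == "__main__":
    rate = write_rate_bytes_per_sec("sda")  # "sda" is a placeholder device name
    print(f"~{rate / 1024:.1f} KiB/s written")
```

Comparing that rate against the drive's rated endurance should make it clear how quickly the mon store churn will wear it out.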

> On Oct 13, 2023, at 13:34, Zakhar Kirpichenko <zak...@gmail.com> wrote:
> 
> Thank you, Anthony. As I explained earlier, the article you sent is about 
> RocksDB tuning for Bluestore OSDs, while the issue at hand is not with OSDs 
> but with the monitors and their RocksDB store. Indeed, the drives are not 
> enterprise-grade, but their specs exceed Ceph's hardware recommendations by 
> a good margin. They are used as boot drives only and aren't supposed to be 
> written to continuously at high rates, which is unfortunately what is 
> happening. I am trying to determine why it is happening and how the issue 
> can be alleviated or resolved; unfortunately, monitor RocksDB usage and 
> tunables appear to be entirely undocumented. 
> 
> /Z
> 
> On Fri, 13 Oct 2023 at 20:11, Anthony D'Atri <anthony.da...@gmail.com> wrote:
>> cf. Mark's article I sent you re RocksDB tuning.  I suspect that with Reef 
>> you would experience fewer writes.  Universal compaction might also help, 
>> but in the end this SSD is a client SKU and really not suited for enterprise 
>> use.  If you had the 1TB SKU you'd get much longer life, or you could change 
>> the overprovisioning on the ones you have.
>> 
>>> On Oct 13, 2023, at 12:30, Zakhar Kirpichenko <zak...@gmail.com> wrote:
>>> 
>>> I would very much appreciate it if someone with a better understanding of
>>> monitor internals and use of RocksDB could please chip in.
>> 
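
On the overprovisioning point in the quoted message above: one common way to do it (my assumption here, not something Anthony specified) is to secure-erase the drive and then partition only part of its capacity, leaving the rest unaddressed. The effective extra overprovisioning is simple arithmetic:

```python
def effective_overprovisioning(raw_bytes: int, used_bytes: int) -> float:
    """Fraction of the drive left unaddressed, on top of the vendor's hidden OP.

    raw_bytes:  the drive's full user-addressable capacity
    used_bytes: the capacity actually partitioned/used
    """
    return (raw_bytes - used_bytes) / raw_bytes


# Example: partitioning 400 GB of a 512 GB-class SSD
op = effective_overprovisioning(512_000_000_000, 400_000_000_000)
print(f"{op:.1%}")  # about 21.9% of the flash left for the controller
```

More spare area generally lowers write amplification, which is what matters for the mon-store churn being discussed here.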

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
