Yes, we changed osd_memory_target to 10 GB on just our index OSDs. These OSDs 
have over 300 GB of lz4-compressed bucket index omap data. Here is a graph 
showing the latencies before and after that single change:

https://pasteboard.co/IMCUWa1t3Uau.png
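
One way to apply the same change via the central config (a rough sketch, 
assuming the index OSDs are osd.10 through osd.12, or that they share a 
dedicated CRUSH device class named "nvme"; substitute your own OSD IDs or 
class name, the value is in bytes, i.e. 10 GiB):

    # Raise the memory target on individual index OSDs (10 GiB, in bytes)
    ceph config set osd.10 osd_memory_target 10737418240
    ceph config set osd.11 osd_memory_target 10737418240
    ceph config set osd.12 osd_memory_target 10737418240

    # Or, if the index OSDs have their own device class, one mask covers them all
    ceph config set osd/class:nvme osd_memory_target 10737418240

    # Confirm what a given OSD actually resolved
    ceph config get osd.10 osd_memory_target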

Cory Snyder


From: Anthony D'Atri <anthony.da...@gmail.com>
Sent: Friday, February 2, 2024 2:15 PM
To: Cory Snyder <csny...@1111systems.com>
Cc: ceph-users <ceph-users@ceph.io>
Subject: Re: [ceph-users] OSD read latency grows over time 
 
You adjusted osd_memory_target?  Higher than the default 4GB?



Another thing that we've found is that rocksdb can become quite slow if it 
doesn't have enough memory for internal caches. As our cluster usage has grown, 
we've needed to increase OSD memory in accordance with bucket index pool usage. 
On one cluster, we found that increasing OSD memory improved rocksdb latencies 
by over 10x.
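
For anyone wondering whether they're in the same situation, the rocksdb perf 
counters on an affected OSD are a reasonable place to look. A rough sketch, 
run on the OSD's host, with osd.10 as a placeholder ID (counter names and 
sections can vary somewhat between releases):

    # Dump the OSD's internal perf counters and keep only the rocksdb section
    ceph daemon osd.10 perf dump | jq '.rocksdb'

    # Optionally zero the counters first for a clean before/after comparison
    ceph daemon osd.10 perf reset all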