Hi,

osd_memory_target is a "target", i.e. an OSD makes an effort to keep its
memory consumption around the specified amount of RAM, but it won't consume
less than it needs for its operation and caches, which have minimum sizes
controlled by options such as osd_memory_cache_min, bluestore_cache_size,
bluestore_cache_size_hdd, bluestore_cache_size_ssd, etc. The recommended
and default OSD memory target is 4 GB, so with a 1 GB target the OSD will
simply exceed it: the target is best-effort, not a hard limit.
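
If you want to see what your cluster is actually configured with, you can
query the relevant options, e.g. (just a sketch, run from a node with an
admin keyring; osd.0 is an arbitrary example OSD id):

ceph config get osd osd_memory_target
ceph config get osd osd_memory_cache_min
ceph config show osd.0 osd_memory_target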

Your nodes have plenty of RAM, so I don't see why you would want to reduce
OSD memory consumption below the recommended defaults, especially
considering that in-memory caches are important for Ceph performance: RAM
is many times faster than even the fastest storage devices. I run my OSDs
with osd_memory_target=17179869184 (16 GB) and it helps, especially with
slower HDD-backed OSDs.
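
To try a larger target, something like the following should work (again
only a sketch, not tuned for your cluster; osd.0 is an example id used to
verify the value a running OSD has picked up):

ceph config set osd osd_memory_target 17179869184
ceph tell osd.0 config get osd_memory_target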

/Z

On Thu, 16 Nov 2023 at 01:02, Nguyễn Hữu Khôi <nguyenhuukho...@gmail.com>
wrote:

> Hello,
> I am using a Ceph cluster. After monitoring it, I set:
>
> ceph config set osd osd_memory_target_autotune false
>
> ceph config set osd osd_memory_target 1G
>
> Then I restarted all OSD services and tested again. I just ran fio commands
> from multiple clients, and I see that OSD memory consumption is over 1 GB.
> Could you help me understand this case?
>
> Ceph version: Quincy
>
> OSD: 3 nodes with 11 NVMe drives each and 512 GB RAM per node.
>
> CPU: 2-socket Xeon Gold 6138 with 56 cores per socket.
>
> Network: 25 Gbps x 2 for the public network and 25 Gbps x 2 for the storage
> network. MTU is 9000.
>
> Thank you very much.
>
>
> Nguyen Huu Khoi
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
