Hello,
I am running a Ceph cluster. After monitoring it, I set:

ceph config set osd osd_memory_target_autotune false

ceph config set osd osd_memory_target 1G

Then I restarted all OSD services and ran the test again, using fio commands
from multiple clients, and I see that OSD memory consumption goes over 1 GB.
Could you help me understand this case?
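For reference, here is a sketch of how the applied values and per-daemon memory
usage could be double-checked with the standard Ceph CLI (osd.0 is just an
example daemon ID):

```shell
# Show the value each option resolves to for a specific daemon:
ceph config show osd.0 osd_memory_target
ceph config show osd.0 osd_memory_target_autotune

# On the host running that OSD, inspect what its memory is used for
# via the daemon's admin socket:
ceph daemon osd.0 dump_mempools
```

Note that osd_memory_target only steers the BlueStore cache sizing; it is a
best-effort target, not a hard limit on the process RSS.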

Ceph version: Quincy

OSD: 3 nodes with 11 NVMe drives each and 512 GB RAM per node.

CPU: 2-socket Xeon Gold 6138, 56 cores per socket.

Network: 2 x 25 Gbps for the public network and 2 x 25 Gbps for the storage
network; MTU 9000.

Thank you very much.


Nguyen Huu Khoi
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io