Hi,
we are seeing quite high memory usage by OSDs since Nautilus, averaging 
10 GB per OSD for 10 TB HDDs. I have also had OOM issues on 128 GB systems 
because some single OSD processes grew to use up to 32% of RAM.
Here is an example of how they look on average: https://i.imgur.com/kXCtxMe.png
Is that normal? I never saw this on Luminous. Memory leaks? We are using all 
default values; the OSDs have no special configuration. The use case is CephFS.
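In case it helps with diagnosis: in Nautilus, OSD memory is governed by the 
osd_memory_target option (4 GiB by default), and per-daemon memory breakdowns 
are available via the admin socket. A sketch of commands one could run to 
compare the configured target against actual usage (osd.0 is a placeholder ID; 
adjust to a daemon running on the affected host):

```shell
# Show the configured memory target for one OSD
# (the Nautilus default is 4294967296 bytes = 4 GiB)
ceph config get osd.0 osd_memory_target

# Dump per-pool memory accounting (bluestore caches, pglog, etc.)
# for that OSD via its admin socket
ceph daemon osd.0 dump_mempools

# tcmalloc heap statistics for all OSDs; memory freed by the OSD but
# not yet returned to the kernel shows up here
ceph tell osd.* heap stats
```

If dump_mempools reports far less than the resident set size, the gap is 
typically heap fragmentation or memory tcmalloc has not released, rather 
than cache usage.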

v14.2.4 on Ubuntu 18.04 LTS
This seems a bit high?
Thanks for any help
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
