Hi,

our Ceph 14.2.3 cluster has so far run smoothly with replicated and EC pools, 
but for the past couple of days one of the dedicated replication nodes has been 
consuming up to 99% of its swap and staying at that level. The other two 
replicated nodes use roughly 50-60% of swap.
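
To see which processes actually hold that swap, one quick check (a sketch 
using only standard Linux tooling, run on the affected node) would be:

    # sum of swapped-out memory per process (values in kB), largest first
    grep VmSwap /proc/[0-9]*/status | sort -t: -k3 -rn | head
    # or, if smem happens to be installed:
    smem -rs swap | head

If the top entries are all ceph-osd processes, the OSD memory sizing below is 
the likely place to look.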

Each node has 24 NVMe OSDs, all BlueStore with default settings, and 128 GB of 
RAM. vm.swappiness is set to 10.
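
For reference, with BlueStore defaults each OSD aims for osd_memory_target 
(4 GiB by default in Nautilus), so 24 OSDs already target roughly 96 GiB 
before the OS and anything else, which may explain the swap pressure on a 
128 GB node. A sketch of how one might verify the effective values (standard 
ceph CLI; osd.0 is a placeholder daemon, and the daemon command must run on 
the node hosting it):

    # effective memory target of one running OSD (osd.0 is a placeholder)
    ceph daemon osd.0 config get osd_memory_target
    # any cluster-wide override set via the config database
    ceph config get osd osd_memory_target
    # confirm the kernel swappiness setting currently in effect
    sysctl vm.swappiness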

Do you have any suggestions on how to handle or reduce the swap usage?

        Thanks for your feedback and regards, Götz
