Thanks, we are under a lot less storage pressure now.
0. I restarted 30 OSDs on one machine; the removed_snaps_queue was not
reduced, but a large amount of storage space was released.
1. Why did restarting the OSDs release so much space?
Here are the Ceph details:
ceph version 16.2.7
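For anyone following along, the queue can be inspected with `ceph osd pool ls detail` (or its `--format json` variant) on Octopus and later. Below is a minimal sketch of counting the queued snap-removal intervals per pool from that JSON; the sample payload and the `removed_snaps_queue` field layout are assumptions based on Pacific-era output, so verify against your own cluster's dump:

```python
import json

# Hypothetical, abridged sample of `ceph osd pool ls detail --format json`
# output. Field names here are assumptions; check them against a real dump.
sample = '''
[
  {"pool_name": "cinder-volumes",
   "removed_snaps_queue": [{"begin": "4", "length": "2"},
                           {"begin": "9", "length": "1"}]}
]
'''

def queued_snap_intervals(pools_json: str) -> dict:
    """Map pool name -> number of removed_snaps_queue intervals."""
    return {p["pool_name"]: len(p.get("removed_snaps_queue", []))
            for p in json.loads(pools_json)}

print(queued_snap_intervals(sample))  # {'cinder-volumes': 2}
```

Watching this count over time shows whether the OSDs' background snaptrim is actually draining the queue, which is what restarting the OSDs appears to have kicked off here.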
Hello,
I'm encountering an issue with Ceph when using it as the backend storage for
OpenStack Cinder. Specifically, after deleting RBD snapshots through Cinder,
I've noticed a significant increase in the removed_snaps_queue entries within
the corresponding Ceph pool. It seems to affect the