[ceph-users] Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

2024-02-13 Thread Josh Baergen
> 24 active+clean+snaptrim

I see snaptrimming happening in your status output - do you know if that was happening before restarting those OSDs? This is the mechanism by which OSDs clean up deleted snapshots, and once all OSDs have completed snaptrim for a given snapshot it should be removed from the removed_snaps_queue.
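A minimal sketch of how to watch this, assuming a reasonably recent Ceph CLI (exact state names and output formatting vary by release):

    # PGs currently trimming, or waiting to trim, deleted snapshots
    ceph pg ls snaptrim snaptrim_wait

    # Per-pool detail, including removed_snaps_queue entries
    ceph osd pool ls detail

Once no PGs remain in snaptrim/snaptrim_wait for a given snapshot, its entry should drop out of the queue shown in the pool detail.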

[ceph-users] Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

2024-02-12 Thread localhost Liam
Thanks, our storage is under a lot less stress now.

0. I rebooted 30 OSDs on one machine; the removed_snaps_queue did not shrink, but a large amount of storage space was released.
1. Why did restarting the OSDs release so much space?

Here are the Ceph details: ceph version 16.2.7
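For reference, a rough sketch of the sequence described above, assuming systemd-managed OSDs (the target name is the standard one, but verify on your distribution):

    # Capture usage before the restart
    ceph df

    # Restart every OSD on this machine
    sudo systemctl restart ceph-osd.target

    # Wait for peering to settle, then compare
    ceph -s
    ceph df

One plausible explanation for the freed space, consistent with the reply above, is that the restart unstuck snaptrim on those OSDs, letting them actually delete trimmed snapshot objects even while the queue itself still listed entries.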

[ceph-users] Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

2024-02-09 Thread Josh Baergen
Hello,

Which version of Ceph are you using? Are all of your OSDs currently up+in? If you're HEALTH_OK and all OSDs are up, snaptrim should work through the removed_snaps_queue and clear it over time, but I have seen cases where this seems to get stuck and restarting OSDs can help.

Josh
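The checks suggested above map onto a few standard commands, assuming a recent release (the OSD id 12 below is only an example):

    # Ceph version per daemon, and overall cluster health
    ceph versions
    ceph -s

    # Confirm all OSDs are up and in
    ceph osd stat

    # Restart a single suspect OSD (systemd-managed)
    sudo systemctl restart ceph-osd@12

Restarting one OSD at a time keeps the blast radius small; wait for the cluster to return to HEALTH_OK before moving to the next.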