On 13/06/2022 18:37, Stefan Kooman wrote:
On 6/13/22 18:21, Eric Le Lay wrote:
Those objects are deleted but still have snapshots, even though the pool
itself doesn't have any snapshots.
What could cause that?
root@hpc1a:~# rados -p storage stat rbd_data.5b423b48a4643f.000000000006a4e5
error stat-ing storage/rbd_data.5b423b48a4643f.000000000006a4e5: (2) No such file or directory
root@hpc1a:~# rados -p storage lssnap
0 snaps
root@hpc1a:~# rados -p storage listsnaps rbd_data.5b423b48a4643f.000000000006a4e5
rbd_data.5b423b48a4643f.000000000006a4e5:
cloneid  snaps  size     overlap
1160     1160   4194304  [1048576~32768,1097728~16384,1228800~16384,1409024~16384,1441792~16384,1572864~16384,1720320~16384,1900544~16384,2310144~16384]
1364     1364   4194304  []
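Side note: "rados lssnap" only lists pool-level snapshots; RBD creates
self-managed snapshots, so clones pinned by RBD snaps never show up there.
One way to see whether snap ids 1160/1364 are still queued for purging,
assuming an Octopus-or-later cluster that prints a per-pool
removed_snaps_queue, is:

root@hpc1a:~# ceph osd pool ls detail | grep storage

and to list the snapshots on the owning image; the image name below is a
placeholder, recoverable by matching block_name_prefix in "rbd info":

root@hpc1a:~# rbd snap ls storage/<image-with-prefix-5b423b48a4643f>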
Do the OSDs still need to trim the snapshots? Does data usage decline
over time?
Gr. Stefan
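One quick way to answer that, assuming the standard ceph CLI, is to look
for PGs in snaptrim states and at the cluster flags:

ceph osd dump | grep flags       # a nosnaptrim flag here would block trimming
ceph pg ls snaptrim              # PGs actively trimming
ceph pg ls snaptrim_wait         # PGs queued to trim

If both PG lists stay empty while clones still hold space, the OSDs
believe there is nothing left to trim.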
Thanks, Stefan, for your time!
Snaptrims were re-enabled a week ago, but the OSDs only snaptrim newly
deleted snapshots.
Restarting or outing an OSD doesn't trigger them either.
Crush-reweighting an OSD to 0 indeed results in more storage being used!
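For anyone hitting the same wall, the knobs that are supposed to govern
trimming (sketched here on the assumption of a Mimic-or-later cluster with
the central config store) are the nosnaptrim flag and the trim throttles:

ceph osd unset nosnaptrim                                 # re-enable trimming cluster-wide
ceph config set osd osd_snap_trim_sleep 0                 # no sleep between trim ops
ceph config set osd osd_pg_max_concurrent_snap_trims 4    # default is 2

Given the observation above, none of these seems to restart trimming for
snaps that were deleted while trimming was disabled.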
I'll drop the cluster and start again from scratch.
Best,
Eric