Hi all,

I have a question about the garbage collector within RGW. We run Nautilus 
14.2.8 and have 32 gc objects in the gc pool, holding a total of 39 GB of 
garbage that needs to be processed.
When we run,

  radosgw-admin gc process --include-all

entries are processed but most of the underlying objects are not deleted. This 
can be verified by adding --debug-rgw=5 to the command and then stat-ing the 
objects that are reported as processed. The monitoring also doesn't show a 
large number of objects being deleted by the gc, so I assume it doesn't 
actually delete them. Could it be due to a renewed timestamp? (not sure about 
this) Has anybody had similar issues removing a large amount of garbage, and 
is there a way to make the gc actually delete the objects?
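For reference, this is roughly how we spot-check it. The data pool name 
`default.rgw.buckets.data` and the placeholder tail-object name are 
assumptions; adjust them to your setup:

```shell
# Snapshot the pending gc entries before processing.
radosgw-admin gc list --include-all > gc-list.json

# Run the collector with extra logging to see which entries it claims
# to have processed (log goes to stderr).
radosgw-admin gc process --include-all --debug-rgw=5 2> gc-process.log

# If a tail object was really deleted, stat should now fail with ENOENT.
# '<tail-object-name>' is a placeholder taken from the gc list output;
# the pool name is an assumption based on default RGW pool naming.
rados -p default.rgw.buckets.data stat '<tail-object-name>'
```

In our case the stat still succeeds for most objects, which is why we believe 
nothing is actually removed.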
Most of the objects in the gc list are __multipart__ objects. Are they 
processed differently than single-part objects, e.g. are all the parts 
collected before the deletion actually happens, or how is this implemented? 
The garbage keeps growing and the gc cannot keep up, which worries us a bit. 
Also, we cannot bypass the gc because the bucket is still in use.

I also thought about reinitializing the gc in order to get an up-to-date list 
of garbage (some entries shown by `radosgw-admin gc list --include-all` are 
over a month old). Is there a way to make this happen, and how safe is it?
I thought about exporting the omap objects from the gc pool (as a backup) and 
then deleting the objects within the pool (or renaming the pool).
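A rough sketch of such a backup, assuming the gc entries live in the log pool 
under the `gc` namespace as shard objects gc.0 .. gc.31 (pool name, namespace, 
and the shard count of 32 are assumptions based on the defaults, including 
rgw_gc_max_objs = 32; verify with the ls step first):

```shell
# List the gc shard objects to confirm the naming on this cluster.
rados -p default.rgw.log --namespace gc ls

# Dump the omap keys/values of each shard to a file as a backup
# before touching anything in the pool.
for i in $(seq 0 31); do
  rados -p default.rgw.log --namespace gc listomapvals "gc.$i" > "gc.$i.omap.txt"
done
```

Whether the gc can safely be reset this way is exactly what I'm unsure about, 
hence the question.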

I appreciate any input and thank you in advance.

Regards,
Michael

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
