I don’t believe there is any tooling to find and clean orphaned bucket index 
shards. So if you’re certain they’re no longer needed, you can use `rados` 
commands to remove the objects.
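For example, assuming the `14548925.2` instance really is orphaned (it does 
not appear in your `bucket stats` output below), a minimal sketch run from 
the Rook toolbox pod might look like the following. Verify the marker first, 
e.g. with `radosgw-admin metadata list bucket.instance`, since removing the 
index shards of a live bucket will corrupt it:

```
# Sketch only -- pool and marker taken from the listing below.
# Double-check that the marker belongs to a deleted bucket before running.
POOL=ceph-poc-object-store-ssd-index.rgw.buckets.index
MARKER=83a2aeca-b5a0-46b2-843b-fb34884bb148.14548925.2

# Review the shard objects for the orphaned instance, then remove them.
rados -p "$POOL" ls | grep "^\.dir\.${MARKER}\." | while read -r obj; do
    rados -p "$POOL" rm "$obj"
done
```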

Eric
(he/him)

> On Sep 27, 2022, at 2:37 AM, Yuji Ito (伊藤 祐司) <yuji-...@cybozu.co.jp> wrote:
> 
> Hi,
> 
> I have encountered a problem after deleting an RGW bucket: some bucket index 
> shard objects appear to be left behind. Could you tell me the recommended way 
> to delete these objects? Is it OK to just delete them, or should I use some 
> dedicated Ceph commands? I couldn't find how to do this in the official 
> documentation.
> 
> Environment:
> Rook: 1.9.6
> Ceph: 16.2.10
> 
> Here are the details:
> I got the following HEALTH_WARN after deleting an RGW bucket.
> 
> ```
> $ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- ceph health detail
> HEALTH_WARN 35 large omap objects
> [WRN] LARGE_OMAP_OBJECTS: 35 large omap objects
>    35 large objects found in pool 
> 'ceph-poc-object-store-ssd-index.rgw.buckets.index'
>    Search the cluster log for 'Large omap object found' for more details.
> ```
> 
> I tried the `bilog trim` and `reshard stale-instances` commands (command 
> forms sketched after the links), referring to the following documents:
> - https://access.redhat.com/solutions/6450561
> - 
> https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_ubuntu/index#cleaning-stale-instances-after-resharding-rgw
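> 
> The commands I ran were roughly of the following form (the bucket name is a 
> placeholder; `reshard stale-instances rm` is the subcommand name on Pacific):
> 
> ```
> # Trim the bucket index log for the affected bucket:
> radosgw-admin bilog trim --bucket=<bucket-name>
> 
> # List, then remove, stale bucket instances left behind by resharding:
> radosgw-admin reshard stale-instances list
> radosgw-admin reshard stale-instances rm
> ```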
> 
> Then I ran a deep scrub and the warning disappeared. However, it reappeared 
> later. After investigating, I found that the bucket index shard objects of a 
> deleted bucket still exist.
> 
> There were two buckets.
> 
> ```
> $ kubectl exec -it -n ceph-poc deploy/rook-ceph-tools -- radosgw-admin bucket 
> stats | jq '.[] | {"bucket": .bucket, "id": .id}' | jq .
> {
>  "bucket": "csa-large-omap-9332ba5c-3cb5-4ff7-98cf-1729b44b954c",
>  "id": "83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1"
> }
> {
>  "bucket": "rook-ceph-bucket-checker-dfef5d3c-036a-428a-b4df-ae6be5d5c41a",
>  "id": "83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1"
> }
> ```
> 
> However, there were three sets of bucket index shard objects.
> 
> ```
> $ kubectl exec -n ceph-poc deploy/rook-ceph-tools -- rados ls --pool 
> ceph-poc-object-store-ssd-index.rgw.buckets.index | sort
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.14548925.2.0
> <...snip...>
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.14548925.2.9
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.0
> <...snip...>
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.53178977.1.9
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.0
> <...snip...>
> .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.62065601.1.9
> ```
> 
> I would like to delete the above unused objects with the `rados rm` command 
> (for example, as below), but I'm not sure whether this operation is safe. 
> I would like to know whether manual deletion is acceptable and, if so, the 
> proper procedure for it.
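> 
> The operation I have in mind is simply (pool and object name taken from the 
> listing above; one `rm` per shard object):
> 
> ```
> rados -p ceph-poc-object-store-ssd-index.rgw.buckets.index \
>     rm .dir.83a2aeca-b5a0-46b2-843b-fb34884bb148.14548925.2.0
> ```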
> 
> Thanks,
> Yuji

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
