I think this is capped at 1000 by the config setting. I've used the aws
and s3cmd clients to delete more than 1000 objects at a time, and it
works even with the config setting capped at 1000, but it is a bit slow.
#> ceph config help rgw_delete_multi_obj_max_num
rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
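For what it's worth, the reason more than 1000 objects "work" from those clients is that both page through the listing and issue one DeleteObjects request per 1000 keys internally. A sketch (bucket name and prefix are placeholders, not from this thread):

```shell
# Both clients batch the recursive delete into repeated 1000-key
# DeleteObjects requests under the hood, hence slow but functional:
aws s3 rm s3://mybucket/some-prefix/ --recursive
s3cmd rm --recursive s3://mybucket/some-prefix/
```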
Hi,
keep in mind that deleting objects in RGW involves its garbage collector
and lifecycle management. Thus the real deletion impact may occur later.
If you are able to use radosgw-admin you can instruct it to skip the
garbage collector and delete objects immediately. This is useful for
rem
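If the goal is to empty a whole bucket, a sketch of the radosgw-admin approach (the bucket name is a placeholder; --bypass-gc is the flag that skips the garbage collector, --purge-objects removes the objects along with the bucket):

```shell
# Remove a bucket and all of its objects, bypassing the RGW garbage
# collector so space is reclaimed immediately rather than later.
radosgw-admin bucket rm --bucket=mybucket --purge-objects --bypass-gc
```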
Multi-object delete is inherently limited to 1000 keys per operation by AWS S3:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
This is currently a hard-coded limit in RGW as well. You will need to
batch your deletes in groups of 1000. radosgw-admin has a
"--purge-objects" option
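A minimal sketch of that batching, assuming the keys to delete sit one per line in a file; keys.txt, the demo key list, and the bucket name are all placeholders, and the actual delete call is left commented out:

```shell
#!/bin/sh
# Demo input: 2500 fake keys, one per line. In practice this would come
# from your own listing (e.g. aws s3api list-objects-v2).
seq -f 'obj-%g' 1 2500 > keys.txt

# Split into chunks of at most 1000 keys, the S3 DeleteObjects maximum.
split -l 1000 keys.txt batch_

for f in batch_*; do
  # Build the JSON payload {"Objects":[{"Key":"..."},...],"Quiet":true}
  payload=$(printf '{"Objects":[%s],"Quiet":true}' \
    "$(sed 's/.*/{"Key":"&"}/' "$f" | paste -sd, -)")
  # One request per batch; uncomment to actually delete:
  # aws s3api delete-objects --bucket mybucket --delete "$payload"
  echo "$f: $(wc -l < "$f") keys"
done
```

Each loop iteration maps to exactly one DeleteObjects request, so 2500 keys become three requests (1000 + 1000 + 500).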
thx.
I tried with:
ceph config set mon rgw_delete_multi_obj_max_num 1
ceph config set client rgw_delete_multi_obj_max_num 1
ceph config set global rgw_delete_multi_obj_max_num 1
but still only 1000 objects get deleted.
Is the target something different?
On Wed, May 17, 2023 at 11:58
Hi Rok,
try this:
rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
(int, advanced)
Default: 1000
Can update at runtime: true
Services: [rgw]
config set
WHO: client. or client.rgw
KEY: rgw_delete_multi_obj_max_num
VALUE: 1
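Put together, that would look like the following; the 5000 is only an example value above the default, not a recommendation from this thread, and if a running radosgw does not pick the change up it may be worth restarting the daemon:

```shell
# Raise the per-request cap for all RGW daemons and read it back
# (5000 is an example value, not a recommendation):
ceph config set client.rgw rgw_delete_multi_obj_max_num 5000
ceph config get client.rgw rgw_delete_multi_obj_max_num
```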
Regards,
If it works I’d be amazed. We have this slow and limited delete issue also.
What we’ve done is run deletes against the same bucket from multiple
servers in parallel via s3cmd.
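On a single host the same fan-out idea can be sketched with xargs; the prefixes and bucket are placeholders, and the echo makes it a dry run (remove it to actually issue the deletes):

```shell
# Run one recursive s3cmd delete per prefix, four workers at a time.
# The echo makes this a dry run; drop it to really delete.
printf '%s\n' logs/ tmp/ old/ cache/ |
  xargs -n1 -P4 -I{} echo s3cmd rm --recursive "s3://mybucket/{}"
```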
Istvan Szabo
Staff Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan
Thx for the input.
I tried several config sets e.g.:
ceph config set client.radosgw.mon2 rgw_delete_multi_obj_max_num 1
ceph config set client.radosgw.mon1 rgw_delete_multi_obj_max_num 1
ceph config set client.rgw rgw_delete_multi_obj_max_num 1
where client.radosgw.mon2 is the same as
> [...] We have this slow and limited delete issue also. [...]
That usually happens, apart from command-list length limitations,
because many Ceph storage backends have too little committed IOPS
(write, but not only write) for mass metadata (and equivalently
small-data) operations, never mind for runnin
Since 1000 is the hard-coded limit in AWS, maybe you need to set
something on the client side as well? "client.rgw" should work for
setting the config in RGW.
Daniel
On 5/18/23 03:01, Rok Jaklič wrote:
> Thx for the input.
> I tried several config sets e.g.:
> ceph config set client.radosgw.mon2 rgw_d