On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian Pedersen wrote:
> Hi Martin,
> 
> Even before adding cold storage on HDD, I had the cluster with SSD only. That 
> also could not keep up with deleting the files.
> I am nowhere near I/O exhaustion on the SSDs or even the HDDs.
Please see my presentation from Cephalocon 2019 about RGW S3, where I
touch on slowness in lifecycle processing and deletion.

The efficiency of the code is very low: it requires a full scan of
the bucket index every single day. Depending on the traversal order
(unordered listing helps), it can take a very long time just to find
the items that are eligible for deletion, and even once it reaches
them, it's bound by the deletion time, which is also slow: the head
of each object is deleted synchronously in many cases, while the
tails are garbage-collected asynchronously.
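
To make the cost concrete, each daily LC pass has to do roughly the
equivalent of the following (a boto3 sketch with a made-up bucket,
prefix, and expiration age; this is not the actual RGW code path):

# Rough illustration of why lifecycle processing is O(total objects):
# every pass walks the whole bucket index and tests every key against
# the policy, even when only a handful of objects are actually due.
# Bucket name, prefix, and age below are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"
prefix = "logs/"          # prefix from the LC rule
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

expired = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff:
            expired.append(obj["Key"])

# Each delete removes the head object synchronously; the tail rados
# objects are only cleaned up later by garbage collection.
for key in expired:
    s3.delete_object(Bucket=bucket, Key=key)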

Fixing this isn't trivial: either you have to scan the entire bucket,
or you have to maintain a secondary index, in insertion order, for
EACH prefix in a lifecycle policy (see the sketch below).
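
For illustration only (a toy sketch of that second option, not
something that exists in RGW today), a per-prefix index kept in
insertion order would let the LC pass read only the entries that are
actually due and stop there:

# Toy sketch of a per-prefix secondary index kept in insertion order,
# so expiry only reads the entries older than the cutoff instead of
# scanning the whole bucket index. Not part of RGW; names are made up.
import bisect
from datetime import datetime, timedelta, timezone

class PrefixExpiryIndex:
    def __init__(self):
        self._entries = []  # (insert_time, key), kept sorted by time

    def record_insert(self, key, when):
        bisect.insort(self._entries, (when, key))

    def pop_expired(self, cutoff):
        # Everything strictly older than the cutoff sits at the front;
        # nothing newer is even looked at.
        i = bisect.bisect_left(self._entries, (cutoff,))
        due, self._entries = self._entries[:i], self._entries[i:]
        return [key for _, key in due]

idx = PrefixExpiryIndex()
idx.record_insert("logs/2019-08-01.gz", datetime(2019, 8, 1, tzinfo=timezone.utc))
idx.record_insert("logs/2019-09-30.gz", datetime(2019, 9, 30, tzinfo=timezone.utc))
print(idx.pop_expired(datetime.now(timezone.utc) - timedelta(days=30)))

The catch, of course, is that RGW would have to keep such an index
consistent on every write, for every prefix in every policy.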

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136


