[ceph-users] Unable to delete bucket - endless multipart uploads?

2021-02-23 Thread David Monschein
Hi All, We've been dealing with what seems to be a pretty annoying bug for a while now. We are unable to delete a customer's bucket that seems to have an extremely large number of aborted multipart uploads. I've had $(radosgw-admin bucket rm --bucket=pusulax --purge-objects) running in a screen
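[Editor's note] A minimal sketch of one way to clear the stale uploads before retrying the bucket removal, assuming boto3 and an S3 key with access to the bucket. The endpoint URL and credentials below are placeholders; only the bucket name comes from the message above.

# Sketch only: abort all incomplete multipart uploads in the bucket before
# retrying "radosgw-admin bucket rm --purge-objects".
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",   # assumed RGW endpoint
    aws_access_key_id="ACCESS_KEY",            # placeholder
    aws_secret_access_key="SECRET_KEY",        # placeholder
)

bucket = "pusulax"
paginator = s3.get_paginator("list_multipart_uploads")

aborted = 0
for page in paginator.paginate(Bucket=bucket):
    for upload in page.get("Uploads", []):
        # Abort each incomplete multipart upload so its parts can be reclaimed.
        s3.abort_multipart_upload(
            Bucket=bucket,
            Key=upload["Key"],
            UploadId=upload["UploadId"],
        )
        aborted += 1

print(f"aborted {aborted} incomplete multipart uploads")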

[ceph-users] Re: [RGW] Space usage vastly overestimated since Octopus upgrade

2020-07-15 Thread David Monschein
Hi Liam, All, We have also run into this bug: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PCYY2MKRPCPIXZLZV5NNBWVHDXKWXVAG/ Like you, we are also running Octopus 15.2.3. Downgrading the RGWs at this point is not ideal, but if a fix isn't found soon we might have to. Has a

[ceph-users] User stats - Object count wrong in Octopus?

2020-07-14 Thread David Monschein
Hi All, Sorry for the double email, I accidentally sent the previous e-mail with a stray keyboard shortcut before it was finished :) I'm investigating what appears to be a bug in RGW stats. This is a brand new cluster running 15.2.3. One of our customers reached out, saying they were hitting

[ceph-users] User stats - Object count wrong in Octopus?

2020-07-14 Thread David Monschein
Hi All, I'm investigating what appears to be a bug in RGW stats. This is a brand new cluster running 15.2.3. One of our customers reached out, saying they were hitting their quota (S3 error: 403 (QuotaExceeded)). The user-wide max_objects quota we set is 50 million objects, so this would be
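[Editor's note] A minimal sketch of how one might re-sync and inspect the user's RGW stats to compare the reported object count against the 50 million object quota. The uid is a placeholder, and the exact JSON field names vary between releases, so the sketch just dumps the whole stats block.

# Sketch only: re-sync and print a user's RGW stats via radosgw-admin.
import json
import subprocess

uid = "customer-uid"  # placeholder

out = subprocess.run(
    ["radosgw-admin", "user", "stats", f"--uid={uid}", "--sync-stats"],
    check=True,
    capture_output=True,
    text=True,
).stdout

stats = json.loads(out)
# Field names differ slightly between releases, so dump the whole block.
print(json.dumps(stats, indent=2))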

[ceph-users] Re: Bogus Entries in RGW Usage Log / Large omap object in rgw.log pool

2019-10-29 Thread David Monschein
On Oct 29, 2019 at 3:22 AM Florian Haas wrote: > Hi David, > On 28/10/2019 20:44, David Monschein wrote: > > Hi All, > > Running an object storage cluster, originally deployed with Nautilus 14.2.1 and now running 14.2.4. > > Last week I wa

[ceph-users] Bogus Entries in RGW Usage Log / Large omap object in rgw.log pool

2019-10-28 Thread David Monschein
Hi All, Running an object storage cluster, originally deployed with Nautilus 14.2.1 and now running 14.2.4. Last week I was alerted to a new warning from my object storage cluster:
[root@ceph1 ~]# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects 1
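[Editor's note] A minimal sketch, assuming the large omap object is the RGW usage-log index, of how one might inspect and then trim old usage-log entries with radosgw-admin. The cutoff date is a placeholder, and usage trim is destructive, so the summary from usage show should be reviewed first.

# Sketch only: summarize the RGW usage log, then trim entries before a cutoff.
import subprocess

def radosgw_admin(*args):
    # Run radosgw-admin and return its stdout.
    return subprocess.run(
        ["radosgw-admin", *args],
        check=True,
        capture_output=True,
        text=True,
    ).stdout

# Summarize usage entries without dumping every log record.
print(radosgw_admin("usage", "show", "--show-log-entries=false"))

# Trim usage-log entries up to the cutoff date to shrink the omap object.
print(radosgw_admin("usage", "trim", "--end-date=2019-10-01"))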