Have you looked at your Garbage Collection? I would guess that your GC is
behind and that radosgw-admin is still accounting for that space, knowing it
hasn't been freed up yet, while s3cmd doesn't see it since it no longer
shows up in the listing.
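
You can get a rough idea with something along these lines (a sketch; exact
flags can vary a little between releases):

    # Show what is queued for garbage collection, including entries not yet
    # due for processing; a large backlog would explain space that bucket
    # stats still counts but s3cmd no longer sees.
    radosgw-admin gc list --include-all | less

    # Run a GC pass by hand instead of waiting for the background cycle.
    radosgw-admin gc process

If the queue never drains, knobs like rgw_gc_max_objs, rgw_gc_processor_period
and rgw_gc_obj_min_wait are worth a look.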

On Tue, Sep 18, 2018 at 4:45 AM Luis Periquito <periqu...@gmail.com> wrote:

> Hi all,
>
> I have a couple of very big S3 buckets that store temporary data. We
> keep writing files to them, which are then read and deleted; they serve
> as temporary storage.
>
> We're writing (and deleting) circa 1TB of data daily in each of those
> buckets, and their size has been mostly stable over time.
>
> The issue is that radosgw-admin bucket stats says one bucket is 10T and
> the other is 4T, but s3cmd du (and a sync I ran, which agrees with it)
> says 3.5T and 2.3T respectively.
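>
> For reference, the numbers come from commands along these lines (the
> bucket name is just a placeholder):
>
>     radosgw-admin bucket stats --bucket=<bucket-name>
>     s3cmd du s3://<bucket-name>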
>
> The bigger bucket suffered from the orphaned objects bug
> (http://tracker.ceph.com/issues/18331). The smaller one was created on
> 10.2.3, so it may also have suffered from the same bug.
>
> Any ideas what could be at play here? How can we reduce actual usage?
>
> Trimmed part of the radosgw-admin bucket stats output:
>     "usage": {
>         "rgw.none": {
>             "size": 0,
>             "size_actual": 0,
>             "size_utilized": 0,
>             "size_kb": 0,
>             "size_kb_actual": 0,
>             "size_kb_utilized": 0,
>             "num_objects": 18446744073709551572
>         },
>         "rgw.main": {
>             "size": 10870197197183,
>             "size_actual": 10873866362880,
>             "size_utilized": 18446743601253967400,
>             "size_kb": 10615426951,
>             "size_kb_actual": 10619010120,
>             "size_kb_utilized": 18014398048099578,
>             "num_objects": 1702444
>         },
>         "rgw.multimeta": {
>             "size": 0,
>             "size_actual": 0,
>             "size_utilized": 0,
>             "size_kb": 0,
>             "size_kb_actual": 0,
>             "size_kb_utilized": 0,
>             "num_objects": 406462
>         }
>     },
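>
> (Note that the rgw.none num_objects and the size_utilized values above
> look like small negative values wrapped into unsigned 64-bit counters,
> e.g. 18446744073709551572 = 2^64 - 44, i.e. -44.)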
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
