Hi, I hit the same errors when doing multisite sync between luminous and
octopus, but what I found is that my sync errors were mainly on old
multipart and shadow objects, at the "rados level" if I may say.
(leftovers from luminous bugs)
So check at the "user level", using s3cmd/awscli and t
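For illustration, a sketch of how one might spot those multipart/shadow leftovers at the rados level. The pool name is the default-zone convention and may differ on your cluster; the command is printed rather than executed, since it needs a live cluster:

```shell
# Hypothetical pool name; adjust for your zone/placement.
POOL="default.rgw.buckets.data"

# RGW stores large-object pieces as "__multipart_" and "__shadow_" rados
# objects; leftovers from old bugs show up in a raw pool listing.
LIST_CMD="rados -p ${POOL} ls"
echo "${LIST_CMD} | grep -E '__(multipart|shadow)_'"
```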
Hi everyone, something strange here with bucket resharding vs. bucket
listing.
I have a bucket with about 1M objects in it; I increased the bucket
quota from 1M to 2M, and manually resharded from 11 to 23 shards. (dynamic
resharding is disabled)
Since then, the user can't list objects in some paths.
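For reference, a sketch of the manual reshard described above. The bucket name is made up, and the commands are only printed here, since they need a live cluster:

```shell
BUCKET="mybucket"   # hypothetical bucket name

# Manual reshard from 11 to 23 shards; dynamic resharding is disabled,
# so it is done explicitly:
RESHARD_CMD="radosgw-admin bucket reshard --bucket=${BUCKET} --num-shards=23"
echo "${RESHARD_CMD}"

# A consistency check on the index can show whether the listing problem
# is an index issue:
echo "radosgw-admin bucket check --bucket=${BUCKET}"
```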
Hello ceph-users, does anyone have an idea why I get this?
$ radosgw-admin user stats --uid someone --reset-stats
ERROR: could not reset user stats: (75) Value too large for defined data
type
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi Boris, I don't have any answer for you, but I have a situation similar
to yours.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/7E6O6ILGE5JCI4ISU66HZ6ZVZP6N6T3M/
I didn't try radoslist, I should have.
Is this new, or is it just that the client only noticed it lately?
All the data
I have the exact opposite: files can be listed (they are in the bucket
index), but are not available anymore.
On Fri, 16 Jul 2021 at 18:41, Jean-Sebastien Landry
wrote:
Hi Boris, I don't have any answer for you, but I have a situation similar
to yours.
https://lists.ceph.io/hyper
My understanding is that radoslist is the same as (or very close to) rados
ls, except that it limits the scope to the given bucket.
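To illustrate the difference, a sketch with made-up bucket/pool names; the commands are printed rather than run, since they need a live cluster:

```shell
POOL="default.rgw.buckets.data"   # default-zone data pool; adjust as needed
BUCKET="mybucket"                 # hypothetical bucket name

# rados ls walks every object in the pool, cluster-wide:
echo "rados -p ${POOL} ls"

# bucket radoslist prints only the rados objects backing one bucket:
SCOPED_CMD="radosgw-admin bucket radoslist --bucket=${BUCKET}"
echo "${SCOPED_CMD}"
```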
To be confirmed, as I don't want to spread false information, but when you
do a
radosgw-admin bucket check --check-objects --fix,
it rebuilds the "bi" (bucket index) from the pool level
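Spelled out as a sketch, with a hypothetical bucket name and the command printed rather than executed:

```shell
BUCKET="mybucket"   # hypothetical bucket name

# --check-objects compares index entries against the actual rados
# objects; --fix rewrites the bucket index entries accordingly.
CHECK_CMD="radosgw-admin bucket check --bucket=${BUCKET} --check-objects --fix"
echo "${CHECK_CMD}"
```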
Hi everyone, we have a ceph cluster for object storage only, the rgws are
accessible from the internet, and everything is ok.
Now, one of our teams/clients requires that their data never be
accessible from the internet.
In any case of security bug/breach/whatever, they want to limit the
Hi Wido, yes, I have an HTTP proxy in between.
You're right, bucket filtering on the proxy and ACLs on the bucket will be simple
enough,
but I don't know if it will be good enough.
I know it's far-fetched, but if, for whatever reason, the access/secret keys are
leaked,
and I have a security issue o
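As a hypothetical sketch of the proxy-side filtering idea, assuming an nginx reverse proxy in front of the internet-facing RGWs (the bucket name, hostnames, and backend address are all made up, and TLS configuration is omitted):

```nginx
upstream rgw_backend {
    server 127.0.0.1:8080;   # hypothetical local radosgw instance
}

# Internet-facing vhost: deny any request that touches the private
# bucket, whether path-style (/private-bucket/...) or
# virtual-hosted-style (private-bucket.s3.example.com).
server {
    listen 80;
    server_name s3.example.com;

    # Path-style access to the private bucket:
    location ~ ^/private-bucket(/|$) {
        return 403;
    }

    location / {
        proxy_pass http://rgw_backend;
    }
}

server {
    listen 80;
    # Virtual-hosted-style access to the same bucket:
    server_name private-bucket.s3.example.com;
    return 403;
}
```

The bucket would then stay reachable only through an internal endpoint that bypasses this proxy, so leaked keys alone are not enough from the internet.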
Hi everyone, a bucket was over quota (default quota of 300k objects per
bucket), so I enabled the object quota for this bucket and set a quota of 600k
objects.
We are on Luminous (12.2.12) and dynamic resharding is disabled, so I manually did
the resharding from 3 to 6 shards.
Since then, radosgw-ad
Manuel & Konstantin, thank you for confirming this.
I should upgrade to Nautilus in the next few weeks.
I'll just live with it for now.
Thanks!