On Wed, Nov 20, 2019 at 5:16 PM wrote:
>
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> this to go away is to dump the contents of the RGW bucket(s), and recreate
> it (them)?
Things to try:
* check the bucket sharding status: radosgw-admin bucket limi
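A rough sketch of what that check and a follow-up reshard could look like (the
bucket name and shard count are placeholders, and "radosgw-admin bucket limit
check" is my assumption for the command cut off above):
# radosgw-admin bucket limit check
# radosgw-admin bucket stats --bucket=mybucket
# radosgw-admin reshard add --bucket=mybucket --num-shards=64
# radosgw-admin reshard process
The first two show per-bucket object counts and the current shard count; the
last two schedule and run a manual reshard.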
Greetings everyone, I wanted to post this notice that we are opening up our
Catalog Service and file system extensions for Ceph as an open source project.
DeepSpace takes a different approach in that we advocate using standard file
systems (pretty much just using xfs at this time) so that the fi
It's a warning, not an error, and if you consider it to not be a
problem, I believe you can change
osd_deep_scrub_large_omap_object_value_sum_threshold back to 2M.
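As a sketch, assuming that option name, it could be changed at runtime in
either of these ways (the value is only an example, 2M = 2097152 bytes):
# ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold 2097152
# ceph tell osd.* injectargs '--osd_deep_scrub_large_omap_object_value_sum_threshold=2097152'
The first form persists the setting in the config database on Mimic and later;
the injectargs form only affects the currently running daemons.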
On Wed, Nov 20, 2019 at 11:37 AM wrote:
>
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> th
All;
Since I haven't heard otherwise, I have to assume that the only way to get this
to go away is to dump the contents of the RGW bucket(s), and recreate it
(them)?
How did this get past release approval? A change which makes a valid cluster
state invalid, with no mitigation other than dow
Hi,
we were able to track this down to the auto balancer: disabling the auto
balancer and cleaning out old (and probably not very meaningful)
upmap entries via ceph osd rm-pg-upmap-items brought back stable mgr
daemons and a usable dashboard.
The not-so-sensible upmap entries might or might not
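For anyone wanting to try the same, a rough sketch of the commands involved
(the pg id is a placeholder):
# ceph balancer off
# ceph osd dump | grep pg_upmap_items
# ceph osd rm-pg-upmap-items 2.3f
The grep lists the existing upmap entries; rm-pg-upmap-items removes the entry
for one pg at a time.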
Hello - Recently we upgraded to Luminous 12.2.11. Since then we see scrub
errors, on the object storage pool only, on a daily basis. After a repair they
are cleared, but they come back the next day once the PG has been scrubbed
again.
Any known issues with scrub errors on version 12.2.11?
Thanks
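In case it helps others, a rough sketch of how to locate and repair the
inconsistent PGs (the pool name and pg id are placeholders):
# rados list-inconsistent-pg default.rgw.buckets.data
# rados list-inconsistent-obj 11.2f --format=json-pretty
# ceph pg repair 11.2f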
Hi,
my Ceph cluster is in an unhealthy state and busy with recovery.
I'm watching the MGR log and it regularly shows this error message:
2019-11-20 09:51:45.211 7f7205581700 0 auth: could not find secret_id=4193
2019-11-20 09:51:45.211 7f7205581700 0 cephx: verify_authorizer could
not get
Hello,
I can also confirm the same problem described by Joe Ryner on 14.2.2 and by
Oliver Freyermuth.
My ceph version is 14.2.4
-
# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees
have overcommitt
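A sketch of how to inspect and clear that warning (pool name and values are
placeholders):
# ceph osd pool autoscale-status
# ceph osd pool set mypool target_size_bytes 0
# ceph osd pool set mypool target_size_ratio 0.2
autoscale-status shows each pool's TARGET SIZE/RATIO against the available
capacity; clearing or lowering the hints removes the overcommit warning.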