Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Paul Emmerich
On Wed, Nov 20, 2019 at 5:16 PM wrote:
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> this to go away is to dump the contents of the RGW bucket(s), and recreate
> it (them)?

Things to try:
* check the bucket sharding status: radosgw-admin bucket limit check
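A minimal sketch of the commands this points at, assuming radosgw-admin from Luminous or later; "mybucket" and the shard count are placeholders, not values from the thread:

    # report buckets whose objects-per-shard count exceeds the recommended limit
    radosgw-admin bucket limit check

    # manually reshard an over-full bucket (pick a shard count suited to the expected object count)
    radosgw-admin bucket reshard --bucket=mybucket --num-shards=128

    # check the reshard status of the bucket afterwards
    radosgw-admin reshard status --bucket=mybucket

After a successful reshard, the large OMAP warning should clear on the next deep scrub of the affected index PG.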

[ceph-users] Introducing DeepSpace

2019-11-20 Thread Cranage, Steve
Greetings everyone, I wanted to post this notice that we are opening up our Catalog Service and file system extensions for Ceph as an open source project. DeepSpace takes a different approach in that we advocate using standard file systems (pretty much just xfs at this time) so that the fi

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread Nathan Fish
It's a warning, not an error, and if you consider it not to be a problem, I believe you can change osd_deep_scrub_large_omap_object_value_sum_threshold back to 2M.

On Wed, Nov 20, 2019 at 11:37 AM wrote:
> All;
>
> Since I haven't heard otherwise, I have to assume that the only way to get
> th
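A sketch of how such a threshold can be inspected and raised, assuming a Nautilus-style centralized config; note that the 2M figure quoted above matches the old default of the related key-count threshold, so check your release notes for which of the two thresholds the warning is actually tripping:

    # show the current large-OMAP thresholds
    ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
    ceph config get osd osd_deep_scrub_large_omap_object_value_sum_threshold

    # raise the key-count threshold back to its old default of 2 million keys
    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000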

Re: [ceph-users] Large OMAP Object

2019-11-20 Thread DHilsbos
All;

Since I haven't heard otherwise, I have to assume that the only way to get this to go away is to dump the contents of the RGW bucket(s), and recreate it (them)?

How did this get past release approval? A change which makes a valid cluster state invalid, with no mitigation other than dow

Re: [ceph-users] dashboard hangs

2019-11-20 Thread thoralf schulze
Hi, we were able to track this down to the auto balancer: disabling the auto balancer and cleaning out old (and probably not very meaningful) upmap entries via ceph osd rm-pg-upmap-items brought back stable mgr daemons and a usable dashboard. The not-so-sensible upmap entries might or might not
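A sketch of the steps described above; the PG id is a placeholder, and which upmap entries are actually stale has to be judged per cluster:

    # stop the automatic balancer from creating new upmap entries
    ceph balancer off

    # list the pg_upmap_items currently present in the OSD map
    ceph osd dump | grep pg_upmap_items

    # remove the upmap exception for one PG (repeat per stale entry)
    ceph osd rm-pg-upmap-items 1.7f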

[ceph-users] scrub error on object storage pool

2019-11-20 Thread M Ranga Swami Reddy
Hello - We recently upgraded to Luminous 12.2.11. Since then we see scrub errors on the object storage pool only, on a daily basis. After a repair they are cleared, but they come back the next day once the PG has been scrubbed again. Is there any known issue with scrub errors in 12.2.11? Thanks
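Not an answer on whether 12.2.11 has a known issue, but a sketch of the usual way to inspect recurring scrub errors before repairing; the pool name and PG id below are placeholders:

    # list the PGs currently flagged inconsistent in the pool
    rados list-inconsistent-pg default.rgw.buckets.data

    # inspect what the scrub actually found in one PG
    rados list-inconsistent-obj 11.2a --format=json-pretty

    # repair once the nature of the error (e.g. an omap digest mismatch) is understood
    ceph pg repair 11.2a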

[ceph-users] Error in MGR log: auth: could not find secret_id

2019-11-20 Thread Thomas Schneider
Hi, my Ceph cluster is in an unhealthy state and busy with recovery. I'm watching the MGR log and it regularly shows this error message:

2019-11-20 09:51:45.211 7f7205581700  0 auth: could not find secret_id=4193
2019-11-20 09:51:45.211 7f7205581700  0 cephx: verify_authorizer could not get
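Not a diagnosis, just a sketch of two common first checks for cephx "could not find secret_id" noise, assuming a Nautilus-style centralized config; clock skew between daemons is a commonly reported cause, since the rotating service keys are time-limited:

    # check monitor clock synchronisation
    ceph time-sync-status

    # temporarily raise the auth debug level on the mgr to see which key epoch is being rejected
    ceph config set mgr debug_auth 10
    # ...and drop it back afterwards
    ceph config rm mgr debug_auth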

[ceph-users] POOL_TARGET_SIZE_BYTES_OVERCOMMITTED and POOL_TARGET_SIZE_RATIO_OVERCOMMITTED

2019-11-20 Thread Björn Hinz
Hello, I can also confirm the problem described by Joe Ryner (on 14.2.2) and by Oliver Freyermuth. My ceph version is 14.2.4.

# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitt
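A sketch of how these warnings are usually inspected and silenced, assuming Nautilus's pg_autoscaler; "mypool" is a placeholder for whichever pool carries an oversized hint:

    # show each pool's target_size_bytes / target_size_ratio against available capacity
    ceph osd pool autoscale-status

    # lower or clear the hint on the over-committed pool (0 disables it)
    ceph osd pool set mypool target_size_bytes 0
    ceph osd pool set mypool target_size_ratio 0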