[ceph-users] Re: grafana-api-url not only for one host

2021-03-05 Thread Ernesto Puerta
AFAIK this was solved by OpenStack folks following what Vladimir suggested: keepalived + haproxy for Grafana and Prometheus (Alertmanager has its own gossip-based HA). Please find attached (if the mailing list permits) a document where we discussed different approaches to provide HA to the Ceph
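To make the keepalived + haproxy suggestion concrete, here is a minimal, illustrative sketch; the virtual IP, hostnames and ports are assumptions for illustration, not taken from the attached document:

```
# haproxy.cfg fragment: load-balance Grafana behind a virtual IP.
# 192.0.2.10, host1/host2 and port 3000 are assumed values.
frontend grafana_front
    bind 192.0.2.10:3000
    default_backend grafana_back

backend grafana_back
    option httpchk GET /api/health
    server grafana1 host1:3000 check
    server grafana2 host2:3000 check backup

# keepalived.conf fragment: float the virtual IP between the haproxy nodes.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10
    }
}
```

The same frontend/backend pattern would apply to Prometheus; Alertmanager, as noted above, clusters on its own.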

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Benoît Knecht
On Friday, March 5th, 2021 at 15:20, Drew Weaver wrote:
> Sorry to sound clueless but no matter what I search for on El Goog I can't
> figure out how to answer the question as to whether dynamic sharding is
> enabled in our environment.
>
> It's not configured as true in the config files, but

[ceph-users] balance OSD usage.

2021-03-05 Thread ricardo.re.azevedo
Hi All, Does anyone know how I can rebalance my cluster to even out OSD usage? I just added 12 more 14TB HDDs to my cluster (made up of 12TB and 14TB disks), bringing my total to 48 OSDs. ceph df reports my pool as 83% full (see below). I am aware this only reports the
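For a capacity imbalance like this, the built-in balancer module in upmap mode is the usual first step. A hedged sketch (these are standard commands, but verify against your release; the min-compat step is only needed if not already set):

```shell
# Inspect per-OSD utilisation first to see how skewed things are.
ceph osd df tree

# upmap mode requires all clients to speak Luminous or later.
ceph osd set-require-min-compat-client luminous

# Enable the balancer in upmap mode and check its progress.
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```

The balancer moves PGs gradually, so utilisation converges over time rather than immediately.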

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
Sorry for the multi-reply, I got that command to run:

for obj in $(rados -p default.rgw.buckets.index ls | grep 2b67ef7c-2015-4ca0-bf50-b7595d01e46e.74194.637); do
    printf "%-60s %7d\n" $obj $(rados -p default.rgw.buckets.index listomapkeys $obj | wc -l)
done

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
Hi, Only 2 of the buckets are really used:

    "buckets": [
        {
            "bucket": "test",
            "tenant": "",
            "num_objects": 968107,
            "num_shards": 16,
            "objects_per_shard": 60506,
            "fill_status": "OK"
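A quick sanity check on the numbers quoted above: RGW's dynamic resharder targets roughly rgw_max_objs_per_shard objects per index shard (default 100000), while the LARGE_OMAP warning itself comes from the OSD-side osd_deep_scrub_large_omap_object_key_threshold (default 200000 keys in recent releases). A minimal sketch of the arithmetic, using the bucket stats from this message:

```python
import math

# Figures taken from the "test" bucket stats above.
num_objects = 968107
num_shards = 16

# Default resharding target: rgw_max_objs_per_shard = 100000.
MAX_OBJS_PER_SHARD = 100_000

objects_per_shard = num_objects // num_shards
recommended_shards = math.ceil(num_objects / MAX_OBJS_PER_SHARD)

print(objects_per_shard)   # 60506 -- matches the reported objects_per_shard
print(recommended_shards)  # 10 shards would suffice at the default target
```

At 60506 objects per shard this bucket sits well under both thresholds, which is consistent with its "OK" fill_status.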

[ceph-users] what is quickest way to generate a new key for a user?

2021-03-05 Thread Marc
What is the quickest way to generate a new key for a user?
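The answer depends on what kind of user is meant; a hedged sketch for both common cases (client.foo and johndoe are placeholder names, and the caps shown are examples only):

```shell
# CephX entity (assumed name client.foo): older releases have no in-place
# key rotation, so re-keying means deleting and re-creating the entity
# with the same caps. Back up the current key and caps first.
ceph auth get client.foo -o client.foo.keyring
ceph auth rm client.foo
ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=foo'

# RGW (S3) user (assumed uid johndoe): generate a fresh key pair directly.
radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
```

Note the CephX re-create briefly invalidates the old key, so clients using it need the new keyring before they reconnect.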

[ceph-users] Re: Resolving LARGE_OMAP_OBJECTS

2021-03-05 Thread Drew Weaver
Sorry to sound clueless, but no matter what I search for on El Goog I can't figure out how to determine whether dynamic sharding is enabled in our environment. It's not configured as true in the config files, but it is the default. Is there a radosgw-admin command to determine
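Two common ways to check the effective setting rather than the config file; a sketch, assuming the admin socket path and client name match your RGW instance:

```shell
# Ask a running RGW daemon for its effective configuration
# (the socket path below is an assumption; adjust to your deployment).
ceph daemon /var/run/ceph/ceph-client.rgw.myhost.asok config show \
    | grep rgw_dynamic_resharding

# Or query the cluster configuration database (Mimic and later).
ceph config get client.rgw rgw_dynamic_resharding
```

If neither the config files nor the config database override it, the compiled-in default (true since Luminous) is what applies.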