Hoping someone can point me to tunables that could tighten up my OSD 
distribution.

Cluster is currently
> "ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus 
> (stable)": 307
With plans to begin moving to pacific before the end of the year, with a 
possible interim stop at 15.2.17 on the way.

Cluster was born on jewel, and is fully bluestore/straw2.
The upmap balancer is working, but not to the degree I believe it could/should; 
the distribution seems like it should be much closer to perfect than what I'm 
seeing.
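(For reference, the balancer is enabled in upmap mode; checking it with 
something along these lines reports that it has nothing further it is willing 
to optimize:)

    ceph balancer status    # shows the balancer mode (upmap here) and whether it is active
    ceph balancer eval      # scores the current distribution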

https://imgur.com/a/lhtZswo <- Histograms of my OSD distribution

https://pastebin.com/raw/dk3fd4GH <- pastebin of cluster/pool/crush relevant bits

To put it succinctly, I'm hoping to get a much tighter OSD distribution, but 
I'm not sure which knobs to try turning next: the upmap balancer has gone as 
far as it can, and I end up playing "reweight the most-full OSD" whack-a-mole 
as OSDs approach nearfull.
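For context, the whack-a-mole looks roughly like this (the OSD id and the 0.95 
value are just examples):

    ceph osd df tree                 # spot the most-full OSDs by %USE
    ceph osd reweight osd.123 0.95   # nudge the fullest OSD's override weight down a notch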

My goal is obviously something akin to the perfect distribution shown here: 
https://www.youtube.com/watch?v=niFNZN5EKvE&t=1353s

I am looking to tweak the PG counts for a few pools.
Namely, ssd-radosobj has shrunk in size and needs far fewer PGs now.
Similarly, hdd-cephfs has shrunk as well and needs fewer PGs (as ceph health 
shows).
On the flip side, the ec*-cephfs pools likely need more PGs, as they have grown 
in size.
However, I was hoping to get more free-space breathing room on my most-full 
OSDs before starting any big PG expansions/shrinks.
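Once there is headroom, those changes would just be pg_num adjustments along 
these lines (the target values are placeholders, not numbers I have settled on; 
on octopus, pgp_num should follow pg_num automatically):

    ceph osd pool set ssd-radosobj pg_num 256          # shrink; placeholder target
    ceph osd pool set hdd-cephfs pg_num 512            # shrink; placeholder target
    ceph osd pool set <ec-cephfs-pool> pg_num 2048     # grow; pool name and target are placeholders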

I am assuming that my wacky mix of replicated and multiple EC storage pools, 
coupled with hybrid SSD+HDD pools, is throwing off the balance more than a more 
homogeneous crush ruleset would, but this is what exists and is what I'm 
working with.
Also, since it will look odd in the tree view, the crush rulesets for hdd pools 
are chooseleaf chassis, while ssd pools are chooseleaf host.
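To illustrate (a paraphrased sketch, not the literal rules from the pastebin; 
ids and names are placeholders), the replicated rules are shaped roughly like:

    rule hdd-replicated {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type chassis   # hdd pools spread copies across chassis
        step emit
    }

    rule ssd-replicated {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host      # ssd pools spread copies across hosts
        step emit
    }

(with the EC rules presumably doing the equivalent via "step chooseleaf indep 0 
type chassis").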

Any tips or help would be greatly appreciated.

Reed
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
