Hi Denis,
As the vast majority of OSDs have bluestore_min_alloc_size = 65536, I
think you can safely ignore https://tracker.ceph.com/issues/64715. The
only consequence will be that 58 OSDs will be less full than others.
In other words, please use either the hybrid approach or the built-in
balancer.
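For reference, turning on the built-in balancer is just a couple of
commands (standard ceph CLI; upmap mode is the usual choice on recent
releases, but pick whatever fits your cluster):

```shell
# Enable the built-in balancer in upmap mode and check what it is doing.
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```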
Hi Alexander,
That sounds pretty promising to me.
I've checked bluestore_min_alloc_size, and most of the 1370 OSDs have the
value 65536.
You mentioned: "You will have to do that weekly until you redeploy all
OSDs that were created with 64K bluestore_min_alloc_size"
Is it the only way to approach this, t
Hi,
I would expect that almost every PG in the cluster is going to have to
move once you start standardizing CRUSH weights, and I wouldn't want to
move data twice. My plan would look something like:
- Make sure the cluster is healthy (no degraded PGs)
- Set nobackfill, norebalance flags to pr
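The first steps of that plan would look roughly like this (standard ceph
CLI flags; the comments are my reading of the intent, i.e. let PGs re-peer
after the weight changes without data moving right away):

```shell
# Confirm the cluster is healthy before touching anything.
ceph health detail

# Stop backfill/rebalance so CRUSH weight changes don't trigger
# immediate data movement.
ceph osd set nobackfill
ceph osd set norebalance

# ... apply the CRUSH weight changes here ...

# When ready, let the data move in one pass.
ceph osd unset norebalance
ceph osd unset nobackfill
```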
Hi Denis,
My approach would be:
1. Run "ceph osd metadata" and see if you have a mix of 64K and 4K
bluestore_min_alloc_size. If so, you cannot really use the built-in
balancer, as it would result in a bimodal distribution instead of a
proper balance, see https://tracker.ceph.com/issues/64715, but
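A quick way to check for such a mix is to tally the values from "ceph osd
metadata" (a sketch assuming jq is installed; the helper name
count_alloc_sizes is mine):

```shell
# Tally bluestore_min_alloc_size values across all OSDs.
# "ceph osd metadata" prints a JSON array with one object per OSD;
# the metadata values are strings, so jq -r emits them as plain lines.
count_alloc_sizes() {
  jq -r '.[].bluestore_min_alloc_size' | sort | uniq -c | sort -rn
}

# Hypothetical usage on a live cluster:
#   ceph osd metadata | count_alloc_sizes
# A mixed cluster would show two rows, one count for 65536 and one for 4096.
```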