To get an idea of how much work is left, take a look at `ceph osd pool ls
detail`. There should be a pg_num_target value for the affected pool; the
OSDs will keep merging or splitting PGs until pg_num matches that value.
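
For example (pool name and numbers invented, output abbreviated), a pool
that is still being merged down would look roughly like this:

    $ ceph osd pool ls detail | grep mypool
    pool 5 'mypool' replicated size 3 ... pg_num 512 pgp_num 512
        pg_num_target 256 pgp_num_target 256 autoscale_mode warn ...

Once pg_num has reached pg_num_target, the merging/splitting is done.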

.. Dan


On Wed, 22 Sep 2021, 11:04 Jan-Philipp Litza, <j...@plutex.de> wrote:

> Hi everyone,
>
> I had the autoscale_mode set to "on", and the autoscaler went to work and
> started adjusting the number of PGs in that pool. Since this implies a
> huge shift of data, the reweights that the balancer had carefully
> adjusted (in crush-compat mode) are now rubbish, and more and more OSDs
> are becoming nearfull (we sadly have OSDs of very different sizes).
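>
> (For context, the per-OSD utilization and the nearfull warnings are
> visible via
>
>     ceph osd df
>     ceph health detail
>
> in case anyone wants the numbers.)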
>
> Now apparently both manager modules, balancer and pg_autoscaler, have
> the same threshold for operation, namely target_max_misplaced_ratio. So
> the balancer won't become active as long as the pg_autoscaler is still
> adjusting the number of PGs.
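>
> For reference, that threshold can be checked and changed with (0.05
> being the default, if I'm not mistaken):
>
>     ceph config get mgr target_max_misplaced_ratio
>     ceph config set mgr target_max_misplaced_ratio 0.05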
>
> I already set the autoscale_mode to "warn" on all pools, but apparently
> the autoscaler is determined to finish what it started.
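>
> (That is, for each pool:
>
>     ceph osd pool set <pool> pg_autoscale_mode warn
>
> )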
>
> Is there any way to pause the autoscaler so the balancer has a chance of
> fixing the reweights? Even in manual mode (ceph balancer optimize), the
> balancer won't compute a plan while the misplaced ratio is higher than
> target_max_misplaced_ratio.
>
> I know about "ceph osd reweight-*", but those commands adjust the
> reweights (visible in "ceph osd tree"), whereas the balancer adjusts the
> "compat weight-set", which I don't know how to convert back to the
> old-style reweights.
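>
> For what it's worth, the compat weight-set can at least be inspected
> with the following (the values end up under "choose_args" in the crush
> dump, if I understand correctly):
>
>     ceph osd crush weight-set ls
>     ceph osd crush dump | jq .choose_args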
>
> Best regards,
> Jan-Philipp