Hi,

For the reason you observed, I normally set upmap_max_deviation = 1 on
all clusters I get my hands on.
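For reference, lowering it looks something like this (a sketch; the option name matches the one queried below, but verify the effect on your own cluster with a fresh eval):

# Lower the balancer's allowed per-OSD PG deviation from the default of 5 to 1
ceph config set mgr mgr/balancer/upmap_max_deviation 1

# Confirm the setting took effect
ceph config get mgr mgr/balancer/upmap_max_deviation

# Re-evaluate; the score should drop as the balancer generates new upmaps
ceph balancer eval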

Cheers, Dan

--
Dan van der Ster
CTO

Clyso GmbH
p: +49 89 215252722 | a: Vancouver, Canada
w: https://clyso.com | e: dan.vanders...@clyso.com

We are hiring: https://www.clyso.com/jobs/
Try our Ceph Analyzer! https://analyzer.clyso.com/

On Mon, Nov 27, 2023 at 10:30 AM <bryansoon...@gmail.com> wrote:
>
> Hello,
>
> We are running a Pacific 16.2.10 cluster with the balancer module enabled;
> here is the balancer status:
>
> [root@ceph-1 ~]# ceph balancer status
> {
>     "active": true,
>     "last_optimize_duration": "0:00:00.052548",
>     "last_optimize_started": "Fri Nov 17 17:09:57 2023",
>     "mode": "upmap",
>     "optimize_result": "Unable to find further optimization, or pool(s) 
> pg_num is decreasing, or distribution is already perfect",
>     "plans": []
> }
> [root@ceph-1 ~]# ceph balancer eval
> current cluster score 0.017742 (lower is better)
>
> Here is the balancer configuration of upmap_max_deviation:
> # ceph config get mgr mgr/balancer/upmap_max_deviation
> 5
>
> We have two different types of OSDs: one is 7681G and the other is 3840G.
> When I checked the PG distribution on each type of OSD, I found it is not
> even: for the 7681G OSDs the PG count varies from 136 to 158, while for the
> 3840G OSDs it varies from 60 to 83, so the actual deviation seems to be
> almost +/- 10. I'm wondering whether this is expected, or whether I need to
> change upmap_max_deviation to a smaller value.
>
> Thanks for answering my question.
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io