[ceph-users] Re: Balancing with upmap

2021-02-01 Thread Francois Legrand
This is the PG distribution per OSD as given by the command I found here: http://cephnotes.ksperis.com/blog/2015/02/23/get-the-number-of-placement-groups-per-osd : pool :    35   44   36   31   32   33    2   34    43   | SUM
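A rough equivalent of that per-pool/per-OSD count (a sketch, not the exact cephnotes one-liner; it assumes `jq` is installed and that `ceph pg dump --format json` still reports the stats under .pg_map.pg_stats, as pre-Octopus releases do):

    # one line per (osd, pool) pair, prefixed with the number of PGs mapped there
    ceph pg dump --format json 2>/dev/null \
      | jq -r '.pg_map.pg_stats[] | .pgid as $pg | .up[] | "osd.\(.) pool.\($pg | split(".")[0])"' \
      | sort | uniq -c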

[ceph-users] Re: Balancing with upmap

2021-02-01 Thread Dan van der Ster
On Mon, Feb 1, 2021 at 10:03 AM Francois Legrand wrote: > > Hi, > > Actually we have no EC pools... all are replica 3. And we have only 9 pools. > > The average number of pg/osd is not very high (40.6). > > Here are the details of the pools: > > pool 2 replicated size 3 min_size 1 crush_rule 2

[ceph-users] Re: Balancing with upmap

2021-02-01 Thread Francois Legrand
Hi, Actually we have no EC pools... all are replica 3. And we have only 9 pools. The average number of pg/osd is not very high (40.6). Here are the details of the pools: pool 2 replicated size 3 min_size 1 crush_rule 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 623105 lfor
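That ~40.6 PG/OSD average can be cross-checked with a quick sketch (assuming `jq` and the JSON form of `ceph osd pool ls detail`): sum pg_num × replica size over the pools and divide by the OSD count.

    pgs=$(ceph osd pool ls detail -f json | jq '[.[] | .pg_num * .size] | add')
    osds=$(ceph osd ls | wc -l)
    echo "scale=1; $pgs / $osds" | bc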

[ceph-users] Re: Balancing with upmap

2021-01-31 Thread Francois Legrand
Hi, After 2 days, the recovery ended. The situation is clearly better (but still not perfect), with 339.8 TiB available in pools (out of 575.8 TiB available in the whole cluster). The balancing is still not perfect (31 to 47 PGs on the 8 TB disks). And ceph osd df tree returns : ID  CLASS WEIGHT  
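To quantify how far from optimal the distribution still is, the balancer module can score it directly (standard balancer commands, nothing specific to this cluster):

    ceph balancer eval            # lower score = better balanced, 0 is perfect
    ceph balancer eval-verbose    # per-pool / per-root breakdown of the score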

[ceph-users] Re: Balancing with upmap

2021-01-30 Thread Francois Legrand
Hi, Thanks for your advice. Here is the output of ceph osd df tree : ID  CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME  -1   1018.65833    -  466 TiB 214 TiB 213 TiB 117 GiB 605 GiB 252 TiB 0    0   -    root default
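The full tree output is easier to compare once reduced to name, utilization and PG count; a jq sketch over the JSON form (field names assumed as reported by recent `ceph osd df -f json`):

    ceph osd df -f json \
      | jq -r '.nodes[] | "\(.name) \(.utilization) \(.pgs)"' \
      | sort -k2 -n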

[ceph-users] Re: Balancing with upmap

2021-01-29 Thread Dan van der Ster
Thanks, and thanks for the log file OTR, which simply showed: 2021-01-29 23:17:32.567 7f6155cae700 4 mgr[balancer] prepared 0/10 changes This indeed means that the balancer believes those pools are all balanced according to the config (which you have set to the defaults). Could you please also
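"prepared 0/10 changes" means the upmap balancer already meets its deviation target. The knob controlling that target is upmap_max_deviation (option name as in the mgr balancer module; the value 1 below is a common tightening, not something taken from this thread):

    ceph config get mgr mgr/balancer/upmap_max_deviation     # 5 PGs by default in recent releases
    ceph config set mgr mgr/balancer/upmap_max_deviation 1   # aim for +/- 1 PG per OSD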

[ceph-users] Re: Balancing with upmap

2021-01-29 Thread Dan van der Ster
Hi Francois, What is the output of `ceph balancer status`? Also, can you increase the debug_mgr to 4/5 then share the log file of the active mgr? Best, Dan On Fri, Jan 29, 2021 at 10:54 AM Francois Legrand wrote: > > Thanks for your suggestion. I will have a look! > > But I am a bit
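For reference, the two diagnostics requested here (standard commands; the log location assumes a default package install with the mgr logging to /var/log/ceph):

    ceph balancer status
    ceph config set mgr debug_mgr 4/5
    # the active mgr is named in `ceph -s`; its log is typically
    # /var/log/ceph/ceph-mgr.<name>.log on the host running it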

[ceph-users] Re: Balancing with upmap

2021-01-29 Thread Francois Legrand
Thanks for your suggestion. I will have a look! But I am a bit surprised that the "official" balancer seems so inefficient! F. On 28/01/2021 at 12:00, Jonas Jelten wrote: Hi! We also suffer heavily from this so I wrote a custom balancer which yields much better results:

[ceph-users] Re: Balancing with upmap

2021-01-28 Thread Jonas Jelten
Hi! We also suffer heavily from this, so I wrote a custom balancer which yields much better results: https://github.com/TheJJ/ceph-balancer After you run it, it echoes the PG movements it suggests. You can then just run those commands and the cluster will balance more. It's kinda work in progress,
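The movements such a balancer prints are ordinary upmap exceptions; the PG and OSD ids below are made up purely for illustration:

    ceph osd pg-upmap-items 2.1a 14 87   # remap one replica of PG 2.1a from osd.14 to osd.87
    ceph osd rm-pg-upmap-items 2.1a      # drop the exception again if needed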

[ceph-users] Re: Balancing with upmap

2021-01-27 Thread Francois Legrand
Nope! On 27/01/2021 at 17:40, Anthony D'Atri wrote: Do you have any override reweights set to values less than 1.0? The REWEIGHT column when you run `ceph osd df` On Jan 27, 2021, at 8:15 AM, Francois Legrand wrote: Hi all, I have a cluster with 116 disks (24 new disks of 16TB added in
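A quick way to answer that across all 116 OSDs is to filter the REWEIGHT column from the JSON form of `ceph osd df` (a jq sketch; field names assumed as in recent releases):

    ceph osd df -f json \
      | jq -r '.nodes[] | select(.reweight < 1.0) | "\(.name) reweight=\(.reweight)"'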