Arguably the error should say something like:

"TASK ERROR: error with 'osd pool create': mon_command failed - pg_num 512
size 8 would result in 251 cumulative PGs per OSD (22148 total PG replicas
on 88 'in' OSDs), which exceeds the mon_max_pg_per_osd value of 250."



Here’s the code that generates the current message:

  auto max_pgs = max_pgs_per_osd * num_osds;
  if (projected > max_pgs) {
    if (pool >= 0) {
      *ss << "pool id " << pool;
    }
    *ss << " pg_num " << pg_num << " size " << size
        << " would mean " << projected
        << " total pgs, which exceeds max " << max_pgs
        << " (mon_max_pg_per_osd " << max_pgs_per_osd
        << " * num_in_osds " << num_osds << ")";
    return -ERANGE;
  }

 IMHO this wording sort of conflates max_pgs_per_osd with mon_max_pool_pg_num.
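
To make the arithmetic concrete, here is a minimal standalone sketch (my own
illustration, not the monitor's actual code) that plugs the numbers from the
error above into the same check. The real projected value also folds in the PG
replicas of every existing pool, not just the pool being created; here it is
simply hard-coded from the error message:

  // minimal_pg_check.cc: illustrative only; numbers taken from the error above
  #include <iostream>

  int main() {
    const unsigned max_pgs_per_osd = 250;    // mon_max_pg_per_osd
    const unsigned num_in_osds     = 88;     // OSDs currently "in"
    const unsigned projected       = 22148;  // projected total PG replicas

    const unsigned max_pgs = max_pgs_per_osd * num_in_osds;               // 22000
    const double per_osd = static_cast<double>(projected) / num_in_osds;  // ~251.7

    if (projected > max_pgs) {
      std::cout << "pg_num 512 size 8 would result in " << per_osd
                << " cumulative PGs per OSD (" << projected
                << " total PG replicas on " << num_in_osds
                << " 'in' OSDs), which exceeds the mon_max_pg_per_osd value of "
                << max_pgs_per_osd << std::endl;
    }
    return 0;
  }

22148 / 88 is roughly 251.7, which is why pg_num 512 at size 8 trips the
250-per-OSD limit even though the sum of pg_num over the existing pools is
only 2592: every PG is counted once per replica (or per EC chunk), as Eugen
points out below.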

Here’s a PR with a suggested cleanup for clarity:

https://github.com/ceph/ceph/pull/49180



> On Dec 1, 2022, at 13:00, Eugen Block <ebl...@nde.ag> wrote:
> 
> Hi,
> 
> you need to take into account the number of replicas as well. With 88 OSDs 
> and the default max PGs per OSD of 250 you get the mentioned 22000 PGs 
> (including replicas): 88 x 250 = 22000.
> With EC pools each chunk counts as one replica. So you should consider 
> shrinking your pools or let the autoscaler do that for you.
> 
> Regards
> Eugen
> 
> Quoting Rainer Krienke <krie...@uni-koblenz.de>:
> 
>> Hello,
>> 
>> I run a hyperconverged pve cluster (V7.2) with 11 nodes. Each node has 8 
>> 4TB disks. pve and ceph are installed and running.
>> 
>> I wanted to create some ceph pools with 512 pgs each. Since I want to use 
>> erasure coding (5+3), creating a pool results in two pools: one rbd pool for 
>> metadata and the data pool itself. I used this pveceph command:
>> 
>> pveceph pool create px-e --erasure-coding k=5,m=3 --pg_autoscale_mode off 
>> --pg_num 512 --pg_num_min 128
>> 
>> I was able to create two pools in this way but the third pveceph call threw 
>> this error:
>> 
>> "got unexpected control message: TASK ERROR: error with 'osd pool create': 
>> mon_command failed -  pg_num 512 size 8 would mean 22148 total pgs, which 
>> exceeds max 22000 (mon_max_pg_per_osd 250 * num_in_osds 88)"
>> 
>> I also tried the direct way to create a new pool using:
>> ceph osd pool create <pool> 512 128 erasure <profile>, but the error message 
>> remains the same.
>> 
>> What I do not understand now is the calculation behind the scenes: how is 
>> this total pg number of 22148 calculated?
>> 
>> I already reduced the number of pgs for the metadata pool of each ec-pool 
>> and so I was able to create 4 pools in this way. But just for fun I now 
>> tried to create ec-pool number 5 and I see the message from above again.
>> 
>> Here are the pools created by now (scraped from ceph osd pool 
>> autoscale-status):
>> Pool:                Size:   Bias:  PG_NUM:
>> rbd                  4599    1.0      32
>> px-a-data          528.2G    1.0     512
>> px-a-metadata      838.1k    1.0     128
>> px-b-data              0     1.0     512
>> px-b-metadata         19     1.0     128
>> px-c-data              0     1.0     512
>> px-c-metadata         19     1.0     128
>> px-d-data              0     1.0     512
>> px-d-metadata          0     1.0     128
>> 
>> So the total number of pgs for all pools is currently 2592, which is far 
>> below 22148?
>> 
>> Any ideas?
>> Thanks Rainer
>> -- 
>> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
>> 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
>> PGP: http://www.uni-koblenz.de/~krienke/mypgp.html,     Fax: +49261287 
>> 1001312
> 

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
