This was the issue: the pool could not be created because it would have
exceeded the new (Luminous) limit on PGs per OSD.
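
For anyone who finds this thread later, a rough way to confirm the same
condition (commands from memory, adjust for your own setup):

    ceph osd stat                  # how many OSDs are up/in
    ceph osd pool ls detail        # pg_num / pgp_num per pool
    ceph osd df                    # the PGS column shows PGs per OSD

If (sum of pg_num across pools * replica size) / OSD count is already at or
near mon_max_pg_per_osd (200 by default in Luminous, as far as I know), any
further pool create fails with exactly the ERANGE error quoted below.
Raising mon_max_pg_per_osd on the mons, adding OSDs, or recreating pools
with fewer PGs are the usual ways out.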

On Tue, Sep 4, 2018 at 10:35 AM David Turner <drakonst...@gmail.com> wrote:

> I was confused about what could be causing this until Janne's email.  I
> think they're correct that the cluster is preventing pool creation because
> there are too many PGs per OSD.  Double-check how many PGs you have in
> each pool and what your defaults are for that.
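>
> Something like this should show both (commands from memory; run the second
> one on a mon host, where <id> is that mon's id):
>
>     ceph osd pool ls detail
>     ceph daemon mon.<id> config show | grep pool_default_pg
>
> Also keep in mind that RGW tries to create several pools on first start
> (.rgw.root, default.rgw.control, default.rgw.meta, default.rgw.log, plus
> the bucket index/data pools), each at the default pg_num, so the defaults
> add up quickly.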
>
> On Mon, Sep 3, 2018 at 7:19 AM Janne Johansson <icepic...@gmail.com>
> wrote:
>
>> Did you change the default pg_num or pgp_num, so that the pools that did
>> get created already pushed the cluster past mon_max_pg_per_osd?
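>>
>> As a rough worked example (numbers made up, just to show the math): 4
>> pools at 256 PGs each with replica size 3 across 10 OSDs is about 307 PGs
>> per OSD, already past the Luminous default mon_max_pg_per_osd of 200, so
>> the next pool create returns ERANGE.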
>>
>>
>> Den fre 31 aug. 2018 kl 17:20 skrev Robert Stanford <
>> rstanford8...@gmail.com>:
>>
>>>
>>>  I installed a new Luminous cluster.  Everything is fine so far.  Then I
>>> tried to start RGW and got this error:
>>>
>>> 2018-08-31 15:15:41.998048 7fc350271e80  0 rgw_init_ioctx ERROR:
>>> librados::Rados::pool_create returned (34) Numerical result out of range
>>> (this can be due to a pool or placement group misconfiguration, e.g. pg_num
>>> < pgp_num or mon_max_pg_per_osd exceeded)
>>> 2018-08-31 15:15:42.005732 7fc350271e80 -1 Couldn't init storage
>>> provider (RADOS)
>>>
>>>  I notice that the only pools that exist are the data and index RGW
>>> pools (no user or log pools like on Jewel).  What is causing this?
>>>
>>>  Thank you
>>>  R
>>>
>>
>>
>> --
>> May the most significant bit of your life be positive.
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
