AFAIK, the main problem is not the pg_num but something else with your
network or Ceph services.

Can you paste the output of ceph -s, ceph osd tree, and your ceph.conf?
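For example (a minimal sketch, assuming the default config path
/etc/ceph/ceph.conf; adjust for your setup):

    ceph -s                   # overall cluster health and PG states
    ceph osd tree             # OSD up/in status and CRUSH hierarchy
    cat /etc/ceph/ceph.conf   # cluster configuration
    ceph health detail        # shows which PGs are stale and why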

2016-05-23 11:52 GMT+08:00 Albert Archer <albertarche...@gmail.com>:

> So, there is no solution at all?
>
> On Sun, May 22, 2016 at 7:01 PM, Albert Archer <albertarche...@gmail.com>
> wrote:
>
>> Hello All.
>>
>> Determining the number of PGs and PGPs is a very hard job (at least
>> for newbies like me).
>> The problem is that once we set the number of PGs and PGPs when creating a
>> pool, there seems to be no way to decrease the PG count for that pool.
>>
>> I configured 9 OSD hosts (virtual machines on VMware ESXi), all in the UP
>> and IN state, and approximately 1700 PGs across the rbd pool and another pool.
>>
>> But:
>> ~1536 stale+active+clean
>> ~250  active+clean
>>
>> So, how can I remove some PGs, or get back to the ~1700 active+clean
>> state?
>>
>> What is the problem?
>>
>> Regards
>> Albert
>>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
