Of course, both to 32768.
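
For reference, the commands involved look roughly like this (a sketch;
<pool-name> is a placeholder, and both PG splitting and a tunables change
trigger significant data movement):

    # raise the placement group count on each data pool
    ceph osd pool set <pool-name> pg_num 32768
    # pgp_num must follow pg_num before data actually rebalances
    ceph osd pool set <pool-name> pgp_num 32768
    # switch CRUSH to the optimal tunables profile
    ceph osd crush tunables optimal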

On Sun, Jun 29, 2014 at 9:15 AM, Gregory Farnum <g...@inktank.com> wrote:

> Did you also increase the "pgp_num"?
>
>
> On Saturday, June 28, 2014, Jianing Yang <jianingy.y...@gmail.com> wrote:
>
>> Actually, I did increase the PG count to 32768 (120 OSDs) and I am also
>> using "tunables optimal", but the data is still not distributed evenly.
>>
>>
>> On Sun, Jun 29, 2014 at 3:42 AM, Konrad Gutkowski <konrad.gutkow...@ffs.pl> wrote:
>>
>>> Hi,
>>>
>>> Increasing the PG count for the pools that hold data might help, if you
>>> haven't done that already.
>>>
>>> Check out this thread:
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-January/027094.html
>>>
>>> You might find some tips there (although it was pre-Firefly).
>>>
>>> On 28.06.2014 at 14:44, Jianing Yang <jianingy.y...@gmail.com> wrote:
>>>
>>>
>>>> Hi, all
>>>>
>>>> My cluster has been running for about 4 months now. I have about 108
>>>> OSDs, all 600 GB SAS disks, and their disk usage ranges between 70% and
>>>> 85%. It seems that Ceph cannot distribute data evenly with the default
>>>> settings. Is there any configuration that helps distribute data more
>>>> evenly?
>>>>
>>>> Thanks very much
>>>>
>>>
>>>
>>> --
>>>
>>> Konrad Gutkowski
>>>
>>
>>
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
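
If the imbalance persists even with pg_num/pgp_num raised and the optimal
tunables in place, reweighting OSDs by utilization is another option; a
sketch (120 is the default threshold, as a percent of average utilization):

    # inspect per-OSD usage first (see the kb_used column)
    ceph pg dump osds
    # reduce the override weight on OSDs more than 20% above average utilization
    ceph osd reweight-by-utilization 120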
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
