Re: [ceph-users] OSD Data not evenly distributed

2014-06-28 Thread Jianing Yang
Of course, both are set to 32768.
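For anyone checking their own cluster, both values can be queried per pool; a
minimal sketch, assuming a pool named "data" (substitute your real pool name):

    # Query the current PG counts for a single pool:
    ceph osd pool get data pg_num
    ceph osd pool get data pgp_num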


On Sun, Jun 29, 2014 at 9:15 AM, Gregory Farnum  wrote:

> Did you also increase the "pgp_num"?


Re: [ceph-users] OSD Data not evenly distributed

2014-06-28 Thread Gregory Farnum
Did you also increase the "pgp_num"?
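Background: pg_num is the number of placement groups a pool has, while pgp_num
is the number of PGs that CRUSH actually uses when placing data; raising pg_num
alone splits PGs, but the cluster does not rebalance until pgp_num is raised to
match. A minimal sketch, again assuming a hypothetical pool named "data":

    # Split the pool into more PGs, then let placement catch up;
    # data only rebalances once pgp_num matches pg_num.
    ceph osd pool set data pg_num 32768
    ceph osd pool set data pgp_num 32768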

On Saturday, June 28, 2014, Jianing Yang  wrote:

> Actually, I did increase the PG number to 32768 (120 OSDs), and I am also
> using "tunable optimal". But the data is still not distributed evenly.

-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com


Re: [ceph-users] OSD Data not evenly distributed

2014-06-28 Thread Jianing Yang
Actually, I did increase the PG number to 32768 (120 OSDs), and I am also
using "tunable optimal". But the data is still not distributed evenly.

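For reference, the Firefly-era tunables profile mentioned here is applied
cluster-wide with a single command; note that switching profiles can trigger a
large amount of data movement:

    # Switch CRUSH to the "optimal" tunables profile (expect rebalancing I/O):
    ceph osd crush tunables optimal

Even with plenty of PGs and optimal tunables, CRUSH placement is pseudo-random,
so some utilization variance across OSDs is expected.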

On Sun, Jun 29, 2014 at 3:42 AM, Konrad Gutkowski 
wrote:

> Hi,
>
> Increasing the PG number for pools that hold data might help, if you haven't
> done that already.


Re: [ceph-users] OSD Data not evenly distributed

2014-06-28 Thread Konrad Gutkowski

Hi,

Increasing the PG number for pools that hold data might help, if you haven't
done that already.


Check out this thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-January/027094.html

You might find some tips there (although that thread pre-dates Firefly).
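As a rough sizing check, the rule of thumb from the Ceph documentation is about
(number of OSDs * 100) / replica count total PGs, rounded up to the next power
of two. For 108 OSDs with 3x replication that gives (108 * 100) / 3 = 3600, so
4096. A sketch with a hypothetical pool name:

    # (108 OSDs * 100) / 3 replicas = 3600 -> round up to 4096 PGs.
    # For a new pool, pg_num and pgp_num can both be set at creation time:
    ceph osd pool create testpool 4096 4096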


--

Konrad Gutkowski


[ceph-users] OSD Data not evenly distributed

2014-06-28 Thread Jianing Yang

Hi, all

My cluster has been running for about 4 months now. I have about 108
OSDs, all of them 600 GB SAS disks. Their disk usage is between 70% and 85%.
It seems that Ceph cannot distribute data evenly with its default settings. Is
there any configuration that helps distribute the data more evenly?

Thanks very much
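For readers hitting the same symptom: per-OSD usage can be pulled from the PG
map to quantify the imbalance before and after any change, e.g.:

    # Per-OSD utilization (osdstat section of the PG map):
    ceph pg dump osds
    # Per-pool object and space summary:
    rados df

If the spread persists after the PG-count and tunables changes discussed in
the replies above, "ceph osd reweight-by-utilization" is another knob for
evening out full OSDs, though it is not covered in this thread.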