Did you also set the pgp_num? As I understand it, newly created PGs aren't 
considered for placement until you increase pgp_num, a.k.a. the effective PG 
count.
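
For example, something along these lines (a rough sketch, assuming the "data" 
pool and the 1800 PGs from your mail below):

ceph osd pool set data pgp_num 1800

pgp_num can also be raised in steps (it just has to stay <= pg_num) if you 
want to spread the resulting data movement over time.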

Sent from my iPad

On Jul 3, 2013, at 11:54 AM, Pierre BLONDEAU <pierre.blond...@unicaen.fr> wrote:

> On 03/07/2013 11:12, Pierre BLONDEAU wrote:
>> On 01/07/2013 19:17, Gregory Farnum wrote:
>>> On Mon, Jul 1, 2013 at 10:13 AM, Alex Bligh <a...@alex.org.uk> wrote:
>>>> 
>>>> On 1 Jul 2013, at 17:37, Gregory Farnum wrote:
>>>> 
>>>>> Oh, that's out of date! PG splitting is supported in Cuttlefish:
>>>>> "ceph osd pool set <foo> pg_num <number>"
>>>>> http://ceph.com/docs/master/rados/operations/control/#osd-subsystem
>>>> 
>>>> Ah, so:
>>>>   pg_num: The placement group number.
>>>> means
>>>>   pg_num: The number of placement groups.
>>>> 
>>>> Perhaps worth demystifying for those hard of understanding such as
>>>> myself.
>>>> 
>>>> I'm still not quite sure how that relates to pgp_num.
>>> 
>>> Pools are sharded into placement groups. That's the pg_num. Those
>>> placement groups can be placed all independently, or as if there were
>>> a smaller number of placement groups (this is so you can double the
>>> number of PGs but not move any data until the splitting is done).
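>>> 
>>> For example (with purely illustrative numbers), doubling a pool from 64 to
>>> 128 PGs would look something like:
>>> 
>>>   ceph osd pool set <foo> pg_num 128    # split: creates the new PGs
>>>   ceph osd pool set <foo> pgp_num 128   # place the new PGs independently
>>> 
>>> Only the second command actually rebalances data across OSDs.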
>>> -Greg
>> 
>> Hi,
>> 
>> Thank you very much for your answer. Sorry for the late reply, but
>> modifying a 67 TB cluster takes a while ;)
>> 
>> It turns out my PG count was far too low:
>> 
>> ceph osd pool get data pg_num
>> pg_num: 48
>> 
>> As I'm not sure what replication factor I will end up using, I changed the
>> number of PGs to 1800:
>> ceph osd pool set data pg_num 1800
>> 
>> But the placement is still very uneven, especially on the machine where I
>> had a full OSD. I now have two OSDs on that machine at the limit, and I
>> can no longer write to the cluster:
>> 
>> jack
>> 67 -> 67% /var/lib/ceph/osd/ceph-6
>> 86 -> 86% /var/lib/ceph/osd/ceph-8
>> 85 -> 77% /var/lib/ceph/osd/ceph-11
>> ?  -> 66% /var/lib/ceph/osd/ceph-7
>> 47 -> 47% /var/lib/ceph/osd/ceph-10
>> 29 -> 29% /var/lib/ceph/osd/ceph-9
>> 
>> joe
>> 86 -> 77% /var/lib/ceph/osd/ceph-15
>> 67 -> 67% /var/lib/ceph/osd/ceph-13
>> 95 -> 96% /var/lib/ceph/osd/ceph-14
>> 92 -> 95% /var/lib/ceph/osd/ceph-17
>> 86 -> 87% /var/lib/ceph/osd/ceph-12
>> 20 -> 20% /var/lib/ceph/osd/ceph-16
>> 
>> william
>> 68 -> 86% /var/lib/ceph/osd/ceph-0
>> 86 -> 86% /var/lib/ceph/osd/ceph-3
>> 67 -> 61% /var/lib/ceph/osd/ceph-4
>> 79 -> 71% /var/lib/ceph/osd/ceph-1
>> 58 -> 58% /var/lib/ceph/osd/ceph-18
>> 64 -> 50% /var/lib/ceph/osd/ceph-2
>> 
>> ceph -w :
>> 2013-07-03 10:56:06.610928 mon.0 [INF] pgmap v174071: 1928 pgs: 1816
>> active+clean, 84 active+remapped+backfill_toofull, 9
>> active+degraded+backfill_toofull, 19
>> active+degraded+remapped+backfill_toofull; 300 TB data, 45284 GB used,
>> 21719 GB / 67004 GB avail; 15EB/s rd, 15EB/s wr, 15Eop/s;
>> 9975324/165229620 degraded (6.037%);  recovering 15E o/s, 15EB/s
>> 2013-07-03 10:56:08.404701 osd.14 [WRN] OSD near full (95%)
>> 2013-07-03 10:56:29.729297 osd.17 [WRN] OSD near full (94%)
>> 
>> And I don't understand why OSDs 16 and 19 are hardly used.
>> 
>> Regards
> Hi,
> 
> I made a mistake: when I restored the weight of all OSDs to 1, I forgot 
> osd.15.
> 
> With that mistake fixed, I now have only one OSD that is full, and the ceph 
> message is a little different:
> 
> joe
> 77 -> 86% /var/lib/ceph/osd/ceph-15
> 95 -> 85% /var/lib/ceph/osd/ceph-17
> 
> 2013-07-03 17:38:16.700846 mon.0 [INF] pgmap v177380: 1928 pgs: 1869 
> active+clean, 28 active+remapped+backfill_toofull, 9 
> active+degraded+backfill_toofull, 19 
> active+degraded+remapped+backfill_toofull, 3 active+clean+scrubbing+deep; 221 
> TB data, 45284 GB used, 21720 GB / 67004 GB avail; 4882468/118972792 degraded 
> (4.104%)
> 2013-07-03 17:38:20.813192 osd.14 [WRN] OSD near full (95%)
> 
> Can I change the default full-OSD ratio?
> If so, could that help Ceph move some PGs from osd.14 onto another OSD, such 
> as osd.16?
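> 
> For instance, going by the control docs referenced above, would something
> like this be the right way to raise it temporarily?
> 
> ceph pg set_full_ratio 0.98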
> 
> Regards
> 
> -- 
> ----------------------------------------------
> Pierre BLONDEAU
> Systems & Network Administrator
> Université de Caen
> GREYC Laboratory, Computer Science Department
> 
> tel    : 02 31 56 75 42
> office : Campus 2, Science 3, 406
> ----------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com