Re: [ceph-users] Default PGs
>-----Original Message-----
>From: Tyler Brekke [mailto:tyler.bre...@inktank.com]
>Sent: Thursday, October 24, 2013 4:36 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Default PGs
>
>You have to do this before creating your first monitor, as the default
>pools are created by the monitor.
>
>Now any pools you create should have the correct number of placement
>groups, though.
>
>You can also increase your pg and pgp num with:
>
>ceph osd pool set {pool-name} pg_num {pg-num}
>ceph osd pool set {pool-name} pgp_num {pgp-num}

What I did was run "ceph-deploy new " to create the default ceph.conf, then I added these lines in [global]:

osd_pool_default_pgp_num = 800
osd_pool_default_pg_num = 800

Then I created the monitors and OSDs via ceph-deploy. I still had 64 PGs in all the default pools. Is that expected? Do I need to set up ceph.conf before running "new"? If I do, it seems to overwrite my ceph.conf with the default ceph.conf.

For pools created later, the pool create command requires you to specify the number of PGs, so I can't really judge whether the default PG value is working. Running "ceph osd pool create {pool-name} {pg-num}" without supplying a value for pg-num fails.

I was able to adjust the PGs in the default pools after they were created - that works as described.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
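Since ceph-deploy will overwrite a ceph.conf edited before "new", one way to avoid the 64-PG surprise is to sanity-check that the [global] defaults actually survived in ceph.conf before creating the monitors. A minimal sketch of such a check, assuming an INI-style ceph.conf with underscore option names (the inline conf_text stands in for reading the real file):

```python
# Sketch: confirm osd_pool_default_pg_num / pgp_num are present in the
# [global] section of ceph.conf before running "ceph-deploy mon create".
# conf_text is a stand-in; in practice you would read the real file.
import configparser

conf_text = """\
[global]
osd_pool_default_pg_num = 800
osd_pool_default_pgp_num = 800
"""

cfg = configparser.ConfigParser()
cfg.read_string(conf_text)  # in practice: cfg.read("/etc/ceph/ceph.conf")

pg = cfg.getint("global", "osd_pool_default_pg_num")
pgp = cfg.getint("global", "osd_pool_default_pgp_num")
print(pg, pgp)  # → 800 800
```

If either option is missing, getint raises an error, which is exactly the signal that ceph-deploy clobbered the edits and the monitors would come up with the built-in default of 64.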
Re: [ceph-users] Default PGs
You have to do this before creating your first monitor, as the default pools are created by the monitor.

Now any pools you create should have the correct number of placement groups, though.

You can also increase your pg and pgp num with:

ceph osd pool set {pool-name} pg_num {pg-num}
ceph osd pool set {pool-name} pgp_num {pgp-num}
[ceph-users] Default PGs
Should osd_pool_default_pg_num and osd_pool_default_pgp_num apply to the default pools? I put them in ceph.conf before creating any OSDs, but after bringing up the OSDs the default pools are using a value of 64.

ceph.conf contains these lines in [global]:

osd_pool_default_pgp_num = 800
osd_pool_default_pg_num = 800

After creating and activating OSDs:

[ceph@joceph05 ceph]$ ceph osd pool get data pg_num
pg_num: 64
[ceph@joceph05 ceph]$ ceph osd pool get data pgp_num
pgp_num: 64
[ceph@joceph05 ceph]$ ceph osd dump
pool 0 'data' rep size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 3 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 3 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0

I have ceph-deploy 1.2.7 and ceph 0.67.4 on CentOS 6.4 with a 3.11.6 kernel.

Thanks,
Joe