han.sebast...@gmail.com said:
> Well, to avoid unnecessary data movement, there is also an _experimental_
> feature to change the number of PGs in a pool on the fly:
> ceph osd pool set <poolname> pg_num <numpgs> --allow-experimental-feature 

I've been following the instructions here:

http://ceph.com/docs/master/rados/configuration/osd-config-ref/

under "data placement", trying to set the number of pgs in ceph.conf.
I've added these lines in the "global" section:

        osd pool default pg num = 500
        osd pool default pgp num = 500

but they don't seem to have any effect on how mkcephfs behaves.
Before I added these lines, mkcephfs created a data pool with
3904 PGs.  After wiping everything, adding the lines, and
re-creating the pool, it still ends up with 3904 PGs.  What
am I doing wrong?
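
For what it's worth, here is how I've been checking what the pool actually
got (a sketch using the standard ceph CLI; I'm assuming the default pool
name "data" that mkcephfs creates):

```shell
# Show the effective PG counts for the "data" pool
ceph osd pool get data pg_num
ceph osd pool get data pgp_num

# Or list pg_num for every pool straight from the OSD map
ceph osd dump | grep pg_num
```

Both report 3904 regardless of the defaults I set in ceph.conf.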

                                        Thanks,
                                        Bryan
-- 
========================================================================
Bryan Wright              |"If you take cranberries and stew them like 
Physics Department        | applesauce, they taste much more like prunes
University of Virginia    | than rhubarb does."  --  Groucho 
Charlottesville, VA  22901|                     
(434) 924-7218            |         br...@virginia.edu
========================================================================

