> That later change would have _increased_ the number of recommended PG, not
> decreased it.
Weird, as our Giant health status was OK before upgrading to Hammer…

> With your cluster 2048 PGs total (all pools combined!) would be the sweet
> spot, see:
> 
> http://ceph.com/pgcalc/ <http://ceph.com/pgcalc/>
Had read this originally when creating the cluster

> It seems to me that you increased PG counts assuming that the formula is per 
> pool.
Well, yes, maybe. I believe we bumped PGs because the health status complaints in 
Giant mentioned specific pool names, e.g. "too few PGs in <pool-name>"…
So we naturally bumped the mentioned pools up to the next power of two until 
health stopped complaining.
And yes, we did wonder about the relatively high total number of PGs for the 
cluster, as we had initially read pgcalc and thought we understood it.
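For my own sanity, here is how I read the pgcalc guidance: the target is roughly 
(#OSDs x 100) / replica size across *all* pools combined, rounded up to a power of 
two. A minimal Python sketch of that reading; the OSD count, replica size and 
per-OSD target below are purely illustrative, not our actual cluster values:

# Rough sketch of the total-PG guideline as I understand it from pgcalc.
# The numbers used in the example are illustrative assumptions only.

def total_pg_target(num_osds, replica_size, pgs_per_osd=100):
    """Total PGs across ALL pools: (OSDs * per-OSD target / size),
    rounded up to the nearest power of two."""
    raw = num_osds * pgs_per_osd / replica_size
    power = 1
    while power < raw:
        power *= 2
    return power

# Example: 60 OSDs, 3x replication -> 60 * 100 / 3 = 2000 -> 2048 total,
# which would line up with a "2048 PGs total (all pools combined)" figure.
print(total_pg_target(60, 3))  # 2048

If that is right, then bumping each complaining pool up to its own next power of 
two, pool by pool, easily blows past that cluster-wide budget.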

ceph.com <http://ceph.com/> is not responding at present…

- Are you saying one needs to consider the number of pools in a cluster in advance 
and factor this in when calculating the number of PGs?

- If so, how does one decide which pool gets what PG count, since this is set per 
pool, especially if one can't pre-calculate the amount of objects ending up in each 
pool? My naive guess is sketched below.
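My naive guess at what pgcalc does (and I may be reading it wrong): split one 
cluster-wide PG budget across pools by expected data share, rounding each pool up 
to a power of two. A sketch with made-up pool names and percentages:

# Hypothetical sketch of splitting a cluster-wide PG budget across pools
# by expected %data. Pool names and shares are made up for illustration.

def next_power_of_two(n):
    power = 1
    while power < n:
        power *= 2
    return power

def per_pool_pgs(total_pg_budget, expected_data_share):
    """expected_data_share: {pool_name: fraction of total data}."""
    return {pool: next_power_of_two(total_pg_budget * share)
            for pool, share in expected_data_share.items()}

# E.g. a 2048-PG budget split over three application pools; note the
# per-pool rounding can push the sum somewhat above the budget:
print(per_pool_pgs(2048, {"app-a": 0.50, "app-b": 0.30, "app-c": 0.20}))
# {'app-a': 1024, 'app-b': 1024, 'app-c': 512}

But that of course assumes one can estimate the data share per pool up front, which 
is exactly what we can't always do.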

But yes, I also understand that more pools means more PGs per OSD. Does this imply 
that using different pools to segregate various data, e.g. per application, in the 
same cluster is a bad idea?

Using pools as a sort of namespace segregation makes it easy, for example, to 
remove/migrate data per application, and is thus a handy segregation tool IMHO.

- Is the BCP to consolidate data in a few pools per cluster?

/Steffen