Now I also know I have too many PGs!
It is fairly confusing that the docs talk about PGs on the Pool page but only
vaguely about the number of PGs for the cluster as a whole.
Here are some examples of confusing statements with suggested alternatives
from the online docs:
That later change would have _increased_ the number of recommended PGs, not
decreased it.
Weird as our Giant health status was ok before upgrading to Hammer…
With your cluster, 2048 PGs total (all pools combined!) would be the sweet
spot, see:
http://ceph.com/pgcalc/
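For reference, the pgcalc heuristic can be sketched roughly like this (this is a simplification, not the exact pgcalc logic, which also weights each pool by its expected share of data; the replica counts below are my assumptions, not from the thread):

```python
# Rough sketch of the common PG-count rule of thumb:
#   total PGs ~ (num_osds * target_pgs_per_osd) / replica_count,
# rounded up to the next power of two.

def suggested_pg_count(num_osds, replica_count, target_pgs_per_osd=100):
    raw = num_osds * target_pgs_per_osd / replica_count
    power = 1
    while power < raw:
        power *= 2  # round up to the next power of two
    return power

# 4 nodes x 6 OSDs = 24 OSDs; replica count 3 is an assumption:
print(suggested_pg_count(24, 3))  # -> 1024
```

With 2-way replication the same rule lands on 2048 for 24 OSDs; whether that is how the 2048 figure above was arrived at is my guess, not something stated in the thread.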
On 16/04/2015, at 01.48, Steffen W Sørensen ste...@me.com wrote:
Also our calamari web UI won't authenticate anymore, can’t see any issues in
any log under /var/log/calamari, any hints on what to look for are
appreciated, TIA!
Well this morning it will authenticate me, but seems calamari
On 16/04/2015, at 11.09, Christian Balzer ch...@gol.com wrote:
On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote:
That later change would have _increased_ the number of recommended PGs,
not decreased it.
Weird as our Giant health status was ok before upgrading to Hammer…
I'm pretty sure the "too many" check was added around then, and the
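As I understand it, the check Hammer added compares PG replicas per OSD against a threshold; a minimal sketch of that arithmetic (mon_pg_warn_max_per_osd is the real config option, but the default of 300 and the pool figures below are my assumptions for illustration):

```python
# Sketch of the "too many PGs per OSD" health check: each PG is
# counted once per replica, then averaged over all OSDs.

def too_many_pgs(pools, num_osds, max_per_osd=300):
    """pools maps pool name -> (pg_num, replica size)."""
    pg_replicas = sum(pg_num * size for pg_num, size in pools.values())
    return pg_replicas / num_osds > max_per_osd

# 19 pools bumped to 512 PGs at size 2, on 4 nodes x 6 OSDs = 24 OSDs:
print(too_many_pgs({"pool%d" % i: (512, 2) for i in range(19)}, 24))  # -> True
```

This would explain why a cluster that was quiet on Giant starts warning after the upgrade: the PG counts didn't change, only the check did.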
# dpkg -l | egrep -i calamari\|ceph
ii  calamari-clients  1.2.3.1-2-gc1f14b2  all  Inktank
Hi,
Successfully upgraded a small development 4x node Giant 0.87-1 cluster to Hammer
0.94-1, each node with 6x OSD - 146GB, 19 pools, mainly 2 in usage.
Only minor thing: now ceph -s is complaining over too many PGs; previously Giant
had complained of too few, so various pools were bumped up till
Hello,
On Thu, 16 Apr 2015 00:41:29 +0200 Steffen W Sørensen wrote:
Hi,
Successfully upgraded a small development 4x node Giant 0.87-1 cluster to
Hammer 0.94-1, each node with 6x OSD - 146GB, 19 pools, mainly 2 in
usage. Only minor thing: now ceph -s is complaining over too many PGs,