** Changed in: ceph (Ubuntu)
Assignee: Dave Chiluk (chiluk) => (unassigned)
--
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to ceph in Ubuntu.
https://bugs.launchpad.net/bugs/1261501
Title:
ceph osds unbalanced
To manage notifications about this bug go to:
** Changed in: ceph (Ubuntu)
Importance: Undecided => Medium
** Changed in: ceph (Ubuntu)
Status: New => Triaged
--
This may also be due to legacy CRUSH behavior that is fixed in bobtail
(though not made default until recently). In newer Ceph versions, you
can do 'ceph osd crush tunables optimal' to get the improved mapping
(expect some data movement).
I suggest you upgrade, in any case!
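For reference, on a sufficiently new Ceph this would look roughly like the following (the `show-tunables` inspection step is my addition, not from the comment above; run it first to see what you currently have):

```shell
# Inspect the current CRUSH tunables profile
ceph osd crush show-tunables

# Switch to the improved mapping; expect some data movement afterwards
ceph osd crush tunables optimal
```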
--
This is Ceph Argonaut (from the Folsom UCA); AIUI Ceph doesn't support
live PG addition until Dumpling. Can you confirm? Are there any
workarounds?
--
** Changed in: ceph (Ubuntu)
Assignee: (unassigned) => Dave Chiluk (chiluk)
--
Whoops, missed the below.
Once you increase the number of placement groups, you must also increase
the number of placement groups for placement (pgp_num) before your
cluster will rebalance. The pgp_num should be equal to the pg_num. To
increase the number of placement groups for placement, execute the
following:
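Something along these lines (pool name "data" and the value 512 are placeholders; pg_num has to be raised before pgp_num, and neither can be decreased on these versions):

```shell
# Raise pg_num first, then match pgp_num so rebalancing actually starts
ceph osd pool set data pg_num 512
ceph osd pool set data pgp_num 512
```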
They likely have too few placement groups. They can retrieve their current
value using ...
$ ceph osd pool get {pool-name} pg_num
Please have IS set the number of placement groups according to the below page.
http://ceph.com/docs/master/rados/operations/placement-groups/
Basically it should be (number of OSDs * 100) / replica count, rounded
up to the nearest power of two.
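As a back-of-the-envelope sketch of that rule of thumb (the helper name and the 100-PGs-per-OSD target are my own reading of the linked page, not anything IS has to use verbatim):

```python
import math


def recommended_pg_num(num_osds, replica_count, pgs_per_osd=100):
    """Suggest a pg_num: (OSDs * pgs_per_osd) / replicas,
    rounded up to the nearest power of two."""
    target = (num_osds * pgs_per_osd) / replica_count
    return 2 ** math.ceil(math.log2(target))


# e.g. 9 OSDs with 3x replication -> 300 -> rounded up to 512
print(recommended_pg_num(9, 3))
```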