I tried increasing the number of metadata replicas from 2 to 3 on my
test cluster with the following command:
ceph osd pool set metadata size 3
Afterwards it appears that all of the metadata placement groups switched to
a degraded state, and they don't seem to be attempting to recover:
2013-01-08
What are your CRUSH rules? Depending on how you set this cluster up,
it might not be placing more than one replica in a single host, and
you've only got two hosts so it couldn't satisfy your request for 3
copies.
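Greg's point can be sketched with a toy simulation (Python; the host/OSD layout and the function are hypothetical illustrations, not real Ceph code): with chooseleaf at the host level, each replica must come from a distinct host, so a 2-host cluster can never place a third copy.

```python
# Toy model of CRUSH "step chooseleaf firstn 0 type host".
# Hypothetical 2-host, 4-OSD layout -- not actual Ceph code.
cluster = {
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
}

def chooseleaf_by_host(cluster, num_replicas):
    """Pick at most one OSD (leaf) under each distinct host."""
    placement = []
    for host, osds in cluster.items():
        if len(placement) == num_replicas:
            break
        placement.append(osds[0])  # one leaf per host, no host reused
    return placement

acting = chooseleaf_by_host(cluster, 3)
print(acting)       # only 2 OSDs can be found -> ['osd.0', 'osd.2']
print(len(acting))  # 2 < pool size 3, so the PG would stay degraded
```

With only two hosts in the tree, the loop runs out of hosts after two replicas, which mirrors why the PGs sit degraded instead of recovering.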
-Greg
On Tue, Jan 8, 2013 at 2:11 PM, Bryan Stillwell
bstillw...@photobucket.com wrote:
That would make sense. Here's what the metadata rule looks like:
rule metadata {
ruleset 1
type replicated
min_size 2
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
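For anyone following along, a rule like the one above can be inspected or edited on a live cluster by decompiling the CRUSH map (standard ceph/crushtool invocations; the file names here are arbitrary, and these obviously require a running cluster):

```shell
# Dump the compiled CRUSH map from the cluster, then decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# After editing crushmap.txt, recompile and inject it back
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```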
On Tue, Jan 8, 2013 at 3:23 PM, Gregory Farnum wrote:
Yep! The step chooseleaf firstn 0 type host means choose n nodes of
type host, and select a leaf under each one of them, where n is the
pool size. You only have two hosts so it can't do more than 2 with
that rule type.
You could do step chooseleaf firstn 0 type device, but that won't
guarantee that the copies end up on different hosts.
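The trade-off Greg describes can be shown with the same toy layout (again a hypothetical sketch, not Ceph code): choosing at the device level always finds 3 OSDs on a 2-host/4-OSD cluster, but by the pigeonhole principle two of those copies must land on the same host, so losing that host loses two replicas at once.

```python
import random

# Toy model of "step chooseleaf firstn 0 type device".
# Hypothetical 2-host, 4-OSD layout -- not actual Ceph code.
cluster = {
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
}
all_osds = [(host, osd) for host, osds in cluster.items() for osd in osds]

# Picking devices directly ignores host boundaries entirely.
placement = random.sample(all_osds, 3)        # 3 distinct devices, always possible
hosts_used = {host for host, _ in placement}

print(len(placement))   # 3 -> pool size 3 is satisfied
print(len(hosts_used))  # 2 -> some host necessarily holds two of the copies
```

So size 3 becomes satisfiable, but the failure domain shrinks from hosts to individual disks, which is why waiting for more hosts (as Bryan does below) is the cleaner fix.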
I appreciate you giving more detail on this. I plan on expanding the
test cluster to 5 servers soon, so I'll just wait until then before
changing the number of replicas.
Thanks,
Bryan
On Tue, Jan 8, 2013 at 3:49 PM, Gregory Farnum g...@inktank.com wrote: