Perhaps group sets of hosts into racks in the crushmap; the crushmap doesn't have to strictly map the real world. See the sketch below for what that could look like.
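For example, something like this in the decompiled crushmap (a sketch only; the host names, bucket ids, and weights are placeholders for whatever is already in the map):

rack rack1 {
    id -10          # placeholder id
    alg straw
    hash 0          # rjenkins1
    item host1 weight 1.000
    item host2 weight 1.000
}

rack rack2 {
    id -11          # placeholder id
    alg straw
    hash 0          # rjenkins1
    item host3 weight 1.000
    item host4 weight 1.000
}

root root {
    id -1           # placeholder id
    alg straw
    hash 0          # rjenkins1
    item rack1 weight 2.000
    item rack2 weight 2.000
}

With two rack buckets under the root, "step chooseleaf firstn 0 type rack" can find two independent racks for the two replicas, even though all the hosts physically sit in one rack.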

On 05/13/2014 08:52 AM, Cao, Buddy wrote:

Hi,

I have a crushmap structure like root->rack->host->osds. I designed the rule below. Since I used "chooseleaf ... rack" in the rule definition, if there is only one rack in the cluster, the ceph pgs always stay stuck in the unclean state (because the default metadata/data/rbd pools are set to 2 replicas). Could you let me know how to configure the rule so that it also works in a cluster with only one rack?

rule ssd {
    ruleset 1
    type replicated
    min_size 0
    max_size 10
    step take root
    step chooseleaf firstn 0 type rack
    step emit
}

BTW, if I add a new rack to the crushmap, the pg status does eventually get to active+clean. However, my customer has ONLY one rack in their environment, so it is hard for me to ask them to set up several racks as a workaround.

Wei Cao (Buddy)
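Another option, rather than adding rack buckets, is to change the rule itself to pick leaves under hosts instead of racks, so the two replicas land on distinct hosts within the single rack. A sketch, keeping the ruleset number and the root bucket name from the rule above:

rule ssd {
    ruleset 1
    type replicated
    min_size 0
    max_size 10
    step take root
    step chooseleaf firstn 0 type host
    step emit
}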



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
