Hello all,

I was wondering: is it possible to place different pools on different OSDs using only two physical servers?

This is the layout I had in mind: http://tinypic.com/r/30tgt8l/8

I would like to use osd.0 and osd.1 for the Cinder/RBD pool, and osd.2 and osd.3 for Nova instances. I was following the howto from the Ceph documentation: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds , but it assumes four physical servers: two for the "Platter" pool and two for the "SSD" pool.

What I am concerned about is how the CRUSH map should be written, and how CRUSH will decide where to send the data, given that the same hostnames appear in both the cinder and nova trees. For example, is it possible to do something like this:


# buckets
host cephosd1 {
        id -2           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.010
}

host cephosd1 {
        id -3           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.2 weight 0.010
}

host cephosd2 {
        id -4           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 0.010
}

host cephosd2 {
        id -5           # do not change unnecessarily
        # weight 0.010
        alg straw
        hash 0  # rjenkins1
        item osd.3 weight 0.010
}

root cinder {
        id -1           # do not change unnecessarily
        # weight 0.020
        alg straw
        hash 0  # rjenkins1
        item cephosd1 weight 0.010
        item cephosd2 weight 0.010
}

root nova {
        id -6           # do not change unnecessarily
        # weight 0.020
        alg straw
        hash 0  # rjenkins1
        item cephosd1 weight 0.010
        item cephosd2 weight 0.010
}
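
I assume each root would also need its own rule, adapted from the platter/SSD example in the docs. This is only my own sketch; the ruleset numbers are guesses:

rule cinder {
        ruleset 1                       # my guess; must be unique
        type replicated
        min_size 1
        max_size 10
        step take cinder                # start from the cinder root
        step chooseleaf firstn 0 type host
        step emit
}

rule nova {
        ruleset 2                       # my guess; must be unique
        type replicated
        min_size 1
        max_size 10
        step take nova                  # start from the nova root
        step chooseleaf firstn 0 type host
        step emit
}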

If reusing the same host names like this is not possible, could you share an idea of how this scenario could be achieved?
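
For reference, once the map compiles I assume each pool would then be pointed at its rule with something like the following ("cinder" and "nova" here are just placeholders for my actual pool names):

ceph osd pool set cinder crush_ruleset 1
ceph osd pool set nova crush_ruleset 2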

Thanks in advance!!


--
Nikola Pajtic
