I'm not sure whether this email was delivered a few days ago; it seems it wasn't.

Re-sending it in the hope that someone can add something :)

Thank you,
Gian


On 26 Feb 2015, at 19:20, ceph-users <ceph-us...@pinguozzo.com> wrote:

Hi All,

I've been provided with this hardware:
4 x HP G8 servers
18 x 1 TB HDDs per server (72 HDDs in total)
3 SSDs per server (12 SSDs in total)

I now have 2 questions:

1. How would it look to have 1 SSD holding the journals for every 6 OSDs? Is that feasible? What would you do? (A rough sketch of what I mean follows below.)
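Just to make the idea concrete, per server I was picturing something along these lines, with each SSD split into 6 journal partitions (host and device names are only placeholders):

# 1 SSD (/dev/sds) partitioned into 6 journal partitions, serving 6 data disks
ceph-deploy osd prepare ceph-node1:/dev/sdb:/dev/sds1
ceph-deploy osd prepare ceph-node1:/dev/sdc:/dev/sds2
...
ceph-deploy osd prepare ceph-node1:/dev/sdg:/dev/sds6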

2. I thought it would not be a very good idea to set the cluster up that way, so I set up the following CRUSH map instead:

-1    69.96  root default
-14   64.8       region SAS
-12   32.4           datacenter 1-SAS
-3    16.2               host z-srv-m-cph01-sas
4      0.9                   osd.4   up  1
....
-7    16.2               host z-srv-m-cph02-sas
84     0.9                   osd.84  up  1
....
-13   32.4           datacenter 2-SAS
-8    16.2               host r-srv-m-cph01-sas
7      0.9                   osd.7   up  1
....
-9    16.2               host r-srv-m-cph02-sas
5      0.9                   osd.5   up  1
....
-15    5.16      region SSD
-10    2.58          datacenter 1-SSD
-2     1.29              host z-srv-m-cph01-ssd
1      0.43                  osd.1   up  1
21     0.43                  osd.21  up  1
45     0.43                  osd.45  up  1
-4     1.29              host z-srv-m-cph02-ssd
83     0.43                  osd.83  up  1
97     0.43                  osd.97  up  1
90     0.43                  osd.90  up  1
-11    2.58          datacenter 2-SSD
-6     1.29              host r-srv-m-cph01-ssd
3      0.43                  osd.3   up  1
30     0.43                  osd.30  up  1
62     0.43                  osd.62  up  1
-5     1.29              host r-srv-m-cph02-ssd
2      0.43                  osd.2   up  1
31     0.43                  osd.31  up  1
61     0.43                  osd.61  up  1
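For reference, the hierarchy was built with commands roughly like these (only the first few buckets shown, the rest follow the same pattern):

ceph osd crush add-bucket SAS region
ceph osd crush move SAS root=default
ceph osd crush add-bucket 1-SAS datacenter
ceph osd crush move 1-SAS region=SAS
ceph osd crush add-bucket z-srv-m-cph01-sas host
ceph osd crush move z-srv-m-cph01-sas datacenter=1-SAS region=SAS
...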


I would like to set up replication so that the pool size is 2, with 1 copy in each datacenter.

I then created 2 pools:
rbd-sas
rbd-ssd
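Roughly like this (the PG counts here are just placeholders, not what I'd necessarily use):

ceph osd pool create rbd-sas 2048 2048
ceph osd pool create rbd-ssd 512 512
ceph osd pool set rbd-sas size 2
ceph osd pool set rbd-ssd size 2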

And these CRUSH rules:
# ceph osd crush rule dump SSD
{ "rule_id": 1,
 "rule_name": "SSD",
 "ruleset": 1,
 "type": 1,
 "min_size": 1,
 "max_size": 10,
 "steps": [
       { "op": "take",
         "item": -15,
         "item_name": "SSD"},
       { "op": "chooseleaf_firstn",
         "num": 0,
         "type": "datacenter"},
       { "op": "emit"}]}


# ceph osd crush rule dump SAS
{ "rule_id": 2,
 "rule_name": "SAS",
 "ruleset": 2,
 "type": 1,
 "min_size": 1,
 "max_size": 10,
 "steps": [
       { "op": "take",
         "item": -14,
         "item_name": "SAS"},
       { "op": "chooseleaf_firstn",
         "num": 0,
         "type": "datacenter"},
       { "op": "emit"}]}


I associated rbd-sas with the SAS rule and rbd-ssd with the SSD rule.
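I did that with something like this (ruleset ids taken from the dumps above):

ceph osd pool set rbd-sas crush_ruleset 2
ceph osd pool set rbd-ssd crush_ruleset 1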
Do the 2 rules behave as I intended? Sorry for the long email.
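I was also planning to sanity-check the mappings offline with crushtool, something along these lines (the sample range and options are just a guess):

ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --rule 1 --num-rep 2 --show-mappings
crushtool -i crushmap.bin --test --rule 2 --num-rep 2 --show-mappings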

Gian

