Thanks a lot, guys.

Best,


German

2016-03-24 15:55 GMT-03:00 Sean Redmond <sean.redmo...@gmail.com>:

> Hi German,
>
> For data to be split across the racks, you should set your CRUSH rule to
> 'step chooseleaf firstn 0 type rack' instead of 'step chooseleaf firstn 0
> type host'.
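>
> In the decompiled map syntax that would look roughly like this (the rule
> name and ruleset number below are just placeholders, keep whatever your
> map already uses):
>
>     rule replicated_ruleset {
>             ruleset 0
>             type replicated
>             min_size 1
>             max_size 10
>             step take default
>             step chooseleaf firstn 0 type rack
>             step emit
>     }
>
> With 'firstn 0 type rack', CRUSH picks as many distinct racks as the pool
> has replicas and one OSD under each, so with size 2 you get one copy in
> each rack.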
>
> Thanks
>
> On Wed, Mar 23, 2016 at 3:50 PM, German Anders <gand...@despegar.com>
> wrote:
>
>> Hi all,
>>
>> I have a question. I'm in the middle of a new Ceph cluster deployment and
>> I have 6 OSD servers split between two racks, so rack1 has osdserver1, 3,
>> and 5, and rack2 has osdserver2, 4, and 6. I've edited the following CRUSH
>> map and want to know if it's OK, and also whether objects would be stored
>> with one copy on each rack/host, so that if I lose one rack, I still have
>> a copy on the other rack:
>>
>> http://pastebin.com/raw/QJf1VeeJ
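>>
>> To sanity-check the mapping before injecting it, I guess I can test the
>> compiled map with crushtool, something along these lines (the filenames
>> and rule number 0 are just my assumption):
>>
>> # crushtool -c crush.txt -o crush.compiled
>> # crushtool -i crush.compiled --test --rule 0 --num-rep 2 --show-mappings
>>
>> and check that the two OSDs in each mapping line belong to different racks.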
>>
>> Also, do I need to run any command in order to 'apply' the new CRUSH map
>> to the existing pools? There are actually only two:
>>
>> - 0 rbd            (pg_num: 4096 | pgp_num: 4096 | size: 2 | min_size: 1)
>> - 1 cinder-volumes (pg_num: 4096 | pgp_num: 4096 | size: 2 | min_size: 1)
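>>
>> I assume applying it would be the usual export/edit/recompile/inject
>> cycle, something like:
>>
>> # ceph --cluster cephIB osd getcrushmap -o crush.bin
>> # crushtool -d crush.bin -o crush.txt
>>   (edit the rule in crush.txt)
>> # crushtool -c crush.txt -o crush.new
>> # ceph --cluster cephIB osd setcrushmap -i crush.new
>>
>> but I'm not sure whether the existing pools pick up the new rule
>> automatically after that.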
>>
>> # ceph --cluster cephIB osd tree
>> ID WEIGHT   TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
>> -1 51.29668 root default
>> -8 26.00958     rack cage5-rack1
>> -2  8.66986         host cibn01
>>  0  0.72249             osd.0         up  1.00000          1.00000
>>  1  0.72249             osd.1         up  1.00000          1.00000
>>  2  0.72249             osd.2         up  1.00000          1.00000
>>  3  0.72249             osd.3         up  1.00000          1.00000
>>  4  0.72249             osd.4         up  1.00000          1.00000
>>  5  0.72249             osd.5         up  1.00000          1.00000
>>  6  0.72249             osd.6         up  1.00000          1.00000
>>  7  0.72249             osd.7         up  1.00000          1.00000
>>  8  0.72249             osd.8         up  1.00000          1.00000
>>  9  0.72249             osd.9         up  1.00000          1.00000
>> 10  0.72249             osd.10        up  1.00000          1.00000
>> 11  0.72249             osd.11        up  1.00000          1.00000
>> -4  8.66986         host cibn03
>> 24  0.72249             osd.24        up  1.00000          1.00000
>> 25  0.72249             osd.25        up  1.00000          1.00000
>> 26  0.72249             osd.26        up  1.00000          1.00000
>> 27  0.72249             osd.27        up  1.00000          1.00000
>> 28  0.72249             osd.28        up  1.00000          1.00000
>> 29  0.72249             osd.29        up  1.00000          1.00000
>> 30  0.72249             osd.30        up  1.00000          1.00000
>> 31  0.72249             osd.31        up  1.00000          1.00000
>> 32  0.72249             osd.32        up  1.00000          1.00000
>> 33  0.72249             osd.33        up  1.00000          1.00000
>> 34  0.72249             osd.34        up  1.00000          1.00000
>> 35  0.72249             osd.35        up  1.00000          1.00000
>> -6  8.66986         host cibn05
>> 48  0.72249             osd.48        up  1.00000          1.00000
>> 49  0.72249             osd.49        up  1.00000          1.00000
>> 50  0.72249             osd.50        up  1.00000          1.00000
>> 51  0.72249             osd.51        up  1.00000          1.00000
>> 52  0.72249             osd.52        up  1.00000          1.00000
>> 53  0.72249             osd.53        up  1.00000          1.00000
>> 54  0.72249             osd.54        up  1.00000          1.00000
>> 55  0.72249             osd.55        up  1.00000          1.00000
>> 56  0.72249             osd.56        up  1.00000          1.00000
>> 57  0.72249             osd.57        up  1.00000          1.00000
>> 58  0.72249             osd.58        up  1.00000          1.00000
>> 59  0.72249             osd.59        up  1.00000          1.00000
>> -9 25.28709     rack cage5-rack2
>> -3  8.66986         host cibn02
>> 12  0.72249             osd.12        up  1.00000          1.00000
>> 13  0.72249             osd.13        up  1.00000          1.00000
>> 14  0.72249             osd.14        up  1.00000          1.00000
>> 15  0.72249             osd.15        up  1.00000          1.00000
>> 16  0.72249             osd.16        up  1.00000          1.00000
>> 17  0.72249             osd.17        up  1.00000          1.00000
>> 18  0.72249             osd.18        up  1.00000          1.00000
>> 19  0.72249             osd.19        up  1.00000          1.00000
>> 20  0.72249             osd.20        up  1.00000          1.00000
>> 21  0.72249             osd.21        up  1.00000          1.00000
>> 22  0.72249             osd.22        up  1.00000          1.00000
>> 23  0.72249             osd.23        up  1.00000          1.00000
>> -5  8.66986         host cibn04
>> 36  0.72249             osd.36        up  1.00000          1.00000
>> 37  0.72249             osd.37        up  1.00000          1.00000
>> 38  0.72249             osd.38        up  1.00000          1.00000
>> 39  0.72249             osd.39        up  1.00000          1.00000
>> 40  0.72249             osd.40        up  1.00000          1.00000
>> 41  0.72249             osd.41        up  1.00000          1.00000
>> 42  0.72249             osd.42        up  1.00000          1.00000
>> 43  0.72249             osd.43        up  1.00000          1.00000
>> 44  0.72249             osd.44        up  1.00000          1.00000
>> 45  0.72249             osd.45        up  1.00000          1.00000
>> 46  0.72249             osd.46        up  1.00000          1.00000
>> 47  0.72249             osd.47        up  1.00000          1.00000
>> -7  7.94737         host cibn06
>> 60  0.72249             osd.60        up  1.00000          1.00000
>> 61  0.72249             osd.61        up  1.00000          1.00000
>> 62  0.72249             osd.62        up  1.00000          1.00000
>> 63  0.72249             osd.63        up  1.00000          1.00000
>> 64  0.72249             osd.64        up  1.00000          1.00000
>> 65  0.72249             osd.65        up  1.00000          1.00000
>> 66  0.72249             osd.66        up  1.00000          1.00000
>> 67  0.72249             osd.67        up  1.00000          1.00000
>> 68  0.72249             osd.68        up  1.00000          1.00000
>> 69  0.72249             osd.69        up  1.00000          1.00000
>> 70  0.72249             osd.70        up  1.00000          1.00000
>>
>>
>> Ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
>>
>>
>> Thanks in advance,
>>
>> Best,
>>
>> German
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
