[ceph-users] Re: Changing failure domain

2020-02-01 Thread mrxlazuardin
Hi Francois, I'm afraid that you need more rooms to have such availability. For the data pool, you will need 5 rooms due to your 3+2 erasure profile, and for metadata you will need 3 rooms due to your 3-replica rule. If you have only 2 rooms, there is a possibility of corrupted data whenever you
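For reference, a minimal sketch of how that constraint shows up in practice; the profile and rule names below are assumptions, not from the thread:

  # A 3+2 profile with room as the failure domain places k+m = 5 chunks
  # in 5 distinct rooms, hence the 5-room requirement above.
  ceph osd erasure-code-profile set ec32room k=3 m=2 crush-failure-domain=room
  ceph osd crush rule create-erasure ec32room_rule ec32room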

[ceph-users] Re: Changing failure domain

2020-01-14 Thread Francois Legrand
I don't want to remove the cephfs_meta pool but cephfs_datapool. To be clear: I now have a cephfs consisting of a cephfs_metapool and a cephfs_datapool. I want to add a new data pool cephfs_datapool2, migrate all data from cephfs_datapool to cephfs_datapool2, and then remove the original

[ceph-users] Re: Changing failure domain

2020-01-13 Thread Konstantin Shalygin
On 1/6/20 5:50 PM, Francois Legrand wrote: I still have a few questions before going on. It seems that some metadata remains on the original data pool, preventing its deletion (http://ceph.com/geen-categorie/ceph-pool-migration/ and

[ceph-users] Re: Changing failure domain

2020-01-06 Thread Francois Legrand
Thanks again for your answer. I still have a few questions before going on. It seems that some metadata remains on the original data pool, preventing its deletion (http://ceph.com/geen-categorie/ceph-pool-migration/ and https://www.spinics.net/lists/ceph-users/msg41374.html). Thus does
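For what it's worth, a quick way to check what is still stored in the old data pool before trying to remove it (a sketch; cephfs_datapool is the pool name used earlier in the thread):

  # list a few of the remaining objects (typically empty objects that only
  # carry backtrace metadata for files created while this was the first data pool)
  rados -p cephfs_datapool ls | head
  # show how much space and how many objects the pool still reports
  ceph df detail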

[ceph-users] Re: Changing failure domain

2019-12-23 Thread Konstantin Shalygin
On 12/19/19 10:22 PM, Francois Legrand wrote: Thus my question is *how can I migrate an EC data pool of a cephfs to another EC pool?* I suggest this: # create your new ec pool # `ceph osd pool application enable ec_new cephfs` # `ceph fs add_data_pool cephfs ec_new` # `setfattr -n
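Spelled out as a sketch, the full sequence could look like the following; the erasure profile name, PG counts, and the mount point /mnt/cephfs are assumptions, and only the commands quoted above come from Konstantin's message:

  # create the new EC pool and allow it to be used as a CephFS data pool
  ceph osd pool create ec_new 128 128 erasure my_ec_profile
  ceph osd pool set ec_new allow_ec_overwrites true
  ceph osd pool application enable ec_new cephfs
  ceph fs add_data_pool cephfs ec_new
  # direct new writes under the mount to the new pool; existing files keep
  # their old layout and must be rewritten (e.g. copied) to actually move
  setfattr -n ceph.dir.layout.pool -v ec_new /mnt/cephfs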

[ceph-users] Re: Changing failure domain

2019-12-19 Thread Francois Legrand
Thanks for your advice. I thus created a new replicated crush rule: { "rule_id": 2, "rule_name": "replicated3over2rooms", "ruleset": 2, "type": 1, "min_size": 3, "max_size": 4, "steps": [ { "op": "take", "item": -1,

[ceph-users] Re: Changing failure domain

2019-12-02 Thread Konstantin Shalygin
On 12/2/19 5:56 PM, Francois Legrand wrote: For replica, what is the best way to change the crush profile? Is it to create a new replica profile, and set this profile as the crush ruleset for the pool (something like ceph osd pool set {pool-name} crush_ruleset my_new_rule)? Indeed. Then you can
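As a sketch of what "Indeed" amounts to (the rule and pool names are assumptions; on recent releases the pool option is called crush_rule rather than crush_ruleset):

  # create a replicated rule with room as the failure domain
  ceph osd crush rule create-replicated rep_over_rooms default room
  # point the existing pool at the new rule; the cluster then rebalances
  ceph osd pool set cephfs_metapool crush_rule rep_over_rooms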

[ceph-users] Re: Changing failure domain

2019-12-02 Thread Francois Legrand
Thanks. For replica, what is the best way to change the crush profile? Is it to create a new replica profile, and set this profile as the crush ruleset for the pool (something like ceph osd pool set {pool-name} crush_ruleset my_new_rule)? For erasure coding, I would thus have to change the profile

[ceph-users] Re: Changing failure domain

2019-11-28 Thread Paul Emmerich
Use a crush rule like this for replica: 1) root default class XXX 2) choose 2 rooms 3) choose 2 disks That'll get you 4 OSDs in two rooms, and the first 3 of these get data; the fourth will be ignored. That guarantees that losing a room will lose you at most 2 out of 3 copies. This is for
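In decompiled crushmap syntax, a rule matching that description might look like this (a sketch; the rule name, id, and the hdd device class are assumptions):

  rule replicated_2rooms {
      id 2
      type replicated
      min_size 3
      max_size 4
      # take the default root, restricted to one device class
      step take default class hdd
      # pick 2 rooms, then one OSD on each of 2 hosts per room -> 4 OSDs;
      # with size=3 only the first 3 are used, so a room holds at most 2 copies
      step choose firstn 2 type room
      step chooseleaf firstn 2 type host
      step emit
  }

Such a rule would be edited into the crushmap with crushtool (getcrushmap / decompile / compile / setcrushmap) and then assigned to the pool as described elsewhere in the thread.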