Hi Francois,
I'm afraid you need more rooms to get that availability. For the data
pool, you will need 5 rooms due to your 3+2 erasure profile, and for
metadata you will need 3 rooms due to your 3-replica rule. If you have
only 2 rooms, there is a possibility of corrupted data whenever you
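The arithmetic behind those numbers, assuming the crush failure domain is
the room (every EC shard or replica placed in a distinct room), can be
checked quickly:

```shell
# Rooms needed when every shard/copy must land in a distinct room
# (assumption: failure domain = room in the crush rule):
k=3; m=2            # 3+2 erasure profile: k data + m coding shards
echo "EC data pool: $((k + m)) rooms"

size=3              # replicated metadata pool, size 3
echo "metadata pool: ${size} rooms"
```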
I don't want to remove the cephfs_meta pool but the cephfs_datapool.
To be clear:
I currently have a cephfs consisting of a cephfs_metapool and a cephfs_datapool.
I want to add a new data pool, cephfs_datapool2, migrate all data from
cephfs_datapool to cephfs_datapool2, and then remove the original
On 1/6/20 5:50 PM, Francois Legrand wrote:
Thanks again for your answer.
I still have a few questions before going on.
It seems that some metadata remains on the original data pool,
preventing its deletion
(http://ceph.com/geen-categorie/ceph-pool-migration/ and
https://www.spinics.net/lists/ceph-users/msg41374.html).
Thus does
On 12/19/19 10:22 PM, Francois Legrand wrote:
Thus my question is: *how can I migrate an EC data pool of a cephfs
to another EC pool?*
I suggest this:
# create your new EC pool
# `ceph osd pool application enable ec_new cephfs`
# `ceph fs add_data_pool cephfs ec_new`
# `setfattr -n
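For reference, a sketch of the whole sequence (the pool, profile, and
mount-point names here are assumptions; an EC data pool for cephfs also
needs allow_ec_overwrites, and since a new layout only applies to newly
written files the existing data still has to be copied/rewritten):

```shell
ceph osd pool create ec_new erasure my_ec_profile
ceph osd pool set ec_new allow_ec_overwrites true
ceph osd pool application enable ec_new cephfs
ceph fs add_data_pool cephfs ec_new
# point the directory layout at the new pool, then rewrite the data
setfattr -n ceph.dir.layout.pool -v ec_new /mnt/cephfs
```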
Thanks for your advice.
I thus created a new replica rule:
{
"rule_id": 2,
"rule_name": "replicated3over2rooms",
"ruleset": 2,
"type": 1,
"min_size": 3,
"max_size": 4,
"steps": [
{
"op": "take",
"item": -1,
On 12/2/19 5:56 PM, Francois Legrand wrote:
For replicas, what is the best way to change the crush profile? Is it to
create a new replica rule and set this rule as the crush ruleset for
the pool (something like ceph osd pool set {pool-name} crush_ruleset
my_new_rule)?
Indeed. Then you can
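A minimal sketch of that approach (rule and pool names are placeholders;
note that on recent Ceph releases the pool option is called crush_rule
rather than crush_ruleset):

```shell
ceph osd crush rule create-replicated my_new_rule default host
ceph osd pool set {pool-name} crush_rule my_new_rule
```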
Thanks.
For replicas, what is the best way to change the crush profile? Is it to
create a new replica rule and set this rule as the crush ruleset for
the pool (something like ceph osd pool set {pool-name} crush_ruleset
my_new_rule)?
For erasure coding, I would thus have to change the profile
Use a crush rule like this for replicas:
1) root default class XXX
2) choose 2 rooms
3) choose 2 disks
That'll get you 4 OSDs in two rooms; the first 3 of these get data, and
the fourth will be ignored. That guarantees that losing a room will
lose you at most 2 out of 3 copies. This is for
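Written out in crush map syntax, such a rule might look like this (a
sketch only; the rule name, id, and device class are assumptions):

```
rule replicated3over2rooms {
    id 2
    type replicated
    min_size 3
    max_size 4
    step take default class hdd
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}
```

With pool size 3, the fourth OSD emitted by this rule is simply unused,
as described above.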