Hi all, and thanks for sharing your experience with Ceph!
We have a simple setup with 9 OSDs, all HDDs, on 3 nodes (3 OSDs per node).
We started the cluster with the default, straightforward bootstrap to see how it 
works with HDDs. Then we decided to add SSDs and create a pool that uses only SSDs.
To have some pools on HDDs only and some on SSDs only, we edited the crushmap to 
add the class hdd.
So far we have not configured anything for the SSDs, neither disks nor rules; we 
only added the device class to the "step take default" step of the existing rules.
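For reference, the change was made with the usual decompile/edit/recompile cycle, roughly like this (file names are just examples):

# dump the current crushmap and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt, adding "class hdd" to the "step take default" lines

# recompile the edited map and inject it back into the cluster
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin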
So here are the rules before introducing class hdd:
# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
rule erasure-code {
        id 1
        type erasure
        min_size 3
        max_size 4
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure2_1 {
        id 2
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.meta {
        id 3
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.data {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf indep 0 type host
        step emit
}

And here they are after the change:

# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}
rule erasure-code {
        id 1
        type erasure
        min_size 3
        max_size 4
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure2_1 {
        id 2
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.meta {
        id 3
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
rule erasure-pool.data {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step chooseleaf indep 0 type host
        step emit
}
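For what it's worth, crushtool's test mode is one way to compare what the old and new maps do for the same rule; a minimal sketch, assuming the files from the decompile step above (rule id 1 is the erasure-code rule, --num-rep 4 matches its max_size, and the 0-9 input range is arbitrary):

# show which OSDs each test input maps to under the old map
crushtool -i crushmap.bin --test --rule 1 --num-rep 4 --min-x 0 --max-x 9 --show-mappings

# same inputs against the new map; lines that differ are placements that will move
crushtool -i crushmap-new.bin --test --rule 1 --num-rep 4 --min-x 0 --max-x 9 --show-mappings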
Just doing this marked all PGs belonging to the EC pools as misplaced.

Is that expected, and why?
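In case it is relevant, this is how I would list the class-specific shadow hierarchy that device classes create (the default~hdd naming is what I expect from the docs, not copied from our cluster):

# list defined device classes and the per-class shadow buckets
ceph osd crush class ls
ceph osd crush tree --show-shadow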
Best regards 
Alessandro Bolgia