Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Igor Gajsin
Thanks a lot for your help. Konstantin Shalygin writes: > On 04/27/2018 05:05 PM, Igor Gajsin wrote: >> I have a crush rule like > > > You still can use device classes! > > >> * host0 has a piece of data on osd.0 > Not piece, full object. If we talk about non-EC pools. >> * host1 has pieces of da
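For reference, a replicated crush rule pinned to a device class can be created along these lines (the rule and class names here are only placeholders):

# ceph osd crush rule create-replicated replicated_hdd default host hdd

A pool can then be switched to it with `ceph osd pool set <pool> crush_rule replicated_hdd`.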

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Konstantin Shalygin
On 04/27/2018 05:05 PM, Igor Gajsin wrote: > I have a crush rule like You still can use device classes! > * host0 has a piece of data on osd.0 Not piece, full object. If we talk about non-EC pools. > * host1 has pieces of data on osd.1 and osd.2 host1 has a copy on osd.1 *or* osd.2 > * host2 has
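To see where a given object's replicas actually land, something like this can be used (pool and object names are placeholders); it prints the PG and the acting set of OSDs:

# ceph osd map rbd some-object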

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Igor Gajsin
Thanks, man. Thanks a lot. Now I understand. So, to be sure: if I have 3 hosts, the replication factor is also 3, and I have a crush rule like:
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
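One way to check how such a rule maps replicas is to export the crush map and test it offline, roughly like this:

# ceph osd getcrushmap -o crushmap.bin
# crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings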

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Konstantin Shalygin
On 04/27/2018 04:37 PM, Igor Gajsin wrote: > pool 7 'rbd' replicated size 3 min_size 2 crush_rule 0 Your pools have the proper size setting - it is 3. But your crush map has only 2 buckets for this rule (i.e. your pods). To make this rule work you should have a minimum of 3 'pod' buckets. k
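To get to 3 'pod' buckets, a third bucket could be added and one of the hosts moved into it, roughly like this (bucket and host names are only placeholders):

# ceph osd crush add-bucket group3 pod
# ceph osd crush move group3 root=default
# ceph osd crush move host2 pod=group3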

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Igor Gajsin
# ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 958 lfor 0/909 flags hashpspool stripe_width 0 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins
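To double-check which rule a pool is actually using and which PGs are affected, something like this can help:

# ceph osd pool get cephfs_data crush_rule
# ceph pg dump_stuck unclean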

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-27 Thread Konstantin Shalygin
On 04/26/2018 11:30 PM, Igor Gajsin wrote: > after assigning this rule to a pool it gets stuck in the same state: `ceph osd pool ls detail` please k

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-26 Thread Igor Gajsin
Hi Konstantin, thanks a lot for your response. > Your crush is imbalanced: I did it deliberately. The group2 of my small-but-helpful ceph cluster will also be the master nodes for my new small-but-helpful kubernetes cluster. And what I want to achieve is: there are 2 groups of nodes, and even if o

Re: [ceph-users] cluster can't remap objects after changing crush tree

2018-04-25 Thread Konstantin Shalygin
# ceph osd crush tree
ID  CLASS WEIGHT  TYPE NAME
-1        3.63835 root default
-9        0.90959     pod group1
-5        0.90959         host feather1
 1   hdd  0.90959             osd.1
-10       2.72876     pod group2
-7        1.81918         host ds1
 2   hdd  0.90959             osd.
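A replicated rule that takes 'pod' as its failure domain would typically be created along these lines (the rule name is only an example):

# ceph osd crush rule create-replicated by-pod default pod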

[ceph-users] cluster can't remap objects after changing crush tree

2018-04-25 Thread Igor Gajsin
Hi, I've got stuck on a problem with a crush rule. I have a small cluster with 3 nodes and 4 osds. I've decided to split it into 2 failure domains, so I made 2 buckets and put the hosts into those buckets as in this instruction http://www.sebastien-han.fr/blog/2014/01/13/ceph-managing-crush-with-the-cli/ Fina
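The bucket layout described here would typically be produced with commands roughly like the following (bucket and host names follow the crush tree quoted earlier in the digest):

# ceph osd crush add-bucket group1 pod
# ceph osd crush add-bucket group2 pod
# ceph osd crush move group1 root=default
# ceph osd crush move group2 root=default
# ceph osd crush move feather1 pod=group1
# ceph osd crush move ds1 pod=group2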