Re: [ceph-users] Crush distribution with heterogeneous device classes and failure domain hosts

2018-09-20 Thread Kevin Olbrich
Thank you very much, Paul.

Kevin

On Thu, 20 Sep 2018 at 15:19, Paul Emmerich <paul.emmer...@croit.io> wrote:
> Hi,
>
> device classes are internally represented as completely independent
> trees/roots; showing them in one tree is just syntactic sugar.

Re: [ceph-users] Crush distribution with heterogeneous device classes and failure domain hosts

2018-09-20 Thread Paul Emmerich
Hi,

device classes are internally represented as completely independent trees/roots; showing them in one tree is just syntactic sugar.

For example, if you have a hierarchy like root --> host1, host2, host3 --> nvme/ssd/sata OSDs, then you'll actually have 3 trees: root~ssd -> host1~ssd, host2~ssd, host3~ssd -> the SSD OSDs, and likewise root~nvme and root~sata for the other classes.
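To make that concrete, here is a minimal sketch of a replicated CRUSH rule that targets just one of those shadow trees, in decompiled-crushmap syntax; the rule name and id are invented for illustration and would need to match the actual crushmap:

    rule replicated_ssd {
        id 1
        type replicated
        min_size 1
        max_size 10
        # "class ssd" makes CRUSH walk the shadow root default~ssd,
        # so only SSD OSDs can ever be selected by this rule
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
    }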

Re: [ceph-users] Crush distribution with heterogeneous device classes and failure domain hosts

2018-09-20 Thread Kevin Olbrich
To answer my own question:

    ceph osd crush tree --show-shadow

Sorry for the noise...

On Thu, 20 Sep 2018 at 14:54, Kevin Olbrich wrote:
> Hi!
>
> Currently I have a cluster with four hosts and 4x HDDs + 4x SSDs per host.
> I also have replication rules to distinguish between HDD and SSD (and
> failure-domain set to rack) which are mapped to pools.
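For readers of the archive: with that flag the per-class shadow buckets become visible alongside the plain buckets, roughly like the abridged sketch below (IDs, weights and host names are invented for illustration):

    ID  CLASS WEIGHT  TYPE NAME
    -9  ssd   1.74599 root default~ssd
    -10 ssd   0.87299     host host1~ssd
     4  ssd   0.43700         osd.4
    -1        7.27199 root default
    -2        3.63599     host host1
    ...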

[ceph-users] Crush distribution with heterogeneous device classes and failure domain hosts

2018-09-20 Thread Kevin Olbrich
Hi!

Currently I have a cluster with four hosts and 4x HDDs + 4x SSDs per host. I also have replication rules to distinguish between HDD and SSD (and failure-domain set to rack) which are mapped to pools.

What happens if I add a heterogeneous host with 1x SSD and 1x NVMe (where NVMe will be a new device class)?
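As a rough sketch of the setup being asked about (pool and rule names are placeholders, failure domain "rack" as described above), a class-specific rule for the new class could be created once the NVMe OSD exists:

    # list the device classes currently in use; "nvme" appears once an NVMe OSD is created
    ceph osd crush class ls
    # replicated rule restricted to the nvme class, failure domain rack, under root "default"
    ceph osd crush rule create-replicated replicated_nvme default rack nvme
    # point a pool at that rule
    ceph osd pool set fast-pool crush_rule replicated_nvme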