Re: [ceph-users] mirror OSD configuration

2018-02-28 Thread David Turner
A more common search term for this might be "rack failure domain". The premise is the same for a room as it is for a rack: both can hold hosts and be set as the failure domain. There is a fair bit of discussion on how to achieve multi-rack/room/datacenter setups. Datacenter setups are more likely to ...
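For reference, on a Luminous-era cluster the failure domain of a replicated pool can be switched from host to room (or rack) with a dedicated CRUSH rule. A minimal sketch, assuming the root bucket is named "default"; the rule and pool names are placeholders:

    # create a replicated rule that separates copies at the room level
    ceph osd crush rule create-replicated replicated_room default room
    # point an existing pool at the new rule (pool name is hypothetical)
    ceph osd pool set mypool crush_rule replicated_room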

Re: [ceph-users] mirror OSD configuration

2018-02-28 Thread Gregory Farnum
On Wed, Feb 28, 2018 at 3:02 AM Zoran Bošnjak <zoran.bosn...@sloveniacontrol.si> wrote: > I am aware of the monitor consensus requirement. It is taken care of (there is a third room with only a monitor node). My problem is about OSD redundancy, since I can only use 2 server rooms for OSDs. I co ...
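For anyone finding this in the archive: one way to express a two-room placement is a rule that picks both rooms and then two hosts in each. A rough sketch in decompiled crushmap syntax, assuming rule id 1 is free, the root is "default", and the pool name is a placeholder:

    rule replicated_two_rooms {
            id 1
            type replicated
            min_size 1
            max_size 4
            step take default
            step choose firstn 2 type room
            step chooseleaf firstn 2 type host
            step emit
    }

    # 4 copies, 2 per room; pool name is hypothetical
    ceph osd pool set mypool size 4
    ceph osd pool set mypool min_size 2

With size=4 and min_size=2, losing one room leaves exactly min_size copies, so I/O continues, but any further OSD failure in the surviving room would drop the affected PGs below min_size and block them until recovery.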

Re: [ceph-users] mirror OSD configuration

2018-02-28 Thread Zoran Bošnjak
I am aware of the monitor consensus requirement. It is taken care of (there is a third room with only a monitor node). My problem is about OSD redundancy, since I can only use 2 server rooms for OSDs. I could use EC pools, LRC, or any other Ceph configuration, but I could not find a configuration that ...
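If EC/LRC is explored, the profile syntax looks roughly like the sketch below. The k/m/l values and pool name are only placeholders, and whether any EC layout survives the loss of one of two rooms depends on how many shards land in the surviving room, so this illustrates the knobs rather than recommending a layout:

    # locally repairable code; recovery is kept within a room where possible
    ceph osd erasure-code-profile set lrc_room plugin=lrc \
        k=4 m=2 l=3 \
        crush-failure-domain=host crush-locality=room
    ceph osd pool create ecpool 64 64 erasure lrc_room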

Re: [ceph-users] mirror OSD configuration

2018-02-27 Thread Eino Tuominen
> Is it possible to configure the CRUSH map such that it will tolerate "room" failure? In my case, there is one network switch per room and one power supply per room, which makes a single point of (room) failure. Hi, you cannot achieve real room redundancy with just two rooms. At minimum you ...
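Whatever rule ends up in the map, it is worth checking with crushtool whether the computed mappings really place copies in both rooms before trusting it with data. A sketch; the rule id and replica count here are assumptions:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt        # inspect buckets and rules
    crushtool -i crushmap.bin --test --rule 1 --num-rep 4 --show-mappings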

[ceph-users] mirror OSD configuration

2018-02-27 Thread Zoran Bošnjak
This is my planned OSD configuration:

    root
        room1
            OSD host1
            OSD host2
        room2
            OSD host3
            OSD host4

There are 6 OSDs per host. Is it possible to configure the CRUSH map such that it will tolerate "room" failure? In my case, there is one network switch per room and ...
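For what it's worth, that hierarchy can be built with the CRUSH CLI along these lines, assuming the hosts already exist as buckets and the root is named "default" (names are taken from the layout above):

    # create the room buckets and hang them under the root
    ceph osd crush add-bucket room1 room
    ceph osd crush add-bucket room2 room
    ceph osd crush move room1 root=default
    ceph osd crush move room2 root=default
    # move the existing host buckets under their rooms
    ceph osd crush move host1 room=room1
    ceph osd crush move host2 room=room1
    ceph osd crush move host3 room=room2
    ceph osd crush move host4 room=room2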