Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-21 Thread Ashley Merrick
Correct, in a large cluster no problem. I was talking about Wladimir's setup, where they are running a single node with a failure domain of OSD, which would mean a loss of all OSDs and all data. On Sun, 22 Sep 2019 03:42:52 +0800 solarflow99 wrote: now my und...
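[Editor's note: a minimal sketch of how such a single-node layout is usually expressed, assuming the CRUSH failure domain is dropped from 'host' to 'osd'; the rule name 'single-node' and pool name 'mypool' are hypothetical, not from the thread.]

    # create a replicated CRUSH rule whose failure domain is the individual OSD,
    # so replicas can land on different OSDs of the same host
    ceph osd crush rule create-replicated single-node default osd
    # point an existing pool at that rule
    ceph osd pool set mypool crush_rule single-node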
Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-21 Thread solarflow99
now my understanding is that an NVMe drive is recommended to help speed up bluestore. If it were to fail, then those OSDs would be lost, but assuming there is 3x replication and enough OSDs I don't see the problem here. There are other scenarios where a whole server might be lost; it doesn't mean the...
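[Editor's note: a hedged sketch of the layout being discussed, assuming one partition of the shared 1TB NVMe is used as the BlueStore DB/WAL for each HDD OSD; the device names and pool name below are illustrative only.]

    # one block.db partition on the shared NVMe per spinning OSD; if the NVMe dies,
    # every OSD whose DB lives on it is lost and must be rebuilt from replicas
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/sdc --block.db /dev/nvme0n1p2
    # 3x replication covers the data only if the failure domain spans the loss
    ceph osd pool set mypool size 3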

Re: [ceph-users] Need advice with setup planning

2019-09-21 Thread mj
Hi, In the case of three ceph hosts, you could also consider this setup: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server This only requires two 10G NICs on each machine, plus an extra 1G for 'regular' non-ceph traffic. That way at least your ceph comms would be 1...
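[Editor's note: a hedged sketch of what the referenced full-mesh 'routed (simple)' setup looks like on one of the three nodes, following the Proxmox wiki linked above; interface names and addresses are illustrative assumptions.]

    # /etc/network/interfaces fragment on node1 (10.15.15.50),
    # with one direct 10G cable to each of the other two nodes, no switch
    auto ens19
    iface ens19 inet static
            address 10.15.15.50/24
            # cable to node2
            up   ip route add 10.15.15.51/32 dev ens19
            down ip route del 10.15.15.51/32

    auto ens20
    iface ens20 inet static
            address 10.15.15.50/24
            # cable to node3
            up   ip route add 10.15.15.52/32 dev ens20
            down ip route del 10.15.15.52/32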