Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-22 Thread Wladimir Mutel
Ashley Merrick wrote: Correct, in a large cluster that is no problem. I was talking about Wladimir's setup, where they are running a single node with a failure domain of OSD, which would mean the loss of all OSDs and all data. Sure, I am aware that running with 1 NVMe is risky, so we have a plan to add a

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-22 Thread Ashley Merrick
Correct, in a large cluster that is no problem. I was talking about Wladimir's setup, where they are running a single node with a failure domain of OSD, which would mean the loss of all OSDs and all data. On Sun, 22 Sep 2019 03:42:52 +0800 solarflow99 wrote now my
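
For reference, the failure domain in play here can be confirmed from the CRUSH rule itself; a quick check (the default rule name replicated_rule is assumed) would look roughly like:

    # dump the rule applied to the pools; a chooseleaf step with "type": "osd"
    # means replicas are spread only across OSDs, not across hosts
    ceph osd crush rule dump replicated_rule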

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-21 Thread solarflow99
Now, my understanding is that an NVMe drive is recommended to help speed up BlueStore. If it were to fail then those OSDs would be lost, but assuming there is 3x replication and enough OSDs, I don't see the problem here. There are other scenarios where a whole server might be lost; it doesn't mean
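
As a sanity check for the 3x-replication assumption, the pool's replica counts can be queried directly; a sketch, with the pool name rbd assumed:

    # size = number of replicas, min_size = replicas required to keep serving I/O
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    # raise to 3 replicas if the pool is not there yet
    ceph osd pool set rbd size 3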

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread vitalif
With 1 NVMe you are really limited to a read-only / writethrough cache (which should of course be possible with bcache). Nobody wants to lose all data after 1 disk failure... Another option is the use of bcache / flashcache. I have experimented with bcache; it is quite easy to set up, but once you
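
A minimal bcache sketch of the writethrough idea mentioned above (device names and the cache-set UUID are placeholders):

    # bcache-tools: format the HDD as backing device and an NVMe partition as cache
    make-bcache -B /dev/sdb
    make-bcache -C /dev/nvme0n1p1
    # attach the cache set (UUID from bcache-super-show /dev/nvme0n1p1) to the backing device
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # writethrough: acknowledged writes always reach the HDD, so losing the NVMe loses no data
    echo writethrough > /sys/block/bcache0/bcache/cache_mode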

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Bastiaan Visser
Another option is the use of bcache / flashcache. I have experimented with bcache; it is quite easy to set up, but once you run into performance problems it is hard to pinpoint the cause. In the end I ended up just adding more disks to share IOPS, and going for the default setup (db / wal
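
The "default setup" referred to here is BlueStore with only the DB/WAL carved out onto the NVMe; a rough sketch for one of the eight HDDs (the device names and the ~120 GB split of the 1 TB NVMe are assumptions):

    # one DB partition per OSD, roughly 1 TB / 8 HDDs
    sgdisk -n 1:0:+120G /dev/nvme0n1
    # create the OSD; the WAL is placed inside the DB when no separate --block.wal is given
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1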

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Ashley Merrick
Placing it as a Journal / BlueStore DB/WAL will mostly help with writes; by the sounds of it you want to increase read performance? How important is the data on this Ceph cluster? If you place it as a Journal / DB/WAL, any failure of it will cause total data loss, so I would very much advise
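
One way to see how wide the blast radius of that shared device would be is to list which OSDs keep their DB on it; a sketch, assuming the NVMe is /dev/nvme0n1:

    # shows each OSD's data/db/wal devices; every OSD whose [db] lives on the NVMe
    # goes down together with it
    ceph-volume lvm list /dev/nvme0n1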

[ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Wladimir Mutel
Dear everyone, Last year I set up an experimental Ceph cluster (still single node, failure domain = osd, MB Asus P10S-M WS, CPU Xeon E3-1235L, RAM 64 GB, HDDs WD30EFRX, Ubuntu 18.04, now with kernel 5.3.0 from Ubuntu mainline PPA and Ceph 14.2.4 from
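
For context, a single-node cluster like this one typically gets its failure domain = osd via a CRUSH rule along these lines (the rule and pool names are only illustrative):

    # replicate across OSDs instead of hosts, since there is only one host
    ceph osd crush rule create-replicated single-node default osd
    ceph osd pool set rbd crush_rule single-node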