Re: [ceph-users] Need advice with setup planning

2019-09-20 Thread Marc Roos
>>> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)
>>
>> OS not? Get an enterprise SSD for the OS (I think some recommend it when colocating monitors, which can generate a lot of disk IO).
>
> Yes, OS. I have no option to get an SSD.

Re: [ceph-users] Need advice with setup planning

2019-09-20 Thread Salsa
Replying inline. -- Salsa
Sent with ProtonMail Secure Email (https://protonmail.com).

‐‐‐ Original Message ‐‐‐
On Friday, September 20, 2019 1:34 PM, Martin Verges wrote:

> Hello Salsa,
>
>> I have tested Ceph using VMs but never got to put it to use and had a lot of trouble to

Re: [ceph-users] Need advice with setup planning

2019-09-20 Thread Salsa
Replying inline. -- Salsa
Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Friday, September 20, 2019 1:31 PM, Marc Roos wrote:

>> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)
>
> OS not? Get an enterprise SSD

Re: [ceph-users] Need advice with setup planning

2019-09-20 Thread Martin Verges
Hello Salsa,

> I have tested Ceph using VMs but never got to put it to use and had a lot of trouble to get it to install.

If you want to get rid of all the troubles from installing to day-to-day operations, you could consider using https://croit.io/croit-virtual-demo

> - Use 2 HDDs for SO using

Re: [ceph-users] Need advice with setup planning

2019-09-20 Thread Marc Roos
> - Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use it later for storage)

OS not? Get an enterprise SSD for the OS (I think some recommend it when colocating monitors, which can generate a lot of disk IO).

> - Install CentOS 7.7

Good choice.

> - Use 2 vLANs, one for ceph
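
A quick way to sanity-check the "monitors can generate a lot of disk IO" point is to sample /proc/diskstats on the OS disk. A minimal Python sketch follows (not from the thread; it assumes the OS disk is "sda"):

    # Sample /proc/diskstats twice and report the write rate on the OS disk.
    # Assumption: the OS disk is "sda"; adjust DEV for your host.
    import time

    DEV = "sda"

    def sectors_written(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    return int(fields[9])  # field 10 of diskstats: sectors written
        raise ValueError(dev + " not found in /proc/diskstats")

    before = sectors_written(DEV)
    time.sleep(10)
    after = sectors_written(DEV)
    print("%.2f MB/s written to %s" % ((after - before) * 512 / 10 / 1e6, DEV))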

[ceph-users] Need advice with setup planning

2019-09-20 Thread Salsa
I have tested Ceph using VMs but never got to put it to use and had a lot of trouble getting it to install. Now I've been asked to do a production setup using 3 servers (Dell R740) with 12 x 4TB disks each. My plan is this:
- Use 2 HDDs for SO using RAID 1 (I've left 3.5TB unallocated in case I can use
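
For reference, a rough capacity estimate for this layout (not from the thread; it assumes the usual 3-way replication and that 10 of the 12 bays per server end up as OSDs after the two OS disks):

    # Back-of-the-envelope usable capacity for 3 x Dell R740 with 4TB disks.
    # Assumptions: replicated pools with size=3, 10 OSD disks per server.
    servers = 3
    osds_per_server = 10
    disk_tb = 4
    replica_count = 3

    raw_tb = servers * osds_per_server * disk_tb      # 120 TB raw
    usable_tb = raw_tb / replica_count                 # 40 TB usable
    nearfull_tb = usable_tb * 0.85                     # Ceph warns around 85% full
    print("raw: %d TB, usable: %.0f TB, practical ceiling: %.0f TB"
          % (raw_tb, usable_tb, nearfull_tb))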

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread vitalif
One NVMe is really only usable as a read-only / writethrough cache (which should of course be possible with bcache). Nobody wants to lose all data after one disk failure...

> Another option is the use of bcache / flashcache. I have experimented with bcache; it is quite easy to set up, but once you
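
On the writethrough point: bcache exposes the cache mode through sysfs. A minimal sketch (assuming an existing bcache device at /sys/block/bcache0):

    # Force an existing bcache device into writethrough mode so a failure of the
    # cache device cannot lose acknowledged writes. Needs root; assumes bcache0.
    from pathlib import Path

    mode_file = Path("/sys/block/bcache0/bcache/cache_mode")
    print("available modes:", mode_file.read_text().strip())  # active mode shown in [brackets]
    mode_file.write_text("writethrough")
    print("now:", mode_file.read_text().strip())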

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Bastiaan Visser
Another option is the use of bcache / flashcache. I have experimented with bcache; it is quite easy to set up, but once you run into performance problems it is hard to pinpoint the cause. In the end I just added more disks to share the IOPS and went with the default setup (db / wal
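
On the "hard to pinpoint the problem" point: bcache publishes hit/miss counters in sysfs, which at least shows whether the cache is being used at all. A minimal sketch (assuming the device is bcache0):

    # Dump bcache's cumulative cache statistics from sysfs. Assumes bcache0.
    from pathlib import Path

    stats = Path("/sys/block/bcache0/bcache/stats_total")
    for name in ("cache_hits", "cache_misses", "cache_hit_ratio", "bypassed"):
        print(name, (stats / name).read_text().strip())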

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Ashley Merrick
Placing it as a Journal / BlueStore DB/WAL will mostly help with writes; by the sounds of it you want to increase read performance? How important is the data on this Ceph cluster? If you place it as a Journal / DB/WAL, any failure of it will cause total data loss, so I would very much advise
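
For scale, a rough sizing check (numbers taken from the subject line, not stated in the message): splitting the 1TB NVMe into DB partitions for the 8 HDD OSDs, compared with the often-quoted 1-4% of OSD size guideline for BlueStore DB space:

    # Rough BlueStore DB sizing arithmetic for a 1TB NVMe shared by 8x 3TB HDD OSDs.
    nvme_gb = 1000
    osds = 8
    osd_size_gb = 3000

    print("DB partition per OSD: %.0f GB" % (nvme_gb / osds))  # 125 GB
    for pct in (1, 2, 4):
        print("%d%% of a %d GB OSD = %.0f GB" % (pct, osd_size_gb, osd_size_gb * pct / 100))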

[ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-20 Thread Wladimir Mutel
Dear everyone,

Last year I set up an experimental Ceph cluster (still single node, failure domain = osd, MB Asus P10S-M WS, CPU Xeon E3-1235L, RAM 64 GB, HDDs WD30EFRX, Ubuntu 18.04, now with kernel 5.3.0 from the Ubuntu mainline PPA and Ceph 14.2.4 from

Re: [ceph-users] reproducible rbd-nbd crashes

2019-09-20 Thread Marc Schöchlin
Hello Mike and Jason,

As described in my last mail, I converted the filesystem to ext4, set "sysctl vm.dirty_background_ratio=0" and put the regular workload on the filesystem (used as an NFS mount). That seems to have prevented crashes for an entire week now (before this, the nbd device crashed
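
The workaround mentioned above can also be applied programmatically; a minimal sketch (illustrative only, equivalent to "sysctl vm.dirty_background_ratio=0", needs root, not persistent across reboots):

    # Set vm.dirty_background_ratio to 0 by writing the /proc/sys knob directly.
    from pathlib import Path

    knob = Path("/proc/sys/vm/dirty_background_ratio")
    print("before:", knob.read_text().strip())
    knob.write_text("0")
    print("after:", knob.read_text().strip())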

Re: [ceph-users] slow ops for mon slowly increasing

2019-09-20 Thread Kevin Olbrich
OK, looks like clock skew is the problem. I thought this was caused by the reboot, but it did not fix itself after some minutes (mon3 was 6 seconds ahead). After forcing a time sync from the same server, it seems to be solved now.

Kevin

On Fri, 20 Sept 2019 at 07:33, Kevin Olbrich wrote: >
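
A small helper (not from the thread; assumes chrony is the local time daemon) to catch this kind of skew before the monitors complain, by comparing chronyc's reported offset against Ceph's default mon_clock_drift_allowed of 0.05 s:

    # Compare the local chrony offset against Ceph's default clock-drift threshold.
    import re
    import subprocess

    ALLOWED_DRIFT = 0.05  # seconds; Ceph's default mon_clock_drift_allowed

    out = subprocess.run(["chronyc", "tracking"],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"System time\s*:\s*([\d.]+) seconds (fast|slow)", out)
    if m:
        offset = float(m.group(1))
        print("local clock is %.3f s %s of NTP time" % (offset, m.group(2)))
        if offset > ALLOWED_DRIFT:
            print("offset exceeds mon_clock_drift_allowed; expect clock skew warnings")
    else:
        print("could not parse chronyc output")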