Re: [ceph-users] design guidance

2017-06-07 Thread Christian Balzer
Hello, On Tue, 6 Jun 2017 20:59:40 -0400 Daniel K wrote: > Christian, > > Thank you for the tips -- I certainly googled my eyes out for a good while > before asking -- maybe my google-fu wasn't too good last night. > > > I love using IB, alas with just one port per host you're likely best off

Re: [ceph-users] design guidance

2017-06-06 Thread Daniel K
I started down that path and got so deep that I couldn't even find where I went in. I couldn't make heads or tails out of what would or wouldn't work. We didn't need multiple hosts accessing a single datastore, so on the client side I just have a single VM guest running on each ESXi host, with

Re: [ceph-users] design guidance

2017-06-06 Thread Daniel K
Christian, Thank you for the tips -- I certainly googled my eyes out for a good while before asking -- maybe my google-fu wasn't too good last night. > I love using IB, alas with just one port per host you're likely best off > ignoring it, unless you have a converged network/switches that can

Re: [ceph-users] design guidance

2017-06-06 Thread Maxime Guyot
Hi Daniel, The flexibility of Ceph is that you can start with your current config, scale out and upgrade (CPUs, journals etc...) as your performance requirements increase. 6x 1.7GHz, are we speaking about the Xeon E5 2603L v4? Any chance to bump that to 2620 v4 or 2630 v4? Test how the 6x 1.7GHz
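
(Not from the thread, but a sketch of the kind of test Maxime is suggesting: create a throwaway pool, run rados bench with small writes, and watch whether the ceph-osd processes pin the 1.7GHz cores before the disks or network saturate. The pool name, PG count, block size and thread count below are arbitrary examples.)

    # create a scratch pool, benchmark small writes, then random reads
    ceph osd pool create bench 128
    rados bench -p bench 60 write -b 4096 -t 32 --no-cleanup
    rados bench -p bench 60 rand -t 32
    # run top/htop on the OSD nodes during the bench to see per-core ceph-osd load
    ceph osd pool delete bench bench --yes-i-really-really-mean-it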

Re: [ceph-users] design guidance

2017-06-06 Thread Adrian Saul
> > Early usage will be CephFS, exported via NFS and mounted on ESXi 5.5 > > and > > 6.0 hosts (migrating from a VMware environment), later to transition to > > qemu/kvm/libvirt using native RBD mapping. I tested iSCSI using LIO > > and saw much worse performance with the first cluster, so it seems
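
(Not from the thread: for the later qemu/kvm/libvirt stage quoted above, "native RBD mapping" usually means a libvirt network disk backed by librbd rather than a kernel-mapped device. A minimal sketch follows; the pool/image name, monitor hostname and cephx secret UUID are placeholders.)

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-0'>
        <host name='mon1' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-LIBVIRT-SECRET-UUID'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>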

Re: [ceph-users] design guidance

2017-06-06 Thread Christian Balzer
Hello, lots of similar questions in the past, google is your friend. On Mon, 5 Jun 2017 23:59:07 -0400 Daniel K wrote: > I've built 'my-first-ceph-cluster' with two of the 4-node, 12-drive > Supermicro servers and dual 10Gb interfaces (one cluster, one public) > > I now have 9x 36-drive

[ceph-users] design guidance

2017-06-05 Thread Daniel K
I've built 'my-first-ceph-cluster' with two of the 4-node, 12-drive Supermicro servers and dual 10Gb interfaces (one cluster, one public). I now have 9x 36-drive Supermicro StorageServers made available to me, each with dual 10Gb and a single Mellanox IB/40G NIC. No 1G interfaces except IPMI. 2x
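
(Not part of Daniel's post: a minimal ceph.conf sketch of the split cluster/public network layout described above, for readers following along. The subnets are placeholders, not taken from the thread.)

    [global]
    # client and MON traffic on one 10Gb interface
    public network  = 10.0.1.0/24
    # OSD replication/recovery traffic on the other interface
    cluster network = 10.0.2.0/24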