Hi Daniel,

The flexibility of Ceph is that you can start with your current config and
scale out and upgrade (CPUs, journals, etc.) as your performance
requirements increase.

6x 1.7GHz: are we talking about the Xeon E5-2603 v4? Any chance to bump
that to an E5-2620 v4 or E5-2630 v4?
Test how the 6x 1.7GHz handles 36 OSDs, then decide based on that whether
to go RAID0/LVM or not.
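For a quick test, something like the following should show whether the CPUs
keep up (a sketch only; pool name, duration and thread count are just
examples, adjust to your setup):

    # write then sequential-read load, while watching CPU on the OSD nodes
    rados bench -p rbd 60 write -t 32 --no-cleanup
    rados bench -p rbd 60 seq -t 32
    rados -p rbd cleanup
    # per-OSD micro-benchmark, handy for spotting individual slow disks
    ceph tell osd.0 bench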
If you have a need for large, low-performance block storage, it could be
worth doing a hybrid setup with *some* OSDs in RAID0/LVM.
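If you go down the RAID0 route, a minimal sketch (device names are
placeholders, and I would benchmark both layouts before committing):

    # stripe three spinners into one block device...
    mdadm --create /dev/md/osd0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # ...then hand the md device to ceph-disk as a single OSD
    ceph-disk prepare /dev/md/osd0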

Since this is a virtualisation use case (VMware and KVM), did you consider
journals? A single 256GB SATA SSD is not enough for 36 filestore journals.
Assuming those 256GB SSDs have a performance profile suitable for journals,
a storage tier of OSDs with SSD journals (20%) and OSDs with collocated
journals (80%) could be nice. Then you place the VMs in different tiers
based on their write-latency requirements.
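A rough sketch of how the two tiers could be split in CRUSH (bucket, rule
and pool names below are made up, and the OSD ids/weights need to match
your layout):

    # separate CRUSH root for the SSD-journal OSDs
    ceph osd crush add-bucket journal-ssd root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=journal-ssd
    # re-parent each SSD-journal OSD under the new host bucket
    ceph osd crush set osd.0 1.0 host=node1-ssd
    # rule + pool for the low-write-latency tier
    ceph osd crush rule create-simple journal-ssd-rule journal-ssd host
    ceph osd pool create rbd-fast 512 512 replicated journal-ssd-rule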

If you have the budget for it, you could fit 3x PCIe SSD/NVMe cards into
those StorageServers; that would give a 1:12 journal-to-OSD ratio and
pretty good write latency.
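Carving the journals out of an NVMe card is then just something like this
(device names are placeholders; the journal partition size comes from
"osd journal size" in ceph.conf):

    # one OSD per spinner, journal partition carved from the NVMe device
    ceph-disk prepare /dev/sdc /dev/nvme0n1
    ceph-disk prepare /dev/sdd /dev/nvme0n1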
Another option is to start with filestore, then migrate to bluestore once
it is stable.

IMO a single network for cluster and public is easier to manage. Since you
already have a 10G cluster, continue with that. Either:
1) If you are tight on 10G ports, do 2x10G per node and skip the 40G NIC
2) If you have plenty of ports, do 4x10G per node: split the 40G NIC into
4x10G.
13 servers (9+3) is usually small enough to fit under a single ToR pair, so
you should be good with a LACP pair of standard 10G switches as ToR, which
you probably already have?
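A minimal sketch of the single-network setup (subnet, addresses and
interface names are placeholders; the bond stanza is Debian/ifupdown
style):

    # ceph.conf: one network for both client and replication traffic
    [global]
        public network = 10.10.0.0/24
        # deliberately no "cluster network" line

    # /etc/network/interfaces: 2x10G LACP bond per node
    auto bond0
    iface bond0 inet static
        address 10.10.0.11
        netmask 255.255.255.0
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4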

Cheers,
Maxime

On Tue, 6 Jun 2017 at 08:33 Adrian Saul <adrian.s...@tpgtelecom.com.au>
wrote:

> > > Early usage will be CephFS, exported via NFS and mounted on ESXi 5.5
> > > and
> > > 6.0 hosts (migrating from a VMware environment), later to transition to
> > > qemu/kvm/libvirt using native RBD mapping. I tested iSCSI using LIO
> > > and saw much worse performance with the first cluster, so it seems
> > > this may be the better way, but I'm open to other suggestions.
> > >
> > I've never seen any ultimate solution to providing HA iSCSI on top of
> > Ceph, though other people here have made significant efforts.
>
> In our tests our best results were with SCST - also because it provided
> proper ALUA support at the time.  I ended up developing my own pacemaker
> cluster resources to manage the SCST orchestration and ALUA failover.  In
> our model we have a pacemaker cluster in front being an RBD client
> presenting LUNs/NFS out to VMware (NFS), Solaris and Hyper-V (iSCSI).  We
> are using CephFS over NFS but performance has been poor, even using it just
> for VMware templates.  We are on an earlier version of Jewel, so it's
> possible that later versions improve CephFS for that, but I have not had
> time to test it.
>
> We have been running a small production/POC for over 18 months on that
> setup, and gone live into a much larger setup in the last 6 months based on
> that model.  It's not without its issues, but most of that is a lack of
> test resources to be able to shake out some of the client compatibility and
> failover shortfalls we have.
>