The use case is for KVM RBD volumes.  

Our environment will be roughly 80% random reads/writes; a 40/60 or 30/70 
read/write split is a good estimate, all at 4k-8k IO sizes.  We currently run 
on a Nimble hybrid array which sits in the 5k-15k IOPS range with occasional 
spikes up to 20-25k IOPS (each Nimble is capable of 100k IOPS).

It's also worth mentioning that I plan on splitting this cluster between our 
two sites.  We have a dedicated dark fiber connection of 8x 10Gb links between 
the two sites (roughly 0.4-0.5 ms latency).  I was leaning towards 40Gb 
because the existing HP 5900 switches we use to light the dark fiber also have 
4x 40Gb ports, which I could use to hang the Ceph cluster network from.

This would be a 4/2 setup (size=4, min_size=2), keeping two copies at each site.
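For anyone sketching the same thing: a CRUSH rule along these lines is the usual way to pin two copies per site. This assumes the CRUSH map defines `datacenter` buckets for the two sites; the rule name and id here are just placeholders, not anything from my cluster:

```
# 4 replicas total: pick 2 datacenters, then 2 hosts in each
rule stretch_replicated {
    id 1
    type replicated
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
```

Paired with size=4 / min_size=2 on the pool, the cluster should stay writable through the loss of a whole site, since the surviving site still holds two copies.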

I currently have a much lower-end Ceph cluster using all spinning rust, with 3 
nodes at each site (48 OSDs).  So far that is working out really well, but the 
data going onto that cluster is written and then almost never touched again.  
That spinning-rust cluster runs on simple 10Gb SFP+.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io