Hi ceph list,

we have a hyperconverged Ceph cluster with KVM on 8 nodes, running Ceph
Hammer 0.94.10. The cluster is now 3 years old and we are planning a new
cluster for a high-IOPS project. We use replicated pools with size 3 /
min_size 2, and the latency on our switch backend is not the best.


ping -s 8192 10.10.10.40

8200 bytes from 10.10.10.40: icmp_seq=1 ttl=64 time=0.153 ms
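
ICMP only shows the raw network round trip. As a cross-check we could also
measure end-to-end write latency, including the OSD commit time, with rados
bench from a client node; the pool name "rbd" below is just a placeholder
for a test pool:

# "rbd" is only an example pool; 10 s of 4 KB writes at queue depth 1
rados bench -p rbd 10 write -b 4096 -t 1 --no-cleanup

The average latency reported there is closer to what the terminal servers
will actually see than the ping numbers above.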


We plan to split the hyperconverged setup into separate storage and compute
nodes and want to separate the Ceph cluster and public networks: the cluster
network on 40 Gbit Mellanox switches and the public network on the existing
10 Gbit switches.
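
In ceph.conf the split would look roughly like this (the subnets below are
only placeholders, not our real addressing):

[global]
    # example subnets only - substitute the real ones
    public network  = 10.10.10.0/24
    cluster network = 10.10.20.0/24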

Now my question... is 0.153 ms - 0.170 ms fast enough for the public
network? We have to deploy a setup with 1500 - 2000 terminal servers...
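
My rough back-of-envelope so far (please correct me if the model is wrong):
with size 3 and the replicas written in parallel, a synchronous write pays
about two network round trips:

  client -> primary OSD      ~0.17 ms
  primary -> replica OSDs    ~0.17 ms
  ------------------------------------
  network share per write    ~0.34 ms  => roughly 2900 IOPS at queue depth 1

so the network alone should not be the hard limit; the OSD/journal commit
time comes on top and will probably dominate.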


Does anyone have experience running that many terminal servers on a Ceph backend?


Thanks for any replies.


-- 
Tobias Kropf

Technical Department

--


inett GmbH » Your IT systems house in Saarbrücken

Mainzerstrasse 183
66121 Saarbrücken
Managing Director: Marco Gabriel
Commercial Register Saarbrücken, HRB 16588

Phone: 0681 / 41 09 93 – 0
Fax: 0681 / 41 09 93 – 99
E-Mail: i...@inett.de
Web: www.inett.de

Cyberoam Gold Partner - Zarafa Gold Partner - Proxmox Authorized Reseller -
Proxmox Training Center - SEP sesam Certified Partner - Open-E Partner - Endian
Certified Partner - Kaspersky Silver Partner - ESET Silver Partner - Member
of the iTeam Systemhausverbund for SMEs

