Hi Grégoire,

With those NICs (and without any other background), I'd go with bonding your 
1G NICs together and your 10G NICs together, and putting primary and secondary 
storage over the 10G bond.  Management traffic is minimal and spread across 
all of your hosts, as is public traffic, so both would be fine over the bonded 
1 Gb links.  Finally, guest traffic would normally also be fine over the 1 Gb 
links, especially if you throttle it a little, unless you know that you'll 
have especially high guest traffic.
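
For reference, the two bonds could be created on each XenServer host with 
something like the sketch below (the PIF UUIDs are placeholders you'd look up 
first, the eth0/eth1 = 1G and eth2/eth3 = 10G mapping is an assumption, and 
balance-slb is just the common default, adjust for your switch configuration):

  # List the PIFs to identify the 1G and 10G pairs on each host
  xe pif-list params=uuid,device,host-name-label

  # One network per bond; CloudStack traffic labels reference these name-labels
  xe network-create name-label=bond-1g
  xe network-create name-label=bond-10g
  xe bond-create network-uuid=<bond-1g-net-uuid> pif-uuids=<eth0-pif-uuid>,<eth1-pif-uuid> mode=balance-slb
  xe bond-create network-uuid=<bond-10g-net-uuid> pif-uuids=<eth2-pif-uuid>,<eth3-pif-uuid> mode=balance-slb

The throttling I mention is normally done on the CloudStack side, via the 
network rate on your compute/network offerings (or the network.throttling.rate 
global setting), rather than on the hosts themselves.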



Kind regards,

Paul Angus

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


-----Original Message-----
From: Grégoire Lamodière [mailto:g.lamodi...@dimsi.fr] 
Sent: 04 July 2017 21:15
To: users@cloudstack.apache.org
Subject: Network architecture

Dear All,

In the process of implementing a new CS advanced zone (4.9.2), I am wondering 
about the best network architecture to use.
Any ideas / advice would be highly appreciated.

1/ Each host has 4 network adapters: 2 x 1 GbE, 2 x 10 GbE
2/ The primary store is NFS-based, 10 GbE
3/ The secondary store is NFS-based, 10 GbE
4/ The maximum network offering is 1 Gbit to the Internet
5/ Hypervisor: XenServer 7
6/ Hardware: HP Blade c7000

Right now, my choice would be :

1/ Bond the two gigabit network cards and use the bond for mgmt + public
2/ Use one 10 GbE for the storage network (operations on the secondary store)
3/ Use one 10 GbE for guest traffic (and primary store traffic, by design)
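
To make the labels concrete, my idea is to rename the XenServer networks and 
point the CloudStack traffic labels at them in the zone wizard, roughly like 
this (UUIDs and name-labels are only examples):

  # Give the per-NIC networks name-labels for CloudStack traffic labels to reference
  xe network-param-set uuid=<eth2-network-uuid> name-label=cloud-storage
  xe network-param-set uuid=<eth3-network-uuid> name-label=cloud-guest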

This architecture sounds good in terms of performance (using 10 GbE where it 
makes sense, and redundancy on mgmt + public thanks to the bond).

Another option would be to bond the two 10 GbE interfaces and use Xen labels 
to manage storage and guest traffic on the same physical network. This choice 
would give us failover on storage and guest traffic, but I am wondering 
whether performance would be badly affected.
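
A rough sketch of what I mean with the labels (VLAN tags and name-labels are 
only illustrative):

  # Bond the two 10G PIFs into a single network
  xe network-create name-label=bond-10g
  xe bond-create network-uuid=<bond-10g-net-uuid> pif-uuids=<eth2-pif-uuid>,<eth3-pif-uuid>

  # Distinct labels for storage and guest traffic on top of the bond
  xe network-create name-label=cloud-storage
  xe network-create name-label=cloud-guest
  xe vlan-create pif-uuid=<bond-pif-uuid> vlan=100 network-uuid=<cloud-storage-net-uuid>
  xe vlan-create pif-uuid=<bond-pif-uuid> vlan=200 network-uuid=<cloud-guest-net-uuid>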

Do you have any feedback on this?

Thanks all.

Best Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71

