Re: Best use of server NICs.

2019-03-19 Thread Jon Marshall
Hi Dag

Many thanks for that, option 1 it is then.

Jon


From: Dag Sonstebo 
Sent: 19 March 2019 09:29
To: users@cloudstack.apache.org
Subject: Re: Best use of server NICs.

Hi Jon,

In short "it depends...". Going by your hardware spec (only 1Gbps NICs) I will 
assume (please correct me if wrong) that this is a smaller environment / lab / 
proof of concept? If so you won't see much benefit from option 2, since you 
simply won't have that much secondary storage traffic going through to cause 
noisy neighbour problems - hence my advice would be option 1), to give you 
redundancy.

Option 2) would leave you with no redundancy for management and storage (bad), 
and would only make sense if you had guest VMs with high network IO. Even if 
you had a lot of secondary storage traffic I would advise against it. If you 
absolutely wanted to run secondary storage traffic separately, I would run a 
bond for management and primary storage and a NIC each for secondary and guest 
traffic - but I would still say 1) is the better option.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue


On 18/03/2019, 19:02, "Jon Marshall"  wrote:


I have 4 1Gbps NICs in each compute node and was considering 2 deployment 
options (Advanced networking with Security Groups) -

1)  2 NICs bonded together and used for all storage and management traffic, and 
the other 2 NICs bonded together and used for guest VM traffic.

2)  1 NIC for management and primary storage, 1 NIC for secondary storage, 
and the remaining 2 NICs bonded together for guest VM traffic.

Option 1 would give more redundancy, but is there any benefit to separating 
storage that would outweigh this?

Or is there a better option I have overlooked?

Any advice much appreciated










Re: Best use of server NICs.

2019-03-19 Thread Dag Sonstebo
Hi Jon,

In short "it depends...". Going by your hardware spec (only 1Gbps NICs) I will 
assume (please correct me if wrong) that this is a smaller environment / lab / 
proof of concept? If so you won't see much benefit from option 2, since you 
simply won't have that much secondary storage traffic going through to cause 
noisy neighbour problems - hence my advice would be option 1), to give you 
redundancy.
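
For illustration only - the interface names, bridge names and addresses below 
are placeholders, not anything from your environment - option 1 on a KVM host 
using ifupdown might look roughly like this:

    # /etc/network/interfaces - sketch only, assumes ifenslave + bridge-utils
    # bond0 (eno1 + eno2): management, primary and secondary storage
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    # cloudbr0 carries the host's management/storage IP on top of bond0
    auto cloudbr0
    iface cloudbr0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

    # bond1 (eno3 + eno4): guest VM traffic, no host IP needed
    auto bond1
    iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-mode active-backup
        bond-miimon 100

    auto cloudbr1
    iface cloudbr1 inet manual
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0

Active-backup keeps the switch side simple; if your switches support it you 
could run LACP (802.3ad) on the bonds instead.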

Option 2) would leave you with no redundancy for management and storage (bad), 
and would only make sense if you had guest VMs with high network IO. Even if 
you had a lot of secondary storage traffic I would advise against it. If you 
absolutely wanted to run secondary storage traffic separately, I would run a 
bond for management and primary storage and a NIC each for secondary and guest 
traffic - but I would still say 1) is the better option.
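
Whichever layout you go with, you then point CloudStack at the right bridges - 
for KVM the traffic labels on the zone's physical network are just the bridge 
names (cloudbr0 / cloudbr1 in the sketch above), and on the host the agent 
config can pin the devices, something along these lines (again illustrative 
only, check the docs for your version):

    # /etc/cloudstack/agent/agent.properties - illustrative values only
    # management + storage bridge
    private.network.device=cloudbr0
    public.network.device=cloudbr0
    # guest VM traffic bridge
    guest.network.device=cloudbr1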

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue
 

On 18/03/2019, 19:02, "Jon Marshall"  wrote:


I have 4 1Gbps NICs in each compute node and was considering 2 deployment 
options (Advanced networking with Security Groups) -

1)  2 NICs bonded together and used for all storage and management traffic, and 
the other 2 NICs bonded together and used for guest VM traffic.

2)  1 NIC for management and primary storage, 1 NIC for secondary storage, 
and the remaining 2 NICs bonded together for guest VM traffic.

Option 1 would give more redundancy, but is there any benefit to separating 
storage that would outweigh this?

Or is there a better option I have overlooked?

Any advice much appreciated





dag.sonst...@shapeblue.com
www.shapeblue.com
Amadeus House, Floral Street, London WC2E 9DP, UK
@shapeblue



Best use of server NICs.

2019-03-18 Thread Jon Marshall

I have 4 1Gbps NICs in each compute node and was considering 2 deployment 
options (Advanced networking with Security Groups) -

1)  2 NICs bonded together and used for all storage and management traffic, and 
the other 2 NICs bonded together and used for guest VM traffic.

2)  1 NIC for management and primary storage, 1 NIC for secondary storage, and 
the remaining 2 NICs bonded together for guest VM traffic.

Option 1 would give more redundancy, but is there any benefit to separating 
storage that would outweigh this?

Or is there a better option I have overlooked?

Any advice much appreciated