On 31 Oct 2013, at 04:34, Marcus Sorensen <shadow...@gmail.com> wrote:

> Here's a scenario... Let's say we currently have a zone deployed with VLAN
> isolation. There's nothing stopping us from running VXLAN isolation on
> the same hosts, even on the same physical interface. This is actually done
> in devcloud-kvm, where public and mgmt traffic are selected by traffic label
> (which are tagged VLANs), and guest traffic is handled by VXLAN.
> 
> If one wanted to deprecate VLAN in favor of VXLAN without a big zone
> shuffle, it would be simple if the isolation method were a property of the
> network offering. I imagine other isolation offerings could also be deployed
> in tandem, even if it required multiple physical interfaces in the host.

Currently the system we have in place is indeed capable of running different 
isolation types on a host; it uses the physical network concept in CloudStack 
to determine which isolation type to use. This is mainly because the physical 
network also defines which tag or network interface label to use on the 
hypervisors.
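
Roughly, that coupling looks something like this (a minimal sketch with 
made-up class names, not the actual orchestration code, just to illustrate 
the idea):

import java.util.EnumMap;
import java.util.Map;

// A simplified, hypothetical model of how a physical network ties the
// traffic label on the hypervisor to an isolation type.
public class PhysicalNetworkSketch {

    enum TrafficType { MANAGEMENT, PUBLIC, GUEST, STORAGE }
    enum IsolationMethod { NONE, VLAN, VXLAN, STT, GRE }

    private final String name;
    // e.g. GUEST -> "cloudbr0"; the label names the bridge/NIC on the host
    private final Map<TrafficType, String> trafficLabels = new EnumMap<>(TrafficType.class);
    private final IsolationMethod isolationMethod;

    PhysicalNetworkSketch(String name, IsolationMethod isolationMethod) {
        this.name = name;
        this.isolationMethod = isolationMethod;
    }

    void setTrafficLabel(TrafficType type, String label) {
        trafficLabels.put(type, label);
    }

    // The hypervisor side resolves both the bridge and the isolation scheme
    // from the physical network that the guest network belongs to.
    String bridgeFor(TrafficType type) {
        return trafficLabels.get(type);
    }

    IsolationMethod isolation() {
        return isolationMethod;
    }

    String name() {
        return name;
    }
}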

I understand that it might be possible to combine different isolation types 
on a physical network, as in the case you describe with VLAN and VXLAN. 
However, in other cases it should not happen, for example when Nicira NVP is 
used, which takes exclusive control of a traffic label or bridge on the 
hypervisor. As far as I understood the Contrail implementation, it actually 
takes control of the entire networking of a hypervisor, so it is exclusive 
with any other isolation provider.
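
If mixing were allowed more broadly, that exclusivity would have to be 
enforced somewhere. Something along these lines, where the provider names 
and the check itself are only an assumption about where such a guard could 
live:

import java.util.EnumSet;
import java.util.Set;

// Hypothetical guard, purely illustrative: a provider that claims exclusive
// control of the bridge (or the whole host) cannot share a physical network.
public class IsolationExclusivitySketch {

    enum Provider { VLAN, VXLAN, NICIRA_NVP, CONTRAIL }

    // Assumption for this sketch: these providers demand sole use of the
    // traffic label / bridge (Contrail: the entire host networking).
    private static final Set<Provider> EXCLUSIVE =
            EnumSet.of(Provider.NICIRA_NVP, Provider.CONTRAIL);

    static void validate(Set<Provider> providersOnPhysicalNetwork) {
        boolean hasExclusive = false;
        for (Provider p : providersOnPhysicalNetwork) {
            if (EXCLUSIVE.contains(p)) {
                hasExclusive = true;
            }
        }
        if (hasExclusive && providersOnPhysicalNetwork.size() > 1) {
            throw new IllegalArgumentException(
                    "An exclusive isolation provider cannot share a physical "
                            + "network / traffic label with other providers");
        }
    }
}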

Within the database and (most of) the code it is already possible to set 
multiple isolation types on a physical network. That gives the admin the 
ability to link a certain physical interface/traffic label or bridge to a set 
of isolation types, for example cloudbr0 with NONE, VLAN and VXLAN, and 
cloudbr1 with STT. The offering system can then be used to indicate which 
isolation provider to use for each particular virtual network.
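
To make that concrete, here is a rough sketch of the lookup the offering 
system could drive; class and method names are made up for illustration and 
are not the real orchestration code:

import java.util.EnumSet;
import java.util.List;
import java.util.Set;

// Illustrative only: pick a physical network whose advertised isolation
// methods contain the method requested by the network offering.
public class IsolationSelectionSketch {

    enum IsolationMethod { NONE, VLAN, VXLAN, STT }

    static class PhysicalNetwork {
        final String guestTrafficLabel;              // bridge on the host, e.g. cloudbr0
        final Set<IsolationMethod> isolationMethods; // what this bridge can carry

        PhysicalNetwork(String label, Set<IsolationMethod> methods) {
            this.guestTrafficLabel = label;
            this.isolationMethods = methods;
        }
    }

    // Return the first physical network that supports the requested method,
    // or null if no physical network can carry such a virtual network.
    static PhysicalNetwork pick(List<PhysicalNetwork> networks, IsolationMethod wanted) {
        for (PhysicalNetwork pn : networks) {
            if (pn.isolationMethods.contains(wanted)) {
                return pn;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // cloudbr0 carries untagged, VLAN and VXLAN guest networks; cloudbr1 is STT only.
        List<PhysicalNetwork> pns = List.of(
                new PhysicalNetwork("cloudbr0",
                        EnumSet.of(IsolationMethod.NONE, IsolationMethod.VLAN, IsolationMethod.VXLAN)),
                new PhysicalNetwork("cloudbr1", EnumSet.of(IsolationMethod.STT)));

        PhysicalNetwork choice = pick(pns, IsolationMethod.VXLAN);
        System.out.println("A VXLAN offering lands on " + choice.guestTrafficLabel);
    }
}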

Cheers,

Hugo


> On Wed, Oct 30, 2013 at 11:36 AM, Pedro Roque Marques
> <pedro.r.marq...@gmail.com> wrote:
>> 
>> Why? That seems to me the most obvious way to do it.
>> There are different networking solutions: e.g. VLANs and overlays such as
>> OpenContrail that assume an L3 switching topology. For a deployment one
>> would tend to choose a solution associated with the physical network.
> 
> Sure, that would be simple. The problem is that Zone also implies other
> things, secondary/primary storage for example. Zone puts other limitations
> on things. So by tying networking to a zone you can get into the situation
> where somebody has a CloudStack installation and they want to add
> OpenContrail. If we tie it to the zone, that may mean that in order to use
> OpenContrail they need to get additional NFS secondary storage, primary
> storage, compute nodes, etc., and then copy all the templates from one
> zone to another, and so on.
> 
> So currently you can't have Basic and Advanced networking in the same
> zone. I think it should be possible. Imagine I had a smaller dev/test
> cloud. Today if I want basic and advanced networking (because those are
> two distinct workloads), I have to have servers dedicated to each type:
> 10 for basic, 10 for advanced. The networking under the hood for basic
> and advanced can mix quite easily, but just because we decided to
> implement it as it is today, I'm suddenly forced to manage two pools of
> physical resources, when it would have been much easier and more cost
> effective to have one pool.
> 
> I think this is a general thing that should change with CloudStack over
> time. Zone and Pod should not be so closely tied to network concepts.
> CloudStack should be capable of being looser for people who want more
> hybrid workloads. We too easily say that a service provider is just going
> to choose one or the other, so why support mixed configurations? But there
> are more people who could be using CloudStack but are turned off by the
> strict models and concepts that CloudStack enforces. I have to say I was
> one of those people. CloudStack forced me to manage my infrastructure in a
> certain way that I didn't particularly care for.
> 
> Darren
