Your description indicates you want something like this:

       +--------------------------+
       |         Compute          |
       |                          |
       |    /--------------\      |--->
       |    |     VM1      | -----|2x10G
       |    \--------------/      |--->
       |                          |
       |    /--------------\      |--->
       |    |     VM2      | -----|2x10G
       |    \--------------/      |--->
       |                          |
       +--------------------------+
                   | 1G
                   |
                   | 1G
        +------------------------+
        |      Controller        |
        +------------------------+

The diagram assumes that the two 10G interfaces per VM are bonded. Create
two provider/external networks over the bonds and attach the VMs to them. If
there are no bonds, create four provider networks and attach each VM to two
of them. 
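
For example, with bond0/bond1 mapped to physical networks
physnet1/physnet2 (all of these names are assumptions, substitute your
own), the CLI side looks roughly like this:

    # one flat provider network per bond
    openstack network create --external \
        --provider-network-type flat \
        --provider-physical-network physnet1 provider1
    openstack subnet create --network provider1 \
        --subnet-range 203.0.113.0/24 --gateway 203.0.113.1 \
        provider1-subnet

    # repeat for provider2 over physnet2, then attach a VM to both
    openstack server create --image <image> --flavor <flavor> \
        --network provider1 --network provider2 vm1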

You can find examples of deploying provider networks in the
installation tutorials
(<https://docs.openstack.org/neutron/latest/install/install-ubuntu.html>,
using the LinuxBridge mechanism driver) and in the Networking Guide
(<https://docs.openstack.org/neutron/latest/admin/deploy.html>, using
either LinuxBridge or Open vSwitch). I can't tell how hard it is to
translate this to ODL.
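
The part that ties those networks to the bonds is the physical network
mapping in the agent configuration. A minimal sketch for the
LinuxBridge driver (file paths as in the guides above; bond and
physnet names are again my assumptions):

    # /etc/neutron/plugins/ml2/ml2_conf.ini (excerpt)
    [ml2_type_flat]
    flat_networks = physnet1,physnet2

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini (excerpt)
    [linux_bridge]
    physical_interface_mappings = physnet1:bond0,physnet2:bond1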

Alternatively, create two (or four) external networks plus tenant
networks routed to them, and connect the VMs to the tenant networks.
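
A rough sketch of that variant, again with placeholder names:

    # tenant network, then a router uplinked to one of the externals
    openstack network create selfservice1
    openstack subnet create --network selfservice1 \
        --subnet-range 172.16.1.0/24 selfservice1-subnet
    openstack router create router1
    openstack router set --external-gateway provider1 router1
    openstack router add subnet router1 selfservice1-subnet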

I suppose this is also possible with Packstack, but like DevStack,
Packstack's goal is to create simple setups without fuss, so you might
be stretching it a little.
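
If you do try it, IIRC the provider wiring goes through Packstack's
answer file; something along these lines (the answer-file keys are
from memory, bridge and interface names are assumptions):

    # packstack answer file (excerpt), one OVS bridge per bond
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-bond0,physnet2:br-bond1
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-bond0:bond0,br-bond1:bond1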

----------------------------------------------------------------------------
From: d.l...@surrey.ac.uk
Sent: Friday, December 22, 2017 2:11 AM
To: dtro...@gmail.com
Cc: netvirt-...@lists.opendaylight.org; openstack@lists.openstack.org
Subject: Re: [Openstack] Issues Understanding Neutron Networking Layout

Hi Dean

I guessed this was the answer but I was afraid to admit it :-)

Can you point me to any documentation to allow me to build this kind
of layout?

I've struggled to find anything with a blueprint/step-by-step
installation guide, which is why I fell back to DevStack...

David

Sent from my iPhone
________________________________________
From: Dean Troyer <dtro...@gmail.com>
Sent: Thursday, December 21, 2017 4:42:40 PM
To: Lake D Mr (PG/R - Elec Electronic Eng)
Cc: trinath.soman...@nxp.com; openstack@lists.openstack.org;
netvirt-...@lists.opendaylight.org
Subject: Re: [Openstack] Issues Understanding Neutron Networking Layout 
 
On Thu, Dec 21, 2017 at 4:23 AM,  <d.l...@surrey.ac.uk> wrote:
> Controller in one location with a single IP connection (1 GE)
> Compute node in a remote location with 4 10GE connections for public
> networking and 1GE IP connection to the Controller
>
> The VMs on the Compute node will each have 2 10GE connections as they will
> be forwarding data.
>
> I have successfully deployed a Pike system with ODL with the previously
> attached local.conf.

I'll be blunt here: DevStack is absolutely the wrong tool for this
job.  The fact that you got this far with it is admirable, but it was
never intended to support that sort of custom installation, hence the
difficulties you are experiencing.  There are a number of assumptions
in DevStack that are contrary to your setup, including around the
network configuration and how multi-node DevStack is built.

Unless you really need the services built from source you will have a
much better time down the road installing from packages of one form or
another, if not just using something like Packstack.  Sorting out the
network configuration should be easier as you will not need to
translate between DevStack's variables and the documented Neutron
configs.

dt

-- 
Dean Troyer
dtro...@gmail.com


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
