v1->v2:
  - The first path I took with this tried to model a provider network
    as an OVN logical switch.  This patch takes a different approach
    suggested by Ben Pfaff where each connection to a provider network
    is modeled as a 2-port OVN logical switch.  More details below.
  - This series is still RFC because the only "testing" I have done
    so far is just with ovs-sandbox.  I'd also like to actually make
    this work with Neutron in a real test environment.  I also expect
    parts of this to conflict with Ben's work on tunnel IDs since this
    series modifies the Bindings table.  I can rebase on that once it's
    ready.

Russell Bryant (11):
  ovn: Convert tabs to spaces in ovn-sb.xml.
  ovn: Add bridge mappings to ovn-controller.
  ovn: Drop unnecessary br_int local variable.
  ovn: Add patch ports for ovn bridge mappings.
  ovn: Set up some bridge mappings in ovs-sandbox.
  ovn-northd: Make column comparisons more generic.
  lib: Add smap_equal().
  ovn: Add type and options to logical port.
  ovn: Get/set lport type and options in ovn-nbctl.
  ovn: Fix uninit access warning from valgrind.
  ovn: Add "localnet" logical port type.

 lib/smap.c                      |  34 ++++++++
 lib/smap.h                      |   2 +
 ovn/controller/ofctrl.c         |   2 +-
 ovn/controller/ovn-controller.c | 173 +++++++++++++++++++++++++++++++++++++++-
 ovn/controller/ovn-controller.h |   7 ++
 ovn/controller/physical.c       | 123 +++++++++++++++++++++-------
 ovn/northd/ovn-northd.c         |  42 +++++++---
 ovn/ovn-nb.ovsschema            |   6 ++
 ovn/ovn-nb.xml                  |  26 ++++++
 ovn/ovn-nbctl.8.xml             |  24 +++++-
 ovn/ovn-nbctl.c                 | 111 ++++++++++++++++++++++++++
 ovn/ovn-sb.ovsschema            |   6 ++
 ovn/ovn-sb.xml                  | 107 ++++++++++++++++---------
 tutorial/ovs-sandbox            |   4 +
 14 files changed, 581 insertions(+), 86 deletions(-)

OpenStack Neutron has an API extension called "provider networks" which
allows an administrator to specify that ports should be attached
directly to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of its compute
resources connected to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

Let's start with how this would be used from Neutron and then work down
through OVN to show how it is implemented.

Imagine an environment where every hypervisor has a NIC attached to the
same physical network that you would like all of your VMs connected to.
We'll refer to this physical network as "physnet1".  Let's also assume
that the interface to "physnet1" is eth1 on every hypervisor.  You would
need to first create an OVS bridge and add eth1 to it by doing something
like:

  $ ovs-vsctl add-br br-eth1
  $ ovs-vsctl add-port br-eth1 eth1

Now you must also configure ovn-controller to tell it that it can get
traffic to "physnet1" by sending it to the bridge "br-eth1".

  $ ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

When ovn-controller starts up, it parses the bridge mappings and
automatically creates patch ports between the OVN integration bridge and
the bridges specified in bridge mappings.
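As a rough sketch of the parsing involved (this is bash illustrating the
behavior, not the actual ovn-controller C code; the patch port naming
follows the convention visible in the ovs-vsctl output later in this
mail):

```shell
# Toy model of ovn-controller's bridge-mappings handling: split the
# "network:bridge[,network:bridge...]" string and report the patch port
# pair that would connect br-int to each mapped bridge.
mappings="physnet1:br-eth1"
IFS=',' read -ra pairs <<< "$mappings"
for pair in "${pairs[@]}"; do
    network="${pair%%:*}"   # physical network name, e.g. physnet1
    bridge="${pair#*:}"     # local OVS bridge, e.g. br-eth1
    echo "$network: patch-br-int-to-$bridge <-> patch-$bridge-to-br-int"
done
```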

Now that ovn-controller on every hypervisor understands what "physnet1"
is, you can create this network in Neutron.  The following command
defines a network in Neutron called "provnet1" which is implemented as
connecting to a physical network called "physnet1".  The type is set to
"flat" meaning that the traffic is not tagged.

  $ neutron net-create provnet1 --shared \
  > --provider:physical_network physnet1 \
  > --provider:network_type flat

(Note that the Neutron API supports specifying a VLAN tag here.  That is
not yet supported in this patch series, but will be added in a later
patch.)

At this point an OpenStack user can start creating Neutron ports for VMs
to be attached to this network.

  $ neutron port-create provnet1

When the Neutron network is defined, nothing is actually created in
OVN_Northbound.  Instead, every time a Neutron port is created on this
Neutron provider network, this connection is modeled as a 2-port OVN
logical switch.

At this point, we can model what would happen by using ovn-nbctl.
Consider the following script, which sets up what Neutron would create
for 2 Neutron ports connected to the same Neutron provider network.

  for n in 1 2 ; do
      ovn-nbctl lswitch-add provnet1-$n

      ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
      ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
      ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n
      ovs-vsctl add-port br-int lport$n -- set Interface lport$n \
          external_ids:iface-id=provnet1-$n-port1

      ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
      ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
      ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
      ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
  done

This creates 2 OVN logical switches.  One port on each logical switch is
a "normal" port to be used by a VM or container.  The other is a special
type of port which represents the connection to the provider network.
The special port has a type of "localnet" and a type-specific option
called "network_name" which maps to the value we put in
"ovn-bridge-mappings".

When ovn-northd processes this, the logical pipeline is no different
than it would be for 2 "normal" logical ports on a logical switch.  As a
result, the OpenFlow flows that implement the logical pipeline also
remain unchanged.

ovn-northd copies the "type" and "options" columns from the logical port
in OVN_Northbound to the Binding table in OVN_Southbound.  With that
information, ovn-controller can wire things up appropriately.
Specifically, the changes are in ovn-controller's code that does the
logical to physical mappings and creates the associated OpenFlow flows.
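The copy itself is simple.  Here is a toy bash model of it (the real
work happens in ovn-northd's C code; comparing the columns before
writing is how I'd expect the new smap_equal() helper from this series
to be used, to avoid needless database writes):

```shell
# Toy model (bash, not the actual ovn-northd C code): copy the "type"
# and "options" columns from a northbound Logical_Port row into the
# southbound Binding row for the same logical port.
declare -A nb_lport=([type]="localnet" [options]="network_name=physnet1")
declare -A sb_binding=()
for col in type options; do
    # Only write when the value actually differs.
    if [ "${sb_binding[$col]}" != "${nb_lport[$col]}" ]; then
        sb_binding[$col]="${nb_lport[$col]}"
    fi
done
echo "Binding: type=${sb_binding[type]} options=${sb_binding[options]}"
```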

Here is the final state of the system using ovs-sandbox using this
example.  First, here's a list of the bridges and ports:

  $ ovs-vsctl show
  1500b021-0bd4-447c-b79f-4ca91a982c46
      Bridge "br-eth1"
          Port "patch-br-eth1-to-br-int"
              Interface "patch-br-eth1-to-br-int"
                  type: patch
                  options: {peer=br-int}
          Port "br-eth1"
              Interface "br-eth1"
                  type: internal
      Bridge br-int
          fail_mode: secure
          Port "lport1"
              Interface "lport1"
          Port "lport2"
              Interface "lport2"
          Port br-int
              Interface br-int
                  type: internal
          Port "patch-br-int-to-br-eth1"
              Interface "patch-br-int-to-br-eth1"
                  type: patch
                  options: {peer="br-eth1"}

Before showing the flows, here are the OpenFlow port numbers for the
ports on br-int:

  patch-br-int-to-br-eth1 -- 1
  lport1 -- 2
  lport2 -- 3

Finally, here are the flows (with unimportant pieces stripped) related
to physical-to-logical and logical-to-physical translation:

 table=0, priority=100,in_port=2 actions=set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
 table=0, priority=100,in_port=3 actions=set_field:0x2->metadata,set_field:0x3->reg6,resubmit(,16)
 table=0, priority=100,in_port=1 actions=set_field:0x2->metadata,set_field:0x4->reg6,resubmit(,16),set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
 table=0, priority=50,tun_id=0x1 actions=output:2
 table=0, priority=50,tun_id=0x3 actions=output:3
 ...
 table=64, priority=100,reg6=0x1,reg7=0x1 actions=drop
 table=64, priority=100,reg6=0x2,reg7=0x2 actions=drop
 table=64, priority=100,reg6=0x3,reg7=0x3 actions=drop
 table=64, priority=100,reg6=0x4,reg7=0x4 actions=drop
 table=64, priority=50,reg7=0x1 actions=output:2
 table=64, priority=50,reg7=0x2 actions=output:1
 table=64, priority=50,reg7=0x3 actions=output:3
 table=64, priority=50,reg7=0x4 actions=output:1
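To make the register usage easier to follow, here is a small bash decode
of the table 64 flows above (this is just restating what the flows do,
not part of OVN): metadata carries the logical datapath, reg6 the
logical input port, and reg7 the logical output port.

```shell
# Decode of table 64: reg7 (the logical output port) selects the
# OpenFlow output port.  reg7 0x1/0x3 are the two VM ports (OpenFlow
# ports 2 and 3); reg7 0x2/0x4 are the two localnet ports, both of
# which map to the single patch port to br-eth1 (OpenFlow port 1).
declare -A out_port=([0x1]=2 [0x2]=1 [0x3]=3 [0x4]=1)
for reg7 in 0x1 0x2 0x3 0x4; do
    echo "reg7=$reg7 -> output:${out_port[$reg7]}"
done
```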

Some parting thoughts ...

I got this far and realized I missed something pretty important.  This
works great when there's only a single hypervisor.  When you have
multiple, every hypervisor where the normal port does *not* reside will
still set up the flows for forwarding packets over tunnels which will
be quite problematic.  I'll have to think about how to address this.
One option could be a rule like "if a packet came in on a localnet
port, never send it out over a tunnel".  I'm not sure yet and certainly
open to suggestions.

It's also interesting to think about how the "localnet" logical port
type could be (ab)used outside of Neutron's particular use case here.
For example, could you have several normal logical ports on a logical
switch along with a "localnet" port?  I think that might actually work
fine with traffic passing over tunnels between the logical ports and
only in/out of the "localnet" when necessary.  I need to think through
cases like this more thoroughly and ensure the parameters around
using "localnet" ports are adequately documented, and perhaps even
enforced in code.

Thanks a bunch for reading!

-- 
2.4.3
