Specifically, it would be LISPmob:

http://lispmob.org/

LISPmob is an implementation of LISP on Linux. It would provide a fabric 
similar to what AWS uses (although it predates AWS’s implementation) in that it 
provides a massively scalable encap/decap mechanism with a central (but 
distributed) look-up. LISP uses a mapping server for that look-up, and it has 
been integrated into OpenDaylight; ODL would “SDNize” LISP and provide a way to 
enforce policy. There is also the possibility of leveraging the existing ODL 
Neutron plug-in. The LISP header is very similar to VXLAN, with a draft RFC 
(draft-lewis-lisp-gpe-02) to bring it in line with GPE, giving it a standard 
header that could be used across merchant silicon, virtual switching, and the 
OS.

The first cut of integrating LISP into OpenStack uses ODL and OVS, with OVS 
doing the encap/decap and acting as the RLOC. Another option we are exploring 
is to replace OVS with LISPmob completely. This would give an encap/decap 
method that would essentially act as a distributed virtual router crossing L3 
boundaries, enabling direct host/hypervisor-to-host/hypervisor communication 
without going through an intervening router. Service insertion would be 
enabled with NSH and a few mechanisms already built into LISP. Since LISP has 
been around for a while and is operating in several production networks, it 
has the benefit of already addressing many of the corner cases (e.g. NAT 
traversal, IP mobility, scalability); it’s just a matter of integration around 
the edges with ODL and OpenStack. I am not sure it meets all of the use cases 
stated below, but LISP’s VM-mobility functionality could cover those 
requirements. I am sure the devil is in the details, but we’d love an 
explorative conversation on the matter.
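
To make the mapping-server idea concrete, here is a toy sketch in Python 
(purely illustrative; MAP_SERVER, lookup_rloc, and encap are names I made up, 
not lispmob's API) of the EID-to-RLOC resolution and encap step an ITR 
performs:

# Toy model of the LISP data plane: resolve the destination EID to the
# RLOC that currently fronts it, then encapsulate toward that RLOC.
MAP_SERVER = {
    # EID (the VM's address)  ->  RLOC (its current hypervisor's locator)
    "10.1.0.5": "192.0.2.10",
    "10.1.0.6": "198.51.100.7",
}

def lookup_rloc(eid):
    """Stand-in for a Map-Request/Map-Reply exchange with the mapping system."""
    return MAP_SERVER[eid]

def encap(inner_packet, eid_dst):
    """Wrap the original packet for delivery to the RLOC."""
    rloc = lookup_rloc(eid_dst)
    outer = b"LISP-HDR" + inner_packet  # VXLAN-like header, cf. draft-lewis-lisp-gpe
    return rloc, outer

The point is that when a VM moves, only the EID-to-RLOC mapping changes; the 
EID itself (the VM's address) stays put, which is what makes the VM-mobility 
story work.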

Thanks,

Steven.

From: "Britt Houser (bhouser)" <bhou...@cisco.com<mailto:bhou...@cisco.com>>
Date: Thursday, December 4, 2014 at 12:47 PM
To: Ryan Clevenger 
<ryan.cleven...@rackspace.com<mailto:ryan.cleven...@rackspace.com>>, 
"openstack-operators@lists.openstack.org<mailto:openstack-operators@lists.openstack.org>"
 
<openstack-operators@lists.openstack.org<mailto:openstack-operators@lists.openstack.org>>
Subject: Re: [Openstack-operators] [openstack-operators] [large deployments] 
[neutron[ [rfc] Floating IP idea solicitation and collaboration

I think LISP could probably be another way to skin the cat:

http://lisp.cisco.com/lisp_over.html

Has anyone explored using LISP with Neutron?

Thx,
britt

From: Ryan Clevenger <ryan.cleven...@rackspace.com>
Date: Thursday, December 4, 2014 at 10:35 AM
To: "openstack-operators@lists.openstack.org" 
<openstack-operators@lists.openstack.org>
Subject: [Openstack-operators] [openstack-operators] [large deployments] 
[neutron] [rfc] Floating IP idea solicitation and collaboration

Cross-post from the dev list, but I also wanted to get any feedback or 
comments you all have.

Hi,

At Rackspace, we have a need to create a higher-level networking service, 
primarily for the purpose of creating a Floating IP solution in our 
environment. The current solutions for Floating IPs, being tied to plugin 
implementations, do not meet our needs at scale for the following reasons:

1. Limited endpoint HA, mainly targeting failover only rather than 
multi-active endpoints;
2. Lack of noisy-neighbor and DDoS mitigation;
3. IP fragmentation (with cells, public connectivity is terminated inside each 
cell, leading to fragmentation and IP stranding when cell CPU/memory use 
doesn't line up with allocated IP blocks; abstracting public connectivity away 
from Nova installations allows for much more efficient use of those precious 
IPv4 blocks);
4. No support for diversity in transit (multiple encapsulation and transit 
types on a per-floating-IP basis).

We realize that network infrastructures are often unique, and such a solution 
would likely diverge from provider to provider. However, we would love to 
collaborate with the community to see if such a project could be built that 
would meet the needs of providers at scale. We believe that, at its core, this 
solution boils down to terminating north<->south traffic temporarily at a 
massively horizontally scalable, centralized core and then encapsulating it 
east<->west to a specific host based on the association set up via the current 
L3 router extension's 'floatingips' resource.

Our current idea involves using Open vSwitch for header rewriting and tunnel 
encapsulation, combined with a set of Ryu applications for management:

https://i.imgur.com/bivSdcC.png

The Ryu application uses Ryu's BGP support to announce individual floating IPs 
(/32's or /128's) up to the Public Routing layer, which are then summarized 
and announced to the rest of the datacenter. If a particular floating IP is 
experiencing unusually large traffic (DDoS, slashdot effect, etc.), the Ryu 
application could change the announcements up to the Public layer to shift 
that traffic to dedicated hosts set up for that purpose. It also announces a 
single /32 "Tunnel Endpoint" IP downstream to the TunnelNet Routing system, 
which provides transit to and from the cells and their hypervisors. Since 
traffic from either direction can then end up on any of the FLIP hosts, a 
simple flow table that modifies the MAC and IP in either the SRC or DST fields 
(depending on traffic direction) allows the system to be completely stateless. 
We have proven this out (with static routing and flows) to work reliably in a 
small lab setup.
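
For the announcement side, here is a minimal sketch using Ryu's BGP speaker 
API (the AS numbers, addresses, and the /32 are placeholders) of how a FLIP 
host might advertise and withdraw a floating IP:

from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

# Peer with the Public Routing layer and announce one floating IP as a /32
# with this FLIP host as the next hop.
speaker = BGPSpeaker(as_number=64512, router_id="10.0.0.1")
speaker.neighbor_add(address="10.0.0.254", remote_as=64512)
speaker.prefix_add(prefix="203.0.113.5/32", next_hop="10.0.0.1")

# Under attack, withdraw here and re-announce the same /32 from the
# dedicated DDoS hosts, pulling just that one IP's traffic away.
speaker.prefix_del(prefix="203.0.113.5/32")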

On the hypervisor side, we currently plumb networks into separate OVS bridges. 
Another Ryu application would control the bridge that handles overlay 
networking, selectively diverting traffic destined for the default gateway up 
to the FLIP NAT systems while taking into account any configured logical 
routing, and letting local L2 traffic pass out into the existing overlay 
fabric undisturbed.
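
As a rough sketch, the divert rule could look something like this as a Ryu 
OpenFlow 1.3 app (the MACs and the output port are placeholders; the real 
application would derive them from the configured logical topology):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlipDivert(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_divert(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # IPv4 traffic a VM sends to its default gateway MAC: rewrite the
        # destination MAC to the FLIP layer's next hop and push it up the
        # tunnel port; everything else falls through to the normal overlay.
        match = parser.OFPMatch(eth_type=0x0800, eth_dst="fa:16:3e:00:00:01")
        actions = [
            parser.OFPActionSetField(eth_dst="fa:16:3e:ff:ff:01"),
            parser.OFPActionOutput(1),  # tunnel port toward TunnelNet
        ]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))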

Adding support for L2VPN EVPN 
(https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN Overlay 
(https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the Ryu BGP 
speaker will allow the hypervisor-side Ryu application to advertise 
reachability information up to the FLIP system, taking into account VM 
failover, live-migration, and supported encapsulation types. We believe that 
decoupling tunnel-endpoint discovery from the control plane (Nova/Neutron) 
will provide a more robust solution, as well as allow for use outside of 
OpenStack if desired.
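
A sketch of what that hypervisor-side advertisement could look like, assuming 
the BGP speaker grows an EVPN API along the lines of Ryu's later 
evpn_prefix_add (route distinguisher, VNI, MAC, and addresses are 
placeholders):

from ryu.services.protocols.bgp.bgpspeaker import (
    BGPSpeaker, EVPN_MAC_IP_ADV_ROUTE, TUNNEL_TYPE_VXLAN)

speaker = BGPSpeaker(as_number=64512, router_id="10.2.0.1")
speaker.neighbor_add(address="10.0.0.1", remote_as=64512, enable_evpn=True)

# Advertise one VM's MAC/IP with this hypervisor's VTEP as next hop and
# VXLAN as the encapsulation, so the FLIP layer learns where to tunnel
# without consulting Nova/Neutron.
speaker.evpn_prefix_add(
    route_type=EVPN_MAC_IP_ADV_ROUTE,
    route_dist="64512:100",
    esi=0,
    ethernet_tag_id=0,
    mac_addr="fa:16:3e:12:34:56",  # the VM's MAC
    ip_addr="203.0.113.5",         # its floating IP
    vni=1000,
    next_hop="10.2.0.1",           # this hypervisor's VTEP
    tunnel_type=TUNNEL_TYPE_VXLAN,
)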


________________________________________

Ryan Clevenger
Manager, Cloud Engineering - US
m: 678.548.7261
e: ryan.cleven...@rackspace.com