Hi Yi,

Thanks for replying. The reason I am thinking of the service plane as an overlay inside a tenant network is the way the ETSI NFV standards have defined VNFFGs (VNF Forwarding Graphs). Please refer to the following ETSI document, which covers the NFV-MANO virtualization constructs and their modeling:

http://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_nfv-man001v010101p.pdf

Section 6.4 covers the virtual-link and connection-point constructs, which essentially map to a tenant network and Neutron ports, respectively, in the OpenStack context. A single VNF can be part of multiple tenant networks via multiple ports. Also refer to section 6.5, which describes the VNFFG construct and its model. SFC is clearly the right technology with which to implement/orchestrate VNFFGs, but it is not clear to me whether the proposed SFC implementation has all the knobs required to support the ETSI VNFFG model; a rough sketch of the mapping I have in mind is below.
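To make that mapping concrete, here is a minimal, hypothetical sketch; every class and field name is illustrative, taken neither from the ETSI information model nor from any OpenStack API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative mapping of ETSI NFV-MANO constructs (GS NFV-MAN 001,
# sections 6.4/6.5) onto OpenStack Neutron objects. All names here are
# hypothetical; this is not an actual ETSI or Neutron schema.

@dataclass
class ConnectionPoint:          # ETSI connection point -> Neutron port
    neutron_port_id: str
    in_forwarding_graph: bool   # ports used only for mgmt stay False

@dataclass
class VirtualLink:              # ETSI virtual link -> Neutron tenant network
    neutron_network_id: str
    connection_points: List[ConnectionPoint] = field(default_factory=list)

@dataclass
class VNF:                      # a VNF (and hence an SF) may attach to
    name: str                   # several virtual links via several ports
    connection_points: List[ConnectionPoint] = field(default_factory=list)

@dataclass
class VNFFG:                    # ETSI VNF forwarding graph -> one SFC chain
    name: str
    virtual_links: List[VirtualLink] = field(default_factory=list)
    # ordered (vnf name, connection point) hops forming the service chain
    path: List[Tuple[str, ConnectionPoint]] = field(default_factory=list)
```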
Since you are driving the SFC implementation in OVS/ODL to maturity, I wanted to point out the ETSI-MANO requirements in case you haven't considered them already. From this document, I make the following observations:

* An SF is just another VNF orchestrated as part of a network service.
* A VNF (and hence an SF) can be part of multiple virtual links (tenant networks).
* An SF/VNF may be part of multiple VNFFGs (and hence multiple SFCs) via different Neutron ports (see the diagrams in section 6.5 of the document).
* An SF/VNF may have one or more ports which are not part of any VNFFG/SFC chain and may be used for configuration/management of the SF/VNF.

It would be great if the proposed SFC solution could satisfy all of the above requirements and hence could be used to orchestrate an ETSI VNFFG.

— Thanks, Aniruddha

On 3/29/16, 10:32 PM, "Yang, Yi Y" <[email protected]> wrote:

Replies inline, please check.

From: Aniruddha Atale [mailto:[email protected]]
Sent: Wednesday, March 30, 2016 12:47 AM
To: Yang, Yi Y <[email protected]>; opendaylight sfc <[email protected]>
Cc: [email protected]; [email protected]; [email protected]
Subject: Re: [ovsdb-dev] My presentation "Fix VxLAN issue in SFC integration by using Eth+NSH and VxLAN-gpe+NSH Hybrid Mode" in ONS 2016

Hi Yi,

Thanks for the response. Let me provide some background on the question. Each port of a VM in OpenStack belongs to an OpenStack tenant network, and each OpenStack installation has a configured encapsulation method (VXLAN/GRE/VLAN/flat) for tenant networks. In your slide #7, all the VMs (VM1, VM2, SF1, SF2) have their ports on a single tenant network which uses VXLAN encapsulation. Now consider non-SFC/non-NSH traffic between SF1 and SF2 (let's say a simple ping). This will go via the VXLAN tunnel between host1 and host2, following the normal OpenStack Neutron networking path. But SFC/NSH traffic will follow a different VXLAN-gpe encapsulation.

[Yi Yang]: That is OK. SFC only cares about SFC/NSH traffic; OVSDB takes care of general traffic, and the OpenFlow tables can distinguish them, so there is no problem: SFC/NSH traffic will go out the VxLAN-gpe port, and non-SFC traffic will go out the VxLAN port.

Architecturally speaking, SFC should create an overlay service plane on top of an underlay physical/virtual network. In this case (from the VM's perspective) the underlay network is the Neutron tenant network to which all the ports belong. However, NSH traffic (when going across hosts) does not go through the existing tenant network; instead it orchestrates its own virtual overlay on top of the physical network, which is different from the Neutron tenant overlay.

[Yi Yang]: Do you mean the service plane uses another physical network? That is OK. Actually, SF1 and SF2 are service VMs, so they have a service network, not a tenant network. The two are the same in my demo, but they can be different. I want to set that up in OpenStack + OVSDB + SFC to verify it, but I'm not sure OVSDB can route the traffic from the tenant network to the service network.

Now consider a multi-tenant environment where a single host has VMs from different tenants, all or some of which are orchestrating SFCs. Since a single VXLAN-gpe tunnel between the hosts will carry traffic from all these tenants, it may lead to undesirable cases of IP/MAC conflicts as well as security concerns.

[Yi Yang]: Do you mean VM IP/MAC conflicts? Why? Different tenant networks will mean different subnetworks, so conflicts will be impossible.
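As an illustration of why per-tenant separation matters on a shared tunnel, here is a minimal, hypothetical sketch (not OVS or ODL code) of a forwarding table keyed on (VNI, MAC): with a distinct VNI per tenant, identical guest addresses in different tenants cannot collide, whereas a table keyed on MAC alone could.

```python
# Hypothetical illustration: a forwarding table keyed on (vni, mac)
# keeps overlapping tenant address spaces separate on one shared tunnel.
# Nothing here is OVS/ODL code; it only demonstrates the isolation idea.

fdb = {}  # (vni, mac) -> output port

def learn(vni: int, mac: str, port: str) -> None:
    fdb[(vni, mac)] = port

def lookup(vni: int, mac: str) -> str:
    return fdb.get((vni, mac), "flood")

# Two tenants reuse the same guest MAC; distinct VNIs prevent any conflict.
learn(vni=5001, mac="fa:16:3e:00:00:01", port="vxlan-gpe-to-host2")
learn(vni=5002, mac="fa:16:3e:00:00:01", port="tap-local-vm3")

assert lookup(5001, "fa:16:3e:00:00:01") == "vxlan-gpe-to-host2"
assert lookup(5002, "fa:16:3e:00:00:01") == "tap-local-vm3"
```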
Consider an application-level network connectivity monitoring scheme (gossip or some such many-to-many protocol) between all the VMs of a given tenant. Such a scheme may be used in a high-availability architecture. Since NSH traffic flows via a different encapsulation method than regular tenant traffic, such monitoring will not be effective in deciding reachability. For example, suppose a TOR switch has accidentally been programmed to permit only the GRE/VXLAN protocols and drop everything else. In that case, SF1 continues to be accessible to SF2 via regular tenant networking, while from the SFC perspective SF1 and SF2 are disconnected.

[Yi Yang]: For such a case, the TOR switch must be programmed to support VxLAN-gpe.

Coming back to the existing implementation, I have a specific question on the steps given in slide #12. After a packet is received from SF1 (the SF1 OUT step), the current implementation pops the Ethernet header and then adds VXLAN-gpe (and IP/UDP/Eth) headers in subsequent steps. Instead of this, if the destination MAC were changed to that of SF2, could the packet not follow the existing tenant encapsulation scheme used for non-SFC traffic between SF1 and SF2 (VXLAN/GRE/VLAN, etc.)? What problems do you foresee?

[Yi Yang]: Again, SFC only cares about SFC traffic and OVSDB takes care of non-SFC traffic; they can coexist without any problem. The MAC change happens on Host2 instead of Host1.

— Thanks, Aniruddha

On 3/28/16, 8:35 PM, "Yang, Yi Y" <[email protected]> wrote:

Hi, Aniruddha

Thank you for your question. SFC needs NSH to deliver SFC context information, and currently VxLAN-gpe is the only available option; GRE/Geneve could also carry it, but the current OVS implementation does not support that. I don't know why an underlay IP fabric couldn't support VxLAN-gpe; VxLAN-gpe is almost the same as VxLAN, VxLAN is very popular in data centers, and OpenStack uses it as the default tunnel protocol between compute nodes. Please let me know the details if you have any user scenario which can't support this.

From: Aniruddha Atale [mailto:[email protected]]
Sent: Tuesday, March 29, 2016 1:47 AM
To: Yang, Yi Y <[email protected]>; opendaylight sfc <[email protected]>
Cc: [email protected]; [email protected]; [email protected]
Subject: Re: [ovsdb-dev] My presentation "Fix VxLAN issue in SFC integration by using Eth+NSH and VxLAN-gpe+NSH Hybrid Mode" in ONS 2016

Hi Yi,

Could you please explain the reasoning behind running the VxLAN-gpe tunnel between two hosts over the underlay networking directly, as opposed to running it over the tenant networking? It seems that while non-SFC traffic between tenant VMs (on two different hosts) would go via the operator-selected encapsulation technology (e.g. GRE/Geneve), the SFC traffic between them would use VxLAN-gpe encapsulation, which may or may not be supported by the underlay IP fabric.

— Thanks, Aniruddha
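As a purely illustrative aside on the slide #12 question above, the following sketch models the two encapsulation choices as header-stack operations; nothing here is the actual OVS pipeline, and all names and values are hypothetical:

```python
# Purely illustrative model of the host1 egress choices discussed above;
# this is not the actual OVS pipeline, and all names/values are hypothetical.

def sfc_encap(pkt_from_sf1):
    """Slide #12 behavior: pop the SF-facing Eth header, then push
    NSH + VxLAN-gpe + outer UDP/IP/Eth toward host2."""
    inner = pkt_from_sf1[1:]                      # pop "Eth(from SF1)"
    return [
        "Eth(host1->host2)",
        "IP(host1->host2)",
        "UDP(dst=4790)",                          # VxLAN-gpe UDP port
        "VXLAN-gpe(next_proto=NSH)",
        "NSH(spi, si)",                           # SFC context travels here
    ] + inner

def tenant_encap(pkt_from_sf1):
    """The proposed alternative: rewrite the destination MAC to SF2 and
    let the packet ride the normal tenant tunnel. Note there is no NSH
    header, so the SPI/SI context Yi mentions above would be lost."""
    inner = ["Eth(SF1->SF2)"] + pkt_from_sf1[1:]  # MAC rewrite only
    return [
        "Eth(host1->host2)",
        "IP(host1->host2)",
        "UDP(dst=4789)",                          # plain VXLAN UDP port
        "VXLAN(tenant VNI)",
    ] + inner

pkt = ["Eth(from SF1)", "IP", "payload"]
print(sfc_encap(pkt))
print(tenant_encap(pkt))
```

Seen this way, the question largely reduces to whether the per-hop SFC context (service path ID and index) can be carried at all without an NSH-capable encapsulation.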
On 3/24/16, 11:43 PM, "[email protected] on behalf of Yang, Yi Y" <[email protected]> wrote:

Hi, All

I gave a presentation, "Fix VxLAN issue in SFC integration by using Eth+NSH and VxLAN-gpe+NSH Hybrid Mode", at ONS 2016 to showcase our work in OVS, OpenFlow Plugin, OVSDB, GBP, and SFC. You can get my slides from http://events.linuxfoundation.org/sites/events/files/slides/Fix%20VxLAN%20issue%20in%20SFC%20integration%20by%20using%20Eth%2BNSH%20and%20VxLAN-gpe%2BNSH%20Hybrid%20Mode%20Final.pdf; https://www.youtube.com/watch?v=fu4s5MaURJQ is the video recording, and https://www.youtube.com/watch?v=3SlqKxZp9nY is the demo's HD video, which you can watch at 720p resolution. I hope I can reproduce this demo setup in the netvirt + SFC integration and use it as a test bed or landing zone for Genius or the OpenFlow Plugin Flow Programmer; our OpenFlow multi-writer issue, or app coexistence issue, will count on this setup :)
