Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
Replies inline, please check.

-----Original Message-----
From: Elzur, Uri [mailto:uri.el...@intel.com]
Sent: Thursday, June 02, 2016 9:19 AM
To: OpenStack Development Mailing List (not for usage questions); Cathy Zhang; b...@ovn.org
Cc: Jesse Gross; Jiri Benc
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Few comments below

Thx

Uri ("Oo-Ree")
C: 949-378-7568

-----Original Message-----
From: Yang, Yi Y [mailto:yi.y.y...@intel.com]
Sent: Wednesday, June 1, 2016 5:20 PM
To: Cathy Zhang; OpenStack Development Mailing List (not for usage questions); b...@ovn.org
Cc: Jesse Gross; Jiri Benc
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Also cc'ing Jiri and Jesse. I think a mandatory L3 requirement is not reasonable for a tunnel port, say VxLAN or VxLAN-gpe; its intention is L2 over L3, so the L2 header is a must-have, but the mandatory L3 requirement removes the L2 header.

[UE] pls add more context

[Yi Yang] In the current Linux kernel, a VxLAN-gpe port is an L3 port, which means a packet carrying an L2 header has its Ethernet header removed by an implicit pop_eth when it is output to such a port. But I think this is inappropriate: VxLAN-gpe can transfer L2 packets just as VxLAN does, and we can't force it to work only in L3 mode.

I also think VxLAN + Eth + NSH + original frame should be an option; at least industry has such requirements in practice. So my point is that it would be great if we can support both VxLAN-gpe + ETH + NSH + original L2 and VxLAN + ETH + NSH + original L2; this will simplify our NSH patch upstreaming efforts and speed up merging.

[UE] this "VxLAN+ETH+NSH+Original L2" can be a local packet (i.e. SFF to SF on a 'local circuit') IFF OS kernels and SFs will support it, but not sure how it can travel on the wire... what is in that added ETH header?

[Yi Yang] This ETH is from the inner L2 (original L2), but the ether_type is 0x894f.

[UE] did you mean "VxLAN-gpe+NSH+Original L2" or "VxLAN-gpe+ETH+NSH+Original L2"? The latter is not the packet on the wire.

[Yi Yang] The current OVS implementation requires that the packet from a tunnel port must be an Ethernet packet, so we have to use VxLAN-gpe + Eth + NSH + original packet; I know hardware devices only recognize "VxLAN-gpe + NSH + Original L2".

-----Original Message-----
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, June 02, 2016 2:54 AM
To: OpenStack Development Mailing List (not for usage questions); b...@ovn.org; Yang, Yi Y
Cc: Cathy Zhang
Subject: RE: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with the decapsulated VxLAN-gpe packet also involves an OVS kernel change, which is difficult. Furthermore, even if this blocking issue is resolved and OVS eventually accepts the VxLAN-gpe+NSH encapsulation, there is still another issue: current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe involves consideration of backward compatibility with existing VXLAN VTEPs and VXLAN gateways.

An alternative and maybe easier/faster path could be to push a patch for "VxLAN + Eth + NSH + original frame" into the OVS kernel module. This is also an IETF-compliant encapsulation for SFC and has neither the L3 requirement issue nor the Neutron VXLAN-gpe support issue. We can probably take this discussion to the OVS mailing alias.
Thanks, Cathy

-----Original Message-----
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +0000, Yang, Yi Y wrote:
> Ben, yes, we submitted the NSH support patch set last year, but the ovs
> community told me we have to push the kernel part into the Linux kernel tree;
> we're struggling to do this, but something has blocked us.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs made some changes in tunnel protocols which require that the
> packet decapsulated by a tunnel must be an Ethernet packet, but the Linux
> kernel (net-next) tree accepted a VxLAN-gpe patch set from Red Hat
> (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port
> must be an L3 packet, not an L2 Ethernet packet; this has blocked us from
> making progress.
>
> Simon Horman (from Netronome) has posted a series of patches to remove
> the mandatory requirement from ovs so that the packet from a tunnel can be
> any packet type, but so far we haven't seen them merged.

These are slowly working their way through OVS review, but these also have a
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard the ovs community looks forward to getting the NSH patches merged;
> it would be great if ovs folks can help progress this.

I do plan to do my part in review (but much of this is kernel review, which
I'm not really involved in anymore).
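[For readers unfamiliar with the two encapsulations being compared in this thread, here is a minimal, illustrative Python sketch, not taken from the patches under discussion. It lists the two header stacks and packs only the NSH service path header (24-bit Service Path ID plus 8-bit Service Index), since the base-header bit layout varied between IETF SFC draft revisions. The 0x894F ethertype is the one Yi Yang mentions for NSH carried directly over an Ethernet header; the SPI/SI values below are placeholders.]

    import binascii
    import struct

    # The two header stacks being debated on this thread:
    #   VxLAN-gpe | NSH | original L2 frame                        (GPE next-protocol = NSH)
    #   VxLAN     | Eth (ethertype 0x894F) | NSH | original L2 frame
    ETH_P_NSH = 0x894F  # ethertype used when NSH rides directly over an Ethernet header

    def nsh_service_path_header(spi, si):
        """Pack the NSH service path header: 24-bit Service Path ID + 8-bit Service Index."""
        return struct.pack("!I", ((spi & 0xFFFFFF) << 8) | (si & 0xFF))

    if __name__ == "__main__":
        # Placeholder SPI/SI values, just to show the packing.
        print(binascii.hexlify(nsh_service_path_header(100, 255)))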
Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
Indeed, but I saw an exceptional case: LISP is in OVS but not in the Linux kernel. For our NSH patches, the kernel part is easier than the OVS part.

-----Original Message-----
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Thursday, June 02, 2016 7:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

I'm probably the wrong person to give advice on kernel development, since I haven't been involved in it for years. I just know that it's difficult, and not always because of the code.

It's hard to support a protocol in OVS before it's supported in the kernel, since userspace without a kernel implementation is not very useful.

On Wed, Jun 01, 2016 at 09:59:12PM +0000, Elzur, Uri wrote:
> Hi Ben
>
> Any guidance you can offer will be appreciated. The process has taken a long
> time and precious cycles. How can we get to a coordinated kernel and OvS
> approach to avoid the challenges and potentially misaligned advice we got
> (per Yi Yang's mail)?
>
> Thx
>
> Uri ("Oo-Ree")
> C: 949-378-7568
>
> -----Original Message-----
> From: Ben Pfaff [mailto:b...@ovn.org]
> Sent: Tuesday, May 31, 2016 9:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
> On Wed, Jun 01, 2016 at 12:08:23AM +0000, Yang, Yi Y wrote:
> > Ben, yes, we submitted the NSH support patch set last year, but the ovs
> > community told me we have to push the kernel part into the Linux kernel
> > tree; we're struggling to do this, but something has blocked us.
>
> It's quite difficult to get patches for a new protocol into the kernel.
> You have my sympathy.
>
> > Recently, ovs made some changes in tunnel protocols which require that
> > the packet decapsulated by a tunnel must be an Ethernet packet, but the
> > Linux kernel (net-next) tree accepted a VxLAN-gpe patch set from Red Hat
> > (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe
> > port must be an L3 packet, not an L2 Ethernet packet; this has blocked
> > us from making progress.
> >
> > Simon Horman (from Netronome) has posted a series of patches to remove
> > the mandatory requirement from ovs so that the packet from a tunnel can
> > be any packet type, but so far we haven't seen them merged.
>
> These are slowly working their way through OVS review, but these also have a
> prerequisite on kernel patches, so it's not easy to get them in either.
>
> > I heard the ovs community looks forward to getting the NSH patches merged;
> > it would be great if ovs folks can help progress this.
>
> I do plan to do my part in review (but much of this is kernel review, which
> I'm not really involved in anymore).
Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
Also cc'ing Jiri and Jesse. I think a mandatory L3 requirement is not reasonable for a tunnel port, say VxLAN or VxLAN-gpe; its intention is L2 over L3, so the L2 header is a must-have, but the mandatory L3 requirement removes the L2 header. I also think VxLAN + Eth + NSH + original frame should be an option; at least industry has such requirements in practice. So my point is that it would be great if we can support both VxLAN-gpe + ETH + NSH + original L2 and VxLAN + ETH + NSH + original L2; this will simplify our NSH patch upstreaming efforts and speed up merging.

-----Original Message-----
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, June 02, 2016 2:54 AM
To: OpenStack Development Mailing List (not for usage questions); b...@ovn.org; Yang, Yi Y
Cc: Cathy Zhang
Subject: RE: [openstack-dev] [Neutron] support of NSH in networking-SFC

Looks like the work of removing the mandatory L3 requirement associated with the decapsulated VxLAN-gpe packet also involves an OVS kernel change, which is difficult. Furthermore, even if this blocking issue is resolved and OVS eventually accepts the VxLAN-gpe+NSH encapsulation, there is still another issue: current Neutron only supports VXLAN, not VXLAN-gpe, and adopting VXLAN-gpe involves consideration of backward compatibility with existing VXLAN VTEPs and VXLAN gateways.

An alternative and maybe easier/faster path could be to push a patch for "VxLAN + Eth + NSH + original frame" into the OVS kernel module. This is also an IETF-compliant encapsulation for SFC and has neither the L3 requirement issue nor the Neutron VXLAN-gpe support issue. We can probably take this discussion to the OVS mailing alias.

Thanks, Cathy

-----Original Message-----
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 9:48 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Wed, Jun 01, 2016 at 12:08:23AM +0000, Yang, Yi Y wrote:
> Ben, yes, we submitted the NSH support patch set last year, but the ovs
> community told me we have to push the kernel part into the Linux kernel tree;
> we're struggling to do this, but something has blocked us.

It's quite difficult to get patches for a new protocol into the kernel.
You have my sympathy.

> Recently, ovs made some changes in tunnel protocols which require that the
> packet decapsulated by a tunnel must be an Ethernet packet, but the Linux
> kernel (net-next) tree accepted a VxLAN-gpe patch set from Red Hat
> (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port
> must be an L3 packet, not an L2 Ethernet packet; this has blocked us from
> making progress.
>
> Simon Horman (from Netronome) has posted a series of patches to remove
> the mandatory requirement from ovs so that the packet from a tunnel can be
> any packet type, but so far we haven't seen them merged.

These are slowly working their way through OVS review, but these also have a
prerequisite on kernel patches, so it's not easy to get them in either.

> I heard the ovs community looks forward to getting the NSH patches merged;
> it would be great if ovs folks can help progress this.

I do plan to do my part in review (but much of this is kernel review, which
I'm not really involved in anymore).
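[As a concrete illustration of the L2-versus-L3 tunnel-port behaviour discussed in this thread, here is a hypothetical sketch, not from any patch mentioned here, that creates a classic VXLAN device next to a VXLAN-GPE device by shelling out to iproute2. It assumes a kernel and iproute2 recent enough to support the gpe flag, which on kernels of this era is only accepted together with external metadata-collection mode; the device names, VNI, addresses, and underlay interface are placeholders taken from the topology discussed later in this thread. The GPE device is the one that hands decapsulated payloads up without an Ethernet header, i.e. the implicit pop_eth behaviour Yi Yang describes.]

    import subprocess

    def sh(cmd):
        # Echo, then run; the iproute2 commands are the interesting part of this sketch.
        print("+ " + cmd)
        subprocess.check_call(cmd, shell=True)

    # Classic L2 VXLAN: decapsulated packets keep their inner Ethernet header.
    sh("ip link add vxlan1500 type vxlan id 1500 dstport 4789 "
       "local 192.168.50.3 remote 192.168.50.4 dev eth1")

    # VXLAN-GPE: accepted only with metadata-collection ('external') mode on these
    # kernels, and decapsulated payloads come up as L3 packets (no Ethernet header).
    sh("ip link add vxlangpe0 type vxlan dstport 4790 external gpe")

    sh("ip link set vxlan1500 up")
    sh("ip link set vxlangpe0 up")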
Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
Ben, yes, we submitted the NSH support patch set last year, but the ovs community told me we have to push the kernel part into the Linux kernel tree; we're struggling to do this, but something has blocked us.

Recently, ovs made some changes in tunnel protocols which require that the packet decapsulated by a tunnel must be an Ethernet packet, but the Linux kernel (net-next) tree accepted a VxLAN-gpe patch set from Red Hat (Jiri Benc) which requires that the packet decapsulated by a VxLAN-gpe port must be an L3 packet, not an L2 Ethernet packet; this has blocked us from making progress.

Simon Horman (from Netronome) has posted a series of patches to remove the mandatory requirement from ovs so that the packet from a tunnel can be any packet type, but so far we haven't seen them merged.

I heard the ovs community looks forward to getting the NSH patches merged; it would be great if ovs folks can help progress this.

-----Original Message-----
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Tuesday, May 31, 2016 10:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Mon, May 30, 2016 at 10:12:34PM -0400, Paul Carver wrote:
> I don't know the details of why OvS hasn't added NSH support so I
> can't judge the validity of the concerns, but one way or another there
> has to be a production-quality dataplane for networking-sfc to front-end.

It looks like the last time anyone submitted NSH patches to Open vSwitch was September 2015. They got some reviews but no new version has been posted since.

Basically, we can't add NSH support if no one submits patches.
Re: [openstack-dev] [Neutron] How are Tenant VMs' traffic routed to Service VMs
Sam,

Thanks. But I failed to install two compute nodes when Tacker is enabled, and I'm not sure whether Tacker depends on Neutron L3. Can you share a local.conf for the two-compute-node case with Tacker enabled?

From: Sam Hague [mailto:sha...@redhat.com]
Sent: Friday, November 20, 2015 9:19 PM
To: Yang, Yi Y
Cc: openstack-dev@lists.openstack.org; Flavio Fernandes; Tim Rozet; Andre Fredette
Subject: Re: [openstack-dev] [Neutron] How are Tenant VMs' traffic routed to Service VMs

Yi,

yes, you just create a router and connect the two networks to it. The router will ensure traffic works between the two networks/subnets. Add the router and then just add each subnet to the router. Something like the below:

neutron net-create vx-net --provider:network_type vxlan --provider:segmentation_id 1500
neutron net-create vx-net2 --provider:network_type vxlan --provider:segmentation_id 1501
neutron subnet-create vx-net 10.100.5.0/24 --name vx-subnet --dns-nameserver 8.8.8.8
neutron subnet-create vx-net2 10.100.6.0/24 --name vx-subnet2 --dns-nameserver 8.8.8.8
neutron net-create ext-net
neutron router-create ext-rtr
neutron router-interface-add ext-rtr vx-subnet
neutron router-interface-add ext-rtr vx-subnet2

Thanks, Sam

On Fri, Nov 20, 2015 at 2:52 AM, Yang, Yi Y <yi.y.y...@intel.com> wrote:

Hi, folks

I'm trying Tacker to start some service VMs as Service Function VNFs as the "heat" tenant user. The service VMs have a dedicated Neutron net & subnet, and the other common tenant VMs have their own Neutron net & subnet. My question is how to route the traffic to the service VMs in an OpenStack environment: DVR or a router? I integrated OpenDaylight and used the OpenDaylight ML2 driver (https://github.com/openstack/networking-odl); in that case I used its L3 routing plugin instead of Neutron L3. I also integrated ovsdb; from the ovsdb perspective, ARP responses and L3 routing are done by OpenFlow tables, so can OpenFlow tables do the same thing to route the traffic between tenant VMs and service VMs? The network topology looks like the diagram below.

[ASCII network topology diagram, truncated in this quoted copy; see the original message below.]
[openstack-dev] [Neutron] How are Tenant VMs' traffic routed to Service VMs
Hi, folks

I'm trying Tacker to start some service VMs as Service Function VNFs as the "heat" tenant user. The service VMs have a dedicated Neutron net & subnet, and the other common tenant VMs have their own Neutron net & subnet. My question is how to route the traffic to the service VMs in an OpenStack environment: DVR or a router? I integrated OpenDaylight and used the OpenDaylight ML2 driver (https://github.com/openstack/networking-odl); in that case I used its L3 routing plugin instead of Neutron L3. I also integrated ovsdb; from the ovsdb perspective, ARP responses and L3 routing are done by OpenFlow tables, so can OpenFlow tables do the same thing to route the traffic between tenant VMs and service VMs? The network topology looks like the diagram below.

+---------------------------------------+   +---------------------------------------+
| Compute Node 1                        |   | Compute Node 2                        |
|                                       |   |                                       |
|  +-------------+   +-------------+    |   |  +-------------+   +-------------+    |
|  | Tenant VM1  |   | Service VMx |    |   |  | Tenant VM2  |   | Service VMy |    |
|  | 10.0.0.3    |   | 11.0.0.3    |    |   |  | 10.0.0.4    |   | 11.0.0.4    |    |
|  +-----eth0----+   +-----eth0----+    |   |  +-----eth0----+   +-----eth0----+    |
|        |                  |           |   |        |                  |           |
|       tap0               tap1         |   |       tap0               tap1         |
|  +--------------------------------+   |   |  +--------------------------------+   |
|  |           ovs br-int           |   |   |  |           ovs br-int           |   |
|  |   VxLAN1         VxLAN-gpe1    |   |   |  |   VxLAN1         VxLAN-gpe1    |   |
|  +--------------------------------+   |   |  +--------------------------------+   |
|                  |                    |   |                  |                    |
|           eth1 (192.188.50.3)         |   |           eth1 (192.168.50.4)         |
+------------------+--------------------+   +------------------+--------------------+
                   |                                            |
                   +============================================+
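[Since the question above is whether OpenFlow tables alone can route between the tenant subnet (10.0.0.0/24) and the service-VM subnet (11.0.0.0/24), here is a hypothetical sketch of the kind of flow an SDN controller, or a script wrapping ovs-ofctl, could program on br-int to do the routing without the Neutron L3 agent. The MAC addresses and OVS port number are placeholders, and a real setup would also need the reverse-direction flow plus ARP responder rules.]

    ROUTER_MAC = "fa:16:3e:00:00:01"       # virtual router gateway MAC (placeholder)
    SERVICE_VM_MAC = "fa:16:3e:00:00:02"   # Service VMx MAC (placeholder)
    SERVICE_VM_OFPORT = 2                  # OVS port number of tap1 (placeholder)

    # Route tenant -> service subnet: rewrite the L2 headers, decrement TTL, forward.
    flow = ("table=0,priority=100,ip,nw_dst=11.0.0.0/24,"
            "actions=mod_dl_src:{rmac},mod_dl_dst:{vmac},dec_ttl,output:{port}"
            .format(rmac=ROUTER_MAC, vmac=SERVICE_VM_MAC, port=SERVICE_VM_OFPORT))

    # A controller would push this via OpenFlow; from a shell it is equivalent to:
    print("ovs-ofctl add-flow br-int '%s'" % flow)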
[openstack-dev] SR-IOV and IOMMU check
Hi, all

Currently OpenStack can support SR-IOV device pass-through (at least there are some patches for this), but the prerequisite is that both IOMMU and SR-IOV are enabled correctly, and there does not seem to be a robust way to check this in OpenStack. I have implemented a way to do this and hope it can be committed upstream; it can help find the issue beforehand, instead of letting KVM report "no IOMMU found" only when the VM is started. I didn't find an appropriate place to put this into: do you think this is necessary, and where should it go? Your advice is welcome, and thank you in advance.
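[For what it's worth, a check along these lines can be done from plain sysfs. This is an illustrative sketch rather than the patch mentioned above, and the PCI address is a placeholder: /sys/kernel/iommu_groups is only populated when the IOMMU is actually enabled, and sriov_totalvfs is exposed for SR-IOV capable devices on reasonably recent kernels.]

    import glob

    def iommu_enabled():
        """True if the kernel has an active IOMMU (it then populates iommu_groups)."""
        return bool(glob.glob("/sys/kernel/iommu_groups/*"))

    def sriov_totalvfs(pci_addr):
        """Number of VFs the PF at pci_addr supports, or 0 if not SR-IOV capable."""
        try:
            with open("/sys/bus/pci/devices/%s/sriov_totalvfs" % pci_addr) as f:
                return int(f.read().strip())
        except (IOError, OSError, ValueError):
            return 0

    if __name__ == "__main__":
        pci_addr = "0000:05:00.0"  # placeholder PCI address of the candidate NIC
        if not iommu_enabled():
            print("IOMMU is not enabled; enable VT-d/AMD-Vi and intel_iommu=on")
        elif sriov_totalvfs(pci_addr) == 0:
            print("%s is not SR-IOV capable or SR-IOV is disabled" % pci_addr)
        else:
            print("%s supports %d VFs" % (pci_addr, sriov_totalvfs(pci_addr)))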
Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?
Thank you for your detailed info, but I want to implement this in the Havana release. mlnx is a good reference; what I want to implement on an Intel NIC is similar to mlnx, but that is a standalone plugin which didn't use the ML2 framework, and I want to use the ML2 framework. I think Nova has supported SR-IOV in Havana, so I just need to implement the Neutron part; I hope you can provide some guidance about this. BTW, we can't afford to wait for the Icehouse release.

-----Original Message-----
From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, February 10, 2014 8:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Yang, Yi Y
Subject: RE: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

Hi,

As stated below, this work is already happening in both Nova and Neutron. Please take a look at the following discussions:
https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

For the Neutron part there are two different flavors coming as part of this effort:
1. Cisco SR-IOV supporting 802.1QBH - no L2 agent
2. Mellanox flavor - SR-IOV embedded switch ("HW_VEB") - with L2 agent

My guess is that the second flavor, the SR-IOV embedded switch, should work for Intel NICs as well. Please join the PCI pass-through meeting discussions to make sure you do not do any redundant work, or just follow up on the mailing list.

BR,
Irena

-----Original Message-----
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
Sent: Monday, February 10, 2014 1:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

Hi,

SR-IOV is under implementation in nova and neutron. Did you have a look at:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

On Mon, Feb 10, 2014 at 7:27 AM, Isaku Yamahata wrote:
> On Sat, Feb 08, 2014 at 03:49:46AM +0000, "Yang, Yi Y" wrote:
>
>> Hi, All
>
> Hi.
>
>> I want to write a new Neutron L2 plugin using the ML2 framework. I noticed
>> openvswitch and linuxbridge have been ported to the ML2 framework, but it
>> seems a lot of code was removed compared to the standalone L2 plugins; I
>> guess some code has been moved into a common library. Now I want to write
>> an L2 plugin to enable switching for an SR-IOV 10G NIC, and I think I need
>> to write the following:
>
> Having such a feature would be awesome: did you file a BP for that?
>
>> 1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from
>> the source code, it seems there is not much to do.

You mean you want to use AgentMechanismDriverBase directly? This is an abstract class due to the check_segment_for_agent method.

> This requires defining how your plugin utilizes the network.
> If multi-tenant networking is wanted, what/how technology will be used?
> The common ones are VLAN or tunneling (GRE, VXLAN).
> This depends on what features your NIC supports.
>
>> 2. a new agent neutron/plugins/XXX/XXX_neutron_plugin.py

I don't know if this would be mandatory. Maybe you can just add the necessary information with extend_port_dict while your MD binds the port, as proposed by this patch: https://review.openstack.org/#/c/69783/
Nova will then configure the port correctly. The only need for an agent would be to populate the agent DB with supported segment types, so that during bind_port the MD finds an appropriate segment (with check_segment_for_agent).
>>
>> After this, an issue is how to let Neutron know about it and load it by
>> default or by configuration. Debugging is also an issue; nobody can write
>> code correctly the first time :-), so does Neutron have any good debugging
>> approach for a newbie?
>
> LOG.debug and the debug middleware.
> If there is any other better way, I'd also like to know.
>
> thanks,
>
>> I'm very eager to get your help and sincerely thank you in advance.
>
> --
> Isaku Yamahata
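[To make the advice above more concrete, here is a rough, hypothetical skeleton of an agent-less ML2 mechanism driver that binds a port directly, in the spirit of the extend_port_dict/bind_port discussion. The exact base classes, constants, and set_binding signature changed between Havana and Icehouse, so treat the names below as assumptions to be checked against the target release rather than a drop-in file.]

    # Hypothetical skeleton only -- verify against the driver_api of your Neutron release.
    from neutron.openstack.common import log as logging
    from neutron.plugins.ml2 import driver_api as api

    LOG = logging.getLogger(__name__)


    class SriovNicMechanismDriver(api.MechanismDriver):
        """Illustrative mechanism driver for an SR-IOV NIC embedded switch."""

        def initialize(self):
            # One-time setup: read config, decide VIF type and supported segment types.
            self.vif_type = "hw_veb"                   # assumption: embedded-switch VIF
            self.supported_network_types = ("flat", "vlan")

        def bind_port(self, context):
            for segment in context.network.network_segments:
                if segment[api.NETWORK_TYPE] in self.supported_network_types:
                    LOG.debug("Binding port %s to segment %s",
                              context.current["id"], segment[api.ID])
                    # set_binding's signature varies by release; Icehouse-era form shown.
                    context.set_binding(segment[api.ID], self.vif_type,
                                        {"port_filter": False})
                    return
            LOG.debug("No supported segment found for port %s",
                      context.current["id"])

[To have Neutron load such a driver, it would also need an entry point in the neutron.ml2.mechanism_drivers group in setup.cfg and an entry in the mechanism_drivers option of the [ml2] section in ml2_conf.ini; those two hooks are how ML2 drivers of that era were registered and enabled.]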
[openstack-dev] How to write a new neutron L2 plugin using ML2 framework?
Hi, All

I want to write a new Neutron L2 plugin using the ML2 framework. I noticed openvswitch and linuxbridge have been ported to the ML2 framework, but it seems a lot of code was removed compared to the standalone L2 plugins; I guess some code has been moved into a common library. Now I want to write an L2 plugin to enable switching for an SR-IOV 10G NIC, and I think I need to write the following:

1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from the source code, it seems there is not much to do.
2. a new agent neutron/plugins/XXX/XXX_neutron_plugin.py

After this, an issue is how to let Neutron know about it and load it by default or by configuration. Debugging is also an issue; nobody can write code correctly the first time :-), so does Neutron have any good debugging approach for a newbie?

I'm very eager to get your help and sincerely thank you in advance.