Re: [ovs-discuss] VXLAN support for OVN

2020-03-17 Thread Leonid Grossman
Hi Daniel,
WRT performance gains from offloading encap/decap, these are really specific to 
the workloads (and much less to VXLAN vs. Geneve).
My advice would be to test it in your specific environment to decide whether 
it’s worth the extra effort. Note that the effort would be outside of ovs/ovn 
(getting the right hardware, drivers, etc. in place).
For our workloads it was pretty much a must, either due to the need to run 
close to 100 Gbps line rates or due to most of the host resources being needed 
for the apps, with very little left for the networking stack to use.
But I can think of other cases where doing encap/decap in software is OK.

WRT VXLAN vs. Geneve – as Ihar’s original e-mail below rightfully points 
out, this really depends on the number of logical ports needed for 
deployments.
In our cloud environment, it was hard to guarantee that tenant deployments would 
not grow over time beyond the number of logical ports that VXLAN can support 
(even after leaving the source port behind, which we did). So for us VXLAN was a 
stepping stone until Geneve offload got in place. We also did not have the other 
limitations/perceptions described in the original e-mail that may favor VXLAN.
I do agree, though, that for some/many users this may be a good feature to support, 
if their logical network is guaranteed to stay relatively small.
Best, Leonid

From: Daniel Alvarez Sanchez 
Sent: Tuesday, March 17, 2020 6:19 AM
To: Leonid Grossman 
Cc: Ben Pfaff ; Ihar Hrachyshka ; 
ovs-discuss@openvswitch.org
Subject: Re: [ovs-discuss] VXLAN support for OVN

External email: Use caution opening links or attachments

Hi Leonid, all

On Fri, Mar 13, 2020 at 11:54 PM Leonid Grossman 
<lgross...@nvidia.com> wrote:
Ben/all,
We actually moved to Geneve a while ago... The original hurdle with Geneve was 
the lack of hw support, but it got solved (at least for our environments).

This is great. I'm not sure if you can share some perf numbers comparing VXLAN 
offloading back then with the HW support that you're using in the NICs now. 
It'd help a lot to decide whether the effort is really worthwhile.

Thanks a lot!
Daniel

Thanks, Leonid

> -Original Message-
> From: Ben Pfaff <b...@ovn.org>
> Sent: Friday, March 13, 2020 3:36 PM
> To: Ihar Hrachyshka <ihrac...@redhat.com>; Leonid Grossman
> <lgross...@nvidia.com>
> Cc: ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] VXLAN support for OVN
>
>
> Hi!  I'm adding Leonid Grossman to this thread because I believe that his
> team at nVidia has an internal fork of OVN that supports VXLAN.
> I've discussed the tradeoffs that you mentioned below about splitting up bits
> with him, too.
>
> On Mon, Mar 09, 2020 at 09:22:24PM -0400, Ihar Hrachyshka wrote:
> > Good day,
> >
> > at Red Hat, once in a while we hear from customers, both internal and
> > external, that they would like to see VXLAN support in OVN for them to
> > consider switching to the technology. This email is a notice that I
> > plan to work on this feature in the next weeks and months and hope to
> > post patches for you to consider. Below is an attempt to explain why
> > we may want it, how we could achieve it, potential limitations. This
> > is also an attempt to collect early feedback for the whole idea.
> >
> > Reasons for the customer requests are multiple; some have more merit,
> > some are more about perception. One technical reason is that there are
> > times when an SDN / cloud deployment team doesn't have direct influence
> > on the protocols allowed in the underlying network, and it's hard,
> > due to politics or other reasons, to make policy changes to allow
> > Geneve traffic while VXLAN is already available to use. Coming from an
> > OpenStack background, you usually have interested customers already
> > using the ML2-OVS implementation of Neutron, which already relies on VXLAN.
> >
> > Another reason is that some potential users may believe that VXLAN
> > would bring specific benefits in their environment compared to Geneve
> > tunnelling (these gains are largely expected in performance, not
> > functionality, because of objective limitations of the VXLAN protocol
> > definition).  While Geneve vs. VXLAN performance is indeed quite an
> > old debate with no clear answer, and while there were experiments
> > in the past that apparently demonstrated that the potential performance
> > gains from VXLAN may not be as prominent, or present at all, as one may
> > believe*, nevertheless the belief that VXLAN would be beneficial at
> > least in some environments on some hardware never dies out; and so,
> > regardless of the proven merit of such belief
Re: [ovs-discuss] VXLAN support for OVN

2020-03-13 Thread Leonid Grossman
Ben/all,
We actually moved to Geneve a while ago... The original hurdle with Geneve was 
the lack of hw support, but it got solved (at least for our environments).
Thanks, Leonid

> -Original Message-
> From: Ben Pfaff 
> Sent: Friday, March 13, 2020 3:36 PM
> To: Ihar Hrachyshka ; Leonid Grossman
> 
> Cc: ovs-discuss@openvswitch.org
> Subject: Re: [ovs-discuss] VXLAN support for OVN
> 
> 
> Hi!  I'm adding Leonid Grossman to this thread because I believe that his
> team at nVidia has an internal fork of OVN that supports VXLAN.
> I've discussed the tradeoffs that you mentioned below about splitting up bits
> with him, too.
> 
> On Mon, Mar 09, 2020 at 09:22:24PM -0400, Ihar Hrachyshka wrote:
> > Good day,
> >
> > at Red Hat, once in a while we hear from customers, both internal and
> > external, that they would like to see VXLAN support in OVN for them to
> > consider switching to the technology. This email is a notice that I
> > plan to work on this feature in the next weeks and months and hope to
> > post patches for you to consider. Below is an attempt to explain why
> > we may want it, how we could achieve it, potential limitations. This
> > is also an attempt to collect early feedback for the whole idea.
> >
> > Reasons for the customer requests are multiple; some have more merit,
> > some are more about perception. One technical reason is that there are
> > times when an SDN / cloud deployment team doesn't have direct influence
> > on the protocols allowed in the underlying network, and it's hard,
> > due to politics or other reasons, to make policy changes to allow
> > Geneve traffic while VXLAN is already available to use. Coming from an
> > OpenStack background, you usually have interested customers already
> > using the ML2-OVS implementation of Neutron, which already relies on VXLAN.
> >
> > Another reason is that some potential users may believe that VXLAN
> > would bring specific benefits in their environment compared to Geneve
> > tunnelling (these gains are largely expected in performance, not
> > functionality, because of objective limitations of the VXLAN protocol
> > definition).  While Geneve vs. VXLAN performance is indeed quite an
> > old debate with no clear answer, and while there were experiments
> > in the past that apparently demonstrated that the potential performance
> > gains from VXLAN may not be as prominent, or present at all, as one may
> > believe*, nevertheless the belief that VXLAN would be beneficial at
> > least in some environments on some hardware never dies out; and so,
> > regardless of the proven merit of such belief, OVN adoption suffers
> > because of its lack of VXLAN support.
> >
> > *
> > https://blog.russellbryant.net/2017/05/30/ovn-geneve-vs-vxlan-does-it-
> > matter/
> >
> > So our plan is to satisfy such requests by introducing support for the
> > new tunnelling type into OVN, thereby allowing interested
> > parties to try it in their specific environments and see if it makes
> > the expected difference.
> >
> > Obviously, there is a cost to introducing an additional protocol into the
> > support matrix (especially considering the limitations it would introduce, as
> > discussed below). We will probably have to consider the complexity of
> > the final implementation once it's available for review.
> >
> > =
> >
> > For implementation, the base problem to solve here is the fact that
> > VXLAN doesn't carry as many bits available for encoding the
> > datapath as Geneve does. (Geneve occupies both the 24-bit VNI field as
> > well as 32 more bits of metadata to carry the logical source and
> > destination ports.) The VXLAN VNI is just 24 bits long, and there are no
> > additional fields available for OVN to pass port information.  (This
> > would be different if one considered protocol extensions like
> > VXLAN-GPE, but relying on them makes both reasons to consider VXLAN
> > listed above somewhat moot.)
> >
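As a rough sketch of the Geneve encoding just described (the function name is illustrative, and the exact option layout — a 15-bit ingress port and 16-bit egress port sharing one 32-bit TLV option, with the top bit reserved — is my reading of the OVN docs, not something stated in this thread):

```python
def pack_geneve_metadata(datapath, inport, outport):
    """Hypothetical packing: the 24-bit logical datapath ID rides in the
    Geneve VNI; the two logical port IDs share one 32-bit Geneve TLV
    option (assumed layout: 15-bit ingress port, 16-bit egress port)."""
    assert datapath < (1 << 24), "datapath ID must fit in the 24-bit VNI"
    assert inport < (1 << 15) and outport < (1 << 16)
    option = (inport << 16) | outport  # top bit left zero (reserved)
    return datapath, option

# Example: datapath 5, ingress port 3, egress port 7.
vni, opt = pack_geneve_metadata(5, 3, 7)
```

The point of the sketch is simply that Geneve has room for all three identifiers at full width, which is exactly what VXLAN's lone 24-bit VNI lacks.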
> > To satisfy OVN while also working with VXLAN, the limited 24-bit VNI
> > space would have to be split among three components - network ID,
> > logical source port, and logical destination port. The split necessarily
> > limits the maximum number of networks or ports per network, depending on
> > where the split is cut.
> >
> > Splitting the same 24-bit space among all three components equally
> > would result in limitations that would probably not satisfy most
> > real-life deployments (we are talking about a max of 256 networks with
> > a max of 256 ports per network).
> >
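To make the trade-off concrete, here is a small sketch (helper names are hypothetical, not OVN code; the 8/8/8 constants match the equal split mentioned above, and any other split can be tried by changing them):

```python
# Hypothetical split of the 24-bit VXLAN VNI among network ID,
# logical source port, and logical destination port (8 bits each
# here, i.e. a max of 256 networks x 256 ports per network).
NET_BITS, SRC_BITS, DST_BITS = 8, 8, 8
assert NET_BITS + SRC_BITS + DST_BITS == 24

def encode_vni(net, src, dst):
    # Each field must fit in its allotted bits.
    assert net < (1 << NET_BITS)
    assert src < (1 << SRC_BITS) and dst < (1 << DST_BITS)
    return (net << (SRC_BITS + DST_BITS)) | (src << DST_BITS) | dst

def decode_vni(vni):
    # Reverse the packing: peel fields off from the low bits up.
    dst = vni & ((1 << DST_BITS) - 1)
    src = (vni >> DST_BITS) & ((1 << SRC_BITS) - 1)
    net = vni >> (SRC_BITS + DST_BITS)
    return net, src, dst
```

Shifting bits from one field to another trades networks for ports per network, which is the crux of the design decision described above.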

Re: [ovs-discuss] [ovs-dev] Geneve remote_ip as flow for OVN hosts

2018-12-13 Thread Leonid Grossman
Hi Ben,
You are right, we briefly discussed this at ovscon; an in-person meeting sounds 
like a good way to make progress.
Venu/Girish/myself are available next Wed - will this work for you?
We can come over to the VMware campus, or host the meeting here at Nvidia (Santa 
Clara HQ, San Tomas and Walsh),
to discuss the use case in more detail (other use cases like NFV should be 
able to benefit too),
and perhaps demo the proposed code changes.
Please pick the time/place and let us know.
Best, Leonid

> -Original Message-
> From: ovs-discuss-boun...@openvswitch.org <ovs-discuss-boun...@openvswitch.org> On Behalf Of Ben Pfaff
> Sent: Wednesday, December 12, 2018 12:51 PM
> To: venugopal iyer 
> Cc: ovs dev ; Guru Shetty ; Girish
> Moodalbail ; disc...@openvswitch.org
> Subject: Re: [ovs-discuss] [ovs-dev] Geneve remote_ip as flow for OVN
> hosts
> 
> If I'm not mistaken, we briefly discussed this at ovscon.  It seems to me that
> this is a fairly complicated issue and proposal, and it might benefit from in-
> person discussion.  I seem to recall that you are local to the Bay Area, and, 
> if
> so, do you think we could take some time, perhaps next week, to have a
> meeting over it?  Otherwise, I will continue to study it.
> 
> On Thu, Nov 29, 2018 at 05:40:45PM +, venugopal iyer wrote:
> > Sorry for the resend, I am not sure how the pictures will render in the
> > text doc, so am attaching the PDF too.
> > thanks,
> > -venu
> >
> > On Thursday, November 29, 2018, 9:26:54 AM PST, venugopal iyer
>  wrote:
> >
> >   Thanks, Ben.
> >
> > Sorry for the delay. Please find attached a draft design proposal and
> > let me know your comments etc. I did some quick prototyping
> > to check for feasibility too; I can share that, if it helps.
> > Note, the document is a draft and, I admit, there might be things
> > that I haven't thought about/through, or missed. I am attaching a text doc,
> > assuming it might be easier, but if you'd like it in a different format,
> > please let me know.
> >
> > thanks!
> > -venu
> >
> > On Wednesday, October 31, 2018, 10:30:23 AM PDT, Ben Pfaff
>  wrote:
> >
> >  Honestly the best thing to do is probably to propose a design or, if
> > it's simple enough, to send a patch.  That will probably be more
> > effective at sparking a discussion.
> >
> > On Wed, Oct 31, 2018 at 03:33:48PM +, venugopal iyer wrote:
> > >  Hi:
> > > Just wanted to check if folks had any thoughts on the use case
> > > Girish outlined below. We do have a real use case for this and are
> > > interested in looking at options for supporting more than one VTEP IP. It is
> > > currently a limitation for us; I wanted to know if there are similar use cases
> > > folks are looking at/interested in addressing.
> > >
> > > thanks,
> > > -venu
> > >
> > > On Thursday, September 6, 2018, 9:19:01 AM PDT, venugopal iyer
> > > via dev wrote:
> > >
> > > Would it be possible for the association <logical port,
> > > VTEP> to be made when the logical port is instantiated on a node,
> > > and relayed on to the SB by the controller? E.g., assuming a
> > > mechanism to specify/determine a physical port mapping for a logical
> > > port for a VM, the mappings can be specified as configuration on the
> > > chassis. In the absence of physical port information for a logical
> > > port/VM, I suppose we could default to an encap-ip.
> > >
> > >
> > > just a thought,
> > > -venu
> > >   On Wednesday, September 5, 2018, 2:03:35 PM PDT, Ben Pfaff
> > >  wrote:
> > >
> > >  How would OVN know which IP to use for a given logical port on a
> > >chassis?
> > >
> > > I think that the "multiple tunnel encapsulations" is meant to cover,
> > > say, Geneve vs. STT vs. VXLAN, not the case you have in mind.
> > >
> > > On Wed, Sep 05, 2018 at 09:50:32AM -0700, Girish Moodalbail wrote:
> > > > Hello all,
> > > >
> > > > I would like to add more context here. In the diagram below
> > > >
> > > > +-----------------------------------+
> > > > | ovn-host                          |
> > > > |                                   |
> > > > |                                   |
> > > > |     +------------------------+    |
> > > > |     |         br-int         |    |
> > > > |     +-----+-----------+------+    |
> > > > |           |           |           |
> > > > |       +---v----+  +---v----+      |
> > > > |       | geneve |  | geneve |      |
> > > > |       +---+----+  +---+----+      |
> > > > |           |           |           |
> > > > |        +--v---+    +--v---+       |
> > > > |        | IP0  |    | IP1  |       |
> > > > |        +------+    +------+       |
> > > > +--------+ eth0 +----+ eth1 +-------+
> > > >          +------+    +------+
> > > >
> > > > eth0 and eth1 are, say, each in its own physical segment. The VMs that
> > > > are instantiated on the above ovn-host will have multiple
> > > > interfaces, and each of those interfaces needs to be on a different
> > > > Geneve VTEP.
> > > >
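For context on the limitation being discussed, this is roughly how a chassis advertises its tunnel endpoint today (the IP address below is a placeholder); ovn-controller picks these values up and publishes them in the Southbound Encap table, one encap IP per encap type per chassis:

```shell
# Configure the chassis's (single) tunnel endpoint; placeholder IP.
ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-encap-type=geneve \
    external-ids:ovn-encap-ip=192.0.2.10

# Inspect what ovn-controller registered in the Southbound DB.
ovn-sbctl list Encap
```

The use case above amounts to allowing more than one such encap IP per chassis, with a way to pick among them per logical port.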
> > > > I think the following entry in OVN TODOs (
> > > > https://github.com/openvswitch/ovs/blob/master/ovn/TODO.rst)
> > > >
> > > >