Thank you all for the feedback!!

Our requirement: we are looking for a way to increase stability without
sacrificing bandwidth availability and convergence for Data Center host
connectivity to an existing VXLAN fabric. Our server traffic volume is
higher bandwidth East-West than North-South.

We have a Cisco Nexus based VXLAN EVPN fabric with the Multi-Site feature
connecting all of our PODs intra-DC, and we use BGP EVPN over an MPLS core
for inter-Data Center DCI connectivity.

We have orchestration via Cisco NSO and DCNM for network programmability.
Typical Cisco shop.

Our Data Center host attachment model has been MLAG using Cisco's vPC.
That L2 extension has been problematic, so we would like to find a better
way, ideally leveraging our existing VXLAN fabric and extending it to the
server hypervisor if at all possible.

So with a hypervisor connected to two leaf switches in a VXLAN fabric, it
sounds like it may be possible, with Cisco's IETF standards-based
implementation of the VXLAN overlay following NVO3 RFC 8365 and BGP EVPN
RFC 7432, to extend the fabric to the server hypervisor.
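To make that concrete, a minimal host-side EVPN configuration sketch in
FRR style is below (FRR on the host is one of the open-source options
discussed later in this thread). The ASNs, neighbor address, and interface
names are purely illustrative assumptions, not a validated design:

```
! frr.conf - hypothetical host-side BGP EVPN sketch; all addresses,
! ASNs, and interface names here are made-up placeholders
router bgp 65001
 neighbor 10.1.1.1 remote-as 65000
 neighbor 10.1.1.1 update-source lo
 address-family l2vpn evpn
  neighbor 10.1.1.1 activate
  advertise-all-vni
 exit-address-family
```

Here the hypervisor peers L2VPN EVPN with a leaf and advertises its local
VNIs, so the host itself becomes a VTEP in the fabric.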

The question of L3 ECMP versus L2 MLAG becomes moot, as with the existing
BGP EVPN load-balancing procedures and EVPN Type 4 (Ethernet Segment
route) DF election we can achieve all-active multihomed load balancing
from the hypervisor. As was mentioned, per RFC 8365, since VXLAN EVPN does
not carry an ESI split-horizon label, the local-bias feature would have to
be used for split horizon.
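A rough sketch of the two procedures mentioned above, DF election per RFC
7432 service carving and the RFC 8365 local-bias drop decision. The VTEP
addresses and VLAN IDs are illustrative placeholders, not from any real
fabric:

```python
# Hedged sketch of two EVPN procedures; addresses/VLANs are made up.

def df_election(vtep_ips, vlan_id):
    """RFC 7432 section 8.5 service carving: order the VTEPs that
    advertised a Type 4 (Ethernet Segment) route for the ES, then the
    DF for VLAN V is the entry at index (V mod N)."""
    candidates = sorted(vtep_ips)
    return candidates[vlan_id % len(candidates)]

def egress_forwards_bum(src_vtep, local_es_peers):
    """RFC 8365 local bias: the egress VTEP drops BUM traffic toward a
    shared Ethernet Segment when the source VTEP is a peer on that same
    ES, because the ingress already delivered it to local ES ports."""
    return src_vtep not in local_es_peers

peers = ["192.0.2.1", "192.0.2.2"]
assert df_election(peers, 10) == "192.0.2.1"   # 10 % 2 == 0
assert df_election(peers, 11) == "192.0.2.2"   # 11 % 2 == 1
assert not egress_forwards_bum("192.0.2.1", set(peers))
```

The point of the sketch is only to show why no MLAG state is needed: DF
election and local bias are computed independently by each VTEP from the
same BGP EVPN routes.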

Of course, with the extension to the hypervisor, we would use our existing
orchestration of the fabric to manage the server hypervisor layer.

Has this ever been done before, and with which hypervisors?

Kind regards

Gyan

On Mon, Mar 2, 2020 at 7:58 PM Jeff Tantsura <jefftant.i...@gmail.com>
wrote:

> Gyan,
>
> On the open source side of things, FRR supports EVPN on the host.
> Any vendor virtualized NOS would provide you the same (at least Junos/cRPD
> or XRv).
> EVPN ESI load-sharing eliminates the need for MLAG (basic thought, the
> devil is in the details :))
> ECMP vs LAG load-balancing - the algorithms supported are quite similar,
> in some code bases actually the same, so this statement is not really
> correct.
>
> Would be glad to better understand your requirements and help you!
>
> Regards,
> Jeff
>
> On Mar 2, 2020, at 16:00, Gyan Mishra <hayabusa...@gmail.com> wrote:
>
>
> Thanks Robert for the quick response
>
> Just thinking out loud - I can see there may be some advantages to
> eliminating L2 to the host, but the one major disadvantage is that BGP
> multipath provides flow-based, uneven load balancing, so it is not as
> desirable from that standpoint compared to an L2 MLAG bundle's XOR
> Src/Dest/Port hash.
>
> The other big downside is that most enterprises have the hypervisor
> managed by server admins, but if you run BGP on it, that responsibility
> shifts to the network team. More complicated.
>
> Kind regards
>
> Gyan
>
> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk <rob...@raszuk.net> wrote:
>
>> Hi Gyan,
>>
>> A similar architecture was invented and shipped by the Contrail team.
>> After the team was acquired by Juniper, the project was renamed to
>> Tungsten Fabric https://tungsten.io/ while Juniper kept the original
>> project's name for its commercial flavor. No guarantees of any product
>> quality at this point.
>>
>> Btw, no need for VXLAN nor BGP to the host. The alternatives proposed
>> above were well thought out and turned out to work in ways far more
>> efficient and practical if you zoom into the details.
>>
>> Best,
>> Robert.
>>
>>
>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra <hayabusa...@gmail.com>
>> wrote:
>>
>>>
>>> Dear BESS WG
>>>
>>> Is anyone aware of any IETF BGP development in the Data Center arena to
>>> extend BGP VXLAN EVPN to a blade server hypervisor, making the
>>> hypervisor part of the VXLAN fabric? This could eliminate the use of
>>> MLAG on the leaf switches and eliminate L2 completely from the VXLAN
>>> fabric, thereby maximizing stability.
>>>
>>> Kind regards,
>>>
>>> Gyan
> --

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
_______________________________________________
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess
