Hi Linda,

EVPN mass MAC withdrawal is a BGP PIC-like optimization. With BGP PIC, the FIB is
organized hierarchically, so on a failure only the shared next-hop object is
updated rather than the next hop of every dependent prefix. EVPN applies the same
indirection to combat slow convergence in MSDC (massively scalable data center)
MAC-VRF use cases: rather than withdrawing individual MAC routes, the PE withdraws
the single Ethernet A-D per-ES route, and every MAC pointing to that Ethernet
Segment is marked invalid and purged from the MAC-VRF on the remote PEs. A rough
sketch of the indirection follows below.
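To make the shared-object indirection concrete, here is a minimal Python sketch;
the EthernetSegment and MacVrf classes and all names are illustrative, not any
vendor's implementation:

```python
# Minimal sketch of EVPN mass MAC withdrawal (illustrative names only).
# Each MAC entry points at a shared EthernetSegment object instead of
# carrying its own next hop, so one per-ES withdrawal covers them all.

class EthernetSegment:
    def __init__(self, esi, next_hops):
        self.esi = esi
        self.next_hops = next_hops  # resolved via the Ethernet A-D per-ES route
        self.valid = True

class MacVrf:
    def __init__(self):
        self.table = {}  # MAC address -> EthernetSegment

    def learn(self, mac, es):
        self.table[mac] = es

    def lookup(self, mac):
        es = self.table.get(mac)
        if es is None or not es.valid:
            return None  # entry purged or awaiting re-advertisement
        return es.next_hops

    def mass_withdraw(self, es):
        # One control-plane event: the Ethernet A-D per-ES route is
        # withdrawn, so every MAC behind this ES goes invalid at once.
        # No per-MAC BGP withdraws are needed.
        es.valid = False
        self.table = {m: s for m, s in self.table.items() if s.valid}

# Usage: 100,000 MACs behind one ES converge on a single withdrawal.
es1 = EthernetSegment("00:11:22:33:44:55:66:77:88:99", ["PE1", "PE2"])
vrf = MacVrf()
for i in range(100_000):
    vrf.learn(f"00:00:00:{i >> 16:02x}:{(i >> 8) & 0xff:02x}:{i & 0xff:02x}", es1)
vrf.mass_withdraw(es1)
assert vrf.lookup("00:00:00:00:00:00") is None
```

The point is that the control-plane cost of the failure is one route withdrawal
instead of one withdrawal per MAC address.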
In a DC NVO Clos fabric with n-way ECMP and a scaled-out spine, whether the
overlay is non-MPLS EVPN per RFC 8365 or MPLS EVPN per RFC 7432, I have not seen
the mass-withdraw optimized-convergence feature cause any outage or degradation
of service (the ECMP sketch at the bottom of this mail illustrates why single
failures stay local in such fabrics).

Kind Regards,

Gyan

On Wed, Jun 29, 2022 at 9:21 PM Jeff Tantsura <[email protected]> wrote:

> Linda,
>
> EVPN mass withdraw is an EVPN (as the name suggests) technology and to my
> memory is supported by all implementations.
>
> With regard to RFC 7938 (and to rephrase Robert): in the presence of multiple
> equally preferred routes towards a destination, the failure of one of the
> routes need not be propagated downstream, since the destination is still
> reachable. If you happen to use BGP bandwidth communities, then there is
> going to be an update every time the cumulative bandwidth towards the
> destination changes.
>
> Hope this helps
>
> Cheers,
> Jeff
>
> On Jun 29, 2022, at 15:03, Robert Raszuk <[email protected]> wrote:
>
> Hi Linda,
>
> The most important premise behind using BGP in data center fabrics (not that
> this is a good idea in the vast majority of deployments) is the critical
> assumption that multipath eBGP is in place.
>
> A single link or switch failure is then really a local event and does not
> need to be reflected in any protocol action.
>
> Otherwise the use of BGP would be a fatal idea when the number of underlay
> routes is relatively high.
>
> With that, your email is a bit confusing: you quote RFC 7938, which talks
> about how to construct the underlay, yet suddenly you bring in EVPN, which
> is an overlay. You could more plausibly bring up the BGP aggregate-withdraw
> idea, but while that is applicable to WANs, correctly built DCs should have
> no need for it.
>
> Thx,
> R.
>
> On Wed, Jun 29, 2022 at 11:49 PM Linda Dunbar <[email protected]>
> wrote:
>
>> BGP experts:
>>
>> Section 3.2 of
>> https://datatracker.ietf.org/doc/draft-ietf-rtgwg-net2cloud-problem-statement/
>> describes a problem of a Cloud DC infrastructure failure that may lead to
>> massive route changes.
>>
>> As described in RFC 7938, a Cloud DC running BGP might not have an IGP to
>> route around link/node failures within the AS. Fiber cuts are not uncommon
>> within Cloud DCs or between sites. Sometimes an entire cloud data center
>> goes dark for a variety of reasons, such as too many changes and updates
>> at once, changes outside of maintenance windows, cybersecurity attacks,
>> cooling failures, insufficient backup power, etc. When those events
>> happen, massive numbers of routes need to be changed.
>>
>> The large number of routes switching over to another site can also cause
>> overloading that triggers more failures.
>>
>> In addition, the routes (IP addresses) in a Cloud DC cannot be aggregated
>> nicely, triggering a very large number of BGP UPDATE messages when a
>> failure occurs.
>>
>> EVPN [RFC7432] defined a mass-withdraw mechanism to signal a large number
>> of route changes to remote PE nodes.
>>
>> Is mass withdraw supported by all networks?
>>
>> Thank you
>>
>> Linda Dunbar
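To illustrate Robert's multipath point in code, here is a minimal Python sketch
(device names are made up, and real BGP implementations are of course far more
involved) of how an ECMP group absorbs a single-path failure locally:

```python
# Minimal sketch: with multipath eBGP, a prefix resolves to a set of
# equal-cost next hops, so losing one path is repaired locally and no
# BGP WITHDRAW has to be propagated downstream.

class EcmpGroup:
    def __init__(self, next_hops):
        self.next_hops = set(next_hops)

    def link_down(self, next_hop):
        # Remove the failed path from the local ECMP set.
        self.next_hops.discard(next_hop)
        # A withdraw is needed only if the destination became unreachable.
        return len(self.next_hops) == 0

# Leaf with 4-way ECMP towards the spines.
group = EcmpGroup(["spine1", "spine2", "spine3", "spine4"])

# One spine link fails: the repair is local, no UPDATE leaves the box.
assert group.link_down("spine2") is False

# Only when the last path is gone does the failure become a BGP event.
for spine in ["spine1", "spine3", "spine4"]:
    must_withdraw = group.link_down(spine)
print("withdraw needed:", must_withdraw)  # True after the last path fails
```

This is also why Jeff's caveat about bandwidth communities matters: with plain
ECMP nothing changes downstream, but if weights are derived from cumulative
bandwidth, each path loss does trigger an update.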
--

*Gyan Mishra*

*Network Solutions Architect*

*Email [email protected]*

*M 301 502-1347*
_______________________________________________
rtgwg mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/rtgwg
