Hi Dennis, I have read your proposal and have one major question ...
It says: "It is out of the scope of this document to describe how the SID lists are computed and programmed at the source nodes. As an example, a centralized controller could be the source of the Prefix SID allocation. The controller could continuously collect the state of each domain (e.g. BGP-LS)."

How do you distribute IP reachability across domains - say between L1 and L2 over C? Since you mentioned the DC-interconnect use case, is the plan to achieve any-to-any compute-node-to-compute-node interconnect via the controller? How would tenants (in their respective VMs or LXCs) practically tell the "controller" that they need to talk L1 to L2 in order to get the proper SID sequence?

Note also that it is exactly in the distribution of reachability and forwarding anchors where the crux of scalable multi-domain interconnect resides. So far, AFAIK, both Contrail and LISP have solved it. Likewise, another alternative is provided in the BGP Vector Routing proposal: https://tools.ietf.org/html/draft-patel-raszuk-bgp-vector-routing-00

Of course, having the controller handle reachability and SID distribution is an option. However, for the proposal to stand solid, I am afraid more questions need to be answered: in the inter-provider case, who controls such an oracle, how do end points signal the need to reach remote destinations, etc.

Many thx,
R.

PS. For easier reading I recommend not calling both the core domain and the leaf node "C" :)

On Mon, Jul 20, 2015 at 9:15 AM, Dennis Cai (dcai) <[email protected]> wrote:
> Hi, All,
>
> This draft is about an application of Segment Routing to scale the network
> to support hundreds of thousands of network nodes, and tens of millions of
> physical underlay endpoints. Request you all to review the draft and
> provide your valuable feedback.
>
> We have requested a speaking slot for this draft.
>
> Thanks
> Dennis
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> Sent: Monday, July 20, 2015 8:26 AM
> To: Rob Shakir; Wim Henderickx; Stefano Previdi (sprevidi); Francis
> Ferguson; Steven Lin; Tim LaBerge; Luay Jalil; Dave Cooper; Clarence
> Filsfils (cfilsfil); Dennis Cai (dcai); Luay Jalil; Stefano Previdi
> (sprevidi); Bruno Decraene; Dave Cooper; Clarence Filsfils (cfilsfil);
> Francis Ferguson; Bruno Decraene; Tim Laberge; Steven Lin; Wim Henderickx;
> Rob Shakir; Dennis Cai (dcai)
> Subject: New Version Notification for
> draft-filsfils-spring-large-scale-interconnect-00.txt
>
> A new version of I-D, draft-filsfils-spring-large-scale-interconnect-00.txt
> has been successfully submitted by Dennis Cai and posted to the IETF
> repository.
>
> Name: draft-filsfils-spring-large-scale-interconnect
> Revision: 00
> Title: Interconnecting Millions Of Endpoints With Segment Routing
> Document date: 2015-07-19
> Group: Individual Submission
> Pages: 10
> URL: https://www.ietf.org/internet-drafts/draft-filsfils-spring-large-scale-interconnect-00.txt
> Status: https://datatracker.ietf.org/doc/draft-filsfils-spring-large-scale-interconnect/
> Htmlized: https://tools.ietf.org/html/draft-filsfils-spring-large-scale-interconnect-00
>
> Abstract:
> This document describes an application of Segment Routing to scale
> the network to support hundreds of thousands of network nodes, and
> tens of millions of physical underlay endpoints. This use-case can be
> applied to the interconnection of massive-scale DC's and/or large
> aggregation networks. Forwarding tables of midpoint and leaf nodes
> only require a few tens of thousands of entries.
>
> Please note that it may take a couple of minutes from the time of
> submission until the htmlized version and diff are available at
> tools.ietf.org.
>
> The IETF Secretariat
>
> _______________________________________________
> spring mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/spring
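PPS. To make the SID-sequence question above concrete, here is a toy sketch of what I understand the controller would have to do: stitch per-domain Prefix SIDs into one end-to-end label stack for an L1-to-L2 path over C. The SID values, node names, and border-node pairs below are purely my own invented assumptions for illustration, not anything specified in the draft:

```python
# Toy sketch (my assumptions, not the draft's mechanism): a controller
# that has learned each domain's Prefix-SID allocations (e.g. via BGP-LS)
# and composes an end-to-end SID list across domains L1 -> C -> L2.

# Hypothetical per-domain SID tables: domain -> {node: Prefix SID}.
# X1 is the L1/C border node, X2 the C/L2 border node.
SIDS = {
    "L1": {"A": 16001, "X1": 16101},
    "C":  {"X1": 17101, "X2": 17102},
    "L2": {"X2": 18102, "Z": 18002},
}

def sid_list(src_domain, dst_node, borders):
    """Compose a SID list: push the SID of each egress border node as
    known in the current domain, then the destination's Prefix SID in
    its own domain. `borders` is a list of (border_node, next_domain)."""
    stack = []
    domain = src_domain
    for border, next_domain in borders:
        stack.append(SIDS[domain][border])  # reach the border node
        domain = next_domain                # cross into the next domain
    stack.append(SIDS[domain][dst_node])    # finally, the destination
    return stack

# Path from a node in L1 to node Z in L2, via borders X1 and X2:
print(sid_list("L1", "Z", [("X1", "C"), ("X2", "L2")]))
# -> [16101, 17102, 18002]
```

Even this trivial sketch shows where my questions bite: the controller must already hold reachability for Z in L2 and for the border anchors, and something must trigger it to compute and program this stack at the source - which is exactly the part the draft leaves out of scope.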
_______________________________________________
spring mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/spring
