On 11/12/2013 18:12, Gert Doering wrote:
> I don't think I'd ever recommend that, except to a competitor.
I'll be the first to admit that not all of this stuff is relevant to all networks. E.g. for the INEX transit ASN, AS2128, I use a full mesh between the various BGP boxes and no communities, because there is no real point in doing anything else. For other networks I use RRs because it's generally a better design approach: it creates a much cleaner networking infrastructure and it allows you to build up really powerful routing policy. This may or may not matter to your network - I don't know.

> Having a few 100 external(!) LSAs in an IGP won't make any of them sweat,
> not even a stone-age cisco IOS 11.0 OSPF implementation on a 2500.

Mostly no argument there when everything is running smoothly, but from a design perspective it is a lot cleaner to handle this stuff in iBGP. You get a bunch of advantages, including things like continuous edge link flaps not trashing your entire network (before you pooh-pooh this, I have had to deal with the consequences of it on a sup720-based core with small numbers of prefixes in the IGP, and it's not pretty), being able to scale your network arbitrarily large, consistently controlling distribution of prefixes around the place in a way that you just can't do with an IGP, implementing network-wide RTBH infrastructure, etc.

> OTOH, introducing more complexity by bringing in extra routing intelligence,
> *and* putting these into a VM (aka "your VM infrastructure has a hickup,
> all your network is down, have fun bringing it back up") is...

If your RR can run on PC hardware, it's mostly a religious question whether you should run this sort of thing on bare metal or on a VM. The performance loss from VM overhead is negligible, but the admin gain can be large. I run my RRs on ESXi because that provides better console access when operating-system stuff goes wrong or connectivity is lost, as happens occasionally. Btw, I'm not suggesting running RRs on your local stack of teh-cloud boxes, because that would be silly.
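As an illustration of the RTBH point, here's a rough sketch of how it tends to look in IOS-style config. All specifics are made up for the example: AS 64500, the 65535:666 trigger community, the 192.0.2.1 discard next-hop and the victim address are illustrative, not anyone's actual deployment.

```
! Trigger router: originate the victim /32 tagged for blackholing
ip route 203.0.113.55 255.255.255.255 Null0 tag 666
!
route-map RTBH-TRIGGER permit 10
 match tag 666
 set community 65535:666
 set origin igp
!
router bgp 64500
 redistribute static route-map RTBH-TRIGGER
!
! Every edge/core router: route the discard address to Null0, then
! rewrite the next-hop of anything carrying the blackhole community
ip route 192.0.2.1 255.255.255.255 Null0
!
ip community-list standard RTBH permit 65535:666
!
route-map IBGP-IN permit 10
 match community RTBH
 set ip next-hop 192.0.2.1
route-map IBGP-IN permit 20
```

The useful property is exactly the one argued for above: because the trigger rides on iBGP, one static route on one box drops traffic network-wide within a BGP update interval, with no per-router touch and nothing leaking into the IGP.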
All my virtualised RR boxes are running on dedicated hardware, with a single RR VM on each, and with each having redundant network connectivity into the core. I.e. they are treated as dedicated routers, with the VM layer being used for OOB management. This works nicely for me; YMMV.

> Nick, you've been doing IXPs for too long. This is not good for you.

As you mention it, there is an IXP angle here. Not redistributing your connected interfaces, and setting next-hop-self to loopback on your iBGP sessions, means that you don't carry your IXP prefix in your iBGP or IGP table. Your IXP will appreciate it if you do this. Also, if you have multiple connections into the IXP fabric, redistributing connected into your IGP is a very dumb thing to do and is guaranteed to cause connectivity loss in certain situations (e.g. IXP maintenance).

Nick

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
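For the IXP point, the edge-router side of this is a few lines of IOS-style config. Again a sketch with invented details (AS 64500, iBGP peer 10.0.0.2), not a drop-in template:

```
router bgp 64500
 neighbor 10.0.0.2 remote-as 64500
 neighbor 10.0.0.2 update-source Loopback0
 neighbor 10.0.0.2 next-hop-self
!
! Deliberately absent: "redistribute connected" and any
! "network <ixp-prefix>" statement, in both BGP and the IGP.
! With next-hop-self, iBGP peers resolve routes learned at the IXP
! via your loopback, so the IXP peering LAN never needs to appear
! in your iBGP or IGP tables at all.
```

This is also what saves you during IXP maintenance with multiple fabric connections: when one IXP port goes down, your interior routing never knew about the peering LAN in the first place, so there is no stale IGP route to blackhole traffic into.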