Hi list,

I was wondering whether anyone here has been able to establish any real-world 
correlation between the relative complexity of a BGP import filter (a route-map 
with multiple match clauses referencing various prefix and AS-path lists to set 
metric/preference attributes on incoming prefixes) and the resulting impact on 
RP CPU, specifically the BGP Router process?

We make fairly extensive use of import route-map logic for outbound 
traffic-engineering purposes across our various transit providers, and I'm 
trying to determine whether this practice is responsible for driving RP CPU 
utilization significantly higher than it would otherwise be. I believe that 
route-maps are (largely?) processed in hardware on the 65K platform 
(Sup720-3BXL), but since each received prefix must still pass through the 
route-map logic until it hits a clause that matches and sets the associated 
attributes, I can't help but think the impact on the CPU can't plausibly be 
zero.

As an example, here is a somewhat 'standard' route-map we would typically 
apply (inbound) to a full-transit provider's BGP session:

route-map accept-carriername deny 1
 description Deny prefixes we originate ourselves
 match ip address prefix-list our-prefixes
!
route-map accept-carriername deny 2
 description Deny bogon routes
 match ip address prefix-list bogon-routes
!
route-map accept-carriername deny 5
 description Deny our customers' prefixes
 match ip address prefix-list customer-prefixes
!
route-map accept-carriername permit 9
 description Match Carrier Internals, Set LP=500
 match as-path 40
 set metric 300
 set local-preference 500
 set origin igp
!
route-map accept-carriername permit 10
 description Prefix Markdown
 match ip address prefix-list markdown-carriername
 set metric 300
 set local-preference 50
 set origin igp
!
route-map accept-carriername permit 11
 description AS Markdown
 match as-path 46
 set metric 300
 set local-preference 51
 set origin igp
!
route-map accept-carriername permit 15
 description Prefix Markup
 match ip address prefix-list markup-carriername
 set metric 300
 set local-preference 1001
 set origin igp
!
route-map accept-carriername permit 16
 description AS Markup
 match as-path 45
 set metric 300
 set local-preference 1000
 set origin igp
!
route-map accept-carriername permit 20
 description Match Remaining Transits, Set LP=200
 match as-path 41
 set metric 300
 set local-preference 200
 set origin igp
!

The first three sequence numbers (1, 2 and 5) simply deny any received prefixes 
that we ourselves announce (including customer space), as well as bogons.
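
For context, those deny clauses reference plain prefix-lists; a minimal sketch 
(the list names match the route-map above, but the prefixes here are 
documentation placeholders, not our real entries or a full bogon list) would 
look something like:

! placeholder prefixes -- not our actual address space or a complete bogon list
ip prefix-list our-prefixes seq 5 permit 192.0.2.0/24 le 32
!
ip prefix-list bogon-routes seq 5 permit 10.0.0.0/8 le 32
ip prefix-list bogon-routes seq 10 permit 172.16.0.0/12 le 32
ip prefix-list bogon-routes seq 15 permit 192.168.0.0/16 le 32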

The next sequence (9) matches only the transit provider's own internal 
announcements (origin AS) -- e.g. ^12345$ -- and sets the local-preference to 500.
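
As-path access-list 40 is just that single origin-only regex; with 12345 
standing in for the carrier's real ASN, it amounts to:

! 12345 is a placeholder for the carrier's actual ASN
ip as-path access-list 40 permit ^12345$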

The next two (seq 10 and 11) 'mark down' the received prefix, i.e. set a very 
low local-preference so that it would generally 'lose' the bestpath calculation 
against any other available path to the same destination, based on a match 
against a specific ASN regex (via an as-path access-list) or a specific prefix 
(via a prefix-list). The referenced as-path access-lists and prefix-lists 
generally contain relatively few (< 10) entries. These clauses only apply to 
paths heard through the transit provider, since the provider's own prefixes 
would have already matched seq 9, at which point the route-map terminates.
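
To illustrate, the markdown clauses reference small lists along these lines 
(the ASNs and prefixes below are placeholders, not our real entries):

! placeholder origin ASNs we want to depref via this carrier
ip as-path access-list 46 permit _64496$
ip as-path access-list 46 permit _64497$
!
! placeholder prefix we want to depref via this carrier
ip prefix-list markdown-carriername seq 5 permit 198.51.100.0/24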

Seq 15 and 16 are the exact reverse of 10 and 11 -- that is, they 'mark up' the 
prefix, i.e. set a very high local-preference that would generally 'win' the 
bestpath calculation over any other possible path. Again, the as-path 
access-list or prefix-list associated with these two sequences would contain 
relatively few entries.
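
The markup lists are the mirror image, again with placeholder entries:

! placeholder origin ASN / prefix we want to prefer via this carrier
ip as-path access-list 45 permit _64499$
!
ip prefix-list markup-carriername seq 5 permit 203.0.113.0/24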

Seq 20 is where the majority of all prefixes actually match. It sets the 
'default' metric and local-preference for prefixes heard in transit via that 
carrier (i.e. not the carrier's own originated routes) that did not match one 
of the earlier markdown/markup clauses.
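
As-path access-list 41 is effectively the catch-all for that session; assuming 
the same placeholder ASN 12345 for the carrier, it could be as simple as 
matching any path that starts with the carrier's AS (which, by this point in 
the route-map, is everything that remains):

! 12345 is a placeholder for the carrier's actual ASN
ip as-path access-list 41 permit ^12345_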

It's unfortunate that the clause which will match the majority of all received 
prefixes has to sit at the bottom of the route-map, but I don't see any other 
way to write this and still achieve what we're after from a traffic-engineering 
standpoint.

Any thoughts on whether this seems excessive, or whether it should or should 
not contribute significantly to elevated CPU consumption by the BGP Router 
process?

I understand there are a lot of different factors involved in BGP process and 
overall RP CPU load on the 65K platform, but we're scrutinizing everything we 
can think of at this point in an effort to keep the load down as much as is 
feasible. Any in-the-trenches experience anyone can share would be greatly 
appreciated!

Thanks,

-Jeremy

Jeremy Reid
Network Engineer, MojoHost