Hi Michael,

With multiple full tables from two or more eBGP providers plus iBGP peers, Your ASBR has to go through BGP best-path reselection first before it can start reprogramming the FIB. And the most specific route always wins, even if it is otherwise inferior, so BGP has to walk 100,000s of prefixes to find the new best path among the specific prefixes.

JUNOS INH helps at the FIB programming stage, not at the BGP best-path reselection stage. Additionally, recent JUNOS versions include improvements to FIB programming speed; please ask Your Juniper account team for details.
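
If You want to double-check that INH is actually in play on Your box, a minimal sketch (note that indirect-next-hop is enabled by default on current MX hardware, so the first line is illustrative rather than required):

set routing-options forwarding-table indirect-next-hop
show krt indirect-next-hop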

If You did not carry full tables over the iBGP peering, the picture would be simpler: when a full-table eBGP peer goes down, its invalidated prefixes only need to be removed, and the eBGP 0/0 becomes the best path. A rough sketch of that arrangement is below, but I guess You wouldn't want to run the network that way?
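
Purely for illustration, that could be done with an iBGP import policy that keeps only the default (policy and group names here are made up, not from Your config):

set policy-options policy-statement IBGP-DEFAULT-ONLY term default from route-filter 0.0.0.0/0 exact
set policy-options policy-statement IBGP-DEFAULT-ONLY term default then accept
set policy-options policy-statement IBGP-DEFAULT-ONLY term drop-rest then reject
set protocols bgp group IBGP import IBGP-DEFAULT-ONLY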

You can detect L2 failures by using either LACP with a single member link (assuming Your Metro Ethernet provider passes LACP PDUs), Ethernet OAM (assuming Your Metro Ethernet provider passes EOAM PDUs), or BFD. I would personally rate BFD as the tool of last resort because (a) BFD is a UDP/IP protocol, so many other failures, such as access-lists, can affect it; (b) even when BFD is down, the BGP session may still be up, whereas You want BFD to follow BGP; and (c) a BFD failure does not bring the interface down, it just tears down the BGP session, whereas an LACP or EOAM failure brings the logical interface down. Presumably someone will point to uBFD over LAG, but that still requires LACP, so LACP+uBFD is overkill for a simple network UNLESS You are really after microsecond convergence. Rough sketches of the three options are below.
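
Rough, untested sketches of the three options (the interface names, CFM names and peer address below are purely illustrative, not from Your network):

LACP with a single member link, so loss of LACP PDUs takes ae0 down:
set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/0 gigether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast

Ethernet OAM CFM towards the provider hand-off (down MEP on the access port):
set protocols oam ethernet connectivity-fault-management maintenance-domain MD-TRANSIT level 5
set protocols oam ethernet connectivity-fault-management maintenance-domain MD-TRANSIT maintenance-association MA-TRANSIT continuity-check interval 1s
set protocols oam ethernet connectivity-fault-management maintenance-domain MD-TRANSIT maintenance-association MA-TRANSIT mep 1 direction down
set protocols oam ethernet connectivity-fault-management maintenance-domain MD-TRANSIT maintenance-association MA-TRANSIT mep 1 interface ge-0/0/0.0

BFD tied to the eBGP session:
set protocols bgp group TRANSIT neighbor 192.0.2.1 bfd-liveness-detection minimum-interval 300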

When I said "JUNOS is different from IOS - BGP session will stay up until holdtime fires ..." - that is the default behavior; You don't need to configure anything for it.

HTH

Thx

Alex

On 19/04/2017 15:41, Michael Hare wrote:
While reading this thread I think I understand that updating the trie is 
expensive such that there is really no way to quickly promote use of the 
default route, so while I still may have use for that default (provider of last 
resort) it won't help with convergence.

In several locations there is an ethernet switch between myself and 
transit/peers.  So I don't always lose local link on end to end path failure 
and if transit networks were in IGP they wouldn't necessarily be withdrawn.  
FWIW I am currently doing NHS with transit subnets in iBGP (for ICMP 
monitoring).

Alex said: "JUNOS is different from IOS - BGP session will stay up until holdtime 
fires but the protocol NH will disappear, the routes will be recalculated and network 
will reconverge even if BGP session to gone peer is still up."

I think I see the same behavior as Alex using "routing-options resolution rib", 
correct?  This is something we are already doing iBGP-wise for our default and 
aggregate announcements that contain our NHS addrs, unless there is yet another 
feature I should be considering?

An enlightening part of this thread is that I didn't realize the difference 
between BGP PIC Core and BGP PIC Edge; the latter is seemingly what I'm most 
interested in and seemingly unobtainable at this time.  Our network is 
extremely simple in that we really have two ASBRs, so I don't think PIC Core 
would accomplish anything?

-Michael

-----Original Message-----
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
Of Alexander Arseniev
Sent: Wednesday, April 19, 2017 8:12 AM
To: adamv0...@netconsultings.com; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without
BGP-PIC)

Sorry, "Juniper’s “Provider Edge Link Protection for BGP” (Cisco’s BGP
PIC Edge)" is not there in 15.1R5:

[edit]
user@labrouter# set protocols bgp group IBGP family inet unicast protection
                                                                      ^
syntax error.

[edit]
user@labrouter# run show version
Hostname: labrouter
Model: mx240
Junos: 15.1R5.5


The "Juniper BGP PIC for inet" (in global table) is definitely there:

https://www.juniper.net/techpubs/en_US/junos/information-products/topic-collections/release-notes/15.1/topic-83366.html#jd0e6510

So, what feature in the global table were You surmising would help the OP?

HTH

Thx
Alex


On 19/04/2017 13:42, adamv0...@netconsultings.com wrote:
Wow, hold on a sec, we’re starting to mix things here,

Sorry maybe my bad, cause I’ve been using Cisco terminology,

Let me use juniper terminology:

I’d recommend using Juniper’s “Provider Edge Link Protection for BGP”
(Cisco’s BGP PIC Edge) - which in Junos, for some reason, was supported
only for eBGP sessions in a routing-instance; that changed as of 15.1.

That’s what the OP and I are talking about (at least I think that’s what
the OP is talking about).

Cmd:

set routing-instances radium protocols bgp group toCE2 family inet unicast protection

What you mentioned below is  Juniper’s “BGP PIC Edge” (Cisco’s BGP PIC
Core).

Cmd:

[edit routing-instances routing-instance-name routing-options]

user@host# set protect core

adam

netconsultings.com

::carrier-class solutions for the telecommunications industry::

From: Alexander Arseniev [mailto:arsen...@btinternet.com]
Sent: Wednesday, April 19, 2017 1:28 PM
To: adamv0...@netconsultings.com; 'Michael Hare'; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

Hi there,

BGP PIC for inet/inet6 is primarily for complete ASBR failure use case:

When the BGP Prefix Independent Convergence (PIC) feature is enabled
on a router, BGP installs to the Packet Forwarding Engine the second
best path in addition to the calculated best path to a destination.
The router uses this backup path when an egress router fails in a
network and drastically reduces the outage time. You can enable this
feature to reduce the network downtime if the egress router fails.

https://www.juniper.net/techpubs/en_US/junos/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html

The original topic was for eBGP peer failure use case.

I admit You could make BGP PIC work for the original topic scenario if You
don't do eBGP->iBGP NHS on the ASBR and instead inject the eBGP peer interface
subnet into Your IGP and into LDP/RSVP (if LDP/RSVP are in use). A rough sketch
of that is below.
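
For example (assuming OSPF+LDP; the interface name and peering subnet below are made up for illustration):

set protocols ospf area 0.0.0.0 interface ge-0/0/1.0 passive
set policy-options policy-statement LDP-EGRESS term peer-subnet from route-filter 198.51.100.0/30 exact
set policy-options policy-statement LDP-EGRESS term peer-subnet then accept
set protocols ldp egress-policy LDP-EGRESS

(Careful: ldp egress-policy replaces the default behavior of advertising only the loopback FEC, so the loopback would also need to be matched in that policy.)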

HTH

Thx
Alex

On 19/04/2017 13:21, adamv0...@netconsultings.com wrote:

     I see, so it’s sort of a “half way through” solution, where the
     convergence still needs to be done in the CP, and then the DP
     programming is going to be fast because just one INH needs to be
     reprogrammed.

     Not sure I‘m convinced though; I would rather recommend upgrading to
     15.1 to get the PIC capability for inet0.

     adam

     netconsultings.com

     ::carrier-class solutions for the telecommunications industry::

     From: Alexander Arseniev [mailto:arsen...@btinternet.com]
     Sent: Wednesday, April 19, 2017 1:09 PM
     To: adamv0...@netconsultings.com; 'Michael Hare'; juniper-nsp@puck.nether.net
     Subject: Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

     Hi there,

     The benefit is that the value of an INH mapped to 100,000s of prefixes
     can be quickly rewritten into another value - that of a different INH
     pointing to another iBGP peer.

     Without INH, the forwarding NH value of EACH and EVERY prefix is
     rewritten individually, which takes a longer period of time.

     Your example of a "correctly programmed INH" with LFA shows 2
     preprogrammed forwarding NHs, which is orthogonal to the original
     topic of this discussion.

     An INH can be preprogrammed with one or multiple forwarding NHs,
     and to achieve "multiple forwarding NHs" preprogramming, one uses
     ECMP, (r)LFA, RSVP FRR, etc.

     HTH

     Thx

     Alex

     On 19/04/2017 12:51, adamv0...@netconsultings.com wrote:

             Of Alexander Arseniev

             Sent: Wednesday, April 19, 2017 11:51 AM

             - then 203.0.113.0 will appear as "indirect" and You can have the usual INH benefits. Example from my lab:



             show krt indirect-next-hop | find "203.0.113."



             Indirect Nexthop:

             Index: 1048592 Protocol next-hop address: 203.0.113.0

                 RIB Table: inet.0

                 Policy Version: 1                     References: 1

                 Locks: 3                              0x9e54f70

                 Flags: 0x2

                 INH Session ID: 0x185

                 INH Version ID: 0

                 Ref RIB Table: unknown

                       Next hop: #0 0.0.0.0.0.0 via ae4.100

                       Session Id: 0x182

                     IGP FRR Interesting proto count : 1

                     Chain IGP FRR Node Num          : 1

                        IGP Resolver node(hex)       : 0xb892f54

                         IGP Route handle(hex)        : 0x9dc8e14      IGP rt_entry protocol        : Static

                         IGP Actual Route handle(hex) : 0x0            IGP Actual rt_entry protocol : Any



             Disclaimer - I haven't tested the actual convergence with this setup.



          But what good is an indirect next-hop if it's pointing to just a single forwarding next-hop??



          Example of correctly programmed backup NHs for a BGP route:

         ...

         #Multipath Preference: 255

          Next hop: ELNH Address 0x585e1440 weight 0x1, selected        <<< eBGP primary path
          Next hop: ELNH Address 0x370c8698 weight 0x4000               <<< PIC backup via iBGP

            Indirect next hop: 9550000 1048589 INH Session ID: 0x605

               Next hop: 10.0.20.1 via ae1.0 weight 0x1 <<< IGP primary path

               Next hop: 10.0.10.1 via ae0.0 weight 0xf000 <<< LFA backup path



         -I doubt you can get this with a static default route



          For the above you need to allow for multiple NHs to be programmed into the FIB using:

          set policy-options policy-statement ECMP then load-balance per-packet
          set routing-options forwarding-table export ECMP



         adam



         netconsultings.com

         ::carrier-class solutions for the telecommunications industry::



_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
