Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread adamv0025
> Michael Hare
> Sent: Tuesday, April 18, 2017 5:51 PM
> 
> Hello,
> 
> Sorry if this is an easy question already covered.  Does anyone on list
> have an understanding of what happens in the FIB in the following
> circumstance?
> 
> Simplified topology;
> * Router 1 RIB default points to reject
> * Router 1 RIB has default free feed from attached eBGP neighbor A
> * Router 1 RIB has default free feed from attached iBGP neighbor B (add-
> path)
> 
> I guess what I'm trying to understand, from the perspective of improving
> upstream convergence for outbound packets from our AS, if my default
> route pointed to a valid next hop of last resort am I likely to see an
> improvement (reduction) in blackholing on router 1 during topology
> changes?  The thought being that if Router 1 FIB invalidates next-hop A
> quickly (en masse) packets could match default route with valid next-hop
> while FIB is being re-programmed with more specifics via B?
> 
> I am aware of indirect-next-hop being default on MPC but my understanding
> is this will not work for directly connected eBGP peers?  So if session
> with A drops (BFD, link, whatever) are routes with next hop to neighbor A
> deprogrammed nearly atomically due to some level of indirection or are
> routes considered one by one until all routes (~600K) have been processed?
> I suspect the latter but perhaps looking for verification.
> 
Hmm, I'm not sure about the "indirect next-hops for everyone" proclaimed by
the documentation and folks here, but I'd be glad to be proven otherwise.
I just tried configuring a static route with a primary and a backup
(metric 100) next hop, and I don't see the backup next-hop flag or any
indirect NHs ("show krt" doesn't show anything either).
But even then, how useful is an indirect NH if it's not pointing to primary
and backup forwarding NHs?
Using "show route extensive" or "show krt" I've only ever seen INHs for
BGP routes or next hops.
So I think that having the default route point to a backup router won't help
with your convergence, because the BGP NH and the static-route NH are not
going to be linked together in a primary/backup fashion.

> I am aware of BGP PIC but not yet running 15.X [when internet is not in
> VRF].  I am willing to accept that if BGP PIC is the best approach to
> improving this scenario an upgrade is the best path forward.  I'd be
> curious to hear from anyone who is on 15.1 [or newer] and using MPC4 in
> terms of perceived code quality and MPC4 heap utilization before/after.
> 
Yes, BGP Edge Link Protection will definitely help (1M prefixes converged
in under 500 usec -yup, not even a millisecond).  But be aware of one catch
on Junos.
Since Junos gives iBGP and eBGP the same protocol preference (how stupid is
that, right?), just enabling the "protection" command can land you in
forwarding loops (Juniper forgets to mention this).  So in addition to
enabling PIC edge you have to improve the protocol preference for eBGP
routes (make them more preferred on the backup node), or enable
per-prefix/per-NH VPN labels to avoid the L3 lookup -not applicable in your
case.

Chained composite next hops were mentioned.
But this feature places another indirect next hop between the VPN label and
the NH label, so it's not applicable in your case.
What it addresses is the problem of too many VPN-label/NH-label pairs.
In other words, with this feature it doesn't matter how many VPNs (if
per-VPN labels are used), CEs (if per-CE VPN labels are used), or prefixes
in a VRF (if per-prefix VPN labels are used) are advertised by a particular
PE -all of them share just one indirect next hop.  So on a primary link
failure only one indirect NH per PE needs to be updated with the backup
path's NH label, and that covers all the VPNs advertised by that router;
all that matters then is how many PEs (i.e. unique NH labels) there are in
the network.
From the documentation:
On platforms containing only MPCs chained composite next hops are enabled by
default. 
With Junos OS Release 13.3, the support for chained composite next hops is
enhanced to automatically identify the underlying platform capability on
composite next hops at startup time, without relying on user configuration,
and to decide the next hop type (composite or indirect) to embed in the
Layer 3 VPN label. 
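On platforms and releases where it is not already on by default, the feature
is enabled along these lines (a sketch from memory -check the statement
against your release's documentation):

```
# Use chained composite next hops for ingress L3VPN routes
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn
```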


adam

netconsultings.com
::carrier-class solutions for the telecommunications industry::


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Dragan Jovicic
As mentioned, on MX Trio indirect-nh is enabled and can't be disabled.
You can check with the "show krt indirect-next-hop protocol-next-hop"
command (the 0x3 flag should mean it is enabled).
However, this was not the case in older Junos versions, where
indirect-next-hop was in fact not enabled and had to be enabled even on MX
MPCs (it escapes me when this changed, pre-13 or so).

If your uplink fails, with indirect-nh the change is almost instantaneous,
given your BGP next hop is unchanged, as only one pointer needs to be
rewritten (or you have equal-cost uplinks...). However, you still need the
composite-next-hop feature for L3VPN labeled traffic, and this is NOT
enabled by default (might be important if you run lots of routes in a vrf)...

If your BGP next hop changes and you have the backup routes in the RIB
(add-paths, advertise-external, multiple RRs), and you have them installed
in the FT (pre- or post-15.1), you still rely on failure detection of the
upstream BGP router or upstream link (even slower, but you could put
upstream links in the IGP).
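A sketch of the knobs mentioned for getting those extra paths into the RIB
(the group name and path-count are illustrative, not from the thread):

```
# iBGP add-path: accept and advertise multiple paths per prefix
set protocols bgp group IBGP family inet unicast add-path receive
set protocols bgp group IBGP family inet unicast add-path send path-count 2
# Alternative on border routers: also advertise the best external path
set protocols bgp group IBGP advertise-external
```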

There's also egress-protection for labeled traffic..

Before we implemented bgp pic/add-paths, we used multiple RRs and an iBGP
mesh in certain parts, and spread BGP partial feeds from multiple upstream
routers to at least minimize the time to update the FIB, as none of this
required any upgrade/maintenance.

If you find your FIB update time is terrible, bgp pic edge will definitely
help..

BR,


-Dragan

ccie/jncie





On Tue, Apr 18, 2017 at 10:07 PM, Vincent Bernat  wrote:

>  ❦ 18 April 2017 21:51 +0200, Raphael Mazelier:
>
> >> Is this the case for chassis MX104 and 80? Is your recommendation to run
> >> with indirect-next-hop on them as well?
> >>
> >
> > Correct me if I'm wrong but I think this has been the default on all MX
> > for a long time. There is no downside AFAIK.
>
> Documentation says:
>
> > By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
> > MX Series routers is enabled with indirectly connected next hops, and
> > this cannot be disabled using the no-indirect-next-hop statement.
> --
> Harp not on that string.
> -- William Shakespeare, "Henry VI"

Re: [j-nsp] Visio Stencil with MX104

2017-04-18 Thread Marcelo Carneiro
Hello Brad,

Did you see here:

https://www.juniper.net/us/en/products-services/icons-stencils/

Marcelo Carneiro
ICQ: 81992974
AIM: marcelo.carne...@gmail.com
Skype: marcelo.carne...@gmail.com
e-mail: marcelo.carne...@gmail.com
Website: http://bit.ly/149RhSw

On 18 April 2017 at 17:28, Brad Fleming  wrote:

> Does anyone have a Visio stencil that includes the MX104 series and the
> various hardware options (PSUs, REs, etc)? The ones most easily found on
> the Juniper website don't include that platform as near as I can tell.
>
> Thanks for any help or suggestions!


[j-nsp] Visio Stencil with MX104

2017-04-18 Thread Brad Fleming
Does anyone have a Visio stencil that includes the MX104 series and the
various hardware options (PSUs, REs, etc)? The ones most easily found on
the Juniper website don't include that platform as near as I can tell.

Thanks for any help or suggestions!


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Vincent Bernat
 ❦ 18 April 2017 21:51 +0200, Raphael Mazelier:

>> Is this the case for chassis MX104 and 80? Is your recommendation to run
>> with indirect-next-hop on them as well?
>>
>
> Correct me if I'm wrong but I think this has been the default on all MX
> for a long time. There is no downside AFAIK.

Documentation says:

> By default, the Junos Trio Modular Port Concentrator (MPC) chipset on
> MX Series routers is enabled with indirectly connected next hops, and
> this cannot be disabled using the no-indirect-next-hop statement.
-- 
Harp not on that string.
-- William Shakespeare, "Henry VI"

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Michael Hare
Agreeing with Raphael, my reading implies indirect-next-hop cannot be
disabled on Trio.  That said, I do explicitly configure it on all of our MX
gear.

You may also want to look at indirect-next-hop-change-acknowledgements; in
my case I use LFA and dynamic-rsvp-lsp and have it configured, acknowledging
(no pun intended) that it may be adding to my poor convergence woes without
BGP PIC.  FWIW I left krt-nexthop-ack-timeout at its default of 1s.

http://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/indirect-next-hop-change-acknowledgements-edit-routing-options-forwarding-options.html
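As a sketch, the statements being discussed would be configured roughly as
follows (explicit here even where they may already be the default on Trio;
verify the hierarchy against the linked documentation for your release):

```
# Explicit indirect next hops (default and non-removable on Trio MPCs)
set routing-options forwarding-table indirect-next-hop
# Make kernel FIB updates for indirect-next-hop changes acknowledgement-driven
set routing-options forwarding-table indirect-next-hop-change-acknowledgements
# (krt-nexthop-ack-timeout left at its 1s default, as noted above)
```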

-Michael

> -Original Message-
> From: Jared Mauch [mailto:ja...@puck.nether.net]
> Sent: Tuesday, April 18, 2017 2:48 PM
> To: Charlie Allom 
> Cc: Jared Mauch ; Michael Hare
> ; juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] improving global unicast convergence (with or without
> BGP-PIC)
> 
> On Tue, Apr 18, 2017 at 08:45:17PM +0100, Charlie Allom wrote:
> > On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch wrote:
> >
> > > You want to set indirect-next-hop in all use-cases.  This allows
> > > faster FIB convergence upon RIB events because all shared next-hops
> > > can be updated at once.
> > >
> > Is this the case for chassis MX104 and 80? Is your recommendation to run
> > with indirect-next-hop on them as well?
> >
> > ..or are there downsides on these smaller units?
> 
>   Yes, I would use this on all JunOS devices myself.
> 
>   - Jared
> 
> --
> Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
> clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Raphael Mazelier

> Is this the case for chassis MX104 and 80? Is your recommendation to run
> with indirect-next-hop on them as well?

Correct me if I'm wrong but I think this has been the default on all MX
for a long time. There is no downside AFAIK.




--
Raphael Mazelier


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Jared Mauch
On Tue, Apr 18, 2017 at 08:45:17PM +0100, Charlie Allom wrote:
> On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch  wrote:
> 
> > You want to set indirect-next-hop in all use-cases.  This allows
> > faster FIB convergence upon RIB events because all shared next-hops
> > can be updated at once.
> >
> Is this the case for chassis MX104 and 80? Is your recommendation to run
> with indirect-next-hop on them as well?
> 
> ..or are there downsides on these smaller units?

Yes, I would use this on all JunOS devices myself.

- Jared

-- 
Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Charlie Allom
On Tue, Apr 18, 2017 at 7:36 PM, Jared Mauch  wrote:

> You want to set indirect-next-hop in all use-cases.  This allows
> faster FIB convergence upon RIB events because all shared next-hops can be
> updated at once.
>
Is this the case for chassis MX104 and 80? Is your recommendation to run
with indirect-next-hop on them as well?

..or are there downsides on these smaller units?

Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Jared Mauch
On Tue, Apr 18, 2017 at 04:50:41PM +, Michael Hare wrote:
> Hello,
> 
> Sorry if this is an easy question already covered.  Does anyone on list have 
> an understanding of what happens in the FIB in the following circumstance?
> 
> Simplified topology;
> * Router 1 RIB default points to reject
> * Router 1 RIB has default free feed from attached eBGP neighbor A
> * Router 1 RIB has default free feed from attached iBGP neighbor B (add-path)
> 
> I guess what I'm trying to understand, from the perspective of improving 
> upstream convergence for outbound packets from our AS, if my default route 
> pointed to a valid next hop of last resort am I likely to see an improvement 
> (reduction) in blackholing on router 1 during topology changes?  The thought 
> being that if Router 1 FIB invalidates next-hop A quickly (en masse) packets 
> could match default route with valid next-hop while FIB is being 
> re-programmed with more specifics via B?
> 
> I am aware of indirect-next-hop being default on MPC but my understanding is 
> this will not work for directly connected eBGP peers?  So if session with A 
> drops (BFD, link, whatever) are routes with next hop to neighbor A 
> deprogrammed nearly atomically due to some level of indirection or are routes 
> considered one by one until all routes (~600K) have been processed?  I 
> suspect the latter but perhaps looking for verification.


You want to set indirect-next-hop in all use-cases.  This allows
faster FIB convergence upon RIB events because all shared next-hops can be 
updated
at once.

> I am aware of BGP PIC but not yet running 15.X [when internet is not in VRF]. 
>  I am willing to accept that if BGP PIC is the best approach to improving 
> this scenario an upgrade is the best path forward.  I'd be curious to hear 
> from anyone who is on 15.1 [or newer] and using MPC4 in terms of perceived 
> code quality and MPC4 heap utilization before/after.  

Since you are running a full RIB+FIB, you want to leverage PIC & INH to
get the full performance feasible from your hardware.

- Jared

-- 
Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.


[j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-18 Thread Michael Hare
Hello,

Sorry if this is an easy question already covered.  Does anyone on list have an 
understanding of what happens in the FIB in the following circumstance?

Simplified topology;
* Router 1 RIB default points to reject
* Router 1 RIB has default free feed from attached eBGP neighbor A
* Router 1 RIB has default free feed from attached iBGP neighbor B (add-path)

I guess what I'm trying to understand, from the perspective of improving 
upstream convergence for outbound packets from our AS, if my default route 
pointed to a valid next hop of last resort am I likely to see an improvement 
(reduction) in blackholing on router 1 during topology changes?  The thought 
being that if Router 1 FIB invalidates next-hop A quickly (en masse) packets 
could match default route with valid next-hop while FIB is being re-programmed 
with more specifics via B?

I am aware of indirect-next-hop being default on MPC but my understanding is 
this will not work for directly connected eBGP peers?  So if session with A 
drops (BFD, link, whatever) are routes with next hop to neighbor A deprogrammed 
nearly atomically due to some level of indirection or are routes considered one 
by one until all routes (~600K) have been processed?  I suspect the latter but 
perhaps looking for verification.

I am aware of BGP PIC but not yet running 15.X [when internet is not in VRF].  
I am willing to accept that if BGP PIC is the best approach to improving this 
scenario an upgrade is the best path forward.  I'd be curious to hear from 
anyone who is on 15.1 [or newer] and using MPC4 in terms of perceived code 
quality and MPC4 heap utilization before/after.  

Historically the AS I primarily manage has been default free (default pointing 
to reject), but I'm considering changing that to improve convergence (aware of 
the security considerations).  As for our "real" topology, adding up all the 
transit and peering we have our RIB is nearing 6M routes.  We are not doing 
internet in a VRF.  Our network has add-path 3 enabled.  In some cases our 
peers/upstreams are on unprotected transport that is longer than I'd like.  
Providing a ring and placing the router closer would be nice but not 
necessarily in budget.

I haven't yet approached our account team to ask about this.

Thanks in advance for any suggestions or pointers for further reading.

-Michael