Re: [j-nsp] Optimizing the FIB on MX

2016-02-24 Thread Timur Maryin

Hi Alexander,


On 18-Feb-16 10:30, Alexander Marhold wrote:


Why do you need to enable MPLS and LDP for PIC ?

IMHO this is a documentation error , or do I miss something ?



Considering you refer to this doc:
http://www.juniper.net/techpubs/en_US/junos15.1/topics/task/configuration/bgp-configuring-bgp-pic-for-inet.html

I think there is a mistake.
Besides, the doc suggests configuring "protect core" within the VRF, which 
sounds wrong to me.
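For reference, and only as a sketch with a hypothetical instance name (I have 
not labbed this), the two placements being debated would be:

set routing-instances INET-VRF routing-options protect core   <- what the doc shows
set routing-options protect core                              <- global placement for inet.0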



However, I do not think this approach works in the setup being discussed, 
since the doc also says:

===
 Junos OS installs the backup path for the indirect next hop on the 
Routing Engine and also provides this route to the Packet Forwarding 
Engine and IGP. When an IGP loses reachability to a prefix with one or 
more routes, it signals to the Routing Engine with a single message 
prior to updating the routing tables.

===

and I do not expect that you run an IGP with your upstreams?




Regarding your suggestion of using it in a routing instance on versions
< 15.1: I am not sure whether that works, as the documentation says it only
works for VPNv4 BGP routes.

The documentation says:
"Before you begin:

 Configure LDP.
 Configure an IGP, either OSPF or IS-IS.
 Configure a Layer 3 VPN.
 Configure multiprotocol BGP for either an IPv4 VPN or an IPv6 VPN.
<-- this seems to be a restriction regarding your proposed
solution
"

Any more info available on that?



Well, the suggestion from Adam for versions < 15.1 was to put the internet 
table into a VRF, if I recall correctly.

If such a huge step is made, why not run LDP + BGP with inet-vpn internally? :)
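For what it's worth, a minimal sketch of that idea (all names and values are 
made up, and this is untested):

set routing-instances INET instance-type vrf
set routing-instances INET route-distinguisher 65000:1
set routing-instances INET vrf-target target:65000:1
set protocols mpls interface all
set protocols ldp interface all
set protocols bgp group internal family inet-vpn unicast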

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Saku Ytti
On 22 February 2016 at 21:56, Daniel Verlouw  wrote:

Hey Daniel,

> while a bit cumbersome, that's possible today, using something along
> the lines of:
>
> set policy-options policy-statement direct-per-VRF from protocol direct
> set policy-options policy-statement direct-per-VRF then label-allocation per-table
>
> set policy-options policy-statement per-CE then label-allocation per-nexthop
> set policy-options policy-statement per-CE then accept
>
> set routing-instances <name> vrf-table-label                          # set default to per-table
> set routing-instances <name> routing-options label allocation per-CE  # override to per-CE by default
> set routing-instances <name> vrf-export [ direct-per-VRF <existing VRF export policies> ]  # revert to per-table for directly connected

Thanks, wasn't aware of this.

-- 
  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Daniel Verlouw
Hi,

On Mon, Feb 22, 2016 at 6:53 PM, Saku Ytti  wrote:
> On pre-Trio it would disable egress filters, but on Trio it won't.

yup, Trio always uses the egress protocol family, whereas DPC would use
the ingress one (i.e. MPLS) when vrf-table-label is used.
One more reason to love Trio :-)

> I'd really want per CE labels + aggregate label for connected network.

while a bit cumbersome, that's possible today, using something along
the lines of:

set policy-options policy-statement direct-per-VRF from protocol direct
set policy-options policy-statement direct-per-VRF then label-allocation per-table

set policy-options policy-statement per-CE then label-allocation per-nexthop
set policy-options policy-statement per-CE then accept

set routing-instances <name> vrf-table-label                          # set default to per-table
set routing-instances <name> routing-options label allocation per-CE  # override to per-CE by default
set routing-instances <name> vrf-export [ direct-per-VRF <existing VRF export policies> ]  # revert to per-table for directly connected


Cheers,
Daniel.


Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Saku Ytti
On 22 February 2016 at 18:18, Adam Vitkovsky  wrote:
> My understanding is that using "per-prefix-label" will disable a lot of 
> services that depend on vrf-table-label (like FW filters for example)

On pre-Trio it would disable egress filters, but on Trio it won't.

I'd really want per CE labels + aggregate label for connected network.

-- 
  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Adam Vitkovsky
> From: Dragan Jovicic [mailto:dragan...@gmail.com]
> Sent: Monday, February 22, 2016 2:23 PM
>
> I guess one solution would be to not use vrf-table-label at all, but default
> per-ce prefix allocation (needing vt- interface...).
>
>
My understanding is that using "per-prefix-label" will disable a lot of 
services that depend on vrf-table-label (like FW filters for example)

Anyway, this part of the documentation is indeed confusing, and I guess it 
needs some further lab time.

For Provider Edge Link Protection for BGP Labeled Unicast Paths:
To minimize packet loss when the protected path is down, also use the 
per-prefix-label statement at the [edit routing-instances instance-name 
protocols bgp family inet labeled-unicast] hierarchy level. Set this statement 
on every PE router within the AS containing the protected path.

For Provider Edge Link Protection in Layer 3 VPNs:
Note: The option vrf-table-label must be configured under the 
[routing-instances instance-name] hierarchy for the routers that have protected 
PE-CE links. This applies to Junos OS Releases 12.3 through 13.2 inclusive.



adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

This is an email from Gamma Telecom Ltd, trading as “Gamma”. The contents of 
this email are confidential to the ordinary user of the email address to which 
it was addressed. This email is not intended to create any legal relationship. 
No one else may place any reliance upon it, or copy or forward all or any of it 
in any form (unless otherwise notified). If you receive this email in error, 
please accept our apologies, we would be obliged if you would telephone our 
postmaster on +44 (0) 808 178 9652 or email postmas...@gamma.co.uk

Gamma Telecom Limited, a company incorporated in England and Wales, with 
limited liability, with registered number 04340834, and whose registered office 
is at 5 Fleet Place London EC4M 7RD and whose principal place of business is at 
Kings House, Kings Road West, Newbury, Berkshire, RG14 5BY.



Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Dragan Jovicic
I guess one solution would be to not use vrf-table-label at all, but
default per-ce prefix allocation (needing vt- interface...).


On Mon, Feb 22, 2016 at 3:13 PM, Adam Vitkovsky 
wrote:

> > Raphael Mazelier
> > Sent: Monday, February 22, 2016 1:46 PM
> >
> >
> > Le 20/02/2016 16:16, Raphael Mazelier a écrit :
> > >
> > >
> > > Le 19/02/2016 14:08, Adam Vitkovsky a écrit :
> > >
> > >
> > > Thanks for the clarification.
> > >
> >
> > And again, the O'Reilly book "MPLS in the SDN Era" has three great
> > chapters at the end specific to these problems ("fast restoration").
> >
> So what does the book recommend to overcome the transient loop in Junos?
> Do they propose adjusting the protocol preference for eBGP prefixes to
> mimic the IOS behaviour, or some other workaround?
>
>
> adam
>

Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Adam Vitkovsky
> Raphael Mazelier
> Sent: Monday, February 22, 2016 1:46 PM
>
>
> Le 20/02/2016 16:16, Raphael Mazelier a écrit :
> >
> >
> > Le 19/02/2016 14:08, Adam Vitkovsky a écrit :
> >
> >
> > Thanks for the clarification.
> >
>
> And again, the O'Reilly book "MPLS in the SDN Era" has three great chapters
> at the end specific to these problems ("fast restoration").
>
So what does the book recommend to overcome the transient loop in Junos?
Do they propose adjusting the protocol preference for eBGP prefixes to mimic 
the IOS behaviour, or some other workaround?


adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-22 Thread Raphael Mazelier



Le 20/02/2016 16:16, Raphael Mazelier a écrit :



Le 19/02/2016 14:08, Adam Vitkovsky a écrit :


Thanks for the clarification.



And again, the O'Reilly book "MPLS in the SDN Era" has three great 
chapters at the end specific to these problems ("fast restoration").


--
Raphael Mazelier


Re: [j-nsp] Optimizing the FIB on MX

2016-02-21 Thread Mark Tinka


On 18/Feb/16 16:12, Saku Ytti wrote:

> Define underpowered?
>
> MX80 has 8572, also sported by platforms such as sup7, sup2t, nexus7k,
> me3600x, sfm3-12@alu
> RSP720, EX8200 RE have even slower spec cpu 8548
>
> MX104 has faster cpu than any of these, P5021. Yet RSP720 runs circles
> around MX104 in terms of BGP performance.
>
> I'd say it is underpowered for JunOS (All PPC's are, but is that HW or
> SW issue, that's debatable), but it really can't be considered
> particularly slow cpu in this market generally, especially during its
> launch year.

The MX104's RE is faster than the MX80's, but still extremely slow for
daily use :-(.

As you say, Junos suffers quite a bit on PPC platforms.

Mark.


Re: [j-nsp] Optimizing the FIB on MX

2016-02-21 Thread Adam Vitkovsky
> Dragan Jovicic [mailto:dragan...@gmail.com]
> Sent: Saturday, February 20, 2016 1:28 PM
>
> I had a peek at JunOS documentation; regarding "protect core" and newer
> unicast/labeled "protection" commands - they seem to do the same thing.
I think the "protection" knob (for L3VPN) is actually older than "protect core".

> Both methods rely on having additional backup route(s) in table and being
> installed into FIB, albeit with greater weight.
> I see one issue with convergence in particular when using a "default" vrf-
> table-label.
> In case of PE1-CE1 link failure in L3VPN, PE1 must reroute packets over to
> PE2, as all other routers still wait for BGP control plane convergence (as
> opposed to nexthop address tracking in case of a node failure).
> Because PE2 takes a peek at IP packet due to vrf-table-label, it still might 
> see
> best route over at the prefered PE1 site, hence a possible transient loop.
>
Yes, you are right; so in order for the failover to work you should decrease the 
eBGP protocol preference on PE2.
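Something along these lines (instance and group names and the value are 
hypothetical; in Junos a lower preference wins, and both eBGP and iBGP default 
to 170):

set routing-instances INET-VRF protocols bgp group CE preference 165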

adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-20 Thread Raphael Mazelier



Le 19/02/2016 14:08, Adam Vitkovsky a écrit :


No.
Multipath between iBGP paths would be similar to 'protect core'.
Multipath between eBGP and iBGP paths would be similar to 'unicast protection'.
Whereas in protection mode one path is active and the other is a backup, in 
multipath both paths are active.



Thanks for the clarification.


The "advertise-external" would be needed in both cases to provide the 
alternate/backup path



Indeed.


As you pointed out the biggest pro is the flexibility this setup provides, you 
can have VRF for Peers and another for Transits.
But I'm not aware of any performance issues, certainly the convergence is 
faster with BGP-PIC.



I will try to move/split the DMZ into a separate VRF.
Seems to be a fun project, especially the migration part.

--
Raphael Mazelier


Re: [j-nsp] Optimizing the FIB on MX

2016-02-20 Thread Dragan Jovicic
I had a peek at JunOS documentation; regarding "protect core" and newer
unicast/labeled "protection" commands - they seem to do the same thing.
Both methods rely on having additional backup route(s) in table and being
installed into FIB, albeit with greater weight.

I see one issue with convergence in particular when using a "default"
vrf-table-label.
In case of PE1-CE1 link failure in L3VPN, PE1 must reroute packets over to
PE2, as all other routers still wait for BGP control plane convergence (as
opposed to nexthop address tracking in case of a node failure).

Because PE2 takes a peek at the IP packet due to vrf-table-label, it still
might see the best route via the preferred PE1 site, hence a possible
transient loop.




On Fri, Feb 19, 2016 at 2:25 PM, Adam Vitkovsky 
wrote:

> > Dan Peachey
> > Sent: Friday, February 19, 2016 11:07 AM
> >
> > Cisco often call this PIC core (hierarchical FIB). I think the different
> terms
> > used by the different vendors causes some confusion. From what I
> > understand...
> >
> > Cisco:
> >
> > H-FIB = PIC Core
> > Node protection = PIC Edge Node Protection
> > Link protection = PIC Edge Link Protection
> >
> > Juniper:
> >
> > H-FIB = Indirect next-hop
> > Node protection = PIC Edge
> > Link protection = Provider edge link protection
> >
> > However I've also seen node protection referred to as PIC core in some
> Cisco
> > documentation, so who knows :)
> >
> Yeah very confusing indeed
> The draft refers to PE-CE link failure as well as the egress node failure
> in the BGP-PIC Edge section.
> And it refers to core link or node failure protection using LFA in BGP-PIC
> Core section.
>
>
> adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Adam Vitkovsky
> Dan Peachey
> Sent: Friday, February 19, 2016 11:07 AM
>
> Cisco often call this PIC core (hierarchical FIB). I think the different terms
> used by the different vendors causes some confusion. From what I
> understand...
>
> Cisco:
>
> H-FIB = PIC Core
> Node protection = PIC Edge Node Protection
> Link protection = PIC Edge Link Protection
>
> Juniper:
>
> H-FIB = Indirect next-hop
> Node protection = PIC Edge
> Link protection = Provider edge link protection
>
> However I've also seen node protection referred to as PIC core in some Cisco
> documentation, so who knows :)
>
Yeah, very confusing indeed.
The draft refers to PE-CE link failure as well as egress node failure in 
the BGP-PIC Edge section.
And it refers to core link or node failure protection using LFA in the BGP-PIC 
Core section.


adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Adam Vitkovsky
> Raphael Mazelier
> Sent: Thursday, February 18, 2016 10:55 PM
>
> Very interesting topic.
>
> Some questions about your setup:
>
> In 2) you set advertise-external; does it work the same using multipath?
>
No.
Multipath between iBGP paths would be similar to 'protect core'.
Multipath between eBGP and iBGP paths would be similar to 'unicast protection'.
Whereas in protection mode one path is active and the other is a backup, in 
multipath both paths are active.

The "advertise-external" knob would be needed in both cases to provide the 
alternate/backup path.
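That knob sits under the BGP group, e.g. (group name is hypothetical):

set protocols bgp group internal advertise-external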


> In 3) you set 'unicast protection'. Is it the same thing as the PIC
> 'protect core' knob?
>
In essence yes, as both use Prefix Independent Convergence (FIB hierarchy) 
and both pre-install a backup next-hop.
'unicast protection' is used to protect against PE-CE link failure.
'protect core' is used to protect against PE node failure.
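As a sketch with hypothetical names (untested), the two knobs would sit 
roughly here:

set routing-instances INET-VRF protocols bgp group CE family inet unicast protection   <- PE-CE link protection
set routing-instances INET-VRF routing-options protect core                            <- protection against remote PE failure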

> If I understand correctly, before 15.1 PIC is only available for l3vpn, so my
> question is:
>
> Is it advisable to run the dmz/internet table in a vrf/routing instance on
> Juniper? And what are the pros/cons of doing that?
>
> pros: PIC, more flexibility?
> cons: more complex setup, performance issues (I've heard some stories about
> that)?
>
As you pointed out, the biggest pro is the flexibility this setup provides; 
you can have one VRF for peers and another for transits.
But I'm not aware of any performance issues, and the convergence is certainly 
faster with BGP-PIC.

adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Dan Peachey


On 19/02/2016 10:53, Alexander Marhold wrote:

Hi



You wrote:


One thing I haven't seen mentioned is that routers need indirect-nexthop 
feature enabled




IMHO this is exactly what is also called PIC (Prefix Independent Convergence): to 
get convergence independent of the number of prefixes, you need a pointer to a 
next-hop pointer structure, which in turn points to the next hop.

If the next hop changes, you only need to update the pointer in the next-hop 
pointer structure, regardless of how many prefixes are using that next hop.



Regards



alexander



Cisco often call this PIC core (hierarchical FIB). I think the different 
terms used by the different vendors causes some confusion. From what I 
understand...


Cisco:

H-FIB = PIC Core
Node protection = PIC Edge Node Protection
Link protection = PIC Edge Link Protection

Juniper:

H-FIB = Indirect next-hop
Node protection = PIC Edge
Link protection = Provider edge link protection

However I've also seen node protection referred to as PIC core in some 
Cisco documentation, so who knows :)


Regards,

Dan


Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Alexander Marhold
Hi

 

You wrote:

>One thing I haven't seen mentioned is that routers need indirect-nexthop 
>feature enabled

 

IMHO this is exactly what is also called PIC (Prefix Independent Convergence): to 
get convergence independent of the number of prefixes, you need a pointer to a 
next-hop pointer structure, which in turn points to the next hop.

If the next hop changes, you only need to update the pointer in the next-hop 
pointer structure, regardless of how many prefixes are using that next hop.

 

Regards

 

alexander

 

 

From: Dragan Jovicic [mailto:dragan...@gmail.com] 
Sent: Friday, 19 February 2016 11:45
To: Adam Vitkovsky
Cc: alexander.marh...@gmx.at; sth...@nethelp.no; Chuck Anderson; Vincent 
Bernat; juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Optimizing the FIB on MX

 

Advertise-external or more general Additional Path capability (would prefer 
this if new install) could be used to distribute selected few routes if FIB 
space is of concern.

One thing I haven't seen mentioned is that routers need indirect-nexthop 
feature enabled, which should be by default enabled on recent Junos versions on 
all-MPC chassis. Otherwise, change of core interface will take at least a 
couple of dozen seconds to update FIB. DPC cards do not support this, and on 
older version you'd need to enable this feature by hand, if i remember 
correctly.

Also, if this protection is used in l3vpn, you would optimally need 
composite-nexthop feature which decouples link between indirect-nexhop and 
outer label (which is changed once core link fails).

 

Regards

 

On Fri, Feb 19, 2016 at 11:02 AM, Adam Vitkovsky  
wrote:

> Alexander Marhold [mailto:alexander.marh...@gmx.at]
> Sent: Thursday, February 18, 2016 6:50 PM
>
> Hi folks
>
> To make the discussion clearer and coming back to the Juniper MX 104
> implementation
>
> Here is a picture of 2 PEs on P  and 2 peers (ISP1 and IX1); let's assume we
> want to prefer routes from IX1 over ISP1
> MX1 is EBGP (lpref 100) to  ISP1 and IBGP to MX2 and MX3
> MX2 is EBGP (lpref 110) to IX1 and IBGP to MX1 and MX3
>
>   ISP1   IX1
> | locpref ^ locpref
> |   100  |   110
>MX1->-MX2
> |   |
> |   |
> +--MX3--->--+
>
>
> In my opinion, if you also need the MX3, then for this MX3 you need
> "PIC-CORE" to quickly switch between both paths
>
Yes that's right, "protect core" under routing-options of the Internet VRF.

However, then I can ask: what if MX2 fails or is severed from the core completely?
Though with that I'd be opening a whole different and very interesting realm of 
convergence options.
First you'd need to tune your IGP so it propagates the unreachability of MX2's 
loopback towards MX3 as fast as possible.
Then I could argue that MX3 is in a different AS and knows about MX2's 
loopback via BGP-LU, so you'd need to tune the BGP-LU infrastructure (can't 
shave off much delay).
Then I could argue I want sub-50 ms convergence in the above case, and then 
you'd have to rely on the P routers to perform the local repair and swing from 
MX2 to MX1 if MX2 goes down.
And that brings me to Segment Routing and protecting a node segment upon the 
failure of its advertising node.


> On MX1 you need "best-external" to advertise the external routes whereas
> the best is the internal route pointing to MX2
>
> On MX1 and MX2 you need "PIC-EDGE" to quickly switch when IX1 goes
> down
>
Well technically you need PIC-EDGE only on MX2 so it can do the local repair 
for traffic destined to IX1 and re-label it so that it gets to MX1.

> Do we all agree on that picture and the named mechanisms ( put in "") ?
>
>
> So now what versions of Junos is needed and what additional "unnecessary"
> methods like MPLS or LDP is now needed ?
>
Well I'm afraid you'd need to run MPLS L3VPNs for all this.


adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Dragan Jovicic
Advertise-external, or the more general Additional-Paths capability (which I
would prefer for a new install), could be used to distribute a selected few
routes if FIB space is a concern.

One thing I haven't seen mentioned is that routers need the indirect-nexthop
feature enabled, which should be enabled by default on recent Junos versions
on all-MPC chassis. Otherwise, a change of a core interface will take at
least a couple of dozen seconds to update the FIB. DPC cards do not support
this, and on older versions you'd need to enable the feature by hand, if I
remember correctly.

Also, if this protection is used in an L3VPN, you would optimally need the
composite-nexthop feature, which decouples the indirect next hop from the
outer label (which changes once a core link fails).
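For reference, I believe the relevant knobs are roughly these (treat this as a 
sketch; on recent releases indirect-next-hop should already be the default on 
MPC hardware):

set routing-options forwarding-table indirect-next-hop
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn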


Regards

On Fri, Feb 19, 2016 at 11:02 AM, Adam Vitkovsky  wrote:

> > Alexander Marhold [mailto:alexander.marh...@gmx.at]
> > Sent: Thursday, February 18, 2016 6:50 PM
> >
> > Hi folks
> >
> > To make the discussion clearer and comming back to the Juniper MX 104
> > implementation
> >
> > Here is a picture of 2 PEs on P  and 2 peers (ISP1 and IX1) let´s assume
> we
> > want to prefer routes from IX1 over ISP1
> > MX1 is EBGP (lpref 100) to  ISP1 and IBGP to MX2 and MX3
> > MX2 is EBGP (lpref 110) to IX1 and IBGP to MX1 and MX3
> >
> >   ISP1   IX1
> > | locpref ^ locpref
> > |   100  |   110
> >MX1->-MX2
> > |   |
> > |   |
> > +--MX3--->--+
> >
> >
> > In my opinion if you need also  the MX3 then  for this MX3 you need "PIC-
> > CORE" to quickly switch between both paths
> >
> Yes that's right, "protect core" under routing-options of the Internet VRF.
>
> However then I can ask what if MX2 fails or is severed from the core
> completely,
> though with that I'd be opening a whole different and very interesting
> realm of convergence options.
> First you'd need to tune your IGP so it propagates the unreachability of
> MX2's loopback towards MX3 as fast as possible.
> Then I could argue that MX3 is in a different AS and it knows about MX2's
> loopback via BGP-LU -so you'd need to tune BGP-LU infrastructure(can't
> shave of much delay).
> Then I could argue I want sub 50ms convergence in the above case and well
> then, then you'd have to rely on P routers to perform the local repair and
> swing from MX2 to MX1 If MX2 goes down.
> And that brings me to Segment Routing and protecting a node segment upon
> the failure of its advertising node
>
>
> > On MX1 you need "best-external" to advertise the external routes whereas
> > the best is the internal route pointing to MX2
> >
> > On MX1 and MX2 you need "PIC-EDGE" to quickly switch when IX1 goes
> > down
> >
> Well technically you need PIC-EDGE only on MX2 so it can do the local
> repair for traffic destined to IX1 and re-label it so that it gets to MX1.
>
> > Do we all agree on that picture and the named mechanisms ( put in "") ?
> >
> >
> > So now what versions of Junos is needed and what additional "unnecessary"
> > methods like MPLS or LDP is now needed ?
> >
> Well I'm afraid you'd need to run MPLS L3VPNs for all this.
>
>
> adam

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Adam Vitkovsky
> Alexander Marhold [mailto:alexander.marh...@gmx.at]
> Sent: Thursday, February 18, 2016 6:50 PM
>
> Hi folks
>
> To make the discussion clearer and coming back to the Juniper MX 104
> implementation
>
> Here is a picture of 2 PEs, a P, and 2 peers (ISP1 and IX1); let's assume we
> want to prefer routes from IX1 over ISP1
> MX1 is EBGP (lpref 100) to  ISP1 and IBGP to MX2 and MX3
> MX2 is EBGP (lpref 110) to IX1 and IBGP to MX1 and MX3
>
>   ISP1   IX1
> | locpref ^ locpref
> |   100  |   110
>MX1->-MX2
> |   |
> |   |
> +--MX3--->--+
>
>
> In my opinion, if you also need the MX3, then for this MX3 you need
> "PIC-CORE" to quickly switch between both paths
>
Yes that's right, "protect core" under routing-options of the Internet VRF.
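
(For reference, that knob is a one-liner under the instance; roughly, with the
instance name invented for illustration:

set routing-instances INTERNET instance-type vrf
set routing-instances INTERNET routing-options protect core
)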

However, then I can ask: what if MX2 fails or is severed from the core completely?
Though with that I'd be opening a whole different and very interesting realm of
convergence options.
First you'd need to tune your IGP so it propagates the unreachability of MX2's
loopback towards MX3 as fast as possible.
Then I could argue that MX3 is in a different AS and it knows about MX2's
loopback via BGP-LU, so you'd need to tune the BGP-LU infrastructure (can't shave
off much delay).
Then I could argue that I want sub-50ms convergence in the above case, and then
you'd have to rely on P routers to perform the local repair and swing from
MX2 to MX1 if MX2 goes down.
And that brings me to Segment Routing and protecting a node segment upon the
failure of its advertising node.


> On MX1 you need "best-external" to advertise the external routes even when
> the best path is the internal route pointing to MX2
>
> On MX1 and MX2 you need "PIC-EDGE" to quickly switch when IX1 goes
> down
>
Well technically you need PIC-EDGE only on MX2 so it can do the local repair 
for traffic destined to IX1 and re-label it so that it gets to MX1.

> Do we all agree on that picture and the named mechanisms ( put in "") ?
>
>
> So now, what version of Junos is needed, and what additional "unnecessary"
> mechanisms like MPLS or LDP are needed?
>
Well I'm afraid you'd need to run MPLS L3VPNs for all this.


adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

This is an email from Gamma Telecom Ltd, trading as “Gamma”. The contents of 
this email are confidential to the ordinary user of the email address to which 
it was addressed. This email is not intended to create any legal relationship. 
No one else may place any reliance upon it, or copy or forward all or any of it 
in any form (unless otherwise notified). If you receive this email in error, 
please accept our apologies, we would be obliged if you would telephone our 
postmaster on +44 (0) 808 178 9652 or email postmas...@gamma.co.uk

Gamma Telecom Limited, a company incorporated in England and Wales, with 
limited liability, with registered number 04340834, and whose registered office 
is at 5 Fleet Place London EC4M 7RD and whose principal place of business is at 
Kings House, Kings Road West, Newbury, Berkshire, RG14 5BY.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Optimizing the FIB on MX

2016-02-19 Thread Alexander Arseniev

Hello,
"condition" is not supported in forwarding-table export policy, only in 
BGP/IGP export policy.
You have to insert a "BGP-exporter" intermediate node between the
peer/upstream and your MX; this could be a logical system on the MX itself.
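
(For illustration, a condition can be attached to, say, an IBGP export
policy like this; the policy and group names here are invented:

policy-options {
    policy-statement ibgp-out {
        term check-default {
            from condition default-to-upstream;
            then accept;
        }
    }
}
protocols {
    bgp {
        group IBGP {
            export ibgp-out;
        }
    }
}
)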

Thx
Alex

On 18/02/2016 10:14, Vincent Bernat wrote:

  ❦ 17 February 2016 21:07 GMT, Alexander Arseniev  :


True, one cannot match on "next-hop" in "condition", only on exact
prefix+table name.
But this can be done using "route isolation" approach.
So, the overall approach is:
1/ create a separate table and leak a 0/0 route there matching on 0/0
exact + next-hop ("isolate the interested route"). Use
"instance-import" + policy.
2/ create condition

policy-options {
    condition default-to-upstream {
        if-route-exists {
            0.0.0.0/0;
            table isolate-0/0.inet.0;
        }
    }
}

3/ use condition to match & reject the specifics:

policy-options {
    policy-statement reject-same-nh-as-0/0 {
        term 1 {
            from {
                protocol bgp;
                route-filter 0/0 longer;
                condition default-to-upstream;
                next-hop 198.18.1.1;
            }
            then reject;
        }
        term 2 {
            from {
                protocol bgp;
                route-filter 0/0 longer;
                next-hop 198.18.1.1;
            }
            then accept;
        }
    }
}

Out of curiosity, I tried your approach and it almost works. However,
for some reason, the condition can match when there is no route in the
associated table. I didn't do exactly as you proposed, so maybe I am
doing something wrong. I am not really interested in getting to the
bottom of this matter. I just post my current configuration in case
somebody is interested:

  
https://github.com/vincentbernat/network-lab/blob/d984d6c5f847b96a131b240d91346b46bfaecac9/lab-vmx-fullview/vMX1.conf#L106-L115

If I enable term 4, it catches all routes whose next-hop is
192.0.2.129 despite the condition being false. In the RIB, I have many
routes whose next-hop is 192.0.2.129:

root@vMX1# run show route next-hop 192.0.2.129

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0   [BGP/140] 00:38:12, MED 10, localpref 100
   AS path: 65002 ?, validation-state: unverified
 > to 192.0.2.129 via ge-0/0/1.0
 [OSPF/150] 00:37:31, metric 10, tag 0
 > to 192.0.2.129 via ge-0/0/1.0
1.0.240.0/20   *[BGP/140] 00:38:12, MED 10, localpref 100
   AS path: 65002 3257 3356 4651 9737 23969 I, 
validation-state: unverified
 > to 192.0.2.129 via ge-0/0/1.0
1.1.1.0/24 *[BGP/140] 00:38:12, MED 10, localpref 100
   AS path: 65002 8758 15576 6772 13030 226 I, 
validation-state: unverified
 > to 192.0.2.129 via ge-0/0/1.0
[...]

But none of them make it to the FIB:

root@vMX1# run show route forwarding-table matching 1.1.1.0/24
Routing table: default.inet
Internet:

Routing table: __master.anon__.inet
Internet:

The peer.inet.0 table is empty:

root@vMX1# run show route summary
Autonomous system number: 64512
Router ID: 192.0.2.128

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
   Direct:  3 routes,  3 active
Local:  3 routes,  3 active
 OSPF:  2 routes,  1 active
  BGP:   1861 routes,   1103 active

upstream.inet.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
  BGP:  1 routes,  1 active

Adding a static route to peer.inet.0 doesn't help (I added a discard
route). Switching the default to the peer doesn't change anything (term
3 also matches anything). Tested on vMX 14.1R1. Maybe a bug in
if-route-exists?



Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Olivier Benghozi
I think that this recommendation makes sense: I don't see any good reason to 
have, by default, eBGP routes with a better administrative distance 
("preference", in Junos) than your IGP (OSPF or IS-IS).

That being said, in all BGP implementations the BGP best-path selection 
algorithm includes a [prefer eBGP routes] step, whatever admin-distance value 
is set when BGP pushes its routes into the RIB.
Both things are different.

So eBGP _is_ preferred over iBGP (at a certain point in the selection 
algorithm), according to the protocol definition, in fact (RFC 4271, page 81).


> 18 Feb 2016 at 12:12, sth...@nethelp.no wrote:
> 
>> *not sure what was Juniper and ALU thinking when they came up with the same 
>> protocol preference for eBGP and iBGP routes, there's a ton of reasons why 
>> you always want to prefer closest AS-EXIT.
> 
> Probably the same as Cisco, when Cisco on multiple occasions have
> promoted using the same administrative distance (200) for both EBGP
> and IBGP as "best practice".



Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Vincent Bernat
 ❦ 18 February 2016 10:50 GMT, Adam Vitkovsky  :

>> You are right. I didn't understand your answer the first time as I thought 
>> that
>> PIC was for "programmable integrated circuit", so I thought this was a plan
>> for Juniper to fix the problem with some dedicated piece of hardware.

> Sorry about that, I'll try to be more explicit in my future posts.
>
> The setup is really easy
[...]

Oh, many thanks for the detailed setup! I'll need some time to update to
15.1 and I'll get back to you with the results once this is done.

> 5)always prefer eBGP over iBGP*
> set policy-options policy-statement FROM_TRANSIT term INGRESS_POLICY_A then 
> preference 169 <
> -by default,
> If the MX140-A from our previous example loses its Transit link it will (via 
> BGP-PIC) immediately reroute traffic to MX140-B
> However, by default MX140-B has a best path via MX140-A, so until it
> receives the withdraw from MX140-A it'll loop traffic back to MX140-A.
> That's why you want MX140-B to prefer its local exit.
>
> *not sure what was Juniper and ALU thinking when they came up with the
> same protocol preference for eBGP and iBGP routes, there's a ton of
> reasons why you always want to prefer closest AS-EXIT.

Unfortunately, I don't have the same upstream on both MX and for some
routes, one of them may have a better route than the other. The two MX
are advertising just a default, so they can attract traffic that would
be better routed by their neighbor. I'll try to think a bit about what's
more important.
-- 
Make sure your code "does nothing" gracefully.
- The Elements of Programming Style (Kernighan & Plauger)

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Vincent Bernat
 ❦ 18 February 2016 07:31 -0600, Colton Conor  :

> So is the MX-104 processor really that underpowered? I have heard
> reports that it was too underpowered for its price point, and now I am
> starting to believe it. Vincent, what are your thoughts? 

Well, I don't have enough experience with it. However, it's unfortunate
that it needs a pricey license to handle a full view but is not able to
accommodate it efficiently (without BGP PIC).
-- 
Work consists of whatever a body is obliged to do.
Play consists of whatever a body is not obliged to do.
-- Mark Twain

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Vincent Bernat
 ❦ 17 February 2016 21:07 GMT, Alexander Arseniev  :

> True, one cannot match on "next-hop" in "condition", only on exact
> prefix+table name.
> But this can be done using "route isolation" approach.
> So, the overall approach is:
> 1/ create a separate table and leak a 0/0 route there matching on 0/0
> exact + next-hop ("isolate the interested route"). Use
> "instance-import" + policy.
> 2/ create condition
>
> policy-options {
>     condition default-to-upstream {
>         if-route-exists {
>             0.0.0.0/0;
>             table isolate-0/0.inet.0;
>         }
>     }
> }
>
> 3/ use condition to match & reject the specifics:
>
> policy-options {
>     policy-statement reject-same-nh-as-0/0 {
>         term 1 {
>             from {
>                 protocol bgp;
>                 route-filter 0/0 longer;
>                 condition default-to-upstream;
>                 next-hop 198.18.1.1;
>             }
>             then reject;
>         }
>         term 2 {
>             from {
>                 protocol bgp;
>                 route-filter 0/0 longer;
>                 next-hop 198.18.1.1;
>             }
>             then accept;
>         }
>     }
> }
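
(Side note: step 1 of the quoted approach, leaking the 0/0 route into a
separate table, isn't spelled out above. An instance-import sketch might look
like the following; the instance and policy names are invented, and the
198.18.1.1 next-hop is reused from the example:

policy-options {
    policy-statement leak-default {
        term 1 {
            from {
                instance master;
                route-filter 0.0.0.0/0 exact;
                next-hop 198.18.1.1;
            }
            then accept;
        }
        then reject;
    }
}
routing-instances {
    isolate-default {
        instance-type no-forwarding;
        routing-options {
            instance-import leak-default;
        }
    }
}

The "condition" would then reference table isolate-default.inet.0.)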

Out of curiosity, I tried your approach and it almost works. However,
for some reason, the condition can match when there is no route in the
associated table. I didn't do exactly as you proposed, so maybe I am
doing something wrong. I am not really interested in getting to the
bottom of this matter. I just post my current configuration in case
somebody is interested:

 
https://github.com/vincentbernat/network-lab/blob/d984d6c5f847b96a131b240d91346b46bfaecac9/lab-vmx-fullview/vMX1.conf#L106-L115

If I enable term 4, it catches all routes whose next-hop is
192.0.2.129 despite the condition being false. In the RIB, I have many
routes whose next-hop is 192.0.2.129:

root@vMX1# run show route next-hop 192.0.2.129

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0   [BGP/140] 00:38:12, MED 10, localpref 100
  AS path: 65002 ?, validation-state: unverified
> to 192.0.2.129 via ge-0/0/1.0
[OSPF/150] 00:37:31, metric 10, tag 0
> to 192.0.2.129 via ge-0/0/1.0
1.0.240.0/20   *[BGP/140] 00:38:12, MED 10, localpref 100
  AS path: 65002 3257 3356 4651 9737 23969 I, 
validation-state: unverified
> to 192.0.2.129 via ge-0/0/1.0
1.1.1.0/24 *[BGP/140] 00:38:12, MED 10, localpref 100
  AS path: 65002 8758 15576 6772 13030 226 I, 
validation-state: unverified
> to 192.0.2.129 via ge-0/0/1.0
[...]

But none of them make it to the FIB:

root@vMX1# run show route forwarding-table matching 1.1.1.0/24
Routing table: default.inet
Internet:

Routing table: __master.anon__.inet
Internet:

The peer.inet.0 table is empty:

root@vMX1# run show route summary
Autonomous system number: 64512
Router ID: 192.0.2.128

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
  Direct:  3 routes,  3 active
   Local:  3 routes,  3 active
OSPF:  2 routes,  1 active
 BGP:   1861 routes,   1103 active

upstream.inet.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
 BGP:  1 routes,  1 active

Adding a static route to peer.inet.0 doesn't help (I added a discard
route). Switching the default to the peer doesn't change anything (term
3 also matches anything). Tested on vMX 14.1R1. Maybe a bug in
if-route-exists?
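
(One thing worth checking, assuming the command is available in your release,
is what the condition manager itself sees:

run show policy conditions detail

which lists each configured condition and whether its if-route-exists
dependency is currently resolved.)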
-- 
Use the fundamental control flow constructs.
- The Elements of Programming Style (Kernighan & Plauger)

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Raphael Mazelier

Very interesting topic.

Some questions about your setup:

In 2) you set advertise-external; does it work the same way when using
multipath?

In 3) you set 'unicast protection'. Is it the same thing as the PIC 'protect
core' knob?

If I understand correctly, before 15.1 PIC is only available for L3VPN,
so my question is:

Is it advisable to run the dmz/internet table in a vrf/routing instance
on Juniper? And what are the pros/cons of doing that?

pros: PIC, more flexibility?
cons: more complex setup, performance issues (I've heard some stories
about that)?


Best,

--
Raphael Mazelier

On 18/02/2016 11:50, Adam Vitkovsky wrote:



The setup is really easy


1) carve up the FIB so that it allows for multiple next-hops (in our case, a 
pointer to a backup path)
set routing-options forwarding-table export lb
set policy-options policy-statement lb then load-balance per-packet


2)advertise the best external routes
set protocols bgp group MX140-IBGP advertise-external <<

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Alexander Marhold
Hi folks

To make the discussion clearer and coming back to the Juniper MX 104 
implementation

Here is a picture of 2 PEs, a P, and 2 peers (ISP1 and IX1);
let's assume we want to prefer routes from IX1 over ISP1
MX1 is EBGP (lpref 100) to  ISP1 and IBGP to MX2 and MX3
MX2 is EBGP (lpref 110) to IX1 and IBGP to MX1 and MX3

  ISP1   IX1
| locpref ^ locpref 
|   100  |   110
   MX1->-MX2
|   |
|   |
+--MX3--->--+


In my opinion, if you also need the MX3, then for this MX3 you need "PIC-CORE" 
to quickly switch between both paths

On MX1 you need "best-external" to advertise the external routes even when the 
best path is the internal route pointing to MX2

On MX1 and MX2 you need "PIC-EDGE" to quickly switch when IX1 goes down
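
(In set-command form, the three mechanisms would be roughly as follows; the
group names are invented and the exact syntax may vary by release:

On MX3 ("PIC-CORE"):
set routing-options protect core

On MX1 ("best-external"):
set protocols bgp group IBGP advertise-external

On MX2 ("PIC-EDGE"):
set protocols bgp group IX1-PEERS family inet unicast protection
)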

Do we all agree on that picture and the named mechanisms ( put in "") ?


So now, what version of Junos is needed, and what additional "unnecessary" 
mechanisms like MPLS or LDP are needed?

regards

alexander


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Sebastian Becker
Hi Ytti,

I meant 9001 vs 9010 and MX104 vs MX240.
CPU to CPU works, but then there is the software you mentioned.

Back to Juniper. ;-)

-- 
Sebastian Becker
s...@lab.dtag.de

> Am 18.02.2016 um 16:39 schrieb Saku Ytti :
> 
> 
> On 18 February 2016 at 17:29, Sebastian Becker  wrote:
> 
> Hey Sebastian,
> 
>> As the ASR9001 and ASR9006/9010 have a different CPU architecture than the MX104 and 
>> MX240/480/960, the comparison is not easy just by the type of CPU itself.
> 
> ASR9001 and MX104 use the same Freescale QorIQ family, so it's a very direct
> comparison. Spec sheets are publicly available.
> Larger MX and ASR9k use Intel X86/AMD64, so easy to compare.
> 
> But of course the software is very different, and this MX104 issue
> thread is about, does not exist in IOS-XR, regardless how slow CPU it
> is rocking.
> 
> -- 
>  ++ytti



Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Adam Vitkovsky
> sth...@nethelp.no [mailto:sth...@nethelp.no]
> Sent: Thursday, February 18, 2016 11:13 AM
>
> Just commenting on a couple things:
>
> > If the MX140-A from our previous example loses its Transit link it
> > will (via BGP-PIC) immediately reroute traffic to MX140-B However by
> default MX140-B has a best path via MX140-A -so until it receives withdrawn
> from MX140-A it'll loop traffic back to MX140-A.
> > That's why you want MX140-B to prefer it's local exit.
> >
> > *not sure what was Juniper and ALU thinking when they came up with the
> same protocol preference for eBGP and iBGP routes, there's a ton of reasons
> why you always want to prefer closest AS-EXIT.
>
> Probably the same as Cisco, when Cisco on multiple occasions have
> promoted using the same administrative distance (200) for both EBGP and
> IBGP as "best practice".
>
Well, the bottom line is that sanity won.


> > Caveats:
> > "vrf-table-label" must be enabled at the routing-instance on the
> > MX140s - just another stupidity in this script kiddie OS of Junos
>
> You are of course free to call JunOS whatever you want. Calling JunOS a
> "script kiddie OS" may not the best way to be taken seriously.
>
> In any case, vrf-table-label is *much* older than PIC (around 10 years, if I
> remember correctly).
>
And that makes vrf-table-label a prerequisite for PIC?
My point is that if a packet is received with a VRF label, the label points to 
an indirect next-hop pointer,
and the pointer points to a forwarding next-hop, i.e. an IP edge interface and 
the adjacent L2 rewrite;
as a backup it could point to an MPLS core interface and an adjacent label stack 
(NH label to the backup PE, if any, plus the VRF label that the backup PE 
advertised), basically an ASBR Option B label-swap operation.
So why would I need to run an unnecessary IP lookup in the VRF table to derive 
info about the backup path?

adam








Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk




Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Saku Ytti
On 18 February 2016 at 17:29, Sebastian Becker  wrote:

Hey Sebastian,

> As the ASR9001 and ASR9006/9010 have a different CPU architecture than the MX104 and 
> MX240/480/960, the comparison is not easy just by the type of CPU itself.

ASR9001 and MX104 use the same Freescale QorIQ family, so it's a very direct
comparison. Spec sheets are publicly available.
Larger MX and ASR9k use Intel X86/AMD64, so easy to compare.

But of course the software is very different, and this MX104 issue
thread is about, does not exist in IOS-XR, regardless how slow CPU it
is rocking.

-- 
  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Sebastian Becker
Hi Ytti / Colton,

ASR9001-RP
cisco ASR9K Series (P4040) processor with 8388608K bytes of memory.
P4040 processor at 1500MHz, Revision 3.0

This box is only available as the SE (service enhanced) version.

A9K-RSP440-SE
cisco ASR9K Series (Intel 686 F6M14S4) processor with 12582912K bytes of memory.
Intel 686 F6M14S4 processor at 2135MHz, Revision 2.174

There is a TR (transport) version with half the memory:
http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/data_sheet_c78-674143.html

A9K-RSP880-SE
cisco ASR9K Series (Intel 686 F6M14S4) processor with 33554432K bytes of memory.
Intel 686 F6M14S4 processor at 1904MHz, Revision 2.174

There is a TR (transport) version with half the memory:
http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/datasheet-c78-733763.html

As the ASR9001 and ASR9006/9010 have a different CPU architecture than the MX104 and 
MX240/480/960, the comparison is not easy just by the type of CPU itself. 

-- 
Sebastian Becker
s...@lab.dtag.de

> Am 18.02.2016 um 16:06 schrieb Saku Ytti :
> 
> 
> On 18 February 2016 at 16:21, Colton Conor  wrote:
> 
> Hey Colton,
> 
>> What processor is in the Cisco 9001, and how does it compare to a MX104 in
>> terms of speed and BGP Performance?
> 
> ASR9001 is P4040 on RP, lower single core performance than MX104
> P5021. But the problem this thread addresses is not a problem IOS-XR
> has.
> 
>> What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
>> and 6GB?
> 
> RSP440 is a 4-core Intel, at about 2GHz. I'm actually not sure which
> specific Intel CPU.
> 
> -- 
>  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Saku Ytti
On 18 February 2016 at 16:21, Colton Conor  wrote:

Hey Colton,

> What processor is in the Cisco 9001, and how does it compare to a MX104 in
> terms of speed and BGP Performance?

ASR9001 is P4040 on RP, lower single core performance than MX104
P5021. But the problem this thread addresses is not a problem IOS-XR
has.

> What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
> and 6GB?

RSP440 is a 4-core Intel, at about 2GHz. I'm actually not sure which
specific Intel CPU.

-- 
  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread sthaug
> >http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html
> 
> From 
> http://www.juniper.net/techpubs/en_US/junos15.1/topics/task/configuration/bgp-configuring-bgp-pic-for-inet.html
> 
> "Note: The BGP PIC edge feature is supported only on routers with
> MPC interfaces."
> 
> AIUI, this excludes MX80/MX104 - arguably where one would need it
> most...

MX80/MX104 is a "fixed config" MPC.

Steinar Haug, Nethelp consulting, sth...@nethelp.no


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Sascha Luck [ml]

On Wed, Feb 17, 2016 at 03:18:59PM -0500, Chuck Anderson wrote:

Can you use Junos 15.1?  Try this:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html


From 

http://www.juniper.net/techpubs/en_US/junos15.1/topics/task/configuration/bgp-configuring-bgp-pic-for-inet.html

"Note: The BGP PIC edge feature is supported only on routers with
MPC interfaces."

AIUI, this excludes MX80/MX104 - arguably where one would need it
most...

cheers,
s.



Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Colton Conor
Saku,

You seems to know a bit about processors to say the least.

What processor is in the Cisco 9001, and how does it compare to a MX104 in
terms of speed and BGP Performance?

What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
and 6GB?

On Thu, Feb 18, 2016 at 8:12 AM, Saku Ytti  wrote:

> On 18 February 2016 at 15:31, Colton Conor  wrote:
> > So is the MX-104 processor really that underpowered? I have heard reports
> > that it was too underpowered for its price point, and now I am starting to
> > believe it. Vincent what are your thoughts?
>
> Define underpowered?
>
> MX80 has 8572, also sported by platforms such as sup7, sup2t, nexus7k,
> me3600x, sfm3-12@alu
> RSP720, EX8200 RE have even slower spec cpu 8548
>
> MX104 has faster cpu than any of these, P5021. Yet RSP720 runs circles
> around MX104 in terms of BGP performance.
>
> I'd say it is underpowered for JunOS (All PPC's are, but is that HW or
> SW issue, that's debatable), but it really can't be considered
> particularly slow cpu in this market generally, especially during its
> launch year.
>
> --
>   ++ytti
>


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Saku Ytti
On 18 February 2016 at 15:31, Colton Conor  wrote:
> So is the MX-104 processor really that underpowered? I have heard reports
> that it was too underpowered for its price point, and now I am starting to
> believe it. Vincent what are your thoughts?

Define underpowered?

MX80 has 8572, also sported by platforms such as sup7, sup2t, nexus7k,
me3600x, sfm3-12@alu
RSP720, EX8200 RE have even slower spec cpu 8548

MX104 has faster cpu than any of these, P5021. Yet RSP720 runs circles
around MX104 in terms of BGP performance.

I'd say it is underpowered for JunOS (All PPC's are, but is that HW or
SW issue, that's debatable), but it really can't be considered
particularly slow cpu in this market generally, especially during its
launch year.

-- 
  ++ytti


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Dave Bell
I've not used the MX104, but the MX80 is incredibly slow to commit
changes, and from discussion on this mailing list it is slow to converge
too. As has been mentioned though, this can be worked around by using
things like BGP PIC and LFA to maintain a valid forwarding path while
the control plane sorts itself out.

What I would be interested to know, though, is whether these technologies use
additional FIB slots. I.e., by enabling BGP PIC, are you effectively
cutting your FIB capacity in half, from 1 million routes to 0.5
million?

Regards,
Dave

On 18 February 2016 at 13:31, Colton Conor  wrote:
> So is the MX-104 processor really that underpowered? I have heard reports
> that it was too underpowered for its price point, and now I am starting to
> believe it. Vincent what are your thoughts?
>
> On Wed, Feb 17, 2016 at 5:14 PM, Vincent Bernat  wrote:
>
>>  ❦ 17 February 2016 22:56 GMT, Adam Vitkovsky >
>>
>> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
>> blackhole
>> >> during BGP convergence, I am trying to reduce the size of the FIB.
>> >>
>> > You mentioned earlier that this is a new installation so why not use
>> > routing instance for Internet which allows you to use PIC with your
>> > current version of code and save you all this trouble duct-taping the
>> > solution together.
>>
>> You are right. I didn't understand your answer the first time as I
>> thought that PIC was for "programmable integrated circuit", so I thought
>> this was a plan for Juniper to fix the problem with some dedicated piece
>> of hardware.
>> --
>> Truth is the most valuable thing we have -- so let us economize it.
>> -- Mark Twain

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Colton Conor
So is the MX-104 processor really that underpowered? I have heard reports
that it was too underpowered for its price point, and now I am starting to
believe it. Vincent what are your thoughts?

On Wed, Feb 17, 2016 at 5:14 PM, Vincent Bernat  wrote:

>  ❦ 17 February 2016 22:56 GMT, Adam Vitkovsky  > :
>
> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> blackhole
> >> during BGP convergence, I am trying to reduce the size of the FIB.
> >>
> > You mentioned earlier that this is a new installation so why not use
> > routing instance for Internet which allows you to use PIC with your
> > current version of code and save you all this trouble duct-taping the
> > solution together.
>
> You are right. I didn't understand your answer the first time as I
> thought that PIC was for "programmable integrated circuit", so I thought
> this was a plan for Juniper to fix the problem with some dedicated piece
> of hardware.
> --
> Truth is the most valuable thing we have -- so let us economize it.
> -- Mark Twain

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread sthaug
Just commenting on a couple things:

> If the MX140-A from our previous example loses its Transit link it will (via 
> BGP-PIC) immediately reroute traffic to MX140-B.
> However, by default MX140-B has a best path via MX140-A - so until it receives 
> a withdraw from MX140-A it'll loop traffic back to MX140-A.
> That's why you want MX140-B to prefer its local exit.
> 
> *not sure what Juniper and ALU were thinking when they came up with the same 
> protocol preference for eBGP and iBGP routes; there's a ton of reasons why 
> you always want to prefer the closest AS exit.

Probably the same as Cisco, which on multiple occasions has promoted
using the same administrative distance (200) for both EBGP and IBGP as
"best practice".
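
For what it's worth, Junos does let you override this per group so that the
local exit wins. A minimal sketch (group name and preference value are
assumptions, not from this thread):

```
set protocols bgp group UPSTREAM-EBGP preference 165
```

With iBGP left at the default protocol preference of 170, the eBGP-learned
path is then preferred.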

> Caveats:
> "vrf-table-label" must be enabled at the routing-instance on the MX140s - 
> just another stupidity in this script kiddie OS of Junos

You are of course free to call JunOS whatever you want. Calling JunOS a
"script kiddie OS" may not be the best way to be taken seriously.

In any case, vrf-table-label is *much* older than PIC (around 10 years,
if I remember correctly).

Steinar Haug, Nethelp consulting, sth...@nethelp.no


Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Adam Vitkovsky
> Vincent Bernat
> Sent: Wednesday, February 17, 2016 11:14 PM
>
>  17 February 2016 22:56 GMT, Adam Vitkovsky
>  :
>
> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> >> blackhole during BGP convergence, I am trying to reduce the size of the
> FIB.
> >>
> > You mentioned earlier that this is a new installation so why not use
> > routing instance for Internet which allows you to use PIC with your
> > current version of code and save you all the trouble of duct-taping the
> > solution together.
>
> You are right. I didn't understand your answer the first time as I thought 
> that
> PIC was for "programmable integrated circuit", so I thought this was a plan
> for Juniper to fix the problem with some dedicated piece of hardware.
> --
Sorry about that, I'll try to be more explicit in my future posts.

The setup is really easy


1) carve up the FIB so that it allows for multiple next-hops (in our case a 
pointer to a backup path)
set routing-options forwarding-table export lb
set policy-options policy-statement lb then load-balance per-packet


2) advertise the best external routes
set protocols bgp group MX140-IBGP advertise-external <

Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Adam Vitkovsky
> Alexander Marhold
> Sent: Thursday, February 18, 2016 9:31 AM
>
> Hi Chuck !
>
> I have followed the problem, and especially your solution, with interest,
> and I have looked into the documentation, BUT:
>
> DOCU says:
> " Before you begin:
>
> Configure the device interfaces.
> Configure OSPF or any other IGP protocol.
> Configure MPLS and LDP. <-- REALLY
> 
> Configure BGP.
> "
>
> Why do you need to enable MPLS and LDP for PIC ?
>
Well, because the Junos implementation of PIC sucks big time :)

> IMHO this is a documentation error , or do I miss something ?
>
> Regarding you suggestion of using it in a routing instance with version
> <15.1 I am not sure if that works as documentation says that it only works for
> vpnv4-BGP routes
>
> DOCU says
> "Before you begin:
>
> Configure LDP.
> Configure an IGP, either OSPF or IS-IS.
> Configure a Layer 3 VPN.
> Configure multiprotocol BGP for either an IPv4 VPN or an IPv6 VPN.
> <-- this seems to be a restriction regarding your proposed
> solution "
>
And I was hoping that Junos would finally support BGP PIC for v4 and v6 (not 
VRF-lite).


adam


Adam Vitkovsky
IP Engineer

T:  0333 006 5936
E:  adam.vitkov...@gamma.co.uk
W:  www.gamma.co.uk

This is an email from Gamma Telecom Ltd, trading as “Gamma”. The contents of 
this email are confidential to the ordinary user of the email address to which 
it was addressed. This email is not intended to create any legal relationship. 
No one else may place any reliance upon it, or copy or forward all or any of it 
in any form (unless otherwise notified). If you receive this email in error, 
please accept our apologies, we would be obliged if you would telephone our 
postmaster on +44 (0) 808 178 9652 or email postmas...@gamma.co.uk

Gamma Telecom Limited, a company incorporated in England and Wales, with 
limited liability, with registered number 04340834, and whose registered office 
is at 5 Fleet Place London EC4M 7RD and whose principal place of business is at 
Kings House, Kings Road West, Newbury, Berkshire, RG14 5BY.



Re: [j-nsp] Optimizing the FIB on MX

2016-02-18 Thread Alexander Marhold
Hi Chuck !

I have followed the problem, and especially your solution, with interest,
and I have looked into the documentation, BUT:

DOCU says:
" Before you begin:

Configure the device interfaces.
Configure OSPF or any other IGP protocol.
Configure MPLS and LDP. <-- REALLY

Configure BGP.
"

Why do you need to enable MPLS and LDP for PIC ?

IMHO this is a documentation error , or do I miss something ?

Regarding you suggestion of using it in a routing instance with version
<15.1 I am not sure if that works as documentation says that it only works
for vpnv4-BGP routes

DOCU says
"Before you begin:

Configure LDP.
Configure an IGP, either OSPF or IS-IS.
Configure a Layer 3 VPN.
Configure multiprotocol BGP for either an IPv4 VPN or an IPv6 VPN.
<-- this seems to be a restriction regarding your proposed
solution
"

Any more info on that available ?

Regards

alexander

-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of
Chuck Anderson
Sent: Wednesday, 17 February 2016 21:19
To: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Optimizing the FIB on MX

On Wed, Feb 17, 2016 at 08:51:23PM +0100, Vincent Bernat wrote:
> Being a bit unsatisfied with a pair of MX104 turning themselves as a 
> blackhole during BGP convergence, I am trying to reduce the size of 
> the FIB.
> 
> I am in a simple situation: one upstream on each router, an iBGP 
> session between the two routers. I am also receiving a default route 
> along the full feed.

Can you use Junos 15.1?  Try this:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-
bgp-pic-for-inet-inet6-lu.html


Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Vincent Bernat
 ❦ 17 February 2016 22:56 GMT, Adam Vitkovsky  :

>> Being a bit unsatisfied with a pair of MX104 turning themselves as a 
>> blackhole
>> during BGP convergence, I am trying to reduce the size of the FIB.
>>
> You mentioned earlier that this is a new installation so why not use
> routing instance for Internet which allows you to use PIC with your
> current version of code and save you all the trouble of duct-taping the
> solution together.

You are right. I didn't understand your answer the first time as I
thought that PIC was for "programmable integrated circuit", so I thought
this was a plan for Juniper to fix the problem with some dedicated piece
of hardware.
-- 
Truth is the most valuable thing we have -- so let us economize it.
-- Mark Twain

Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Adam Vitkovsky
> Vincent Bernat
> Sent: Wednesday, February 17, 2016 7:51 PM
>
> Hey!
>
> Being a bit unsatisfied with a pair of MX104 turning themselves as a blackhole
> during BGP convergence, I am trying to reduce the size of the FIB.
>
You mentioned earlier that this is a new installation, so why not use a routing 
instance for the Internet? That allows you to use PIC with your current version 
of code and saves you all the trouble of duct-taping a solution together.
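
A minimal sketch of that internet-in-a-VRF setup with "protect core" for PIC
(instance name, interface, RD/RT and peer details are all invented; untested):

```
set routing-instances INTERNET instance-type vrf
set routing-instances INTERNET interface ge-0/0/0.0
set routing-instances INTERNET route-distinguisher 65000:1
set routing-instances INTERNET vrf-target target:65000:1
set routing-instances INTERNET vrf-table-label
set routing-instances INTERNET routing-options protect core
set routing-instances INTERNET protocols bgp group UPSTREAM neighbor 192.0.2.1 peer-as 64496
```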


adam





Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Alexander Arseniev

Hello,
a/ please create an instance of type no-forwarding (default) or 
virtual-router. They both accept the "instance-import " knob.

b/ please don't use a generated route, use a true BGP 0/0 route.
You can use logical systems for testing if You lack actual physical routers.
Thx
Alex

On 17/02/2016 21:50, Vincent Bernat wrote:

  ❦ 17 February 2016 21:07 GMT, Alexander Arseniev  :


If the condition system would allow me to match a next-hop or an
interface in addition to a route, I could do:

   3. Reject any route with upstream as next-hop if there is a default
  route to upstream.

   4. Reject any route with peer as next-hop if there is a default route
  to peer.

   5. Accept everything else.

True, one cannot match on "next-hop" in "condition", only on exact
prefix+table name.
But this can be done using "route isolation" approach.
So, the overall approach is:
1/ create a separate table and leak a 0/0 route there matching on 0/0
exact + next-hop ("isolate the interested route"). Use
"instance-import" + policy.

Thanks for the suggestion. I tried to do that but was unable to create a
separate table and do the leak.

policy-options {
  rib .0 { ... }
}

In this case .0 is not recognized as a table in condition. But I
only tried with .0. From example, maybe I should have tried
.0.inet.0?

So, I tried to create a routing instance for that purpose but I did try
to use a generated route and I wasn't able to have it come up. I can try
to use import instead.

So, is a separate table defined with policy-options rib or with a
routing-instance?


Disclaimer - I haven't tested this myself.

I'll try that.



Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Vincent Bernat
 ❦ 17 February 2016 21:07 GMT, Alexander Arseniev  :

>> If the condition system would allow me to match a next-hop or an
>> interface in addition to a route, I could do:
>>
>>   3. Reject any route with upstream as next-hop if there is a default
>>  route to upstream.
>>
>>   4. Reject any route with peer as next-hop if there is a default route
>>  to peer.
>>
>>   5. Accept everything else.
>
> True, one cannot match on "next-hop" in "condition", only on exact
> prefix+table name.
> But this can be done using "route isolation" approach.
> So, the overall approach is:
> 1/ create a separate table and leak a 0/0 route there matching on 0/0
> exact + next-hop ("isolate the interested route"). Use
> "instance-import" + policy.

Thanks for the suggestion. I tried to do that but was unable to create a
separate table and do the leak.

policy-options {
 rib .0 { ... }
}

In this case .0 is not recognized as a table in condition. But I
only tried with .0. From example, maybe I should have tried
.0.inet.0?

So, I tried to create a routing instance for that purpose but I did try
to use a generated route and I wasn't able to have it come up. I can try
to use import instead.

So, is a separate table defined with policy-options rib or with a
routing-instance?

> Disclaimer - I haven't tested this myself.

I'll try that.
-- 
Nothing so needs reforming as other people's habits.
-- Mark Twain, "Pudd'nhead Wilson's Calendar"

Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Alexander Arseniev

Hello,

On 17/02/2016 19:51, Vincent Bernat wrote:

Hey!


If the condition system would allow me to match a next-hop or an
interface in addition to a route, I could do:

  3. Reject any route with upstream as next-hop if there is a default
 route to upstream.

  4. Reject any route with peer as next-hop if there is a default route
 to peer.

  5. Accept everything else.


True, one cannot match on "next-hop" in "condition", only on exact 
prefix+table name.

But this can be done using "route isolation" approach.
So, the overall approach is:
1/ create a separate table and leak a 0/0 route there matching on 0/0 
exact + next-hop ("isolate the interested route"). Use "instance-import" 
+ policy.
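
As a sketch, step 1 might look like this (policy name invented; instance name
and next-hop taken from the later steps; untested):

```
set routing-instances isolate-0/0 instance-type no-forwarding
set routing-instances isolate-0/0 routing-options instance-import leak-default
set policy-options policy-statement leak-default term 1 from instance master
set policy-options policy-statement leak-default term 1 from route-filter 0.0.0.0/0 exact
set policy-options policy-statement leak-default term 1 from next-hop 198.18.1.1
set policy-options policy-statement leak-default term 1 then accept
set policy-options policy-statement leak-default term 2 then reject
```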

2/ create condition

policy-options {
    condition default-to-upstream {
        if-route-exists {
            0.0.0.0/0;
            table isolate-0/0.inet.0;
        }
    }
}

3/ use condition to match & reject the specifics:

policy-options {
    policy-statement reject-same-nh-as-0/0 {
        term 1 {
            from {
                protocol bgp;
                route-filter 0.0.0.0/0 longer;
                condition default-to-upstream;
                next-hop 198.18.1.1;
            }
            then reject;
        }
        term 2 {
            from {
                protocol bgp;
                route-filter 0.0.0.0/0 longer;
                next-hop 198.18.1.1;
            }
            then accept;
        }
    }
}
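
To actually keep the rejected routes out of the FIB, the policy would then
presumably be applied as a forwarding-table export (again, untested):

```
set routing-options forwarding-table export reject-same-nh-as-0/0
```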

Disclaimer - I haven't tested this myself.

HTH
Thx
Alex


Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Vincent Bernat
 ❦ 17 février 2016 15:18 -0500, Chuck Anderson  :

>> Being a bit unsatisfied with a pair of MX104 turning themselves as a
>> blackhole during BGP convergence, I am trying to reduce the size of the
>> FIB.
>> 
>> I am in a simple situation: one upstream on each router, an iBGP session
>> between the two routers. I am also receiving a default route along the
>> full feed.
>
> Can you use Junos 15.1?  Try this:
>
> http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html

Thanks for the tip. I can't update right now (I am on 13.3), but I put
that on my todo list!
-- 
Make it right before you make it faster.
- The Elements of Programming Style (Kernighan & Plauger)

Re: [j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Chuck Anderson
On Wed, Feb 17, 2016 at 08:51:23PM +0100, Vincent Bernat wrote:
> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> blackhole during BGP convergence, I am trying to reduce the size of the
> FIB.
> 
> I am in a simple situation: one upstream on each router, an iBGP session
> between the two routers. I am also receiving a default route along the
> full feed.

Can you use Junos 15.1?  Try this:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html


[j-nsp] Optimizing the FIB on MX

2016-02-17 Thread Vincent Bernat
Hey!

Being a bit unsatisfied with a pair of MX104 turning themselves as a
blackhole during BGP convergence, I am trying to reduce the size of the
FIB.

I am in a simple situation: one upstream on each router, an iBGP session
between the two routers. I am also receiving a default route along the
full feed.

I have tried the simple approach of rejecting routes learned from BGP
with a combination of prefix length and AS path length:

  
https://github.com/vincentbernat/network-lab/blob/c4e7647b65fb954afbfc67378171451e967a4b9b/lab-vmx-fullview/vMX2.conf#L63-L122

I didn't try for real, but on a small lab using vMX, the FIB size is
divided by 20, which should be quite enough.
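
Condensed, that first approach boils down to something like this (the AS-path
and prefix-length thresholds here are illustrative, not the exact values from
the linked config):

```
set policy-options as-path LONG-PATH ".{6,}"
set policy-options policy-statement fib-filter term long-paths from protocol bgp
set policy-options policy-statement fib-filter term long-paths from as-path LONG-PATH
set policy-options policy-statement fib-filter term long-paths then reject
set policy-options policy-statement fib-filter term small-prefixes from protocol bgp
set policy-options policy-statement fib-filter term small-prefixes from route-filter 0.0.0.0/0 prefix-length-range /25-/32
set policy-options policy-statement fib-filter term small-prefixes then reject
set routing-options forwarding-table export fib-filter
```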

I have tried a smarter approach:

 
https://github.com/vincentbernat/network-lab/blob/c4e7647b65fb954afbfc67378171451e967a4b9b/lab-vmx-fullview/vMX1.conf#L71-L121

Unfortunately, the condition system seems not powerful enough to express
what I want:

 1. Accept the default route.

 2. Reject any small route (ge /25).

 3. Reject any route with the same next-hop as the default route.

 4. Accept everything else.

Currently, I was able to achieve this:

 3. Reject any route using upstream as next-hop (with the assumption
that we have a default route to upstream since it would come from
the same eBGP session).

 4. Accept everything else.

This is not satisfactory because if upstream becomes unavailable, a lot
of routes will be programmed in the FIB.

If the condition system would allow me to match a next-hop or an
interface in addition to a route, I could do:

 3. Reject any route with upstream as next-hop if there is a default
route to upstream.

 4. Reject any route with peer as next-hop if there is a default route
to peer.

 5. Accept everything else.

This way, only routes to peer would be put in FIB (and they are far less
numerous than routes to upstream). Eventually, those routes could be
trimmed down with prefix-length and AS path-length too.

The condition could look like this:

#v+
policy-options {
 condition default-to-upstream {
  if-route-exists {
   0.0.0.0/0;
   next-hop 192.0.2.0;
  }
 }
 condition default-to-peer {
  if-route-exists {
   0.0.0.0/0;
   next-hop 192.0.2.129;
  }
 }
}
#v-

I think that I will simply keep the first approach (just using AS
path-length and prefix-length of individual routes) but I would welcome
any comments and tips on how to optimize the FIB (notably prior work).

Thanks!
-- 
Make sure all variables are initialised before use.
- The Elements of Programming Style (Kernighan & Plauger)