[j-nsp] mpls.0 doesn't show LSI as the next hop

2018-05-01 Thread Arie Vayner
Hi,

We are trying to get an MX104 to work in a setup that works today on an MX240.

We have a VRF configured with vrf-table-label, and I can see the label
being assigned as well as an LSI interface being created.
The issue is that the mpls.0 table doesn't show the LSI interface as
the next hop:

user@MX104> show route table mpls.0
16 *[VPN/0] 00:25:28
  to table vpn_public_vrf.inet.0, Pop


Whereas if we do the same on our MX240, it looks like this:
18 *[VPN/0] 4d 21:24:17
> via *lsi.2048* (vpn_public_vrf), Pop
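
For reference, the VRF is configured the same way on both boxes, roughly along
these lines (a trimmed sketch; the interface, RD and target values below are
placeholders, not the real ones):

set routing-instances vpn_public_vrf instance-type vrf
set routing-instances vpn_public_vrf interface ge-0/0/0.100
set routing-instances vpn_public_vrf route-distinguisher 65000:100
set routing-instances vpn_public_vrf vrf-target target:65000:100
set routing-instances vpn_public_vrf vrf-table-label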



Any ideas why this could be happening?

(Both routers run the same code version...)

Tnx
Arie


Re: [j-nsp] Juniper M120 - SSD Replacement

2018-05-01 Thread Alain Hebert

    Hi,

    I'm guessing Juniper's are overpriced?

    I have yet to yank mine from our MX240 demo to see if a generic one 
would suffice...


-
Alain Hebert    aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443

On 05/01/18 12:05, Juan C. Crespo R. wrote:

Hello Guys


Could you please tell me a good SSD replacement for this Routing 
Engine (RE-A-2000)?



thanks


Re: [j-nsp] migration from cisco VRF+Vrrp to the juniper ACX

2018-05-01 Thread A. Camci
Does anyone have an idea why it does not work on the ACX (VRF + VRRP)?

Br ap


On Tue 24 Apr 2018 at 10:20, A. Camci wrote:

>
> Hi Guys,
>>
>> We are migrating from the CISCO7606-S to the ACX5096.
>> But we have one customer with VRF and VRRP on the same port.
>>
>> After the migration to the ACX, the customer has no connectivity from the VRF.
>> If we switch back to the Cisco, everything works fine.
>>
>> This is a fully redundant VRF.
>> The other side is still Cisco, and all locations are now running on the backup
>> VRF.
>>
>> If we lower the priority of the VRRP on the backup VRF, we see that the
>> primary location becomes master, so VRRP does work. After switching the
>> VRRP, the customer still has only one-way traffic from the ACX. Maybe VRF + VRRP
>> doesn't work on an ACX.
>>
>> see below for the config.
>>
>> CISCO config
>>
>> vlan 3021
>> mtu 1600
>> !
>> interface Vlan3021
>>  mtu 1600
>>  ip vrf forwarding CUST_APPIE
>>  ip address 172.21.1.251 255.255.255.0
>>  vrrp 1 description CUST_APPIE-DC-centraal-pri
>>  vrrp 1 ip 172.21.1.250
>>  vrrp 1 preempt delay minimum 10
>>  vrrp 1 priority 110
>> !
>>
>> interface Te3/4
>> switchport trunk allowed vlan add 3021
>>
>> ip vrf CUST_APPIE
>>  rd 10.31.0.61:10006
>>  route-target export 65001:10006
>>  route-target import 65001:10006
>>
>>
>>  router bgp 65001
>>  address-family ipv4 vrf CUST_APPIE
>>  no synchronization
>>  redistribute static
>>  redistribute connected
>>  default-information originate
>>  exit-address-family
>>
>> ip route vrf CUST_APPIE 0.0.0.0 0.0.0.0 172.21.1.1
>>
>>
>> JUNIPER CONFIG
>> ACX Model: acx5096_ Junos: 15.1X54-D61.6
>>
>> set interfaces xe-0/0/88 description "*** CUST_APPIE***"
>> set interfaces xe-0/0/88 flexible-vlan-tagging
>> set interfaces xe-0/0/88 speed 10g
>> set interfaces xe-0/0/88 mtu 1622
>> set interfaces xe-0/0/88 encapsulation flexible-ethernet-services
>> set interfaces xe-0/0/88 ether-options no-auto-negotiation
>>
>> set interfaces xe-0/0/88 unit 3021 vlan-id 3021
>> set interfaces xe-0/0/88 unit 3021 family inet address 172.21.1.251/24
>> vrrp-group 1 virtual-address 172.21.1.250
>> set interfaces xe-0/0/88 unit 3021 family inet address 172.21.1.251/24
>> vrrp-group 1 priority 110
>> set interfaces xe-0/0/88 unit 3021 family inet address 172.21.1.251/24
>> vrrp-group 1 preempt hold-time 10
>> set interfaces xe-0/0/88 unit 3021 family inet address 172.21.1.251/24
>> vrrp-group 1 accept-data
>>
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-export term 1
>> from protocol direct
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-export term 1
>> from protocol static
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-export term 1
>> then accept
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 1
>> from protocol bgp
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 1
>> from route-filter 0.0.0.0/0 exact
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 1
>> then local-preference 150
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 1
>> then accept
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 2
>> from protocol direct
>> set policy-options policy-statement ipvpn-CUST_APPIE-ebgp-import term 2
>> then accept
>>
>> set routing-instances CUST_APPIE instance-type vrf
>> set routing-instances CUST_APPIE interface xe-0/0/88.3021
>> set routing-instances CUST_APPIE route-distinguisher 10.32.0.43:10006
>> set routing-instances CUST_APPIE vrf-target import target:65001:10006
>> set routing-instances CUST_APPIE vrf-target export target:65001:10006
>> set routing-instances CUST_APPIE vrf-table-label
>>
>> set routing-instances CUST_APPIE routing-options static route 0.0.0.0/0
>> next-hop 172.21.1.1
>>
>> set routing-instances CUST_APPIE forwarding-options dhcp-relay
>> server-group CUST_APPIE 172.21.1.1
>> set routing-instances CUST_APPIE forwarding-options dhcp-relay
>> active-server-group CUST_APPIE
>> set routing-instances CUST_APPIE forwarding-options dhcp-relay group
>> CUST_APPIE interface xe-0/0/88.3021
>> set routing-instances CUST_APPIE protocols bgp group ebgp-CUST_APPIE
>> import ipvpn-CUST_APPIE-ebgp-import
>> set routing-instances CUST_APPIE protocols bgp group ebgp-CUST_APPIE
>> export ipvpn-CUST_APPIE-ebgp-export
>>
>> set firewall family inet filter re-protect-v4 term accept-customer-vrrp
>> from protocol vrrp
>> set firewall family inet filter re-protect-v4 term accept-customer-vrrp
>> then count accept-vrrp-customer
>> set firewall family inet filter re-protect-v4 term accept-customer-vrrp
>> then accept
>> set firewall family inet filter routing-engine-traffic term mark-vrrp
>> from protocol vrrp
>> set firewall family inet filter routing-engine-traffic term mark-vrrp
>> then count mark-vrrp
>> set firewall family inet filter routing-engine-traffic term mark-vrrp
>> then forwarding-class NC1
>> set firewall family inet filter routing-engine-traffic term mark-vrrp
>> then

[j-nsp] Juniper M120 - SSD Replacement

2018-05-01 Thread Juan C. Crespo R.

Hello Guys


Could you please tell me a good SSD replacement for this Routing Engine 
(RE-A-2000)?



thanks


Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-05-01 Thread Vincent Bernat
 ❦  1 May 2018 14:30 GMT, Michael Hare  :

> chassis {
> afeb {
> slot 0 {
> inline-services {
> flow-table-size {
> ipv4-flow-table-size 7;
> ipv6-flow-table-size 7;
> }
> }
> }
> }
> }

On 15.1R6, I am using this without any issue:

afeb {
    slot 0 {
        inline-services {
            flow-table-size {
                ipv4-flow-table-size 10;
                ipv6-flow-table-size 5;
            }
        }
    }
}
-- 
Don't sacrifice clarity for small gains in "efficiency".
- The Elements of Programming Style (Kernighan & Plauger)


Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-05-01 Thread Michael Hare
Alain,

Do you want to collect IPv6?  You are probably past 14.X code on the MX104, but I 
observed that I was unable to change the ipv6-flow-table-size at all (including 
after a reboot).  I was able to set flow-table-size in 16.X, but my load average 
on 16.X on the MX104 is pretty terrible; it seems like I got all of the performance 
penalty of threading in 16.X without an additional core unlocked on the MX104 
RE.  Since 14.X is near EOL I didn't harass JTAC.

Thanks and a nod to Olivier; I hadn't seen "flex-flow-sizing" before. It seems 
like that is what I really wanted, rather than the explicit flow-table-size commands.

Abbreviated code example below.

chassis {
    afeb {
        slot 0 {
            inline-services {
                flow-table-size {
                    ipv4-flow-table-size 7;
                    ipv6-flow-table-size 7;
                }
            }
        }
    }
}
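
For comparison, the flex-flow-sizing variant Olivier mentioned would presumably 
replace the explicit sizes with just the following (a sketch; I haven't tested 
it here yet):

chassis {
    afeb {
        slot 0 {
            inline-services {
                flex-flow-sizing;
            }
        }
    }
}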

-Michael

>>-Original Message-
>>From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
>>Of Alain Hebert
>>Sent: Tuesday, May 01, 2018 8:23 AM
>>To: juniper-nsp@puck.nether.net
>>Subject: Re: [j-nsp] MX104 and NetFlow - Any horror story to share?
>>
>>     Yeah I had the feeling I would break those MX's.
>>
>>     At this point it is worth it to rebuild our vMX lab to test the
>>IPFIX variant...
>>
>>     Thanks for the input.
>>
>>
>>     As for routing we have a pretty good mix of T1/T2 providers and we
>>rarely drop sessions so it is providing a pretty good uptime...  And
>>that's why we got a pair of MX960 coming down anytime this year.
>>
>>
>>     PS: Unrelated quote - Yeah fat fingers sorry list.
>>
>>-
>>Alain Hebert    aheb...@pubnix.net
>>PubNIX Inc.
>>50 boul. St-Charles
>>P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
>>Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443
>>
>>On 04/30/18 19:41, Olivier Benghozi wrote:
>>> Hi Alain,
>>>
>>> While you seem to already be kind of suicidal (5 full-table peers on an
>>> MX104), on an MX you must not use NetFlow v9 (CPU based) but use inline
>>> IPFIX (Trio / PFE based).
>>> I suppose that NetFlow v9 on an MX104 could quickly become an interesting
>>> horror story with real traffic due to its ridiculously slow CPU, by the way.
>>> With inline IPFIX it should just take some more RAM, and FIB update could
>>> be a bit slower.
>>>
>>> By the way, on the MX104 you don't configure «fpc» (bigger MXs) or «tfeb»
>>> (MX80) in the chassis hierarchy, but «afeb», so you can remove your fpc line
>>> and fix your tfeb line.
>>>
>>> So you'll need something like that in services, instead of version9:
>>> set services flow-monitoring version-ipfix template ipv4 template-refresh-
>>rate
>>> set services flow-monitoring version-ipfix template ipv4 option-refresh-
>>rate
>>> set services flow-monitoring version-ipfix template ipv4 ipv4-template
>>>
>>> And these too, to allocate some memory for the flows in the Trio and
>>> to define how it will speak with the collector:
>>> set chassis afeb slot 0 inline-services flex-flow-sizing
>>> set forwarding-options sampling instance NETFLOW-SI family inet output
>>inline-jflow source-address a.b.c.d
>>>
>>> Of course you'll remove the line with «output flow-server  source
>>».
>>>
>>>
>>>
>>> I don't see why you quoted the mail from Brijesh Patel about the Routing
>>licences, by the way :P
>>>
>>>
>>> Olivier
>>>
 On 30 apr. 2018 at 21:34, Alain Hebert  wrote :


 Anyone have any horror stories with something similar to what we're about
>>to do?
  We're planning to turn up the following NetFlow config (see below) on
>>our MX104s (while we wait for our new MX960 =D); it worked well with
>>everything else (SRX mostly), but the "set chassis" lines are making us wonder
>>how likely they are to render those systems unstable, in the short
>>and long term.

  Thanks again for your time.

  PS: We're using Elastiflow, and it's working great for our needs atm.


 -- A bit of context

  Model: mx104
  Junos: 16.1R4-S1.3

  They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 load
>>average, and 700MB mem free.


 -- The Netflow config

 *set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI*

 *set chassis fpc 1 sampling-instance NETFLOW-SI*

 set services flow-monitoring version9 template FM-V9 option-refresh-
>>rate seconds 25
 set services flow-monitoring version9 template FM-V9 template-refresh-
>>rate seconds 15
 set services flow-monitoring version9 template FM-V9 ipv4-template

 set forwarding-options sampling instance NETFLOW-SI input rate 1 run-
>>length 0
 set forwarding-options sampling instance NETFLOW-SI family inet output
>>flow-server  port 2055
 set forwarding-options sampling instance NETFLOW-SI family inet output
>>flow-server  sour

Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-05-01 Thread Alain Hebert

    Yeah I had the feeling I would break those MX's.

    At this point it is worth it to rebuild our vMX lab to test the 
IPFIX variant...


    Thanks for the input.


    As for routing, we have a pretty good mix of T1/T2 providers and we 
rarely drop sessions, so it is providing pretty good uptime...  And 
that's why we have a pair of MX960s coming down sometime this year.



    PS: Unrelated quote - Yeah fat fingers sorry list.

-
Alain Hebert    aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443

On 04/30/18 19:41, Olivier Benghozi wrote:

Hi Alain,

While you seem to already be kind of suicidal (5 full-table peers on an 
MX104), on an MX you must not use NetFlow v9 (CPU based) but use inline IPFIX 
(Trio / PFE based).
I suppose that NetFlow v9 on an MX104 could quickly become an interesting horror 
story with real traffic due to its ridiculously slow CPU, by the way.
With inline IPFIX it should just take some more RAM, and FIB update could be a 
bit slower.

By the way, on the MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80) in 
the chassis hierarchy, but «afeb», so you can remove your fpc line and fix your 
tfeb line.
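
Concretely, on the MX104 the chassis side would then look something like this 
(a sketch, reusing your sampling-instance name):

set chassis afeb slot 0 sampling-instance NETFLOW-SI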

So you'll need something like this in services, instead of version9:
set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
set services flow-monitoring version-ipfix template ipv4 ipv4-template

And these too, to allocate some memory for the flows in the Trio and to 
define how it will speak with the collector:
set chassis afeb slot 0 inline-services flex-flow-sizing
set forwarding-options sampling instance NETFLOW-SI family inet output 
inline-jflow source-address a.b.c.d

Of course you'll remove the line with «output flow-server  source ».



I don't see why you quoted the mail from Brijesh Patel about the Routing 
licences, by the way :P


Olivier


On 30 Apr 2018 at 21:34, Alain Hebert  wrote:


Anyone have any horror stories with something similar to what we're about to do?
 We're planning to turn up the following NetFlow config (see below) on our MX104s 
(while we wait for our new MX960 =D); it worked well with everything else (SRX mostly), 
but the "set chassis" lines are making us wonder how likely they are to 
render those systems unstable, in the short and long term.

 Thanks again for your time.

 PS: We're using Elastiflow, and it's working great for our needs atm.


-- A bit of context

 Model: mx104
 Junos: 16.1R4-S1.3

 They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 load 
average, and 700MB mem free.


-- The Netflow config

*set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI*

*set chassis fpc 1 sampling-instance NETFLOW-SI*

set services flow-monitoring version9 template FM-V9 option-refresh-rate 
seconds 25
set services flow-monitoring version9 template FM-V9 template-refresh-rate 
seconds 15
set services flow-monitoring version9 template FM-V9 ipv4-template

set forwarding-options sampling instance NETFLOW-SI input rate 1 run-length 0
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server 
 port 2055
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server 
 source 
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server 
 version9 template FM-V9
set forwarding-options sampling instance NETFLOW-SI family inet output inline-jflow 
source-address 

set interfaces  unit  family inet sampling input
set interfaces  unit  family inet sampling output



Re: [j-nsp] Difference between MPC4E-3D-32XGE-RB and MPC4E-3D-32XGE-SFPP ?

2018-05-01 Thread Nikolas Geyer
Can’t remember the exact numbers but the non-RB card is targeted at MPLS core 
applications where it’s just high density label switching. Won’t take a full 
routing table and has reduced L3VPN numbers. Ask your AM/SE for the specifics.

Sent from my iPhone

> On 30 Apr 2018, at 10:34 am, Brijesh Patel  wrote:
> 
> Hello Members,
> 
> Any idea what the difference is between MPC4E-3D-32XGE-RB and
> MPC4E-3D-32XGE-SFPP?
> 
> The Juniper PDF says:
> 
> MPC4E-3D-32XGE-SFPP 32x10GbE, full scale L2/L2.5 and *reduced scale L3
> features*
> and
> MPC4E-3D-32XGE-RB 32XGbE SFPP ports, full scale L2/L2.5,
> * L3 and L3VPN features*
> 
> Now the question is: *what are reduced-scale L3 features and L3VPN features?*
> 
> *Many Thanks,*
> 
> *Brijesh Patel*