Re: [j-nsp] Strange Behavior after ISSU from 13.3R8 to 17.4R1.16

2018-05-27 Thread Jeffrey Nikoletich
Scratch that. The problem still exists. ☹







*From:* Jeffrey Nikoletich 
*Sent:* May 27, 2018 07:43 PM
*To:* juniper-nsp@puck.nether.net
*Subject:* RE: Strange Behavior after ISSU from 13.3R8 to 17.4R1.16



All,



So after I sent this email I noticed I had a rogue rib-group in place. I
removed it and traffic seems fine now. I am testing and will let you know.
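For anyone who hits something similar: a stray rib-group applied to the
peering sessions can leak received routes into an unexpected table, so
forwarding lookups no longer line up with what the RIB shows. A quick way
to check for one (the prefix below is just a placeholder):

show configuration routing-options rib-groups
show configuration protocols bgp | display set | match rib-group
show route 203.0.113.0/24 table inet.0

If the routes land in a table your forwarding path doesn't use, traffic
blackholes even though the sessions look perfectly healthy.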



Apparently it helps me to email the whole group just to "hopefully" find
the answer five seconds later.



Thanks,



Jeffrey Nikoletich



*From:* Jeffrey Nikoletich 
*Sent:* May 27, 2018 07:02 PM
*To:* juniper-nsp@puck.nether.net
*Subject:* Strange Behavior after ISSU from 13.3R8 to 17.4R1.16



Hello all,



So I have been scratching my head over a weird issue I am seeing on only one
of our devices after an ISSU rollout to 17.4. It seems that none of the
peering links are passing traffic. Here is what I know so far:



   1. Peering sessions (via exchanges) connect just fine.
   2. If the exchange path is prepended, traffic flows just fine.
   3. No policy changes were made during the upgrade.
   4. When checking the looking glasses of direct peers, the next hop is
   set correctly to our IP on the exchanges.
   5. I don't believe it is an interface/card issue, as this is a trunk port
   and transit is working just fine.
   6. Routes received from the peers look fine as well.
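Given the symptoms above, the usual next step is to confirm both directions
from the CLI; a sketch, with a placeholder peer address and prefix:

show route advertising-protocol bgp 206.53.XXX.XXX
show route receive-protocol bgp 206.53.XXX.XXX
show route forwarding-table destination 203.0.113.1

The first two confirm what the RIB thinks it is exchanging with the peer;
the last shows what the PFE will actually do with the traffic.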



Here are my import and export policies. I pared them down to something
extremely simple and they still do not work:



show configuration policy-options policy-statement default-peering-out
term get-routes {
    from {
        prefix-list XXX;
    }
    then {
        community add XXX;
        community add X;
        accept;
    }
}
term from_bgp_customers {
    from {
        protocol bgp;
        as-path [ XXX-routes XXX-routes XXX-routes ];
    }
    then accept;
}
term others {
    then reject;
}
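If it helps, the CLI policy tester can show which term of this export
policy a given route matches; a quick example (the prefix is hypothetical):

test policy default-peering-out 203.0.113.0/24

Routes listed in the output matched an accept term; anything absent fell
through to the reject in term others.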



show configuration policy-options policy-statement pubpeer-in
term set-consistancy {
    then {
        metric 100;
        local-preference 200;
        community set type_pubpeer;
        next-hop peer-address;
    }
}
then accept;
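Because this import policy rewrites the next hop with next-hop
peer-address, it may be worth confirming what actually gets installed for a
received route; a sketch with placeholder addresses:

show route receive-protocol bgp 206.53.XXX.XXX 203.0.113.0/24 detail
show route 203.0.113.0/24 detail

The second command shows whether the installed next hop actually resolves
out the exchange interface.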



Here is a test BGP group I created with just a few peers:



show configuration protocols bgp group ipv4---
type external;
import pubpeer-in;
export default-peering-out;
neighbor 206.53.XXX.XXX {
    description "";
    family inet {
        unicast {
            prefix-limit {
                maximum 600;
                teardown idle-timeout 5;
            }
        }
        any {
            prefix-limit {
                maximum 1000;
            }
        }
    }
    peer-as ;
}
neighbor 206.53.XXX.XXX {
    description "";
    family inet {
        unicast {
            prefix-limit {
                maximum 600;
                teardown idle-timeout 5;
            }
        }
        any {
            prefix-limit {
                maximum 1000;
            }
        }
    }
    peer-as ;
}
neighbor 206.53.XXX.XXX {
    description ".";
    family inet {
        unicast {
            prefix-limit {
                maximum 1100;
                teardown idle-timeout 5;
            }
        }
    }
    peer-as ;
}
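To rule out a prefix-limit teardown and see each session's state in one
place, something like this should do (the peer address is a placeholder):

show bgp summary
show bgp neighbor 206.53.XXX.XXX

If a limit had tripped, the neighbor output would show the session cycling
to Idle for the configured idle-timeout.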





I have a feeling it is something simple and I am just missing it. Any ideas
or help would be appreciated. Thanks.



Jeffrey Nikoletich
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] MX204 ballpark

2018-05-27 Thread Edward Dore

On 27/05/2018, 09:28, "juniper-nsp on behalf of Vincent Bernat" 
 wrote:

 ❦ 27 May 2018 13:24 +0700, Mark Tees  :

> Not sure if it’s licensed on FIB usage but I’m trying to gain an idea on
> both full table and partial table options.

For a full FIB (or RIB?), you need the S-MX104-ADV-R2 license, whose public
price is 2. However, the limitation is not enforced (you cannot even
add the license to the system; it's just a piece of paper). This kind of
license limitation doesn't exist on the MX80 (or on any other MX from the
same era). The license can be part of a bundle (you should definitely look
at bundles for the MX104; the standalone pricing doesn't make much sense).
If you buy it separately, Juniper easily discounts licenses by at least 30%.
Personally, I wouldn't pay anything for such a license, since the MX104's
slow routing engine is unable to handle an Internet-sized FIB without
significant downtime during changes (1-2 minutes). You'll have to select
the routes you install in the FIB if you want to minimize the impact of
changes, and you'll need to stay below the licensing limit (256k routes,
I think).

I can't say anything about current pricing because I bought mine more than
3 years ago, but you should also consider whether an MX204 or an MX150
would fit your needs. The routing engines are far more capable on those
(but they are fixed-chassis boxes with only one routing engine).

Mark asked about the MX204, not the MX104.

Edward Dore 
Freethought Internet
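For the record, selecting which routes get installed in the FIB, as Vincent
describes, is normally done with a forwarding-table export policy; a
minimal sketch (the policy and prefix-list names here are made up):

policy-options {
    policy-statement fib-install {
        term keep-important {
            from prefix-list important-prefixes;
            then accept;
        }
        term drop-rest {
            then reject;
        }
    }
}
routing-options {
    forwarding-table {
        export fib-install;
    }
}

Rejected routes stay in the RIB and are still advertised to peers; they
just aren't programmed into the PFE, so you need a covering default route
for everything you leave out.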



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204 ballpark

2018-05-27 Thread Vincent Bernat
 ❦ 27 mai 2018 13:24 +0700, Mark Tees  :

> Not sure if it’s licensed on FIB usage but I’m trying to gain an idea on
> both full table and partial table options.

For full FIB (or RIB?), you need the S-MX104-ADV-R2 license whose public
price is 2. However, the limitation is not enforced (you cannot even
add the license to the system, it's just a piece of paper). This kind of
license limitation doesn't exist with the MX80 (or with any other MX
from the same era). This license can be part of a bundle (you should
definitely look at bundles for the MX104, the pricing doesn't make much
sense). If you buy it separately, Juniper easily does at least 30% on
licenses. Personnally, I wouldn't pay anything for such a license since
the MX104 slow routing engine is unable to handle an Internet-sized FIB
without important downtimes during changes (1-2 minutes). You'll have to
select the routes you install in FIB if you want to minimize impacts
during changes and you'll need to be below the licensing limit (256k
routes I think).

I can't say anything about current pricing because I've bought mine more
than 3 years ago, but you should also consider if a MX204 or a MX150
would fit your needs. The routing engines are far more capable on these
(but they are a fixed chassis with only one routing engine).
-- 
Use variable names that mean something.
- The Elements of Programming Style (Kernighan & Plauger)
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp