Re: [j-nsp] A low number of Firewall filters reducing the bandwidth capacity.

2024-09-11 Thread Tom Beecher via juniper-nsp
>
> # show
> term term1-match {
>  from {
>  destination-address {
>  131.0.245.143/32;
>  }
>  packet-length 32-63;
>  protocol 6;
>  source-port 443;
>  tcp-flags "syn & ack & !fin & !rst & !psh";
>  }
>  then {
>  count Corero-auto-block-term1-match;
>  port-mirror;
>  next term;
>  }
> }
> term term1-action {
>  from {
>  destination-address {
>  131.0.245.143/32;
>  }
>  packet-length 32-63;
>  protocol 6;
>  source-port 443;
>  tcp-flags "syn & ack & !fin & !rst & !psh";
>  }
>  then {
>  count Corero-auto-block-term1-discard;
>  discard;
>  }
> }
>

This is still terribly inefficient. The traffic only needs to be matched
once, and you can do the count / mirror / discard in the same stanza.

 then {
 count Corero-auto-block-term1;
 port-mirror;
 discard;
 }
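
The full combined term would look something like this (a sketch reusing
the match conditions from the original filter):

 term term1 {
     from {
         destination-address {
             131.0.245.143/32;
         }
         packet-length 32-63;
         protocol 6;
         source-port 443;
         tcp-flags "syn & ack & !fin & !rst & !psh";
     }
     then {
         count Corero-auto-block-term1;
         port-mirror;
         discard;
     }
 }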

I would assume that the Corero product gives you the option to mirror *OR*
not, and then accept *OR* discard, so whoever coded this just makes one
stanza for each option selected by the user, which is, HOLY SHIT, stupid.
Juniper folks who I know are here :  you should REALLY get them to fix this
if you're going to keep pushing that partnership. :)

Trying to replicate an SRX doing heavy SPI on Trio chips can sorta work
depending on the environment, but usually isn't going to be a good idea.
You are 100% not going to maintain line rate when you start checking /32s
and TCP flags. Doing too much is going to crush the chip, as you saw.

Also, a couple people mentioned the FLT block. I would agree that might
have been a good idea, but I am 95% sure the FLT only supports the
traditional 5-tuple, so this couldn't have been loaded there anyways.


On Tue, Sep 10, 2024 at 3:57 PM Timur Maryin via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> On 26-Aug-24 15:43, Gustavo Santos via juniper-nsp wrote:
> > Awesome, thanks for the info!
> > Rules are like the one below.
> > after adjusting the detection engine to handle as /24 network instead of
> > /32 hosts the issue is gone..
> > As you said the issue was not caused by pps as the attack traffic was
> just
> > about 30Mpps and with adjusted rules to /24 networks
> > there were not more dropped packets from PFE.
> >
> > Did you know or have how to check the PPE information that should show
> what
> > may have happened?
>
>
> EA utilization monitoring might not be straightforward at first look,
> but we have internal tools (scripts) which print the data in a nice manner.
> JTAC may be able to share those.
>
> But the key point here is filter optimization.
> The filter can be re-arranged so that the PPE spends fewer cycles
> processing each packet.
>
>
> > Below a sample rule that was generated ( about 300 of them via netconf
> that
> > caused the slowdown).
> >
> >set term e558d83516833f77dea28e0bd5e65871-match from
> > destination-address 131.0.245.143/32
> >  set term e558d83516833f77dea28e0bd5e65871-match from
>
>
> Converting (and shortening the long term names) to something more readable:
>
> # show
> term term1-match {
>  from {
>  destination-address {
>  131.0.245.143/32;
>  }
>  packet-length 32-63;
>  protocol 6;
>  source-port 443;
>  tcp-flags "syn & ack & !fin & !rst & !psh";
>  }
>  then {
>  count Corero-auto-block-term1-match;
>  port-mirror;
>  next term;
>  }
> }
> term term1-action {
>  from {
>  destination-address {
>  131.0.245.143/32;
>  }
>  packet-length 32-63;
>  protocol 6;
>  source-port 443;
>  tcp-flags "syn & ack & !fin & !rst & !psh";
>  }
>  then {
>  count Corero-auto-block-term1-discard;
>  discard;
>  }
> }
>
> As you can see, the match conditions are duplicated across the two terms.
> I suspect additional optimization is possible in your filter.
>
> General recommendations (not rules to follow) are:
>
> - group similar terms
> - group matches on the same protocol
> - instead of a large number of "small" terms for different hosts, make
> one term with a list of them (see the sketch below)
> - try to arrange terms by tuple matching: start with 2-tuple terms, then
> 3-tuple, and so on
> - where possible, put the terms that will be hit by most of the traffic
> higher up
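>
> As a sketch of the third recommendation above (host addresses are
> hypothetical, drawn from 192.0.2.0/24 for illustration), one term can
> cover many hosts:
>
> term blocked-hosts {
>     from {
>         destination-address {
>             192.0.2.1/32;
>             192.0.2.7/32;
>             192.0.2.9/32;
>         }
>         protocol 6;
>         source-port 443;
>     }
>     then {
>         count blocked-hosts;
>         discard;
>     }
> }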
>
>
> > As it is a "joint" solution by Corero / Juniper, they were supposed to
> know
> > how to optimize or know platform limitations.
> > Opened a Case with Corero to verify if they know something about that.
>
>
> They are supposed to, but it's quite possible that those who develop
> such solutions are not aware of all the PFE internals.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] ifstrace log filling up with debug output

2024-07-18 Thread Tom Beecher via juniper-nsp
I don't see this behavior in 20.X trains, so possibly something introduced
in 21.

That being said, I think the concern about SSDs dying because of write
volume is pretty overdramatic. The volume of logs written for everything
else is certainly much higher than this.

Worth opening a case on for sure, but this isn't going to kill drives
demonstrably faster.

On Thu, Jul 18, 2024 at 7:30 AM Joerg Staedele via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi,
>
> yes, it's the same. And no one seems to care about it? I mean, it will
> constantly write data to the local storage and may cause the flash to
> fail ...
>
> Very sad ☹
>
> Regards
>  Joerg
>
> -Original Message-
> From: juniper-nsp  On Behalf Of
> Timur Maryin via juniper-nsp
> Sent: Thursday, April 4, 2024 4:14 PM
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] ifstrace log filling up with debug output
>
> Hi Joerg,
>
> On 23-Mar-24 15:24, Joerg Staedele via juniper-nsp wrote:
> > Hi,
> >
> > No traceoptions ... and meanwhile I've tested even with no configuration
> and after a zeroize it also does the same. I guess it’s a bug. I will try
> another version (maybe some 19.x)
>
>
> And I believe it will be the same, because the ifstraced daemon has been
> enabled by default on SMP systems for a while already.
>
>
> How much data do you get in your logs?
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Logging for shell sessions

2024-07-08 Thread Tom Beecher via juniper-nsp
>
> If you have Cisco HTTS or Juniper ACP or the like, where you get named
> engineers, then you can develop a mutual trust and give those
> engineers access to your network.
>

To each their own, but I'm with Jared on this. No way would a vendor have
any direct access. The most permissive I'd accept would be a screen share
giving temp terminal control, but even then on a production device I REALLY
have to trust the person.

I've had vendor DEs tell me a given command sequence is 'completely safe on
a production device' when working a case, only to find out it actually
wasn't. Not something I want them to discover on me at 2am when
nobody knows they're doing it.

On Mon, Jul 8, 2024 at 2:56 AM Saku Ytti via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> This depends greatly on how you've set up your support.
>
> If you have Cisco HTTS or Juniper ACP or the like, where you get named
> engineers, then you can develop a mutual trust and give those
> engineers access to your network.
>
> But if you're going through a normal process, perhaps additional care
> is warranted.
>
> On Sun, 7 Jul 2024 at 19:34, Jared Mauch  wrote:
> >
> > I don't trust my vendors to run commands on my devices, it's not
> > personal.  If there is a diagnostic that they want run, they need to be
> > able to articulate the operational risk, or we may want to validate in a
> > virtual or real physical router.
> >
> > - Jared
> >
> > On Sun, Jul 07, 2024 at 11:07:48AM +0300, Saku Ytti via juniper-nsp
> wrote:
> > > For things like TAC use, what I've previously done is made a vendor
> > > shell, where the shell program is screen instead of shell, and screen
> > > is set up to log.
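> > >
> > > A minimal sketch of that setup (account name and paths hypothetical;
> > > the -Logfile flag needs a reasonably recent GNU screen):
> > >
> > > # /etc/passwd entry whose login shell is a logging wrapper
> > > tac:x:1001:1001:Vendor TAC:/home/tac:/usr/local/bin/tac-shell
> > >
> > > # /usr/local/bin/tac-shell
> > > #!/bin/sh
> > > # log everything the vendor session does, then hand over a shell
> > > exec /usr/bin/screen -L -Logfile "/var/log/tac/$(date +%s).log" /bin/sh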
> > >
> > >
> > > On Sat, 6 Jul 2024 at 16:50, Job Snijders  wrote:
> > > >
> > > > Perhaps it’s just about wanting to keep track “what happened?!?”
> > > >
> > > > For such a scenario, consider conserver
> > > > https://www.conserver.com/docs/console.man.html and script
> > > > http://man.openbsd.org/script to store the terminal interactions
> > > >
> > > > Assume untrusted users can probably escape such environments
> > > >
> > > > Kind regards,
> > > >
> > > > Job
> > >
> > >
> > >
> > > --
> > >   ++ytti
> > > ___
> > > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> > --
> > Jared Mauch  | pgp key available via finger from ja...@puck.nether.net
> > clue++;  | http://puck.nether.net/~jared/  My statements are only
> mine.
>
>
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP timer

2024-04-27 Thread Tom Beecher via juniper-nsp
>
> I also find that BFD can cause more problems than it fixes if you go too
> aggressive with it !
>

Concur here. BFD has its uses in specific circumstances, but it's almost
always much better to rely on interface state change and hold-time up FOO.
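
As a sketch of the hold-time approach (interface name and values
illustrative; units are milliseconds):

interfaces {
    xe-0/0/0 {
        /* delay link-up events to damp a flapping link; react to down instantly */
        hold-time up 180000 down 0;
    }
}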


On Sat, Apr 27, 2024 at 6:34 AM Sean Clarke via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi Lee
>
> Would Flap Damping fix it for you ?
>
> I also find that BFD can cause more problems than it fixes if you go too
> aggressive with it !
>
> regards
>
> Sean
>
>
> On 27-Apr-24 9:44 AM, Lee Starnes via juniper-nsp wrote:
> > Hello everyone,
> >
> > Having difficulty finding a way to prevent BGP from re-establishing
> after a
> > BFD down detect. I am looking for a way to keep the session from
> > re-establishing for a configured amount of time (say 5 minutes) to ensure
> > we don't have a flapping session for a link having issues.
> >
> > We asked JTAC, but they came back with the reverse, which would keep
> the
> > session up for a certain amount of time before it drops (Not what we
> want).
> >
> > Is there a way to do this? We are using MX204 routers and the latest
> > 23.4R1.9 Junos.
> >
> > Best,
> >
> > -Lee
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] L3VPNs and on-prem DDoS scrubbing architecture

2024-04-03 Thread Tom Beecher via juniper-nsp
>
> but a BGP-LU solution exists even for this problem.
>

My first thought was also to use BGP-LU.

On Wed, Apr 3, 2024 at 2:58 AM Saku Ytti via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> On Wed, 3 Apr 2024 at 09:45, Saku Ytti  wrote:
>
> > Actually I think I'm confused. I think it will just work. Because even
> > as the EgressPE does IP lookup due to table-label, the IP lookup still
> > points to egressMAC, instead looping back, because it's doing it in
> > the CleanVRF.
> > So I think it just works.
>
> > routing-options {
> >   interface-routes {
> > rib-groups {
> >   cleanVRF {
> > import-rib [ inet.0 cleanVRF.inet.0 ];
> > import-policy cleanVRF:EXPORT;
> >  
>
> This isn't exactly correct. You need to reference the cleanVRF rib-group
> under interface-routes and close the braces.
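>
> A corrected sketch of that stanza, with the rib-group defined at the
> routing-options level and referenced from interface-routes:
>
> routing-options {
>     rib-groups {
>         cleanVRF {
>             import-rib [ inet.0 cleanVRF.inet.0 ];
>             import-policy cleanVRF:EXPORT;
>         }
>     }
>     interface-routes {
>         rib-group inet cleanVRF;
>     }
> }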
>
> Anyhow I'm 90% sure this will just work and pretty sure I've done it.
> The confusion I had was about the scrubbing route that on the
> clean-side is already host/32. For this, I can't figure out a cleanVRF
> solution, but a BGP-LU solution exists even for this problem.
>
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP route announcements and Blackholes

2024-03-27 Thread Tom Beecher via juniper-nsp
I've read this a couple times. Also confused, but I think this is what you
are saying :

- You have a /19 aggregate that is announced via BGP to upstreams.
- You use an upstream RTBH service that will sink a particular destination
via a BGP announcement to a particular peer.
- When you add a /32 discard (presumably to send to the RTBH peer ) , your
aggregate is 'impacted' somehow.

There's not enough info to fully grasp what is going on here. (What does
"impacted" mean?) But a couple of points.

1. Aggregate routes are intentionally NH discard all the time. The intent
is that traffic for the entire /19 would come to this router, and that you
will have more specific routes for parts of the /19 you're actually using.
Anything not covered by a more specific would just be dropped. You can
change this if you want, but normally don't need to.
2. This may be part of your issue.

x.x.0.0/19 *[OSPF/125] 5d 19:26:19, metric 20, tag 0
>  to 10.20.20.3 via ae0.0
[Aggregate/130] 5d 20:18:36
   Reject

Aggregates are not installed unless there is a contributing route present
as well. A contributing route must be a route with a longer mask covered by
the aggregate; it can't be an exact match. This means that your OSPF /19 is
NOT the contributor to the aggregate; it must be something else. (Also,
OSPF/125?)

If I had to guess, you're doing something here that is impacting the
contributing routes, causing the aggregate to disappear.
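
For illustration, the relevant pieces would look something like this
(addresses anonymized as in the thread; the /32 static discard is both the
RTBH trigger route and a valid contributor to the aggregate):

routing-options {
    aggregate {
        /* next hop defaults to reject */
        route x.x.0.0/19;
    }
    static {
        route x.x.22.12/32 discard;
    }
}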

On Tue, Mar 19, 2024 at 1:43 PM Lee Starnes via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hello Juniper gurus. I am seeing an issue where we have a carrier that does
> RTBH via BGP announcement rather than community strings. This is done via
> BGP peer to a blackhole BGP router/server.
>
> My issue here is that our aggregate IP block that is announced to our
> backbone providers gets impacted when creating a /32 static discard route
> to announce to that blackhole peer.
>
> The blackhole peer does receive the /32 announcement, but the aggregate
> route also becomes discarded and thus routes to the other peers stop
> working.
>
> Been trying to determine just how to accomplish this function without
> killing all routes.
>
> So we have several /30 to /23 routes within our /19 block that are
> announced via OSPF from our switches to the routers. The routers aggregate
> these to the /19 to announce the entire larger block to the backbone
> providers.
>
> The blackhole peer takes routes down to a /32 for mitigation of an attack.
> If we add a static route as "route x.x.22.12/32 discard" we get:
>
> show route x.x.22.10
>
> inet.0: 931025 destinations, 2787972 routes (931025 active, 0 holddown, 0
> hidden)
> @ = Routing Use Only, # = Forwarding Use Only
> + = Active Route, - = Last Active, * = Both
>
> x.x.0.0/19 *[OSPF/125] 5d 19:26:19, metric 20, tag 0
> >  to 10.20.20.3 via ae0.0
> [Aggregate/130] 5d 20:18:36
>Reject
>
>
> While we see the more specific route as discard:
>
> show route x.x.22.12
>
> inet.0: 931022 destinations, 2787972 routes (931022 active, 0 holddown, 0
> hidden)
> @ = Routing Use Only, # = Forwarding Use Only
> + = Active Route, - = Last Active, * = Both
> x.x.22.12/32*[Static/5] 5d 20:20:07
>Discard
>
>
>
> Does anyone have a working config for this type of setup that might be able
> to share some tips or the likes on what I need to do or what I'm doing
> wrong?
>
> Best,
>
> -Lee
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-09 Thread Tom Beecher via juniper-nsp
>
> No one is saying that cRPD isn't the future, just that there are a lot
> of existing deployments with vRR, which are run with some success, and
> the entire stability of the network depends on it. Whereas cRPD is a
> newer entrant, and early on back when I tested it, it was very feature
> incomplete in comparison.
> So those who are already running vRR, and are happy with it, changing
> to cRPD just to change to cRPD is simply bad risk. Many of us don't
> care about DRAM of vCPU, because you only need a small number of RRs,
> and DRAM/vCPU grows on trees. But we live in constant fear of the
> entire RR setup blowing up, so motivation for change needs to be solid
> and ideally backed by examples of success in a similar role in your
> circle of people.
>

Completely fair, yes. My comments were mostly aimed at a vMX/cRPD
comparison. I probably wasn't clear about that. Completely agree that it
doesn't make much sense to move from an existing vRR to cRPD just because.
For a greenfield thing I'd certainly lean cRPD over VRR at least in
planning. Newer cRPD has definitely come a long way relative to older. (
Although I haven't had reason or cycles to really ride it hard and see
where I can break it yet. :) )



On Fri, Feb 9, 2024 at 3:51 AM Saku Ytti  wrote:

> On Thu, 8 Feb 2024 at 17:11, Tom Beecher via juniper-nsp
>  wrote:
>
> > For any use cases that you want protocol interaction, but not substantive
> > traffic forwarding capabilities , cRPD is by far the better option.
>
> No one is saying that cRPD isn't the future, just that there are a lot
> of existing deployments with vRR, which are run with some success, and
> the entire stability of the network depends on it. Whereas cRPD is a
> newer entrant, and early on back when I tested it, it was very feature
> incomplete in comparison.
> So those who are already running vRR, and are happy with it, changing
> to cRPD just to change to cRPD is simply bad risk. Many of us don't
> care about DRAM of vCPU, because you only need a small number of RRs,
> and DRAM/vCPU grows on trees. But we live in constant fear of the
> entire RR setup blowing up, so motivation for change needs to be solid
> and ideally backed by examples of success in a similar role in your
> circle of people.
>
>
> --
>   ++ytti
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Tom Beecher via juniper-nsp
>
> Is the same true for VMware?
>

Never tried it there myself.

> be able to run a
> solid software-only OS than be a test-bed for cRPD in such a use-case.


AFAIK, cRPD is part of the same build pipeline as 'full' JUNOS, so if
there's a bug in any given version, it will catch you on Juniper's metal,
or your own metal for vMX, or cRPD (assuming said bug is not hardware
dependent/related).

On Thu, Feb 8, 2024 at 10:21 AM Mark Tinka  wrote:

>
>
> On 2/8/24 17:10, Tom Beecher wrote:
>
> >
> > For any use cases that you want protocol interaction, but not
> > substantive traffic forwarding capabilities , cRPD is by far the
> > better option.
> >
> > It can handle around 1M total RIB/FIB using around 2G RAM, right in
> > Docker or k8. The last version of vMX I played with required at least
> > 5G RAM / 4 cores to even start the vRE and vPFEs up, plus you have to
> > do a bunch of KVM tweaking and customization, along with NIC driver
> > fun. All of that has to work right just to START the thing, even if
> > you have no intent to use it for forwarding. You could have cRPD up in
> > 20 minutes on even a crappy Linux host. vMX has a lot more overhead.
>
> Is the same true for VMware?
>
> I had a similar experience trying to get CSR1000v on KVM going back in
> 2014 (and Junos vRR, as it were). Gave up and moved to CSR1000v on
> VMware where it was all sweeter. Back then, vRR did not support
> VMware... only KVM.
>
> On the other hand, if you are deploying one of these as an RR, hardware
> resources are going to be the least of your worries. In other words,
> some splurging is in order. I'd rather do that and be able to run a
> solid software-only OS than be a test-bed for cRPD in such a use-case.
>
> Mark.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Tom Beecher via juniper-nsp
>
> I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
> thing) seems to make more sense.
>

For any use cases that you want protocol interaction, but not substantive
traffic forwarding capabilities , cRPD is by far the better option.

It can handle around 1M total RIB/FIB using around 2G RAM, right in Docker
or k8. The last version of vMX I played with required at least 5G RAM / 4
cores to even start the vRE and vPFEs up, plus you have to do a bunch of
KVM tweaking and customization, along with NIC driver fun. All of that has
to work right just to START the thing, even if you have no intent to use it
for forwarding. You could have cRPD up in 20 minutes on even a crappy Linux
host. vMX has a lot more overhead.
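
As a rough sketch of how light that is (image tag hypothetical, volume
layout per Juniper's documented pattern):

docker run --rm --detach --name crpd01 -h crpd01 --privileged \
    -v crpd01-config:/config -v crpd01-varlog:/var/log \
    -it crpd:23.4R1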



On Thu, Feb 8, 2024 at 3:13 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 2/8/24 09:50, Roger Wiklund via juniper-nsp wrote:
>
> > Hi
> >
> > I'm curious, when moving from vRR to cRPD, how do you plan to
> manage/setup
> > the infrastructure that cRPD runs on?
>
> I run cRPD on my laptop for nothing really useful apart from testing
> configuration commands, e.t.c.
>
> I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
> thing) seems to make more sense.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Thanks for all the fish

2024-01-11 Thread Tom Beecher via juniper-nsp
>
> I am aware of a few major orders of the ACX7024 that Juniper are working
> on. Of course, none of it will become materially evidential until the
> end of 2024. That said, I think HP will give the box a chance, as there
> is a market for it. They might just put a time line on it.
>

I doubt this deal closes before Q4, and neither party is legally allowed to
do anything prior to close under the assumption it will. So nothing really
will change near-ish term.

On Thu, Jan 11, 2024 at 6:25 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 1/11/24 02:56, Chris Kawchuk via juniper-nsp wrote:
>
> > Shall we start taking bets on what stays, and what goes?
>
> I'm glad that Rami gets to stay as CEO for the networking side of
> things. Good lad, that...
> > Again, ACX was never a competitor to the ASR920 which I know Mr Tinka
> was very fond of. And the NCS540 "is the new ASR920”. There’s some long
> roads ahead for JNPR to wrestle back some of that marketshare.
>
> The whole Metro-E industry is currently a balls-up. All vendors seem to
> have met at a secret location that served 20-year old wine and agreed
> not to pursue any Metro-E platforms built around custom silicon.
>
> So really, what you are buying from either Cisco, Juniper, Nokia, Arista
> or Arrcus will be code maturity. No other differentiator.
>
> > ACX also did a ‘reboot’ of the product line in the 7000-series when they
> went Jericho, versus ACX5000 which (correct me if I’m wrong) that was
> QFX/Trident/Trident+ based and earlier ACX series which were
> $no-idea-i-didnt-look-very-hard-at-them…. so its almost “a new product”
> which may not have a lot of customer nor market traction; thus easier to
> kill off. Yes — even though previous generations of ACX did exist and
> likely had some customers..somewhere…., I know of absolutely nobody that
> bought them nor used them in anger for a large Metro-E/MPLS/eVPN/SR network
> role.
> >
> > I'm happy to be proven wrong on ACX; as I don’t like the idea of handing
> an entire market segment to a single vendor.
>
> I am aware of a few major orders of the ACX7024 that Juniper are working
> on. Of course, none of it will become materially evidential until the
> end of 2024. That said, I think HP will give the box a chance, as there
> is a market for it. They might just put a time line on it.
>
> And for once in Juniper's history, they are beginning to take the
> Metro-E network a little seriously, although probably a tad later than
> they should have.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Thanks for all the fish

2024-01-10 Thread Tom Beecher via juniper-nsp
>
> HPE will turn Juniper just like they turn 3com.
>

3Com's death started almost a decade before HP acquired them. They were
pretty much dead by the time that happened.



On Wed, Jan 10, 2024 at 2:38 PM Alexandre Figueira Guimaraes via
juniper-nsp  wrote:

> HPE will turn Juniper just like they turn 3com.
>
> you know the results.
>
>
>
> att
> Alexandre
>
>
>
>
>
>
> 
> From: juniper-nsp  on behalf of Aaron
> Gould via juniper-nsp 
> Sent: Wednesday, January 10, 2024 16:30
> To: juniper-nsp@puck.nether.net 
> Subject: Re: [j-nsp] Thanks for all the fish
>
>
> https://newsroom.juniper.net/news/news-details/2024/HPE-to-Acquire-Juniper-Networks-to-Accelerate-AI-Driven-Innovation/
>
> an MX with an HP label on it will seem so weird
>
>
> On 1/9/2024 2:55 AM, Saku Ytti via juniper-nsp wrote:
> > What do we think of HPE acquiring JNPR?
> >
> >
> > I guess it was given that something's gotta give, JNPR has lost to
> > the dollar as an investment for more than 2 decades, which is not
> > sustainable in the way we model our economy.
> >
> > Out of all possible outcomes:
> > - JNPR suddenly starts to grow (how?)
> > - JNPR defaults
> > - JNPR gets acquired
> >
> > It's not the worst outcome, and from who acquires them, HPE isn't the
> > worst option, nor the best. I guess the best option would have been,
> > several large telcos buying it through a co-owned sister company, who
> > then are less interested in profits, and more interested in having a
> > device that works for them. Worst would probably have been Cisco,
> > Nokia, Huawei.
> >
> > I think the main concern is that SP business is kinda shitty business,
> > long sales times, low sales volumes, high requirements. But that's
> > also the side of JNPR that has USP.
> >
> > What is the future of NPU (Trio) and Pipeline (Paradise/Triton), why
> > would I, as HP exec, keep them alive? I need JNPR to put QFX in my DC
> > RFPs, I don't really care about SP markets, and I can realise some
> > savings by axing chip design and support. I think Trio is the best NPU
> > on the market, and I think we may have a real risk losing it, and no
> > mechanism that would guarantee new players surfacing to replace it.
> >
> > I do wish that JNPR had been more serious about how unsustainable it
> > is to lose to the dollar, and had tried more to capture markets. I
> > always suggested why not try Trio-PCI in newegg. Long tail is long,
> > maybe if you could buy it for 2-3k, there would be a new market of
> > Linux PCI users who want wire rate programmable features for multiple
> > ports? Maybe ESXi server integration for various pre-VPC protection
> > features at wire-rate? I think there might be a lot of potential in
> > NPU-PCI, perhaps even FAB-PCI, to have more ports than single NPU-PCI.
> >
> --
> -Aaron
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
>
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Thanks for all the fish

2024-01-10 Thread Tom Beecher via juniper-nsp
>
> In our hubris to "decouple the control plane from the data plane (tm)",
> we, instead, decoupled the software/hardware integration from a single
> vendor.
>

I wouldn't necessarily agree that was the wrong *technical* decision.
Unfortunately, it was a perfect scenario to be exploited for the
MBA-ification of everything that has greatly expanded in the past decade.

On Wed, Jan 10, 2024 at 2:24 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 1/10/24 09:04, Forrest Christian (List Account) wrote:
>
> > I find it frustrating that things one would expect to be included in
> > any layer 3 switch has become additional revenue opportunities.
> >
> > "The switch hardware is $x.  Oh you want the software too?  Oh,
> > that's an additional cost.   L3 switching?  Oh,  that's an extra
> > feature.  OSPF? Oh that's not included with the L3 license so that
> > will be extra too. Oh and by the way,  you aren't buying a perpetual
> > license anymore so be sure to pay us the fees for all the software
> > functionality every year".
> >
> > Yes I know the above isn't completely 100% accurate but it definitely
> > is how it seems anymore.
> >
> > I get charging extra for advanced features,  but when basic features
> > that pretty much everyone wants and uses becomes an add-on and not
> > perpetual,  it tends to make me start looking for a different vendor.
>
> In our hubris to "decouple the control plane from the data plane (tm)",
> we, instead, decoupled the software/hardware integration from a single
> vendor.
>
> So hardware folk make their own cut, and software folk make their own
> cut. And they are not the same people.
>
> Welcome to the "white box" and "software-only" era.
>
> Mark.
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Difference in "MX204" and "MX204-HW-BASE"?

2024-01-10 Thread Tom Beecher via juniper-nsp
>
> Is there a difference between "MX204" and "MX204-HW-BASE"?
>

Strictly speaking they are just different SKUs, not different models.

MX204 : Chassis + Fan trays + PEMs
MX204-HW-BASE : Base MX204 chassis PLUS perpetual Junos software license

AFAIK, code that has enforcement is limited to specific scaling or more
advanced features, but outside of that, base things just work. Don't take
that as gospel though.

On Wed, Jan 10, 2024 at 8:19 AM chiel via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Is there a difference between "MX204" and "MX204-HW-BASE"?
>
> I thought the "MX204" has honored based license, which isn't sold
> anymore. Where the "MX204-HW-BASE" (also end of sale but still widely
> available) enforces the license after version 22.2R1 for BGP. Is this
> assumption correct?
>
> If there is indeed this difference, how can I distinguish these two
> platforms from the CLI?
>
> I have an MX204 with version 22.3R3.8, without a license installed on it,
> and it's doing BGP just fine. So I guess I have the older MX204 model?
>
> I'm asking as I'm looking for a spare (refurb) unit for my current router.
>
> admin@router> show system license
> License usage:
>                         Licenses   Licenses   Licenses
>   Feature name          used       installed  needed     Expiry
>   scale-subscriber      0          10         0          permanent
>   scale-l2tp            0          1000       0          permanent
>   bgp                   1          0          1          invalid
>   l3static              1          0          1          invalid
>   ospf                  1          0          1          invalid
>
> Licenses installed: none
>
>
> admin@router> show chassis hardware
> Hardware inventory:
> Item             Version  Part number  Serial number  Description
> Chassis                                X              JNP204 [MX204]
>
>
>
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RSVP hidden routes in inet.0

2023-12-11 Thread Tom Beecher via juniper-nsp
This is correct, they exist for the bypass LSPs.

I wouldn't characterize it as a dirty hack though. RFC4090 fast reroute
requires the backup pathways to be pre-computed for a sub-50ms switchover.
You put an export policy in place to make sure all labels (including
bypass) are in the FIB already. Once a tear event occurs, the hidden RSVP
route is just flipped to active, and LSPs using that /32 start pushing the
bypass label on the stack. Since that label is already in the FIB, it just
works from there.
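
A minimal sketch of the kind of forwarding-table export policy being
described (policy name hypothetical):

policy-options {
    policy-statement rsvp-to-fib {
        from protocol rsvp;
        then accept;
    }
}
routing-options {
    forwarding-table {
        export rsvp-to-fib;
    }
}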

On Mon, Dec 11, 2023 at 9:27 AM Michael Hare via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi Misak,
>
> I think what you're seeing is normal for protection LSPs, a "dirty hack
> on the control plane side", but I'm looking forward to being humbled on
> this list if my conclusion is incorrect.
>
> We use "ldp interface link-protection dynamic-rsvp-lsp" and for all my
> bypass LSPs, 'show route hidden table inet.3 detail' tells me
>
> Label-switched-path et-0/1/0.3402:BypassLSP->143.235.32.2
>   ...
> State: 
> Inactive reason: Unusable path
>
> I agree this is disconcerting if you are trying to get hidden routes to be
> zero, but there are other normal reasons for routes to be hidden, such as
> rejection by BGP import policy. Better IMHO to focus instead [or
> additionally] on "show route resolution unresolved".
>
> -Michael
>
> > -Original Message-
> > From: juniper-nsp  On Behalf Of
> > Misak Khachatryan via juniper-nsp
> > Sent: Monday, December 11, 2023 7:03 AM
> > To: juniper-nsp@puck.nether.net
> > Subject: [j-nsp] RSVP hidden routes in inet.0
> >
> > Hello,
> >
> > Recently I implemented RSVP in my network, nothing too fancy - automesh
> and
> > autobandwidth with node-link protection.
> >
> > While doing a final review I saw this output of show route summary:
> >
> > inet.0: 296 destinations, 298 routes (275 active, 0 holddown, 21 hidden)
> >   Direct:  6 routes,  5 active
> >Local:  5 routes,  5 active
> > OSPF:265 routes,264 active
> > RSVP: 21 routes,  0 active
> >  LDP:  1 routes,  1 active
> >
> > It is very curious to me why I see hidden RSVP routes in inet.0. It
> seems
> > somehow related to bypass LSPs and how Juniper organises it. Here they
> are:
> >
> > > show route protocol rsvp table inet.0 hidden
> >
> > inet.0: 296 destinations, 298 routes (275 active, 0 holddown, 21 hidden)
> > @ = Routing Use Only, # = Forwarding Use Only
> > + = Active Route, - = Last Active, * = Both
> >
> > 10.255.0.21/32  [RSVP] 01:11:54, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.222->10.255.0.21
> > 10.255.0.29/32  [RSVP] 1d 10:26:25, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.29
> > 10.255.0.33/32  [RSVP] 1d 10:26:25, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.33
> > 10.255.0.38/32  [RSVP] 1d 09:32:03, metric 1
> > >  to 10.255.0.230 via ae4.7, label-switched-path
> Bypass-
> > >10.255.0.222->10.255.0.38
> > 10.255.0.70/32  [RSVP] 04:53:42, metric 1
> > >  to 10.255.0.230 via ae4.7, label-switched-path
> Bypass-
> > >10.255.0.226->10.255.0.70
> > 10.255.0.73/32  [RSVP] 1d 10:26:21, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.73
> > 10.255.0.122/32 [RSVP] 1d 10:26:21, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.122
> > 10.255.0.126/32 [RSVP] 1d 10:26:41, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.126
> > 10.255.0.134/32 [RSVP] 1d 05:27:20, metric 1
> > >  to 10.255.0.230 via ae4.7, label-switched-path
> Bypass-
> > >10.255.0.222->10.255.0.134
> > 10.255.0.174/32 [RSVP] 1d 07:19:25, metric 1
> > >  to 10.255.0.230 via ae4.7, label-switched-path
> Bypass-
> > >10.255.0.222->10.255.0.174
> > 10.255.0.181/32 [RSVP] 1d 10:26:19, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.181
> > 10.255.0.185/32 [RSVP] 1d 10:26:19, metric 1
> > >  to 10.255.0.226 via ae1.7, label-switched-path
> Bypass-
> > >10.255.0.230->10.255.0.185
> > 10.255.0.201/32 [RSVP] 1d 10:17:37, metric 1
> > 

Re: [j-nsp] MX304 - Edge Router

2023-10-26 Thread Tom Beecher via juniper-nsp
>
> Did I mention Arista is not spending valuable engineer time on all this
> license shit, but on actually making great products?
>

Oh they aren't?

https://www.arista.com/en/support/product-documentation/eos-feature-licensing

Arista will almost certainly move towards a licensing model similar to
other vendors at some point once their growth curve slows and they need to
start squeezing more revenue out of what they are selling.



On Wed, Oct 25, 2023 at 9:36 AM Gert Doering via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi,
>
> On Wed, Oct 25, 2023 at 12:50:33PM +, Richard McGovern via juniper-nsp
> wrote:
> > The introduction of newer (well now like 2 years old) Flex licensing
> > all newly purchased MX (which would include ALL MX304s) support
> > only L2 in the base (free) license. For any L3 (even static) you
> > require some additional level of license.
>
> There goes another vendor...
>
> Now, if the base price had been *lowered* by the amount the
> L3 features of a *MX router* cost extra now, this might have been an
> option... but for my understanding, the base MX304 is already insanely
> pricey, and then add licenses on top...  nah, taking our money elsewhere.
>
> Did I mention Arista is not spending valuable engineer time on all this
> license shit, but on actually making great products?
>
> gert
> --
> Gert Doering - Munich, Germany
> g...@greenie.muc.de
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 - Edge Router

2023-10-24 Thread Tom Beecher via juniper-nsp
>
> My MX304 trial license expired last night, after rebooting the MX304,
> various protocols no longer work.  This seems more than just
> honor-based... ospf, ldp, etc, no longer function.  This is new to me;
> that Juniper is making protocols and technologies tied to license.  I
> need to understand more about this, as I'm considering buying MX304's.
>

This is news to me as well. Definitely not the case on smaller or much
larger MXes.

On Tue, Oct 24, 2023 at 3:20 PM Aaron Gould  wrote:

> My MX304 trial license expired last night, after rebooting the MX304,
> various protocols no longer work.  This seems more than just
> honor-based... ospf, ldp, etc, no longer function.  This is new to me;
> that Juniper is making protocols and technologies tied to license.  I
> need to understand more about this, as I'm considering buying MX304's.
>
> -Aaron
>
> On 10/24/2023 4:18 AM, Karl Gerhard via juniper-nsp wrote:
> > On 18/10/2023 18:55, Tom Beecher via juniper-nsp wrote:
> >> Juniper licensing is honor based. Won't impact functionality, will
> >> just grump at you on commits.
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 - Edge Router

2023-10-18 Thread Tom Beecher via juniper-nsp
Juniper licensing is honor based. Won't impact functionality, will
just grump at you on commits.

On Wed, Oct 18, 2023 at 10:32 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 10/18/23 15:47, Aaron1 via juniper-nsp wrote:
>
> > Also, I get a license warning when committing OSPF and LDP.  We got a
> license from our account team and now we don’t see that message anymore
>
> Any details on what this license does, and if there is any operational
> risk if you don't apply it?
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] CVE-2023-4481

2023-09-11 Thread Tom Beecher via juniper-nsp
>
> Which in theory opens a new attack vector for the future.
>

What is the attack vector you foresee for a route sitting as hidden with
the potentially offending attributes stripped off?

On Thu, Aug 31, 2023 at 4:27 AM Tobias Heister via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi,
>
> Am 30.08.2023 um 18:09 schrieb heasley via juniper-nsp:
> > Tue, Aug 29, 2023 at 03:42:41PM -0700, David Sinn via juniper-nsp:
> >> A network I operate is going with:
> >>
> >>  bgp-error-tolerance {
> >>  malformed-route-limit 0;
> >>  }
> >>
> >> The thoughts being that there is no real reason to retain the malformed
> route and the default of 1000 is arbitrary. We haven't really seen a rash
> of them, so adjusting the logging hasn't proven needed yet.
> >
> > It does seem arbitrary.  retaining all seems like a better choice,
> > operationally.  allowing the operator diagnose why a route is missing;
> > show route  hidden.
>
> Which in theory opens a new attack vector for the future.
>
> As the update is malformed, it could do $something to the handling in
> e.g. RPD or other daemons by processing them somehow wrong. By not
> holding or further processing any of them, that could (maybe, hopefully?)
> be minimized.
>
> Of course proper code and handling of malformed things would be even
> better, but you know ...
>
> regards
> Tobias
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] P2MP traffic loss over QFX and EX series

2023-09-11 Thread Tom Beecher via juniper-nsp
This PR may be worth asking about.

https://prsearch.juniper.net/problemreport/PR1693424


On Sat, Sep 2, 2023 at 7:08 PM Gonzalo Caceres via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi everyone,
>
> I have multicast traffic through a P2MP LSP, and when this traffic goes
> through an EX or QFX series switch (specifically EX4600 and QFX5110/5120)
> it is lost. We use these switches as multilayer switches; they run the
> MPLS protocol suite (ISIS, LDP and RSVP).
>
>
>
> *>show versionModel: ex4600-40fJunos: 21.4R3.15 (recommended version at
> day)*
>
> The switch see the transit LSP from IP 2.2.2.2 to 1.1.1.1:
>
>
>
>
> > show mpls lsp
> Transit LSP: 2 sessions
> To        From      State  Rt  Style  Labelin  Labelout  LSPname
> 2.2.2.2   1.1.1.1   Up     0   1 SE   62                 lsp1
> Total 2 displayed, Up 2, Down 0
>
> Traffic from 1.1.1.1 comes in via ae0:
>
>
> > show route 1.1.1.1
> 1.1.1.1/32   *[IS-IS/18] via ae0.0
> 1.1.1.1/32   *[LDP/9] via ae0.0, Push 315471
>
>   Traffic to 2.2.2.2 goes via ae3:
>
>
> *> show route 2.2.2.22.2.2.2/32  *[IS-IS/18] 20:36:26,
> via ae3.02.2.2.2/32  *[LDP/9] 20:36:25, via ae3.0*
>
> The P2MP traffic is seen on the input interface but is not replicated at
> the output:
>
> Interface: ae0, Enabled, Link is Up
> Input bytes:   7282719382422 (785608168 bps) [207824323]
> Input packets: 5263502719 (70972 pps) [223103]
>
> Interface: ae3, Enabled, Link is Up
> Output bytes:   186620844 (26760 bps) [7545]
> Output packets: 1225967 (13 pps) [47]
> Has anyone ever had a similar problem? Or are you certain that this type of
> LSP will NOT work on these switches?
>
> Thanks, Gonzalo.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP export policy, group vs neighbor level

2022-02-08 Thread Tom Beecher via juniper-nsp
What this is saying:

If you have a GROUP level policy, and make any MODIFICATIONS to INDIVIDUAL
neighbor policies, that individual neighbor will reset.

This means adding OR removing the peer level policy will trigger the reset.

As I recall, it is because there is normally a single BGP Update IO thread
for a given peer group that handles everything for all those peers. If you
override a specific peer, it has to move that to a different thread, which
is intrusive and requires the reset. Conversely, if you remove the
override, it also needs to reset to pull it back into the same update
thread with the others.
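
As a sketch (group, policy names and addresses hypothetical), this is the
intrusive case being described:

protocols {
    bgp {
        group IX-peers {
            export IX-OUT;
            /* this neighbor uses the group policy */
            neighbor 192.0.2.1;
            /* per-peer override: touching this export chain resets the session */
            neighbor 192.0.2.2 {
                export [ SET-LPREF IX-OUT ];
            }
        }
    }
}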

On Sat, Feb 5, 2022 at 4:04 AM Raph Tello via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hey,
>
> not really clear to me what that KB is exactly saying.
>
> Does it say:
>
> a) The peer will be reset when it previously had no individual
> import/export policy statement (only the group one) and an individual one
> is then configured
>
> b) The peer will be reset each time its individual policy is touched while
> there is another policy in the group
>
> or
>
> c) The peer is reset the first time it receives its own policy under the group
>
> Unfortunate that this seems to be not really well documented.
>
>
> On Fri 4. Feb 2022 at 20:15, Andrey Kostin  wrote:
>
> > Hi,
> > this KB article just came in:
> >
> >
> https://kb.juniper.net/InfoCenter/index?page=content&id=KB12008&actp=SUBSCRIPTION
> > Symptoms:
> > Why does modifying a policy on a BGP neighbor in a group cause that
> > particular peer to be reset, when another policy is applied for the
> > whole peer group?
> > Solution:
> > Changing the export policy on a member (peer) in a group will cause that
> > member to be reset, as there is no graceful way to modify a group
> > parameter for a particular peer. Junos can gracefully change the export
> > policy only when it is applied to the complete group.
> >
> > It's not much helpful but just provides a confirmation.
> >
> > Kind regards,
> > Andrey
> >
> > Raph Tello via juniper-nsp писал(а) 2022-02-04 09:33:
> > > I would also like to hear opinions about having ipv4 and ipv6 ebgp peer
> > > sessions in the same group and using the same policy instead of having
> > > two
> > > separate groups and two policies (I saw this kind policy at
> > > https://bgpfilterguide.nlnog.net/guides/small_prefixes/#junos).
> > >
> > > It would nicely pack things together. Could that be considered kind of
> > > new
> > > best practice?
> > >
> > > On Thu 3. Feb 2022 at 16:12, Raph Tello  wrote:
> > >
> > >> Hi list,
> > >>
> > >> I wonder what kind of bgp group configuration would allow me to change
> > >> the
> > >> import/export policy of a single neighbor without resetting the
> > >> session of
> > >> this neighbor nor any other session of other neighbors. Similar to
> > >> enabling/disabling features on a single session without resetting the
> > >> sessions of others.
> > >>
> > >> Let‘s say I have a bgp group IX-peers and each peer in that group has
> > >> its
> > >> own import/export policy statement but all reference the same
> > >> policies. Now
> > >> a single IX-peer needs a different policy which is going to change
> > >> local-pref, so I would replace the policy chain of that peer with a
> > >> different one.
> > >>
> > >> Would this cause a session reset because the peer would be moved out
> > >> of
> > >> the update group?
> > >>
> > >> (I wonder mainly about group>peer>policy vs. group>policy vs. each
> > >> peer
> > >> it‘s own group)
> > >>
> > >> - Tello
> > >>
> > > ___
> > > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] How to pick JUNOS Version

2020-08-19 Thread Tom Beecher
Start with the highest code version supported on the hardware that has all
the features you need.
Subtract 2 from the major revision number.
Pick a .3 version of that major revision.
Work towards current from there depending on test results, security needs,
etc.
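
For example (hypothetical numbers): if the newest release supported on the
hardware is 20.4, subtracting two majors gives 18.4, so you'd start testing
at 18.4R3 and work toward current from there.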

On Wed, Aug 19, 2020 at 10:47 AM Colton Conor 
wrote:

> How do you plan which JUNOS version to deploy on your network? Do you stick
> to the KB21476 - JTAC Recommended Junos Software Versions or go a different
> route? Some of the JTAC recommended code seems to be very dated, but that
> is probably by design for stability.
>
> https://kb.juniper.net/InfoCenter/index?page=content&id=KB21476&actp=METADATA
>
> Just wondering if JUNOS will ever go to a unified code model like Arista
> does? The amount of PRs and bug issues in JUNOS seems overwhelming. Is
> this standard across vendors? I am impressed that Juniper takes the times
> to keep track of all these issues, but I am unimpressed that there are this
> many bugs.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Decoding DDOS messages

2020-03-18 Thread Tom Beecher
This is the most recent Juniper document I had bookmarked for the QFX.

https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/protocols-edit-ddos-qfx-series.html

I agree with Saku that the ddos-policer is a good tool to use, but as he
said it requires turning for your specific environment to be at its most
useful. I don't want to discount the opinions of those who dislike it, but
many complaints about it seem to boil down to "the defaults didn't work, I
didn't tune it, therefore I hate it".
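
As a sketch of what tuning looks like (protocol and values illustrative,
not recommendations; bandwidth is in packets per second):

system {
    ddos-protection {
        protocols {
            arp {
                aggregate {
                    bandwidth 1000;
                    burst 500;
                }
            }
        }
    }
}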

On Wed, Mar 18, 2020 at 10:42 AM Saku Ytti  wrote:

> Hey Jason,
>
> > Questions about the ddos-protection "features".  We're on a qfx5100-48
> running 16.1.  I know that folks on the list aren't always big fans of
> ddos-protection; I'm just trying to understand what is triggering it so I
> can make decisions about tuning/disabling/ignoring it.
>
> I am a big fan; it's a great feature everyone should be running and
> tuning correctly. Unfortunately, even a non-broken lo0 filter is
> extremely uncommon; even the MX book has a fundamentally broken example,
> as does the CYMRU example. And ddos-protection may be even more complicated.
>
> I'm not very familiar how it works on QFX5k, I'm more aware of MX
> behaviour, where it's really great.
>
> > We are not a service provider; we're an end site running a flat L2
> network (LAN) with the QFX as our L3 core for IRB and routing to our ISP.
> Since the QFX is seeing all the BUM traffic I'm curious if ddos-protection
> is being triggered as a result of seeing all the L2 packets.
>
> Your L2 should be in its virtual-switch/vpls (doesn't imply VPLS)
> instance with a forwarding-plane filter policing BUM. But unrelated to the
> subject.
>
> > IPMCAST-miss (lots of this one!)
>
> Probably punts for programming the flow; subsequent packets will be HW
> switched. You may want an ACL to drop all MCAST traffic at the edge.
> This should be 0 if you don't actually run multicast.
>
> > ARP
>
> Self-explanatory? You shouldn't want to see this exceeded; ideally you
> should police this at the IFD level, but I'm not sure if QFX5k can. MX
> can.
>
> > TTL
>
> TTL exceeded message. Normal to hit this policer in uloops.
>
> > Redirect
>
> IP redirects; you probably want to disable them at the network edge. This
> should be 0.
>
> > L3MTU-fail
>
> Egress MTU was too small for the packet. It is punted for potential ICMP
> message generation. Depending on config, expected or unexpected.
>
> > RESOLVE
>
> Traffic hitting a connected DADDR which is not in the ARP cache; we need
> to punt it for ARP resolution. Normal to see, as there is constant
> background traffic to every DADDR.
>
> > L3NHOP
>
> Unsure.
>
> > So is that all ARP?  ARP that the switch needs to answer for?  Similar
> for the other packet types: are these thresholds for packets that the
> switch is processing (sent to the RE), or just for any traffic that is seen
> on any interface?  If it's just an issue of too much stuff going to the RE
> I can firewall it off so long as I know it's spurious.
>
> It's ARP packets verbatim (see RESOLVE, which is a non-ARP packet
> triggering ARP resolution). Originally when ddos-protection was
> implemented, RESOLVE was not implemented, and a RESOLVE packet hit
> whatever classifier it matched, so if you sent BGP packets to an
> unarpable DADDR it was eating BGP policer PPS, so you could easily take a
> target's core iBGP down, and there was nothing they could do to stop you.
>
> > Sorry if I'm not asking the right questions... I'm just trying to figure
> out if these errors are actually problems that I need to track down, or if
> the default reporting is just too noisy.
> >
> > Thanks,
> >
> > Jason
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
Likely, but if you only need like 4  :)

On Wed, Mar 4, 2020 at 10:01 AM Mark Tinka  wrote:

>
> On 4/Mar/20 16:53, Giuliano C. Medalha wrote:
>
> With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in mx240,
> mx480 or mx960
>
> But you will need premium 3 chassis with scbe3 boards to have maximum
> capacity.
>
>
> An MX10008/10016 chassis can get you 24x 100Gbps per slot. That's going to
> be a lot cheaper than an MPC10E-15C-MRATE (and other bits you may need to
> upgrade for the performance).
>
> Mark.
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
You can still get 100G ports on the 960 chassis with MPC5E/6/7s , depending
on what kind of density you require.

On Wed, Mar 4, 2020 at 9:42 AM Mark Tinka  wrote:

>
>
> On 4/Mar/20 16:36, Tom Beecher wrote:
> > It really depends on what you're going to be doing, but I still have
> quite a
> > few MX960s out there running pretty significant workloads without issues.
> >
> > I would suspect you hit the limits of the MS-MPCs way before the limits
> of
> > the chassis.
>
> The classic MX chassis are nowhere close to running out of ideas.
>
> But Juniper have to always be pushing the tech., so emphasis will be on
> the MX1000 (although not necessarily at the expense of the MX960/480/240).
>
> I still believe if your use-case is not overly complicated, you may find
> the MX960/480 to be cheaper if you don't need 100Gbps ports.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
It really depends on what you're going to be doing, but I still have quite a
few MX960s out there running pretty significant workloads without issues.

I would suspect you hit the limits of the MS-MPCs way before the limits of
the chassis.

On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid  wrote:

> dear Juniper community
>
> is there any limitation of using MX960 as DC-GW compared to MX10K ?
>
> juniper always recommends to use MX10K , but i my case i need MS-MPC which
> is not supported on MX10K and i want to knwo if i will have some limitation
> on MX960.
>
> Thanks
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Old JunOS upgrade path

2019-03-10 Thread Tom Beecher
This was, and still is, the most accurate answer in the thread. To expand
on it further:

Cisco IOS images are standalone binary images. Each time the device is
powered on, it loads the image it is configured to, and executes it. The
entire operating system is encapsulated in this image file, and its
execution is completely independent of any image it has executed
previously, or will in the future. You don't really 'upgrade' a Cisco IOS
device; you just tell it to load a different image.

Junos is a complete operating system. Just like your home computer, each
time the device loads, it is loading the complete OS in the state that it
was when it was shut down. ( Hopefully. :) ) When you upgrade it, the
process only modifies certain components. Any OS upgrade process like that
will have restrictions and dependencies that dictate where you can upgrade
from based on where you are now.  You always have an option to do a clean
install of the version you want, but that is a complete removal and
replacement of the entire OS.
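
For illustration, a typical in-place upgrade looks something like this (the
package filename below is just a placeholder):

 request system software add /var/tmp/junos-install-mx-x86-64-18.4R3.3.tgz validate
 request system reboot

The validate step checks your existing configuration against the target
release, which is exactly the kind of dependency checking that creates those
upgrade-path restrictions in the first place.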

Cisco NX-OS is the same way. I've watched our datacenter guys have to do
the exact same intermediate upgrade, X -> Y -> Z, if the version gap was
large enough that X -> Z was not doable. Exact same reason as JunOS.

On Fri, Mar 8, 2019 at 4:14 PM Gert Doering  wrote:

> Hi,
>
> On Fri, Mar 08, 2019 at 01:17:44PM -0700, Eldon Koyle wrote:
> > Many (most?) network operating systems are an image file that the
> > switch either writes over a partition (ie. block-level copy) or boots
> > directly (ie. initrd/initramfs) with a separate partition for a config
> > file.  Junos is a full BSD operating system that installs packages to
> > partitions on the device, runs upgrade scripts, etc.
>
> So?
>
> gert
> --
> "If was one thing all people took for granted, was conviction that if you
>  feed honest figures into a computer, honest figures come out. Never
> doubted
>  it myself till I met a computer with a sense of humor."
>  Robert A. Heinlein, The Moon is a Harsh
> Mistress
>
> Gert Doering - Munich, Germany
> g...@greenie.muc.de
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SNMP OIDs for Yellow/Red Alarm on MX204

2019-02-28 Thread Tom Beecher
Weird.. those still show as valid through 18.4...



On Thu, Feb 28, 2019 at 3:31 PM Paul Stewart  wrote:

> Wow that does suck … have used those alarm OID’s for a very long time with
> Juniper boxes ….
>
> > On Feb 28, 2019, at 3:27 PM, Simon Lockhart  wrote:
> >
> > On Thu Feb 28, 2019 at 03:19:36PM -0500, Tom Beecher wrote:
> >> These don't work on the 204?
> >>
> >> Red Alarm: jnxRedAlarmState 1.3.6.1.4.1.2636.3.4.2.3.1
> >> Yellow Alarm: jnxYellowAlarmState 1.3.6.1.4.1.2636.3.4.2.2.1
> >
> > They don't seem to exist on either MX10003 or MX204...
> >
> > $ snmpwalk -v 2c -c xxx mx10003 .1.3.6.1.4.1.2636.3.4
> > iso.3.6.1.4.1.2636.3.4 = No Such Object available on this agent at this
> OID
> >
> > $ snmpwalk -v 2c -c xxx mx204 .1.3.6.1.4.1.2636.3.4
> > iso.3.6.1.4.1.2636.3.4 = No Such Object available on this agent at this
> OID
> >
> > Simon
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SNMP OIDs for Yellow/Red Alarm on MX204

2019-02-28 Thread Tom Beecher
These don't work on the 204?

Red Alarm: jnxRedAlarmState 1.3.6.1.4.1.2636.3.4.2.3.1
Yellow Alarm: jnxYellowAlarmState 1.3.6.1.4.1.2636.3.4.2.2.1

On Thu, Feb 28, 2019 at 2:51 PM Tarko Tikan  wrote:

> hey,
>
> > i just found out that it seems not to be possible to poll Yellow/Red
> Alarm (Count or State) on MX204 via SNMP.
>
> This is pretty sad, juniper, please fix this.
>
> Polling for redAlarmCount > 0 or yellowAlarmCount > 0 has been a good
> generic solution to cover a lot of ground quickly.
>
> --
> tarko
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-25 Thread Tom Beecher
So I missed your specific comment about the bypass LSPs.

I think (although I haven't tested this in forever) if you enable
no-node-protection under RSVP, that will prevent those interfaces from
being available for any node or link bypass LSP to use.
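
Something like this is what I have in mind (again, untested by me in ages,
and the interface name is just an example):

 protocols {
     rsvp {
         interface ge-0/0/0.0 {
             no-node-protection;
         }
     }
 }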

On Fri, Jan 25, 2019 at 11:51 AM Luis Balbinot 
wrote:

> Please let me know if you find some other approach.
>
> The overload bit helps but in the absence of another path the RSVP FRR
> mechanism will setup a bypass LSP through a node with the overload bit
> set. And link coloring does not help, at least in my case.
>
> Luis
>
> On Fri, Jan 25, 2019 at 11:20 AM Jason Lixfeld 
> wrote:
> >
> > I’m testing a similar approach (except using the ISIS overload bit) that
> aims to prevent the path between a pair of LSRs via the links to and
> through my RRs from being considered as a possible transit path.  Seems to
> work just fine in the lab.
> >
> > > On Jan 24, 2019, at 3:24 PM, Luis Balbinot 
> wrote:
> > >
> > > That’s a good idea. I’m not 100% sure that this will prevent the
> creation
> > > of bypass LSPs but I’ll give it a try.
> > >
> > > Thanks!
> > >
> > > Luis
> > >
> > > On Thu, 24 Jan 2019 at 18:01 Colby Barth  wrote:
> > >
> > >> Luis-
> > >>
> > >> You could probably set the overload bit.
> > >>
> > >> -Colby
> > >>
> > >> On 1/24/19, 1:10 PM, "juniper-nsp on behalf of Dave Bell" <
> > >> juniper-nsp-boun...@puck.nether.net on behalf of m...@geordish.org>
> wrote:
> > >>
> > >>I'm not aware of any option that will do this.
> > >>
> > >>The three solutions that I can think of are:
> > >>Link colouring like Adam suggests
> > >>An explicit path that avoids the interfaces you are worried about
> > >>Set the RSVP cost for the interfaces really high
> > >>
> > >>Dave
> > >>
> > >>On Thu, 24 Jan 2019 at 17:01, Luis Balbinot  >
> > >> wrote:
> > >>
> > >>> It's a permanent thing.
> > >>>
> > >>> These boxes are PE routers that are not supposed to handle transit
> > >>> traffic but during certain network events a few FRR bypass LSPs are
> > >>> established through them because that's the only remaining path.
> > >>>
> > >>> Something easier like a "no-eligible-backup" toggle like the one we
> > >>> have with OSPF LFA would be nice.
> > >>>
> > >>> Luis
> > >>>
> > >>> On Thu, Jan 24, 2019 at 2:53 PM 
> > >> wrote:
> > 
> > > Luis Balbinot
> > > Sent: Thursday, January 24, 2019 4:45 PM
> > >
> > > Hi.
> > >
> > > How could I prevent a device from getting transit RSVP LSPs being
> > > established through it? I only want it to accept ingress LSPs
> > >> destined
> > >>> to
> >  that
> > > box.
> > >
> >  If this is a permanent thing,
> >  You could create a colouring scheme where all links connected to
> > >> this
> > >>> node
> >  have to be avoided by all LSPs with the exception of LSPs
> > >> terminated on
> > >>> this
> >  node (or originated by this node).
> > 
> >  If this is a maintenance thing,
> >  There's a command that can be enabled to drain all transit LSPs
> > >> out the
> > >>> box.
> >  But, all the LSPs would need to be configured with this capability
> > >> in the
> >  first place.
> > 
> > 
> >  adam
> > 
> > >>> ___
> > >>> juniper-nsp mailing list juniper-nsp@puck.nether.net
> > >>>
> > >>
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > >>>
> > >>___
> > >>juniper-nsp mailing list juniper-nsp@puck.nether.net
> > >>
> > >>
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > >>
> > >>
> > >>
> > > ___
> > > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Avoid transit LSPs

2019-01-24 Thread Tom Beecher
Best option would be the coloring scheme along with explicit excludes in
the config for the LSPs you don't want to go through said device.
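
Roughly, the moving parts look like this (group name, value, and LSP name
are made up; define the admin group consistently everywhere, color the links
on the box you want to keep transit traffic off of, and add the exclude on
the LSPs at their ingress routers):

 set protocols mpls admin-groups AVOID-TRANSIT 10
 set protocols mpls interface ge-0/0/0.0 admin-group AVOID-TRANSIT
 set protocols mpls label-switched-path SOME-LSP admin-group exclude AVOID-TRANSIT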

On Thu, Jan 24, 2019 at 3:25 PM Luis Balbinot  wrote:

> That’s a good idea. I’m not 100% sure that this will prevent the creation
> of bypass LSPs but I’ll give it a try.
>
> Thanks!
>
> Luis
>
> On Thu, 24 Jan 2019 at 18:01 Colby Barth  wrote:
>
> > Luis-
> >
> > You could probably set the overload bit.
> >
> > -Colby
> >
> > On 1/24/19, 1:10 PM, "juniper-nsp on behalf of Dave Bell" <
> > juniper-nsp-boun...@puck.nether.net on behalf of m...@geordish.org> wrote:
> >
> > I'm not aware of any option that will do this.
> >
> > The three solutions that I can think of are:
> > Link colouring like Adam suggests
> > An explicit path that avoids the interfaces you are worried about
> > Set the RSVP cost for the interfaces really high
> >
> > Dave
> >
> > On Thu, 24 Jan 2019 at 17:01, Luis Balbinot 
> > wrote:
> >
> > > It's a permanent thing.
> > >
> > > These boxes are PE routers that are not supposed to handle transit
> > > traffic but during certain network events a few FRR bypass LSPs are
> > > established through them because that's the only remaining path.
> > >
> > > Something easier like a "no-eligible-backup" toggle like the one we
> > > have with OSPF LFA would be nice.
> > >
> > > Luis
> > >
> > > On Thu, Jan 24, 2019 at 2:53 PM 
> > wrote:
> > > >
> > > > > Luis Balbinot
> > > > > Sent: Thursday, January 24, 2019 4:45 PM
> > > > >
> > > > > Hi.
> > > > >
> > > > > How could I prevent a device from getting transit RSVP LSPs
> being
> > > > > established through it? I only want it to accept ingress LSPs
> > destined
> > > to
> > > > that
> > > > > box.
> > > > >
> > > > If this is a permanent thing,
> > > > You could create a colouring scheme where all links connected to
> > this
> > > node
> > > > have to be avoided by all LSPs with the exception of LSPs
> > terminated on
> > > this
> > > > node (or originated by this node).
> > > >
> > > > If this is a maintenance thing,
> > > > There's a command that can be enabled to drain all transit LSPs
> > out the
> > > box.
> > > > But, all the LSPs would need to be configured with this
> capability
> > in the
> > > > first place.
> > > >
> > > >
> > > > adam
> > > >
> > > ___
> > > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > >
> >
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> >
> >
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> >
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] dhcp relay fail between VRFs

2018-12-20 Thread Tom Beecher
I feel like I hit this once and had to use a rib-group.

But it would probably be helpful to see the relevant configs in full.
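
If it helps, the general shape of the rib-group fix is something like this
(instance and group names invented here), leaking interface routes from one
VRF so the relay traffic in the other can resolve them:

 set routing-options rib-groups DHCP-LEAK import-rib [ VRF-A.inet.0 VRF-B.inet.0 ]
 set routing-instances VRF-A routing-options interface-routes rib-group inet DHCP-LEAK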

On Wed, Dec 19, 2018 at 10:52 PM Anderson, Charles R  wrote:

> On Wed, Dec 19, 2018 at 03:41:40PM -0800, Chris Cappuccio wrote:
> > Nathan Ward [nw...@daork.net] wrote:
> > > Hi Chris, check out the forward-only-replies option, pretty sure there
> was some stuff there I had to fiddle with.
> > >
> > > Can you post your config?
> > >
> >
> > I'm using the bootp helper instead of the dhcp-relay feature. I don't
> know
> > if 'fud' handles both. When I tried using dhcp-relay, my remote PEs
> stopped
> > receiving DHCP replies. The router connected to the dhcp server would
> eat the
> > replies and they would never reach the remote PE router, despite the
> > interface that the DHCP server connects to not having any relay enabled.
> > Since it is a production network I stopped using dhcp-relay until I look
> > further into the documentation..
>
> bootp helper requires you to specify the routing-instance.  Have you done
> that?
>
> set forwarding-options helpers bootp interface irb.2287 server 10.0.1.10
> routing-instance FOO
> set forwarding-options helpers bootp interface irb.2287 server 10.0.2.10
> routing-instance FOO
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ftp.juniper.net

2018-12-20 Thread Tom Beecher
:) You both make fair points for sure.

It seems to be a trend in the industry when it comes to documentation to
lean towards "ZOMG DON'T DELETE ANYTHING WE MIGHT NEED THAT SOMEDAY!".
Usually for silly reasons, but I've been just as guilty as the next person over
the years in constructing an argument about why we need to remember one
esoteric bit of knowledge about the config on that platform we haven't had
in use since the early Obama administration.

On Wed, Dec 19, 2018 at 6:11 PM Niall Donaghy 
wrote:

> @Tom: Oh sure, hardlinks are rife and an HTTP 301 might be a bit crude.
>
> I favour what Chris suggests.
>
> Getting rather too involved in this no doubt ... but I wonder how we are
> all defining 'deprecated'.
> My dictionary says 'in the process of being replaced' --- which I take to
> mean (employing the principle of least surprise) that during the process it
> will work until the replacement arrives, at which point it will be
> decommissioned, retired, removed, dereferenced. For me that means not just
> firewalling off 21/tcp or stopping the service, but clearly stating in the
> documentation that $thing is dead.
>
> Surely I'm procrastinating with such pedantry here. =)  :D
>
> -Original Message-----
> From: Chris Cappuccio [mailto:ch...@nmedia.net]
> Sent: 19 December 2018 22:18
> To: Tom Beecher 
> Cc: Niall Donaghy ; Juniper List
> ; pasv...@gmail.com; Saku Ytti 
> Subject: Re: [j-nsp] ftp.juniper.net
>
> Tom Beecher [beec...@beecher.cc] wrote:
> > Lots of people (for better or for worse) may have hard linked /
> > referenced
> > KB15585 in their documentation, so IMO it's a good idea to leave it up
> > with a bolded note of deprecation, which is what they did.
> >
>
> Just erase the rest of the KB after this text:
> NOTE:  Use of FTP is deprecated, large file uploads for cases should
> instead
> use the SSH FTP (SFTP) process described in KB23337 - How to upload large
> files to a JTAC Case.
>
> Problem solved.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ftp.juniper.net

2018-12-19 Thread Tom Beecher
Lots of people (for better or for worse) may have hard linked / referenced
KB15585 in their documentation, so IMO it's a good idea to leave it up with
a bolded note of deprecation, which is what they did.

Expecting a bit of RTFM from the end user isn't asking that much, IMO.

On Wed, Dec 19, 2018 at 12:37 PM Niall Donaghy 
wrote:

> Hi Pasvorn, Aaron D,
>
> Sure - I just find it odd that the FTP instructions are still part of the
> KB
> (if FTP is retired, then the instructions will never work again, so it's a
> mite misleading to leave them up).
>
> YMMV, IMHO, etc.
>
> Happy SFTPing folks!
>
> Br,
> Niall
>
> -Original Message-
> From: Aaron Dewell [mailto:aaron.dew...@gmail.com]
> Sent: 19 December 2018 17:19
> To: Niall Donaghy 
> Cc: Aaron Gould ; Saku Ytti ; Juniper List
> 
> Subject: Re: [j-nsp] ftp.juniper.net
>
>
> Definitely.  You can file a report with the "feedback" button on that page
> and
> it will get updated.
>
> > On Dec 19, 2018, at 10:16 AM, Niall Donaghy 
> wrote:
> >
> > Thanks Saku and Aaron.
> >
> > My point is KB15585 should be retired if FTP is no longer supported. =)
> >
> > -Original Message-
> > From: Aaron Gould [mailto:aar...@gvtc.com]
> > Sent: 19 December 2018 16:41
> > To: 'Saku Ytti' ; Niall Donaghy 
> > Cc: aaron.dew...@gmail.com; 'Juniper List' 
> > Subject: RE: [j-nsp] ftp.juniper.net
> >
> > Yep works, thanks (Niall, use sftp.juniper.net not ftp.juniper.net)
> >
> > C:\Users\aaron>sftp anonym...@sftp.juniper.net Password authentication
> > Password:
> > Connected to anonym...@sftp.juniper.net.
> > sftp> pwd
> > Remote working directory: /pub/incoming
> >
> > - Aaron
> >
> >
>
>
> From: Pasvorn Boonmark [mailto:pasv...@gmail.com]
> Sent: 19 December 2018 17:17
> To: Niall Donaghy 
> Subject: Re: [j-nsp] ftp.juniper.net
>
> Hi Niall,
>
> KB15585 has the following on top of it, so yes, ftp.juniper.net is
> deprecated since the end of 2017.
>
> NOTE:  Use of FTP is deprecated, large file uploads for cases should
> instead
>
> use the SSH FTP (SFTP) process described in KB23337 - How to upload large
> files to a JTAC Case.
>
> Please see KB23337 for the use of SFTP to upload a file.
>
> Cheers,
>
> -Pasvorn
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] dsc interface on qfx5100

2018-10-12 Thread Tom Beecher
No problem!

@Niall : I can't remember if there was something under the hood that made
the discard *interface* preferable over just a discard *route*. In our
implementation we have a qualified next-hop to send flagged traffic to a
local collector box first, and only discard if that's not reachable, and
we're not doing anything on dsc.0 that a straight discard route couldn't
do.

Otherwise yeah, running a count filter on dsc to get metrics about the
discarded traffic was kinda the entire point, I thought. :)
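
The relevant bit of ours is shaped roughly like this (addresses invented,
and whether discard will commit alongside a qualified-next-hop may vary by
platform and release, so treat it as a sketch):

 set routing-options static route 192.0.2.101/32 qualified-next-hop 10.9.9.9 preference 5
 set routing-options static route 192.0.2.101/32 discard

With the collector next-hop at the better preference, flagged traffic heads
there while it's reachable and falls back to the discard when it's not.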

On Fri, Oct 12, 2018 at 12:40 PM Jason Healy  wrote:

> On Oct 12, 2018, at 9:07 AM, Niall Donaghy 
> wrote:
> >
> > Yes we (large ISP) tried using dsc interfaces (MX series) to count RTBH
> > traffic and found, 1) they don't count, and 2) IPv6 is unsupported for
> dsc.
>
> That's what I needed to know!  Back to standard discard routes it is...
>
> Thanks to you and Tom for saving me any more pain trying to figure this
> out.
>
> Jason
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] dsc interface on qfx5100

2018-10-12 Thread Tom Beecher
I’m pretty sure we drilled Juniper about the IPv6 discard interface thing a
few months ago and got a feature request in for that. One of our guys
wasted about 2 weeks on that.

On Fri, Oct 12, 2018 at 09:07 Niall Donaghy  wrote:

> Hi Jason,
>
> Yes we (large ISP) tried using dsc interfaces (MX series) to count RTBH
> traffic and found, 1) they don't count, and 2) IPv6 is unsupported for dsc.
> As with many Junos features, there is not parity between IPv4 and IPv6.
> That alone bugged us, but especially as the counters did not work, we
> abandoned dsc and just made the IPv4 RTBH sink a discard route:
>
> #
> # Adjust to suit for VRFs / redistribute to your VRFs
> #
> set routing-options static route 192.0.2.101/32 discard
> set routing-options static route 192.0.2.101/32 no-readvertise
> set routing-options rib inet6.0 static route 0100::/64 discard
> set routing-options rib inet6.0 static route 0100::/64 no-readvertise
>
> So at the moment, you've got a dsc.0 which is about as useful as static
> discard.
>
> If anyone has gotten dsc counting with IPv4, and/or if things have changed
> and IPv6 is supported, I'm curious to know.
> Right now we're on 15.1F6-S10.4 targeting 17.4R2.4 for our next upgrade -
> retesting dsc functionality is somewhat lower than low priority.
>
> Br,
> Niall
>
> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of
> Jason Healy
> Sent: 12 October 2018 02:37
> To: juniper-nsp 
> Subject: [j-nsp] dsc interface on qfx5100
>
> I'm more of a layer-2 guy, but I'm trying to tighten up a few things on our
> qfx5100 that acts as our l3 core here at our campus.  We use RFC1918 space
> internally, but I'd like to discard any traffic to these ranges if they
> aren't one of our known subnets.  I have that working with standard
> "discard" routes.
>
> I've seen some best-practice documents where you can set up the discard
> (dsc) interface to blackhole traffic, rather than using a direct "discard"
> route.  I thought it might be nice to use that so I could analyze the stuff
> that's getting discarded (count packets, maybe even mirror or analyze the
> traffic).
>
> TL;DR: I got dsc working and it is discarding, but the counters aren't
> incrementing.  I need to know if I'm doing it wrong or if this is a
> "feature" of the qfx5100 similar to its inability to count ipv6 packets...
>
> I've created the interface and assigned an IP with a "destination" I can
> route to, and a filter to count packets:
>
> dsc {
> unit 0 {
> family inet {
> filter {
> output sa-discard-v4;
> }
> address 10.255.254.2/32 {
> destination 10.255.254.1;
> }
> }
> }
> }
>
>
> The filter just counts:
>
> filter sa-discard-v4 {
> term default-discard {
> then {
> count discard-v4-default;
> /* Not supported on egress on this platform */
> inactive: log;
> }
> }
> }
>
>
> And I've added some rules to discard the traffic:
>
> [edit routing-options rib inet.0 static]
> + route 10.0.32.64/32 next-hop 10.255.254.1;
>
>
> That's a live IP on my network, and I've confirmed that traffic is
> discarded
> with that route active.  Alas, the counters on the interface don't budge:
>
> qfx> show interfaces dsc
> Physical interface: dsc, Enabled, Physical link is Up
>   Interface index: 5, SNMP ifIndex: 5
>   Type: Software-Pseudo, MTU: Unlimited
>   Device flags   : Present Running
>   Interface flags: Point-To-Point SNMP-Traps
>   Link flags : None
>   Last flapped   : Never
> Input packets : 0
> Output packets: 0
>
>   Logical interface dsc.0 (Index 548) (SNMP ifIndex 709)
> Flags: Down Point-To-Point SNMP-Traps Encapsulation: Unspecified
> Protocol inet, MTU: Unlimited
>   Flags: Sendbcast-pkt-to-re
>   Addresses, Flags: Is-Preferred Is-Primary
> Destination: 10.255.254.1, Local: 10.255.254.2
>
>
> Nor do the firewall counters:
>
> qfx> show firewall filter sa-discard-v4 counter discard-v4-default
>
> Filter: sa-discard-v4
> Counters:
> Name                                                Bytes          Packets
> discard-v4-default                                      0                0
>
>
>
> Has anyone set this up with static routing?  All the examples use BGP, but
> I
> can't imagine why that would make a difference for the reporting if the
> traffic is correctly discarded.
>
> Thanks,
>
> Jason
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos Telemetry Interface (JTI)

2018-10-11 Thread Tom Beecher
Related: at NANOG in Vancouver, my company open-sourced a tool we've been
working on for network telemetry. I'm 95% sure that a JTI receiver is
functional on our internal builds, but they're still working on a few
things with streaming receivers generally, so it's not yet in the public
repo. May be something that can meet your needs at some point if you wanted
to keep an eye on it.

https://github.com/yahoo/panoptes

On Thu, Oct 11, 2018 at 9:02 AM Niall Donaghy 
wrote:

> Fantastic news Aaron!
>
> That tallies with our experience of deploying the 'bundle' version of
> OpenNTI
> for Junos ST.
>
> We look forward to your shared experiences as you kick the tyres and -
> hopefully - incorporate this into your NMS/procedures. :)
>
> Many thanks,
> Niall
>
>
> -Original Message-
> From: Aaron Gould [mailto:aar...@gvtc.com]
> Sent: 11 October 2018 13:59
> To: juniper-nsp@puck.nether.net
> Cc: James Burnett ; Niall Donaghy
> ; 'Colton Conor' 
> Subject: RE: [j-nsp] Junos Telemetry Interface (JTI)
>
> Wanted to circle back with y'all... I finally got this working...thanks to
> techmocha10 (see below) and my linux coworker genius (Dave),
>
> I'll just copy/paste a post I just made...
>
>
> https://forums.juniper.net/t5/vMX/Telemetry-data-is-not-streaming-from-Juniper-vMX-17-4R1-16/m-p/375996#M923
>
>
> I got telemetry streaming working using this site ... I have a couple
> MX960's
> streaming telemetry to the suite of software provided in this Open-NTI
> project
> spoken of on this techmocha blog site.  I think my previous problems were
> related to conflicting installs as myself and my coworker had loaded
> individual items and then the open-nti suite (which i understand is a
> docker
> container with all the items like grafana, fluentd, chronograf, influxdb,
> etc) anyway, we started with a fresh-install Ubuntu virtual machine
> and
> *only* loaded Open-NTI and it works.
>
>
> I do not know or understand all of the inner workings of it at this point,
> but
> am quickly learning, even while writing this post... I'm currently using
> Chronograf hosted at port  and browsing the Data Explorer function and
> seeing some nice graphs.  (I'm wondering if Chronograf is simply an
> alternative
> to Grafana gui front end, unsure) There seems to be tons of items to
> monitor
> and analyze, and I'm currently only sending the following sensor resource
> from
> the MX960 and there are several more that can be sent
> /junos/system/linecard/interface/
>
>
> I am sending the telemetry from the MX960 using UDP transport and GPB
> format to port 5 and source port 2 (mx960-1) and 21112 (mx960-2).  I'm
> unsure that I had to use unique source ports... as I wonder if the
> source-ip would have been sufficient to make the streaming sources unique
> in the Open-NTI server.
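>
> For reference, the shape of that config is roughly this (the addresses and
> ports below are placeholders, not my real ones):
>
> set services analytics streaming-server OPEN-NTI remote-address 10.0.0.10 remote-port 50000
> set services analytics export-profile NTI local-address 10.0.0.1 reporting-rate 30 format gpb transport udp
> set services analytics sensor IFD server-name OPEN-NTI export-name NTI resource /junos/system/linecard/interface/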
>
>
> Looking at the techmocha pictures, and the "docker ps" command on the
> linux
> server, and now this new-found techmocha link (see "deconstructed" below)
> apparently FluentD is the TSDB (time series db) that is
> receiving/ingesting
> the *Native* streaming form of telemetry from my MX960's on udp port 5
> and
> looks like fluentd hands off that data to InfluxDB port 8086 (which i
> think
> happens internally at that server).  (I'm not even talking about the
> other
> form of jti telemetry using openconfig and grpc... I've yet to do that and
> don't know why I would exactly... which I believe is ingested using
> telegraf,
> unsure)
>
>
> ...the link i followed to deploy open-nti suite
>
> https://techmocha.blog/2017/06/26/using-opennti-as-a-collector-for-streaming-telemetry-from-juniper-devices-part-1/#comments
>
>
> ...interestingly, i just now found this, which apparently is a way of
> deploying all the components individually...
> https://techmocha.blog/2017/10/31/serving-up-opennti-deconstructed/
>
>
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Traffic delayed

2018-10-02 Thread Tom Beecher
You now have switches with completely different buffer depths than you had
before. You probably want to look into that.
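
On the QFX5k side, something like this is where I'd start looking (interface
name is just an example):

 show class-of-service shared-buffer
 show interfaces queue xe-0/0/0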

On Tue, Oct 2, 2018 at 9:39 AM james list  wrote:

> Dear experts
>
> I’ve a strange issue.
>
> Our customer replaced two L2/3 switches (C6500), where a pure L2 and L3
> (HSRP) environment was set up, with a couple of new MX9k running the same L2
> and L3 services but those two MX are running MPLS/VPLS to transport L3/L2
> frames. Access switches are QFX5k connected to MX MPLS PE.
>
> Now the main issue: roughly every 30 minutes (sometimes 28, sometimes 33,
> sometimes 30) the customer detects some frames received with a delay of
> 3-600 milliseconds. The customer is a trading venue.
>
> It seems like something slows down the forwarding processing. Now, I know
> Juniper separates forwarding and control, but I was thinking of OSPF LSA
> refresh or something like that, since the frequency is around 30 minutes.
>
> Can anybody help me sort out what the main issue could be here?
>
> Thanks in advance
>
> Cheers,
>
> James
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] auto b/w mpls best practice -- cpu spikes

2018-09-13 Thread Tom Beecher
There's no one magic knob that fixes CPU spikes in an MPLS environment.
They're all different. What I change to optimize mine might knock your
network over in 5 minutes. You need to determine what is triggering the
churn before you can reasonably optimize it. Take a look at the logs and see
what is causing the path changes behind the CPU spikes, and work from there.
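
For example, with the log-updown/syslog config you already have, something
like this will show what's flapping and re-signaling (the extensive output
includes a per-LSP history of path changes):

 show log messages | match RPD_MPLS
 show mpls lsp ingress extensive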

Having pre-signaled secondary paths will generally always be a good idea,
although with those try to use the sync-active-path-bandwidth command too
to prevent stale secondary RSVP reservations. Make-before-break is almost
universally a good idea too.
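
In config terms that combination looks roughly like this (LSP and path names
invented; the exact placement and availability of sync-active-path-bandwidth
depends on your release):

 label-switched-path SOME-LSP {
     adaptive;
     sync-active-path-bandwidth;
     primary PATH-A;
     secondary PATH-B {
         standby;
     }
 }

adaptive gives you make-before-break re-signaling, and standby keeps the
secondary pre-signaled so failover doesn't have to wait on CSPF.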

On code, personally I wouldn't ever go latest and greatest. It usually
means you just find the latest and greatest bugs. :) I go with the newest
stable version that doesn't have bugs that screw me, and upgrade only when
it's a feature I need, optimization I want, or security reasons.

Find your churn causes, work from there.

On Thu, Sep 13, 2018 at 7:40 AM Saku Ytti  wrote:

> I think 16.1 was first.
>
> ps Haux|grep rpd should show multiple rpd lines.
>
> Also
> y...@r41.labxtx01.us.bb> show task io |match {
>  KRT IO task  0   0   0   0
> 0 {krtio-th}
>  krtio-th 0   0   0   0
> 0 {krtio-th}
>  krt ioth solic client0   0 869   0
> 0 {krtio-th}
>  KRT IO   0   0   0   0
> 0 {krtio-th}
>  bgpio-0-th   0   0   0   0
> 0 {bgpio-0-th}
>  rsvp-io  0   0   0   0
> 0 {rsvp-io}
>  jtrace_jthr_task 0   0   0   0
> 0 {TraceThread}
>
> I'd just go latest and greatest.
> On Thu, 13 Sep 2018 at 12:13, tim tiriche  wrote:
> >
> > .o issues with convergence or suboptimal paths.  The noc is constantly
> seeing high cpu alerts and that was concerning.  Is this normal in other
> networks?
> >
> > Running 14.1R7.4 with mx480/240 mix.
> > I usually follow the code listed here:
> https://kb.juniper.net/InfoCenter/index?page=content&id=KB21476
> >
> > Which code version have these optimization happened in?
> >
> >
> > On Wed, Sep 12, 2018 at 2:11 AM Saku Ytti  wrote:
> >>
> >> Hey Tim,
> >>
> >> I'd optimise for customer experience, not CPU utilisation. Do you have
> >> issues with convergence time, suboptimal paths?
> >>
> >> Which JunOS you're running? There are quite good reasons to jump in
> >> recent JunOS for RSVP, as you can get RSVP its own core, and you can
> >> get make-before-break LSP reoptimisation, which actually works
> >> event-driven rather than timer based (like what you have, causing LSP
> >> blackholing if LSP convergence lasts longer than timers).
> >>
> >>
> >>
> >>
> >> On Wed, 12 Sep 2018 at 08:05, tim tiriche 
> wrote:
> >> >
> >> > Hi,
> >> >
> >> > Attached is my MPLS Auto B/w Configuration and i see frequent path
> changes
> >> > and cpu spikes.  I have a small network and wanted to know if there
> is any
> >> > optimization/best practices i could follow to reduce the churn.
> >> >
> >> > protocols {
> >> > mpls {
> >> > statistics {
> >> > file mpls.statistics size 1m files 10;
> >> > interval 300;
> >> > auto-bandwidth;
> >> > }
> >> > log-updown {
> >> > syslog;
> >> > trap;
> >> > trap-path-down;
> >> > trap-path-up;
> >> > }
> >> > traffic-engineering mpls-forwarding;
> >> >
> >> > rsvp-error-hold-time 25;
> >> > smart-optimize-timer 180;
> >> > ipv6-tunneling;
> >> > optimize-timer 3600;
> >> > label-switched-path <*> {
> >> > retry-timer 600;
> >> > random;
> >> > node-link-protection;
> >> > adaptive;
> >> > auto-bandwidth {
> >> > adjust-interval 7200;
> >> > adjust-threshold 20;
> >> > minimum-bandwidth 1m;
> >> > maximum-bandwidth 9g;
> >> > adjust-threshold-overflow-limit 2;
> >> > adjust-threshold-underflow-limit 4;
> >> > }
> >> > primary <*> {
> >> > priority 5 5;
> >> > }
> >> > }
> >> > ___
> >> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>
> >>
> >>
> >> --
> >>   ++ytti
>
>
>
> --
>   ++ytti
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Mixing v4/v6 neighbors in BGP groups

2018-07-02 Thread Tom Beecher
In my view, the benefits of keeping them separate far outweigh the extra
effort of creating and managing the additional groups.
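
i.e. something like this (group names and peers are documentation examples):

 protocols {
     bgp {
         group TRANSIT-V4 {
             family inet {
                 unicast;
             }
             neighbor 192.0.2.1;
         }
         group TRANSIT-V6 {
             family inet6 {
                 unicast;
             }
             neighbor 2001:db8::1;
         }
     }
 }

Any family-specific policy, prefix limits, or damping then stays cleanly
scoped to its own group.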

On Fri, Jun 29, 2018 at 11:01 AM, Rob Foehl  wrote:

> Wondering aloud a bit...  I've seen plenty of cases where wedging parallel
> v4/v6 sessions into the same BGP group and letting the router sort out
> which AFI it's supposed to be using on each session works fine, and nearly
> as many where configuring anything family-specific starts to get ugly
> without splitting them into separate v4/v6 groups.  Are there any
> particularly compelling reasons to prefer one over the other?
>
> I can think of a bunch of reasons for and against on both sides, and
> several ways to handle it with apply-groups or commit scripts.  Curious
> what others are doing here.
>
> Thanks!
>
> -Rob
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Router for full routes

2018-06-27 Thread Tom Beecher
Can confirm: convergence time on the MX80 with even a single full-table
session is extremely painful, and essentially not functional in a
production environment.


On Wed, Jun 27, 2018 at 7:10 AM, Dovid Bender  wrote:

> Hi All,
>
> In my 9-5 I work for an ITSP where we have two MX5's with
> - iBGP
> - two upstreams with two BGP sessions each (one per routes)
> - one upstream with one bgp session
> - one bgp session where we get minimal routes (maybe 15 total)
>
> I was told that convergence time on the MX5 would be horrible so we never
> tried full routes. I am wondering what's the "lowest" model that can
> support full routes without having an issue re-sorting the routes.
>
> TIA.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp