[j-nsp] QFX5200 switches and fresh install of OS.

2024-08-29 Thread Lee Starnes via juniper-nsp
Hello everyone,

Does anyone know of a document describing how to format and reinstall the OS
on the QFX5200, similar to what you can do on the EX series switches? Either
via TFTP or from a USB stick? I have done numerous searches, but the only
results that come up relate to the EX series switches and their loader menu.
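For what it's worth, a heavily hedged sketch of what I'd try from the Junos CLI first (I have not verified the exact behavior on the QFX5200, and the image filename below is a placeholder):

```
user@qfx> request system zeroize
    # wipes configuration and logs, returns the box to factory defaults

user@qfx> request system software add /var/tmp/junos-install-qfx-x86-64-XX.Y.tgz reboot
    # clean reinstall from an install image copied to the box
```

Whether a loader-menu/USB format-install equivalent to the EX procedure exists on this platform is exactly the open question.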

Best,

-Lee
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP timer

2024-05-03 Thread Lee Starnes via juniper-nsp
Hello Mark,

Thanks for asking. This is eBGP, and the issue is that we have seen failures
where the link itself does not go down, so link state cannot be used to detect
that routes should be removed. In some cases the BGP session has stayed up as
well, yet no traffic passed.

On Mon, Apr 29, 2024 at 9:31 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 4/29/24 17:42, Lee Starnes via juniper-nsp wrote:
> > As for BFD and stability with aggressive settings, we don't run too
> > aggressive on this, but certainly do require it because the physical
> links
> > have not gone down in our cases when we have had issues, causing a larger
> > delay in killing the routes for that path. Not being able to rely on link
> > state failure leaves us with requiring the use of BFD.
>
> Is this link carrying eBGP or iBGP?
>
> If the latter, have you considered using BFD to track the IGP instead of
> BGP?
>
> Mark.


Re: [j-nsp] BGP timer

2024-04-29 Thread Lee Starnes via juniper-nsp
Thank you everyone for the replies on this topic. For us, we would rather
keep a link down longer when it has an issue and goes down than to have it
come back up and then go down again. This is because the flapping is very
destructive to live video and VoIP. Having several diverse backbone
connections, we can tolerate having one down. This topic came up because we
have had one of our backbone carriers become problematic, and the flapping
caused by their issues did a lot of damage in terms of customer relations. So
we certainly want to let a failed link sit down for a little while after it
recovers before bringing BGP back up.

As for BFD and stability with aggressive settings, we don't run overly
aggressive timers, but we certainly do require BFD, because in our failure
cases the physical links have not gone down, which delayed the withdrawal of
routes for that path. Not being able to rely on link-state failure leaves us
requiring BFD.

Again, thanks for all the replies everyone. I will check out the BFD
holddown.
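For anyone following along, a minimal sketch of the holddown knob being discussed (group name, neighbor, and timer values are illustrative; the holddown interval is in milliseconds, so check your release's documentation):

```
protocols {
    bgp {
        group transit {
            neighbor 192.0.2.1 {
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                    /* BFD must stay up this long before BGP is notified "up" */
                    holddown-interval 30000;
                }
            }
        }
    }
}
```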

-Lee

On Mon, Apr 29, 2024 at 5:43 AM Jeff Haas via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
> On 4/29/24, 02:41, "Saku Ytti" <s...@ytti.fi> wrote:
> > On Sun, 28 Apr 2024 at 21:20, Jeff Haas via juniper-nsp
> > > BFD holddown is the right feature for this.
> >
> > But why is this desirable? Why do I want to prioritise stability
> > always, instead of prioritising convergence on well-behaved interfaces
> > and stability on poorly behaved interfaces?
>
> This feature is "don't bring up BGP on interfaces that aren't stable
> enough to
> let BFD stay up".  The intended use case is when you have an interface
> noisy
> enough that TCP can fight its way through keeping BGP up... enough, but not
> stable enough that you'd really want to forward over it.  The assessment
> for
> that is "BFD will go down in short order".
>
> > That is, if I cannot have exponential back-off, I won't kill
> > convergence 'just in case', because it's not me who will feel the pain
> > of my decisions, it's my customers. Netengs and particularly infosec
> > people quite often are unnecessarily conservative in their policies,
> > because they don't have skin in the game, they feel the upside, but
> > not the downside.
>
> People make decisions that are appropriate for their networks.  Using BFD
> on
> your BGP sessions is probably overkill *for you*.  Don't do that then.
>
> -- Jeff
>


[j-nsp] BGP timer

2024-04-27 Thread Lee Starnes via juniper-nsp
Hello everyone,

I am having difficulty finding a way to prevent BGP from re-establishing after
a BFD down detect. I am looking for a way to keep the session from
re-establishing for a configured amount of time (say 5 minutes), so we don't
have a flapping session over a link that is having issues.

We asked JTAC, but they came back with the reverse: a way to keep the session
up for a certain amount of time before it drops (not what we want).

Is there a way to do this? We are using MX204 routers running Junos 23.4R1.9
(the latest).

Best,

-Lee


[j-nsp] BGP route announcements and Blackholes

2024-03-19 Thread Lee Starnes via juniper-nsp
Hello Juniper gurus. I am seeing an issue with a carrier that does RTBH via a
direct BGP announcement rather than via BGP communities. This is done over a
BGP peering to a dedicated blackhole BGP router/server.

My issue here is that our aggregate IP block that is announced to our
backbone providers gets impacted when creating a /32 static discard route
to announce to that blackhole peer.

The blackhole peer does receive the /32 announcement, but the aggregate
route also becomes discarded and thus routes to the other peers stop
working.

I have been trying to determine how to accomplish this without killing all
routes.

So we have several /30 to /23 routes within our /19 block that are
announced via OSPF from our switches to the routers. The routers aggregate
these to the /19 to announce the entire larger block to the backbone
providers.

The blackhole peer takes routes down to a /32 for mitigation of an attack.
If we add a static route as "route x.x.22.12/32 discard" we get:

show route x.x.22.10

inet.0: 931025 destinations, 2787972 routes (931025 active, 0 holddown, 0
hidden)
@ = Routing Use Only, # = Forwarding Use Only
+ = Active Route, - = Last Active, * = Both

x.x.0.0/19 *[OSPF/125] 5d 19:26:19, metric 20, tag 0
>  to 10.20.20.3 via ae0.0
[Aggregate/130] 5d 20:18:36
   Reject


While we see the more specific route as discard:

show route x.x.22.12

inet.0: 931022 destinations, 2787972 routes (931022 active, 0 holddown, 0
hidden)
@ = Routing Use Only, # = Forwarding Use Only
+ = Active Route, - = Last Active, * = Both
x.x.22.12/32*[Static/5] 5d 20:20:07
   Discard



Does anyone have a working config for this type of setup, or tips on what I
need to do or what I am doing wrong?
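Not knowing the full policy setup, a common shape for this is to tag the /32 discard statics with a community and export only those to the blackhole peer, so the normal aggregate export to transit is untouched (the community value and policy names below are made up):

```
routing-options {
    static {
        route x.x.22.12/32 {
            discard;
            community 65000:666;   /* hypothetical blackhole tag */
        }
    }
}
policy-options {
    community BLACKHOLE members 65000:666;
    policy-statement EXPORT-BLACKHOLE {
        term rtbh {
            from {
                protocol static;
                community BLACKHOLE;
                route-filter 0.0.0.0/0 prefix-length-range /32-/32;
            }
            then accept;
        }
        term reject-rest {
            then reject;
        }
    }
}
```

Applied as the export policy on the blackhole peer only, this keeps the /32 announcement isolated; it is worth double-checking that the export policies toward the regular transit peers cannot also match the discard static.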

Best,

-Lee


[j-nsp] MX204 OSPF default route injection

2024-03-06 Thread Lee Starnes via juniper-nsp
Hey guys,

I am running into a problem where, when trying to inject a default route into
an OSPF area, the MX204 is simply not doing so.

I followed the documentation below for injecting a default route into a stub
area, but the stub area never receives the route:

https://www.juniper.net/documentation/us/en/software/junos/ospf/topics/topic-map/configuring-ospf-areas.html

Search for "set protocols ospf area 07 stub" to find the section I am
referring to.

I also came across another Juniper document that does not set up a stub area
but does allow a default route to be injected. Ignoring the BGP aspect of that
document (we are not using that portion of the config), I implemented it and
it does send a default route to the OSPF areas:
https://supportportal.juniper.net/s/article/How-to-inject-default-route-into-OSPF-using-generate-route?language=en_US

So my question is: what is the best-practice setup for this, and if it is the
first option (stub area), what would prevent the default route from being
injected into the area?

The setup is two MX204s connected to a Dell switch stack that runs OSPF, with
all routes configured on the VLAN interfaces. The MX routers connect to the
stack via an LACP bundle (one link to each chassis in the stack) and only need
to send default routes to the switch stack, while receiving all routes (minus
the default route) from the stack for propagation outbound via BGP.

The solution needs to work not only with the Dell switch stack, but also with
a separate area that will have Juniper QFX5200 switches.

Any tips or help on the best practice implementation would be greatly
appreciated.
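For reference, hedged sketches of the two approaches (area ID, metric, and policy names are illustrative). One common gotcha with the stub approach is that only an ABR with an active backbone connection originates the stub default, and only when `default-metric` is configured:

```
/* Option 1: stub area, default originated by the ABR */
protocols {
    ospf {
        area 0.0.0.7 {
            stub default-metric 10;
        }
    }
}

/* Option 2: generated default route exported into OSPF */
routing-options {
    generate {
        route 0.0.0.0/0;   /* needs at least one contributing route in inet.0 */
    }
}
policy-options {
    policy-statement EXPORT-DEFAULT {
        term default {
            from {
                protocol aggregate;   /* generated routes match protocol aggregate */
                route-filter 0.0.0.0/0 exact;
            }
            then accept;
        }
    }
}
protocols {
    ospf {
        export EXPORT-DEFAULT;
    }
}
```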

Best,

-Lee


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Lee Starnes via juniper-nsp
All very good information. Thanks, guys, for all the replies. Very helpful.

On Thu, Feb 8, 2024 at 6:42 AM Mark Tinka  wrote:

>
>
> On 2/8/24 16:29, Saku Ytti wrote:
>
> In absence of more specifics, junos by default doesn't discard but
> reject.
>
>
> Right, which I wanted to clarify if it does the same thing with this
> specific feature, or if it does "discard"
>
> Mark.
>


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Lee Starnes via juniper-nsp
Thanks, Mark, for the quick reply. That was the validation I was looking for.
The TAC engineer was really unsure about what he was doing and I had to guide
him through things, so this is very helpful.

Thanks again.

-Lee


On Tue, Feb 6, 2024 at 8:54 AM Mark Tinka  wrote:

>
>
> On 2/6/24 18:48, Lee Starnes via juniper-nsp wrote:
>
> > Hello everyone,
> >
> > I was having difficulty in getting an announcement of a IPv6 /32 block
> > using prefix-lists rather than redistribution of the IP addresses in from
> > other protocols. We only have a couple /64 blocks in use at the moment
> but
> > want to be able to announce the entire /32. In cisco, that would just be
> a
> > holddown route and then announce. Not sure how it works to Juniper.
> >
> > I configured a prefix-list that contained the /32 block in it. Then
> created
> > a policy statement with term 1 from prefix-list  and then term 2
> then
> > accept. Set the export in BGP protocol peer of this policy statement and
> it
> > just ignores it.
> >
> > Now this same setup in IPv4 works fine.
> >
> > After a week of going round and round with Juniper TAC, they had me
> setup a
> > rib inet6 aggregate entry for the /32 and then use that in the policy
> > statement.
>
> This is the equivalent of the "hold-down" route you refer to in
> Cisco-land. Useful if the route does not exist in the RIB from any other
> source.
>
> I'm guessing your IPv4 route just works without a hold-down route
> because it is being learned from somewhere else (perhaps your IGP, iBGP
> or a static route), and as such, already exists in the router's RIB for
> your export policy to pick it up with no additional fiddling.
>
> Typically, BGP will not originate a route to its neighbors unless it
> already exists in the routing table through some source. If it is an
> aggregate route, a hold-down pointing to "discard" (Null0 in Cisco) is
> enough. If it is a longer route assigned to a customer, that route
> pointing to the customer will do.
>
> Mark.
>


[j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Lee Starnes via juniper-nsp
Hello everyone,

I was having difficulty getting an IPv6 /32 block announced using
prefix-lists, rather than redistributing the addresses in from other
protocols. We only have a couple of /64 blocks in use at the moment, but want
to be able to announce the entire /32. In Cisco-land that would just be a
hold-down route plus the announcement; I was not sure how it works in Junos.

I configured a prefix-list containing the /32 block, then created a policy
statement with term 1 matching from that prefix-list and term 2 then accept.
I set this policy statement as the export policy on the BGP peer, and it just
gets ignored.

Now this same setup in IPv4 works fine.

After a week of going round and round with Juniper TAC, they had me set up a
rib inet6 aggregate entry for the /32 and then use that in the policy
statement.

It seemed kind of kludgy, so I just wanted to ask here: is this the typical
way of going about it, or is there a better, more accepted way of doing this?
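For comparison, the shape TAC's suggestion takes (the prefix and policy name are placeholders):

```
routing-options {
    rib inet6.0 {
        aggregate {
            route 2001:db8::/32;
        }
    }
}
policy-options {
    policy-statement EXPORT-V6 {
        term agg {
            from {
                protocol aggregate;
                route-filter 2001:db8::/32 exact;
            }
            then accept;
        }
    }
}
```

As far as I understand, this is the standard Junos pattern rather than a kludge: BGP only exports routes that are active in the RIB, and the aggregate (or a static discard) is what puts the /32 there.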

Thanks,

-Lee


Re: [j-nsp] Juniper QFX5200-32c and QSFP28 channelized optics

2023-12-29 Thread Lee Starnes via juniper-nsp
Thanks for the info and links, Tobias. Really helpful. It must just be a
Juniper limitation that this optic only runs at 25G, because on Cisco and
Mikrotik it supports speeds down to 1G. Anyway, at least I have answers and a
solution.

Best,

-Lee

On Fri, Dec 29, 2023 at 1:31 AM Tobias Heister via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi,
>
> While it sometimes works to let 25G transceivers run at 10G (depending
> on Transceiver and Device(s)) i think in this case you will need a
> different transceiver.
>
> See e.g. HCT for reference:
> https://apps.juniper.net/hct/product/?prd=QFX5200-32C
>
> the PSM only lists 25 and 100 as being supported:
> https://apps.juniper.net/hct/model/?component=JNP-QSFP-100G-PSM4
>
> For 10G SM Breakout something like
> https://apps.juniper.net/hct/model/?component=JNP-QSFP-4X10GE-LR would
> be the way to go.
>
>
> regards
> Tobias
>
> Am 29.12.2023 um 02:38 schrieb Lee Starnes via juniper-nsp:
> > Hello everyone,
> >
> > I am running into an issue on our QFX5200 switches where I have
> installed a
> > QSFP-100G-PSM4 optic. This can do 1G/10G/25G on the 4 channels. My issue
> is
> > that I am not able to get the interfaces to go to 10G even though I have
> > set them as such.
> >
> > If setting all 4 channels to 10G only a single interface shows at 100G.
> If
> > I set them all to 25G, all 4 show as 25G. Then if I change one of the
> > channels to 10G, all 4 remain as 25G.
> >
> > Is this an issue with how I am setting this up or an issue with the type
> of
> > Optic being used? Below is the config for the ports in the last state I
> > tested.
> >
> > chassis {
> >  fpc 0 {
> >  pic 0 {
> >  port 0 {
> >  channel-speed 10g;
> >  }
> >  port 1 {
> >  channel-speed 25g;
> >  }
> >  port 2 {
> >  channel-speed 25g;
> >  }
> >  port 3 {
> >  channel-speed 25g;
> >  }
> >
> > Thanks for any info or documents you can point me to.
> >
> > Best,
> >
> > -Lee


[j-nsp] Juniper QFX5200-32c and QSFP28 channelized optics

2023-12-28 Thread Lee Starnes via juniper-nsp
Hello everyone,

I am running into an issue on our QFX5200 switches where I have installed a
QSFP-100G-PSM4 optic. This optic can do 1G/10G/25G on its 4 channels, but I am
not able to get the interfaces to come up at 10G even though I have set them
as such.

If I set all 4 channels to 10G, only a single interface shows up, at 100G. If
I set them all to 25G, all 4 show as 25G. Then, if I change one of the
channels to 10G, all 4 remain at 25G.

Is this an issue with how I am setting this up, or with the type of optic
being used? Below is the port config in the last state I tested.

chassis {
    fpc 0 {
        pic 0 {
            port 0 {
                channel-speed 10g;
            }
            port 1 {
                channel-speed 25g;
            }
            port 2 {
                channel-speed 25g;
            }
            port 3 {
                channel-speed 25g;
            }
        }
    }
}

Thanks for any info or documents you can point me to.

Best,

-Lee