Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Lee Starnes via juniper-nsp
All very good information. Thanks, guys, for all the replies. Very helpful.

On Thu, Feb 8, 2024 at 6:42 AM Mark Tinka  wrote:

>
>
> On 2/8/24 16:29, Saku Ytti wrote:
>
> In the absence of more specifics, Junos by default doesn't discard but
> rejects.
>
>
> Right, which is why I wanted to clarify whether it does the same thing with
> this specific feature, or whether it does "discard".
>
> Mark.
>


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Tom Beecher via juniper-nsp
>
> Is the same true for VMware?
>

Never tried it there myself.

> I'd rather do that and be able to run a
> solid software-only OS than be a test-bed for cRPD in such a use-case.


AFAIK, cRPD is part of the same build pipeline as 'full' Junos, so if
there's a bug in any given version, it will catch you on Juniper's metal,
or your own metal for vMX, or cRPD (assuming said bug is not
hardware-dependent/related).

On Thu, Feb 8, 2024 at 10:21 AM Mark Tinka  wrote:

>
>
> On 2/8/24 17:10, Tom Beecher wrote:
>
> >
> > For any use cases where you want protocol interaction, but not
> > substantive traffic forwarding capabilities, cRPD is by far the
> > better option.
> >
> > It can handle around 1M total RIB/FIB using around 2G RAM, right in
> > Docker or k8s. The last version of vMX I played with required at least
> > 5G RAM / 4 cores to even start the vRE and vPFEs up, plus you have to
> > do a bunch of KVM tweaking and customization, along with NIC driver
> > fun. All of that has to work right just to START the thing, even if
> > you have no intent to use it for forwarding. You could have cRPD up in
> > 20 minutes on even a crappy Linux host. vMX has a lot more overhead.
>
> Is the same true for VMware?
>
> I had a similar experience trying to get CSR1000v on KVM going back in
> 2014 (and Junos vRR, as it were). Gave up and moved to CSR1000v on
> VMware where it was all sweeter. Back then, vRR did not support
> VMware... only KVM.
>
> On the other hand, if you are deploying one of these as an RR, hardware
> resources are going to be the least of your worries. In other words,
> some splurging is in order. I'd rather do that and be able to run a
> solid software-only OS than be a test-bed for cRPD in such a use-case.
>
> Mark.
>


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Mark Tinka via juniper-nsp



On 2/8/24 17:10, Tom Beecher wrote:



For any use cases where you want protocol interaction, but not
substantive traffic forwarding capabilities, cRPD is by far the
better option.


It can handle around 1M total RIB/FIB using around 2G RAM, right in 
Docker or k8s. The last version of vMX I played with required at least
5G RAM / 4 cores to even start the vRE and vPFEs up, plus you have to 
do a bunch of KVM tweaking and customization, along with NIC driver 
fun. All of that has to work right just to START the thing, even if 
you have no intent to use it for forwarding. You could have cRPD up in 
20 minutes on even a crappy Linux host. vMX has a lot more overhead.


Is the same true for VMware?

I had a similar experience trying to get CSR1000v on KVM going back in 
2014 (and Junos vRR, as it were). Gave up and moved to CSR1000v on 
VMware where it was all sweeter. Back then, vRR did not support 
VMware... only KVM.


On the other hand, if you are deploying one of these as an RR, hardware 
resources are going to be the least of your worries. In other words, 
some splurging is in order. I'd rather do that and be able to run a 
solid software-only OS than be a test-bed for cRPD in such a use-case.


Mark.


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Tom Beecher via juniper-nsp
>
> I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
> thing) seems to make more sense.
>

For any use cases where you want protocol interaction, but not substantive
traffic forwarding capabilities, cRPD is by far the better option.

It can handle around 1M total RIB/FIB using around 2G RAM, right in Docker
or k8s. The last version of vMX I played with required at least 5G RAM / 4
cores to even start the vRE and vPFEs up, plus you have to do a bunch of
KVM tweaking and customization, along with NIC driver fun. All of that has
to work right just to START the thing, even if you have no intent to use it
for forwarding. You could have cRPD up in 20 minutes on even a crappy Linux
host. vMX has a lot more overhead.
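
As a rough illustration of the "up in 20 minutes" claim, here is a minimal
sketch of launching cRPD under Docker (the image tag, container name, and
volume names are made up for illustration, and it assumes the image has
already been pulled from Juniper's registry):

    # Persistent volumes for configuration and logs
    docker volume create crpd01-config
    docker volume create crpd01-varlog

    # Start cRPD; --privileged is needed so rpd can program kernel routes
    docker run --rm --detach --name crpd01 -h crpd01 \
      --privileged --net=bridge \
      -v crpd01-config:/config -v crpd01-varlog:/var/log \
      -it crpd:23.4R1.9

After that, "docker exec -it crpd01 cli" should drop you into the familiar
Junos CLI.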



On Thu, Feb 8, 2024 at 3:13 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 2/8/24 09:50, Roger Wiklund via juniper-nsp wrote:
>
> > Hi
> >
> > I'm curious, when moving from vRR to cRPD, how do you plan to
> > manage/set up the infrastructure that cRPD runs on?
>
> I run cRPD on my laptop for nothing really useful apart from testing
> configuration commands, etc.
>
> I wouldn't consider cRPD for production. vRR (or vMX, if it's still a
> thing) seems to make more sense.
>
> Mark.


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Mark Tinka via juniper-nsp




On 2/8/24 16:29, Saku Ytti wrote:


In the absence of more specifics, Junos by default doesn't discard but
rejects.


Right, which is why I wanted to clarify whether it does the same thing with
this specific feature, or whether it does "discard".


Mark.


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Saku Ytti via juniper-nsp
On Thu, 8 Feb 2024 at 16:07, Mark Tinka via juniper-nsp
 wrote:

> So internally, if it attracts any traffic for non-specific destinations,
> does Junos send it to /dev/null in hardware? I'd guess so...

In the absence of more specifics, Junos by default doesn't discard but
rejects. There is essentially an implied 0/0 static route to a reject
adjacency. This can be changed to discard, or you can just nail
down a default discard route.
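
For illustration, a minimal sketch of nailing down that default discard in
set-style config, assuming you want both address families covered:

    # Replace the implied reject default with an explicit discard
    set routing-options static route 0.0.0.0/0 discard
    set routing-options rib inet6.0 static route ::/0 discard

With those in place, traffic to otherwise-unrouted destinations is silently
dropped in hardware instead of being answered with ICMP unreachables.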


-- 
  ++ytti


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Jeff Haas via juniper-nsp
Correcting myself, yes, it’s discard.

-- Jeff




From: Mark Tinka 
Date: Thursday, February 8, 2024 at 9:07 AM
To: Jeff Haas , Lee Starnes , 
"juniper-nsp@puck.nether.net" 
Subject: Re: [j-nsp] MX204 and IPv6 BGP announcements



On 2/8/24 15:48, Jeff Haas wrote:
It’s RIB-only. If you wanted the usual other properties, you’d use the usual
other features.

So internally, if it attracts any traffic for non-specific destinations, does 
Junos send it to /dev/null in hardware? I'd guess so...

Mark.


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Mark Tinka via juniper-nsp


On 2/8/24 15:48, Jeff Haas wrote:

It’s RIB-only. If you wanted the usual other properties, you’d use
the usual other features.




So internally, if it attracts any traffic for non-specific destinations, 
does Junos send it to /dev/null in hardware? I'd guess so...


Mark.


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-08 Thread Jeff Haas via juniper-nsp
It’s RIB-only. If you wanted the usual other properties, you’d use the usual
other features.

-- Jeff




From: Mark Tinka 
Date: Thursday, February 8, 2024 at 12:14 AM
To: Jeff Haas , Lee Starnes , 
"juniper-nsp@puck.nether.net" 
Subject: Re: [j-nsp] MX204 and IPv6 BGP announcements



On 2/6/24 19:42, Jeff Haas wrote:



And for situations where you need it nailed up:



https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/bgp-static-edit-routing-options.html

Interesting, never knew about this BGP-specific feature.

What does the router do with this in FIB? Same as a regular static route
pointing to 'discard'? Or does it just stay in RIB?

Mark.
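
For reference, a rough sketch of what that looks like in set-style config,
using illustrative documentation prefixes and assuming the usual rib
hierarchy applies for IPv6 (exact sub-options vary by release; see the
linked reference):

    set routing-options bgp-static route 192.0.2.0/24
    set routing-options rib inet6.0 bgp-static route 2001:db8::/32

Per Jeff's follow-up above, such a route lands in the FIB with a discard
next hop, so matched traffic is dropped in hardware rather than rejected.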


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Saku Ytti via juniper-nsp
On Thu, 8 Feb 2024 at 10:16, Mark Tinka  wrote:

> Is the MX150 still a current product? My understanding is it's an x86 
> platform running vMX.

No longer orderable.

-- 
  ++ytti


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Mark Tinka via juniper-nsp




On 2/8/24 09:56, Saku Ytti via juniper-nsp wrote:


Same concerns. I would just push it back and be a late adopter: rock the
existing vRR while supported, not pre-empt into cRPD just because the
vendor says that's the future. Let someone else work with the vendor to
ensure feature parity, and indeed perhaps get some appliance from the
vendor.


Agreed.


With HPE, I feel like there is a lot more incentive to sell integrated
appliances to you than before.


Is the MX150 still a current product? My understanding is it's an x86 
platform running vMX.


Mark.


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-08 Thread Mark Tinka via juniper-nsp




On 2/8/24 09:50, Roger Wiklund via juniper-nsp wrote:


Hi

I'm curious, when moving from vRR to cRPD, how do you plan to manage/set up
the infrastructure that cRPD runs on?


I run cRPD on my laptop for nothing really useful apart from testing 
configuration commands, etc.


I wouldn't consider cRPD for production. vRR (or vMX, if it's still a 
thing) seems to make more sense.


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp