Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Jeff Haas via juniper-nsp


On 2/6/24, 11:55 AM, "juniper-nsp on behalf of Mark Tinka via juniper-nsp" 
<juniper-nsp-boun...@puck.nether.net on behalf of juniper-nsp@puck.nether.net> wrote:
> Typically, BGP will not originate a route to its neighbors unless it
> already exists in the routing table through some source. If it is an
> aggregate route, a hold-down pointing to "discard" (Null0 in Cisco) is
> enough. If it is a longer route assigned to a customer, that route
> pointing to the customer will do.

And for situations where you need it nailed up:

https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/bgp-static-edit-routing-options.html
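
By way of illustration (my sketch, not taken from the linked page, so
check it for the exact per-route options), a "nailed up" bgp-static
hold-down for an IPv6 aggregate would look something like this, with
2001:db8::/32 standing in for a real block:

routing-options {
    rib inet6.0 {
        bgp-static {
            /* installed in the RIB as a BGP route, so it stays up
               without needing any other source to originate from */
            route 2001:db8::/32 discard;
        }
    }
}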


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Lee Starnes via juniper-nsp
Thanks, Mark, for the quick reply. That was the validation I was looking for.
The TAC tech was really unsure about what he was doing and I had to guide
him through things, so this is very helpful.

Thanks again.

-Lee


On Tue, Feb 6, 2024 at 8:54 AM Mark Tinka wrote:

>
>
> On 2/6/24 18:48, Lee Starnes via juniper-nsp wrote:
>
> > Hello everyone,
> >
> > I was having difficulty getting an announcement of an IPv6 /32 block
> > using prefix-lists rather than redistributing the IP addresses in
> > from other protocols. We only have a couple of /64 blocks in use at
> > the moment but want to be able to announce the entire /32. In Cisco,
> > that would just be a hold-down route and then announce. Not sure how
> > it works on Juniper.
> >
> > I configured a prefix-list that contained the /32 block in it. Then I
> > created a policy statement with term 1 from prefix-list  and term 2
> > then accept. Set the export on the BGP peer to this policy statement
> > and it just ignores it.
> >
> > Now this same setup in IPv4 works fine.
> >
> > After a week of going round and round with Juniper TAC, they had me
> > set up a rib inet6 aggregate entry for the /32 and then use that in
> > the policy statement.
>
> This is the equivalent of the "hold-down" route you refer to in
> Cisco-land. Useful if the route does not exist in the RIB from any other
> source.
>
> I'm guessing your IPv4 route just works without a hold-down route
> because it is being learned from somewhere else (perhaps your IGP, iBGP
> or a static route), and as such, already exists in the router's RIB for
> your export policy to pick it up with no additional fiddling.
>
> Typically, BGP will not originate a route to its neighbors unless it
> already exists in the routing table through some source. If it is an
> aggregate route, a hold-down pointing to "discard" (Null0 in Cisco) is
> enough. If it is a longer route assigned to a customer, that route
> pointing to the customer will do.
>
> Mark.
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Mark Tinka via juniper-nsp




On 2/6/24 18:53, Saku Ytti wrote:

> Not just opinion, fact. If you see everything, ORR does nothing but add
> cost.
>
> You only need AddPath and ORR when seeing everything is too expensive
> but you still need good choices.
>
> But even if you have the resources to see it all, you may not actually
> want a lot of useless signalling and overhead, as it adds convergence
> time and the risk of coaxing rare bugs to the surface. In the case
> where I deployed it, seeing it all was not realistically possible; it
> would have meant the network upgrade cycle was dictated by RIB scale as
> peers were added, forcing a full upgrade cycle even though the ports
> already paid for were not yet sold. You shouldn't need to upgrade your
> boxes because your RIB/FIB doesn't scale; you should only need to
> upgrade your boxes if you don't have holes left to stick paying fiber
> into.


I agree.

We started with 6 paths to see how far the network could go, and how 
well ECMP would work across customers who connected to us in multiple 
cities/countries with the same AS. That was exceedingly successful, and 
customers were very happy that they could increase their capacity 
through multiple, multi-site links without paying anything extra, 
improving performance all around.
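
For anyone curious, the knob in question is Add-Paths; a minimal Junos
sketch (the group name and per-family details here are illustrative,
not our actual config):

protocols {
    bgp {
        group RR-CLIENTS {
            type internal;
            family inet unicast {
                add-path {
                    receive;
                    send {
                        /* advertise up to 6 paths per prefix */
                        path-count 6;
                    }
                }
            }
        }
    }
}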


Same for peers.

But yes, it does cost a lot of control plane for anything less than 32GB 
on the MX. The MX204 played well if you unleashed its "hidden memory" 
hack :-).

This was not a massive issue for the RRs, which were running on CSR1000v 
(now replaced with Cat8000v). But it certainly did test the 16GB Juniper 
REs we had.


The next step, before I left, was to work out how far below 6 paths we 
could go without losing the gains we had made for our customers and 
peers. That would have lowered pressure on the control plane, though I'm 
not sure how it would have affected the improvement in multi-site load 
balancing.


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Mark Tinka via juniper-nsp




On 2/6/24 18:48, Lee Starnes via juniper-nsp wrote:

> Hello everyone,
>
> I was having difficulty getting an announcement of an IPv6 /32 block
> using prefix-lists rather than redistributing the IP addresses in from
> other protocols. We only have a couple of /64 blocks in use at the
> moment but want to be able to announce the entire /32. In Cisco, that
> would just be a hold-down route and then announce. Not sure how it
> works on Juniper.
>
> I configured a prefix-list that contained the /32 block in it. Then I
> created a policy statement with term 1 from prefix-list  and term 2
> then accept. Set the export on the BGP peer to this policy statement
> and it just ignores it.
>
> Now this same setup in IPv4 works fine.
>
> After a week of going round and round with Juniper TAC, they had me set
> up a rib inet6 aggregate entry for the /32 and then use that in the
> policy statement.


This is the equivalent of the "hold-down" route you refer to in 
Cisco-land. Useful if the route does not exist in the RIB from any other 
source.


I'm guessing your IPv4 route just works without a hold-down route 
because it is being learned from somewhere else (perhaps your IGP, iBGP 
or a static route), and as such, already exists in the router's RIB for 
your export policy to pick it up with no additional fiddling.


Typically, BGP will not originate a route to its neighbors unless it 
already exists in the routing table through some source. If it is an 
aggregate route, a hold-down pointing to "discard" (Null0 in Cisco) is 
enough. If it is a longer route assigned to a customer, that route 
pointing to the customer will do.
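
To make that concrete, a minimal sketch of the aggregate-plus-policy
arrangement (the prefix and names are placeholders, not the OP's actual
config):

routing-options {
    rib inet6.0 {
        aggregate {
            /* the hold-down: resolves to discard, like Null0 */
            route 2001:db8::/32 discard;
        }
    }
}
policy-options {
    policy-statement EXPORT-AGG-V6 {
        term AGGREGATE {
            from {
                protocol aggregate;
                route-filter 2001:db8::/32 exact;
            }
            then accept;
        }
    }
}
protocols {
    bgp {
        group EBGP-PEERS {
            export EXPORT-AGG-V6;
        }
    }
}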


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Saku Ytti via juniper-nsp
On Tue, 6 Feb 2024 at 18:35, Mark Tinka wrote:

> IME, when we got all available paths, ORR was irrelevant.
>
> But yes, at the cost of some control plane resources.

Not just opinion, fact. If you see everything, ORR does nothing but add
cost.

You only need AddPath and ORR when seeing everything is too expensive
but you still need good choices.

But even if you have the resources to see it all, you may not actually
want a lot of useless signalling and overhead, as it adds convergence
time and the risk of coaxing rare bugs to the surface. In the case where
I deployed it, seeing it all was not realistically possible; it would
have meant the network upgrade cycle was dictated by RIB scale as peers
were added, forcing a full upgrade cycle even though the ports already
paid for were not yet sold. You shouldn't need to upgrade your boxes
because your RIB/FIB doesn't scale; you should only need to upgrade your
boxes if you don't have holes left to stick paying fiber into.
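
For reference, ORR on Junos hangs off the BGP group, pointing the RR at
an IGP vantage point to run best-path from; a rough sketch (the cluster
ID and addresses are placeholders):

protocols {
    bgp {
        group RR-CLIENTS-POP1 {
            type internal;
            cluster 198.51.100.1;
            optimal-route-reflection {
                /* compute best path as seen from this vantage point */
                igp-primary 192.0.2.1;
                igp-backup 192.0.2.2;
            }
        }
    }
}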


-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX204 and IPv6 BGP announcements

2024-02-06 Thread Lee Starnes via juniper-nsp
Hello everyone,

I was having difficulty getting an announcement of an IPv6 /32 block
using prefix-lists rather than redistributing the IP addresses in from
other protocols. We only have a couple of /64 blocks in use at the
moment but want to be able to announce the entire /32. In Cisco, that
would just be a hold-down route and then announce. Not sure how it works
on Juniper.

I configured a prefix-list that contained the /32 block in it. Then I
created a policy statement with term 1 from prefix-list  and term 2 then
accept. Set the export on the BGP peer to this policy statement and it
just ignores it.

Now this same setup in IPv4 works fine.

After a week of going round and round with Juniper TAC, they had me set
up a rib inet6 aggregate entry for the /32 and then use that in the
policy statement.

It seemed kinda kludgy, so I just wanted to ask here: is this the
typical way of going about it, or is there a better, more accepted way
of doing this?

Thanks,

-Lee
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Mark Tinka via juniper-nsp



On 12/8/23 19:36, Jared Mauch via juniper-nsp wrote:


> I'll also comment that many software suites don't scale to 10s or 100s
> of millions of paths.
>
> Keep in mind paths != routes, and many folks don't always catch the
> difference between them. If you have a global network like 2914 (for
> example), you may be peering with someone in 10-20 places globally, so
> if they send you 10k routes across 20 locations, that's 200k paths
> (exits). Then move to someone with 100k or 400k prefixes, like 3356 had
> at one point, and those numbers go up quite a bit.


Our outfit was not as large as 2914 or 3356 when I worked there, but our 
RRs saw about 12.5 million IPv4 paths and 2.9 million IPv6 paths.

The clients saw about 6 million paths and 1.2 million paths, respectively.

The biggest issue to think about is how the RE handles path churn, which 
can be very high in a setup such as this, because while it provides 
excellent path stability for downstream eBGP customers, it creates a lot 
of noise inside your core.


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Mark Tinka via juniper-nsp




On 12/8/23 19:16, Saku Ytti via juniper-nsp wrote:


> I tried to advocate for both, sorry if I was unclear.
>
> ORR for good options, add-path for redundancy and/or ECMPability.


IME, when we got all available paths, ORR was irrelevant.

But yes, at the cost of some control plane resources.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Mark Tinka via juniper-nsp




On 12/8/23 18:57, Saku Ytti via juniper-nsp wrote:


> Given a sufficient count of path options, they're not really
> alternatives; you need both. Like you can't do add-path , as the
> clients won't scale. And you probably don't want only ORR, because of
> the convergence cost of clients not having a backup option, or the lack
> of ECMP opportunity.


I found that if you run 6 or more paths on an RR client, the need for 
ORR was negated.


But yes, it does put a lot of pressure on the RE, with a 64GB RAM system 
being the recommended minimum, I would say.


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-06 Thread Mark Tinka via juniper-nsp




On 12/7/23 17:05, Saku Ytti via juniper-nsp wrote:


> If you have a low amount of duplicate RIB entries it might not be very
> useful, as the final collation of unique entries will be more or less
> single-threaded anyhow. But I believe anyone with a truly large RIB,
> like 20M routes, will have massive duplication and will see a
> significant benefit.


So essentially, outfits running BGP Add-Paths setups with 6 or more 
paths per route, then...


Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp