Re: [j-nsp] SRX Dynamic Address limits

2024-03-04 Thread Roger Wiklund via juniper-nsp
We're using stamparm/ipsum (github.com), a daily feed of bad IPs with
blacklist hit scores, with an SRX300.

~37k entries with no issues.

  Address name  : ipsum-l2
  Address id: 11
  IPv4 entries  : 37317
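
In case it's useful, here's a minimal sketch of how a feed like that gets
wired into a dynamic address. The URL, path and intervals below are
placeholders from memory, not our production values, so double-check them
against the repo layout and the SRX docs:

set security dynamic-address feed-server ipsum url https://raw.githubusercontent.com/stamparm/ipsum/master/
set security dynamic-address feed-server ipsum update-interval 3600
set security dynamic-address feed-server ipsum hold-interval 86400
set security dynamic-address feed-server ipsum feed-name ipsum-l2 path levels/2.txt
set security dynamic-address address-name ipsum-l2 profile feed-name ipsum-l2

The address-name can then be matched like any other address in a security
policy, e.g. as a source-address in a deny policy.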

Regards
Roger



On Fri, Mar 1, 2024 at 11:37 PM Chris Lee via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi Eric,
>
> Thanks for that, not too sure where the dynamic lists are stored, in RAM or
> some other onboard memory.
>
> That said, I ended up loading the lists on an SRX340 first, which is pretty
> similar anyway, and couldn't see any issues, so I went ahead and loaded them
> on the SRX345s and it looks fine so far. I was a little out with my counts
> anyway, as the lists I'm feeding are v4/v6, so the split ended up at less
> than 20k v4 and less than 10k v6.
>
> Instance Name  : default
> Total number of IPv4 entries   : 18226
> Total number of IPv4 entries from feed : 17258
> Total number of IPv6 entries   : 9733
> Total number of IPv6 entries from feed : 9518
>
> I just found another reference at
>
> https://www.juniper.net/documentation/us/en/software/junos/security-policies/topics/topic-map/security-policy-configuration.html
> which has a table of the various models and suggests the SRX345 can only
> have 2048 address objects per policy, which is a bit confusing compared with
> the previous link, but that was referring to address-sets I think.
>
> In any case this article seems to suggest to me that the policy data is
> simply loaded into regular RAM, and from what I've seen there was a small
> uptick in memory usage, but only by a couple of hundred MB (a few percentage
> points of the overall usage on the RE that I can see), so it doesn't look to
> be an issue.
>
> I do have a pair of SRX1600s on order to swap out the 345s. I just realised
> the datasheet doesn't list how much RAM they ship with, as I was mostly
> looking at them from the point of view of more interface throughput, but
> hopefully they have been a bit more generous with the memory modules in the
> 1600s.
>
> Thanks,
> Chris
>
>
> On Sat, Mar 2, 2024 at 1:51 AM Eric Harrison <
> eric.harri...@cascadetech.org>
> wrote:
>
> >
> > I don't know if this is relevant or not with regard to the SRX345, but I
> > recently stress tested an SRX4100 and started to notice some anomalies
> > around 64k prefixes. I don't recall anything being logged, and it reported
> > that it loaded all >=64k prefixes, and "show security match-policies" gave
> > the right answers, but some actual test traffic started to be logged on an
> > unexpected policy.  Opening a ticket is on the TODO list.
> >
> >
> > One of our production srx4100's currently has 53k dynamic IPv4 prefixes
> > w/o skipping a beat:
> >
> > > show security dynamic-address summary
> > .
> > Instance Name  : default
> > Total number of IPv4 entries   : 232848
> > Total number of IPv4 entries from feed : 53445
> > Total number of IPv6 entries   : 0
> > Total number of IPv6 entries from feed : 0
> >
> >
> > -Eric
> >
> >
> > On Fri, Mar 1, 2024 at 5:11 AM Chris Lee via juniper-nsp <
> > juniper-nsp@puck.nether.net> wrote:
> >
> >> Hi All,
> >>
> >> Does anyone know if there are any specific limits/bounds/impacts on the
> >> number of IP addresses that can be imported into an SRX Dynamic Address
> >> list, specifically for an SRX345?
> >>
> >>
> >>
> https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/dynamic-address.html
> >>
> >> Have been trialling it for a little while now with a relatively small
> >> number (around 3000 IPv4 and 1200 IPv6 entries), but looking to do some
> >> further GeoIP restrictions, which would likely mean around another 22000
> >> IPv4 entries to import for the specific countries I need. Will anything
> >> topple/break with that many IPs in various dynamic lists?
> >>
> >> I've tried looking but my google-fu is failing to turn up any data on
> >> limitations anywhere... I've found a reference for address sets ("One
> >> address set can reference a maximum of 16384 address entries and a
> >> maximum of 256 address sets.") but I'm not sure that this applies to
> >> dynamic address list entries, as I figure that restriction may have more
> >> to do with the SRX having to parse a massive configuration file?
> >>
> >> Thanks,
> >> Chris
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> >>
> >
> >
> > --
> > Eric Harrison
> > Network Services
> > Cascade Technology Alliance / Multnomah Education Service District
> > office: 503-257-1554   cell: 971-998-6249   NOC 503-257-1510
> >
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> 

Re: [j-nsp] Hardware configuration for cRPD as RR

2024-02-07 Thread Roger Wiklund via juniper-nsp
Hi

I'm curious: when moving from vRR to cRPD, how do you plan to manage and set
up the infrastructure that cRPD runs on?

BMS with basic Docker or K8s? (kind of an appliance approach)
VM in hypervisor with the above?
Existing K8s cluster?

I can imagine that many networking teams would like an AIO cRPD appliance
from Juniper, rather than giving away the "control" to the server/container
team.
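
For the plain-Docker "appliance" approach the footprint is small. A rough
sketch of what bringing one up looks like (image tag, names and mounts are
examples only, and for an RR you'd likely want host networking or macvlan
rather than the default bridge):

docker run --rm --detach --name crpd-rr1 -h crpd-rr1 --privileged \
    --net=bridge \
    -v crpd-rr1-config:/config -v crpd-rr1-varlog:/var/log \
    -it crpd:latest
docker exec -it crpd-rr1 cli

After that it's a normal Junos CLI for the routing bits; the open question is
who owns the host, Docker and the lifecycle around it.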

What are your thoughts on this?

Regards
Roger


On Tue, Feb 6, 2024 at 6:02 PM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 2/6/24 18:53, Saku Ytti wrote:
>
> > Not just opinion, fact. If you see everything, ORR does nothing but add
> > cost.
> >
> > You only need AddPath and ORR, when everything is too expensive, but
> > you still need good choices.
> >
> > But even if you have resources to see all, you may not actually want
> > to have a lot of useless signalling and overhead, as it'll add
> > convergence time and risk of encouraging rare bugs to surface. In the
> > case where I deployed it, having it all was simply not realistic, in that
> > having it all would mean the network upgrade cycle is dictated by when
> > enough peers are added, with RIB scale forcing a full upgrade cycle
> > despite the already-paid-for ports not yet being sold.
> > You shouldn't need to upgrade your boxes because your RIB/FIB doesn't
> > scale; you should only need to upgrade your boxes if you don't have
> > holes to stick paying fiber into.
>
> I agree.
>
> We started with 6 paths to see how far the network could go, and how
> well ECMP would work across customers who connected to us in multiple
> cities/countries with the same AS. That was exceedingly successful and
> customers were very happy that they could increase their capacity
> through multiple, multi-site links, without paying anything extra and
> improving performance all around.
>
> Same for peers.
>
> But yes, it does cost a lot of control plane for anything less than 32GB
> on the MX. The MX204 played well if you unleashed its "hidden memory"
> hack :-).
>
> This was not a massive issue for the RR's which were running on CSR1000v
> (now replaced with Cat8000v). But certainly, it did test the 16GB
> Juniper RE's we had.
>
> The next step, before I left, was to work on how many paths we can
> reduce to from 6 without losing the gains we had made for our customers
> and peers. That would have lowered pressure on the control plane, but
> not sure how it would have impacted the improvement in multi-site load
> balancing.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Thanks for all the fish

2024-01-09 Thread Roger Wiklund via juniper-nsp
Not the first rumour of Juniper getting acquired. Last time it was Ericsson
and before that I think it was IBM or EMC, but perhaps this time it's the
real deal.

Juniper has been very successful in the Enterprise space with the Mist
acquisition, so I'm a bit surprised that the stock price is still flat.
Perhaps there's not enough money there, or it's too little too late.

I wonder how they would merge Mist and Aruba, the top Wi-Fi players on the
market. Usually you acquire a company to fill a gap in the portfolio, but
perhaps this one is primarily about Juniper's routing/DC/switching stuff then.

Yeah, the ISP business is no fun. I feel like everyone secretly wishes they
could start buying Huawei again; it seems it's all about the lowest price per
100G/400G port.

/Roger

On Tue, Jan 9, 2024 at 10:19 AM Mark Tinka via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
>
> On 1/9/24 10:55, Saku Ytti via juniper-nsp wrote:
> > What do we think of HPE acquiring JNPR?
> >
> >
> > I guess it was a given that something's gotta give. JNPR has lost to the
> > dollar as an investment for more than 2 decades, which is not sustainable
> > in the way we model our economy.
> >
> > Out of all possible outcomes:
> > - JNPR suddenly starts to grow (how?)
> > - JNPR defaults
> > - JNPR gets acquired
> >
> > It's not the worst outcome, and from who acquires them, HPE isn't the
> > worst option, nor the best. I guess the best option would have been,
> > several large telcos buying it through a co-owned sister company, who
> > then are less interested in profits, and more interested in having a
> > device that works for them. Worst would probably have been Cisco,
> > Nokia, Huawei.
> >
> > I think the main concern is that SP business is kinda shitty business,
> > long sales times, low sales volumes, high requirements. But that's
> > also the side of JNPR that has USP.
> >
> > What is the future of NPU (Trio) and Pipeline (Paradise/Triton), why
> > would I, as HP exec, keep them alive? I need JNPR to put QFX in my DC
> > RFPs, I don't really care about SP markets, and I can realise some
> > savings by axing chip design and support. I think Trio is the best NPU
> > on the market, and I think we may have a real risk losing it, and no
> > mechanism that would guarantee new players surfacing to replace it.
> >
> > I do wish that JNPR had been more serious about how unsustainable it
> > is to lose to the dollar, and had tried more to capture markets. I
> > always suggested why not try Trio-PCI in newegg. Long tail is long,
> > maybe if you could buy it for 2-3k, there would be a new market of
> > Linux PCI users who want wire rate programmable features for multiple
> > ports? Maybe ESXi server integration for various pre-VPC protection
> > features at wire-rate? I think there might be a lot of potential in
> > NPU-PCI, perhaps even FAB-PCI, to have more ports than single NPU-PCI.
>
> HP could do what Geely did for Volvo - give them cash, leave them alone,
> but force them to wake up and get into the real world.
>
> I don't think HP can match Juniper intellectually in the networking
> space, so perhaps they add another sort of credibility to Juniper, as
> long as Juniper realize that they need to get cleverer at staying in
> business than just being technically smart.
>
> I am concerned that if we lose Trio, it would be the end of half-decent
> line-rate networking, which would level the playing field around
> Broadcom... good for them, but perhaps not so great for operators. On
> the other hand, as you say, the ISP business is in a terrible place
> right now, and not looking to get any better as the core of the Internet
> continues to be owned by a small % of the content crew.
>
> And then there was this, hehe:
>
>  https://hpjuniper.com/en/signature-gin/
>
> Hehe.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] juniper fanless devices

2023-12-08 Thread Roger Wiklund via juniper-nsp
Hi

With fans, the ACX7024 comes to mind (-40 to +65 C):
acx7024-cloud-metro-router-datasheet.pdf (juniper.net)


It's a Qumran-2U (Jericho2 family) ASIC.

Regards
Roger

On Fri, Dec 8, 2023 at 3:39 PM Chris Cappuccio via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> are there any fanless devices that do layer 3 vpn, vlan-ccc, rsvp, ldp,
> and have more than two 10gbps ethernet ports?
>
> the acx2100/2200 are a little short on xe ports.
>
> are there other hardened devices i should be thinking about even if they
> have fans?
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] proxy-arp on EVPN irb

2023-12-08 Thread Roger Wiklund via juniper-nsp
Hi

It seems that proxy arp is disabled by default:
proxy-arp | Junos OS | Juniper Networks


Regarding proxy ARP for EVPN (ARP suppression), it only works within the same
subnet, not between subnets.

So that seems to match what you're seeing: you must enable proxy-arp on the
IRB in order to reach the other subnets.
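
For reference, the knob itself is just the following (interface taken from
your config; the restricted/unrestricted option is worth checking, since
unrestricted also answers for targets outside the IRB subnet, which sounds
like what your clients rely on):

set interfaces irb unit 25068 proxy-arp unrestricted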

Regards
Roger


On Wed, Dec 6, 2023 at 5:04 PM Aaron1 via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> As I recall, proxy-arp behavior is proven by looking in the local host arp
> cache and finding entries for foreign ip’s mapped to the default gateway’s
> mac address.  If that is still occurring, then it would seem that proxy arp
> functionality is still working and you can move on to tshooting something
> beyond that… like what is the upstream def gw/evpn pe doing with those
> packets
>
> Aaron
>
> > On Dec 6, 2023, at 6:16 AM, Jackson, William via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
> >
> > Hi
> >
> > Maybe somebody knows the answer to this one:
> >
> > We migrated some customers to an EVPN domain away from a legacy node
> that used proxy-arp on its L3 interface.
> >
> > The downstream clients have some funky routing and they are relying on
> > proxy-arp to resolve an offnet address (don't ask me why, for our sanity's
> > sake)!
> >
> > We have a implemented EVPN bridge domain with the following config on MX
> PE nodes running 21.1 code.
> >
> > instance-type virtual-switch;
> > protocols {
> >evpn {
> >encapsulation mpls;
> >default-gateway do-not-advertise;
> >extended-vlan-list [ 250  ];
> >}
> > }
> > bridge-domains {
> >250 {
> >domain-type bridge;
> >vlan-id 250;
> >interface ae68.250;
> >routing-interface irb.25068;
> >}
> > }
> >
> > interfaces irb.25068 {
> >  proxy-arp;
> >  family inet {
> >  address 172.23.248.1/22;
> >  }
> >  mac 00:aa:dd:00:00:68;
> > }
> >
> > This irb is in a L3VPN instance.
> >
> > Now the documentation states that proxy-arp and arp-suppression are on by
> > default, yet these clients can't reach the offnet host with or without the
> > "proxy-arp" command on the irb.
> >
> > Any ideas?
> >
> > thanks
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX5110 / EVPN-VXLAN with IPv6 underlay

2023-11-28 Thread Roger Wiklund via juniper-nsp
For the QFX5110 specifically, MAC-VRF is supported:
https://apps.juniper.net/feature-explorer/feature-info.html?fKey=9788=MAC+VRF+with+EVPN-VXLAN

But IPv6 underlay is not:
https://apps.juniper.net/feature-explorer/feature-info.html?fKey=11165=EVPN-VXLAN+fabric+with+an+IPv6+underlay

So maybe it's an ASIC limitation as QFX5110 is using Trident 2+ and
QFX5120/EX4400 is using Trident 3.

Regards
Roger



On Tue, Nov 28, 2023 at 3:48 PM Roger Wiklund 
wrote:

> Hey
>
> You're interpreting the default switch limitation incorrectly.
>
> It doesn't mean the QFX5120 can't support MAC-VRFs; it means that even if you
> implement MAC-VRFs you still only have a single switching domain and can't
> have overlapping VLANs in the different MAC-VRFs. (The MX does not have this
> limitation; it supports 32k VLANs.)
>
> IPv6 underlay is supported on QFX5120 in MAC-VRF from Junos 21.2R2:
> Explore Features by Product | Juniper Networks Pathfinder Feature Explorer
> 
>
> You can configure an EVPN-VXLAN fabric with an IPv6 underlay. You can use
> this feature only with MAC-VRF routing instances (all service types). You
> must configure either an IPv4 or an IPv6 underlay across the EVPN instances
> in the fabric; you can’t mix IPv4 and IPv6 underlays in the same fabric.
> To enable this feature, include these steps when you configure the EVPN
> underlay:
> • Configure the underlay VXLAN tunnel endpoint (VTEP) source interface as
> an IPv6 address:
> • Even though the underlay uses the IPv6 address family, for BGP
> handshaking to work in the underlay, you must configure the router ID in
> the routing instance with an IPv4 address:
> • Enable the Broadcom VXLAN flexible flow feature in releases where the
> feature is not enabled by default:
> We support the following EVPN-VXLAN features with an IPv6 underlay:
> • EVPN Type 1, Type 2, Type 3, Type 4, and Type 5 routes (excluding EX9200
> for Type 5).
> • Shared VTEP tunnels (required with MAC-VRF instances).
> • All-active multihoming, including Ethernet segment ID (ESI)
> auto-generation and preference-based designated forwarder (DF) election.
> • EVPN core isolation.
> • Bridged overlays.
> • Layer 3 gateway functions in ERB and CRB overlays with IPv4 or IPv6
> traffic.
> • Underlay and overlay load balancing.
> • Layer 3 protocols over IRB interfaces—BFD, BGP, OSPF.
> • Data center interconnect (DCI)—over-the-top (OTT) full mesh only.
> • EVPN proxy ARP and ARP suppression, and proxy NDP and NDP suppression.
>
> Regards
> Roger
>
> On Mon, Nov 27, 2023 at 11:31 AM Denis Fondras via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
>
>> Hello,
>>
>> Thank you very much everyone for the help.
>>
>> It seems that `netraven` nailed it.
>> I missed the part where QFX5110 could not support multiple forwarding
>> instances.
>>
>> I will have to go back to the legacy protocol then :/
>> Replacing IPv6 addresses with IPv4 addresses, keeping the same config,
>> worked on
>> first try.
>>
>> Thank you again !
>> Denis
>>
>>
>> Le Mon, Nov 27, 2023 at 10:52:52AM +0100, netravnen+nspl...@gmail.com a
>> écrit :
>> > Dennis,
>> >
>> > On Sat, 25 Nov 2023 at 15:26, Denis Fondras via juniper-nsp
>> >  wrote:
>> > > Can you give a clue? I haven't found any information on whether it
>> > > could work on QFX5110.
>> >
>> > Looking at the two pages below.
>> > 1. The QFX5120 (assuming this also applies to the QFX5120-32C model)
>> > *only* supports the default-switch forwarding instance.
>> > 2. And IPv6 underlays seem to be *exactly not* supported for the
>> > default-switch forwarding instance.
>> >
>> > If I take this from what it reads, it looks like you cannot achieve
>> > what you are trying atm.
>> >
>> > Try asking JTAC to confirm this?
>> >
>> > From:
>> >
>> https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/concept/mac-vrf-routing-instance-overview.html#xd_4081e20476f017c2--1e138ae7-1795628658a--7dbc__subsection_mac-vrf-service-types
>> > """
>> > EX4400, QFX5100, QFX5110, QFX5120, QFX5200, QFX5130-32CD, and QFX5700
>> > switches, and PTX10001-36MR, PTX10004, PTX10008, PTX10016 routers
>> > These devices support only one forwarding instance (default-switch).
>> (...)
>> > """
>> >
>> > From:
>> >
>> https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/topic-map/vxlan-ipv6-underlay-overview.html
>> > """
>> > (QFX Series switches) You must use MAC-VRF routing instances with EVPN
>> > protocol and VXLAN encapsulation. We don't support IPv6 underlays with
>> > other instance types such as evpn, evpn-vpws, virtual-switch or the
>> > default switching instance.
>> > """
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net

Re: [j-nsp] QFX5110 / EVPN-VXLAN with IPv6 underlay

2023-11-28 Thread Roger Wiklund via juniper-nsp
Hey

You're interpreting the default switch limitation incorrectly.

It doesn't mean the QFX5120 can't support MAC-VRFs; it means that even if you
implement MAC-VRFs you still only have a single switching domain and can't
have overlapping VLANs in the different MAC-VRFs. (The MX does not have this
limitation; it supports 32k VLANs.)

IPv6 underlay is supported on QFX5120 in MAC-VRF from Junos 21.2R2:
Explore Features by Product | Juniper Networks Pathfinder Feature Explorer


You can configure an EVPN-VXLAN fabric with an IPv6 underlay. You can use
this feature only with MAC-VRF routing instances (all service types). You
must configure either an IPv4 or an IPv6 underlay across the EVPN instances
in the fabric; you can’t mix IPv4 and IPv6 underlays in the same fabric.
To enable this feature, include these steps when you configure the EVPN
underlay:
• Configure the underlay VXLAN tunnel endpoint (VTEP) source interface as
an IPv6 address:
• Even though the underlay uses the IPv6 address family, for BGP
handshaking to work in the underlay, you must configure the router ID in
the routing instance with an IPv4 address:
• Enable the Broadcom VXLAN flexible flow feature in releases where the
feature is not enabled by default:
We support the following EVPN-VXLAN features with an IPv6 underlay:
• EVPN Type 1, Type 2, Type 3, Type 4, and Type 5 routes (excluding EX9200
for Type 5).
• Shared VTEP tunnels (required with MAC-VRF instances).
• All-active multihoming, including Ethernet segment ID (ESI)
auto-generation and preference-based designated forwarder (DF) election.
• EVPN core isolation.
• Bridged overlays.
• Layer 3 gateway functions in ERB and CRB overlays with IPv4 or IPv6
traffic.
• Underlay and overlay load balancing.
• Layer 3 protocols over IRB interfaces—BFD, BGP, OSPF.
• Data center interconnect (DCI)—over-the-top (OTT) full mesh only.
• EVPN proxy ARP and ARP suppression, and proxy NDP and NDP suppression.
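
For what it's worth, a rough sketch of the relevant bits in set form (names,
VNIs and addresses are made up; the exact statement for the Broadcom
flexible-flow knob depends on platform and release, so I've left it out and
you should check the doc above):

set routing-instances MACVRF-1 instance-type mac-vrf
set routing-instances MACVRF-1 service-type vlan-aware
set routing-instances MACVRF-1 route-distinguisher 192.0.2.11:100
set routing-instances MACVRF-1 vrf-target target:65000:100
set routing-instances MACVRF-1 vtep-source-interface lo0.0 inet6
set routing-instances MACVRF-1 protocols evpn encapsulation vxlan
set routing-instances MACVRF-1 protocols evpn extended-vni-list all
set routing-instances MACVRF-1 vlans v100 vlan-id 100
set routing-instances MACVRF-1 vlans v100 vxlan vni 10100
set routing-options router-id 192.0.2.11
set forwarding-options evpn-vxlan shared-tunnels

The lo0.0 inet6 address becomes the VTEP source, the IPv4 router-id is there
to satisfy the BGP machinery per the doc text above (double-check whether it
goes globally or in the instance in your release), and shared-tunnels is
required with MAC-VRF on these boxes.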

Regards
Roger

On Mon, Nov 27, 2023 at 11:31 AM Denis Fondras via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hello,
>
> Thank you very much everyone for the help.
>
> It seems that `netraven` nailed it.
> I missed the part where QFX5110 could not support multiple forwarding
> instances.
>
> I will have to go back to the legacy protocol then :/
> Replacing IPv6 addresses with IPv4 addresses, keeping the same config,
> worked on
> first try.
>
> Thank you again !
> Denis
>
>
> Le Mon, Nov 27, 2023 at 10:52:52AM +0100, netravnen+nspl...@gmail.com a
> écrit :
> > Dennis,
> >
> > On Sat, 25 Nov 2023 at 15:26, Denis Fondras via juniper-nsp
> >  wrote:
> > > Can you give a clue? I haven't found any information on whether it
> > > could work on QFX5110.
> >
> > Looking at the two pages below.
> > 1. The QFX5120 (assuming this also applies to the QFX5120-32C model)
> > *only* supports the default-switch forwarding instance.
> > 2. And IPv6 underlays seem to be *exactly not* supported for the
> > default-switch forwarding instance.
> >
> > If I take this from what it reads, it looks like you cannot achieve
> > what you are trying atm.
> >
> > Try asking JTAC to confirm this?
> >
> > From:
> >
> https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/concept/mac-vrf-routing-instance-overview.html#xd_4081e20476f017c2--1e138ae7-1795628658a--7dbc__subsection_mac-vrf-service-types
> > """
> > EX4400, QFX5100, QFX5110, QFX5120, QFX5200, QFX5130-32CD, and QFX5700
> > switches, and PTX10001-36MR, PTX10004, PTX10008, PTX10016 routers
> > These devices support only one forwarding instance (default-switch).
> (...)
> > """
> >
> > From:
> >
> https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/topic-map/vxlan-ipv6-underlay-overview.html
> > """
> > (QFX Series switches) You must use MAC-VRF routing instances with EVPN
> > protocol and VXLAN encapsulation. We don't support IPv6 underlays with
> > other instance types such as evpn, evpn-vpws, virtual-switch or the
> > default switching instance.
> > """
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACX7100-48L

2023-06-07 Thread Roger Wiklund via juniper-nsp
Hi

Some generic pointers here:
Checklist for Collecting Crash Data - TechLibrary - Juniper Networks


show chassis routing-engine
What does "last reboot reason" say?

I would upgrade to 22.2R3, it's working fine for us so far.

Regards
Roger



On Wed, Jun 7, 2023 at 9:18 PM Aaron Gould via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> I had an ACX7100-48L suddenly go down in my lab.  Is there a way to find
> the cause of it going down?
>
> agould@eng-lab-7100-2> show system information
> Model: ACX7100-48L
> Family: junos
> Junos: 22.2R1.12-EVO
> Hostname: eng-lab-7100-2
>
> agould@eng-lab-7100-2> show system core-dumps
> re0:
> --
>
> agould@eng-lab-7100-2> file ls /var/crash
> /var/crash: No such file or directory
>
> agould@eng-lab-7100-2>
>
>
>
> --
> -Aaron
>
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX DDOS Violations

2022-11-30 Thread Roger Wiklund via juniper-nsp
Hi John

The default DDoS values on QFX5k for EVPN-VXLAN are way too low.
I recommend these values plus very tight storm-control on each applicable port.

RSVP and LDP are not used but share the same queue as BGP so you will see
strange triggers if you omit these.

set system ddos-protection protocols rsvp aggregate bandwidth 1
set system ddos-protection protocols rsvp aggregate burst 1000
set system ddos-protection protocols ldp aggregate bandwidth 1
set system ddos-protection protocols ldp aggregate burst 1000
set system ddos-protection protocols bgp aggregate bandwidth 1
set system ddos-protection protocols bgp aggregate burst 1000
set system ddos-protection protocols arp aggregate bandwidth 5
set system ddos-protection protocols arp aggregate burst 5000
set system ddos-protection protocols vxlan aggregate bandwidth 5
set system ddos-protection protocols vxlan aggregate burst 5000

The reason you're seeing the VXLAN violation is EVPN ARP/ND suppression.
Have a look at the VXLAN queue here:
Detailed information about DDOS queues on QFX5K switches (juniper.net)


Every single ARP/ND packet is proxied by the QFX. If you have an ARP storm,
the VXLAN DDoS policer will kick in and rate-limit this.
If this is sustained, the ARP cache will time out on each client and
eventually break connectivity.

You mitigate this by having a very tight storm-control on each L2 interface.
We use 1000kbps which translates roughly into 2000pps ARP

set forwarding-options storm-control-profiles sc-1000kbps all
bandwidth-level 1000
set forwarding-options storm-control enhanced

Because storm-control is distributed and mitigated on each port, while the
DDoS is aggregated to the RE, this combo works fine and VXLAN is never
triggered.
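
To keep an eye on it afterwards, these are the commands I find handy (just a
sketch, same protocol names as above):

show ddos-protection protocols violations
show ddos-protection protocols vxlan
show ddos-protection statistics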

Hope this helps.

Regards

On Wed, Nov 30, 2022 at 2:43 PM Cristian Cardoso via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi Johan
>
> I experienced a similar issue in my evpn-vxlan environment on QFX5120-48Y
> switches. The DDOS alert occurred whenever a large number of VM migrations
> occurred simultaneously in my environment; sometimes there were 20 VMs in
> simultaneous migration and the DDOS alarmed.
>
> To solve this, I set the following value in the configuration:
>
> qfx5120> show configuration system ddos-protection protocols
> vxlan {
> aggregate {
> bandwidth 1;
> burst 12000;
> }
> }
>
>
>
> Em qua., 30 de nov. de 2022 às 07:16, john doe via juniper-nsp <
> juniper-nsp@puck.nether.net> escreveu:
>
> > Hi!
> >
> > The leaf switches are QFX5k and they seem to be lacking some of the
> > commands you mentioned. We don't have any problem with BGP sessions going
> > down; the impact is only on the payload inside VXLAN.
> >
> > Protocol Group: VXLAN
> >
> >   Packet type: aggregate (Aggregate for vxlan control packets)
> > Aggregate policer configuration:
> >   Bandwidth:500 pps
> >   Burst:200 packets
> >   Recover time: 300 seconds
> >   Enabled:  Yes
> > Flow detection configuration:
> >   Flow detection system is off
> >   Detection mode: Automatic  Detect time:  0 seconds
> >   Log flows:  YesRecover time: 0 seconds
> >   Timeout flows:  No Timeout time: 0 seconds
> >   Flow aggregation level configuration:
> > Aggregation level   Detection mode  Control mode  Flow rate
> > Subscriber  Automatic   Drop  0  pps
> > Logical interface   Automatic   Drop  0  pps
> > Physical interface  Automatic   Drop  500 pps
> > System-wide information:
> >   Aggregate bandwidth is no longer being violated
> > No. of FPCs that have received excess traffic: 1
> > Last violation started at: 2022-11-30 09:08:02 CET
> > Last violation ended at:   2022-11-30 09:09:32 CET
> > Duration of last violation: 00:01:40 Number of violations: 1508
> >   Received:  3548252144  Arrival rate: 201 pps
> >   Dropped:   49294329Max arrival rate: 160189 pps
> > Routing Engine information:
> >   Bandwidth: 500 pps, Burst: 200 packets, enabled
> >   Aggregate policer is never violated
> >   Received:  0   Arrival rate: 0 pps
> >   Dropped:   0   Max arrival rate: 0 pps
> > Dropped by individual policers: 0
> > FPC slot 0 information:
> >   Bandwidth: 100% (500 pps), Burst: 100% (200 packets), enabled
> >   Hostbound queue 255
> >   Aggregate policer is no longer being violated
> > Last violation started at: 2022-11-30 09:08:02 CET
> > Last violation ended at:   2022-11-30 09:09:32 CET
> > Duration of last violation: 00:01:40 Number of violations: 1508
> >   Received:  3548252144  Arrival rate: 201 pps
> >   

Re: [j-nsp] Collapse spine EVPN type 5 routes issue

2022-11-18 Thread Roger Wiklund via juniper-nsp
Hi Niklas

We always use unique ASNs per device in the VRFs.
Loop 2 should work fine, as-override also as previously mentioned.

There's also a few knobs for RT5 you can play with:

root@qfx5120-48y-02# set routing-instances test protocols evpn
ip-prefix-routes route-attributes ?
Possible completions:
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
> as-path  AS-PATH Attribute
> communityCommunity Attribute
> preference   Preference Attribute

Regards
Roger

On Tue, Nov 15, 2022 at 2:53 PM Saku Ytti via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> I would still consider as-override, or at least I would figure out the
> reason why it is not a good solution.
>
> On Tue, 15 Nov 2022 at 15:40, niklas rehnberg via juniper-nsp
>  wrote:
> >
> > Hi,
> > Thanks for the quick reply, I hope following very simple picture may help
> >
> >    Clients                        Clients
> >       |                              |
> >       |         EVPN/VXLAN           |
> >       |       Overlay AS 6555        |
> >    spine1 ------- type 5 ------- spine2
> >  vrf WAN AS X      |      |     vrf WAN AS X
> >     eBGP           |      |        eBGP
> >       |                              |
> >    PE  AS Y                      PE  AS Y
> >       |                              |
> >    ----------- Core Network -----------
> >
> > route example when loop occur
> > show route hidden table bgp.evpn extensive
> >
> > bgp.evpn.0: 156 destinations, 156 routes (153 active, 0 holddown, 3
> hidden)
> > 5:10.254.0.2:100::0::5.0.0.0::16/248 (1 entry, 0 announced)
> >  BGP /-101
> > Route Distinguisher: 10.254.0.2:100
> > Next hop type: Indirect, Next hop index: 0
> > Address: 0x55a1fd2d2cdc
> > Next-hop reference count: 108, key opaque handle: (nil),
> > non-key opaque handle: (nil)
> > Source: 10.254.0.2
> > Protocol next hop: 10.254.0.2
> > Indirect next hop: 0x2 no-forward INH Session ID: 0
> > State: 
> > Peer AS: 6
> > Age: 1:14   Metric2: 0
> > Validation State: unverified
> > Task: BGP_6_6.10.254.0.2
> > AS path: 65263 xxx I  (Looped: 65263)
> > Communities: target:10:100 encapsulation:vxlan(0x8)
> > router-mac:34:11:8e:16:52:b2
> > Import
> > Route Label: 99100
> > Overlay gateway address: 0.0.0.0
> > ESI 00:00:00:00:00:00:00:00:00:00
> > Localpref: 100
> > Router ID: 10.254.0.2
> > Hidden reason: AS path loop
> > Secondary Tables: WAN.evpn.0
> > Thread: junos-main
> > Indirect next hops: 1
> > Protocol next hop: 10.254.0.2
> > Indirect next hop: 0x2 no-forward INH Session
> ID: 0
> > Indirect path forwarding next hops: 2
> > Next hop type: Router
> > Next hop: 10.0.0.1 via et-0/0/46.1000
> > Session Id: 0
> > Next hop: 10.0.0.11 via et-0/0/45.1000
> > Session Id: 0
> > 10.254.0.2/32 Originating RIB: inet.0
> >   Node path count: 1
> >   Forwarding nexthops: 2
> > Next hop type: Router
> > Next hop: 10.0.0.1 via
> > et-0/0/46.1000
> > Session Id: 0
> > Next hop: 10.0.0.11 via
> > et-0/0/45.1000
> > Session Id: 0
> >
> >
> > // Niklas
> >
> >
> >
> >
> > Den tis 15 nov. 2022 kl 13:58 skrev Saku Ytti :
> >
> > > Hey Niklas,
> > >
> > > My apologies, I do not understand your topology or what you are trying
> > > to do, and would need a lot more context.
> > >
> > > In my ignorance I would still ask, have you considered 'as-override' -
> > >
> > >
> https://www.juniper.net/documentation/us/en/software/junos/bgp/topics/ref/statement/as-override-edit-protocols-bgp.html
> > > this is somewhat common in another use-case, which may or may not be
> > > near to yours. Say you want to connect arbitrarily many CE routers to
> > > MPLS VPN cloud with BGP, but you don't want to get unique ASNs to
> > > them, you'd use a single ASN on every CE and use 

Re: [j-nsp] VXLAN multi-site solution problem

2022-05-24 Thread Roger Wiklund via juniper-nsp
Hi

I assume you have some broad policy config on the leafs like this:

set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
from family evpn
set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
from next-hop 2.255.255.1
set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
from next-hop 2.255.255.2
set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
from nlri-route-type 1
set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
from nlri-route-type 2
set policy-options policy-statement BGP-OVERLAY-IN term reject-remote-gw
then reject
set policy-options policy-statement BGP-OVERLAY-IN term accept-all then
accept

Starting with 19.4R1 you can use complex route filters with EVPN
attributes; maybe this can help you influence the path without discarding
them?
Example: Using Policy Filters to Filter EVPN Routes | Junos OS | Juniper
Networks


Another way is to use VMTO, where you advertise the /32s "northbound" that
the Spines have installed, so you always have an optimal return path.
Ingress Virtual Machine Traffic Optimization | Junos OS | Juniper Networks
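
If I remember right, VMTO is a single knob in the EVPN routing instance on
the gateways (instance name below is a placeholder; with the default switch
it sits directly under protocols evpn). It installs and advertises the remote
host /32s so return traffic comes back via the right site:

set routing-instances EVPN-VS protocols evpn remote-ip-host-routes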


The Juniper equivalent to "Cisco VXLAN EVPN Multi-Site Design" should be VXLAN
Stitching. I haven't tried it myself; QFX10k is a requirement.
Configure VXLAN Stitching for Layer 2 Data Center Interconnect -
TechLibrary - Juniper Networks


Regards
Roger



On Wed, May 18, 2022 at 8:18 AM niklas rehnberg via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi all,
>
> I have two DCs that work as multi-site, i.e. all subnets are available in
> both DCs.
> Using spine/leaf setup, two spines (QFX1) on each site.
> Using the Central GW setup, the spines are L3 GWs.
> In the overlay BGP a filter has been included so the leafs see only their
> own site's spines, which results in the traffic always using the nearest
> spine to reach the WAN.
> A design problem I have found now is if the return traffic to a leaf in
> site1 comes via site2 spines.
> The spine in site2 will add a vxlan header and send it leaf in site1, but
> the leaf will drop the traffic as the spine is unknown (as I have designed
> the solution).
>
> I need suggestions on how to solve this.
> A very easy way is to remove the filter so all leafs see all spines, but the
> drawback will be that a leaf in site1 can use a spine in site2 as its L3 GW.
> Cisco has a solution called "VXLAN EVPN Multi-Site Design"; I have not seen
> any similar example for Juniper.
>
> Thanks Niklas
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN VGA MAC address learning cause flooding

2021-12-07 Thread Roger Wiklund via juniper-nsp
Sorry, I missed your _actual_ question :)
I never figured out why this is the default behavior, sorry.

Regards
Roger

On Tue, Dec 7, 2021 at 4:40 PM Roger Wiklund 
wrote:

> Hi
>
> Yes you need to set the vga-v4/v6-mac on the IRB interface:
>
> virtual-gateway-v4-mac | EVPN User Guide | Juniper Networks TechLibrary
> 
>
> eg:
>
> set interfaces irb.x virtual-gateway-v4-mac 00:00:5e:44:44:44
> set interfaces irb.x virtual-gateway-v6-mac 00:00:5e:66:66:66
>
> Regards
> Roger
>
> On Sun, Nov 21, 2021 at 12:46 PM Chen Jiang via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
>
>> Hi! Experts
>>
>> Sorry for disturbing, I am curious why the IRB interface in EVPN does not
>> use the VGA's virtual MAC address (00:00:5e:00:01:01) to originate packets,
>> but instead uses the interface's real MAC address.
>>
>> Are there any special thoughts behind this? It will cause BUM flooding if
>> the peer is a layer 2 switch (peers will never learn the VGA virtual MAC
>> address).
>>
>> Thanks for your help!
>>
>> --
>> BR!
>>
>>
>>
>>James Chen
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN VGA MAC address learning cause flooding

2021-12-07 Thread Roger Wiklund via juniper-nsp
Hi

Yes you need to set the vga-v4/v6-mac on the IRB interface:

virtual-gateway-v4-mac | EVPN User Guide | Juniper Networks TechLibrary


eg:

set interfaces irb.x virtual-gateway-v4-mac 00:00:5e:44:44:44
set interfaces irb.x virtual-gateway-v6-mac 00:00:5e:66:66:66

Regards
Roger

On Sun, Nov 21, 2021 at 12:46 PM Chen Jiang via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> Hi! Experts
>
> Sorry for disturbing, I am curious why the IRB interface in EVPN does not
> use the VGA's virtual MAC address (00:00:5e:00:01:01) to originate packets,
> but instead uses the interface's real MAC address.
>
> Are there any special thoughts behind this? It will cause BUM flooding if
> the peer is a layer 2 switch (peers will never learn the VGA virtual MAC
> address).
>
> Thanks for your help!
>
> --
> BR!
>
>
>
>James Chen
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp