Re: [j-nsp] Parallel BGP sessions for v6 prefixes over v4 and v6

2024-07-09 Thread Andrey Kostin via juniper-nsp

Alexandre Snarskii wrote 2024-07-09 07:25:
On Mon, Jul 08, 2024 at 11:33:48AM -0400, Andrey Kostin via juniper-nsp 
wrote:

[...]
The problem here is that the route-reflector selects a path with an ipv4-mapped nexthop and advertises it over the ipv6 session. I'm wondering, has anybody already encountered this problem and found a solution for how to make a RR advertise paths with the correct nexthop?


Have you considered making your inet6 sessions not unicast but labeled-unicast explicit-null too? I guess it may prevent bgp from losing the label.
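A minimal sketch of the suggested change, assuming the v6 iBGP group is named internal-rr-v6 as in the outputs below:

set protocols bgp group internal-rr-v6 family inet6 labeled-unicast explicit-null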



No, I didn't, thanks for the clue. In this case, if a v6 prefix with a v4-mapped next-hop is advertised via the v6 session, it should carry the VPN label (explicit-null) and could potentially work in the same way as inet6 labeled-unicast over the v4 session. This is an interesting idea, however I'd like to have a plain v6 session as a backup in case the MPLS forwarding path isn't available for any reason. Like with v4, if the MPLS path can't be established, forwarding can always fall back to plain IP. I understand, though, that having only label 2 attached to v6 packets would probably work, although the label will be removed and added again on every hop a packet passes. On the other hand, we're in the process of implementing SR-ISIS, so this change would be a move in the opposite direction, and I'd prefer to keep the v6 session in its natural state. Anyways, thank you for the advice!


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Parallel BGP sessions for v6 prefixes over v4 and v6

2024-07-08 Thread Andrey Kostin via juniper-nsp



Hi juniper-nsp readers,

Recently we encountered an issue where L3-incomplete counters started incrementing on internal backbone links. It began after adding new PE and core routers and route-reflectors.
After a quite long investigation with TAC involved, the problem was identified: v6 traffic was sent over RSVP tunnels without the explicit-null label and was arriving at the egress PE with the v4 Ethertype in the MAC header.


The issue with the missing explicit-null label turned out to be caused by having both inet6 unicast (over ipv6) and inet6 labeled-unicast explicit-null (over ipv4) BGP sessions running in parallel.
The route-reflector receives the same prefix from the originating PE over the v4 and v6 BGP sessions and installs both paths in the inet6.0 table.
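For reference, the parallel sessions described above would look roughly like this (a sketch only; the group names match the outputs below, the peer addresses are made up):

set protocols bgp group internal-rr-v4 family inet6 labeled-unicast explicit-null
set protocols bgp group internal-rr-v4 neighbor 192.0.2.130
set protocols bgp group internal-rr-v6 family inet6 unicast
set protocols bgp group internal-rr-v6 neighbor 2001:db8::1:130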


akostin@rr02> show route 2a03:2880:f10e::/48 receive-protocol bgp 
X.X.X.130 detail   <<< Received over v4 BGP session with family inet6 
labeled-unicast explicit-null and has Label 2 accordingly


inet6.0: 195655 destinations, 1173973 routes (195655 active, 6 holddown, 
0 hidden)


* 2a03:2880:f10e::/48 (2 entries, 0 announced)
 Accepted Multipath
 Route Label: 2
 Nexthop: :::X.X.X.130
 MED: 95
 Localpref: 106
 AS path: 32934 I
 Communities: Y:3 Y:30127
 Addpath Path ID: 1
 Accepted MultipathContrib MultipathDup
 Route Label: 2
 Nexthop: :::X.X.X.140
 MED: 95
 Localpref: 106
 AS path: 32934 I  (Originator)
 Cluster list:  X.X.2.4
 Originator ID: X.X.X.140
 Communities: Y:3 Y:30127
 Addpath Path ID: 2

akostin@rr02> show route 2a03:2880:f10e::/48 receive-protocol bgp 2607:X:X::1:130 detail   <<< Received over v6 BGP session and has v6 nexthop


inet6.0: 195656 destinations, 1173985 routes (195657 active, 6 holddown, 
0 hidden)


  2a03:2880:f10e::/48 (1 entry, 0 announced)
 Accepted
 Nexthop: 2607:X:X::1:130
 MED: 95
 Localpref: 106
 AS path: 32934 I
 Communities: Y:3 Y:30127

So far so good, but when the route-reflector advertises the prefix to a rr-client it picks one or more best paths if add-path is configured. In this case the RR chooses the path with the mapped IPv4 address and sends it over the ipv6 BGP session, obviously without the explicit-null label.


akostin@rr02> show route 2a03:2880:f10e::/48 advertising-protocol bgp X.X.X.237 detail   <<< Correctly advertised over v4 BGP session with mapped v4 nexthop and explicit-null label


inet6.0: 195756 destinations, 1174580 routes (195756 active, 6 holddown, 
0 hidden)


* 2a03:2880:f10e::/48 (6 entries, 0 announced)
 BGP group internal-rr-v4 type Internal
 Route Label: 2
 Nexthop: :::X.X.X.130
 MED: 95
 Localpref: 106
 AS path: [Y] 32934 I
 Communities: Y:3 Y:30127
 Cluster ID: X.X.X.155
 Originator ID: X.X.X.130
 Addpath Path ID: 1
 BGP group internal-rr-v4 type Internal
 Route Label: 2
 Nexthop: :::X.X.X.140
 MED: 95
 Localpref: 106
 AS path: [Y] 32934 I
 Communities: Y:3 Y:30127
 Cluster ID: X.X.X.155
 Originator ID: X.X.X.140
 Addpath Path ID: 2

akostin@rr02> show route 2a03:2880:f10e::/48 advertising-protocol bgp 2607:X:X::1:237 detail   <<< The path, received over the v4 BGP session, is advertised over the v6 session. Importantly, this path has the mapped IPv4 nexthop but doesn't have the explicit-null label.


inet6.0: 195760 destinations, 1174603 routes (195760 active, 7 holddown, 
0 hidden)


* 2a03:2880:f10e::/48 (6 entries, 0 announced)
 BGP group internal-rr-v6 type Internal
 Nexthop: :::X.X.X.130
 MED: 95
 Localpref: 106
 AS path: [Y] 32934 I
 Communities: Y:3 Y:30127
 Cluster ID: X.X.X.155
 Originator ID: X.X.X.130

On the receiving router all paths are installed because of BGP multipath. If the last path is used, v6 packets are sent without the explicit-null label, arrive at the egress PE with the wrong Ethertype, and are dropped as L3-incomplete.


akostin@re0.agg02> show route  2a03:2880:f10e::/48  table inet6.0

+ = Active Route, - = Last Active, * = Both

2a03:2880:f10e::/48*[BGP/170] 2d 21:46:57, MED 95, localpref 106, from 
X.X.X.154

  AS path: 32934 I, validation-state: unverified
   to X.X.X.14 via ae0.0, label-switched-path 
BE-agg02-to-bdr01-1
>  to X.X.X.14 via ae0.0, label-switched-path 
BE-agg02-to-bdr01-2
[BGP/170] 2d 21:54:26, MED 95, localpref 106, from 
X.X.X.155

  AS path: 32934 I, validation-state: unverified
   to X.X.X.14 via ae0.0, Push 2, Push 129063(top)
>  to X.X.X.14 via ae0.0, Push 2, Push 129001(top)
[BGP/170] 2d 21:47:17, MED 95, localpref 106, from 
X.X.X.154

  AS path: 32934 I, validation-state: unverified
   to X.X.X.14 via ae0.0, Push 2, Push 129314(top)
>  to X.X.X.14 via ae0.0, Push 2, Push 128995(top)

Re: [j-nsp] Migrating to MVPN

2024-06-10 Thread Andrey Kostin via juniper-nsp

Dragan Jovicic via juniper-nsp wrote 2024-06-10 13:15:

The latter.
Can't modify STBs (they have default route).
Should be networking solution it seems.
Thanks...



IIRC, the multicast topology doesn't have to match the unicast topology. I probably wouldn't mess with trying to split the topology at the access level, but what if, on the router that serves as the default GW for the STBs, you have a static default for multicast pointing to the current network core and a default route for unicast traffic pointing to the CGNAT box? If your multicast sources are fixed, then you can probably use more specific routes for them.
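A rough Junos sketch of that split (made-up next hops; it omits the rib-group plumbing needed to point PIM RPF at inet.2): the first route is the unicast default toward the CGNAT box, the second is the multicast RPF default toward the core.

set routing-options static route 0.0.0.0/0 next-hop 198.51.100.1
set routing-options rib inet.2 static route 0.0.0.0/0 next-hop 192.0.2.1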


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] rib-sharding and NSR update

2024-05-10 Thread Andrey Kostin via juniper-nsp

Hi juniper-nsp,

Just hit exactly the same issue as described in the message found in the 
list archives:


Gustavo Santos
Mon Jan 4 15:13:18 EST 2021

Hi,

We got another MX10003 and we are updating it before getting it into production.
Reading the 19.4R3 release notes, we noticed two new features, update-threading and rib-sharding, and I really liked what they "promise": faster BGP updates.

But there is a catch. We can't use these new features with non-stop routing enabled.

The question is, are these features worth the loss of non-stop routing?

Regards
"
bgp {
##
## Warning: Can't be configured together with routing-options
nonstop-routing
##
rib-sharding;
##
## Warning: Update threading can't be configured together with
routing-options nonstop-routing
##
update-threading;
}
"

That message seems to not have gotten any response.
However, I found an explanation at the bottom of the page:
https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/rib-sharding-edit-protocols-bgp.html

Support for NSR with sharding was introduced in Junos OS Release 22.2.
BGP sharding supports IPv4, IPv6, L3VPN and BGP-LU from Junos OS Release 
20.4R1.
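So on 22.2 and later the combination that triggers the commit warnings above should be configurable together, roughly:

set routing-options nonstop-routing
set protocols bgp rib-sharding
set protocols bgp update-threading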


Still need to test and confirm on this platform, but on another router 
it already works.


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper publishes Release notes as pdf

2024-03-18 Thread Andrey Kostin via juniper-nsp

Thanks, Joe.

Right, PDF-only for SR releases has been the case for a while, but not very long; the change happened just a few months ago. My personal preference would be to read HTML that can adapt to screen size, etc. IMO the value of PDF is being able to print a paper copy, but it's hard to imagine that somebody would print release notes these days.


Kind regards,
Andrey

Joe Horton via juniper-nsp wrote 2024-03-15 21:36:

Correct.

SR releases – PDF only, and I think it has been that way a while.
R release – html/web based + PDF

And understand, I’ll pass along the feedback to the docs team.

Joe



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Juniper publishes Release notes as pdf

2024-03-15 Thread Andrey Kostin via juniper-nsp



Hi Juniper-NSP readers,

Did anybody mention that recent Junos Release Notes are now published as PDF, instead of the usual web page?
Here is the example: 
https://supportportal.juniper.net/s/article/22-2R3-S3-SRN?language=en_US

What do you think about it?
For me, it's very inconvenient. To click links to PRs or copy one paragraph I now have to download the PDF and open it in Acrobat. Please chime in, and maybe our voices will be heard.


Kind regards,
Andrey Kostin
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] mx304 alarm seen after junos upgrade

2024-03-04 Thread Andrey Kostin via juniper-nsp

Hi Aaron,

Maybe this can be helpful:
https://supportportal.juniper.net/s/article/MX304-MinorFPC-0-firmware-outdated

Kind regards,
Andrey Kostin

Aaron1 via juniper-nsp wrote 2024-02-29 21:55:

Resolved… with the following…

FPC showed a difference between what was running and what is
available… reminiscent of IOS-XR upgrades and subsequent fpd/fpga
upgrades.


show system firmware

FPC 0   ZL30634 DPLL    9    6022.0.0   7006.0.0    OK

request system firmware upgrade fpc slot 0

request chassis fpc restart slot 0

Aaron


On Feb 29, 2024, at 8:14 PM, Aaron Gould  wrote:

Anyone ever seen this alarm on an MX304 following a Junos upgrade?

I went from ...

22.2R3-S1.9 - initially had this
22.4R2-S2.6 - upgrade
23.2R1-S2.5 - final

now with 23.2R1-S2.5, I have an issue with more than one 100G interface being able to operate.  I have a 100G on et-0/0/4 and another one on et-0/0/12... BUT, they won't both function at the same time.  4 works, 12 doesn't... reboot the MX304, 4 doesn't work, but 12 does. Very weird.


root@304-1> show system alarms
6 alarms currently active
Alarm time   Class  Description
2024-02-29 06:00:25 CST  Minor  200 ADVANCE Bandwidth (in gbps)s(315) 
require a license
2024-02-29 06:00:25 CST  Minor  OSPF protocol(282) usage requires a 
license
2024-02-29 06:00:25 CST  Minor  LDP Protocol(257) usage requires a 
license

2024-02-28 09:35:10 CST  Minor *FPC 0 firmware outdated*
2024-02-28 09:29:45 CST  Major  Host 0 fxp0 : Ethernet Link Down
2024-02-28 09:28:15 CST  Major  Management Ethernet Links Down


root@304-1> show chassis alarms
3 alarms currently active
Alarm time   Class  Description
2024-02-28 09:35:10 CST  Minor *FPC 0 firmware outdated*
2024-02-28 09:29:45 CST  Major  Host 0 fxp0 : Ethernet Link Down
2024-02-28 09:28:15 CST  Major  Management Ethernet Links Down


--
-Aaron
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] igmp snooping layer 2 querier breaks ospf in other devices

2024-02-01 Thread Andrey Kostin via juniper-nsp

Hi Aaron,

It's not clear from your explanation where the l2circuits with ospf are connected and how they are related to this irb/vlan.
Do you really need a querier in this case? IIRC, a querier is needed when only hosts are present on the LAN and a switch has to send igmp queries. In your case, you have a router with an irb interface that should work as the igmp querier by default. Not sure if it helps though.
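If the IRB querier is enough, the change would just be removing the static L2 querier from the config quoted below, roughly (a sketch):

delete protocols igmp-snooping vlan vlan100 l2-querier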


Kind regards,
Andrey

Aaron Gould via juniper-nsp wrote 2024-01-31 14:54:


I'm having an issue where igmp snooping layer 2 querier breaks ospf in
other devices which are in l2circuits

Has anyone ever come across this issue, and have a work-around for it?

I have the following configured and devices in vlan 100 can join
multicast just fine.  But there are other unrelated l2circuits that
carry traffic for devices in other vlans, and inside these l2circuits are
ospf hellos that seem to be getting broken by this configuration

set interfaces irb unit 100 family inet address 10.100.4.1/27
set protocols ospf area 0.0.0.1 interface irb.100 passive
set protocols igmp interface irb.100 version 3
set protocols pim interface irb.100
set protocols igmp-snooping vlan vlan100 l2-querier source-address 
10.100.4.1


Model: acx5048
Junos: 17.4R2-S11


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos 21+ Killing Finger Muscle Memory...

2023-07-12 Thread Andrey Kostin via juniper-nsp

Hi Mark,
100% agree if it could help.
Very annoying. If a UX designer touched it, he or she probably never actually worked with Junos.


Kind regards,
Andrey

Mark Tinka via juniper-nsp wrote 2023-07-12 04:49:

So, this is going to be a very privileged post, and I have been spending the last several months mulling over even complaining about it, either on here or with my SE.

But a community friend sent me the exact same annoyance he is having
with Junos 21 or later, which has given me a final reason to just go
ahead and moan about it:

tinka@router> show rout
 ^
'rout' is ambiguous.
Possible completions:
  route    Show routing table information
  routing  Show routing information
{master}
tinka@router>

I'm going to send this to my Juniper SE and AM. Not sure what they'll make of it, as it is a rather privileged complaint - but in truth, it does make working with Junos on a historically commonly used command rather cumbersome, and annoying.

The smile that comes to my face when I touch a box running Junos 20 or
earlier and run this specific command, is unconscionably satisfying
:-).

Mark.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACX7100-48L

2023-06-13 Thread Andrey Kostin via juniper-nsp

Aaron Gould via juniper-nsp wrote 2023-06-12 11:22:


interestingly, the PR is said to be fixed in 22.2R2-EVO; wouldn't it follow that it should be fixed in my version, 22.2R3.13-EVO?

me@lab-7100-2> show version
...
Junos: 22.2R3.13-EVO



The fix should already be implemented in the version you use.

Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Saku,

Saku Ytti wrote 2023-06-09 12:09:

On Fri, 9 Jun 2023 at 18:46, Andrey Kostin  wrote:


I'm not in this market, and have no qualifications or resources for development. The demand for such devices would have to be really massive to justify a process like this.


Are you not? You use a lot of open source software, because someone
else did the hard work, and you have something practical.

The same would be the thesis here,  You order the PCI NPU from newegg,
and you have an ecosystem of practical software to pull from various
sources. Maybe you'll contribute something back, maybe not.


Well, technically maybe I could do it. But putting it in production is another story. I have to not only make it run but also make sure that there are people who can support it 24x7. I think you said it before, and I agree, that the cost of capital investment in routers is just a small fraction of expenses for service providers. Cable infrastructure, facilities, payroll, etc. make up a bigger part, but the risk of a router failure extends to business risks like reputation and financial loss and may have a catastrophic impact. We all know how long and difficult troubleshooting and fixing a complex issue with a vendor's TAC can be, but I consider the price we pay hardware vendors for their TAC support partially as liability insurance.



Very typical network is a border router or two, which needs features
and performance, then switches to connect to compute. People who have
no resources or competence to write software could still be users in
this market.


Sounds more like a datacenter setup, and for a DC operator it could be attractive to do at scale. For a traditional ISP with relatively small PoPs spread across the country it may not be the case.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp
Thank you very much, Jeff, for sharing your experience. Will closely watch the Release Notes for upcoming Junos releases. And kudos to Juniper for finding and fixing it, 1.5 weeks is a very fast reaction!


Kind regards,
Andrey

Litterick, Jeff  (BIT) wrote 2023-06-09 12:41:

This is why we got the MX304.  It was a test to replace our MX10008
Chassis, which we bought a few of because we had to get at a
reasonable price into 100G at high density at multiple sites a few
years back now.  Though we really only need 4 line cards, with 2 being
for redundancy.   The MX10004 was not available at the time back then
(Wish it had been.  The MX10008 is a heavy beast indeed and we had to
use fork lifts to move them around into the data centers).But
after handling the MX304 we will most likely for 400G go to the
MX10004 line for the future and just use the MX304 at very small edge
sites if needed.   Mainly due to full FPC redundancy requirements at
many of our locations.   And yes we had multiple full FPC failures in
the past on the MX10008 line.  We went through at first an RMA cycle
with multiple line cards which in the end was due to just 1 line cards
causing full FPC failure on a different line card in the chassis
around every 3 months or so.   Only having everything redundant across
both FPCs allowed us not to have serious downtime.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Saku Ytti wrote 2023-06-09 10:35:


LGA8371 socketed BRCM TH4. Ostensibly this allows a lot more switches
to appear in the market, as the switch maker doesn't need to be
friendly with BRCM. They make the switch, the customer buys the chip
and sockets it. Wouldn't surprise me if FB, AMZN and the likes would
have pressed for something like this, so they could use cheaper
sources to make the rest of the switch, sources which BRCM didn't want
to play ball with.


Can anything else be inserted in this socket? If not, then what's the point? For server CPUs there are many models with different clocking and numbers of cores, so a socket provides flexibility. If there is only one chip that fits the socket, then the socket is a redundant part.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Mark Tinka wrote 2023-06-09 10:26:

On 6/9/23 16:12, Saku Ytti wrote:


I expect many people in this list have no need for more performance
than single Trio YT in any pop at all, yet they need ports. And they
are not adequately addressed by vendors. But they do need the deep
features of NPU.


This.

There is sufficient performance in Trio today (even a single Trio chip
on the board) that people are willing to take an oversubscribed box or
line card because in real life, they will run out of ports long before
they run out of aggregate forwarding capacity.

The MX204, even though it's a pizza box, is a good example of how it
could do with 8x 100Gbps ports, even though Trio on it will only
forward 400Gbps. Most use-cases will require another MX204 chassis,
just for ports, before the existing one has hit anywhere close to
capacity.


Agree, there is a gap between the 204 and 304, but don't forget that they belong to different generations. The 304 is shiny new, with next-level performance, and is replacing the MX10k3. The previous generation was announced to retire, but the life of the MX204 was extended because Juniper realized they don't have anything atm to replace it and would probably lose revenue. Maybe this gap was caused by covid slowing down the new platform. And possibly we may see a single-NPU model based on the new-gen chip, because chips for the 204 are finite. At least it would be logical to make it, considering the success of the MX204.


Really, folk are just chasing the Trio capability, otherwise they'd
have long solved their port-count problems by choosing any
Broadcom-based box on the market. Juniper know this, and they are
using it against their customers, knowingly or otherwise. Cisco was
good at this back in the day, over-subscribing line cards on their
switches and routers. Juniper have always been a little more purist,
but the market can't handle it because the rate of traffic growth is
being out-paced by what a single Trio chip can do for a couple of
ports, in the edge.


I think that it's not rational to make another chipset with lower bandwidth; it's easier to limit an existing, more powerful chip. But that leads to the MX5/MX10/MX40/MX80 hardware and licensing model. It could be a single Trio6 with up to 1.6T in access ports and 1.6T in uplink ports with low features. Maybe it will come, who knows, let's watch ;)


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Saku Ytti wrote 2023-06-09 10:12:

On Fri, 9 Jun 2023 at 16:58, Andrey Kostin via juniper-nsp
 wrote:


Not sure why it's eye-watering. The price of fully populated MX304 is
basically the same as it's predecessor MX10003 but it provides 3.2T BW
capacity vs 2.4T. If you compare with MX204, then MX304 is about 20%
expensive for the same total BW, but MX204 doesn't have redundant RE 
and
if you use it in redundant chassis configuration you will have to 
spend
some BW on "fabric" links, effectively leveling the price if 
calculated

for the same BW. I'm just comparing numbers, not considering any real


That's not it, RE doesn't attach to fabric serdes.


Sorry, I mixed two different points. I wanted to say that the redundant RE adds more cost to the MX304, unrelated to forwarding BW. But if you want to have MX204s in a redundant configuration, some ports have to be sacrificed for connectivity between them. We have two MX204s running in a pair with 2x100G taken for links between them, and the remaining BW is 6x100G for actual forwarding in/out. In this case it's kind of at the same level for price/100G value.




I expect many people in this list have no need for more performance
than single Trio YT in any pop at all, yet they need ports. And they
are not adequately addressed by vendors. But they do need the deep
features of NPU.


I agree, and that's why I asked about HQoS experience, just to add more 
inexpensive low-speed switch ports via trunk but still be able to treat 
them more like separate ports from a router perspective.



I keep hoping that someone is so disruptive that they take the
nvidia/gpu approach to npu. That is, you can buy Trio PCI from newegg
for 2 grand, and can program it as you wish. I think this market
remains unidentified and even adjusting to cannibalization would
increase market size.
I can't understand why JNPR is not trying this, they've lost for 20
years to inflation in valuation, what do they have to lose?


I'm not in this market, and have no qualifications or resources for development. The demand for such devices would have to be really massive to justify a process like this.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Mark,

Not sure why it's eye-watering. The price of a fully populated MX304 is basically the same as its predecessor, the MX10003, but it provides 3.2T of BW capacity vs 2.4T. If you compare with the MX204, then the MX304 is about 20% more expensive for the same total BW, but the MX204 doesn't have a redundant RE, and if you use it in a redundant chassis configuration you will have to spend some BW on "fabric" links, effectively leveling the price if calculated for the same BW. I'm just comparing numbers, not considering any real topology, which is another can of worms. Most probably it's not worth trying to scale MX204s to more than a pair of devices, at least I wouldn't do it or consider it ;)
I'd rather call the prices for MPC7 and MPC10 eye-watering, if you want to upgrade existing MX480 routers and still use their low-speed ports. Two MPC10s with an SCB3 upgrade cost more than an MX304, but give 30% less BW capacity. For MPC7 this ratio is even worse.
This brings a question: does anybody have experience with HQoS on the MX304? I mean just per-subinterface queueing on an interface to a switch, not BNG subscriber CoS, which is probably another big topic. At least I don't dare yet to try the MX304 in a BNG role, maybe later ;)
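For clarity, the kind of per-subinterface queueing meant here would look roughly like this (a sketch only; interface, profile names and rates are made up, and the scheduler-map definition is omitted):

set interfaces et-0/0/1 hierarchical-scheduler
set interfaces et-0/0/1 flexible-vlan-tagging
set interfaces et-0/0/1 unit 100 vlan-id 100
set class-of-service traffic-control-profiles TCP-100M shaping-rate 100m
set class-of-service traffic-control-profiles TCP-100M scheduler-map SM-DEFAULT
set class-of-service interfaces et-0/0/1 unit 100 output-traffic-control-profile TCP-100M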


Kind regards,
Andrey

Mark Tinka via juniper-nsp wrote 2023-06-08 12:04:


Trio capacity aside, based on our experience with the MPC7E, MX204 and
MX10003, we expect it to be fairly straight forward.

What is holding us back is the cost. The license for each 16-port line
card is eye-watering. While I don't see anything comparable in ASR99xx
Cisco-land (in terms of form factor and 100Gbps port density), those
prices are certainly going to force Juniper customers to look at other
options. They would do well to get that under control.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Jeff,

Thank you very much, Jeff, for sharing this information. Do you know in what publicly available release it's going to be fixed? Knowing the PR number would be best, but I guess it may be internal-only.


Kind regards,
Andrey

Litterick, Jeff  (BIT) via juniper-nsp wrote 2023-06-08 18:03:

No, that is not quite right.  We have 2 chassis of MX304 in Production
today and 1 spare all with Redundant REs   You do not need all the
ports filled in a port group.   I know since we mixed in some 40G and
40G is ONLY supported on the bottom row of ports so we have a mix and
had to break stuff out leaving empty ports because of that limitation,
and it is running just fine.But you do have to be careful which
type of optics get plugged into which ports.  IE Port 0/2 vs Port 1/3
in a grouping if you are not using 100G optics.

The big issue we ran into is if you have redundant REs then there is a
super bad bug that after 6 hours (1 of our 3 would lock up after
reboot quickly and the other 2 would take a very long time) to 8 days
will lock the entire chassis up solid where we had to pull the REs
physical out to reboot them. It is fixed now, but they had to
manually poke new firmware into the ASICs on each RE when they were in
a half-powered state,  Was a very complex procedure with tech support
and the MX304 engineering team.  It took about 3 hours to do all 3
MX304s  one RE at a time.   We have not seen an update with this
built-in yet.  (We just did this back at the end of April)



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX BNG with both local server and dhcp relay

2023-01-23 Thread Andrey Kostin via juniper-nsp
I didn't have any v6-specific issues with DHCP relay in Junos 21.4. If you're going to rely on option-82, consider turning on proxy mode. Without it, Junos didn't update the Circuit-ID in RENEW packets sent unicast from clients to the DHCP server. Although it could have been fixed in recent releases, it's worth checking.


Kind regards,
Andrey

Dave Bell wrote 2023-01-13 04:10:

Thanks Andrey,

Yes, I believe you are correct. You can't switch from using local DHCP
server in the global routing table to DHCP relay once authenticated in
a different VRF.

I can split my services onto different interfaces coming into the BNG,
though since you need to decapsulate them first, they end up on the
same demux interface anyway.

I analysed a lot of traceoptions and packet captures. My relay didn't
receive a single packet, and the logs indicated that it was not
looking for DHCP configuration in my VRF that has forwarding
configured.

I think my only option is to move everything over to DHCP forwarding
in all cases, though this seems quite flaky for v6...

Regards,
Dave

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX BNG with both local server and dhcp relay

2023-01-10 Thread Andrey Kostin via juniper-nsp

Hi Dave,

Don't have experience with your specific case, just common-sense speculation. When you configure a local dhcp server, it usually specifies a template interface, like demux0.0, pp0.0, psX.0. Probably in your case a conflict happens when Junos tries to enable both server and relay on the same subscriber interface. Maybe if you could dynamically enable dhcp server or relay for a particular subscriber interface, it could solve the issue. Regarding interface separation, I'm not sure if it's possible to have more than one demux or pp interface; I believe only demux0 is supported. With ps interfaces, however, you can have many of them, and if you can aggregate subscribers to pseudowires by service, you could enable dhcp server or relay depending on the psX interface. However, pseudowires might be unnecessary and excessive for your design.
Did you try to analyze DHCP and AAA traceoptions and capture DHCP packets, BTW?


Kind regards,
Andrey

Dave Bell via juniper-nsp wrote 2023-01-05 08:50:

Hi,

I'm having issues with DHCP relay on a Juniper MX BNG, and was 
wondering if

anyone had an insight on what may be the cause of my issue.

I've got subscribers terminating on the MX, authenticated by RADIUS, 
and
then placed into a VRF to get services. In the vast majority of cases 
the
IP addressing information is passed back by RADIUS, and so I'm using 
the

local DHCP server on the MX to deal with that side of things.

In one instance I require the use of an external DHCP server. I've got 
the

RADIUS server providing an Access-Accept for this subscriber, and also
returning the correct VRF in which to terminate the subscriber. I've 
also

tried passing back the external DHCP server via RADIUS.

In the VRF, I've got the DHCP relay configured, and there is 
reachability

to the appropriate server

The MX however seems reluctant to actually forward DHCP requests to 
this

server. From the logging, I can see that the appropriate attributes are
received and correctly decoded. The session gets relocated into the 
correct

routing instance, but then it tries to look for a local DHCP server.

I have the feeling that my issues are due to trying to use both the 
local

DHCP server and DHCP relay depending on the subscriber scenario. If I
change the global configuration of DHCP from local server to DHCP 
relay, my
configuration works as expected though with the detriment of the 
scenario
where the attributes returned via RADIUS no longer work due to it not 
being

able to find a DHCP relay.

Since the MX decides how to authenticate the subscriber based on where 
the

demux interface is configured, I think ideally I would need to create a
different demux interface for these type of subscribers that I can then 
set

to be DHCP forwarded, thought I don't seem to be able to convince the
router to do that yet.

Has anyone come across this, and found a workable solution?

Regards,
Dave
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACX7100 route scale

2023-01-03 Thread Andrey Kostin via juniper-nsp

Thanks, Mihai, for sharing this very useful info!

Kind regards,
Andrey

Mihai via juniper-nsp wrote 2022-12-31 07:20:

I found the info here:

https://www.juniper.net/documentation/us/en/software/junos/routing-policy/topics/ref/statement/system-packet-forwarding-options-hw-db-profile.html


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VRRP on VXLAN enviroment

2022-12-15 Thread Andrey Kostin via juniper-nsp

Hi Cristian,

I tried to reproduce the issue by reverting the configuration, but it didn't occur. It's still unclear to me why only v4 was affected and v6 was not. Furthermore, in a stable state (which it was in my case) the vrrp backup is silent, so no mac-move events can happen, and I confirmed with the vrrp statistics I collected before disabling VRRP that the backup router wasn't sending anything.
After re-activating the interface on the backup router I also didn't see any vrrp packets sent from it. Eventually I ended up configuring an exception for the VRRP MACs and will watch how it goes:


https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/multicast-l2/topics/ref/statement/exclusive-mac-edit-protocols-l2-learning-global-mac-move.html
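The exception itself is roughly a one-liner per virtual MAC (a sketch; 00:00:5e:00:01:xx is the VRRP IPv4 virtual MAC, shown here for VRID 1):

set protocols l2-learning global-mac-move exclusive-mac 00:00:5e:00:01:01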

Kind regards,
Andrey

Cristian Cardoso wrote 2022-12-14 11:20:

Hi Andrey

In my case, what you said happened, so I modified the arp suppression configuration of evpn-vxlan, since it was silently dropping MACs and dropping VRRPv4 only; in IPv6 this did not happen.

set protocols evpn duplicate-mac-detection detection-threshold 20
set protocols evpn duplicate-mac-detection detection-window 5
set protocols evpn duplicate-mac-detection auto-recovery-time 5

With the above configurations, I never had a problem with VRRPv4
crashing in my environment.

The environment with VRRP has been working since the email exchange in 2021, without any drops or problems.

kind regards,

Cristian Cardoso


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Updating SRX300, now slower than before

2022-07-13 Thread Andrey Kostin via juniper-nsp
Nothing directly related to these releases, but we have a few SRX320s and they always felt slow in comparison to the SRX345s that we also use, mainly for power supply redundancy. Now they are all on 19.4R3-Sx. The 320s from time to time log LACP timeouts and ae interface flaps. The 345s have the same configuration and have never experienced anything like this.


Kind regards,
Andrey

Markus via juniper-nsp wrote 2022-07-12 17:16:

Hi list,

I'm moving a couple SRX300 that were running 15.1X49-D90.7 to a new
purpose and just updated them to 21.3R1.9 to be a bit more up-to-date
and now booting takes twice the time (5 mins?) and CLI input also
seems "lag-ish" sometimes. Did I just do a big mistake? If
routing/IPsec is unimpacted then it's OK, or could it be that the
SRX300 will now perform slower than before in terms of routing
packets/IPsec performance/throughput? Or is that stupid of me to
think? :)

Thank you!

Regards
Markus
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP export policy, group vs neighbor level

2022-02-07 Thread Andrey Kostin via juniper-nsp


I agree, there is no clarity for all possible situations; from my experience a) and c) should be correct and require special care. Changing an existing policy doesn't drop a session (usually ;), and I've seen cases where adding a new policy to an existing policy chain didn't drop BGP, but that might not always be the case. Whether the router is an RR may also affect its behavior. I think almost every network engineer has been hit by this Juniper "feature".


Kind regards,
Andrey

Raph Tello wrote 2022-02-05 03:55:

Hey,

not really clear to me what that KB is exactly saying.

Does it say:

a) Peer will be reset when it previously hadn’t an individual
import/export policy statement but the group one and then an
individual one is configured

b) Peer will be reset each time it‘s individual policy is touched
while there is another policy in the group

or

c) Peer is reset the first time it receives it‘s own policy under
the group

Unfortunate that this seems to be not really well documented.





- Tello


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP export policy, group vs neighbor level

2022-02-04 Thread Andrey Kostin via juniper-nsp

Hi,
this KB article just came in:
https://kb.juniper.net/InfoCenter/index?page=content=KB12008=SUBSCRIPTION
Symptoms:
Why does modifying a policy on a BGP neighbor in a group cause that 
particular peer to be reset, when another policy is applied for the 
whole peer group?

Solution:
Changing the export policy on a member (peer) in a group will cause that 
member to be reset, as there is no graceful way to modify a group 
parameter for a particular peer. Junos can gracefully change the export 
policy, only when it is applied to the complete group.


It's not very helpful, but it provides a confirmation.
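To illustrate the scenario the KB describes (policy names are made up, the group name is from the question below): the group-level export covers all peers, and adding a neighbor-level export for one peer resets only that session:

set protocols bgp group IX-peers export POLICY-IX-DEFAULT
set protocols bgp group IX-peers neighbor 192.0.2.10 export POLICY-IX-SPECIAL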

Kind regards,
Andrey

Raph Tello via juniper-nsp wrote 2022-02-04 09:33:

I would also like to hear opinions about having ipv4 and ipv6 ebgp peer
sessions in the same group and using the same policy instead of having 
two

separate groups and two policies (I saw this kind of policy at
https://bgpfilterguide.nlnog.net/guides/small_prefixes/#junos).

It would nicely pack things together. Could that be considered kind of 
new

best practice?

On Thu 3. Feb 2022 at 16:12, Raph Tello  wrote:


Hi list,

I wonder what kind of bgp group configuration would allow me to change 
the
import/export policy of a single neighbor without resetting the 
session of

this neighbor nor any other session of other neighbors. Similar to
enabling/disabling features on a single session without resetting the
sessions of others.

Let‘s say I have a bgp group IX-peers and each peer in that group has 
its
own import/export policy statement but all reference the same 
policies. Now

a single IX-peer needs a different policy which is going to change
local-pref, so I would replace the policy chain of that peer with a
different one.

Would this cause a session reset because the peer would be moved out 
of

the update group?

(I wonder mainly about group>peer>policy vs. group>policy vs. each 
peer

it‘s own group)

- Tello


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] labeled-unicast to ldp redistribution ?

2021-12-20 Thread Andrey Kostin via juniper-nsp

Thanks for the details, it looks a little illogical though.
I mentioned that the best BGP route is received from the same OSPF/LDP neighbor in the same island. And it looks like the BGP session is using P2P IPs. Is there some reason why it's not run between loopback IPs? Just a shot in the dark: maybe with sessions between loopbacks BGP would rely on OSPF for next-hop resolution, and that could change the behavior?


Kind regards,
Andrey Kostin

Alexandre Snarskii wrote 2021-12-20 12:31:

On Mon, Dec 20, 2021 at 09:08:40AM -0500, Andrey Kostin wrote:

Hi Alexandre,

Not sure that I completely understood the issue. When connectivity
between islands recovers, what is the primary route for regular BGP
routes' protocol next-hop?


It's not the connectivity between islands, it's the connectivity
within IGP island that recovers. Assume the following simple
topology:

   A == B
   ||
   C == D

Routers A and B form one IGP island, C and D - other, and there are
two ibgp-lu links between islands with ldp->ibgp-lu->ldp 
redistribution.


In normal situation, route A to B goes via direct link (igp/ldp),
when link A-B breaks, A switches to ibgp-lu route from C.
When link A-B recovers, A does not switch back to direct link and
still uses A->C route (in best case it's just suboptimal, in worst
case it results in routing loops).
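For context, the ldp -> ibgp-lu -> ldp redistribution mentioned above is typically built from policies along these lines (a rough sketch; group and policy names are made up):

set policy-options policy-statement LDP-TO-BGP term ldp from protocol ldp
set policy-options policy-statement LDP-TO-BGP term ldp then accept
set protocols bgp group island-lu family inet labeled-unicast rib inet.3
set protocols bgp group island-lu export LDP-TO-BGP
set policy-options policy-statement BGP-TO-LDP term lu from protocol bgp
set policy-options policy-statement BGP-TO-LDP term lu then accept
set protocols ldp egress-policy BGP-TO-LDP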


Looks like it should be OSPF with route
preference lower than BGP and in this case it should be labeled by LDP
and propagated. Only if OSPF route for a protocol next-hop is not the
best, the next-hop from BGP-LU will be used.


Unfortunately, it's expected behaviour, but not what I see in lab.
Oversimplified: just two routers, one p2p link with all three ospf/ldp/
ibgp-lu enabled,

show route xx.xxx.xxx.78/32 table inet.0

inet.0:
xx.xxx.xxx.78/32   *[OSPF/10] 5d 04:58:59, metric 119
>  to xxx.xx.xxx.21 via ae0.6

(so, ospf route is the best one in inet.0)

show ldp database session xx.xxx.xxx.7 | match 
"database|xx.xxx.xxx.78/32"

Input label database, xxx.xx.xxx.8:0--xx.xxx.xxx.7:0
  66742  xx.xxx.xxx.78/32
Output label database, xxx.xx.xxx.8:0--xx.xxx.xxx.7:0
   5743  xx.xxx.xxx.78/32

so the label is present and not filtered (.7 is the router-id of .21),

show route xx.xxx.xxx.78/32 receive-protocol bgp xxx.xx.xxx.21

inet.3: 467 destinations, 1125 routes (467 active, 0 holddown, 0 
hidden)

Restart Complete
  Prefix  Nexthop  MED LclprefAS path
* xx.xxx.xxx.78/32xxx.xx.xxx.2119  100I

so, it's received and is the best route in inet.3 (best, because
there are no ldp route in inet.3 at all:

show route .. table inet.3

xx.xxx.xxx.78/32   *[BGP/10] 02:10:43, MED 19, localpref 100
  AS path: I, validation-state: unverified
>  to xxx.xx.xxx.21 via ae0.6, Push 69954

), and, finally,

show ldp route extensive xx.xxx.xxx.78/32
DestinationNext-hop intf/lsp/table  
Next-hop address
 xx.xxx.xxx.78/32  ae0.6
xxx.xx.xxx.21

   Session ID xxx.xx.xxx.8:0--xx.xxx.xxx.7:0

xxx.xx.xxx.21

   Bound to outgoing label 5743, Topology entry: 0x776dd88
   Ingress route status: Inactive
   Route type: Egress route, BGP labeled route
   Route flags: Route deaggregate

suggests that presence of ibgp-lu route prevented ldp route from being
installed to inet.3 and being used.

PS: idea from KB32600 (copy ibgp-lu route from inet.3 to inet.0 and 
then

use "from protocol bgp rib inet.0" in ldp egress policy) does not work
too. Well, in this case presence of ibgp-lu route does not prevent ldp
route from being installed into inet.3 and used as best route (when
present, of course), but when ldp/igp route is missed, route received
with ibgp-lu not gets redistributed into ldp.




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] labeled-unicast to ldp redistribution ?

2021-12-20 Thread Andrey Kostin via juniper-nsp

Hi Alexandre,

Not sure that I completely understood the issue. When connectivity between islands recovers, what is the primary route for the regular BGP routes' protocol next-hop? It looks like it should be OSPF, with route preference lower than BGP, and in this case it should be labeled by LDP and propagated. Only if the OSPF route for a protocol next-hop is not the best will the next-hop from BGP-LU be used.


Kind regards,
Andrey Kostin

Alexandre Snarskii via juniper-nsp wrote 2021-12-17 12:29:

Hi!

Scenario: router is a part of ospf/ldp island and also have ibgp
labeled-unicast rib inet.3 link to other ospf/ldp island. In normal
situations, some routes are known through ospf/ldp, however, during
failures they may appear from ibgp-lu and redistributed to ldp just
fine. However, when failure ends and route is known via ospf/ldp again,
it's NOT actually in use. Instead, 'show ldp route extensive' shows
this route as:

   Ingress route status: Inactive
   Route type: Egress route, BGP labeled route
   Route flags: Route deaggregate

and there are only ibgp route[s] in inet.3 table.

Are there any way to make ldp ignore 'BGP labeled' flag and install
route to inet.3 ? (other than making all routes be known not only
via ospf/ldp but also via ibgp-lu too).

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-09-09 Thread Andrey Kostin via juniper-nsp

Hi Nathan,



You want to look in the example configs. Start from an understanding
of what you want the RADIUS messages to have in them. You can do this
with just a static Users file in your test environment with just one
subscriber, and then look at moving that in to sqlippool or similar,
with whatever logic you need to get those attributes in to the right
place. Framed-IP-Address obviously, but maybe also Framed-IP-Netmask
etc. - better to experiment with the attributes and get them right
without the sqlippool complexity.

https://wiki.freeradius.org/modules/Rlm_sqlippool This is alright (it
appears outdated on the surface, but is up to date I think)
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-available/sqlippool
This is the example config and has some more detail than the above.
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-config/sql/ippool/postgresql/queries.conf
This is useful to understand some of the internals



I started to play with sqlippool and have a couple of questions.
Does sqlippool or any other module support IPv6? I haven't found anything about it in the documentation.
Is the ippool module used only as an example of the database schema? It looks like it doesn't need to be enabled for sqlippool operation.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-11 Thread Andrey Kostin via juniper-nsp

Nathan Ward wrote 2021-08-10 20:53:


Yeah the FreeRADIUS docs are hard to navigate - but getting better.

You want to look in the example configs. Start from an understanding
of what you want the RADIUS messages to have in them. You can do this
with just a static Users file in your test environment with just one
subscriber, and then look at moving that in to sqlippool or similar,
with whatever logic you need to get those attributes in to the right
place. Framed-IP-Address obviously, but maybe also Framed-IP-Netmask
etc. - better to experiment with the attributes and get them right
without the sqlippool complexity.

https://wiki.freeradius.org/modules/Rlm_sqlippool This is alright (it
appears outdated on the surface, but is up to date I think)
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-available/sqlippool
This is the example config and has some more detail than the above.
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-config/sql/ippool/postgresql/queries.conf
This is useful to understand some of the internals



Thanks for the links. I'm pretty well familiar with the radius users file syntax, but the freeradius module calls puzzle me a little.




A good setup for IPv4 DHCP relay is:

lo0 addresses on BNG-1
192.168.0.1/32 - use as giaddr
10.0.0.1/32
10.0.1.1/32
10.0.2.1/32
10.0.3.1/32

lo0 addresses on BNG-2
192.168.0.2/32 - use as giaddr
10.0.0.1/32
10.0.1.1/32
10.0.2.1/32
10.0.3.1/32

DHCP server:
Single shared network over all these subnets:
Subnet 192.168.0.0/24 - i.e. covering giaddrs
  No pool
Subnet 10.0.0.0/24
  pool 10.0.0.2-254
Subnet 10.0.1.0/24
  pool 10.0.1.2-254
Subnet 10.0.2.0/24
  pool 10.0.2.2-254
Subnet 10.0.3.0/24
  pool 10.0.3.2-254

This causes your giaddrs to be in the shared network with the subnets
you want to assign addresses from (i.e. the ones with pools), so the
DHCP server can match them up, but, with no pool in the 192.168.0.0/24
subnet you don’t assign addresses out of that network.

Otherwise you have to have a unique /32 for each BNG in each subnet
and you burn lots of addresses that way.
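On the Junos side, the BNG-1 loopback addressing above translates roughly to the following (a sketch; marking the giaddr address as primary is an assumption here):

set interfaces lo0 unit 0 family inet address 192.168.0.1/32 primary
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set interfaces lo0 unit 0 family inet address 10.0.1.1/32
set interfaces lo0 unit 0 family inet address 10.0.2.1/32
set interfaces lo0 unit 0 family inet address 10.0.3.1/32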


How is a potential IP conflict handled in this case if the BNGs are connected to a switched LAN segment? In my case, with vlan-per-customer, it can happen that a client requests a lease and gets replies from the same IP but different MACs. The BNGs can also see each other and report an IP conflict.


Kind regards,

Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-10 Thread Andrey Kostin via juniper-nsp

Andrey Kostin via juniper-nsp wrote 2021-08-10 16:44:


So far, I started to play with KEA dhcp server and stumbled on "shared
subnet" with multiple pools topic. I have two clients connected. The
first pool has only one IP available to force the client who comes
last to use the second pool. The first client successfully gets .226
IP from the first pool, but the second client fails.


Found the problem with the KEA config; I didn't read the docs thoroughly and missed the "shared-networks" statement. It works this way:


"shared-networks": [
{
"name": "ftth",
"relay": {
"ip-addresses": [ "Y.Y.Y.Y" ]
},


 "subnet4": [
 {
 "subnet": "X.X.X.224/28",
 "pools": [ { "pool": "X.X.X.226 - X.X.X.226" } ],

 "option-data": [
 {
 // For each IPv4 subnet you most likely need to 
specify at

 // least one router.
 "name": "routers",
 "data": "X.X.X.225"
 }
 ]
 },
 {
 "subnet": "X.X.X.240/28",
 "pools": [ { "pool": "X.X.X.242 - X.X.X.245" } ],

 "option-data": [
 {
 // For each IPv4 subnet you most likely need to 
specify at

 // least one router.
 "name": "routers",
 "data": "X.X.X.241"
 }
 ]
}
],

However, it puzzled me why KEA didn't send anything in response to the BNG, but it's a different topic.
Meanwhile I set a unique IP on lo0 as primary and now it appears in giaddr. On demux interfaces Junos uses an IP that matches the subnet of the leased IP. So it looks like in this case the preferred IP setting doesn't affect the address selection process in any way.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-10 Thread Andrey Kostin via juniper-nsp

Nathan Ward via juniper-nsp wrote 2021-08-10 08:00:
On 10/08/2021, at 10:40 PM, Bjørn Mork via juniper-nsp 
 wrote:


Thank you Nathan and Bjorn for your explanations, they are very helpful!
I'll definitely look at ip pool management in RADIUS. I'm struggling to 
find a good freeradius documentation source, could you give some links?


So far, I started to play with KEA dhcp server and stumbled on "shared 
subnet" with multiple pools topic. I have two clients connected. The 
first pool has only one IP available to force the client who comes last 
to use the second pool. The first client successfully gets .226 IP from 
the first pool, but the second client fails.


My config has this:

"subnet4": [
{
"subnet": "X.X.X.224/28",
"pools": [ { "pool": "X.X.X.226 - X.X.X.226" } ],
"relay": {
"ip-addresses": [ "X.X.X.225" ]
},
"option-data": [
{
// For each IPv4 subnet you most likely need to 
specify at

// least one router.
"name": "routers",
"data": "X.X.X.225"
}
]
},
{
"subnet": "X.X.X.240/28",
"pools": [ { "pool": "X.X.X.242 - X.X.X.245" } ],
"relay": {
"ip-addresses": [ "X.X.X.225" ]
},
"option-data": [
{
// For each IPv4 subnet you most likely need to 
specify at

// least one router.
"name": "routers",
"data": "X.X.X.241"
}
],

In the log I get this:
Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN  
ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 d0:76:8f:a7:43:ca], cid=[no 
info], tid=0x485c2228: failed to allocate an IPv4 address in the subnet 
X.X.X.224/28, subnet-id 1, shared network
Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN  
ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 d0:76:8f:a7:43:ca], cid=[no info], 
tid=0x485c2228: failed to allocate an IPv4 address after 1 attempt(s)
Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN  
ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 d0:76:8f:a7:43:ca], cid=[no 
info], tid=0x485c2228: Failed to allocate an IPv4 address for client 
with classes: ALL, VENDOR_CLASS_.dslforum.org, UNKNOWN


Looks like KEA doesn't consider the second subnet as belonging to the same shared network despite the matching giaddr. I followed the example in the KEA documentation and expected that a relay address matching giaddr should do the trick, but I feel maybe the subnets have to be in the same bracket; however, I don't know how to put them there. At one moment I saw addresses leased from both pools, but later it returned back to this. Maybe it was a transient state when the previous lease hadn't expired yet, I'm not sure.




Note that you also must have a unique address as the primary address
on the interface as the giaddr - which the the centralised dhcp server
talks to. If that giaddr is shared across BNGs, your replies will go
to the wrong place a large % of the time, and not get to the
subscriber.
The giaddr does not need to be an address in any of the subnets you
want to hand out addresses in - in isc dhcpd, you can configure the
giaddr in a subnet as part of the “shared network” you want to hand
out addresses from, which if you have a lot of BNGs saves you a
handful of addresses you can give to customers.


Good point, thanks. I find the Juniper documentation on primary and preferred IPs very confusing; for me it's always a try-and-fail method to find a working combination. Even more confusing, a few years ago I had a TAC case opened regarding the meaning of the preferred address for IPv6 assignment to a pppoe subscriber, and I was told by TAC that it's not supported for IPv6 at all. I think it changed in recent releases.
For example, there is a unique IP on lo0 that is used as router-id etc., and also there should be one or more IPs that match subnets in the address pools. In the dynamic profile the address is specified this way:
unnumbered-address "$junos-loopback-interface" preferred-source-address "$junos-preferred-source-address"
Currently I have neither primary nor preferred specified on lo0, and .225 is somehow selected.
In my understanding the preferred-source-address has to match a subnet in the address pool, otherwise it will fail to assign an address. And it will also be used as giaddr in this case. Which address should be primary and which preferred in this case?


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-09 Thread Andrey Kostin via juniper-nsp

Bjørn Mork via juniper-nsp wrote 2021-08-06 15:27:

Thanks for your reply.


Probably stupid question, but here goes... How does a central server
make the IP usage more effective?  Are you sharing pools between
routers?


Yes, we're going to have at least two routers as BNGs, and I'm trying to find a way to not lock up IP addresses if they aren't needed.



In any case, you can do that with a sufficiently smart RADIUS server
too.  You don't have to let JUNOS manage the address pools even if it 
is

providing the DHCP frontend.


I understand that it could be an option, but for the vlan-per-customer model radius authentication isn't really needed for DHCP clients. Auth is done for a parent VLAN-demux interface, so for DHCP sessions the BNG will send only accounting. In this case it would require developing a "smart-enough" radius backend. If there is any solution already available I'd definitely look at it, but I'd try to avoid building a homebrew solution.



IMHO, having the DHCP frontend on the edge makes life so much easier.
Building a sufficiently redundant and robust centralized DHCP service 
is
hard.  And the edge router still has to do most of the same work 
anyway,

relaying broadcasts and injecting access routes.  The centralized DHCP
server just adds an unneccessary single point of failure.


I agree that it's a complication, but IMO it's a reasonable tradeoff for effective IP space usage. For relatively big IP pools it would be a sufficient saving. From the KEA DHCP server documentation I see that different scenarios for HA are supported, so some redundancy can be achieved.


Another question that puzzles me is how to use multiple discontinuous pools with a DHCP server. With the Junos internal DHCP server I can link DHCP pools in the same way as for PPPoE and just assign an additional GW IP to lo0. With that, Junos takes care of finding an available IP in the pools and uses the proper GW address. In the case of an external DHCP server, the router has to insert the relay option, but how can it choose which subnet to use if there is more than one available? This problem should also exist for big cable segments, although for a cable interface the IP addresses are directly configured on the interface, while for a Junos BNG the customer-facing interface is unnumbered.


Kind regards,

Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Bjørn Mork via juniper-nsp wrote 2021-08-06 12:38:

Andrey Kostin via juniper-nsp  writes:


What DHCP server do you use/would recommend to deploy for subscriber
management?


The one in JUNOS. Using RADIUS as backend.



Thanks, currently using it but looking for a central server for more 
effective IP usage.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Jerry Jones wrote 2021-08-06 09:37:

Strongly suggest having active lease query or bulk active lease query

I believe kea has this support

Jerry Jones​


Thanks for the reply, Jerry.
In my understanding active leasequery can be run between routers, so it might not be needed on the DHCP server, am I correct?
An interesting question is what happens if we have two routers with synchronized DHCP bindings: will DHCP demux interfaces be created on the secondary router based on that? My guess is no, but I need to test it. If traffic then switches from the primary to the secondary router, will the secondary be able to pass IP traffic right away, or will it have to wait for the next DHCP packet from a client to create the demux interface?


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Hi Juniper-NSP community,

What DHCP server do you use/would you recommend deploying for subscriber management? Preferably packaged for CentOS. Required features are IPv4, IPv6 IA_NA, and IPv6 IA_PD. Active leasequery support is desirable but optional.


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VRRP on VXLAN enviroment

2021-07-23 Thread Andrey Kostin via juniper-nsp

Cristian Cardoso via juniper-nsp wrote 2021-07-19 14:15:

Hi
Thanks for the tip, I'll set it up here.



Are you trying to set up the MX80 as an end-host, without including it in EVPN? If so, then you can extend EVPN to the MX80 and run a virtual-gateway from it. No need for VRRP in this case.
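A minimal sketch of what that could look like on an IRB (addresses are made up):

set interfaces irb unit 100 family inet address 10.0.100.2/24 virtual-gateway-address 10.0.100.1
set interfaces irb unit 100 family inet6 address 2001:db8:100::2/64 virtual-gateway-address 2001:db8:100::1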


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp