Re: [j-nsp] QFX CRB

2020-11-10 Thread Cristian Cardoso
I'm running Junos 19.1R2.8.
Today I was in contact with Juniper support about a route-table exhaustion
problem, and it seems to be related to the IRQ problem: when the IPv4/IPv6
routes exhaust the LTM table, the IRQ increments begin.
I analyzed the packets on the servers, but found nothing out of the
ordinary.
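
For reference, a rough way to keep an eye on route scale from the CLI is
below; the second command is an assumption on my part (PFE-level table
summaries vary by QFX model and release), so verify it for your platform:

show route summary
show pfe route summary hw    (hardware/PFE table utilization on some
                              Broadcom-based QFX; exact command may differ)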

On Tue, Nov 10, 2020 at 5:47 PM, Nitzan Tzelniker wrote:
>
> Looks ok to me
> Which junos version you are running ? and which devices ?
> Did you capture on the servers to see what is the traffic that causes the 
> high CPU utilization ?
>
>
> On Tue, Nov 10, 2020 at 9:07 PM Cristian Cardoso 
>  wrote:
>>
>> > show configuration protocols evpn
>> vni-options {
>> vni 810 {
>> vrf-target target:888:888;
>> }
>> vni 815 {
>> vrf-target target:888:888;
>> }
>> vni 821 {
>> vrf-target target:888:888;
>> }
>> vni 822 {
>> vrf-target target:888:888;
>> }
>> vni 827 {
>> vrf-target target:888:888;
>> }
>> vni 830 {
>> vrf-target target:888:888;
>> }
>> vni 832 {
>> vrf-target target:888:888;
>> }
>> vni 910 {
>> vrf-target target:666:666;
>> }
>> vni 915 {
>> vrf-target target:666:666;
>> }
>> vni 921 {
>> vrf-target target:666:666;
>> }
>> vni 922 {
>> vrf-target target:666:666;
>> }
>> vni 927 {
>> vrf-target target:666:666;
>> }
>> vni 930 {
>> vrf-target target:666:666;
>> }
>> vni 932 {
>> vrf-target target:666:666;
>> }
>> vni 4018 {
>> vrf-target target:4018:4018;
>> }
>> }
>> encapsulation vxlan;
>> default-gateway no-gateway-community;
>> extended-vni-list all;
>>
>>
>> An example of configuring the interfaces follows, all follow this
>> pattern with more or less IP's.
>> > show configuration interfaces irb.810
>> proxy-macip-advertisement;
>> virtual-gateway-accept-data;
>> family inet {
>> mtu 9000;
>> address 10.19.11.253/22 {
>> preferred;
>> virtual-gateway-address 10.19.8.1;
>> }
>> }
>>
>> On Tue, Nov 10, 2020 at 3:16 PM, Nitzan Tzelniker wrote:
>> >
>> > Can you show your irb and protocols evpn configuration please
>> >
>> > Nitzan
>> >
>> > On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso 
>> >  wrote:
>> >>
>> >> Does anyone use EVPN-VXLAN in the Centrally-Routed and Bridging topology?
>> >> I have two spine switches and two leaf switches, when I use the
>> >> virtual-gateway in active / active mode in the spines, the servers
>> >> connected only in leaf1 have a large increase in IRQ's, generating
>> >> higher CPU consumption in the servers.
>> >> I did a test by deactivating spine2 and leaving only the gateway
>> >> spine1, and the IRQ was zeroed out.
>> >> Did anyone happen to go through this?
>> >> ___
>> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX CRB

2020-11-10 Thread Nitzan Tzelniker
Looks OK to me.
Which Junos version are you running, and which devices?
Did you capture traffic on the servers to see what is causing the
high CPU utilization?


On Tue, Nov 10, 2020 at 9:07 PM Cristian Cardoso <
cristian.cardos...@gmail.com> wrote:

> > show configuration protocols evpn
> vni-options {
> vni 810 {
> vrf-target target:888:888;
> }
> vni 815 {
> vrf-target target:888:888;
> }
> vni 821 {
> vrf-target target:888:888;
> }
> vni 822 {
> vrf-target target:888:888;
> }
> vni 827 {
> vrf-target target:888:888;
> }
> vni 830 {
> vrf-target target:888:888;
> }
> vni 832 {
> vrf-target target:888:888;
> }
> vni 910 {
> vrf-target target:666:666;
> }
> vni 915 {
> vrf-target target:666:666;
> }
> vni 921 {
> vrf-target target:666:666;
> }
> vni 922 {
> vrf-target target:666:666;
> }
> vni 927 {
> vrf-target target:666:666;
> }
> vni 930 {
> vrf-target target:666:666;
> }
> vni 932 {
> vrf-target target:666:666;
> }
> vni 4018 {
> vrf-target target:4018:4018;
> }
> }
> encapsulation vxlan;
> default-gateway no-gateway-community;
> extended-vni-list all;
>
>
> An example of configuring the interfaces follows, all follow this
> pattern with more or less IP's.
> > show configuration interfaces irb.810
> proxy-macip-advertisement;
> virtual-gateway-accept-data;
> family inet {
> mtu 9000;
> address 10.19.11.253/22 {
> preferred;
> virtual-gateway-address 10.19.8.1;
> }
> }
>
> On Tue, Nov 10, 2020 at 3:16 PM, Nitzan Tzelniker wrote:
> >
> > Can you show your irb and protocols evpn configuration please
> >
> > Nitzan
> >
> > On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso <
> cristian.cardos...@gmail.com> wrote:
> >>
> >> Does anyone use EVPN-VXLAN in the Centrally-Routed and Bridging
> topology?
> >> I have two spine switches and two leaf switches, when I use the
> >> virtual-gateway in active / active mode in the spines, the servers
> >> connected only in leaf1 have a large increase in IRQ's, generating
> >> higher CPU consumption in the servers.
> >> I did a test by deactivating spine2 and leaving only the gateway
> >> spine1, and the IRQ was zeroed out.
> >> Did anyone happen to go through this?
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Rob Foehl

On Tue, 10 Nov 2020, Robert Raszuk wrote:


> But what seems weird is the last statement:
>
> "This has problems with blackholing traffic for long periods in several
> cases,..."
>
> We as an industry solved this problem many years ago, by clearly decoupling
> the notion of connectivity restoration from that of protocol convergence.


Fundamentally, yes -- but not for EVPN DF elections.  Each PE making its 
own decisions about who wins without any round-trip handshake agreement is 
the root of the problem, at least when coupled with all of the fun that 
comes with layer 2 flooding.


There's also no binding between whether a PE has actually converged and 
when it brings up IRBs and starts announcing those routes, which leads to 
a different sort of blackholing.  Or in the single-active case, whether 
the IRB should even be brought up at all, which leads to some really dumb 
traffic paths.  (Think layer 3 via P -> inactive PE -> same P, different 
encapsulation -> active PE -> layer 2 segment, for an example.)



> I think this would be the recommended direction, rather than mangling the
> BGP code to optimize here while at the same time causing new, maybe more
> severe, issues somewhere else. Sure, per-SAFI refresh should be the norm,
> but I don't think that is the main issue here.


Absolutely.  The reason for the concern here is that the output queue 
priorities would be sufficient to work around the more fundamental flaws, 
if not for the fact that they're largely ineffective in this exact case.


-Rob
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Rob Foehl

On Tue, 10 Nov 2020, Gert Doering wrote:


> Can you do the EVPN routes on a separate session (different loopback on
> both ends, dedicated to EVPN-afi-only BGP)?  Or separate RRs?
>
> Yes, this is not what you're asking, just a wild idea to make life
> better :-)


Not that wild -- I've already been pinning up EVPN-only sessions between 
adjacent PEs to smooth out the DF elections where possible.  Discrete 
sessions over multiple loopback addresses also work, at the cost of extra 
complexity.
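
For anyone wanting to do the same, a minimal sketch of the dedicated-loopback
variant looks roughly like this (group names and the 192.0.2.x addresses are
made up for illustration):

set interfaces lo0 unit 0 family inet address 192.0.2.1/32
set interfaces lo0 unit 0 family inet address 192.0.2.101/32
set protocols bgp group IBGP-INET type internal
set protocols bgp group IBGP-INET local-address 192.0.2.1
set protocols bgp group IBGP-INET family inet unicast
set protocols bgp group IBGP-INET neighbor 192.0.2.2
set protocols bgp group IBGP-EVPN type internal
set protocols bgp group IBGP-EVPN local-address 192.0.2.101
set protocols bgp group IBGP-EVPN family evpn signaling
set protocols bgp group IBGP-EVPN neighbor 192.0.2.102

That way the EVPN AFI gets its own session, and its routes are not queued
behind a full inet table walk on that session.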


At some point that starts to look like giving up on RRs, though -- which 
I'd rather avoid, they're kinda useful :)


-Rob
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Robert Raszuk
>
> Can you do the EVPN routes on a separate session (different loopback on
> both ends, dedicated to EVPN-afi-only BGP)?


Separate sessions would help if the TCP socket were the real issue, but here
it clearly is not.


> Or separate RRs?
>

Sure, that may help. In fact, even a separate RPD daemon on the same box may
help :)

But what seems weird is the last statement:

"This has problems with blackholing traffic for long periods in several
cases,..."

We as an industry solved this problem many years ago, by clearly decoupling
the notion of connectivity restoration from that of protocol convergence.

IMO protocols can take as long as they like to "converge" after a bad or good
network event, yet connectivity restoration upon any network event within a
domain (RRs were brought up as an example) should take at most hundreds of
milliseconds -- clearly sub-second.

How:

- The RIB tracks next hops; when they go down (known via fast IGP flooding)
or their metric changes, paths with such a next hop are either removed or
best path is rerun.
- The data plane has precomputed backup paths, and switchover happens in PIC
fashion, in parallel to (and independent of) any control-plane work -- a
rough Junos sketch follows below.
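
In Junos terms, a minimal sketch of that direction (knob names worth
double-checking for your platform and release; VRF-A is a made-up name):

set routing-options forwarding-table indirect-next-hop
    (hierarchical FIB, so a next-hop change does not rewrite every prefix)
set routing-instances VRF-A routing-options protect core
    (BGP PIC Edge: precompute backup paths for the VRF's BGP routes)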

I think this would be the recommended direction, rather than mangling the BGP
code to optimize here while at the same time causing new, maybe more severe,
issues somewhere else. Sure, per-SAFI refresh should be the norm, but I don't
think that is the main issue here.

Thx,
R.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Gert Doering
Hi,

On Tue, Nov 10, 2020 at 01:26:09PM -0500, Rob Foehl wrote:
> Ah, maybe this is the sticking point: on a route reflector with an 
> RE-S-X6-64 carrying ~10M inet routes and ~10K evpn routes, a new session 
> toward an RR client PE needing to be sent ~1.6M inet routes (full table, 
> add-path 2) and maybe ~3K evpn routes takes between 11-17 minutes to get 
> through the initial batch.  The evpn routes only arrive at the tail end of 
> that, and may only preempt around 1000 inet routes in the output queues, 
> as confirmed by TAC.

Can you do the EVPN routes on a separate session (different loopback on
both ends, dedicated to EVPN-afi-only BGP)?  Or separate RRs?

Yes, this is not what you're asking, just a wild idea to make life
better :-)

gert
-- 
"If was one thing all people took for granted, was conviction that if you 
 feed honest figures into a computer, honest figures come out. Never doubted 
 it myself till I met a computer with a sense of humor."
 Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany g...@greenie.muc.de


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX CRB

2020-11-10 Thread Cristian Cardoso
> show configuration protocols evpn
vni-options {
    vni 810 {
        vrf-target target:888:888;
    }
    vni 815 {
        vrf-target target:888:888;
    }
    vni 821 {
        vrf-target target:888:888;
    }
    vni 822 {
        vrf-target target:888:888;
    }
    vni 827 {
        vrf-target target:888:888;
    }
    vni 830 {
        vrf-target target:888:888;
    }
    vni 832 {
        vrf-target target:888:888;
    }
    vni 910 {
        vrf-target target:666:666;
    }
    vni 915 {
        vrf-target target:666:666;
    }
    vni 921 {
        vrf-target target:666:666;
    }
    vni 922 {
        vrf-target target:666:666;
    }
    vni 927 {
        vrf-target target:666:666;
    }
    vni 930 {
        vrf-target target:666:666;
    }
    vni 932 {
        vrf-target target:666:666;
    }
    vni 4018 {
        vrf-target target:4018:4018;
    }
}
encapsulation vxlan;
default-gateway no-gateway-community;
extended-vni-list all;


An example of the interface configuration follows; all the IRBs follow this
pattern, with more or fewer IPs.
> show configuration interfaces irb.810
proxy-macip-advertisement;
virtual-gateway-accept-data;
family inet {
    mtu 9000;
    address 10.19.11.253/22 {
        preferred;
        virtual-gateway-address 10.19.8.1;
    }
}

On Tue, Nov 10, 2020 at 3:16 PM, Nitzan Tzelniker wrote:
>
> Can you show your irb and protocols evpn configuration please
>
> Nitzan
>
> On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso 
>  wrote:
>>
>> Does anyone use EVPN-VXLAN in the Centrally-Routed and Bridging topology?
>> I have two spine switches and two leaf switches, when I use the
>> virtual-gateway in active / active mode in the spines, the servers
>> connected only in leaf1 have a large increase in IRQ's, generating
>> higher CPU consumption in the servers.
>> I did a test by deactivating spine2 and leaving only the gateway
>> spine1, and the IRQ was zeroed out.
>> Did anyone happen to go through this?
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Rob Foehl

On Tue, 10 Nov 2020, Jeffrey Haas wrote:


> The thing to remember is that even though you're not getting a given afi/safi
> as front-loaded as you want (absolute front of queue), as soon as we have
> routes for that priority they're dispatched accordingly.


Right, that turns out to be the essential issue -- the output queues 
actually are working as configured, but the AFI/SAFI routes relevant to a 
higher priority queue arrive so late in the process that it's basically 
irrelevant whether they get to cut in line at that point.  Certainly 
wasn't observable to human eyes, had to capture the traffic to verify.
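
(For anyone following along, "as configured" here means roughly the following;
the group name is made up and the particular priorities are just an example:

set protocols bgp group RR-CLIENTS family evpn signaling output-queue-priority expedited
set protocols bgp group RR-CLIENTS family inet unicast output-queue-priority priority 1

i.e. EVPN goes into the expedited queue and inet unicast into one of the 16
numbered queues.)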



> Full table walks to populate the queues take some seconds to several minutes
> depending on the scale of the router.  In the absence of prioritization,
> something like the evpn routes might not go out for most of a minute rather
> than getting delayed some number of seconds until the rib walker has reached
> that table.


Ah, maybe this is the sticking point: on a route reflector with an 
RE-S-X6-64 carrying ~10M inet routes and ~10K evpn routes, a new session 
toward an RR client PE needing to be sent ~1.6M inet routes (full table, 
add-path 2) and maybe ~3K evpn routes takes between 11-17 minutes to get 
through the initial batch.  The evpn routes only arrive at the tail end of 
that, and may only preempt around 1000 inet routes in the output queues, 
as confirmed by TAC.


I have some RRs that tend toward the low end of that range and some that 
tend toward the high end -- and not entirely sure why in either case -- 
but that timing is pretty consistent overall, and pretty horrifying.  I 
could almost live with "most of a minute", but this is not that.


This has problems with blackholing traffic for long periods in several 
cases, but the consequences for DF elections are particularly disastrous, 
given that they make up their own minds based on received state without 
any affirmative handshake: the only possible behaviors are discarding or 
looping traffic for every ethernet segment involved until the routes 
settle, depending on whether the PE involved believes it's going to win 
the election and how soon.  Setting extremely long 20 minute DF election 
hold timers is currently the least worst "solution", as losing traffic for 
up to 20 minutes is preferable to flooding a segment into oblivion -- but 
only just.
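
(The knob in question -- I believe, though the exact statement name and the
permitted range are worth verifying for your platform and release -- is along
the lines of:

set protocols evpn designated-forwarder-election-hold-time <seconds>

which delays running the DF election after an ES comes up so that peers' ES
routes have a chance to arrive first; the RFC 7432 default is 3 seconds.)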


I wouldn't be nearly as concerned with this if we weren't taking 15-20 
minute outages every time anything changes on one of the PEs involved...



[on the topic of route refreshes]


> The intent of the code is to issue the minimum set of refreshes for new
> configuration.  If it's provably not minimum for a given config, there should
> be a PR on that.


I'm pretty sure that much is working as intended, given what is actually 
sent -- this issue is the time spent walking other RIBs that have no 
bearing on what's being refreshed.



> The cost of the refresh in getting routes sent to you is another artifact of
> "we don't keep that state" - at least in that configuration.  This is a
> circumstance where family route-target (RT-Constrain) may help.  You should
> find when using that feature that adding a new VRF with support for that
> feature results in the missing routes arriving quite fast - we keep the state.


I'd briefly looked at RT-Constrain, but wasn't convinced it'd be useful 
here since disinterested PEs only have to discard at most ~10K EVPN routes 
at present.  Worth revisiting that assessment?
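
(Config-wise it's just the following on both the RR and the clients -- group
name made up:

set protocols bgp group RR-CLIENTS family route-target

so the RR only sends routes whose route targets a client has advertised
interest in, and the routes for a newly added VRF's RT should arrive quickly
since that state is kept on the RR.)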


-Rob


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX CRB

2020-11-10 Thread Nitzan Tzelniker
Can you show your irb and protocols evpn configuration, please?

Nitzan

On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso <
cristian.cardos...@gmail.com> wrote:

> Does anyone use EVPN-VXLAN in the Centrally-Routed and Bridging topology?
> I have two spine switches and two leaf switches, when I use the
> virtual-gateway in active / active mode in the spines, the servers
> connected only in leaf1 have a large increase in IRQ's, generating
> higher CPU consumption in the servers.
> I did a test by deactivating spine2 and leaving only the gateway
> spine1, and the IRQ was zeroed out.
> Did anyone happen to go through this?
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP output queue priorities between RIBs/NLRIs

2020-11-10 Thread Jeffrey Haas via juniper-nsp
Rob,


> On Nov 9, 2020, at 9:53 PM, Rob Foehl  wrote:
> 
>> An immense amount of work in the BGP code is built around the need to not 
>> have to keep full state on EVERYTHING.  We're already one of the most 
>> stateful BGP implementations on the planet.  Many times that helps us, 
>> sometimes it doesn't.
>> 
>> But as a result of such designs, for certain kinds of large work it is 
>> necessary to have a consistent work list and build a simple iterator on 
>> that.  One of the more common patterns that is impacted by this is the walk 
>> of the various routing tables.  As noted, we start roughly at inet.0 and go 
>> forward based on internal table order.
> 
> Makes sense, but also erases the utility of output queue priorities when
> multiple tables are involved.  Is there any feasibility of moving the RIB
> walking in the direction of more parallelism, or at least something like
> round robin between tables, without incurring too much overhead / bug
> surface / et cetera?

Recent RPD work has been done toward introducing multiple threads of execution 
on the BGP pipeline.  (Sharding.)  The output queue work is still applicable 
here.

The thing to remember is that even though you're not getting a given afi/safi 
as front-loaded as you want (absolute front of queue), as soon as we have 
routes for that priority they're dispatched accordingly.

Full table walks to populate the queues take some seconds to several minutes 
depending on the scale of the router.  In the absence of prioritization, 
something like the evpn routes might not go out for most of a minute rather 
than getting delayed some number of seconds until the rib walker has reached 
that table.

Perfect?  No. Better, yes.  It was a tradeoff of complexity vs. perfect 
queueing behavior.

> 
>> The primary challenge for populating the route queues in user desired orders 
>> is to move that code out of the pattern that is used for quite a few other 
>> things.  While you may want your evpn routes to go first, you likely don't 
>> want route resolution which is using earlier tables to be negatively 
>> impacted.  Decoupling the iterators for the overlapping table impacts is 
>> challenging, at best.  Once we're able to achieve that, the user 
>> configuration becomes a small thing.
> 
> I'm actually worried that if the open ER goes anywhere, it'll result in
> the ability to specify a table order only, and that's an awfully big
> hammer when what's really needed is the equivalent of the output queue
> priorities covering the entire process.  Some of these animals are more
> equal than others.

Which is why there's 16 queues to work with.  In my usual presentation on this 
feature, "why 16?".  The answer is "we needed at least the usual 3 
(low/medium/high), but then users would fight over arbitrary arrangement of 
those among different tables, and you can't do absolute prioritization of those 
per table because VPN may be least priority for some people and Internet 
highest or vice versa... and also, it should be less than 32 based on available 
bits in the data structure".

Yes, some more equal than others.  Hence the flexibility.

Once there's an ability to adjust the walker, it wouldn't impact the use of the 
queues.  It simply would make sure that things were prioritized earlier.

And, as noted above, we're continuing threading work.  At some point the tables 
may gain additional levels of independence which would obviate an explicit 
feature... maybe.  The core observation I make for a lot of this stuff is 
"There's always too much work to do".

> 
>> I don't recall seeing the question about the route refreshes, but I can 
>> offer a small bit of commentary: The CLI for our route refresh isn't as 
>> fine-grained as it could be.  The BGP extension for route refresh permits 
>> per afi/safi refreshing and honestly, we should expose that to the user.  I 
>> know I flagged this for PLM at one point in the past.
> 
> The route refresh issue mostly causes trouble when bringing new PEs into
> existing instances, and is presumably a consequence of the same behavior:
> the refresh message includes the correct AFI/SAFI, but the remote winds up
> walking every RIB before it starts emitting routes for the requested
> family (and no others).  The open case for the output queue issue has a
> note from 9/2 wherein TAC was able to reproduce this behavior and collect
> packet captures of both the specific refresh message and the long period
> of silence before any routes were sent.

The intent of the code is to issue the minimum set of refreshes for new 
configuration.  If it's provably not minimum for a given config, there should 
be a PR on that.

The cost of the refresh in getting routes sent to you is another artifact of 
"we don't keep that state" - at least in that configuration.  This is a 
circumstance where family route-target (RT-Constrain) may help.  You should 
find when using that feature that adding a new VRF with 

[j-nsp] QFX CRB

2020-11-10 Thread Cristian Cardoso
Does anyone use EVPN-VXLAN in the centrally routed bridging (CRB) topology?
I have two spine switches and two leaf switches. When I use the
virtual gateway in active/active mode on the spines, the servers
connected only to leaf1 show a large increase in IRQs, causing
higher CPU consumption on the servers.
As a test I deactivated spine2, leaving only spine1 as the gateway,
and the IRQs dropped to zero.
Has anyone run into this?
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp