[j-nsp] Junos Telemetry Interface

2020-04-09 Thread Colton Conor
Instead of monitoring Juniper equipment by SNMP with 5 minute polling we
would like to use streaming telemetry to monitor the devices in real-time.
This requires the Junos Telemetry Interface.

Looking in the Juniper Feature Explorer, Junos Telemetry Interface is not a
single feature, but rather a whole category in the feature explorer, with
multiple features under it. Which feature am I looking for to be able to
monitor the interfaces in real time and see how much bandwidth flows across
them, similar to SNMP?
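
For per-interface counters similar to what IF-MIB gives over SNMP, the
interface statistics sensor appears to be the relevant feature. A minimal
sketch of the native (UDP/GPB) flavor is below; the collector address, ports,
and names are placeholders, and per-platform support should be confirmed in
Feature Explorer:

services {
    analytics {
        streaming-server my-collector {
            remote-address 192.0.2.100;
            remote-port 3000;
        }
        export-profile my-export {
            local-address 192.0.2.1;
            reporting-rate 30;      # seconds between exports
            format gpb;
            transport udp;
        }
        sensor if-stats {
            server-name my-collector;
            export-name my-export;
            resource /junos/system/linecard/interface/;
        }
    }
}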

And do the ACX platforms only support the "Specify Routing Instance for JTI"
feature?
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread John Kristoff
On Thu, 9 Apr 2020 06:20:00, Liam Farr wrote:

> However I am getting export packet failures.

Some loss of flows being exported may be unavoidable depending on
your configuration and environment.  If you want to see fewer errors
you may just have to sample less frequently.  The numbers reported in
your "accounting errors" don't seem that large.

On my repo page, where the example config is from, you'll see a couple of
images at the bottom that show the difference between the two modes.  I
was aware of the flex mode when I originally did this.  I think at the
time I was under the impression that setting the memory pools manually
offered some desirable predictability.

Looking back at my notes, I think it was this statement from Juniper TAC
that led me to that conclusion: "And regarding flex-flow-sizing; this
configuration results in a first-come-first-serve creation of flows.
Whichever flow comes first, that is allowed to occupy the flow-table if
there is space in the table. Otherwise, the flow is dropped and an
error count is created."  Rightly or wrongly, I recall wanting to ensure
some reasonable amount of memory for both v4 and v6 flows.
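
For reference, the manual alternative to flex-flow-sizing reserves flow-table
units per family. A sketch with hypothetical sizes (on Trio PFEs each unit
is, as far as I recall, 256K flow entries):

chassis {
    fpc 0 {
        inline-services {
            flow-table-size {
                ipv4-flow-table-size 10;    # units reserved for IPv4 flows
                ipv6-flow-table-size 5;     # units reserved for IPv6 flows
            }
        }
    }
}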

John
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Prioritize route advertisement

2020-04-09 Thread Gustavo Santos
Thanks for all the inputs.

Before the change, I had set the VLANs facing the peer to a 1500-byte MTU
on the interface to avoid MTU issues, but I can ask some of these transit
providers to match the IP MTU on their side and check whether the
convergence time improves.
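
Something along these lines on each peering VLAN, for example (a sketch;
interface and unit numbers are placeholders):

interfaces {
    xe-0/0/0 {
        unit 100 {
            family inet {
                mtu 1500;    # match the transit provider's IP MTU
            }
        }
    }
}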

Regards!

On Mon, Apr 6, 2020 at 2:06 PM Jeffrey Haas  wrote:

>
>
> > On Apr 6, 2020, at 12:59 PM, adamv0...@netconsultings.com wrote:
> >
> >> Gustavo Santos
> >> Sent: Monday, April 6, 2020 4:06 PM
> >>
> >> Is there a way to prioritize advertisement on some BGP sessions above
> >> others? I tried the
> >> https://www.juniper.net/documentation/en_US/junos/topics/topic-
> >> map/bgp-route-prioritization.html
> >>
> > The feature you mentioned is used to say first send L3VPN routes before
> > L2VPN routes when talking to a given peer.
> > I'm not aware of any such mechanism to say peer 192.0.2.1 should be
> served
> > first and then peer 192.0.2.2 should be next, etc...
>
> Junos roughly serves the back queue for the peer group based on the
> subsets of peers that get common updates vs. the peers that are ready to
> write.  In the presence of a large back queue for a peer in the group, we
> might appear sluggish to service some of the more in sync peers.  However,
> what will tend to happen is those slower and more out of sync peers will
> write block and next round we just move along to do other work.
>
> >
> >> The question is if there is a way to work around this change that
> > behavior?
> >>
> > Wondering if it was due to some slow peer(s).
> > Not sure if Juniper BGP works similarly to Cisco BGP in this area (but
> > considering the differences in update groups between the two, it might
> > very well NOT be the case).
> > But Cisco has the following to address slow peers holding down the whole
> > update group; more info at:
>
> Yeah, we don't need that.  We just form optimal micro-groups for the peers
> that are in the same sync level at a given part of the queue and are ready
> to write.
>
> The prior email on path MTU issues will account for all sorts of
> headaches.  BGP is a TCP application, and things that manifest in a fashion
> similar to lost packets will do terrible things to throughput.
>
> -- Jeff
>
>
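
For the TCP/path-MTU angle, the usual Junos levers would be along these
lines; a sketch, where the group name and MSS value are hypothetical:

protocols {
    bgp {
        group transit {
            mtu-discovery;    # let the BGP TCP session discover the path MTU
            tcp-mss 1400;     # alternatively, clamp the MSS to a known-safe value
        }
    }
}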
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Richard McGovern via juniper-nsp
--- Begin Message ---
By any chance does your config/design include LSYS?  If yes, export could/will
have issues, but at the same time this combination is not officially supported
together to start with.  So if you are trying to use these together, you are
on your own.

https://kb.juniper.net/InfoCenter/index?page=content&id=KB27035&actp=RSS

FYI Only, Rich

Richard McGovern
Sr Sales Engineer, Juniper Networks 
978-618-3342
 
I’d rather be lucky than good, as I know I am not good
I don’t make the news, I just report it
 

On 4/9/20, 3:35 AM, "Timur Maryin"  wrote:



On 09-Apr-20 08:20, Liam Farr wrote:
> Hi,
> 
> changed to a loopback address on one of the VRF's,

...

> Not sure specifically what I am doing wrong here, it seems to be collecting
> the flows ok, but exporting is the issue?
> 
> I'd appreciate any advice or pointers thanks :)


maybe this?


https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/routing-instance-edit-forwarding-options-sampling.html




--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] [c-nsp] how many IGP routes is too many?

2020-04-09 Thread Mark Tinka



On 9/Apr/20 10:55, adamv0...@netconsultings.com wrote:

> Right, but there are a bunch of techniques to address the FIB scaling problem
> of MPLS all the way to access layer (cell tower) deployments.

Agreed.

The goal is always to implement the least complex one (bearing in mind,
of course, that "complex" means different things to different people).

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] [c-nsp] how many IGP routes is too many?

2020-04-09 Thread adamv0025
> Mark Tinka
> Sent: Wednesday, April 8, 2020 12:55 PM
> 
> On 5/Apr/20 12:25, adamv0...@netconsultings.com wrote:
> 
> > Nowadays however, in times of FRR (-well that one has u-loops), but
> > for instance ti-LFA or classical RSVP-TE Bypass... and BGP PIC "Core",
> > I'd say the SPF calculation time is becoming less and less relevant.
> > So in current designs I'm tuning IGPs for egress edge-node protection
> > only, i.e. for generating LSP/LSA ASAP and then propagating it to all
> > other ingress edge-nodes as fast as possible so that BGP PIC "Core"
> > can react to the missing loopback and switch to an alternate egress
> > edge-node.(reactions to core-node failures or link-failures are IGP
> > agnostic and driven solely by loss of light or BFD/LFM...).
> > *Even in the egress edge-node protection case there are now RSVP-TE
> > and SR-TE features addressing this.
> >
> > So I guess only the mem and cpu load and ultimately stability of the
> > RPD (or IGP process) is the remaining concern in extreme load cases (not
> the
> > convergence though).
> 
> For me, I'd say small FIBs in a network that runs MPLS all the way into
> the Access (where the small FIBs reside) are the biggest risk to scaling
> out the IGP. On those boxes, CPU and memory aren't the issue (and they
> are nowhere near as powerful as the chassis' in the data centre), it's
> the FIB slots.
> 
Right, but there are a bunch of techniques to address the FIB scaling problem
of MPLS all the way to access layer (cell tower) deployments.
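
One illustrative example of such a technique is FIB suppression via a
forwarding-table export policy, so a small access box programs only the
service loopbacks it needs. A sketch; the prefix block and policy name are
placeholders, and whether this is safe depends on the design:

policy-options {
    policy-statement FIB-LOOPBACKS-ONLY {
        term pe-loopbacks {
            from {
                route-filter 192.0.2.0/24 prefix-length-range /32-/32;    # PE loopbacks
            }
            then accept;
        }
        then reject;    # keep everything else out of the forwarding table
    }
}
routing-options {
    forwarding-table {
        export FIB-LOOPBACKS-ONLY;
    }
}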

adam

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Liam Farr
Seems I can't just drop the forwarding-options into the VRF verbatim;

# show | compare
[edit]
-  forwarding-options {
-      sampling {
-          sample-once;
-          instance {
-              default {
-                  input {
-                      rate 100;
-                  }
-                  family inet {
-                      output {
-                          flow-server 103.247.xxx.xxx {
-                              port 6363;
-                              version-ipfix {
-                                  template {
-                                      v4;
-                                  }
-                              }
-                          }
-                          inline-jflow {
-                              source-address 43.252.xxx.xxx;
-                          }
-                      }
-                  }
-                  family inet6 {
-                      output {
-                          flow-server 103.247.xxx.xxx {
-                              port 6363;
-                              version-ipfix {
-                                  template {
-                                      v6;
-                                  }
-                              }
-                          }
-                          inline-jflow {
-                              source-address 43.252.xxx.xxx;
-                          }
-                      }
-                  }
-              }
-          }
-      }
-  }
[edit routing-instances myvrf_Intl]
+  forwarding-options {
+      sampling {
+          sample-once;
+          instance {
+              default {
+                  input {
+                      rate 100;
+                  }
+                  family inet {
+                      output {
+                          flow-server 103.247.xxx.xxx {
+                              port 6363;
+                              version-ipfix {
+                                  template {
+                                      v4;
+                                  }
+                              }
+                          }
+                          inline-jflow {
+                              source-address 43.252.xxx.xxx;
+                          }
+                      }
+                  }
+                  family inet6 {
+                      output {
+                          flow-server 103.247.xxx.xxx {
+                              port 6363;
+                              version-ipfix {
+                                  template {
+                                      v6;
+                                  }
+                              }
+                          }
+                          inline-jflow {
+                              source-address 43.252.xxx.xxx;
+                          }
+                      }
+                  }
+              }
+          }
+      }
+  }

[edit]
# commit check
[edit chassis fpc 0 sampling-instance]
  'default'
Referenced sampling instance does not exist
[edit interfaces xe-0/1/7 unit 442 family inet]
  'sampling'
Requires forwarding-options sampling or packet-capture config
[edit interfaces xe-0/1/7 unit 442 family inet6]
  'sampling'
Requires forwarding-options sampling or packet-capture config
error: configuration check-out failed: (statements constraint check failed)
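
The first error is the chassis-level binding: [edit chassis fpc 0
sampling-instance] resolves against an instance defined at the global
[edit forwarding-options sampling] level, not one inside a routing instance,
so that reference (as in the original config) has to keep a global target:

chassis {
    fpc 0 {
        sampling-instance default;    # must name an instance at [edit forwarding-options sampling]
    }
}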


This would commit (i.e. not removing the base forwarding-options);

# show | compare
[edit routing-instances myvrf_Intl]
+  forwarding-options {
+      sampling {
+          instance {
+              default {
+                  input {
+                      rate 100;
+                  }
+                  family inet {
+                      output {
+                          flow-server 103.247.xxx.xxx {
+                              port 6363;
+                              version-ipfix {
+                                  template {
+                                      v4;
+                                  }
+                              }
+                          }
+                          inline-jflow {
+                              source-address 43.252.xxx.xxx;
+                          }
+                      }
+                  }
+                  family inet6 {
+                      output {
+                          flow-server 103.247.xxx.xxx {
+                              port 6363;
+                              version-ipfix {
+                                  template {
+                                      v6;
+                                  }
+                              }
+                          }
+                          inline-jflow {
+                              source-address 43.252.xxx.xxx;
+                          }
+                      }
+                  }
+

Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Tarko Tikan

hey,


> To be honest, we are on the old method and don't notice any badness. One
> of those "If it ain't broke" times :-).


If you have your tables sized correctly then why would you notice 
anything? They are the same tables after all.


I was just pointing out that if someone is distributing a template for
new users, perhaps it should include the newer automatic sizing (which was
not available at first, so it's reasonable to keep the manual sizing if you
started before it became available).
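
That is, for new configs, the automatic sizing knob already shown in
Liam's config:

chassis {
    fpc 0 {
        inline-services {
            flex-flow-sizing;    # let the PFE size the v4/v6 flow tables on demand
        }
    }
}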


--
tarko
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Timur Maryin via juniper-nsp
--- Begin Message ---



On 09-Apr-20 08:20, Liam Farr wrote:

> Hi,
>
> changed to a loopback address on one of the VRF's,

...

> Not sure specifically what I am doing wrong here, it seems to be collecting
> the flows ok, but exporting is the issue?
>
> I'd appreciate any advice or pointers thanks :)



maybe this?

https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/routing-instance-edit-forwarding-options-sampling.html

--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Mark Tinka


On 8/Apr/20 18:17, Tarko Tikan wrote:

> AFAIR no. You can verify via "show jnh 0 inline-services
> flow-table-info" from the PFE shell.
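
For reference, one way to reach the PFE shell on an MX (a sketch; prompts
and availability vary by platform and release):

start shell pfe network fpc0
show jnh 0 inline-services flow-table-info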

Okay.

To be honest, we are on the old method and don't notice any badness. One
of those "If it ain't broke" times :-).

Mark.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Netflow config for MX204

2020-04-09 Thread Liam Farr
Hi,

I'm using the config example at
https://github.com/jtkristoff/junos/blob/master/flows.md (many thanks) with
a couple of exceptions.

However I am getting export packet failures.

Exceptions / changes from the example are the use of *flex-flow-sizing* and
*sampling on the interface* rather than a firewall filter.

Config is as follows;

chassis {
    fpc 0 {
        sampling-instance default;
        inline-services {
            flex-flow-sizing;
        }
    }
}
services {
    flow-monitoring {
        version-ipfix {
            template v4 {
                ipv4-template;
            }
            template v6 {
                ipv6-template;
            }
        }
    }
}
forwarding-options {
    sampling {
        sample-once;
        instance {
            default {
                input {
                    rate 100;
                }
                family inet {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v4;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
                family inet6 {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v6;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
            }
        }
    }
}
interfaces {
    xe-0/1/7 {
        unit 442 {
            vlan-id 442;
            family inet {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 111.69.xxx.xxx/30;
            }
            family inet6 {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 2406:::::/64;
            }
        }
    }
}

For the source address I had originally used the internal management
network address on fxp0, but I was receiving no flows at the collector, so I
changed to a loopback address on one of the VRFs. Both the internal
management IP and the VRF loopback have reachability to the flow-server
address.

The below is the error output;

show services accounting errors inline-jflow fpc-slot 0
  Error information
FPC Slot: 0
Flow Creation Failures: 0
Route Record Lookup Failures: 0, AS Lookup Failures: 0
Export Packet Failures: 137
Memory Overload: No, Memory Alloc Fail Count: 0

IPv4:
IPv4 Flow Creation Failures: 0
IPv4 Route Record Lookup Failures: 0, IPv4 AS Lookup Failures: 0
IPv4 Export Packet Failures: 134

IPv6:
IPv6 Flow Creation Failures: 0
IPv6 Route Record Lookup Failures: 0, IPv6 AS Lookup Failures: 0
IPv6 Export Packet Failures: 3

show services accounting flow inline-jflow fpc-slot 0
  Flow information
FPC Slot: 0
Flow Packets: 7976, Flow Bytes: 1129785
Active Flows: 83, Total Flows: 2971
Flows Exported: 1814, Flow Packets Exported: 1477
Flows Inactive Timed Out: 1020, Flows Active Timed Out: 1725
Total Flow Insert Count: 1246

IPv4 Flows:
IPv4 Flow Packets: 7821, IPv4 Flow Bytes: 951645
IPv4 Active Flows: 82, IPv4 Total Flows: 2912
IPv4 Flows Exported: 1776, IPv4 Flow Packets exported: 1439
IPv4 Flows Inactive Timed Out: 1003, IPv4 Flows Active Timed Out: 1687
IPv4 Flow Insert Count: 1225

IPv6 Flows:
IPv6 Flow Packets: 155, IPv6 Flow Bytes: 178140
IPv6 Active Flows: 1, IPv6 Total Flows: 59
IPv6 Flows Exported: 38, IPv6 Flow Packets Exported: 38
IPv6 Flows Inactive Timed Out: 17, IPv6 Flows Active Timed Out: 38
IPv6 Flow Insert Count: 21

show services accounting status inline-jflow fpc-slot 0
  Status information
FPC Slot: 0
IPV4 export format: Version-IPFIX, IPV6 export format: Version-IPFIX
BRIDGE export format: Not set, MPLS export format: Not set
IPv4 Route Record Count: 1698135, IPv6 Route Record Count: 247572, MPLS
Route Record Count: 0
Route Record Count: 1945707, AS Record Count: 167101
Route-Records Set: Yes, Config Set: Yes
Service Status: PFE-0: Steady
Using Extended Flow Memory?: PFE-0: No
Flex Flow Sizing ENABLED?: PFE-0: Yes
IPv4 MAX FLOW Count: 5242884, IPv6 MAX FLOW Count: 5242884
BRIDGE MAX FLOW Count: 5242884, MPLS MAX FLOW Count: 5242884

Not sure specifically what I am doing wrong here, it seems to be collecting
the flows ok, but exporting is the issue?