Re: [j-nsp] Netflow config for MX204

2020-04-14 Thread Nick Schmalenberger via juniper-nsp
--- Begin Message ---
On Sun, Apr 12, 2020 at 02:45:57AM +0200, Mark Tinka wrote:
> 
> 
> On 11/Apr/20 08:04, Nick Schmalenberger via juniper-nsp wrote:
> > I had the same issue: I first tried to export over fxp0, then
> > via my routing instance, and I ended up making a static route in
> > inet6.0 with next-table pointing to the instance table that holds
> > the route into the LAN where my elastiflow collector sits. Flow
> > export over IPv6 also seems to work.
> 
> We just export flows in-band. Just seems simpler, and has been reliable
> for close to 10 years.
> 
> Mark.
>
I am exporting in-band; the next-table route is just so the default
table can reach a port in my routing instance, which holds the
in-band ports. What flow collector are you using? Any tips on the
under-counting? Thanks!
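
For reference, that static route looks roughly like this (the instance
name and collector prefix are made up; yours will differ):

routing-options {
    rib inet6.0 {
        static {
            /* send lookups for the collector LAN from the default
               table over to the instance table */
            route 2001:db8:100::/64 next-table EDGE-VRF.inet6.0;
        }
    }
}
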
-Nick
--- End Message ---


Re: [j-nsp] Netflow config for MX204

2020-04-11 Thread Nick Schmalenberger via juniper-nsp
--- Begin Message ---
On Sat, Apr 11, 2020 at 03:52:53PM +1200, Liam Farr wrote:
> Hi,
> 
> Got things working in the end, thanks everyone for their help and patience.
> 
> Also thanks @John Kristoff especially for the template at
> https://github.com/jtkristoff/junos/blob/master/flows.md it was
> very helpful.
> 
> As I suspected, I was doing something dumb, or rather a combination of the
> dumb.
> 
> 1. I had initially tried to use fxp0 as my export interface; it seems this
> is not supported.
> 2. I then tried to use an interface in a VRF to export the flows; I think
> some additional config may be required for this
> (https://kb.juniper.net/InfoCenter/index?page=content&id=KB28958).
> 3. It's always MTU... I suspect in one of my various config attempts flows
> were being sent, but dropped because of the 1500 MTU on the flow collector
> and a larger MTU on the MX204 interface generating them.
> 
> In the end I set up a new link-net on a new vlan interface attached to
> inet0 between the MX204 and the netflow collector, set the inet mtu to
> 1500, and everything started working.
> 
> 
> Again, thanks everyone for the help; I now have some really interesting
> flow stats to examine :)
> 
>
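
For the archives, that working setup sounds roughly like this on the
MX side (interface, vlan and addressing are made up):

interfaces {
    xe-0/1/0 {
        vlan-tagging;
        unit 100 {
            vlan-id 100;
            family inet {
                /* keep the IP MTU at 1500 so export packets are not
                   dropped on the collector's 1500-byte interface */
                mtu 1500;
                address 192.0.2.1/30;
            }
        }
    }
}
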
What are you using for flow analysis? I have elastiflow set up and
it's showing me some pretty graphs, but it seems to severely
undercount, like showing 5Mbps of traffic when SNMP tells me it's
1-2Gbps. I'm not sure if it's a performance problem on the router
or the elastiflow side, but I'm glad to see someone else configuring
this on an MX204 :) Let me know if you run into that also.
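
One thing I still want to rule out is how the sampling rate is handled
on each side; the knobs I mean are roughly these (rate, names and
addresses are made up, and the template/chassis binding is omitted):

forwarding-options {
    sampling {
        instance {
            FLOW-SAMPLE {
                input {
                    /* 1:1000 sampling; if the collector does not
                       scale byte counts back up by this rate, the
                       graphs can easily read ~1000x low */
                    rate 1000;
                }
                family inet {
                    output {
                        flow-server 192.0.2.10 {
                            port 2055;
                            version-ipfix {
                                template IPV4;
                            }
                        }
                        inline-jflow {
                            source-address 192.0.2.1;
                        }
                    }
                }
            }
        }
    }
}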

I had the same issue: I first tried to export over fxp0, then
via my routing instance, and I ended up making a static route in
inet6.0 with next-table pointing to the instance table that holds
the route into the LAN where my elastiflow collector sits. Flow
export over IPv6 also seems to work.
-Nick
--- End Message ---


Re: [j-nsp] [c-nsp] how many IGP routes is too many?

2020-04-07 Thread Nick Schmalenberger via juniper-nsp
--- Begin Message ---
On Sun, Apr 05, 2020 at 11:25:25AM +0100, adamv0...@netconsultings.com wrote:
> > Pierfrancesco Caci
> > Sent: Thursday, April 2, 2020 8:46 AM
> > 
> > Hello,
> > 
> > is there any recent study about how many IGP (isis or ospf, I don't really
> > care right now) routes are "too many" with current generations of route
> > processors? Think RSP880, NCS55xx and so on on the cisco side and PTX1000,
> > PTX10002, etc on the juniper side.
> > 
> I'm guessing it was around 2012 that one of the Tier 1s asked Cisco for 1M
> IGP routes, so presumably 1M was too much when they tested back then.
> The fact you're asking here tells me you're not one of the big folks trying
> to break a sweat on IGP, so my guess is you'll be fine (assuming the usual
> "don't redistribute the DFZ into the IGP"...).
> 
> But as Saku alluded to, there will be a threshold above which the SPF
> calculation will take a significant time, which might negatively impact
> convergence or CPU load, especially in the case of flapping links (and no
> dampening measures). Back in the day this could have affected your tuning
> of the IGP (or the default settings if you didn't do any tuning), so
> readjusting was needed to get optimal results.
> Nowadays however, in times of FRR (well, that one has u-loops), but with,
> for instance, TI-LFA or classical RSVP-TE bypass... and BGP PIC "Core", I'd
> say the SPF calculation time is becoming less and less relevant.
> So in current designs I'm tuning IGPs for egress edge-node protection only,
> i.e. for generating the LSP/LSA ASAP and then propagating it to all other
> ingress edge-nodes as fast as possible, so that BGP PIC "Core" can react to
> the missing loopback and switch to an alternate egress edge-node. (Reactions
> to core-node failures or link failures are IGP-agnostic and driven solely
> by loss of light or BFD/LFM...)
> *Even in the egress edge-node protection case, there are now RSVP-TE and
> SR-TE features addressing this.
> 
> So I guess only the memory and CPU load, and ultimately the stability of
> RPD (or the IGP process), is the remaining concern in extreme load cases
> (not the convergence though).
> 
>  
> adam 
>
Yes, according to this very interesting experiment,
http://www.blackhole-networks.com/OSPF_overload/, it is mostly
about memory and CPU load :)
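
On the IGP timer tuning adam mentions, the Junos-side knobs are roughly
the spf-options timers; a made-up example:

protocols {
    isis {
        spf-options {
            /* run the first SPF quickly, then back off if the
               topology keeps churning */
            delay 50;
            holddown 2000;
            rapid-runs 3;
        }
    }
}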
-Nick
--- End Message ---