Yeah, I had the feeling I would break those MXs.

    At this point it is worth rebuilding our vMX lab to test the IPFIX variant...

    Thanks for the input.


    As for routing, we have a pretty good mix of T1/T2 providers and we rarely drop sessions, so uptime is pretty good...  And that's why we have a pair of MX960s coming anytime this year.


    PS: Unrelated quote - yeah, fat fingers, sorry list.

-----
Alain Hebert                                aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770     Beaconsfield, Quebec     H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net    Fax: 514-990-9443

On 04/30/18 19:41, Olivier Benghozi wrote:
Hi Alain,

While you seem to already be kind of suicidal (5 full-table peers on an 
MX104), on an MX you must not use Netflow v9 (CPU based) but inline IPFIX 
(Trio / PFE based).
I suppose Netflow v9 on an MX104 could quickly become an interesting horror 
story with real traffic due to its ridiculously slow CPU, by the way.
With inline IPFIX it should just take some more RAM, and FIB updates could be a 
bit slower.

By the way, on an MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80) in 
the chassis hierarchy, but «afeb»; so you can remove your fpc line and fix your 
tfeb line.
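
For example, something like this (a sketch assuming slot 0 and your existing 
sampling instance name):
set chassis afeb slot 0 sampling-instance NETFLOW-SI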

So you'll need something like this in services, instead of version9:
set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
set services flow-monitoring version-ipfix template ipv4 ipv4-template
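
If you want explicit refresh timers rather than the defaults, that could look 
like this (the values just mirror the ones from your v9 template, as an example):
set services flow-monitoring version-ipfix template ipv4 template-refresh-rate seconds 15
set services flow-monitoring version-ipfix template ipv4 option-refresh-rate seconds 25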

And these too, to allocate some memory for the flows in the Trio and to 
define how it speaks with the collector:
set chassis afeb slot 0 inline-services flex-flow-sizing
set forwarding-options sampling instance NETFLOW-SI family inet output inline-jflow source-address a.b.c.d
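
The flow-server line itself will presumably need to reference the IPFIX 
template instead of the v9 one too, along these lines:
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> version-ipfix template ipv4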

Of course you'll remove the line with «output flow-server <snip> source <Mgmt>».
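
Once committed, you can sanity-check that the PFE is exporting flows with 
something like this (from memory, so verify the exact syntax on your release):
show services accounting flow inline-jflow fpc-slot 0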



I don't see why you quoted the mail from Brijesh Patel about the Routing 
licences, by the way :P


Olivier

On 30 Apr 2018 at 21:34, Alain Hebert <aheb...@pubnix.net> wrote:


Does anyone have any horror stories with something similar to what we're about to do?
     We're planning to turn up the following Netflow config (see below) on our MX104s 
(while we wait for our new MX960 =D). It worked well with everything else (SRX mostly), 
but the "set chassis" lines are making us wonder how likely they are to 
render those systems unstable, short and long term.

     Thanks again for your time.

     PS: We're using Elastiflow, and it's working great for our needs atm.


------ A bit of context

         Model: mx104
         Junos: 16.1R4-S1.3

     They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 load 
average, and 700MB mem free.


------ The Netflow config

set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI

set chassis fpc 1 sampling-instance NETFLOW-SI

set services flow-monitoring version9 template FM-V9 option-refresh-rate seconds 25
set services flow-monitoring version9 template FM-V9 template-refresh-rate seconds 15
set services flow-monitoring version9 template FM-V9 ipv4-template

set forwarding-options sampling instance NETFLOW-SI input rate 1 run-length 0
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> port 2055
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> source <Mgmt>
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <snip> version9 template FM-V9
set forwarding-options sampling instance NETFLOW-SI family inet output inline-jflow source-address <Mgmt>

set interfaces <X> unit <Y> family inet sampling input
set interfaces <X> unit <Y> family inet sampling output
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
