Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-05-01 Thread Vincent Bernat
 ❦  1 May 2018 14:30 GMT, Michael Hare  :

> chassis {
>     afeb {
>         slot 0 {
>             inline-services {
>                 flow-table-size {
>                     ipv4-flow-table-size 7;
>                     ipv6-flow-table-size 7;
>                 }
>             }
>         }
>     }
> }

On 15.1R6, I am using this without any issue:

   afeb {
       slot 0 {
           inline-services {
               flow-table-size {
                   ipv4-flow-table-size 10;
                   ipv6-flow-table-size 5;
               }
           }
       }
   }
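
If memory serves, each flow-table-size unit is 256K flow entries, so this
allocates roughly 2.6M IPv4 and 1.3M IPv6 flows. Something like the following
should show the effective allocation (command quoted from memory, adjust the
slot as needed):

    show services accounting status inline-jflow fpc-slot 0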
-- 
Don't sacrifice clarity for small gains in "efficiency".
- The Elements of Programming Style (Kernighan & Plauger)
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-05-01 Thread Michael Hare
Alain,

Do you want to collect IPv6?  You are probably past 14.X code on the MX104, but I 
observed that I was unable to change the ipv6-flow-table-size at all (including 
after a reboot).  I was able to set flow-table-size in 16.X, but my load average 
on 16.X on the MX104 is pretty terrible; it seems like I got all of the performance 
penalty of threading in 16.X without an additional core unlocked on the MX104 
RE.  Since 14.X is near EOL I didn't harass JTAC.

Thanks and a nod to Olivier: I hadn't seen "flex-flow-sizing" before; it seems 
like that is what I really wanted, not the explicit flow-table-size commands.

Abbreviated code example below.

chassis {
    afeb {
        slot 0 {
            inline-services {
                flow-table-size {
                    ipv4-flow-table-size 7;
                    ipv6-flow-table-size 7;
                }
            }
        }
    }
}
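
In case it helps others, I believe the flex-flow-sizing variant Olivier
mentioned would replace the explicit sizes with something roughly like this
(untested on my side):

chassis {
    afeb {
        slot 0 {
            inline-services {
                flex-flow-sizing;
            }
        }
    }
}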

-Michael

>>-----Original Message-----
>>From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
>>Of Alain Hebert
>>Sent: Tuesday, May 01, 2018 8:23 AM
>>To: juniper-nsp@puck.nether.net
>>Subject: Re: [j-nsp] MX104 and NetFlow - Any horror story to share?
>>
>>     Yeah I had the feeling I would break those MX's.
>>
>>     At this point it is worth it to rebuild our vMX lab to test the
>>IPFIX variant...
>>
>>     Thanks for the input.
>>
>>
>>     As for routing, we have a pretty good mix of T1/T2 providers and we
>>rarely drop sessions, so it is providing pretty good uptime...  And
>>that's why we've got a pair of MX960s coming sometime this year.
>>
>>
>>     PS: Unrelated quote - Yeah fat fingers sorry list.
>>
>>-
>>Alain Hebert    aheb...@pubnix.net
>>PubNIX Inc.
>>50 boul. St-Charles
>>P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
>>Tel: 514-990-5911    http://www.pubnix.net    Fax: 514-990-9443
>>
>>On 04/30/18 19:41, Olivier Benghozi wrote:
>>> Hi Alain,
>>>
>>> While you seem to already be kind of suicidal (5 full-table peers on an
>>MX104), on an MX you must not use netflow v9 (CPU based) but use inline
>>IPFIX (Trio / PFE based).
>>> I suppose that Netflow-v9 on an MX104 could quickly become an interesting
>>horror story with real traffic due to its ridiculously slow CPU, by the way.
>>> With inline IPFIX it should just take some more RAM, and FIB updates could
>>be a bit slower.
>>>
>>> By the way, on MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80)
>>in the chassis hierarchy, but «afeb», so you can remove your fpc line and
>>fix your tfeb line.
>>>
>>> So you'll need something like this in services, instead of version9:
>>> set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
>>> set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
>>> set services flow-monitoring version-ipfix template ipv4 ipv4-template
>>>
>>> And these too, to allocate some memory for the flows in the Trio and
>>to define how it will speak with the collector:
>>> set chassis afeb slot 0 inline-services flex-flow-sizing
>>> set forwarding-options sampling instance NETFLOW-SI family inet output
>>inline-jflow source-address a.b.c.d
>>>
>>> Of course you'll remove the line with «output flow-server  source
>>».
>>>
>>>
>>>
>>> I don't see why you quoted the mail from Brijesh Patel about the Routing
>>licences, by the way :P
>>>
>>>
>>> Olivier
>>>
>>>> On 30 Apr 2018 at 21:34, Alain Hebert <aheb...@pubnix.net> wrote:
>>>>
>>>>
>>>> Does anyone have any horror stories with something similar to what we're about
>>to do?
>>>>  We're planning to turn up the following Netflow config (see below) on
>>our MX104s (while we wait for our new MX960 =D). It worked well with
>>everything else (SRX mostly), but the "set chassis" lines are making us wonder
>>how likely they are to render those systems unstable, in the short
>>and long term.
>>>>
>>>>  Thanks again for your time.
>>>>
>>>>  PS: We're using Elastiflow, and it's working great for our needs atm.
>>>>
>>>>
>>>> -- A bit of context
>>>>
>>>>  Model: mx104
>>>>  Junos: 16.1R4-S1.3
>>>>
>>>>  They're routing about 20Gbps atm, with 5 full

Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-04-30 Thread Olivier Benghozi
Hi Alain,

While you seem to already be kind of suicidal (5 full-table peers on an 
MX104), on an MX you must not use netflow v9 (CPU based) but use inline IPFIX 
(Trio / PFE based).
I suppose that Netflow-v9 on an MX104 could quickly become an interesting horror 
story with real traffic due to its ridiculously slow CPU, by the way.
With inline IPFIX it should just take some more RAM, and FIB updates could be a 
bit slower.

By the way, on MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80) in 
the chassis hierarchy, but «afeb», so you can remove your fpc line and fix your 
tfeb line.

So you'll need something like this in services, instead of version9:
set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
set services flow-monitoring version-ipfix template ipv4 ipv4-template

And these too, to allocate some memory for the flows in the Trio and to 
define how it will speak with the collector:
set chassis afeb slot 0 inline-services flex-flow-sizing
set forwarding-options sampling instance NETFLOW-SI family inet output 
inline-jflow source-address a.b.c.d

Of course you'll remove the line with «output flow-server  source ».
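
For the collector itself, the flow-server line would then look roughly like this
(the collector address is a placeholder here, and the template name must match
the one defined above):
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <collector-ip> port 2055
set forwarding-options sampling instance NETFLOW-SI family inet output flow-server <collector-ip> version-ipfix template ipv4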



I don't see why you quoted the mail from Brijesh Patel about the Routing 
licences, by the way :P


Olivier

> On 30 Apr 2018 at 21:34, Alain Hebert  wrote:
> 
> 
> Does anyone have any horror stories with something similar to what we're about to 
> do?

> We're planning to turn up the following Netflow config (see below) on our 
> MX104s (while we wait for our new MX960 =D). It worked well with everything 
> else (SRX mostly), but the "set chassis" lines are making us wonder how likely 
> they are to render those systems unstable, in the short and long term.
> 
> Thanks again for your time.
> 
> PS: We're using Elastiflow, and it's working great for our needs atm.
> 
> 
> -- A bit of context
> 
> Model: mx104
> Junos: 16.1R4-S1.3
> 
> They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 load 
> average, and 700MB mem free.
> 
> 
> -- The Netflow config
> 
> set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI
> 
> set chassis fpc 1 sampling-instance NETFLOW-SI
> 
> set services flow-monitoring version9 template FM-V9 option-refresh-rate 
> seconds 25
> set services flow-monitoring version9 template FM-V9 template-refresh-rate 
> seconds 15
> set services flow-monitoring version9 template FM-V9 ipv4-template
> 
> set forwarding-options sampling instance NETFLOW-SI input rate 1 run-length 0
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  port 2055
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  source 
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  version9 template FM-V9
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> inline-jflow source-address 
> 
> set interfaces  unit  family inet sampling input
> set interfaces  unit  family inet sampling output

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX104 and NetFlow - Any horror story to share?

2018-04-30 Thread Alain Hebert

    Hi,

Does anyone have any horror stories with something similar to what we're about 
to do?


    We're planning to turn up the following Netflow config (see below) 
on our MX104s (while we wait for our new MX960 =D). It worked well with 
everything else (SRX mostly), but the "set chassis" lines are making us 
wonder how likely they are to render those systems unstable, in the 
short and long term.


    Thanks again for your time.

    PS: We're using Elastiflow, and it's working great for our needs atm.


-- A bit of context

        Model: mx104
        Junos: 16.1R4-S1.3

    They're routing about 20Gbps atm, with 5 full-table peers, ~0.20 
load average, and 700MB mem free.



-- The Netflow config

set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI

set chassis fpc 1 sampling-instance NETFLOW-SI

set services flow-monitoring version9 template FM-V9 option-refresh-rate 
seconds 25
set services flow-monitoring version9 template FM-V9 
template-refresh-rate seconds 15

set services flow-monitoring version9 template FM-V9 ipv4-template

set forwarding-options sampling instance NETFLOW-SI input rate 1 
run-length 0
set forwarding-options sampling instance NETFLOW-SI family inet output 
flow-server  port 2055
set forwarding-options sampling instance NETFLOW-SI family inet output 
flow-server  source 
set forwarding-options sampling instance NETFLOW-SI family inet output 
flow-server  version9 template FM-V9
set forwarding-options sampling instance NETFLOW-SI family inet output 
inline-jflow source-address 


set interfaces  unit  family inet sampling input
set interfaces  unit  family inet sampling output


-
Alain Hebert    aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911    http://www.pubnix.net    Fax: 514-990-9443

On 04/30/18 10:34, Brijesh Patel wrote:

Hello Members,

Any idea what is the difference between MPC4E-3D-32XGE-RB and
MPC4E-3D-32XGE-SFPP?

The Juniper PDF says:

MPC4E-3D-32XGE-SFPP: 32x10GbE, full scale L2/L2.5 and reduced scale L3
features
and
MPC4E-3D-32XGE-RB: 32XGbE SFPP ports, full scale L2/L2.5,
L3 and L3VPN features

Now the question is: what are reduced scale L3 features and L3VPN features?

Many Thanks,

Brijesh Patel



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp