Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Liam Farr
Hi,

I'm using the config example at
https://github.com/jtkristoff/junos/blob/master/flows.md (many thanks) with
a couple of exceptions.

However I am getting export packet failures.

Exceptions / changes from the example are the use of *flex-flow-sizing* and
*sampling on the interface* rather than a firewall filter.

Config is as follows:

chassis {
    fpc 0 {
        sampling-instance default;
        inline-services {
            flex-flow-sizing;
        }
    }
}
services {
    flow-monitoring {
        version-ipfix {
            template v4 {
                ipv4-template;
            }
            template v6 {
                ipv6-template;
            }
        }
    }
}
forwarding-options {
    sampling {
        sample-once;
        instance {
            default {
                input {
                    rate 100;
                }
                family inet {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v4;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
                family inet6 {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v6;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
            }
        }
    }
}
interfaces {
    xe-0/1/7 {
        unit 442 {
            vlan-id 442;
            family inet {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 111.69.xxx.xxx/30;
            }
            family inet6 {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 2406:::::/64;
            }
        }
    }
}

For the source address I had originally used the internal management
network address on fxp0, but was receiving no flows at the collector, so I
changed to a loopback address in one of the VRFs. Both the internal
management IP and the VRF loopback have reachability to the flow-server
address.
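
(As a sanity check, reachability from the VRF context can be confirmed with
a sourced ping; the routing-instance name "CUST-VRF" below is hypothetical:)

ping 103.247.xxx.xxx routing-instance CUST-VRF source 43.252.xxx.xxx count 5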

The error output is below:

show services accounting errors inline-jflow fpc-slot 0
  Error information
FPC Slot: 0
Flow Creation Failures: 0
Route Record Lookup Failures: 0, AS Lookup Failures: 0
Export Packet Failures: 137
Memory Overload: No, Memory Alloc Fail Count: 0

IPv4:
IPv4 Flow Creation Failures: 0
IPv4 Route Record Lookup Failures: 0, IPv4 AS Lookup Failures: 0
IPv4 Export Packet Failures: 134

IPv6:
IPv6 Flow Creation Failures: 0
IPv6 Route Record Lookup Failures: 0, IPv6 AS Lookup Failures: 0
IPv6 Export Packet Failures: 3

show services accounting flow inline-jflow fpc-slot 0
  Flow information
FPC Slot: 0
Flow Packets: 7976, Flow Bytes: 1129785
Active Flows: 83, Total Flows: 2971
Flows Exported: 1814, Flow Packets Exported: 1477
Flows Inactive Timed Out: 1020, Flows Active Timed Out: 1725
Total Flow Insert Count: 1246

IPv4 Flows:
IPv4 Flow Packets: 7821, IPv4 Flow Bytes: 951645
IPv4 Active Flows: 82, IPv4 Total Flows: 2912
IPv4 Flows Exported: 1776, IPv4 Flow Packets exported: 1439
IPv4 Flows Inactive Timed Out: 1003, IPv4 Flows Active Timed Out: 1687
IPv4 Flow Insert Count: 1225

IPv6 Flows:
IPv6 Flow Packets: 155, IPv6 Flow Bytes: 178140
IPv6 Active Flows: 1, IPv6 Total Flows: 59
IPv6 Flows Exported: 38, IPv6 Flow Packets Exported: 38
IPv6 Flows Inactive Timed Out: 17, IPv6 Flows Active Timed Out: 38
IPv6 Flow Insert Count: 21

show services accounting status inline-jflow fpc-slot 0
  Status information
FPC Slot: 0
IPV4 export format: Version-IPFIX, IPV6 export format: Version-IPFIX
BRIDGE export format: Not set, MPLS export format: Not set
IPv4 Route Record Count: 1698135, IPv6 Route Record Count: 247572, MPLS Route Record Count: 0
Route Record Count: 1945707, AS Record Count: 167101
Route-Records Set: Yes, Config Set: Yes
Service Status: PFE-0: Steady
Using Extended Flow Memory?: PFE-0: No
Flex Flow Sizing ENABLED?: PFE-0: Yes
IPv4 MAX FLOW Count: 5242884, IPv6 MAX FLOW Count: 5242884
BRIDGE MAX FLOW Count: 5242884, MPLS MAX FLOW Count: 5242884

Not sure specifically what I am doing wrong here.

Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Tarko Tikan

hey,


> Does one need to reboot the box if you switch to "flex-flow-sizing"? The
> documentation seems to say so if you're going from the old format to the
> new one.

AFAIR no. You can verify via "show jnh 0 inline-services flow-table-info"
from the PFE shell.
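
(A rough sketch of what that looks like on an MX204, assuming PFE shell
access via "start shell pfe network fpc0"; the exact prompt and output
format vary by release:)

user@mx204> start shell pfe network fpc0

FPC0(mx204 vty)# show jnh 0 inline-services flow-table-info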


--
tarko


Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Mark Tinka



On 8/Apr/20 16:33, Tarko Tikan wrote:

>
> I don't have any 204s but perhaps use flex-flow-sizing instead of manual
> table sizes?
>
> And if you do a lot of flows then you need to raise flow-export-rate
> from the default as well.

Does one need to reboot the box if you switch to "flex-flow-sizing"? The
documentation seems to say so if you're going from the old format to the
new one.

Mark.



Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Tarko Tikan

hey,


> I've used IPFIX before, here is an example of how that might be setup,
> whether it is good or not I'll let others judge and I can fix if there
> is feedback:
>
>   <https://github.com/jtkristoff/junos/blob/master/flows.md>


I don't have any 204s but perhaps use flex-flow-sizing instead of manual
table sizes?


And if you do a lot of flows then you need to raise flow-export-rate from
the default as well.
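
(Combining both suggestions, a minimal sketch against the config at the top
of the thread; the flow-export-rate value of 10 is an arbitrary assumption,
expressed in units of 1000 packets per second:)

chassis {
    fpc 0 {
        inline-services {
            flex-flow-sizing;
        }
    }
}
forwarding-options {
    sampling {
        instance {
            default {
                family inet {
                    output {
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                            flow-export-rate 10;
                        }
                    }
                }
            }
        }
    }
}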


--
tarko


Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Alain Hebert

    Hi,

    IMHO,

    Sampling directly on the interface permits the use of plugins in
Elastiflow (for example) to highlight odd traffic behavior (Scans/DDoS).


-
Alain Hebert                aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911    http://www.pubnix.net    Fax: 514-990-9443

On 2020-04-08 08:56, Mark Tinka wrote:
> On 8/Apr/20 14:51, Mark Tinka wrote:
>
>> Looks good.
>
> The only other thing I would do different is to sample directly on the
> interface, rather than through a firewall filter:
>
> xe-0/1/0 {
>     unit 0 {
>         family inet {
>             sampling {
>                 input;
>                 output;
>             }
>         }
>         family inet6 {
>             sampling {
>                 input;
>                 output;
>             }
>         }
>     }
> }
>
> But either works. Just haven't sampled in firewall filters for some time
> now.




Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Mark Tinka


On 8/Apr/20 14:51, Mark Tinka wrote:

>
> Looks good.

The only other thing I would do different is to sample directly on the
interface, rather than through a firewall filter:

xe-0/1/0 {
    unit 0 {
        family inet {
            sampling {
                input;
                output;
            }
        }
        family inet6 {
            sampling {
                input;
                output;
            }
        }
    }
}

But either works. Just haven't sampled in firewall filters for some time
now.


Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Mark Tinka



On 8/Apr/20 14:42, John Kristoff wrote:

>
> I've used IPFIX before, here is an example of how that might be setup,
> whether it is good or not I'll let others judge and I can fix if there
> is feedback:
>
>   <https://github.com/jtkristoff/junos/blob/master/flows.md>

Looks good.

The only issue we've found is you can't export flows over IPv6. Not a
big issue since you can export IPv6 flows over IPv4, but still :-)...

Also, in some versions of Junos, you can't export flows to more than one
collector at the same time for the same address family. But this is
fixed in Junos 17 onward.
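
(On a release that supports it, a second collector is just an additional
flow-server entry under the same output stanza; a minimal sketch, with
192.0.2.10 as a hypothetical second collector:)

family inet {
    output {
        flow-server 103.247.xxx.xxx {
            port 6363;
            version-ipfix {
                template {
                    v4;
                }
            }
        }
        /* hypothetical second collector */
        flow-server 192.0.2.10 {
            port 6363;
            version-ipfix {
                template {
                    v4;
                }
            }
        }
        inline-jflow {
            source-address 43.252.xxx.xxx;
        }
    }
}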

Mark.


Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread John Kristoff
On Wed, 8 Apr 2020 09:26:10 +
Liam Farr  wrote:

> Just wondering if someone here has a working netflow config for an MX204
> they might be able to share.

I've used IPFIX before, here is an example of how that might be setup,
whether it is good or not I'll let others judge and I can fix if there
is feedback:

  <https://github.com/jtkristoff/junos/blob/master/flows.md>

John


Re: [j-nsp] [c-nsp] how many IGP routes is too many?

2020-04-08 Thread Mark Tinka



On 5/Apr/20 12:25, adamv0...@netconsultings.com wrote:

> Nowadays however, in times of FRR (well, that one has u-loops), but for
> instance ti-LFA or classical RSVP-TE Bypass... and BGP PIC "Core", I'd say
> the SPF calculation time is becoming less and less relevant. So in current
> designs I'm tuning IGPs for egress edge-node protection only, i.e. for
> generating the LSP/LSA ASAP and then propagating it to all other ingress
> edge-nodes as fast as possible, so that BGP PIC "Core" can react to the
> missing loopback and switch to an alternate egress edge-node. (Reactions
> to core-node failures or link-failures are IGP agnostic and driven solely
> by loss of light or BFD/LFM...)
> *Even in the egress edge-node protection case there are now RSVP-TE and
> SR-TE features addressing this.
>
> So I guess only the mem and cpu load, and ultimately the stability of the
> RPD (or IGP process), is the remaining concern in extreme load cases (not
> the convergence though). 

For me, I'd say small FIBs in a network that runs MPLS all the way into
the Access (where the small FIBs reside) are the biggest risk to scaling
out the IGP. On those boxes, CPU and memory aren't the issue (and they
are nowhere near as powerful as the chassis in the data centre); it's
the FIB slots.

I have zero worry about IS-IS blowing out all the Intel-based control
planes currently ruling our big a** routers. Wouldn't have been able to
say the same thing 15 years ago, though.

Mark.


Re: [j-nsp] Netflow config for MX204

2020-04-08 Thread Mark Tinka


On 8/Apr/20 11:26, Liam Farr wrote:
> Hi,
>
> Just wondering if someone here has a working netflow config for an MX204
> they might be able to share.
>
> Last time I did netflow on a Juniper router it was a J2320 😅

https://www.juniper.net/documentation/en_US/junos/topics/example/inline-sampling-configuring.html

Mark.



[j-nsp] Netflow config for MX204

2020-04-08 Thread Liam Farr
Hi,

Just wondering if someone here has a working netflow config for an MX204
they might be able to share.

Last time I did netflow on a Juniper router it was a J2320 😅

-- 
Kind Regards


Liam Farr

Maxum Data
+64-9-950-5302