Re: [pmacct-discussion] Fortigate netflow inaccurate?
Hi Thomas, Mario,

Mario is right with his suggestion. Should either of you be interested in troubleshooting the root cause of why renormalization is not happening 'automagically' out of the NetFlow data, feel free to ping me offline; it will require a snapshot of your NetFlow data for inspection and replay in the lab. More than happy to support.

Cheers,
Paolo

On Sun, Nov 01, 2015 at 10:07:07PM +, Jentsch, Mario wrote:
> Hi Thomas,
>
> I guess you use sampled NetFlow on the Cisco router and the renormalization
> isn't working. In that case you may need to use pmacct's sampling_map
> directive to tell it the sample rate. I haven't checked yet why it is not
> working in our system either; we just apply the renormalization after
> pmacct ourselves.
>
> Regards,
> Mario
Re: [pmacct-discussion] Fortigate netflow inaccurate?
Hi Thomas,

I guess you use sampled NetFlow on the Cisco router and the renormalization isn't working. In that case you may need to use pmacct's sampling_map directive to tell it the sample rate. I haven't checked yet why it is not working in our system either; we just apply the renormalization after pmacct ourselves.

Regards,
Mario

-----Original Message-----
From: pmacct-discussion [mailto:pmacct-discussion-boun...@pmacct.net] On Behalf Of Thomas M Steenholdt
Sent: Sunday, November 01, 2015 4:56 PM
To: pmacct-discussion@pmacct.net
Subject: [pmacct-discussion] Fortigate netflow inaccurate?
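For anyone landing here with the same problem: the sampling_map referred to above is a plain-text map file referenced from nfacctd.conf. A minimal sketch follows; the file path, the exporter address, and the rate of 100 are illustrative assumptions only, so substitute the sampling rate actually configured on your exporter:

```
! in nfacctd.conf
sampling_map: /etc/pmacct/sampling.map

! /etc/pmacct/sampling.map
! id is the sampling rate to renormalize by; ip is the exporter address.
! 10.112.166.1 and 100 below are example values only.
id=100  ip=10.112.166.1
```

With nfacctd_renormalize set to true, counters from that exporter are then multiplied back up by the mapped rate when the rate cannot be learned from the export itself.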
[pmacct-discussion] Fortigate netflow inaccurate?
Hi guys,

NetFlow on the Fortigate devices is a relatively new thing. I've been using sFlow on these devices for years, and it's been working very well.

We're planning to swap out a lot of the older Fortigate devices for new Cisco routers that can only do NetFlow, so I'd like to get NetFlow working on the remaining Fortigates as well, to have all flows handled by the same system.

I have sfacctd and nfacctd both set up and configured on the same server. The configurations of the two are almost identical, yet the flow numbers I get are not even close. These are the entries in the database tables for NetFlow vs. sFlow of me downloading a 1054867456-byte .iso file.

Just to be clear, the Fortigate is exporting both NetFlow and sFlow at the same time. I have tried to disable sFlow, but the NetFlow results are the same.

NetFlow:

| peer         | src            | dst            | packets | bytes   | stamp_inserted      |
+--------------+----------------+----------------+---------+---------+---------------------+
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |  112586 | 6181828 | 2015-11-01 11:34:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   93304 | 5117100 | 2015-11-01 11:33:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   90794 | 4988224 | 2015-11-01 11:32:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   94255 | 5162745 | 2015-11-01 11:31:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |    4622 |  251893 | 2015-11-01 11:30:00 |

total bytes accounted for: 21701790

sFlow:

| peer         | src            | dst            | packets | bytes     | stamp_inserted      |
+--------------+----------------+----------------+---------+-----------+---------------------+
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   78000 |   5724000 | 2015-11-01 11:34:00 |
| 10.112.166.1 | 194.177.224.50 | 10.112.166.241 |  162000 | 232956000 | 2015-11-01 11:34:00 |
| 10.112.166.1 | 194.177.224.50 | 10.112.166.241 |      19 |     27322 | 2015-11-01 11:33:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   96000 |   7024000 | 2015-11-01 11:33:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   84000 |   6128000 | 2015-11-01 11:32:00 |
| 10.112.166.1 | 194.177.224.50 | 10.112.166.241 |  168000 | 241584000 | 2015-11-01 11:32:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   92000 |   6744000 | 2015-11-01 11:31:00 |
| 10.112.166.1 | 194.177.224.50 | 10.112.166.241 |  178000 | 255964000 | 2015-11-01 11:31:00 |
| 10.112.166.1 | 10.112.166.241 | 194.177.224.50 |   46000 |       334 | 2015-11-01 11:30:00 |
| 10.112.166.1 | 194.177.224.50 | 10.112.166.241 |   84000 | 120792000 | 2015-11-01 11:30:00 |

total bytes accounted for: 1153476000

The NetFlow bytes values are less than 2% of the sFlow bytes values.

Has anybody seen this before? Perhaps I'm missing some vital clue?

I have not dug very deep into the numbers I receive from the Cisco boxes, but those numbers seem to match the actual traffic way better.

nfacctd.conf:

aggregate[netflow1m]: peer_src_ip,src_host,dst_host
aggregate[netflow1h]: peer_src_ip,src_host,dst_host
aggregate_filter[netflow1m]: net 10.0.0.0/8 or net 172.16.0.0/12 or net 192.168.0.0/16
aggregate_filter[netflow1h]: net 10.0.0.0/8 or net 172.16.0.0/12 or net 192.168.0.0/16
interface: eth0
nfacctd_ip: x.x.x.x
nfacctd_port: 2055
nfacctd_time_new: true
nfacctd_renormalize: true
plugins: mysql[netflow1m], mysql[netflow1h]
sql_optimize_clauses: true
sql_num_hosts: true
sql_locking_style: row
sql_table[netflow1m]: netflow1m
sql_table[netflow1h]: netflow1h
sql_refresh_time[netflow1m]: 60
sql_refresh_time[netflow1h]: 300
sql_dont_try_update[netflow1m]: true
sql_dont_try_update[netflow1h]: false
sql_history[netflow1m]: 1m
sql_history[netflow1h]: 1h
sql_history_roundoff[netflow1m]: m
sql_history_roundoff[netflow1h]: h

sfacctd.conf:

aggregate[sflow1m]: peer_src_ip,src_host,dst_host
aggregate[sflow1h]: peer_src_ip,src_host,dst_host
aggregate_filter[sflow1m]: net 10.0.0.0/8 or net 172.16.0.0/12 or net 192.168.0.0/16
aggregate_filter[sflow1h]: net 10.0.0.0/8 or net 172.16.0.0/12 or net 192.168.0.0/16
interface: eth0
sfacctd_ip: x.x.x.x
sfacctd_port: 6343
sfacctd_renormalize: true
plugins: mysql[sflow1m], mysql[sflow1h]
sql_optimize_clauses: true
sql_num_hosts: true
sql_locking_style: row
sql_table[sflow1m]: sflow1m
sql_table[sflow1h]: sflow1h
sql_refresh_time[sflow1m]: 60
sql_refresh_time[sflow1h]: 300
sql_dont_try_update[sflow1m]: true
sql_dont_try_update[sflow1h]: false
sql_history[sflow1m]: 1m
sql_history[sflow1h]: 1h
sql_history_roundoff[sflow1m]: m
sql_history_roundoff[sflow1h]: h

On the Fortigates I have configured:

set active-flow-timeout 1

Thanks in advance
/Thomas

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
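As a footnote to the figures in this thread: the totals in the post are internally consistent, and the headline discrepancy really is just under 2%. A quick check, with the per-minute NetFlow byte values and the reported sFlow total copied from the tables above:

```python
# Per-minute NetFlow byte counts from the 11:30-11:34 rows above.
netflow_bytes = [6181828, 5117100, 4988224, 5162745, 251893]
netflow_total = sum(netflow_bytes)  # matches the post's 21701790

sflow_total = 1153476000  # "total bytes accounted for" on the sFlow side
iso_size = 1054867456     # size of the downloaded .iso file

ratio = netflow_total / sflow_total
print(f"NetFlow total: {netflow_total}")
print(f"NetFlow/sFlow byte ratio: {ratio:.2%}")  # just under 2%
```

The sFlow total (both directions) is in the same ballpark as the .iso size plus protocol overhead, while the NetFlow total is roughly a factor of 50 smaller, which is consistent with an unapplied sampling rate rather than random loss.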