Hi Billy,

Just a few additional remarks:

On 22.02.16 11:54, online264...@telkomsa.net wrote:
> Hi
>
> Some years ago I wrote netflow collector s/w on RHEL3 for dealing with
> high volumes of netflow (from multiple routers and interfaces, including
> GigE and ATM). Pretty simplistic design: the collector process reads a
> UDP packet, adds some IP header data to it, and writes it (round robin)
> to an SVR4 message queue. Several message queues are configured. A
> reader process reads messages from its queue and chucks them into a
> binary file (rotating files as and when needed/configured). This way the
> slow disk I/O is offloaded from the UDP listener process and handled by
> parallel processes.
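As an aside, that fan-out design boils down to roughly the following
pattern (a minimal sketch - made-up key, queue count, message size and
port, error handling omitted; not the actual collector code). One reader
process per queue would msgrcv() in a loop and append the payload to its
binary file:

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define NQUEUES 4              /* number of writer queues/processes */
    #define MAXFLOW 2048           /* max datagram payload we accept */

    struct flowmsg {
        long mtype;                /* required by SysV IPC, must be > 0 */
        char data[MAXFLOW];        /* raw datagram; Billy's version
                                      prepends some IP header data here */
    };

    int main(void) {
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port = htons(9995) };
        int qid[NQUEUES], sock, i;

        for (i = 0; i < NQUEUES; i++)    /* one queue per writer process */
            qid[i] = msgget(ftok("/tmp", 'A' + i), IPC_CREAT | 0600);

        sock = socket(AF_INET, SOCK_DGRAM, 0);
        bind(sock, (struct sockaddr *)&sa, sizeof(sa));

        for (i = 0; ; i = (i + 1) % NQUEUES) {   /* round robin */
            struct flowmsg m = { .mtype = 1 };
            ssize_t n = recv(sock, m.data, sizeof(m.data), 0);
            if (n <= 0)
                continue;
            /* IPC_NOWAIT: drop instead of blocking the UDP listener
             * when a queue is full (EAGAIN) - better to lose one
             * message than to stall and lose a whole burst. */
            msgsnd(qid[i], &m, (size_t)n, IPC_NOWAIT);
        }
    }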
> Using this approach I am processing around 30K to 40K flows/sec across
> 8 (regionally distributed) collectors, dealing with over 1 TB of IP
> traffic per day.
>
> The servers we use as collectors are pretty old, and we got new servers
> as replacements. However, my collection s/w on the newer RHEL5/6/7
> kernels loses in excess of 70% of the messages sent via the SVR4 IPC
> message queues. Unsure why exactly. I did some hacking and debugging,
> and it simply seems that these old message queues are not well
> supported on newer Linux kernels.
>
> So I have been looking at an Open Source replacement for my old s/w -
> tried a number and like nfcapd the most. Well written, and works
> without effort.
>
> So my question is twofold:
>
> - any idea what the approximate UDP packets/sec processing rate of
>   nfcapd on a new server running RHEL6 or RHEL7 is?

I've never stressed the flows/s to its limits, but 100-200k flows/s
should not be a problem.

> - have there been any attempts to separate the disk I/O factor from
>   nfcapd via some kind of threading/forking using IPC (any code lying
>   around I can try and perhaps add to?)

On decent hardware, nfcapd should be run with flow compression enabled
(-z). Today's CPUs are fast enough, and -z (LZO) is a good balance
between speed and compression ratio. Overall, compression outperforms
writing uncompressed data, as there is less volume to write to disk.

As for threading nfcapd - yes - there are plans, and nfcapd will get
threaded. However, be aware that threading helps with short peaks of
data and does not help with a constant, sustained high data volume. In
the end, the data needs to be written to disk. With threading, you can
stress the I/O throughput to its limits.

Cheers - Peter

> Comments/suggestions/beer (Belgium please?) will be much
> appreciated. ;-)
>
> thanks,
> Billy

--
Be nice to your netflow data. Use NfSen and nfdump :)
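PS: To illustrate the -z advice above, a typical start of nfcapd with
LZO compression enabled looks something like this (the directory and
port are placeholders - check nfcapd(1) for the exact options of your
version):

    nfcapd -D -z -l /data/flows/router1 -p 9995

-D runs it as a daemon, -l sets the base directory the rotated capture
files are written to, and -p is the UDP port to listen on.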