Dear all
I have just committed PF_RING 4.1, an optimized version of 4.x. This version finally supports transparent_mode for further increasing packet capture performance. Due to the way PF_RING 4 hooks into the Linux kernel, in order to exploit transparent_mode you need a PF_RING-aware driver, as the vanilla kernel lacks the basic packet juggling mechanisms needed by PF_RING (and, as you know, I want to avoid patching the kernel). transparent_mode (e.g. insmod pf_ring.ko transparent_mode=X) can have three values:
• 'insmod pf_ring.ko transparent_mode=0': standard NAPI polling
• 'insmod pf_ring.ko transparent_mode=1': the PF_RING-aware driver copies packets into PF_RING, while the same packets are still passed to the standard kernel stack
• 'insmod pf_ring.ko transparent_mode=2': the PF_RING-aware driver copies packets into PF_RING only; packets are not passed to the standard kernel stack

Inside PF_RING/drivers you will find PF_RING-aware drivers for popular Intel 1 and 10 Gbit adapters. Note that transparent_mode values 1 and 2 are ignored if the driver is not PF_RING aware!
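As a minimal sketch of how the pieces fit together (the driver name e1000e and the paths are examples only; use the PF_RING-aware driver matching your NIC, and expect to run this as root):

```shell
# Replace the vanilla NIC driver with the PF_RING-aware build
# ('e1000e' is just an example driver name)
rmmod e1000e
insmod ./PF_RING/drivers/e1000e/e1000e.ko

# Load PF_RING with transparent_mode=2 so packets are delivered
# to PF_RING only, not to the standard kernel stack
insmod ./pf_ring.ko transparent_mode=2

# Check that the ring is active
cat /proc/net/pf_ring/info
```

Remember that with a non-PF_RING-aware driver the transparent_mode setting above has no effect.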

I have performed some performance tests, which you will find at http://www.ntop.org/PF_RING.html, using a low-end Core2Duo 1.86 GHz and a traffic generator pumping 64-byte packets at wire rate (1.48 Mpps). Besides various test results (that you can read yourself if curious), the obvious outcome is that standard Linux can capture up to 544 Kpps, whereas with PF_RING you can reach almost 850 Kpps, and wire rate (1.48 Mpps) if you use TNAPI with PF_RING. This on a cheap two-year-old Core2Duo: guess what you can do with a Xeon or an i7 processor.

Enjoy
Luca

---
If you cannot measure it, you cannot improve it - Lord Kelvin

_______________________________________________
Ntop mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop
