Hi Andrew
Did you try adding -a (active wait) to pfcount?
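For example (interface name and the zc: prefix taken from your mail; -a is pfcount's active-wait flag):

```shell
# Busy-wait on the ring instead of sleeping between polls; at 9-10 Gbit/s
# the wakeup latency of a passive wait can itself cause drops.
./pfcount -i zc:eth3 -a
```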
Did you configure the maximum number of rx slots (32K) in the RX ring (see 
ethtool -g)?
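Something along these lines (eth3 assumed; the real ceiling is whatever ethtool prints under "Pre-set maximums", which may be lower than 32K on your NIC):

```shell
# Inspect current and maximum RX ring sizes
ethtool -g eth3
# Grow the RX ring toward the reported maximum (4096 is illustrative;
# use the "RX:" figure ethtool shows under "Pre-set maximums")
ethtool -G eth3 rx 4096
```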
What is your sustained rate on that machine with ZC when receiving continuous 
traffic?

Alfredo

> On 15 Aug 2015, at 01:18, Andrew Howard <[email protected]> wrote:
> 
> 
> 
> Hi everyone,
> 
> I’m new to PF_RING so I’m no doubt doing something silly :) I’ve looked 
> through the ntop-misc archives as far back as Jan 2012, and as far as I can 
> see my particular circumstances haven’t been raised before.
> 
> I’ll describe my setup first, then the actual problem.
> 
> I’m studying the PF_RING driver and userland code to try and progress this 
> myself, but I’d welcome any suggestions for things to check or try.
> 
> Regards,
> A.
> 
> 
> 
> THE HARDWARE
> 
> Two HP ProLiant DL360 G6 machines, each with 2 x 4-core 2.66GHz X5550 Xeons, 
> plus PCIe v2 and 12 GB of DDR3 RAM.
> 
> Machine 1 has a Napatech NT20-E2 card.
> 
> Machine 2 has an 82599-based dual-port HP560FLR card in an 8-lane slot.  
> Hyperthreading has been disabled on this machine, giving 8 cores.
> 
> Machine 1 interface napa0 is patched directly to machine 2 eth3.  For the 
> purposes of this discussion, I’m using machine1/Napatech as a packet 
> generator to test PF_RING on machine 2/Intel 82599.
> 
> 
> 
> THE SOFTWARE
> 
> Both machines running RHEL 6.6 – uname says 2.6.32-504.23.4.el6.x86_64.
> 
> On machine 2, PF_RING is v6.0.3, and I compiled with:-
> 
> cd PF_RING-6.0.3
> make
> cd drivers
> make
> 
> I have NOT performed a ‘make install’ anywhere in the PF_RING directory tree, 
> as I wasn’t sure what ‘make install’ would do to my existing libpcap 1.7.4 
> installation.  Instead I run multiple SSH logins, each with C_INCLUDE_PATH and 
> LD_LIBRARY_PATH set accordingly: within one login my code will build and 
> link against the PF_RING libs; in another login it will build and link 
> against standard libpcap.
> 
> The pfcount executable and the pf_ring.ko/ixgbe.ko kernel modules are used 
> directly from within this tree.
> 
> Drivers are started via:-
> 
> cd drivers/ZC/intel/ixgbe/ixgbe-3.22.3-zc/src
> ./load_driver.sh
> 
> and I’ve experimented with various driver options for insmod pf_ring and 
> insmod ixgbe (and confirmed in dmesg that the options have been accepted).
> 
> The example app is then run via:-
> 
> ../../../../../../userland/examples/pfcount -i eth3
> 
> or
> 
> ../../../../../../userland/examples/pfcount -i zc:eth3
> 
> The ZC variant issues a warning about no ZC licence / running in demo mode, 
> which is fine for my testing.
> 
> 
> 
> THE PROBLEM
> 
> I’m testing PF_RING’s ability to receive sustained bursts of 9-10Gbits/sec 
> traffic.
> 
> I have a 110 Mb traffic recording in standard pcap dump file format.  On 
> machine 1, I’m using my own utility to read the dump file and inject each 
> packet to interface napa0 as fast as possible.  (The entire file is sent in < 
> 1 second.  I also have a 10 Gb recording, and the Napatech ‘monitor’ utility 
> confirms that the napa0 TX data rate (including preamble and IFG) is 
> 9999-10000 Mbps.)
> 
> When sending the 110 Mb file, pfcount -i eth3 reports approx. 38% dropped 
> packets and approx. 65 Mb of data received.  This is fine; I would not expect 
> 100% data reception.  As a test I can throttle TX back and send packets with 
> a short nanosecond sleep between each packet, and pfcount then reports 100% 
> reception.
> 
> When running with pfcount -i zc:eth3, the following stats are consistently 
> reported:-
> 
> Packets received: 22335 (somewhat less than the 178K packets transmitted)
> 
> Packets dropped: 0
> 
> Data received: approx. 13 Mb (somewhat less than the 110 Mb source file)
> 
> These same stats are always reported, regardless of whether TX is at full 
> rate or throttled back.
> 
> I have experimented with various options to the ixgbe driver, and I have thus 
> far not achieved 100% packet reception in ZC mode.  I have tried:-
> 
> insmod ./ixgbe.ko (i.e. take all defaults)
> insmod ./ixgbe.ko RSS=0,0
> insmod ./ixgbe.ko DCA=1,1
> insmod ./ixgbe.ko MQ=1,1
> 
> plus various combinations of the RSS, DCA and MQ options on one command line.
> 
> I’m currently running with all defaults on the pf_ring driver.
> 
> I’m allowing queue-to-core affinity to be set by 
> ../scripts/set_irq_affinity as supplied – no script modifications to ethtool 
> settings or core affinity.
> 
> Finally, irqbalance is not running on my machinery.
> 
> --
> Andrew Howard
> [email protected]
> _______________________________________________
> Ntop-misc mailing list
> [email protected]
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
