Hi Jason
it seems that other applications interfering with zsend are affecting the
transmission rate. This could depend on several factors, including core
isolation (other applications using the core where zsend is running) and
memory bandwidth (starting pfcount on another core affects zsend, so this
seems to be the case).
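A quick way to test the core-isolation theory (a sketch, not something from this thread; core numbers below are examples, adjust for your machine):

```shell
# Sketch: keep other load off the core zsend is running on.

# Pin an already-running zsend to core 2 (util-linux taskset):
#   taskset -cp 2 "$(pidof zsend)"

# /proc/irq/*/smp_affinity takes a hex bitmask; the mask for core N is 1 << N.
# For example, the mask for core 2:
printf '%x\n' $((1 << 2))    # prints "4"
```

Booting with isolcpus=2 on the kernel command line would additionally keep the scheduler from placing other tasks on that core.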
Were you able to run MoonGen or other applications with better performance
than zsend on the same machine?

Thank you
Alfredo

> On 10 Jun 2016, at 13:27, Jason Lixfeld <[email protected]> wrote:
> 
> Good day!
> 
> I’m just circling back to see if anyone has any further insights on how I 
> might be able to get a steady 14.88Mpps?
> 
> Admittedly, I spec’d this box to be able to use MoonGen.  According to their 
> specs, I should be able to get a full 14.88Mpps across all 6 ports on this 
> NIC.  That said, after discovering that pf_ring:zc is able to perform as well 
> as MoonGen, I’d rather go this route so I can (hopefully) use Ostinato as a 
> front-end.
> 
> Thanks!
> 
>> On Jun 8, 2016, at 5:55 PM, Jason Lixfeld <[email protected]> wrote:
>> 
>> Without zcount running, using ./zsend -i zc:eth7 -g 1 -c 1 -a, it starts off
>> at 14.88Mpps/10Gbps. It stays like that until I start doing stuff in another
>> window (ssh’d in) like running top or changing directories, at which point it
>> drops down to 14.5Mpps and seems to be hovering there now.
>> 
>> On Jun 8, 2016, at 5:43 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> It could be related to available memory bandwidth, do you see the same when 
>> zcount is not running?
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 23:40, Jason Lixfeld <[email protected]> wrote:
>> 
>> I did. zsend to one core, zcount to another, but I can’t seem to quite get
>> up there. zsend sometimes has a tendency to start off strong, at about
>> 14.86Mpps, but slowly ramps down to about 14.25Mpps after about 15 seconds.
>> 
>> On Jun 8, 2016, at 4:48 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> Did you bind pfsend/zsend to a cpu core?
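For example, binding to core 1 (the flags as used elsewhere in this thread, as I understand them: -i device, -c cluster id, -g core to bind to, -a active packet wait):

```shell
./zsend -i zc:eth7 -g 1 -c 1 -a
```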
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 22:44, Jason Lixfeld <[email protected]> wrote:
>> 
>> I don’t seem to be able to transmit the full 14.88Mpps to get full linerate.
>> Is there anything else I can tweak? I’ve added the -a option, which has
>> gotten me a bit closer, but not all the way.
>> 
>> My CPU is a 6-core Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz
>> 
>> Thanks once again!
>> 
>> On Jun 8, 2016, at 4:21 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> Please use RSS=1,1,1,1,1,1 as you have 6 ports.
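Concretely, that would mean changing the active insmod line in load_driver.sh to pass one value per port (a sketch; the exact line on your system may carry other parameters):

```shell
# Hypothetical edit to load_driver.sh: one RSS queue per port, six ports.
# Reload the driver (rmmod ixgbe first) for this to take effect.
insmod ./ixgbe.ko RSS=1,1,1,1,1,1
```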
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 22:11, [email protected] wrote:
>> 
>> Thanks Alfredo,
>> 
>> If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 
>> each:
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
>> Name:             eth2
>> Index:            86
>> Address:          00:0C:BD:08:80:98
>> Polling Mode:     NAPI/ZC
>> Type:             Ethernet
>> Family:           Intel ixgbe 82599
>> Max # TX Queues:  12
>> # Used RX Queues: 12
>> Num RX Slots:     32768
>> Num TX Slots:     32768
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
>> Name:             eth7
>> Index:            83
>> Address:          00:0C:BD:08:80:9D
>> Polling Mode:     NAPI/ZC
>> Type:             Ethernet
>> Family:           Intel ixgbe 82599
>> Max # TX Queues:  1
>> # Used RX Queues: 1
>> Num RX Slots:     32768
>> Num TX Slots:     32768
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>> 
>> load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure
>> how it got this way, or how to correct it?
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
>> #insmod ./ixgbe.ko RSS=0,0,0,0
>> insmod ./ixgbe.ko RSS=1,1,1,1
>> #insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
>> #insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
>> #insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
>> #insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>> 
>> On Jun 8, 2016, at 3:55 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> Hi Jason, no problem, I was able to read something :-) Please check that both
>> interfaces are configured with a single RSS queue (take a look at
>> /proc/net/pf_ring/dev/eth2/info).
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 21:09, Jason Lixfeld <[email protected]> wrote:
>> 
>> My gosh! I’m so sorry for the way this is formatted. My mailer insists that 
>> this message was sent in plain-text, not whatever the heck this is!
>> 
>> I’m sorry this is so impossible to read :(
>> 
>> On Jun 8, 2016, at 3:05 PM, [email protected] wrote:
>> 
>> Hello,
>> 
>> My first run-through with pf_ring. :)
>> 
>> I’ve compiled the zc variant of pf_ring in an attempt to get linerate between
>> two of the ports, which are looped together.
>> 
>> The NIC is a 6 port 82599 based one.
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# ./load_driver.sh
>> irqbalance: no process found
>> Configuring eth6
>> IFACE CORE MASK -> FILE
>> =======================
>> eth6 0 1 -> /proc/irq/87/smp_affinity
>> Configuring eth7
>> IFACE CORE MASK -> FILE
>> =======================
>> eth7 0 1 -> /proc/irq/89/smp_affinity
>> Configuring eth2
>> IFACE CORE MASK -> FILE
>> =======================
>> eth2 0 1 -> /proc/irq/96/smp_affinity
>> eth2 1 2 -> /proc/irq/97/smp_affinity
>> eth2 2 4 -> /proc/irq/98/smp_affinity
>> eth2 3 8 -> /proc/irq/99/smp_affinity
>> eth2 4 10 -> /proc/irq/100/smp_affinity
>> eth2 5 20 -> /proc/irq/101/smp_affinity
>> eth2 6 40 -> /proc/irq/102/smp_affinity
>> eth2 7 80 -> /proc/irq/103/smp_affinity
>> eth2 8 100 -> /proc/irq/104/smp_affinity
>> eth2 9 200 -> /proc/irq/105/smp_affinity
>> eth2 10 400 -> /proc/irq/106/smp_affinity
>> eth2 11 800 -> /proc/irq/107/smp_affinity
>> Configuring eth3
>> IFACE CORE MASK -> FILE
>> =======================
>> eth3 0 1 -> /proc/irq/109/smp_affinity
>> eth3 1 2 -> /proc/irq/110/smp_affinity
>> eth3 2 4 -> /proc/irq/111/smp_affinity
>> eth3 3 8 -> /proc/irq/112/smp_affinity
>> eth3 4 10 -> /proc/irq/113/smp_affinity
>> eth3 5 20 -> /proc/irq/114/smp_affinity
>> eth3 6 40 -> /proc/irq/115/smp_affinity
>> eth3 7 80 -> /proc/irq/116/smp_affinity
>> eth3 8 100 -> /proc/irq/117/smp_affinity
>> eth3 9 200 -> /proc/irq/118/smp_affinity
>> eth3 10 400 -> /proc/irq/119/smp_affinity
>> eth3 11 800 -> /proc/irq/120/smp_affinity
>> Configuring eth4
>> IFACE CORE MASK -> FILE
>> =======================
>> eth4 0 1 -> /proc/irq/91/smp_affinity
>> Configuring eth5
>> IFACE CORE MASK -> FILE
>> =======================
>> eth5 0 1 -> /proc/irq/93/smp_affinity
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>> 
>> My issue is that I only get 10Gbps in one direction. If zc:eth7 is the 
>> sender, zc:eth2 only sees Rx @ 0.54Gbps:
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth7 -c 2
>> 
>> Absolute Stats: 111'707'057 pkts - 9'383'392'788 bytes
>> Actual Stats: 13'983'520.42 pps - 9.40 Gbps [1133946996 bytes / 1.0 sec]
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth2 -c 1
>> 
>> Absolute Stats: 5'699'096 pkts (33'135'445 drops) - 478'724'064 bytes
>> Actual Stats: 802'982.00 pps (4'629'316.93 drops) - 0.54 Gbps
>> 
>> But, if zc:eth2 is the sender, zc:eth7 sees rates more in-line with what 
>> zc:eth2 is sending.
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth2 -c 2
>> 
>> Absolute Stats: 28'285'274 pkts - 2'375'963'016 bytes
>> Actual Stats: 14'114'355.24 pps - 9.48 Gbps [1185800280 bytes / 1.0 sec]
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth7 -c 1
>> 
>> Absolute Stats: 28'007'460 pkts (0 drops) - 2'352'626'640 bytes
>> Actual Stats: 14'044'642.54 pps (0.00 drops) - 9.44 Gbps
>> 
>> I’ve done some reading, but I haven’t found anything that has pointed me 
>> towards a possible reason why this is happening. I’m wondering if anyone has 
>> any thoughts?
>> 
>> Thanks!
>> 
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
> 


