I did.  zsend to one core, zcount to another, but I can’t seem to quite get up 
there.  zsend tends to start off strong, at about 14.86 Mpps, but slowly ramps 
down to about 14.25 Mpps after about 15 seconds.
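For the record, this is roughly how I have them pinned (a sketch: I believe the ZC example tools take -g to bind to a core, and taskset is a fallback; the core numbers here are arbitrary):

```shell
# Sender and receiver pinned to different cores (sketch; core IDs are examples).
# -g is, as far as I know, the core-binding option in the PF_RING ZC examples.
./zsend  -i zc:eth2 -c 2 -g 1 &
./zcount -i zc:eth7 -c 1 -g 2
# Fallback if -g is not supported by your build:
# taskset -c 1 ./zsend -i zc:eth2 -c 2
```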

> On Jun 8, 2016, at 4:48 PM, Alfredo Cardigliano <[email protected]> wrote:
> 
> Did you bind pfsend/zsend to a cpu core?
> 
> Alfredo
> 
>> On 08 Jun 2016, at 22:44, Jason Lixfeld <[email protected]> wrote:
>> 
>> I don’t seem to be able to transmit the full 14.88 Mpps needed for full line 
>> rate. Is there anything else I can tweak? I’ve added the -a option, which has 
>> gotten me a bit closer, but not all the way.
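(As a sanity check, 14.88 Mpps is the theoretical 10GbE maximum with minimum-size frames: each 64-byte frame also occupies an 8-byte preamble and a 12-byte inter-frame gap on the wire, so 84 bytes per packet.)

```shell
# 64B frame + 8B preamble + 12B inter-frame gap = 84 bytes = 672 bits per packet
echo $(( 10000000000 / 672 ))
# prints: 14880952  (~14.88 Mpps)
```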
>> 
>> My CPU is a 6-core Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz
>> 
>> Thanks once again!
>> 
>> On Jun 8, 2016, at 4:21 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> Please use RSS=1,1,1,1,1,1 as you have 6 ports.
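(Concretely, I assume that means changing the active insmod line in load_driver.sh so there is one RSS value per port, then reloading and re-checking /proc:)

```shell
# One RSS entry per port; RSS=1 forces a single RX/TX queue per interface.
insmod ./ixgbe.ko RSS=1,1,1,1,1,1
# Verify afterwards (should now report a single TX/RX queue):
cat /proc/net/pf_ring/dev/eth2/info
```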
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 22:11, [email protected] wrote:
>> 
>> Thanks Alfredo,
>> 
>> If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 
>> each:
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
>> Name:             eth2
>> Index:            86
>> Address:          00:0C:BD:08:80:98
>> Polling Mode:     NAPI/ZC
>> Type:             Ethernet
>> Family:           Intel ixgbe 82599
>> Max # TX Queues:  12
>> # Used RX Queues: 12
>> Num RX Slots:     32768
>> Num TX Slots:     32768
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
>> Name:             eth7
>> Index:            83
>> Address:          00:0C:BD:08:80:9D
>> Polling Mode:     NAPI/ZC
>> Type:             Ethernet
>> Family:           Intel ixgbe 82599
>> Max # TX Queues:  1
>> # Used RX Queues: 1
>> Num RX Slots:     32768
>> Num TX Slots:     32768
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>> 
>> load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure 
>> how it got this way, or how to correct it?
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
>> #insmod ./ixgbe.ko RSS=0,0,0,0
>> insmod ./ixgbe.ko RSS=1,1,1,1
>> #insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
>> #insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
>> #insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
>> #insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>> 
>> On Jun 8, 2016, at 3:55 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> Hi Jason, no problem, I was able to read something :-) Please check that both 
>> interfaces are configured with a single RSS queue (take a look at 
>> /proc/net/pf_ring/dev/eth2/info)
>> 
>> Alfredo
>> 
>> On 08 Jun 2016, at 21:09, Jason Lixfeld <[email protected]> wrote:
>> 
>> My gosh! I’m so sorry for the way this is formatted. My mailer insists that 
>> this message was sent in plain-text, not whatever the heck this is!
>> 
>> I’m sorry this is so impossible to read :(
>> 
>> On Jun 8, 2016, at 3:05 PM, [email protected] wrote:
>> 
>> Hello,
>> 
>> My first run-through with PF_RING. :)
>> 
>> I’ve compiled the ZC variant of PF_RING in an attempt to get line rate between 
>> two of the ports, which are looped together.
>> 
>> The NIC is a 6-port 82599-based card.
>> 
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# ./load_driver.sh
>> irqbalance: no process found
>> Configuring eth6
>> IFACE CORE MASK -> FILE
>> =======================
>> eth6 0 1 -> /proc/irq/87/smp_affinity
>> Configuring eth7
>> IFACE CORE MASK -> FILE
>> =======================
>> eth7 0 1 -> /proc/irq/89/smp_affinity
>> Configuring eth2
>> IFACE CORE MASK -> FILE
>> =======================
>> eth2 0 1 -> /proc/irq/96/smp_affinity
>> eth2 1 2 -> /proc/irq/97/smp_affinity
>> eth2 2 4 -> /proc/irq/98/smp_affinity
>> eth2 3 8 -> /proc/irq/99/smp_affinity
>> eth2 4 10 -> /proc/irq/100/smp_affinity
>> eth2 5 20 -> /proc/irq/101/smp_affinity
>> eth2 6 40 -> /proc/irq/102/smp_affinity
>> eth2 7 80 -> /proc/irq/103/smp_affinity
>> eth2 8 100 -> /proc/irq/104/smp_affinity
>> eth2 9 200 -> /proc/irq/105/smp_affinity
>> eth2 10 400 -> /proc/irq/106/smp_affinity
>> eth2 11 800 -> /proc/irq/107/smp_affinity
>> Configuring eth3
>> IFACE CORE MASK -> FILE
>> =======================
>> eth3 0 1 -> /proc/irq/109/smp_affinity
>> eth3 1 2 -> /proc/irq/110/smp_affinity
>> eth3 2 4 -> /proc/irq/111/smp_affinity
>> eth3 3 8 -> /proc/irq/112/smp_affinity
>> eth3 4 10 -> /proc/irq/113/smp_affinity
>> eth3 5 20 -> /proc/irq/114/smp_affinity
>> eth3 6 40 -> /proc/irq/115/smp_affinity
>> eth3 7 80 -> /proc/irq/116/smp_affinity
>> eth3 8 100 -> /proc/irq/117/smp_affinity
>> eth3 9 200 -> /proc/irq/118/smp_affinity
>> eth3 10 400 -> /proc/irq/119/smp_affinity
>> eth3 11 800 -> /proc/irq/120/smp_affinity
>> Configuring eth4
>> IFACE CORE MASK -> FILE
>> =======================
>> eth4 0 1 -> /proc/irq/91/smp_affinity
>> Configuring eth5
>> IFACE CORE MASK -> FILE
>> =======================
>> eth5 0 1 -> /proc/irq/93/smp_affinity
>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
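(For reference, the MASK column is a hex CPU bitmask: the mask for core N is 1 << N, which is why core 4 shows as 10 and core 11 as 800.)

```shell
# smp_affinity values are hex CPU bitmasks; the mask for core N is 1 << N.
printf '%x %x %x\n' $(( 1 << 0 )) $(( 1 << 4 )) $(( 1 << 11 ))
# prints: 1 10 800
```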
>> 
>> My issue is that I only get 10Gbps in one direction. If zc:eth7 is the 
>> sender, zc:eth2 only sees Rx @ 0.54Gbps:
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth7 -c 2
>> 
>> Absolute Stats: 111'707'057 pkts – 9'383'392'788 bytes
>> Actual Stats: 13'983'520.42 pps – 9.40 Gbps [1133946996 bytes / 1.0 sec]
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth2 -c 1
>> 
>> Absolute Stats: 5'699'096 pkts (33'135'445 drops) – 478'724'064 bytes
>> Actual Stats: 802'982.00 pps (4'629'316.93 drops) – 0.54 Gbps
>> 
>> But if zc:eth2 is the sender, zc:eth7 sees rates in line with what zc:eth2 
>> is sending.
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth2 -c 2
>> 
>> Absolute Stats: 28'285'274 pkts – 2'375'963'016 bytes
>> Actual Stats: 14'114'355.24 pps – 9.48 Gbps [1185800280 bytes / 1.0 sec]
>> 
>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth7 -c 1
>> 
>> Absolute Stats: 28'007'460 pkts (0 drops) – 2'352'626'640 bytes
>> Actual Stats: 14'044'642.54 pps (0.00 drops) – 9.44 Gbps
>> 
>> I’ve done some reading, but I haven’t found anything that has pointed me 
>> towards a possible reason why this is happening. I’m wondering if anyone has 
>> any thoughts?
>> 
>> Thanks!
>> 
> 

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
