hi luca:

800 Mbps is the total throughput, not per queue.

ethtool -S ethX shows:

rx_queue_0_packets:0
rx_queue_1_packets:0
rx_queue_2_packets:0
rx_queue_3_packets:0
....
rx_queue_9_packets:12345678
...

Since I replay imbalanced traffic, all of the packets fall into the same queue.
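A quick way to confirm this kind of skew is to parse the `ethtool -S` counters and report the busiest RX queue. A minimal sketch, assuming the ixgbe `rx_queue_N_packets` counter naming shown above:

```shell
# Report the busiest RX queue from `ethtool -S` style output on stdin.
# Sketch: assumes ixgbe-style rx_queue_N_packets counter names.
busiest_rx_queue() {
    awk -F': *' '/rx_queue_[0-9]+_packets/ {
        if ($2 > max) { max = $2; name = $1 }
    } END { print name, max }'
}

# Example with the counters above; in practice:
#   ethtool -S eth4 | busiest_rx_queue
busiest_rx_queue <<'EOF'
rx_queue_0_packets: 0
rx_queue_1_packets: 0
rx_queue_9_packets: 12345678
EOF
# -> rx_queue_9_packets 12345678
```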

thanks
rui

On Wed, Dec 1, 2010 at 7:01 PM, Luca Deri <[email protected]> wrote:
> Rui
> do you mean you capture 800 Mbit * 16, or 800 Mbit in total?
>
> note that splitting traffic across all the cores is a *bad* idea. Please
> read my papers. I understand that's not simple to digest given that most
> people think in terms of "divide and conquer". With multicore, if you do
> that (in particular across two physical processors) you invalidate the
> caches, fill up the QPI bus, etc., so it's the worst you can do.
>
> Cheers Luca
>
> On 12/01/2010 11:07 AM, Rui wrote:
>
> On Tue, Nov 30, 2010 at 5:20 PM, Luca Deri <[email protected]> wrote:
>
>
> Rui
> in my tests with 2 cores you can definitely capture at least 1 Gbit (1.48
> Mpps). How did you balance IRQs/threads/pfcount across cores? How did you
> start pfcount?
> Regards Luca
>
>
> I set IRQ affinity (the tool comes with the Intel ixgbe driver):
>  ./set_irq_affinity.sh eth4
> no rx vectors found on eth4
> no tx vectors found on eth4
> eth4 mask=1 for /proc/irq/86/smp_affinity
> eth4 mask=2 for /proc/irq/87/smp_affinity
> eth4 mask=4 for /proc/irq/88/smp_affinity
> eth4 mask=8 for /proc/irq/89/smp_affinity
> eth4 mask=10 for /proc/irq/90/smp_affinity
> eth4 mask=20 for /proc/irq/91/smp_affinity
> eth4 mask=40 for /proc/irq/92/smp_affinity
> eth4 mask=80 for /proc/irq/93/smp_affinity
> eth4 mask=100 for /proc/irq/94/smp_affinity
> eth4 mask=200 for /proc/irq/95/smp_affinity
> eth4 mask=400 for /proc/irq/96/smp_affinity
> eth4 mask=800 for /proc/irq/97/smp_affinity
> eth4 mask=1000 for /proc/irq/98/smp_affinity
> eth4 mask=2000 for /proc/irq/99/smp_affinity
> eth4 mask=4000 for /proc/irq/100/smp_affinity
> eth4 mask=8000 for /proc/irq/101/smp_affinity
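The masks above are hex CPU bitmasks: bit N selects CPU N, so mask=1 pins an IRQ to CPU0 and mask=8000 to CPU15. A minimal sketch of what the script computes and writes:

```shell
# smp_affinity takes a hexadecimal CPU bitmask: bit N selects CPU N.
mask_for_cpu() {
    printf '%x\n' $((1 << $1))
}

mask_for_cpu 0    # 1    -> CPU0  (IRQ 86 above)
mask_for_cpu 4    # 10   -> CPU4  (IRQ 90 above)
mask_for_cpu 15   # 8000 -> CPU15 (IRQ 101 above)

# For each vector the script effectively does:
#   echo $(mask_for_cpu $cpu) > /proc/irq/$irq/smp_affinity
```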
> cat /proc/interrupts |grep eth4
>   86:    1108       0       0       0       0       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-0
>   87:       8     967       0       0       0       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-1
>   88:       8       0     967       0       0       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-2
>   89:       8       0       0     967       0       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-3
>   90:       8       0       0       0     968       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-4
>   91:       8       0       0       0       0     967       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-5
>   92:       8       0       0       0       0       0     967       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-6
>   93:       8       0       0       0       0       0       0     967       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-7
>   94:       8       0       0       0       0       0       0       0    1009        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-8
>   95:   65741       0       0       0       0       0       0       0       0 14100392       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-9
>   96:       8       0       0       0       0       0       0       0       0        0     967       0       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-10
>   97:       8       0       0       0       0       0       0       0       0        0       0     967       0       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-11
>   98:       8       0       0       0       0       0       0       0       0        0       0       0     967       0       0       0  IR-PCI-MSI-edge  eth4-TxRx-12
>   99:       8       0       0       0       0       0       0       0       0        0       0       0       0     967       0       0  IR-PCI-MSI-edge  eth4-TxRx-13
>  100:       8       0       0       0       0       0       0       0       0        0       0       0       0       0     967       0  IR-PCI-MSI-edge  eth4-TxRx-14
>  101:       8       0       0       0       0       0       0       0       0        0       0       0       0       0       0     967  IR-PCI-MSI-edge  eth4-TxRx-15
>  102:  321857       0       0       0       0       0       0       0       0        0       0       0       0       0       0       0  IR-PCI-MSI-edge  eth4:lsc
> I start pfcount as follows:
> ./pfcount -e 1 -i e...@0  2> ./pfcount0.log &
> ./pfcount -e 1 -i e...@1  2> ./pfcount1.log &
> ./pfcount -e 1 -i e...@2  2> ./pfcount2.log &
> ./pfcount -e 1 -i e...@3  2> ./pfcount3.log &
> ./pfcount -e 1 -i e...@4  2> ./pfcount4.log &
> ./pfcount -e 1 -i e...@5  2> ./pfcount5.log &
> ./pfcount -e 1 -i e...@6  2> ./pfcount6.log &
> ./pfcount -e 1 -i e...@7  2> ./pfcount7.log &
> ./pfcount -e 1 -i e...@8  2> ./pfcount8.log &
> ./pfcount -e 1 -i e...@9  2> ./pfcount9.log &
> ./pfcount -e 1 -i e...@10  2> ./pfcount10.log &
> ./pfcount -e 1 -i e...@11  2> ./pfcount11.log &
> ./pfcount -e 1 -i e...@12  2> ./pfcount12.log &
> ./pfcount -e 1 -i e...@13  2> ./pfcount13.log &
> ./pfcount -e 1 -i e...@14  2> ./pfcount14.log &
> ./pfcount -e 1 -i e...@15  2> ./pfcount15.log &
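The sixteen invocations above can be scripted. A sketch, assuming the interface names elided by the archive are of the form eth4@0 .. eth4@15 (one virtual device per RX queue); it prints the commands so they can be reviewed before piping to sh:

```shell
# Launch one pfcount per RX queue (dry run: prints the commands).
# Assumes eth4@N per-queue device names; the archive elided the originals.
start_pfcount_all() {
    dev=$1; queues=$2
    q=0
    while [ "$q" -lt "$queues" ]; do
        echo "./pfcount -e 1 -i ${dev}@${q} 2> ./pfcount${q}.log &"
        q=$((q + 1))
    done
}

start_pfcount_all eth4 16   # pipe to sh to actually start them
```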
> Today I tested a PF_RING-aware ixgbe driver with "transparent_mode=1" and
> got a similar result (800 Mbps).
>
>
> On 11/30/2010 08:28 AM, Rui wrote:
> hi luca:
> I have tuned some network parameters, such as IRQ affinity,
> wmem_default, tcp_window_scaling, etc., and observed that the throughput
> reached 6 Gbps+, but with imbalanced traffic pfcount performance is
> still bad (limited to 800 Mbps).
> Do you think that is normal? (Is it too low?)
> What else can I do to improve this situation?
> I suspect that general system optimization only increases the total
> throughput but won't help with imbalanced traffic.
> regards
> rui
> From: [email protected]
> Date: Sun, 28 Nov 2010 11:00:22 +0100
> To: [email protected]
> Subject: Re: [Ntop-misc] pfcount drop much when (800Mbps) falls at same
> RX-queue
> Rui
> if you put the card in promiscuous mode (e.g. via pfcount) you drop
> traffic, as traffic is not discarded by the NIC. Unless you have balance-able
> traffic (i.e. if you always send the same packet it will definitely go
> onto the same queue), what you observe is correct. Please read some of my
> papers, which describe how to optimize the system. Unless you do that, the
> performance will be poor.
> Thanks Luca
> On Nov 27, 2010, at 3:52 AM, Rui wrote:
> hi
> today I did a test with pfcount, the latest PF_RING, and an Intel 10G
> NIC (ixgbe, without the PF_RING-aware driver, transparent_mode=0).
> I used stress-test equipment to generate the traffic; the packets have
> the same src IP, dst IP, and ports, so they get the same RSS hash result
> and fall into the same RX queue.
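This is why a single flow cannot be spread: RSS maps a hash of the (src IP, dst IP, src port, dst port) tuple to a queue, so identical tuples always select the same queue. An illustrative sketch (md5 stands in for the NIC's Toeplitz hash; the mapping principle is the same):

```shell
# Illustrative only: real NICs use a Toeplitz hash, but the principle is
# identical -- the queue is a pure function of the flow tuple.
rss_queue() {
    # $1 = "srcip dstip sport dport", $2 = number of RX queues
    h=$(printf '%s' "$1" | md5sum | cut -c1-8)
    echo $(( 0x$h % $2 ))
}

rss_queue "10.0.0.1 10.0.0.2 1234 80" 16   # same tuple ...
rss_queue "10.0.0.1 10.0.0.2 1234 80" 16   # ... always the same queue
```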
> I found that the performance is bad: pfcount drops a lot if the
> traffic exceeds 800 Mbps.
> The performance is good if I don't start pfcount: "ifconfig ethX"
> indicates no packet drops even when the throughput reaches 6 Gbps.
> My machine (HP DL380 G6) has 16 cores, Xeon 2.5 GHz with hyperthreading;
> the 10G NIC is installed in a PCIe 4x slot (I don't have an 8x slot at
> the moment).
> Any comments about this situation? Is it normal?
> _______________________________________________
> Ntop-misc mailing list
> [email protected]
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ---
> Keep looking, don't settle - Steve Jobs
