Soumyendra
If one channel "seems to attract more traffic", the traffic is probably not 
balanced across queues (the card applies a hardware hash function to packets; 
there is not much we can do about that).
The way you load the driver for multiqueue should be fine; we usually use 
something like:
insmod ./ixgbe.ko MQ=1,1 RSS=8,8 FdirMode=0,0 num_rx_slots=32768
If you want to try DNA, the standard PF_RING distribution includes everything 
you need (without a license you get a 5-minute demo per session, with no limit 
on the number of ports).
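For reference, a typical loading sequence on the Rx box might look like the 
following sketch. The module paths are placeholders and the parameter values 
are the ones discussed in this thread; adjust both for your setup:

```shell
# Load PF_RING first, with transparent_mode=2 so packets are delivered
# only through PF_RING (interface counters will not move until a
# consumer such as pfcount_multichannel attaches).
insmod ./pf_ring.ko transparent_mode=2

# Then load the PF_RING/TNAPI-aware ixgbe with multiqueue enabled:
# MQ=1,1 turns on multiqueue for both ports, RSS=8,8 spreads flows
# across 8 hardware queues per port.
insmod ./ixgbe.ko MQ=1,1 RSS=8,8 FdirMode=0,0 num_rx_slots=32768

# Bring the interface up and start a multichannel capture on it.
ifconfig tnapi0 up
./pfcount_multichannel -i tnapi0
```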

On Nov 19, 2011, at 2:31 PM, Soumyendra Narayan Saha (Vehere) wrote:

> I mean I see 3.5 GB of data after 8 seconds from "ifconfig tnapi0" on the
> Rx port if pfcount_multichannel is reading (and itself showing lower
> numbers); otherwise the count does not go up.
> Once we see all bits pumped from the Tx port (around 3.8 Gbps) I am
> going to try DNA on Tx initially. If I see no packet loss with my
> parallelized app (each thread processes packets on the core they were
> received on) with TNAPI, I am going to stick with TNAPI; otherwise I
> will move to DNA for Rx. Is there a trial license included in the svn
> repository for DNA? How many ports can we use it on per machine?
> 
> Do we need to run multiple pfcount_multichannel instances for the
> MQ=1,1 setting with ixgbe? The context is TNAPI...
> Or does a single queue perform better, given that each packet is still
> processed on the core it was received on, by individual threads
> running per core?

If you are using TNAPI, it is probably better to have multiple queues.
If you are using DNA, it depends on your application: DNA can capture at 10 
Gbit line rate with a single queue, but that leaves only a few CPU cycles for 
packet processing.
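One way to check whether the hardware hash is actually spreading your traffic, 
rather than something being wrong in the PF_RING setup, is to look at the 
per-queue counters the ixgbe driver exports. A sketch, assuming the Rx 
interface is tnapi0:

```shell
# Per-queue Rx packet counters maintained by the ixgbe driver itself.
# If nearly everything lands in rx_queue_0, the RSS hash is not
# spreading your flows (e.g. a pcap with few distinct flows replayed
# by pfsend will keep hashing to the same queue).
ethtool -S tnapi0 | grep rx_queue

# PF_RING's own view of the configured rings and attached consumers.
cat /proc/net/pf_ring/info
```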

Best regards
Alfredo

> 
> Soumyendra
> 
> 
> On Sat, Nov 19, 2011 at 6:46 PM, Soumyendra Narayan Saha (Vehere)
> <[email protected]> wrote:
>> I am using the PF_RING-aware ixgbe (plain "insmod ixgbe.ko") on a Dell
>> server (8-core CPU) for Tx, and the TNAPIv2-enabled ixgbe on the Rx
>> interface on an HP server (32-core CPU). Both sides have dca and
>> ioatdma loaded.
>> pfsend on eth0 using a .pcap file does not exceed 3.8 Gbps (which is
>> OK considering it is not DNA or TNAPI), but on the Rx interface I see
>> some strange numbers.
>> 
>> 1. First, the ifconfig tnapi0 Rx stats don't go up when no application
>> is reading from the interface (which I think is OK, and I guess is a
>> change due to the TNAPI driver modifications or the transparent_mode=2
>> I am using at pf_ring insmod). But I do see the correct number of bits
>> pumped from the Tx port (approx 3.5 GB of data after 8 secs, giving
>> one second approx for each bit).
>> 
>> 2. pfcount_multichannel, run as "pfcount_multichannel -i tnapi0", shows
>> only about 300 Mbps aggregate across all 16 channels (this is a
>> 32-core HP server). The first channel (0) seems to attract more
>> traffic, around 250 Mbps, while the other 15 channels are at 5 to 10
>> Mbps each. Something is seriously wrong in my config. I have loaded
>> pf_ring.ko (from 5.1.0 svn) with transparent_mode=2 and ixgbe.ko with
>> (MQ=32,32 RSS=32,32). I also tried (MQ=1,1 and RSS=0,0,0,0),
>> (RSS=0,0,0,0), and plain insmod ixgbe.ko. The only thing I haven't
>> tried so far is (just RSS=32,32,32,32). With plain ixgbe.ko and
>> pfcount I get around 800 Mbps initially, but it falls off steadily...
>> 
>> Thanks !
>> 
>> Soumyendra
>> 
> _______________________________________________
> Ntop-misc mailing list
> [email protected]
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
