I shall send the patch once I find the correct fix for it.

Interestingly, the jump from around 16 threads to 72 (of the 32 cores, 16
was the max I could use for Rx and 16 for Tx) happened when I moved from
Ubuntu Lucid to Oneiric, so I think it has to do with changes on the Linux
side.

Rgs
Soumyendra

Note: the 32 cores above are overkill, presumably because of a lack of
confidence in myself and, to some extent, in the open source community's
capability to do packet capture and multi-key search at 10 Gbps. We "can"
definitely do it with this processing power :)

On 26 November 2011 12:51, Luca Deri <[email protected]> wrote:

>
> On Nov 26, 2011, at 4:28 AM, Soumyendra Saha wrote:
>
> > Many thanks for your earlier reply.
> > Though the load is not shared evenly, I got a setting combining both
> > FlowDir and RSS that does a decent job.
> > I am dealing with packets at a national gateway. Hence there are many
> > encapsulating headers like pseudowire, PPP, VLAN, MPLS, etc., so the
> > hardware obviously is at a loss to parse all that and get to the IP or
> > TCP headers. Hence the fallback to queue 0 (from both FlowDir and RSS),
> > which is getting a high amount of traffic in comparison.
> > Hopefully I can distribute packets to cores/queues in a round-robin
> > fashion with a hack in ixgbe. I don't need to track flows (that will be
> > done in userspace); just spreading packets across CPUs is fine.
> >
> I don't see how you can do that. What you can do instead is to distribute
> (if possible) packets across queues based on the port.
>
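For archive readers, here is a hedged sketch of what port-based steering can look like on ixgbe, using ethtool n-tuple filters (Flow Director). The interface name `eth0`, the ports, and the queue numbers are placeholders of my own, not anything from this thread, and flag support varies by driver and kernel version:

```shell
# Hedged sketch: steer traffic to specific RX queues by TCP port using
# ethtool n-tuple filters (Flow Director on ixgbe). "eth0", the ports,
# and the queue numbers are placeholders; check your driver's support.

# Enable n-tuple filtering on the NIC
ethtool -K eth0 ntuple on

# Send TCP traffic with destination port 80 to RX queue 2
ethtool -N eth0 flow-type tcp4 dst-port 80 action 2

# Send TCP traffic with destination port 443 to RX queue 3
ethtool -N eth0 flow-type tcp4 dst-port 443 action 3

# Show the classification rules currently installed
ethtool -n eth0
```

This only helps when the hardware can actually parse down to the L4 header, which, as noted above, is exactly the problem with heavily encapsulated gateway traffic.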
> > Who is supposed to set the timestamp on packets? I can see all packets
> > with a time offset of 0.
> The DNA driver does set the timestamp if you pass a buffer; if not (you
> pass a pointer), we return 0 as the timestamp. This is because computing
> it is costly and not everyone needs it (e.g. if you route/bridge packets
> you don't care).
> > Is it ixgbe / pf_ring / userland? gettimeofday() is very costly from
> > userspace. This is slowing down pfcount processing in my case, I think
> > (but I need to verify that).
>
> It is costly everywhere.
>
> >
> > Also, I found that PF_RING's pfring_open_multichannel() creates too many
> > threads (72) for any number of RSS settings on a 32-CPU machine running
> > kernel 2.6.38 with MQ=1,1 RSS=8,8 FlowDir=1,1.
> >
>
> I hope you can provide us with a patch, as I don't have a computer with so
> many CPU cores :-)
>
> Luca
>
>
> > Regards,
> > Soumyendra
> >
> > _______________________________________________
> > Ntop-misc mailing list
> > [email protected]
> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
> ---
> Keep looking, don't settle - Steve Jobs
>