Hi Alexander

On Feb 17, 2010, at 3:03 PM, Alexander Didebulidze wrote:
> Hi Luca,
>
> if I load the igb TNAPI driver with parameter RSS=2,2 I get 2 kernel
> TNAPI threads...
>
> If I capture from eth0 (without @0 and @1), pf_ring uses only one
> ring buffer...
>
> in /proc/net/pf_ring/info I see:
> PF_RING Version  : 4.1.0 ($Revision: 4012 $)
> Ring slots       : 4096
> Slot version     : 10
> Capture TX       : Yes [RX+TX]
> IP Defragment    : No
> Transparent mode : Yes
> Total rings      : 1   <---- ???
> Total plugins    : 0
>
> would the same number of rings as RX queues improve capture performance?

It depends on whether you have traffic to handle. poll() is very costly, so it's better to have something to do than to rest/work/rest... forever.

> is it possible to use 2 or more rings from unmodified PCAP-based
> capturing applications when using pcap+pfring?

Yes. You can bind a ring per queue.

Luca

> ---
>
> you probably knew this, but I get much better results with the TNAPI igb
> driver and PCAP applications without using pf_ring at all.
> It's really cool because people who don't want to or can't switch to
> pf_ring can also improve performance using this driver.
>
> with the unmodified driver I get ~500-600 Kpps, and with the TNAPI driver
> I get more than 1000 Kpps.
> if you can confirm this, then that's something you could mention on your
> blog... :)
>
> I'm curious whether TNAPI (without pf_ring) can also be used to generally
> improve Linux network RX (and maybe also TX) performance. If so, it
> would be nice to have threaded NAPI in the normal gigabit Ethernet
> drivers (maybe as a module parameter, napi_thread=1).
> It would probably be useful for servers which send and receive a lot of
> small <100-byte packets.
>
>
> Best Regards,
> Alexander

---
If you can not measure it, you can not improve it - Lord Kelvin

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
