Roman,

On 01/18/2012 07:44 PM, rchertov wrote:
> Ricard,
>
> What driver and kernel version are you using? I also assume that you
> did not modify the myricom driver to work with click's polling mode,
> right?

I was using click 1.8-trunk with a patchless 2.6.32 kernel, without any
driver modification. The result obtained was about 600 kpkts/sec
processed (I was not generating the packets, only processing the
incoming ones).
Regards,
Ricard

P.S. I noticed that with the myricom NIC, small packets (64B) were
processed more slowly than 128B ones.

> I am using myricom cards for packet generation, and a dual port Intel
> for bridge duties.
>
> Roman
>
> On 01.18.2012 05:38, Ricard Vilalta wrote:
>> Hi Roman,
>>
>> I have recently published the following paper using click and 10GE
>> transceivers.
>> http://www.cttc.es/resources/doc/110728-hpsr-mplstp-final-46195.pdf
>>
>> I hope an extended version will be published soon in a journal.
>>
>> Best Regards,
>> Ricard
>>
>> On 01/18/2012 02:29 PM, Luigi Rizzo wrote:
>>> On Wed, Jan 18, 2012 at 2:13 AM, rchertov <rcher...@cs.ucsb.edu> wrote:
>>>> I finally got my hands on some 10GE equipment and started to play
>>>> around with Click. So I noticed the following. On a 2.6.24.7 patched
>>>> kernel using the 3.7.17 ixgbe driver, I get around 200K pps when
>>>> running a node as a bridge (click pulled from git today). However,
>>>> when on exactly the same node I run exactly the same test but use
>>>> click-1.7.0rc1, I can easily achieve 300K pps. I have also tried
>>>> 2.6.35.14-106.fc14.i686 using the latest Click and I still got
>>>> around 200K pps.
>>>>
>>>> I am curious about people's experiences when running Click on 10GE
>>>> equipment. Has anybody got the RouteBricks MQ code running?
>>>> Everything compiled for me, but I get pretty strange packet
>>>> forwarding performance, where the data is either delayed by quite a
>>>> bit or it is just corrupted.
>>>>
>>>> My fancy one-way bridge config:
>>>>
>>>> fd :: FromDevice(eth2, PROMISC true, BURST 32)
>>>>    -> ctr1 :: AverageCounter
>>>>    -> q :: Queue(4096)
>>>>    -> ctr2 :: AverageCounter
>>>>    -> ToDevice(eth3, BURST 64);
>>>
>>> remember that fetching timestamps, even staying in-kernel,
>>> is extremely expensive (on the order of 250-500 ns), so if you use
>>> one of those elements in your pipeline you won't be able
>>> to get decent performance.
>>>
>>> try to remove one or both counters and see if that improves
>>> the throughput (this said, 200 or 300 kpps really seems too
>>> low to be explained by timestamps)
>>>
>>> cheers
>>> luigi

-- 
______________________________________________________________

Ricard Vilalta
Research Engineer
Optical Networking Area (ONA) http://wikiona.cttc.es/
CTTC - Centre Tecnològic de Telecomunicacions de Catalunya
Parc Mediterrani de la Tecnologia (PMT)
Av. Carl Friedrich Gauss 7,
08860 Castelldefels (Barcelona), Spain
http://www.cttc.es/
Phone: +34 93 396 71 70 (ext. 2232). Fax: +34 93 645 29 01
E-mail: ricard.vila...@cttc.es

_______________________________________________
click mailing list
click@amsterdam.lcs.mit.edu
https://amsterdam.lcs.mit.edu/mailman/listinfo/click
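[Editor's note: for readers following Luigi's suggestion, the counter-free
variant of the bridge config quoted above would simply drop the two
AverageCounter elements; a sketch, untested:]

```
fd :: FromDevice(eth2, PROMISC true, BURST 32)
   -> q :: Queue(4096)
   -> ToDevice(eth3, BURST 64);
```

Comparing throughput of this pipeline against the instrumented one isolates
how much of the cost comes from the timestamp fetches in the counters.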