My experience with 6.0-CURRENT has been that I can push at least ~400 kpps INTO THE KERNEL from a gigE em card on its own 64-bit PCI-X 133 MHz bus (i.e., the bus is uncontended), basically with an out-of-the-box GENERIC kernel on a dual-CPU box with HTT disabled and no debugging options, using small 50-60 byte UDP packets.
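For context, a back-of-envelope sketch of what that number means on gigabit Ethernet (the per-frame overheads are standard Ethernet framing constants; the 60-byte payload is taken from the test above):

```shell
# What can GigE carry at this packet size, and how far from wire rate
# is 400 kpps?
#
# 60-byte UDP payload + 8 (UDP) + 20 (IP) + 14 (Ethernet) + 4 (FCS)
# = 106-byte frame; add 8 (preamble) + 12 (inter-frame gap)
# = 126 bytes of line occupancy per packet.
line_bits=$((126 * 8))
max_pps=$((1000000000 / line_bits))
echo "theoretical max at this size: ${max_pps} pps"            # 992063 pps
echo "400 kpps is $((400000 * 100 / max_pps))% of wire rate"   # 40%
```

So 400 kpps is roughly 40% of what the wire itself can deliver at this packet size; the remaining gap is per-packet processing cost, not bus or link bandwidth.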
I haven't measured how many I can push THROUGH to a second card and forward; that will probably lower the numbers. My tests were done without polling, so with a very high interrupt load, which also hurts in a high-traffic scenario. Still, that's way better than your numbers. Also, make sure you are not bottlenecking on the sender side, e.g., make sure that your sender can actually push out more pps than what you appear to be bottlenecking on in the router.

-Bosko

On Wed, Apr 20, 2005 at 12:12:00AM +0300, Petri Helenius wrote:
> Eivind Hestnes wrote:
>
> > It's correct that the card is plugged into a 32-bit 33 MHz PCI slot.
> > If I'm not wrong, 33 MHz PCI slots have a peak transfer rate of 133
> > MByte/s. However, when pulling 180 Mbit/s without polling enabled,
> > the system is barely responsive due to the interrupt load. I'll
> > try increasing the polling frequency to see if this increases the
> > bandwidth with polling enabled. Thanks for the advice, btw.
>
> There is something "interesting" going on in the em driver, but I haven't
> had the time to profile it properly, and Intel has been less than
> forthcoming with the specification, which makes it more challenging to
> try to optimize the driver further.
>
> Pete
>
> > - E.
> >
> > Jon Noack wrote:
> >
> > > On 4/19/2005 1:32 PM, Eivind Hestnes wrote:
> > >
> > > > I have an Intel Pro 1000 MT (PWLA8490MT) NIC (em(4) driver 1.7.35)
> > > > installed in a Pentium III 500 MHz with 512 MB RAM (100 MHz) running
> > > > FreeBSD 5.4-RC3. The machine is routing traffic between multiple
> > > > VLANs. Recently I did a benchmark with/without device polling
> > > > enabled. Without device polling I was able to transfer roughly
> > > > 180 Mbit/s. The router, however, was suffering when doing this
> > > > benchmark: interrupt load was peaking at 100%, and overall the
> > > > system itself was quite unusable (_very_ high system load). With
> > > > device polling enabled, the interrupt load stayed stable around
> > > > 40-50%, but the max transfer rate was nearly 70 Mbit/s. Not very
> > > > scientific tests, but they gave me a rough indication.
> > >
> > > The card is plugged into a 32-bit PCI slot, correct? If so, 180
> > > Mbit/s is decent. I have a gigabit LAN at home using Pro 1000 MTs
> > > (in 32-bit PCI slots) and get NFS transfers maxing out around 23
> > > MB/s, which is ~180 Mbit/s. Gigabit performance with 32-bit cards is
> > > atrocious. It reminds me of the old 100 Mbit/s ISA cards...
> > >
> > > > <snip>
> > > >
> > > > HZ is set to 1000 as recommended in the README for the em(4)
> > > > driver. The driver is of course compiled into the kernel.
> > >
> > > You'll need HZ set to more than 1000 for gigabit; bump it up to at
> > > least 2000. That should increase polling throughput a lot. I'm not
> > > sure about other polling parameters, however.
> > >
> > > Jon

-- 
Bosko Milekic
[EMAIL PROTECTED]
[EMAIL PROTECTED]

_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
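Since much of the thread revolves around device polling, here is roughly what that configuration looks like on a 5.x-era box. This is a sketch based on the polling(4) manual page, not a verified recipe for this exact setup; HZ=2000 follows Jon's suggestion above, and the user_frac value is purely illustrative:

```shell
# Kernel configuration (requires a rebuild):
#   options DEVICE_POLLING
#   options HZ=2000          # >1000 suggested for gigabit in the thread

# Enable polling globally (5.x uses a global sysctl; per-interface
# "ifconfig em0 polling" only appeared in later releases):
sysctl kern.polling.enable=1

# Cap the share of each tick polling may consume vs. userland work
# (illustrative value):
sysctl kern.polling.user_frac=50

# Inspect the remaining polling knobs:
sysctl kern.polling
```

Raising HZ increases the polling frequency, which is why it matters so much here: with polling, packets are drained at most once per tick rather than per interrupt.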