Paul wrote:
The higher I set the buffers, the worse it gets: with 256 and 512 I get
about 50-60k more pps than I do with 2048 or 4096. You would think it
would be the other way around, but obviously there is some contention
going on. :/
Looks like in bridge mode hw.em.rxd=512 and hw.em.txd=512 yields the best
results as well; reducing or increasing those leads to worse performance.
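For reference, those settings are loader tunables, so they would go in
/boot/loader.conf in the same format Ingo shows below (assuming the em(4)
driver; values take effect on the next boot):

```
hw.em.rxd=512
hw.em.txd=512
```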
By the way, is there any news on hwpmc support for newer CPUs? Last time I
checked it was a real pain to get it working with Core 2 CPUs. :(
I'm sticking with 512 for now, as anything higher seems to make it worse.
Keep in mind, I'm using random source IPs and random source and
destination ports. That should have zero impact on the amount of pps it
can route, but for some reason it seems to; any ideas on that one? A
single stream from one source IP/port to one destination IP/port seems to
use less CPU, although I haven't generated the same pps with that yet. I
am going to test it soon.
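One possible explanation (my guess, not something measured here) is that
randomized tuples create a huge number of distinct flows, so any per-flow
state the stack keeps (route/flow caches, hash buckets) stops being
effective, while a single stream hits the same cached entry every time. A
minimal sketch counting distinct 4-tuples under the two traffic patterns
(the addresses and counts are made up for illustration):

```python
import random

def distinct_flows(n_packets, randomize):
    """Count distinct (src_ip, src_port, dst_ip, dst_port) tuples seen."""
    flows = set()
    for _ in range(n_packets):
        if randomize:
            # random source/destination IPs and ports, as in my test
            flow = (random.getrandbits(32), random.randrange(65536),
                    random.getrandbits(32), random.randrange(65536))
        else:
            # one fixed stream: 10.0.0.1:1234 -> 10.0.0.2:80
            flow = (0x0A000001, 1234, 0x0A000002, 80)
        flows.add(flow)
    return len(flows)

# single stream: one flow, so per-flow caches hit on every packet
print(distinct_flows(100000, randomize=False))
# randomized tuples: nearly every packet is a new flow, so caches thrash
print(distinct_flows(100000, randomize=True))
```

With randomized tuples essentially every packet is a cache miss, which
would show up as extra CPU per packet even at the same pps.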
Ingo Flaschberger wrote:
Dear Paul,
I tried this. I put 6-STABLE (6.3) on, and with the default driver it was
slower than FreeBSD 7.
have you set the rx/tx buffers?
/boot/loader.conf
hw.em.rxd=4096
hw.em.txd=4096
bye,
Ingo
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
--
Best Wishes,
Stefan Lambrev
ICQ# 24134177