Hello,

Kris Kennaway wrote:
Stefan Lambrev wrote:

Thanks for investigating this. One thing to note is that IP flows from the same connection always go down the same interface, because Ethernet is not allowed to reorder frames. The hash uses src-mac, dst-mac, src-ip and dst-ip (see lagg_hashmbuf), so when performance testing make sure your traffic varies in these values. Adding TCP/UDP ports to the hashing may help.
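For illustration, here is a rough userland sketch of that kind of per-flow hash. The FNV-1a constants and the struct layout are just assumptions for the example, not lagg_hashmbuf's exact algorithm:

#include <stdint.h>
#include <stdio.h>

/* The fields lagg hashes over (illustrative layout, not the mbuf parsing). */
struct flow_key {
        uint8_t  src_mac[6];
        uint8_t  dst_mac[6];
        uint32_t src_ip;        /* raw 32-bit address values */
        uint32_t dst_ip;
};

/* FNV-1a, folded over each field so struct padding never enters the hash. */
static uint32_t
fnv32_buf(const void *buf, int len, uint32_t hash)
{
        const uint8_t *p = buf;

        while (len-- > 0)
                hash = (hash ^ *p++) * 0x01000193u;
        return (hash);
}

/* Same flow -> same port, so Ethernet frames are never reordered. */
static int
flow_to_port(const struct flow_key *fk, int nports)
{
        uint32_t h = 0x811c9dc5u;

        h = fnv32_buf(fk->src_mac, sizeof(fk->src_mac), h);
        h = fnv32_buf(fk->dst_mac, sizeof(fk->dst_mac), h);
        h = fnv32_buf(&fk->src_ip, sizeof(fk->src_ip), h);
        h = fnv32_buf(&fk->dst_ip, sizeof(fk->dst_ip), h);
        return (h % nports);
}

int
main(void)
{
        struct flow_key fk = {
                .src_mac = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x01 },
                .dst_mac = { 0x00, 0x16, 0x3e, 0x00, 0x00, 0x02 },
                .src_ip  = 0x0a000001,  /* 10.0.0.1 */
                .dst_ip  = 0x0a000002,  /* 10.0.0.2 */
        };

        printf("flow -> port %d of 2\n", flow_to_port(&fk, 2));
        return (0);
}

The point is that every packet of a given flow hashes to the same port, so frames are never reordered, but it also means a single flow can never use more than one link's worth of bandwidth.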
The traffic that I generate has a random/spoofed src part, so it is split between the interfaces for sure :)

Here you can find hwpmc and lock_profiling results taken while under load:
http://89.186.204.158/lock_profiling-lagg.txt
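(For anyone who wants to reproduce this kind of profile: it requires a kernel built with "options LOCK_PROFILING", and the stats are then toggled and dumped via the debug.lock.prof.* sysctls - roughly debug.lock.prof.enable=1 before the load, then debug.lock.prof.enable=0 and debug.lock.prof.stats afterwards. I am going from memory on the exact sysctl names, so double-check them on your kernel version.)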

OK, this shows the following major problems:

39 22375065 1500649 5690741 3 0 119007 712359 /usr/src/sys/net/route.c:147 (sleep mutex:radix node head)
21 3012732 1905704 1896914 1 1 14102 496427 /usr/src/sys/netinet/ip_output.c:594 (sleep mutex:rtentry)
22 120 2073128 47 2 44109 0 3 /usr/src/sys/modules/if_lagg/../../net/ieee8023ad_lacp.c:503 (rw:if_lagg rwlock)
39 17857439 4262576 5690740 3 0 95072 1484738 /usr/src/sys/net/route.c:197 (sleep mutex:rtentry)

It looks like the if_lagg one has been fixed already in 8.0; it could probably be backported, but it requires some other infrastructure that might not be in 7.0.

The others have to do with concurrent transmission of packets (it is doing silly things with route lookups). kmacy has a WIP that fixes this. If you are interested in testing an 8.0 kernel with the fixes, let me know.
Well, those servers are only for tests, so I can test everything, but at some point I'll have to make a final decision about what to use in production :)

http://89.186.204.158/lagg-gprof.txt

http://89.186.204.158/lagg2-gprof.txt - I forgot this file earlier :)

I found that MD5Transform always takes ~14% (with rx/txcsum enabled or disabled).

Yeah, these don't have anything to do with MD5.
Well, I didn't find where MD5Transform() is called from, so I guess it's some 'magic' that I still do not understand ;)

And when running without lagg, MD5Transform takes up to 20% of the time.
Is this normal?

It is probably from the syncache. You could disable it (net.inet.tcp.syncookies_only) if you don't need strong protection against SYN flooding.
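(That is just a sysctl, so something like "sysctl net.inet.tcp.syncookies_only=1" at runtime, or the same setting in /etc/sysctl.conf to make it permanent, should do it.)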

Kris
How the server performs during SYN flooding is exactly what I'm testing at the moment :)
So I can't disable this.

Just for information, if someone is interested: I looked at how Linux (Ubuntu's 2.6.22-14-generic kernel) performs in the same situation. By default it doesn't perform at all - it barely replies to 100-200 packets/s. With syncookies enabled it can handle up to 70-90,000 pps (versus 250-270,000 on FreeBSD), but the server is very loaded and not very responsive.
Of course this doesn't mean that FreeBSD can't perform better ;)

I plan to test iptables, a newer kernel, various options, and maybe a few other distros.