Henning Brauer wrote:
> * nate <[EMAIL PROTECTED]> [2007-06-05 21:44]:
>> I built 3 OpenBSD 3.6(?) servers in mid 2005 with these cards, and
>> was able to get a peak throughput of about 520Mbps in bridged mode
>> (pf disabled) measured using iperf.
>
> the single-stream tcp test iperf uses is pretty meaningless
> (unless.. well, that's another story)
>
>> Interrupt cpu time was ~30%, the rest of the cpu was idle.
hmm, well I would expect a single connection to give a maximum number for throughput, since there's no extra processing versus multiple connections (not that multiple connections should matter anyway, since it was a bridge and pf was disabled for the test). It doesn't make sense to me why more connections would increase throughput; can you (or someone) explain why that would be the case?

I also would expect that this maximum number likely would not be reached once pf is enabled and 'real world' traffic is flowing through the system, with it tracking thousands of states from the ~400 hosts on both sides of the firewall. But at least it gave me a number: if I saw the same interrupt CPU% I could reasonably expect the box to be maxed out. Fortunately normal network traffic was quite low; the biggest users of bandwidth were file copies via scp/rsync.

Someone replied to my original post off-list and told me about a bug, fixed in 2006 in the Intel GigE network driver, that reduced the number of PCI accesses per packet, increasing throughput and packets per second, which may have contributed to the performance I saw (again, in mid 2005).

Of course, at the time I participated in a thread very similar to this one, and I don't recall anyone responding with their OpenBSD network performance numbers, so I had nothing to compare against (were my numbers normal? low? high?). The FAQ says it depends on the system, and I had purchased the fastest 32-bit CPU on the market at the time (64-bit was still too new; I think that was one of the first releases to support 64-bit x86, and OpenBSD SMP crashed during boot on all the machines I tested). Even now I think I've gotten one response (may have been off-list) saying they get less than 500Mbit on their card (I forget which card offhand, though not the Intel one).

So regardless of the performance, I think it was about as fast as it was going to get at the time. Short of absurdly low numbers (under 200Mbit, at which point I would have purchased a dedicated hardware firewall; we had just purchased 3000 gigabit switch ports, so we were spending a bit), I was going to stick with OpenBSD, because pf is a great tool and easy to use, and the hardware was a good price too, with hardware RAID, triple redundant power supplies (each on a separate UPS-backed circuit), hot-swap fans, etc.

In the end the firewalls seem to have worked out well: it's been 2 years since they launched and they haven't had a problem, though fortunately network traffic is fairly low. Two firewalls are in active use (for different network segments, and as failover for each other's segments), with a third as a cold standby.

tcpreplay sounds like an interesting tool; I had not heard about it until your post.

nate
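
P.S. Thinking about it more, I suppose a single TCP stream is limited by the end hosts' window size and per-connection processing, so several streams in parallel would usually report a higher aggregate even through a plain bridge. A rough sketch of how I'd rerun the test with iperf's parallel mode (iperf 2.x flags; the address is made up):

  # receiver on one side of the bridge
  $ iperf -s

  # sender on the other side: 8 parallel TCP streams for 30 seconds,
  # with a larger socket buffer
  $ iperf -c 192.0.2.10 -P 8 -t 30 -w 256k

  # on the bridge itself, watch interrupt CPU time and per-device
  # interrupt counts while the test runs
  $ top
  $ vmstat -i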
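
P.P.S. For tcpreplay, something along these lines seems to be the usual approach (tcpreplay 3.x-style flags, which vary by version; em0 is just a guess at the Intel em(4) interface name): capture a sample of real traffic with tcpdump, replay it from a test box through the firewall as fast as it will go, and watch the interrupt load and pf state table while it runs. One caveat I'd expect: replayed packets aren't real two-way connections, so pf's stateful tracking won't behave exactly as it would with live traffic.

  # capture a sample of production traffic to a file
  $ tcpdump -i em0 -w sample.pcap

  # from a test box on one side, replay the capture at top speed,
  # looping it 10 times
  $ tcpreplay -i em0 --loop=10 --topspeed sample.pcap

  # on the firewall, check interrupt counts and the pf state table
  $ vmstat -i
  $ pfctl -si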