I don't have any 10 Gbps NICs, so I cannot comment on that level of
throughput. I do have a couple of 2.5 Gbps machines, and my system
saturates them with ease. There is no way to know whether I could get
another 7.5 Gbps out of the card without actually testing it.
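For what it's worth, testing that is straightforward with tcpbench(1),
which ships in the OpenBSD base system (the host name below is just a
placeholder):

```shell
# On the receiving machine, start a listener:
tcpbench -s

# On the sending machine, run a 30-second test against it
# and watch the reported throughput:
tcpbench -t 30 receiver.example.net

# iperf3 (from packages) works too when the far end isn't OpenBSD:
# iperf3 -c receiver.example.net -t 30
```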

Motherboard: Supermicro X13SAE flashed with newest BIOS.
CPU: Intel i5-13600K with iGPU underclocked.
RAM: 2 x 16 GiB DDR5-4400 unbuffered ECC modules.
Network interface adapter: Intel X710-DA2 flashed with newest firmware.

Switch: Juniper EX2300-24MP

Server and switch are connected via a dual-compatible SFP+ DAC Twinax
cable from FS.com.

The network interface adapter is a genuine Intel card, _not_ an OEM
one. I didn't want to deal with any headaches when flashing the
firmware.

I disabled SMT as well as the efficiency cores on the CPU. I tried to
reduce the use of the integrated GPU as much as I could: no inteldrm,
and the machine is only connected via a serial console. The reason for
that CPU is that the equivalent part without an iGPU was not officially
listed as ECC capable, so I played it safe and got the iGPU version.
That CPU only has 6 performance cores, though; and as one of Stuart's
links showed, this means I am only using 4 queues instead of maxing out
the card at 8. Had I known this, I would have gotten the CPU one step
up, which has 8 performance cores.
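For anyone who wants to check this on their own box, the driver's
attach line in dmesg reports the queue count (exact wording may vary
between releases); a rough sketch:

```shell
# The attach line includes the firmware/API version and queue count,
# something along the lines of "ixl0 ... msix, 8 queues":
dmesg | grep '^ixl'

# Number of CPUs actually online after disabling SMT and the
# efficiency cores in the BIOS:
sysctl hw.ncpuonline
```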

My machine does a lot more than just routing and firewalling, though.
It runs a web server, git repos, e-mail, DNS (including an
authoritative nameserver), and VPN servers, just to name a few things.
Despite all that, it handles 2.5 Gbps with no problem. I haven't done
any form of tuning either (e.g., using MTUs larger than 1500).
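If I ever did tune it, a larger MTU would be a one-liner on OpenBSD
(the interface name is an example, and the switch ports would need to
allow jumbo frames as well):

```shell
# Set a 9000-byte MTU on the interface for the running system:
ifconfig ixl0 mtu 9000

# Make it persistent across reboots by adding the same keyword
# to the interface's hostname.if(5) file:
echo 'mtu 9000' >> /etc/hostname.ixl0
```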

As Rachel pointed out, OpenBSD 7.3 does not work with that NIC's
firmware API when the newest firmware is flashed. I'm not sure what the
most recent firmware version with a working API is, but it is not a
problem for me since autonegotiation works just fine. If you don't
require very long runs and EMI is not an issue, then the copper
solution should serve you fine.
