Hi, sorry, but Outlook forces me into top-posting :-/
I've also checked how this looks on my system, and the result wasn't OK: similar to yours, ~200-300 Mbit/s. I hadn't looked at this before, and my NIC and CPU were different back then. Looking deeper after replacing my NIC with a 10 Gbit one, I scanned a little for what I could do; with the settings below I'm able to get close to 900 Mbit on my Internet connection. I have no way to simulate full 10 Gbit end-to-end, as I have only one server with 10 Gbit.

devil# wget -4 --no-proxy https://waw-pl-ping.vultr.com/vultr.com.1000MB.bin
--2025-10-25 19:43:04--  https://waw-pl-ping.vultr.com/vultr.com.1000MB.bin
Resolving waw-pl-ping.vultr.com (waw-pl-ping.vultr.com)... 70.34.242.24
Connecting to waw-pl-ping.vultr.com (waw-pl-ping.vultr.com)|70.34.242.24|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576000 (1000M) [application/octet-stream]
Saving to: 'vultr.com.1000MB.bin'

vultr.com.1000MB.bin 100%[=============================================>] 1000M  99.5MB/s    in 10s

2025-10-25 19:43:14 (98.0 MB/s) - 'vultr.com.1000MB.bin' saved [1048576000/1048576000]

devil#

What was added is strictly related to the network interface (one way to apply and persist these is sketched after the quoted message below):

# Interface-specific tuning for ixg0
hw.ixg0.rx_process_limit=512
hw.ixg0.tx_process_limit=512
hw.ixg0.enable_aim=1
hw.ixg0.num_tx_desc=4096
hw.ixg0.num_rx_desc=4096

# Interrupt moderation per queue (adjust based on workload)
hw.ixg0.q0.interrupt_rate=75000
hw.ixg0.q1.interrupt_rate=75000
hw.ixg0.q2.interrupt_rate=75000
hw.ixg0.q3.interrupt_rate=75000
hw.ixg0.q4.interrupt_rate=75000
hw.ixg0.q5.interrupt_rate=75000
hw.ixg0.q6.interrupt_rate=75000
hw.ixg0.q7.interrupt_rate=75000
hw.ixg0.q8.interrupt_rate=75000
hw.ixg0.q9.interrupt_rate=75000
hw.ixg0.q10.interrupt_rate=75000
hw.ixg0.q11.interrupt_rate=75000

Thanks,
--
Marcin Gondek / Drixter
http://fido.e-utp.net/
AS56662

-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of Peter Miller
Sent: Saturday, October 25, 2025 7:29 PM
To: Michael van Elst <[email protected]>
Cc: [email protected]
Subject: Re: Slow 'real world' network performance

On Sat, Oct 25, 2025 at 2:19 AM Michael van Elst <[email protected]> wrote:
> The NetBSD defaults are from another time. Tuning the settings is the
> first requirement on modern networks.
>
> But your "real world" is probably the internet. There, performance
> depends a lot on congestion control and error recovery, and the NetBSD
> code is old and "conservative".

I have never tuned my OSes before. How does one go about finding where the bottlenecks are? Are there any good resources you can recommend? I haven't started investigating this yet, but will look later when I have some time.

I should point out the obvious info I forgot. This particular test server is running NetBSD 10.1, with 4 cores of an Intel E5-2680 v4 and 8 GB of RAM. It has a 10 Gbit shared port, using vioif0. I'm not sure what the hard drive is, but it's 50 GB and shows up as QEMU. It's also no more than 250 miles from where I live and am testing from.

(Apologies for top-posting earlier)

--
Thanks
Peter
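For completeness, a minimal sketch of applying the settings above. This assumes the hw.ixg0.* names are sysctl nodes exposed by the ixg(4) driver on the running kernel; some of them (the descriptor counts in particular) may only take effect at boot, so verify each node before relying on it:

devil# sysctl hw.ixg0.rx_process_limit         # confirm the node exists and see its current value
devil# sysctl -w hw.ixg0.rx_process_limit=512  # set a single value at runtime
devil# sysctl -f /etc/sysctl.conf              # re-read name=value pairs after editing the file

The name=value lines from the message can go verbatim into /etc/sysctl.conf, which rc.d applies at boot.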
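On the bottleneck question in the quoted message, a few stock NetBSD tools make a reasonable starting point. A sketch rather than a recipe; what counts as abnormal depends on the hardware and the path:

devil# vmstat -i     # per-device interrupt counts; one queue taking everything hints at missing MSI-X/multi-queue
devil# netstat -s    # protocol statistics; watch whether TCP retransmits and out-of-order segments grow during a transfer
devil# netstat -w 1  # packet/byte rates at one-second intervals while a test runs
devil# top           # a core pegged in interrupt or system time is a common 10 Gbit bottleneck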
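And since the quoted reply mentions old defaults and congestion control: the knobs usually raised first live under net.inet.tcp. The names below follow NetBSD's sysctl tree, but treat the exact set, and whether cubic is built in, as something to verify on the running kernel:

devil# sysctl -w net.inet.tcp.recvbuf_auto=1        # let the receive buffer grow automatically
devil# sysctl -w net.inet.tcp.sendbuf_auto=1        # same for the send buffer
devil# sysctl -w net.inet.tcp.recvbuf_max=16777216  # 16 MB ceiling for auto-sizing (an example value)
devil# sysctl -w net.inet.tcp.sendbuf_max=16777216
devil# sysctl net.inet.tcp.congctl.available        # list built-in congestion control algorithms
devil# sysctl -w net.inet.tcp.congctl.selected=cubic

The 16 MB ceiling is arbitrary; the useful maximum is roughly the bandwidth-delay product of the path being tested, e.g. 10 Gbit/s at 10 ms RTT is about 12.5 MB in flight.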
