On Wed, Sep 17, 2025 at 04:04:38PM +0200, Rafael Sadowski wrote:
> On Tue Sep 16, 2025 at 03:01:33PM +0200, Rafael Sadowski wrote:
> > Hi all!
> > 
> > WireGuard shows severe performance degradation (95% bandwidth
> > loss) on Intel 10Gb interfaces compared to direct connections,
> > with significant packet loss patterns.
> > 
> > Performance Comparison:
> > 
> > ServerA (Chicago) - Intel 10Gb interface (ix0)
> > ServerB (Atlanta) - Intel 10Gb interface (ix3)
> > 
> > - Direct connection (iperf): 66.8 Mbps
> > - WireGuard tunnel (iperf): 3.3 Mbps
> > - Performance loss: 95%
> > 
> > The physical Intel interface (ix3) shows 149426 output failures:
> > 
> > ix3     1500  <Link>      f8:f2:1e:3c:9c:09 195418012     0 144748154 149426     0
> > 
> > suggesting hardware/driver level problems that worsen with
> > WireGuard traffic processing?
> > 
> > Are there known compatibility issues between ix(4) driver and
> > WireGuard packet processing?
> > 
> > Could the bridge configuration (veb0 + vport0) be contributing to
> > the packet loss patterns?
> > 
> > Any guidance on debugging approaches or known workarounds would be
> > greatly appreciated. I'm happy to provide additional data.
> > 
> > Data from ServerB
> > 
> 
> This ix3 + veb0 + vport0 + wg0 setup is what causes the problems we see on the "Intel 82599".
> 
> $ dmesg | grep ix3
> ix3 at pci5 dev 0 function 1 "Intel 82599" rev 0x01, msix, 16 queues, address f8:f2:1e:3c:9c:09
> 
> $ ifconfig ix3
> ix3: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
>         lladdr f8:f2:1e:3c:9c:09
>         index 4 priority 0 llprio 3
>         media: Ethernet 10GSFP+Cu (10GSFP+Cu full-duplex,rxpause,txpause)
>         status: active
> 
> I found two different servers with the same "Intel 82599":
> 
> $ dmesg | grep ix0
> ix0 at pci3 dev 0 function 0 "Intel 82599" rev 0x01, msix, 1 queue, address 84:2b:2b:de:d2:cc
> ix0 at pci3 dev 0 function 0 "Intel 82599" rev 0x01, msix, 1 queue, address 84:2b:2b:de:d2:cc
> ix0 at pci3 dev 0 function 0 "Intel 82599" rev 0x01, msix, 16 queues, address 84:2b:2b:de:d2:cc
> ix0 at pci3 dev 0 function 0 "Intel 82599" rev 0x01, msix, 16 queues, address 84:2b:2b:de:d2:cc
> ix0 at pci3 dev 0 function 0 "Intel 82599" rev 0x01, msix, 16 queues, address 84:2b:2b:de:d2:cc
> 
> $ ifconfig ix0
> ix0: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
>       lladdr 84:2b:2b:de:d2:cc
>       index 1 priority 0 llprio 3
>       media: Ethernet autoselect (10GbaseKR full-duplex)
>       status: active
> 
> With this I see 132 Mbits/sec via wg0. What I notice: media is
> different and there are 5 lines in the dmesg. Maybe that means something.

The five lines are probably there because the dmesg buffer contains the
messages from the last 5 boots.

Since traffic flows fine when wg is not in use, the issue is not the media.
Layer 1 issues would affect all traffic, not just the WireGuard packets.

Since this happens between servers in two different locations, did you
ensure that the link between the two is not traffic shaping and dropping
UDP packets like mad?
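
A quick way to check, assuming iperf3 (or iperf with -u) is available on
both machines and treating the hostname and rate below only as placeholders:
run a plain UDP test outside the tunnel and look at the loss it reports.

$ iperf3 -s                                          # on ServerA
$ iperf3 -c serverA.example.net -u -b 70M -l 1400    # on ServerB

If that already shows heavy datagram loss, the path is dropping UDP and wg0
only makes it visible; if it is clean, look closer at ix(4)/veb(4)/wg(4) on
the hosts.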

-- 
:wq Claudio
