On Tue, Sep 16, 2025 at 04:25:50PM +0200, Jan Klemkow wrote:
> On Tue, Sep 16, 2025 at 03:25:21PM +0200, Rafael Sadowski wrote:
> > On Tue Sep 16, 2025 at 03:18:28PM +0200, Jan Klemkow wrote:
> > > On Tue, Sep 16, 2025 at 03:01:33PM +0200, Rafael Sadowski wrote:
> > > > WireGuard shows severe performance degradation (95% bandwidth
> > > > loss) on Intel 10Gb interfaces compared to direct connections,
> > > > with significant packet loss patterns.
> > > > 
> > > > Performance Comparison:
> > > > 
> > > > ServerA (Chicago) - Intel 10Gb interface (ix0)
> > > > ServerB (Atlanta) - Intel 10Gb interface (ix3)
> > > > 
> > > > - Direct connection (iperf): 66.8 Mbps
> > > > - WireGuard tunnel (iperf): 3.3 Mbps
> > > > - Performance loss: 95%
> > > > 
> > > > The physical Intel interface (ix3) shows 149426 output failures:
> > > > 
> > > > ix3     1500  <Link>      f8:f2:1e:3c:9c:09 195418012     0 144748154 149426     0
> > > > 
> > > > Does this suggest hardware- or driver-level problems that worsen
> > > > with WireGuard traffic processing?
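
One thing that might help narrow this down: watch the interface's output
error counter live while the iperf run is going (interface name taken from
the output above, the one-second interval is arbitrary), e.g.:

    netstat -w 1 -I ix3

If the errors only climb while traffic goes through wg(4), that points at
the tunnel path rather than at the NIC itself.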
> > > > 
> > > > Are there known compatibility issues between ix(4) driver and
> > > > WireGuard packet processing?
> > > > 
> > > > Could the bridge configuration (veb0 + vport0) be contributing to
> > > > the packet loss patterns?
> > > 
> > > Yes. You will lose that kind of performance over bridge(4) and veb(4)
> > > because they use neither segmentation offloading nor parallel packet
> > > processing.
> > > 
> > > The wg(4) device may be missing similar performance features.
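
(A quick way to see which offload features each interface advertises, if
your ifconfig is recent enough to support it, is something like:

    ifconfig ix3 hwfeatures
    ifconfig vport0 hwfeatures

The virtual interfaces will typically list far fewer features than the
physical ix(4) ports.)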
> > > 
> > > > Any guidance on debugging approaches or known workarounds would be
> > > > greatly appreciated. I'm happy to provide additional data.
> > > 
> > > Could you provide netstat -s statistics from before and after your
> > > measurement?  Then we can see whether any error or drop counters are
> > > involved.
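
A simple way to capture that is to snapshot the counters around the run
and diff the two files (peer name and test length below are placeholders):

    netstat -s > /tmp/before
    iperf -c <peer> -t 30
    netstat -s > /tmp/after
    diff -u /tmp/before /tmp/after

Only the counters that changed during the test show up in the diff.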
> > 
> > Before iperf:
> > ...
> > 
> > After iperf via wg0:
> > ...
> 
> All counters look fine.
> Nothing stands out to me.

I think the interesting stuff is missing. This feels like an MTU issue to
me. I assume the goal is to push traffic from veb1 via vport1 over
wg(4). Now vport and veb have MTU 1500 but wg runs with MTU 1420.
So either the packets get fragmented and then maybe dropped in transit, or
PMTU discovery needs to kick in (which often enough fails).
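
If that is what is happening it should be easy to confirm.  A don't-fragment
ping through the tunnel around the wg MTU boundary (payload 1392 = 1420 - 20
IP - 8 ICMP, versus one byte more) gives a quick check of where the limit
is; the tunnel peer address below is just a placeholder:

    ping -D -s 1392 <wg peer address>
    ping -D -s 1393 <wg peer address>

The fragmentation counters in "netstat -s -p ip" should also move if packets
are actually being fragmented.  Dropping the vport/veb MTU to 1420, or
clamping the TCP MSS with a pf scrub rule (max-mss), would be the usual
workaround.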

-- 
:wq Claudio
