From TCP's point of view, the "Retr" column in your iperf result counts
retransmissions. That points to packet drops (or some other loss signal)
that push TCP's congestion control into throttling the throughput to some
degree.
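
If you want to see where those retransmits show up on the host side, the
TCP counters give a rough picture (the exact counter names vary a bit
between releases), for example:

    netstat -s -p tcp | grep -i retrans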

Try a different TCP congestion control algorithm. If the default is CUBIC,
try NewReno instead. I don't know your FreeBSD version, but if it is
FreeBSD 14-CURRENT I would be interested to see the difference, since there
is ongoing discussion about improving CUBIC.

cc@fbsd ~$ sysctl net.inet.tcp.cc
net.inet.tcp.cc.newreno.beta_ecn: 80
net.inet.tcp.cc.newreno.beta: 50
net.inet.tcp.cc.abe_frlossreduce: 0
net.inet.tcp.cc.abe: 0
net.inet.tcp.cc.available: newreno
net.inet.tcp.cc.algorithm: newreno
cc@fbsd ~$ sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno
cc@fbsd ~$ sudo kldload cc_cubic        << load the cc_cubic TCP congestion control module
cc@fbsd ~$ sysctl net.inet.tcp.cc.available
net.inet.tcp.cc.available: newreno, cubic
cc@fbsd ~$ sudo sysctl net.inet.tcp.cc.algorithm=cubic
net.inet.tcp.cc.algorithm: newreno -> cubic

cc@fbsd ~$ sudo sysctl net.inet.tcp.cc.algorithm
net.inet.tcp.cc.algorithm: cubic
cc@fbsd ~$ uname -a
FreeBSD fbsd.cc.home 13.1-RELEASE FreeBSD 13.1-RELEASE
releng/13.1-n250148-fc952ac2212 GENERIC amd64
cc@fbsd ~$
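
If CUBIC turns out to help, a minimal sketch for making the change persist
across reboots (just the standard loader.conf/sysctl.conf mechanism, nothing
specific to your setup):

    # /boot/loader.conf
    cc_cubic_load="YES"

    # /etc/sysctl.conf
    net.inet.tcp.cc.algorithm=cubic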


Best Regards,
Cheng Cui


On Wed, May 24, 2023 at 2:19 AM Benoit Chesneau <beno...@enki-multimedia.eu>
wrote:

> Sorry, I thought I had posted it already. It's a bridge:
>
> ```
> vlan200: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
>         options=1c280401<RXCSUM,LRO,LINKSTATE,RXCSUM_IPV6,NOMAP,TXTLS4,TXTLS6>
>         ether 9c:dc:71:4c:84:f0
>         groups: vlan
>         vlan: 200 vlanproto: 802.1q vlanpcp: 0 parent interface: mce0
>         media: Ethernet 25GBase-SR <full-duplex,rxpause,txpause>
>         status: active
>         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> vlan200bridge: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
>         ether 58:9c:fc:10:ff:95
>         id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
>         maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
>         root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
>         member: e0a_bastille0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>                 ifmaxaddr 0 port 10 priority 128 path cost 2000
>         member: vlan200 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>                 ifmaxaddr 0 port 8 priority 128 path cost 800
>         groups: bridge
>         nd6 options=9<PERFORMNUD,IFDISABLED>
> e0a_bastille0: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
>         description: vnet host interface for Bastille jail testing
>         options=8<VLAN_MTU>
>         ether 02:20:98:4c:84:f0
>         hwaddr 02:68:8a:24:67:0a
>         groups: epair
>         media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
>         status: active
>         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> ```
>
> After restarting the machine and removing the filtering:
>
> ```
> net.link.bridge.pfil_bridge=0
> net.link.bridge.pfil_onlyip=0
> net.link.bridge.pfil_member=0
> ```
>
> I get better results. Still not full speed, but since it goes through a
> bridge that seems normal. Not sure what the issue was...
>
> ```
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.01   sec  1.75 GBytes  14.8 Gbits/sec   74    936 KBytes
> [  5]   1.01-2.00   sec  1.31 GBytes  11.3 Gbits/sec   27   1.76 MBytes
> [  5]   2.00-3.00   sec  2.12 GBytes  18.2 Gbits/sec   34   1.74 MBytes
> [  5]   3.00-4.00   sec  2.08 GBytes  17.9 Gbits/sec   85   1.75 MBytes
> [  5]   4.00-5.00   sec  2.11 GBytes  18.2 Gbits/sec   37   1.75 MBytes
> [  5]   5.00-6.00   sec  2.09 GBytes  18.0 Gbits/sec   60   1.75 MBytes
> [  5]   6.00-7.00   sec  2.11 GBytes  18.2 Gbits/sec   10   1.50 MBytes
> [  5]   7.00-8.00   sec  1.51 GBytes  13.0 Gbits/sec   27   1.75 MBytes
> [  5]   8.00-9.00   sec  1.48 GBytes  12.7 Gbits/sec   75   1.50 MBytes
> [  5]   9.00-10.00  sec  2.09 GBytes  17.9 Gbits/sec   52   1.58 MBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  18.7 GBytes  16.0 Gbits/sec  481             sender
> [  5]   0.00-10.00  sec  18.7 GBytes  16.0 Gbits/sec                  receiver
> ```
>
>
>
> Benoît
>
>
> ------- Original Message -------
> On Tuesday, May 23rd, 2023 at 23:15, Marko Zec <z...@fer.hr> wrote:
>
>
> > On Tue, 23 May 2023 19:58:07 +0000
> > Benoit Chesneau beno...@enki-multimedia.eu wrote:
> >
> > > Hi all,
> > >
> > > I've created a jail using bastille and set up the networking. The main
> > > interface is a 25 Gbps NIC, and between hosts I get 24.6 Gbits/sec:
> >
> >
> > [...]
> >
> > > But between one host and the jail I only get 3.96 Gbits/sec
> >
> >
> > [...]
> >
> > > Is there a way to increase the performance of the jail? The NIC is a
> > > Mellanox ConnectX-4 Lx, mce(4).
> >
> >
> > Modern NICs offload a lot of the protocol stack processing (checksum,
> > segmentation, and/or reassembly) from the CPU to dedicated silicon,
> > whereas inter-vnet traffic has to be handled entirely in software;
> > that's where the difference comes from.
> >
> > Perhaps we could gain some speed by abusing mbuf flags to skip RXCSUM
> > for epair traffic, maybe even skip and fake TXCSUM...
> >
> > Marko
>
>
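
As a side note on Marko's point about offload: you can compare what each
interface actually advertises with "ifconfig -m", for example (interface
names taken from your output above):

    ifconfig -m mce0
    ifconfig -m e0a_bastille0

The physical NIC will list the CSUM/TSO/LRO capabilities, while the epair
side shows far fewer (per your output above, essentially just VLAN_MTU), so
that traffic is handled in software.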
