[vpp-dev] Issues with failsafe dpdk pmd in Azure

2022-04-09 Thread Peter Morrow
Hello,

I currently have vpp working nicely in Azure via the netvsc pmd; however, after
reading https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk and
https://fd.io/docs/vpp/v2101/usecases/vppinazure.html it sounds like I should
be using the failsafe pmd instead. So I gave this a try but ran into some
issues, some of which I've seen discussed on this mailing list, but none of the
solutions have worked for me thus far. I was able to make the failsafe pmd work
via dpdk-testpmd with standalone dpdk from Debian bullseye (dpdk 20.11).
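
For anyone wanting to reproduce the standalone check, a minimal dpdk-testpmd
invocation might look roughly like this. This is a sketch only; it assumes DPDK
20.11's dpdk-testpmd, that eth1 is the accelerated-networking interface, and
that 0ed6:00:02.0 (from the config below) is its mlx5 VF:

# -a allow-lists the mlx5 VF; the net_vdev_netvsc vdev spawns the failsafe port
sudo dpdk-testpmd -l 0-1 -n 2 \
    -a 0ed6:00:02.0 \
    --vdev="net_vdev_netvsc0,iface=eth1" \
    -- -i --port-topology=chained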

I'm running vpp 22.06 and an external dpdk at version 21.11, though I also see
the same thing when downgrading to 20.11. We have 3 interfaces: the first (eth0)
is a non-accelerated-networking interface which is not to be controlled by vpp;
the other two are data interfaces which are vpp owned. The dpdk section of my
vpp startup config looks like this:

dpdk {

dev 0ed6:00:02.0
vdev net_vdev_netvsc0,iface=eth1

dev 6fa1:00:02.0
vdev net_vdev_netvsc1,iface=eth2

base-virtaddr 0x2
}

When vpp starts up the 2 interfaces are shown:

me@azure:~$ sudo vppctl sh hard
              Name                Idx   Link  Hardware
FailsafeEthernet2  1 up   FailsafeEthernet2
  Link speed: 50 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_0 (1)   polling
  Ethernet address 00:22:48:4c:c0:e5
  FailsafeEthernet
carrier up full duplex max-frame-size 1518
flags: maybe-multiseg tx-offload rx-ip4-cksum
Devargs: fd(17),dev(net_tap_vsc0,remote=eth1)
rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
max rx packet len: 1522
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
rx offload active: ipv4-cksum scatter
tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
rss active:none
tx burst function: (not available)
rx burst function: (not available)

FailsafeEthernet4  2 up   FailsafeEthernet4
  Link speed: 50 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_1 (2)   polling
  Ethernet address 00:22:48:4c:c6:4a
  FailsafeEthernet
carrier up full duplex max-frame-size 1518
flags: maybe-multiseg tx-offload rx-ip4-cksum
Devargs: fd(33),dev(net_tap_vsc1,remote=eth2)
rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
max rx packet len: 1522
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
rx offload active: ipv4-cksum scatter
tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6
rss active:none
tx burst function: (not available)
rx burst function: (not available)

local0                             0    down  local0
  Link speed: unknown
  local
me@azure:~$

However, on enabling our application we see the following messages in the
vpp log and the vpp interfaces are unusable:

2022/04/09 09:11:16:424 notice dpdk   net_failsafe: Link update 
failed for sub_device 0 with error 1
2022/04/09 09:11:16:424 notice dpdk   net_failsafe: Link update 
failed for sub_device 0 with error 1
2022/04/09 09:12:32:380 notice dpdk   common_mlx5: Unable to find 
virtually contiguous chunk for address (0x10). rte_memseg_contig_walk() 
failed.

2022/04/09 09:12:36:144 notice ip6/link   enable: FailsafeEthernet2
2022/04/09 09:12:36:144 error  interface  hw_add_del_mac_address: 
dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error  interface  hw_add_del_mac_address: 
dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error  interface  hw_add_del_mac_address: 
dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:145 error  interface  hw_add_del_mac_address: 
dpdk_add_del_mac_address: mac address add/del failed: -28
2022/04/09 09:12:36:146 notice dpdk   Port 2: MAC address array full
2022/04/09 09:12:36:146 notice dpdk   Port 2: MAC address array full
2022/04/09 09:12:36:146 notice dpdk   Port 2: MAC address array full
2022/04/09 09:12:36:146 notic

Re: [vpp-dev] Issues with failsafe dpdk pmd in Azure

2022-04-11 Thread Peter Morrow
Thanks Stephen,

Very happy to stick with netvsc in that case.

Peter.

From: Stephen Hemminger 
Sent: 10 April 2022 18:59
To: Peter Morrow 
Cc: vpp-dev@lists.fd.io ; Long Li 
Subject: Re: Issues with failsafe dpdk pmd in Azure

On Sat, 9 Apr 2022 09:20:43 +
Peter Morrow  wrote:

> Hello,
>
> I currently have vpp working nicely in Azure via the netvsc pmd, however 
> after reading 
> https://docs.microsoft.com/en-us/azure/virtual-network/setup-dpdk and 
> https://fd.io/docs/vpp/v2101/usecases/vppinazure.html it sounds like I should 
> be using the failsafe pmd instead. So, I gave this a try but ran into some 
> issues, some of which I've seen discussed on this email group but none of the 
> solutions have worked for me thus far. I was able to make the failsafe pmd 
> work via dpdk-testpmd with dpdk standalone from debian bullseye (dpdk 20.11).

You have it backwards.  Failsafe is the older driver, which was developed to be
generic. Failsafe is slower because it has to go through the kernel for the slow
path. We would like to deprecate use of failsafe, but there are some use cases,
such as rte_flow, which the netvsc PMD does not support. Supporting rte_flow in
a software driver would require a significant amount of work (the same problem
as virtio).


>
> I'm running vpp 22.06 and an external dpdk at version 21.11, though also see 
> the same thing when downgrading to 20.11. We have 3 interfaces, the first is 
> a non accelerated networking interface which is not to be controlled by vpp 
> (eth0) the 2nd are data interfaces which are vpp owned. My dpdk section of 
> the vpp startup config looks like this:
>
> dpdk {
>
> dev 0ed6:00:02.0
> vdev net_vdev_netvsc0,iface=eth1
>
> dev 6fa1:00:02.0
> vdev net_vdev_netvsc1,iface=eth2
>
> base-virtaddr 0x2
> }
>
> When vpp starts up the 2 interfaces are shown:
>
> me@azure:~$ sudo vppctl sh hard
>   NameIdx   Link  Hardware
> FailsafeEthernet2  1 up   FailsafeEthernet2
>   Link speed: 50 Gbps
>   RX Queues:
> queue thread mode
> 0 vpp_wk_0 (1)   polling
>   Ethernet address 00:22:48:4c:c0:e5
>   FailsafeEthernet
> carrier up full duplex max-frame-size 1518
> flags: maybe-multiseg tx-offload rx-ip4-cksum
> Devargs: fd(17),dev(net_tap_vsc0,remote=eth1)
> rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
> tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
> max rx packet len: 1522
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
> rx offload active: ipv4-cksum scatter
> tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
> tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:none
> tx burst function: (not available)
> rx burst function: (not available)
>
> FailsafeEthernet4  2 up   FailsafeEthernet4
>   Link speed: 50 Gbps
>   RX Queues:
> queue thread mode
> 0 vpp_wk_1 (2)   polling
>   Ethernet address 00:22:48:4c:c6:4a
>   FailsafeEthernet
> carrier up full duplex max-frame-size 1518
> flags: maybe-multiseg tx-offload rx-ip4-cksum
> Devargs: fd(33),dev(net_tap_vsc1,remote=eth2)
> rx: queues 1 (max 16), desc 1024 (min 0 max 65535 align 1)
> tx: queues 8 (max 16), desc 1024 (min 0 max 65535 align 1)
> max rx packet len: 1522
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
> rx offload active: ipv4-cksum scatter
> tx offload avail:  ipv4-cksum udp-cksum tcp-cksum tcp-tso multi-segs
> tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> rss active:none
> tx burst function: (not available)
> rx burst function: (not available)
>
> local0 0down  local0
>   Link speed: unknown
>   local
> me@azure:~$
>
> However on enabling our application we see the following messages in the the 
> vpp log & the vpp interfaces are unusable:
>
> 2022/04/09 09:11:16:424 notice dpdk   net_failsafe: Link update 

Re: [vpp-dev] Issues with failsafe dpdk pmd in Azure

2022-04-12 Thread Peter Morrow
Thanks Long,

We are good now that the guidance is to stick with the netvsc pmd. Replying 
here so others might see this message on the vpp mailing list.

Peter.

From: Long Li 
Sent: 11 April 2022 22:39
To: Peter Morrow ; Stephen Hemminger 

Cc: vpp-dev@lists.fd.io 
Subject: RE: Issues with failsafe dpdk pmd in Azure


Hi Peter,



The failure you see on Failsafe/MLX5:

> 2022/04/09 09:11:16:424 notice dpdk   net_failsafe: Link update 
> failed for sub_device 0 with error 1
> 2022/04/09 09:12:32:380 notice dpdk   common_mlx5: Unable to find 
> virtually contiguous chunk for address (0x10). 
> rte_memseg_contig_walk() failed.



This is a known issue with MLX5 registering a buffer that was not allocated
through the DPDK API. It can happen if another driver or other code passes a
buffer to MLX5 that MLX5 can't register with the hardware.

The error detail is at:

https://github.com/DPDK/dpdk/blob/7cf3d07c3adcb015c303e4cdf2ef9712a65ce46d/drivers/common/mlx5/mlx5_common_mr.c#L626



If this is causing trouble on your setup, can you try setting
"mr_ext_memseg_en=0" for mlx5? This should get rid of the error.
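
For reference, one way this devarg might be wired into the VPP startup config is
shown below. This is a sketch only; it assumes the dpdk plugin's per-device
devargs option is passed through to the mlx5 VF listed earlier in the thread
(0ed6:00:02.0), which may not hold for every combination of failsafe/netvsc and
DPDK versions:

dpdk {
    # Sketch: hand the mlx5 devarg to the accelerated-networking VF.
    dev 0ed6:00:02.0 {
        devargs mr_ext_memseg_en=0
    }
    vdev net_vdev_netvsc0,iface=eth1
}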



Long




[vpp-dev] optimal buffer configuration on Azure

2022-06-29 Thread Peter Morrow
Hello,

In this example I've got a 4 vCPU Azure VM with 16G of RAM; 2G of that is given
to 1024 2MB huge pages:

$ cat /proc/meminfo  | grep -i huge
AnonHugePages: 71680 kB
ShmemHugePages:0 kB
FileHugePages: 0 kB
HugePages_Total:1024
HugePages_Free:1
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
Hugetlb: 2097152 kB
$

There are 2 interfaces which are vpp owned and which are both using the netvsc 
pmd:

$ sudo vppctl sh hard
              Name                Idx   Link  Hardware
GigabitEthernet1   1 up   GigabitEthernet1
  Link speed: 50 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_0 (1)   polling
1 vpp_wk_1 (2)   polling
  Ethernet address 60:45:bd:85:22:97
  Microsoft Hyper-V Netvsc
carrier up full duplex max-frame-size 0
flags: tx-offload rx-ip4-cksum
Devargs:
rx: queues 2 (max 64), desc 1024 (min 0 max 65535 align 1)
tx: queues 2 (max 64), desc 1024 (min 1 max 4096 align 1)
max rx packet len: 65536
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
rss active:ipv4-tcp ipv4 ipv6-tcp ipv6
tx burst function: (not available)
rx burst function: (not available)

GigabitEthernet2   2 up   GigabitEthernet2
  Link speed: 50 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_2 (3)   polling
1 vpp_wk_0 (1)   polling
  Ethernet address 60:45:bd:85:23:94
  Microsoft Hyper-V Netvsc
carrier up full duplex max-frame-size 0
flags: tx-offload rx-ip4-cksum
Devargs:
rx: queues 2 (max 64), desc 1024 (min 0 max 65535 align 1)
tx: queues 2 (max 64), desc 1024 (min 1 max 4096 align 1)
max rx packet len: 65536
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
rss active:ipv4-tcp ipv4 ipv6-tcp ipv6
tx burst function: (not available)
rx burst function: (not available)

local0                             0    down  local0
  Link speed: unknown
  local
$

Config file looks like this:

unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace {
on
}
api-segment {
gid vpp
}
socksvr {
socket-name /run/vpp/api.sock
}
plugins {
# Common plugins.
plugin default { disable }
plugin dpdk_plugin.so { enable }
plugin linux_cp_plugin.so { enable }
plugin crypto_native_plugin.so { enable }
   < -- snip lots of plugins -- >
}
dpdk {

# VMBUS UUID.
dev 6045bd85-2297-6045-bd85-22976045bd85 {
num-rx-queues 4
num-tx-queues 4
name GigabitEthernet1
}

# VMBUS UUID.
dev 6045bd85-2394-6045-bd85-23946045bd85 {
num-rx-queues 4
num-tx-queues 4
name GigabitEthernet2
}

}

cpu {
skip-cores 0
main-core 0
corelist-workers 1-3
}
buffers {
# Max buffers based on data size & huge page configuration.
buffers-per-numa 853440
default data-size 2048
page-size default-hugepage
}

statseg {
size 128M
}
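
As a sanity check once vpp is running, the buffer pool that actually got
allocated and the remaining huge pages can be inspected as below (the exact
columns of show buffers vary between releases, so treat this as a pointer
rather than reference output):

$ sudo vppctl show buffers               # per-numa pool sizes vpp actually created
$ grep -i HugePages_Free /proc/meminfo   # huge pages left after startup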

My issue is that I start to see errors from the mlx5 driver when using a large
number of buffers:

2022/06/29 12:44:11:427 notice dpdk   common_mlx5: Unable to find 
virtually contiguous chunk for address (0x10). rte_memseg_contig_walk() 
failed.
2022/06/29 12:44:11:427 notice dpdk   common_mlx5: Unable to find 
virtually contiguous chunk for address (0x103fe0). rte_memseg_contig_walk() 
failed.
2022/06/29 12:44:11:427 notice dpdk   common_mlx5: Unable to find 
virtually contiguous chunk for address (0x104000). rte_memseg_contig_walk() 
failed.
2022/06/29 12:44:11:427 notice dpdk   common_mlx5: Unable to find 
virtually contiguous chunk for address (0x104020). rte_memseg_contig_walk() 
failed.

The spew continues.

With a smaller number of buffers I don't see this problem and there are no
issues with the packet forwarding side of things. I'm not sure what the buffer
limit is before things go bad.

I read the excellent description of how buffer sizes are calculated here: 
https://lists.fd.io/g/vpp-dev/topic/buffer_occupancy_calculation/76605334?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,20,7660533
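
A rough back-of-the-envelope check suggests why the larger counts run out of
room. The figures below assume roughly 512 bytes of per-buffer overhead on top
of the 2048-byte data area (the exact overhead depends on the VPP/DPDK build),
so treat them as an illustration rather than the precise calculation from the
linked thread:

$ echo $(( 1024 * 2 * 1024 * 1024 ))     # hugepage budget: 2 GiB
2147483648
$ echo $(( 853440 * 2048 ))              # buffer data areas alone: ~1.6 GiB
1747845120
$ echo $(( 853440 * (2048 + 512) ))      # with the assumed overhead: > 2 GiB
2184806400

If the assumed overhead is anywhere near right, 853440 buffers cannot fit in 2G
of huge pages, which would match the symptom that a smaller buffers-per-numa
value works fine.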

Re: [vpp-dev] VPP on Azure

2022-07-04 Thread Peter Morrow
Hi Siddarth,

The linked document is quite old and I don't think it can be relied on in its
current state. I am using vpp (22.06-20220307) in Azure with external dpdk from
Debian 11 (20.11) and it's working well, except for an issue with larger buffer
sizes.

What sort of errors are you seeing in the vpp log? Are you sure that the
interfaces that you want vpp to control are DOWN when vpp starts? Depending on
how you've deployed the VM, DHCP may be running on the AN interfaces which you
want to be vpp owned, which would block them from being used by vpp.
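
For example, before starting vpp you can check and force the state with plain
iproute2 commands (adjust the interface names for your deployment):

$ ip -br link show dev eth1          # should report DOWN before vpp starts
$ sudo ip link set dev eth1 down     # repeat for eth2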

Another point is that I've had more success binding devices using their VMBUS 
id instead of the PCI ID, for example:

dpdk {

# VMBUS UUID.
dev 6045bd81-248e-6045-bd81-248e6045bd81 {
num-rx-queues 4
num-tx-queues 4
name GigabitEthernet1
}

# VMBUS UUID.
dev 6045bd81-2600-6045-bd81-26006045bd81 {
num-rx-queues 4
num-tx-queues 4
name GigabitEthernet2
}
}
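
If it helps, one way to read an interface's VMBUS UUID from sysfs is shown
below. This is a sketch; it assumes the interface is still bound to hv_netvsc
when you look it up:

# The device symlink for a netvsc interface points at its VMBUS device, whose
# directory name is the UUID to use in the dpdk { dev ... } stanza above.
$ basename "$(readlink -f /sys/class/net/eth1/device)"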

VPP is up and running:

$ sudo vppctl sh hard
              Name                Idx   Link  Hardware
GigabitEthernet1   1 up   GigabitEthernet1
  Link speed: 50 Gbps
  RX Queues:
queue thread mode
0 main (0)   polling
1 main (0)   polling
2 main (0)   polling
3 main (0)   polling
  Ethernet address 60:45:bd:81:24:8e
  Microsoft Hyper-V Netvsc
carrier up full duplex max-frame-size 0
flags: admin-up tx-offload rx-ip4-cksum
Devargs:
rx: queues 4 (max 64), desc 1024 (min 0 max 65535 align 1)
tx: queues 4 (max 64), desc 1024 (min 1 max 4096 align 1)
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
rss active:ipv4-tcp ipv4 ipv6-tcp ipv6
tx burst function: (not available)
rx burst function: (not available)

tx frames ok  263350
tx bytes ok 41871764
rx frames ok  253901
rx bytes ok 33154932
extended stats:
  rx_good_packets 253901
  tx_good_packets 263350
  rx_good_bytes 33154932
  tx_good_bytes 41871764
  rx_q0_packets 4849
  rx_q0_bytes 298233
  rx_q1_packets 1345
  rx_q1_bytes 147229
  rx_q2_packets32967
  rx_q2_bytes3625057
  rx_q3_packets 1330
  rx_q3_bytes 145796
  rx_q0_good_packets4849
  rx_q0_good_bytes298233
  rx_q0_undersize_packets   3462
  rx_q0_size_65_127_packets 1066
  rx_q0_size_128_255_packets 318
  rx_q0_size_256_511_packets   3
  rx_q1_good_packets1345
  rx_q1_good_bytes147229
  rx_q1_undersize_packets  2
  rx_q1_size_65_127_packets 1035
  rx_q1_size_128_255_packets 307
  rx_q1_size_1024_1518_packets 1
  rx_q2_good_packets   32967
  rx_q2_good_bytes   3625057
  rx_q2_multicast_packets  31662
  rx_q2_undersize_packets  1
  rx_q2_size_65_127_packets32663
  rx_q2_size_128_255_packets 303
  rx_q3_good_packets1330
  rx_q3_good_bytes145796
  rx_q3_size_65_127_packets 1016
  rx_q3_size_128_255_packets 313
  rx_q3_size_256_511_packets   1
  vf_rx_good_packets  213410
  vf_tx_good_packets  263350
  vf_rx_good_bytes  28938617
  vf_tx_good_bytes  41871764
  vf_rx_q0_packets16
  vf_rx_q0_bytes