I'd appreciate one additional bit of information if possible. Once the DPDK NIC is bound to vfio-pci, the DPDK Linux guide at https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#vfio mentions setup steps including:

Create the desired number of VF devices
echo 2 > /sys/bus/pci/devices/0000:86:00.0/sriov_numvfs

My question: what is the upper bound on the number of VF devices? What's the thinking process? For example,
maybe one of these approaches makes sense?

- VF device count is bounded from above by the number of RX/TX queues
- VF device count is bounded from above by the amount of on-NIC memory
- VF device count is bounded from above by the manufacturer. Each NIC has some max; read the specs
- VF device count is like the number of ports on a UNIX system: 1000s are available and what you need depends on software: how many concurrent connections are needed?

The upper bound on Virtual Functions (VFs) comes from the hardware itself. It's advertised to the OS through the PCIe configuration register space, and you can use the lspci utility to discover it. For example, running "lspci | grep Ethernet" shows the NICs on my system:

0000:01:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
0000:01:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
0003:01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
0003:01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
0003:01:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
0003:01:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet (rev 10)
0005:01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0005:01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
0030:01:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
0030:01:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
0034:01:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)
0034:01:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

Focusing on the Intel XL710 NIC, I can look at the SR-IOV capabilities values:

sudo lspci -vvvv -s 0034:01:00.0
0034:01:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)
        Subsystem: Intel Corporation Ethernet Converged Network Adapter XL710-Q2
...
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
                IOVSta: Migration-
                Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
                VF offset: 16, stride: 1, Device ID: 154c
                Supported Page Size: 00000553, System Page Size: 00000010
                Region 0: Memory at 0006224000000000 (64-bit, prefetchable)
                Region 3: Memory at 0006224001000000 (64-bit, prefetchable)
                VF Migration: offset: 00000000, BIR: 0

The "Total VFs" value indicates how many VFs can be enabled for this NIC; it is the upper bound on the count you can write with the echo command you mention above. Other NICs may advertise different values depending on their individual hardware capabilities.
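The same limit is also exported by the kernel through sysfs as sriov_totalvfs, so a script can check it before writing sriov_numvfs. A minimal sketch (the PCI address is the XL710 from the output above; enable_vfs is just an illustrative helper name):

```shell
# enable_vfs DEVDIR COUNT: write COUNT to sriov_numvfs after checking it
# against the hardware limit in sriov_totalvfs (the "Total VFs" lspci shows).
enable_vfs() {
    devdir=$1
    want=$2
    max=$(cat "$devdir/sriov_totalvfs")
    if [ "$want" -gt "$max" ]; then
        echo "requested $want VFs but hardware supports at most $max" >&2
        return 1
    fi
    echo "$want" > "$devdir/sriov_numvfs"
}

# For the XL710 above (needs root):
# enable_vfs /sys/bus/pci/devices/0034:01:00.0 2
```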

DPDK must have an API that programmatically discovers the PFs and VFs per PF.

Support for SR-IOV is managed by the Linux kernel, not DPDK. Once a VF is enabled under Linux, DPDK treats it just like a physical function (PF) NIC, assuming the poll-mode driver (PMD) written by the hardware manufacturer supports operating on the VF.
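As an aside, the "VF offset" and "stride" fields in the SR-IOV capability above determine where the enabled VFs appear in PCI config space: per the SR-IOV spec, the first VF's routing ID is the PF's routing ID plus the offset, and each subsequent VF adds the stride. A sketch of that arithmetic (vf_bdf is a hypothetical helper; the example values come from the XL710 output above):

```shell
# vf_bdf DOMAIN BUS DEV FN OFFSET STRIDE INDEX:
# compute the PCI address of VF number INDEX (0-based) for the given PF.
# Routing ID = bus*256 + dev*8 + fn; first VF = PF ID + offset, and each
# subsequent VF adds stride (per the SR-IOV specification).
vf_bdf() {
    domain=$1
    rid=$(( 0x$2 * 256 + 0x$3 * 8 + $4 ))
    rid=$(( rid + $5 + $6 * $7 ))
    printf '%s:%02x:%02x.%x\n' "$domain" \
        $(( rid / 256 )) $(( rid % 256 / 8 )) $(( rid % 8 ))
}

# XL710 PF 0034:01:00.0 with offset 16, stride 1: first VF is
vf_bdf 0034 01 00 0 16 1 0    # -> 0034:01:02.0
```

This is also the address you would pass to dpdk-devbind.py when binding the VF to vfio-pci.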

Finally: is a VF device duplex (sends and receives)? Or just RX or just TX only?

In my experience VFs support both send and receive. There is also some Linux support for limiting bandwidth on VFs that support the capability (see "ip link set vf" on https://linux.die.net/man/8/ip).
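For example, with iproute2 something like the following caps a VF's transmit rate. The interface name and rate are placeholders, not values from any real setup:

```shell
# Hypothetical example: enp2s0f0 and the 1000 Mbps rate are placeholders.
DEV=enp2s0f0

# Cap VF 0's transmit rate; TXRATE is in Mbps per ip-link(8).
# Requires root and a PF with VFs enabled, so fall back gracefully here.
ip link set dev "$DEV" vf 0 rate 1000 2>/dev/null \
    || echo "skipped: $DEV not present or insufficient privileges"

# "ip link show dev $DEV" would then list the per-VF settings.
```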

Dave
