When using TSO in OVS-DPDK with an i40e device, the following
patch is required for DPDK, which fixes an issue on the TSO path:
https://patches.dpdk.org/patch/64136/
Document this as a limitation until OVS supports a DPDK release that
includes the fix.

Also, document best known methods for performance tuning when
testing TSO with iperf.

Signed-off-by: Ciara Loftus <ciara.lof...@intel.com>
Acked-by: Flavio Leitner <f...@sysclose.org>
---
v2:
- rebased to master
- changed patch links from net-next tree to patchwork
---
 Documentation/topics/userspace-tso.rst | 27 ++++++++++++++++++++++++++
 manpages.mk                            |  3 ---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/Documentation/topics/userspace-tso.rst b/Documentation/topics/userspace-tso.rst
index 893c64839..94eddc0b2 100644
--- a/Documentation/topics/userspace-tso.rst
+++ b/Documentation/topics/userspace-tso.rst
@@ -96,3 +96,30 @@ datapath must support TSO or packets using that feature will be dropped
 on ports without TSO support.  That also means guests using vhost-user
 in client mode will receive TSO packet regardless of TSO being enabled
 or disabled within the guest.
+
+When the NIC performing the segmentation uses the i40e DPDK PMD, a fix
+must be included in the DPDK build; otherwise, TSO will not work. The fix can
+be found on `DPDK patchwork`__.
+
+__ https://patches.dpdk.org/patch/64136/
+
+This fix is expected to be included in the 19.11.1 release. When OVS migrates
+to this DPDK release, this limitation can be removed.
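+
+For instance, assuming a git checkout of the DPDK source tree in ``$DPDK_DIR``,
+one possible way to include the fix is to download it from patchwork and apply
+it before building DPDK; the mbox URL and local file name below are only
+examples::
+
+    $ cd $DPDK_DIR
+    $ wget https://patches.dpdk.org/patch/64136/mbox/ -O i40e-tso-fix.mbox
+    $ git am i40e-tso-fix.mbox
+
+DPDK, and then OVS, can be rebuilt as usual once the fix is applied.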
+
+~~~~~~~~~~~~~~~~~~
+Performance Tuning
+~~~~~~~~~~~~~~~~~~
+
+iperf is often used to test TSO performance. Care needs to be taken when
+configuring the environment in which the iperf server process runs.
+Since the iperf server uses the NIC's kernel driver, IRQs will be generated.
+By default with some NICs, e.g. i40e, the IRQs will land on the same core as
+the one used by the server process, provided the number of NIC queues is
+greater than or equal to that core's ID. This causes contention between the
+iperf server process and the IRQs. For optimal performance, it is suggested
+to pin the IRQs to their own core. To change the affinity associated with a
+given IRQ number, echo the desired coremask to the file
+``/proc/irq/<number>/smp_affinity``.
+For more on SMP affinity, refer to the `Linux kernel documentation`__.
+
+__ https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
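+
+As an illustration, the NIC's IRQs could be moved to a dedicated core while
+the iperf server is pinned to a different one. The interface name, IRQ
+number, coremask and core IDs below are placeholders for the values on the
+system under test::
+
+    $ grep eth0 /proc/interrupts            # list the IRQs used by eth0
+    $ echo 20 > /proc/irq/128/smp_affinity  # as root; 0x20 == core 5
+    $ taskset -c 3 iperf -s                 # run the iperf server on core 3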
diff --git a/manpages.mk b/manpages.mk
index dc201484c..54a3a82ad 100644
--- a/manpages.mk
+++ b/manpages.mk
@@ -104,7 +104,6 @@ utilities/bugtool/ovs-bugtool.8: \
 utilities/bugtool/ovs-bugtool.8.in:
 lib/ovs.tmac:
 
-
 utilities/ovs-dpctl-top.8: \
        utilities/ovs-dpctl-top.8.in \
        lib/ovs.tmac
@@ -155,8 +154,6 @@ lib/common-syn.man:
 lib/common.man:
 lib/ovs.tmac:
 
-lib/ovs.tmac:
-
 utilities/ovs-testcontroller.8: \
        utilities/ovs-testcontroller.8.in \
        lib/common.man \
-- 
2.17.1
