On 26/04/18 19:41, Stokes, Ian wrote:
On 04/26/2018 04:26 PM, Stokes, Ian wrote:
Currently, RX of jumbo packets fails for NICs that do not support scatter.
Scatter is not strictly needed for jumbo RX support. This change
fixes the issue by only enabling scatter for NICs known to need it to
support jumbo RX. Add a quirk for "igb" until the PMD is fixed.

Thanks for the v3 Pablo, comments inline.

Reported-by: Louis Peens <louis.pe...@netronome.com>
Signed-off-by: Pablo Cascón <pablo.cas...@netronome.com>
Reviewed-by: Simon Horman <simon.hor...@netronome.com>
---

Changelog:
v2->v3:
    - only check for driver_name
    - tested with "nfp" and not with "igb"


  lib/netdev-dpdk.c | 12 +++++++++---
  1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index ee39cbe..02ed85b 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -694,11 +694,17 @@ dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
      int diag = 0;
      int i;
      struct rte_eth_conf conf = port_conf;
+    struct rte_eth_dev_info info;

-    /* For some NICs (e.g. Niantic), scatter_rx mode needs to be explicitly
-     * enabled. */
+    /* Quirk: as of DPDK 17.11.1 the igb PMD requires scatter to be
+     * explicitly enabled to support jumbo RX. Note: PMDs are not
+     * required to report offload capabilities reliably, so only the
+     * driver_name (after testing the PMD/NIC) can be trusted. */
      if (dev->mtu > ETHER_MTU) {
-        conf.rxmode.enable_scatter = 1;
+        rte_eth_dev_info_get(dev->port_id, &info);
+        if (!strcmp(info.driver_name, "igb")) {
For igb devices the condition above will never hit; the driver name in info
will be reported as 'net_e1000_igb'.
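
To illustrate, a minimal sketch of the same hunk with the check against the
name the PMD actually reports ("net_e1000_igb" is the string observed in
testing; dev, conf and info are as in the patch context above):

    if (dev->mtu > ETHER_MTU) {
        rte_eth_dev_info_get(dev->port_id, &info);
        /* Match the full name the PMD reports ("net_e1000_igb"), not the
         * short name "igb"; otherwise the quirk never triggers. */
        if (!strcmp(info.driver_name, "net_e1000_igb")) {
            conf.rxmode.enable_scatter = 1;
        }
    }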
I'm seeing a bit of a list growing of other drivers that share the current
requirement for scatter with jumbo frames.
Through testing today I saw that scatter must also be set for ixgbe SRIOV
functions, "net_ixgbe_vf".
From code inspection, em devices ("net_e1000_em") would also have to be
included.
I would think that 'net_e1000_igb_vf' will be required also when set_mtu
support is introduced in the future (quite likely), but for the moment
that’s not relevant.
I'm wondering if there's an alternative solution to checking driver
names, as we could end up with an extensive list. I would have to look into
this.
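
For the sake of discussion, a rough sketch of what such a driver-name quirk
list could look like (the names come from the observations above; the list is
illustrative rather than exhaustive, and ARRAY_SIZE is OVS's usual helper):

    /* PMDs reported/expected to need scatter explicitly enabled for
     * jumbo RX; illustrative only. */
    static const char *const scatter_quirk_drivers[] = {
        "net_e1000_igb",
        "net_ixgbe_vf",
        "net_e1000_em",
    };

    if (dev->mtu > ETHER_MTU) {
        size_t i;

        rte_eth_dev_info_get(dev->port_id, &info);
        for (i = 0; i < ARRAY_SIZE(scatter_quirk_drivers); i++) {
            if (!strcmp(info.driver_name, scatter_quirk_drivers[i])) {
                conf.rxmode.enable_scatter = 1;
                break;
            }
        }
    }

As noted, the concern is that this list keeps growing as more PMDs and DPDK
releases are tested.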
Did I hear someone say "alternative solution"? ;-)

---8<---
That's why I think it's better to make the exception for "nfp" at present.
It works for nfp, it works for Intel and it works for any other NICs that
currently work.

When OVS is updated to a future DPDK release where all the PMDs support
setting/not setting this flag, then I agree any workaround should be made
for NICs that are not properly reporting their status, or have quirks.
---8<---

https://mail.openvswitch.org/pipermail/ovs-dev/2018-April/346256.html
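
For reference, a minimal sketch of that inverted check, keeping scatter
enabled for jumbo RX on every PMD except nfp (the "net_nfp" prefix is an
assumption here; the exact string the PMD reports should be verified):

    if (dev->mtu > ETHER_MTU) {
        rte_eth_dev_info_get(dev->port_id, &info);
        /* Enable scatter for everything except nfp, which is known not
         * to need it for jumbo RX. The "net_nfp" prefix is assumed. */
        if (strncmp(info.driver_name, "net_nfp", strlen("net_nfp"))) {
            conf.rxmode.enable_scatter = 1;
        }
    }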

+1

I do think this is a simpler solution until we have the offload API fully 
supported in DPDK for the relevant devices.
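
Longer term, once PMDs report their offload capabilities reliably, the check
could presumably become capability-based rather than name-based, along these
lines (sketch only, assuming DPDK's rx_offload_capa field and the
DEV_RX_OFFLOAD_SCATTER flag):

    if (dev->mtu > ETHER_MTU) {
        rte_eth_dev_info_get(dev->port_id, &info);
        /* Enable scatter only when the PMD advertises it; this relies
         * on capability reporting, which is exactly what is not yet
         * trustworthy across PMDs. */
        if (info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER) {
            conf.rxmode.enable_scatter = 1;
        }
    }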

OK fine, reluctantly agree. Let me share a tested v4 that makes the exception for "nfp".

@Pablo: Do you want the solution backported to previous OVS versions also? The
"nfp" approach would be OK I think, but as flagged in the previous mail, listing
igb, ixgbe, etc. would be a different story, as the behavior of these changed
between DPDK releases.

Yes, a backport would be good please. The behavior of "nfp" has not changed (scatter has not been needed for jumbo across releases).


Just something we should be aware of.

Ian

Kevin.

Ian

+            conf.rxmode.enable_scatter = 1;
+        }
      }

      conf.rxmode.hw_ip_checksum = (dev->hw_ol_features &
--
2.7.4

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
