[Bug 1006] [dpdk-22.03] [asan] coremask/individual_coremask: AddressSanitizer DEADLYSIGNAL

2022-05-09 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1006

Bug ID: 1006
   Summary: [dpdk-22.03] [asan] coremask/individual_coremask:
AddressSanitizer DEADLYSIGNAL
   Product: DPDK
   Version: 22.03
  Hardware: x86
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: core
  Assignee: dev@dpdk.org
  Reporter: songx.ji...@intel.com
  Target Milestone: ---

OS: Ubuntu 20.04.4 LTS/5.13.0-30-generic
Compiler: gcc version 9.4.0
Hardware platform: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
NIC hardware: Ethernet Controller E810-C for SFP 1593
NIC firmware: 
driver: vfio-pci
kdriver: ice-1.8.3
firmware:  3.20 0x8000d847 1.3146.0
pkg: ice os default 1.3.28.0

Test Setup
1. Compile dpdk:
rm -rf x86_64-native-linuxapp-gcc
CC=gcc meson -Denable_kmods=True -Dlibdir=lib  -Dbuildtype=debug
-Db_lundef=false -Db_sanitize=address --default-library=static
x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc -j 70
2. Bind ports to dpdk (vfio-pci):
usertools/dpdk-devbind.py --force --bind=vfio-pci :31:00.0 :31:00.1
:31:00.2 :31:00.3
3. Disable ASLR and launch dpdk-test once per individual core mask
   (0x1, 0x2, 0x4, ..., 0x8000), typing "quit" after start-up each time:
echo 0 > /proc/sys/kernel/randomize_va_space
./x86_64-native-linuxapp-gcc/app/test/dpdk-test -c <coremask> -n 4 --log-level="lib.eal,8"
quit

Re: [PATCH v2 1/2] ci: switch to Ubuntu 20.04

2022-05-09 Thread David Marchand
On Sat, May 7, 2022 at 5:37 AM Ruifeng Wang  wrote:
>
> > -Original Message-
> > From: David Marchand 
> > Sent: Friday, May 6, 2022 7:58 PM
> > To: dev@dpdk.org
> > Cc: Aaron Conole ; Michael Santana
> > ; Ruifeng Wang ;
> > Jan Viktorin ; Bruce Richardson
> > ; David Christensen 
> > Subject: [PATCH v2 1/2] ci: switch to Ubuntu 20.04
> >
> > Ubuntu 18.04 is now rather old.
> > Besides, other entities in our CI are also testing this distribution.
> >
> > Switch to a newer Ubuntu release and benefit from more recent
> > tool(chain)s: for example, net/cnxk now builds fine and can be re-enabled.
> >
> > Note: Ubuntu 18.04 and 20.04 seem to preserve the same paths for the ARM
> > and PPC cross compilation toolchains, so we can use a single configuration 
> > file
> > (with the hope, future releases of Ubuntu will do the same).
> >
> > Signed-off-by: David Marchand 
> > Acked-by: Aaron Conole 
Reviewed-by: Ruifeng Wang 

> > ---
> > Changes since v1:
> > - renamed ubuntu cross compilation configs for ARM and PPC,

I had forgotten to amend the patch with links for the older config files.
I fixed it.

Thanks for the reviews, series applied.


-- 
David Marchand



[PATCH v2] doc: update matching versions in i40e guide

2022-05-09 Thread Qiming Yang
Add the recommended matching list for the i40e PMD in DPDK 21.05,
21.08, 21.11 and 22.03, and add a known issue seen when the FW is
upgraded to a version higher than 8.4.

Cc: sta...@dpdk.org

Signed-off-by: Qiming Yang 
---
v2:
* added known issue in FW 8.4+
---
 doc/guides/nics/i40e.rst | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index ef91b3a1ac..e03bb84211 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -101,6 +101,14 @@ For X710/XL710/XXV710,
+--+---+--+
| DPDK version | Kernel driver version | Firmware version |
+==+===+==+
+   |22.03 | 2.17.15   |   8.30   |
+   +--+---+--+
+   |21.11 | 2.17.4|   8.30   |
+   +--+---+--+
+   |21.08 | 2.15.9|   8.30   |
+   +--+---+--+
+   |21.05 | 2.15.9|   8.30   |
+   +--+---+--+
|21.02 | 2.14.13   |   8.00   |
+--+---+--+
|20.11 | 2.14.13   |   8.00   |
@@ -148,6 +156,14 @@ For X722,
+--+---+--+
| DPDK version | Kernel driver version | Firmware version |
+==+===+==+
+   |22.03 | 2.17.15   |   5.50   |
+   +--+---+--+
+   |21.11 | 2.17.4|   5.30   |
+   +--+---+--+
+   |21.08 | 2.15.9|   5.15   |
+   +--+---+--+
+   |21.05 | 2.15.9|   5.15   |
+   +--+---+--+
|21.02 | 2.14.13   |   5.00   |
+--+---+--+
|20.11 | 2.13.10   |   5.00   |
@@ -771,6 +787,13 @@ it will fail and return the info "Conflict with the first 
rule's input set",
 which means the current rule's input set conflicts with the first rule's.
 Remove the first rule if want to change the input set of the PCTYPE.
 
+PF reset fail after QinQ set with FW >= 8.4
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the FW is upgraded to a version higher than 8.4, killing the DPDK process
+after setting a MAC VLAN filter and configuring an outer VLAN on the PF will
+cause the card to crash.
+
+
 Example of getting best performance with l3fwd example
 --
 
-- 
2.17.1



[RFC v2 0/9] add support for idpf PMD in DPDK

2022-05-09 Thread Junfeng Guo
This is a draft of the idpf (Infrastructure Data Path Function) PMD
in DPDK for the Intel device ID 0x1452.

v2:
fix code typo in func idpf_set_tx_function.

Junfeng Guo (9):
  net/idpf/base: introduce base code
  net/idpf/base: add OS specific implementation
  net/idpf: support device initialization
  net/idpf: support queue ops
  net/idpf: support getting device information
  net/idpf: support packet type getting
  net/idpf: support link update
  net/idpf: support basic Rx/Tx
  net/idpf: support RSS

 drivers/net/idpf/base/iecm_alloc.h|   22 +
 drivers/net/idpf/base/iecm_common.c   |  359 +++
 drivers/net/idpf/base/iecm_controlq.c |  662 
 drivers/net/idpf/base/iecm_controlq.h |  214 ++
 drivers/net/idpf/base/iecm_controlq_api.h |  227 ++
 drivers/net/idpf/base/iecm_controlq_setup.c   |  179 ++
 drivers/net/idpf/base/iecm_devids.h   |   17 +
 drivers/net/idpf/base/iecm_lan_pf_regs.h  |  134 +
 drivers/net/idpf/base/iecm_lan_txrx.h |  428 +++
 drivers/net/idpf/base/iecm_lan_vf_regs.h  |  114 +
 drivers/net/idpf/base/iecm_osdep.h|  365 +++
 drivers/net/idpf/base/iecm_prototype.h|   45 +
 drivers/net/idpf/base/iecm_type.h |  106 +
 drivers/net/idpf/base/meson.build |   27 +
 drivers/net/idpf/base/siov_regs.h |   41 +
 drivers/net/idpf/base/virtchnl.h  | 2743 +
 drivers/net/idpf/base/virtchnl2.h | 1411 +
 drivers/net/idpf/base/virtchnl2_lan_desc.h|  603 
 drivers/net/idpf/base/virtchnl_inline_ipsec.h |  567 
 drivers/net/idpf/idpf_ethdev.c| 1030 +++
 drivers/net/idpf/idpf_ethdev.h|  223 ++
 drivers/net/idpf/idpf_logs.h  |   38 +
 drivers/net/idpf/idpf_rxtx.c  | 2180 +
 drivers/net/idpf/idpf_rxtx.h  |  203 ++
 drivers/net/idpf/idpf_vchnl.c |  900 ++
 drivers/net/idpf/meson.build  |   19 +
 drivers/net/idpf/version.map  |3 +
 drivers/net/meson.build   |1 +
 28 files changed, 12861 insertions(+)
 create mode 100644 drivers/net/idpf/base/iecm_alloc.h
 create mode 100644 drivers/net/idpf/base/iecm_common.c
 create mode 100644 drivers/net/idpf/base/iecm_controlq.c
 create mode 100644 drivers/net/idpf/base/iecm_controlq.h
 create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
 create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
 create mode 100644 drivers/net/idpf/base/iecm_devids.h
 create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
 create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
 create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
 create mode 100644 drivers/net/idpf/base/iecm_osdep.h
 create mode 100644 drivers/net/idpf/base/iecm_prototype.h
 create mode 100644 drivers/net/idpf/base/iecm_type.h
 create mode 100644 drivers/net/idpf/base/meson.build
 create mode 100644 drivers/net/idpf/base/siov_regs.h
 create mode 100644 drivers/net/idpf/base/virtchnl.h
 create mode 100644 drivers/net/idpf/base/virtchnl2.h
 create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
 create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
 create mode 100644 drivers/net/idpf/idpf_ethdev.c
 create mode 100644 drivers/net/idpf/idpf_ethdev.h
 create mode 100644 drivers/net/idpf/idpf_logs.h
 create mode 100644 drivers/net/idpf/idpf_rxtx.c
 create mode 100644 drivers/net/idpf/idpf_rxtx.h
 create mode 100644 drivers/net/idpf/idpf_vchnl.c
 create mode 100644 drivers/net/idpf/meson.build
 create mode 100644 drivers/net/idpf/version.map

-- 
2.25.1



[RFC v2 2/9] net/idpf/base: add OS specific implementation

2022-05-09 Thread Junfeng Guo
Add some macro definitions and small helper functions which are
specific to DPDK.
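
For context, a minimal standalone sketch (not part of this patch; the address
value and main() are illustrative) of how the 32-bit split helpers defined in
this header are typically used, e.g. to program a 64-bit DMA address into a
pair of 32-bit device registers:

#include <stdint.h>
#include <stdio.h>

/* Copied from the osdep header below. */
#define upper_32_bits(n)   ((uint32_t)(((n) >> 16) >> 16))
#define lower_32_bits(n)   ((uint32_t)(n))

int main(void)
{
	uint64_t ring_base = 0x123456789abcdef0ULL; /* illustrative DMA address */

	/* A control-queue ring base would be written as two 32-bit halves. */
	printf("BAL=0x%08x BAH=0x%08x\n",
	       lower_32_bits(ring_base), upper_32_bits(ring_base));
	return 0;
}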

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/base/iecm_osdep.h | 365 +
 1 file changed, 365 insertions(+)
 create mode 100644 drivers/net/idpf/base/iecm_osdep.h

diff --git a/drivers/net/idpf/base/iecm_osdep.h 
b/drivers/net/idpf/base/iecm_osdep.h
new file mode 100644
index 00..60e21fbc1b
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_osdep.h
@@ -0,0 +1,365 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_OSDEP_H_
+#define _IECM_OSDEP_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../idpf_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t    u8;
+typedef int8_t s8;
+typedef uint16_t   u16;
+typedef int16_ts16;
+typedef uint32_t   u32;
+typedef int32_ts32;
+typedef uint64_t   u64;
+typedef uint64_t   s64;
+
+typedef enum iecm_status iecm_status;
+typedef struct iecm_lock iecm_lock;
+
+#define __iomem
+#define hw_dbg(hw, S, A...)do {} while (0)
+#define upper_32_bits(n)   ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n)   ((u32)(n))
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x)    (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN   6
+#endif
+
+#ifndef __le16
+#define __le16 uint16_t
+#endif
+#ifndef __le32
+#define __le32 uint32_t
+#endif
+#ifndef __le64
+#define __le64 uint64_t
+#endif
+#ifndef __be16
+#define __be16 uint16_t
+#endif
+#ifndef __be32
+#define __be32 uint32_t
+#endif
+#ifndef __be64
+#define __be64 uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused  __attribute__((__unused__))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused  __attribute__((__unused__))
+#endif
+#ifndef __packed
+#define __packed  __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1ULL << (a))
+#endif
+
+#define FALSE  0
+#define TRUE   1
+#define false  0
+#define true   1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->(f)))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define iecm_debug(h, m, s, ...)   \
+   do {\
+   if (((m) & (h)->debug_mask))\
+   PMD_DRV_LOG_RAW(DEBUG, "iecm %02x.%x " s,   \
+   (h)->bus.device, (h)->bus.func, \
+   ##__VA_ARGS__); \
+   } while (0)
+
+#define iecm_info(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_warn(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_debug_array(hw, type, rowsize, groupsize, buf, len)   \
+   do {\
+   struct iecm_hw *hw_l = hw;  \
+   u16 len_l = len;\
+   u8 *buf_l = buf;\
+   int i;  \
+   for (i = 0; i < len_l; i += 8)  \
+   iecm_debug(hw_l, type,  \
+  "0x%04X  0x%016"PRIx64"\n",  \
+  i, *((u64 *)((buf_l) + i))); \
+   } while (0)
+#define iecm_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF iecm_snprintf
+#endif
+
+#define IECM_PCI_REG(reg) rte_read32(reg)
+#define IECM_PCI_REG_ADDR(a, reg)  \
+   ((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define IECM_PCI_REG64(reg) rte_read64(reg)
+#define IECM_PCI_REG_ADDR64(a, reg)\
+   ((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
+
+#define iecm_wmb() rte_io_wmb()
+#define iecm_rmb() rte_io_rmb()
+#define iecm_mb() rte_io_mb()
+
+static inline uint32_t iecm_read_addr(volatile void *addr)
+{
+   return rte_le_to_cpu_32(IECM_PCI_REG(addr));
+}
+
+static inline uint64_t iecm_read_addr64(volatile void *addr)
+{
+   return rte_le_to_cpu_64(IECM_PCI_REG64(addr));
+}
+
+#define IECM_PCI_REG_WRITE(reg, value) \
+   rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define I

[RFC v2 6/9] net/idpf: support packet type getting

2022-05-09 Thread Junfeng Guo
Add ops dev_supported_ptypes_get.
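
For context, a hedged sketch (not part of this patch; port_id and the array
size are placeholders) of how an application queries the packet types a port
reports through this op:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

static void
print_l4_ptypes(uint16_t port_id)
{
	uint32_t ptypes[16];
	int i, num;

	/* Ask only for the L4 layer; the PMD's table is filtered by the mask. */
	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_L4_MASK,
					       ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num; i++)
		printf("port %u: L4 ptype 0x%08x\n", port_id, ptypes[i]);
}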

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  3 ++
 drivers/net/idpf/idpf_rxtx.c   | 51 ++
 drivers/net/idpf/idpf_rxtx.h   |  3 ++
 3 files changed, 57 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c58a40e7ab..01fd023bfc 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -32,6 +32,7 @@ static int idpf_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
 
 static const struct eth_dev_ops idpf_eth_dev_ops = {
+   .dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
.dev_configure  = idpf_dev_configure,
.dev_start  = idpf_dev_start,
.dev_stop   = idpf_dev_stop,
@@ -501,6 +502,8 @@ idpf_adapter_init(struct rte_eth_dev *dev)
if (adapter->initialized)
return 0;
 
+   idpf_set_default_ptype_table(dev);
+
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
hw->hw_addr_len = pci_dev->mem_resource[0].len;
hw->back = adapter;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 770ed52281..6b436141c8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,57 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
+const uint32_t *
+idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+   static const uint32_t ptypes[] = {
+   RTE_PTYPE_L2_ETHER,
+   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+   RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+   RTE_PTYPE_L4_FRAG,
+   RTE_PTYPE_L4_NONFRAG,
+   RTE_PTYPE_L4_UDP,
+   RTE_PTYPE_L4_TCP,
+   RTE_PTYPE_L4_SCTP,
+   RTE_PTYPE_L4_ICMP,
+   RTE_PTYPE_UNKNOWN
+   };
+
+   return ptypes;
+}
+
+static inline uint32_t
+idpf_get_default_pkt_type(uint16_t ptype)
+{
+   static const uint32_t type_table[IDPF_MAX_PKT_TYPE]
+       __rte_cache_aligned = {
+       [1] = RTE_PTYPE_L2_ETHER,
+       [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+       [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+       [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+       [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+       [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+       [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_ICMP,
+       [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+       [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+       [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+       [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+       [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+       [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_ICMP,
+   };
+
+   return type_table[ptype];
+}
+
+void __rte_cold
+idpf_set_default_ptype_table(struct rte_eth_dev *dev __rte_unused)
+{
+   int i;
+
+   for (i = 0; i < IDPF_MAX_PKT_TYPE; i++)
+   adapter->ptype_tbl[i] = idpf_get_default_pkt_type(i);
+}
+
 static inline int
 check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
 {
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 705f706890..21b6d8cb84 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -164,4 +164,7 @@ void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, 
uint16_t qid);
 
 void idpf_stop_queues(struct rte_eth_dev *dev);
 
+void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
 #endif /* _IDPF_RXTX_H_ */
-- 
2.25.1



[RFC v2 3/9] net/idpf: support device initialization

2022-05-09 Thread Junfeng Guo
Support dev init and add dev ops for IDPF PMD:
dev_configure
dev_start
dev_stop
dev_close
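
For context, a hedged sketch (not part of this patch; port_id and the queue
counts are placeholders) of the generic ethdev lifecycle that calls back into
these four ops:

#include <rte_ethdev.h>

static int
port_lifecycle(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf); /* dev_configure */
	if (ret < 0)
		return ret;
	/* Rx/Tx queues must be set up here (see the queue-ops patch). */
	ret = rte_eth_dev_start(port_id);                  /* dev_start */
	if (ret < 0)
		return ret;
	ret = rte_eth_dev_stop(port_id);                   /* dev_stop */
	if (ret < 0)
		return ret;
	return rte_eth_dev_close(port_id);                 /* dev_close */
}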

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Xiao Wang 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 652 +
 drivers/net/idpf/idpf_ethdev.h | 200 ++
 drivers/net/idpf/idpf_logs.h   |  38 ++
 drivers/net/idpf/idpf_vchnl.c  | 465 +++
 drivers/net/idpf/meson.build   |  18 +
 drivers/net/idpf/version.map   |   3 +
 drivers/net/meson.build|   1 +
 7 files changed, 1377 insertions(+)
 create mode 100644 drivers/net/idpf/idpf_ethdev.c
 create mode 100644 drivers/net/idpf/idpf_ethdev.h
 create mode 100644 drivers/net/idpf/idpf_logs.h
 create mode 100644 drivers/net/idpf/idpf_vchnl.c
 create mode 100644 drivers/net/idpf/meson.build
 create mode 100644 drivers/net/idpf/version.map

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
new file mode 100644
index 00..e34165a87d
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "idpf_ethdev.h"
+
+#define VPORT_NUM  "vport_num"
+
+struct idpf_adapter *adapter;
+uint16_t vport_num = 1;
+
+static const char * const idpf_valid_args[] = {
+   VPORT_NUM,
+   NULL
+};
+
+static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
+static int idpf_dev_close(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops idpf_eth_dev_ops = {
+   .dev_configure  = idpf_dev_configure,
+   .dev_start  = idpf_dev_start,
+   .dev_stop   = idpf_dev_stop,
+   .dev_close  = idpf_dev_close,
+};
+
+
+static int
+idpf_init_vport_req_info(struct rte_eth_dev *dev)
+{
+   struct virtchnl2_create_vport *vport_info;
+   uint16_t idx = adapter->next_vport_idx;
+
+   if (!adapter->vport_req_info[idx]) {
+   adapter->vport_req_info[idx] = rte_zmalloc(NULL,
+   sizeof(struct virtchnl2_create_vport), 0);
+   if (!adapter->vport_req_info[idx]) {
+   PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info");
+   return -1;
+   }
+   }
+
+   vport_info =
+   (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+
+   vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+
+   return 0;
+}
+
+static uint16_t
+idpf_get_next_vport_idx(struct idpf_vport **vports, uint16_t max_vport_nb,
+   uint16_t cur_vport_idx)
+{
+   uint16_t vport_idx;
+   uint16_t i;
+
+   if (cur_vport_idx < max_vport_nb && !vports[cur_vport_idx + 1]) {
+   vport_idx = cur_vport_idx + 1;
+   return vport_idx;
+   }
+
+   for (i = 0; i < max_vport_nb; i++) {
+   if (vports[i])
+   continue;
+   }
+
+   if (i == max_vport_nb)
+   vport_idx = IDPF_INVALID_VPORT_IDX;
+   else
+   vport_idx = i;
+
+   return vport_idx;
+}
+
+#ifndef IDPF_RSS_KEY_LEN
+#define IDPF_RSS_KEY_LEN 52
+#endif
+
+static int
+idpf_init_vport(struct rte_eth_dev *dev)
+{
+   uint16_t idx = adapter->next_vport_idx;
+   struct virtchnl2_create_vport *vport_info =
+   (struct virtchnl2_create_vport *)adapter->vport_recv_info[idx];
+   struct idpf_vport *vport =
+   (struct idpf_vport *)dev->data->dev_private;
+   int i;
+
+   vport->adapter = adapter;
+   vport->vport_id = vport_info->vport_id;
+   vport->txq_model = vport_info->txq_model;
+   vport->rxq_model = vport_info->rxq_model;
+   vport->num_tx_q = vport_info->num_tx_q;
+   vport->num_tx_complq = vport_info->num_tx_complq;
+   vport->num_rx_q = vport_info->num_rx_q;
+   vport->num_rx_bufq = vport_info->num_rx_bufq;
+   vport->max_mtu = vport_info->max_mtu;
+   rte_memcpy(vport->default_mac_addr,
+  vport_info->default_mac_addr, ETH_ALEN);
+   vport->rss_algorithm = vport_info->rss_algorithm;
+   vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+vport_info->rss_key_size);
+   vport->rss_lut_size = vport_info->rss_lut_size;
+   vport->sw_idx = idx;
+
+   for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+   if (vport_info->chunks.chunks[i].type ==
+   VIRTCHNL2_QUEUE_TYPE_TX) {
+   vport->chunks_info.tx_start_qid =
+   vport_info->chunks.chunks[i].start_queue_id;
+   vport->chunks_info.tx_qt

[RFC v2 4/9] net/idpf: support queue ops

2022-05-09 Thread Junfeng Guo
Add queue ops for IDPF PMD:
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
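
For context, a hedged sketch (not part of this patch; port_id, descriptor
counts and mb_pool are placeholders) of the application-side calls that
exercise these queue ops:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
setup_one_queue_pair(uint16_t port_id, struct rte_mempool *mb_pool)
{
	int ret;

	/* Backed by rx_queue_setup / tx_queue_setup. */
	ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
				     NULL, mb_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(), NULL);
	if (ret < 0)
		return ret;
	/* rte_eth_dev_start() then calls rx/tx_queue_start for every
	 * non-deferred queue; deferred queues are started explicitly with
	 * rte_eth_dev_rx_queue_start()/rte_eth_dev_tx_queue_start(). */
	return rte_eth_dev_start(port_id);
}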

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |   85 +++
 drivers/net/idpf/idpf_ethdev.h |5 +
 drivers/net/idpf/idpf_rxtx.c   | 1252 
 drivers/net/idpf/idpf_rxtx.h   |  167 +
 drivers/net/idpf/idpf_vchnl.c  |  342 +
 drivers/net/idpf/meson.build   |1 +
 6 files changed, 1852 insertions(+)
 create mode 100644 drivers/net/idpf/idpf_rxtx.c
 create mode 100644 drivers/net/idpf/idpf_rxtx.h

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e34165a87d..511770ed4f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -12,6 +12,7 @@
 #include 
 
 #include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
 
 #define VPORT_NUM  "vport_num"
 
@@ -33,6 +34,14 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start  = idpf_dev_start,
.dev_stop   = idpf_dev_stop,
.dev_close  = idpf_dev_close,
+   .rx_queue_start = idpf_rx_queue_start,
+   .rx_queue_stop  = idpf_rx_queue_stop,
+   .tx_queue_start = idpf_tx_queue_start,
+   .tx_queue_stop  = idpf_tx_queue_stop,
+   .rx_queue_setup = idpf_rx_queue_setup,
+   .rx_queue_release   = idpf_dev_rx_queue_release,
+   .tx_queue_setup = idpf_tx_queue_setup,
+   .tx_queue_release   = idpf_dev_tx_queue_release,
 };
 
 
@@ -193,6 +202,65 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
 }
 
+static int
+idpf_config_queues(struct idpf_vport *vport)
+{
+   int err;
+
+   err = idpf_config_rxqs(vport);
+   if (err)
+   return err;
+
+   err = idpf_config_txqs(vport);
+
+   return err;
+}
+
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+   struct idpf_vport *vport =
+   (struct idpf_vport *)dev->data->dev_private;
+   struct idpf_rx_queue *rxq;
+   struct idpf_tx_queue *txq;
+   int i, err = 0;
+
+   for (i = 0; i < dev->data->nb_tx_queues; i++) {
+   txq = dev->data->tx_queues[i];
+   if (txq->tx_deferred_start)
+   continue;
+   if (idpf_tx_queue_init(dev, i) != 0) {
+   PMD_DRV_LOG(ERR, "Fail to init tx queue %u", i);
+   return -1;
+   }
+   }
+
+   for (i = 0; i < dev->data->nb_rx_queues; i++) {
+   rxq = dev->data->rx_queues[i];
+   if (rxq->rx_deferred_start)
+   continue;
+   if (idpf_rx_queue_init(dev, i) != 0) {
+   PMD_DRV_LOG(ERR, "Fail to init rx queue %u", i);
+   return -1;
+   }
+   }
+
+   err = idpf_ena_dis_queues(vport, true);
+   if (err) {
+   PMD_DRV_LOG(ERR, "Fail to start queues");
+   return err;
+   }
+
+   for (i = 0; i < dev->data->nb_tx_queues; i++)
+   dev->data->tx_queue_state[i] =
+   RTE_ETH_QUEUE_STATE_STARTED;
+   for (i = 0; i < dev->data->nb_rx_queues; i++)
+   dev->data->rx_queue_state[i] =
+   RTE_ETH_QUEUE_STATE_STARTED;
+
+   return err;
+}
+
 static int
 idpf_dev_start(struct rte_eth_dev *dev)
 {
@@ -203,6 +271,19 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
vport->stopped = 0;
 
+   if (idpf_config_queues(vport)) {
+   PMD_DRV_LOG(ERR, "Failed to configure queues");
+   goto err_queue;
+   }
+
+   idpf_set_rx_function(dev);
+   idpf_set_tx_function(dev);
+
+   if (idpf_start_queues(dev)) {
+   PMD_DRV_LOG(ERR, "Failed to start queues");
+   goto err_queue;
+   }
+
if (idpf_ena_dis_vport(vport, true)) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
goto err_vport;
@@ -211,6 +292,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
return 0;
 
 err_vport:
+   idpf_stop_queues(dev);
+err_queue:
return -1;
 }
 
@@ -228,6 +311,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
if (idpf_ena_dis_vport(vport, false))
PMD_DRV_LOG(ERR, "disable vport failed");
 
+   idpf_stop_queues(dev);
+
vport->stopped = 1;
dev->data->dev_started = 0;
 
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 762d5ff66a..c5aa168d95 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -195,6 +195,11 @@ int idpf_get_caps(struct idpf_adapter *adapter);
 int idpf_create_vport(__rte_unused

[RFC v2 5/9] net/idpf: support getting device information

2022-05-09 Thread Junfeng Guo
Add ops dev_infos_get.
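
For context, a hedged sketch (not part of this patch; port_id is a
placeholder) of reading the limits this op reports:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_port_limits(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return;
	printf("port %u: max %u rxq / %u txq, mtu %u..%u\n", port_id,
	       info.max_rx_queues, info.max_tx_queues,
	       info.min_mtu, info.max_mtu);
}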

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 69 ++
 1 file changed, 69 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 511770ed4f..c58a40e7ab 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -28,6 +28,8 @@ static int idpf_dev_configure(struct rte_eth_dev *dev);
 static int idpf_dev_start(struct rte_eth_dev *dev);
 static int idpf_dev_stop(struct rte_eth_dev *dev);
 static int idpf_dev_close(struct rte_eth_dev *dev);
+static int idpf_dev_info_get(struct rte_eth_dev *dev,
+struct rte_eth_dev_info *dev_info);
 
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure  = idpf_dev_configure,
@@ -42,8 +44,75 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_release   = idpf_dev_rx_queue_release,
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release   = idpf_dev_tx_queue_release,
+   .dev_infos_get  = idpf_dev_info_get,
 };
 
+static int
+idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
+{
+   dev_info->max_rx_queues = adapter->caps->max_rx_q;
+   dev_info->max_tx_queues = adapter->caps->max_tx_q;
+   dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
+   dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
+
+   dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
+   dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+   dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
+   dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+   dev_info->rx_offload_capa =
+   RTE_ETH_RX_OFFLOAD_VLAN_STRIP   |
+   RTE_ETH_RX_OFFLOAD_QINQ_STRIP   |
+   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM   |
+   RTE_ETH_RX_OFFLOAD_UDP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
+   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+   RTE_ETH_RX_OFFLOAD_SCATTER  |
+   RTE_ETH_RX_OFFLOAD_VLAN_FILTER  |
+   RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+   dev_info->tx_offload_capa =
+   RTE_ETH_TX_OFFLOAD_VLAN_INSERT  |
+   RTE_ETH_TX_OFFLOAD_QINQ_INSERT  |
+   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM   |
+   RTE_ETH_TX_OFFLOAD_UDP_CKSUM|
+   RTE_ETH_TX_OFFLOAD_TCP_CKSUM|
+   RTE_ETH_TX_OFFLOAD_SCTP_CKSUM   |
+   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+   RTE_ETH_TX_OFFLOAD_TCP_TSO  |
+   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO|
+   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO  |
+   RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   |
+   RTE_ETH_TX_OFFLOAD_MULTI_SEGS   |
+   RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
+   dev_info->default_rxconf = (struct rte_eth_rxconf) {
+   .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+   .rx_drop_en = 0,
+   .offloads = 0,
+   };
+
+   dev_info->default_txconf = (struct rte_eth_txconf) {
+   .tx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+   .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+   .offloads = 0,
+   };
+
+   dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
+   dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+   .nb_max = IDPF_MAX_RING_DESC,
+   .nb_min = IDPF_MIN_RING_DESC,
+   .nb_align = IDPF_ALIGN_RING_DESC,
+   };
+
+   return 0;
+}
 
 static int
 idpf_init_vport_req_info(struct rte_eth_dev *dev)
-- 
2.25.1



[RFC v2 7/9] net/idpf: support link update

2022-05-09 Thread Junfeng Guo
Add ops link_update.
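
For context, a hedged sketch (not part of this patch; port_id is a
placeholder) of how an application reads the link state this op publishes
through rte_eth_linkstatus_set():

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;
	printf("port %u link %s, speed %u Mbps\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed);
}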

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 22 ++
 drivers/net/idpf/idpf_ethdev.h |  2 ++
 2 files changed, 24 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 01fd023bfc..39efb387cf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -31,6 +31,27 @@ static int idpf_dev_close(struct rte_eth_dev *dev);
 static int idpf_dev_info_get(struct rte_eth_dev *dev,
 struct rte_eth_dev_info *dev_info);
 
+int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+__rte_unused int wait_to_complete)
+{
+   struct idpf_vport *vport =
+   (struct idpf_vport *)dev->data->dev_private;
+   struct rte_eth_link new_link;
+
+   memset(&new_link, 0, sizeof(new_link));
+
+   new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+
+   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+   new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+   RTE_ETH_LINK_DOWN;
+   new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+   return rte_eth_linkstatus_set(dev, &new_link);
+}
+
 static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
.dev_configure  = idpf_dev_configure,
@@ -46,6 +67,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release   = idpf_dev_tx_queue_release,
.dev_infos_get  = idpf_dev_info_get,
+   .link_update= idpf_dev_link_update,
 };
 
 static int
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c5aa168d95..5520b2d6ce 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -189,6 +189,8 @@ _atomic_set_cmd(struct idpf_adapter *adapter, enum 
virtchnl_ops ops)
return !ret;
 }
 
+int idpf_dev_link_update(struct rte_eth_dev *dev,
+__rte_unused int wait_to_complete);
 void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int idpf_check_api_version(struct idpf_adapter *adapter);
 int idpf_get_caps(struct idpf_adapter *adapter);
-- 
2.25.1



[RFC v2 8/9] net/idpf: support basic Rx/Tx

2022-05-09 Thread Junfeng Guo
Add basic RX & TX support in split queue mode and single queue mode.
Using split queue mode by default.
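
For context, a hedged sketch (not part of this patch; port_id, queue 0 and
the burst size are placeholders) of the burst loop these Rx/Tx routines back,
regardless of the queue model negotiated with the device:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
forward_one_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
	/* Drop whatever the Tx queue could not accept. */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}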

Signed-off-by: Beilei Xing 
Signed-off-by: Xiaoyun Li 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c |  93 
 drivers/net/idpf/idpf_rxtx.c   | 877 +
 drivers/net/idpf/idpf_rxtx.h   |  33 ++
 3 files changed, 1003 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 39efb387cf..1a985caf46 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -14,12 +14,16 @@
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
 
+#define IDPF_TX_SINGLE_Q   "tx_single"
+#define IDPF_RX_SINGLE_Q   "rx_single"
 #define VPORT_NUM  "vport_num"
 
 struct idpf_adapter *adapter;
 uint16_t vport_num = 1;
 
 static const char * const idpf_valid_args[] = {
+   IDPF_TX_SINGLE_Q,
+   IDPF_RX_SINGLE_Q,
VPORT_NUM,
NULL
 };
@@ -156,6 +160,30 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev)
(struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
 
vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+   if (!adapter->txq_model) {
+   vport_info->txq_model =
+   rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+   vport_info->num_tx_q = dev->data->nb_tx_queues;
+   vport_info->num_tx_complq =
+   dev->data->nb_tx_queues * IDPF_TX_COMPLQ_PER_GRP;
+   } else {
+   vport_info->txq_model =
+   rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+   vport_info->num_tx_q = dev->data->nb_tx_queues;
+   vport_info->num_tx_complq = 0;
+   }
+   if (!adapter->rxq_model) {
+   vport_info->rxq_model =
+   rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+   vport_info->num_rx_q = dev->data->nb_rx_queues;
+   vport_info->num_rx_bufq =
+   dev->data->nb_rx_queues * IDPF_RX_BUFQ_PER_GRP;
+   } else {
+   vport_info->rxq_model =
+   rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+   vport_info->num_rx_q = dev->data->nb_rx_queues;
+   vport_info->num_rx_bufq = 0;
+   }
 
return 0;
 }
@@ -426,6 +454,56 @@ idpf_dev_close(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+   int *i = (int *)args;
+   char *end;
+   int num;
+
+   num = strtoul(value, &end, 10);
+
+   if (num != 0 && num != 1) {
+   PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+   "value must be 0 or 1",
+   value, key);
+   return -1;
+   }
+
+   *i = num;
+   return 0;
+}
+
+static int idpf_parse_devargs(struct rte_eth_dev *dev)
+{
+   struct rte_devargs *devargs = dev->device->devargs;
+   struct rte_kvargs *kvlist;
+   int ret;
+
+   if (!devargs)
+   return 0;
+
+   kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+   if (!kvlist) {
+   PMD_INIT_LOG(ERR, "invalid kvargs key");
+   return -EINVAL;
+   }
+
+   ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
+&adapter->txq_model);
+   if (ret)
+   goto bail;
+
+   ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
+&adapter->rxq_model);
+   if (ret)
+   goto bail;
+
+bail:
+   rte_kvargs_free(kvlist);
+   return ret;
+}
+
 static void
 idpf_reset_pf(struct iecm_hw *hw)
 {
@@ -533,6 +611,12 @@ idpf_adapter_init(struct rte_eth_dev *dev)
hw->device_id = pci_dev->id.device_id;
hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
 
+   ret = idpf_parse_devargs(dev);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to parse devargs");
+   goto err;
+   }
+
idpf_reset_pf(hw);
ret = idpf_check_pf_reset_done(hw);
if (ret) {
@@ -641,6 +725,15 @@ idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void 
*init_params)
 
dev->dev_ops = &idpf_eth_dev_ops;
 
+   /* for secondary processes, we don't initialise any further as primary
+* has already done this work.
+*/
+   if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+   idpf_set_rx_function(dev);
+   idpf_set_tx_function(dev);
+   return ret;
+   }
+
ret = idpf_adapter_init(dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to init adapter.");
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6b436141c8..d5613d63d6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1301,3 +1301,

[RFC v2 9/9] net/idpf: support RSS

2022-05-09 Thread Junfeng Guo
Add RSS support.
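
For context, a hedged sketch (not part of this patch; port_id, queue counts
and the hash selection are placeholders) of the application-side
configuration that makes the PMD take the RSS initialization path:

#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* let the PMD pick a random key */
				.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
					  RTE_ETH_RSS_UDP,
			},
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}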

Signed-off-by: Beilei Xing 
Signed-off-by: Junfeng Guo 
---
 drivers/net/idpf/idpf_ethdev.c | 106 +
 drivers/net/idpf/idpf_ethdev.h |  18 +-
 drivers/net/idpf/idpf_vchnl.c  |  93 +
 3 files changed, 216 insertions(+), 1 deletion(-)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1a985caf46..2a0304c18e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -85,6 +85,7 @@ idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, 
struct rte_eth_dev_info
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+   dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
@@ -292,9 +293,96 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
 }
 
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+   int ret;
+
+   ret = idpf_set_rss_key(vport);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+   return ret;
+   }
+
+   ret = idpf_set_rss_lut(vport);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+   return ret;
+   }
+
+   ret = idpf_set_rss_hash(vport);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+   return ret;
+   }
+
+   return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+   struct rte_eth_rss_conf *rss_conf;
+   uint16_t i, nb_q, lut_size;
+   int ret = 0;
+
+   rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+   nb_q = vport->num_rx_q;
+
+   vport->rss_key = (uint8_t *)rte_zmalloc("rss_key",
+vport->rss_key_size, 0);
+   if (!vport->rss_key) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+   ret = -ENOMEM;
+   goto err_key;
+   }
+
+   lut_size = vport->rss_lut_size;
+   vport->rss_lut = (uint32_t *)rte_zmalloc("rss_lut",
+ sizeof(uint32_t) * lut_size, 0);
+   if (!vport->rss_lut) {
+   PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+   ret = -ENOMEM;
+   goto err_lut;
+   }
+
+   if (!rss_conf->rss_key) {
+   for (i = 0; i < vport->rss_key_size; i++)
+   vport->rss_key[i] = (uint8_t)rte_rand();
+   } else {
+   rte_memcpy(vport->rss_key, rss_conf->rss_key,
+  RTE_MIN(rss_conf->rss_key_len,
+  vport->rss_key_size));
+   }
+
+   for (i = 0; i < lut_size; i++)
+   vport->rss_lut[i] = i % nb_q;
+
+   vport->rss_hf = IECM_DEFAULT_RSS_HASH_EXPANDED;
+
+   ret = idpf_config_rss(vport);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to configure RSS");
+   goto err_cfg;
+   }
+
+   return ret;
+
+err_cfg:
+   rte_free(vport->rss_lut);
+   vport->rss_lut = NULL;
+err_lut:
+   rte_free(vport->rss_key);
+   vport->rss_key = NULL;
+err_key:
+   return ret;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
+   struct idpf_vport *vport =
+   (struct idpf_vport *)dev->data->dev_private;
int ret = 0;
 
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
@@ -319,6 +407,14 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
 
+   if (adapter->caps->rss_caps) {
+   ret = idpf_init_rss(vport);
+   if (ret) {
+   PMD_INIT_LOG(ERR, "Failed to init rss");
+   return ret;
+   }
+   }
+
return ret;
 }
 
@@ -451,6 +547,16 @@ idpf_dev_close(struct rte_eth_dev *dev)
idpf_dev_stop(dev);
idpf_destroy_vport(vport);
 
+   if (vport->rss_lut) {
+   rte_free(vport->rss_lut);
+   vport->rss_lut = NULL;
+   }
+
+   if (vport->rss_key) {
+   rte_free(vport->rss_key);
+   vport->rss_key = NULL;
+   }
+
return 0;
 }
 
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 5520b2d6ce..0b8e163bbb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -43,6 +43,20 @@
 #define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
 
+#define IDPF_RSS_OFFLOAD_ALL ( \
+   RTE_ETH_RSS_IPV4| \
+   RTE_ETH_RSS_FRAG_IPV4   | \
+   RTE_ETH_RSS_NONFRAG_IPV4_TCP| \
+   RTE_ETH_RSS_NONFRAG_IPV4_UDP| \
+   RTE_ETH_RSS_NONFRAG_IPV4_SCTP   | \
+   R

[PATCH v3 1/2] config/arm: add SVE ACLE control flag

2022-05-09 Thread Rahul Bhansali
This adds a control flag for SVE ACLE to enable or disable the
RTE_HAS_SVE_ACLE macro in the build.
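
As background, a hedged sketch (not from this patch; the helper name is
illustrative) of how source code typically keys off the macro this flag
controls:

#ifdef RTE_HAS_SVE_ACLE
#include <arm_sve.h>                       /* SVE ACLE intrinsics available */
static inline int have_sve_path(void) { return 1; }
#else
static inline int have_sve_path(void) { return 0; } /* Neon/scalar fallback */
#endif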

Signed-off-by: Rahul Bhansali 
---
Changes in v3:
- Moved sve_acle condition to be consider for
RTE_HAS_SVE_ACLE flag only.

Changes in v2:
- Renamed the flag to sve_acle from sve
- Added double-indent.

 config/arm/meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 8aead74086..6f8961eac8 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -605,7 +605,7 @@ endif

 if cc.get_define('__ARM_FEATURE_SVE', args: machine_args) != ''
 compile_time_cpuflags += ['RTE_CPUFLAG_SVE']
-if (cc.check_header('arm_sve.h'))
+if (cc.check_header('arm_sve.h') and soc_config.get('sve_acle', true))
 dpdk_conf.set('RTE_HAS_SVE_ACLE', 1)
 endif
 endif
--
2.25.1



[PATCH v3 2/2] config/arm: disable SVE ACLE for cn10k

2022-05-09 Thread Rahul Bhansali
This disables the sve_acle flag for cn10k.

Performance impact:
With the l3fwd example, LPM lookup performance increases
by ~21% when Neon is used instead of SVE.

Signed-off-by: Rahul Bhansali 
---
Changes in v3: No change

Changes in v2:
- Renamed the flag to sve_acle from sve

 config/arm/meson.build | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 6f8961eac8..a94129168f 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -281,7 +281,8 @@ soc_cn10k = {
 ],
 'part_number': '0xd49',
 'extra_march_features': ['crypto'],
-'numa': false
+'numa': false,
+'sve_acle': false
 }

 soc_dpaa = {
--
2.25.1



RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are full and Tx fails

2022-05-09 Thread Rakesh Kudurumalla
Hi Thomas Monjalon,

The same behavior is observed in the cnxk driver as well.
Can we please get this patch merged?

Regards,
Rakesh 
> -Original Message-
> From: Rakesh Kudurumalla
> Sent: Monday, February 14, 2022 10:27 AM
> To: Thomas Monjalon ; Jerin Jacob Kollanukkaran
> 
> Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> ajit.khapa...@broadcom.com
> Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if queues are
> full and Tx fails
> 
> 
> 
> > -Original Message-
> > From: Thomas Monjalon 
> > Sent: Tuesday, February 1, 2022 1:15 PM
> > To: Jerin Jacob Kollanukkaran ; Rakesh Kudurumalla
> > 
> > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > ajit.khapa...@broadcom.com
> > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang if
> > queues are full and Tx fails
> >
> > octeontx2 driver is removed
> > Can we close this patch?
> Same behavior is observed with cnxk driver, so we need  this patch
> >
> >
> > 01/02/2022 07:30, Rakesh Kudurumalla:
> > > ping
> > >
> > > > -Original Message-
> > > > From: Rakesh Kudurumalla
> > > > Sent: Monday, January 10, 2022 2:35 PM
> > > > To: Thomas Monjalon ; Jerin Jacob
> > Kollanukkaran
> > > > 
> > > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > > ajit.khapa...@broadcom.com
> > > > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang
> > > > if queues are full and Tx fails
> > > >
> > > > ping
> > > >
> > > > > -Original Message-
> > > > > From: Rakesh Kudurumalla
> > > > > Sent: Monday, December 13, 2021 12:10 PM
> > > > > To: Thomas Monjalon ; Jerin Jacob
> > > > > Kollanukkaran 
> > > > > Cc: sta...@dpdk.org; dev@dpdk.org; david.march...@redhat.com;
> > > > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > > > ajit.khapa...@broadcom.com
> > > > > Subject: RE: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid hang
> > > > > if queues are full and Tx fails
> > > > >
> > > > >
> > > > >
> > > > > > -Original Message-
> > > > > > From: Thomas Monjalon 
> > > > > > Sent: Monday, November 29, 2021 2:44 PM
> > > > > > To: Rakesh Kudurumalla ; Jerin Jacob
> > > > > > Kollanukkaran 
> > > > > > Cc: sta...@dpdk.org; dev@dpdk.org;
> david.march...@redhat.com;
> > > > > > ferruh.yi...@intel.com; andrew.rybche...@oktetlabs.ru;
> > > > > > ajit.khapa...@broadcom.com
> > > > > > Subject: Re: [EXT] Re: [dpdk-stable] [PATCH v2] test: avoid
> > > > > > hang if queues are full and Tx fails
> > > > > >
> > > > > > 29/11/2021 09:52, Rakesh Kudurumalla:
> > > > > > > From: Thomas Monjalon 
> > > > > > > > 22/11/2021 08:59, Rakesh Kudurumalla:
> > > > > > > > > From: Thomas Monjalon 
> > > > > > > > > > 20/07/2021 18:50, Rakesh Kudurumalla:
> > > > > > > > > > > Current pmd_perf_autotest() in continuous mode tries
> > > > > > > > > > > to enqueue MAX_TRAFFIC_BURST completely before
> > > > > > > > > > > starting the
> > > > test.
> > > > > > > > > > > Some drivers cannot accept complete
> > > > > > > > > > > MAX_TRAFFIC_BURST even though
> > > > > > > > rx+tx
> > > > > > > > > > > desc
> > > > > > > > > > count
> > > > > > > > > > > can fit it.
> > > > > > > > > >
> > > > > > > > > > Which driver is failing to do so?
> > > > > > > > > > Why it cannot enqueue 32 packets?
> > > > > > > > >
> > > > > > > > > Octeontx2 driver is failing to enqueue because hardware
> > > > > > > > > buffers are full
> > > > > > > > before test.
> > > > > >
> > > > > > Aren't you stopping the support of octeontx2?
> > > > > > Why do you care now?
> > > > > >  Yes, we are not supporting octeontx2, but this issue is
> > > > > > observed in the cnxk driver; the current patch fixes the same.
> > > > > > > >
> > > > > > > > Why hardware buffers are full?
> > > > > > > Hardware buffers are full because number of number of
> > > > > > > descriptors in continuous mode Is less than
> > > > > > > MAX_TRAFFIC_BURST, so if enque fails , there is no way hardware
> can drop the Packets .
> > > > > > > pmd_per_autotest application evaluates performance after
> > > > > > > enqueueing
> > > > packets Initially.
> > > > > > > >
> > > > > > > > > pmd_perf_autotest() in continuous mode tries to enqueue
> > > > > > > > > MAX_TRAFFIC_BURST (2048) before starting the test.
> > > > > > > > >
> > > > > > > > > > > This patch changes behaviour to stop enqueuing after
> > > > > > > > > > > few
> > > > retries.
> > > > > > > > > >
> > > > > > > > > > If there is a real limitation, there will be issues in
> > > > > > > > > > more places than this test program.
> > > > > > > > > > I feel it should be addressed either in the driver or
> > > > > > > > > > at ethdev
> > level.
> > > > > > > > > >
> > > > > > > > > > [...]
> > > > > > > > > > > @@ -480,10 +483,19 @@ main_loop(__rte_unused void
> > *args)
> > > > > > > > > > > 

[PATCH v4 1/2] config/arm: add SVE ACLE control flag

2022-05-09 Thread Rahul Bhansali
This adds a control flag for SVE ACLE to enable or disable the
RTE_HAS_SVE_ACLE macro in the build.

Signed-off-by: Rahul Bhansali 
---
Changes in v4:
- Resend patches. With v3, patches were not sent properly
in single series.

Changes in v3:
- Moved sve_acle condition to be consider for
RTE_HAS_SVE_ACLE flag only.

Changes in v2:
- Renamed the flag to sve_acle from sve
- Added double-indent.

 config/arm/meson.build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 8aead74086..6f8961eac8 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -605,7 +605,7 @@ endif

 if cc.get_define('__ARM_FEATURE_SVE', args: machine_args) != ''
 compile_time_cpuflags += ['RTE_CPUFLAG_SVE']
-if (cc.check_header('arm_sve.h'))
+if (cc.check_header('arm_sve.h') and soc_config.get('sve_acle', true))
 dpdk_conf.set('RTE_HAS_SVE_ACLE', 1)
 endif
 endif
--
2.25.1



[PATCH v4 2/2] config/arm: disable SVE ACLE for cn10k

2022-05-09 Thread Rahul Bhansali
This disables the sve_acle flag for cn10k.

Performance impact:
With the l3fwd example, LPM lookup performance increases
by ~21% when Neon is used instead of SVE.

Signed-off-by: Rahul Bhansali 
---
Changes in v4:
- Resend patches. With v3, patches were not sent properly
in single series.

Changes in v3: No change

Changes in v2:
- Renamed the flag to sve_acle from sve

 config/arm/meson.build | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 6f8961eac8..a94129168f 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -281,7 +281,8 @@ soc_cn10k = {
 ],
 'part_number': '0xd49',
 'extra_march_features': ['crypto'],
-'numa': false
+'numa': false,
+'sve_acle': false
 }

 soc_dpaa = {
--
2.25.1



Re: [PATCH v6] lib/eal/ppc fix compilation for musl

2022-05-09 Thread David Marchand
On Sat, May 7, 2022 at 11:16 PM Duncan Bellamy  wrote:
>
> musl lacks __ppc_get_timebase() but has __builtin_ppc_get_timebase()
>
> the __ppc_get_timebase_freq() is taken from:
> https://git.alpinelinux.org/aports/commit/?id=06b03f70fb94972286c0c9f6278df89e53903833
>
> Signed-off-by: Duncan Bellamy 

- A patch title does not need lib/ prefix.
Here, "eal/ppc: " is enough.


- Code in lib/eal/linux won't be used for FreeBSD/Windows.
On the other hand, arch-specific code (here, lib/eal/ppc/) can be used
for the various OS.
Besides, as far as I can see in the Linux kernel sources, powerpc is
the only architecture that exports a "timebase" entry in
/proc/cpuinfo.
So, I see no reason to put any code out of lib/eal/ppc.


- In the end, unless I missed some point, the patch could probably
look like (untested):

diff --git a/lib/eal/ppc/include/rte_cycles.h b/lib/eal/ppc/include/rte_cycles.h
index 5585f9273c..666fc9b0bf 100644
--- a/lib/eal/ppc/include/rte_cycles.h
+++ b/lib/eal/ppc/include/rte_cycles.h
@@ -10,7 +10,10 @@
 extern "C" {
 #endif

+#include 
+#ifdef __GLIBC__
 #include 
+#endif

 #include "generic/rte_cycles.h"

@@ -26,7 +29,11 @@ extern "C" {
 static inline uint64_t
 rte_rdtsc(void)
 {
+#ifdef __GLIBC__
return __ppc_get_timebase();
+#else
+   return __builtin_ppc_get_timebase();
+#endif
 }

 static inline uint64_t
diff --git a/lib/eal/ppc/rte_cycles.c b/lib/eal/ppc/rte_cycles.c
index 3180adb0ff..99d36b2f7e 100644
--- a/lib/eal/ppc/rte_cycles.c
+++ b/lib/eal/ppc/rte_cycles.c
@@ -2,12 +2,50 @@
  * Copyright (C) IBM Corporation 2019.
  */

+#include 
+#ifdef __GLIBC__
 #include 
+#elif RTE_EXEC_ENV_LINUX
+#include 
+#include 
+#endif

 #include "eal_private.h"

 uint64_t
 get_tsc_freq_arch(void)
 {
+#ifdef __GLIBC__
return __ppc_get_timebase_freq();
+#elif RTE_EXEC_ENV_LINUX
+   static unsigned long base;
+   char buf[512];
+   ssize_t nr;
+   FILE *f;
+
+   if (base != 0)
+   goto out;
+
+   f = fopen("/proc/cpuinfo", "rb");
+   if (f == NULL)
+   goto out;
+
+   while (fgets(buf, sizeof(buf), f) != NULL) {
+   char *ret = strstr(buf, "timebase");
+
+   if (ret == NULL)
+   continue;
+   ret += sizeof("timebase") - 1;
+   ret = strchr(ret, ':');
+   if (ret == NULL)
+   continue;
+   base = strtoul(ret + 1, NULL, 10);
+   break;
+   }
+   fclose(f);
+out:
+   return (uint64_t) base;
+#else
+   return 0;
+#endif
 }


-- 
David Marchand



Re: [PATCH 00/11] Introduce support for RISC-V architecture

2022-05-09 Thread Stanisław Kardach
On Fri, May 6, 2022 at 11:13 AM David Marchand 
wrote:

> On Thu, May 5, 2022 at 7:30 PM Stanislaw Kardach  wrote:
> >
> > This patchset adds support for building and running DPDK on 64bit RISC-V
> > architecture. The initial support targets rv64gc (rv64imafdc) ISA and
> > was tested on SiFive Unmatched development board with the Freedom U740
> > SoC running Linux (freedom-u-sdk based kernel).
> > I have tested this codebase using DPDK unit and perf tests as well as
> > test-pmd, l2fwd and l3fwd examples.
> > The NIC attached to the DUT was Intel X520-DA2 which uses ixgbe PMD.
> > On the UIO side, since U740 does not have an IOMMU, I've used igb_uio,
> > uio_pci_generic and vfio-pci noiommu drivers.
> >
> > Commits 1-2 fix small issues which are encountered if a given platform
> >does not support any vector operations (which is the case with U740).
> > Commit 3 introduces EAL and build system support for RISC-V architecture
> >as well as documentation updates.
> > Commits 4-7 add missing defines and stubs to enable RISC-V operation in
> >non-EAL parts.
> > Commit 8 adds RISC-V specific cpuflags test.
> > Commit 9 works around a bug in the current GCC in test_ring compiled
> >with -O0 or -Og.
> > Commit 10 adds RISC-V testing to test-meson-builds.sh automatically
> >iterating over cross-compile config files (currently present for
> >generic rv64gc and SiFive U740).
> > Commit 11 extends hash r/w perf test by displaying both HTM and non-HTM
> >measurements. This is an extraneous commit which is not directly
> >needed for RISC-V support but was noticed when we have started
> >gathering test results. If needed, I can submit it separately.
> >
> > I appreciate Your comments and feedback.
>
> Thanks for working on this!
>
Thanks for your review!

>
> Please add a cross compilation job to GHA, something like:
>
> https://github.com/david-marchand/dpdk/commit/4023e28f9050b85fb138eba14068bfe882036f01
> Which looks to run fine:
>
> https://github.com/david-marchand/dpdk/runs/6319625002?check_suite_focus=true

Will do in V2.

>
>
> Testing all riscv configs in test-meson-buils.sh seems too much to me.
> Is there a real value to test both current targets?
>
It's for sanity and compilation coverage testing. I.e. the SiFive variant has a
specific build config which does not require extra barriers when reading the
time and cycle registers for rte_rdtsc_precise(). I want to make sure that
if anyone changes some code based on configuration flags, it gets at least
compile-checked.
I believe a similar thing is done for aarch64 builds.

>
> About the new "Sponsored-by" tag, it should not raise warnings in the
> CI if we agree on its addition.
>
I'll modify it in V2 to be in the form of:
  Sponsored by: StarFive Technology
  ...
  Signed-off-by: ...
This was suggested by Stephen Hemminger as having a precedent in the Linux
kernel. Interestingly enough, the first use of this tag in the kernel source
was this year, in January.

>
> devtools/check-meson.py caught coding style issues.
>
Will fix in V2.

>
> In general, please avoid letting arch specific headers leak
> internal/non rte_ prefixed helpers out of them.
> For example, I noticed a RV64_CSRR macro that can be undefined after usage.
>
Thanks for noticing. I'll fix this one in V2.
There are 2 other symbols that leak, but on purpose (for lack of a better
idea): vect_load_128() and vect_and(). Both are used in l3fwd_em to
simulate vector operations. Other platforms reference their intrinsics
directly in l3fwd_em.c. As I don't have support for vector ops and I
wanted to indicate that xmm_t should be an isolated API, I've put both in
rte_vect.h. That said, I'm not happy with this solution and am open to
suggestions on how to solve it neatly.

>
> Patch 3 is huge, not sure it is easy to split, did you consider doing so?
>
That seems to be the nature of a new EAL implementation: I have to include
all symbols, otherwise DPDK won't compile.
Alternatively, I could have a huge initial patch with empty stubs that would
be filled in by later commits. The downside of this approach is that it's hard
to verify each commit separately, as tests will fail until the whole
implementation is there, so the division is only visual.

>
> The release notes update is verbose and some parts could be dropped,
> like the list of verifications that are fine in a series cover letter.
>
Will do. I'll move the listed items to the cover letter.

>
> Please resubmit fixes separately from this series so that we can merge
> them sooner than this series.
>
Will do. Since at least 2 fixes are required for the RISC-V EAL to work or
compile, I'll put a Depends-on tag in the EAL commit.

>
>
> --
> David Marchand
>
>


Re: [PATCH 00/11] Introduce support for RISC-V architecture

2022-05-09 Thread Thomas Monjalon
09/05/2022 14:24, Stanisław Kardach:
> On Fri, May 6, 2022 at 11:13 AM David Marchand 
> wrote:
> > About the new "Sponsored-by" tag, it should not raise warnings in the
> > CI if we agree on its addition.
> >
> I'll modify it in V2 to be in form of:
>   Sponsored by: StarFive Technology

You mean removing the hyphen?
I think it is better to keep it so all tags have the same format.

>   ...
>   Signed-off-by: ...
> 
> This was suggested by Stephen Hemminger as having a precedent in Linux
> kernel. Interestingly enough first use of this tag in kernel source was
> this year in January.

The precedent is not strong enough to be copied in my opinion.





Re: [dpdk-dev] [PATCH 2/2] net/cnxk: support IPv6 fragment flow pattern item

2022-05-09 Thread Jerin Jacob
On Wed, Apr 27, 2022 at 11:53 AM  wrote:
>
> From: Satheesh Paul 
>
> Support matching IPv6 fragment extension header
> with RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT flow pattern item.
>
> Signed-off-by: Satheesh Paul 

Acked-by: Jerin Jacob 
Series applied to dpdk-next-net-mrvl/for-next-net. Thanks
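For reference, a minimal sketch of how an application might match IPv6
fragments with this item (plain rte_flow usage; the drop action and the empty
item specs are arbitrary choices for illustration):

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
create_ipv6_frag_rule_sketch(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;

    /* Match any IPv6 packet carrying a fragment extension header. */
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}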

> ---
>  doc/guides/nics/features/cnxk.ini | 1 +
>  doc/guides/nics/features/cnxk_vec.ini | 1 +
>  doc/guides/nics/features/cnxk_vf.ini  | 1 +
>  drivers/net/cnxk/cnxk_flow.c  | 3 +++
>  4 files changed, 6 insertions(+)
>
> diff --git a/doc/guides/nics/features/cnxk.ini 
> b/doc/guides/nics/features/cnxk.ini
> index 7cac8beb61..1876fe86c7 100644
> --- a/doc/guides/nics/features/cnxk.ini
> +++ b/doc/guides/nics/features/cnxk.ini
> @@ -65,6 +65,7 @@ icmp = Y
>  ipv4 = Y
>  ipv6 = Y
>  ipv6_ext = Y
> +ipv6_frag_ext= Y
>  mark = Y
>  mpls = Y
>  nvgre= Y
> diff --git a/doc/guides/nics/features/cnxk_vec.ini 
> b/doc/guides/nics/features/cnxk_vec.ini
> index 0803bb3c29..5d0976e6ce 100644
> --- a/doc/guides/nics/features/cnxk_vec.ini
> +++ b/doc/guides/nics/features/cnxk_vec.ini
> @@ -61,6 +61,7 @@ icmp = Y
>  ipv4 = Y
>  ipv6 = Y
>  ipv6_ext = Y
> +ipv6_frag_ext= Y
>  mark = Y
>  mpls = Y
>  nvgre= Y
> diff --git a/doc/guides/nics/features/cnxk_vf.ini 
> b/doc/guides/nics/features/cnxk_vf.ini
> index ed3e231c5f..c4ee32a9ad 100644
> --- a/doc/guides/nics/features/cnxk_vf.ini
> +++ b/doc/guides/nics/features/cnxk_vf.ini
> @@ -57,6 +57,7 @@ icmp = Y
>  ipv4 = Y
>  ipv6 = Y
>  ipv6_ext = Y
> +ipv6_frag_ext= Y
>  mark = Y
>  mpls = Y
>  nvgre= Y
> diff --git a/drivers/net/cnxk/cnxk_flow.c b/drivers/net/cnxk/cnxk_flow.c
> index ff962c141d..34f5d54f28 100644
> --- a/drivers/net/cnxk/cnxk_flow.c
> +++ b/drivers/net/cnxk/cnxk_flow.c
> @@ -14,6 +14,9 @@ const struct cnxk_rte_flow_term_info term[] = {
>  sizeof(struct rte_flow_item_ipv4)},
> [RTE_FLOW_ITEM_TYPE_IPV6] = {ROC_NPC_ITEM_TYPE_IPV6,
>  sizeof(struct rte_flow_item_ipv6)},
> +   [RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT] = {
> +   ROC_NPC_ITEM_TYPE_IPV6_FRAG_EXT,
> +   sizeof(struct rte_flow_item_ipv6_frag_ext)},
> [RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {
> ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4,
> sizeof(struct rte_flow_item_arp_eth_ipv4)},
> --
> 2.25.4
>


Re: [PATCH v2 1/6] eventdev: support to set queue attributes at runtime

2022-05-09 Thread Jerin Jacob
On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton  wrote:
>
> Added a new eventdev API rte_event_queue_attr_set(), to set event queue
> attributes at runtime from the values set during initialization using
> rte_event_queue_setup(). PMD's supporting this feature should expose the
> capability RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR.
>
> Signed-off-by: Shijith Thotton 

Please update the release notes.
With the above change,

Acked-by: Jerin Jacob 
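For reference, a minimal usage sketch based on the API as proposed in this
series (the capability and attribute names are taken from the patch):

#include <errno.h>
#include <stdint.h>
#include <rte_eventdev.h>

/* Raise a queue's priority at runtime, if the PMD advertises support. */
static int
set_queue_priority_sketch(uint8_t dev_id, uint8_t queue_id, uint64_t prio)
{
    struct rte_event_dev_info info;
    int ret;

    ret = rte_event_dev_info_get(dev_id, &info);
    if (ret != 0)
        return ret;

    if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
        return -ENOTSUP;        /* runtime changes not supported by PMD */

    return rte_event_queue_attr_set(dev_id, queue_id,
                                    RTE_EVENT_QUEUE_ATTR_PRIORITY, prio);
}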


> ---
>  doc/guides/eventdevs/features/default.ini |  1 +
>  lib/eventdev/eventdev_pmd.h   | 22 +++
>  lib/eventdev/rte_eventdev.c   | 26 ++
>  lib/eventdev/rte_eventdev.h   | 33 ++-
>  lib/eventdev/version.map  |  3 +++
>  5 files changed, 84 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/eventdevs/features/default.ini 
> b/doc/guides/eventdevs/features/default.ini
> index 2ea233463a..00360f60c6 100644
> --- a/doc/guides/eventdevs/features/default.ini
> +++ b/doc/guides/eventdevs/features/default.ini
> @@ -17,6 +17,7 @@ runtime_port_link  =
>  multiple_queue_port=
>  carry_flow_id  =
>  maintenance_free   =
> +runtime_queue_attr =
>
>  ;
>  ; Features of a default Ethernet Rx adapter.
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..3b85d9f7a5 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct 
> rte_eventdev *dev,
>  typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
> uint8_t queue_id);
>
> +/**
> + * Set an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_set_t)(struct rte_eventdev *dev,
> +uint8_t queue_id, uint32_t attr_id,
> +uint64_t attr_value);
> +
>  /**
>   * Retrieve the default event port configuration.
>   *
> @@ -1211,6 +1231,8 @@ struct eventdev_ops {
> /**< Set up an event queue. */
> eventdev_queue_release_t queue_release;
> /**< Release an event queue. */
> +   eventdev_queue_attr_set_t queue_attr_set;
> +   /**< Set an event queue attribute. */
>
> eventdev_port_default_conf_get_t port_def_conf;
> /**< Get default port configuration. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..a31e99be02 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -844,6 +844,32 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t 
> queue_id, uint32_t attr_id,
> return 0;
>  }
>
> +int
> +rte_event_queue_attr_set(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
> +uint64_t attr_value)
> +{
> +   struct rte_eventdev *dev;
> +
> +   RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +   dev = &rte_eventdevs[dev_id];
> +   if (!is_valid_queue(dev, queue_id)) {
> +   RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> +   return -EINVAL;
> +   }
> +
> +   if (!(dev->data->event_dev_cap &
> + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR)) {
> +   RTE_EDEV_LOG_ERR(
> +   "Device %" PRIu8 "does not support changing queue 
> attributes at runtime",
> +   dev_id);
> +   return -ENOTSUP;
> +   }
> +
> +   RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_attr_set, -ENOTSUP);
> +   return (*dev->dev_ops->queue_attr_set)(dev, queue_id, attr_id,
> +  attr_value);
> +}
> +
>  int
>  rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> const uint8_t queues[], const uint8_t priorities[],
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..16e9d5fb5b 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -225,7 +225,7 @@ struct rte_event;
>  /**< Event scheduling prioritization is based on the priority associated with
>   *  each event queue.
>   *
> - *  @see rte_event_queue_setup()
> + *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>   */
>  #define RTE_EVENT_DEV_CAP_EVENT_QOS   (1ULL << 1)
>  /**< Event scheduling prioritization is based on the priority associated with
> @@ -307,6 +307,13 @@ struct rte_event;
>   * global pool, or process signaling related to load balancing.
>   */
>
> +#define RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR (1ULL << 11)
> +/**< Event device is capable of changing the queue attributes at runtime i.e 
> af

Re: [PATCH v2 2/6] eventdev: add weight and affinity to queue attributes

2022-05-09 Thread Jerin Jacob
On Tue, Apr 5, 2022 at 11:11 AM Shijith Thotton  wrote:
>
> Extended eventdev queue QoS attributes to support weight and affinity.
> If queues are of same priority, events from the queue with highest

the same priority

> weight will be scheduled first. Affinity indicates the number of times,
> the subsequent schedule calls from an event port will use the same event
> queue. Schedule call selects another queue if current queue goes empty
> or schedule count reaches affinity count.
>
> To avoid ABI break, weight and affinity attributes are not yet added to
> queue config structure and relies on PMD for managing it. New eventdev

rely on

> op queue_attr_get can be used to get it from the PMD.
>
> Signed-off-by: Shijith Thotton 

Please update the release notes.

With the above change,

Acked-by: Jerin Jacob 
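For readers, the intended semantics can be sketched as follows (illustrative
only, not PMD code): among equal-priority queues pick by weight using smooth
weighted round-robin, then stick to the chosen queue for 'affinity' schedule
calls.

#include <stdint.h>

struct q_state_sketch {
    uint32_t weight;    /* cf. RTE_EVENT_QUEUE_ATTR_WEIGHT */
    uint32_t affinity;  /* cf. RTE_EVENT_QUEUE_ATTR_AFFINITY */
    int64_t credit;     /* smooth weighted round-robin credit */
};

static int
pick_queue_sketch(struct q_state_sketch *q, int nb_queues,
                  int cur, uint32_t *stick_left, int cur_empty)
{
    int64_t total = 0;
    int i, best = 0;

    /* Affinity: keep the current queue while it is non-empty and the
     * stick counter has not expired. */
    if (cur >= 0 && !cur_empty && *stick_left > 0) {
        (*stick_left)--;
        return cur;
    }

    /* Smooth weighted round-robin among all queues. */
    for (i = 0; i < nb_queues; i++) {
        q[i].credit += q[i].weight;
        total += q[i].weight;
        if (q[i].credit > q[best].credit)
            best = i;
    }
    q[best].credit -= total;
    *stick_left = q[best].affinity;
    return best;
}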


> ---
>  lib/eventdev/eventdev_pmd.h | 22 +
>  lib/eventdev/rte_eventdev.c | 12 
>  lib/eventdev/rte_eventdev.h | 38 +++--
>  3 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index 3b85d9f7a5..5495aee4f6 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -341,6 +341,26 @@ typedef int (*eventdev_queue_setup_t)(struct 
> rte_eventdev *dev,
>  typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
> uint8_t queue_id);
>
> +/**
> + * Get an event queue attribute at runtime.
> + *
> + * @param dev
> + *   Event device pointer
> + * @param queue_id
> + *   Event queue index
> + * @param attr_id
> + *   Event queue attribute id
> + * @param[out] attr_value
> + *   Event queue attribute value
> + *
> + * @return
> + *  - 0: Success.
> + *  - <0: Error code on failure.
> + */
> +typedef int (*eventdev_queue_attr_get_t)(struct rte_eventdev *dev,
> +uint8_t queue_id, uint32_t attr_id,
> +uint32_t *attr_value);
> +
>  /**
>   * Set an event queue attribute at runtime.
>   *
> @@ -1231,6 +1251,8 @@ struct eventdev_ops {
> /**< Set up an event queue. */
> eventdev_queue_release_t queue_release;
> /**< Release an event queue. */
> +   eventdev_queue_attr_get_t queue_attr_get;
> +   /**< Get an event queue attribute. */
> eventdev_queue_attr_set_t queue_attr_set;
> /**< Set an event queue attribute. */
>
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index a31e99be02..12b261f923 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -838,6 +838,18 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t 
> queue_id, uint32_t attr_id,
>
> *attr_value = conf->schedule_type;
> break;
> +   case RTE_EVENT_QUEUE_ATTR_WEIGHT:
> +   *attr_value = RTE_EVENT_QUEUE_WEIGHT_LOWEST;
> +   if (dev->dev_ops->queue_attr_get)
> +   return (*dev->dev_ops->queue_attr_get)(
> +   dev, queue_id, attr_id, attr_value);
> +   break;
> +   case RTE_EVENT_QUEUE_ATTR_AFFINITY:
> +   *attr_value = RTE_EVENT_QUEUE_AFFINITY_LOWEST;
> +   if (dev->dev_ops->queue_attr_get)
> +   return (*dev->dev_ops->queue_attr_get)(
> +   dev, queue_id, attr_id, attr_value);
> +   break;
> default:
> return -EINVAL;
> };
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 16e9d5fb5b..a6fbaf1c11 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -222,8 +222,14 @@ struct rte_event;
>
>  /* Event device capability bitmap flags */
>  #define RTE_EVENT_DEV_CAP_QUEUE_QOS   (1ULL << 0)
> -/**< Event scheduling prioritization is based on the priority associated with
> - *  each event queue.
> +/**< Event scheduling prioritization is based on the priority and weight
> + * associated with each event queue. Events from a queue with highest 
> priority
> + * is scheduled first. If the queues are of same priority, weight of the 
> queues
> + * are considered to select a queue in a weighted round robin fashion.
> + * Subsequent dequeue calls from an event port could see events from the same
> + * event queue, if the queue is configured with an affinity count. Affinity
> + * count is the number of subsequent dequeue calls, in which an event port
> + * should use the same event queue if the queue is non-empty
>   *
>   *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
>   */
> @@ -331,6 +337,26 @@ struct rte_event;
>   * @see rte_event_port_link()
>   */
>
> +/* Event queue scheduling weights */
> +#define RTE_EVENT_QUEUE_WEIGHT_HIGHEST   255
> +/**< Highest weight of an event queue
> + * @see rte_event_queue_attr_get(), rte_event_queue_attr_set()
> + */
> +#define RTE_EVENT_QUEUE_WEIGHT_LOWES

Re: [PATCH v2 3/6] doc: announce change in event queue conf structure

2022-05-09 Thread Jerin Jacob
On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton  wrote:
>
> Structure rte_event_queue_conf will be extended to include fields to
> support weight and affinity attribute. Once it gets added in DPDK 22.11,
> eventdev internal op, queue_attr_get can be removed.
>
> Signed-off-by: Shijith Thotton 

Please remove the deprecation notice patch from this series and send
it as a separate patch.

> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst 
> b/doc/guides/rel_notes/deprecation.rst
> index 4e5b23c53d..04125db681 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -125,3 +125,6 @@ Deprecation Notices
>applications should be updated to use the ``dmadev`` library instead,
>with the underlying HW-functionality being provided by the ``ioat`` or
>``idxd`` dma drivers
> +
> +* eventdev: New fields to represent event queue weight and affinity will be
> +  added to ``rte_event_queue_conf`` structure in DPDK 22.11.
> --
> 2.25.1
>


Re: [PATCH v2 4/6] test/event: test cases to test runtime queue attribute

2022-05-09 Thread Jerin Jacob
On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton  wrote:
>
> Added test cases to test changing of queue QoS attributes priority,
> weight and affinity at runtime.
>
> Signed-off-by: Shijith Thotton 
> ---
>  app/test/test_eventdev.c | 149 +++
>  1 file changed, 149 insertions(+)
>
> diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
> index 4f51042bda..1af93d3b77 100644
> --- a/app/test/test_eventdev.c
> +++ b/app/test/test_eventdev.c
> @@ -385,6 +385,149 @@ test_eventdev_queue_attr_priority(void)
> return TEST_SUCCESS;
>  }
>
> +static int
> +test_eventdev_queue_attr_priority_runtime(void)
> +{
> +   struct rte_event_queue_conf qconf;
> +   struct rte_event_dev_info info;
> +   uint32_t queue_count;
> +   int i, ret;
> +
> +   ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
> +   TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
> +
> +   if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
> +   return TEST_SKIPPED;
> +
> +   set_val = i % RTE_EVENT_DEV_PRIORITY_LOWEST;
> +   TEST_ASSERT_SUCCESS(
> +   rte_event_queue_attr_set(TEST_DEV_ID, i,
> +
> RTE_EVENT_QUEUE_ATTR_PRIORITY,
> +set_val),
> +   "Queue priority set failed");

If the return code is -ENOTSUP, please mark the test as TEST_SKIPPED

> +   TEST_ASSERT_SUCCESS(
> +   rte_event_queue_attr_get(TEST_DEV_ID, i,
> +
> RTE_EVENT_QUEUE_ATTR_PRIORITY,
> +&get_val),
> +   "Queue priority get failed");
> +   TEST_ASSERT_EQUAL(get_val, set_val,
> + "Wrong priority value for queue%d", i);
> +   }
> +
> +   return TEST_SUCCESS;
> +}
> +
> +static int
> +test_eventdev_queue_attr_weight_runtime(void)
> +{
> +   struct rte_event_queue_conf qconf;
> +   struct rte_event_dev_info info;
> +   uint32_t queue_count;
> +   int i, ret;
> +
> +   ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
> +   TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
> +
> +   if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR))
> +   return TEST_SKIPPED;
> +
> +   TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(
> +   TEST_DEV_ID, 
> RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
> +   &queue_count),
> +   "Queue count get failed");
> +
> +   for (i = 0; i < (int)queue_count; i++) {
> +   ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i, 
> &qconf);
> +   TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
> +   ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
> +   TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
> +   }
> +
> +   for (i = 0; i < (int)queue_count; i++) {
> +   uint32_t get_val;
> +   uint64_t set_val;
> +
> +   set_val = i % RTE_EVENT_QUEUE_WEIGHT_HIGHEST;
> +   TEST_ASSERT_SUCCESS(
> +   rte_event_queue_attr_set(TEST_DEV_ID, i,
> +RTE_EVENT_QUEUE_ATTR_WEIGHT,
> +set_val),
> +   "Queue weight set failed");

If the return code is -ENOTSUP, please mark the test as TEST_SKIPPED


> +   TEST_ASSERT_SUCCESS(rte_event_queue_attr_get(
> +   TEST_DEV_ID, i,
> +   RTE_EVENT_QUEUE_ATTR_WEIGHT, 
> &get_val),
> +   "Queue weight get failed");
> +   TEST_ASSERT_EQUAL(get_val, set_val,
> + "Wrong weight value for queue%d", i);
> +   }
> +
> +   return TEST_SUCCESS;
> +}
> +
> +static int
> +test_eventdev_queue_attr_affinity_runtime(void)
> +{

Please use rte_event_dequeue_burst() and related APIs to exercise the full
functionality and validate the feature for both the priority and affinity
test cases.


Re: [PATCH v2 5/6] event/cnxk: support to set runtime queue attributes

2022-05-09 Thread Jerin Jacob
On Tue, Apr 5, 2022 at 11:12 AM Shijith Thotton  wrote:
>
> Added API to set queue attributes at runtime and API to get weight and
> affinity.
>
> Signed-off-by: Shijith Thotton 
> ---
>  doc/guides/eventdevs/features/cnxk.ini |  1 +
>  drivers/event/cnxk/cn10k_eventdev.c|  4 ++
>  drivers/event/cnxk/cn9k_eventdev.c |  4 ++
>  drivers/event/cnxk/cnxk_eventdev.c | 91 --
>  drivers/event/cnxk/cnxk_eventdev.h | 16 +
>  5 files changed, 110 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/eventdevs/features/cnxk.ini 
> b/doc/guides/eventdevs/features/cnxk.ini
> index 7633c6e3a2..bee69bf8f4 100644
> --- a/doc/guides/eventdevs/features/cnxk.ini
> +++ b/doc/guides/eventdevs/features/cnxk.ini
> @@ -12,6 +12,7 @@ runtime_port_link  = Y
>  multiple_queue_port= Y
>  carry_flow_id  = Y
>  maintenance_free   = Y
> +runtime_queue_attr = y
> +
> .port_def_conf = cnxk_sso_port_def_conf,
> .port_setup = cn9k_sso_port_setup,
> .port_release = cn9k_sso_port_release,
> diff --git a/drivers/event/cnxk/cnxk_eventdev.c 
> b/drivers/event/cnxk/cnxk_eventdev.c
> index be021d86c9..e07cb589f2 100644
> --- a/drivers/event/cnxk/cnxk_eventdev.c
> +++ b/drivers/event/cnxk/cnxk_eventdev.c
> @@ -120,7 +120,8 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
>   RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>   RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>   RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
> - RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
> + RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
> + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;

Please swap 6/6 and 5/6 so as to avoid a runtime failure at this point.


Re: [PATCH 00/11] Introduce support for RISC-V architecture

2022-05-09 Thread David Marchand
On Mon, May 9, 2022 at 2:24 PM Stanisław Kardach  wrote:
>> Testing all riscv configs in test-meson-buils.sh seems too much to me.
>> Is there a real value to test both current targets?
>
> It's for sanity and compilation coverage testing. I.e. SiFive variant has a 
> specific build config which does not require extra barriers when reading time 
> and cycle registers for rte_rdtsc_precise(). I want to make sure that if 
> anyone changes some code based on configuration flags, it gets at least 
> compile-checked.
> I believe similar thing is done for Aarch64 builds.

In my experience, building all those aarch64 combinations has never
revealed any platform-specific compilation issue.
It only consumes CPU, disk and our (the maintainers') time.
I proposed to Thomas to shrink the aarch64 build list not so long ago :-).

The best would be for SiFive to provide a system for the CI to do
those checks on their variant.


>> About the new "Sponsored-by" tag, it should not raise warnings in the
>> CI if we agree on its addition.
>
> I'll modify it in V2 to be in form of:
>   Sponsored by: StarFive Technology
>   ...
>   Signed-off-by: ...
> This was suggested by Stephen Hemminger as having a precedent in Linux 
> kernel. Interestingly enough first use of this tag in kernel source was this 
> year in January.

I don't have an opinion on the spelling.

At the moment, the checks raise a warning:
http://mails.dpdk.org/archives/test-report/2022-May/278580.html

My point is that for this new tag, either checkpatch.pl in the kernel
handles it (which I don't think is the case), or we need to disable the
signature check in checkpatch.pl and add something to DPDK's checkpatches.sh
to accept all known tags.


>> In general, please avoid letting arch specific headers leak
>> internal/non rte_ prefixed helpers out of them.
>> For example, I noticed a RV64_CSRR macro that can be undefined after usage.
>
> Thanks for noticing. I'l fix this one in V2.
> There are 2 other symbols that leak but on purpose (out of a better idea): 
> vect_load_128() and vect_and(). Both are used in l3fwd_em to simulate vector 
> operations. Other platforms reference their intrinsics straight in the 
> l3fwd_em.c. As I don't have support for vector ops and I wanted to indicate 
> that xmm_t should be an isolated API, I've put both in rte_vect.h. That said 
> I'm not happy with this solution and am open to suggestions on how to solve 
> it neatly.

I'll try to have a look in the next revision.


>>
>>
>> Patch 3 is huge, not sure it is easy to split, did you consider doing so?
>
> It seems to me the nature of a new EAL implementation, I have to include all 
> symbols, otherwise DPDK won't compile.
> Alternatively I could have a huge initial patch with empty stubs that would 
> be filled in later commits. Downside of this approach is that it's hard to 
> verify each commit separately as tests will fail until all implementation is 
> there, so the division is only visual.

If you are sure there is nothing that can be separated, let's keep it whole.



-- 
David Marchand



[Bug 1007] ixgbe-bound x553 sibling interface broken after app crash

2022-05-09 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1007

Bug ID: 1007
   Summary: ixgbe-bound x553 sibling interface broken after app
crash
   Product: DPDK
   Version: 20.11
  Hardware: x86
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: ethdev
  Assignee: dev@dpdk.org
  Reporter: dmontgom...@juniper.net
  Target Milestone: ---

We have two interfaces on one chipset, one bound to igb_uio for use in a DPDK
application, the sibling still owned by Linux. After an application crash, the
device we were *not* using stops receiving packets.

dpdk-devbind.py snippet:

[root@sn9120210013 ~]# dpdk-devbind.py --status

Network devices using DPDK-compatible driver

:02:00.1 'Ethernet Connection X553 1GbE 15e4' drv=igb_uio
unused=ixgbe,vfio-pci

Network devices using kernel driver
===
:02:00.0 'Ethernet Connection X553 1GbE 15e4' if=enp2s0f0 drv=ixgbe
unused=igb_uio,vfio-pci

Using ethtool on enp2s0f0, we see these counters:

 rx_packets: 1389712
 rx_bytes: 478998162
 rx_pkts_nic: 1398666
 rx_bytes_nic: 485323772

rx_packets and rx_bytes halt, but rx_pkts_nic and rx_bytes_nic continue
incrementing.

Is there any chance there is bleedover to sibling devices inside the PMD code?

-- 
You are receiving this mail because:
You are the assignee for the bug.

Re: [dpdk-dev] [PATCH] doc: fix build error with sphinx 4.5.0

2022-05-09 Thread Jerin Jacob
On Mon, May 2, 2022 at 12:30 PM Jayatheerthan, Jay
 wrote:
>
> Looks good, thanks!
>
> Acked-by: Jay Jayatheerthan 

Applied to dpdk-next-eventdev/for-main. Thanks.

>
> -Jay
>
>
>
> > -Original Message-
> > From: jer...@marvell.com 
> > Sent: Sunday, May 1, 2022 8:27 PM
> > To: dev@dpdk.org; Jayatheerthan, Jay ; Jerin 
> > Jacob ; Ray Kinsella
> > ; Pavan Nikhilesh 
> > Cc: tho...@monjalon.net; david.march...@redhat.com; sta...@dpdk.org
> > Subject: [dpdk-dev] [PATCH] doc: fix build error with sphinx 4.5.0
> >
> > From: Jerin Jacob 
> >
> > Latest sphinx checks c language syntax more aggressively.
> > Fix the following warning by correcting c language syntax.
> >
> > doc/guides/prog_guide/event_ethernet_rx_adapter.rst:243:
> > WARNING: Could not lex literal_block as "c". Highlighting skipped.
> >
> > Fixes: 3c838062b91f ("eventdev: introduce event vector Rx capability")
> > Cc: Pavan Nikhilesh 
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Jerin Jacob 
> > ---
> >  doc/guides/prog_guide/event_ethernet_rx_adapter.rst | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst 
> > b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> > index 67b11e1563..3b4ef502b2 100644
> > --- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> > +++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> > @@ -257,8 +257,8 @@ A loop processing ``rte_event_vector`` containing mbufs 
> > is shown below.
> >  /* Process each mbuf. */
> >  }
> >  break;
> > -case ...
> > -...
> > +case default:
> > +/* Handle other event_types. */
> >  }
> >
> >  Rx event vectorization for SW Rx adapter
> > --
> > 2.36.0
>


Re: [PATCH 1/3] eventdev: add function to quiesce an event port

2022-05-09 Thread Jerin Jacob
On Wed, Apr 27, 2022 at 5:02 PM Pavan Nikhilesh
 wrote:
>
> Add function to quiesce any core specific resources consumed by
> the event port.
>
> When the application decides to migrate the event port to another lcore
> or teardown the current lcore it may to call `rte_event_port_quiesce`
> to make sure that all the data associated with the event port are released
> from the lcore, this might also include any prefetched events.
>
> While releasing the event port from the lcore, this function calls the
> user-provided flush callback once per event.
>
> Signed-off-by: Pavan Nikhilesh 
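
A minimal usage sketch, assuming the API exactly as proposed in this patch (the
flush callback signature comes from the new eventdev_port_flush_t typedef
below; names may still change in later revisions):

#include <stdint.h>
#include <rte_common.h>
#include <rte_eventdev.h>

/* Count (or otherwise dispose of) events still held by the port. */
static void
flush_cb_sketch(uint8_t dev_id, struct rte_event ev, void *arg)
{
    uint64_t *drained = arg;

    RTE_SET_USED(dev_id);
    RTE_SET_USED(ev);       /* e.g. re-enqueue or free ev.mbuf here */
    (*drained)++;
}

static void
migrate_port_off_lcore_sketch(uint8_t dev_id, uint8_t port_id)
{
    uint64_t drained = 0;

    rte_event_port_quiesce(dev_id, port_id, flush_cb_sketch, &drained);
    /* After this call no core-local state for the port should remain. */
}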

+ eventdev stakeholders

@jay.jayatheert...@intel.com @erik.g.carri...@intel.com
@abhinandan.guj...@intel.com timothy.mcdan...@intel.com
sthot...@marvell.com hemant.agra...@nxp.com nipun.gu...@nxp.com
harry.van.haa...@intel.com mattias.ronnb...@ericsson.com
lian...@liangbit.com peter.mccar...@intel.com

Since it is in the slow path and allows port teardown on migration for
implementations where the core has some state for the port, the new API
addition looks good to me.

Any objection or alternative thought from the eventdev stakeholders?

Some comments below.



> ---
>  lib/eventdev/eventdev_pmd.h | 19 +++
>  lib/eventdev/rte_eventdev.c | 19 +++
>  lib/eventdev/rte_eventdev.h | 33 +
>  lib/eventdev/version.map|  3 +++
>  4 files changed, 74 insertions(+)
>
> diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
> index ce469d47a6..cf9f2146a1 100644
> --- a/lib/eventdev/eventdev_pmd.h
> +++ b/lib/eventdev/eventdev_pmd.h
> @@ -381,6 +381,23 @@ typedef int (*eventdev_port_setup_t)(struct rte_eventdev 
> *dev,
>   */
>  typedef void (*eventdev_port_release_t)(void *port);
>
> +/**
> + * Quiesce any core specific resources consumed by the event port
> + *
> + * @param dev
> + *   Event device pointer.
> + * @param port
> + *   Event port pointer.
> + * @param flush_cb
> + *   User-provided event flush function.
> + * @param args
> + *   Arguments to be passed to the user-provided event flush function.
> + *
> + */
> +typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,

Please prefix public symbols with rte_, i.e. rte_event_port_quiesce_t.

I know we missed this for the existing eventdev_stop_flush_t, which we can fix
in the next ABI. I will send a patch for the same.


> +   eventdev_port_flush_t flush_cb,
> +   void *args);
> +
>  /**
>   * Link multiple source event queues to destination event port.
>   *
> @@ -1218,6 +1235,8 @@ struct eventdev_ops {
> /**< Set up an event port. */
> eventdev_port_release_t port_release;
> /**< Release an event port. */
> +   eventdev_port_quiesce_t port_quiesce;
> +   /**< Quiesce an event port. */
>
> eventdev_port_link_t port_link;
> /**< Link event queues to an event port. */
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 532a253553..541fa5dc61 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -730,6 +730,25 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> return 0;
>  }
>
> +void
> +rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
> +  eventdev_port_flush_t release_cb, void *args)
> +{
> +   struct rte_eventdev *dev;
> +
> +   RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
> +   dev = &rte_eventdevs[dev_id];
> +
> +   if (!is_valid_port(dev, port_id)) {
> +   RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> +   return;
> +   }
> +
> +   if (dev->dev_ops->port_quiesce)
> +   (*dev->dev_ops->port_quiesce)(dev, dev->data->ports[port_id],
> + release_cb, args);
> +}
> +
>  int
>  rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
>uint32_t *attr_value)
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index 42a5660169..c86d8a5576 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -830,6 +830,39 @@ int
>  rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  const struct rte_event_port_conf *port_conf);
>
> +typedef void (*eventdev_port_flush_t)(uint8_t dev_id, struct rte_event event,
> + void *arg);
> +/**< Callback function prototype that can be passed during
> + * rte_event_port_release(), invoked once per a released event.
> + */
> +
> +/**
> + * Quiesce any core specific resources consumed by the event port.
> + *
> + * Event ports are generally coupled with lcores, and a given Hardware
> + * implementation might require the PMD to store port specific data in the
> + * lcore.
> + * When the application decides to migrate the event port to an other lcore

an other -> another

> + * or teardown the current lcore it may to call

CVE-2021-3839 Release Notice

2022-05-09 Thread Jiang, Cheng1
A vulnerability was fixed in DPDK.
Some downstream stakeholders were warned in advance
in order to coordinate the release of fixes
and reduce the vulnerability window.

In DPDK vhost communication, the function 'vhost_user_set_inflight_fd()' did
not check whether msg->payload.inflight.num_queues is out of bounds, which
could cause the program to write out of bounds (OOB).
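
As a rough illustration of the class of fix (the limit macro below is a
placeholder, not the actual constant used by commit 6442c329b9d2), the
guest-supplied count has to be validated before it is used to index per-queue
inflight state:

#include <stdbool.h>
#include <stdint.h>

#define MAX_INFLIGHT_QUEUES_SKETCH 128  /* placeholder bound */

static bool
inflight_num_queues_ok_sketch(uint16_t num_queues)
{
    /* Reject zero and anything above the supported vring count. */
    return num_queues != 0 && num_queues <= MAX_INFLIGHT_QUEUES_SKETCH;
}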

Commits: 6442c329b9d2 on the main branch

CVE: CVE-2021-3839
Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=657
Severity: 5.2 (Medium)
CVSS scores: 3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:N/I:L/A:L



CVE-2022-0669 Release Notice

2022-05-09 Thread Jiang, Cheng1
A vulnerability was fixed in DPDK.
Some downstream stakeholders were warned in advance
in order to coordinate the release of fixes
and reduce the vulnerability window.

It's an issue in the handling of vhost-user inflight type messages. A
malicious vhost-user master can attach an unexpected number of fds, as
ancillary data, to VHOST_USER_GET_INFLIGHT_FD / VHOST_USER_SET_INFLIGHT_FD
messages; these fds are not closed by the vhost-user slave. By sending such
messages continuously, the vhost-user master could exhaust the available fds
in the vhost-user slave process and lead to a DoS.
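
Conceptually, the fix boils down to closing any ancillary fds a message handler
does not actually consume; a generic sketch only, not the actual vhost code
paths:

#include <unistd.h>

static void
close_unexpected_fds_sketch(int *fds, int fd_num, int expected)
{
    int i;

    /* Close every fd beyond the number the message legitimately needs,
     * so a malicious master cannot exhaust the slave's fd table. */
    for (i = expected; i < fd_num; i++) {
        if (fds[i] >= 0) {
            close(fds[i]);
            fds[i] = -1;
        }
    }
}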

Commits: af74f7db384e on the main branch

CVE: CVE-2022-0669
Bugzilla: https://bugs.dpdk.org/show_bug.cgi?id=922
Severity: 6.5 (Medium)
CVSS scores: 3.0/AV:L/AC:L/PR:L/UI:N/S:C/C:N/I:N/A:H



[PATCH] Add support for NVIDIA ARM implementer ID

2022-05-09 Thread Cliff Burdick
build: added NVIDIA ARM implementer ID

NVIDIA ARM CPUs (Xavier, Grace) use implementer ID 0x4e.
This patch adds initial support for the Xavier chip rather than
compiling using the generic platform.

Signed-off-by: Cliff Burdick cburd...@nvidia.com
---
config/arm/meson.build | 18 ++
1 file changed, 18 insertions(+)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 8aead74086..91ccbfce2c 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -197,6 +197,23 @@ implementer_hisilicon = {
 }
}

+implementer_nvidia = {
+'description': 'NVIDIA',
+'flags': [
+['RTE_MACHINE', '"armv8a"'],
+['RTE_USE_C11_MEM_MODEL', true],
+['RTE_MAX_LCORE', 256],
+['RTE_MAX_NUMA_NODES', 4]
+],
+'part_number_config': {
+'0x4': {
+'march': 'armv8-a',
+'march_features': ['crc'],
+'compiler_options': ['-moutline-atomics']
+}
+}
+}
+
implementer_qualcomm = {
 'description': 'Qualcomm',
 'flags': [
@@ -224,6 +241,7 @@ implementers = {
 '0x41': implementer_arm,
 '0x43': implementer_cavium,
 '0x48': implementer_hisilicon,
+'0x4e': implementer_nvidia,
 '0x50': implementer_ampere,
 '0x51': implementer_qualcomm
}
--
2.17.1



Re: [PATCH v3 0/7] vdpa/mlx5: improve device shutdown time

2022-05-09 Thread Maxime Coquelin




On 5/8/22 16:25, Xueming Li wrote:

v1:
  - rebase with latest upstream code
  - fix coverity issues
v2:
  - fix build issue on OS w/o flow DR API
v3:
  - commit message update, thanks Maxime!


Xueming Li (7):
   vdpa/mlx5: fix interrupt trash that leads to segment fault
   vdpa/mlx5: fix dead loop when process interrupted
   vdpa/mlx5: no kick handling during shutdown
   vdpa/mlx5: reuse resources in reconfiguration
   vdpa/mlx5: cache and reuse hardware resources
   vdpa/mlx5: support device cleanup callback
   vdpa/mlx5: make statistics counter persistent

  doc/guides/vdpadevs/mlx5.rst|   6 +
  drivers/vdpa/mlx5/mlx5_vdpa.c   | 231 +---
  drivers/vdpa/mlx5/mlx5_vdpa.h   |  31 +++-
  drivers/vdpa/mlx5/mlx5_vdpa_event.c |  23 +--
  drivers/vdpa/mlx5/mlx5_vdpa_mem.c   |  38 +++--
  drivers/vdpa/mlx5/mlx5_vdpa_steer.c |  30 +---
  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 189 +++
  7 files changed, 336 insertions(+), 212 deletions(-)




Applied to dpdk-next-virtio/main.

Thanks,
Maxime



DPDK Release Status Meeting 2022-05-05

2022-05-09 Thread Mcnamara, John
Release status meeting minutes 2022-05-05
=

Agenda:
* Release Dates
* Subtrees
* Roadmaps
* LTS
* Defects
* Opens

Participants:
* ARM
* Intel
* Debian/Microsoft
* Marvell
* Nvidia
* Red Hat
* Xilinx/AMD


Release Dates
-

The following are the proposed current dates for 22.07:

* Proposal deadline (RFC/v1 patches): 10 April 2022
* API freeze (-rc1): 30 May 2022
* PMD features freeze (-rc2): 20 June 2022
* Built-in applications features freeze (-rc3): 27 June 2022
* Release: 13 July 2022

Updated dates are posted shortly after they are changed at:
http://core.dpdk.org/roadmap/#dates

Here are some provisional dates for 22.11:

* Proposal deadline (RFC/v1 patches): 14 August 2022
* API freeze (-rc1): 3 October 2022
* PMD features freeze (-rc2): 24 October 2022
* Built-in applications features freeze (-rc3): 31 October 2022
* Release: 16 November 2022

Subtrees


* next-net
  * Moving forward with merging patches

* next-net-intel
  * 18 patches ready for merge

* next-net-mlx
  * Some patches pushed to next-net

* next-net-brcm
  * No update

* next-net-mrvl
  * 20 patches merged
  * 14 patches in queue

* next-eventdev
  * 20+ patches merged under review or pending merge

* next-virtio
  * Fixing checksum bugs in virtio
  * New version of Async vhost patches - getting closer to merge

* next-crypto
  * 30 patches merged and 20 ready to be merged
  * New PMD from Atomic Rules

* main
  * Looking at Thread patch series for Windows
  * Progressing - needs reviews
  * Enabling of ASAN in the CI
  * Some blocking issues
   * Fix sent by Anatoly
  * Lock annotations series:
* https://patchwork.dpdk.org/project/dpdk/list/?series=22292
* Needs docs and remerge after Maxime's changes
  * Cleanup series for memory leaks and NULL pointers need reviews:
* https://patchwork.dpdk.org/bundle/dmarchand/need_reviews/
  * Sample apps being updated
  * Header split patchset needs review
* 
http://patchwork.dpdk.org/project/dpdk/patch/20220402104109.472078-2-wenxuanx...@intel.com/
  * Docs fix for ETH and VLAN flow items in Support Matrix from Ilya needs 
review/merge
* 
https://patchwork.dpdk.org/project/dpdk/patch/20220316120157.390311-1-i.maxim...@ovn.org/
  * Public meeting on: ethdev: datapath-focused meter actions
* http://mails.dpdk.org/archives/dev/2022-April/239961.html




LTS
---

* 21.11.1
  * Released April 26th

* 20.11.5
  * Released April 4th

* 19.11.12
  * Released April 7th


* Distros
  * v20.11 in Debian 11
  * 22.04 will contain 21.11

Defects
---

* Bugzilla links, 'Bugs',  added for hosted projects
  * https://www.dpdk.org/hosted-projects/


Opens
-

* None


DPDK Release Status Meetings


The DPDK Release Status Meeting is intended for DPDK Committers to discuss the 
status of the master tree and sub-trees, and for project managers to track 
progress or milestone dates.

The meeting occurs every Thursday at 9:30 UTC on https://meet.jit.si/DPDK

If you wish to attend just send an email to "John McNamara 
john.mcnam...@intel.com" for the invite.


RE: [PATCH v3] sched: enable/disable TC OV at runtime

2022-05-09 Thread Dumitrescu, Cristian
Hi Marcin,

> -Original Message-
> From: Danilewicz, MarcinX 
> Sent: Wednesday, April 27, 2022 10:24 AM
> To: dev@dpdk.org; Singh, Jasvinder ; Dumitrescu,
> Cristian 
> Cc: Ajmera, Megha 
> Subject: [PATCH v3] sched: enable/disable TC OV at runtime

We are not trying to enable/disable the traffic class oversubscription feature 
at run-time, but at initialization. In fact, we should prohibit changing this 
post-initialization.

Also the name of the feature should not be abbreviated in the patch title.

I suggest you rework the title to:
[PATCH] sched: enable traffic class oversubscription conditionally

> 
> Added new API to enable or disable TC over subscription for best
> effort traffic class at subport level.
> Added changes after review and increased throughput.
> 
> By default TC OV is disabled.

It should be the other way around: TC_OV should be enabled by default. TC 
oversubscription is the more natural way to use this library; we usually want 
to disable this feature only to get better performance when the functionality 
is not needed. Please initialize the tc_ov flag accordingly.

> 
> Signed-off-by: Marcin Danilewicz 
> ---
>  lib/sched/rte_sched.c | 189 +++---
>  lib/sched/rte_sched.h |  18 
>  lib/sched/version.map |   3 +
>  3 files changed, 178 insertions(+), 32 deletions(-)
> 
> diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
> index ec74bee939..6e7d81df46 100644
> --- a/lib/sched/rte_sched.c
> +++ b/lib/sched/rte_sched.c
> @@ -213,6 +213,9 @@ struct rte_sched_subport {
>   uint8_t *bmp_array;
>   struct rte_mbuf **queue_array;
>   uint8_t memory[0] __rte_cache_aligned;
> +
> + /* TC oversubscription activation */
> + int is_tc_ov_enabled;

How about we simplify the name of this variable to: tc_ov_enabled ?

>  } __rte_cache_aligned;
> 
>  struct rte_sched_port {
> @@ -1165,6 +1168,45 @@ rte_sched_cman_config(struct rte_sched_port
> *port,
>  }
>  #endif
> 
> +int
> +rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
> + uint32_t subport_id,
> + bool tc_ov_enable)
> +{
> + struct rte_sched_subport *s;
> + struct rte_sched_subport_profile *profile;
> +
> + if (port == NULL) {
> + RTE_LOG(ERR, SCHED,
> + "%s: Incorrect value for parameter port\n", __func__);
> + return -EINVAL;
> + }
> +
> + if (subport_id >= port->n_subports_per_port) {
> + RTE_LOG(ERR, SCHED,
> + "%s: Incorrect value for parameter subport id\n",
> __func__);
> + return  -EINVAL;
> + }
> +
> + s = port->subports[subport_id];
> + s->is_tc_ov_enabled = tc_ov_enable ? 1 : 0;
> +
> + if (s->is_tc_ov_enabled) {
> + /* TC oversubscription */
> + s->tc_ov_wm_min = port->mtu;
> + s->tc_ov_period_id = 0;
> + s->tc_ov = 0;
> + s->tc_ov_n = 0;
> + s->tc_ov_rate = 0;
> +
> + profile = port->subport_profiles + s->profile;
> + s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(profile-
> >tc_period,
> + s->pipe_tc_be_rate_max);
> + s->tc_ov_wm = s->tc_ov_wm_max;
> + }
> + return 0;
> +}

This function should not exist, please remove it and keep the initial code that 
computes the tc_ov related variable regardless of whether tc_ov is enabled or 
not.

All the tc_ov related variables have the tc_ov prefix in their name, so there 
is no clash. This is initialization code, so there is no performance overhead. 
Let's keep the code unmodified and compute both the tc_ov and the non-tc_ov 
variables at initialization, regardless of whether the feature is enabled or not.

This comment is applicable to all the initialization code, please adjust all 
the init code accordingly. There should be no diff showing in the patch for any 
of the init code!

For this file "rte_sched.c", your patch should contain just two additional 
run-time functions, i.e. the non-tc_ov versions of the functions 
grinder_credits_update() and grinder_credits_check(), and the small code 
required to test when to use the tc_ov vs. the non-tc_ov version, makes sense?
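
In other words, something along these lines (a self-contained sketch; all
names are placeholders for the real rte_sched.c internals):

#include <stdbool.h>

struct subport_sketch {
    bool tc_ov_enabled;     /* set once at subport configuration time */
};

static void credits_update_tc_ov_sketch(struct subport_sketch *s) { (void)s; }
static void credits_update_plain_sketch(struct subport_sketch *s) { (void)s; }

static void
grinder_credits_update_sketch(struct subport_sketch *s)
{
    if (s->tc_ov_enabled)
        credits_update_tc_ov_sketch(s); /* existing oversubscription math */
    else
        credits_update_plain_sketch(s); /* new, cheaper non-TC-OV path */
}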

> +
>  int
>  is_tc_ov_enabled (struct rte_sched_port *port,
>   uint32_t subport_id,
> @@ -1254,6 +1296,9 @@ rte_sched_subport_config(struct rte_sched_port
> *port,
>   s->n_pipe_profiles = params->n_pipe_profiles;
>   s->n_max_pipe_profiles = params->n_max_pipe_profiles;
> 
> + /* TC over-subscription is disabled by default */
> + s->is_tc_ov_enabled = 0;
> +

By default, this feature should be enabled:
s->is_tc_ov_enabled = 1;

>  #ifdef RTE_SCHED_CMAN
>   if (params->cman_params != NULL) {
>   s->cman_enabled = true;
> @@ -1316,13 +1361,6 @@ rte_sched_subport_config(struct rte_sched_port
> *port,
> 
>   for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i++)
>

[PATCH v1 0/5] baseband/fpga_5gnr: maintenance changes to fpga_5gnr PMD

2022-05-09 Thread Hernan
A few PMD changes as part of maintenance of the driver. These are not required 
on the stable variants. Aiming to upstream these in 22.07.

Hernan (5):
  baseband/fpga_5gnr_fec: remove FLR timeout
  baseband/fpga_5gnr_fec: add FPGA Mutex
  baseband/fpga_5gnr_fec: add check for HARQ input length
  baseband/fpga_5gnr_fec: enable validate LDPC enc/dec
  baseband/fpga_5gnr_fec: remove filler from HARQ

 app/test-bbdev/test_bbdev_perf.c  |   4 -
 .../baseband/fpga_5gnr_fec/fpga_5gnr_fec.h|   9 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 581 ++
 .../fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h |   2 -
 4 files changed, 451 insertions(+), 145 deletions(-)

-- 
2.25.1



[PATCH v1 1/5] baseband/fpga_5gnr_fec: remove FLR timeout

2022-05-09 Thread Hernan
FLR timeout register is not used in 5GNR FPGA.

Signed-off-by: Hernan 
---
 app/test-bbdev/test_bbdev_perf.c   | 4 
 drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h | 2 --
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 9 -
 drivers/baseband/fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h | 2 --
 4 files changed, 17 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 0fa119a502..fad3b1e49d 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -50,7 +50,6 @@
 #define DL_5G_BANDWIDTH 3
 #define UL_5G_LOAD_BALANCE 128
 #define DL_5G_LOAD_BALANCE 128
-#define FLR_5G_TIMEOUT 610
 #endif
 
 #ifdef RTE_BASEBAND_ACC100
@@ -699,9 +698,6 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
conf.ul_load_balance = UL_5G_LOAD_BALANCE;
conf.dl_load_balance = DL_5G_LOAD_BALANCE;
 
-   /**< FLR timeout value */
-   conf.flr_time_out = FLR_5G_TIMEOUT;
-
/* setup FPGA PF with configuration information */
ret = rte_fpga_5gnr_fec_configure(info->dev_name, &conf);
TEST_ASSERT_SUCCESS(ret,
diff --git a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h 
b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
index e72c95e936..ed8ce26eaa 100644
--- a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
+++ b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
@@ -36,7 +36,6 @@
 #define FPGA_RING_DESC_LEN_UNIT_BYTES (32)
 /* Maximum size of queue */
 #define FPGA_RING_MAX_SIZE (1024)
-#define FPGA_FLR_TIMEOUT_UNIT (16.384)
 
 #define FPGA_NUM_UL_QUEUES (32)
 #define FPGA_NUM_DL_QUEUES (32)
@@ -70,7 +69,6 @@ enum {
FPGA_5GNR_FEC_QUEUE_PF_VF_MAP_DONE = 0x0008, /* len: 1B */
FPGA_5GNR_FEC_LOAD_BALANCE_FACTOR = 0x000a, /* len: 2B */
FPGA_5GNR_FEC_RING_DESC_LEN = 0x000c, /* len: 2B */
-   FPGA_5GNR_FEC_FLR_TIME_OUT = 0x000e, /* len: 2B */
FPGA_5GNR_FEC_VFQ_FLUSH_STATUS_LW = 0x0018, /* len: 4B */
FPGA_5GNR_FEC_VFQ_FLUSH_STATUS_HI = 0x001c, /* len: 4B */
FPGA_5GNR_FEC_QUEUE_MAP = 0x0040, /* len: 256B */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 15d23d6269..6737b74901 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -83,8 +83,6 @@ print_static_reg_debug_info(void *mmio_base)
FPGA_5GNR_FEC_LOAD_BALANCE_FACTOR);
uint16_t ring_desc_len = fpga_reg_read_16(mmio_base,
FPGA_5GNR_FEC_RING_DESC_LEN);
-   uint16_t flr_time_out = fpga_reg_read_16(mmio_base,
-   FPGA_5GNR_FEC_FLR_TIME_OUT);
 
rte_bbdev_log_debug("UL.DL Weights = %u.%u",
((uint8_t)config), ((uint8_t)(config >> 8)));
@@ -94,8 +92,6 @@ print_static_reg_debug_info(void *mmio_base)
(qmap_done > 0) ? "READY" : "NOT-READY");
rte_bbdev_log_debug("Ring Descriptor Size = %u bytes",
ring_desc_len*FPGA_RING_DESC_LEN_UNIT_BYTES);
-   rte_bbdev_log_debug("FLR Timeout = %f usec",
-   (float)flr_time_out*FPGA_FLR_TIMEOUT_UNIT);
 }
 
 /* Print decode DMA Descriptor of FPGA 5GNR Decoder device */
@@ -2120,11 +2116,6 @@ rte_fpga_5gnr_fec_configure(const char *dev_name,
address = FPGA_5GNR_FEC_RING_DESC_LEN;
fpga_reg_write_16(d->mmio_base, address, payload_16);
 
-   /* Setting FLR timeout value */
-   payload_16 = conf->flr_time_out;
-   address = FPGA_5GNR_FEC_FLR_TIME_OUT;
-   fpga_reg_write_16(d->mmio_base, address, payload_16);
-
/* Queue PF/VF mapping table is ready */
payload_8 = 0x1;
address = FPGA_5GNR_FEC_QUEUE_PF_VF_MAP_DONE;
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h 
b/drivers/baseband/fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h
index c2752fbd52..93a87c8e82 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h
+++ b/drivers/baseband/fpga_5gnr_fec/rte_pmd_fpga_5gnr_fec.h
@@ -45,8 +45,6 @@ struct rte_fpga_5gnr_fec_conf {
uint8_t ul_load_balance;
/** DL Load Balance */
uint8_t dl_load_balance;
-   /** FLR timeout value */
-   uint16_t flr_time_out;
 };
 
 /**
-- 
2.25.1



[PATCH v1 2/5] baseband/fpga_5gnr_fec: add FPGA Mutex

2022-05-09 Thread Hernan
FPGA mutex acquisition and mutex free implemented.

Signed-off-by: Hernan 
---
 .../baseband/fpga_5gnr_fec/fpga_5gnr_fec.h|  6 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 77 ++-
 2 files changed, 62 insertions(+), 21 deletions(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h 
b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
index ed8ce26eaa..993cf61974 100644
--- a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
+++ b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
@@ -82,7 +82,9 @@ enum {
FPGA_5GNR_FEC_DDR4_RD_DATA_REGS = 0x0A30, /* len: 8B */
FPGA_5GNR_FEC_DDR4_ADDR_RDY_REGS = 0x0A38, /* len: 1B */
FPGA_5GNR_FEC_HARQ_BUF_SIZE_RDY_REGS = 0x0A40, /* len: 1B */
-   FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS = 0x0A48  /* len: 4B */
+   FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS = 0x0A48, /* len: 4B */
+   FPGA_5GNR_FEC_MUTEX = 0x0A60, /* len: 4B */
+   FPGA_5GNR_FEC_MUTEX_RESET = 0x0A68  /* len: 4B */
 };
 
 /* FPGA 5GNR FEC Ring Control Registers */
@@ -264,6 +266,8 @@ struct __rte_cache_aligned fpga_queue {
uint32_t sw_ring_wrap_mask;
uint32_t irq_enable;  /* Enable ops dequeue interrupts if set to 1 */
uint8_t q_idx;  /* Queue index */
+   /** uuid used for MUTEX acquision for DDR */
+   uint16_t ddr_mutex_uuid;
struct fpga_5gnr_fec_device *d;
/* MMIO register of shadow_tail used to enqueue descriptors */
void *shadow_tail_addr;
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6737b74901..435b4d90d8 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1194,11 +1194,45 @@ validate_dec_op(struct rte_bbdev_dec_op *op 
__rte_unused)
 }
 #endif
 
+static inline void
+fpga_mutex_acquisition(struct fpga_queue *q)
+{
+   uint32_t mutex_ctrl, mutex_read, cnt = 0;
+   /* Assign a unique id for the duration of the DDR access */
+   q->ddr_mutex_uuid = rand();
+   /* Request and wait for acquisition of the mutex */
+   mutex_ctrl = (q->ddr_mutex_uuid << 16) + 1;
+   do {
+   if (cnt > 0)
+   usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
+   rte_bbdev_log_debug("Acquiring Mutex for %x\n",
+   q->ddr_mutex_uuid);
+   fpga_reg_write_32(q->d->mmio_base,
+   FPGA_5GNR_FEC_MUTEX,
+   mutex_ctrl);
+   mutex_read = fpga_reg_read_32(q->d->mmio_base,
+   FPGA_5GNR_FEC_MUTEX);
+   rte_bbdev_log_debug("Mutex %x cnt %d owner %x\n",
+   mutex_read, cnt, q->ddr_mutex_uuid);
+   cnt++;
+   } while ((mutex_read >> 16) != q->ddr_mutex_uuid);
+}
+
+static inline void
+fpga_mutex_free(struct fpga_queue *q)
+{
+   uint32_t mutex_ctrl = q->ddr_mutex_uuid << 16;
+   fpga_reg_write_32(q->d->mmio_base,
+   FPGA_5GNR_FEC_MUTEX,
+   mutex_ctrl);
+}
+
 static inline int
-fpga_harq_write_loopback(struct fpga_5gnr_fec_device *fpga_dev,
+fpga_harq_write_loopback(struct fpga_queue *q,
struct rte_mbuf *harq_input, uint16_t harq_in_length,
uint32_t harq_in_offset, uint32_t harq_out_offset)
 {
+   fpga_mutex_acquisition(q);
uint32_t out_offset = harq_out_offset;
uint32_t in_offset = harq_in_offset;
uint32_t left_length = harq_in_length;
@@ -1215,7 +1249,7 @@ fpga_harq_write_loopback(struct fpga_5gnr_fec_device 
*fpga_dev,
 * Get HARQ buffer size for each VF/PF: When 0x00, there is no
 * available DDR space for the corresponding VF/PF.
 */
-   reg_32 = fpga_reg_read_32(fpga_dev->mmio_base,
+   reg_32 = fpga_reg_read_32(q->d->mmio_base,
FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
if (reg_32 < harq_in_length) {
left_length = reg_32;
@@ -1226,46 +1260,48 @@ fpga_harq_write_loopback(struct fpga_5gnr_fec_device 
*fpga_dev,
uint8_t *, in_offset);
 
while (left_length > 0) {
-   if (fpga_reg_read_8(fpga_dev->mmio_base,
+   if (fpga_reg_read_8(q->d->mmio_base,
FPGA_5GNR_FEC_DDR4_ADDR_RDY_REGS) ==  1) {
-   fpga_reg_write_32(fpga_dev->mmio_base,
+   fpga_reg_write_32(q->d->mmio_base,
FPGA_5GNR_FEC_DDR4_WR_ADDR_REGS,
out_offset);
-   fpga_reg_write_64(fpga_dev->mmio_base,
+   fpga_reg_write_64(q->d->mmio_base,
FPGA_5GNR_FEC_DDR4_WR_DATA_REGS,
input[increment]);
left_length -= FPGA_5GNR_FEC_DDR_WR_DATA_LEN_IN_

[PATCH v1 3/5] baseband/fpga_5gnr_fec: add check for HARQ input length

2022-05-09 Thread Hernan
Add new case DESC_ERR_HARQ_INPUT_LEN to check for valid HARQ input
length.

Signed-off-by: Hernan 
---
 drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h | 1 +
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h 
b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
index 993cf61974..e3038112fa 100644
--- a/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
+++ b/drivers/baseband/fpga_5gnr_fec/fpga_5gnr_fec.h
@@ -107,6 +107,7 @@ enum {
DESC_ERR_DESC_READ_FAIL = 0x8,
DESC_ERR_DESC_READ_TIMEOUT = 0x9,
DESC_ERR_DESC_READ_TLP_POISONED = 0xA,
+   DESC_ERR_HARQ_INPUT_LEN = 0xB,
DESC_ERR_CB_READ_FAIL = 0xC,
DESC_ERR_CB_READ_TIMEOUT = 0xD,
DESC_ERR_CB_READ_TLP_POISONED = 0xE,
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 435b4d90d8..2d4b58067d 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -848,6 +848,9 @@ check_desc_error(uint32_t error_code) {
case DESC_ERR_DESC_READ_TLP_POISONED:
rte_bbdev_log(ERR, "Descriptor read TLP poisoned");
break;
+   case DESC_ERR_HARQ_INPUT_LEN:
+   rte_bbdev_log(ERR, "HARQ input length is invalid");
+   break;
case DESC_ERR_CB_READ_FAIL:
rte_bbdev_log(ERR, "Unsuccessful completion for code block");
break;
-- 
2.25.1



[PATCH v1 5/5] baseband/fpga_5gnr_fec: remove filler from HARQ

2022-05-09 Thread Hernan
Removed dec->n_filler from harq_out_length calculation.

Signed-off-by: Hernan 
---
 drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 8fdb44c94a..22a548a336 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1844,7 +1844,7 @@ enqueue_ldpc_dec_one_op_cb(struct fpga_queue *q, struct 
rte_bbdev_dec_op *op,
else
l = k0 + e + dec->n_filler;
harq_out_length = RTE_MIN(RTE_MAX(harq_in_length, l),
-   dec->n_cb - dec->n_filler);
+   dec->n_cb);
dec->harq_combined_output.length = harq_out_length;
}
 
-- 
2.25.1



[PATCH v1 4/5] baseband/fpga_5gnr_fec: enable validate LDPC enc/dec

2022-05-09 Thread Hernan
Enable validate_ldpc_enc_op and validate_ldpc_dec_op

Signed-off-by: Hernan 
---
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 490 ++
 1 file changed, 384 insertions(+), 106 deletions(-)

diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c 
b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 2d4b58067d..8fdb44c94a 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1032,23 +1032,11 @@ fpga_dma_desc_ld_fill(struct rte_bbdev_dec_op *op,
return 0;
 }
 
-#ifdef RTE_LIBRTE_BBDEV_DEBUG
 /* Validates LDPC encoder parameters */
-static int
-validate_enc_op(struct rte_bbdev_enc_op *op __rte_unused)
+static inline int
+validate_ldpc_enc_op(struct rte_bbdev_enc_op *op)
 {
struct rte_bbdev_op_ldpc_enc *ldpc_enc = &op->ldpc_enc;
-   struct rte_bbdev_op_enc_ldpc_cb_params *cb = NULL;
-   struct rte_bbdev_op_enc_ldpc_tb_params *tb = NULL;
-
-
-   if (ldpc_enc->input.length >
-   RTE_BBDEV_LDPC_MAX_CB_SIZE >> 3) {
-   rte_bbdev_log(ERR, "CB size (%u) is too big, max: %d",
-   ldpc_enc->input.length,
-   RTE_BBDEV_LDPC_MAX_CB_SIZE);
-   return -1;
-   }
 
if (op->mempool == NULL) {
rte_bbdev_log(ERR, "Invalid mempool pointer");
@@ -1062,140 +1050,437 @@ validate_enc_op(struct rte_bbdev_enc_op *op 
__rte_unused)
rte_bbdev_log(ERR, "Invalid output pointer");
return -1;
}
+   if (ldpc_enc->input.length == 0) {
+   rte_bbdev_log(ERR, "CB size (%u) is null",
+   ldpc_enc->input.length);
+   return -1;
+   }
if ((ldpc_enc->basegraph > 2) || (ldpc_enc->basegraph == 0)) {
rte_bbdev_log(ERR,
-   "basegraph (%u) is out of range 1 <= value <= 
2",
+   "BG (%u) is out of range 1 <= value <= 2",
ldpc_enc->basegraph);
return -1;
}
+   if (ldpc_enc->rv_index > 3) {
+   rte_bbdev_log(ERR,
+   "rv_index (%u) is out of range 0 <= value <= 3",
+   ldpc_enc->rv_index);
+   return -1;
+   }
if (ldpc_enc->code_block_mode > RTE_BBDEV_CODE_BLOCK) {
rte_bbdev_log(ERR,
-   "code_block_mode (%u) is out of range 0:Tb 
1:CB",
+   "code_block_mode (%u) is out of range 0 <= 
value <= 1",
ldpc_enc->code_block_mode);
return -1;
}
 
-   if (ldpc_enc->code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK) {
-   tb = &ldpc_enc->tb_params;
-   if (tb->c == 0) {
-   rte_bbdev_log(ERR,
-   "c (%u) is out of range 1 <= value <= 
%u",
-   tb->c, RTE_BBDEV_LDPC_MAX_CODE_BLOCKS);
+   if (ldpc_enc->input.length >
+   RTE_BBDEV_LDPC_MAX_CB_SIZE >> 3) {
+   rte_bbdev_log(ERR, "CB size (%u) is too big, max: %d",
+   ldpc_enc->input.length,
+   RTE_BBDEV_LDPC_MAX_CB_SIZE);
+   return -1;
+   }
+   int z_c = ldpc_enc->z_c;
+   /* Check Zc is valid value */
+   if ((z_c > 384) || (z_c < 4)) {
+   rte_bbdev_log(ERR, "Zc (%u) is out of range", z_c);
+   return -1;
+   }
+   if (z_c > 256) {
+   if ((z_c % 32) != 0) {
+   rte_bbdev_log(ERR, "Invalid Zc %d", z_c);
return -1;
}
-   if (tb->cab > tb->c) {
-   rte_bbdev_log(ERR,
-   "cab (%u) is greater than c (%u)",
-   tb->cab, tb->c);
+   } else if (z_c > 128) {
+   if ((z_c % 16) != 0) {
+   rte_bbdev_log(ERR, "Invalid Zc %d", z_c);
return -1;
}
-   if ((tb->ea < RTE_BBDEV_LDPC_MIN_CB_SIZE)
-   && tb->r < tb->cab) {
-   rte_bbdev_log(ERR,
-   "ea (%u) is less than %u or it is not 
even",
-   tb->ea, RTE_BBDEV_LDPC_MIN_CB_SIZE);
+   } else if (z_c > 64) {
+   if ((z_c % 8) != 0) {
+   rte_bbdev_log(ERR, "Invalid Zc %d", z_c);
return -1;
}
-   if ((tb->eb < RTE_BBDEV_LDPC_MIN_CB_SIZE)
-   && tb->c > tb->cab) {
-   rte_bbdev_log(ERR,
-   "eb (%u) is less than %u",
-   tb->eb, RTE_BBDE

RE: [PATCH v1 4/5] baseband/fpga_5gnr_fec: enable validate LDPC enc/dec

2022-05-09 Thread Chautru, Nicolas



> -Original Message-
> From: Vargas, Hernan 
> Sent: Monday, May 9, 2022 1:18 PM
> To: dev@dpdk.org; gak...@marvell.com; t...@redhat.com
> Cc: Chautru, Nicolas ; Zhang, Qi Z
> ; Vargas, Hernan 
> Subject: [PATCH v1 4/5] baseband/fpga_5gnr_fec: enable validate LDPC
> enc/dec
> 
> Enable validate_ldpc_enc_op and validate_ldpc_dec_op
> 
> Signed-off-by: Hernan 
> ---
>  .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 490 ++
>  1 file changed, 384 insertions(+), 106 deletions(-)
> 
> diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> index 2d4b58067d..8fdb44c94a 100644
> --- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> +++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
> @@ -1032,23 +1032,11 @@ fpga_dma_desc_ld_fill(struct rte_bbdev_dec_op
> *op,
>   return 0;
>  }
> 
> -#ifdef RTE_LIBRTE_BBDEV_DEBUG
>  /* Validates LDPC encoder parameters */
> -static int
> -validate_enc_op(struct rte_bbdev_enc_op *op __rte_unused)
> +static inline int
> +validate_ldpc_enc_op(struct rte_bbdev_enc_op *op)
>  {
>   struct rte_bbdev_op_ldpc_enc *ldpc_enc = &op->ldpc_enc;
> - struct rte_bbdev_op_enc_ldpc_cb_params *cb = NULL;
> - struct rte_bbdev_op_enc_ldpc_tb_params *tb = NULL;
> -
> -
> - if (ldpc_enc->input.length >
> - RTE_BBDEV_LDPC_MAX_CB_SIZE >> 3) {
> - rte_bbdev_log(ERR, "CB size (%u) is too big, max: %d",
> - ldpc_enc->input.length,
> - RTE_BBDEV_LDPC_MAX_CB_SIZE);
> - return -1;
> - }
> 
>   if (op->mempool == NULL) {
>   rte_bbdev_log(ERR, "Invalid mempool pointer");
> @@ -1062,140 +1050,437 @@ validate_enc_op(struct rte_bbdev_enc_op *op __rte_unused)
>   rte_bbdev_log(ERR, "Invalid output pointer");
>   return -1;
>   }
> + if (ldpc_enc->input.length == 0) {
> + rte_bbdev_log(ERR, "CB size (%u) is null",
> + ldpc_enc->input.length);
> + return -1;
> + }
>   if ((ldpc_enc->basegraph > 2) || (ldpc_enc->basegraph == 0)) {
>   rte_bbdev_log(ERR,
> - "basegraph (%u) is out of range 1 <= value <=
> 2",
> + "BG (%u) is out of range 1 <= value <= 2",
>   ldpc_enc->basegraph);
>   return -1;
>   }
> + if (ldpc_enc->rv_index > 3) {
> + rte_bbdev_log(ERR,
> + "rv_index (%u) is out of range 0 <= value <=
> 3",
> + ldpc_enc->rv_index);
> + return -1;
> + }
>   if (ldpc_enc->code_block_mode > RTE_BBDEV_CODE_BLOCK) {
>   rte_bbdev_log(ERR,
> - "code_block_mode (%u) is out of range 0:Tb
> 1:CB",
> + "code_block_mode (%u) is out of range 0 <=
> value <= 1",
>   ldpc_enc->code_block_mode);
>   return -1;
>   }
> 
> - if (ldpc_enc->code_block_mode == RTE_BBDEV_TRANSPORT_BLOCK) {
> - tb = &ldpc_enc->tb_params;
> - if (tb->c == 0) {
> - rte_bbdev_log(ERR,
> - "c (%u) is out of range 1 <= value <=
> %u",
> - tb->c,
> RTE_BBDEV_LDPC_MAX_CODE_BLOCKS);
> + if (ldpc_enc->input.length >
> + RTE_BBDEV_LDPC_MAX_CB_SIZE >> 3) {
> + rte_bbdev_log(ERR, "CB size (%u) is too big, max: %d",
> + ldpc_enc->input.length,
> + RTE_BBDEV_LDPC_MAX_CB_SIZE);
> + return -1;
> + }
> + int z_c = ldpc_enc->z_c;
> + /* Check Zc is valid value */
> + if ((z_c > 384) || (z_c < 4)) {
> + rte_bbdev_log(ERR, "Zc (%u) is out of range", z_c);
> + return -1;
> + }
> + if (z_c > 256) {
> + if ((z_c % 32) != 0) {
> + rte_bbdev_log(ERR, "Invalid Zc %d", z_c);
>   return -1;
>   }
> - if (tb->cab > tb->c) {
> - rte_bbdev_log(ERR,
> - "cab (%u) is greater than c (%u)",
> - tb->cab, tb->c);
> + } else if (z_c > 128) {
> + if ((z_c % 16) != 0) {
> + rte_bbdev_log(ERR, "Invalid Zc %d", z_c);
>   return -1;
>   }
> - if ((tb->ea < RTE_BBDEV_LDPC_MIN_CB_SIZE)
> - && tb->r < tb->cab) {
> - rte_bbdev_log(ERR,
> - "ea (%u) is less than %u or it is not
> even",
> - tb->ea,
> RTE_BBDEV_LDPC_MIN_CB_SIZE);
> + } else if (z_c > 64) {
> + if ((z_c % 8) != 0) {
> + rte_bbdev_log(ERR, "I

RE: [PATCH v2 1/5] baseband/acc100: introduce PMD for ACC101

2022-05-09 Thread Chautru, Nicolas
Hi Tom, 

> -Original Message-
> From: Tom Rix 
> Sent: Sunday, May 8, 2022 6:03 AM
> To: Chautru, Nicolas ; dev@dpdk.org;
> gak...@marvell.com
> Cc: tho...@monjalon.net; Kinsella, Ray ; Richardson,
> Bruce ; hemant.agra...@nxp.com; Zhang,
> Mingshan ; david.march...@redhat.com
> Subject: Re: [PATCH v2 1/5] baseband/acc100: introduce PMD for ACC101
> 
> This is a good start reusing code, but I think it needs to do more reuse.
> 
> These cards should be very close and likely represent a family of cards.
> 
> On 4/27/22 11:16 AM, Nicolas Chautru wrote:
> > Support for ACC101 as a derivative of ACC100.
> > Reusing existing code when possible.
> >
> > Signed-off-by: Nicolas Chautru 
> > ---
> >   doc/guides/bbdevs/acc101.rst | 237
> +++
> >   doc/guides/bbdevs/features/acc101.ini|  13 ++
> >   doc/guides/bbdevs/index.rst  |   1 +
> >   doc/guides/rel_notes/release_22_07.rst   |   4 +
> >   drivers/baseband/acc100/rte_acc100_pmd.c | 194
> -
> >   drivers/baseband/acc100/rte_acc100_pmd.h |   6 +
> >   drivers/baseband/acc100/rte_acc101_pmd.h |  61 
> >   7 files changed, 511 insertions(+), 5 deletions(-)
> >   create mode 100644 doc/guides/bbdevs/acc101.rst
> >   create mode 100644 doc/guides/bbdevs/features/acc101.ini
> >   create mode 100644 drivers/baseband/acc100/rte_acc101_pmd.h
> >
> > diff --git a/doc/guides/bbdevs/acc101.rst
> > b/doc/guides/bbdevs/acc101.rst new file mode 100644 index
> > 000..46c310b
> > --- /dev/null
> > +++ b/doc/guides/bbdevs/acc101.rst
> > @@ -0,0 +1,237 @@
> > +..  SPDX-License-Identifier: BSD-3-Clause
> > +Copyright(c) 2020 Intel Corporation
> > +
> > +Intel(R) ACC101 5G/4G FEC Poll Mode Driver
> > +==
> > +
> > +The BBDEV ACC101 5G/4G FEC poll mode driver (PMD) supports an
> > +implementation of a VRAN FEC wireless acceleration function.
> > +This device is also known as Mount Cirrus.
> > +This is a follow-up to Mount Bryce (ACC100) and includes fixes,
> > +improved feature set for error scenarios and performance capacity increase.
> 
> includes fixes, better error handling and increased performance.
> 
> A quick look at acc100.rst and the bulk of acc101.rst looks the same.
> 
> Consider a user of the acc100 is upgrading to acc101, they will
> 
> want to know what is the same and what has changed and test accordingly.
> 
> These two documents should be combined.
> 

Well, in terms of documentation, it helps users to be able to follow the steps
as they are for a given variant, as opposed to having to work through multiple
options in the same document when using ACC100 vs ACC101.
Unless there are other objections, I see this as more useful for the user as is,
and less of a source of errors.


> > +
> > +Features
> > +
> > +
> > +ACC101 5G/4G FEC PMD supports the following features:
> > +
> > +- LDPC Encode in the DL (5GNR)
> > +- LDPC Decode in the UL (5GNR)
> > +- Turbo Encode in the DL (4G)
> > +- Turbo Decode in the UL (4G)
> > +- 16 VFs per PF (physical device)
> > +- Maximum of 128 queues per VF
> > +- PCIe Gen-3 x16 Interface
> > +- MSI
> > +- SR-IOV
> > +
> > +ACC101 5G/4G FEC PMD supports the following BBDEV capabilities:
> > +
> > +* For the LDPC encode operation:
> > +   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s)
> > +   - ``RTE_BBDEV_LDPC_RATE_MATCH`` :  if set then do not do Rate Match
> bypass
> > +   - ``RTE_BBDEV_LDPC_INTERLEAVER_BYPASS`` : if set then bypass
> > +interleaver
> > +
> > +* For the LDPC decode operation:
> > +   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` :  check CRC24B from CB(s)
> > +   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` :  disable early
> termination
> > +   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` :  drops CRC24B bits
> appended while decoding
> > +   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` :  provides an input for
> HARQ combining
> > +   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` :  provides an input
> for HARQ combining
> > +   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` :  HARQ
> memory input is internal
> > +   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` :  HARQ
> memory output is internal
> > +   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` :
> loopback data to/from HARQ memory
> > +   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` :  HARQ
> memory includes the fillers bits
> > +   - ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER`` :  supports scatter-gather
> for input/output data
> > +   - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` :  supports
> compression of the HARQ input/output
> > +   - ``RTE_BBDEV_LDPC_LLR_COMPRESSION`` :  supports LLR input
> > +compression
> > +
> > +* For the turbo encode operation:
> > +   - ``RTE_BBDEV_TURBO_CRC_24B_ATTACH`` :  set to attach CRC24B to
> CB(s)
> > +   - ``RTE_BBDEV_TURBO_RATE_MATCH`` :  if set then do not do Rate Match
> bypass
> > +   - ``RTE_BBDEV_TURBO_ENC_INTERRUPTS`` :  set for encoder dequeue
> interru

RE: [PATCH v2 2/5] baseband/acc100: modify validation code for ACC101

2022-05-09 Thread Chautru, Nicolas
Hi Tom,

> -Original Message-
> From: Tom Rix 
> Sent: Sunday, May 8, 2022 6:07 AM
> To: Chautru, Nicolas ; dev@dpdk.org;
> gak...@marvell.com
> Cc: tho...@monjalon.net; Kinsella, Ray ; Richardson,
> Bruce ; hemant.agra...@nxp.com; Zhang,
> Mingshan ; david.march...@redhat.com
> Subject: Re: [PATCH v2 2/5] baseband/acc100: modify validation code for
> ACC101
> 
> 
> On 4/27/22 11:17 AM, Nicolas Chautru wrote:
> > The validation requirement is different for the two devices.
> >
> > Signed-off-by: Nicolas Chautru 
> > ---
> >   drivers/baseband/acc100/rte_acc100_pmd.c | 40
> ++--
> >   1 file changed, 28 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index fca27ef..daf2ce0 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1293,6 +1293,14 @@
> > return (q->d->device_variant == ACC100_VARIANT);
> >   }
> >
> > +#ifdef RTE_LIBRTE_BBDEV_DEBUG
> > +static inline bool
> > +validate_op_required(struct acc100_queue *q)
> 
> There isn't an #else case so this will fail to build.

There is no #else required, I believe, since that function is not used when
RTE_LIBRTE_BBDEV_DEBUG is not set. It should build in both cases (I believe
this is covered by CI/CD).
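
A minimal sketch of why no #else branch is needed when both the helper and
its only call site sit behind the same macro (names here are simplified, not
the driver's):

#include <stdbool.h>
#include <stdio.h>

#define RTE_LIBRTE_BBDEV_DEBUG 1        /* drop this line to mimic a non-debug build */

#ifdef RTE_LIBRTE_BBDEV_DEBUG
static bool
validate_op_required(int device_variant)
{
        return device_variant == 100;   /* e.g. only ACC100 wants SW validation */
}
#endif

static int
enqueue_op(int device_variant)
{
#ifdef RTE_LIBRTE_BBDEV_DEBUG
        if (validate_op_required(device_variant))
                printf("validating op\n");
#endif
        return 0;       /* without the macro, the helper is never referenced */
}

int main(void) { return enqueue_op(100); }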

> 
> This i believe could be another function in private data fops i suggested in 
> the
> first patch.

In that case we do not expect to validate the input on ACC100; there is no
such function.

> 
> Tom
> 
> > +{
> > +   return is_acc100(q);
> > +}
> > +#endif
> > +



RE: [PATCH v2 3/5] baseband/acc100: configuration of ACC101 from PF

2022-05-09 Thread Chautru, Nicolas


> -Original Message-
> From: Tom Rix 
> Sent: Sunday, May 8, 2022 6:38 AM
> To: Chautru, Nicolas ; dev@dpdk.org;
> gak...@marvell.com
> Cc: tho...@monjalon.net; Kinsella, Ray ; Richardson,
> Bruce ; hemant.agra...@nxp.com; Zhang,
> Mingshan ; david.march...@redhat.com
> Subject: Re: [PATCH v2 3/5] baseband/acc100: configuration of ACC101 from
> PF
> 
> 
> On 4/27/22 11:17 AM, Nicolas Chautru wrote:
> > Adding companion function specific to ACC100 and it can be called from
> > bbdev-test when running from PF.
> >
> > Signed-off-by: Nicolas Chautru 
> > ---
> >   app/test-bbdev/test_bbdev_perf.c |  57 ++
> >   drivers/baseband/acc100/rte_acc100_cfg.h |  17 ++
> >   drivers/baseband/acc100/rte_acc100_pmd.c | 302
> +++
> >   drivers/baseband/acc100/version.map  |   2 +-
> >   4 files changed, 377 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-bbdev/test_bbdev_perf.c
> > b/app/test-bbdev/test_bbdev_perf.c
> > index 0fa119a..baf5f6d 100644
> > --- a/app/test-bbdev/test_bbdev_perf.c
> > +++ b/app/test-bbdev/test_bbdev_perf.c
> > @@ -63,6 +63,8 @@
> >   #define ACC100_QMGR_INVALID_IDX -1
> >   #define ACC100_QMGR_RR 1
> >   #define ACC100_QOS_GBR 0
> > +#define ACC101PF_DRIVER_NAME   ("intel_acc101_pf")
> > +#define ACC101VF_DRIVER_NAME   ("intel_acc101_vf")
> A dup from patch 1
> >   #endif
> >
> >   #define OPS_CACHE_SIZE 256U
> > @@ -765,6 +767,61 @@ typedef int (test_case_function)(struct
> active_device *ad,
> > "Failed to configure ACC100 PF for bbdev %s",
> > info->dev_name);
> > }
> > +   if ((get_init_device() == true) &&
> > +   (!strcmp(info->drv.driver_name, ACC101PF_DRIVER_NAME)))
> {
> > +   struct rte_acc100_conf conf;
> 
> Mixing up acc100 and acc101 ?
> 
> If this actually works, combine the two.

The configuration file template is the same, but the configuration file itself
is not. I can combine that part a bit more.

> 
> > +   unsigned int i;
> > +
> > +   printf("Configure ACC101 FEC Driver %s with default values\n",
> > +   info->drv.driver_name);
> > +
> > +   /* clear default configuration before initialization */
> > +   memset(&conf, 0, sizeof(struct rte_acc100_conf));
> > +
> > +   /* Always set in PF mode for built-in configuration */
> > +   conf.pf_mode_en = true;
> > +   for (i = 0; i < RTE_ACC100_NUM_VFS; ++i) {
> > +   conf.arb_dl_4g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_dl_4g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_dl_4g[i].round_robin_weight =
> ACC100_QMGR_RR;
> > +   conf.arb_ul_4g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_ul_4g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_ul_4g[i].round_robin_weight =
> ACC100_QMGR_RR;
> > +   conf.arb_dl_5g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_dl_5g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_dl_5g[i].round_robin_weight =
> ACC100_QMGR_RR;
> > +   conf.arb_ul_5g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_ul_5g[i].gbr_threshold1 =
> ACC100_QOS_GBR;
> > +   conf.arb_ul_5g[i].round_robin_weight =
> ACC100_QMGR_RR;
> > +   }
> > +
> > +   conf.input_pos_llr_1_bit = true;
> > +   conf.output_pos_llr_1_bit = true;
> > +   conf.num_vf_bundles = 1; /**< Number of VF bundles to setup
> */
> > +
> > +   conf.q_ul_4g.num_qgroups = ACC100_QMGR_NUM_QGS;
> > +   conf.q_ul_4g.first_qgroup_index =
> ACC100_QMGR_INVALID_IDX;
> > +   conf.q_ul_4g.num_aqs_per_groups =
> ACC100_QMGR_NUM_AQS;
> > +   conf.q_ul_4g.aq_depth_log2 = ACC100_QMGR_AQ_DEPTH;
> > +   conf.q_dl_4g.num_qgroups = ACC100_QMGR_NUM_QGS;
> > +   conf.q_dl_4g.first_qgroup_index =
> ACC100_QMGR_INVALID_IDX;
> > +   conf.q_dl_4g.num_aqs_per_groups =
> ACC100_QMGR_NUM_AQS;
> > +   conf.q_dl_4g.aq_depth_log2 = ACC100_QMGR_AQ_DEPTH;
> > +   conf.q_ul_5g.num_qgroups = ACC100_QMGR_NUM_QGS;
> > +   conf.q_ul_5g.first_qgroup_index =
> ACC100_QMGR_INVALID_IDX;
> > +   conf.q_ul_5g.num_aqs_per_groups =
> ACC100_QMGR_NUM_AQS;
> > +   conf.q_ul_5g.aq_depth_log2 = ACC100_QMGR_AQ_DEPTH;
> > +   conf.q_dl_5g.num_qgroups = ACC100_QMGR_NUM_QGS;
> > +   conf.q_dl_5g.first_qgroup_index =
> ACC100_QMGR_INVALID_IDX;
> > +   conf.q_dl_5g.num_aqs_per_groups =
> ACC100_QMGR_NUM_AQS;
> > +   conf.q_dl_5g.aq_depth_log2 = ACC100_QMGR_AQ_DEPTH;
> > +
> > +   /* setup PF with configuration information */
> > +   ret = rte_acc101_configure(info->dev_name, &conf);
> > +   TEST_ASSERT_SUCCESS(ret,
> > +   "Failed to configure ACC101 

RE: [PATCH v2 5/5] baseband/acc100: add protection for some negative scenario

2022-05-09 Thread Chautru, Nicolas
Hi Tom, 

> -Original Message-
> From: Tom Rix 
> Sent: Sunday, May 8, 2022 6:56 AM
> To: Chautru, Nicolas ; dev@dpdk.org;
> gak...@marvell.com
> Cc: tho...@monjalon.net; Kinsella, Ray ; Richardson,
> Bruce ; hemant.agra...@nxp.com; Zhang,
> Mingshan ; david.march...@redhat.com
> Subject: Re: [PATCH v2 5/5] baseband/acc100: add protection for some
> negative scenario
> 
> 
> On 4/27/22 11:17 AM, Nicolas Chautru wrote:
> > Catch exception in PMD in case of invalid input parameter.
> 
> It is not clear if this is 1 fix or 2.
> 
> But it does look like an acc100 fix so it should be split from the
> acc101 patchset.
> 

What is the concern? This is a different commit related to acc100.  

> >
> > Signed-off-by: Nicolas Chautru 
> > ---
> >   drivers/baseband/acc100/rte_acc100_pmd.c | 6 ++
> >   1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index b588f5f..a13966c 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -1241,6 +1241,8 @@
> > return (bg == 1 ? ACC100_K0_3_1 : ACC100_K0_3_2)
> * z_c;
> > }
> > /* LBRM case - includes a division by N */
> > +   if (unlikely(z_c == 0))
> > +   return 0;
> 
> This check should be moved to earlier, if 'n' is set to 0 in the statement 
> above,
> there is div by 0 later

N is derived purely from z_c, so I don't see the concern with the ordering.
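
To make that concrete, a toy sketch of the relationship being discussed,
assuming (as stated above) that n is computed as a multiple of z_c; the
constants and names are illustrative, not the driver's:

#include <stdint.h>
#include <stdio.h>

static uint32_t
k0_lbrm(uint16_t z_c, uint16_t n_cb)
{
        uint32_t n = 66 * z_c;          /* n can only be 0 when z_c is 0 */

        if (z_c == 0)                   /* also guards the division below */
                return 0;
        return ((17 * n_cb) / n) * z_c;
}

int main(void)
{
        printf("%u\n", k0_lbrm(0, 1056));       /* returns 0, no division by zero */
        return 0;
}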

> 
> Tom
> 
> > if (rv_index == 1)
> > return (((bg == 1 ? ACC100_K0_1_1 : ACC100_K0_1_2) *
> n_cb)
> > / n) * z_c;
> > @@ -1916,6 +1918,10 @@ static inline uint32_t hq_index(uint32_t
> > offset)
> >
> > /* Soft output */
> > if (check_bit(op->turbo_dec.op_flags,
> RTE_BBDEV_TURBO_SOFT_OUTPUT))
> > {
> > +   if (op->turbo_dec.soft_output.data == 0) {
> > +   rte_bbdev_log(ERR, "Soft output is not defined");
> > +   return -1;
> > +   }
> > if (check_bit(op->turbo_dec.op_flags,
> > RTE_BBDEV_TURBO_EQUALIZER))
> > *s_out_length = e;



RE: [PATCH v2 4/5] baseband/acc100: start explicitly PF Monitor from PMD

2022-05-09 Thread Chautru, Nicolas
Hi Tom, 

> -Original Message-
> From: Tom Rix 
> Sent: Sunday, May 8, 2022 6:45 AM
> To: Chautru, Nicolas ; dev@dpdk.org;
> gak...@marvell.com
> Cc: tho...@monjalon.net; Kinsella, Ray ; Richardson,
> Bruce ; hemant.agra...@nxp.com; Zhang,
> Mingshan ; david.march...@redhat.com
> Subject: Re: [PATCH v2 4/5] baseband/acc100: start explicitly PF Monitor from
> PMD
> 
> 
> On 4/27/22 11:17 AM, Nicolas Chautru wrote:
> > Ensure the performance monitor is restarted in case this is reset
> > after VF FLR.
> >
> > Signed-off-by: Nicolas Chautru 
> > ---
> >   drivers/baseband/acc100/rte_acc100_pmd.c | 4 
> >   drivers/baseband/acc100/rte_acc100_pmd.h | 6 ++
> >   2 files changed, 10 insertions(+)
> >
> > diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c
> > b/drivers/baseband/acc100/rte_acc100_pmd.c
> > index b03cedc..b588f5f 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.c
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.c
> > @@ -263,6 +263,10 @@
> > & 0xF;
> > }
> >
> > +   /* Start Pmon */
> > +   acc100_reg_write(d, reg_addr->pmon_ctrl_a, 0x2);
> > +   acc100_reg_write(d, reg_addr->pmon_ctrl_b, 0x2);
> 
> This looks like an acc100 bug fix, so it should be split from the acc101 
> patchset.
> 
> Where this code is added is fetch_acc100_config, a function that does reads.
> 
> Though convenient, this is likely not the best place to put the writes
> 

OK, fair enough. It could be part of setup_queues(). Thanks.

> Tom
> 
> > +
> > /* Read PF mode */
> > if (d->pf_device) {
> > reg_mode = acc100_reg_read(d, HWPfHiPfMode); diff --git
> > a/drivers/baseband/acc100/rte_acc100_pmd.h
> > b/drivers/baseband/acc100/rte_acc100_pmd.h
> > index 6438031..f126cc0 100644
> > --- a/drivers/baseband/acc100/rte_acc100_pmd.h
> > +++ b/drivers/baseband/acc100/rte_acc100_pmd.h
> > @@ -475,6 +475,8 @@ struct acc100_registry_addr {
> > unsigned int depth_log1_offset;
> > unsigned int qman_group_func;
> > unsigned int ddr_range;
> > +   unsigned int pmon_ctrl_a;
> > +   unsigned int pmon_ctrl_b;
> >   };
> >
> >   /* Structure holding registry addresses for PF */ @@ -504,6 +506,8
> > @@ struct acc100_registry_addr {
> > .depth_log1_offset = HWPfQmgrGrpDepthLog21Vf,
> > .qman_group_func = HWPfQmgrGrpFunction0,
> > .ddr_range = HWPfDmaVfDdrBaseRw,
> > +   .pmon_ctrl_a = HWPfPermonACntrlRegVf,
> > +   .pmon_ctrl_b = HWPfPermonBCntrlRegVf,
> >   };
> >
> >   /* Structure holding registry addresses for VF */ @@ -533,6 +537,8
> > @@ struct acc100_registry_addr {
> > .depth_log1_offset = HWVfQmgrGrpDepthLog21Vf,
> > .qman_group_func = HWVfQmgrGrpFunction0Vf,
> > .ddr_range = HWVfDmaDdrBaseRangeRoVf,
> > +   .pmon_ctrl_a = HWVfPmACntrlRegVf,
> > +   .pmon_ctrl_b = HWVfPmBCntrlRegVf,
> >   };
> >
> >   /* Structure associated with each queue. */



Re: [PATCH v6] lib/eal/ppc fix compilation for musl

2022-05-09 Thread Dunk



> On 9 May 2022, at 13:06, David Marchand  wrote:
> 
> On Sat, May 7, 2022 at 11:16 PM Duncan Bellamy  wrote:
>> 
>> musl lacks __ppc_get_timebase() but has __builtin_ppc_get_timebase()
>> 
>> the __ppc_get_timebase_freq() is taken from:
>> https://git.alpinelinux.org/aports/commit/?id=06b03f70fb94972286c0c9f6278df89e53903833
>> 
>> Signed-off-by: Duncan Bellamy 
> 
> - A patch title does not need lib/ prefix.
> Here, "eal/ppc: " is enough.
> 
> 
> - Code in lib/eal/linux won't be used for FreeBSD/Windows.
> On the other hand, arch-specific code (here, lib/eal/ppc/) can be used
> for the various OS.
> Besides, as far as I can see in the Linux kernel sources, powerpc is
> the only architecture that exports a "timebase" entry in
> /proc/cpuinfo.
> So, I see no reason to put any code out of lib/eal/ppc.
> 
> 
> - In the end, unless I missed some point, the patch could probably
> look like (untested):
> 
> diff --git a/lib/eal/ppc/include/rte_cycles.h 
> b/lib/eal/ppc/include/rte_cycles.h
> index 5585f9273c..666fc9b0bf 100644
> --- a/lib/eal/ppc/include/rte_cycles.h
> +++ b/lib/eal/ppc/include/rte_cycles.h
> @@ -10,7 +10,10 @@
> extern "C" {
> #endif
> 
> +#include 
> +#ifdef __GLIBC__
> #include 
> +#endif
> 
> #include "generic/rte_cycles.h"
> 
> @@ -26,7 +29,11 @@ extern "C" {
> static inline uint64_t
> rte_rdtsc(void)
> {
> +#ifdef __GLIBC__
>return __ppc_get_timebase();
> +#else
> +   return __builtin_ppc_get_timebase();
> +#endif
> }
> 
> static inline uint64_t
> diff --git a/lib/eal/ppc/rte_cycles.c b/lib/eal/ppc/rte_cycles.c
> index 3180adb0ff..99d36b2f7e 100644
> --- a/lib/eal/ppc/rte_cycles.c
> +++ b/lib/eal/ppc/rte_cycles.c
> @@ -2,12 +2,50 @@
>  * Copyright (C) IBM Corporation 2019.
>  */
> 
> +#include 
> +#ifdef __GLIBC__
> #include 
> +#elif RTE_EXEC_ENV_LINUX
> +#include 
> +#include 
> +#endif
> 
> #include "eal_private.h"
> 
> uint64_t
> get_tsc_freq_arch(void)
> {
> +#ifdef __GLIBC__
>return __ppc_get_timebase_freq();
> +#elif RTE_EXEC_ENV_LINUX
> +   static unsigned long base;
> +   char buf[512];
> +   ssize_t nr;
> +   FILE *f;
> +
> +   if (base != 0)
> +   goto out;
> +
> +   f = fopen("/proc/cpuinfo", "rb");
> +   if (f == NULL)
> +   goto out;
> +
> +   while (fgets(buf, sizeof(buf), f) != NULL) {
> +   char *ret = strstr(buf, "timebase");
> +
> +   if (ret == NULL)
> +   continue;
> +   ret += sizeof("timebase") - 1;
> +   ret = strchr(ret, ':');
> +   if (ret == NULL)
> +   continue;
> +   base = strtoul(ret + 1, NULL, 10);
> +   break;
> +   }
> +   fclose(f);
> +out:
> +   return (uint64_t) base;
> +#else
> +   return 0;
> +#endif
> }
> 
> 
> -- 
> David Marchand
> 

Thanks, that looks like the same thing. I will run it through the Alpine CI
and change the commit title.

[PATCH v1] net/iavf: fix resource leak issue

2022-05-09 Thread Wenjun Wu
This patch fixes a resource leak issue reported by Coverity.

Coverity issue: 378017
Fixes: b14e8a57b9fe ("net/iavf: support quanta size configuration")

Signed-off-by: Wenjun Wu 
---
 drivers/net/iavf/iavf_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index d1a2b53675..82672841f4 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2188,7 +2188,8 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
if (ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
ad->devargs.quanta_size & 0x40) {
PMD_INIT_LOG(ERR, "invalid quanta size\n");
-   return -EINVAL;
+   ret = -EINVAL;
+   goto bail;
}
 
 bail:
-- 
2.25.1
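
The shape of the fix, reduced to a self-contained toy: a single cleanup label
frees whatever was allocated, so early exits cannot leak. The rte_kvargs
handling is only implied by the quoted diff, so plain malloc/free stand in
for it here and the helper name is illustrative.

#include <stdlib.h>

static int
parse_devargs(int quanta_size)
{
        int ret = 0;
        void *kvlist = malloc(64);      /* stands in for rte_kvargs_parse() */

        if (kvlist == NULL)
                return -1;

        if (quanta_size < 256 || quanta_size > 4096 || (quanta_size & 0x40)) {
                ret = -22;              /* -EINVAL */
                goto bail;              /* was: return -EINVAL, leaking kvlist */
        }

bail:
        free(kvlist);                   /* stands in for rte_kvargs_free() */
        return ret;
}

int main(void) { return parse_devargs(100) == -22 ? 0 : 1; }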



[PATCH v4] fix mbuf release function point corrupt in multi-process

2022-05-09 Thread Ke Zhang
In a multi-process environment, the secondary process operates on the
shared memory and overwrites the function pointers that belong to the
primary process. The primary process then fails to resolve the function
address when releasing mbufs, which results in a crash.

Signed-off-by: Ke Zhang 
---
 drivers/net/iavf/iavf_rxtx.c| 50 -
 drivers/net/iavf/iavf_rxtx.h| 11 ++
 drivers/net/iavf/iavf_rxtx_vec_avx512.c |  8 +---
 drivers/net/iavf/iavf_rxtx_vec_sse.c| 16 ++--
 4 files changed, 57 insertions(+), 28 deletions(-)
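
The idea behind the change, as a minimal self-contained sketch (names
simplified from the patch below): the queue kept in shared memory stores only
a plain enum, and each process resolves it through its own read-only table of
function pointers, so a secondary process can no longer corrupt the primary's
addresses.

#include <stdio.h>

enum rel_mbufs_type { REL_MBUFS_DEFAULT, REL_MBUFS_SSE_VEC };

struct rx_queue {                          /* lives in shared memory */
        enum rel_mbufs_type rel_mbufs_type;/* an index, valid in every process */
};

static void release_default(struct rx_queue *q) { (void)q; puts("default release"); }
static void release_sse(struct rx_queue *q)     { (void)q; puts("sse release"); }

/* per-process, read-only table: the addresses are local to each process */
static void (*const rel_mbufs_ops[])(struct rx_queue *) = {
        [REL_MBUFS_DEFAULT] = release_default,
        [REL_MBUFS_SSE_VEC] = release_sse,
};

int main(void)
{
        struct rx_queue q = { .rel_mbufs_type = REL_MBUFS_SSE_VEC };

        rel_mbufs_ops[q.rel_mbufs_type](&q);   /* resolved with local addresses */
        return 0;
}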

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 16e8d021f9..67d00ee5bc 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -362,14 +362,44 @@ release_txq_mbufs(struct iavf_tx_queue *txq)
}
 }
 
-static const struct iavf_rxq_ops def_rxq_ops = {
+static const
+struct iavf_rxq_ops def_rxq_ops = {
.release_mbufs = release_rxq_mbufs,
 };
 
-static const struct iavf_txq_ops def_txq_ops = {
+static const
+struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
 };
 
+static const
+struct iavf_rxq_ops sse_vec_rxq_ops = {
+   .release_mbufs = iavf_rx_queue_release_mbufs_sse,
+};
+
+static const
+struct iavf_txq_ops sse_vec_txq_ops = {
+   .release_mbufs = iavf_tx_queue_release_mbufs_sse,
+};
+
+static const
+struct iavf_txq_ops avx512_vec_txq_ops = {
+   .release_mbufs = iavf_tx_queue_release_mbufs_avx512,
+};
+
+static const
+struct iavf_rxq_ops iavf_rxq_release_mbufs_ops[] = {
+   [IAVF_REL_MBUFS_DEFAULT] = def_rxq_ops,
+   [IAVF_REL_MBUFS_SSE_VEC] = sse_vec_rxq_ops,
+};
+
+static const
+struct iavf_txq_ops iavf_txq_release_mbufs_ops[] = {
+   [IAVF_REL_MBUFS_DEFAULT] = def_txq_ops,
+   [IAVF_REL_MBUFS_SSE_VEC] = sse_vec_txq_ops,
+   [IAVF_REL_MBUFS_AVX512_VEC] = avx512_vec_txq_ops,
+};
+
 static inline void
 iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
struct rte_mbuf *mb,
@@ -674,7 +704,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t 
queue_idx,
rxq->q_set = true;
dev->data->rx_queues[queue_idx] = rxq;
rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
-   rxq->ops = &def_rxq_ops;
+   rxq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
 
if (check_rx_bulk_allow(rxq) == true) {
PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
@@ -811,7 +841,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
txq->q_set = true;
dev->data->tx_queues[queue_idx] = txq;
txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(queue_idx);
-   txq->ops = &def_txq_ops;
+   txq->rel_mbufs_type = IAVF_REL_MBUFS_DEFAULT;
 
if (check_tx_vec_allow(txq) == false) {
struct iavf_adapter *ad =
@@ -943,7 +973,7 @@ iavf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t 
rx_queue_id)
}
 
rxq = dev->data->rx_queues[rx_queue_id];
-   rxq->ops->release_mbufs(rxq);
+   iavf_rxq_release_mbufs_ops[rxq->rel_mbufs_type].release_mbufs(rxq);
reset_rx_queue(rxq);
dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -971,7 +1001,7 @@ iavf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t 
tx_queue_id)
}
 
txq = dev->data->tx_queues[tx_queue_id];
-   txq->ops->release_mbufs(txq);
+   iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
 
@@ -986,7 +1016,7 @@ iavf_dev_rx_queue_release(struct rte_eth_dev *dev, 
uint16_t qid)
if (!q)
return;
 
-   q->ops->release_mbufs(q);
+   iavf_rxq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1000,7 +1030,7 @@ iavf_dev_tx_queue_release(struct rte_eth_dev *dev, 
uint16_t qid)
if (!q)
return;
 
-   q->ops->release_mbufs(q);
+   iavf_txq_release_mbufs_ops[q->rel_mbufs_type].release_mbufs(q);
rte_free(q->sw_ring);
rte_memzone_free(q->mz);
rte_free(q);
@@ -1034,7 +1064,7 @@ iavf_stop_queues(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (!txq)
continue;
-   txq->ops->release_mbufs(txq);
+   
iavf_txq_release_mbufs_ops[txq->rel_mbufs_type].release_mbufs(txq);
reset_tx_queue(txq);
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -1042,7 +1072,7 @@ iavf_stop_queues(struct rte_eth_dev *dev)
rxq = dev->data->rx_queues[i];
if (!rxq)
continue;
-   rxq->ops->release_mbufs(rxq);
+   
iavf_rxq_release_mbufs_ops[rxq->rel_mbufs_type].release_mbufs(rxq);

Re: [dpdk-dev] [PATCH 00/17] bnxt PMD fixes

2022-05-09 Thread Ajit Khaparde
On Wed, Apr 27, 2022 at 7:58 AM Kalesh A P
 wrote:
>
> From: Kalesh AP 
>
> This patch set contains bug fixes in bnxt PMD. Please apply.
>
> Kalesh AP (12):
>   net/bnxt: update HWRM structures
>   net/bnxt: fix device capability reporting
>   net/bnxt: fix to remove an unused macro
>   net/bnxt: fix Rxq configure
>   net/bnxt: fix support for tunnel stateless offloads
>   net/bnxt: fix RSS action support
>   net/bnxt: add check for dupliate queue ids
>   net/bnxt: avoid unnecessary endianness conversion
>   net/bnxt: fix setting autoneg speed
>   net/bnxt: force PHY update on certain configurations
>   net/bnxt: fix reporting link status when port is stopped
>   net/bnxt: recheck FW readiness if FW is in reset process
>
> Somnath Kotur (5):
>   net/bnxt: remove support for COUNT action
>   net/bnxt: fix to reconfigure the VNIC's default receive ring
>   net/bnxt: fix to handle queue stop during RSS flow create
>   net/bnxt: fix freeing of VNIC filters
>   net/bnxt: don't wait for link up completion in dev start


Patches applied to dpdk-next-net-brcm. Thanks

>
>
>  drivers/net/bnxt/bnxt.h|   29 +-
>  drivers/net/bnxt/bnxt_ethdev.c |   58 +-
>  drivers/net/bnxt/bnxt_filter.c |2 +
>  drivers/net/bnxt/bnxt_flow.c   |   92 +-
>  drivers/net/bnxt/bnxt_hwrm.c   |   15 +-
>  drivers/net/bnxt/bnxt_hwrm.h   |   20 +
>  drivers/net/bnxt/bnxt_reps.c   |6 +-
>  drivers/net/bnxt/bnxt_rxq.c|   75 +-
>  drivers/net/bnxt/bnxt_rxq.h|1 +
>  drivers/net/bnxt/bnxt_txq.c|   29 +
>  drivers/net/bnxt/bnxt_txq.h|1 +
>  drivers/net/bnxt/hsi_struct_def_dpdk.h | 4025 
> 
>  12 files changed, 3809 insertions(+), 544 deletions(-)
>
> --
> 2.10.1
>




Re: [PATCH 0/3] BNXT changes

2022-05-09 Thread Ajit Khaparde
On Wed, Apr 13, 2022 at 3:12 PM Ajit Khaparde
 wrote:
>
> On Wed, Apr 13, 2022 at 3:32 AM Ruifeng Wang  wrote:
> >
> > This patch set includes changes proposed for BNXT PMD.
> > Found these in code review.
> >
> > Ruifeng Wang (3):
> >   net/bnxt: defer completion index update
> >   net/bnxt: remove redundant ifdefs
> >   net/bnxt: fix risk in Rx descriptor read in NEON path
> Thanks Ruifeng.
> Let me review the patchset and get back.

Patches applied to dpdk-next-net-brcm. Thanks
>
> >
> >  drivers/net/bnxt/bnxt_rxr.c   |  2 +-
> >  drivers/net/bnxt/bnxt_rxtx_vec_neon.c | 21 +++--
> >  2 files changed, 16 insertions(+), 7 deletions(-)
> >
> > --
> > 2.25.1
> >




RE: [Patch v2] net/netvsc: report correct stats values

2022-05-09 Thread Long Li
> Subject: Re: [Patch v2] net/netvsc: report correct stats values
> 
> On 5/5/2022 5:40 PM, Stephen Hemminger wrote:
> > On Thu, 5 May 2022 17:28:38 +0100
> > Ferruh Yigit  wrote:
> >
> >> On 5/4/2022 7:38 PM, Long Li wrote:
>  Subject: Re: [Patch v2] net/netvsc: report correct stats values
> 
>  On 5/3/2022 9:48 PM, Long Li wrote:
> >> Subject: Re: [Patch v2] net/netvsc: report correct stats values
> >>
> >> On 5/3/2022 8:14 PM, Long Li wrote:
>  Subject: Re: [Patch v2] net/netvsc: report correct stats values
> 
>  On 5/3/2022 7:18 PM, Long Li wrote:
> >> Subject: Re: [Patch v2] net/netvsc: report correct stats
> >> values
> >>
> >> On Tue, 26 Apr 2022 22:56:14 +0100 Ferruh Yigit
> >>  wrote:
> >>
>   if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
>  -stats->q_opackets[i] = 
>  txq->stats.packets;
>  -stats->q_obytes[i] = txq->stats.bytes;
>  +stats->q_opackets[i] += txq-
> >stats.packets;
>  +stats->q_obytes[i] += txq->stats.bytes;
> >>>
> >>> This is per queue stats, 'stats->q_opackets[i]', in next
> >>> iteration of the loop, 'i' will be increased and 'txq' will
> >>> be updated, so as far as I can see the above change has no affect.
> >>
> >> Agree, that is why it was just assignment originally.
> >
> > The condition here is a little different. NETVSC is a master
> > device with
>  another PMD running as a slave. When reporting stats values, it
>  needs to add the values from the slave PMD. The original code
>  just overwrites the values from its slave PMD.
> 
>  Where the initial values are coming from, 'hn_vf_stats_get()'?
> 
>  If 'hn_vf_stats_get()' fills the stats, what are the values
>  kept in
>  'txq-
> >>> stats.*'
>  in above updated loop?
> >>>
> >>> Yes, hn_vf_stats_get() fills in the stats from the slave PMD.
> >>> txq->stats
> >> values are from the master PMD. Those values are different and
> >> accounted separated from the values from the slave PMD.
> >>
> >> I see, since this is a little different than what most of the
> >> PMDs do, can you please put a little more info to the commit log?
> >> Or perhaps can add some comments to the code.
> >
> > Ok, will do.
> >
> >>
> >> And still 'stats->rx_nombuf' change is not required right? If so
> >> can you remove it in the next version?
> >
> > It is still needed. NETVSC unconditionally calls the slave PMD to
> > receive
>  packets, even if it can't allocate a mbuf to receive a synthetic
>  packet itself. The accounting of rx_nombuf is valid because the
>  synthetic packets (to NETVSC) and VF packets (to slave PMD) are routed
> separately from Hyper-V.
> 
>  I am not referring to the "+=" update, my comment was because
>  'stats-
> > rx_nombuf' is overwritten in 'rte_eth_stats_get()' [1].
>  Is it still required?
> >>>
> >>> Yes, it is still needed. NETVSC calls the rte_eth_stats_get() on its 
> >>> slave PMD
> first, and stats->rx_nombuf is updated (overwritten) for its slave PMD. Afte 
> that,
> it needs to add to its own dev->data->rx_mbuf_alloc_failed back to stats-
> >rx_nombuf.
> >>>
> >>
> >> But its own stat also will be overwritten (not in PMD function, but
> >> in ethdev layer).
> >> 'stats->rx_nombuf' assignment in the PMD seems has no effect and can
> >> be removed.
> >>
> >> I can't see how it is needed, can you please put a call stack to describe?
> >
> > This here:
> >
> >
> > int
> > rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) {
> > struct rte_eth_dev *dev;
> >
> > RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > dev = &rte_eth_devices[port_id];
> >
> > if (stats == NULL) {
> > RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u stats to
> NULL\n",
> > port_id);
> > return -EINVAL;
> > }
> >
> > memset(stats, 0, sizeof(*stats));
> >
> > RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);
> > stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> > return eth_err(port_id, (*dev->dev_ops->stats_get)(dev, stats)); }
> >
> > Will fill in rx_nombuf from the current rx_mbuf_alloc_failed.
> > But it happens before the PMD specific stats function.
> >
> 
> I keep seeing the ethdev assignment as *after* the dev_ops, but it is not 
> [1], so
> code is OK as it is.

Hi Ferruh,

Do you still want me to send a v3, or is this patch good as it is?

Long

> 
> 
> [1]
> It seems assignment was after but it is fixed on the way:
> Commit 53ecfa24fbcd ("ethdev: fix overwriting driver-specific stats")


[PATCH 0/2] atomic changes

2022-05-09 Thread Ruifeng Wang
This patch set includes changes proposed for BNXT PMD.
These are C11 atomic and synchronization changes. Occurrences of
rte_atomicNN and rte_smp_XX are touched.

Ruifeng Wang (2):
  net/bnxt: use compiler atomics for stats
  net/bnxt: remove some dead code

 drivers/net/bnxt/bnxt_cpr.h   | 14 --
 drivers/net/bnxt/bnxt_rxq.c   |  2 +-
 drivers/net/bnxt/bnxt_rxq.h   |  2 +-
 drivers/net/bnxt/bnxt_rxr.c   |  9 +
 drivers/net/bnxt/bnxt_stats.c |  4 ++--
 5 files changed, 9 insertions(+), 22 deletions(-)

-- 
2.25.1



[PATCH 1/2] net/bnxt: use compiler atomics for stats

2022-05-09 Thread Ruifeng Wang
Converted rte_atomic usages to compiler atomic built-ins.

Signed-off-by: Ruifeng Wang 
Reviewed-by: Kathleen Capella 
---
 drivers/net/bnxt/bnxt_rxq.c   | 2 +-
 drivers/net/bnxt/bnxt_rxq.h   | 2 +-
 drivers/net/bnxt/bnxt_rxr.c   | 9 +
 drivers/net/bnxt/bnxt_stats.c | 4 ++--
 4 files changed, 9 insertions(+), 8 deletions(-)
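
For reference, the mapping applied throughout the diff below, shown on a
standalone variable; this is a sketch of the builtins' use, not driver code:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t rx_mbuf_alloc_fail = 0;        /* was rte_atomic64_t + rte_atomic64_init() */

        /* was rte_atomic64_inc() */
        __atomic_fetch_add(&rx_mbuf_alloc_fail, 1, __ATOMIC_RELAXED);

        /* was rte_atomic64_read() */
        printf("%llu\n", (unsigned long long)
               __atomic_load_n(&rx_mbuf_alloc_fail, __ATOMIC_RELAXED));

        /* was rte_atomic64_clear(); a plain store is enough in the reset path */
        rx_mbuf_alloc_fail = 0;
        return 0;
}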

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index cd3bb1446f..9c93cde9b3 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -378,7 +378,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
"ring_dma_zone_reserve for rx_ring failed!\n");
goto err;
}
-   rte_atomic64_init(&rxq->rx_mbuf_alloc_fail);
+   rxq->rx_mbuf_alloc_fail = 0;
 
/* rxq 0 must not be stopped when used as async CPR */
if (!BNXT_NUM_ASYNC_CPR(bp) && queue_idx == 0)
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 0331c23810..7b52a21497 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -40,7 +40,7 @@ struct bnxt_rx_queue {
struct bnxt_rx_ring_info*rx_ring;
struct bnxt_cp_ring_info*cp_ring;
struct rte_mbuf fake_mbuf;
-   rte_atomic64_t  rx_mbuf_alloc_fail;
+   uint64_trx_mbuf_alloc_fail;
const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 5a9cf48e67..738f2f584c 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -49,7 +49,7 @@ static inline int bnxt_alloc_rx_data(struct bnxt_rx_queue 
*rxq,
rx_buf = &rxr->rx_buf_ring[prod];
mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
if (!mbuf) {
-   rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail);
+   __atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, 
__ATOMIC_RELAXED);
return -ENOMEM;
}
 
@@ -84,7 +84,7 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue 
*rxq,
 
mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
if (!mbuf) {
-   rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail);
+   __atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, 
__ATOMIC_RELAXED);
return -ENOMEM;
}
 
@@ -459,7 +459,7 @@ static inline struct rte_mbuf *bnxt_tpa_end(
struct rte_mbuf *new_data = __bnxt_alloc_rx_data(rxq->mb_pool);
RTE_ASSERT(new_data != NULL);
if (!new_data) {
-   rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail);
+   __atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, 
__ATOMIC_RELAXED);
return NULL;
}
tpa_info->mbuf = new_data;
@@ -1369,7 +1369,8 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
rxr->tpa_info[i].mbuf =
__bnxt_alloc_rx_data(rxq->mb_pool);
if (!rxr->tpa_info[i].mbuf) {
-   
rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail);
+   
__atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1,
+   __ATOMIC_RELAXED);
return -ENOMEM;
}
}
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 208aa5616d..72169e8b35 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -578,7 +578,7 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 
bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, true);
bnxt_stats->rx_nombuf +=
-   rte_atomic64_read(&rxq->rx_mbuf_alloc_fail);
+   __atomic_load_n(&rxq->rx_mbuf_alloc_fail, 
__ATOMIC_RELAXED);
}
 
num_q_stats = RTE_MIN(bp->tx_cp_nr_rings,
@@ -632,7 +632,7 @@ int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev)
for (i = 0; i < bp->rx_cp_nr_rings; i++) {
struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 
-   rte_atomic64_clear(&rxq->rx_mbuf_alloc_fail);
+   rxq->rx_mbuf_alloc_fail = 0;
}
 
bnxt_clear_prev_stat(bp);
-- 
2.25.1



[PATCH 2/2] net/bnxt: remove some dead code

2022-05-09 Thread Ruifeng Wang
Removed some macros that were defined but not used in this driver.
As a result, all rte_smp_xx occurrences are removed from this driver.

Signed-off-by: Ruifeng Wang 
Reviewed-by: Kathleen Capella 
---
 drivers/net/bnxt/bnxt_cpr.h | 14 --
 1 file changed, 14 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 52db382c2f..dab6bed2ae 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -39,25 +39,11 @@ struct bnxt_db_info;
 #define B_CP_DB_DISARM(cpr)(*(uint32_t *)((cpr)->cp_db.doorbell) = \
 DB_KEY_CP | DB_IRQ_DIS)
 
-#define B_CP_DB_IDX_ARM(cpr, cons) \
-   (*(uint32_t *)((cpr)->cp_db.doorbell) = (DB_CP_REARM_FLAGS | \
-   (cons)))
-
-#define B_CP_DB_IDX_DISARM(cpr, cons)  do {\
-   rte_smp_wmb();  \
-   (*(uint32_t *)((cpr)->cp_db.doorbell) = (DB_CP_FLAGS |  \
-   (cons));\
-} while (0)
 #define B_CP_DIS_DB(cpr, raw_cons) \
rte_write32_relaxed((DB_CP_FLAGS |  \
DB_RING_IDX(&((cpr)->cp_db), raw_cons)),\
((cpr)->cp_db.doorbell))
 
-#define B_CP_DB(cpr, raw_cons, ring_mask)  \
-   rte_write32((DB_CP_FLAGS |  \
-   RING_CMPL((ring_mask), raw_cons)),  \
-   ((cpr)->cp_db.doorbell))
-
 struct bnxt_db_info {
void*doorbell;
union {
-- 
2.25.1



RE: [PATCH v3] sched: enable/disable TC OV at runtime

2022-05-09 Thread Ajmera, Megha
Hi Cristian, Marcin,

> > -Original Message-
> > From: Danilewicz, MarcinX 
> > Sent: Wednesday, April 27, 2022 10:24 AM
> > To: dev@dpdk.org; Singh, Jasvinder ;
> > Dumitrescu, Cristian 
> > Cc: Ajmera, Megha 
> > Subject: [PATCH v3] sched: enable/disable TC OV at runtime
> 
> We are not trying to enable/disable the traffic class oversubscription 
> feature at
> run-time, but at initialization. In fact, we should prohibit changing this 
> post-
> initialization.
>

If we only need this to be configured at initialization time, then we could
just as well take this flag in the subport config API itself; then there would
be no need for a new API. The purpose of the new API was to enable/disable
this feature at runtime.
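
A toy sketch of that first option, i.e. carrying the flag in the subport
configuration and branching on it in the grinder, with no separate runtime
API (all names and fields here are illustrative, not the library's):

#include <stdbool.h>
#include <stdio.h>

struct subport_params { bool tc_ov_enable; };
struct subport        { bool tc_ov_enabled; };

static void
subport_config(struct subport *s, const struct subport_params *p)
{
        s->tc_ov_enabled = p->tc_ov_enable;     /* fixed at configuration time */
}

static void
grinder_credits_update(const struct subport *s)
{
        if (s->tc_ov_enabled)
                puts("credits update with TC oversubscription watermark");
        else
                puts("credits update, plain path");
}

int main(void)
{
        struct subport_params p = { .tc_ov_enable = false };
        struct subport s;

        subport_config(&s, &p);
        grinder_credits_update(&s);
        return 0;
}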
 
> Also the name of the feature should not be abbreviated in the patch title.
> 
> I suggest you rework the title to:
> [PATCH] sched: enable traffic class oversubscription conditionally
> 
> >
> > Added new API to enable or disable TC over subscription for best
> > effort traffic class at subport level.
> > Added changes after review and increased throughput.
> >
> > By default TC OV is disabled.
> 
> It should be the other way around, the TC_OV should be enabled by default. The
> TC oversubscription is a more natural way to use this library, we usually 
> want to
> disable this feature just for better performance in case this functionality 
> is not
> needed. Please initialize the tc_ov flag accordingly.
>

In the original code, this feature has always been disabled as it impacts
performance.
So, in my opinion, we should keep it disabled by default and let the user
enable it when required.
 
> >
> > Signed-off-by: Marcin Danilewicz 
> > ---
> >  lib/sched/rte_sched.c | 189
> > +++---
> >  lib/sched/rte_sched.h |  18 
> >  lib/sched/version.map |   3 +
> >  3 files changed, 178 insertions(+), 32 deletions(-)
> >
> > diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index
> > ec74bee939..6e7d81df46 100644
> > --- a/lib/sched/rte_sched.c
> > +++ b/lib/sched/rte_sched.c
> > @@ -213,6 +213,9 @@ struct rte_sched_subport {
> > uint8_t *bmp_array;
> > struct rte_mbuf **queue_array;
> > uint8_t memory[0] __rte_cache_aligned;
> > +
> > +   /* TC oversubscription activation */
> > +   int is_tc_ov_enabled;
> 
> How about we simplify the name of this variable to: tc_ov_enabled ?
> 
> >  } __rte_cache_aligned;
> >
> >  struct rte_sched_port {
> > @@ -1165,6 +1168,45 @@ rte_sched_cman_config(struct rte_sched_port
> > *port,  }  #endif
> >
> > +int
> > +rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
> > +   uint32_t subport_id,
> > +   bool tc_ov_enable)
> > +{
> > +   struct rte_sched_subport *s;
> > +   struct rte_sched_subport_profile *profile;
> > +
> > +   if (port == NULL) {
> > +   RTE_LOG(ERR, SCHED,
> > +   "%s: Incorrect value for parameter port\n", __func__);
> > +   return -EINVAL;
> > +   }
> > +
> > +   if (subport_id >= port->n_subports_per_port) {
> > +   RTE_LOG(ERR, SCHED,
> > +   "%s: Incorrect value for parameter subport id\n",
> > __func__);
> > +   return  -EINVAL;
> > +   }
> > +
> > +   s = port->subports[subport_id];
> > +   s->is_tc_ov_enabled = tc_ov_enable ? 1 : 0;
> > +
> > +   if (s->is_tc_ov_enabled) {
> > +   /* TC oversubscription */
> > +   s->tc_ov_wm_min = port->mtu;
> > +   s->tc_ov_period_id = 0;
> > +   s->tc_ov = 0;
> > +   s->tc_ov_n = 0;
> > +   s->tc_ov_rate = 0;
> > +
> > +   profile = port->subport_profiles + s->profile;
> > +   s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(profile-
> > >tc_period,
> > +   s->pipe_tc_be_rate_max);
> > +   s->tc_ov_wm = s->tc_ov_wm_max;
> > +   }
> > +   return 0;
> > +}
> 
> This function should not exist, please remove it and keep the initial code 
> that
> computes the tc_ov related variable regardless of whether tc_ov is enabled or
> not.
> 
> All the tc_ov related variables have the tc_ov particle in their name, so 
> there is
> no clash. This is initialization code, so no performance overhead. Let's keep 
> the
> code unmodified and compute both the tc_ov and the non-tc_ov varables at
> initialization, regardless of whether the feature is enabled or not.
> 
> This comment is applicable to all the initialization code, please adjust all 
> the init
> code accordingly. There should be no diff showing in the patch for any of the 
> init
> code!
> 
> For this file "rte_sched.c", your patch should contain just two additional 
> run-
> time functions, i.e. the non-tc-ov version of functions 
> grinder_credits_update()
> and grindler_credits_check(), and the small code required to test when to use
> the tc-ov vs. the non-tc_ov version, makes sense?
> 
> > +
> >  int
> >  is_tc_ov_enabled (struct rte_sched_port *port,
> > uint32_t subport_id,
> > @@ -1254,6 +1296,9 @@ rte_sched_subport_config(struct rte_sched_por