RE: [PATCH v2] net/i40e: updated 23.11 recommended matching list

2024-01-03 Thread Xing, Beilei



> -Original Message-
> From: Su, Simei 
> Sent: Thursday, January 4, 2024 11:15 AM
> To: Xing, Beilei ; Zhang, Qi Z 
> Cc: dev@dpdk.org; Yang, Qiming ; Su, Simei
> 
> Subject: [PATCH v2] net/i40e: updated 23.11 recommended matching list
> 
> Add suggested DPDK/kernel driver/firmware version matching list.
> 
> Signed-off-by: Simei Su 
> ---
> v2:
> * Add commit log.
> 
>  doc/guides/nics/i40e.rst | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
> index 3432eab..15689ac 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -104,6 +104,8 @@ For X710/XL710/XXV710,
> +--+---+--+
> | DPDK version | Kernel driver version | Firmware version |
> +==+===+==+
> +   |23.11 | 2.23.17   |   9.30   |
> +   +--+---+--+
> |23.07 | 2.22.20   |   9.20   |
> +--+---+--+
> |23.03 | 2.22.18   |   9.20   |
> @@ -167,6 +169,8 @@ For X722,
> +--+---+--+
> | DPDK version | Kernel driver version | Firmware version |
> +==+===+==+
> +   |23.11 | 2.23.17   |   6.20   |
> +   +--+---+--+
> |23.07 | 2.22.20   |   6.20   |
> +--+---+--+
> |23.03 | 2.22.18   |   6.20   |
> --
> 2.9.5


Acked-by: Beilei Xing 


RE: [PATCH 2/4] vfio: add VFIO IOMMUFD support

2023-12-24 Thread Xing, Beilei



> -Original Message-
> From: Stephen Hemminger 
> Sent: Saturday, December 23, 2023 1:17 AM
> To: Xing, Beilei 
> Cc: Burakov, Anatoly ; dev@dpdk.org;
> tho...@monjalon.net; ferruh.yi...@amd.com; Richardson, Bruce
> ; chen...@nvidia.com; Cao, Yahui
> 
> Subject: Re: [PATCH 2/4] vfio: add VFIO IOMMUFD support
> 
> On Fri, 22 Dec 2023 19:44:51 +
> beilei.x...@intel.com wrote:
> 
> > diff --git a/lib/eal/include/rte_vfio.h b/lib/eal/include/rte_vfio.h
> > index 22832afd0f..7a9b26b0f7 100644
> > --- a/lib/eal/include/rte_vfio.h
> > +++ b/lib/eal/include/rte_vfio.h
> > @@ -17,6 +17,8 @@ extern "C" {
> >  #include 
> >  #include 
> >
> > +#include 
> > +
> >  /*
> >   * determine if VFIO is present on the system
> >   */
> > @@ -28,6 +30,9 @@ extern "C" {
> >  #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 0, 0)
> >  #define HAVE_VFIO_DEV_REQ_INTERFACE
> >  #endif /* kernel version >= 4.0.0 */
> > +#if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 6, 0)
> > +#define VFIO_IOMMUFD_PRESENT
> > +#endif /* kernel version >= 6.6.0 */
> >  #endif /* RTE_EAL_VFIO */
> 
> Depending on kernel version macro is a mistake because many enterprise
> distro's backport features and do not change kernel version.

Makes sense. We defined VFIO_IOMMUFD_PRESENT with reference to
VFIO_PRESENT. Do you have a suggestion for this point? Thanks a lot.

> Also, it means the build and target machine have to be same kernel version.


RE: [PATCH 4/4] eal: add new args to choose VFIO mode

2023-12-24 Thread Xing, Beilei



> -Original Message-
> From: Stephen Hemminger 
> Sent: Saturday, December 23, 2023 1:18 AM
> To: Xing, Beilei 
> Cc: Burakov, Anatoly ; dev@dpdk.org;
> tho...@monjalon.net; ferruh.yi...@amd.com; Richardson, Bruce
> ; chen...@nvidia.com; Cao, Yahui
> 
> Subject: Re: [PATCH 4/4] eal: add new args to choose VFIO mode
> 
> On Fri, 22 Dec 2023 19:44:53 +
> beilei.x...@intel.com wrote:
> 
> > From: Beilei Xing 
> >
> > Since now Linux has both of VFIO Container/GROUP & VFIO IOMMUFD/CDEV
> > support, user can determine how to probe the PCI device by the new
> > args "--vfio-mode".
> >
> > Use "--vfio-mode=container" to choose VFIO Container/GROUP, and use
> > "--vfio-mode=iommufd" to choose VFIO IOMMUFD/CDEV.
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Yahui Cao 
> 
> Can't this be automatic, users don't need more EAL options.

Thanks for your review. Since Linux currently supports both VFIO Container/GROUP and
VFIO IOMMUFD/CDEV, I think the user can choose which mode they want. The new
IOMMU features (e.g. PASID/SSID) may only be available through the VFIO IOMMUFD/CDEV
interface; VFIO Container/GROUP may be deprecated in the future, and then DPDK will
use iommufd mode automatically.



RE: [PATCH v1] net/cpfl: fix incorrect status calculation

2023-10-23 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Tuesday, August 22, 2023 9:17 AM
> To: Zhang, Yuying ; dev@dpdk.org; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Subject: [PATCH v1] net/cpfl: fix incorrect status calculation
>
> From: Yuying Zhang 
>
> Fix the incorrect ingress packet number calculation.
>
> Fixes: e3289d8fb63f ("net/cpfl: support basic statistics")
>
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index c4ca9343c3..8c02c84c5b 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -304,7 +304,7 @@ cpfl_dev_stats_get(struct rte_eth_dev *dev, struct
> rte_eth_stats *stats)
>
>   idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>   stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> - pstats->rx_broadcast - pstats->rx_discards;
> +   pstats->rx_broadcast;
>   stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>   pstats->tx_unicast;
>   stats->imissed = pstats->rx_discards;
> --
> 2.25.1


Acked-by: Beilei Xing 



RE: [PATCH v1] net/idpf: fix incorrect status calculation

2023-10-23 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Thursday, September 28, 2023 1:05 PM
> To: Zhang, Yuying ; dev@dpdk.org; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: sta...@dpdk.org
> Subject: [PATCH v1] net/idpf: fix incorrect status calculation
> 
> From: Yuying Zhang 
> 
> Fix the incorrect ingress packet number calculation.
> 
> Fixes: 7514d76d407b ("net/idpf: add basic statistics")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 3af7cf0bb7..6ae2ac2681 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -281,7 +281,7 @@ idpf_dev_stats_get(struct rte_eth_dev *dev, struct
> rte_eth_stats *stats)
> 
>   idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>   stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> - pstats->rx_broadcast - pstats->rx_discards;
> + pstats->rx_broadcast;
>   stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>   pstats->tx_unicast;
>   stats->ierrors = pstats->rx_errors;
> --
> 2.34.1

Acked-by: Beilei Xing 



RE: [PATCH] net/cpfl: reset devargs during the first probe

2023-10-12 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Thursday, October 12, 2023 2:54 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org
> Subject: RE: [PATCH] net/cpfl: reset devargs during the first probe
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Thursday, October 12, 2023 12:47 AM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Xing, Beilei 
> > Subject: [PATCH] net/cpfl: reset devargs during the first probe
> >
> > From: Beilei Xing 
> 
> > Reset devargs during the first probe. Otherwise, probe again will be
> > affected.
> >
> > Fixes: a607312291b3 ("net/cpfl: support probe again")
> >
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> > index 762fbddfe6..890a027a1d 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -1611,11 +1611,12 @@ cpfl_parse_devargs(struct rte_pci_device
> > *pci_dev, struct cpfl_adapter_ext *adap
> > struct rte_kvargs *kvlist;
> > int ret;
> >
> > -   cpfl_args->req_vport_nb = 0;
> > -
> > if (devargs == NULL)
> > return 0;
> >
> > +   if (first)
> > +   memset(cpfl_args, 0, sizeof(struct cpfl_devargs));
> > +
> adapter is allocated by rte_zmalloc. It should be zero already.
> If I understand correctly, the memset to 0 should happen when the first probe is
> done or before probe again, but not at the beginning of the first probe.

But 'struct cpfl_devargs devargs' is a member of the adapter; if the 'memset to 0'
happens before probe again, adapter->devargs will only keep the last devargs.



RE: [PATCH v5 09/40] net/cpfl: check RSS hash algorithms

2023-10-11 Thread Xing, Beilei


> -Original Message-
> From: Ferruh Yigit 
> Sent: Thursday, October 12, 2023 1:21 AM
> To: Jie Hai ; dev@dpdk.org; Zhang, Yuying
> ; Xing, Beilei ; Zhang, Qi Z
> 
> Cc: lihuis...@huawei.com; fengcheng...@huawei.com;
> liudongdo...@huawei.com
> Subject: Re: [PATCH v5 09/40] net/cpfl: check RSS hash algorithms
> 
> On 10/11/2023 10:27 AM, Jie Hai wrote:
> > A new field 'algorithm' has been added to rss_conf, check it in case
> > of ignoring unsupported values.
> >
> > Signed-off-by: Jie Hai 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 6 ++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> > index c4ca9343c3e0..6acb6ce9fd22 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -450,6 +450,9 @@ cpfl_init_rss(struct idpf_vport *vport)
> > rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
> > nb_q = dev_data->nb_rx_queues;
> >
> > +   if (rss_conf->algorithm != RTE_ETH_HASH_FUNCTION_DEFAULT)
> > +   return -EINVAL;
> > +
> > if (rss_conf->rss_key == NULL) {
> > for (i = 0; i < vport->rss_key_size; i++)
> > vport->rss_key[i] = (uint8_t)rte_rand();
> > @@ -568,6 +571,9 @@ cpfl_rss_hash_update(struct rte_eth_dev *dev,
> > return -ENOTSUP;
> > }
> >
> > +   if (rss_conf->algorithm != RTE_ETH_HASH_FUNCTION_DEFAULT)
> > +   return -EINVAL;
> > +
> > if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
> > PMD_DRV_LOG(DEBUG, "No key to be configured");
> > goto skip_rss_key;
> 
> 
> cpfl also doesn't report RSS capability
> (doc/guides/nics/features/cpfl.ini), but it is clear that driver supports RSS.
> 
> @Yuying, @Beilei, can you please update the .ini file in a separate patch?

Thanks for the reminder, we will update the .ini file later.



RE: [PATCH v1] net/idpf: fix incorrect status calculation

2023-10-09 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Thursday, September 28, 2023 1:05 PM
> To: Zhang, Yuying ; dev@dpdk.org; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: sta...@dpdk.org
> Subject: [PATCH v1] net/idpf: fix incorrect status calculation
> 
> From: Yuying Zhang 
> 
> Fix the incorrect ingress packet number calculation.
> 
> Fixes: 7514d76d407b ("net/idpf: add basic statistics")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 3af7cf0bb7..6ae2ac2681 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -281,7 +281,7 @@ idpf_dev_stats_get(struct rte_eth_dev *dev, struct
> rte_eth_stats *stats)
> 
>   idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>   stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> - pstats->rx_broadcast - pstats->rx_discards;
> + pstats->rx_broadcast;

Seems rx_errors is also counted in the ingress packets.
Same for cpfl.

>   stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>   pstats->tx_unicast;
>   stats->ierrors = pstats->rx_errors;
> --
> 2.34.1



RE: [PATCH v3] net/cpfl: fix datapath function configuration

2023-09-25 Thread Xing, Beilei



> -Original Message-
> From: Wu, Wenjun1 
> Sent: Tuesday, September 26, 2023 2:05 PM
> To: dev@dpdk.org; Zhang, Yuying ; Xing, Beilei
> ; Zhang, Qi Z 
> Cc: Wu, Wenjun1 
> Subject: [PATCH v3] net/cpfl: fix datapath function configuration
> 
> Vector datapath does not support any advanced features for now, so disable
> vector path if TX checksum offload or RX scatter is enabled.
> 
> Fixes: 2f39845891e6 ("net/cpfl: add AVX512 data path for single queue
> model")
> 
> Signed-off-by: Wenjun Wu 
> 
> ---
> v3: fix log typo.
> v2: disable vector path for scatter cases.
> ---
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
> index d8e9191196..479e1ddcb9 100644
> --- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
> +++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
> @@ -25,7 +25,11 @@
>   RTE_ETH_RX_OFFLOAD_TIMESTAMP)
>  #define CPFL_TX_NO_VECTOR_FLAGS (\
>   RTE_ETH_TX_OFFLOAD_TCP_TSO |\
> - RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
> + RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
> + RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
> + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
> + RTE_ETH_TX_OFFLOAD_UDP_CKSUM |  \
> + RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
> 
>  static inline int
>  cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
> @@ -81,6 +85,9 @@ cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
>   struct cpfl_rx_queue *cpfl_rxq;
>   int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
> 
> + if (dev->data->scattered_rx)
> + return CPFL_SCALAR_PATH;
> +
>   for (i = 0; i < dev->data->nb_rx_queues; i++) {
>   cpfl_rxq = dev->data->rx_queues[i];
>   default_ret = cpfl_rx_vec_queue_default(&cpfl_rxq->base);
> --
> 2.34.1

Acked-by: Beilei Xing 


RE: [PATCH v3 00/17] update idpf base code

2023-09-14 Thread Xing, Beilei



> -Original Message-
> From: Su, Simei 
> Sent: Friday, September 15, 2023 10:17 AM
> To: Wu, Jingjing ; Xing, Beilei 
> ;
> Zhang, Qi Z 
> Cc: dev@dpdk.org; Liu, Mingxia ; Qiao, Wenjing
> ; Su, Simei 
> Subject: [PATCH v3 00/17] update idpf base code
> 
> This patch set updates idpf base code.
> 
> v3:
> * Fix coding style issue.
> * Modify unexpected error in the update version patch.
> 
> v2:
> * Add two patches for share code update.
> * Add version update.
> * Fix coding style issue.
> 
> Simei Su (17):
>   common/idpf/base: enable support for physical port stats
>   common/idpf/base: add miss completion capabilities
>   common/idpf/base: initial PTP support
>   common/idpf/base: remove mailbox registers
>   common/idpf/base: add some adi specific fields
>   common/idpf/base: add necessary check
>   common/idpf/base: add union for SW cookie fields in ctlq msg
>   common/idpf/base: define non-flexible size structure for ADI
>   common/idpf/base: use local pointer before updating 'CQ out'
>   common/idpf/base: use 'void' return type
>   common/idpf/base: refactor descriptor 'ret val' stripping
>   common/idpf/base: refine comments and alignment
>   common/idpf/base: use GENMASK macro
>   common/idpf/base: use 'type functionname(args)' style
>   common/idpf/base: don't declare union with 'flex'
>   common/idpf/base: remove unused Tx descriptor types
>   common/idpf/base: update version
> 
>  .mailmap  |   7 +
>  drivers/common/idpf/base/README   |   2 +-
>  drivers/common/idpf/base/idpf_common.c|  10 +-
>  drivers/common/idpf/base/idpf_controlq.c  |  64 ++--
>  drivers/common/idpf/base/idpf_controlq_api.h  |  17 +-
>  .../common/idpf/base/idpf_controlq_setup.c|   5 +-
>  drivers/common/idpf/base/idpf_lan_pf_regs.h   |  33 +-
>  drivers/common/idpf/base/idpf_lan_txrx.h  | 285 +---
>  drivers/common/idpf/base/idpf_lan_vf_regs.h   |  41 ++-
>  drivers/common/idpf/base/idpf_osdep.h |   7 +
>  drivers/common/idpf/base/idpf_prototype.h |   2 +-
>  drivers/common/idpf/base/siov_regs.h  |  13 +-
>  drivers/common/idpf/base/virtchnl2.h  | 303 --
>  13 files changed, 462 insertions(+), 327 deletions(-)
> 
> --
> 2.25.1

Acked-by: Beilei Xing 


RE: [PATCH v4 0/3] refactor single queue Tx data path

2023-09-13 Thread Xing, Beilei



> -Original Message-
> From: Su, Simei 
> Sent: Thursday, September 14, 2023 9:50 AM
> To: Wu, Jingjing ; Xing, Beilei
> ; Zhang, Qi Z 
> Cc: dev@dpdk.org; Wu, Wenjun1 ; Su, Simei
> 
> Subject: [PATCH v4 0/3] refactor single queue Tx data path
> 
> 1. Refine single queue Tx data path for idpf common module.
> 2. Refine Tx queue setup for idpf pmd.
> 3. Refine Tx queue setup for cpfl pmd.
> 
> v4:
> * Split one patch into patchset.
> * Refine commit title and commit log.
> 
> v3:
> * Change context TSO descriptor from base mode to flex mode.
> 
> v2:
> * Refine commit title and commit log.
> * Remove redundant definition.
> * Modify base mode context TSO descriptor.
> 
> Simei Su (3):
>   common/idpf: refactor single queue Tx data path
>   net/idpf: refine Tx queue setup
>   net/cpfl: refine Tx queue setup
> 
>  drivers/common/idpf/idpf_common_rxtx.c| 39 +--
>  drivers/common/idpf/idpf_common_rxtx.h|  2 +-
>  drivers/common/idpf/idpf_common_rxtx_avx512.c | 37 +-
>  drivers/net/cpfl/cpfl_rxtx.c  |  2 +-
>  drivers/net/idpf/idpf_rxtx.c  |  2 +-
>  5 files changed, 40 insertions(+), 42 deletions(-)
> 
> --
> 2.25.1

Acked-by: Beilei Xing 



RE: [PATCH v3] common/idpf: refactor single queue Tx function

2023-09-12 Thread Xing, Beilei



> -Original Message-
> From: Su, Simei 
> Sent: Friday, September 8, 2023 6:28 PM
> To: Wu, Jingjing ; Xing, Beilei 
> ;
> Zhang, Qi Z 
> Cc: dev@dpdk.org; Wu, Wenjun1 ; Su, Simei
> 
> Subject: [PATCH v3] common/idpf: refactor single queue Tx function
> 
> This patch replaces flex Tx descriptor with base Tx descriptor to align with 
> kernel
> driver practice.
> 
> Signed-off-by: Simei Su 
> ---
> v3:
> * Change context TSO descriptor from base mode to flex mode.
> 
> v2:
> * Refine commit title and commit log.
> * Remove redundant definition.
> * Modify base mode context TSO descriptor.
> 
>  drivers/common/idpf/idpf_common_rxtx.c| 39 +--
>  drivers/common/idpf/idpf_common_rxtx.h|  2 +-
>  drivers/common/idpf/idpf_common_rxtx_avx512.c | 37 +-
>  drivers/net/idpf/idpf_rxtx.c  |  2 +-
>  4 files changed, 39 insertions(+), 41 deletions(-)
> 


> diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
> index 3e3d81ca6d..64f2235580 100644
> --- a/drivers/net/idpf/idpf_rxtx.c
> +++ b/drivers/net/idpf/idpf_rxtx.c
> @@ -74,7 +74,7 @@ idpf_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t
> queue_idx,
>   ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
> IDPF_DMA_MEM_ALIGN);
>   else
> - ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
> + ring_size = RTE_ALIGN(len * sizeof(struct idpf_base_tx_desc),

Check if idpf_flex_tx_desc is used in cpfl PMD.

> IDPF_DMA_MEM_ALIGN);
>   rte_memcpy(ring_name, "idpf Tx ring", sizeof("idpf Tx ring"));
>   break;
> --
> 2.25.1



RE: [PATCH v1 5/5] net/cpfl: add fxp flow engine

2023-08-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 5/5] net/cpfl: add fxp flow engine
> 
> Adapt fxp low level as a flow engine.
> 
> Signed-off-by: Yuying Zhang 
> Signed-off-by: Qi Zhang 
> ---
>  drivers/net/cpfl/cpfl_ethdev.h  |  85 
>  drivers/net/cpfl/cpfl_flow_engine_fxp.c | 610 
>  drivers/net/cpfl/meson.build|   1 +
>  3 files changed, 696 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_flow_engine_fxp.c
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index 63bcc5551f..d7e9ea1a74 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -92,6 +92,8 @@
<...>
> +static inline uint16_t
> +cpfl_get_vsi_id(struct cpfl_itf *itf)
> +{
> + struct cpfl_adapter_ext *adapter = itf->adapter;
> + struct cpfl_vport_info *info;
> + uint32_t vport_id;
> + int ret;
> + struct cpfl_vport_id vport_identity;
> +
> + if (!itf)
> + return CPFL_INVALID_HW_ID;
> +
> + if (itf->type == CPFL_ITF_TYPE_REPRESENTOR) {
> + struct cpfl_repr *repr = (void *)itf;
> +
> + return repr->vport_info->vport_info.vsi_id;
> + } else if (itf->type == CPFL_ITF_TYPE_VPORT) {
> + vport_id = ((struct cpfl_vport *)itf)->base.vport_id;
> + vport_identity.func_type = CPCHNL2_FUNC_TYPE_PF;
> + /* host: HOST0_CPF_ID, acc: ACC_CPF_ID */
> + vport_identity.pf_id = ACC_CPF_ID;
> + vport_identity.vf_id = 0;
> + vport_identity.vport_id = vport_id;
> +
> + ret = rte_hash_lookup_data(adapter->vport_map_hash,
> &vport_identity,
> +   (void **)&info);
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR, "vport id not exist");
> + goto err;
> + }
> +
> + /* rte_spinlock_unlock(&adapter->vport_map_lock); */
 
So do we need the lock in this function?

> + return info->vport_info.vsi_id;
> + }
> +
> +err:
> + /* rte_spinlock_unlock(&adapter->vport_map_lock); */
> + return CPFL_INVALID_HW_ID;
> +}
> +
<...>
> 
>  #endif /* _CPFL_ETHDEV_H_ */
> diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
> b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
> new file mode 100644
> index 00..e10639c842
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
> @@ -0,0 +1,610 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include "cpfl_rules.h"
> +#include "cpfl_logs.h"
> +#include "cpfl_ethdev.h"
> +#include "cpfl_flow.h"
> +#include "cpfl_fxp_rule.h"
> +#include "cpfl_flow_parser.h"
> +#include "rte_memcpy.h"

Use #include <rte_memcpy.h> and move it above with the other includes?

> +
> +#define COOKIE_DEF   0x1000
> +#define PREC_MAX 7
> +#define PREC_DEF 1
> +#define PREC_SET 5
> +#define TYPE_ID  3
> +#define OFFSET   0x0a
> +#define HOST_ID_DEF  0
> +#define PF_NUM_DEF   0
> +#define PORT_NUM_DEF 0
> +#define RESP_REQ_DEF 2
> +#define PIN_TO_CACHE_DEF 0
> +#define CLEAR_MIRROR_1ST_STATE_DEF  0
> +#define FIXED_FETCH_DEF 0
> +#define PTI_DEF  0
> +#define MOD_OBJ_SIZE_DEF 0
> +#define PIN_MOD_CONTENT_DEF  0
> +
> +#define MAX_MOD_CONTENT_INDEX256
> +#define MAX_MR_ACTION_NUM 8

For the newly defined macros in the PMD, better to use the CPFL_ prefix.

> +
> +struct rule_info_meta {

cpfl_rule_info_meta.
Please check all other macros, global variables, structures and functions, etc. 
I will not comment for those.

BTW, Could you add some comments for the new structures and the members? Then 
it will be more readable.

> + struct cpfl_flow_pr_action pr_action;
> + uint32_t pr_num;
> + uint32_t mr_num;
> + uint32_t rule_num;
> + struct cpfl_rule_info rules[0];
> +};
> +
> +static uint32_t fxp_mod_idx_alloc(struct cpfl_adapter_ext *ad); static
> +void fxp_mod_idx_free(struct cpfl_adapter_ext *ad, uint32_t idx);
> +uint64_t rule_cookie = COOKIE_DEF;
> +
> +sta

RE: [PATCH v1 4/5] net/cpfl: add fxp rule module

2023-08-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 4/5] net/cpfl: add fxp rule module
> 
> Added low level fxp module for rule packing / creation / destroying.
> 
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/net/cpfl/cpfl_ethdev.h   |   4 +
>  drivers/net/cpfl/cpfl_fxp_rule.c | 288 +++
> drivers/net/cpfl/cpfl_fxp_rule.h |  87 ++
>  drivers/net/cpfl/meson.build |   1 +
>  4 files changed, 380 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.c  create mode 100644
> drivers/net/cpfl/cpfl_fxp_rule.h
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index c71f16ac60..63bcc5551f 100644

<...>
> +struct cpfl_lem_rule_info {
> + uint16_t prof_id;
> + uint8_t key[CPFL_MAX_KEY_LEN];
> + uint8_t key_byte_len;
> + uint8_t pin_to_cache;
> + uint8_t fixed_fetch;
> +};

Remove LEM related structures and members below.
 
> +#define CPFL_MAX_MOD_CONTENT_LEN 256
> +struct cpfl_mod_rule_info {
> + uint8_t mod_content[CPFL_MAX_MOD_CONTENT_LEN];
> + uint8_t mod_content_byte_len;
> + uint32_t mod_index;
> + uint8_t pin_mod_content;
> + uint8_t mod_obj_size;
> +};
> +
> +enum cpfl_rule_type {
> + CPFL_RULE_TYPE_NONE,
> + CPFL_RULE_TYPE_SEM,
> + CPFL_RULE_TYPE_LEM,
> + CPFL_RULE_TYPE_MOD
> +};
> +
> +struct cpfl_rule_info {
> + enum cpfl_rule_type type;
> + uint64_t cookie;
> + uint8_t host_id;
> + uint8_t port_num;
> + uint8_t resp_req;
> + /* TODO: change this to be dynamically allocated/reallocated */
> + uint8_t act_bytes[CPFL_MAX_RULE_ACTIONS * sizeof(union
> cpfl_action_set)];
> + uint8_t act_byte_len;
> + /* vsi is used for lem and lpm rules */
> + uint16_t vsi;
> + uint8_t clear_mirror_1st_state;
> + /* mod related fields */
> + union {
> + struct cpfl_mod_rule_info mod;
> + struct cpfl_sem_rule_info sem;
> + struct cpfl_lem_rule_info lem;
> + };
> +};
> +
> +struct cpfl_meter_action_info {
> + uint8_t meter_logic_bank_id;
> + uint32_t meter_logic_idx;
> + uint8_t prof_id;
> + uint8_t slot;
> +};

Remove meter-related structures.




RE: [PATCH v1 4/5] net/cpfl: add fxp rule module

2023-08-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 4/5] net/cpfl: add fxp rule module
> 
> Added low level fxp module for rule packing / creation / destroying.
> 
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/net/cpfl/cpfl_ethdev.h   |   4 +
>  drivers/net/cpfl/cpfl_fxp_rule.c | 288 +++
> drivers/net/cpfl/cpfl_fxp_rule.h |  87 ++
>  drivers/net/cpfl/meson.build |   1 +
>  4 files changed, 380 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.c  create mode 100644
> drivers/net/cpfl/cpfl_fxp_rule.h
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index c71f16ac60..63bcc5551f 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -145,10 +145,14 @@ enum cpfl_itf_type {
> 
>  TAILQ_HEAD(cpfl_flow_list, rte_flow);
> 
> +#define CPFL_FLOW_BATCH_SIZE  490
>  struct cpfl_itf {
>   enum cpfl_itf_type type;
>   struct cpfl_adapter_ext *adapter;
>   struct cpfl_flow_list flow_list;
> + struct idpf_dma_mem flow_dma;
> + struct idpf_dma_mem dma[CPFL_FLOW_BATCH_SIZE];
> + struct idpf_ctlq_msg msg[CPFL_FLOW_BATCH_SIZE];
>   void *data;
>  };
> 
> diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c 
> b/drivers/net/cpfl/cpfl_fxp_rule.c
> new file mode 100644
> index 00..936f57e4fa
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_fxp_rule.c
> @@ -0,0 +1,288 @@

<...>

> +int
> +cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, uint16_t
> num_q_msg,
> +   struct idpf_ctlq_msg q_msg[])
> +{
> + int retries = 0;
> + struct idpf_dma_mem *dma;
> + uint16_t i;
> + uint16_t buff_cnt;
> + int ret = 0;
> +
> + retries = 0;
> + while (retries <= CTLQ_RECEIVE_RETRIES) {
> + rte_delay_us_sleep(10);
> + ret = cpfl_vport_ctlq_recv(cq, &num_q_msg, &q_msg[0]);
> +
> + if (ret && ret != CPFL_ERR_CTLQ_NO_WORK &&
> + ret != CPFL_ERR_CTLQ_ERROR) {
> + PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err:
> 0x%4x\n", ret);
> + retries++;
> + continue;
> + }
> +
> + if (ret == CPFL_ERR_CTLQ_NO_WORK) {
> + retries++;
> + continue;
> + }
> +
> + if (ret == CPFL_ERR_CTLQ_EMPTY)
> + break;
> +
> + ret = cpfl_process_rx_ctlq_msg(num_q_msg, q_msg);
> + if (ret) {
> + PMD_INIT_LOG(WARNING, "failed to process rx_ctrlq
> msg");
> + break;

Don't break here; the buffers still need to be posted back to the receive ring.
Please check the internal fix patch.

> + }
> +
> + for (i = 0; i < num_q_msg; i++) {
> + if (q_msg[i].data_len > 0)
> + dma = q_msg[i].ctx.indirect.payload;
> + else
> + dma = NULL;
> +
> + buff_cnt = dma ? 1 : 0;
> + ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt,
> &dma);
> + if (ret)
> + PMD_INIT_LOG(WARNING, "could not posted
> recv bufs\n");
> + }
> + break;
> + }
> +
> + if (retries > CTLQ_RECEIVE_RETRIES) {
> + PMD_INIT_LOG(ERR, "timed out while polling for receive
> response");
> + ret = -1;
> + }
> +
> + return ret;
> +}
> +
> +static int
> +pack_mod_rule(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,

Please follow the function name style, how about cpfl_mod_rule_pack?

> +   struct idpf_ctlq_msg *msg)

<...>
> +
> +static int pack_default_rule(struct cpfl_rule_info *rinfo, struct 
> idpf_dma_mem

static int
cpfl_default_rule_pack

> *dma,
> +  struct idpf_ctlq_msg *msg, bool add) {
<...>
> +
> +static int pack_rule(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,

static int
cpfl_rule_pack

> +  struct idpf_ctlq_msg *msg, bool add) {
> + int ret = 0;
> +
> + if (rinfo->type == CPFL_RULE_TYPE_SEM) {
> + if (pack_default_rule(rinfo, dma, msg, add) < 0)
> + ret = -1;
> + } else if (rinfo->type == CPFL_RULE_TYPE_MOD) {
> + if (pac

RE: [PATCH v1 3/5] net/cpfl: add cpfl control queue message handle

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 3/5] net/cpfl: add cpfl control queue message handle
> 
> Add cpfl driver control queue message handle, including
> send/receive/clean/post_rx_buffs.
> 
> Signed-off-by: Yuying Zhang 

Seems all the functions are similar to the functions in the idpf shared code.
Can we use idpf_ctrlq_xxx directly?
BTW, the new field from the 2nd patch is not used here, so is the new 'data'
field necessary?


RE: [PATCH v1 2/5] common/idpf/base: refine idpf ctlq message structure

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 2/5] common/idpf/base: refine idpf ctlq message structure
> 
> Add cfg data in idpf_ctlq_msg.

Could you expand the commit log to describe why we need this field?

> 
> Signed-off-by: Yuying Zhang 
> ---
>  drivers/common/idpf/base/idpf_controlq_api.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/common/idpf/base/idpf_controlq_api.h
> b/drivers/common/idpf/base/idpf_controlq_api.h
> index 3780304256..b38b10465c 100644
> --- a/drivers/common/idpf/base/idpf_controlq_api.h
> +++ b/drivers/common/idpf/base/idpf_controlq_api.h
> @@ -65,6 +65,9 @@ struct idpf_ctlq_msg {
>   u32 chnl_opcode;
>   u32 chnl_retval;
>   } mbx;
> + struct {
> + u64 data;
> + } cfg;
>   } cookie;
>   union {
>  #define IDPF_DIRECT_CTX_SIZE 16
> --
> 2.25.1



RE: [PATCH v1 1/5] net/cpfl: setup rte flow skeleton

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Yuying 
> Sent: Saturday, August 12, 2023 3:55 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Qi Z
> ; Wu, Jingjing 
> Cc: Zhang, Yuying 
> Subject: [PATCH v1 1/5] net/cpfl: setup rte flow skeleton
> 
> Setup the rte_flow backend skeleton. Introduce the framework to support
> different engines as rte_flow backend. Bridge rte_flow driver API to flow
> engines.
> 
> Signed-off-by: Yuying Zhang 
> Signed-off-by: Qi Zhang 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c |  54 ++
>  drivers/net/cpfl/cpfl_ethdev.h |   5 +
>  drivers/net/cpfl/cpfl_flow.c   | 331 +
>  drivers/net/cpfl/cpfl_flow.h   |  88 +
>  drivers/net/cpfl/meson.build   |   3 +-
>  5 files changed, 480 insertions(+), 1 deletion(-)  create mode 100644
> drivers/net/cpfl/cpfl_flow.c  create mode 100644 drivers/net/cpfl/cpfl_flow.h
> 
<...>
> 
> +static int
> +cpfl_dev_flow_ops_get(struct rte_eth_dev *dev,
> +   const struct rte_flow_ops **ops) {
> + struct cpfl_itf *itf;
> +
> + if (!dev)
> + return -EINVAL;
> +
> + itf = CPFL_DEV_TO_ITF(dev);
> +
> + /* only vport support rte_flow */
> + if (itf->type != CPFL_ITF_TYPE_VPORT)
> + return -ENOTSUP;

Do we need this check? Seems this function is only for vports, not
representors.

> +#ifdef CPFL_FLOW_JSON_SUPPORT
> + *ops = &cpfl_flow_ops;
> +#else
> + *ops = NULL;
> + PMD_DRV_LOG(NOTICE, "not support rte_flow, please install json-c
> +library."); #endif
> + return 0;
> +}
> +
<...>
> +
> +static int
> +cpfl_flow_valid_attr(const struct rte_flow_attr *attr,
> +  struct rte_flow_error *error)

Better to use cpfl_flow_attr_valid to align with cpfl_flow_param_valid.

> +{
> + if (attr->priority > 6) {

What does 6 mean? Better to define a macro to describe it.

> + rte_flow_error_set(error, EINVAL,
> +RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
> +attr, "Only support priority 0-6.");
> + return -rte_errno;
> + }
> +
> + return 0;
> +}
> +
<...>
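The magic-number comment above can be illustrated with a minimal sketch. The macro name below is hypothetical (the driver may pick a different one); only the value 6 comes from the quoted patch:

```c
#include <stdbool.h>

/* Hypothetical name for the quoted patch's magic number 6. */
#define CPFL_FLOW_ATTR_MAX_PRIORITY 6

/* Returns true when the attribute priority is in the supported 0..6 range. */
static bool
cpfl_flow_attr_priority_valid(unsigned int priority)
{
	return priority <= CPFL_FLOW_ATTR_MAX_PRIORITY;
}
```

With the macro in place, the error string in cpfl_flow_valid_attr() can also reference it instead of hard-coding "0-6".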
> +struct rte_flow *
> +cpfl_flow_create(struct rte_eth_dev *dev __rte_unused,
> +  const struct rte_flow_attr *attr __rte_unused,
> +  const struct rte_flow_item pattern[] __rte_unused,
> +  const struct rte_flow_action actions[] __rte_unused,
> +  struct rte_flow_error *error __rte_unused) {
> + struct cpfl_itf *itf = CPFL_DEV_TO_ITF(dev);
> + struct cpfl_flow_engine *engine;
> + struct rte_flow *flow;
> + void *meta;
> + int ret;
> +
> + flow = rte_malloc(NULL, sizeof(struct rte_flow), 0);
> + if (!flow) {
> + rte_flow_error_set(error, ENOMEM,
> +RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> +"Failed to allocate memory");
> + return NULL;
> + }
> +
> + ret = cpfl_flow_param_valid(attr, pattern, actions, error);
> + if (ret) {
> + rte_free(flow);
> + return NULL;
> + }
> +
> + engine = cpfl_flow_engine_match(dev, attr, pattern, actions, &meta);
> + if (!engine) {
> + rte_flow_error_set(error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +NULL, "No matched engine");
> + rte_free(flow);
> + return NULL;
> + }

cpfl_flow_param_valid and cpfl_flow_engine_match can be replaced with a
cpfl_flow_validate function.

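The suggested factoring can be sketched as follows. The stub below only models the control flow (parameter validation first, then engine matching); the _stub names and return codes are illustrative, not the driver's API:

```c
/* Sketch of a combined cpfl_flow_validate() step: fail on invalid
 * parameters first, then on a missing engine, so cpfl_flow_create()
 * can call one helper instead of two. */
static int
cpfl_flow_validate_stub(int param_ret, int engine_found)
{
	if (param_ret != 0)
		return param_ret;	/* invalid attr/pattern/actions */
	if (!engine_found)
		return -1;		/* no matched engine */
	return 0;
}
```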
> +
> + if (!engine->create) {
> + rte_flow_error_set(error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +NULL, "No matched flow creation function");
> + rte_free(flow);
> + return NULL;
> + }
> +
> + ret = engine->create(dev, flow, meta, error);
> + if (ret) {
> + rte_free(flow);
> + return NULL;
> + }
> +
> + flow->engine = engine;
> + TAILQ_INSERT_TAIL(&itf->flow_list, flow, next);
> +
> + return flow;
> +}
> +

<...>

> +
> +int
> +cpfl_flow_query(struct rte_eth_dev *dev __rte_unused,
> + struct rte_flow *flow __rte_unused,
> + const struct rte_flow_action *actions __rte_unused,
> + void *data __rte_unused,
> + struct rte_flow_error *error __rte_unused) {

Why is __rte_unused used here?

> + struct rte_flow_query_count *count = data;
> + int ret 

RE: [PATCH 3/4] net/cpfl: introduce CPF common library

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Friday, August 11, 2023 5:31 PM
> To: Zhang, Yuying ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Liu, Mingxia ; Qiao, Wenjing
> 
> Subject: [PATCH 3/4] net/cpfl: introduce CPF common library
> 
> Add common library support for CPFL rte_flow to
> create/delete rules.
> 
> Signed-off-by: Wenjing Qiao 
> ---


> +int
> +cpfl_ctlq_alloc_ring_res(struct idpf_hw *hw __rte_unused, struct

hw is used, so remove __rte_unused.
Please check other functions.

> idpf_ctlq_info *cq,
> +  struct cpfl_ctlq_create_info *qinfo)
> +{



RE: [PATCH 4/4] net/cpfl: setup ctrl path

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Friday, August 11, 2023 5:31 PM
> To: Zhang, Yuying ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Liu, Mingxia ; Qiao, Wenjing
> ; Zhang, Qi Z 
> Subject: [PATCH 4/4] net/cpfl: setup ctrl path
> 
> Setup the control vport and control queue for flow offloading.
> 
> Signed-off-by: Yuying Zhang 
> Signed-off-by: Beilei Xing 
> Signed-off-by: Qi Zhang 
> Signed-off-by: Wenjing Qiao 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 270 -
> drivers/net/cpfl/cpfl_ethdev.h |  14 ++  drivers/net/cpfl/cpfl_vchnl.c  | 144
> ++
>  3 files changed, 425 insertions(+), 3 deletions(-)

<...>

> +
> +static void
> +cpfl_remove_cfgqs(struct cpfl_adapter_ext *adapter) {
> + struct idpf_hw *hw = (struct idpf_hw *)(&adapter->base.hw);
> + struct cpfl_ctlq_create_info *create_cfgq_info;
> + int i;
> +
> + create_cfgq_info = adapter->cfgq_info;
> +
> + for (i = 0; i < CPFL_CFGQ_NUM; i++) {
> + cpfl_vport_ctlq_remove(hw, adapter->ctlqp[i]);
> + if (create_cfgq_info[i].ring_mem.va)
> + idpf_free_dma_mem(&adapter->base.hw,
> &create_cfgq_info[i].ring_mem);
> + if (create_cfgq_info[i].buf_mem.va)
> + idpf_free_dma_mem(&adapter->base.hw,
> &create_cfgq_info[i].buf_mem);

 &adapter->base.hw can be replaced with hw.



RE: [PATCH v2 2/4] net/cpfl: add flow json parser

2023-08-24 Thread Xing, Beilei



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Friday, August 11, 2023 6:00 PM
> To: Zhang, Yuying ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Liu, Mingxia ; Qiao, Wenjing
> 
> Subject: [PATCH v2 2/4] net/cpfl: add flow json parser
> 
> A JSON file will be used to direct DPDK CPF PMD to
> parse rte_flow tokens into low level hardware resources
> defined in a DDP package file.
> 
> Signed-off-by: Wenjing Qiao 
> ---
> Depends-on: series-29139 ("net/cpfl: support port representor")
> ---
>  drivers/net/cpfl/cpfl_flow_parser.c | 1758 +++
>  drivers/net/cpfl/cpfl_flow_parser.h |  205 
>  drivers/net/cpfl/meson.build|3 +
>  3 files changed, 1966 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_flow_parser.c
>  create mode 100644 drivers/net/cpfl/cpfl_flow_parser.h
> 
> diff --git a/drivers/net/cpfl/cpfl_flow_parser.c
> b/drivers/net/cpfl/cpfl_flow_parser.c
> new file mode 100644
> index 00..b4635813ff
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_flow_parser.c
> @@ -0,0 +1,1758 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +
> +#include 
> +#include 
> +#include 
> +
> +#include "cpfl_flow_parser.h"
> +#include "cpfl_ethdev.h"
> +#include "rte_malloc.h"
> +
> +static enum rte_flow_item_type
> +cpfl_get_item_type_by_str(const char *type)
> +{
> + if (strcmp(type, "eth") == 0)
> + return RTE_FLOW_ITEM_TYPE_ETH;
> + else if (strcmp(type, "ipv4") == 0)
> + return RTE_FLOW_ITEM_TYPE_IPV4;
> + else if (strcmp(type, "tcp") == 0)
> + return RTE_FLOW_ITEM_TYPE_TCP;
> + else if (strcmp(type, "udp") == 0)
> + return RTE_FLOW_ITEM_TYPE_UDP;
> + else if (strcmp(type, "vxlan") == 0)
> + return RTE_FLOW_ITEM_TYPE_VXLAN;
> + else if (strcmp(type, "icmp") == 0)
> + return RTE_FLOW_ITEM_TYPE_ICMP;
> + else if (strcmp(type, "vlan") == 0)
> + return RTE_FLOW_ITEM_TYPE_VLAN;
> +
> + PMD_DRV_LOG(ERR, "Not support this type: %s.", type);
> + return RTE_FLOW_ITEM_TYPE_VOID;
> +}
> +
> +static enum rte_flow_action_type
> +cpfl_get_action_type_by_str(const char *type)
> +{
> + if (strcmp(type, "vxlan_encap") == 0)
> + return RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP;
> +
> + PMD_DRV_LOG(ERR, "Not support this type: %s.", type);

Why does the function only support vxlan_encap? It's a bit confusing.
If it's only for vxlan_encap, better to change the function name.

> + return RTE_FLOW_ACTION_TYPE_VOID;
> +}
> +
> +static const char *
> +cpfl_json_object_to_string(json_object *object, const char *name)
> +{
> + json_object *subobject;
> +
> + if (!object) {
> + PMD_DRV_LOG(ERR, "object doesn't exist.");
> + return NULL;
> + }
> + subobject = json_object_object_get(object, name);
> + if (!subobject) {
> + PMD_DRV_LOG(ERR, "%s doesn't exist.", name);
> + return 0;

Return NULL?

> + }
> + return json_object_get_string(subobject);
> +}
> +

<...>

> +static int
> +cpfl_flow_js_pattern_key_proto_field(json_object *cjson_field,
> +  struct cpfl_flow_js_pr_key_proto *js_field)
> +{
> + if (cjson_field) {

How about an early return first?
if (!cjson_field)
	return 0;

> + int len, i;
> +
> + len = json_object_array_length(cjson_field);
> + js_field->fields_size = len;
> + if (len == 0)
> + return 0;
> + js_field->fields =
> + rte_malloc(NULL, sizeof(struct
> cpfl_flow_js_pr_key_proto_field) * len, 0);
> + if (!js_field->fields) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + for (i = 0; i < len; i++) {
> + json_object *object;
> + const char *name, *mask;
> +
> + object = json_object_array_get_idx(cjson_field, i);
> + name = cpfl_json_object_to_string(object, "name");
> + if (!name) {
> + rte_free(js_field->fields);
> + PMD_DRV_LOG(ERR, "Can not parse string
> 'name'.");
> + return -EINVAL;
> +

RE: [PATCH v2 1/4] net/cpfl: parse flow parser file in devargs

2023-08-23 Thread Xing, Beilei



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Friday, August 11, 2023 6:00 PM
> To: Zhang, Yuying ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Liu, Mingxia ; Qiao, Wenjing
> 
> Subject: [PATCH v2 1/4] net/cpfl: parse flow parser file in devargs
> 
> Add devargs "flow_parser" for rte_flow json parser.
> 
> Signed-off-by: Wenjing Qiao 
> ---
> Depends-on: series-29139 ("net/cpfl: support port representor")
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 30 +-
> drivers/net/cpfl/cpfl_ethdev.h |  3 +++
>  drivers/net/cpfl/meson.build   |  6 ++
>  3 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c 
> index
> 8dbc175749..a2f308fb86 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -21,6 +21,7 @@
>  #define CPFL_TX_SINGLE_Q "tx_single"
>  #define CPFL_RX_SINGLE_Q "rx_single"
>  #define CPFL_VPORT   "vport"
> +#define CPFL_FLOW_PARSER "flow_parser"
> 
>  rte_spinlock_t cpfl_adapter_lock;
>  /* A list for all adapters, one adapter matches one PCI device */ @@ -32,6
> +33,9 @@ static const char * const cpfl_valid_args_first[] = {
>   CPFL_TX_SINGLE_Q,
>   CPFL_RX_SINGLE_Q,
>   CPFL_VPORT,
> +#ifdef CPFL_FLOW_JSON_SUPPORT
> + CPFL_FLOW_PARSER,
> +#endif
>   NULL
>  };
> 
> @@ -1671,6 +1675,19 @@ parse_repr(const char *key __rte_unused, const
> char *value, void *args)
>   return 0;
>  }
> 
> +#ifdef CPFL_FLOW_JSON_SUPPORT
> +static int
> +parse_parser_file(const char *key, const char *value, void *args) {
> + char *name = args;
> +
> + PMD_DRV_LOG(DEBUG, "value:\"%s\" for key:\"%s\"", value, key);

Better to check whether the value is valid first, e.g. return an error if the
length > CPFL_FLOW_FILE_LEN.

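A minimal sketch of that validation, assuming CPFL_FLOW_FILE_LEN == 100 as in the quoted patch (the function name is illustrative; the real callback also receives the kvargs key):

```c
#include <string.h>

#define CPFL_FLOW_FILE_LEN 100

/* Reject values that would not fit the destination buffer before copying,
 * instead of silently truncating the file path. */
static int
parse_parser_file_checked(const char *value, char *name)
{
	if (value == NULL || strlen(value) >= CPFL_FLOW_FILE_LEN)
		return -1;
	strcpy(name, value);	/* safe: length checked above */
	return 0;
}
```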
> + strlcpy(name, value, CPFL_FLOW_FILE_LEN);
> +
> + return 0;
> +}
> +#endif
> +
>  static int
>  cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext
> *adapter, bool first)  { @@ -1719,7 +1736,18 @@ cpfl_parse_devargs(struct
> rte_pci_device *pci_dev, struct cpfl_adapter_ext *adap
>&adapter->base.is_rx_singleq);
>   if (ret != 0)
>   goto fail;
> -
> +#ifdef CPFL_FLOW_JSON_SUPPORT
> + if (rte_kvargs_get(kvlist, CPFL_FLOW_PARSER)) {
> + ret = rte_kvargs_process(kvlist, CPFL_FLOW_PARSER,
> +  &parse_parser_file, cpfl_args-
> >flow_parser);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "Failed to parser flow_parser,
> ret: %d", ret);
> + goto fail;
> + }
> + } else {
> + cpfl_args->flow_parser[0] = '\0';
> + }
> +#endif
>  fail:
>   rte_kvargs_free(kvlist);
>   return ret;
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index 5bd6f930b8..cf989a29b3 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -87,6 +87,8 @@
>  #define ACC_LCE_ID   15
>  #define IMC_MBX_EFD_ID   0
> 
> +#define CPFL_FLOW_FILE_LEN 100
> +
>  struct cpfl_vport_param {
>   struct cpfl_adapter_ext *adapter;
>   uint16_t devarg_id; /* arg id from user */ @@ -100,6 +102,7 @@
> struct cpfl_devargs {
>   uint16_t req_vport_nb;
>   uint8_t repr_args_num;
>   struct rte_eth_devargs repr_args[CPFL_REPR_ARG_NUM_MAX];
> + char flow_parser[CPFL_FLOW_FILE_LEN];
>  };
> 
>  struct p2p_queue_chunks_info {
> diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build index
> fb075c6860..0be25512c3 100644
> --- a/drivers/net/cpfl/meson.build
> +++ b/drivers/net/cpfl/meson.build
> @@ -38,3 +38,9 @@ if arch_subdir == 'x86'
>  cflags += ['-DCC_AVX512_SUPPORT']
>  endif
>  endif
> +
> +js_dep = dependency('json-c', required: false, method : 'pkg-config')
> +if js_dep.found()
> +dpdk_conf.set('CPFL_FLOW_JSON_SUPPORT', true)
> +ext_deps += js_dep
> +endif
> \ No newline at end of file
> --
> 2.34.1

Please update the doc to describe installing the json-c library first if the
json file needs to be parsed.



RE: [PATCH] doc: update doc for idpf and cpfl

2023-07-24 Thread Xing, Beilei



> -Original Message-
> From: Stephen Hemminger 
> Sent: Thursday, July 20, 2023 12:15 AM
> To: Xing, Beilei 
> Cc: Wu, Jingjing ; dev@dpdk.org
> Subject: Re: [PATCH] doc: update doc for idpf and cpfl
> 
> On Tue, 18 Jul 2023 17:02:12 +
> beilei.x...@intel.com wrote:
> 
> > From: Beilei Xing 
> >
> > Add recommended matching list for idpf pmd and cpfl pmd.
> >
> > Signed-off-by: Beilei Xing 
> > ---
> >  doc/guides/nics/cpfl.rst | 16 
> > doc/guides/nics/idpf.rst | 16 
> >  2 files changed, 32 insertions(+)
> >
> > diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst index
> > e88008e16e..258e89ed48 100644
> > --- a/doc/guides/nics/cpfl.rst
> > +++ b/doc/guides/nics/cpfl.rst
> > @@ -21,6 +21,22 @@ To get better performance on Intel platforms,
> > please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
> >
> >
> > +Recommended Matching List
> > +-
> > +
> > +It is highly recommended to upgrade the MEV-ts release to avoid the
> > +compatibility issues with the cpfl PMD.
> > +Here is the suggested matching list which has been tested and verified.
> > +
> > +   ++--+
> > +   | DPDK   |  MEV-ts release  |
> > +   ++==+
> > +   |23.03   |  0.6.0   |
> > +   ++--+
> > +   |23.07   |  0.9.1   |
> > +   ++--+
> >
> 
> Since 23.03 is not an LTS release, it will not be supported when 23.07 is 
> released.
> Probably best not to clutter up docs with 23.03


Thanks for the comments, will remove 23.03 in next version.


RE: [PATCH] doc: update release note for iavf AVX2 feature

2023-07-02 Thread Xing, Beilei



> -Original Message-
> From: Wenzhuo Lu 
> Sent: Thursday, June 29, 2023 8:06 AM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo 
> Subject: [PATCH] doc: update release note for iavf AVX2 feature
> 
> Add the missed release note for iavf AVX2 feature in 23.07.
> 
> Fixes: 5712bf9d6e14 ("net/iavf: add Tx AVX2 offload path")
> 
> Signed-off-by: Wenzhuo Lu 
> ---
>  doc/guides/rel_notes/release_23_07.rst | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_23_07.rst
> b/doc/guides/rel_notes/release_23_07.rst
> index 4459144..92c8a1d 100644
> --- a/doc/guides/rel_notes/release_23_07.rst
> +++ b/doc/guides/rel_notes/release_23_07.rst
> @@ -200,6 +200,12 @@ New Features
> 
>Enhanced the GRO library to support TCP packets over IPv6 network.
> 
> +* **Updated Intel iavf driver.**
> +
> +  * Added new RX and TX paths in the AVX2 code to use HW offload
> +features. When the HW offload features are configured to be used, the
> +offload paths are chosen automatically. In parallel the support for HW
> +offload features was removed from the legacy AVX2 paths.
> 
>  Removed Items
>  -
> --
> 1.8.3.1

Acked-by: Beilei Xing 



RE: [PATCH v8 03/14] net/cpfl: add haipin queue group during vport init

2023-06-05 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Monday, June 5, 2023 4:36 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Liu, Mingxia 
> Subject: RE: [PATCH v8 03/14] net/cpfl: add haipin queue group during vport
> init
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Monday, June 5, 2023 2:17 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > 
> > Subject: [PATCH v8 03/14] net/cpfl: add haipin queue group during
> > vport init
> >
> > From: Beilei Xing 
> >
> > This patch adds haipin queue group during vport init.
> >
> > Signed-off-by: Mingxia Liu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 133
> > +  drivers/net/cpfl/cpfl_ethdev.h |  18
> +
> >  drivers/net/cpfl/cpfl_rxtx.h   |   7 ++
> >  3 files changed, 158 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index e587155db6..c1273a7478 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
> > return 0;
> >  }
> >
> > +static int
> > +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) {
> > +   struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS]
> = {0};
> > +   int ret = 0;
> > +
> > +   qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
> > +   qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
> > +   ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS,
> qg_ids);
> > +   if (ret)
> > +   PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups");
> > +   return ret;
> > +}
> > +
> >  static int
> >  cpfl_dev_close(struct rte_eth_dev *dev)  { @@ -848,7 +862,12 @@
> > cpfl_dev_close(struct rte_eth_dev *dev)
> > struct cpfl_adapter_ext *adapter =
> > CPFL_ADAPTER_TO_EXT(vport->adapter);
> >
> > cpfl_dev_stop(dev);
> > +
> > +   if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq)
> > +   cpfl_p2p_queue_grps_del(vport);
> > +
> > idpf_vport_deinit(vport);
> > +   rte_free(cpfl_vport->p2p_q_chunks_info);
> >
> > adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
> > adapter->cur_vport_nb--;
> > @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext
> *adapter)
> > return vport_idx;
> >  }
> >
> > +static int
> > +cpfl_p2p_q_grps_add(struct idpf_vport *vport,
> > +   struct virtchnl2_add_queue_groups *p2p_queue_grps_info,
> > +   uint8_t *p2p_q_vc_out_info)
> > +{
> > +   int ret;
> > +
> > +   p2p_queue_grps_info->vport_id = vport->vport_id;
> > +   p2p_queue_grps_info->qg_info.num_queue_groups =
> > CPFL_P2P_NB_QUEUE_GRPS;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_rx_q =
> > CPFL_MAX_P2P_NB_QUEUES;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq =
> > CPFL_P2P_NB_RX_BUFQ;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_tx_q =
> > CPFL_MAX_P2P_NB_QUEUES;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_tx_complq =
> > CPFL_P2P_NB_TX_COMPLQ;
> > +   p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id =
> > CPFL_P2P_QUEUE_GRP_ID;
> > +   p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type =
> > VIRTCHNL2_QUEUE_GROUP_P2P;
> > +   p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size =
> 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight =
> 0;
> > +
> > +   ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info,
> > p2p_q_vc_out_info);
> > +   if (ret != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to add p2p queue groups.");
> > +   return ret;
> > +   }
> > +
> > +   return ret;
> > +}
> > +
> > +static int
> > +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
> > +struct virtchnl2_add_queue_groups
> *p2p_q_vc_out_info) {
> > +   struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport-
> > >p2p_q_chunks_info;
> > +   struct virtchnl2_queue_reg_chunks *vc_ch

RE: [PATCH v4 13/13] net/cpfl: support hairpin bind/unbind

2023-05-31 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Tuesday, May 30, 2023 12:00 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: RE: [PATCH v4 13/13] net/cpfl: support hairpin bind/unbind
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, May 26, 2023 3:39 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > ; Wang, Xiao W 
> > Subject: [PATCH v4 13/13] net/cpfl: support hairpin bind/unbind
> >
> > From: Beilei Xing 
> >
> > This patch supports hairpin_bind/unbind ops.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 137 +
> >  drivers/net/cpfl/cpfl_rxtx.c   |  28 +++
> >  drivers/net/cpfl/cpfl_rxtx.h   |   2 +
> >  3 files changed, 167 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index
> > d6dc1672f1..4b70441e27 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -1114,6 +1114,141 @@ cpfl_hairpin_get_peer_ports(struct
> rte_eth_dev
> > *dev, uint16_t *peer_ports,
> > return j;
> >  }
> >
> >
> >  static int
> > diff --git a/drivers/net/cpfl/cpfl_rxtx.c
> > b/drivers/net/cpfl/cpfl_rxtx.c index 38c48ad8c7..ef83a03c2b 100644
> > --- a/drivers/net/cpfl/cpfl_rxtx.c
> > +++ b/drivers/net/cpfl/cpfl_rxtx.c
> > @@ -1011,6 +1011,34 @@ cpfl_switch_hairpin_bufq_complq(struct
> > cpfl_vport *cpfl_vport, bool on)
> > return err;
> >  }
> >
> > +int
> > +cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool on) {
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > +   uint32_t type;
> > +   int err, queue_id;
> > +
> > +   type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> > +   queue_id = cpfl_vport->p2p_tx_complq->queue_id;
> > +   err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +
> > +   return err;
> > +}
> > +
> > +int
> > +cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool on) {
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > +   uint32_t type;
> > +   int err, queue_id;
> > +
> > +   type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> > +   queue_id = cpfl_vport->p2p_rx_bufq->queue_id;
> > +   err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +
> > +   return err;
> > +}
> > +
> [Liu, Mingxia] Can cpfl_switch_hairpin_bufq_complq() in patch 9/13 be
> optimized by calling cpfl_switch_hairpin_complq() and
> cpfl_switch_hairpin_bufq()?

Yes, the functions are duplicated. Refined in next version.

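The deduplication agreed above can be sketched with stub types (everything ending in _stub is illustrative; the real helpers call idpf_vc_ena_dis_one_queue() on the actual queue IDs):

```c
struct cpfl_vport_stub { int complq_on; int bufq_on; };

/* Single-queue helpers, as in cpfl_switch_hairpin_complq()/_bufq(). */
static int
cpfl_switch_hairpin_complq_stub(struct cpfl_vport_stub *v, int on)
{
	v->complq_on = on;
	return 0;
}

static int
cpfl_switch_hairpin_bufq_stub(struct cpfl_vport_stub *v, int on)
{
	v->bufq_on = on;
	return 0;
}

/* Combined helper composed from the two above instead of repeating
 * their bodies, as suggested in the review. */
static int
cpfl_switch_hairpin_bufq_complq_stub(struct cpfl_vport_stub *v, int on)
{
	int err = cpfl_switch_hairpin_complq_stub(v, on);

	if (err != 0)
		return err;
	return cpfl_switch_hairpin_bufq_stub(v, on);
}
```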
> >  int
> >  cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport,
> > uint16_t logic_qid,
> >bool rx, bool on)
> > diff --git a/drivers/net/cpfl/cpfl_rxtx.h
> > b/drivers/net/cpfl/cpfl_rxtx.h index
> > 42dfd07155..86e97541c4 100644
> > --- a/drivers/net/cpfl/cpfl_rxtx.h
> > +++ b/drivers/net/cpfl/cpfl_rxtx.h
> > @@ -114,6 +114,8 @@ int cpfl_hairpin_txq_config(struct idpf_vport
> > *vport, struct cpfl_tx_queue *cpfl  int
> > cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport);  int
> > cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue
> > *cpfl_rxq);  int cpfl_switch_hairpin_bufq_complq(struct
> > cpfl_vport *cpfl_vport, bool on);
> > +int cpfl_switch_hairpin_complq(struct cpfl_vport *cpfl_vport, bool
> > +on); int cpfl_switch_hairpin_bufq(struct cpfl_vport *cpfl_vport, bool
> > +on);
> >  int cpfl_switch_hairpin_rxtx_queue(struct cpfl_vport *cpfl_vport, uint16_t
> qid,
> >bool rx, bool on);
> >  #endif /* _CPFL_RXTX_H_ */
> > --
> > 2.26.2



RE: [PATCH v4 09/13] net/cpfl: support hairpin queue start/stop

2023-05-31 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Tuesday, May 30, 2023 11:31 AM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: RE: [PATCH v4 09/13] net/cpfl: support hairpin queue start/stop
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, May 26, 2023 3:39 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > ; Wang, Xiao W 
> > Subject: [PATCH v4 09/13] net/cpfl: support hairpin queue start/stop
> >
> > From: Beilei Xing 
> >
> > This patch supports Rx/Tx hairpin queue start/stop.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Mingxia Liu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c |  41 +
> >  drivers/net/cpfl/cpfl_rxtx.c   | 151 +
> >  drivers/net/cpfl/cpfl_rxtx.h   |  14 +++
> >  3 files changed, 188 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index
> > a06def06d0..8035878602 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -896,6 +896,47 @@ cpfl_start_queues(struct rte_eth_dev *dev)
> > }
> > }
> >
> > +   /* For non-manual bind hairpin queues, enable Tx queue and Rx queue,
> > +* then enable Tx completion queue and Rx buffer queue.
> > +*/
> > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> [Liu, Mingxia] Better to use for (i = cpfl_tx_vport->nb_data_txq; i < 
> dev->data-
> >nb_tx_queues; i++), because when i < cpfl_tx_vport->nb_data_txq, (cpfl_txq-
> >hairpin_info.hairpin_q && !cpfl_vport-
> > >p2p_manual_bind) must be false, or (i - cpfl_vport->nb_data_txq) will < 0.
> 
> > +   cpfl_txq = dev->data->tx_queues[i];
> > +   if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_vport-
> > >p2p_manual_bind) {
> > +   err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
> > +i - cpfl_vport-
> > >nb_data_txq,
> > +false, true);
> > +   if (err)
> > +   PMD_DRV_LOG(ERR, "Failed to switch hairpin
> > TX queue %u on",
> > +   i);
> > +   else
> > +   cpfl_txq->base.q_started = true;
> > +   }
> > +   }
> > +
> > +   for (i = 0; i < dev->data->nb_rx_queues; i++) {
> [Liu, Mingxia] Better to use for (i = cpfl_rx_vport->nb_data_rxq; i < 
> dev->data-
> >nb_rx_queues; i++), because when i < cpfl_rx_vport->nb_data_rxq, (cpfl_txq-
> >hairpin_info.hairpin_q && !cpfl_vport-
> > >p2p_manual_bind) must be false, or (i - cpfl_vport->nb_data_rxq) will < 0.

Makes sense.
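The agreed loop-bound fix can be sketched in isolation: hairpin queues are laid out after the data queues, so iteration should start at nb_data_txq, keeping (i - nb_data_txq) non-negative. The helper below only models the index range with illustrative values:

```c
/* Counts the indices a loop starting at nb_data_txq visits; these are
 * exactly the hairpin queue slots in the quoted code. */
static int
count_hairpin_queues(unsigned int nb_data_txq, unsigned int nb_tx_queues)
{
	int n = 0;
	unsigned int i;

	for (i = nb_data_txq; i < nb_tx_queues; i++)
		n++;
	return n;
}
```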
> 
> > +   cpfl_rxq = dev->data->rx_queues[i];
> > +   if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_vport-
> > >p2p_manual_bind) {
> > +   err = cpfl_switch_hairpin_rxtx_queue(cpfl_vport,
> > +i - cpfl_vport-
> > >nb_data_rxq,
> > +true, true);
> > +   if (err)
> > +   PMD_DRV_LOG(ERR, "Failed to switch hairpin
> > RX queue %u on",
> > +   i);
> > +   else
> > +   cpfl_rxq->base.q_started = true;
> > +   }
> > +   }
> > +
> > +   if (!cpfl_vport->p2p_manual_bind &&
> > +   cpfl_vport->p2p_tx_complq != NULL &&
> > +   cpfl_vport->p2p_rx_bufq != NULL) {
> > +   err = cpfl_switch_hairpin_bufq_complq(cpfl_vport, true);
> > +   if (err != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to switch hairpin Tx
> > complq and Rx bufq");
> > +   return err;
> > +   }
> > +   }
> > +
> > return err;
> >  }
> >
> > diff --git a/drivers/net/cpfl/cpfl_rxtx.c
> > b/drivers/net/cpfl/cpfl_rxtx.c index
> > 702054d1c5..38c48ad8c7 100644
> > --- a/drivers/net/cpfl/cpfl_rxtx.c
> > +++ b/drivers/net/cpfl/cpfl_rxtx.c
> > @@ -991,6 +991,81 @@ cpfl_hairpin_txq_config(struct idpf_vport *vport,
&

RE: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and release

2023-05-31 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Tuesday, May 30, 2023 10:50 AM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: RE: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and
> release
> 
> 
> 
> > -Original Message-
> > From: Liu, Mingxia
> > Sent: Tuesday, May 30, 2023 10:27 AM
> > To: Xing, Beilei ; Wu, Jingjing
> > 
> > Cc: dev@dpdk.org; Wang, Xiao W 
> > Subject: RE: [PATCH v4 05/13] net/cpfl: support hairpin queue setup
> > and release
> >
> >
> >
> > > -Original Message-
> > > From: Xing, Beilei 
> > > Sent: Friday, May 26, 2023 3:39 PM
> > > To: Wu, Jingjing 
> > > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > > ; Wang, Xiao W 
> > > Subject: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and
> > > release
> > >
> > > From: Beilei Xing 
> > >
> > > Support hairpin Rx/Tx queue setup and release.
> > >
> > > Signed-off-by: Xiao Wang 
> > > Signed-off-by: Mingxia Liu 
> > > Signed-off-by: Beilei Xing 
> > > ---
> > >  drivers/net/cpfl/cpfl_ethdev.c  |   6 +
> > >  drivers/net/cpfl/cpfl_ethdev.h  |  11 +
> > >  drivers/net/cpfl/cpfl_rxtx.c| 353 +++-
> > >  drivers/net/cpfl/cpfl_rxtx.h|  36 +++
> > >  drivers/net/cpfl/cpfl_rxtx_vec_common.h |   4 +
> > >  5 files changed, 409 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > > b/drivers/net/cpfl/cpfl_ethdev.c index
> > > 40b4515539..b17c538ec2 100644
> > > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > > @@ -879,6 +879,10 @@ cpfl_dev_close(struct rte_eth_dev *dev)
> > >   struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport-
> > > >adapter);
> > >
> > >   cpfl_dev_stop(dev);
> > > + if (cpfl_vport->p2p_mp) {
> > > + rte_mempool_free(cpfl_vport->p2p_mp);
> > > + cpfl_vport->p2p_mp = NULL;
> > > + }
> > >
> > >   if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq)
> > >   cpfl_p2p_queue_grps_del(vport);
> > > @@ -922,6 +926,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops =
> {
> > >   .xstats_get_names   = cpfl_dev_xstats_get_names,
> > >   .xstats_reset   = cpfl_dev_xstats_reset,
> > >   .hairpin_cap_get= cpfl_hairpin_cap_get,
> > > + .rx_hairpin_queue_setup =
> cpfl_rx_hairpin_queue_setup,
> > > + .tx_hairpin_queue_setup =
> cpfl_tx_hairpin_queue_setup,
> > >  };
> > >
> > > +int
> > > +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > > + uint16_t nb_desc,
> > > + const struct rte_eth_hairpin_conf *conf) {
> > > + struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data-
> > > >dev_private;
> > > + struct idpf_vport *vport = &cpfl_vport->base;
> > > + struct idpf_adapter *adapter_base = vport->adapter;
> > > + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq;
> > > + struct cpfl_rxq_hairpin_info *hairpin_info;
> > > + struct cpfl_rx_queue *cpfl_rxq;
> > > + struct idpf_rx_queue *bufq1 = NULL;
> > > + struct idpf_rx_queue *rxq;
> > > + uint16_t peer_port, peer_q;
> > > + uint16_t qid;
> > > + int ret;
> > > +
> > > + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> > > + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin
> > > queue.");
> > > + return -EINVAL;
> > > + }
> > > +
> > > + if (conf->peer_count != 1) {
> > > + PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer
> > > count %d", conf->peer_count);
> > > + return -EINVAL;
> > > + }
> > > +
> > > + peer_port = conf->peers[0].port;
> > > + peer_q = conf->peers[0].queue;
> > > +
> > > + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 ||
> > > + nb_desc > CPFL_MAX_RING_DESC ||
> > > + nb_desc < CPFL_MIN_RING_DESC) {
> > > + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is
> > > invalid", nb_desc);
> > > + return -EINVAL;
> > > + }

RE: [PATCH v3 05/10] net/cpfl: support hairpin queue setup and release

2023-05-25 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Thursday, May 25, 2023 11:59 AM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Liu, Mingxia ; Wang, Xiao W
> 
> Subject: RE: [PATCH v3 05/10] net/cpfl: support hairpin queue setup and
> release
> 
> >
> > +static int
> > +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue
> *bufq,
> > +  uint16_t logic_qid, uint16_t nb_desc) {
> > +   struct cpfl_vport *cpfl_vport =
> > +   (struct cpfl_vport *)dev->data->dev_private;
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +   struct rte_mempool *mp;
> > +   char pool_name[RTE_MEMPOOL_NAMESIZE];
> > +
> > +   mp = cpfl_vport->p2p_mp;
> > +   if (!mp) {
> > +   snprintf(pool_name, RTE_MEMPOOL_NAMESIZE,
> "p2p_mb_pool_%u",
> > +dev->data->port_id);
> > +   mp = rte_pktmbuf_pool_create(pool_name,
> CPFL_P2P_NB_MBUF,
> > CPFL_P2P_CACHE_SIZE,
> > +0, CPFL_P2P_MBUF_SIZE, dev-
> >device-
> > >numa_node);
> > +   if (!mp) {
> > +   PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for
> p2p");
> > +   return -ENOMEM;
> > +   }
> > +   cpfl_vport->p2p_mp = mp;
> > +   }
> > +
> > +   bufq->mp = mp;
> > +   bufq->nb_rx_desc = nb_desc;
> > +   bufq->queue_id = cpfl_hw_qid_get(cpfl_vport-
> > >p2p_q_chunks_info.rx_buf_start_qid, logic_qid);
> > +   bufq->port_id = dev->data->port_id;
> > +   bufq->adapter = adapter;
> > +   bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE -
> RTE_PKTMBUF_HEADROOM;
> > +
> > +   bufq->sw_ring = rte_zmalloc("sw ring",
> > +   sizeof(struct rte_mbuf *) * nb_desc,
> > +   RTE_CACHE_LINE_SIZE);
> 
> Is sw_ring required in the p2p case? It has never been used, right?
> Please also check the sw_ring in tx queue.
Yes, it should be removed.

> 
> > +   if (!bufq->sw_ring) {
> > +   PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
> > +   return -ENOMEM;
> > +   }
> > +
> > +   bufq->q_set = true;
> > +   bufq->ops = &def_rxq_ops;
> > +
> > +   return 0;
> > +}
> > +
> > +int
> > +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > +   uint16_t nb_desc,
> > +   const struct rte_eth_hairpin_conf *conf) {
> > +   struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data-
> >dev_private;
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > +   struct idpf_adapter *adapter_base = vport->adapter;
> > +   uint16_t logic_qid = cpfl_vport->nb_p2p_rxq;
> > +   struct cpfl_rxq_hairpin_info *hairpin_info;
> > +   struct cpfl_rx_queue *cpfl_rxq;
> > +   struct idpf_rx_queue *bufq1 = NULL;
> > +   struct idpf_rx_queue *rxq;
> > +   uint16_t peer_port, peer_q;
> > +   uint16_t qid;
> > +   int ret;
> > +
> > +   if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> > +   PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin
> queue.");
> > +   return -EINVAL;
> > +   }
> > +
> > +   if (conf->peer_count != 1) {
> > +   PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer
> count %d",
> > conf->peer_count);
> > +   return -EINVAL;
> > +   }
> > +
> > +   peer_port = conf->peers[0].port;
> > +   peer_q = conf->peers[0].queue;
> > +
> > +   if (nb_desc % CPFL_ALIGN_RING_DESC != 0 ||
> > +   nb_desc > CPFL_MAX_RING_DESC ||
> > +   nb_desc < CPFL_MIN_RING_DESC) {
> > +   PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is
> invalid",
> > nb_desc);
> > +   return -EINVAL;
> > +   }
> > +
> > +   /* Free memory if needed */
> > +   if (dev->data->rx_queues[queue_idx]) {
> > +   cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]);
> > +   dev->data->rx_queues[queue_idx] = NULL;
> > +   }
> > +
> > +   /* Setup Rx description queue */
> > +   cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq",
> > +sizeof(struct cpfl_rx_queue),
> > +RTE_CACHE_L

RE: [PATCH v3 05/10] net/cpfl: support hairpin queue setup and release

2023-05-25 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Wednesday, May 24, 2023 5:02 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: RE: [PATCH v3 05/10] net/cpfl: support hairpin queue setup and
> release
> 
> > +cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > +   uint16_t nb_desc,
> > +   const struct rte_eth_hairpin_conf *conf) {
> > +   struct cpfl_vport *cpfl_vport =
> > +   (struct cpfl_vport *)dev->data->dev_private;
> > +
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > +   struct idpf_adapter *adapter_base = vport->adapter;
> > +   uint16_t logic_qid = cpfl_vport->nb_p2p_txq;
> > +   struct cpfl_txq_hairpin_info *hairpin_info;
> > +   struct idpf_hw *hw = &adapter_base->hw;
> > +   struct cpfl_tx_queue *cpfl_txq;
> > +   struct idpf_tx_queue *txq, *cq;
> > +   const struct rte_memzone *mz;
> > +   uint32_t ring_size;
> > +   uint16_t peer_port, peer_q;
> > +
> > +   if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> > +   PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin
> > queue.");
> > +   return -EINVAL;
> > +   }
> > +
> > +   if (conf->peer_count != 1) {
> > +   PMD_INIT_LOG(ERR, "Can't support Tx hairpin queue peer
> > count %d", conf->peer_count);
> > +   return -EINVAL;
> > +   }
> > +
> > +   peer_port = conf->peers[0].port;
> > +   peer_q = conf->peers[0].queue;
> > +
> > +   if (nb_desc % CPFL_ALIGN_RING_DESC != 0 ||
> > +   nb_desc > CPFL_MAX_RING_DESC ||
> > +   nb_desc < CPFL_MIN_RING_DESC) {
> > +   PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is
> > invalid",
> > +nb_desc);
> > +   return -EINVAL;
> > +   }
> > +
> > +   /* Free memory if needed. */
> > +   if (dev->data->tx_queues[queue_idx]) {
> > +   cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]);
> > +   dev->data->tx_queues[queue_idx] = NULL;
> > +   }
> > +
> > +   /* Allocate the TX queue data structure. */
> > +   cpfl_txq = rte_zmalloc_socket("cpfl hairpin txq",
> > +sizeof(struct cpfl_tx_queue),
> > +RTE_CACHE_LINE_SIZE,
> > +SOCKET_ID_ANY);
> > +   if (!cpfl_txq) {
> > +   PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue
> > structure");
> > +   return -ENOMEM;
> > +   }
> > +
> > +   txq = &cpfl_txq->base;
> > +   hairpin_info = &cpfl_txq->hairpin_info;
> > +   /* Txq ring length should be 2 times of Tx completion queue size. */
> > +   txq->nb_tx_desc = nb_desc * 2;
> > +   txq->queue_id = cpfl_hw_qid_get(cpfl_vport-
> > >p2p_q_chunks_info.tx_start_qid, logic_qid);
> > +   txq->port_id = dev->data->port_id;
> > +   hairpin_info->hairpin_q = true;
> > +   hairpin_info->peer_rxp = peer_port;
> > +   hairpin_info->peer_rxq_id = peer_q;
> > +
> > +   if (conf->manual_bind != 0)
> > +   cpfl_vport->p2p_manual_bind = true;
> > +   else
> > +   cpfl_vport->p2p_manual_bind = false;
> > +
> > +   /* Always Tx hairpin queue allocates Tx HW ring */
> > +   ring_size = RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN,
> > + CPFL_DMA_MEM_ALIGN);
> > +   mz = rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid,
> > + ring_size + CPFL_P2P_RING_BUF,
> > + CPFL_RING_BASE_ALIGN,
> > + dev->device->numa_node);
> > +   if (!mz) {
> > +   PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
> > +   rte_free(txq->sw_ring);
> > +   rte_free(txq);
> > +   return -ENOMEM;
> > +   }
> > +
> > +   txq->tx_ring_phys_addr = mz->iova;
> > +   txq->desc_ring = mz->addr;
> > +   txq->mz = mz;
> > +
> > +   cpfl_tx_hairpin_descq_reset(txq);
> > +   txq->qtx_tail = hw->hw_addr +
> > +   cpfl_hw_qtail_get(cpfl_vport-
> > >p2p_q_chunks_info.tx_qtail_start,
> > + logic_qid, cpfl_vport-
> > >p2p_q_chunks_info.tx_qtail_spacing);
> > +   txq->ops = &def_t

RE: [PATCH 06/10] net/cpfl: support hairpin queue configuration

2023-05-18 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Monday, April 24, 2023 5:48 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: RE: [PATCH 06/10] net/cpfl: support hairpin queue configuration
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, April 21, 2023 2:51 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > ; Wang, Xiao W 
> > Subject: [PATCH 06/10] net/cpfl: support hairpin queue configuration
> >
> > From: Beilei Xing 
> >
> > This patch supports Rx/Tx hairpin queue configuration.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Mingxia Liu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/common/idpf/idpf_common_virtchnl.c |  70 +++
> >  drivers/common/idpf/idpf_common_virtchnl.h |   6 +
> >  drivers/common/idpf/version.map|   2 +
> >  drivers/net/cpfl/cpfl_ethdev.c | 136 -
> >  drivers/net/cpfl/cpfl_rxtx.c   |  80 
> >  drivers/net/cpfl/cpfl_rxtx.h   |   7 ++
> >  6 files changed, 297 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > index 76a658bb26..50cd43a8dd 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.c
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c

<...> 

> >  static int
> >  cpfl_start_queues(struct rte_eth_dev *dev)  {
> > +   struct cpfl_vport *cpfl_vport = dev->data->dev_private;
> > +   struct idpf_vport *vport = &cpfl_vport->base;
> > struct cpfl_rx_queue *cpfl_rxq;
> > struct cpfl_tx_queue *cpfl_txq;
> > +   int tx_cmplq_flag = 0;
> > +   int rx_bufq_flag = 0;
> > +   int flag = 0;
> > int err = 0;
> > int i;
> >
> > +   /* For normal data queues, configure, init and enale Txq.
> > +* For non-cross vport hairpin queues, configure Txq.
> > +*/
> > for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > cpfl_txq = dev->data->tx_queues[i];
> > if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start)
> > continue;
> > -   err = cpfl_tx_queue_start(dev, i);
> > +   if (!cpfl_txq->hairpin_info.hairpin_q) {
> > +   err = cpfl_tx_queue_start(dev, i);
> > +   if (err != 0) {
> > +   PMD_DRV_LOG(ERR, "Fail to start Tx
> > queue %u", i);
> > +   return err;
> > +   }
> > +   } else if (!cpfl_txq->hairpin_info.manual_bind) {
> > +   if (flag == 0) {
> > +   err = cpfl_txq_hairpin_info_update(dev,
> > +  cpfl_txq-
> > >hairpin_info.peer_rxp);
> > +   if (err != 0) {
> > +   PMD_DRV_LOG(ERR, "Fail to update
> Tx
> > hairpin queue info");
> > +   return err;
> > +   }
> > +   flag = 1;
> [Liu, Mingxia] The variable flag is not been used, can it be removed?
 
It's used in the code above; txq_hairpin_info should be updated only once.

> > +   }
> > +   err = cpfl_hairpin_txq_config(vport, cpfl_txq);
> > +   if (err != 0) {
> > +   PMD_DRV_LOG(ERR, "Fail to configure hairpin
> > Tx queue %u", i);
> > +   return err;
> > +   }
> > +   tx_cmplq_flag = 1;
> > +   }
> > +   }
> > +
> 
> > +   /* For non-cross vport hairpin queues, configure Tx completion queue
> > first.*/
> > +   if (tx_cmplq_flag == 1 && cpfl_vport->p2p_tx_complq != NULL) {
> > +   err = cpfl_hairpin_tx_complq_config(cpfl_vport);
> > if (err != 0) {
> > -   PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
> > +   PMD_DRV_LOG(ERR, "Fail to config Tx completion
> > queue");
> > return err;
> > }
> > }
> >
> [Liu, Mingxia] Better to move this code next to
> +  err = cpfl_hairpin_txq_config(vport, cpfl_txq);
> + if (err != 0) {
> + PMD_DRV_LOG(ERR, "Fail to 

RE: [PATCH 04/10] net/cpfl: add haipin queue group during vpotr init

2023-05-18 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Monday, April 24, 2023 4:55 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org
> Subject: RE: [PATCH 04/10] net/cpfl: add haipin queue group during vpotr init
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, April 21, 2023 2:51 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > 
> > Subject: [PATCH 04/10] net/cpfl: add haipin queue group during vpotr
> > init
> [Liu, Mingxia] vpotr , spelling error?

Good catch, thanks.

> >
> > From: Beilei Xing 
> >
> > This patch adds haipin queue group during vpotr init.
> >
> > Signed-off-by: Mingxia Liu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 125
> > +  drivers/net/cpfl/cpfl_ethdev.h |
> > 17 +
> >  drivers/net/cpfl/cpfl_rxtx.h   |   4 ++
> >  3 files changed, 146 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index 114fc18f5f..ad5ddebd3a 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -856,6 +856,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
> > return 0;
> >  }
> >
> > +static int
> > +cpfl_p2p_queue_grps_del(struct idpf_vport *vport) {
> > +   struct virtchnl2_queue_group_id
> > qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0};
> > +   int ret = 0;
> > +
> > +   qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
> > +   qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
> > +   ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS,
> > qg_ids);
> > +   if (ret)
> > +   PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups");
> > +   return ret;
> > +}
> > +
> >  static int
> >  cpfl_dev_close(struct rte_eth_dev *dev)  { @@ -864,6 +878,9 @@
> > cpfl_dev_close(struct rte_eth_dev *dev)
> > struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport-
> > >adapter);
> >
> > cpfl_dev_stop(dev);
> > +
> > +   cpfl_p2p_queue_grps_del(vport);
> > +
> > idpf_vport_deinit(vport);
> >
> > adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id); @@ -
> > 1350,6 +1367,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext
> > *adapter)
> > return vport_idx;
> >  }
> >
> > +static int
> > +cpfl_p2p_q_grps_add(struct idpf_vport *vport,
> > +   struct virtchnl2_add_queue_groups
> > *p2p_queue_grps_info,
> > +   uint8_t *p2p_q_vc_out_info)
> > +{
> > +   int ret;
> > +
> > +   p2p_queue_grps_info->vport_id = vport->vport_id;
> > +   p2p_queue_grps_info->qg_info.num_queue_groups =
> > CPFL_P2P_NB_QUEUE_GRPS;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_rx_q =
> > CPFL_MAX_P2P_NB_QUEUES;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq =
> > CPFL_P2P_NB_RX_BUFQ;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_tx_q =
> > CPFL_MAX_P2P_NB_QUEUES;
> > +   p2p_queue_grps_info->qg_info.groups[0].num_tx_complq =
> > CPFL_P2P_NB_TX_COMPLQ;
> > +   p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id =
> > CPFL_P2P_QUEUE_GRP_ID;
> > +   p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type
> > = VIRTCHNL2_QUEUE_GROUP_P2P;
> > +   p2p_queue_grps_info-
> > >qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority =
> > 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
> > +   p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight
> > = 0;
> > +
> > +   ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info,
> > p2p_q_vc_out_info);
> > +   if (ret != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to add p2p queue groups.");
> > +   return ret;
> > +   }
> > +
> > +   return ret;
> > +}
> > +
> > +static int
> > +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
> > +struct virtchnl2_add_queue_groups
> > *p2p_q_vc_out_info) {
> > +   struct p2p_queue_chunks_info *p2p_q_chunks_info =
> > &cpfl_vport->p2p_q_chunks_info;
> > +   struct virtchnl2_queue_reg_chunks *vc_chunks_out;
> > +   int i, type;
> > +
> > +   if (p2p

RE: [PATCH 03/10] common/idpf: support queue groups add/delete

2023-05-18 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Monday, April 24, 2023 4:50 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org
> Subject: RE: [PATCH 03/10] common/idpf: support queue groups add/delete
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, April 21, 2023 2:51 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> > 
> > Subject: [PATCH 03/10] common/idpf: support queue groups add/delete
> >
> > From: Beilei Xing 
> >
> > This patch adds queue group add/delete virtual channel support.
> >
> > Signed-off-by: Mingxia Liu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/common/idpf/idpf_common_virtchnl.c | 66
> > ++
> > drivers/common/idpf/idpf_common_virtchnl.h |  9 +++
> >  drivers/common/idpf/version.map|  2 +
> >  3 files changed, 77 insertions(+)
> >
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > index a4e129062e..76a658bb26 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.c
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> > @@ -359,6 +359,72 @@ idpf_vc_vport_destroy(struct idpf_vport *vport)
> > return err;
> >  }
> >
> > +int
> > +idpf_vc_queue_grps_add(struct idpf_vport *vport,
> > +  struct virtchnl2_add_queue_groups
> > *ptp_queue_grps_info,
> > +  uint8_t *ptp_queue_grps_out)
> [Liu, Mingxia] Better to unify the abbreviation of "port to port" , this 
> patch ptp
> is used, in the next patch p2p is used.

Yes, it's refined in the v2 patch.

> > +{
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +   struct idpf_cmd_info args;
> > +   int size, qg_info_size;
> > +   int err = -1;
> > +
> > +   size = sizeof(*ptp_queue_grps_info) +
> > +  (ptp_queue_grps_info->qg_info.num_queue_groups - 1) *
> > +  sizeof(struct virtchnl2_queue_group_info);
> > +
> > +   memset(&args, 0, sizeof(args));
> > +   args.ops = VIRTCHNL2_OP_ADD_QUEUE_GROUPS;
> > +   args.in_args = (uint8_t *)ptp_queue_grps_info;
> > +   args.in_args_size = size;
> > +   args.out_buffer = adapter->mbx_resp;
> > +   args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +   err = idpf_vc_cmd_execute(adapter, &args);
> > +   if (err != 0) {
> > +   DRV_LOG(ERR,
> > +   "Failed to execute command of
> > VIRTCHNL2_OP_ADD_QUEUE_GROUPS");
> > +   return err;
> > +   }
> > +
> > +   rte_memcpy(ptp_queue_grps_out, args.out_buffer,
> > IDPF_DFLT_MBX_BUF_SIZE);
> > +   return 0;
> > +}
> > +
> > +int idpf_vc_queue_grps_del(struct idpf_vport *vport,
> > + uint16_t num_q_grps,
> > + struct virtchnl2_queue_group_id *qg_ids) {
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +   struct virtchnl2_delete_queue_groups *vc_del_q_grps;
> > +   struct idpf_cmd_info args;
> > +   int size;
> > +   int err;
> > +
> > +   size = sizeof(*vc_del_q_grps) +
> > +  (num_q_grps - 1) * sizeof(struct virtchnl2_queue_group_id);
> > +   vc_del_q_grps = rte_zmalloc("vc_del_q_grps", size, 0);
> > +
> > +   vc_del_q_grps->vport_id = vport->vport_id;
> > +   vc_del_q_grps->num_queue_groups = num_q_grps;
> > +   memcpy(vc_del_q_grps->qg_ids, qg_ids,
> > +  num_q_grps * sizeof(struct virtchnl2_queue_group_id));
> > +
> > +   memset(&args, 0, sizeof(args));
> > +   args.ops = VIRTCHNL2_OP_DEL_QUEUE_GROUPS;
> > +   args.in_args = (uint8_t *)vc_del_q_grps;
> > +   args.in_args_size = size;
> > +   args.out_buffer = adapter->mbx_resp;
> > +   args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +   err = idpf_vc_cmd_execute(adapter, &args);
> > +   if (err != 0)
> > +   DRV_LOG(ERR, "Failed to execute command of
> > +VIRTCHNL2_OP_DEL_QUEUE_GROUPS");
> > +
> > +   rte_free(vc_del_q_grps);
> > +   return err;
> > +}
> > +
> >  int
> >  idpf_vc_rss_key_set(struct idpf_vport *vport)  { diff --git
> > a/drivers/common/idpf/idpf_common_virtchnl.h
> > b/drivers/common/idpf/idpf_common_virtchnl.h
> > index d479d93c8e..bf1d014c8d 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.h
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> > @@ -64,4 +64,13 @@ int idpf_vc_ctlq_re

RE: [PATCH] net/cpfl: update the doc of CPFL PMD

2023-05-14 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Friday, April 21, 2023 10:59 PM
> To: dev@dpdk.org; Xing, Beilei ; Zhang, Yuying
> 
> Cc: Liu, Mingxia 
> Subject: [PATCH] net/cpfl: update the doc of CPFL PMD
> 
> This patch updates cpfl.rst doc, adjusting the order of chapters referring to
> IDPF PMD doc.
> 
> Signed-off-by: Mingxia Liu 

Acked-by: Beilei Xing 


RE: [PATCH v2] common/idpf: remove unnecessary compile option

2023-04-27 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Wednesday, April 26, 2023 11:39 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, Qi Z 
> Subject: [PATCH v2] common/idpf: remove unnecessary compile option
> 
> Remove compile option "__KERNEL" which should not be considered in DPDK.
> Also only #include  in idpf_osdep.h.
> 
> Signed-off-by: Qi Zhang 

Acked-by: Beilei Xing 


RE: [PATCH] common/idpf: remove unnecessary field in vport

2023-04-26 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Friday, April 21, 2023 12:21 AM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, Qi Z 
> Subject: [PATCH] common/idpf: remove unnecessary field in vport
> 
> Remove the pointer to rte_eth_dev instance, as 1. there is already a pointer 
> to
> rte_eth_dev_data.
> 2. a pointer to rte_eth_dev will break multi-process usage.
> 
> Signed-off-by: Qi Zhang 
Acked-by: Beilei Xing 


RE: [PATCH] common/idpf: remove unnecessary field in vport

2023-04-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Friday, April 21, 2023 12:21 AM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, Qi Z 
> Subject: [PATCH] common/idpf: remove unnecessary field in vport
> 
> Remove the pointer to rte_eth_dev instance, as 1. there is already a pointer 
> to
> rte_eth_dev_data.
> 2. a pointer to rte_eth_dev will break multi-process usage.

Basically it's OK for me. Do we need to add a Fixes line?

> 
> Signed-off-by: Qi Zhang 
> ---
>  drivers/common/idpf/idpf_common_device.h | 1 -
>  drivers/net/cpfl/cpfl_ethdev.c   | 4 ++--
>  drivers/net/idpf/idpf_ethdev.c   | 4 ++--
>  3 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/common/idpf/idpf_common_device.h
> b/drivers/common/idpf/idpf_common_device.h
> index 7a54f7c937..d29bcc71ab 100644
> --- a/drivers/common/idpf/idpf_common_device.h
> +++ b/drivers/common/idpf/idpf_common_device.h
> @@ -117,7 +117,6 @@ struct idpf_vport {
> 
>   struct virtchnl2_vport_stats eth_stats_offset;
> 
> - void *dev;
>   /* Event from ipf */
>   bool link_up;
>   uint32_t link_speed;
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index f1d4425ce2..680c2326ec 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -1061,7 +1061,8 @@ static void
>  cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t
> msglen)  {
>   struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> - struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> + struct rte_eth_dev_data *data = vport->dev_data;
> + struct rte_eth_dev *dev = &rte_eth_devices[data->port_id];
> 
>   if (msglen < sizeof(struct virtchnl2_event)) {
>   PMD_DRV_LOG(ERR, "Error event");
> @@ -1245,7 +1246,6 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void
> *init_params)
>   vport->adapter = &adapter->base;
>   vport->sw_idx = param->idx;
>   vport->devarg_id = param->devarg_id;
> - vport->dev = dev;
> 
>   memset(&create_vport_info, 0, sizeof(create_vport_info));
>   ret = idpf_vport_info_init(vport, &create_vport_info); diff --git
> a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c index
> e01eb3a2ec..38ad4e7ac0 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -1024,7 +1024,8 @@ static void
>  idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t
> msglen)  {
>   struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> - struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> + struct rte_eth_dev_data *data = vport->dev_data;
> + struct rte_eth_dev *dev = &rte_eth_devices[data->port_id];
> 
>   if (msglen < sizeof(struct virtchnl2_event)) {
>   PMD_DRV_LOG(ERR, "Error event");
> @@ -1235,7 +1236,6 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void
> *init_params)
>   vport->adapter = &adapter->base;
>   vport->sw_idx = param->idx;
>   vport->devarg_id = param->devarg_id;
> - vport->dev = dev;
> 
>   memset(&create_vport_info, 0, sizeof(create_vport_info));
>   ret = idpf_vport_info_init(vport, &create_vport_info);
> --
> 2.31.1



RE: [PATCH] common/idpf: remove device stop flag

2023-04-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Thursday, April 20, 2023 11:57 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, Qi Z ; sta...@dpdk.org
> Subject: [PATCH] common/idpf: remove device stop flag
> 
> Remove device stop flag, as we already have dev->data-dev_started.
> This also fixed the issue when close port directly without start it first, 
> some
> error message will be reported in dev_stop.
> 
> Fixes: 14aa6ed8f2ec ("net/idpf: support device start and stop")
> Fixes: 1082a773a86b ("common/idpf: add vport structure")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Qi Zhang 
Acked-by: Beilei Xing 


RE: [PATCH] common/idpf: refine header file include

2023-04-25 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Tuesday, April 25, 2023 6:40 AM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, Qi Z 
> Subject: [PATCH] common/idpf: refine header file include
> 
> Replace #include  with #include "filename" for local header file.
> 
> Signed-off-by: Qi Zhang 

Acked-by: Beilei Xing 


RE: [PATCH v8 01/21] net/cpfl: support device initialization

2023-03-02 Thread Xing, Beilei


> -Original Message-
> From: Ferruh Yigit 
> Sent: Thursday, March 2, 2023 7:51 PM
> To: Liu, Mingxia ; dev@dpdk.org; Xing, Beilei
> ; Zhang, Yuying 
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 11:24 AM, Liu, Mingxia wrote:
> >
> >
> >> -Original Message-
> >> From: Ferruh Yigit 
> >> Sent: Thursday, March 2, 2023 5:31 PM
> >> To: Liu, Mingxia ; dev@dpdk.org; Xing, Beilei
> >> ; Zhang, Yuying 
> >> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> >>
> >> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> >>> Support device init and add the following dev ops:
> >>>  - dev_configure
> >>>  - dev_close
> >>>  - dev_infos_get
> >>>  - link_update
> >>>  - dev_supported_ptypes_get
> >>>
> >>> Signed-off-by: Mingxia Liu 
> >>
> >> <...>
> >>
> >>> --- /dev/null
> >>> +++ b/doc/guides/nics/cpfl.rst
> >>> @@ -0,0 +1,85 @@
> >>> +.. SPDX-License-Identifier: BSD-3-Clause
> >>> +   Copyright(c) 2022 Intel Corporation.
> >>> +
> >>
> >> s/2022/2023
> >>
> >> <...>
> >>
> >>> +static int
> >>> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> >>> +struct rte_pci_device *pci_dev) {
> >>> + struct cpfl_vport_param vport_param;
> >>> + struct cpfl_adapter_ext *adapter;
> >>> + struct cpfl_devargs devargs;
> >>> + char name[RTE_ETH_NAME_MAX_LEN];
> >>> + int i, retval;
> >>> + bool first_probe = false;
> >>> +
> >>> + if (!cpfl_adapter_list_init) {
> >>> + rte_spinlock_init(&cpfl_adapter_lock);
> >>> + TAILQ_INIT(&cpfl_adapter_list);
> >>> + cpfl_adapter_list_init = true;
> >>> + }
> >>> +
> >>> + adapter = cpfl_find_adapter_ext(pci_dev);
> >>> + if (adapter == NULL) {
> >>> + first_probe = true;
> >>> + adapter = rte_zmalloc("cpfl_adapter_ext",
> >>> +   sizeof(struct cpfl_adapter_ext), 0);
> >>> + if (adapter == NULL) {
> >>> + PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> >>> + return -ENOMEM;
> >>> + }
> >>> +
> >>> + retval = cpfl_adapter_ext_init(pci_dev, adapter);
> >>> + if (retval != 0) {
> >>> + PMD_INIT_LOG(ERR, "Failed to init adapter.");
> >>> + return retval;
> >>> + }
> >>> +
> >>> + rte_spinlock_lock(&cpfl_adapter_lock);
> >>> + TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> >>> + rte_spinlock_unlock(&cpfl_adapter_lock);
> >>> + }
> >>> +
> >>> + retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> >>> + if (retval != 0) {
> >>> + PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> >>> + goto err;
> >>> + }
> >>> +
> >>> + if (devargs.req_vport_nb == 0) {
> >>> + /* If no vport devarg, create vport 0 by default. */
> >>> + vport_param.adapter = adapter;
> >>> + vport_param.devarg_id = 0;
> >>> + vport_param.idx = cpfl_vport_idx_alloc(adapter);
> >>> + if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> >>> + PMD_INIT_LOG(ERR, "No space for vport %u",
> >> vport_param.devarg_id);
> >>> + return 0;
> >>> + }
> >>> + snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> >>> +  pci_dev->device.name);
> >>> + retval = rte_eth_dev_create(&pci_dev->device, name,
> >>> + sizeof(struct idpf_vport),
> >>> + NULL, NULL, cpfl_dev_vport_init,
> >>> + &vport_param);
> >>> + if (retval != 0)
> >>> + PMD_DRV_LOG(ERR, "Failed to create default vport
> >> 0");
> >>> + } else {
> >>> + for (i = 0; i < devargs.req_vport_nb; i++) {

RE: [PATCH v8 01/21] net/cpfl: support device initialization

2023-03-02 Thread Xing, Beilei


> -Original Message-
> From: Ferruh Yigit 
> Sent: Thursday, March 2, 2023 5:31 PM
> To: Liu, Mingxia ; dev@dpdk.org; Xing, Beilei
> ; Zhang, Yuying 
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> > Support device init and add the following dev ops:
> >  - dev_configure
> >  - dev_close
> >  - dev_infos_get
> >  - link_update
> >  - dev_supported_ptypes_get
> >
> > Signed-off-by: Mingxia Liu 
> 
> <...>
> 
> > --- /dev/null
> > +++ b/doc/guides/nics/cpfl.rst
> > @@ -0,0 +1,85 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > +   Copyright(c) 2022 Intel Corporation.
> > +
> 
> s/2022/2023
> 
> <...>
> 
> > +static int
> > +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > +  struct rte_pci_device *pci_dev) {
> > +   struct cpfl_vport_param vport_param;
> > +   struct cpfl_adapter_ext *adapter;
> > +   struct cpfl_devargs devargs;
> > +   char name[RTE_ETH_NAME_MAX_LEN];
> > +   int i, retval;
> > +   bool first_probe = false;
> > +
> > +   if (!cpfl_adapter_list_init) {
> > +   rte_spinlock_init(&cpfl_adapter_lock);
> > +   TAILQ_INIT(&cpfl_adapter_list);
> > +   cpfl_adapter_list_init = true;
> > +   }
> > +
> > +   adapter = cpfl_find_adapter_ext(pci_dev);
> > +   if (adapter == NULL) {
> > +   first_probe = true;
> > +   adapter = rte_zmalloc("cpfl_adapter_ext",
> > + sizeof(struct cpfl_adapter_ext), 0);
> > +   if (adapter == NULL) {
> > +   PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> > +   return -ENOMEM;
> > +   }
> > +
> > +   retval = cpfl_adapter_ext_init(pci_dev, adapter);
> > +   if (retval != 0) {
> > +   PMD_INIT_LOG(ERR, "Failed to init adapter.");
> > +   return retval;
> > +   }
> > +
> > +   rte_spinlock_lock(&cpfl_adapter_lock);
> > +   TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> > +   rte_spinlock_unlock(&cpfl_adapter_lock);
> > +   }
> > +
> > +   retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> > +   if (retval != 0) {
> > +   PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> > +   goto err;
> > +   }
> > +
> > +   if (devargs.req_vport_nb == 0) {
> > +   /* If no vport devarg, create vport 0 by default. */
> > +   vport_param.adapter = adapter;
> > +   vport_param.devarg_id = 0;
> > +   vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +   if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +   PMD_INIT_LOG(ERR, "No space for vport %u",
> vport_param.devarg_id);
> > +   return 0;
> > +   }
> > +   snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> > +pci_dev->device.name);
> > +   retval = rte_eth_dev_create(&pci_dev->device, name,
> > +   sizeof(struct idpf_vport),
> > +   NULL, NULL, cpfl_dev_vport_init,
> > +   &vport_param);
> > +   if (retval != 0)
> > +   PMD_DRV_LOG(ERR, "Failed to create default vport
> 0");
> > +   } else {
> > +   for (i = 0; i < devargs.req_vport_nb; i++) {
> > +   vport_param.adapter = adapter;
> > +   vport_param.devarg_id = devargs.req_vports[i];
> > +   vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +   if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +   PMD_INIT_LOG(ERR, "No space for vport %u",
> vport_param.devarg_id);
> > +   break;
> > +   }
> > +   snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> > +pci_dev->device.name,
> > +devargs.req_vports[i]);
> > +   retval = rte_eth_dev_create(&pci_dev->device, name,
> > +   sizeof(struct idpf_vport),
> > +   NULL, NULL,
> cpfl_dev_vport_init,
> &g

RE: [PATCH v2] net/idpf: refine Rx/Tx queue model info

2023-03-02 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Thursday, March 2, 2023 3:27 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH v2] net/idpf: refine Rx/Tx queue model info
> 
> This patch updates queue mode info in struct idpf_adapter.
> Using is_rx_singleq_model to diffentiate rx_singq and rx_splitq explicitly,
> instead of deducing it from pointer values.
> 
> Signed-off-by: Mingxia Liu 

Acked-by: Beilei Xing 


RE: [PATCH v2] net/idpf: add hw stats ierrors

2023-02-23 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Friday, February 24, 2023 10:43 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Liu, Mingxia 
> Subject: [PATCH v2] net/idpf: add hw stats ierrors
> 
> This patch adds hw stats ierrors, when receiving packets with bad csum, 
> ierrors
> value will increase.
> 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 38cbbf369d..5b4f4fd82b 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -291,6 +291,7 @@ idpf_dev_stats_get(struct rte_eth_dev *dev, struct
> rte_eth_stats *stats)
>   pstats->rx_broadcast - pstats->rx_discards;
>   stats->opackets = pstats->tx_broadcast + pstats->tx_multicast
> +
>   pstats->tx_unicast;
> + stats->ierrors = pstats->rx_errors;
>   stats->imissed = pstats->rx_discards;
>   stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>   stats->ibytes = pstats->rx_bytes;
> --
> 2.25.1

Acked-by: Beilei Xing 



RE: [PATCH 1/1] net/cpfl: add port to port feature.

2023-02-12 Thread Xing, Beilei



> -Original Message-
> From: Liu, Mingxia 
> Sent: Wednesday, January 18, 2023 9:07 PM
> To: dev@dpdk.org; Zhang, Qi Z ; Wu, Jingjing
> ; Xing, Beilei 
> Cc: Liu, Mingxia ; Wang, Xiao W
> ; Guo, Junfeng 
> Subject: [PATCH 1/1] net/cpfl: add port to port feature.
No need for '.' at the end of the title.

> 
> - Implement hairpin queue setup/confige/enable/disable.
Confige->configure
> - Cross-vport hairpin queue implemented via hairpin_bind/unbind API.
Better to split the features into different patches.

> 
> Test step:
> 1. Make sure no bug on CP side.
> 2. Add rule on IMC.
>- devmem 0x202920C100 64 0x804
>- opcode=0x1303 prof_id=0x34 sub_prof_id=0x0 cookie=0xa2b87 key=0x18,\
>  0x0,00,00,00,00,de,0xad,0xbe,0xef,0x20,0x24,0x0,0x0,0x0,0x0,00,00,\
>  00,00,00,00,0xa,0x2,0x1d,0x64,00,00,00,00,00,00,00,00,00,00,00,00,\
>  0xa,0x2,0x1d,0x2,00,00,00,00,00,00,00,00,00,00,00,00 act=set_vsi{\
>  act_val=0 val_type=2 dst_pe=0 slot=0x0} act=set_q{\
>  qnum=0x142 no_implicit_vsi=1 prec=5}
> 3. Send packets on ixia side
>UDP packets with dmac=de:ad:be:ef:20:24 sip=10.2.29.100
>dip=10.2.29.2
The steps should be refined with an example. Step 1 can be removed.

> 
> Signed-off-by: Beilei Xing 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Junfeng Guo 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/common/idpf/idpf_common_device.c   |  50 ++
>  drivers/common/idpf/idpf_common_device.h   |   2 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 100 ++-
>  drivers/common/idpf/idpf_common_virtchnl.h |  12 +
>  drivers/common/idpf/version.map|   5 +
>  drivers/net/cpfl/cpfl_ethdev.c | 374 +++--
>  drivers/net/cpfl/cpfl_ethdev.h |   8 +-
>  drivers/net/cpfl/cpfl_logs.h   |   2 +
>  drivers/net/cpfl/cpfl_rxtx.c   | 851 +++--
>  drivers/net/cpfl/cpfl_rxtx.h   |  58 ++
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h|  18 +-
>  11 files changed, 1347 insertions(+), 133 deletions(-)
> 
> diff --git a/drivers/common/idpf/idpf_common_device.c
> b/drivers/common/idpf/idpf_common_device.c
> index b90b20d0f2..be2ec19650 100644
> --- a/drivers/common/idpf/idpf_common_device.c
> +++ b/drivers/common/idpf/idpf_common_device.c
> @@ -362,6 +362,56 @@ idpf_adapter_init(struct idpf_adapter *adapter)
>   return ret;
>  }
> 
> +int
> +idpf_adapter_common_init(struct idpf_adapter *adapter)
It's quite similar to idpf_adapter_init. Can be refined.

> +{
> + struct idpf_hw *hw = &adapter->hw;
> + int ret;
> +
> + idpf_reset_pf(hw);
> + ret = idpf_check_pf_reset_done(hw);
> + if (ret != 0) {
> + DRV_LOG(ERR, "IDPF is still resetting");
> + goto err_check_reset;
> + }
> +
> + ret = idpf_init_mbx(hw);
> + if (ret != 0) {
> + DRV_LOG(ERR, "Failed to init mailbox");
> + goto err_check_reset;
> + }
> +
> + adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
> + IDPF_DFLT_MBX_BUF_SIZE, 0);
> + if (adapter->mbx_resp == NULL) {
> + DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp
> memory");
> + ret = -ENOMEM;
> + goto err_mbx_resp;
> + }
> +
> + ret = idpf_vc_check_api_version(adapter);
> + if (ret != 0) {
> + DRV_LOG(ERR, "Failed to check api version");
> + goto err_check_api;
> + }
> +
> + ret = idpf_get_pkt_type(adapter);
> + if (ret != 0) {
> + DRV_LOG(ERR, "Failed to set ptype table");
> + goto err_check_api;
> + }
> +
> + return 0;
> +
> +err_check_api:
> + rte_free(adapter->mbx_resp);
> + adapter->mbx_resp = NULL;
> +err_mbx_resp:
> + idpf_ctlq_deinit(hw);
> +err_check_reset:
> + return ret;
> +}
> +

<...>

> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -67,6 +67,11 @@ INTERNAL {
>   idpf_vc_get_rss_key;
>   idpf_vc_get_rss_lut;
>   idpf_vc_get_rss_hash;
> + idpf_vc_ena_dis_one_queue;
> + idpf_vc_config_rxq_by_info;
> + idpf_vc_config_txq_by_info;
> + idpf_vc_get_caps_by_caps_info;
> + idpf_adapter_common_init;

Order alphabetically.

> 
>   local: *;
>  };
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index f178f3fbb8..e464d76b60 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -108,7 +108,9 @@ static int
>  cpfl_dev_link_update(struct

RE: Testpmd/l3fwd port shutdown failure on Arm Altra systems

2023-02-06 Thread Xing, Beilei
Hi Qiming,

Could you please help on this? Thanks.

BR,
Beilei

> -Original Message-
> From: Juraj Linkeš 
> Sent: Monday, February 6, 2023 4:53 PM
> To: Singh, Aman Deep ; Zhang, Yuying
> ; Xing, Beilei 
> Cc: dev@dpdk.org; Ruifeng Wang ; Zhang, Lijian
> ; Honnappa Nagarahalli
> 
> Subject: Re: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> 
> Hello i40e and testpmd maintainers,
> 
> A gentle reminder - would you please advise how to debug the issue described
> below?
> 
> Thanks,
> Juraj
> 
> On Fri, Jan 20, 2023 at 1:07 PM Juraj Linkeš 
> wrote:
> >
> > Adding the logfile.
> >
> >
> >
> > One thing that's in the logs but didn't explicitly mention is the DPDK 
> > version
> we've tried this with:
> >
> > EAL: RTE Version: 'DPDK 22.07.0'
> >
> >
> >
> > We also tried earlier versions going back to 21.08, with no luck. I also 
> > did a
> quick check on 22.11, also with no luck.
> >
> >
> >
> > Juraj
> >
> >
> >
> > From: Juraj Linkeš
> > Sent: Friday, January 20, 2023 12:56 PM
> > To: 'aman.deep.si...@intel.com' ;
> > 'yuying.zh...@intel.com' ; Xing, Beilei
> > 
> > Cc: dev@dpdk.org; Ruifeng Wang ; 'Lijian Zhang'
> > ; 'Honnappa Nagarahalli'
> > 
> > Subject: Testpmd/l3fwd port shutdown failure on Arm Altra systems
> >
> >
> >
> > Hello i40e and testpmd maintainers,
> >
> >
> >
> > We're hitting an issue with DPDK testpmd on Ampere Altra servers in FD.io
> lab.
> >
> >
> >
> > A bit of background: along with VPP performance tests (which uses DPDK),
> we're running a small number of basic DPDK testpmd and l3fwd tests in FD.io
> as well. This is to catch any performance differences due to VPP updating its
> DPDK version.
> >
> >
> >
> > We're running both l3fwd tests and testpmd tests. The Altra servers are two
> socket and the topology is TG -> DUT1 -> DUT2 -> TG, traffic flows in both
> directions, but nothing gets forwarded (with a slight caveat - put a pin in 
> this).
> There's nothing special in the tests, just forwarding traffic. The NIC we're
> testing is xl710-QDA2.
> >
> >
> >
> > The same tests are passing on all other testbeds - we have various two node
> (1 DUT, 1 TG) and three node (2 DUT, 1 TG) Intel and Arm testbeds and with
> various NICs (Intel 700 and 800 series and the Intel testbeds use some
> Mellanox NICs as well). We don't have quite the same combination of another
> three node topology with the same NIC though, so it looks like something with
> testpmd/l3fwd and xl710-QDA2 on Altra servers.
> >
> >
> >
> > VPP performance tests are passing, but l3fwd and testpmd fail. This leads us
> to believe to it's a software issue, but there could something wrong with the
> hardware. I'll talk about testpmd from now on, but as far we can tell, the
> behavior is the same for testpmd and l3fwd.
> >
> >
> >
> > Getting back to the caveat mentioned earlier, there seems to be something
> wrong with port shutdown. When running testpmd on a testbed that hasn't
> been used for a while it seems that all ports are up right away (we don't see
> any "Port 0|1: link state change event") and the setup works fine (forwarding
> works). After restarting testpmd (restarting on one server is sufficient), the
> ports between DUT1 and DUT2 (but not between DUTs and TG) go down and
> are not usable in DPDK, VPP or in Linux (with i40e kernel driver) for a while
> (measured in minutes, sometimes dozens of minutes; the duration is seemingly
> random). The ports eventually recover and can be used again, but there's
> nothing in syslog suggesting what happened.
> >
> >
> >
> > What seems to be happening is testpmd put the ports into some faulty state.
> This only happens on the DUT1 -> DUT2 link though (the ports between the
> two testpmds), not on TG -> DUT1 link (the TG port is left alone).
> >
> >
> >
> > Some more info:
> >
> > We've come across the issue with this configuration:
> >
> > OS: Ubuntu20.04 with kernel 5.4.0-65-generic.
> >
> > Old NIC firmware, never upgraded: 6.01 0x800035da 1.1747.0.
> >
> > Drivers versions: i40e 2.17.15 and iavf 4.3.19.
> >
> >
> >
> > As well as with this configuration:
> >
> > OS: Ubuntu22.04 with kernel 5.15.0-46-generic.
> >
> > Updated firmware: 8.30 0x8000a4ae 1.2926.0.
&

RE: [PATCH v6 00/19] net/idpf: introduce idpf common modle

2023-02-05 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Monday, February 6, 2023 10:59 AM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v6 00/19] net/idpf: introduce idpf common modle
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, February 3, 2023 5:43 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Zhang, Qi Z ; Xing, Beilei
> > 
> > Subject: [PATCH v6 00/19] net/idpf: introduce idpf common modle
> >
> > From: Beilei Xing 
> >
> > Refactor idpf pmd by introducing idpf common module, which will be
> > also consumed by a new PMD - CPFL (Control Plane Function Library) PMD.
> >
> > v2 changes:
> >  - Refine irq map/unmap functions.
> >  - Fix cross compile issue.
> > v3 changes:
> >  - Embed vport_info field into the vport structure.
> >  - Refine APIs' name and order in version.map.
> >  - Refine commit log.
> > v4 changes:
> >  - Refine commit log.
> > v5 changes:
> >  - Refine version.map.
> >  - Fix typo.
> >  - Return error log.
> > v6 changes:
> >  - Refine API name in common module.
> >
> > Beilei Xing (19):
> >   common/idpf: add adapter structure
> >   common/idpf: add vport structure
> >   common/idpf: add virtual channel functions
> >   common/idpf: introduce adapter init and deinit
> >   common/idpf: add vport init/deinit
> >   common/idpf: add config RSS
> >   common/idpf: add irq map/unmap
> >   common/idpf: support get packet type
> >   common/idpf: add vport info initialization
> >   common/idpf: add vector flags in vport
> >   common/idpf: add rxq and txq struct
> >   common/idpf: add help functions for queue setup and release
> >   common/idpf: add Rx and Tx data path
> >   common/idpf: add vec queue setup
> >   common/idpf: add avx512 for single queue model
> >   common/idpf: refine API name for vport functions
> >   common/idpf: refine API name for queue config module
> >   common/idpf: refine API name for data path module
> >   common/idpf: refine API name for virtual channel functions
> >
> >  drivers/common/idpf/base/idpf_controlq_api.h  |6 -
> >  drivers/common/idpf/base/meson.build  |2 +-
> >  drivers/common/idpf/idpf_common_device.c  |  655 +
> >  drivers/common/idpf/idpf_common_device.h  |  195 ++
> >  drivers/common/idpf/idpf_common_logs.h|   47 +
> >  drivers/common/idpf/idpf_common_rxtx.c| 1458 
> >  drivers/common/idpf/idpf_common_rxtx.h|  278 +++
> >  .../idpf/idpf_common_rxtx_avx512.c}   |   24 +-
> >  .../idpf/idpf_common_virtchnl.c}  |  945 ++--
> >  drivers/common/idpf/idpf_common_virtchnl.h|   52 +
> >  drivers/common/idpf/meson.build   |   38 +
> >  drivers/common/idpf/version.map   |   61 +-
> >  drivers/net/idpf/idpf_ethdev.c|  552 +
> >  drivers/net/idpf/idpf_ethdev.h|  194 +-
> >  drivers/net/idpf/idpf_logs.h  |   24 -
> >  drivers/net/idpf/idpf_rxtx.c  | 2107 +++--
> >  drivers/net/idpf/idpf_rxtx.h  |  253 +-
> >  drivers/net/idpf/meson.build  |   18 -
> >  18 files changed, 3442 insertions(+), 3467 deletions(-)  create mode
> > 100644 drivers/common/idpf/idpf_common_device.c
> >  create mode 100644 drivers/common/idpf/idpf_common_device.h
> >  create mode 100644 drivers/common/idpf/idpf_common_logs.h
> >  create mode 100644 drivers/common/idpf/idpf_common_rxtx.c
> >  create mode 100644 drivers/common/idpf/idpf_common_rxtx.h
> >  rename drivers/{net/idpf/idpf_rxtx_vec_avx512.c =>
> > common/idpf/idpf_common_rxtx_avx512.c} (97%)  rename
> > drivers/{net/idpf/idpf_vchnl.c => common/idpf/idpf_common_virtchnl.c}
> > (52%)  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> >
> > --
> > 2.26.2
> 
> Overall looks good to me, just couple thing need to fix
> 
> 1. fix copy right date to 2023
> 2. fix some meson build , you can use devtools/check-meson.py to check the
> warning.

Yes, updated in v7.

> 
> 
> 



RE: [PATCH] net/iavf: add check for mbuf

2023-02-01 Thread Xing, Beilei



> -Original Message-
> From: Ye, MingjinX 
> Sent: Tuesday, January 31, 2023 1:20 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming ; Zhou, YidingX
> ; Ye, MingjinX ;
> sta...@dpdk.org; Wu, Jingjing ; Xing, Beilei
> 
> Subject: [PATCH] net/iavf: add check for mbuf
> 
> The scalar Tx path would send wrong mbuf that causes the kernel driver to fire
> the MDD event.
> 
> This patch adds mbuf detection in tx_prepare to fix this issue, rte_errno 
> will be
> set to EINVAL and returned if the verification fails.
 
I don't think the PMD needs to check all packet contents; there are so many 
protocols supported.
It depends on HW capability, and the application and user should ensure packet 
accuracy.

> 
> Fixes: 3fd32df381f8 ("net/iavf: check Tx packet with correct UP and queue")
> Fixes: 12b435bf8f2f ("net/iavf: support flex desc metadata extraction")
> Fixes: f28fbd1e6b50 ("net/iavf: check max SIMD bitwidth")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Mingjin Ye 



RE: [PATCH v3 4/7] i40e: fix whitespace

2023-01-16 Thread Xing, Beilei



> -Original Message-
> From: Stephen Hemminger 
> Sent: Tuesday, January 17, 2023 8:15 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger ; Zhang, Yuying
> ; Xing, Beilei 
> Subject: [PATCH v3 4/7] i40e: fix whitespace
> 
> The style standard is to use blank after keywords.
> I.e "if (" not "if("
> 
> Signed-off-by: Stephen Hemminger 
Acked-by: Beilei Xing 


RE: [PATCH v2 05/15] common/idpf: add vport init/deinit

2023-01-08 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Sunday, January 8, 2023 8:11 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Wu, Wenjun1 
> Subject: RE: [PATCH v2 05/15] common/idpf: add vport init/deinit
> 
> 
> 
> > -Original Message-
> > From: Xing, Beilei 
> > Sent: Friday, January 6, 2023 5:16 PM
> > To: Wu, Jingjing 
> > Cc: dev@dpdk.org; Zhang, Qi Z ; Xing, Beilei
> > ; Wu, Wenjun1 
> > Subject: [PATCH v2 05/15] common/idpf: add vport init/deinit
> >
> > From: Beilei Xing 
> >
> > Add vport init/deinit in common module.
> >
> > Signed-off-by: Wenjun Wu 
> > Signed-off-by: Beilei Xing 
> > ---
> >  drivers/common/idpf/idpf_common_device.c   | 128
> +++
> >  drivers/common/idpf/idpf_common_device.h   |   8 ++
> >  drivers/common/idpf/idpf_common_virtchnl.c |  16 +--
> >  drivers/common/idpf/idpf_common_virtchnl.h |   2 -
> >  drivers/common/idpf/version.map|   4 +-
> >  drivers/net/idpf/idpf_ethdev.c | 138 ++---
> >  6 files changed, 156 insertions(+), 140 deletions(-)
> >
> > diff --git a/drivers/common/idpf/idpf_common_device.c
> > b/drivers/common/idpf/idpf_common_device.c
> > index b2b42443e4..2aad9bcdd3 100644
> > --- a/drivers/common/idpf/idpf_common_device.c
> > +++ b/drivers/common/idpf/idpf_common_device.c
> > @@ -158,4 +158,132 @@ idpf_adapter_deinit(struct idpf_adapter
> *adapter)
> > return 0;
> >  }
> >
> > +int
> > +idpf_vport_init(struct idpf_vport *vport,
> > +   struct virtchnl2_create_vport *create_vport_info,
> > +   void *dev_data)
> > +{
> > +   struct virtchnl2_create_vport *vport_info;
> > +   int i, type, ret;
> > +
> > +   vport->vport_info = rte_zmalloc(NULL, IDPF_DFLT_MBX_BUF_SIZE, 0);
> 
> Can we embed vport_info structure into the vport structure?
> Seems its not necessary to malloc/free the vport_info which always
> associated with a vport structure?
> 
 
Makes sense, will update in the next version.
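The suggestion agreed above — embedding `vport_info` in the vport structure instead of allocating and freeing it — can be sketched as follows. The struct layout and `INFO_BUF_SZ` are illustrative, not the real idpf definitions:

```c
#include <assert.h>
#include <string.h>

#define INFO_BUF_SZ 64	/* hypothetical; the real size is IDPF_DFLT_MBX_BUF_SIZE */

/* Before: vport->vport_info = rte_zmalloc(...), freed again on deinit.
 * After the review: the buffer lives inside the vport, so init/deinit
 * can neither leak nor double-free it. */
struct vport_info { char buf[INFO_BUF_SZ]; };
struct vport { struct vport_info vport_info; };

static void vport_init(struct vport *v, const char *info, size_t len)
{
	memset(&v->vport_info, 0, sizeof(v->vport_info));
	memcpy(v->vport_info.buf, info, len < INFO_BUF_SZ ? len : INFO_BUF_SZ);
}
```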



RE: [PATCH] net/idpf: fix build option check

2022-12-08 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Thursday, December 8, 2022 11:31 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> sta...@dpdk.org
> Subject: [PATCH] net/idpf: fix build option check
> 
> When enable_iova_as_pa option is disabled, idpf driver should avoid the
> building in its build file.
> 
> Fixes: 5bf87b45b2c8 (net/idpf: add AVX512 data path for single queue model)
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Jingjing Wu 
> ---
>  drivers/net/idpf/meson.build | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
> index 998afd21fe..650dade0b9 100644
> --- a/drivers/net/idpf/meson.build
> +++ b/drivers/net/idpf/meson.build
> @@ -7,6 +7,12 @@ if is_windows
>  subdir_done()
>  endif
> 
> +if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> +build = false
> +reason = 'driver does not support disabling IOVA as PA mode'
> +subdir_done()
> +endif
> +
>  deps += ['common_idpf']
> 
>  sources = files(
> --
> 2.25.1

Acked-by: Beilei Xing 



RE: [PATCH v2] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread Xing, Beilei



> -Original Message-
> From: Wu, Jingjing 
> Sent: Friday, November 18, 2022 2:24 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Peng, Yuan 
> Subject: RE: [PATCH v2] net/idpf: fix crash when launching l3fwd
> 
> > -
> > if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> > PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not
> supported",
> >  conf->txmode.mq_mode);
> > diff --git a/drivers/net/idpf/idpf_vchnl.c
> > b/drivers/net/idpf/idpf_vchnl.c index ac6486d4ef..88770447f8 100644
> > --- a/drivers/net/idpf/idpf_vchnl.c
> > +++ b/drivers/net/idpf/idpf_vchnl.c
> > @@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
> > int err, len;
> >
> > alloc_vec = vport->recv_vectors;
> > +   if (alloc_vec == NULL)
> > +   return -EINVAL;
> > +
> Would it be better to check before idpf_vc_dealloc_vectors?
Makes sense, will update in the next version.
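The fix discussed in this thread is a guard on `recv_vectors` before the dealloc path runs. A minimal sketch of the pattern — the function body and return value are illustrative, not the real virtchnl code:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Validate the pointer before entering the dealloc path, as agreed in the
 * review, rather than failing partway through it. */
static int dealloc_vectors(void *recv_vectors)
{
	if (recv_vectors == NULL)
		return -EINVAL;
	/* ... the real code would send VIRTCHNL2_OP_DEALLOC_VECTORS here ... */
	return 0;
}
```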


RE: [PATCH 03/13] net/idpf: support device initialization

2022-11-02 Thread Xing, Beilei


> -Original Message-
> From: Raslan Darawsheh 
> Sent: Wednesday, November 2, 2022 11:31 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> ; NBU-Contact-Thomas Monjalon (EXTERNAL)
> 
> Subject: RE: [PATCH 03/13] net/idpf: support device initialization
> 
> Hi,
> 

[snip] 
 
> > --
> > 2.25.1
> I'd like to report that this patch is causing a compilation failure over the 
> main
> tree 22.11-rc2:
> 
> this is the failure which I see:
> ing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mno-avx512f -
> mno-avx512f -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -
> DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.idpf -MD -MQ
> drivers/libtmp_rte_net_idpf.a.p/net_idpf_idpf_vchnl.c.o -MF
> drivers/libtmp_rte_net_idpf.a.p/net_idpf_idpf_vchnl.c.o.d -o
> drivers/libtmp_rte_net_idpf.a.p/net_idpf_idpf_vchnl.c.o -
> c ../../root/dpdk/drivers/net/idpf/idpf_vchnl.c
> ../../root/dpdk/drivers/net/idpf/idpf_vchnl.c:142:13: error: comparison of
> constant 522 with expression of type 'enum virtchnl_ops' is always false [-
> Werror,-Wtautological-constant-out-of-range-compare]
> if (opcode == VIRTCHNL2_OP_EVENT) {
> ~~ ^  ~~
> 1 error generated.
> [1424/2559] Generating eal.sym_chk with a meson_exe.py custom command
> ninja: build stopped: subcommand failed.
> 
> And it's happening with the CLANG compiler:
> clang version 3.4.2 (tags/RELEASE_34/dot2-final)
> 
> Kindest regards,
> Raslan Darawsheh

Thanks for reporting, it should be fixed with the fix patch 
https://patches.dpdk.org/project/dpdk/patch/20221101024350.105241-1-beilei.x...@intel.com/.

BR,
Beilei 
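The build failure quoted above is clang proving that a variable of type `enum virtchnl_ops` can never equal 522, because that value lies outside the enum's range. A minimal reproduction of the pitfall and the usual fix — carrying the opcode in a fixed-width integer — with illustrative enum values in place of the real `virtchnl.h`/`virtchnl2.h` definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the real definitions. */
enum virtchnl_ops { VIRTCHNL_OP_VERSION = 1, VIRTCHNL_OP_MAX = 112 };
#define VIRTCHNL2_OP_EVENT 522

/* Testing an 'enum virtchnl_ops' variable against 522 is always false,
 * which old clang flags with
 * -Wtautological-constant-out-of-range-compare.
 * Keeping the opcode in a uint32_t sidesteps the warning. */
static int is_event(uint32_t opcode)
{
	return opcode == VIRTCHNL2_OP_EVENT;
}
```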



RE: [PATCH 03/13] net/idpf: support device initialization

2022-10-31 Thread Xing, Beilei


> -Original Message-
> From: Ali Alnubani 
> Sent: Tuesday, November 1, 2022 2:01 AM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> ; NBU-Contact-Thomas Monjalon (EXTERNAL)
> 
> Subject: RE: [PATCH 03/13] net/idpf: support device initialization
> 
> > -Original Message-
> > From: Junfeng Guo 
> > Sent: Wednesday, August 3, 2022 2:31 PM
> > To: qi.z.zh...@intel.com; jingjing...@intel.com; beilei.x...@intel.com
> > Cc: dev@dpdk.org; junfeng@intel.com; Xiaoyun Li
> > ; Xiao Wang 
> > Subject: [PATCH 03/13] net/idpf: support device initialization
> >
> > Support device init and the following dev ops:
> > - dev_configure
> > - dev_start
> > - dev_stop
> > - dev_close
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Junfeng Guo 
> > ---
> 
> Hello,
> 
> This patch is causing the following build failure in latest main (6a88cbc) 
> with
> clang 3.4.2 in CentOS 7:
> drivers/net/idpf/idpf_vchnl.c:141:13: error: comparison of constant 522 with
> expression of type 'enum virtchnl_ops' is always false [-Werror,-
> Wtautological-constant-out-of-range-compare]
> 

Hi,

Thanks for reporting the issue, fix patch has been sent, 
https://patches.dpdk.org/project/dpdk/patch/20221101024350.105241-1-beilei.x...@intel.com/

> Regards,
> Ali


RE: [PATCH v15 00/18] add support for idpf PMD in DPDK

2022-10-30 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Saturday, October 29, 2022 10:48 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: dev@dpdk.org; Thomas Monjalon 
> Subject: Re: [PATCH v15 00/18] add support for idpf PMD in DPDK
> 
> On 10/29/22 06:27, beilei.x...@intel.com wrote:
> > From: Beilei Xing 
> >
> > This patchset introduced the idpf (Infrastructure Data Path Function) PMD
> in DPDK for Intel® IPU E2000 (Device ID: 0x1452).
> > The Intel® IPU E2000 targets to deliver high performance under real
> workloads with security and isolation.
> > Please refer to
> > https://www.intel.com/content/www/us/en/products/network-
> io/infrastruc
> > ture-processing-units/asic/e2000-asic.html
> > for more information.
> >
> > Linux upstream is still ongoing, previous work refers to
> https://patchwork.ozlabs.org/project/intel-wired-
> lan/patch/20220128001009.721392-20-alan.br...@intel.com/.
> >
> > v2-v4:
> > fixed some coding style issues and did some refactors.
> >
> > v5:
> > fixed typo.
> >
> > v6-v9:
> > fixed build errors and coding style issues.
> >
> > v11:
> >   - move shared code to common/idpf/base
> >   - Create one vport if there's no vport devargs
> >   - Refactor if conditions according to coding style
> >   - Refactor virtual channel return values
> >   - Refine dev_stop function
> >   - Refine RSS lut/key
> >   - Fix build error
> >
> > v12:
> >   - Refine dev_configure
> >   - Fix coding style according to the comments
> >   - Re-order patch
> >   - Romove dev_supported_ptypes_get
> >
> > v13:
> >   - refine dev_start/stop and queue_start/stop
> >   - fix timestamp offload
> >
> > v14:
> >   - fix wrong position for rte_validate_tx_offload
> >
> > v15:
> >   - refine the return value for ethdev ops.
> >   - removce forward static declarations.
> >   - refine get caps.
> >   - fix lock/unlock handling.
> 
> Applied to dpdk-next-net/main, thanks.
> 
> I've a number of concerns:
>   * conditional compilation IDPF_RX_PTYPE_OFFLOAD in [PATCH v15 17/18]

Will remove the conditional compilation

> net/idpf: add AVX512 data path for single queue model
>   * the same prefix used for functions in common/idpf/base and net/idpf
> drivers

I think the name of PMD and common library can be the same, right?

>   * common/idpf/base uses own defines for negative errno (defined as a
> number with corresponding errno in a comment). Strictly speaking it is not
> the same, but work fine in a majority of cases

Makes sense, will remove the private defines.
Thanks for your review. I saw the status in patchwork is already accepted, but 
didn't see idpf in dpdk-next-net, so I will send v16 to address the comments first.

> 
> So, final decision will be done by Thomas on pulling to main tree.
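The reviewer's point about the base code using private defines for negative errno values can be illustrated as below; `IDPF_ERR_PARAM` and its value here are hypothetical, the real base code has its own numbering:

```c
#include <assert.h>
#include <errno.h>

/* A private define documented as "parameter error" but numerically
 * different from -EINVAL (hypothetical value, for illustration only). */
#define IDPF_ERR_PARAM	(-53)

/* Callers testing 'ret == -EINVAL' would silently miss IDPF_ERR_PARAM;
 * normalizing at the driver boundary keeps both conventions consistent,
 * which is why dropping the private defines was suggested. */
static int normalize_ret(int base_ret)
{
	return base_ret == IDPF_ERR_PARAM ? -EINVAL : base_ret;
}
```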


RE: [PATCH v14 06/18] net/idpf: add support for queue start

2022-10-28 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Friday, October 28, 2022 11:51 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v14 06/18] net/idpf: add support for queue start
> 
> On 10/27/22 10:47, Junfeng Guo wrote:
> > Add support for these device ops:
> >   - rx_queue_start
> >   - tx_queue_start
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> > +#define IDPF_RX_BUF_STRIDE 64
> > +int
> > +idpf_vc_config_rxqs(struct idpf_vport *vport) {
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +   struct idpf_rx_queue **rxq =
> > +   (struct idpf_rx_queue **)vport->dev_data->rx_queues;
> > +   struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
> > +   struct virtchnl2_rxq_info *rxq_info;
> > +   struct idpf_cmd_info args;
> > +   uint16_t total_qs, num_qs;
> > +   int size, i, j;
> > +   int err = 0;
> > +   int k = 0;
> > +
> > +   total_qs = vport->num_rx_q + vport->num_rx_bufq;
> > +   while (total_qs) {
> > +   if (total_qs > adapter->max_rxq_per_msg) {
> > +   num_qs = adapter->max_rxq_per_msg;
> > +   total_qs -= adapter->max_rxq_per_msg;
> > +   } else {
> > +   num_qs = total_qs;
> > +   total_qs = 0;
> > +   }
> > +
> > +   size = sizeof(*vc_rxqs) + (num_qs - 1) *
> > +   sizeof(struct virtchnl2_rxq_info);
> > +   vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
> > +   if (vc_rxqs == NULL) {
> > +   PMD_DRV_LOG(ERR, "Failed to allocate
> virtchnl2_config_rx_queues");
> > +   err = -ENOMEM;
> > +   break;
> > +   }
> > +   vc_rxqs->vport_id = vport->vport_id;
> > +   vc_rxqs->num_qinfo = num_qs;
> > +   if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
> {
> > +   for (i = 0; i < num_qs; i++, k++) {
> > +   rxq_info = &vc_rxqs->qinfo[i];
> > +   rxq_info->dma_ring_addr = rxq[k]-
> >rx_ring_phys_addr;
> > +   rxq_info->type =
> VIRTCHNL2_QUEUE_TYPE_RX;
> > +   rxq_info->queue_id = rxq[k]->queue_id;
> > +   rxq_info->model =
> VIRTCHNL2_QUEUE_MODEL_SINGLE;
> > +   rxq_info->data_buffer_size = rxq[k]-
> >rx_buf_len;
> > +   rxq_info->max_pkt_size = vport-
> >max_pkt_len;
> > +
> > +   rxq_info->desc_ids =
> VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
> > +   rxq_info->qflags |=
> VIRTCHNL2_RX_DESC_SIZE_32BYTE;
> > +
> > +   rxq_info->ring_len = rxq[k]->nb_rx_desc;
> > +   }
> > +   } else {
> > +   for (i = 0; i < num_qs / 3; i++, k++) {
> > +   /* Rx queue */
> > +   rxq_info = &vc_rxqs->qinfo[i * 3];
> > +   rxq_info->dma_ring_addr =
> > +   rxq[k]->rx_ring_phys_addr;
> > +   rxq_info->type =
> VIRTCHNL2_QUEUE_TYPE_RX;
> > +   rxq_info->queue_id = rxq[k]->queue_id;
> > +   rxq_info->model =
> VIRTCHNL2_QUEUE_MODEL_SPLIT;
> > +   rxq_info->data_buffer_size = rxq[k]-
> >rx_buf_len;
> > +   rxq_info->max_pkt_size = vport-
> >max_pkt_len;
> > +
> > +   rxq_info->desc_ids =
> VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
> > +   rxq_info->qflags |=
> VIRTCHNL2_RX_DESC_SIZE_32BYTE;
> > +
> > +   rxq_info->ring_len = rxq[k]->nb_rx_desc;
> > +   rxq_info->rx_bufq1_id = rxq[k]->bufq1-
> >queue_id;
> > +   rxq_info->rx_bufq2_id = rxq[k]->bufq2-
> >queue_id;
> > +   rxq_info->rx_buffer_low_watermark = 64;
> > +
> > +   /* Buffer queue */
> > +   for (j = 1; j <= IDPF_RX_BUFQ_PER_GR

RE: [PATCH v14 02/18] net/idpf: add support for device initialization

2022-10-28 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Friday, October 28, 2022 11:35 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> ; Wu, Wenjun1 
> Subject: Re: [PATCH v14 02/18] net/idpf: add support for device initialization
> 
> On 10/27/22 10:47, Junfeng Guo wrote:
> > Support device init and add the following dev ops:
> >   - dev_configure
> >   - dev_close
> >   - dev_infos_get
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Wenjun Wu 
> > Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> > +static int idpf_dev_configure(struct rte_eth_dev *dev); static int
> > +idpf_dev_close(struct rte_eth_dev *dev); static int
> > +idpf_dev_info_get(struct rte_eth_dev *dev,
> > +struct rte_eth_dev_info *dev_info); static void
> > +idpf_adapter_rel(struct idpf_adapter *adapter);
> > +
> > +static const struct eth_dev_ops idpf_eth_dev_ops = {
> > +   .dev_configure  = idpf_dev_configure,
> > +   .dev_close  = idpf_dev_close,
> > +   .dev_infos_get  = idpf_dev_info_get,
> > +};
> 
> Typically it is better to avoid forward static declarations and simply define
> the ops structure after callbacks.

OK, will fix it in v15.

> 
> > +
> > +static int
> > +idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> > +*dev_info) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +
> > +   dev_info->max_rx_queues = adapter->caps->max_rx_q;
> > +   dev_info->max_tx_queues = adapter->caps->max_tx_q;
> > +   dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
> > +   dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
> > +
> > +   dev_info->max_mtu = dev_info->max_rx_pktlen -
> IDPF_ETH_OVERHEAD;
> > +   dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> > +
> > +   dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
> 
> I guess it make sense if and only if you support API to add/remove unicast
> MAC addresses.

Yes, will remove this info.
> 
> > +
> > +   return 0;
> > +
> 
> [snip]
> 
> > +static int
> > +idpf_init_vport(struct rte_eth_dev *dev) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +   struct idpf_adapter *adapter = vport->adapter;
> > +   uint16_t idx = adapter->cur_vport_idx;
> > +   struct virtchnl2_create_vport *vport_info =
> > +   (struct virtchnl2_create_vport *)adapter-
> >vport_recv_info[idx];
> > +   int i, type, ret;
> > +
> > +   vport->vport_id = vport_info->vport_id;
> > +   vport->txq_model = vport_info->txq_model;
> > +   vport->rxq_model = vport_info->rxq_model;
> > +   vport->num_tx_q = vport_info->num_tx_q;
> > +   vport->num_tx_complq = vport_info->num_tx_complq;
> > +   vport->num_rx_q = vport_info->num_rx_q;
> > +   vport->num_rx_bufq = vport_info->num_rx_bufq;
> > +   vport->max_mtu = vport_info->max_mtu;
> > +   rte_memcpy(vport->default_mac_addr,
> > +  vport_info->default_mac_addr, ETH_ALEN);
> > +   vport->sw_idx = idx;
> > +
> > +   for (i = 0; i < vport_info->chunks.num_chunks; i++) {
> > +   type = vport_info->chunks.chunks[i].type;
> > +   switch (type) {
> > +   case VIRTCHNL2_QUEUE_TYPE_TX:
> > +   vport->chunks_info.tx_start_qid =
> > +   vport_info->chunks.chunks[i].start_queue_id;
> > +   vport->chunks_info.tx_qtail_start =
> > +   vport_info->chunks.chunks[i].qtail_reg_start;
> > +   vport->chunks_info.tx_qtail_spacing =
> > +   vport_info-
> >chunks.chunks[i].qtail_reg_spacing;
> > +   break;
> > +   case VIRTCHNL2_QUEUE_TYPE_RX:
> > +   vport->chunks_info.rx_start_qid =
> > +   vport_info->chunks.chunks[i].start_queue_id;
> > +   vport->chunks_info.rx_qtail_start =
> > +   vport_info->chunks.chunks[i].qtail_reg_start;
> > +   vport->chunks_info.rx_qtail_spacing =
> > +   vport_info-
> >chunks.chunks[i].qtail_reg_spacing;
> > +   break;
> > + 

RE: [PATCH v11 02/18] net/idpf: add support for device initialization

2022-10-28 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Friday, October 28, 2022 11:14 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> 
> Subject: Re: [PATCH v11 02/18] net/idpf: add support for device initialization
> 
> On 10/25/22 11:57, Andrew Rybchenko wrote:
> > On 10/24/22 16:12, Junfeng Guo wrote:
> >> Support device init and add the following dev ops:
> >>   - dev_configure
> >>   - dev_close
> >>   - dev_infos_get
> >>
> >> Signed-off-by: Beilei Xing 
> >> Signed-off-by: Xiaoyun Li 
> >> Signed-off-by: Xiao Wang 
> >> Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> >> +struct idpf_adapter *
> >> +idpf_find_adapter(struct rte_pci_device *pci_dev)
> >
> > It looks like the function requires corresponding lock to be held. If
> > yes, it should be documented and code fixed. If no, it should be
> > explaiend why.
> 
> I still don't understand it is a new patch. It is hardly safe to return a 
> pointer
> to an element list when you drop lock.

Sorry, I misunderstood your last comment; I thought you meant a lock for the
adapter_list.
I don't think we need a lock for the adapter: one adapter here doesn't map to
one ethdev but to one PCI device. We can create several vports for one
adapter, and each vport maps to one ethdev.
  
> 
> >> +    /* valid only if rxq_model is split Q */
> >> +    uint16_t num_rx_bufq;
> >> +
> >> +    uint16_t max_mtu;
> >
> > unused
> 
> Comments? It is still in place in a new version.

All the above info is returned by the backend when creating a vport, so we
save it after the vport is created.

> 
> >> +int
> >> +idpf_vc_get_caps(struct idpf_adapter *adapter) {
> >> +    struct virtchnl2_get_capabilities caps_msg;
> >> +    struct idpf_cmd_info args;
> >> +    int err;
> >> +
> >> + memset(&caps_msg, 0, sizeof(struct
> >> +virtchnl2_get_capabilities));
> >> + caps_msg.csum_caps =
> >> + VIRTCHNL2_CAP_TX_CSUM_L3_IPV4    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP    |
> >> + VIRTCHNL2_CAP_TX_CSUM_GENERIC    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L3_IPV4    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP    |
> >> + VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> >> +
> >> + caps_msg.seg_caps =
> >> + VIRTCHNL2_CAP_SEG_IPV4_TCP    |
> >> + VIRTCHNL2_CAP_SEG_IPV4_UDP    |
> >> + VIRTCHNL2_CAP_SEG_IPV4_SCTP    |
> >> + VIRTCHNL2_CAP_SEG_IPV6_TCP    |
> >> + VIRTCHNL2_CAP_SEG_IPV6_UDP    |
> >> + VIRTCHNL2_CAP_SEG_IPV6_SCTP    |
> >> + VIRTCHNL2_CAP_SEG_GENERIC;
> >> +
> >> + caps_msg.rss_caps =
> >> + VIRTCHNL2_CAP_RSS_IPV4_TCP    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_UDP    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_SCTP    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_OTHER    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_TCP    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_UDP    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_SCTP    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_OTHER    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_AH    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_ESP    |
> >> + VIRTCHNL2_CAP_RSS_IPV4_AH_ESP    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_AH    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_ESP    |
> >> + VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> >> +
> >> + caps_msg.hsplit_caps =
> >> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L2    |
> >> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L3    |
> >> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4    |
> >> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
> >> +
> >> + caps_msg.rsc_caps =
> >> + VIR

RE: [PATCH v11 15/18] net/idpf: add support for Rx offloading

2022-10-27 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 6:04 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v11 15/18] net/idpf: add support for Rx offloading
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Add Rx offloading support:
> >   - support CHKSUM and RSS offload for split queue model
> >   - support CHKSUM offload for single queue model
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> > ---
> >   doc/guides/nics/features/idpf.ini |   2 +
> >   drivers/net/idpf/idpf_ethdev.c|   9 ++-
> >   drivers/net/idpf/idpf_rxtx.c  | 122 ++
> >   3 files changed, 132 insertions(+), 1 deletion(-)
> >
> > diff --git a/doc/guides/nics/features/idpf.ini
> > b/doc/guides/nics/features/idpf.ini
> > index d4eb9b374c..c86d9378ea 100644
> > --- a/doc/guides/nics/features/idpf.ini
> > +++ b/doc/guides/nics/features/idpf.ini
> > @@ -9,5 +9,7 @@
> >   [Features]
> >   Queue start/stop = Y
> >   MTU update   = Y
> > +L3 checksum offload  = P
> > +L4 checksum offload  = P
> 
> RSS hash missing
> 
> >   Packet type parsing  = Y
> >   Linux= Y
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 739cf31d65..d8cc423a23 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -94,7 +94,14 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> > dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
> > dev_info->dev_capa =
> RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
> > RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
> > -   dev_info->rx_offload_capa = 0;
> > +
> > +   dev_info->rx_offload_capa =
> > +   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM   |
> > +   RTE_ETH_RX_OFFLOAD_UDP_CKSUM|
> > +   RTE_ETH_RX_OFFLOAD_TCP_CKSUM|
> > +   RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > +   RTE_ETH_RX_OFFLOAD_RSS_HASH;
> 
> As I understand you know mode here and you should not report offload
> which is not supported in current mode (RSS vs single queue).

Yes, I will remove RTE_ETH_RX_OFFLOAD_RSS_HASH here since RSS hash is not
supported in single queue mode currently.
So we needn't add 'RSS hash = Y' in idpf.ini, right?

> 
> > +
> > dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
> >
> > dev_info->default_rxconf = (struct rte_eth_rxconf) {
> 
> [snip]



RE: [PATCH v11 05/18] net/idpf: add support for device start and stop

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 5:50 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v11 05/18] net/idpf: add support for device start and
> stop
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Add dev ops dev_start, dev_stop and link_update.
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> > ---
> >   drivers/net/idpf/idpf_ethdev.c | 89
> ++
> >   drivers/net/idpf/idpf_ethdev.h |  5 ++
> >   2 files changed, 94 insertions(+)
> >
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 1d2075f466..4c7a2d0748 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -29,17 +29,42 @@ static const char * const idpf_valid_args[] = {
> >   };
> > +static int
> > +idpf_start_queues(struct rte_eth_dev *dev) {
> > +   struct idpf_rx_queue *rxq;
> > +   struct idpf_tx_queue *txq;
> > +   int err = 0;
> > +   int i;
> > +
> > +   for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +   txq = dev->data->tx_queues[i];
> > +   if (txq == NULL || txq->tx_deferred_start)
> > +   continue;
> > +
> > +   PMD_DRV_LOG(ERR, "Start Tx queues not supported yet");
> > +   return -ENOTSUP;
> > +   }
> > +
> > +   for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +   rxq = dev->data->rx_queues[i];
> > +   if (rxq == NULL || rxq->rx_deferred_start)
> > +   continue;
> > +
> > +   PMD_DRV_LOG(ERR, "Start Rx queues not supported yet");
> > +   return -ENOTSUP;
> > +   }
> > +
> > +   return err;
> > +}
> > +
> > +static int
> > +idpf_dev_start(struct rte_eth_dev *dev) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +
> > +   if (dev->data->mtu > vport->max_mtu) {
> > +   PMD_DRV_LOG(ERR, "MTU should be less than %d", vport-
> >max_mtu);
> > +   return -1;
> > +   }
> > +
> > +   vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
> > +
> > +   if (idpf_start_queues(dev) != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to start queues");
> > +   return -1;
> > +   }
> > +
> > +   if (idpf_vc_ena_dis_vport(vport, true) != 0) {
> > +   PMD_DRV_LOG(ERR, "Failed to enable vport");
> 
> Don't you need to stop queues here?

In this patch we didn't implement starting the HW queues, so I will remove the
queue start here and add start_queues/stop_queues once the APIs are finished.

> 
> > +   return -1;
> > +   }
> > +
> > +   return 0;
> > +}
> > +
> > +static int
> > +idpf_dev_stop(struct rte_eth_dev *dev) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> 
> Stop queues?

Same.
> 
> > +
> > +   idpf_vc_ena_dis_vport(vport, false);
> > +
> > +   return 0;
> > +}
> > +
> >   static int
> >   idpf_dev_close(struct rte_eth_dev *dev)
> >   {


RE: [PATCH v11 03/18] net/idpf: add Tx queue setup

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 5:40 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: Re: [PATCH v11 03/18] net/idpf: add Tx queue setup
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Add support for tx_queue_setup ops.
> >
> > In the single queue model, the same descriptor queue is used by SW to
> > post buffer descriptors to HW and by HW to post completed descriptors
> > to SW.
> >
> > In the split queue model, "RX buffer queues" are used to pass
> > descriptor buffers from SW to HW while Rx queues are used only to pass
> > the descriptor completions, that is, descriptors that point to
> > completed buffers, from HW to SW. This is contrary to the single queue
> > model in which Rx queues are used for both purposes.
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Junfeng Guo 
> 
> > +
> > +   size = sizeof(struct idpf_flex_tx_sched_desc) * txq->nb_tx_desc;
> > +   for (i = 0; i < size; i++)
> > +   ((volatile char *)txq->desc_ring)[i] = 0;
> 
> Please, add a comment which explains why volatile is required here.

Volatile is used here to make sure the memory writes go through when touching
the descriptors. It follows the Intel PMDs' coding style.

> 
> > +
> > +   txq->tx_ring_phys_addr = mz->iova;
> > +   txq->tx_ring = (struct idpf_flex_tx_desc *)mz->addr;
> > +
> > +   txq->mz = mz;
> > +   reset_single_tx_queue(txq);
> > +   txq->q_set = true;
> > +   dev->data->tx_queues[queue_idx] = txq;
> > +   txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
> > +   queue_idx * vport->chunks_info.tx_qtail_spacing);
> 
> I'm sorry, but it looks like too much code is duplicated.
> I guess it could be a shared function to avoid it.

It makes sense to optimize the queue setup, but can we improve it in the next
release due to the RC2 deadline?
Also, this part will be a common module shared by the idpf PMD and a new PMD;
it should be in the drivers/common/idpf folder in DPDK 23.03.

> 
> > +
> > +   return 0;
> > +}
> > +
> > +int
> > +idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > +   uint16_t nb_desc, unsigned int socket_id,
> > +   const struct rte_eth_txconf *tx_conf) {
> > +   struct idpf_vport *vport = dev->data->dev_private;
> > +
> > +   if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
> > +   return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
> > + socket_id, tx_conf);
> > +   else
> > +   return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
> > +socket_id, tx_conf);
> > +}
> 
> [snip]



RE: [PATCH v11 02/18] net/idpf: add support for device initialization

2022-10-26 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Tuesday, October 25, 2022 4:57 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Li, Xiaoyun ; Wang, Xiao W
> 
> Subject: Re: [PATCH v11 02/18] net/idpf: add support for device initialization
> 
> On 10/24/22 16:12, Junfeng Guo wrote:
> > Support device init and add the following dev ops:
> >   - dev_configure
> >   - dev_close
> >   - dev_infos_get
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiaoyun Li 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> > diff --git a/doc/guides/nics/features/idpf.ini
> > b/doc/guides/nics/features/idpf.ini
> > new file mode 100644
> > index 00..7a44b8b5e4
> > --- /dev/null
> > +++ b/doc/guides/nics/features/idpf.ini
> > @@ -0,0 +1,10 @@
> > +;
> > +; Supported features of the 'idpf' network poll mode driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +; A feature with "P" indicates only be supported when non-vector path
> > +; is selected.
> 
> The statement should be added when the first P appears.

Thanks for the comments; all of them are addressed in the next version except
the one below.

[snip] 

> > +};
> > +
> > +struct idpf_vport {
> > +   struct idpf_adapter *adapter; /* Backreference to associated adapter
> */
> > +   uint16_t vport_id;
> > +   uint32_t txq_model;
> > +   uint32_t rxq_model;
> > +   uint16_t num_tx_q;
> 
> Shouldn't it be set as the result of configure?
> data->nb_tx_queues which is the result of the configure is not
> used in the patch?

The num_tx_q here is different from data->nb_tx_queues.
The idpf PMD requires fixed numbers of Tx and Rx queues when creating a vport;
num_tx_q and num_rx_q are the values returned by the backend.
But only data->nb_tx_queues Tx queues will be configured during dev_start.




RE: [PATCH v9 09/14] net/idpf: add support for Rx/Tx offloading

2022-10-24 Thread Xing, Beilei
> > +
> > +/* Translate the rx descriptor status and error fields to pkt flags
> > +*/ static inline uint64_t idpf_rxd_to_pkt_flags(uint16_t
> > +status_error) {
> > +   uint64_t flags = 0;
> > +
> > +   if (unlikely(!(status_error &
> BIT(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S
> > +   return flags;
> > +
> > +   if (likely((status_error & IDPF_RX_FLEX_DESC_STATUS0_XSUM_S) ==
> 0)) {
> > +   flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
> 
> Strictly speaking description of the
> RTE_MBUF_F_RX_IP_CKSUM_GOOD says that IP checksum in the patcket is
> valid, but there is no checksum in IPv6 header.
> So, we can't say that it is valid for IPv6...

Hi Andrew, 
Almost all the comments are addressed in v11, please help to review. Thanks.
Here, since the device just writes back 0 or 1 in the descriptor, we can't
distinguish IPv4 from IPv6, so we set RTE_MBUF_F_RX_IP_CKSUM_GOOD. It follows
the ice and iavf implementations.
BTW, there's no common code in common/idpf currently, only the base folder;
some common code shared by idpf and a new PMD will come in the next release.

> 
> > + RTE_MBUF_F_RX_L4_CKSUM_GOOD);
> > +   return flags;
> > +   }
> > +
> > +   if (unlikely(status_error &


RE: [PATCH v9 01/14] common/idpf: introduce common library

2022-10-21 Thread Xing, Beilei


> -Original Message-
> From: Andrew Rybchenko 
> Sent: Friday, October 21, 2022 2:40 PM
> To: Guo, Junfeng ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Wang, Xiao W 
> Subject: Re: [PATCH v9 01/14] common/idpf: introduce common library
> 
> On 10/21/22 08:18, Junfeng Guo wrote:
> > Introduce common library for IDPF (Infrastructure Data Path Function)
> > PMD.
> >
> > Also add OS specific implementation about some MACRO definitions and
> > small functions which are specific for DPDK.
> 
> Common drivers are required when different class drivers need to share
> some code. So, it must be expalined here why do you create common driver
> instead of usage of base/ driver in your net driver.

Hi Andrew,
Thanks for all your comments.
The common driver will also be used in another PMD which should be upstreamed
in 23.03, so we create it.

> 
> Note that common driver is a DPDK driver and it must follow DPDK coding
> style. If the code is actually shared with something else and do not follow
> DPDK coding style from the very beginning (since it is an existing code), it
> should be in base/ subdir in either common driver or net driver.

The common driver is a BSD-licensed release provided by an Intel internal
team; basically we won't change it. It's the same process as for other Intel
PMDs, such as the base driver of iavf.
Is it OK if it's in the common/idpf/base/ folder?

> 
> Also you should not use own trivial wrappers for DPDK API in DPDK-specific
> code. It just complicates reading.
> E.g. BIT() vs RTE_BIT32().

Makes sense; I will try my best to address all your comments in the next
version. Thanks again.

> 
> So, I need an answer on above questions before I continue review.
> 
> >
> > Signed-off-by: Beilei Xing 
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Junfeng Guo 
> 
> [snip]
> 
> > diff --git a/drivers/common/idpf/idpf_alloc.h
> > b/drivers/common/idpf/idpf_alloc.h
> > new file mode 100644
> > index 00..bc054851b3
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_alloc.h
> > @@ -0,0 +1,22 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2001-2022 Intel Corporation  */
> > +
> > +#ifndef _IDPF_ALLOC_H_
> > +#define _IDPF_ALLOC_H_
> > +
> > +/* Memory types */
> 
> If it is a DPDK-specific driver and it is an interface provided by common 
> driver,
> it should use Doxygen-style comments to be a part of genereated API
> documentation.



RE: [PATCH v2 03/14] net/idpf: add support for device initialization

2022-09-20 Thread Xing, Beilei


> +static void
> +idpf_adapter_rel(struct idpf_adapter *adapter) {
> + struct iecm_hw *hw = &adapter->hw;
> + int i;
> +
> + iecm_ctlq_deinit(hw);
> +
> + rte_free(adapter->caps);
> + adapter->caps = NULL;
> +
> + rte_free(adapter->mbx_resp);
> + adapter->mbx_resp = NULL;
> +
> + if (adapter->vport_req_info) {
> + for (i = 0; i < adapter->max_vport_nb; i++) {
> + rte_free(adapter->vport_req_info[i]);
> + adapter->vport_req_info[i] = NULL;
> + }
> + rte_free(adapter->vport_req_info);
> + adapter->vport_req_info = NULL;
> + }
> +
> + if (adapter->vport_recv_info) {
> + for (i = 0; i < adapter->max_vport_nb; i++) {
> + rte_free(adapter->vport_recv_info[i]);
> + adapter->vport_recv_info[i] = NULL;
> + }

Also, adapter->vport_recv_info itself needs to be freed here.

> + }
> +
> + rte_free(adapter->vports);
> + adapter->vports = NULL;
> +}
> +


RE: [PATCH v2 03/14] net/idpf: add support for device initialization

2022-09-20 Thread Xing, Beilei



> -Original Message-
> From: Guo, Junfeng 
> Sent: Monday, September 5, 2022 6:58 PM
> To: Zhang, Qi Z ; Wu, Jingjing
> ; Xing, Beilei 
> Cc: dev@dpdk.org; Wang, Xiao W ; Guo, Junfeng
> ; Li, Xiaoyun 
> Subject: [PATCH v2 03/14] net/idpf: add support for device initialization
> 
> Support device init and the following dev ops:
>   - dev_configure
>   - dev_start
>   - dev_stop
>   - dev_close
> 
> Signed-off-by: Beilei Xing 
> Signed-off-by: Xiaoyun Li 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Junfeng Guo 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 810
> +  drivers/net/idpf/idpf_ethdev.h |
> 229 ++  drivers/net/idpf/idpf_vchnl.c  | 495
> 
>  drivers/net/idpf/meson.build   |  18 +
>  drivers/net/idpf/version.map   |   3 +
>  drivers/net/meson.build|   1 +
>  6 files changed, 1556 insertions(+)
>  create mode 100644 drivers/net/idpf/idpf_ethdev.c  create mode 100644
> drivers/net/idpf/idpf_ethdev.h  create mode 100644
> drivers/net/idpf/idpf_vchnl.c  create mode 100644
> drivers/net/idpf/meson.build  create mode 100644
> drivers/net/idpf/version.map
> 
<...>

> +static int
> +idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter
> +*adapter) {
> + struct iecm_hw *hw = &adapter->hw;
> + int ret = 0;
> +
> + hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
> + hw->hw_addr_len = pci_dev->mem_resource[0].len;
> + hw->back = adapter;
> + hw->vendor_id = pci_dev->id.vendor_id;
> + hw->device_id = pci_dev->id.device_id;
> + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +
> + strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
> +
> + idpf_reset_pf(hw);
> + ret = idpf_check_pf_reset_done(hw);
> + if (ret) {
> + PMD_INIT_LOG(ERR, "IDPF is still resetting");
> + goto err;
> + }
> +
> + ret = idpf_init_mbx(hw);
> + if (ret) {
> + PMD_INIT_LOG(ERR, "Failed to init mailbox");
> + goto err;
> + }
> +
> + adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
> IDPF_DFLT_MBX_BUF_SIZE, 0);
> + if (!adapter->mbx_resp) {
> + PMD_INIT_LOG(ERR, "Failed to allocate
> idpf_adapter_mbx_resp memory");
> + goto err_mbx;
> + }
> +
> + if (idpf_check_api_version(adapter)) {
> + PMD_INIT_LOG(ERR, "Failed to check api version");
> + goto err_api;
> + }
> +
> + adapter->caps = rte_zmalloc("idpf_caps",
> +sizeof(struct virtchnl2_get_capabilities), 0);
> + if (!adapter->caps) {
> + PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
> + goto err_api;
> + }
> +
> + if (idpf_get_caps(adapter)) {
> + PMD_INIT_LOG(ERR, "Failed to get capabilities");
> + goto err_caps;
> + }
> +
> + adapter->max_vport_nb = adapter->caps->max_vports;
> +
> + adapter->vport_req_info = rte_zmalloc("vport_req_info",
> +   adapter->max_vport_nb *
> +   sizeof(*adapter->vport_req_info),
> +   0);
> + if (!adapter->vport_req_info) {
> + PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info
> memory");
> + goto err_caps;
> + }
> +
> + adapter->vport_recv_info = rte_zmalloc("vport_recv_info",
> +adapter->max_vport_nb *
> +sizeof(*adapter-
> >vport_recv_info),
> +0);
> + if (!adapter->vport_recv_info) {
> + PMD_INIT_LOG(ERR, "Failed to allocate vport_recv_info
> memory");
> + goto err_vport_recv_info;
> + }
> +
> + adapter->vports = rte_zmalloc("vports",
> +   adapter->max_vport_nb *
> +   sizeof(*adapter->vports),
> +   0);
> + if (!adapter->vports) {
> + PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
> + goto err_vports;
> + }
> +
> + adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
> +sizeof(struct virtchnl2_config_rx_queues)) /
> +   

RE: [PATCH] doc: refine iavf limitation or known issues

2022-09-13 Thread Xing, Beilei



> -Original Message-
> From: Zhang, Qi Z 
> Sent: Tuesday, September 6, 2022 8:42 PM
> To: Xing, Beilei ; Wu, Jingjing 
> Cc: Yang, Qiming ; dev@dpdk.org; Zhang, Qi Z
> 
> Subject: [PATCH] doc: refine iavf limitation or known issues
> 
> Move all VF related limitationi or known issues from i40e.rst to intel_vf.rst,
> as i40evf has been removed from i40e, i40e.rst should only cover PF's
> information.
> 
> The patch also fix couple typos and refine the words to be more accurate.
> 
> Signed-off-by: Qi Zhang 
Acked-by: Beilei Xing 


RE: [PATCH v4] net/i40e: restore disable double VLAN by default

2022-07-07 Thread Xing, Beilei



> -Original Message-
> From: Liu, KevinX 
> Sent: Friday, July 8, 2022 1:05 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Zhang, Yuying
> ; Yang, SteveX ; Liu, KevinX
> 
> Subject: [PATCH v4] net/i40e: restore disable double VLAN by default
> 
> Previously, QinQ is enabled by default and can't be disabled, but there'll be
> performance drop if QinQ is enabled.
> 
> So, disable QinQ by default.
> 
> Fixes: ae97b8b89826 ("net/i40e: fix error disable double VLAN")
> Signed-off-by: Kevin Liu 
> 
> ---
> v2: update doc and refine commit log
> ---
> v3: refine commit log
> ---
> v4: update doc
> ---
>  doc/guides/nics/i40e.rst   | 13 -
>  drivers/net/i40e/i40e_ethdev.c | 12 
>  2 files changed, 8 insertions(+), 17 deletions(-)
> 
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> 85fdc4944d..d5938fa8e4 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -969,11 +969,14 @@ it will fail and return the info "Conflict with the 
> first
> rule's input set",  which means the current rule's input set conflicts with 
> the first
> rule's.
>  Remove the first rule if want to change the input set of the PCTYPE.
> 
> -Disable QinQ is not supported when FW >= 8.4 -
> 
> -
> -If upgrade FW to version 8.4 and higher, enable QinQ by default and disable
> QinQ is not supported.
> -
> +Vlan related Features miss when FW >= 8.4
> +~
> +
> +If FW version >= 8.4, there'll be some Vlan related issues:
> +1. TCI input set for QinQ  is invalid.
> +2. Fail to configure TPID for QinQ.
> +3. Need to enable QinQ before enabling Vlan filter.
> +4. Fail to strip outer Vlan.
> 
>  Example of getting best performance with l3fwd example
>  --
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 684e095026..117dd85c11 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -4027,12 +4027,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int
> mask)
>   }
> 
>   if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
> - /* Double VLAN not allowed to be disabled.*/
> - if (pf->fw8_3gt && !(rxmode->offloads &
> RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)) {
> - PMD_DRV_LOG(WARNING,
> - "Disable double VLAN is not allowed after
> firmwarev8.3!");
> - return 0;
> - }
>   i = 0;
>   num = vsi->mac_num;
>   mac_filter = rte_zmalloc("mac_filter_info_data",
> @@ -6296,7 +6290,6 @@ int i40e_vsi_cfg_inner_vlan_stripping(struct i40e_vsi
> *vsi, bool on)  static int  i40e_dev_init_vlan(struct rte_eth_dev *dev)  {
> - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
>   struct rte_eth_dev_data *data = dev->data;
>   int ret;
>   int mask = 0;
> @@ -6307,11 +6300,6 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
>  RTE_ETH_VLAN_FILTER_MASK |
>  RTE_ETH_VLAN_EXTEND_MASK;
> 
> - /* Double VLAN be enabled by default.*/
> - if (pf->fw8_3gt) {
> - struct rte_eth_rxmode *rxmode = &dev->data-
> >dev_conf.rxmode;
> - rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
> - }
>   ret = i40e_vlan_offload_set(dev, mask);
>   if (ret) {
>   PMD_DRV_LOG(INFO, "Failed to update vlan offload");
> --
> 2.34.1

Acked-by: Beilei Xing 



RE: [PATCH v3] net/i40e: restore disable double VLAN by default

2022-07-07 Thread Xing, Beilei



> -Original Message-
> From: Liu, KevinX 
> Sent: Friday, July 8, 2022 12:26 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Zhang, Yuying
> ; Yang, SteveX ; Liu, KevinX
> 
> Subject: [PATCH v3] net/i40e: restore disable double VLAN by default
> 
> Previously, QinQ is enabled by default and can't be disabled, but there'll be
> performance drop if QinQ is enabled.
> 
> So, disable QinQ by default.
> 
> Fixes: ae97b8b89826 ("net/i40e: fix error disable double VLAN")
> Signed-off-by: Kevin Liu 
> 
> ---
> v2: update doc and refine commit log
> ---
> v3: refine commit log
> ---
>  doc/guides/nics/i40e.rst   | 11 +++
>  drivers/net/i40e/i40e_ethdev.c | 12 
>  2 files changed, 7 insertions(+), 16 deletions(-)
> 
> diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst index
> 85fdc4944d..75ff40aa59 100644
> --- a/doc/guides/nics/i40e.rst
> +++ b/doc/guides/nics/i40e.rst
> @@ -969,11 +969,14 @@ it will fail and return the info "Conflict with the 
> first
> rule's input set",  which means the current rule's input set conflicts with 
> the first
> rule's.
>  Remove the first rule if want to change the input set of the PCTYPE.
> 
> -Disable QinQ is not supported when FW >= 8.4 -
> 
> -
> -If upgrade FW to version 8.4 and higher, enable QinQ by default and disable
> QinQ is not supported.
> +Vlan related feature miss when FW >= 8.4
> +
> 
> +If upgrade FW to version 8.4 and higher, some vlan related issue exist:
> +1. vlan tci input set not work
> +2. tpid set fail
> +3. need enable qinq before use vlan filter 4. outer vlan strip fail
 
Vlan related features miss when FW >=8.4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If FW version >= 8.4, there'll be some Vlan related issues:
1. TCI input set for QinQ  is invalid.
2. Fail to configure TPID for QinQ.
3. Need to enable QinQ before enabling Vlan filter.
4. Fail to strip outer Vlan.

> 
>  Example of getting best performance with l3fwd example
>  --
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 684e095026..117dd85c11 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -4027,12 +4027,6 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int
> mask)
>   }
> 
>   if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
> - /* Double VLAN not allowed to be disabled.*/
> - if (pf->fw8_3gt && !(rxmode->offloads &
> RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)) {
> - PMD_DRV_LOG(WARNING,
> - "Disable double VLAN is not allowed after
> firmwarev8.3!");
> - return 0;
> - }
>   i = 0;
>   num = vsi->mac_num;
>   mac_filter = rte_zmalloc("mac_filter_info_data",
> @@ -6296,7 +6290,6 @@ int i40e_vsi_cfg_inner_vlan_stripping(struct i40e_vsi
> *vsi, bool on)  static int  i40e_dev_init_vlan(struct rte_eth_dev *dev)  {
> - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
>   struct rte_eth_dev_data *data = dev->data;
>   int ret;
>   int mask = 0;
> @@ -6307,11 +6300,6 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
>  RTE_ETH_VLAN_FILTER_MASK |
>  RTE_ETH_VLAN_EXTEND_MASK;
> 
> - /* Double VLAN be enabled by default.*/
> - if (pf->fw8_3gt) {
> - struct rte_eth_rxmode *rxmode = &dev->data-
> >dev_conf.rxmode;
> - rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
> - }
>   ret = i40e_vlan_offload_set(dev, mask);
>   if (ret) {
>   PMD_DRV_LOG(INFO, "Failed to update vlan offload");
> --
> 2.34.1



RE: [PATCH v1] net/iavf: fix select wrong scan hw ring by rxdid

2022-03-16 Thread Xing, Beilei



> -Original Message-
> From: Yang, SteveX 
> Sent: Monday, March 14, 2022 5:32 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ;
> Zhang, Qi Z ; Yang, SteveX ;
> sta...@dpdk.org
> Subject: [PATCH v1] net/iavf: fix select wrong scan hw ring by rxdid
> 
> When setup RX queue, the rxdid would be changed if it's
> "IAVF_RXDID_LEGACY_0/1", that caused the scan hw ring used the wrong
> function 'iavf_rx_scan_hw_ring_flex_rxd()'.
> 
> Ignore the rxdid changed when equals "IAVF_RXDID_LEGACY_0/1".
> 
> Fixes: 0ed16e01313e ("net/iavf: fix function pointer in multi-process")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Steve Yang 
> ---
>  drivers/net/iavf/iavf_rxtx.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index
> 16e8d021f9..3b16609f7d 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -477,6 +477,8 @@ iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct
> iavf_rx_queue *rxq,
> 
>  static const
>  iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields_ops[IAVF_RXDID_LAST + 1] = {
> + [IAVF_RXDID_LEGACY_0] = iavf_rxd_to_pkt_fields_by_comms_ovs,
> + [IAVF_RXDID_LEGACY_1] = iavf_rxd_to_pkt_fields_by_comms_ovs,
>   [IAVF_RXDID_COMMS_AUX_VLAN] =
> iavf_rxd_to_pkt_fields_by_comms_aux_v1,
>   [IAVF_RXDID_COMMS_AUX_IPV4] =
> iavf_rxd_to_pkt_fields_by_comms_aux_v1,
>   [IAVF_RXDID_COMMS_AUX_IPV6] =
> iavf_rxd_to_pkt_fields_by_comms_aux_v1,
> @@ -521,6 +523,8 @@ iavf_select_rxd_to_pkt_fields_handler(struct
> iavf_rx_queue *rxq, uint32_t rxdid)
> 
>   rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
>   break;
>   case IAVF_RXDID_COMMS_OVS_1:
> + case IAVF_RXDID_LEGACY_0:
> + case IAVF_RXDID_LEGACY_1:
>   break;
>   default:
>   /* update this according to the RXDID for FLEX_DESC_NONE
> */
> --
> 2.27.0

Acked-by: Beilei Xing 


RE: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode

2022-02-15 Thread Xing, Beilei
Hi Zidane,

i40e uses UDP dst port 4789 for VXLAN.

BR,
Beilei

From: Raja Zidane 
Sent: Tuesday, February 15, 2022 10:31 PM
To: Singh, Aman Deep ; Matan Azrad 
; Yigit, Ferruh ; dev@dpdk.org; Xing, 
Beilei ; Zhang, Qi Z 
Cc: sta...@dpdk.org
Subject: RE: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode

Hi all,
reviving the discussion.

@Beilei Xing, could you please provide info on what UDP destination ports are
used for VXLAN by the i40e driver?
If it's just the default, then we can remove "RTE_ETH_IS_TUNNEL_PKT(pkt_type) == 0".

From: Singh, Aman Deep 
Sent: Monday, January 31, 2022 6:48 PM
To: Raja Zidane ; Matan Azrad ; Ferruh Yigit ; dev@dpdk.org; Beilei Xing ; Qi Zhang 
Cc: sta...@dpdk.org
Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode




On 1/30/2022 4:48 PM, Raja Zidane wrote:

I didn't want to remove the default parsing of a tunnel as VXLAN because I
thought it might be used.
Instead I moved it to the end, which makes it detect all supported tunnels
through udp_dst_port, and only if no tunnel was matched does it default to
VXLAN.
That was the reason GENEVE wasn't detected and was parsed as VXLAN instead,
which is the bug I was trying to solve.

We can take help/input from the i40e maintainers for it.

Hi Beilei Xing,

For setting packet_type as tunnel, what criteria does the i40e driver use? Is it
only the UDP dst port, or are other parameters considered as well?





-Original Message-

From: Singh, Aman Deep 
<mailto:aman.deep.si...@intel.com>

Sent: Thursday, January 20, 2022 12:47 PM

To: Matan Azrad <mailto:ma...@nvidia.com>; Ferruh Yigit 
<mailto:ferruh.yi...@intel.com>; Raja Zidane 
<mailto:rzid...@nvidia.com>; 
dev@dpdk.org<mailto:dev@dpdk.org>

Cc: sta...@dpdk.org<mailto:sta...@dpdk.org>

Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward mode



External email: Use caution opening links or attachments





On 1/18/2022 6:49 PM, Matan Azrad wrote:

 app/test-pmd/csumonly.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 2aeea243b6..fe810fecdd 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -254,7 +254,10 @@ parse_gtp(struct rte_udp_hdr *udp_hdr,
 	info->l2_len += RTE_ETHER_GTP_HLEN;
 }

-/* Parse a vxlan header */
+/*
+ * Parse a vxlan header.
+ * If a tunnel is detected in 'pkt_type' it will be parsed by default as vxlan.
+ */
 static void
 parse_vxlan(struct rte_udp_hdr *udp_hdr,
 	    struct testpmd_offload_info *info,
@@ -912,17 +915,18 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				RTE_MBUF_F_TX_TUNNEL_VXLAN_GPE;
 			goto tunnel_update;
 		}
-		parse_vxlan(udp_hdr, &info,
-			    m->packet_type);
+		parse_geneve(udp_hdr, &info);
 		if (info.is_tunnel) {
 			tx_ol_flags |=
-				RTE_MBUF_F_TX_TUNNEL_VXLAN;
+				RTE_MBUF_F_TX_TUNNEL_GENEVE;
 			goto tunnel_update;
 		}
-		parse_geneve(udp_hdr, &info);
+		/* Always keep last. */
+		parse_vxlan(udp_hdr, &info,
+			    m->packet_type);
 		if (info.is_tunnel) {
 			tx_ol_flags |=
-				RTE_MBUF_F_TX_TUNNEL_GENEVE;
+				RTE_MBUF_F_TX_TUNNEL_VXLAN;
 			goto tunnel_update;
 		}
 	} else if (info.l4_proto == IPPROTO_GRE) {



-Original Message-

From: Ferruh Yigit <mailto:ferruh.yi...@intel.com>

Sent: Tuesday, January 18, 2022 3:03 PM

To: Matan Azrad <mailto:ma...@nvidia.com>; Raja Zidane 
<mailto:rzid...@nvidia.com>;

dev@dpdk.org<mailto:dev@dpdk.org>

Cc: sta...@dpdk.org<mailto:sta...@dpdk.org>

Subject: Re: [PATCH] app/testpmd: fix GENEVE parsing in csum forward

mode



External email: Use caution opening links or attachments





On 1/18/2022 12:55 PM, Matan Azrad wrote:



-Original Message-

From: Ferruh Yigit <mailto:ferruh.yi..

RE: [PATCH v3 1/2] net/iavf: support L2TPv2 for AVF RSS

2022-02-14 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Friday, February 11, 2022 4:09 PM
> To: dev@dpdk.org
> Cc: Yang, SteveX ; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> ; Wang, Jie1X 
> Subject: [PATCH v3 1/2] net/iavf: support L2TPv2 for AVF RSS
> 
> Add support for L2TPv2(include PPP over L2TPv2) protocols RSS based on
> outer MAC src/dst address and L2TPv2 session ID.
> 
> Patterns are listed below:
> eth/ipv4/udp/l2tpv2
> eth/ipv4/udp/l2tpv2/ppp
> eth/ipv6/udp/l2tpv2
> eth/ipv6/udp/l2tpv2/ppp
> 
> Signed-off-by: Jie Wang 

Acked-by: Beilei Xing 


RE: [PATCH v3 2/2] net/iavf: support L2TPv2 for AVF FDIR

2022-02-11 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Friday, February 11, 2022 4:09 PM
> To: dev@dpdk.org
> Cc: Yang, SteveX ; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> ; Wang, Jie1X 
> Subject: [PATCH v3 2/2] net/iavf: support L2TPv2 for AVF FDIR
> 
> Add support for L2TPv2(include PPP over L2TPv2) protocols FDIR based on outer
> MAC src/dst address and L2TPv2 session ID.
> 
> Add support for PPPoL2TPv2oUDP protocols FDIR based on inner IP src/dst
> address and UDP/TCP src/dst port.
> 
> Patterns are listed below:
> eth/ipv4(6)/udp/l2tpv2
> eth/ipv4(6)/udp/l2tpv2/ppp
> 
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
> 
> Signed-off-by: Jie Wang 

Acked-by: Beilei Xing 


RE: Questions on i40e TX path

2022-02-10 Thread Xing, Beilei



> -Original Message-
> From: Honnappa Nagarahalli 
> Sent: Thursday, February 10, 2022 12:38 PM
> To: Xing, Beilei ; dev@dpdk.org
> Cc: Feifei Wang ; Ruifeng Wang
> ; Yigit, Ferruh ; Richardson,
> Bruce ; nd ; nd 
> Subject: RE: Questions on i40e TX path
> 
> 
> 
> Thank you for your input. Please see few comments inline.
> 
> > > Subject: Questions on i40e TX path
> > >
> > > Hi Beilei,
> > >   I want to make sure my understanding of the TX path is correct.
> > > Following is my understanding.
> > >
> > > 1) The RS bit must be set in the TX descriptors to ask the NIC to
> > > report back the send status.
> > Not for each Tx descriptor.
> > According to the datasheet, " The RS flag can be set only on a last
> > Transmit Data Descriptor of a packet or last Transmit Data Descriptor
> > of a TSO or last Transmit Data Descriptor of a filter."
> Yes, understood.
> When combined with #2 below, we are asking the NIC to report back the
> send/completion status for a set of packets. This allows for amortization of 
> cost
> of reporting over the set of packets.
> 
> >
> > > 2) The NIC reports the send completion by setting the DTYPE field to
> > > 0xf. This also indicates that all the earlier descriptors are also
> > > done sending
> > the packets.
> > Yes.
> >
> > > 3) The check "if (txq->nb_tx_free < txq->tx_free_thresh)" is mainly
> > > to ensure that we do not check the "descriptor done" status too often.
> > This condition is to ensure there're enough free descriptors for Tx,
> > avoid Tx ring full.
> Ok. I think this check has another purpose as well, though I am not sure if 
> it is
> intentional. I see that the descriptors are initialized with DTYPE set to 0xF 
> (in
> function i40e_reset_tx_queue). So, in the very first call to transmit 
> function (for
> ex: i40e_xmit_fixed_burst_vec), the 'i40e_tx_free_bufs' function would end up
> checking the DTYPE field, if the above check was not there.
> 
> In the data sheet, in section 8.4.2.1.1 (transmit data descriptor format), 
> the RS
> field is described as follows:
> 
> " Report Status. When set, the hardware reports the DMA completion of the
> transmit descriptor and its data buffer. Completion is reported by descriptor
> write back or by head write back as configured by the HEAD_WBEN flag in the
> transmit context. When it is reported by descriptor write back, the DTYP 
> field is
> set to 0xF and the RS flag is set."
> 
> Considering the last sentence, should the code check for both DTYP field and 
> RS
> field for completion? Currently, the code checks for just the DTYP field (in 
> the
> function i40e_tx_free_bufs).

In my understanding, the RS bit is set by SW to tell HW that it needs to write back, so I don't
think the PMD needs to check the RS bit.
Besides, the PMD should already ensure that it checks the DTYPE only of descriptors that had the
RS bit set.
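
The DTYPE check described above can be sketched like this; a minimal illustration assuming the field layout documented in the datasheet (DTYPE in bits 3:0 of the descriptor's cmd/type qword, 0xF meaning descriptor write-back done). `tx_desc_done` and the macro names are hypothetical, not the real i40e identifiers.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define TXD_QW1_DTYPE_MASK 0xFULL /* DTYPE field, bits 3:0 of qword1 */
#define TX_DESC_DTYPE_DONE 0xFULL /* 0xF = HW wrote the descriptor back */

/* A descriptor (and all earlier ones) is finished once HW rewrites its
 * DTYPE field to 0xF; the RS bit itself never needs to be re-read. */
static bool
tx_desc_done(uint64_t qword1)
{
	return (qword1 & TXD_QW1_DTYPE_MASK) == TX_DESC_DTYPE_DONE;
}
```

This also explains the initialization concern raised above: descriptors reset with DTYPE = 0xF would look "done" to this check unless the free-threshold gate keeps the first bursts from scanning them.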

> 
> >
> > >
> > > Is my understanding correct?
> > >
> > > Thank you,
> > > Honnappa


RE: [PATCH v5 5/6] net/iavf: support L2TPv2 for AVF HASH

2022-02-09 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Wednesday, February 9, 2022 5:39 PM
> To: dev@dpdk.org
> Cc: Yang, SteveX ; or...@nvidia.com; Singh, Aman
> Deep ; Yigit, Ferruh ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> ; olivier.m...@6wind.com; Wang, Jie1X
> 
> Subject: [PATCH v5 5/6] net/iavf: support L2TPv2 for AVF HASH
> 
> Add support for PPP over L2TPv2 over UDP protocol and L2TPv2 protocol RSS
> hash based on outer MAC src address and L2TPv2 session ID.

The commit log and the release notes for RSS/FDIR should be changed, too.

> 
> Patterns are listed below:
> eth/ipv4/udp/l2tpv2
> eth/ipv4/udp/l2tpv2/ppp
> eth/ipv6/udp/l2tpv2
> eth/ipv6/udp/l2tpv2/ppp
> 
> Signed-off-by: Jie Wang 
> ---
>  doc/guides/rel_notes/release_22_03.rst |  5 ++
>  drivers/net/iavf/iavf.h|  2 +
>  drivers/net/iavf/iavf_generic_flow.c   | 34 +++
>  drivers/net/iavf/iavf_generic_flow.h   |  6 ++
>  drivers/net/iavf/iavf_hash.c   | 81 --
>  5 files changed, 124 insertions(+), 4 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> index 17c25d899c..37921fc44f 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -83,6 +83,11 @@ New Features
>* Added rte_flow support for matching GENEVE packets.
>* Added rte_flow support for matching eCPRI packets.
> 
> +* **Updated Intel iavf driver.**
> +
> +  * Added L2TPv2(include PPP over L2TPv2) RSS hash distribute packets
> +based on outer MAC src address and L2TPv2 session ID.
> +
>  * **Updated Marvell cnxk crypto PMD.**
> 
>* Added SHA256-HMAC support in lookaside protocol (IPsec) for CN10K.
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index
> 0bb5698583..a01d18e61b 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -93,6 +93,8 @@
> 
>  #define IAVF_VLAN_TAG_PCP_OFFSET 13
> 
> +#define IAVF_L2TPV2_FLAGS_LEN0x4000
> +
>  struct iavf_adapter;
>  struct iavf_rx_queue;
>  struct iavf_tx_queue;
> diff --git a/drivers/net/iavf/iavf_generic_flow.c
> b/drivers/net/iavf/iavf_generic_flow.c
> index 2befa125ac..1de4187e67 100644
> --- a/drivers/net/iavf/iavf_generic_flow.c
> +++ b/drivers/net/iavf/iavf_generic_flow.c
> @@ -1611,6 +1611,40 @@ enum rte_flow_item_type
> iavf_pattern_eth_ipv6_gre_ipv6_udp[] = {
>   RTE_FLOW_ITEM_TYPE_END,
>  };
> 
> +enum rte_flow_item_type iavf_pattern_eth_ipv4_udp_l2tpv2[] = {
> + RTE_FLOW_ITEM_TYPE_ETH,
> + RTE_FLOW_ITEM_TYPE_IPV4,
> + RTE_FLOW_ITEM_TYPE_UDP,
> + RTE_FLOW_ITEM_TYPE_L2TPV2,
> + RTE_FLOW_ITEM_TYPE_END,
> +};
> +
> +enum rte_flow_item_type iavf_pattern_eth_ipv4_udp_l2tpv2_ppp[] = {
> + RTE_FLOW_ITEM_TYPE_ETH,
> + RTE_FLOW_ITEM_TYPE_IPV4,
> + RTE_FLOW_ITEM_TYPE_UDP,
> + RTE_FLOW_ITEM_TYPE_L2TPV2,
> + RTE_FLOW_ITEM_TYPE_PPP,
> + RTE_FLOW_ITEM_TYPE_END,
> +};
> +
> +enum rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2[] = {
> + RTE_FLOW_ITEM_TYPE_ETH,
> + RTE_FLOW_ITEM_TYPE_IPV6,
> + RTE_FLOW_ITEM_TYPE_UDP,
> + RTE_FLOW_ITEM_TYPE_L2TPV2,
> + RTE_FLOW_ITEM_TYPE_END,
> +};
> +
> +enum rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2_ppp[] = {
> + RTE_FLOW_ITEM_TYPE_ETH,
> + RTE_FLOW_ITEM_TYPE_IPV6,
> + RTE_FLOW_ITEM_TYPE_UDP,
> + RTE_FLOW_ITEM_TYPE_L2TPV2,
> + RTE_FLOW_ITEM_TYPE_PPP,
> + RTE_FLOW_ITEM_TYPE_END,
> +};
> +
>  /* PPPoL2TPv2oUDP */
>  enum rte_flow_item_type iavf_pattern_eth_ipv4_udp_l2tpv2_ppp_ipv4[] = {
>   RTE_FLOW_ITEM_TYPE_ETH,
> diff --git a/drivers/net/iavf/iavf_generic_flow.h
> b/drivers/net/iavf/iavf_generic_flow.h
> index 3681a96b31..107bbc1a23 100644
> --- a/drivers/net/iavf/iavf_generic_flow.h
> +++ b/drivers/net/iavf/iavf_generic_flow.h
> @@ -410,6 +410,12 @@ extern enum rte_flow_item_type
> iavf_pattern_eth_ipv6_gre_ipv6_tcp[];
>  extern enum rte_flow_item_type iavf_pattern_eth_ipv6_gre_ipv4_udp[];
>  extern enum rte_flow_item_type iavf_pattern_eth_ipv6_gre_ipv6_udp[];
> 
> +/* L2TPv2 */
> +extern enum rte_flow_item_type iavf_pattern_eth_ipv4_udp_l2tpv2[];
> +extern enum rte_flow_item_type iavf_pattern_eth_ipv4_udp_l2tpv2_ppp[];
> +extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2[];
> +extern enum rte_flow_item_type iavf_pattern_eth_ipv6_udp_l2tpv2_ppp[];
> +
>  /* PPPoL2TPv2oUDP */
>  extern enum rte_flow_item_type
> iavf_pattern_eth_ipv4_udp_l2tpv2_ppp_ipv4[];
>  extern enum rte_flow_item_type
> iavf_pattern

RE: Questions on i40e TX path

2022-02-08 Thread Xing, Beilei
Hi Honnappa,

> -Original Message-
> From: Honnappa Nagarahalli 
> Sent: Wednesday, February 9, 2022 12:36 PM
> To: Xing, Beilei ; dev@dpdk.org
> Cc: Feifei Wang ; Ruifeng Wang
> ; Yigit, Ferruh ; Richardson,
> Bruce ; nd ; nd 
> Subject: Questions on i40e TX path
> 
> Hi Beilei,
>   I want to make sure my understanding of the TX path is correct.
> Following is my understanding.
> 
> 1) The RS bit must be set in the TX descriptors to ask the NIC to report back 
> the
> send status.
Not for each Tx descriptor.
According to the datasheet, " The RS flag can be set only on a last Transmit 
Data
Descriptor of a packet or last Transmit Data Descriptor of a TSO or last 
Transmit
Data Descriptor of a filter." 

> 2) The NIC reports the send completion by setting the DTYPE field to 0xf. This
> also indicates that all the earlier descriptors are also done sending the 
> packets.
Yes.

> 3) The check "if (txq->nb_tx_free < txq->tx_free_thresh)" is mainly to ensure 
> that
> we do not check the "descriptor done" status too often.
This condition is to ensure there are enough free descriptors for Tx and to avoid a full
Tx ring.
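
The gating above can be sketched as follows; a hedged illustration with hypothetical names. The driver only pays for a completion scan when the free-descriptor count falls below the threshold, which both prevents ring exhaustion and bounds how often the done status is polled.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Only scan for completed descriptors when free slots run low. */
static bool
should_free_tx_bufs(uint16_t nb_tx_free, uint16_t tx_free_thresh)
{
	return nb_tx_free < tx_free_thresh;
}

/* Hypothetical accounting after a successful scan: one RS-marked batch
 * of tx_rs_thresh descriptors is reclaimed at a time. */
static uint16_t
reclaim_batch(uint16_t nb_tx_free, uint16_t tx_rs_thresh)
{
	return (uint16_t)(nb_tx_free + tx_rs_thresh);
}
```

With, say, tx_free_thresh = 32, a burst path skips the done check entirely while more than 32 descriptors remain free.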

> 
> Is my understanding correct?
> 
> Thank you,
> Honnappa


RE: [PATCH v4 6/6] net/iavf: support L2TPv2 for AVF FDIR

2022-02-08 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Tuesday, February 8, 2022 4:39 PM
> To: dev@dpdk.org
> Cc: Yang, SteveX ; or...@nvidia.com; Singh, Aman
> Deep ; Yigit, Ferruh ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> ; olivier.m...@6wind.com; Wang, Jie1X
> 
> Subject: [PATCH v4 6/6] net/iavf: support L2TPv2 for AVF FDIR
> 
> Add support for L2TPv2(include PPP over L2TPv2) protocols FDIR based on outer
> MAC src address and L2TPv2 session ID.
> 
> Add support for PPPoL2TPv2oUDP protocols FDIR based on inner IP src/dst
> address and UDP/TCP src/dst port.
> 
> Patterns are listed below:
> eth/ipv4(6)/udp/l2tpv2
> eth/ipv4(6)/udp/l2tpv2/ppp
> 
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
> 
> Signed-off-by: Jie Wang 
> ---
>  doc/guides/rel_notes/release_22_03.rst |   8 +-
>  drivers/net/iavf/iavf_fdir.c   | 174 +
>  drivers/net/iavf/iavf_generic_flow.h   |   4 +
>  3 files changed, 156 insertions(+), 30 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> index 0d1e4a0b61..5a73ccc14e 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -66,8 +66,12 @@ New Features
> 
>  * **Updated Intel iavf driver.**
> 
> -  Added L2TPv2(include PPP over L2TPv2) RSS hash distribute packets
> -  based on outer MAC src address and L2TPv2 session ID.
> +  * Added L2TPv2(include PPP over L2TPv2) RSS hash distribute packets
> +based on outer MAC src address and L2TPv2 session ID.

Should fix in the patch 5/6.

> +  * Added L2TPv2(include PPP over L2TPv2) FDIR distribute packets
> +based on outer MAC src address and L2TPv2 session ID.
> +  * Added PPPoL2TPv2oUDP FDIR distribute packets based on inner IP
> +src/dst address and UDP/TCP src/dst port.
> 
> 
>  Removed Items
> diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c index
> b63aaca91d..2583b899aa 100644
> --- a/drivers/net/iavf/iavf_fdir.c
> +++ b/drivers/net/iavf/iavf_fdir.c
> @@ -168,6 +168,31 @@
>   IAVF_FDIR_INSET_GRE_IPV6 | IAVF_INSET_TUN_UDP_SRC_PORT | \
>   IAVF_INSET_TUN_UDP_DST_PORT)
> 
> +#define IAVF_FDIR_INSET_L2TPV2 (\
> + IAVF_INSET_SMAC | IAVF_INSET_L2TPV2)

The same comment for FDIR: should we limit with source MAC?

> +
> +#define IAVF_FDIR_INSET_L2TPV2_PPP_IPV4 (\
> + IAVF_INSET_TUN_IPV4_SRC | IAVF_INSET_TUN_IPV4_DST)
> +

<...>

BR,
Beilei


RE: [PATCH v4 5/6] net/iavf: support L2TPv2 for AVF HASH

2022-02-08 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Tuesday, February 8, 2022 4:39 PM
> To: dev@dpdk.org
> Cc: Yang, SteveX ; or...@nvidia.com; Singh, Aman
> Deep ; Yigit, Ferruh ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> ; olivier.m...@6wind.com; Wang, Jie1X
> 
> Subject: [PATCH v4 5/6] net/iavf: support L2TPv2 for AVF HASH
> 
> Add support for PPP over L2TPv2 over UDP protocol and L2TPv2 protocol RSS
> hash based on outer MAC src address and L2TPv2 session ID.
> 
> Patterns are listed below:
> eth/ipv4/udp/l2tpv2
> eth/ipv4/udp/l2tpv2/ppp
> eth/ipv6/udp/l2tpv2
> eth/ipv6/udp/l2tpv2/ppp
> 
> Signed-off-by: Jie Wang 
> ---
>  doc/guides/rel_notes/release_22_03.rst |  6 ++
>  drivers/net/iavf/iavf.h|  2 +
>  drivers/net/iavf/iavf_generic_flow.c   | 34 +++
>  drivers/net/iavf/iavf_generic_flow.h   |  6 ++
>  drivers/net/iavf/iavf_hash.c   | 83 --
>  5 files changed, 127 insertions(+), 4 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> index 9a507ab9ea..0d1e4a0b61 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -64,6 +64,12 @@ New Features
> 
>* Added rte_flow support for matching GENEVE packets.
> 
> +* **Updated Intel iavf driver.**
> +
> +  Added L2TPv2(include PPP over L2TPv2) RSS hash distribute packets
> + based on outer MAC src address and L2TPv2 session ID.

Add * on the front.

> +
> +
>  Removed Items
>  -
> 
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index
> 0bb5698583..a01d18e61b 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -93,6 +93,8 @@
> 
>  #define IAVF_VLAN_TAG_PCP_OFFSET 13
> 


<...>

> +/* L2TPv2 */
> +#define IAVF_RSS_TYPE_ETH_L2TPV2 (RTE_ETH_RSS_L2TPV2 | \
> +  RTE_ETH_RSS_ETH | \
> +  RTE_ETH_RSS_L2_SRC_ONLY)

Should we limit with L2_SRC_ONLY?

> +
>  /**
>   * Supported pattern for hash.
>   * The first member is pattern item type, @@ -547,6 +589,8 @@ static struct
> iavf_pattern_match_item iavf_hash_pattern_list[] = {


<...>


> --
> 2.25.1



RE: ETH_RSS_IP only does not seem to balance traffic

2022-01-13 Thread Xing, Beilei



> -Original Message-
> From: Richardson, Bruce 
> Sent: Thursday, January 13, 2022 11:06 PM
> To: Yasuhiro Ohara 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: Re: ETH_RSS_IP only does not seem to balance traffic
> 
> On Thu, Jan 13, 2022 at 12:52:04AM +0900, Yasuhiro Ohara wrote:
> >
> > Hi,
> >
> > My system developper friend recently ran into a problem where l3fwd
> > does not appear to receive balanced traffic on Intel XL710, but it is
> > resolved when the attached patch is applied.
> >
> > -.rss_hf = ETH_RSS_IP,
> > +.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
> >
> > IIRC I ran into a similar problem 3 or 4 years back, but didn't report
> > then because I believed I was doing something silly.
> > But since my friend is an experienced engineer, I feel like it may be
> > better (for the community) to ask this in the list.
> >
> > We are using dpdk-stable-18.11.6 and igb_uio.
> >
> > What are we doing wrong?
> >
> > If it is not a FAQ, I can test it again with more recent stable, and
> > will report the details.
> >
> For XL710 NICs, I believe that ETH_RSS_IP load balances only IP frames that do
> not have TCP or UDP headers also. Adding i40e driver maintainer on CC to
> comment further.

Yes, Bruce is right. For XL710, ETH_RSS_IP doesn't cover TCP and UDP packets.
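
A hedged sketch of the fix discussed here. The flag values below are illustrative placeholders, not the real RTE_ETH_RSS_* constants from rte_ethdev.h: on XL710 the IP hash type covers only IP frames without TCP/UDP headers, so the requested mask must be widened explicitly to balance L4 traffic.

```c
#include <stdint.h>
#include <assert.h>

/* Placeholder bits standing in for RTE_ETH_RSS_IP/TCP/UDP. */
#define RSS_IP  (1ULL << 0)
#define RSS_TCP (1ULL << 1)
#define RSS_UDP (1ULL << 2)

/* On XL710, requesting only the IP hash leaves TCP/UDP flows on one
 * queue; widen the request so those frames are balanced too. */
static uint64_t
widen_rss_hf(uint64_t rss_hf)
{
	return rss_hf | RSS_TCP | RSS_UDP;
}
```

This mirrors the one-line l3fwd change in the thread: `.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP`.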

> 
> /Bruce


RE: [PATCH v1] net/iavf: remove the extra symbol '+'

2021-12-15 Thread Xing, Beilei



> -Original Message-
> From: Wang, Haiyue 
> Sent: Thursday, December 16, 2021 12:44 PM
> To: dev@dpdk.org
> Cc: Wang, Haiyue ; sta...@dpdk.org; Wu, Jingjing
> ; Xing, Beilei ; Sinha, Abhijit
> ; Doherty, Declan ;
> Nicolau, Radu 
> Subject: [PATCH v1] net/iavf: remove the extra symbol '+'
> 
> This extra symbol '+' should be added when patch was reapplied, and the
> compiler treats it as unsigned type, so the code still runs well.
> 
> Fixes: 84108425054a ("net/iavf: support asynchronous virtual channel
> message")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Haiyue Wang 
> ---
>  drivers/net/iavf/iavf_vchnl.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c 
> index
> 145b059837..8fdd6f6d91 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -502,7 +502,7 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
>   VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
>   VIRTCHNL_VF_LARGE_NUM_QPAIRS |
>   VIRTCHNL_VF_OFFLOAD_QOS |
> -+VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO;
> + VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO;
> 
>   args.in_args = (uint8_t *)&caps;
>   args.in_args_size = sizeof(caps);
> --
> 2.34.1

Acked-by: Beilei Xing 



Re: [dpdk-dev] [PATCH v2] net/i40e: fix forward outer IPv6 VXLAN packets

2021-11-04 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Friday, November 5, 2021 11:37 AM
> To: dev@dpdk.org
> Cc: Zhang, Yuying ; Li, Xiaoyun
> ; Yang, SteveX ; Xing, Beilei
> ; Zhang, Qi Z ; Wang, Jie1X
> ; sta...@dpdk.org
> Subject: [PATCH v2] net/i40e: fix forward outer IPv6 VXLAN packets
> 
> Testpmd forwards packets in checksum mode that it need to calculate the
> checksum of each layer's protocol. Then it will fill flags and header length 
> into
> mbuf.
> 
> In process_outer_cksums, HW calculates the outer checksum if tx_offloads
> contains outer UDP checksum otherwise SW calculates the outer checksum.
> 
> When tx_offloads contains outer UDP checksum or outer IPv4 checksum,
> mbuf will be filled with correct header length.
> 
> This patch added outer UDP checksum in tx_offload_capa and
> I40E_TX_OFFLOAD_MASK, when we set csum hw outer-udp on that the
> engine can forward outer IPv6 VXLAN packets.
> 
> Fixes: 7497d3e2f777 ("net/i40e: convert to new Tx offloads API")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Jie Wang 
Acked-by: Beilei Xing 


Re: [dpdk-dev] [PATCH] net/i40e: fix forward outer IPv6 VXLAN packets

2021-11-04 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Tuesday, November 2, 2021 3:08 PM
> To: dev@dpdk.org
> Cc: Zhang, Yuying ; Li, Xiaoyun
> ; Yang, SteveX ; Xing, Beilei
> ; Zhang, Qi Z ; Wang, Jie1X
> ; sta...@dpdk.org
> Subject: [PATCH] net/i40e: fix forward outer IPv6 VXLAN packets
> 
> Testpmd forwards packets in checksum mode that it need to calculate the
> checksum of each layer's protocol. Then it will fill flags and header length 
> into
> mbuf.
> 
> In process_outer_cksums, HW calculates the outer checksum if tx_offloads
> contains outer UDP checksum otherwise SW calculates the outer checksum.
> 
> When tx_offloads contains outer UDP checksum or outer IPv4 checksum,
> mbuf will be filled with correct header length.
> 
> This patch added outer UDP checksum in tx_offload_capa and
> I40E_TX_OFFLOAD_MASK, when we set csum hw outer-udp on that the
> engine can forward outer IPv6 VXLAN packets.
> 
> Fixes: 399421100e08 ("net/i40e: fix missing mbuf fast free offload")
Seems it's not the right fix line. Could you check whether it should be 7497d3e2f777 
("net/i40e: convert to new Tx offloads API")?

> Cc: sta...@dpdk.org
> 
> Signed-off-by: Jie Wang 
> ---
>  drivers/net/i40e/i40e_ethdev.c | 1 +
>  drivers/net/i40e/i40e_rxtx.c   | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 62e374d19e..faf6391350 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -3746,6 +3746,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
>   RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
>   RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
>   RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
> + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
>   dev_info->tx_queue_offload_capa;
>   dev_info->dev_capa =
>   RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | diff --git
> a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6ccb598677..41fe3bf481 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -65,6 +65,7 @@
>   RTE_MBUF_F_TX_QINQ |   \
>   RTE_MBUF_F_TX_VLAN |\
>   RTE_MBUF_F_TX_TUNNEL_MASK | \
> + RTE_MBUF_F_TX_OUTER_UDP_CKSUM | \
>   I40E_TX_IEEE1588_TMST)
> 
>  #define I40E_TX_OFFLOAD_NOTSUP_MASK \
> --
> 2.25.1



Re: [dpdk-dev] [PATCH v6 2/3] net/iavf: support PPPoL2TPv2oUDP RSS Hash

2021-10-20 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Wednesday, October 20, 2021 5:32 PM
> To: dev@dpdk.org
> Cc: or...@nvidia.com; Yigit, Ferruh ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Li, Xiaoyun
> ; Yang, SteveX ; Wu, Jingjing
> ; Xing, Beilei ; Wu, Wenjun1
> ; Zhang, Qi Z ; Wang, Jie1X
> 
> Subject: [PATCH v6 2/3] net/iavf: support PPPoL2TPv2oUDP RSS Hash
> 
> Add support for PPP over L2TPv2 over UDP protocol RSS Hash based on inner
> IP src/dst address and TCP/UDP src/dst port.
> 
> Patterns are listed below:
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
> 
> Signed-off-by: Wenjun Wu 
> Signed-off-by: Jie Wang 

Acked-by: Beilei Xing 


Re: [dpdk-dev] [PATCH v5 2/3] net/iavf: support PPPoL2TPv2oUDP RSS Hash

2021-10-19 Thread Xing, Beilei



> -Original Message-
> From: Wang, Jie1X 
> Sent: Tuesday, October 19, 2021 11:08 AM
> To: dev@dpdk.org
> Cc: or...@nvidia.com; Yigit, Ferruh ;
> tho...@monjalon.net; andrew.rybche...@oktetlabs.ru; Li, Xiaoyun
> ; Yang, SteveX ; Wu, Jingjing
> ; Xing, Beilei ; Wu, Wenjun1
> ; Zhang, Qi Z ; Wang, Jie1X
> 
> Subject: [PATCH v5 2/3] net/iavf: support PPPoL2TPv2oUDP RSS Hash
> 
> Add support for PPP over L2TPv2 over UDP protocol RSS Hash based on inner
> IP src/dst address and TCP/UDP src/dst port.
> 
> Patterns are listed below:
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
> eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
> 
> Signed-off-by: Wenjun Wu 
> Signed-off-by: Jie Wang 
> ---
>  drivers/net/iavf/iavf_generic_flow.c | 131 +++
> drivers/net/iavf/iavf_generic_flow.h |  15 +++
>  drivers/net/iavf/iavf_hash.c | 108 +-
>  3 files changed, 252 insertions(+), 2 deletions(-)
> 

Please also update the release notes.


Re: [dpdk-dev] [PATCH v4 00/18] i40e base code update

2021-09-27 Thread Xing, Beilei


> -Original Message-
> From: Zhang, RobinX 
> Sent: Monday, September 6, 2021 10:03 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Zhang, Qi Z ;
> Zhang, Helin ; Wu, Jingjing ;
> remy.hor...@intel.com; jijiang@intel.com; jing.d.c...@intel.com; Zhu,
> Heqing ; Liang, Cunming
> ; Lu, Wenzhuo ; Zhang,
> Roy Fan ; Chilikin, Andrey
> ; echau...@redhat.com; Guo, Junfeng
> ; Yang, SteveX ; Zhang,
> RobinX 
> Subject: [PATCH v4 00/18] i40e base code update
> 
> update i40e base code.
> 
> source code of i40e driver:
> cid-i40e.2021.08.16.tar.gz
> 
> changelog in i40e share repo:
> From 59a080f4fafe ("i40e-shared: Add opcode 0x0406 and 0x0416 to Linux
> support") To 2c7aab559654 ("i40e-shared: Add defines related to DDP")
> 
> The following commits are ignored:
> cb9139e3bce8 ("i40e-shared: Fix not blinking X722 with x557 PHY via ‘ethtool
> -p'")
> c09d4f9cf390 ("i40e-shared: i40e-shared: Fix build warning -Wformat related
> to integer size")
> ff8a1abc6c17 ("i40e-shared: Fix build warning with __packed") 59a080f4fafe
> ("i40e-shared: Add opcode 0x0406 and 0x0416 to Linux
> support")
> 
> v4:
> - update base code to cid-i40e.2021.08.16
> v3:
> - there has a fix patch contains two issues, split it into two patches
> v2:
> - refine commit messages and macro name
> 
> Robin Zhang (18):
>   net/i40e/base: add new versions of send ASQ command functions
>   net/i40e/base: add support for Min Rollback Revision for 4 more X722
> modules
>   net/i40e/base: set TSA table values when parsing CEE configuration
>   net/i40e/base: define new Shadow RAM pointers
>   net/i40e/base: fix PHY type identifiers for 2.5G and 5G adapters
>   net/i40e/base: fix PF reset failed
>   net/i40e/base: fix update link data for X722
>   net/i40e/base: fix AOC media type reported by ethtool
>   net/i40e/base: add flags and fields for double vlan processing
>   net/i40e/base: fix headers to match functions
>   net/i40e/base: fix potentially uninitialized variables in NVM code
>   net/i40e/base: fix checksum is used before return value is checked
>   net/i40e/base: add defs for MAC frequency calculation if no link
>   net/i40e/base: separate kernel allocated rx_bi rings from AF_XDP rings
>   net/i40e/base: Update FVL FW API version to 1.15
>   net/i40e/base: add defines related to DDP
>   net/i40e/base: update version in readme
>   net/i40e: fix redefinition warning
> 
>  drivers/net/i40e/base/README|   2 +-
>  drivers/net/i40e/base/i40e_adminq.c |  79 +--
>  drivers/net/i40e/base/i40e_adminq_cmd.h |  55 ++--
>  drivers/net/i40e/base/i40e_common.c | 175 +++-
>  drivers/net/i40e/base/i40e_dcb.c|  10 +-
>  drivers/net/i40e/base/i40e_lan_hmc.c|   2 +-
>  drivers/net/i40e/base/i40e_nvm.c|   7 +-
>  drivers/net/i40e/base/i40e_prototype.h  |  17 +++
>  drivers/net/i40e/base/i40e_register.h   |  10 ++
>  drivers/net/i40e/base/i40e_type.h   |  26 +++-
>  drivers/net/i40e/i40e_ethdev.c  |   3 +-
>  11 files changed, 318 insertions(+), 68 deletions(-)
> 
> --
> 2.25.1

Acked-by: Beilei Xing 



Re: [dpdk-dev] [PATCH v4 18/18] net/i40e: fix redefinition warning

2021-09-27 Thread Xing, Beilei



> -Original Message-
> From: Zhang, RobinX 
> Sent: Monday, September 6, 2021 10:03 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Zhang, Qi Z ;
> Zhang, Helin ; Wu, Jingjing ;
> remy.hor...@intel.com; jijiang@intel.com; jing.d.c...@intel.com; Zhu,
> Heqing ; Liang, Cunming
> ; Lu, Wenzhuo ; Zhang,
> Roy Fan ; Chilikin, Andrey
> ; echau...@redhat.com; Guo, Junfeng
> ; Yang, SteveX ; Zhang,
> RobinX 
> Subject: [PATCH v4 18/18] net/i40e: fix redefinition warning
> 
> After update i40e share code, there will be a redefinition compile warning.
> This patch fix the situation by remove duplicate definition in i40e_ethdev.c
> 
> Fixes: eef2daf2e199 ("net/i40e: fix link update no wait")
Need to cc stable?

> 
> Signed-off-by: Robin Zhang 
> ---
>  drivers/net/i40e/i40e_ethdev.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 7b230e2ed1..4fc44dc5e2 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -2886,7 +2886,6 @@ static __rte_always_inline void
> update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)  {
>  /* Link status registers and values*/
> -#define I40E_PRTMAC_LINKSTA  0x001E2420
>  #define I40E_REG_LINK_UP 0x4080
>  #define I40E_PRTMAC_MACC 0x001E24E0
>  #define I40E_REG_MACC_25GB   0x0002
> @@ -2899,7 +2898,7 @@ update_link_reg(struct i40e_hw *hw, struct
> rte_eth_link *link)
>   uint32_t link_speed;
>   uint32_t reg_val;
> 
> - reg_val = I40E_READ_REG(hw, I40E_PRTMAC_LINKSTA);
> + reg_val = I40E_READ_REG(hw, I40E_PRTMAC_LINKSTA(0));
>   link_speed = reg_val & I40E_REG_SPEED_MASK;
>   reg_val &= I40E_REG_LINK_UP;
>   link->link_status = (reg_val == I40E_REG_LINK_UP) ? 1 : 0;
> --
> 2.25.1



Re: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02

2021-09-06 Thread Xing, Beilei
Hi Igor,

As far as I know, FW 8.15 is not validated by the DPDK team; only the recommended matching list is 
validated.
Why did you use FW 8.15 with DPDK 20.11? How about updating to FW 8.30? The matching DPDK 
versions are 21.05 and 21.08.
And if there's any problem while using the Intel NIC, you can contact your Intel PAE for 
help first.

BR,
Beilei

> -Original Message-
> From: Korolevskiy, Igor 
> Sent: Monday, September 6, 2021 5:01 PM
> To: Xing, Beilei ; Yigit, Ferruh
> ; dev@dpdk.org; us...@dpdk.org; d...@dpdk.org
> Cc: Yang, Qiming ; Zhang, Qi Z
> ; Kylulin, Yury ; Tsvetkov,
> Mikhail S 
> Subject: RE: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> 
> Beilei, hello.
> 
> Yes, we saw this table, thank you. My question was if we are using 8.15 FW
> with DPDK 20.11 - how are you going to work with Dell / MTS / Nokia  - you
> will say that version is not supported, or you will help us to troubleshoot
> problem ( if any )
> 
> I don’t think 8.00 is strictly the version we can use, or is it minimum
> recommended version?
> 
> Igor Korolevskiy | System Consultant @ Dell Technologies Data Center Sales
> Department
> m: +7(903)536-54-77
> Planned OOO:
> 13 - 28 Sept
> 
> 
> 
> Internal Use - Confidential
> 
> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, September 6, 2021 8:20 AM
> To: Korolevskiy, Igor; Yigit, Ferruh; dev@dpdk.org; us...@dpdk.org;
> d...@dpdk.org
> Cc: Yang, Qiming; Zhang, Qi Z; Petrov, Andrey; Kylulin, Yury; Tsvetkov, 
> Mikhail
> S
> Subject: RE: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> 
> 
> [EXTERNAL EMAIL]
> 
> Hi Igor,
> 
> According to the Recommended Matching List in doc/guides/nics/i40e.rst, we
> should use FW 8.00 with DPDK 20.11 & 21.02.
> 
>+--+---+--+
>| DPDK version | Kernel driver version | Firmware version |
>+==+===+==+
>|21.02 | 2.14.13   |   8.00   |
>+--+---+--+
>|20.11 | 2.14.13   |   8.00   |
>+--+---+--+
> 
> BR,
> Beilei
> 
> > -Original Message-
> > From: Korolevskiy, Igor 
> > Sent: Friday, September 3, 2021 4:19 PM
> > To: Yigit, Ferruh ; dev@dpdk.org;
> > us...@dpdk.org; d...@dpdk.org
> > Cc: Xing, Beilei ; Yang, Qiming
> > ; Zhang, Qi Z ; Petrov,
> > Andrey ; Kylulin, Yury
> > ; Tsvetkov, Mikhail S
> > 
> > Subject: RE: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> >
> > Hi team,
> >
> >  Thank you for your reply.
> >
> > Intel card in dual port X710 in PCI-e slot.
> >
> > Igor Korolevskiy | System Consultant @ Dell Technologies Data Center
> > Sales Department
> > m: +7(903)536-54-77
> > Planned OOO:
> > 30 Aug-3 Sept
> > 13 - 28 Sept
> >
> >
> >
> > Internal Use - Confidential
> >
> > -Original Message-
> > From: Ferruh Yigit 
> > Sent: Thursday, September 2, 2021 11:15 AM
> > To: Korolevskiy, Igor; dev@dpdk.org; us...@dpdk.org; d...@dpdk.org
> > Cc: Beilei Xing; Qiming Yang; Qi Zhang
> > Subject: Re: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> >
> >
> > [EXTERNAL EMAIL]
> >
> > On 9/2/2021 10:40 PM, Korolevskiy, Igor wrote:
> > > Dear DPDK community,
> > >
> > > Would you please help us to understand if we can use Intel 8.15
> > > firmware
> > with DPDK 20.11 and 21.02 versions?
> > >
> > > We have huge demand on our servers with usage of Nokia SBC and DPDK,
> > but stuck on validation.
> > >
> > > Thank you in advance!
> > >
> >
> > Hi Igor,
> >
> > Please clarify which Intel device are you talking about?
> >
> > cc'ed Intel NIC maintainers (assuming the device you mentioned is a
> > NIC) for more details.
> >
> > Regards,
> > ferruh


Re: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02

2021-09-05 Thread Xing, Beilei
Hi Igor,

According to the Recommended Matching List in doc/guides/nics/i40e.rst, we 
should use FW 8.00 with DPDK 20.11 & 21.02.

   +--+---+--+
   | DPDK version | Kernel driver version | Firmware version |
   +==+===+==+
   |21.02 | 2.14.13   |   8.00   |
   +--+---+--+
   |20.11 | 2.14.13   |   8.00   |
   +--+---+--+

BR,
Beilei

> -Original Message-
> From: Korolevskiy, Igor 
> Sent: Friday, September 3, 2021 4:19 PM
> To: Yigit, Ferruh ; dev@dpdk.org; us...@dpdk.org;
> d...@dpdk.org
> Cc: Xing, Beilei ; Yang, Qiming
> ; Zhang, Qi Z ; Petrov,
> Andrey ; Kylulin, Yury ;
> Tsvetkov, Mikhail S 
> Subject: RE: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> 
> Hi team,
> 
>  Thank you for your reply.
> 
> Intel card in dual port X710 in PCI-e slot.
> 
> Igor Korolevskiy | System Consultant @ Dell Technologies Data Center Sales
> Department
> m: +7(903)536-54-77
> Planned OOO:
> 30 Aug-3 Sept
> 13 - 28 Sept
> 
> 
> 
> Internal Use - Confidential
> 
> -Original Message-
> From: Ferruh Yigit 
> Sent: Thursday, September 2, 2021 11:15 AM
> To: Korolevskiy, Igor; dev@dpdk.org; us...@dpdk.org; d...@dpdk.org
> Cc: Beilei Xing; Qiming Yang; Qi Zhang
> Subject: Re: [dpdk-dev] Intel FW 8.15 with DPDK 20.11 & 21.02
> 
> 
> [EXTERNAL EMAIL]
> 
> On 9/2/2021 10:40 PM, Korolevskiy, Igor wrote:
> > Dear DPDK community,
> >
> > Would you please help us to understand if we can use Intel 8.15 firmware
> with DPDK 20.11 and 21.02 versions?
> >
> > We have huge demand on our servers with usage of Nokia SBC and DPDK,
> but stuck on validation.
> >
> > Thank you in advance!
> >
> 
> Hi Igor,
> 
> Please clarify which Intel device are you talking about?
> 
> cc'ed Intel NIC maintainers (assuming the device you mentioned is a NIC) for
> more details.
> 
> Regards,
> ferruh


Re: [dpdk-dev] [PATCH v4] ethdev: fix representor port ID search by name

2021-08-31 Thread Xing, Beilei



> -Original Message-
> From: Andrew Rybchenko 
> Sent: Wednesday, September 1, 2021 12:06 AM
> To: Ajit Khaparde ; Somnath Kotur
> ; Daley, John ;
> Hyong Youb Kim ; Xing, Beilei ;
> Yang, Qiming ; Zhang, Qi Z ;
> Wang, Haiyue ; Matan Azrad
> ; Shahaf Shuler ; Viacheslav
> Ovsiienko ; Thomas Monjalon
> ; Yigit, Ferruh 
> Cc: dev@dpdk.org; Viacheslav Galaktionov
> 
> Subject: [PATCH v4] ethdev: fix representor port ID search by name
> 
> From: Viacheslav Galaktionov 
> 
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
> 
> To this end, extend the rte_eth_dev_data structure to include the port ID of
> the backing device for representors.
> 
> Signed-off-by: Viacheslav Galaktionov 
> Signed-off-by: Andrew Rybchenko 
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI check is
> disabled for the structure because of the libabigail bug [1].
> 
> Potentially it is bad for out-of-tree drivers which implement representors but
> do not fill in a new parent_port_id field in rte_eth_dev_data structure. Do we
> care?
> 
> mlx5 changes should be reviewed by maintainers very carefully, since we are
> not sure if we patch it correctly.
> 
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
> v4:
> - apply mlx5 review notes: remove fallback from generic ethdev
>   code and add fallback to mlx5 code to handle legacy usecase
> 
> v3:
> - fix mlx5 build breakage
> 
> v2:
> - fix mlx5 review notes
> - try device port ID first before parent in order to address
>   backward compatibility issue
> 
>  drivers/net/bnxt/bnxt_reps.c |  1 +
>  drivers/net/enic/enic_vf_representor.c   |  1 +
>  drivers/net/i40e/i40e_vf_representor.c   |  1 +
>  drivers/net/ice/ice_dcf_vf_representor.c |  1 +
> drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>  drivers/net/mlx5/linux/mlx5_os.c | 13 +
>  drivers/net/mlx5/windows/mlx5_os.c   | 13 +
>  lib/ethdev/ethdev_driver.h   |  6 +++---
>  lib/ethdev/rte_class_eth.c   |  2 +-
>  lib/ethdev/rte_ethdev.c  |  8 
>  lib/ethdev/rte_ethdev_core.h |  6 ++
>  11 files changed, 45 insertions(+), 8 deletions(-)
> 

For i40e part,
Acked-by: Beilei Xing 


Re: [dpdk-dev] [PATCH v2] net/i40e: solve vf vlan strip

2021-08-29 Thread Xing, Beilei



> -Original Message-
> From: dev  On Behalf Of Qiming Chen
> Sent: Monday, August 30, 2021 10:10 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Qiming Chen
> 
> Subject: [dpdk-dev] [PATCH v2] net/i40e: solve vf vlan strip
> 
> Kernel PF+DPDK VF mode, after vf adds vlan, the test result shows that the
> vlan received from vf has been stripped.
> 
> The patch solves the problem that the kernel i40e.ko driver strips the vlan by
> default after vf adds vlan. Determine whether to strip vlan through the
> DEV_RX_OFFLOAD_VLAN_STRIP mask bit in rxmode.offload.
> 
> Environmental information:
> 1) dpdk 19.11
> 2) Kernel PF i40e.ko: 2.7.12
> 3) Firmware: 6.01 0x800034a3 1.1747.0

Thanks for the patch.
As Qi mentioned, i40evf will be deprecated. So please don't submit the patch to 
master repo, but submit to LTS.
Thanks.

> 
> I did not use testpmd to test vlan filter, but write Demo for testing based on
> the following deployment:
> 1) x710 nic, use 2 PFs, each PF virtualizes 1 VF
> 2) 2 pf connected with fiber optic cable
> 3) 2 vf are hard to pass through to the VM
> 4) In vm, dpdk takes over the vf port,
> 5) One port is used as the sending port, and the other port is used as the
> receiving port, e.g. xmit portid is 0, rx portid is 1
> 
> Use the default configuration for port 0 as the sender, and configure port 1
> as the receiving port as follows:
> 1) rte_eth_dev_set_vlan_offload(1, ETH_VLAN_FILTER_OFFLOAD)
> 2) rte_eth_dev_vlan_filter(1, 100, 1)
> 
> Do the following tests:
> Demo constructs a message with vlan 100 to be sent from port 0, and found
> that the vlan header of the message received from port 1 was stripped.
> 
> Signed-off-by: Qiming Chen 
> ---
> v2:
>   Clear coding style quesion.
> ---
>  drivers/net/i40e/i40e_ethdev_vf.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 625981048a..267e7be0c6 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1852,11 +1852,15 @@ static int
>  i40evf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)  {
>   int ret;
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> 
> - if (on)
> + if (on) {
>   ret = i40evf_add_vlan(dev, vlan_id);
> - else
> + if (!(dev_conf->rxmode.offloads &
> DEV_RX_OFFLOAD_VLAN_STRIP))
> + i40evf_disable_vlan_strip(dev);
> + } else {
>   ret = i40evf_del_vlan(dev,vlan_id);
> + }
> 
>   return ret;
>  }
> --
> 2.30.1.windows.1



Re: [dpdk-dev] [PATCH v2] net/i40e: solve the failure of vf vlan filtering

2021-08-25 Thread Xing, Beilei



> -Original Message-
> From: dev  On Behalf Of Qiming Chen
> Sent: Tuesday, August 24, 2021 5:30 PM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Qiming Chen
> 
> Subject: [dpdk-dev] [PATCH v2] net/i40e: solve the failure of vf vlan 
> filtering
> 
> When vf driver port promiscuous is turned on, the vlan filtering function is
> invalid.
> Through communication with PAE expert, this is a limitation of the i40e chip.
> Before adding or removing VLANs, you must first disable unicast
> promiscuous or multicast promiscuous, then operate the vlan, and finally
> restore unicast promiscuous or multicast promiscuous state.

Thanks for the patch.
But I heard from the DPDK validation team that there's no VF VLAN filter issue with
the i40evf driver. Please refer to the test plan
https://doc.dpdk.org/dts/test_plans/kernelpf_iavf_test_plan.html.

So could you please detail the issue?
E.g. do you use kernel PF + DPDK VF or DPDK PF + DPDK VF?
What's the driver version? And what are the steps to reproduce with testpmd?

BTW, for the commit log, there's no need to describe the details you communicated
with PAE; just describe what the issue is, the root cause, and how to fix it.
Besides, a fix patch needs a Fixes line. Please refer to other fix patches.

Beilei

> 
> Signed-off-by: Qiming Chen 
> ---
>  drivers/net/i40e/i40e_ethdev_vf.c | 23 +--
>  1 file changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 12e69a3233..a099daae6b 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1900,11 +1900,30 @@ static int
>  i40evf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)  {
>   int ret;
> + struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
> + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data-
> >dev_private);
> + bool promisc_unicast_enabled = vf->promisc_unicast_enabled;
> + bool promisc_multicast_enabled = vf->promisc_multicast_enabled;
> 
> - if (on)
> + if (promisc_unicast_enabled)
> + i40evf_dev_promiscuous_disable(dev);
> +
> + if (promisc_multicast_enabled)
> + i40evf_dev_allmulticast_disable(dev);
> +
> + if (on) {
>   ret = i40evf_add_vlan(dev, vlan_id);
> - else
> + if ((dev_conf->rxmode.offloads &
> DEV_RX_OFFLOAD_VLAN_STRIP) == 0)
> + i40evf_disable_vlan_strip(dev);
> + } else {
>   ret = i40evf_del_vlan(dev,vlan_id);
> + }
> +
> + if (promisc_unicast_enabled)
> + i40evf_dev_promiscuous_enable(dev);
> +
> + if (promisc_multicast_enabled)
> + i40evf_dev_allmulticast_enable(dev);
> 
>   return ret;
>  }
> --
> 2.30.1.windows.1



Re: [dpdk-dev] i40evf: potential segfault

2021-08-23 Thread Xing, Beilei


> -Original Message-
> From: dev  On Behalf Of Stefan Baranoff
> Sent: Saturday, August 21, 2021 2:18 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] i40evf: potential segfault
> 
> Hi all!
> 
> I was chasing a potential segfault and it appears, if I'm reading this driver
> correctly, that in i40evf_init_vf() the value vf->adapter->eth_dev is never 
> set
> like pf->adapter->eth_dev is in eth_i40e_dev_init().

Good catch.
You can submit the fix according to iavf_dev_init() where adapter->eth_dev is 
initialized.
Thanks.

> 
> I believe this is leading to a segfault when something like
> i40e_recv_scattered_pkts calls:
> dev = I40E_VSI_TO_ETH_DEV(rxq->vsi); // dev ends up NULL here
> dev->data->rx_mbuf_alloc_failed++; // this generates a NULL pointer
> dereference/segfault
> 
> 
> I'm not completely confident in my understanding of the PF/VF drivers so I
> may be missing something; but we are seeing the segfault on those lines in
> v20.05 at least. I couldn't find a related patch/commit but wanted to check if
> my reasoning was correct before adding this 1 line fix.
> 
> 
> Thanks,
> Stefan Baranoff


Re: [dpdk-dev] [PATCH] net/iavf: fix tx thresh check issue

2021-07-22 Thread Xing, Beilei



> -Original Message-
> From: Li, Xiaoyun 
> Sent: Thursday, July 22, 2021 3:56 PM
> To: dev@dpdk.org; Wu, Jingjing ; Xing, Beilei
> 
> Cc: Li, Xiaoyun ; sta...@dpdk.org
> Subject: [PATCH] net/iavf: fix tx thresh check issue
> 
> Function check_tx_thresh is called with wrong parameter. If the check fails,
> tx_queue_setup should return error not keep going.
> iThis patch fixes above issues.

Typo: This

Except that,
Acked-by: Beilei Xing 

> 
> Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Xiaoyun Li 
> ---
>  drivers/net/iavf/iavf_rxtx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c index
> d61b32fcee..e33fe4576b 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -708,7 +708,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
>   tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
>   tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
>   tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
> - check_tx_thresh(nb_desc, tx_rs_thresh, tx_rs_thresh);
> + if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
> + return -EINVAL;
> 
>   /* Free memory if needed. */
>   if (dev->data->tx_queues[queue_idx]) {
> --
> 2.25.1



Re: [dpdk-dev] Question about 'rxm->hash.rss' and 'mb->hash.fdir'

2021-06-30 Thread Xing, Beilei


> -Original Message-
> From: Min Hu (Connor) 
> Sent: Wednesday, June 30, 2021 7:22 PM
> To: Yigit, Ferruh ; dev@dpdk.org; Thomas Monjalon
> ; Andrew Rybchenko
> 
> Cc: Xing, Beilei ; Matan Azrad
> ; shah...@nvidia.com; viachesl...@nvidia.com
> Subject: Re: Question about 'rxm->hash.rss' and 'mb->hash.fdir'
> 
> Hi, Beilei, Matan, Shahaf, Viacheslav,
> 
>   how about your opinion?

Agree with Ferruh.

> 
> On 2021/6/30 17:34, Ferruh Yigit wrote:
> > On 6/30/2021 3:45 AM, Min Hu (Connor) wrote:
> >> Hi, all
> >>  one question about 'rxm->hash.rss' and 'mb->hash.fdir'.
> >>
> >>  In Rx recv packets function,
> >>  'rxm->hash.rss' will report rss hash result from Rx desc.
> >>  'rxm->hash.fdir' will report filter identifier from Rx desc.
> >>
> >>  But function implementation differs from some PMDs. for example:
> >>  i40e, MLX5 report the two at the same time if pkt_flags is set,like:
> >> **
> >>      if (pkt_flags & PKT_RX_RSS_HASH) {
> >>      rxm->hash.rss =
> >> rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
> >>      }
> >>      if (pkt_flags & PKT_RX_FDIR) {
> >>      mb->hash.fdir.hi =
> >>      rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id);
> >>      }
> >> 
> >>
> >>  While, ixgbe only report one of the two. like:
> >> **
> >>      if (likely(pkt_flags & PKT_RX_RSS_HASH))
> >>      mb->hash.rss = rte_le_to_cpu_32(
> >>      rxdp[j].wb.lower.hi_dword.rss);
> >>      else if (pkt_flags & PKT_RX_FDIR) {
> >>      mb->hash.fdir.hash = rte_le_to_cpu_16(
> >>      rxdp[j].wb.lower.hi_dword.csum_ip.csum) &
> >>      IXGBE_ATR_HASH_MASK;
> >>      mb->hash.fdir.id = rte_le_to_cpu_16(
> >>      rxdp[j].wb.lower.hi_dword.csum_ip.ip_id);
> >>      }
> >> 
> >>  So, what is application scenario for 'rxm->hash.rss' and
> >> 'mb->hash.fdir', that is, why the two should be reported? How about
> >> reporting the two at the same time?
> >>  Thanks for  your reply.
> >
> >
> > Hi Connor,
> >
> > mbuf->hash is union, so it is not possible to set both 'hash.rss' & 
> > 'hash.fdir'.
> >
> > I assume for i40e & mlx5 case 'pkt_flags' indicate which one is valid
> > and only one is set in practice. Cc'ed driver maintainers for more comments.
> 
> Thanks Ferruh,
>   another question, why does the user need this information: rxm->hash.rss
> or mb->hash.fdir.hi? What is its function?
> 
> > .
> >


Re: [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx

2021-06-29 Thread Xing, Beilei



> -Original Message-
> From: Feifei Wang 
> Sent: Wednesday, June 30, 2021 2:41 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; n...@arm.com; Feifei Wang ;
> Ruifeng Wang 
> Subject: [PATCH v3 1/2] net/i40e: improve performance for scalar Tx
> 
> For i40e scalar Tx path, if implement FAST_FREE_MBUF mode, it means per-
> queue all mbufs come from the same mempool and have refcnt = 1.
> 
> Thus we can use bulk free of the buffers when mbuf fast free mode is
> enabled.
> 
> Following are the test results with this patch:
> 
> MRR L3FWD Test:
> two ports & bi-directional flows & one core
> RX API: i40e_recv_pkts_bulk_alloc
> TX API: i40e_xmit_pkts_simple
> ring_descs_size = 1024; Ring_I40E_TX_MAX_FREE_SZ = 64;
> tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
> tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;
> 
> For scalar path in arm platform with default 'tx_rs_thresh':
> In n1sdp, performance is improved by 7.9%; In thunderx2, performance is
> improved by 7.6%.
> 
> For scalar path in x86 platform with default 'tx_rs_thresh':
> performance is improved by 4.7%.
> 
> Suggested-by: Ruifeng Wang 
> Signed-off-by: Feifei Wang 
> Reviewed-by: Ruifeng Wang 
> ---
>  drivers/net/i40e/i40e_rxtx.c | 30 --
>  1 file changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decece..0d3482a9d2 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1294,22 +1294,40 @@ static __rte_always_inline int
> i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
>   struct i40e_tx_entry *txep;
> - uint16_t i;
> + uint16_t tx_rs_thresh = txq->tx_rs_thresh;
> + uint16_t i = 0, j = 0;
> + struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> + const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh,
> RTE_I40E_TX_MAX_FREE_BUF_SZ);
> + const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
> 
>   if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
>   rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> 
>   rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
>   return 0;
> 
> - txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)]);
> + txep = &txq->sw_ring[txq->tx_next_dd - (tx_rs_thresh - 1)];
> 
> - for (i = 0; i < txq->tx_rs_thresh; i++)
> + for (i = 0; i < tx_rs_thresh; i++)
>   rte_prefetch0((txep + i)->mbuf);
> 
>   if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
> - for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> - rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> - txep->mbuf = NULL;
> + if (k) {
> + for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ)
> {
> + for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ;
> ++i, ++txep) {
> + free[i] = txep->mbuf;
> + txep->mbuf = NULL;
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void
> **)free,
> +
>   RTE_I40E_TX_MAX_FREE_BUF_SZ);
> + }
> + }
> +
> + if (m) {
> + for (i = 0; i < m; ++i, ++txep) {
> + free[i] = txep->mbuf;
> + txep->mbuf = NULL;
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void **)free,
> m);
>   }
>   } else {
>   for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> --
> 2.25.1
Acked-by: Beilei Xing 



Re: [dpdk-dev] [PATCH v2 1/2] net/i40e: improve performance for scalar Tx

2021-06-29 Thread Xing, Beilei



> -Original Message-
> From: Feifei Wang 
> Sent: Wednesday, June 30, 2021 10:04 AM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; n...@arm.com; Feifei Wang ;
> Ruifeng Wang 
> Subject: [PATCH v2 1/2] net/i40e: improve performance for scalar Tx
> 
> For i40e scalar Tx path, if implement FAST_FREE_MBUF mode, it means per-
> queue all mbufs come from the same mempool and have refcnt = 1.
> 
> Thus we can use bulk free of the buffers when mbuf fast free mode is
> enabled.
> 
> Following are the test results with this patch:
> 
> MRR L3FWD Test:
> two ports & bi-directional flows & one core
> RX API: i40e_recv_pkts_bulk_alloc
> TX API: i40e_xmit_pkts_simple
> ring_descs_size = 1024; Ring_I40E_TX_MAX_FREE_SZ = 64;
> tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
> tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;
> 
> For scalar path in arm platform with default 'tx_rs_thresh':
> In n1sdp, performance is improved by 7.9%; In thunderx2, performance is
> improved by 7.6%.
> 
> For scalar path in x86 platform with default 'tx_rs_thresh':
> performance is improved by 4.7%.
> 
> Suggested-by: Ruifeng Wang 
> Signed-off-by: Feifei Wang 
> Reviewed-by: Ruifeng Wang 
> ---
>  drivers/net/i40e/i40e_rxtx.c | 26 ++
>  1 file changed, 22 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decece..8c72391cde 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1294,7 +1294,11 @@ static __rte_always_inline int
> i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
>   struct i40e_tx_entry *txep;
> - uint16_t i;
> + int n = txq->tx_rs_thresh;

Thanks for the patch, just one little comment: can we use 'tx_rs_thresh' instead
of 'n' to make it more readable?

> + uint16_t i = 0, j = 0;
> + struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> + const int32_t k = RTE_ALIGN_FLOOR(n,
> RTE_I40E_TX_MAX_FREE_BUF_SZ);
> + const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ;
> 
>   if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
>   rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> @@ -1307,9 +1311,23 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
>   rte_prefetch0((txep + i)->mbuf);
> 
>   if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
> - for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> - rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> - txep->mbuf = NULL;
> + if (k) {
> + for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ)
> {
> + for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ;
> ++i, ++txep) {
> + free[i] = txep->mbuf;
> + txep->mbuf = NULL;
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void
> **)free,
> +
>   RTE_I40E_TX_MAX_FREE_BUF_SZ);
> + }
> + }
> +
> + if (m) {
> + for (i = 0; i < m; ++i, ++txep) {
> + free[i] = txep->mbuf;
> + txep->mbuf = NULL;
> + }
> + rte_mempool_put_bulk(free[0]->pool, (void **)free,
> m);
>   }
>   } else {
>   for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> --
> 2.25.1



Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx

2021-06-27 Thread Xing, Beilei


> -Original Message-
> From: Feifei Wang 
> Sent: Friday, June 25, 2021 5:40 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; nd ; Ruifeng Wang
> ; nd ; nd 
> Subject: Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> 
> 
> > > int n = txq->tx_rs_thresh;
> > >  int32_t i = 0, j = 0;
> > > const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > > const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct rte_mbuf
> > > *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> > >
> > > For FAST_FREE_MODE:
> > >
> > > if (k) {
> > >   for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
> > >   j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> > >   for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> > >   free[i] = txep->mbuf;
> > >   txep->mbuf = NULL;
> > >   }
> > >   rte_mempool_put_bulk(free[0]->pool, (void **)free,
> > >   RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > >   }
> > >  }
> > >
> > > if (m) {
> > >   for (i = 0; i < m; ++i, ++txep) {
> > >   free[i] = txep->mbuf;
> > >   txep->mbuf = NULL;
> > >   }
> > >  }
> > >  rte_mempool_put_bulk(free[0]->pool, (void **)free, m); }
> 
> > There seems to be no logical problem, but the code looks heavy due to the
> > for loops. Did you run a performance test with this change when
> > tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ?
> 
> Sorry for my late rely. It takes me some time to do the test for this path and
> following is my test results:
> 
> First, I come up with another way to solve this bug and compare it with
> "loop"(size of 'free' is 64).
> That is set the size of 'free' as a large constant. We know:
> tx_rs_thresh < ring_desc_size < I40E_MAX_RING_DESC(4096), so we can
> directly define as:
> struct rte_mbuf *free[I40E_MAX_RING_DESC];
> 
> [1]Test Config:
> MRR Test: two ports & bi-directional flows & one core
> RX API: i40e_recv_pkts_bulk_alloc TX API: i40e_xmit_pkts_simple
> ring_descs_size: 1024
> Ring_I40E_TX_MAX_FREE_SZ: 64
> 
> [2]Scheme:
> tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH
> tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH
> tx_rs_thresh <= tx_free_thresh < nb_tx_desc
> So we change the value of 'tx_rs_thresh' by adjusting
> I40E_DEFAULT_TX_RSBIT_THRESH
> 
> [3]Test Results (performance improve):
> In X86:
> tx_rs_thresh/ tx_free_thresh            32/32    256/256   512/512
> 1.mempool_put(base)                     0        0         0
> 2.mempool_put_bulk:loop                 +4.7%    +5.6%     +7.0%
> 3.mempool_put_bulk:large size for free  +3.8%    +2.3%     -2.0%
>   (free[I40E_MAX_RING_DESC])
> 
> In Arm:
> N1SDP:
> tx_rs_thresh/ tx_free_thresh            32/32    256/256   512/512
> 1.mempool_put(base)                     0        0         0
> 2.mempool_put_bulk:loop                 +7.9%    +9.1%     +2.9%
> 3.mempool_put_bulk:large size for free  +7.1%    +8.7%     +3.4%
>   (free[I40E_MAX_RING_DESC])
> 
> Thunderx2:
> tx_rs_thresh/ tx_free_thresh            32/32    256/256   512/512
> 1.mempool_put(base)                     0        0         0
> 2.mempool_put_bulk:loop                 +7.6%    +10.5%    +7.6%
> 3.mempool_put_bulk:large size for free  +1.7%    +18.4%    +10.2%
>   (free[I40E_MAX_RING_DESC])
> 
> As a result, I feel maybe 'loop' is better, and it seems not very heavy
> according to the test.
> What are your views? I look forward to your reply.
> Thanks a lot.

Thanks for your patch and test.
It looks OK to me, please send V2.


Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx

2021-06-23 Thread Xing, Beilei


> -Original Message-
> From: Feifei Wang 
> Sent: Tuesday, June 22, 2021 6:08 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; nd ; Ruifeng Wang
> ; nd 
> Subject: Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> Sorry, there was a mistake in the code; it should be:
> 
> int n = txq->tx_rs_thresh;
>  int32_t i = 0, j = 0;
> const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct rte_mbuf
> *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> 
> For FAST_FREE_MODE:
> 
> if (k) {
>   for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
>   j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
>   for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
>   free[i] = txep->mbuf;
>   txep->mbuf = NULL;
>   }
>   rte_mempool_put_bulk(free[0]->pool, (void **)free,
>   RTE_I40E_TX_MAX_FREE_BUF_SZ);
>   }
>  }
> 
> if (m) {
>   for (i = 0; i < m; ++i, ++txep) {
>   free[i] = txep->mbuf;
>   txep->mbuf = NULL;
>   }
>  }
>  rte_mempool_put_bulk(free[0]->pool, (void **)free, m); }
> 

There seems to be no logical problem, but the code looks heavy due to the for loops.
Did you run a performance test with this change when tx_rs_thresh > 
RTE_I40E_TX_MAX_FREE_BUF_SZ?


Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx

2021-06-21 Thread Xing, Beilei



> -Original Message-
> From: Feifei Wang 
> Sent: Thursday, May 27, 2021 4:17 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; n...@arm.com; Feifei Wang ;
> Ruifeng Wang 
> Subject: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> For i40e scalar Tx path, if implement FAST_FREE_MBUF mode, it means per-
> queue all mbufs come from the same mempool and have refcnt = 1.
> 
> Thus we can use bulk free of the buffers when mbuf fast free mode is
> enabled.
> 
> For scalar path in arm platform:
> In n1sdp, performance is improved by 7.8%; In thunderx2, performance is
> improved by 6.7%.
> 
> For scalar path in x86 platform,
> performance is improved by 6%.
> 
> Suggested-by: Ruifeng Wang 
> Signed-off-by: Feifei Wang 
> ---
>  drivers/net/i40e/i40e_rxtx.c | 5 -
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decece..fe7b20f750 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1295,6 +1295,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
>   struct i40e_tx_entry *txep;
>   uint16_t i;
> + struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> 
>   if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
>   rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> @@ -1308,9 +1309,11 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
> 
>   if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
>   for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> - rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> + free[i] = txep->mbuf;

The tx_rs_thresh can be 'nb_desc - 3', so if tx_rs_thresh > 
RTE_I40E_TX_MAX_FREE_BUF_SZ, there'll be an out-of-bounds access, right?

>   txep->mbuf = NULL;
>   }
> + rte_mempool_put_bulk(free[0]->pool, (void **)free,
> + txq->tx_rs_thresh);
>   } else {
>   for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
>   rte_pktmbuf_free_seg(txep->mbuf);
> --
> 2.25.1



Re: [dpdk-dev] [PATCH] net/i40e: fix L2 payload RSS mask input set

2021-06-21 Thread Xing, Beilei



> -Original Message-
> From: Zhang, AlvinX 
> Sent: Friday, June 18, 2021 4:38 PM
> To: Xing, Beilei 
> Cc: dev@dpdk.org; Zhang, AlvinX ;
> sta...@dpdk.org
> Subject: [PATCH] net/i40e: fix L2 payload RSS mask input set
> 
> Allow VLAN tag being added to L2 payload packet type RSS input set.
> 
> Fixes: ef4c16fd9148 ("net/i40e: refactor RSS flow")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Alvin Zhang 
> ---
>  drivers/net/i40e/i40e_hash.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
> index b1cb24f..722edc9 100644
> --- a/drivers/net/i40e/i40e_hash.c
> +++ b/drivers/net/i40e/i40e_hash.c
> @@ -201,11 +201,11 @@ struct i40e_hash_match_pattern {  #define
> I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
>   pattern, rss_mask, true, cus_pctype }
> 
> -#define I40E_HASH_L2_RSS_MASK(ETH_RSS_ETH |
> ETH_RSS_L2_SRC_ONLY | \
> +#define I40E_HASH_L2_RSS_MASK(ETH_RSS_VLAN |
> ETH_RSS_ETH | \
> + ETH_RSS_L2_SRC_ONLY | \
>   ETH_RSS_L2_DST_ONLY)
> 
>  #define I40E_HASH_L23_RSS_MASK   (I40E_HASH_L2_RSS_MASK |
> \
> - ETH_RSS_VLAN | \
>   ETH_RSS_L3_SRC_ONLY | \
>   ETH_RSS_L3_DST_ONLY)
> 
> --
> 1.8.3.1

Acked-by: Beilei Xing 



Re: [dpdk-dev] [PATCH v3] net/i40e: fix set rss hash function invalid

2021-06-21 Thread Xing, Beilei



> -Original Message-
> From: dev  On Behalf Of Steve Yang
> Sent: Monday, June 21, 2021 4:04 PM
> To: dev@dpdk.org
> Cc: Xing, Beilei ; Yang, SteveX
> ; sta...@dpdk.org
> Subject: [dpdk-dev] [PATCH v3] net/i40e: fix set rss hash function invalid
> 
> i40e can support following rss hash function types: default/toeplitz,
> symmetric toeplitz, and simple_xor. However, when filter engine parses
> pattern action, it only supports symmetric toeplitz & default.
> 
> Add simple_xor and toeplitz hash functions support when parsing pattern
> action.
> 
> Fixes: ef4c16fd9148 ("net/i40e: refactor RSS flow")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Steve Yang 
> ---
> v3:
>  - add Cc stable line.
> v2:
>  - add the fix line.
>  - support simple_xor and toeplitz hash functions explicitly.
> 
>  drivers/net/i40e/i40e_hash.c | 20 ++--
>  1 file changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
> index b1cb24f437..0cef21c88f 100644
> --- a/drivers/net/i40e/i40e_hash.c
> +++ b/drivers/net/i40e/i40e_hash.c
> @@ -1105,13 +1105,21 @@ i40e_hash_parse_pattern_act(const struct
> rte_eth_dev *dev,
> NULL,
> "RSS Queues not supported when
> pattern specified");
> 
> - if (rss_act->func ==
> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ)
> + switch (rss_act->func) {
> + case RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ:
>   rss_conf->symmetric_enable = true;
> - else if (rss_act->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
> - return rte_flow_error_set(error, -EINVAL,
> -
> RTE_FLOW_ERROR_TYPE_ACTION_CONF,
> -   NULL,
> -   "Only symmetric TOEPLITZ
> supported when pattern specified");
> + break;
> + case RTE_ETH_HASH_FUNCTION_DEFAULT:
> + case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
> + case RTE_ETH_HASH_FUNCTION_SIMPLE_XOR:
> + break;
> + default:
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION_CONF,
> + NULL,
> + "RSS hash function not supported "
> + "when pattern specified");
> + }
> 
>   if (!i40e_hash_validate_rss_types(rss_act->types))
>   return rte_flow_error_set(error, EINVAL,
> --
> 2.27.0

Acked-by: Beilei Xing 


