RE: [PATCH] graph: fix head move when graph walk in mcore dispatch

2024-03-20 Thread Wu, Jingjing



> -Original Message-
> From: Yan, Zhirun 
> Sent: Wednesday, March 20, 2024 4:43 PM
> To: Wu, Jingjing ; dev@dpdk.org
> Cc: jer...@marvell.com; pbhagavat...@marvell.com; sta...@dpdk.org
> Subject: RE: [PATCH] graph: fix head move when graph walk in mcore dispatch
> 
> 
> 
> > -Original Message-
> > From: Wu, Jingjing 
> > Sent: Wednesday, March 20, 2024 2:25 PM
> > To: Yan, Zhirun ; dev@dpdk.org
> > Cc: jer...@marvell.com; pbhagavat...@marvell.com; sta...@dpdk.org
> > Subject: RE: [PATCH] graph: fix head move when graph walk in mcore
> dispatch
> >
> >
> > > > /* skip the src nodes which not bind with current 
> > > > worker */
> > > > if ((int32_t)head < 0 && node->dispatch.lcore_id != 
> > > > graph-
> > > > >dispatch.lcore_id)
> > > > continue;
> > > > -
> > > > +   head++;
> > > If the current src node is not bound to the current core, it will go
> > > into an infinite loop.
> > > This line would have no chance to run.
> >
> > Seems reasonable; would it be OK to change the condition check from
> > "head < 0" to "head < 1"?
> 
> No. "head < 0" means it is a src node.
> All src nodes are put before head = 0. "head < 1" is confusing.
> You can find the details of the graph walk under rte_graph_walk_rtc() in
> lib/graph/rte_graph_model_rtc.h.
> 
> I guess if some src nodes are missed, it may be caused by a wrong config,
> for example, a missed src node not pinned to an lcore.
> Use rte_graph_model_mcore_dispatch_node_lcore_affinity_set() to pin the
> src node first.

I don't think it is confusing, because head++ happens before the head < 0 check.
Yes, it happens when lcore affinity is not set.
For example, suppose we have two source nodes, neither of which has an lcore
affinity setting.
With the current code, the second node will also be executed, which is not as
expected.

Thanks
Jingjing


RE: [PATCH] graph: fix head move when graph walk in mcore dispatch

2024-03-19 Thread Wu, Jingjing


> > /* skip the src nodes which not bind with current worker */
> > if ((int32_t)head < 0 && node->dispatch.lcore_id != graph-
> > >dispatch.lcore_id)
> > continue;
> > -
> > +   head++;
> If the current src node is not bound to the current core, it will go into an
> infinite loop. This line would have no chance to run.

Seems reasonable; would it be OK to change the condition check from "head < 0"
to "head < 1"?



RE: [PATCH v2] net/cpfl: update CP channel API

2023-10-18 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Thursday, October 19, 2023 6:58 PM
> To: Wu, Jingjing ; Zhang, Yuying
> 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH v2] net/cpfl: update CP channel API
> 
> From: Beilei Xing 
> 
> Update the cpchnl2 function type according to the definition in
> MEV 1.0 release.
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


RE: [PATCH] net/cpfl: reset devargs during the first probe

2023-10-11 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Thursday, October 12, 2023 12:47 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH] net/cpfl: reset devargs during the first probe
> 
> From: Beilei Xing 

> Reset devargs during the first probe. Otherwise, probe again will
> be affected.
> 
> Fixes: a607312291b3 ("net/cpfl: support probe again")
> 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 762fbddfe6..890a027a1d 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -1611,11 +1611,12 @@ cpfl_parse_devargs(struct rte_pci_device *pci_dev,
> struct cpfl_adapter_ext *adap
>   struct rte_kvargs *kvlist;
>   int ret;
> 
> - cpfl_args->req_vport_nb = 0;
> -
>   if (devargs == NULL)
>   return 0;
> 
> + if (first)
> + memset(cpfl_args, 0, sizeof(struct cpfl_devargs));
> +
The adapter is allocated by rte_zmalloc, so it should already be zeroed.
If I understand correctly, the memset to 0 should happen when the first probe
is done, or before probing again, but not at the beginning of the first probe.



RE: [PATCH] raw/ntb: add support for 5th and 6th Gen Intel Xeon

2023-09-15 Thread Wu, Jingjing


> -Original Message-
> From: Guo, Junfeng 
> Sent: Friday, September 15, 2023 3:08 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Guo, Junfeng 
> Subject: [PATCH] raw/ntb: add support for 5th and 6th Gen Intel Xeon
> 
> Add support for 5th and 6th Gen Intel Xeon Scalable processors. Note
> that NTB devices within the 3rd, 4th and 5th Gen Intel Xeon Scalable
> processors share the same device id, and compliant to PCIe 4.0 spec.
> And the NTB devices within 6th Gen Intel Xeon compliant to PCIe 5.0.
> 
> Signed-off-by: Junfeng Guo 
> ---
>  doc/guides/rawdevs/ntb.rst |   8 +-
>  drivers/raw/ntb/ntb.c  |  10 +-
>  drivers/raw/ntb/ntb.h  |   6 +-
>  drivers/raw/ntb/ntb_hw_intel.c | 190 +++--
>  drivers/raw/ntb/ntb_hw_intel.h |  96 ++---
>  usertools/dpdk-devbind.py  |   8 +-
>  6 files changed, 159 insertions(+), 159 deletions(-)
> 
> diff --git a/doc/guides/rawdevs/ntb.rst b/doc/guides/rawdevs/ntb.rst
> index f8befc6594..e663b7a088 100644
> --- a/doc/guides/rawdevs/ntb.rst
> +++ b/doc/guides/rawdevs/ntb.rst
> @@ -153,6 +153,8 @@ Limitation
> 
>  This PMD is only supported on Intel Xeon Platforms:
> 
> -- 4th Generation Intel® Xeon® Scalable Processors.
> -- 3rd Generation Intel® Xeon® Scalable Processors.
> -- 2nd Generation Intel® Xeon® Scalable Processors.
> +- 6th Generation Intel® Xeon® Scalable Processors. (NTB GEN5 device id:
> 0x0DB4)
> +- 5th Generation Intel® Xeon® Scalable Processors. (NTB GEN4 device id:
> 0x347E)
> +- 4th Generation Intel® Xeon® Scalable Processors. (NTB GEN4 device id:
> 0x347E)
> +- 3rd Generation Intel® Xeon® Scalable Processors. (NTB GEN4 device id:
> 0x347E)
> +- 2nd Generation Intel® Xeon® Scalable Processors. (NTB GEN3 device id:
> 0x201C)

Should a release note update be added too, since a new platform is added?


RE: [PATCH v6 00/10] net/cpfl: support port representor

2023-09-12 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Wednesday, September 13, 2023 1:30 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> 
> Subject: [PATCH v6 00/10] net/cpfl: support port representor
> 
> From: Beilei Xing 

Acked-by: Jingjing Wu 


RE: [PATCH v3 4/9] net/cpfl: setup ctrl path

2023-09-10 Thread Wu, Jingjing



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Wednesday, September 6, 2023 5:34 PM
> To: Zhang, Yuying ; dev@dpdk.org; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: Liu, Mingxia ; Qiao, Wenjing
> 
> Subject: [PATCH v3 4/9] net/cpfl: setup ctrl path
> 
> Setup the control vport and control queue for flow offloading.

In general, "[PATCH 3/9] net/cpfl: add FXP low level implementation" also
contains ctrl queue setup functions.
The patch set may need to be better organized.


RE: [PATCH v3 2/9] net/cpfl: add flow json parser

2023-09-10 Thread Wu, Jingjing
> +static int
> +cpfl_json_object_to_int(json_object *object, const char *name, int *value)
> +{
> + json_object *subobject;
> +
> + if (!object) {
> + PMD_DRV_LOG(ERR, "object doesn't exist.");
> + return -EINVAL;
> + }
> + subobject = json_object_object_get(object, name);
> + if (!subobject) {
> + PMD_DRV_LOG(ERR, "%s doesn't exist.", name);
> + return -EINVAL;
> + }
> + *value = json_object_get_int(subobject);
> +
> + return 0;
> +}
> +
> +static int
> +cpfl_json_object_to_uint16(json_object *object, const char *name, uint16_t
> *value)
> +{
There looks to be no need to define a new function, as there is no difference
from the cpfl_json_object_to_int function besides the type of the output value.
[...]

> +
> +static int
> +cpfl_flow_js_pattern_key_proto_field(json_object *cjson_field,
> +  struct cpfl_flow_js_pr_key_proto *js_field)
> +{
> + int len, i;
> +
> + if (!cjson_field)
> + return 0;
> + len = json_object_array_length(cjson_field);
> + js_field->fields_size = len;
> + if (len == 0)
Move the if check above, before setting js_field->fields_size?

> + return 0;
> + js_field->fields =
> + rte_malloc(NULL, sizeof(struct cpfl_flow_js_pr_key_proto_field) *
> len, 0);
> + if (!js_field->fields) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> + for (i = 0; i < len; i++) {
> + json_object *object;
> + const char *name, *mask;
> +
> + object = json_object_array_get_idx(cjson_field, i);
> + name = cpfl_json_object_to_string(object, "name");
> + if (!name) {
> + PMD_DRV_LOG(ERR, "Can not parse string 'name'.");
> + goto err;
> + }
> + if (strlen(name) > CPFL_FLOW_JSON_STR_SIZE_MAX) {
> + PMD_DRV_LOG(ERR, "The 'name' is too long.");
> + goto err;
> + }
> + memcpy(js_field->fields[i].name, name, strlen(name));
Is js_field->fields[i].name zeroed? If not, copying only strlen() bytes cannot
guarantee a correctly terminated string.

> + if (js_field->type == RTE_FLOW_ITEM_TYPE_ETH ||
> + js_field->type == RTE_FLOW_ITEM_TYPE_IPV4) {
> + mask = cpfl_json_object_to_string(object, "mask");
> + if (!mask) {
> + PMD_DRV_LOG(ERR, "Can not parse string
> 'mask'.");
> + goto err;
> + }
> + memcpy(js_field->fields[i].mask, mask, strlen(mask));
The same as above.

> + } else {
> + uint32_t mask_32b;
> + int ret;
> +
> + ret = cpfl_json_object_to_uint32(object, "mask",
> &mask_32b);
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR, "Can not parse uint32
> 'mask'.");
> + goto err;
> + }
> + js_field->fields[i].mask_32b = mask_32b;
> + }
> + }
> +
> + return 0;
> +
> +err:
> + rte_free(js_field->fields);
> + return -EINVAL;
> +}
> +
> +static int
> +cpfl_flow_js_pattern_key_proto(json_object *cjson_pr_key_proto, struct
> cpfl_flow_js_pr *js_pr)
> +{
> + int len, i, ret;
> +
> + len = json_object_array_length(cjson_pr_key_proto);
> + js_pr->key.proto_size = len;
> + js_pr->key.protocols = rte_malloc(NULL, sizeof(struct
> cpfl_flow_js_pr_key_proto) * len, 0);
> + if (!js_pr->key.protocols) {
> + PMD_DRV_LOG(ERR, "Failed to alloc memory.");
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < len; i++) {
> + json_object *object, *cjson_pr_key_proto_fields;
> + const char *type;
> + enum rte_flow_item_type item_type;
> +
> + object = json_object_array_get_idx(cjson_pr_key_proto, i);
> + /* pr->key->proto->type */
> + type = cpfl_json_object_to_string(object, "type");
> + if (!type) {
> + PMD_DRV_LOG(ERR, "Can not parse string 'type'.");
> + goto err;
> + }
> + item_type = cpfl_get_item_type_by_str(type);
> + if (item_type == RTE_FLOW_ITEM_TYPE_VOID)
> + goto err;
> + js_pr->key.protocols[i].type = item_type;
> + /* pr->key->proto->fields */
> + cjson_pr_key_proto_fields = json_object_object_get(object,
> "fields");
> + ret =
> cpfl_flow_js_pattern_key_proto_field(cjson_pr_key_proto_fields,
> +&js_pr-
> >key.protocols[i]);
> + if (ret < 0)
> + goto err;
> + }
> +
> + return 0;
> +
> +err:
> + rte_free(js_pr->key.protocols);
> + retu

RE: [PATCH v3 1/9] net/cpfl: parse flow parser file in devargs

2023-09-10 Thread Wu, Jingjing



> -Original Message-
> From: Qiao, Wenjing 
> Sent: Wednesday, September 6, 2023 5:34 PM
> To: Zhang, Yuying ; dev@dpdk.org; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: Liu, Mingxia ; Qiao, Wenjing
> 
> Subject: [PATCH v3 1/9] net/cpfl: parse flow parser file in devargs
> 
> Add devargs "flow_parser" for rte_flow json parser.
> 
> Signed-off-by: Wenjing Qiao 
> ---
>  doc/guides/nics/cpfl.rst   | 32 
>  drivers/net/cpfl/cpfl_ethdev.c | 38
> +-
>  drivers/net/cpfl/cpfl_ethdev.h |  3 +++
>  drivers/net/cpfl/meson.build   |  6 ++
>  4 files changed, 78 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
> index c20334230b..7032dd1a1a 100644
> --- a/doc/guides/nics/cpfl.rst
> +++ b/doc/guides/nics/cpfl.rst
> @@ -128,12 +128,24 @@ Runtime Configuration
> 
>  -a BDF,representor=vf[0-3],representor=c1pf1
> 
> +- ``flow_parser`` (default ``not enabled``)
> +
> +  The PMD supports using a JSON file to parse rte_flow tokens into low level
> hardware
> +  resources defined in a DDP package file.
> +
> +  The user can specify the path of json file, for example::
> +
> +-a ca:00.0,flow_parser="refpkg.json"
> +
> +  Then the PMD will load json file for device ``ca:00.0``.
> +  The parameter is optional.
> 
>  Driver compilation and testing
>  --
> 
>  Refer to the document :doc:`build_and_test` for details.
> 
> +Rte flow need to install json-c library.
> 
>  Features
>  
> @@ -164,3 +176,23 @@ Hairpin queue
>  E2100 Series can loopback packets from RX port to TX port.
>  This feature is called port-to-port or hairpin.
>  Currently, the PMD only supports single port hairpin.
> +
> +Rte_flow
> +~
> +
> +Rte_flow uses a json file to direct CPF PMD to parse rte_flow tokens into
> +low level hardware resources defined in a DDP package file.
> +
> +#. install json-c library::
> +
> +   .. code-block:: console
> +
> +   git clone https://github.com/json-c/json-c.git
> +   cd json-c
> +   git checkout 777dd06be83ef7fac71c2218b565557cd068a714
> +
json-c is the dependency; we can install it with a package management tool,
such as apt. Can you add that reference?
If we need to install from source code, a version number might be better than
a commit id.
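For reference, a hedged sketch of what the doc could add (the package names are distro-dependent assumptions; verify them for the target distributions):

```shell
# Debian/Ubuntu
sudo apt-get install -y libjson-c-dev

# Fedora/RHEL
sudo dnf install -y json-c-devel
```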


RE: [PATCH v4 10/10] net/cpfl: support link update for representor

2023-09-08 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, September 8, 2023 7:17 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> 
> Subject: [PATCH v4 10/10] net/cpfl: support link update for representor
> 
> From: Beilei Xing 
> 
> Add link update ops for representor.
> 
> Signed-off-by: Jingjing Wu 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.h  |  1 +
>  drivers/net/cpfl/cpfl_representor.c | 21 +
>  2 files changed, 22 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index 4937d2c6e3..0dd9d4e7f9 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -162,6 +162,7 @@ struct cpfl_repr {
>   struct cpfl_repr_id repr_id;
>   struct rte_ether_addr mac_addr;
>   struct cpfl_vport_info *vport_info;
> + bool func_up; /* If the represented function is up */
>  };
> 
>  struct cpfl_adapter_ext {
> diff --git a/drivers/net/cpfl/cpfl_representor.c
> b/drivers/net/cpfl/cpfl_representor.c
> index 0cd92b1351..3c0fa957de 100644
> --- a/drivers/net/cpfl/cpfl_representor.c
> +++ b/drivers/net/cpfl/cpfl_representor.c
> @@ -308,6 +308,23 @@ cpfl_repr_tx_queue_setup(__rte_unused struct
> rte_eth_dev *dev,
>   return 0;
>  }
> 
> +static int
> +cpfl_repr_link_update(struct rte_eth_dev *ethdev,
> +   __rte_unused int wait_to_complete)
> +{
> + struct cpfl_repr *repr = CPFL_DEV_TO_REPR(ethdev);
> + struct rte_eth_link *dev_link = ðdev->data->dev_link;
> +
> + if (!(ethdev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)) {
> + PMD_INIT_LOG(ERR, "This ethdev is not representor.");
> + return -EINVAL;
> + }
> + dev_link->link_status = repr->func_up ?
> + RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
> +
> + return 0;
> +}
> +
>  static const struct eth_dev_ops cpfl_repr_dev_ops = {
>   .dev_start  = cpfl_repr_dev_start,
>   .dev_stop   = cpfl_repr_dev_stop,
> @@ -317,6 +334,8 @@ static const struct eth_dev_ops cpfl_repr_dev_ops = {
> 
>   .rx_queue_setup = cpfl_repr_rx_queue_setup,
>   .tx_queue_setup = cpfl_repr_tx_queue_setup,
> +
> + .link_update= cpfl_repr_link_update,
>  };
> 
>  static int
> @@ -331,6 +350,8 @@ cpfl_repr_init(struct rte_eth_dev *eth_dev, void
> *init_param)
>   repr->itf.type = CPFL_ITF_TYPE_REPRESENTOR;
>   repr->itf.adapter = adapter;
>   repr->itf.data = eth_dev->data;
> + if (repr->vport_info->vport_info.vport_status ==
> CPCHNL2_VPORT_STATUS_ENABLED)
> + repr->func_up = true;
> 
What about event processing? Think about the case where the VSI status changes.

>   eth_dev->dev_ops = &cpfl_repr_dev_ops;
> 
> --
> 2.34.1



RE: [PATCH v4 09/10] net/cpfl: create port representor

2023-09-08 Thread Wu, Jingjing
> + /* warning if no match vport detected */
> + if (!matched)
> + PMD_INIT_LOG(WARNING, "No matched vport for
> representor %s "
> +   "creation will be deferred when
> vport is detected",
> +   name);
> +
If the vport info is responded to successfully, in what case would matched be
false? Also, I did not find the deferred processing.
> + rte_spinlock_unlock(&adapter->vport_map_lock);
> + }
> +
> +err:
> + rte_spinlock_unlock(&adapter->repr_lock);
> + rte_free(vlist_resp);
> + return ret;
> +}


RE: [PATCH v4 08/10] net/cpfl: support vport list/info get

2023-09-08 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, September 8, 2023 7:17 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> 
> Subject: [PATCH v4 08/10] net/cpfl: support vport list/info get
> 
> From: Beilei Xing 
> 
> Support cp channel ops CPCHNL2_OP_CPF_GET_VPORT_LIST and
> CPCHNL2_OP_CPF_GET_VPORT_INFO.
> 
> Signed-off-by: Beilei Xing 

Can we merge this patch into the previous cpchnl handling one, or move it
ahead of the patches that introduce the representor?
 



RE: [PATCH v4 03/10] net/cpfl: refine handle virtual channel message

2023-09-08 Thread Wu, Jingjing
> -static struct idpf_vport *
> +static struct cpfl_vport *
>  cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
>  {
> - struct idpf_vport *vport = NULL;
> + struct cpfl_vport *vport = NULL;
>   int i;
> 
>   for (i = 0; i < adapter->cur_vport_nb; i++) {
> - vport = &adapter->vports[i]->base;
> - if (vport->vport_id != vport_id)
> + vport = adapter->vports[i];
> + if (vport->base.vport_id != vport_id)
Check if vport is NULL before accessing the structure?
>   continue;
>   else
>   return vport;
>   }
> 
> - return vport;
> + return NULL;
>  }


RE: [PATCH v4 02/10] net/cpfl: introduce interface structure

2023-09-08 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, September 8, 2023 7:17 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> ; Zhang, Qi Z 
> Subject: [PATCH v4 02/10] net/cpfl: introduce interface structure
> 
> From: Beilei Xing 
> 
> Introduce cplf interface structure to distinguish vport and port
> representor.
> 
> Signed-off-by: Qi Zhang 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c |  3 +++
>  drivers/net/cpfl/cpfl_ethdev.h | 16 
>  2 files changed, 19 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 46b3a52e49..92fe92c00f 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -1803,6 +1803,9 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void
> *init_params)
>   goto err;
>   }
> 
> + cpfl_vport->itf.type = CPFL_ITF_TYPE_VPORT;
> + cpfl_vport->itf.adapter = adapter;
> + cpfl_vport->itf.data = dev->data;
>   adapter->vports[param->idx] = cpfl_vport;
>   adapter->cur_vports |= RTE_BIT32(param->devarg_id);
>   adapter->cur_vport_nb++;
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index b637bf2e45..53e45035e8 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -86,7 +86,19 @@ struct p2p_queue_chunks_info {
>   uint32_t rx_buf_qtail_spacing;
>  };
> 
> +enum cpfl_itf_type {
> + CPFL_ITF_TYPE_VPORT,
> + CPFL_ITF_TYPE_REPRESENTOR
CPFL_ITF_TYPE_REPRESENTOR is defined but not used in this patch; how about
moving it to the patch that uses it?
> +};
> +
> +struct cpfl_itf {
> + enum cpfl_itf_type type;
> + struct cpfl_adapter_ext *adapter;
> + void *data;
> +};
> +
>  struct cpfl_vport {
> + struct cpfl_itf itf;
>   struct idpf_vport base;
>   struct p2p_queue_chunks_info *p2p_q_chunks_info;
> 
> @@ -124,5 +136,9 @@ TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
>   RTE_DEV_TO_PCI((eth_dev)->device)
>  #define CPFL_ADAPTER_TO_EXT(p)   \
>   container_of((p), struct cpfl_adapter_ext, base)
> +#define CPFL_DEV_TO_VPORT(dev)   \
> + ((struct cpfl_vport *)((dev)->data->dev_private))
> +#define CPFL_DEV_TO_ITF(dev) \
> + ((struct cpfl_itf *)((dev)->data->dev_private))
> 
>  #endif /* _CPFL_ETHDEV_H_ */
> --
> 2.34.1



RE: [PATCH v2] net/i40e: fix FDIR Rxq receives broadcast packets

2023-07-20 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, July 21, 2023 1:35 PM
> To: Wu, Jingjing ; Zhang, Yuying 
> 
> Cc: dev@dpdk.org; Xing, Beilei ; sta...@dpdk.org
> Subject: [PATCH v2] net/i40e: fix FDIR Rxq receives broadcast packets
> 
> From: Beilei Xing 
> 
> FDIR Rxq is expected to only receive FDIR programming status, and won't
> receive broadcast packets.
> 
> Fixes: a778a1fa2e4e ("i40e: set up and initialize flow director")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


RE: [PATCH] doc: update BIOS setting and supported HW list for NTB

2023-07-03 Thread Wu, Jingjing


> -Original Message-
> From: Guo, Junfeng 
> Sent: Monday, July 3, 2023 5:25 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; sta...@dpdk.org; Guo, Junfeng 
> Subject: [PATCH] doc: update BIOS setting and supported HW list for NTB
> 
> Update BIOS settings and supported platform list for Intel NTB.
> 
> Fixes: f5057be340e4 ("raw/ntb: support Intel Ice Lake")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junfeng Guo 

Acked-by: Jingjing Wu 


RE: [PATCH] net/cpfl: fix RSS lookup table configuration

2023-07-03 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, July 3, 2023 7:28 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH] net/cpfl: fix RSS lookup table configuration
> 
> From: Beilei Xing 
> 
> Ethdev Rx queues includes normal data queues and hairpin queues,
> RSS should direct traffic only to the normal data queues.
> 
> Fixes: fda03330fcaa ("net/cpfl: support hairpin queue configuration")
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


RE: [PATCH v2] raw/ntb: add check for disabling interrupt in dev close ops

2023-07-02 Thread Wu, Jingjing



> -Original Message-
> From: Guo, Junfeng 
> Sent: Wednesday, June 28, 2023 5:12 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; sta...@dpdk.org; He, Xingguang ; 
> Laatz, Kevin
> ; Guo, Junfeng 
> Subject: [PATCH v2] raw/ntb: add check for disabling interrupt in dev close 
> ops
> 
> During EAL cleanup stage, all bus devices are cleaned up properly.
> In the meantime, the ntb example app will also do the device cleanup
> process, which may call the dev ops '*dev->dev_ops->dev_close' twice.
> 
> If this dev ops for ntb was called twice, the interrupt handle for
> EAL will be disabled twice and will lead to error for the seconde
> time. Like this: "EAL: Error disabling MSI-X interrupts for fd xx"
> 
> Thus, this patch added the check process for disabling interrupt in
> dev_close ops, to ensure that interrupt only be disabled once.
> 
> Fixes: 1cab1a40ea9b ("bus: cleanup devices on shutdown")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junfeng Guo 

Acked-by: Jingjing Wu 


RE: [PATCH] doc: update release notes for Intel IPU

2023-07-02 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Wednesday, June 28, 2023 11:39 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH] doc: update release notes for Intel IPU
> 
> From: Beilei Xing 
> 
> Update release notes for Intel IPU new features:
>  - Support VF whose device id is 0x145c.
>  - Support hairpin queue.
> 
> Fixes: 32bcd47e16fe ("net/idpf: support VF")
> Fixes: 1ec8064832db ("net/cpfl: add haipin queue group during vport init")
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


RE: [PATCH] net/cpfl: fix fail to re-configure RSS

2023-06-16 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, June 16, 2023 8:00 PM
> To: Wu, Jingjing ; Zhang, Yuying 
> 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH] net/cpfl: fix fail to re-configure RSS
> 
> From: Beilei Xing 
> 
> Currently, if launch testpmd with multiple queues and re-configure
> rxq with 'port config all rxq 1', Rx queue 0 may not receive packets.
> that's because RSS lookup tale is not re-configured when Rxq number
> is 1.
> Although Rxq number is 1 and multi queue mode is RTE_ETH_MQ_RX_NONE,
> cpfl PMD should init RSS to allow RSS re-configuration.
> 
> Fixes: cfbc66551a14 ("net/cpfl: support RSS")
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 



RE: [PATCH v10 00/14] net/cpfl: add hairpin queue support

2023-06-05 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Tuesday, June 6, 2023 6:03 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> 
> Subject: [PATCH v10 00/14] net/cpfl: add hairpin queue support
> 
> From: Beilei Xing 
> 
> This patchset adds hairpin queue support.
> 
> v2 changes:
>  - change hairpin rx queus configuration sequence.
>  - code refine.
> 
> v3 changes:
>  - Refine the patchset based on the latest code.
> 
> v4 change:
>  - Remove hairpin rx buffer queue's sw_ring.
>  - Change hairpin rx queus configuration sequence in cpfl_hairpin_bind 
> function.
>  - Refind hairpin queue setup and release.
> 
> v5 change:
>  - Fix memory leak during queue setup.
>  - Refine hairpin Rxq/Txq start/stop.
> 
> v6 change:
>  - Add sign-off.
> 
> v7 change:
>  - Update cpfl.rst
> 
> v8 change:
>  - Fix Intel-compilation failure.
> 
> v9 change:
>  - Fix memory leak if fail to init queue group.
>  - Change log level.
> 
> v10 change:
>  - Avoid accessing out-of-bounds.
> 
> Beilei Xing (14):
>   net/cpfl: refine structures
>   common/idpf: support queue groups add/delete
>   net/cpfl: add haipin queue group during vport init
>   net/cpfl: support hairpin queue capbility get
>   net/cpfl: support hairpin queue setup and release
>   common/idpf: add queue config API
>   net/cpfl: support hairpin queue configuration
>   common/idpf: add switch queue API
>   net/cpfl: support hairpin queue start/stop
>   common/idpf: add irq map config API
>   net/cpfl: enable write back based on ITR expire
>   net/cpfl: support peer ports get
>   net/cpfl: support hairpin bind/unbind
>   doc: update the doc of CPFL PMD
> 
>  doc/guides/nics/cpfl.rst   |   7 +
>  drivers/common/idpf/idpf_common_device.c   |  75 ++
>  drivers/common/idpf/idpf_common_device.h   |   4 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 138 +++-
>  drivers/common/idpf/idpf_common_virtchnl.h |  18 +
>  drivers/common/idpf/version.map|   6 +
>  drivers/net/cpfl/cpfl_ethdev.c | 613 ++--
>  drivers/net/cpfl/cpfl_ethdev.h |  35 +-
>  drivers/net/cpfl/cpfl_rxtx.c   | 789 +++--
>  drivers/net/cpfl/cpfl_rxtx.h   |  76 ++
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h|  21 +-
>  11 files changed, 1663 insertions(+), 119 deletions(-)
> 
> --
> 2.26.2

Acked-by: Jingjing Wu 


RE: [PATCH v8 12/14] net/cpfl: support peer ports get

2023-06-05 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, June 5, 2023 2:17 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> ; Wang, Xiao W 
> Subject: [PATCH v8 12/14] net/cpfl: support peer ports get
> 
> From: Beilei Xing 
> 
> This patch supports get hairpin peer ports.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 41 ++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 850f1c0bc6..1a1ca4bc77 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -1080,6 +1080,46 @@ cpfl_dev_close(struct rte_eth_dev *dev)
>   return 0;
>  }
> 
> +static int
> +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
> + size_t len, uint32_t tx)
> +{
> + struct cpfl_vport *cpfl_vport =
> + (struct cpfl_vport *)dev->data->dev_private;
> + struct idpf_tx_queue *txq;
> + struct idpf_rx_queue *rxq;
> + struct cpfl_tx_queue *cpfl_txq;
> + struct cpfl_rx_queue *cpfl_rxq;
> + int i;
> + int j = 0;
> +
> + if (len <= 0)
> + return -EINVAL;
> +
> + if (cpfl_vport->p2p_q_chunks_info == NULL)
> + return -ENOTSUP;
> +
> + if (tx > 0) {
> + for (i = cpfl_vport->nb_data_txq, j = 0; i < 
> dev->data->nb_tx_queues; i++,
> j++) {
> + txq = dev->data->tx_queues[i];
> + if (txq == NULL)
> + return -EINVAL;
> + cpfl_txq = (struct cpfl_tx_queue *)txq;
> + peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp;
peer_ports[j] should not be accessed if j >= len.



RE: [PATCH v8 03/14] net/cpfl: add haipin queue group during vport init

2023-06-05 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, June 5, 2023 2:17 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> 
> Subject: [PATCH v8 03/14] net/cpfl: add haipin queue group during vport init
> 
> From: Beilei Xing 
> 
> This patch adds haipin queue group during vport init.
> 
> Signed-off-by: Mingxia Liu 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 133 +
>  drivers/net/cpfl/cpfl_ethdev.h |  18 +
>  drivers/net/cpfl/cpfl_rxtx.h   |   7 ++
>  3 files changed, 158 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index e587155db6..c1273a7478 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
>   return 0;
>  }
> 
> +static int
> +cpfl_p2p_queue_grps_del(struct idpf_vport *vport)
> +{
> + struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0};
> + int ret = 0;
> +
> + qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
> + qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
> + ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids);
> + if (ret)
> + PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups");
> + return ret;
> +}
> +
>  static int
>  cpfl_dev_close(struct rte_eth_dev *dev)
>  {
> @@ -848,7 +862,12 @@ cpfl_dev_close(struct rte_eth_dev *dev)
>   struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
> 
>   cpfl_dev_stop(dev);
> +
> + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq)
> + cpfl_p2p_queue_grps_del(vport);
> +
>   idpf_vport_deinit(vport);
> + rte_free(cpfl_vport->p2p_q_chunks_info);
> 
>   adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
>   adapter->cur_vport_nb--;
> @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter)
>   return vport_idx;
>  }
> 
> +static int
> +cpfl_p2p_q_grps_add(struct idpf_vport *vport,
> + struct virtchnl2_add_queue_groups *p2p_queue_grps_info,
> + uint8_t *p2p_q_vc_out_info)
> +{
> + int ret;
> +
> + p2p_queue_grps_info->vport_id = vport->vport_id;
> + p2p_queue_grps_info->qg_info.num_queue_groups =
> CPFL_P2P_NB_QUEUE_GRPS;
> + p2p_queue_grps_info->qg_info.groups[0].num_rx_q =
> CPFL_MAX_P2P_NB_QUEUES;
> + p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq =
> CPFL_P2P_NB_RX_BUFQ;
> + p2p_queue_grps_info->qg_info.groups[0].num_tx_q =
> CPFL_MAX_P2P_NB_QUEUES;
> + p2p_queue_grps_info->qg_info.groups[0].num_tx_complq =
> CPFL_P2P_NB_TX_COMPLQ;
> + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id =
> CPFL_P2P_QUEUE_GRP_ID;
> + p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type =
> VIRTCHNL2_QUEUE_GROUP_P2P;
> + p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
> + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
> + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
> + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
> + p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
> +
> + ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info,
> p2p_q_vc_out_info);
> + if (ret != 0) {
> + PMD_DRV_LOG(ERR, "Failed to add p2p queue groups.");
> + return ret;
> + }
> +
> + return ret;
> +}
> +
> +static int
> +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
> +  struct virtchnl2_add_queue_groups *p2p_q_vc_out_info)
> +{
> + struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport-
> >p2p_q_chunks_info;
> + struct virtchnl2_queue_reg_chunks *vc_chunks_out;
> + int i, type;
> +
> + if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
> + VIRTCHNL2_QUEUE_GROUP_P2P) {
> + PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
> + return -EINVAL;
> + }
> +
> + vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
> +
> + for (i = 0; i < vc_chunks_out->num_chunks; i++) {
> + type = vc_chunks_out->chunks[i].type;
> + switch (type) {
> + case VIRTCHNL2_QUEUE_TYPE_TX:
> + p2p_q_chunks_info->tx_start_qid =
> + vc_chunks_out->ch

RE: [PATCH v3 09/10] net/cpfl: support peer ports get

2023-05-24 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, May 19, 2023 3:31 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> ; Wang, Xiao W 
> Subject: [PATCH v3 09/10] net/cpfl: support peer ports get
> 
> From: Beilei Xing 
> 
> This patch supports get hairpin peer ports.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 34 ++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 3b480178c0..59c7e75d2a 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -1069,6 +1069,39 @@ cpfl_dev_close(struct rte_eth_dev *dev)
>   return 0;
>  }
> 
> +static int
> +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
> + __rte_unused size_t len, uint32_t tx)
> +{

The len parameter identifies the size of the peer_ports array.
You should use it, and also check that peer_ports is not NULL; otherwise this
can cause an invalid access.

> + struct cpfl_vport *cpfl_vport =
> + (struct cpfl_vport *)dev->data->dev_private;
> + struct idpf_tx_queue *txq;
> + struct idpf_rx_queue *rxq;
> + struct cpfl_tx_queue *cpfl_txq;
> + struct cpfl_rx_queue *cpfl_rxq;
> + int i, j;
> +
> + if (tx > 0) {
> + for (i = cpfl_vport->nb_data_txq, j = 0; i < 
> dev->data->nb_tx_queues; i++,
> j++) {
> + txq = dev->data->tx_queues[i];
> + if (txq == NULL)
> + return -EINVAL;
> + cpfl_txq = (struct cpfl_tx_queue *)txq;
> + peer_ports[j] = cpfl_txq->hairpin_info.peer_rxp;
> + }
> + } else if (tx == 0) {
> + for (i = cpfl_vport->nb_data_rxq, j = 0; i < 
> dev->data->nb_rx_queues; i++,
> j++) {
> + rxq = dev->data->rx_queues[i];
> + if (rxq == NULL)
> + return -EINVAL;
> + cpfl_rxq = (struct cpfl_rx_queue *)rxq;
> + peer_ports[j] = cpfl_rxq->hairpin_info.peer_txp;
> + }
> + }
> +
> + return j;
> +}
> +
>  static const struct eth_dev_ops cpfl_eth_dev_ops = {
>   .dev_configure  = cpfl_dev_configure,
>   .dev_close  = cpfl_dev_close,
> @@ -1098,6 +1131,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
>   .hairpin_cap_get= cpfl_hairpin_cap_get,
>   .rx_hairpin_queue_setup = cpfl_rx_hairpin_queue_setup,
>   .tx_hairpin_queue_setup = cpfl_tx_hairpin_queue_setup,
> + .hairpin_get_peer_ports = cpfl_hairpin_get_peer_ports,
>  };
> 
>  static int
> --
> 2.26.2



RE: [PATCH v3 08/10] net/cpfl: enable write back based on ITR expire

2023-05-24 Thread Wu, Jingjing
>  idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
>  {
> diff --git a/drivers/common/idpf/idpf_common_device.h
> b/drivers/common/idpf/idpf_common_device.h
> index 112367dae8..f767ea7cec 100644
> --- a/drivers/common/idpf/idpf_common_device.h
> +++ b/drivers/common/idpf/idpf_common_device.h
> @@ -200,5 +200,9 @@ int idpf_vport_info_init(struct idpf_vport *vport,
>struct virtchnl2_create_vport *vport_info);
>  __rte_internal
>  void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct
> virtchnl2_vport_stats *nes);
> +__rte_internal
> +int idpf_vport_irq_map_config_by_qids(struct idpf_vport *vport,
> +   uint32_t *qids,
> +   uint16_t nb_rx_queues);
> 
>  #endif /* _IDPF_COMMON_DEVICE_H_ */
> diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> index 25624732b0..0729f6b912 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -69,6 +69,7 @@ INTERNAL {
>   idpf_vport_info_init;
>   idpf_vport_init;
>   idpf_vport_irq_map_config;
> + idpf_vport_irq_map_config_by_qids;
>   idpf_vport_irq_unmap_config;
>   idpf_vport_rss_config;
>   idpf_vport_stats_update;

Same as before: should the common change be split from the net one?

> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index c2ab0690fc..3b480178c0 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -730,11 +730,22 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
>  static int
>  cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
>  {
> + uint32_t qids[CPFL_MAX_P2P_NB_QUEUES + IDPF_DEFAULT_RXQ_NUM] = {0};
>   struct cpfl_vport *cpfl_vport = dev->data->dev_private;
>   struct idpf_vport *vport = &cpfl_vport->base;
>   uint16_t nb_rx_queues = dev->data->nb_rx_queues;
> + struct cpfl_rx_queue *cpfl_rxq;
> + int i;
> 
> - return idpf_vport_irq_map_config(vport, nb_rx_queues);
> + for (i = 0; i < nb_rx_queues; i++) {
> + cpfl_rxq = dev->data->rx_queues[i];
> + if (cpfl_rxq->hairpin_info.hairpin_q)
> + qids[i] = cpfl_hw_qid_get(cpfl_vport-
> >p2p_q_chunks_info.rx_start_qid,
> +   (i - 
> cpfl_vport->nb_data_rxq));
> + else
> + qids[i] = 
> cpfl_hw_qid_get(vport->chunks_info.rx_start_qid, i);

It looks like cpfl_hw_qid_get is used across files; how about defining it as an
inline function or a macro in a header file?

> + }
> + return idpf_vport_irq_map_config_by_qids(vport, qids, nb_rx_queues);
>  }
> 
>  /* Update hairpin_info for dev's tx hairpin queue */
> --
> 2.26.2



RE: [PATCH v3 07/10] net/cpfl: support hairpin queue start/stop

2023-05-24 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, May 19, 2023 3:31 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> ; Wang, Xiao W 
> Subject: [PATCH v3 07/10] net/cpfl: support hairpin queue start/stop
> 
> From: Beilei Xing 
> 
> This patch supports Rx/Tx hairpin queue start/stop.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Mingxia Liu 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
>  drivers/common/idpf/idpf_common_virtchnl.h |   3 +
>  drivers/common/idpf/version.map|   1 +
>  drivers/net/cpfl/cpfl_ethdev.c |  41 ++
>  drivers/net/cpfl/cpfl_rxtx.c   | 153 ++---
>  drivers/net/cpfl/cpfl_rxtx.h   |  14 ++
>  6 files changed, 195 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> b/drivers/common/idpf/idpf_common_virtchnl.c
> index 211b44a88e..6455f640da 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.c
> +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> @@ -733,7 +733,7 @@ idpf_vc_vectors_dealloc(struct idpf_vport *vport)
>   return err;
>  }
> 
> -static int
> +int
>  idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> uint32_t type, bool on)
>  {
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.h
> b/drivers/common/idpf/idpf_common_virtchnl.h
> index db83761a5e..9ff5c38c26 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.h
> +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> @@ -71,6 +71,9 @@ __rte_internal
>  int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct 
> virtchnl2_txq_info
> *txq_info,
>  uint16_t num_qs);
>  __rte_internal
> +int idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> +   uint32_t type, bool on);
> +__rte_internal
>  int idpf_vc_queue_grps_del(struct idpf_vport *vport,
>  uint16_t num_q_grps,
>  struct virtchnl2_queue_group_id *qg_ids);
> diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> index 17e77884ce..25624732b0 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -40,6 +40,7 @@ INTERNAL {
>   idpf_vc_cmd_execute;
>   idpf_vc_ctlq_post_rx_buffs;
>   idpf_vc_ctlq_recv;
> + idpf_vc_ena_dis_one_queue;
>   idpf_vc_irq_map_unmap_config;
>   idpf_vc_one_msg_read;
>   idpf_vc_ptype_info_query;

This change is in common, better to split this patch to two.




RE: [PATCH v3 05/10] net/cpfl: support hairpin queue setup and release

2023-05-24 Thread Wu, Jingjing
> 
> +static int
> +cpfl_rx_hairpin_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue 
> *bufq,
> +uint16_t logic_qid, uint16_t nb_desc)
> +{
> + struct cpfl_vport *cpfl_vport =
> + (struct cpfl_vport *)dev->data->dev_private;
> + struct idpf_vport *vport = &cpfl_vport->base;
> + struct idpf_adapter *adapter = vport->adapter;
> + struct rte_mempool *mp;
> + char pool_name[RTE_MEMPOOL_NAMESIZE];
> +
> + mp = cpfl_vport->p2p_mp;
> + if (!mp) {
> + snprintf(pool_name, RTE_MEMPOOL_NAMESIZE, "p2p_mb_pool_%u",
> +  dev->data->port_id);
> + mp = rte_pktmbuf_pool_create(pool_name, CPFL_P2P_NB_MBUF,
> CPFL_P2P_CACHE_SIZE,
> +  0, CPFL_P2P_MBUF_SIZE, dev->device-
> >numa_node);
> + if (!mp) {
> + PMD_INIT_LOG(ERR, "Failed to allocate mbuf pool for 
> p2p");
> + return -ENOMEM;
> + }
> + cpfl_vport->p2p_mp = mp;
> + }
> +
> + bufq->mp = mp;
> + bufq->nb_rx_desc = nb_desc;
> + bufq->queue_id = cpfl_hw_qid_get(cpfl_vport-
> >p2p_q_chunks_info.rx_buf_start_qid, logic_qid);
> + bufq->port_id = dev->data->port_id;
> + bufq->adapter = adapter;
> + bufq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM;
> +
> + bufq->sw_ring = rte_zmalloc("sw ring",
> + sizeof(struct rte_mbuf *) * nb_desc,
> + RTE_CACHE_LINE_SIZE);

Is sw_ring required in the p2p case? It is never used, right?
Please also check the sw_ring in the Tx queue.

> + if (!bufq->sw_ring) {
> + PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
> + return -ENOMEM;
> + }
> +
> + bufq->q_set = true;
> + bufq->ops = &def_rxq_ops;
> +
> + return 0;
> +}
> +
> +int
> +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> + uint16_t nb_desc,
> + const struct rte_eth_hairpin_conf *conf)
> +{
> + struct cpfl_vport *cpfl_vport = (struct cpfl_vport 
> *)dev->data->dev_private;
> + struct idpf_vport *vport = &cpfl_vport->base;
> + struct idpf_adapter *adapter_base = vport->adapter;
> + uint16_t logic_qid = cpfl_vport->nb_p2p_rxq;
> + struct cpfl_rxq_hairpin_info *hairpin_info;
> + struct cpfl_rx_queue *cpfl_rxq;
> + struct idpf_rx_queue *bufq1 = NULL;
> + struct idpf_rx_queue *rxq;
> + uint16_t peer_port, peer_q;
> + uint16_t qid;
> + int ret;
> +
> + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> + PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin 
> queue.");
> + return -EINVAL;
> + }
> +
> + if (conf->peer_count != 1) {
> + PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count 
> %d",
> conf->peer_count);
> + return -EINVAL;
> + }
> +
> + peer_port = conf->peers[0].port;
> + peer_q = conf->peers[0].queue;
> +
> + if (nb_desc % CPFL_ALIGN_RING_DESC != 0 ||
> + nb_desc > CPFL_MAX_RING_DESC ||
> + nb_desc < CPFL_MIN_RING_DESC) {
> + PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is 
> invalid",
> nb_desc);
> + return -EINVAL;
> + }
> +
> + /* Free memory if needed */
> + if (dev->data->rx_queues[queue_idx]) {
> + cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]);
> + dev->data->rx_queues[queue_idx] = NULL;
> + }
> +
> + /* Setup Rx description queue */
> + cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq",
> +  sizeof(struct cpfl_rx_queue),
> +  RTE_CACHE_LINE_SIZE,
> +  SOCKET_ID_ANY);
> + if (!cpfl_rxq) {
> + PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data
> structure");
> + return -ENOMEM;
> + }
> +
> + rxq = &cpfl_rxq->base;
> + hairpin_info = &cpfl_rxq->hairpin_info;
> + rxq->nb_rx_desc = nb_desc * 2;
> + rxq->queue_id = 
> cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info.rx_start_qid,
> logic_qid);
> + rxq->port_id = dev->data->port_id;
> + rxq->adapter = adapter_base;
> + rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM;
> + hairpin_info->hairpin_q = true;
> + hairpin_info->peer_txp = peer_port;
> + hairpin_info->peer_txq_id = peer_q;
> +
> + if (conf->manual_bind != 0)
> + cpfl_vport->p2p_manual_bind = true;
> + else
> + cpfl_vport->p2p_manual_bind = false;
> +
> + /* setup 1 Rx buffer queue for the 1st hairpin rxq */
> + if (logic_qid == 0) {
> + bufq1 = rte_zmalloc_socket("hairpin rx bufq1",
> +sizeof(struct idpf_rx_queue),
> +RTE_CACHE_LINE_SIZE,
> +  

RE: [PATCH v3 04/10] net/cpfl: add haipin queue group during vport init

2023-05-24 Thread Wu, Jingjing
>  static int
>  cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
>  {
> @@ -1306,6 +1414,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void 
> *init_params)
>   struct cpfl_adapter_ext *adapter = param->adapter;
>   /* for sending create vport virtchnl msg prepare */
>   struct virtchnl2_create_vport create_vport_info;
> + struct virtchnl2_add_queue_groups p2p_queue_grps_info;
> + uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
>   int ret = 0;
> 
>   dev->dev_ops = &cpfl_eth_dev_ops;
> @@ -1340,8 +1450,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void
> *init_params)
>   rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
>   &dev->data->mac_addrs[0]);
> 
> + if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) {
> + memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info));
> + ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info,
> p2p_q_vc_out_info);
> + if (ret != 0) {
> + PMD_INIT_LOG(ERR, "Failed to add p2p queue group.");
> + goto err_q_grps_add;
> + }
> + ret = cpfl_p2p_queue_info_init(cpfl_vport,
> +(struct virtchnl2_add_queue_groups
> *)p2p_q_vc_out_info);
> + if (ret != 0) {
> + PMD_INIT_LOG(ERR, "Failed to init p2p queue info.");
> + goto err_p2p_qinfo_init;
If adding the p2p queue group fails, will device init quit?
I think it would be better to continue initialization, just without the p2p
capability.
> + }
> + }
> +
>   return 0;
> 


RE: [PATCH v3 02/10] net/cpfl: support hairpin queue capbility get

2023-05-24 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, May 19, 2023 3:31 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> ; Wang, Xiao W 
> Subject: [PATCH v3 02/10] net/cpfl: support hairpin queue capbility get
> 
> From: Beilei Xing 
> 
> This patch adds hairpin_cap_get ops support.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Mingxia Liu 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 13 +
>  drivers/net/cpfl/cpfl_rxtx.h   |  4 
>  2 files changed, 17 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index e587155db6..b6fd0b05d0 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -154,6 +154,18 @@ cpfl_dev_link_update(struct rte_eth_dev *dev,
>   return rte_eth_linkstatus_set(dev, &new_link);
>  }
> 
> +static int
> +cpfl_hairpin_cap_get(__rte_unused struct rte_eth_dev *dev,
> +  struct rte_eth_hairpin_cap *cap)
> +{
> + cap->max_nb_queues = CPFL_MAX_P2P_NB_QUEUES;
> + cap->max_rx_2_tx = CPFL_MAX_HAIRPINQ_RX_2_TX;
> + cap->max_tx_2_rx = CPFL_MAX_HAIRPINQ_TX_2_RX;
> + cap->max_nb_desc = CPFL_MAX_HAIRPINQ_NB_DESC;
> +
Would it be better to check whether the p2p queue group was added successfully,
and only then return success?


RE: [PATCH v5] common/idpf: refine capability get

2023-04-25 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, April 24, 2023 4:08 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH v5] common/idpf: refine capability get
> 
> From: Beilei Xing 
> 
> Initialize required capability in PMD, and refine
> idpf_vc_caps_get function. Then different PMDs can
> require different capability.
> 
> Signed-off-by: Beilei Xing  
Acked-by: Jingjing Wu 


RE: [PATCH v4] net/idpf: add VF support

2023-04-25 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Monday, April 24, 2023 8:45 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH v4] net/idpf: add VF support
> 
> From: Beilei Xing 
> 
> Support VF whose device id is 0x145c.
> 
> Signed-off-by: Beilei Xing 
Acked-by: Jingjing Wu 


RE: [PATCH] net/idpf: refine devargs parse functions

2023-04-23 Thread Wu, Jingjing



> -Original Message-
> From: Liu, Mingxia 
> Sent: Friday, April 21, 2023 3:15 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Xing, Beilei 
> ; Liu, Mingxia
> 
> Subject: [PATCH] net/idpf: refine devargs parse functions
> 
> This patch refines devargs parsing functions and use valid
> variable max_vport_nb to replace IDPF_MAX_VPORT_NUM.
> 
> Signed-off-by: Mingxia Liu 
> ---
>  drivers/net/idpf/idpf_ethdev.c | 61 +-
>  1 file changed, 30 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index e02ec2ec5a..a8dd5a0a80 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -857,12 +857,6 @@ insert_value(struct idpf_devargs *devargs, uint16_t id)
>   return 0;
>   }
> 
> - if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
> - PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> -  IDPF_MAX_VPORT_NUM);
> - return -EINVAL;
> - }
> -
>   devargs->req_vports[devargs->req_vport_nb] = id;
>   devargs->req_vport_nb++;
> 
> @@ -879,12 +873,10 @@ parse_range(const char *value, struct idpf_devargs 
> *devargs)
> 
>   result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
>   if (result == 1) {
> - if (lo >= IDPF_MAX_VPORT_NUM)
> - return NULL;
>   if (insert_value(devargs, lo) != 0)
>   return NULL;
>   } else if (result == 2) {
> - if (lo > hi || hi >= IDPF_MAX_VPORT_NUM)
> + if (lo > hi)
>   return NULL;
>   for (i = lo; i <= hi; i++) {
>   if (insert_value(devargs, i) != 0)
> @@ -969,40 +961,46 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, 
> struct
> idpf_adapter_ext *adap
>   return -EINVAL;
>   }
> 
> + ret = rte_kvargs_process(kvlist, IDPF_VPORT, &parse_vport,
> +  idpf_args);
> + if (ret != 0)
> + goto fail;
> +
> + ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
> +  &adapter->base.is_tx_singleq);
> + if (ret != 0)
> + goto fail;
> +
> + ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
> +  &adapter->base.is_rx_singleq);
> + if (ret != 0)
> + goto fail;
> +
>   /* check parsed devargs */
>   if (adapter->cur_vport_nb + idpf_args->req_vport_nb >
> - IDPF_MAX_VPORT_NUM) {
> + adapter->max_vport_nb) {
>   PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> -  IDPF_MAX_VPORT_NUM);
> +  adapter->max_vport_nb);
>   ret = -EINVAL;
> - goto bail;
> + goto fail;
>   }
> 
>   for (i = 0; i < idpf_args->req_vport_nb; i++) {
> + if (idpf_args->req_vports[i] > adapter->max_vport_nb - 1) {
> + PMD_INIT_LOG(ERR, "Invalid vport id %d, it should be 0 
> ~ %d",
> +  idpf_args->req_vports[i], 
> adapter->max_vport_nb - 1);
> + ret = -EINVAL;
This check is not necessary: we don't require the vport id specified in the
devargs to be less than the number of vports supported.



RE: [PATCH v4 1/2] common/idpf: move PF specific functions from common init

2023-04-23 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Thursday, April 6, 2023 3:43 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei 
> Subject: [PATCH v4 1/2] common/idpf: move PF specific functions from common 
> init
> 
> From: Beilei Xing 
> 
> Move PF reset and PF mailbox initialization functions from
> idpf_adapter_init function to _adapter_ext_init function,
> since they're different between PF and VF support.
> 
> Signed-off-by: Beilei Xing 
> ---
>  drivers/common/idpf/idpf_common_device.c | 72 +---
>  drivers/common/idpf/idpf_common_device.h | 11 
>  drivers/common/idpf/version.map  |  5 ++
>  drivers/net/cpfl/cpfl_ethdev.c   | 51 +
>  drivers/net/idpf/idpf_ethdev.c   | 51 +
>  5 files changed, 131 insertions(+), 59 deletions(-)
> 
> diff --git a/drivers/common/idpf/idpf_common_device.c
> b/drivers/common/idpf/idpf_common_device.c
> index c5e7bbf66c..3b58bdd41e 100644
> --- a/drivers/common/idpf/idpf_common_device.c
> +++ b/drivers/common/idpf/idpf_common_device.c
> @@ -6,8 +6,8 @@
>  #include 
>  #include 
> 
> -static void
> -idpf_reset_pf(struct idpf_hw *hw)
> +void
> +idpf_hw_pf_reset(struct idpf_hw *hw)
>  {
>   uint32_t reg;
> 
> @@ -15,9 +15,8 @@ idpf_reset_pf(struct idpf_hw *hw)
>   IDPF_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
>  }
> 
> -#define IDPF_RESET_WAIT_CNT 100
> -static int
> -idpf_check_pf_reset_done(struct idpf_hw *hw)
> +int
> +idpf_hw_pf_reset_check(struct idpf_hw *hw)
>  {
>   uint32_t reg;
>   int i;
> @@ -33,48 +32,13 @@ idpf_check_pf_reset_done(struct idpf_hw *hw)
>   return -EBUSY;
>  }
> 
> -#define CTLQ_NUM 2
> -static int
> -idpf_init_mbx(struct idpf_hw *hw)
> +int
> +idpf_hw_mbx_init(struct idpf_hw *hw, struct idpf_ctlq_create_info *ctlq_info)
>  {
> - struct idpf_ctlq_create_info ctlq_info[CTLQ_NUM] = {
> - {
> - .type = IDPF_CTLQ_TYPE_MAILBOX_TX,
> - .id = IDPF_CTLQ_ID,
> - .len = IDPF_CTLQ_LEN,
> - .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
> - .reg = {
> - .head = PF_FW_ATQH,
> - .tail = PF_FW_ATQT,
> - .len = PF_FW_ATQLEN,
> - .bah = PF_FW_ATQBAH,
> - .bal = PF_FW_ATQBAL,
> - .len_mask = PF_FW_ATQLEN_ATQLEN_M,
> - .len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
> - .head_mask = PF_FW_ATQH_ATQH_M,
> - }
> - },
> - {
> - .type = IDPF_CTLQ_TYPE_MAILBOX_RX,
> - .id = IDPF_CTLQ_ID,
> - .len = IDPF_CTLQ_LEN,
> - .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
> - .reg = {
> - .head = PF_FW_ARQH,
> - .tail = PF_FW_ARQT,
> - .len = PF_FW_ARQLEN,
> - .bah = PF_FW_ARQBAH,
> - .bal = PF_FW_ARQBAL,
> - .len_mask = PF_FW_ARQLEN_ARQLEN_M,
> - .len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
> - .head_mask = PF_FW_ARQH_ARQH_M,
> - }
> - }
> - };
>   struct idpf_ctlq_info *ctlq;
>   int ret;
> 
> - ret = idpf_ctlq_init(hw, CTLQ_NUM, ctlq_info);
> + ret = idpf_ctlq_init(hw, IDPF_CTLQ_NUM, ctlq_info);
>   if (ret != 0)
>   return ret;
> 
> @@ -96,6 +60,12 @@ idpf_init_mbx(struct idpf_hw *hw)
>   return ret;
>  }

You could check the device id in idpf_hw and decide how to initialize the ctlq
based on that; then this function could also be consumed by the VF driver. No
need to move them out of common. The same applies to the other functions.
> 



RE: [PATCH] common/idpf: fix Rx queue configuration

2023-02-22 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Thursday, February 23, 2023 11:17 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Xing, Beilei ; sta...@dpdk.org
> Subject: [PATCH] common/idpf: fix Rx queue configuration
> 
> From: Beilei Xing 
> 
> IDPF PMD enables 2 buffer queues by default. According to the data
> sheet, if there is a second buffer queue, the second buffer queue
> is valid only if bugq2_ena is set.
> 
> Fixes: c2494d783d31 ("net/idpf: support queue start")
> Fixes: 8b95ced47a13 ("common/idpf: add Rx/Tx queue structs")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Beilei Xing 
Acked-by: Jingjing Wu 


RE: [PATCH v6 0/6] add idpf pmd enhancement features

2023-02-07 Thread Wu, Jingjing



> -Original Message-
> From: Liu, Mingxia 
> Sent: Tuesday, February 7, 2023 6:17 PM
> To: dev@dpdk.org; Zhang, Qi Z ; Wu, Jingjing
> ; Xing, Beilei 
> Cc: Liu, Mingxia 
> Subject: [PATCH v6 0/6] add idpf pmd enhancement features
> 
> This patchset add several enhancement features of idpf pmd.
> Including the following:
> - add hw statistics, support stats/xstats ops
> - add rss configure/show ops
> - add event handle: link status
> - add scattered data path for single queue
> 
> This patchset is based on the refactor idpf PMD code:
> http://patches.dpdk.org/project/dpdk/patch/20230207084549.2225214-2-
> wenjun1...@intel.com/
> 
> v2 changes:
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>print ERROR info.
> v5 changes:
>  - fix some spelling error
> v6 changes:
>  - add cover-letter
> 

Reviewed-by: Jingjing Wu 


RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message

2023-02-02 Thread Wu, Jingjing
> > > +
> > >   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > > + new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> > > + RTE_ETH_LINK_DOWN;
> > >   new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> > > RTE_ETH_LINK_SPEED_FIXED);
> > Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.
> >
> [Liu, Mingxia] According to the comment description of struct rte_eth_conf,
> RTE_ETH_LINK_SPEED_FIXED is better.
> struct rte_eth_conf {
> uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
>   used. RTE_ETH_LINK_SPEED_FIXED disables link
>   autonegotiation, and a unique speed shall be
>   set. Otherwise, the bitmap defines the set of
>   speeds to be advertised. If the special value
>   RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all 
> speeds
>   supported are advertised. */
> 
I am talking about link_autoneg, not link_speeds.



RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message

2023-02-01 Thread Wu, Jingjing
> @@ -83,12 +84,49 @@ static int
>  idpf_dev_link_update(struct rte_eth_dev *dev,
>__rte_unused int wait_to_complete)
>  {
> + struct idpf_vport *vport = dev->data->dev_private;
>   struct rte_eth_link new_link;
> 
>   memset(&new_link, 0, sizeof(new_link));
> 
> - new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> + switch (vport->link_speed) {
> + case 10:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> + break;
> + case 100:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> + break;
> + case 1000:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> + break;
> + case 1:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> + break;
> + case 2:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> + break;
> + case 25000:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> + break;
> + case 4:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> + break;
> + case 5:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> + break;
> + case 10:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> + break;
> + case 20:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> + break;
> + default:
> + new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> + }
> +
>   new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> + new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> + RTE_ETH_LINK_DOWN;
>   new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> RTE_ETH_LINK_SPEED_FIXED);
Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.

> 
> @@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, 
> struct
> idpf_adapter_ext *adap
>   return ret;
>  }
> 
> +static struct idpf_vport *
> +idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
> +{
> + struct idpf_vport *vport = NULL;
> + int i;
> +
> + for (i = 0; i < adapter->cur_vport_nb; i++) {
> + vport = adapter->vports[i];
> + if (vport->vport_id != vport_id)
> + continue;
> + else
> + return vport;
> + }
> +
> + return vport;
> +}
> +
> +static void
> +idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t 
> msglen)
> +{
> + struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> + struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> +
> + if (msglen < sizeof(struct virtchnl2_event)) {
> + PMD_DRV_LOG(ERR, "Error event");
> + return;
> + }
> +
> + switch (vc_event->event) {
> + case VIRTCHNL2_EVENT_LINK_CHANGE:
> + PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
> + vport->link_up = vc_event->link_status;
Is any conversion needed between bool and uint8_t here?




RE: [PATCH v3 3/6] common/idpf: support single q scatter RX datapath

2023-02-01 Thread Wu, Jingjing
> 
> +uint16_t
> +idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +uint16_t nb_pkts)
> +{
> + struct idpf_rx_queue *rxq = rx_queue;
> + volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
> + volatile union virtchnl2_rx_desc *rxdp;
> + union virtchnl2_rx_desc rxd;
> + struct idpf_adapter *ad;
> + struct rte_mbuf *first_seg = rxq->pkt_first_seg;
> + struct rte_mbuf *last_seg = rxq->pkt_last_seg;
> + struct rte_mbuf *rxm;
> + struct rte_mbuf *nmb;
> + struct rte_eth_dev *dev;
> + const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
> + uint16_t nb_hold = 0, nb_rx = 0;
According to the coding style, only the last variable on a line should be 
initialized.

> + uint16_t rx_id = rxq->rx_tail;
> + uint16_t rx_packet_len;
> + uint16_t rx_status0;
> + uint64_t pkt_flags;
> + uint64_t dma_addr;
> + uint64_t ts_ns;
> +



RE: [PATCH v3 2/6] common/idpf: add RSS set/get ops

2023-02-01 Thread Wu, Jingjing
> +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
> +{
> + uint64_t hena = 0, valid_rss_hf = 0;
According to the coding style, only the last variable on a line should be 
initialized.

> + int ret = 0;
> + uint16_t i;
> +
> + /**
> +  * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
> +  * generalizations of all other IPv4 and IPv6 RSS types.
> +  */
> + if (rss_hf & RTE_ETH_RSS_IPV4)
> + rss_hf |= idpf_ipv4_rss;
> +
> + if (rss_hf & RTE_ETH_RSS_IPV6)
> + rss_hf |= idpf_ipv6_rss;
> +
> + for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
> + uint64_t bit = BIT_ULL(i);
> +
> + if (idpf_map_hena_rss[i] & rss_hf) {
> + valid_rss_hf |= idpf_map_hena_rss[i];
> + hena |= bit;
> + }
> + }
> +
> + vport->rss_hf = hena;
> +
> + ret = idpf_vc_set_rss_hash(vport);
> + if (ret != 0) {
> + PMD_DRV_LOG(WARNING,
> + "fail to set RSS offload types, ret: %d", ret);
> + return ret;
> + }
> +
> + if (valid_rss_hf & idpf_ipv4_rss)
> + valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
> +
> + if (valid_rss_hf & idpf_ipv6_rss)
> + valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
> +
> + if (rss_hf & ~valid_rss_hf)
> + PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
> + rss_hf & ~valid_rss_hf);
This is a bit confusing: valid_rss_hf would be a subset of rss_hf according to
the assignment above. Is it possible to reach this branch at all?
And if it is possible, why not set valid_rss_hf before calling the vc command?

> + vport->last_general_rss_hf = valid_rss_hf;
> +
> + return ret;
> +}
> +
>  static int
>  idpf_init_rss(struct idpf_vport *vport)
>  {
> @@ -256,6 +357,204 @@ idpf_init_rss(struct idpf_vport *vport)
>   return ret;
>  }
> 
> +static int
> +idpf_rss_reta_update(struct rte_eth_dev *dev,
> +  struct rte_eth_rss_reta_entry64 *reta_conf,
> +  uint16_t reta_size)
> +{
> + struct idpf_vport *vport = dev->data->dev_private;
> + struct idpf_adapter *adapter = vport->adapter;
> + uint16_t idx, shift;
> + uint32_t *lut;
> + int ret = 0;
> + uint16_t i;
> +
> + if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
> + PMD_DRV_LOG(DEBUG, "RSS is not supported");
> + return -ENOTSUP;
> + }
> +
> + if (reta_size != vport->rss_lut_size) {
> + PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
> +  "(%d) doesn't match the number of hardware can 
> "
> +  "support (%d)",
> + reta_size, vport->rss_lut_size);
> + return -EINVAL;
> + }
> +
> + /* It MUST use the current LUT size to get the RSS lookup table,
> +  * otherwise if will fail with -100 error code.
> +  */
> + lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
> + if (!lut) {
> + PMD_DRV_LOG(ERR, "No memory can be allocated");
> + return -ENOMEM;
> + }
> + /* store the old lut table temporarily */
> + rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
You store vport->rss_lut into lut here, but then overwrite lut below?

> +
> + for (i = 0; i < reta_size; i++) {
> + idx = i / RTE_ETH_RETA_GROUP_SIZE;
> + shift = i % RTE_ETH_RETA_GROUP_SIZE;
> + if (reta_conf[idx].mask & (1ULL << shift))
> + lut[i] = reta_conf[idx].reta[shift];
> + }
> +
> + rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
> + /* send virtchnl ops to configure RSS */
> + ret = idpf_vc_set_rss_lut(vport);
> + if (ret) {
> + PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
> + goto out;
> + }
> +out:
> + rte_free(lut);
> +
> + return ret;
> +}



RE: [PATCH v3 1/6] common/idpf: add hw statistics

2023-02-01 Thread Wu, Jingjing
> @@ -327,6 +407,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
>   goto err_vport;
>   }
> 
> + if (idpf_dev_stats_reset(dev)) {
> + PMD_DRV_LOG(ERR, "Failed to reset stats");
> + goto err_vport;

If stats reset fails, should that block the start process and trigger a
rollback? I think printing an ERR log may be enough.

> + }
> +
>   vport->stopped = 0;
> 
>   return 0;
> @@ -606,6 +691,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
>   .tx_queue_release   = idpf_dev_tx_queue_release,
>   .mtu_set= idpf_dev_mtu_set,
>   .dev_supported_ptypes_get   = idpf_dev_supported_ptypes_get,
> + .stats_get  = idpf_dev_stats_get,
> + .stats_reset= idpf_dev_stats_reset,
>  };
> 
>  static uint16_t
> --
> 2.25.1



RE: [PATCH v4 09/15] common/idpf: add vport info initialization

2023-01-31 Thread Wu, Jingjing
> +int
> +idpf_create_vport_info_init(struct idpf_vport *vport,
> + struct virtchnl2_create_vport *vport_info)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> +
> + vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
> + if (adapter->txq_model == 0) {
> + vport_info->txq_model =
> + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
Byte order is considered for txq_model; what about the other fields
(num_tx_q, num_tx_complq, etc.)?

> + vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
> + vport_info->num_tx_complq =
> + IDPF_DEFAULT_TXQ_NUM * IDPF_TX_COMPLQ_PER_GRP;
> + } else {
> + vport_info->txq_model =
> + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
> + vport_info->num_tx_q = IDPF_DEFAULT_TXQ_NUM;
> + vport_info->num_tx_complq = 0;
> + }
> + if (adapter->rxq_model == 0) {
> + vport_info->rxq_model =
> + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
> + vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
> + vport_info->num_rx_bufq =
> + IDPF_DEFAULT_RXQ_NUM * IDPF_RX_BUFQ_PER_GRP;
> + } else {
> + vport_info->rxq_model =
> + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
> + vport_info->num_rx_q = IDPF_DEFAULT_RXQ_NUM;
> + vport_info->num_rx_bufq = 0;
> + }
> +
> + return 0;
> +}
> +


RE: [PATCH v4 07/15] common/idpf: add irq map/unmap

2023-01-31 Thread Wu, Jingjing
> @@ -247,8 +247,21 @@ idpf_vport_init(struct idpf_vport *vport,
>   goto err_rss_lut;
>   }
> 
> + /* recv_vectors is used for VIRTCHNL2_OP_ALLOC_VECTORS response,
> +  * reserve maximum size for it now, may need optimization in future.
> +  */
> + vport->recv_vectors = rte_zmalloc("recv_vectors", 
> IDPF_DFLT_MBX_BUF_SIZE, 0);
> + if (vport->recv_vectors == NULL) {
> + DRV_LOG(ERR, "Failed to allocate ecv_vectors");
Typo in the log message: "ecv_vectors" -> "recv_vectors"?

> + ret = -ENOMEM;
> + goto err_recv_vec;
> + }
> +
>   return 0;
> 
> +err_recv_vec:
> + rte_free(vport->rss_lut);
> + vport->rss_lut = NULL;
>  err_rss_lut:
>   vport->dev_data = NULL;
>   rte_free(vport->rss_key);
> @@ -261,6 +274,8 @@ idpf_vport_init(struct idpf_vport *vport,
>  int
>  idpf_vport_deinit(struct idpf_vport *vport)
>  {
> + rte_free(vport->recv_vectors);
> + vport->recv_vectors = NULL;
>   rte_free(vport->rss_lut);
>   vport->rss_lut = NULL;
> 
> @@ -298,4 +313,88 @@ idpf_config_rss(struct idpf_vport *vport)
> 
>   return ret;
>  }
> +
> +int
> +idpf_config_irq_map(struct idpf_vport *vport, uint16_t nb_rx_queues)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_queue_vector *qv_map;
> + struct idpf_hw *hw = &adapter->hw;
> + uint32_t dynctl_val, itrn_val;
> + uint32_t dynctl_reg_start;
> + uint32_t itrn_reg_start;
> + uint16_t i;
> +
> + qv_map = rte_zmalloc("qv_map",
> +  nb_rx_queues *
> +  sizeof(struct virtchnl2_queue_vector), 0);
> + if (qv_map == NULL) {
> + DRV_LOG(ERR, "Failed to allocate %d queue-vector map",
> + nb_rx_queues);
> + goto qv_map_alloc_err;
Use error code -ENOMEM instead of using -1?

> + }
> +
> + /* Rx interrupt disabled, Map interrupt only for writeback */
> +
> + /* The capability flags adapter->caps.other_caps should be
> +  * compared with bit VIRTCHNL2_CAP_WB_ON_ITR here. The if
> +  * condition should be updated when the FW can return the
> +  * correct flag bits.
> +  */
> + dynctl_reg_start =
> + vport->recv_vectors->vchunks.vchunks->dynctl_reg_start;
> + itrn_reg_start =
> + vport->recv_vectors->vchunks.vchunks->itrn_reg_start;
> + dynctl_val = IDPF_READ_REG(hw, dynctl_reg_start);
> + DRV_LOG(DEBUG, "Value of dynctl_reg_start is 0x%x", dynctl_val);
> + itrn_val = IDPF_READ_REG(hw, itrn_reg_start);
> + DRV_LOG(DEBUG, "Value of itrn_reg_start is 0x%x", itrn_val);
> + /* Force write-backs by setting WB_ON_ITR bit in DYN_CTL
> +  * register. WB_ON_ITR and INTENA are mutually exclusive
> +  * bits. Setting WB_ON_ITR bits means TX and RX Descs
> +  * are written back based on ITR expiration irrespective
> +  * of INTENA setting.
> +  */
> + /* TBD: need to tune INTERVAL value for better performance. */
> + itrn_val = (itrn_val == 0) ? IDPF_DFLT_INTERVAL : itrn_val;
> + dynctl_val = VIRTCHNL2_ITR_IDX_0  <<
> +  PF_GLINT_DYN_CTL_ITR_INDX_S |
> +  PF_GLINT_DYN_CTL_WB_ON_ITR_M |
> +  itrn_val << PF_GLINT_DYN_CTL_INTERVAL_S;
> + IDPF_WRITE_REG(hw, dynctl_reg_start, dynctl_val);
> +
> + for (i = 0; i < nb_rx_queues; i++) {
> + /* map all queues to the same vector */
> + qv_map[i].queue_id = vport->chunks_info.rx_start_qid + i;
> + qv_map[i].vector_id =
> + vport->recv_vectors->vchunks.vchunks->start_vector_id;
> + }
> + vport->qv_map = qv_map;
> +
> + if (idpf_vc_config_irq_map_unmap(vport, nb_rx_queues, true) != 0) {
> + DRV_LOG(ERR, "config interrupt mapping failed");
> + goto config_irq_map_err;
> + }
> +
> + return 0;
> +
> +config_irq_map_err:
> + rte_free(vport->qv_map);
> + vport->qv_map = NULL;
> +
> +qv_map_alloc_err:
> + return -1;
> +}
> +



RE: [PATCH] net/iavf:fix slow memory allocation

2022-12-08 Thread Wu, Jingjing



> -Original Message-
> From: You, KaisenX 
> Sent: Thursday, November 17, 2022 2:57 PM
> To: dev@dpdk.org
> Cc: sta...@dpdk.org; Yang, Qiming ; Zhou, YidingX
> ; You, KaisenX ; Wu, Jingjing
> ; Xing, Beilei ; Zhang, Qi Z
> 
> Subject: [PATCH] net/iavf:fix slow memory allocation
> 
> In some cases, the DPDK does not allocate hugepage heap memory to
> some sockets due to the user setting parameters
> (e.g. -l 40-79, SOCKET 0 has no memory).
> When the interrupt thread runs on the corresponding core of this
> socket, each allocation/release will execute a whole set of heap
> allocation/release operations,resulting in poor performance.
> Instead we call malloc() to get memory from the system's
> heap space to fix this problem.
> 
> Fixes: cb5c1b91f76f ("net/iavf: add thread for event callbacks")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Kaisen You 
Acked-by: Jingjing Wu 




RE: Building DPDK with IOVA_AS_VA

2022-12-06 Thread Wu, Jingjing
Yes, thanks for the suggestion.

> -Original Message-
> From: Shijith Thotton 
> Sent: Wednesday, December 7, 2022 3:16 PM
> To: Wu, Jingjing ; Xing, Beilei 
> Cc: dev@dpdk.org; Morten Brørup ; Richardson, 
> Bruce
> 
> Subject: RE: Building DPDK with IOVA_AS_VA
> 
> Hi Jingjing Wu/Beilei Xing.
> 
> >I guess driver may not handle the attribute enable_iova_as_pa well right now.
> >Maybe you can have a try by disabling idpf driver by adding "-
> >Ddisable_drivers=net/idpf".
> >
> 
> Please send a fix. A check can be added similar to hns3 PMD.
> 
> +if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
> +build = false
> +reason = 'driver does not support disabling IOVA as PA mode'
> +subdir_done()
> +endif
> +
> 
> Thanks,
> Shijith
> 
> >>
> >> +To: Intel idpf maintainers Jingjing Wu, Beilei Xing
> >>
> >> > From: Morten Brørup [mailto:m...@smartsharesystems.com]
> >> > Sent: Tuesday, 6 December 2022 17.58
> >> >
> >> > Bruce,
> >> >
> >> > How do I build with IOVA_AS_VA, to benefit from Shijith Thotton's
> >> > patch? (Bonus question: How do I make the CI build this way?)
> >> >
> >> > When I try this command:
> >> > meson -Dplatform=generic -Dcheck_includes=true -
> >> > Denable_iova_as_pa=false work
> >> >
> >> > It fails with:
> >> > drivers/net/idpf/meson.build:36:8: ERROR: Unknown variable
> >> > "static_rte_common_idpf".
> >> >
> >> > Here is the full output:
> >> >
> >> > The Meson build system
> >> > Version: 0.60.3
> >> > Source dir: /home/morten/upstreaming/dpdk-experiment
> >> > Build dir: /home/morten/upstreaming/dpdk-experiment/work
> >> > Build type: native build
> >> > Program cat found: YES (/usr/bin/cat)
> >> > Project name: DPDK
> >> > Project version: 23.03.0-rc0
> >> > C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-
> >> > 1ubuntu1~20.04.1) 9.4.0")
> >> > C linker for the host machine: cc ld.bfd 2.34
> >> > Host machine cpu family: x86_64
> >> > Host machine cpu: x86_64
> >> > Message: ## Building in Developer Mode ##
> >> > Program pkg-config pkgconf found: NO
> >> > Program check-symbols.sh found: YES (/home/morten/upstreaming/dpdk-
> >> > experiment/buildtools/check-symbols.sh)
> >> > Program options-ibverbs-static.sh found: YES
> >> > (/home/morten/upstreaming/dpdk-experiment/buildtools/options-ibverbs-
> >> > static.sh)
> >> > Program objdump found: YES (/usr/bin/objdump)
> >> > Program python3 found: YES (/usr/bin/python3)
> >> > Program cat found: YES (/usr/bin/cat)
> >> > Program ../buildtools/symlink-drivers-solibs.sh found: YES (/bin/sh
> >> > /home/morten/upstreaming/dpdk-experiment/config/../buildtools/symlink-
> >> > drivers-solibs.sh)
> >> > Checking for size of "void *" : 8
> >> > Checking for size of "void *" : 8
> >> > Library m found: YES
> >> > Library numa found: YES
> >> > Has header "numaif.h" : YES
> >> > Library libfdt found: NO
> >> > Library libexecinfo found: NO
> >> > Did not find pkg-config by name 'pkg-config'
> >> > Found Pkg-config: NO
> >> > Run-time dependency libarchive found: NO (tried pkgconfig)
> >> > Run-time dependency libbsd found: NO (tried pkgconfig)
> >> > Run-time dependency jansson found: NO (tried pkgconfig)
> >> > Run-time dependency openssl found: NO (tried pkgconfig)
> >> > Run-time dependency libpcap found: NO (tried pkgconfig)
> >> > Library pcap found: YES
> >> > Has header "pcap.h" with dependency -lpcap: YES
> >> > Compiler for C supports arguments -Wcast-qual: YES
> >> > Compiler for C supports arguments -Wdeprecated: YES
> >> > Compiler for C supports arguments -Wformat: YES
> >> > Compiler for C supports arguments -Wformat-nonliteral: YES
> >> > Compiler for C supports arguments -Wformat-security: YES
> >> > Compiler for C supports arguments -Wmissing-declarations: YES
> >> > Compiler for C supports arguments -Wmissing-prototypes: YES
> >> > Compiler for C supports arguments -Wnested-externs: YES
> >> > Compiler for C supports arguments -Wold-style-definition: YES
>

RE: Building DPDK with IOVA_AS_VA

2022-12-06 Thread Wu, Jingjing
I guess the driver may not handle the enable_iova_as_pa attribute well right
now. Maybe you can try disabling the idpf driver by adding
"-Ddisable_drivers=net/idpf".

> -Original Message-
> From: Morten Brørup 
> Sent: Wednesday, December 7, 2022 2:55 AM
> To: Richardson, Bruce ; Wu, Jingjing
> ; Xing, Beilei 
> Cc: dev@dpdk.org; Shijith Thotton 
> Subject: RE: Building DPDK with IOVA_AS_VA
> 
> +To: Intel idpf maintainers Jingjing Wu, Beilei Xing
> 
> > From: Morten Brørup [mailto:m...@smartsharesystems.com]
> > Sent: Tuesday, 6 December 2022 17.58
> >
> > Bruce,
> >
> > How do I build with IOVA_AS_VA, to benefit from Shijith Thotton's
> > patch? (Bonus question: How do I make the CI build this way?)
> >
> > When I try this command:
> > meson -Dplatform=generic -Dcheck_includes=true -
> > Denable_iova_as_pa=false work
> >
> > It fails with:
> > drivers/net/idpf/meson.build:36:8: ERROR: Unknown variable
> > "static_rte_common_idpf".
> >
> > Here is the full output:
> >
> > The Meson build system
> > Version: 0.60.3
> > Source dir: /home/morten/upstreaming/dpdk-experiment
> > Build dir: /home/morten/upstreaming/dpdk-experiment/work
> > Build type: native build
> > Program cat found: YES (/usr/bin/cat)
> > Project name: DPDK
> > Project version: 23.03.0-rc0
> > C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-
> > 1ubuntu1~20.04.1) 9.4.0")
> > C linker for the host machine: cc ld.bfd 2.34
> > Host machine cpu family: x86_64
> > Host machine cpu: x86_64
> > Message: ## Building in Developer Mode ##
> > Program pkg-config pkgconf found: NO
> > Program check-symbols.sh found: YES (/home/morten/upstreaming/dpdk-
> > experiment/buildtools/check-symbols.sh)
> > Program options-ibverbs-static.sh found: YES
> > (/home/morten/upstreaming/dpdk-experiment/buildtools/options-ibverbs-
> > static.sh)
> > Program objdump found: YES (/usr/bin/objdump)
> > Program python3 found: YES (/usr/bin/python3)
> > Program cat found: YES (/usr/bin/cat)
> > Program ../buildtools/symlink-drivers-solibs.sh found: YES (/bin/sh
> > /home/morten/upstreaming/dpdk-experiment/config/../buildtools/symlink-
> > drivers-solibs.sh)
> > Checking for size of "void *" : 8
> > Checking for size of "void *" : 8
> > Library m found: YES
> > Library numa found: YES
> > Has header "numaif.h" : YES
> > Library libfdt found: NO
> > Library libexecinfo found: NO
> > Did not find pkg-config by name 'pkg-config'
> > Found Pkg-config: NO
> > Run-time dependency libarchive found: NO (tried pkgconfig)
> > Run-time dependency libbsd found: NO (tried pkgconfig)
> > Run-time dependency jansson found: NO (tried pkgconfig)
> > Run-time dependency openssl found: NO (tried pkgconfig)
> > Run-time dependency libpcap found: NO (tried pkgconfig)
> > Library pcap found: YES
> > Has header "pcap.h" with dependency -lpcap: YES
> > Compiler for C supports arguments -Wcast-qual: YES
> > Compiler for C supports arguments -Wdeprecated: YES
> > Compiler for C supports arguments -Wformat: YES
> > Compiler for C supports arguments -Wformat-nonliteral: YES
> > Compiler for C supports arguments -Wformat-security: YES
> > Compiler for C supports arguments -Wmissing-declarations: YES
> > Compiler for C supports arguments -Wmissing-prototypes: YES
> > Compiler for C supports arguments -Wnested-externs: YES
> > Compiler for C supports arguments -Wold-style-definition: YES
> > Compiler for C supports arguments -Wpointer-arith: YES
> > Compiler for C supports arguments -Wsign-compare: YES
> > Compiler for C supports arguments -Wstrict-prototypes: YES
> > Compiler for C supports arguments -Wundef: YES
> > Compiler for C supports arguments -Wwrite-strings: YES
> > Compiler for C supports arguments -Wno-address-of-packed-member: YES
> > Compiler for C supports arguments -Wno-packed-not-aligned: YES
> > Compiler for C supports arguments -Wno-missing-field-initializers: YES
> > Compiler for C supports arguments -mavx512f: YES
> > Checking if "AVX512 checking" : compiles: YES
> > Fetching value of define "__SSE4_2__" : 1
> > Fetching value of define "__AES__" :
> > Fetching value of define "__AVX__" :
> > Fetching value of define "__AVX2__" :
> > Fetching value of define "__AVX512BW__" :
> > Fetching value of define "__AVX512CD__" :
> > Fetching value of define "__AVX

RE: [PATCH] net/idpf: add supported ptypes get

2022-11-17 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Friday, November 18, 2022 11:51 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Peng, Yuan ; Xing, Beilei
> 
> Subject: [PATCH] net/idpf: add supported ptypes get
> 
> From: Beilei Xing 
> 
> Failed to launch l3fwd, the log shows:
> port 0 cannot parse packet type, please add --parse-ptype
> This patch adds dev_supported_ptypes_get ops.
> 
> Fixes: 549343c25db8 ("net/idpf: support device initialization")
> 
> Signed-off-by: Beilei Xing 
Reviewed-by: Jingjing Wu 



RE: [PATCH v2] net/idpf: fix crash when launching l3fwd

2022-11-17 Thread Wu, Jingjing
> -
>   if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
>   PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
>conf->txmode.mq_mode);
> diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
> index ac6486d4ef..88770447f8 100644
> --- a/drivers/net/idpf/idpf_vchnl.c
> +++ b/drivers/net/idpf/idpf_vchnl.c
> @@ -1197,6 +1197,9 @@ idpf_vc_dealloc_vectors(struct idpf_vport *vport)
>   int err, len;
> 
>   alloc_vec = vport->recv_vectors;
> + if (alloc_vec == NULL)
> + return -EINVAL;
> +
Would it be better to check recv_vectors for NULL before calling
idpf_vc_dealloc_vectors()?



RE: [PATCH v2] raw/ntb: add PPD status check for SPR

2022-07-01 Thread Wu, Jingjing



> -Original Message-
> From: Guo, Junfeng 
> Sent: Thursday, June 30, 2022 4:56 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Guo, Junfeng 
> Subject: [PATCH v2] raw/ntb: add PPD status check for SPR
> 
> Add PPD (PCIe Port Definition) status check for SPR (Sapphire Rapids).
> 
> Note that NTB on SPR has the same device id with that on ICX, while
> the field offsets of PPD Control Register are different. Here, we use
> the PCI device revision id to distinguish the HW platform (ICX/SPR)
> and check the Port Config Status and Port Definition accordingly.
> 
> +---+++
> |  Fields   | Bit Range (on ICX) | Bit Range (on SPR) |
> +---+++
> | Port Configuration Status | 12 | 14 |
> | Port Definition   | 9:8| 10:8   |
> +---+++
> 
> v2:
> fix revision id value check logic.
> 
> Signed-off-by: Junfeng Guo 
Acked-by: Jingjing Wu 


RE: [PATCH v4] raw/ntb: clear all valid DB bits when DB init

2022-02-09 Thread Wu, Jingjing



> -Original Message-
> From: Guo, Junfeng 
> Sent: Thursday, February 10, 2022 3:07 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; sta...@dpdk.org; Guo, Junfeng 
> Subject: [PATCH v4] raw/ntb: clear all valid DB bits when DB init
> 
> Before registering the doorbell interrupt handler callback function,
> all the valid doorbell bits within the NTB private data struct should
> be cleared to avoid the confusion of the handshake timing sequence
> diagram when setting up the NTB connection in back-to-back mode.
> 
> Fixes: 62012a76811e ("raw/ntb: add handshake process")
> Cc: sta...@dpdk.org
> 
> v2: fix typo
> v3: fix coding style issue
> v4: add ops check before calling it
> 
> Signed-off-by: Junfeng Guo 
Acked-by: Jingjing Wu 




RE: [PATCH] raw/ntb: add check for DB intr handler registering

2022-02-09 Thread Wu, Jingjing



> -Original Message-
> From: Guo, Junfeng 
> Sent: Thursday, February 10, 2022 2:29 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; sta...@dpdk.org; Guo, Junfeng 
> Subject: [PATCH] raw/ntb: add check for DB intr handler registering
> 
> The callback registering of doorbell interrupt handler should be
> finished before enabling the interrupt event fd. Thus add the return
> value check for this callback registering.
> 
> Fixes: 62012a76811e ("raw/ntb: add handshake process")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junfeng Guo 
> ---
>  drivers/raw/ntb/ntb.c | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
> index cc611dfbb9..0801e6d1ae 100644
> --- a/drivers/raw/ntb/ntb.c
> +++ b/drivers/raw/ntb/ntb.c
> @@ -1403,8 +1403,12 @@ ntb_init_hw(struct rte_rawdev *dev, struct 
> rte_pci_device
> *pci_dev)
> 
>   intr_handle = pci_dev->intr_handle;
>   /* Register callback func to eal lib */
> - rte_intr_callback_register(intr_handle,
> -ntb_dev_intr_handler, dev);
> + ret = rte_intr_callback_register(intr_handle,
> +  ntb_dev_intr_handler, dev);
> + if (ret) {
> + NTB_LOG(ERR, "Unable to register doorbell intr handler.");
> + return ret;
> + }
When will this register failure happen? Have you checked what is the root cause?
 
> 
>   ret = rte_intr_efd_enable(intr_handle, hw->db_cnt);
>   if (ret)
Rollback is needed — e.g. rte_intr_callback_unregister() should be called when 
a later step fails or when the driver is removed?
> --
> 2.25.1



RE: [PATCH v3] raw/ntb: clear all valid DB bits when DB init

2022-02-09 Thread Wu, Jingjing



> -Original Message-
> From: Guo, Junfeng 
> Sent: Wednesday, February 9, 2022 12:47 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; sta...@dpdk.org; Guo, Junfeng 
> Subject: [PATCH v3] raw/ntb: clear all valid DB bits when DB init
> 
> Before registering the doorbell interrupt handler callback function,
> all the valid doorbell bits within the NTB private data struct should
> be cleared to avoid the confusion of the handshake timing sequence
> diagram when setting up the NTB connection in back-to-back mode.
> 
> Fixes: 62012a76811e ("raw/ntb: add handshake process")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junfeng Guo 
> ---
>  drivers/raw/ntb/ntb.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
Better to add changes compared to previous version, which would help reviewers.

> diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
> index 46ac02e5ab..cc611dfbb9 100644
> --- a/drivers/raw/ntb/ntb.c
> +++ b/drivers/raw/ntb/ntb.c
> @@ -1398,6 +1398,8 @@ ntb_init_hw(struct rte_rawdev *dev, struct 
> rte_pci_device
> *pci_dev)
> 
>   /* Init doorbell. */
>   hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
> + /* Clear all valid doorbell bits before registering intr handler */
> + (*hw->ntb_ops->db_clear)(dev, hw->db_valid_mask);

Check if hw->ntb_ops->db_clear is NULL before call it.

> 
>   intr_handle = pci_dev->intr_handle;
>   /* Register callback func to eal lib */
> --
> 2.25.1



Re: [dpdk-dev] [PATCH v8 6/7] net/iavf: add watchdog for VFLR

2021-10-17 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 15, 2021 6:15 PM
> To: Wu, Jingjing ; Xing, Beilei 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Richardson, 
> Bruce
> ; Ananyev, Konstantin 
> ;
> Nicolau, Radu 
> Subject: [PATCH v8 6/7] net/iavf: add watchdog for VFLR
> 
> Add watchdog to iAVF PMD which support monitoring the VFLR register. If
> the device is not already in reset then if a VF reset in progress is
> detected then notfiy user through callback and set into reset state.
> If the device is already in reset then poll for completion of reset.
> 
> The watchdog is disabled by default, to enable it set
> IAVF_DEV_WATCHDOG_PERIOD to a non zero value (microseconds)
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Radu Nicolau 

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v4 6/6] net/iavf: add watchdog for VFLR

2021-10-07 Thread Wu, Jingjing
> > Besides checking VFGEN_RSTAT, there is a process to handle
> VIRTCHNL_OP_EVENT  from PF. What is the change for? Any scenario which
> VIRTCHNL_OP_EVENT  doesn't cover?
> > And how is the 500us been determined?
> 
> Hi Jingjing, thanks for reviewing, I think this can be handled with the
> VIRTCHNL_OP_EVENT  with no need for a watchdog alarm, I will rework the
> patch.
> 
Hi Radu, I saw the patch was reworked, but it looks like the watchdog is still 
there. So what scenario does VIRTCHNL_OP_EVENT not cover?



Re: [dpdk-dev] [PATCH v4 6/6] net/iavf: add watchdog for VFLR

2021-10-03 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 1, 2021 5:52 PM
> To: Wu, Jingjing ; Xing, Beilei 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Richardson, 
> Bruce
> ; Ananyev, Konstantin 
> ;
> Nicolau, Radu 
> Subject: [PATCH v4 6/6] net/iavf: add watchdog for VFLR
> 
> Add watchdog to iAVF PMD which supports monitoring the VFLR register. If
> the device is not already in reset and a VF reset in progress is
> detected, notify the user through a callback and set the reset state.
> If the device is already in reset then poll for completion of reset.
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Radu Nicolau 
> ---
>  drivers/net/iavf/iavf.h|  6 +++
>  drivers/net/iavf/iavf_ethdev.c | 97 ++
>  2 files changed, 103 insertions(+)
> 
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
> index d5f574b4b3..4481d2e134 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -212,6 +212,12 @@ struct iavf_info {
>   int cmd_retval; /* return value of the cmd response from PF */
>   uint8_t *aq_resp; /* buffer to store the adminq response from PF */
> 
> + struct {
> + uint8_t enabled:1;
> + uint64_t period_us;
> + } watchdog;
> + /** iAVF watchdog configuration */
> +
>   /* Event from pf */
>   bool dev_closed;
>   bool link_up;
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index aad6a28585..d02aa9c1c5 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  #include "iavf.h"
>  #include "iavf_rxtx.h"
> @@ -239,6 +240,94 @@ iavf_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
>   return 0;
>  }
> 
> +
> +static int
> +iavf_vfr_inprogress(struct iavf_hw *hw)
> +{
> + int inprogress = 0;
> +
> + if ((IAVF_READ_REG(hw, IAVF_VFGEN_RSTAT) &
> + IAVF_VFGEN_RSTAT_VFR_STATE_MASK) ==
> + VIRTCHNL_VFR_INPROGRESS)
> + inprogress = 1;
> +
> + if (inprogress)
> + PMD_DRV_LOG(INFO, "Watchdog detected VFR in progress");
> +
> + return inprogress;
> +}
> +
> +static void
> +iavf_dev_watchdog(void *cb_arg)
> +{
> + struct iavf_adapter *adapter = cb_arg;
> + struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
> + int vfr_inprogress = 0, rc = 0;
> +
> + /* check if watchdog has been disabled since last call */
> + if (!adapter->vf.watchdog.enabled)
> + return;
> +
> + /* If in reset then poll vfr_inprogress register for completion */
> + if (adapter->vf.vf_reset) {
> + vfr_inprogress = iavf_vfr_inprogress(hw);
> +
> + if (!vfr_inprogress) {
> + PMD_DRV_LOG(INFO, "VF \"%s\" reset has completed",
> + adapter->eth_dev->data->name);
> + adapter->vf.vf_reset = false;
> + }
> + /* If not in reset then poll vfr_inprogress register for VFLR event */
> + } else {
> + vfr_inprogress = iavf_vfr_inprogress(hw);
> +
> + if (vfr_inprogress) {
> + PMD_DRV_LOG(INFO,
> + "VF \"%s\" reset event has been detected by 
> watchdog",
> + adapter->eth_dev->data->name);
> +
> + /* enter reset state with VFLR event */
> + adapter->vf.vf_reset = true;
> +
> + rte_eth_dev_callback_process(adapter->eth_dev,
> + RTE_ETH_EVENT_INTR_RESET, NULL);
> + }
> + }
> +
> + /* re-alarm watchdog */
> + rc = rte_eal_alarm_set(adapter->vf.watchdog.period_us,
> + &iavf_dev_watchdog, cb_arg);
> +
> + if (rc)
> + PMD_DRV_LOG(ERR, "Failed \"%s\" to reset device watchdog alarm",
> + adapter->eth_dev->data->name);
> +}
> +
> +static void
> +iavf_dev_watchdog_enable(struct iavf_adapter *adapter, uint64_t period_us)
> +{
> + int rc;
> +
> + PMD_DRV_LOG(INFO, "Enabling device watchdog");
> +
> + adapter->vf.watchdog.enabled = 1;
> + adapter->vf.watchdog.period_us = period_us;
> +
> + rc = rte_eal_alarm_set(adapter->vf.watchdog.period_us,
> + &iavf_dev_watchdog, (void *)adapter

Re: [dpdk-dev] [PATCH v4 5/6] net/iavf: add xstats support for inline IPsec crypto

2021-10-03 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 1, 2021 5:51 PM
> To: Wu, Jingjing ; Xing, Beilei 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Richardson, 
> Bruce
> ; Ananyev, Konstantin 
> ;
> Nicolau, Radu 
> Subject: [PATCH v4 5/6] net/iavf: add xstats support for inline IPsec crypto
> 
> Add per queue counters for maintaining statistics for inline IPsec
> crypto offload, which can be retrieved through the
> rte_security_session_stats_get() with more detailed errors through the
> rte_ethdev xstats.
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Radu Nicolau 

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v4 4/6] net/iavf: add iAVF IPsec inline crypto support

2021-10-03 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 1, 2021 5:51 PM
> To: Wu, Jingjing ; Xing, Beilei 
> ; Ray Kinsella
> 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Richardson, 
> Bruce
> ; Ananyev, Konstantin 
> ;
> Nicolau, Radu 
> Subject: [PATCH v4 4/6] net/iavf: add iAVF IPsec inline crypto support
> 
> Add support for inline crypto for IPsec, for ESP transport and
> tunnel over IPv4 and IPv6, as well as supporting the offload for
> ESP over UDP, and inconjunction with TSO for UDP and TCP flows.
> Implement support for rte_security packet metadata
> 
> Add definition for IPsec descriptors, extend support for offload
> in data and context descriptor to support
> 
> Add support to virtual channel mailbox for IPsec Crypto request
> operations. IPsec Crypto requests receive an initial acknowledgement
> from phsyical function driver of receipt of request and then an
> asynchronous response with success/failure of request including any
> response data.
> 
> Add enhanced descriptor debugging
> 
> Refactor of scalar tx burst function to support integration of offload
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Abhijit Sinha 
> Signed-off-by: Radu Nicolau 

Reviewed-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v4 3/6] net/iavf: add support for asynchronous virt channel messages

2021-10-03 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 1, 2021 5:51 PM
> To: Wu, Jingjing ; Xing, Beilei 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Richardson, 
> Bruce
> ; Ananyev, Konstantin 
> ;
> Nicolau, Radu 
> Subject: [PATCH v4 3/6] net/iavf: add support for asynchronous virt channel 
> messages
> 
> Add support for asynchronous virtual channel messages, specifically for
> inline IPsec messages.
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Abhijit Sinha 
> Signed-off-by: Radu Nicolau 
> ---
>  drivers/net/iavf/iavf.h   |  16 
>  drivers/net/iavf/iavf_vchnl.c | 137 +-
>  2 files changed, 101 insertions(+), 52 deletions(-)

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v4 2/6] net/iavf: rework tx path

2021-10-03 Thread Wu, Jingjing



> -Original Message-
> From: Nicolau, Radu 
> Sent: Friday, October 1, 2021 5:51 PM
> To: Wu, Jingjing ; Xing, Beilei 
> ; Richardson,
> Bruce ; Ananyev, Konstantin 
> 
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit
> ; Zhang, Qi Z ; Nicolau, Radu
> 
> Subject: [PATCH v4 2/6] net/iavf: rework tx path
> 
> Rework the TX path and TX descriptor usage in order to
> allow for better use of oflload flags and to facilitate enabling of
> inline crypto offload feature.
> 
> Signed-off-by: Declan Doherty 
> Signed-off-by: Abhijit Sinha 
> Signed-off-by: Radu Nicolau 
> ---
>  drivers/net/iavf/iavf_rxtx.c | 536 +++
>  drivers/net/iavf/iavf_rxtx.h |   9 +-
>  drivers/net/iavf/iavf_rxtx_vec_sse.c |  10 +-
>  3 files changed, 319 insertions(+), 236 deletions(-)

Acked-by: Jingjing Wu 


Re: [dpdk-dev] Not able to start IAVF PMD with dpdk 20.11.3

2021-09-23 Thread Wu, Jingjing
Could you try switching from uio_pci_generic to vfio-pci?

From: Dey, Souvik 
Sent: Thursday, September 23, 2021 11:29 PM
To: dev@dpdk.org; Xing, Beilei ; Wu, Jingjing 

Subject: Not able to start IAVF PMD with dpdk 20.11.3

Hi All,
 Trying to test an E810 SR-IOV based NIC with DPDK 20.11.3. I am running 
the kernel driver on the host, and only the VF is attached to the VM where I am 
running DPDK. But when I attach the VF to testpmd it throws the error below. 
Is there a way to move forward here?

[root@connexip linuxadmin]# ./dpdk-testpmd -c 0xf -n 4 -a 08:00.0 -- -i 
--rxq=16 --txq=16
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: net_iavf (8086:1889) device: :08:00.0 (socket 0)
EAL: Error reading from file descriptor 27: Input/output error
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will 
pair with itself.

Configuring Port 0 (socket 0)
EAL: Error reading from file descriptor 27: Input/output error
iavf_execute_vf_cmd(): No response for cmd 28
iavf_disable_vlan_strip(): Failed to execute command of 
OP_DISABLE_VLAN_STRIPPING
iavf_execute_vf_cmd(): No response for cmd 24
iavf_configure_rss_lut(): Failed to execute command of OP_CONFIG_RSS_LUT
iavf_dev_configure(): configure rss failed
Port0 dev_configure = -1
Fail to configure port 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed
[root@connexip linuxadmin]#


I do see the compatibility matrix below for the driver, but is this required even 
if I use the kernel driver on the host and only the IAVF PMD in the guest?

DPDK    Kernel Driver   OS Default DDP   COMMS DDP   Wireless DDP   Firmware
20.11   1.3.2           1.3.20           1.3.24      N/A            2.3
21.02   1.4.11          1.3.24           1.3.28      1.3.4          2.4



Host details:
Red Hat Enterprise Linux release 8.2 (Ootpa)
[root@localhost ~]# modinfo ice
filename:   
/lib/modules/4.18.0-193.19.1.el8_2.x86_64/kernel/drivers/net/ethernet/intel/ice/ice.ko.xz
firmware:   intel/ice/ddp/ice.pkg (ice-1.3.4.0.pkg)
version:0.8.1-k
license:GPL v2
description:Intel(R) Ethernet Connection E800 Series Linux Driver
author: Intel Corporation, 
linux.n...@intel.com<mailto:linux.n...@intel.com>
rhelversion:8.2
Using vfio-pci to connect the VF to the VM.

Guest Details:
OS : debian buster
Dpdk : 20.11.3
Interface bound to uio_pci_generic
[root@connexip linuxadmin]# /opt/sonus/bin/np/swe/dpdk-devbind.py --status

Network devices using DPDK-compatible driver

:01:00.0 'Virtio network device 1041' drv=uio_pci_generic unused=virtio_pci
:08:00.0 'Ethernet Adaptive Virtual Function 1889' drv=uio_pci_generic 
unused=i40evf
:09:00.0 'Ethernet Adaptive Virtual Function 1889' drv=uio_pci_generic 
unused=i40evf

Network devices using kernel driver
===
:02:00.0 'Virtio network device 1041' if=ha0 drv=virtio-pci 
unused=virtio_pci,uio_pci_generic *Active*


Notice: This e-mail together with any attachments may contain information of 
Ribbon Communications Inc. and its Affiliates that is confidential and/or 
proprietary for the sole use of the intended recipient. Any review, disclosure, 
reliance or distribution by others or forwarding without express permission is 
strictly prohibited. If you are not the intended recipient, please notify the 
sender immediately and then delete all copies, including any attachments.


Re: [dpdk-dev] [PATCH v2 2/4] net/iavf: add iAVF IPsec inline crypto support

2021-09-17 Thread Wu, Jingjing
In general, the patch is too big to review. Splitting it into smaller patches would help a lot!

[...]
> +static const struct rte_cryptodev_symmetric_capability *
> +get_capability(struct iavf_security_ctx *iavf_sctx,
> + uint32_t algo, uint32_t type)
> +{
> + const struct rte_cryptodev_capabilities *capability;
> + int i = 0;
> +
> + capability = &iavf_sctx->crypto_capabilities[i];
> +
> + while (capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {
> + if (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
> + capability->sym.xform_type == type &&
> + capability->sym.cipher.algo == algo)
> + return &capability->sym;
> + /** try next capability */
> + capability = &iavf_crypto_capabilities[i++];

Better to check i to avoid going out of bounds.
[...]

> +
> +static int
> +valid_length(uint32_t len, uint32_t min, uint32_t max, uint32_t increment)
> +{
> + if (len < min || len > max)
> + return 0;
> +
> + if (increment == 0)
> + return 1;
> +
> + if ((len - min) % increment)
> + return 0;
> +
> + return 1;
> +}
Would it be better to use true/false instead of 1/0? The same applies to the 
following validation functions.
[...]

> +static int
> +iavf_ipsec_crypto_session_validate_conf(struct iavf_security_ctx *iavf_sctx,
> + struct rte_security_session_conf *conf)
> +{
> + /** validate security action/protocol selection */
> + if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
> + conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {
> + PMD_DRV_LOG(ERR, "Unsupported action / protocol specified");
> + return -EINVAL;
> + }
> +
> + /** validate IPsec protocol selection */
> + if (conf->ipsec.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
> + PMD_DRV_LOG(ERR, "Unsupported IPsec protocol specified");
> + return -EINVAL;
> + }
> +
> + /** validate selected options */
> + if (conf->ipsec.options.copy_dscp ||
> + conf->ipsec.options.copy_flabel ||
> + conf->ipsec.options.copy_df ||
> + conf->ipsec.options.dec_ttl ||
> + conf->ipsec.options.ecn ||
> + conf->ipsec.options.stats) {
> + PMD_DRV_LOG(ERR, "Unsupported IPsec option specified");
> + return -EINVAL;
> + }
> +
> + /**
> +  * Validate crypto xforms parameters.
> +  *
> +  * AEAD transforms can be used for either inbound/outbound IPsec SAs,
> +  * for non-AEAD crypto transforms we explicitly only support CIPHER/AUTH
> +  * for outbound and AUTH/CIPHER chained transforms for inbound IPsec.
> +  */
> + if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> + if (!valid_aead_xform(iavf_sctx, &conf->crypto_xform->aead)) {
> + PMD_DRV_LOG(ERR, "Unsupported IPsec option specified");
> + return -EINVAL;
> + }
Invalid parameter, rather than unsupported option, right? Same as below.
[...]

> +static void
> +sa_add_set_aead_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,
> + struct rte_crypto_aead_xform *aead, uint32_t salt)
> +{
> + cfg->crypto_type = VIRTCHNL_AEAD;
> +
> + switch (aead->algo) {
> + case RTE_CRYPTO_AEAD_AES_CCM:
> + cfg->algo_type = VIRTCHNL_AES_CCM; break;
> + case RTE_CRYPTO_AEAD_AES_GCM:
> + cfg->algo_type = VIRTCHNL_AES_GCM; break;
> + case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
> + cfg->algo_type = VIRTCHNL_CHACHA20_POLY1305; break;
> + default:
> + RTE_ASSERT("we should be here");

Assert just because of an invalid config? Similar comment applies to the other validation functions.

> + }
> +
> + cfg->key_len = aead->key.length;
> + cfg->iv_len = aead->iv.length;
> + cfg->digest_len = aead->digest_length;
> + cfg->salt = salt;
> +
> + RTE_ASSERT(sizeof(cfg->key_data) < cfg->key_len);
> +
Not only the data but also the length: better to validate it before setting? The 
same applies to the other kinds of parameter setting.
[...]


> +static inline void
> +iavf_build_data_desc_cmd_offset_fields(volatile uint64_t *qw1,
> + struct rte_mbuf *m)
> +{
> + uint64_t command = 0;
> + uint64_t offset = 0;
> + uint64_t l2tag1 = 0;
> +
> + *qw1 = IAVF_TX_DESC_DTYPE_DATA;
> +
> + command = (uint64_t)IAVF_TX_DESC_CMD_ICRC;
> +
> + /* Descriptor based VLAN insertion */
> + if (m->ol_flags & PKT_TX_VLAN_PKT) {
> + command |= (uint64_t)IAVF_TX_DESC_CMD_IL2TAG1;
> + l2tag1 |= m->vlan_tci;
> + }
> +
>   /* Set MACLEN */
> - *td_offset |= (tx_offload.l2_len >> 1) <<
> -   IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
> -
> - /* Enable L3 checksum offloads */
> - if (ol_flags & PKT_TX_IP_CKSUM) {
> - *td_cmd |= IAVF_TX_DESC_CMD_IIPT_IPV4_CSUM;
> - *td_offset |= (tx_offload.l3_len >> 2) <<
> -  

Re: [dpdk-dev] [PATCH] net/iavf: remove interrupt handler

2021-08-12 Thread Wu, Jingjing
> > -Original Message-
> > From: Zhang, RobinX 
> > Sent: Friday, July 23, 2021 3:47 PM
> > To: dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Zhang, Qi Z ; Guo,
> > Junfeng ; Yang, SteveX
> ;
> > Zhang, RobinX 
> > Subject: [PATCH] net/iavf: remove interrupt handler
> 
> As you are not going to remove the interrupt handler in all cases, the title
> is misleading. Better to replace it with "enable interrupt polling"
> 
> >
> > For VF hosted by Intel 700 series NICs, internal rx interrupt and
> > adminq interrupt share the same source, that cause a lot cpu cycles be
> > wasted on interrupt handler on rx path.
> >
> > The patch disable pci interrupt and remove the interrupt handler,
> > replace it with a low frequency(50ms) interrupt polling daemon which
> > is implemtented by registering an alarm callback periodly.
> >
> > The virtual channel capability bit VIRTCHNL_VF_OFFLOAD_WB_ON_ITR can
> > be used to negotiate if iavf PMD needs to enable background alarm or
> > not, so ideally this change will not impact the case hosted by Intel 800 
> > series
> NICS.
> >
> > Suggested-by: Jingjing Wu 
> > Signed-off-by: Qi Zhang 
> 
As it is a fix for an existing problem, can it be sent to dpdk-stable too?

> No need to add me as the author for this patch but, you can add a reference
> to the original i40e commit to explain you implement the same logic.
> 
> > Signed-off-by: Robin Zhang 
> > ---
> >  drivers/net/iavf/iavf.h|  3 +++
> >  drivers/net/iavf/iavf_ethdev.c | 37
> > ++
> >  drivers/net/iavf/iavf_vchnl.c  | 11 --
> >  3 files changed, 22 insertions(+), 29 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index
> > b3bd078111..771f3b79d7 100644
> > --- a/drivers/net/iavf/iavf.h
> > +++ b/drivers/net/iavf/iavf.h
> > @@ -69,6 +69,8 @@
> >  #define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
> >  #define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */
> >
> > +#define IAVF_ALARM_INTERVAL 5 /* us */
> > +
> >  /* The overhead from MTU to max frame size.
> >   * Considering QinQ packet, the VLAN tag needs to be counted twice.
> >   */
> > @@ -372,6 +374,7 @@ int iavf_config_irq_map_lv(struct iavf_adapter
> > *adapter, uint16_t num,  void iavf_add_del_all_mac_addr(struct
> > iavf_adapter *adapter, bool add);  int iavf_dev_link_update(struct
> > rte_eth_dev *dev,  __rte_unused int wait_to_complete);
> > +void iavf_dev_alarm_handler(void *param);
> >  int iavf_query_stats(struct iavf_adapter *adapter,
> >  struct virtchnl_eth_stats **pstats);  int
> > iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast,
> > diff --git a/drivers/net/iavf/iavf_ethdev.c
> > b/drivers/net/iavf/iavf_ethdev.c index
> > 41382c6d66..bbe5b3ddb1 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -16,6 +16,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -692,9 +693,9 @@ static int iavf_config_rx_queues_irqs(struct
> > rte_eth_dev *dev,
> >   */
> >  vf->msix_base = IAVF_MISC_VEC_ID;
> >
> > -/* set ITR to max */
> > +/* set ITR to default */
> >  interval = iavf_calc_itr_interval(
> > -IAVF_QUEUE_ITR_INTERVAL_MAX);
> > +IAVF_QUEUE_ITR_INTERVAL_DEFAULT);
> >  IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
> > IAVF_VFINT_DYN_CTL01_INTENA_MASK |
> > (IAVF_ITR_INDEX_DEFAULT <<
> > @@ -853,9 +854,8 @@ iavf_dev_start(struct rte_eth_dev *dev)
> > PMD_DRV_LOG(ERR, "configure irq failed");  goto err_queue;  }
> > -/* re-enable intr again, because efd assign may change */
> > +/* only enable interrupt in rx interrupt mode */
> >  if (dev->data->dev_conf.intr_conf.rxq != 0) {
> > -rte_intr_disable(intr_handle);  rte_intr_enable(intr_handle);  }
> >
> > @@ -889,6 +889,9 @@ iavf_dev_stop(struct rte_eth_dev *dev)
> >
> >  PMD_INIT_FUNC_TRACE();
> >
> > +if (dev->data->dev_conf.intr_conf.rxq != 0)
> > +rte_intr_disable(intr_handle);
> > +
> >  if (adapter->stopped == 1)
> >  return 0;
> >
> > @@ -1669,8 +1672,6 @@ iavf_dev_rx_queue_intr_enable(struct
> rte_eth_dev
> > *dev, uint16_t queue_id)
> >
> >  IAVF_WRITE_FLUSH(hw);
> >
> > -rte_intr_ack(&pci_dev->intr_handle);
> > -
> >  return 0;
> >  }

Re: [dpdk-dev] [PATCH] net/iavf: fix Rx issue for scalar Rx functions

2021-05-31 Thread Wu, Jingjing


> -Original Message-
> From: Xing, Beilei 
> Sent: Tuesday, June 1, 2021 1:10 PM
> To: Wu, Jingjing ; Zhang, Qi Z 
> Cc: dev@dpdk.org; Xing, Beilei ; sta...@dpdk.org
> Subject: [PATCH] net/iavf: fix Rx issue for scalar Rx functions
> 
> From: Beilei Xing 
> 
> The new allocated mbuf should be updated to the SW
> ring.
> 
> Fixes: a2b29a7733ef ("net/avf: enable basic Rx Tx")
> Fixes: b8b4c54ef9b0 ("net/iavf: support flexible Rx descriptor in normal 
> path")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v3 5/5] net/iavf: fix vector mapping with queue

2021-01-11 Thread Wu, Jingjing
> -Original Message-
> From: Xing, Beilei 
> Sent: Tuesday, January 12, 2021 2:11 PM
> To: Wu, Jingjing ; dev@dpdk.org
> Cc: Xia, Chenbo ; Lu, Xiuchun
> 
> Subject: RE: [PATCH v3 5/5] net/iavf: fix vector mapping with queue
> 
> Seems the patch is conflict with patch
> https://patches.dpdk.org/patch/86202/, please help to review.

Thanks. It looks like a similar fix. I will comment on that patch and drop this 
one if that one is merged.
> 
> > -----Original Message-
> > From: Wu, Jingjing 
> > Sent: Thursday, January 7, 2021 4:27 PM
> > To: dev@dpdk.org
> > Cc: Wu, Jingjing ; Xing, Beilei
> > ; Xia, Chenbo ; Lu,
> > Xiuchun 
> > Subject: [PATCH v3 5/5] net/iavf: fix vector mapping with queue
> >
> > Fix the vector mapping with queue by changing the recircle when
> > exceeds RX_VEC_START + nb_msix;
> >
> > Fixes: d6bde6b5eae9 ("net/avf: enable Rx interrupt")
> >
> > Signed-off-by: Jingjing Wu 
> > ---
> >  drivers/net/iavf/iavf_ethdev.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf_ethdev.c
> > b/drivers/net/iavf/iavf_ethdev.c index 06395f852b..b169b19975 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -570,7 +570,7 @@ static int iavf_config_rx_queues_irqs(struct
> > rte_eth_dev *dev,
> > /* If Rx interrupt is reuquired, and we can use
> >  * multi interrupts, then the vec is from 1
> >  */
> > -   vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
> > +   vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors -
> 1,
> >   intr_handle->nb_efd);
> > vf->msix_base = IAVF_RX_VEC_START;
> > vec = IAVF_RX_VEC_START;
> > @@ -578,7 +578,7 @@ static int iavf_config_rx_queues_irqs(struct
> > rte_eth_dev *dev,
> > qv_map[i].queue_id = i;
> > qv_map[i].vector_id = vec;
> > intr_handle->intr_vec[i] = vec++;
> > -   if (vec >= vf->nb_msix)
> > +   if (vec >= vf->nb_msix +
> IAVF_RX_VEC_START)
> > vec = IAVF_RX_VEC_START;
> > }
> > vf->qv_map = qv_map;
> > --
> > 2.21.1



Re: [dpdk-dev] [PATCH] net/iavf: fix vector id assignment

2021-01-11 Thread Wu, Jingjing



> -Original Message-
> From: Xu, Ting 
> Sent: Tuesday, January 12, 2021 2:27 PM
> To: Yu, DapengX ; Zhang, Qi Z
> ; Wu, Jingjing ; Xing, Beilei
> 
> Cc: dev@dpdk.org; Yu, DapengX ; sta...@dpdk.org
> Subject: RE: [PATCH] net/iavf: fix vector id assignment
> 
> > -Original Message-
> > From: dapengx...@intel.com 
> > Sent: Friday, January 8, 2021 6:21 PM
> > To: Zhang, Qi Z ; Wu, Jingjing
> > ; Xing, Beilei ; Xu,
> > Ting 
> > Cc: dev@dpdk.org; Yu, DapengX ;
> sta...@dpdk.org
> > Subject: [PATCH] net/iavf: fix vector id assignment
> >
> > From: YU DAPENG 
> >
> > The number of MSI-X interrupts on Rx shall be the minimal value of the
> > number of available MSI-X interrupts per VF - 1 (the 1 is for
> > miscellaneous
> > interrupt) and the number of configured Rx queues.
> > The current code break the rule because the number of available MSI-X
> > interrupts is used as the first value, but code does not subtract 1 from it.
> >
> > In normal situation, the first value is larger than the second value.
> > So each queue can be assigned a unique vector_id.
> >
> > For example: 17 available MSI-X interrupts, and 16 available Rx queues
> > per VF; but only 4 Rx queues are configured when device is started.
> > vector_id:0 is for misc interrupt, vector_id:1 for Rx queue0,
> > vector_id:2 for Rx queue1, vector_id:3 for Rx queue2, vector_id:4 for
> > Rx queue3.
> >
> > Current code breaks the rule in this normal situation, because when
> > assign vector_ids to interrupt handle, for example, it does not assign
> > vector_id:4 to the queue3, but assign vector_id:1 to it, because the
> > condition used causes vector_id wrap around too early.
> >
> 
> Hi, Dapeng,
> 
> Could you please further explain in which condition will this error happen?
> Seems it requires vf->nb_msix = 3 to make it happen, but I do not notice
> such situation.
> I know it may be an example, is there any more specific case?
> 
> Thanks.
> 
> > In iavf_config_irq_map(), the current code does not write data into
> > the last element of vecmap[], because of the previous code break.
> > Which cause wrong data is sent to PF with opcode
> > VIRTCHNL_OP_CONFIG_IRQ_MAP and cause
> > error: VIRTCHNL_STATUS_ERR_PARAM(-5).
> >
> > If kernel driver supports large VFs (up to 256 queues), different
> > queues can be assigned same vector_id.
> >
> > In order to adapt to large VFs and avoid wrapping early, the condition
> > is replaced from vec >= vf->nb_msix to vec >= vf->vf_res->max_vectors.
> >
> > Fixes: d6bde6b5eae9 ("net/avf: enable Rx interrupt")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: YU DAPENG 
> > ---
> >  drivers/net/iavf/iavf_ethdev.c | 8 +---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf_ethdev.c
> > b/drivers/net/iavf/iavf_ethdev.c index 7e3c26a94..d730bb156 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -483,6 +483,7 @@ static int iavf_config_rx_queues_irqs(struct
> > rte_eth_dev *dev,
> > struct iavf_qv_map *qv_map;
> > uint16_t interval, i;
> > int vec;
> > +   uint16_t max_vectors;
> >
> > if (rte_intr_cap_multiple(intr_handle) &&
> > dev->data->dev_conf.intr_conf.rxq) { @@ -570,15 +571,16 @@
> > static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
> > /* If Rx interrupt is reuquired, and we can use
> >  * multi interrupts, then the vec is from 1
> >  */
> > -   vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
> > - intr_handle->nb_efd);
> > +   max_vectors =
> > +   vf->vf_res->max_vectors -
> > IAVF_RX_VEC_START;

Looks like it is the same fix as http://patchwork.dpdk.org/patch/86118/.
I think this line needs to be moved ahead of the RTE_MIN, and then use 
RTE_MIN(max_vectors, intr_handle->nb_efd);

> > +   vf->nb_msix = RTE_MIN(max_vectors, intr_handle-
> > >nb_efd);
> > vf->msix_base = IAVF_RX_VEC_START;
> > vec = IAVF_RX_VEC_START;
> > for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > qv_map[i].queue_id = i;
> > qv_map[i].vector_id = vec;
> > intr_handle->intr_vec[i] = vec++;
> > -   if (vec >= vf->nb_msix)
> > +   if (vec >= vf->vf_res->max_vectors)
> > vec = IAVF_RX_VEC_START;
> > }
> > vf->qv_map = qv_map;
> > --
> > 2.27.0



Re: [dpdk-dev] [PATCH v2 4/8] emu/iavf: add vfio-user device register and unregister

2021-01-05 Thread Wu, Jingjing
> +static int iavf_emu_update_status(int vfio_dev_id) {
> + struct iavf_emudev *dev;
> + int ret;
> +
> + dev = find_iavf_with_dev_id(vfio_dev_id);
> + if (!dev)
> + return -1;
> +
> + ret = iavf_emu_setup_mem_table(dev);
> + if (ret) {
> + EMU_IAVF_LOG(ERR, "Failed to set up memtable for "
> + "device %d", dev->vfio->dev_id);
> + return ret;
> + }
> +
> + ret = iavf_emu_setup_irq(dev);
In the update callback, the irq fds will be reinitialized here. Consider the case 
where the update happens during mailbox communication: the mailbox eventfd would 
be cleared without any notification.

> + if (ret) {
> + EMU_IAVF_LOG(ERR, "Failed to set up irq for "
> + "device %d", dev->vfio->dev_id);
> + return ret;
> + }
> +
> + dev->ops->update_status(dev->edev);
> +
> + return 0;
> +}


Re: [dpdk-dev] [PATCH v2 4/8] emu/iavf: add vfio-user device register and unregister

2021-01-03 Thread Wu, Jingjing
> +static inline struct iavf_emu_sock_list * iavf_emu_find_sock_list(char
> +*sock_addr) {
> + struct iavf_emu_sock_list *list;
> + struct iavf_emudev *dev;
> + int list_exist;

Initialize list_exist to 0?
> +
> + if (!sock_addr)
> + return NULL;
> +
> + pthread_mutex_lock(&sock_list_lock);
> +
> + TAILQ_FOREACH(list, &sock_list, next) {
> + dev = (struct iavf_emudev *)list->emu_dev->priv_data;
> +
> + if (!strcmp(dev->sock_addr, sock_addr)) {
> + list_exist = 1;
> + break;
> + }
> + break;
This "break" need to be removed.

> + }
> +
> + pthread_mutex_unlock(&sock_list_lock);
> +
> + if (!list_exist)
> + return NULL;
> +
> + return list;
> +}
> +


Re: [dpdk-dev] [PATCH 5/9] vfio_user: implement interrupt related APIs

2020-12-29 Thread Wu, Jingjing
>   if ((cmd == VFIO_USER_DMA_MAP || cmd == VFIO_USER_DMA_UNMAP
> ||
> + cmd == VFIO_USER_DEVICE_SET_IRQS ||
>   cmd == VFIO_USER_DEVICE_RESET)
>   && dev->ops->lock_dp) {
>   dev->ops->lock_dp(dev_id, 1);

For the cmd "VFIO_USER_REGION_WRITE", an irq setting would trigger update_status 
on the iavfbe device. Where will the lock be taken?

> @@ -871,7 +1056,8 @@ static int vfio_user_message_handler(int dev_id, int fd)
>   if (vfio_user_is_ready(dev) && dev->ops->new_device)
>   dev->ops->new_device(dev_id);
>   } else {
> - if ((cmd == VFIO_USER_DMA_MAP || cmd ==
> VFIO_USER_DMA_UNMAP)
> + if ((cmd == VFIO_USER_DMA_MAP || cmd ==
> VFIO_USER_DMA_UNMAP
> + || cmd == VFIO_USER_DEVICE_SET_IRQS)
>   && dev->ops->update_status)
>   dev->ops->update_status(dev_id);
>   }
> @@ -898,6 +1084,7 @@ static int vfio_user_sock_read(int fd, void *data)
>   if (dev) {
>   dev->ops->destroy_device(dev_id);
>   vfio_user_destroy_mem_entries(dev->mem);
> + vfio_user_clean_irqfd(dev);
>   dev->is_ready = 0;
>   dev->msg_id = 0;
>   }
> @@ -995,9 +1182,9 @@ vfio_user_start_server(struct vfio_user_server_socket
> *sk)
>   }
> 
>   /* All the info must be set before start */
> - if (!dev->dev_info || !dev->reg) {
> + if (!dev->dev_info || !dev->reg || !dev->irqs.info) {
>   VFIO_USER_LOG(ERR, "Failed to start, "
> - "dev/reg info must be set before start\n");
> + "dev/reg/irq info must be set before start\n");
>   return -1;
>   }
> 



Re: [dpdk-dev] [PATCH v2 5/8] emu/iavf: add resource management and internal logic of iavf

2020-12-28 Thread Wu, Jingjing
> +static ssize_t iavf_emu_bar0_rw(struct rte_vfio_user_reg_info *reg, char
> *buf,
> + size_t count, loff_t pos, bool iswrite) {
> + struct iavf_emudev *dev = (struct iavf_emudev *)reg->priv;
> + char *reg_pos;
> +
> + if (!reg->base) {
> + EMU_IAVF_LOG(ERR, "BAR 0 does not exist\n");
> + return -EFAULT;
> + }
> +
> + if (pos + count > reg->info->size) {
> + EMU_IAVF_LOG(ERR, "Access exceeds BAR 0 size\n");
> + return -EINVAL;
> + }
> +
> + reg_pos = (char *)reg->base + pos;
> +
> + if (!iswrite) {
> + rte_memcpy(buf, reg_pos, count);
> + } else {
> + int tmp;
> + uint32_t val;
> + int idx = -1;
> +
> + if (count != 4)
> + return -EINVAL;
> +
> + val = *(uint32_t *)buf;
> + /* Only handle interrupt enable/disable for now */
> + if (pos == IAVF_VFINT_DYN_CTL01) {
> + tmp = val & IAVF_VFINT_DYN_CTL01_INTENA_MASK;
> + idx = 0;
> + } else if ((pos >= IAVF_VFINT_DYN_CTLN1(0)) && pos <=
> +
>   IAVF_VFINT_DYN_CTLN1(RTE_IAVF_EMU_MAX_INTR - 1)) {
> + tmp = val & IAVF_VFINT_DYN_CTLN1_INTENA_MASK;
> + idx = pos - IAVF_VFINT_DYN_CTLN1(0);
> + if (idx % 4)
> + return -EINVAL;
> + idx = idx / 4;
Should be idx = idx / 4 + 1; ?

> + }
> +
> + if (idx != -1 &&
> + tmp != dev->intr->info[idx].enable && dev->ready) {
> + dev->ops->update_status(dev->edev);
> + dev->intr->info[idx].enable = tmp;
dev->intr->info[idx].enable needs to be set before the update_status callback is 
called.

> + }
> +
> + rte_memcpy(reg_pos, buf, count);
> + }
> +
> + return count;
> +}


Re: [dpdk-dev] [PATCH v1] net/iavf: fix cannot release mbufs issue

2020-11-10 Thread Wu, Jingjing



> -Original Message-
> From: Xu, Ting 
> Sent: Wednesday, November 11, 2020 11:07 AM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z ; Xing, Beilei ;
> Wu, Jingjing ; Xu, Ting ;
> sta...@dpdk.org
> Subject: [PATCH v1] net/iavf: fix cannot release mbufs issue
> 
> In the function _iavf_rx_queue_release_mbufs_vec to release rx mbufs,
> rxq->rxrearm_nb is given the value of rx descriptor number at last.
> However, since the process to release and allocate mbufs lacks the
> initialization of rxrearm_nb, if we try to release mbufs next time, it will 
> return
> without releasing directly. In this patch, rxrearm_nb is initialized to be 
> zero in
> rx queue reset.
> 
> Fixes: 319c421f3890 ("net/avf: enable SSE Rx Tx")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ting Xu 

Acked-by: Jingjing Wu 



Re: [dpdk-dev] [PATCH v4] raw/ntb: add Ice Lake support for Intel NTB

2020-09-07 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun 
> Sent: Tuesday, September 8, 2020 11:28 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Maslekar, Omkar ; Li,
> Xiaoyun 
> Subject: [PATCH v4] raw/ntb: add Ice Lake support for Intel NTB
> 
> Add NTB device support (4th generation) for Intel Ice Lake platform.
> 
> Signed-off-by: Xiaoyun Li 
Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH v3] raw/ntb: add Icelake support for Intel NTB

2020-09-07 Thread Wu, Jingjing
> > > - rte_write64(limit, limit_addr);
> > > + if (is_gen3_ntb(hw)) {
> > > + /* Setup the external point so that remote can access. */
> > > + xlat_off = XEON_EMBAR1_OFFSET + 8 * mw_idx;
> > > + xlat_addr = hw->hw_addr + xlat_off;
> > > + limit_off = XEON_EMBAR1XLMT_OFFSET +
> > > + mw_idx * XEON_BAR_INTERVAL_OFFSET;
> > > + limit_addr = hw->hw_addr + limit_off;
> > > + base = rte_read64(xlat_addr);
> > > + base &= ~0xf;
> > > + limit = base + size;
> > > + rte_write64(limit, limit_addr);
> > > + } else if (is_gen4_ntb(hw)) {
> > Can we use a variable in struct to indicate it's gen4 or gen3 after init 
> > instead of
> > check it every time?
> 
> What's the difference? It comes from the value in hw->pci_dev->id.device_id.
> Checking it in this way is trying to make it easier to extend it for gen2 ntb 
> in the future.
> It's not either gen3 or gen4.
> I don't think it makes sense to have a bool value to indicate it's gen3 or 
> gen4.

Understood; as the inline function is very simple, it looks OK.
> 
> >
> > > + /* Set translate base address index register */
> > > + xlat_off = XEON_GEN4_IM1XBASEIDX_OFFSET +
> > > +mw_idx * XEON_GEN4_XBASEIDX_INTERVAL;
> > > + xlat_addr = hw->hw_addr + xlat_off;
> > > + rte_write16(rte_log2_u64(size), xlat_addr);
> > > + } else {
> > > + rte_write64(base, limit_addr);
> > > + rte_write64(0, xlat_addr);
> > > + return -ENOTSUP;
> > > + }
> > Is the else branch necessary? As if neither gen3 or gen4, the init would 
> > fail.
> > Would be better to print an ERR instead of just return NO support.
> 
> I don't think so.
> Yes. It will fail in init. Returning err is to stop other following actions 
> like in
> intel_ntb_vector_bind() since it should be stopped.
> And I'd like to keep them in one coding style. As to the print, I think that 
> can be upper
> layer's job to check the value and print err.
> Choosing ENOTSUP is because that in init, if it's not supported hw, it will 
> return -
> ENOTSUP err.
> 
I cannot say what you did is incorrect. But consider it this way: according to the 
current API design, the ntb raw device is allocated at driver probe time, and if 
init fails the raw device is freed. How would the ops be called then?

> > >
> > >   return 0;
> > >  }
> >



Re: [dpdk-dev] [PATCH v3] raw/ntb: add Icelake support for Intel NTB

2020-09-07 Thread Wu, Jingjing
> +
> +static int
> +intel_ntb_dev_init(const struct rte_rawdev *dev) {
> + struct ntb_hw *hw = dev->dev_private;
> + uint8_t bar;
> + int ret, i;
> +
> + if (hw == NULL) {
> + NTB_LOG(ERR, "Invalid device.");
> + return -EINVAL;
> + }
> +
>   hw->hw_addr = (char *)hw->pci_dev->mem_resource[0].addr;
> 
> + if (is_gen3_ntb(hw)) {
> + ret = intel_ntb3_check_ppd(hw);
> + } else if (is_gen4_ntb(hw)) {
> + /* PPD is in MMIO but not config space for NTB Gen4 */
> + ret = intel_ntb4_check_ppd(hw);
> + if (ret)
> + return ret;
Above two lines are not necessary.
> + } else {
> + return -ENOTSUP;
> + }
> +
> + if (ret)
> + return ret;
> +
>   hw->mw_cnt = XEON_MW_COUNT;
>   hw->db_cnt = XEON_DB_COUNT;
>   hw->spad_cnt = XEON_SPAD_COUNT;
> @@ -149,15 +219,28 @@ intel_ntb_mw_set_trans(const struct rte_rawdev
> *dev, int mw_idx,
>   rte_write64(base, xlat_addr);
>   rte_write64(limit, limit_addr);
> 
> - /* Setup the external point so that remote can access. */
> - xlat_off = XEON_EMBAR1_OFFSET + 8 * mw_idx;
> - xlat_addr = hw->hw_addr + xlat_off;
> - limit_off = XEON_EMBAR1XLMT_OFFSET + mw_idx *
> XEON_BAR_INTERVAL_OFFSET;
> - limit_addr = hw->hw_addr + limit_off;
> - base = rte_read64(xlat_addr);
> - base &= ~0xf;
> - limit = base + size;
> - rte_write64(limit, limit_addr);
> + if (is_gen3_ntb(hw)) {
> + /* Setup the external point so that remote can access. */
> + xlat_off = XEON_EMBAR1_OFFSET + 8 * mw_idx;
> + xlat_addr = hw->hw_addr + xlat_off;
> + limit_off = XEON_EMBAR1XLMT_OFFSET +
> + mw_idx * XEON_BAR_INTERVAL_OFFSET;
> + limit_addr = hw->hw_addr + limit_off;
> + base = rte_read64(xlat_addr);
> + base &= ~0xf;
> + limit = base + size;
> + rte_write64(limit, limit_addr);
> + } else if (is_gen4_ntb(hw)) {
Can we use a variable in the struct to indicate whether it is gen3 or gen4 after 
init, instead of checking it every time?

> + /* Set translate base address index register */
> + xlat_off = XEON_GEN4_IM1XBASEIDX_OFFSET +
> +mw_idx * XEON_GEN4_XBASEIDX_INTERVAL;
> + xlat_addr = hw->hw_addr + xlat_off;
> + rte_write16(rte_log2_u64(size), xlat_addr);
> + } else {
> + rte_write64(base, limit_addr);
> + rte_write64(0, xlat_addr);
> + return -ENOTSUP;
> + }
Is the else branch necessary? If it is neither gen3 nor gen4, init would fail. 
It would be better to print an ERR instead of just returning -ENOTSUP.
> 
>   return 0;
>  }




Re: [dpdk-dev] [PATCH 7/7] net/iavf: fix port close

2020-08-11 Thread Wu, Jingjing
If RTE_ETH_DEV_CLOSE_REMOVE is set, the port will be released when dev_close is 
called, so it is not necessary to mark it as closed.

Another concern is that the RESET virtchnl message is not sent to the PF in 
iavf_dev_reset.

Thanks
Jingjing

> -Original Message-
> From: Yang, SteveX 
> Sent: Tuesday, August 11, 2020 3:59 PM
> To: Wu, Jingjing ; Xing, Beilei
> ; dev@dpdk.org
> Cc: Yang, Qiming ; Yang, SteveX
> 
> Subject: [PATCH 7/7] net/iavf: fix port close
> 
> Port reset will call iavf_dev_uninit() to release resources. It wants to call
> iavf_dev_close() to release resources. So there will be a call conflict if 
> calling
> iavf_dev_reset() and iavf_dev_close() at the same time.
> 
> This patch added adapter->closed flag in iavf_dev_close() to control the
> status of close.
> 
> Fixes: 83fe5e80692a ("net/iavf: move device state flag")
> 
> Signed-off-by: SteveX Yang 
> ---
>  drivers/net/iavf/iavf.h| 1 +
>  drivers/net/iavf/iavf_ethdev.c | 6 ++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index
> 9be8a2381..06cbe6089 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -161,6 +161,7 @@ struct iavf_adapter {
>   bool tx_vec_allowed;
>   const uint32_t *ptype_tbl;
>   bool stopped;
> + bool closed;
>   uint16_t fdir_ref_cnt;
>  };
> 
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index f16aff531..b58e57b07 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1367,6 +1367,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
>   hw->back = IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data-
> >dev_private);
>   adapter->eth_dev = eth_dev;
>   adapter->stopped = 1;
> + adapter->closed = 0;
> 
>   if (iavf_init_vf(eth_dev) != 0) {
>   PMD_INIT_LOG(ERR, "Init vf failed");
> @@ -1423,6 +1424,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
>   struct iavf_adapter *adapter =
>   IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> 
> + if (adapter->closed == 1)
> + return;
> +
>   iavf_dev_stop(dev);
>   iavf_flow_flush(dev, NULL);
>   iavf_flow_uninit(adapter);
> @@ -1434,6 +1438,8 @@ iavf_dev_close(struct rte_eth_dev *dev)
>   rte_intr_callback_unregister(intr_handle,
>iavf_dev_interrupt_handler, dev);
>   iavf_disable_irq0(hw);
> +
> + adapter->closed = 1;
>  }
> 
>  static int
> --
> 2.17.1



Re: [dpdk-dev] [PATCH V4 2/4] net/i40e: FDIR flow memory management optimization

2020-07-16 Thread Wu, Jingjing



> -Original Message-
> From: Sun, Chenmin 
> Sent: Thursday, July 16, 2020 3:53 AM
> To: Zhang, Qi Z ; Xing, Beilei ; 
> Wu,
> Jingjing ; Wang, Haiyue 
> Cc: dev@dpdk.org; Sun, Chenmin 
> Subject: [PATCH V4 2/4] net/i40e: FDIR flow memory management optimization
> 
> From: Chenmin Sun 
> 
> This patch allocated some memory pool for flow management to avoid
> calling rte_zmalloc/rte_free every time.
> This patch also improves the hash table operation. When adding/removing
> a flow, the software will directly add/delete it from the hash table.
> If any error occurs, it then roll back the operation it just done.
> 
> Signed-off-by: Chenmin Sun 
Reviewed-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH V4 4/4] net/i40e: FDIR update rate optimization

2020-07-16 Thread Wu, Jingjing


[...]

> +static inline unsigned char *
> +i40e_find_available_buffer(struct rte_eth_dev *dev)
> +{
> + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> + struct i40e_fdir_info *fdir_info = &pf->fdir;
> + struct i40e_tx_queue *txq = pf->fdir.txq;
> + volatile struct i40e_tx_desc *txdp = &txq->tx_ring[txq->tx_tail + 1];
> + uint32_t i;
> +
> + /* no available buffer
> +  * search for more available buffers from the current
> +  * descriptor, until an unavailable one
> +  */
> + if (fdir_info->txq_available_buf_count <= 0) {
> + uint16_t tmp_tail;
> + volatile struct i40e_tx_desc *tmp_txdp;
> +
> + tmp_tail = txq->tx_tail;
> + tmp_txdp = &txq->tx_ring[tmp_tail + 1];
> +
> + do {
> + if ((tmp_txdp->cmd_type_offset_bsz &
> +
>   rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
> +
>   rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> + fdir_info->txq_available_buf_count++;
> + else
> + break;
> +
> + tmp_tail += 2;
> + if (tmp_tail >= txq->nb_tx_desc)
> + tmp_tail = 0;
> + } while (tmp_tail != txq->tx_tail);
> + }
> +
> + /*
> +  * if txq_available_buf_count > 0, just use the next one is ok,
> +  * else wait for the next DD until it's set to make sure the data
> +  * had been fetched by hardware
> +  */
> + if (fdir_info->txq_available_buf_count > 0) {
> + fdir_info->txq_available_buf_count--;
> + } else {
> + /* wait until the tx descriptor is ready */
> + for (i = 0; i < I40E_FDIR_MAX_WAIT_US; i++) {
> + if ((txdp->cmd_type_offset_bsz &
> +
>   rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) ==
> +
>   rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> + break;
> + rte_delay_us(1);
> + }
> + if (i >= I40E_FDIR_MAX_WAIT_US) {
> + PMD_DRV_LOG(ERR,
> + "Failed to program FDIR filter: time out to get DD 
> on tx
> queue.");
> + return NULL;
> + }
> + }
Why wait for I40E_FDIR_MAX_WAIT_US but not return NULL immediately?

[...]


>  i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
> enum i40e_filter_pctype pctype,
> const struct i40e_fdir_filter_conf *filter,
> -   bool add)
> +   bool add, bool wait_status)
>  {
>   struct i40e_tx_queue *txq = pf->fdir.txq;
>   struct i40e_rx_queue *rxq = pf->fdir.rxq;
> @@ -2011,8 +2092,10 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
>   volatile struct i40e_tx_desc *txdp;
>   volatile struct i40e_filter_program_desc *fdirdp;
>   uint32_t td_cmd;
> - uint16_t vsi_id, i;
> + uint16_t vsi_id;
>   uint8_t dest;
> + uint32_t i;
> + uint8_t retry_count = 0;
> 
>   PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
>   fdirdp = (volatile struct i40e_filter_program_desc *)
> @@ -2087,7 +2170,8 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
> 
>   PMD_DRV_LOG(INFO, "filling transmit descriptor.");
>   txdp = &txq->tx_ring[txq->tx_tail + 1];
> - txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr);
> + txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr[txq->tx_tail / 
> 2]);
> +
[txq->tx_tail / 2] is not readable; how about using the avail pkt you get
directly? Or another index to identify it?
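The readability point can be illustrated with a small sketch. The values below are hypothetical: each FDIR programming operation consumes two Tx descriptors, so descriptor slot 2*i maps to DMA buffer i, and giving that index a name is clearer than an inline `tx_tail / 2`.

```c
/* Hypothetical sketch: name the descriptor-pair -> buffer mapping
 * instead of writing txq->tx_tail / 2 inline. */
#include <assert.h>
#include <stdint.h>

#define NB_TX_DESC 8                /* hypothetical ring size (even) */
#define NB_FDIR_BUF (NB_TX_DESC / 2)

/* One DMA buffer per descriptor pair. */
static inline uint16_t fdir_buf_index(uint16_t tx_tail)
{
    return tx_tail / 2;
}

/* Each programming operation advances the tail by two descriptors. */
static inline uint16_t next_fdir_tail(uint16_t tx_tail)
{
    tx_tail += 2;
    return (tx_tail >= NB_TX_DESC) ? 0 : tx_tail;
}
```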
 
>   td_cmd = I40E_TX_DESC_CMD_EOP |
>I40E_TX_DESC_CMD_RS  |
>I40E_TX_DESC_CMD_DUMMY;
> @@ -2100,25 +2184,34 @@ i40e_flow_fdir_filter_programming(struct i40e_pf *pf,
>   txq->tx_tail = 0;
>   /* Update the tx tail register */
>   rte_wmb();
> +
> + /* capture the previous error report(if any) from rx ring */
> + while ((i40e_check_fdir_programming_status(rxq) < 0) &&
> + (++retry_count < 100))
> + PMD_DRV_LOG(INFO, "previous error report captured.");
> +
Why check the FDIR ring 100 times? And since "&&" is used here, is the log only
printed if the 100th check fails?

> 
> --
> 2.17.1



Re: [dpdk-dev] [PATCH V4 1/4] net/i40e: introducing the fdir space tracking

2020-07-16 Thread Wu, Jingjing



> -Original Message-
> From: Sun, Chenmin 
> Sent: Thursday, July 16, 2020 3:53 AM
> To: Zhang, Qi Z ; Xing, Beilei ; 
> Wu,
> Jingjing ; Wang, Haiyue 
> Cc: dev@dpdk.org; Sun, Chenmin 
> Subject: [PATCH V4 1/4] net/i40e: introducing the fdir space tracking
> 
> From: Chenmin Sun 
> 
> This patch introduces a fdir flow management for guaranteed/shared
> space tracking.
> The fdir space is reported by the
> i40e_hw_capabilities.fd_filters_guaranteed and fd_filters_best_effort.
> The fdir space is managed by hardware and now is tracking in software.
> The management algorithm is controlled by the GLQF_CTL.INVALPRIO.
> Detailed implementation please check in the datasheet and the
> description of struct i40e_fdir_info.fdir_invalprio.
> 
> This patch changes the global register GLQF_CTL. Therefore, when devarg
> ``support-multi-driver`` is set, the patch will not take effect to
> avoid affecting the normal behavior of other i40e drivers, e.g., Linux
> kernel driver.
> 
> Signed-off-by: Chenmin Sun 
Reviewed-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH 3/3] maintainers: update for driver testing tool

2020-04-26 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Sunday, April 26, 2020 4:22 PM
> To: dev@dpdk.org; Wu, Jingjing ; Lu, Wenzhuo
> ; Zhang, Qi Z 
> Subject: [PATCH 3/3] maintainers: update for driver testing tool
> 
> Replace Jingjing Wu with Beilei Xing.
> 
> Signed-off-by: Beilei Xing 

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [dpdk-stable] [PATCH v2 2/2] examples/vmdq: fix RSS configuration

2020-04-02 Thread Wu, Jingjing
> > > + rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
> > > + port_conf.rx_adv_conf.rss_conf.rss_hf &=
> > > + dev_info.flow_type_rss_offloads;
> > > + if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
> > > + printf("Port %u modified RSS hash function based on hardware
> > support,"
> >
> > This is RSS offload type but not hash function.
> 
> * The *rss_hf* field of the *rss_conf* structure indicates the different
>  * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
>  * Supplying an *rss_hf* equal to zero disables the RSS feature.
> 
> And in testpmd, it's the same.
> port->dev_conf.rx_adv_conf.rss_conf.rss_hf =
>   rss_hf & port->dev_info.flow_type_rss_offloads;

OK, I see. The definition of rss_hf at the beginning might be the hash function,
which is also the same as the RSS offload type.
Ignore my comments then.

BTW, "hash function" also indicates TOEPLITZ/XOR... elsewhere.


Thanks
Jingjing


Re: [dpdk-dev] [PATCH v2 1/2] doc: add user guide for VMDq

2020-04-02 Thread Wu, Jingjing


> -Original Message-
> From: dev  On Behalf Of Junyu Jiang
> Sent: Wednesday, March 25, 2020 2:33 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming ; Yigit, Ferruh 
> ; Jiang,
> JunyuX 
> Subject: [dpdk-dev] [PATCH v2 1/2] doc: add user guide for VMDq
> 
> Currently, there is no documentation for the vmdq example;
> this patch adds the user guide for vmdq.
> 
> Signed-off-by: Junyu Jiang 
Reviewed-by: Jingjing Wu  



Re: [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS configuration

2020-04-02 Thread Wu, Jingjing


> -Original Message-
> From: dev  On Behalf Of Junyu Jiang
> Sent: Wednesday, March 25, 2020 2:33 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming ; Yigit, Ferruh 
> ; Jiang,
> JunyuX ; sta...@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS configuration
> 
> In order that all queues of pools can receive packets,
> add enable-rss argument to change rss configuration.
> 
> Fixes: 6bb97df521aa ("examples/vmdq: new app")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Junyu Jiang 
> Acked-by: Xiaoyun Li 
> ---
>  doc/guides/sample_app_ug/vmdq_forwarding.rst |  6 +--
>  examples/vmdq/main.c | 39 +---
>  2 files changed, 37 insertions(+), 8 deletions(-)
> 
> diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> index df23043d6..658d6742d 100644
> --- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> +++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> @@ -26,13 +26,13 @@ The Intel® 82599 10 Gigabit Ethernet Controller NIC also 
> supports
> the splitting
>  While the Intel® X710 or XL710 Ethernet Controller NICs support many 
> configurations of
> VMDQ pools of 4 or 8 queues each.
>  And queues numbers for each VMDQ pool can be changed by setting
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
>  in config/common_* file.
> -The nb-pools parameter can be passed on the command line, after the EAL 
> parameters:
> +The nb-pools and enable-rss parameters can be passed on the command line, 
> after the
> EAL parameters:
> 
>  .. code-block:: console
> 
> -./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
> +./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss
> 
> -where, NP can be 8, 16 or 32.
> +where, NP can be 8, 16 or 32, rss is disabled by default.
> 
>  In Linux* user space, the application can display statistics with the number 
> of packets
> received on each queue.
>  To have the application display the statistics, send a SIGHUP signal to the 
> running
> application process.
> diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> index 00920..98032e6a3 100644
> --- a/examples/vmdq/main.c
> +++ b/examples/vmdq/main.c
> @@ -59,6 +59,7 @@ static uint32_t enabled_port_mask;
>  /* number of pools (if user does not specify any, 8 by default */
>  static uint32_t num_queues = 8;
>  static uint32_t num_pools = 8;
> +static uint8_t rss_enable;
> 
>  /* empty vmdq configuration structure. Filled in programatically */
>  static const struct rte_eth_conf vmdq_conf_default = {
> @@ -143,6 +144,13 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t
> num_pools)
>   (void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
>   (void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
>  sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
> + if (rss_enable) {
> + eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
> + eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
> + ETH_RSS_UDP |
> + ETH_RSS_TCP |
> + ETH_RSS_SCTP;
> + }
>   return 0;
>  }
> 
> @@ -164,6 +172,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
>   uint16_t q;
>   uint16_t queues_per_pool;
>   uint32_t max_nb_pools;
> + uint64_t rss_hf_tmp;
> 
>   /*
>* The max pool number from dev_info will be used to validate the pool
> @@ -209,6 +218,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
>   if (!rte_eth_dev_is_valid_port(port))
>   return -1;
> 
> + rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
> + port_conf.rx_adv_conf.rss_conf.rss_hf &=
> + dev_info.flow_type_rss_offloads;
> + if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
> + printf("Port %u modified RSS hash function based on hardware 
> support,"

This is RSS offload type but not hash function.


Re: [dpdk-dev] [PATCH 07/12] net/iavf: add flow director enabled switch value

2020-03-25 Thread Wu, Jingjing



-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Leyi Rong
Sent: Monday, March 16, 2020 3:46 PM
To: Zhang, Qi Z ; Ye, Xiaolong 
Cc: dev@dpdk.org; Rong, Leyi 
Subject: [dpdk-dev] [PATCH 07/12] net/iavf: add flow director enabled switch 
value

The commit adds the fdir_enabled flag into the iavf_adapter structure to identify
if the FDIR ID is active. The Rx data path can benefit if FDIR ID parsing is not
needed, especially in the vector path.

Signed-off-by: Leyi Rong 
---
 drivers/net/iavf/iavf.h  |  1 +
 drivers/net/iavf/iavf_rxtx.h | 26 ++
 2 files changed, 27 insertions(+)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 4fe15237a..1918a67f1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -142,6 +142,7 @@ struct iavf_adapter {
bool tx_vec_allowed;
const uint32_t *ptype_tbl;
bool stopped;
+   uint8_t fdir_enabled;
 };
 
 /* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index c85207dae..5548d1adb 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -281,6 +281,32 @@ void iavf_dump_tx_descriptor(const struct iavf_tx_queue 
*txq,
   tx_desc->cmd_type_offset_bsz);
 }
 
+/* Enable/disable flow director Rx processing in data path. */
+static inline void
+iavf_fdir_rx_proc_enable(struct iavf_adapter *ad, bool on)
+{
+   static uint32_t ref_cnt;
+
+   if (on) {
+   /* enable flow director processing */
+   if (ref_cnt++ == 0) {
+   ad->fdir_enabled = on;
+   PMD_DRV_LOG(DEBUG,
+   "FDIR processing on RX set to %d", on);
+   }
+   } else {
+   if (ref_cnt >= 1) {
+   ref_cnt--;
+
+   if (ref_cnt == 0) {
+   ad->fdir_enabled = on;
+   PMD_DRV_LOG(DEBUG,
+   "FDIR processing on RX set to %d", 
on);
+   }
+   }
+   }
+}
+

fdir_enabled is used in the fast path. To avoid contention, how about making it
queue granularity?
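The per-queue idea can be sketched as follows. The structures are simplified stand-ins, not the real iavf types: the point is only that the fast path reads a flag inside the queue it already owns instead of a shared adapter-wide variable.

```c
/* Hypothetical per-queue variant of the fdir_enabled flag: each Rx queue
 * carries its own copy, so the polling core reads queue-local state. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_RXQ 4

struct rx_queue {
    uint16_t queue_id;
    bool fdir_enabled;   /* read only by the lcore polling this queue */
};

struct adapter {
    struct rx_queue rxq[MAX_RXQ];
    uint16_t nb_rxq;
};

static void fdir_rx_proc_enable(struct adapter *ad, bool on)
{
    /* Control path: set the flag per queue; each data-path core then
     * checks a field local to its own queue, not a shared hot variable. */
    for (uint16_t i = 0; i < ad->nb_rxq; i++)
        ad->rxq[i].fdir_enabled = on;
}
```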


 #ifdef RTE_LIBRTE_IAVF_DEBUG_DUMP_DESC
 #define IAVF_DUMP_RX_DESC(rxq, desc, rx_id) \
iavf_dump_rx_descriptor(rxq, desc, rx_id)
--
2.17.1



Re: [dpdk-dev] [PATCH 04/12] net/iavf: flexible Rx descriptor support in normal path

2020-03-25 Thread Wu, Jingjing
One general comment:
Looks like there is exactly the same code as in the legacy Rx path, such as the
logic to update the tail and the multi-segment loop.
It would be good if all the common code could be wrapped up and reused across the
multiple receive functions.

[...]

+/* Get the number of used descriptors of a rx queue for flexible RXD */
+uint32_t
+iavf_dev_rxq_count_flex_rxd(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+#define IAVF_RXQ_SCAN_INTERVAL 4
+   volatile union iavf_rx_flex_desc *rxdp;
+   struct iavf_rx_queue *rxq;
+   uint16_t desc = 0;
+
+   rxq = dev->data->rx_queues[queue_id];
+   rxdp = (volatile union iavf_rx_flex_desc *)&rxq->rx_ring[rxq->rx_tail];
+   while ((desc < rxq->nb_rx_desc) &&
+  rte_le_to_cpu_16(rxdp->wb.status_error0) &
+  (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S)) {
+   /* Check the DD bit of a rx descriptor of each 4 in a group,
+* to avoid checking too frequently and downgrading performance
+* too much.
+*/
+   desc += IAVF_RXQ_SCAN_INTERVAL;
+   rxdp += IAVF_RXQ_SCAN_INTERVAL;
+   if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+   rxdp = (volatile union iavf_rx_flex_desc *)
+   &(rxq->rx_ring[rxq->rx_tail +
+   desc - rxq->nb_rx_desc]);
+   }
+
+   return desc;
+}

Not much difference from iavf_dev_rxq_count. Why do we need a new one? The DD bit
is located in the same place, right?
Can we merge those two functions into one?
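The suggested merge can be sketched like this. The descriptor layouts below are simplified stand-ins, not the real iavf unions: one count loop, with the DD-bit check selected by the queue's descriptor format.

```c
/* Hypothetical merged rxq_count: one scan loop whose DD check depends on
 * whether the queue uses the flexible or the legacy descriptor format. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SCAN_INTERVAL 4
#define FLEX_DD   (1u << 0)    /* stand-in for the flex status DD bit */
#define LEGACY_DD (1ull << 0)  /* stand-in for the legacy qword1 DD bit */

struct rx_desc {
    uint64_t qword1;           /* legacy writeback word */
    uint16_t status_error0;    /* flex writeback word */
};

struct rx_queue {
    struct rx_desc ring[64];
    uint16_t nb_desc;
    uint16_t tail;
    bool flex_rxd;
};

static bool desc_done(const struct rx_queue *q, const struct rx_desc *d)
{
    return q->flex_rxd ? (d->status_error0 & FLEX_DD) != 0
                       : (d->qword1 & LEGACY_DD) != 0;
}

static uint32_t rxq_count(const struct rx_queue *q)
{
    uint32_t desc = 0;
    uint16_t idx = q->tail;

    /* Check every SCAN_INTERVAL-th descriptor, as the original does. */
    while (desc < q->nb_desc && desc_done(q, &q->ring[idx])) {
        desc += SCAN_INTERVAL;
        idx = (uint16_t)((idx + SCAN_INTERVAL) % q->nb_desc);
    }
    return desc;
}
```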
+
 /* Get the number of used descriptors of a rx queue */
 uint32_t
 iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
@@ -1795,6 +2264,10 @@ iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
 
rxq = dev->data->rx_queues[queue_id];
rxdp = &rxq->rx_ring[rxq->rx_tail];
+
+   if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+   return iavf_dev_rxq_count_flex_rxd(dev, queue_id);
+
while ((desc < rxq->nb_rx_desc) &&
   ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
  IAVF_RXD_QW1_STATUS_MASK) >> IAVF_RXD_QW1_STATUS_SHIFT) &
@@ -1813,6 +2286,31 @@ iavf_dev_rxq_count(struct rte_eth_dev *dev, uint16_t queue_id)
return desc;
 }
 
+int
+iavf_dev_rx_desc_status_flex_rxd(void *rx_queue, uint16_t offset) {
+   volatile union iavf_rx_flex_desc *rxdp;
+   struct iavf_rx_queue *rxq = rx_queue;
+   uint32_t desc;
+
+   if (unlikely(offset >= rxq->nb_rx_desc))
+   return -EINVAL;
+
+   if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+   return RTE_ETH_RX_DESC_UNAVAIL;
+
+   desc = rxq->rx_tail + offset;
+   if (desc >= rxq->nb_rx_desc)
+   desc -= rxq->nb_rx_desc;
+
+   rxdp = (volatile union iavf_rx_flex_desc *)&rxq->rx_ring[desc];
+   if (rte_le_to_cpu_16(rxdp->wb.status_error0) &
+   (1 << IAVF_RX_FLEX_DESC_STATUS0_DD_S))
+   return RTE_ETH_RX_DESC_DONE;
+
+   return RTE_ETH_RX_DESC_AVAIL;
+}
+

Similar comments as above.

[..] 

  {
@@ -569,6 +596,20 @@ iavf_configure_queues(struct iavf_adapter *adapter)
vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+   if (vf->vf_res->vf_cap_flags &
+   VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+   vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
+   vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
+   rxq[i]->rxdid = IAVF_RXDID_COMMS_OVS_1;
Because this function is used to construct the virtchnl message for configuring
queues, the rxdid in rxq[i] should be set before this function is called. How
about moving this assignment to iavf_dev_rx_queue_setup?
+   PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+   "Queue[%d]", vc_qp->rxq.rxdid, i);
+   } else {
+   vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+   rxq[i]->rxdid = IAVF_RXDID_LEGACY_1;
Same as above.

+   PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+   "Queue[%d]", vc_qp->rxq.rxdid, i);
+   }
}
}
 
--
2.17.1



Re: [dpdk-dev] [PATCH v4 1/7] net/iavf: stop the PCI probe in DCF mode

2020-03-25 Thread Wu, Jingjing



-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Haiyue Wang
Sent: Thursday, March 26, 2020 11:04 AM
To: dev@dpdk.org; Ye, Xiaolong ; Zhang, Qi Z 
; Yang, Qiming ; Xing, Beilei 

Cc: Zhao1, Wei ; Wang, Haiyue 
Subject: [dpdk-dev] [PATCH v4 1/7] net/iavf: stop the PCI probe in DCF mode

A new DCF PMD will be introduced, which runs on Intel VF hardware, and it is a
pure software design to control the advanced functionality (such as switch, ACL)
for the rest of the VFs.

So if the DCF (Device Config Function) mode is specified by the devarg 
'cap=dcf', then it will stop the PCI probe in the iavf PMD.

Signed-off-by: Haiyue Wang 
Reviewed-by: Xiaolong Ye 


Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH] net/iavf: enable port reset

2020-03-25 Thread Wu, Jingjing


-Original Message-
From: Cui, LunyuanX 
Sent: Wednesday, March 25, 2020 10:48 AM
To: dev@dpdk.org
Cc: Wu, Jingjing ; Yang, Qiming ; 
Cui, LunyuanX 
Subject: [PATCH] net/iavf: enable port reset

This patch is intended to add the iavf_dev_reset ops, enabling iavf to support
"port reset all".

Signed-off-by: Lunyuan Cui 

Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH] net/iavf: unify Rx ptype table

2020-03-24 Thread Wu, Jingjing


> -Original Message-
> From: Wang, ShougangX 
> Sent: Friday, March 6, 2020 10:24 AM
> To: dev@dpdk.org
> Cc: Rong, Leyi ; Wu, Jingjing ; 
> Wang,
> ShougangX 
> Subject: [PATCH] net/iavf: unify Rx ptype table
> 
> From: Wang Shougang 
Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH] net/iavf: unify Rx ptype table

2020-03-23 Thread Wu, Jingjing



> -Original Message-
> From: Wang, ShougangX 
> Sent: Monday, March 23, 2020 4:16 PM
> To: Wu, Jingjing ; dev@dpdk.org
> Cc: Rong, Leyi 
> Subject: RE: [PATCH] net/iavf: unify Rx ptype table
> 
> > -Original Message-
> > From: Wu, Jingjing
> > Sent: Monday, March 23, 2020 10:09 AM
> > To: Wang, ShougangX ; dev@dpdk.org
> > Cc: Rong, Leyi 
> > Subject: RE: [PATCH] net/iavf: unify Rx ptype table
> >
> >
> >
> > -Original Message-
> > From: Wang, ShougangX
> > Sent: Friday, March 6, 2020 10:24 AM
> > To: dev@dpdk.org
> > Cc: Rong, Leyi ; Wu, Jingjing ;
> > Wang, ShougangX 
> > Subject: [PATCH] net/iavf: unify Rx ptype table
> >
> > From: Wang Shougang 
> >
> > This patch unified the Rx ptype table.
> >
> > Signed-off-by: Wang Shougang 
> > ---
> >  drivers/net/iavf/iavf.h   |   3 +-
> >  drivers/net/iavf/iavf_ethdev.c|   3 +
> >  drivers/net/iavf/iavf_rxtx.c  | 604 +++---
> >  drivers/net/iavf/iavf_rxtx.h  |   3 +
> >  drivers/net/iavf/iavf_rxtx_vec_avx2.c |  21 +-
> > drivers/net/iavf/iavf_rxtx_vec_sse.c  |  25 +-
> >  6 files changed, 561 insertions(+), 98 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
> > index fe25d807c..526040c6e 100644
> > --- a/drivers/net/iavf/iavf.h
> > +++ b/drivers/net/iavf/iavf.h
> > @@ -119,7 +119,7 @@ struct iavf_info {
> > uint16_t rxq_map[IAVF_MAX_MSIX_VECTORS];  };
> >
> > -#define IAVF_MAX_PKT_TYPE 256
> > +#define IAVF_MAX_PKT_TYPE 1024
> >
> >  /* Structure to store private data for each VF instance. */
> >  struct iavf_adapter {
> > @@ -131,6 +131,7 @@ struct iavf_adapter {
> > /* For vector PMD */
> > bool rx_vec_allowed;
> > bool tx_vec_allowed;
> > +   const uint32_t *ptype_tbl;
> > bool stopped;
> >  };
> >
> > diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> > index 34913f9c4..ee9f82249 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -1334,6 +1334,9 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
> > return -1;
> > }
> >
> > +   /* set default ptype table */
> > +   adapter->ptype_tbl = iavf_get_default_ptype_table();
> > +
> > As the ptype table is static, is it necessary to define a function to get it?
> > Is there any consideration for future extension?
> 
> Yes, I'm used to encapsulating it as a function for future extension.
> Do I need to set it as a global table instead of encapsulating in function?
Is there any chance the default ptype table will change? If so, I think you can
keep it as a function.
> 
> Thanks.
> Shougang


Re: [dpdk-dev] [PATCH] net/iavf: unify Rx ptype table

2020-03-22 Thread Wu, Jingjing



-Original Message-
From: Wang, ShougangX 
Sent: Friday, March 6, 2020 10:24 AM
To: dev@dpdk.org
Cc: Rong, Leyi ; Wu, Jingjing ; 
Wang, ShougangX 
Subject: [PATCH] net/iavf: unify Rx ptype table

From: Wang Shougang 

This patch unified the Rx ptype table.

Signed-off-by: Wang Shougang 
---
 drivers/net/iavf/iavf.h   |   3 +-
 drivers/net/iavf/iavf_ethdev.c|   3 +
 drivers/net/iavf/iavf_rxtx.c  | 604 +++---
 drivers/net/iavf/iavf_rxtx.h  |   3 +
 drivers/net/iavf/iavf_rxtx_vec_avx2.c |  21 +-  
drivers/net/iavf/iavf_rxtx_vec_sse.c  |  25 +-
 6 files changed, 561 insertions(+), 98 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index fe25d807c..526040c6e 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
uint16_t rxq_map[IAVF_MAX_MSIX_VECTORS];  };
 
-#define IAVF_MAX_PKT_TYPE 256
+#define IAVF_MAX_PKT_TYPE 1024
 
 /* Structure to store private data for each VF instance. */
 struct iavf_adapter {
@@ -131,6 +131,7 @@ struct iavf_adapter {
/* For vector PMD */
bool rx_vec_allowed;
bool tx_vec_allowed;
+   const uint32_t *ptype_tbl;
bool stopped;
 };
 
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c 
index 34913f9c4..ee9f82249 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1334,6 +1334,9 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
return -1;
}
 
+   /* set default ptype table */
+   adapter->ptype_tbl = iavf_get_default_ptype_table();
+
As the ptype table is static, is it necessary to define a function to get it?
Is there any consideration for future extension?


Thanks
Jingjing


Re: [dpdk-dev] [PATCH v1 1/4] net/iavf: stop the PCI probe in DCF mode

2020-03-22 Thread Wu, Jingjing
+static int
+handle_dcf_arg(__rte_unused const char *key, const char *value,
+  __rte_unused void *arg)
__rte_unused is not needed on 'arg' here, since 'arg' is used in the function body.

+{
+   bool *dcf = arg;
+
+   if (arg == NULL || value == NULL)
+   return -EINVAL;
+
+   if (strcmp(value, "dcf") == 0)
+   *dcf = true;
+   else
+   *dcf = false;
+
+   return 0;
+}
+
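The point about the attribute can be shown with a corrected sketch: only the parameter that is genuinely unused ('key') should carry __rte_unused, since 'value' and 'arg' are both read. The macro definition below is a stand-in for DPDK's, so the snippet compiles on its own.

```c
/* Sketch of the corrected signature: __rte_unused only on 'key'.
 * The macro stands in for DPDK's definition of __rte_unused. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define __rte_unused __attribute__((unused))

static int
handle_dcf_arg(__rte_unused const char *key, const char *value, void *arg)
{
    bool *dcf = arg;

    if (arg == NULL || value == NULL)
        return -1; /* -EINVAL in the real driver */

    *dcf = (strcmp(value, "dcf") == 0);
    return 0;
}
```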


Re: [dpdk-dev] [PATCH] net/ice: correct VSI context

2019-12-29 Thread Wu, Jingjing



> -Original Message-
> From: Xing, Beilei 
> Sent: Saturday, December 14, 2019 2:14 PM
> To: Wu, Jingjing ; dev@dpdk.org; Zhang, Qi Z 
> 
> Cc: sta...@dpdk.org
> Subject: [PATCH] net/ice: correct VSI context
> 
> There will always be an MDD event triggered when adding
> an FDIR rule. The root cause is that 'LAN enable' is not
> configured during control VSI setup.
> Besides, correct the FDIR fields for both the main VSI and
> the control VSI.
> 
> Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Beilei Xing 
Acked-by: Jingjing Wu 


Re: [dpdk-dev] [PATCH] raw/ntb: fix write memory barrier issue

2019-12-25 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun 
> Sent: Wednesday, December 4, 2019 11:19 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Li, Xiaoyun ; sta...@dpdk.org
> Subject: [PATCH] raw/ntb: fix write memory barrier issue
> 
> All buffers and ring info should be written before tail register update.
> This patch relocates the write memory barrier before updating tail register
> to avoid potential issues.
> 
> Fixes: 11b5c7daf019 ("raw/ntb: add enqueue and dequeue functions")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Xiaoyun Li 
Acked-by: Jingjing Wu 
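The ordering rule in the commit message can be sketched in a simplified, single-producer form. This is not the ntb driver code: C11 atomics stand in for the driver's rte_io_wmb(), and the asserts only exercise the single-threaded logic, not the cross-core visibility the barrier exists for.

```c
/* Sketch of the fix above: fill the ring entries first, issue a release
 * barrier, and only then publish the new tail to the peer. */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 8

static uint32_t ring[RING_SIZE];
static _Atomic uint16_t tail;

static void enqueue(const uint32_t *buf, uint16_t n)
{
    uint16_t t = atomic_load_explicit(&tail, memory_order_relaxed);

    for (uint16_t i = 0; i < n; i++)
        ring[(t + i) % RING_SIZE] = buf[i];

    /* All ring writes must be visible before the tail moves; in the
     * driver this is the rte_io_wmb() placed before *txq->used_cnt. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&tail, (uint16_t)((t + n) % RING_SIZE),
                          memory_order_relaxed);
}
```

Updating the tail first (as the code did before the fix) would let the consumer observe the new tail while the ring entries are still stale.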


Re: [dpdk-dev] [PATCH] doc: fix a typo in ntb guide

2019-12-22 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun 
> Sent: Wednesday, December 4, 2019 11:20 PM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Li, Xiaoyun ; sta...@dpdk.org
> Subject: [PATCH] doc: fix a typo in ntb guide
> 
> In prerequisites of ntb guide, the correct flag when loading igb_uio
> module should be `wc_activate=1`, not `wc_active=1`.
> 
> Fixes: 11b5c7daf019 ("raw/ntb: add enqueue and dequeue functions")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Xiaoyun Li 
Acked-by: Jingjing Wu 

> ---
>  doc/guides/rawdevs/ntb.rst | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/rawdevs/ntb.rst b/doc/guides/rawdevs/ntb.rst
> index 58472135f..aa7d80964 100644
> --- a/doc/guides/rawdevs/ntb.rst
> +++ b/doc/guides/rawdevs/ntb.rst
> @@ -52,11 +52,11 @@ NTB PMD needs kernel PCI driver to support write 
> combining (WC) to
> get
>  better performance. The difference will be more than 10 times.
>  To enable WC, there are 2 ways.
> 
> -- Insert igb_uio with ``wc_active=1`` flag if use igb_uio driver.
> +- Insert igb_uio with ``wc_activate=1`` flag if use igb_uio driver.
> 
>  .. code-block:: console
> 
> -  insmod igb_uio.ko wc_active=1
> +  insmod igb_uio.ko wc_activate=1
> 
>  - Enable WC for NTB device's Bar 2 and Bar 4 (Mapped memory) manually.
>The reference is https://www.kernel.org/doc/html/latest/x86/mtrr.html
> --
> 2.17.1



Re: [dpdk-dev] [PATCH v2] raw/ntb: fix write memory barrier issue

2019-12-22 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun 
> Sent: Monday, December 16, 2019 9:59 AM
> To: Wu, Jingjing 
> Cc: dev@dpdk.org; Maslekar, Omkar ; Li, Xiaoyun
> ; sta...@dpdk.org
> Subject: [PATCH v2] raw/ntb: fix write memory barrier issue
> 
> All buffers and ring info should be written before tail register update.
> This patch relocates the write memory barrier before updating tail register
> to avoid potential issues.
> 
> Fixes: 11b5c7daf019 ("raw/ntb: add enqueue and dequeue functions")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Xiaoyun Li 
Acked-by: Jingjing Wu 

> ---
> v2:
>  * Replaced rte_wmb with rte_io_wmb since rte_io_wmb is enough.
> ---
>  drivers/raw/ntb/ntb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
> index ad7f6abfd..c7de86f36 100644
> --- a/drivers/raw/ntb/ntb.c
> +++ b/drivers/raw/ntb/ntb.c
> @@ -683,8 +683,8 @@ ntb_enqueue_bufs(struct rte_rawdev *dev,
>  sizeof(struct ntb_used) * nb1);
>   rte_memcpy(txq->tx_used_ring, tx_used + nb1,
>  sizeof(struct ntb_used) * nb2);
> + rte_io_wmb();
>   *txq->used_cnt = txq->last_used;
> - rte_wmb();
> 
>   /* update queue stats */
>   hw->ntb_xstats[NTB_TX_BYTES_ID + off] += bytes;
> @@ -789,8 +789,8 @@ ntb_dequeue_bufs(struct rte_rawdev *dev,
>  sizeof(struct ntb_desc) * nb1);
>   rte_memcpy(rxq->rx_desc_ring, rx_desc + nb1,
>  sizeof(struct ntb_desc) * nb2);
> + rte_io_wmb();
>   *rxq->avail_cnt = rxq->last_avail;
> - rte_wmb();
> 
>   /* update queue stats */
>   off = NTB_XSTATS_NUM * ((size_t)context + 1);
> --
> 2.17.1



Re: [dpdk-dev] [PATCH v2] net/iavf: fix Rx total stats

2019-12-12 Thread Wu, Jingjing



> -Original Message-
> From: Min, JiaqiX 
> Sent: Friday, December 13, 2019 9:23 AM
> To: dev@dpdk.org
> Cc: Wu, Jingjing ; Yang, Qiming 
> ; Min, JiaqiX
> ; sta...@dpdk.org
> Subject: [PATCH v2] net/iavf: fix Rx total stats
> 
> Rx total stats is the total number of successfully received packets,
> so exclude the number of rx_discards for Rx total stats.
> 
> Fixes: f4a41a6953af ("net/avf: support stats")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Jiaqi Min 
Acked-by: Jingjing Wu 




Re: [dpdk-dev] [PATCH v6 0/4] enable FIFO for NTB

2019-09-25 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun
> Sent: Thursday, September 26, 2019 11:20 AM
> To: Wu, Jingjing ; Wiles, Keith 
> ; Maslekar,
> Omkar ; Liang, Cunming 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: [PATCH v6 0/4] enable FIFO for NTB
> 
> Enable FIFO for NTB rawdev driver to support packet based
> processing. And an example is provided to support txonly,
> rxonly, iofwd between NTB device and ethdev, and file
> transmission.
> 
> Acked-by: Omkar Maslekar 
> 
Series Acked-by: Jingjing Wu  


Thanks
Jingjing


Re: [dpdk-dev] [PATCH v4 4/4] examples/ntb: support more functions for NTB

2019-09-23 Thread Wu, Jingjing
<...>

> +* ``--qp=N``
> +
> +  Set the number of queues as N, where qp > 0.
The default value is 1?

<...>

> +
> + /* Set default fwd mode if user doesn't set it. */
> + if (fwd_mode == MAX_FWD_MODE && eth_port_id < RTE_MAX_ETHPORTS) {
> + printf("Set default fwd mode as iofwd.\n");
> + fwd_mode = IOFWD;
> + }
> + if (fwd_mode == MAX_FWD_MODE) {

Use "else if"? Because (fwd_mode == MAX_FWD_MODE) includes the case (fwd_mode ==
MAX_FWD_MODE && eth_port_id < RTE_MAX_ETHPORTS).
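The suggested control flow can be sketched as below. Names and values are hypothetical stand-ins for the example app's: since the second test is a superset of the first, chaining with "else if" makes the intent explicit and guarantees only one branch runs.

```c
/* Sketch of the "else if" suggestion: default the fwd mode when an
 * ethdev exists, otherwise fail; never fall through into both tests. */
#include <assert.h>
#include <stdint.h>

enum fwd_mode_t { FILE_TRANS, IOFWD, MAX_FWD_MODE };
#define MAX_ETHPORTS 32   /* stand-in for RTE_MAX_ETHPORTS */

static enum fwd_mode_t
resolve_fwd_mode(enum fwd_mode_t fwd_mode, uint16_t eth_port_id, int *err)
{
    *err = 0;
    if (fwd_mode == MAX_FWD_MODE && eth_port_id < MAX_ETHPORTS)
        fwd_mode = IOFWD;   /* default when an ethdev is present */
    else if (fwd_mode == MAX_FWD_MODE)
        *err = -1;          /* no mode chosen and no ethdev: give up */
    return fwd_mode;
}
```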

Thanks
Jingjing


Re: [dpdk-dev] [PATCH v4 3/4] raw/ntb: add enqueue and dequeue functions

2019-09-22 Thread Wu, Jingjing



> -Original Message-
> From: Li, Xiaoyun
> Sent: Monday, September 9, 2019 11:27 AM
> To: Wu, Jingjing ; Wiles, Keith 
> ; Maslekar,
> Omkar ; Liang, Cunming 
> Cc: dev@dpdk.org; Li, Xiaoyun 
> Subject: [PATCH v4 3/4] raw/ntb: add enqueue and dequeue functions
> 
> Introduce enqueue and dequeue functions to support packet based
> processing. And enable write-combining for ntb driver since it
> can improve the performance a lot.
> 
> Signed-off-by: Xiaoyun Li 
Acked-by: Jingjing Wu 



Re: [dpdk-dev] [PATCH v4 2/4] raw/ntb: add xstats support

2019-09-22 Thread Wu, Jingjing
>  static int
> -ntb_xstats_reset(struct rte_rawdev *dev __rte_unused,
> -  const uint32_t ids[] __rte_unused,
> -  uint32_t nb_ids __rte_unused)
> +ntb_xstats_reset(struct rte_rawdev *dev,
> +  const uint32_t ids[],
> +  uint32_t nb_ids)
>  {
> - return 0;
> -}
> + struct ntb_hw *hw = dev->dev_private;
> + uint32_t i, xstats_num;
> 
> + xstats_num = NTB_XSTATS_NUM * (hw->queue_pairs + 1);
> + for (i = 0; i < nb_ids && ids[i] < xstats_num; i++)
> + hw->ntb_xstats[ids[i]] = 0;
> +
As there is no lock for the xstats, the enqueue and dequeue threads are updating
the values at the same time, which will cause a race.
Suggest saving the ntb_xstats instead, and updating the values where the enqueue
and dequeue paths already update them.
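One way to realize this suggestion is a baseline scheme, sketched below. This is a hypothetical simplification, not the ntb driver code: reset records a per-stat offset written only by the control path, and reads report counter minus offset, so the hot counters are never written concurrently with the data path.

```c
/* Sketch: reset records a baseline instead of zeroing the live counters
 * that the enqueue/dequeue threads keep writing. */
#include <assert.h>
#include <stdint.h>

#define XSTATS_NUM 4

static uint64_t xstats[XSTATS_NUM];     /* written by the data path */
static uint64_t xstats_off[XSTATS_NUM]; /* written only at reset time */

static void xstats_reset(const uint32_t ids[], uint32_t nb)
{
    for (uint32_t i = 0; i < nb && ids[i] < XSTATS_NUM; i++)
        xstats_off[ids[i]] = xstats[ids[i]]; /* no write to hot counter */
}

static uint64_t xstats_get(uint32_t id)
{
    return xstats[id] - xstats_off[id];
}
```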

Thanks
Jingjing


Re: [dpdk-dev] [PATCH v4 1/4] raw/ntb: setup ntb queue

2019-09-22 Thread Wu, Jingjing
<...>
> +static void
> +ntb_rxq_release(struct ntb_rx_queue *rxq)
> +{
> + if (!rxq) {
> + NTB_LOG(ERR, "Pointer to rxq is NULL");
> + return;
> + }
> +
> + ntb_rxq_release_mbufs(rxq);
> +
> + rte_free(rxq->sw_ring);
> + rte_free(rxq);
It's better to free rxq outside of this function, as the pointer param "rxq"
cannot be set to NULL in this function.

> +}
> +
> +static int
> +ntb_rxq_setup(struct rte_rawdev *dev,
> +   uint16_t qp_id,
> +   rte_rawdev_obj_t queue_conf)
> +{
> + struct ntb_queue_conf *rxq_conf = queue_conf;
> + struct ntb_hw *hw = dev->dev_private;
> + struct ntb_rx_queue *rxq;
> +
> + /* Allocate the rx queue data structure */
> + rxq = rte_zmalloc_socket("ntb rx queue",
> +  sizeof(struct ntb_rx_queue),
> +  RTE_CACHE_LINE_SIZE,
> +  dev->socket_id);
> + if (!rxq) {
> + NTB_LOG(ERR, "Failed to allocate memory for "
> + "rx queue data structure.");
> + return -ENOMEM;
Need to free rxq here.
<...>

> +static void
> +ntb_txq_release(struct ntb_tx_queue *txq)
>  {
> + if (!txq) {
> + NTB_LOG(ERR, "Pointer to txq is NULL");
> + return;
> + }
> +
> + ntb_txq_release_mbufs(txq);
> +
> + rte_free(txq->sw_ring);
> + rte_free(txq);
The same as above "ntb_rxq_release".

<...>

> +static int
> +ntb_queue_setup(struct rte_rawdev *dev,
> + uint16_t queue_id,
> + rte_rawdev_obj_t queue_conf)
> +{
> + struct ntb_hw *hw = dev->dev_private;
> + int ret;
> +
> + if (queue_id > hw->queue_pairs)
Should be ">=" ?

> + return -EINVAL;
> +
> + ret = ntb_txq_setup(dev, queue_id, queue_conf);
> + if (ret < 0)
> + return ret;
> +
> + ret = ntb_rxq_setup(dev, queue_id, queue_conf);
> +
> + return ret;
> +}
> +
>  static int
> -ntb_queue_release(struct rte_rawdev *dev __rte_unused,
> -   uint16_t queue_id __rte_unused)
> +ntb_queue_release(struct rte_rawdev *dev, uint16_t queue_id)
>  {
> + struct ntb_hw *hw = dev->dev_private;
> + struct ntb_tx_queue *txq;
> + struct ntb_rx_queue *rxq;
> +
> + if (queue_id > hw->queue_pairs)
Should be ">=" ?

> + return -EINVAL;
> +
> + txq = hw->tx_queues[queue_id];
> + rxq = hw->rx_queues[queue_id];
> + ntb_txq_release(txq);
> + ntb_rxq_release(rxq);
> +
>   return 0;
>  }
> 
> @@ -234,6 +470,77 @@ ntb_queue_count(struct rte_rawdev *dev)
>   return hw->queue_pairs;
>  }
> 
> +static int
> +ntb_queue_init(struct rte_rawdev *dev, uint16_t qp_id)
> +{
> + struct ntb_hw *hw = dev->dev_private;
> + struct ntb_rx_queue *rxq = hw->rx_queues[qp_id];
> + struct ntb_tx_queue *txq = hw->tx_queues[qp_id];
> + volatile struct ntb_header *local_hdr;
> + struct ntb_header *remote_hdr;
> + uint16_t q_size = hw->queue_size;
> + uint32_t hdr_offset;
> + void *bar_addr;
> + uint16_t i;
> +
> + if (hw->ntb_ops->get_peer_mw_addr == NULL) {
> + NTB_LOG(ERR, "Failed to get mapped peer addr.");
Would it be better to log this as "XX ops is not supported" to keep it consistent
with the others?

> + return -EINVAL;
> + }
> +
> + /* Put queue info into the start of shared memory. */
> + hdr_offset = hw->hdr_size_per_queue * qp_id;
> + local_hdr = (volatile struct ntb_header *)
> + ((size_t)hw->mz[0]->addr + hdr_offset);
> + bar_addr = (*hw->ntb_ops->get_peer_mw_addr)(dev, 0);
> + if (bar_addr == NULL)
> + return -EINVAL;
> + remote_hdr = (struct ntb_header *)
> +  ((size_t)bar_addr + hdr_offset);
> +
> + /* rxq init. */
> + rxq->rx_desc_ring = (struct ntb_desc *)
> + (&remote_hdr->desc_ring);
> + rxq->rx_used_ring = (volatile struct ntb_used *)
> + (&local_hdr->desc_ring[q_size]);
> + rxq->avail_cnt = &remote_hdr->avail_cnt;
> + rxq->used_cnt = &local_hdr->used_cnt;
> +
> + for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
> + struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mpool);
> + if (unlikely(!mbuf)) {
> + NTB_LOG(ERR, "Failed to allocate mbuf for RX");
Need to release the mbufs allocated here, either in this error path or in "ntb_dev_start".

<...>

> + hw->hdr_size_per_queue = RTE_ALIGN(sizeof(struct ntb_header) +
> + hw->queue_size * sizeof(struct ntb_desc) +
> + hw->queue_size * sizeof(struct ntb_used),
> + RTE_CACHE_LINE_SIZE);
hw->hdr_size_per_queue is internal information, why put the assignment in 
ntb_dev_info_get?

> + info->ntb_hdr_size = hw->hdr_size_per_queue * hw->queue_pairs;
>  }
> 
>  static int
> -ntb_dev_configure(const struct rte_rawdev *dev __rte_unused,
> -   rte_rawdev_obj_t config __rte_unused)

Re: [dpdk-dev] [RFC] ethdev: support hairpin queue

2019-09-05 Thread Wu, Jingjing
Hi, Ori

Thanks for the explanation. I have more question below.

Thanks
Jingjing

> -Original Message-
> From: Ori Kam [mailto:or...@mellanox.com]
> Sent: Thursday, September 5, 2019 1:45 PM
> To: Wu, Jingjing ; Thomas Monjalon 
> ;
> Yigit, Ferruh ; arybche...@solarflare.com; Shahaf 
> Shuler
> ; Slava Ovsiienko ; Alex
> Rosenbaum 
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC] ethdev: support hairpin queue
> 
> Hi Wu,
> Thanks for your comments PSB,
> 
> Ori
> 
> > -Original Message-
> > From: Wu, Jingjing 
> > Sent: Thursday, September 5, 2019 7:01 AM
> > To: Ori Kam ; Thomas Monjalon
> > ; Yigit, Ferruh ;
> > arybche...@solarflare.com; Shahaf Shuler ; Slava
> > Ovsiienko ; Alex Rosenbaum
> > 
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [RFC] ethdev: support hairpin queue
> >
> >
> > > -Original Message-
> > > From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Ori Kam
> > > Sent: Tuesday, August 13, 2019 9:38 PM
> > > To: tho...@monjalon.net; Yigit, Ferruh ;
> > > arybche...@solarflare.com; shah...@mellanox.com;
> > viachesl...@mellanox.com;
> > > al...@mellanox.com
> > > Cc: dev@dpdk.org; or...@mellanox.com
> > > Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
> > >
> > > This RFC replaces RFC[1].
> > >
> > > The hairpin feature (different name can be forward) acts as "bump on the
> > wire",
> > > meaning that a packet that is received from the wire can be modified using
> > > offloaded action and then sent back to the wire without application
> > intervention
> > > which save CPU cycles.
> > >
> > > The hairpin is the inverse function of loopback in which application
> > > sends a packet then it is received again by the
> > > application without being sent to the wire.
> > >
> > > The hairpin can be used by a number of different NVF, for example load
> > > balancer, gateway and so on.
> > >
> > > As can be seen from the hairpin description, hairpin is basically RX queue
> > > connected to TX queue.
> > >
> > > During the design phase I was thinking of two ways to implement this
> > > feature the first one is adding a new rte flow action. and the second
> > > one is create a special kind of queue.
> > >
> > > The advantages of using the queue approch:
> > > 1. More control for the application. queue depth (the memory size that
> > > should be used).
> > > 2. Enable QoS. QoS is normaly a parametr of queue, so in this approch it
> > > will be easy to integrate with such system.
> >
> >
> > Which kind of QoS?
> 
> For example latency , packet rate those kinds of makes sense in the queue 
> level.
> I know we don't have any current support but I think we will have during the 
> next year.
> 
Where would the QoS API land? In the TM API, or would a new one be proposed?
> >
> > > 3. Native integression with the rte flow API. Just setting the target
> > > queue/rss to hairpin queue, will result that the traffic will be routed
> > > to the hairpin queue.
> > > 4. Enable queue offloading.
> > >
> > Looks like the hairpin queue is just hardware queue, it has no relationship 
> > with
> > host memory. It makes the queue concept a little bit confusing. And why do 
> > we
> > need to setup queues, maybe some info in eth_conf is enough?
> 
> Like stated above it makes sense to have queue related parameters.
> For example I can think of application that most packets are going threw that 
> hairpin
> queue, but some control packets are
> from the application. So the application can configure the QoS between those 
> two
> queues. In addtion this will enable the application
> to use the queue like normal queue from rte_flow (see comment below) and 
> every other
> aspect.
> 
Yes, that is a typical use case. And rte_flow is used to classify packets to the
different queues?
If I understand correctly, your hairpin queue uses host memory or on-card
memory for buffering, but the CPU cannot touch it; all the packet processing is
done by the NIC.
Once the queue is created, where is the queue ID used? The Tx queue ID may be
used as an rte_flow action, but I still don't understand where the hairpin Rx
queue ID would be used. In my opinion, if there is no rx/tx function, it is not
a true queue from the host's point of view.

> >
> > Not sure how your hardware make the hairpin work? Use rte_flow for packet
> > modification offload? Then how does HW distribute packets to those hardware
> > queue, classific

Re: [dpdk-dev] [RFC] ethdev: support hairpin queue

2019-09-04 Thread Wu, Jingjing


> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Ori Kam
> Sent: Tuesday, August 13, 2019 9:38 PM
> To: tho...@monjalon.net; Yigit, Ferruh ;
> arybche...@solarflare.com; shah...@mellanox.com; viachesl...@mellanox.com;
> al...@mellanox.com
> Cc: dev@dpdk.org; or...@mellanox.com
> Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
> 
> This RFC replaces RFC[1].
> 
> The hairpin feature (different name can be forward) acts as "bump on the 
> wire",
> meaning that a packet that is received from the wire can be modified using
> offloaded action and then sent back to the wire without application 
> intervention
> which save CPU cycles.
> 
> The hairpin is the inverse function of loopback in which application
> sends a packet then it is received again by the
> application without being sent to the wire.
> 
> The hairpin can be used by a number of different NVF, for example load
> balancer, gateway and so on.
> 
> As can be seen from the hairpin description, hairpin is basically RX queue
> connected to TX queue.
> 
> During the design phase I was thinking of two ways to implement this
> feature the first one is adding a new rte flow action. and the second
> one is create a special kind of queue.
> 
> The advantages of using the queue approch:
> 1. More control for the application. queue depth (the memory size that
> should be used).
> 2. Enable QoS. QoS is normaly a parametr of queue, so in this approch it
> will be easy to integrate with such system.


Which kind of QoS?

> 3. Native integression with the rte flow API. Just setting the target
> queue/rss to hairpin queue, will result that the traffic will be routed
> to the hairpin queue.
> 4. Enable queue offloading.
> 
Looks like the hairpin queue is just a hardware queue with no relationship to
host memory. That makes the queue concept a little confusing. And why do we need
to set up queues at all; maybe some info in eth_conf would be enough?

Not sure how your hardware makes the hairpin work. Does it use rte_flow for
packet modification offload? Then how does the HW distribute packets to those
hardware queues: classification? If so, why not just extend rte_flow with a
hairpin action?

> Each hairpin Rxq can be connected Txq / number of Txqs which can belong to a
> different ports assuming the PMD supports it. The same goes the other
> way each hairpin Txq can be connected to one or more Rxqs.
> This is the reason that both the Txq setup and Rxq setup are getting the
> hairpin configuration structure.
> 
> From PMD prespctive the number of Rxq/Txq is the total of standard
> queues + hairpin queues.
> 
> To configure hairpin queue the user should call
> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup insteed
> of the normal queue setup functions.

If the new API is introduced to avoid an ABI change, would a single API,
rte_eth_rx_hairpin_setup, be enough?

Thanks
Jingjing

