testing report for DPDK release candidate 23.11-rc3

2023-11-23 Thread Wael Abualrub
Hi,

> -Original Message-
> From: Thomas Monjalon 
> Sent: Wednesday, November 15, 2023 12:01 AM
> To: annou...@dpdk.org
> Subject: release candidate 23.11-rc3
>
> A new DPDK release candidate is ready for testing:
>   https://git.dpdk.org/dpdk/tag/?id=v23.11-rc3
>
> There are 129 new patches in this snapshot.
>
> Release notes:
>   https://doc.dpdk.org/guides/rel_notes/release_23_11.html
>
> Please test and report issues on bugs.dpdk.org.
>
> Only doc, tools, and bug fixes should be accepted at this stage.
>
> DPDK 23.11-rc4 should be the last release candidate.
> The final release should be done before the end of next week.
>
> Thank you everyone
>

The following is the testing report for 23.11-rc3.

Note: all tests passed, and no critical issues were found.

The following is a list of tests that we ran on NVIDIA hardware this release:

- Basic functionality:
   Send and receive multiple types of traffic.
- testpmd xstats counter test.
- testpmd timestamp test.
- Changing/checking link status through testpmd.

- RTE flow tests (a minimal rte_flow sketch follows this list):
 See: https://doc.dpdk.org/guides/nics/mlx5.html#supported-hardware-offloads

- RSS testing.
- VLAN filtering, stripping and insertion tests.
- Checksum and TSO tests.
- Packet type (ptype) reporting.
- Link status interrupt tests using the link_status_interrupt example application.
- Interrupt mode tests using the l3fwd-power example application.
- Multi-process tests using the multi-process example applications.
- Hardware LRO tests.
- Regex tests.
- Buffer Split.
- Tx scheduling.
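
For reference, a minimal, hedged illustration of the kind of rte_flow rule such
tests exercise (not the actual NVIDIA test code; error handling is trimmed and
the queue index is arbitrary):

#include <rte_flow.h>

/* Create a simple ingress rule: match any Ethernet/IPv4 packet and
 * direct it to Rx queue 1. */
static struct rte_flow *
create_ipv4_to_queue_rule(uint16_t port_id)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;

    /* Validate first, then create; both report details through 'error'. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, &error);
}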

Kindest Regards,
Wael Abualrub


[PATCH 1/2] devtools: remove ABI exception for baseband FFT

2023-11-23 Thread David Marchand
Those APIs are now stable.

Fixes: c96b519bd1c3 ("bbdev: promote some functions as stable")

Signed-off-by: David Marchand 
---
 devtools/libabigail.abignore | 4 
 1 file changed, 4 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 325f34e0b6..3ff51509de 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -37,10 +37,6 @@
 type_kind = enum
 changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, 
RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
 
-; Ignore changes to bbdev FFT API which is experimental
-[suppress_type]
-name = rte_bbdev_fft_op
-
 
 ; Temporary exceptions till next major ABI version ;
 
-- 
2.41.0



[PATCH 2/2] devtools: remove ABI exception for crypto asym operations

2023-11-23 Thread David Marchand
Those APIs are now stable.

Fixes: 79a4c2cda131 ("cryptodev: promote some functions as stable")

Signed-off-by: David Marchand 
---
 devtools/libabigail.abignore | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 3ff51509de..21b8cd6113 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -30,13 +30,6 @@
 ; Experimental APIs exceptions ;
 
 
-; Ignore changes to asymmetric crypto API which is experimental
-[suppress_type]
-name = rte_crypto_asym_op
-[suppress_type]
-type_kind = enum
-changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, 
RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
-
 
 ; Temporary exceptions till next major ABI version ;
 
-- 
2.41.0



Re: [PATCH 24.03 v2 8/9] event/opdl: add schedule-type capability flags

2023-11-23 Thread Bruce Richardson
On Thu, Nov 23, 2023 at 09:40:57AM +0530, Jerin Jacob wrote:
> On Tue, Nov 21, 2023 at 11:47 PM Bruce Richardson
>  wrote:
> >
> > Document explicitly the scheduling types supported by this driver, both
> > via info_get() function, and via table in the documentation.
> >
> > Signed-off-by: Bruce Richardson 
> > ---
> >
> > Maintainers, please check this patch carefully, as I'm not sure the
> > correct way to document this.
> >
> > According to the docs for this driver, it supports parallel only via
> > ordered. Therefore, I've actually made the docs inconsistent from the
> > flags claimed in the API. I've documented that PARALLEL is supported in
> > the info_get() flags, so code that checks for that will run, but I've
> > omitted it from the table in the docs, since it is not directly
> > supported. Is this a good compromise, or an accurate reflection of the
> > driver?
> > ---
> >  doc/guides/eventdevs/features/opdl.ini | 2 ++
> >  drivers/event/opdl/opdl_evdev.c| 3 +++
> >  2 files changed, 5 insertions(+)
> >
> > diff --git a/doc/guides/eventdevs/features/opdl.ini 
> > b/doc/guides/eventdevs/features/opdl.ini
> > index 5cc35d3c77..7adccc98de 100644
> > --- a/doc/guides/eventdevs/features/opdl.ini
> > +++ b/doc/guides/eventdevs/features/opdl.ini
> > @@ -4,6 +4,8 @@
> >  ; Refer to default.ini for the full list of available PMD features.
> >  ;
> >  [Scheduling Features]
> > +atomic_scheduling  = Y
> > +ordered_scheduling = Y
> 
> Missed parallel
> 

Deliberate omission for now. See note above. Basically, parallel is
supported through ordered, so I added the flag below to stop apps from
breaking, but I wasn't sure about advertising it in the docs. Will add it
if you feel it's best to keep them consistent.

/Bruce


Re: [PATCH 1/2] devtools: remove ABI exception for baseband FFT

2023-11-23 Thread Maxime Coquelin




On 11/23/23 09:58, David Marchand wrote:

Those APIs are now stable.

Fixes: c96b519bd1c3 ("bbdev: promote some functions as stable")

Signed-off-by: David Marchand 
---
  devtools/libabigail.abignore | 4 
  1 file changed, 4 deletions(-)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 325f34e0b6..3ff51509de 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -37,10 +37,6 @@
  type_kind = enum
  changed_enumerators = RTE_CRYPTO_ASYM_XFORM_ECPM, 
RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
  
-; Ignore changes to bbdev FFT API which is experimental

-[suppress_type]
-name = rte_bbdev_fft_op
-
  
  ; Temporary exceptions till next major ABI version ;
  


Reviewed-by: Maxime Coquelin 

Thanks,
Maxime



Re: [PATCH 24.03 v2 8/9] event/opdl: add schedule-type capability flags

2023-11-23 Thread Jerin Jacob
On Thu, Nov 23, 2023 at 2:52 PM Bruce Richardson
 wrote:
>
> On Thu, Nov 23, 2023 at 09:40:57AM +0530, Jerin Jacob wrote:
> > On Tue, Nov 21, 2023 at 11:47 PM Bruce Richardson
> >  wrote:
> > >
> > > Document explicitly the scheduling types supported by this driver, both
> > > via info_get() function, and via table in the documentation.
> > >
> > > Signed-off-by: Bruce Richardson 
> > > ---
> > >
> > > Maintainers, please check this patch carefully, as I'm not sure the
> > > correct way to document this.
> > >
> > > According to the docs for this driver, it supports parallel only via
> > > ordered. Therefore, I've actually made the docs inconsistent from the
> > > flags claimed in the API. I've documented that PARALLEL is supported in
> > > the info_get() flags, so code that checks for that will run, but I've
> > > omitted it from the table in the docs, since it is not directly
> > > supported. Is this a good compromise, or an accurate reflection of the
> > > driver?
> > > ---
> > >  doc/guides/eventdevs/features/opdl.ini | 2 ++
> > >  drivers/event/opdl/opdl_evdev.c| 3 +++
> > >  2 files changed, 5 insertions(+)
> > >
> > > diff --git a/doc/guides/eventdevs/features/opdl.ini 
> > > b/doc/guides/eventdevs/features/opdl.ini
> > > index 5cc35d3c77..7adccc98de 100644
> > > --- a/doc/guides/eventdevs/features/opdl.ini
> > > +++ b/doc/guides/eventdevs/features/opdl.ini
> > > @@ -4,6 +4,8 @@
> > >  ; Refer to default.ini for the full list of available PMD features.
> > >  ;
> > >  [Scheduling Features]
> > > +atomic_scheduling  = Y
> > > +ordered_scheduling = Y
> >
> > Missed parallel
> >
>
> Deliberate omission for now. See note above. Basically, parallel is

I see. I missed the note.

> supported through ordered, so I added the flag below to stop apps from
> breaking, but I wasn't sure about advertising it in the docs. Will add it
> if you feel it's best to keep them consistent.

I think, it is better to keep them consistent.

>
> /Bruce


[PATCH] doc: add tested platforms with NVIDIA NICs

2023-11-23 Thread Raslan Darawsheh
Add tested platforms with NVIDIA NICs to the 23.11 release notes.

Signed-off-by: Raslan Darawsheh 
---
 doc/guides/rel_notes/release_23_11.rst | 150 +
 1 file changed, 150 insertions(+)

diff --git a/doc/guides/rel_notes/release_23_11.rst 
b/doc/guides/rel_notes/release_23_11.rst
index 520321c71e..5b8dfc61cf 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -464,3 +464,153 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
===
+
+* Intel\ |reg| platforms with NVIDIA\ |reg| NICs combinations
+
+  * CPU:
+
+* Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2697A v4 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2697 v3 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2670 0 @ 2.60GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 v4 @ 2.20GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 v3 @ 2.30GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2640 @ 2.50GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2650 0 @ 2.00GHz
+* Intel\ |reg| Xeon\ |reg| CPU E5-2620 v4 @ 2.10GHz
+
+  * OS:
+
+* Red Hat Enterprise Linux release 9.1 (Plow)
+* Red Hat Enterprise Linux release 8.6 (Ootpa)
+* Red Hat Enterprise Linux release 8.4 (Ootpa)
+* Red Hat Enterprise Linux Server release 7.9 (Maipo)
+* Red Hat Enterprise Linux Server release 7.6 (Maipo)
+* Ubuntu 22.04
+* Ubuntu 20.04
+* SUSE Enterprise Linux 15 SP2
+
+  * OFED:
+
+* MLNX_OFED 23.07-0.5.1.2 and above
+
+  * upstream kernel:
+
+* Linux 6.7.0-rc1 and above
+
+  * rdma-core:
+
+* rdma-core-48.0 and above
+
+  * NICs
+
+* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1015
+  * Firmware version: 14.32.1010 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-4 Lx 50G MCX4131A-GCAT (1x50G)
+
+  * Host interface: PCI Express 3.0 x8
+  * Device ID: 15b3:1015
+  * Firmware version: 14.32.1010 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX516A-CCAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.35.2000 and above
+:
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX516A-CCAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 100G MCX556A-EDAT (2x100G)
+
+  * Host interface: PCI Express 3.0 x16
+  * Device ID: 15b3:1017
+  * Firmware version: 16.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:1019
+  * Firmware version: 16.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:101d
+  * Firmware version: 22.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
+
+  * Host interface: PCI Express 4.0 x8
+  * Device ID: 15b3:101f
+  * Firmware version: 26.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-7 200G CX713106AE-HEA_QP1_Ax (2x200G)
+
+  * Host interface: PCI Express 5.0 x16
+  * Device ID: 15b3:1021
+  * Firmware version: 28.38.1900 and above
+
+* NVIDIA\ |reg| BlueField\ |reg| SmartNIC
+
+  * NVIDIA\ |reg| BlueField\ |reg|-2 SmartNIC MT41686 - MBF2H332A-AEEOT_A1 
(2x25G)
+
+* Host interface: PCI Express 3.0 x16
+* Device ID: 15b3:a2d6
+* Firmware version: 24.38.1002 and above
+
+  * NVIDIA\ |reg| BlueField\ |reg|-3 P-Series DPU MT41692 - 900-9D3B6-00CV-AAB 
(2x200G)
+
+* Host interface: PCI Express 5.0 x16
+* Device ID: 15b3:a2dc
+* Firmware version: 32.38.1002 and above
+
+  * Embedded software:
+
+* Ubuntu 22.04
+* MLNX_OFED 23.07-0.5.0.0 and above
+* DOCA_2.2.0_BSP_4.2.0_Ubuntu_22.04-2.23-07
+* DPDK application running on ARM cores
+
+* IBM Power 9 platforms with NVIDIA\ |reg| NICs combinations
+
+  * CPU:
+
+* POWER9 2.2 (pvr 004e 1202)
+
+  * OS:
+
+* Ubuntu 20.04
+
+  * NICs:
+
+* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx 100G MCX623106AN-CDAT (2x100G)
+
+  * Host interface: PCI Express 4.0 x16
+  * Device ID: 15b3:101d
+  * Firmware version: 22.38.1900 and above
+
+* NVIDIA\ |reg| ConnectX\ |reg|-7 200G CX713106AE-HEA_QP1_Ax (2x200G)
+
+  * Host interface: PCI Express 5.0 x16
+  * Device ID: 15b3:1021
+  * Firmware version: 28.38.1900

Re: [PATCH] examples/l3fwd-power: fix to configure the uncore env

2023-11-23 Thread Ferruh Yigit
On 11/23/2023 1:58 AM, Thomas Monjalon wrote:
> 26/10/2023 17:19, Sivaprasad Tummala:
>> Updated the l3fwd-power app to configure the uncore env before invoking
>> any uncore APIs. With auto-detection in 'rte_power_uncore_init()' it is
>> too late because other APIs already called.
> 
> You are also updating the uncore API.
> 
>> +if (env == RTE_UNCORE_PM_ENV_AUTO_DETECT)
>> +/* Currently only intel_uncore is supported. This will be
>> + * extended with auto-detection support for multiple uncore
>> + * implementations.
>> + */
>> +env = RTE_UNCORE_PM_ENV_INTEL_UNCORE;
> 
> It looks like this patch does not make sense without AMD support.
> 

Yes, right now auto-detect falls back directly to Intel, but there is an
intention to add AMD support too; that is why the auto-detection abstraction
was preferred instead of using Intel directly.


Re: [PATCH v2] eal/x86: add AMD vendor check to choose TSC calibration

2023-11-23 Thread Ferruh Yigit
On 11/23/2023 7:27 AM, Sivaprasad Tummala wrote:
> AMD EPYC processors don't support get_tsc_freq_arch().
> The patch allows a graceful return to enable fallback to
> alternate TSC calibration.
> 
> Fixes: 3dbc565e81a0 ("timer: honor arch-specific TSC frequency query")
> Cc: jerin.ja...@caviumnetworks.com
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Sivaprasad Tummala 
> 

Acked-by: Ferruh Yigit 




RE: [PATCH] examples/ipsec-secgw: fix partial overflow

2023-11-23 Thread Dooley, Brian
Thanks Thomas, makes sense.

> -Original Message-
> From: Thomas Monjalon 
> Sent: Wednesday, November 22, 2023 4:38 PM
> To: Dooley, Brian 
> Cc: dev@dpdk.org; sta...@dpdk.org; Nicolau, Radu
> ; Akhil Goyal ; Power, Ciara
> 
> Subject: Re: [PATCH] examples/ipsec-secgw: fix partial overflow
> 
> > > Case of partial overflow detected with ASan. Added extra padding to
> > > cdev_key structure.
> > >
> > > This structure is used for the key in hash table.
> > > Padding is added to force the struct to use 8 bytes, to ensure
> > > memory is not read past this struct's boundary (the hash key
> > > calculation reads 8 bytes if this struct is size 5 bytes).
> > > The padding should be zeroed.
> > > If fields are modified in this struct, the padding must be updated
> > > to ensure multiple of 8 bytes size overall.
> > >
> > > Fixes: d299106e8e31 ("examples/ipsec-secgw: add IPsec sample
> > > application")
> > > Cc: sergio.gonzalez.mon...@intel.com
> > > Cc: sta...@dpdk.org
> > >
> > > Signed-off-by: Brian Dooley 
> >
> > Acked-by: Ciara Power 
> 
> Applied and made the comment simpler with this:
> 
>   uint8_t padding[3]; /* padding to 8-byte size should be zeroed */
> 
> 
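
For context, a hedged sketch of the pattern this fix applies (field names are
illustrative, not the actual ipsec-secgw layout): the hash-key struct is padded
out to 8 bytes and zeroed before use, because the hash computation reads the
key in 8-byte chunks.

#include <stdint.h>
#include <string.h>
#include <rte_lcore.h>

/* Illustrative only -- not the real cdev_key definition. */
struct cdev_key_example {
    uint16_t lcore_id;
    uint8_t  cdev_id;
    uint8_t  queue_id;
    uint8_t  class;      /* 5 bytes of real data ... */
    uint8_t  padding[3]; /* ... padded to 8 bytes; must stay zeroed */
};

static void
build_key(struct cdev_key_example *key)
{
    memset(key, 0, sizeof(*key)); /* zeroes the padding as well */
    key->lcore_id = rte_lcore_id();
}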



Re: [PATCH v3 0/2] ethdev: add the check for PTP capability

2023-11-23 Thread lihuisong (C)



On 2023/11/2 7:39, Ferruh Yigit wrote:

On 10/20/2023 4:58 AM, lihuisong (C) wrote:

On 2023/9/21 19:17, Hemant Agrawal wrote:

HI Ferruh,


On 9/21/2023 11:02 AM, lihuisong (C) wrote:

Hi Ferruh,

Sorry for my delay reply because of taking a look at all PMDs
implementation.


On 2023/9/16 1:46, Ferruh Yigit wrote:

On 8/17/2023 9:42 AM, Huisong Li wrote:

   From the first version of ptpclient, it seems that this example
assume that the PMDs support the PTP feature and enable PTP by
default. Please see commit ab129e9065a5 ("examples/ptpclient: add
minimal PTP client") which are introduced in 2015.

And two years later, Rx HW timestamp offload was introduced to
enable or disable PTP feature in HW via rte_eth_rxmode. Please see
commit 42ffc45aa340 ("ethdev: add Rx HW timestamp capability").


Hi Huisong,

As far as I know this offload is not for PTP.
PTP and TIMESTAMP are different.

If TIMESTAMP offload cannot stand for PTP, we may need to add one new
offload for PTP.


Can you please detail what is "PTP offload"?


PTP is a protocol for time sync.
Rx TIMESTAMP offload is to ask HW to add timestamp to mbuf.

Yes.
But a lot of PMDs actually depend on HW to report Rx timestamp
related information, because they read the Rx timestamp of the PTP SYNC
packet in the read_rx_timestamp API.


HW support may be required for PTP but this doesn't mean timestamp
offload is used.

And then, about four years later, ptpclient enabled the Rx timestamp
offload because some PMDs require this offload to be enabled. Please see
commit 7a04a4f67dca ("examples/ptpclient: enable Rx timestamp offload").

dpaa2 seems using TIMESTAMP offload and PTP together, hence they
updated ptpclient sample to set TIMESTAMP offload.

[Hemant] In the case of dpaa2, we need to enable the HW timestamp for PTP. In
the current dpaa2 driver, if the code is compiled with RTE_LIBRTE_IEEE1588,
we are enabling the HW timestamp; otherwise, we are only enabling it when
the TIMESTAMP offload is selected.

We added patch in ptpclient earlier to pass the timestamp offload,
however later we also updated the driver to do it by default.



It is a little messy how PTP and RTE_LIBRTE_IEEE1588 are used.
Actually, whether the PTP code is compiled should not depend on the
RTE_LIBRTE_IEEE1588 macro.


There is already a patch by Thomas to remove RTE_LIBRTE_IEEE1588 [1],
agree that this functionality needs some attention.

Removing RTE_LIBRTE_IEEE1588 impact drivers, that is what holding us back.

+1 for removing the compile macro RTE_LIBRTE_IEEE1588.
And hns3 has already removed it.



[1]
https://patchwork.dpdk.org/project/dpdk/patch/20230203132810.14187-1-tho...@monjalon.net/


If there is a capability, it will be perfect, no matter whether it is
TIMESTAMP offload.
What do you think, Ferruh?


Difficulty is to know when to enable HW timestamp, and for some drivers
this may change the descriptor format (to include timestamp), so driver
should set correct datapath functions for this case.

Yes, many NICs get the Rx timestamp of PTP packets from the descriptor.


We know when a HW timer is required: it is required for the PTP protocol and
for TIMESTAMP offload.
TIMESTAMP offload may be unnecessary for some NICs which don't get the Rx
timestamp from the descriptor (but, IMO, such hardware is very rare).


What do you think about dynamically enabling it for PTP when the
'rte_eth_timesync_enable()' API is called, and for TIMESTAMP offload when
the offload is enabled?

Agree above.
At least, this can make sure all NIC can enable PTP feature.

If this works, no new configuration item or offload is required. What
do you think?

The new capability item is required to know if the port supports the PTP
feature, so the application can enable/disable PTP based on this capability.



There are many PMDs doing like this, such as ice, igc, cnxk, dpaa2,
hns3 and so on.


Can you please point the ice & igc code, cc'ing their maintainers, we
can look
together?



We need to clarify dpaa2 usage.


By all the records, this is more like a process of perfecting the PTP
feature.
Not all network adapters support the PTP feature. So adding the check
for PTP capability in the ethdev layer is necessary.


Nope; as PTP (IEEE1588/802.1AS) is implemented as dev_ops, and the ops are
already checked, no additional check is needed.

But only having dev_ops for PTP doesn't satisfy the use of this
feature.
For example,
there are several network ports belonging to a driver on one OS, and
only one port supports the PTP function.
So the driver needs one *PTP* offload.

We just need to clarify TIMESTAMP offload and PTP usage and find out
what is causing confusion.

Yes, it is a little bit confusing.
There are two kinds of implementation:
A: ixgbe and txgbe (it seems that their HW is similar) don't need
TIMESTAMP offload, and only use dev_ops to implement the PTP feature.
B: saving "Rx timestamp related information" from the Rx descriptor when
receiving a PTP SYNC packet and
  reporting it in the read_rx_timestamp API.
For case B, most drivers use TIMESTAMP offload to decide if the driver
saves "Rx timestamp related informatio

[PATCH 1/5] doc: remove restriction on ixgbe vector support

2023-11-23 Thread David Marchand
The ixgbe driver has vector support for different architectures for a
while now.

Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 doc/guides/nics/ixgbe.rst | 2 --
 1 file changed, 2 deletions(-)

diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b1d77ab7ab..14573b542e 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -47,8 +47,6 @@ The wider register gives space to hold multiple packet 
buffers so as to save ins
 There is no change to PMD API. The RX/TX handler are the only two entries for 
vPMD packet I/O.
 They are transparently registered at runtime RX/TX execution if all condition 
checks pass.
 
-1.  To date, only an SSE version of IX GBE vPMD is available.
-
 Some constraints apply as pre-conditions for specific optimizations on bulk 
packet transfers.
 The following sections explain RX and TX constraints in the vPMD.
 
-- 
2.41.0



[PATCH 0/5] Some documentation fixes

2023-11-23 Thread David Marchand
Not urgent for the release (especially the last patch which is scary by
its size) but here are some cleanups in the documentation.


-- 
David Marchand

David Marchand (5):
  doc: remove restriction on ixgbe vector support
  doc: enhance readability in memif example commands
  doc: fix some ordered lists
  doc: remove number of commands in vDPA guide
  doc: use ordered lists

 doc/guides/eventdevs/dlb2.rst | 29 ++-
 doc/guides/eventdevs/dpaa.rst |  2 +-
 .../linux_gsg/nic_perf_intel_platform.rst | 10 ++--
 doc/guides/nics/cnxk.rst  |  4 +-
 doc/guides/nics/dpaa2.rst | 19 +++
 doc/guides/nics/enetc.rst |  6 +--
 doc/guides/nics/enetfec.rst   | 12 ++---
 doc/guides/nics/i40e.rst  | 16 +++---
 doc/guides/nics/ixgbe.rst |  2 -
 doc/guides/nics/memif.rst | 10 ++--
 doc/guides/nics/mlx4.rst  | 32 ++--
 doc/guides/nics/mlx5.rst  | 39 +++
 doc/guides/nics/mvpp2.rst | 49 ++-
 doc/guides/nics/pfe.rst   |  8 +--
 doc/guides/nics/tap.rst   | 14 +++---
 doc/guides/nics/virtio.rst| 12 +
 doc/guides/platform/bluefield.rst |  4 +-
 doc/guides/platform/cnxk.rst  | 29 ++-
 doc/guides/platform/dpaa.rst  | 14 +++---
 doc/guides/platform/dpaa2.rst | 20 
 doc/guides/platform/mlx5.rst  | 14 +++---
 doc/guides/platform/octeontx.rst  | 22 -
 .../prog_guide/env_abstraction_layer.rst  | 10 ++--
 .../generic_segmentation_offload_lib.rst  |  2 +-
 doc/guides/prog_guide/graph_lib.rst   | 39 ---
 doc/guides/prog_guide/rawdev.rst  | 28 ++-
 doc/guides/prog_guide/rte_flow.rst| 12 ++---
 doc/guides/prog_guide/stack_lib.rst   |  8 +--
 doc/guides/prog_guide/trace_lib.rst   | 12 ++---
 doc/guides/rawdevs/ifpga.rst  |  5 +-
 doc/guides/sample_app_ug/ip_pipeline.rst  |  4 +-
 doc/guides/sample_app_ug/pipeline.rst |  4 +-
 doc/guides/sample_app_ug/vdpa.rst | 29 ++-
 doc/guides/windows_gsg/run_apps.rst   |  8 +--
 34 files changed, 282 insertions(+), 246 deletions(-)

-- 
2.41.0



[PATCH 2/5] doc: enhance readability in memif example commands

2023-11-23 Thread David Marchand
'#.' is a token for ordered lists in RST.
Add a space in those example commands.

Signed-off-by: David Marchand 
---
 doc/guides/nics/memif.rst | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/memif.rst b/doc/guides/nics/memif.rst
index afc574fdaa..2867b2f66d 100644
--- a/doc/guides/nics/memif.rst
+++ b/doc/guides/nics/memif.rst
@@ -216,15 +216,15 @@ In this example we run two instances of testpmd 
application and transmit packets
 
 First create ``server`` interface::
 
-#.//app/dpdk-testpmd -l 0-1 --proc-type=primary 
--file-prefix=pmd1 --vdev=net_memif,role=server -- -i
+# .//app/dpdk-testpmd -l 0-1 --proc-type=primary 
--file-prefix=pmd1 --vdev=net_memif,role=server -- -i
 
 Now create ``client`` interface (server must be already running so the client 
will connect)::
 
-#.//app/dpdk-testpmd -l 2-3 --proc-type=primary 
--file-prefix=pmd2 --vdev=net_memif -- -i
+# .//app/dpdk-testpmd -l 2-3 --proc-type=primary 
--file-prefix=pmd2 --vdev=net_memif -- -i
 
 You can also enable ``zero-copy`` on ``client`` interface::
 
-#.//app/dpdk-testpmd -l 2-3 --proc-type=primary 
--file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i
+# .//app/dpdk-testpmd -l 2-3 --proc-type=primary 
--file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i
 
 Start forwarding packets::
 
@@ -260,7 +260,7 @@ To see socket filename use show memif command::
 
 Now create memif interface by running testpmd with these command line options::
 
-#./dpdk-testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i
+# ./dpdk-testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i
 
 Testpmd should now create memif client interface and try to connect to server.
 In testpmd set forward option to icmpecho and start forwarding::
@@ -283,7 +283,7 @@ The situation is analogous to cross connecting 2 ports of 
the NIC by cable.
 
 To set the loopback, just use the same socket and id with different roles::
 
-#./dpdk-testpmd --vdev=net_memif0,role=server,id=0 
--vdev=net_memif1,role=client,id=0 -- -i
+# ./dpdk-testpmd --vdev=net_memif0,role=server,id=0 
--vdev=net_memif1,role=client,id=0 -- -i
 
 Then start the communication::
 
-- 
2.41.0



[PATCH 3/5] doc: fix some ordered lists

2023-11-23 Thread David Marchand
Ordered lists must start preceded by an empty line.
Entries must be separated by an empty line (as per our coding style).
Incorrectly indented lines are seen as a separator and result in
starting a new list in the rendered doc.

Fix issues in some guides.

Fixes: 85d9252e55f2 ("net/mlx5: add test for remote PD and CTX")
Fixes: 26b683b4f7d0 ("net/virtio: setup Rx queue interrupts")
Fixes: 9dcf5d15569b ("doc: clarify path selection in virtio guide")
Fixes: 68a03efeed65 ("doc: add Marvell cnxk platform guide")
Fixes: f6010c7655cc ("doc: add GSO programmer's guide")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 doc/guides/nics/mlx5.rst  | 21 +--
 doc/guides/nics/virtio.rst| 12 +++
 doc/guides/platform/cnxk.rst  |  3 +++
 .../generic_segmentation_offload_lib.rst  |  2 +-
 4 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 45379960f0..39a8c5d7b4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -2326,19 +2326,18 @@ This command performs:
 
 #. Call the regular ``port attach`` function with updated identifier.
 
-For example, to attach a port whose PCI address is ``:0a:00.0``
-and its socket path is ``/var/run/import_ipc_socket``:
+   For example, to attach a port whose PCI address is ``:0a:00.0``
+   and its socket path is ``/var/run/import_ipc_socket``:
 
-.. code-block:: console
-
-   testpmd> mlx5 port attach :0a:00.0 socket=/var/run/import_ipc_socket
-   testpmd: MLX5 socket path is /var/run/import_ipc_socket
-   testpmd: Attach port with extra devargs :0a:00.0,cmd_fd=40,pd_handle=1
-   Attaching a new port...
-   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: :0a:00.0 (socket 0)
-   Port 0 is attached. Now total ports is 1
-   Done
+   .. code-block:: console
 
+  testpmd> mlx5 port attach :0a:00.0 socket=/var/run/import_ipc_socket
+  testpmd: MLX5 socket path is /var/run/import_ipc_socket
+  testpmd: Attach port with extra devargs 
:0a:00.0,cmd_fd=40,pd_handle=1
+  Attaching a new port...
+  EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: :0a:00.0 (socket 
0)
+  Port 0 is attached. Now total ports is 1
+  Done
 
 port map external Rx queue
 ~~
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index ba6247170d..c22ce56a02 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -217,6 +217,7 @@ Prerequisites for Rx interrupts
 ~~~
 
 To support Rx interrupts,
+
 #. Check if guest kernel supports VFIO-NOIOMMU:
 
 Linux started to support VFIO-NOIOMMU since 4.8.0. Make sure the guest
@@ -379,12 +380,16 @@ according to below configuration:
 
 #. Split virtqueue mergeable path: If Rx mergeable is negotiated, in-order 
feature is
not negotiated, this path will be selected.
+
 #. Split virtqueue non-mergeable path: If Rx mergeable and in-order feature 
are not
negotiated, also Rx offload(s) are requested, this path will be selected.
+
 #. Split virtqueue in-order mergeable path: If Rx mergeable and in-order 
feature are
both negotiated, this path will be selected.
+
 #. Split virtqueue in-order non-mergeable path: If in-order feature is 
negotiated and
Rx mergeable is not negotiated, this path will be selected.
+
 #. Split virtqueue vectorized Rx path: If Rx mergeable is disabled and no Rx 
offload
requested, this path will be selected.
 
@@ -393,16 +398,21 @@ according to below configuration:
 
 #. Packed virtqueue mergeable path: If Rx mergeable is negotiated, in-order 
feature
is not negotiated, this path will be selected.
+
 #. Packed virtqueue non-mergeable path: If Rx mergeable and in-order feature 
are not
negotiated, this path will be selected.
+
 #. Packed virtqueue in-order mergeable path: If in-order and Rx mergeable 
feature are
both negotiated, this path will be selected.
+
 #. Packed virtqueue in-order non-mergeable path: If in-order feature is 
negotiated and
Rx mergeable is not negotiated, this path will be selected.
+
 #. Packed virtqueue vectorized Rx path: If building and running environment 
support
(AVX512 || NEON) && in-order feature is negotiated && Rx mergeable
is not negotiated && TCP_LRO Rx offloading is disabled && vectorized option 
enabled,
this path will be selected.
+
 #. Packed virtqueue vectorized Tx path: If building and running environment 
support
(AVX512 || NEON)  && in-order feature is negotiated && vectorized option 
enabled,
this path will be selected.
@@ -480,5 +490,7 @@ or configuration, below steps can help you identify which 
path you selected and
 root cause faster.
 
 #. Run vhost/virtio test case;
+
 #. Run "perf top" and check virtio Rx/Tx callback names;
+
 #. Identify which virtio path is selected refer to above table.
diff --git a/doc/guides/platform/cnxk.rst b/d

[PATCH 4/5] doc: remove number of commands in vDPA guide

2023-11-23 Thread David Marchand
There are now 5 supported commands.

Fixes: 6505865aa8ed ("examples/vdpa: add statistics show command")
Cc: sta...@dpdk.org

Signed-off-by: David Marchand 
---
 doc/guides/sample_app_ug/vdpa.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst 
b/doc/guides/sample_app_ug/vdpa.rst
index cb9c4f2169..6b6de53e48 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -38,8 +38,7 @@ where
 * --iface specifies the path prefix of the UNIX domain socket file, e.g.
   /tmp/vhost-user-, then the socket files will be named as /tmp/vhost-user-
   (n starts from 0).
-* --interactive means run the vdpa sample in interactive mode, currently 4
-  internal cmds are supported:
+* --interactive means run the vdpa sample in interactive mode:
 
   1. help: show help message
   2. list: list all available vdpa devices
-- 
2.41.0



[PATCH 5/5] doc: use ordered lists

2023-11-23 Thread David Marchand
Prefer automatically ordered lists by using #.

Signed-off-by: David Marchand 
---
 doc/guides/eventdevs/dlb2.rst | 29 ++-
 doc/guides/eventdevs/dpaa.rst |  2 +-
 .../linux_gsg/nic_perf_intel_platform.rst | 10 ++--
 doc/guides/nics/cnxk.rst  |  4 +-
 doc/guides/nics/dpaa2.rst | 19 +++
 doc/guides/nics/enetc.rst |  6 +--
 doc/guides/nics/enetfec.rst   | 12 ++---
 doc/guides/nics/i40e.rst  | 16 +++---
 doc/guides/nics/mlx4.rst  | 32 ++--
 doc/guides/nics/mlx5.rst  | 18 +++
 doc/guides/nics/mvpp2.rst | 49 ++-
 doc/guides/nics/pfe.rst   |  8 +--
 doc/guides/nics/tap.rst   | 14 +++---
 doc/guides/platform/bluefield.rst |  4 +-
 doc/guides/platform/cnxk.rst  | 26 +-
 doc/guides/platform/dpaa.rst  | 14 +++---
 doc/guides/platform/dpaa2.rst | 20 
 doc/guides/platform/mlx5.rst  | 14 +++---
 doc/guides/platform/octeontx.rst  | 22 -
 .../prog_guide/env_abstraction_layer.rst  | 10 ++--
 doc/guides/prog_guide/graph_lib.rst   | 39 ---
 doc/guides/prog_guide/rawdev.rst  | 28 ++-
 doc/guides/prog_guide/rte_flow.rst| 12 ++---
 doc/guides/prog_guide/stack_lib.rst   |  8 +--
 doc/guides/prog_guide/trace_lib.rst   | 12 ++---
 doc/guides/rawdevs/ifpga.rst  |  5 +-
 doc/guides/sample_app_ug/ip_pipeline.rst  |  4 +-
 doc/guides/sample_app_ug/pipeline.rst |  4 +-
 doc/guides/sample_app_ug/vdpa.rst | 26 +-
 doc/guides/windows_gsg/run_apps.rst   |  8 +--
 30 files changed, 250 insertions(+), 225 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 6a273d6f45..2532d92888 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -271,24 +271,29 @@ certain reconfiguration sequences that are valid in the 
eventdev API but not
 supported by the PMD.
 
 Specifically, the PMD supports the following configuration sequence:
-1. Configure and start the device
-2. Stop the device
-3. (Optional) Reconfigure the device
-4. (Optional) If step 3 is run:
 
-   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
-   b. The reconfigured port(s) lose their previous queue links.
+#. Configure and start the device
 
-5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
-6. Restart the device. If the device is reconfigured in step 3 but one or more
+#. Stop the device
+
+#. (Optional) Reconfigure the device
+   Setup queue(s). The reconfigured queue(s) lose their previous port links.
+   The reconfigured port(s) lose their previous queue links.
+   Link port(s) to queue(s)
+
+#. Restart the device. If the device is reconfigured in step 3 but one or more
of its ports or queues are not, the PMD will apply their previous
configuration (including port->queue links) at this time.
 
 The PMD does not support the following configuration sequences:
-1. Configure and start the device
-2. Stop the device
-3. Setup queue or setup port
-4. Start the device
+
+#. Configure and start the device
+
+#. Stop the device
+
+#. Setup queue or setup port
+
+#. Start the device
 
 This sequence is not supported because the event device must be reconfigured
 before its ports or queues can be.
diff --git a/doc/guides/eventdevs/dpaa.rst b/doc/guides/eventdevs/dpaa.rst
index 266f92d159..33d41fc7c4 100644
--- a/doc/guides/eventdevs/dpaa.rst
+++ b/doc/guides/eventdevs/dpaa.rst
@@ -64,7 +64,7 @@ Example:
 Limitations
 ---
 
-1. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
+#. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
Please configure export DPAA_NUM_PUSH_QUEUES=0
 
 Platform Requirement
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst 
b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index dbfaf4e350..4a5815dfb9 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -127,7 +127,7 @@ The following are some recommendations on GRUB boot 
settings:
 Configurations before running DPDK
 --
 
-1. Reserve huge pages.
+#. Reserve huge pages.
See the earlier section on :ref:`linux_gsg_hugepages` for more details.
 
.. code-block:: console
@@ -147,7 +147,7 @@ Configurations before running DPDK
   # Mount to the specific folder.
   mount -t hugetlbfs nodev /mnt/huge
 
-2. Check the CPU layout using the DPDK ``cpu_layout`` utility:
+#. Check the CPU layout using the DPDK ``cpu_layout`` utility:
 
.. code-block:: console
 
@@ -157,7 +157,7 @@ Configurations before runnin

Re: [PATCH 1/5] doc: remove restriction on ixgbe vector support

2023-11-23 Thread Bruce Richardson
On Thu, Nov 23, 2023 at 12:44:01PM +0100, David Marchand wrote:
> The ixgbe driver has vector support for different architectures for a
> while now.
> 
> Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: David Marchand 
> ---
Acked-by: Bruce Richardson 


Re: [PATCH 2/5] doc: enhance readability in memif example commands

2023-11-23 Thread Bruce Richardson
On Thu, Nov 23, 2023 at 12:44:02PM +0100, David Marchand wrote:
> '#.' is a token for ordered lists in RST.
> Add a space in those example commands.
> 
> Signed-off-by: David Marchand 
> ---
Acked-by: Bruce Richardson 

As someone who runs DPDK on my systems as a regular user rather than root,
I'd also point out an alternative fix is to replace the "#" symbol, which
tends to be for the root prompt, with "$" symbol, more commonly used for
regular users. We should encourage running DPDK as non-root as much as we
can.


Re: [PATCH 3/5] doc: fix some ordered lists

2023-11-23 Thread Bruce Richardson
On Thu, Nov 23, 2023 at 12:44:03PM +0100, David Marchand wrote:
> Ordered lists must start preceded by an empty line.
> Entries must be separated by an empty line (as per our coding style).
> Incorrectly indented lines are seen as a separator and result in
> starting a new list in the rendered doc.
> 
> Fix issues in some guides.
> 
> Fixes: 85d9252e55f2 ("net/mlx5: add test for remote PD and CTX")
> Fixes: 26b683b4f7d0 ("net/virtio: setup Rx queue interrupts")
> Fixes: 9dcf5d15569b ("doc: clarify path selection in virtio guide")
> Fixes: 68a03efeed65 ("doc: add Marvell cnxk platform guide")
> Fixes: f6010c7655cc ("doc: add GSO programmer's guide")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: David Marchand 
> ---
Acked-by: Bruce Richardson 


Re: [PATCH 5/5] doc: use ordered lists

2023-11-23 Thread Bruce Richardson
On Thu, Nov 23, 2023 at 12:44:05PM +0100, David Marchand wrote:
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand 
> ---
Haven't checked all instances, but definitely agree with the idea.
If not merged for 23.11, please merge early for 24.03 in case of churn
during development.

Acked-by: Bruce Richardson 


Re: [PATCH v3 0/2] ethdev: add the check for PTP capability

2023-11-23 Thread lihuisong (C)



On 2023/11/2 7:39, Ferruh Yigit wrote:

timesync_read_rx_timestamp
On 9/21/2023 12:59 PM, lihuisong (C) wrote:

add ice & igc maintainers

On 2023/9/21 19:06, Ferruh Yigit wrote:

On 9/21/2023 11:02 AM, lihuisong (C) wrote:

Hi Ferruh,

Sorry for my delay reply because of taking a look at all PMDs
implementation.


On 2023/9/16 1:46, Ferruh Yigit wrote:

On 8/17/2023 9:42 AM, Huisong Li wrote:

   From the first version of ptpclient, it seems that this example
assume that
the PMDs support the PTP feature and enable PTP by default. Please see
commit ab129e9065a5 ("examples/ptpclient: add minimal PTP client")
which are introduced in 2015.

And two years later, Rx HW timestamp offload was introduced to
enable or
disable PTP feature in HW via rte_eth_rxmode. Please see
commit 42ffc45aa340 ("ethdev: add Rx HW timestamp capability").


Hi Huisong,

As far as I know this offload is not for PTP.
PTP and TIMESTAMP are different.

If TIMESTAMP offload cannot stand for PTP, we may need to add one new
offload for PTP.


Can you please detail what is "PTP offload"?


It indicates whether the device supports PTP or enables the PTP feature.


We have 'rte_eth_timesync_enable()' and 'rte_eth_timesync_disable()'
APIs to control PTP support.

No, this is just to control it.
We still need something like a device capability to report to the application
whether the port supports calling this API, right?


But when mentioning "offload", it is something the device itself does.

PTP is a protocol (IEEE 1588), and used to synchronize clocks.
What I get is that the protocol can be parsed by the networking stack and it
can be used by the application to synchronize a clock.

When you refer to "PTP offload", does it mean the device (NIC)
understands the protocol and parses it to synchronize the device clock with
other devices?

Good point. PTP offload is unreasonable.
But the capability is required indeed.
What do you think of introducing a RTE_ETH_DEV_PTP in 
dev->data->dev_flags for PTP feature?



We have 'rte_eth_timesync_*()' APIs, my understanding is application
parses the PTP protocol, and it may use this information to configure
NIC to synchronize its clock, but it may also use PTP provided
information to sync any other clock. Is this understanding correct?



If TIMESTAMP offload is not for PTP, I don't know what the point of this
offload independent existence is.


TIMESTAMP offload request device to add timestamp to mbuf in ingress,
and use mbuf timestamp to schedule packet for egress.

Agree.


Technically this time-stamping can be done by driver, but if offload
set, HW timestamp is used for it.

Rx timestamp can be used for various reasons, like debugging and
performance/latency analyses, etc..



PTP is a protocol for time sync.
Rx TIMESTAMP offload is to ask HW to add timestamp to mbuf.

Yes.
But a lot of PMDs actually depend on HW to report Rx timestamp related
information,
because they read the Rx timestamp of the PTP SYNC packet in the
read_rx_timestamp API.


HW support may be required for PTP but this doesn't mean timestamp
offload is used.

understand.

And then about four years later, ptpclient enable Rx timestamp offload
because some PMDs require this offload to enable. Please see
commit 7a04a4f67dca ("examples/ptpclient: enable Rx timestamp
offload").


dpaa2 seems using TIMESTAMP offload and PTP together, hence they
updated
ptpclient sample to set TIMESTAMP offload.

There are many PMDs doing like this, such as ice, igc, cnxk, dpaa2, hns3
and so on.


Can you please point the ice & igc code, cc'ing their maintainers, we
can look together?

*-->igc code:*

Having following codes in igc_recv_scattered_pkts():

     if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
         uint32_t *ts = rte_pktmbuf_mtod_offset(first_seg,
                 uint32_t *, -IGC_TS_HDR_LEN);
         rxq->rx_timestamp = (uint64_t)ts[3] * NSEC_PER_SEC +
                 ts[2];
         rxm->timesync = rxq->queue_id;
     }
Note:this rxm->timesync will be used in timesync_read_rx_timestamp()


Above code requires TIMESTAMP offload to set timesync, but this
shouldn't be a requirement. Usage seems mixed.


*-->ice code:*

#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
     if (ice_timestamp_dynflag > 0 &&
         (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
         rxq->time_high =
            rte_le_to_cpu_32(rxd.wb.flex_ts.ts_high);
         if (unlikely(is_tsinit)) {
             ts_ns = ice_tstamp_convert_32b_64b(hw, ad, 1,
rxq->time_high);
             rxq->hw_time_low = (uint32_t)ts_ns;
             rxq->hw_time_high = (uint32_t)(ts_ns >> 32);
             is_tsinit = false;
         } else {
             if (rxq->time_high < rxq->hw_time_low)
                 rxq->hw_time_high += 1;
             ts_ns = (uint64_t)rxq->hw_time_high << 32 | rxq->time_high;
             rxq->hw_time_low = rxq->time_high;
         }
         rxq->hw_time_update = rte_get_timer_cycles() /
                  (rte_get_timer_hz()

Re: [PATCH 4/5] doc: remove number of commands in vDPA guide

2023-11-23 Thread Thomas Monjalon
23/11/2023 12:44, David Marchand:
> There are now 5 supported commands.
> 
> Fixes: 6505865aa8ed ("examples/vdpa: add statistics show command")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: David Marchand 
> ---
>  doc/guides/sample_app_ug/vdpa.rst | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/doc/guides/sample_app_ug/vdpa.rst 
> b/doc/guides/sample_app_ug/vdpa.rst
> index cb9c4f2169..6b6de53e48 100644
> --- a/doc/guides/sample_app_ug/vdpa.rst
> +++ b/doc/guides/sample_app_ug/vdpa.rst
> @@ -38,8 +38,7 @@ where
>  * --iface specifies the path prefix of the UNIX domain socket file, e.g.
>/tmp/vhost-user-, then the socket files will be named as 
> /tmp/vhost-user-
>(n starts from 0).
> -* --interactive means run the vdpa sample in interactive mode, currently 4
> -  internal cmds are supported:
> +* --interactive means run the vdpa sample in interactive mode:

While modifying this line, I think uppercase "vDPA" should be used.




[PATCH 2/5] doc/features: add link up/down feature

2023-11-23 Thread Huisong Li
Add link up/down feature for features.rst.

Fixes: 915e67837586 ("ethdev: API for link up and down")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index dfa6f087c8..1c0cf2cea7 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -41,10 +41,13 @@ Link status
 
 Supports getting the link speed, duplex mode and link state (up/down).
 
-* **[implements] eth_dev_ops**: ``link_update``.
+* **[implements] eth_dev_ops**: ``link_update``, ``dev_set_link_up``, 
``dev_set_link_down``.
 * **[implements] rte_eth_dev_data**: ``dev_link``.
 * **[related]API**: ``rte_eth_link_get()``, ``rte_eth_link_get_nowait()``.
 
+Set link up/down an Ethernet device.
+
+* **[related]API**: ``rte_eth_dev_set_link_up()``, 
``rte_eth_dev_set_link_down()``.
 
 .. _nic_features_link_status_event:
 
-- 
2.33.0
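
[Editor's note: a minimal, hedged sketch of the ethdev calls this feature
entry documents; error handling trimmed.]

#include <stdio.h>
#include <rte_ethdev.h>

/* Toggle the link administratively and read back the link status. */
static int
toggle_link(uint16_t port_id)
{
    struct rte_eth_link link;
    int ret;

    ret = rte_eth_dev_set_link_down(port_id);
    if (ret != 0)
        return ret; /* -ENOTSUP if the PMD lacks dev_set_link_down */

    ret = rte_eth_dev_set_link_up(port_id);
    if (ret != 0)
        return ret;

    /* Non-blocking variant: rte_eth_link_get_nowait(port_id, &link). */
    ret = rte_eth_link_get(port_id, &link);
    if (ret == 0 && link.link_status == RTE_ETH_LINK_UP)
        printf("port %u is up at %u Mbps\n", port_id, link.link_speed);
    return ret;
}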



[PATCH 1/5] doc/features: add RSS hash algorithm feature

2023-11-23 Thread Huisong Li
Add hash algorithm feature introduced by 23.11 and fix some RSS features
description.

Fixes: 34ff088cc241 ("ethdev: set and query RSS hash algorithm")

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 25 +
 1 file changed, 21 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1a1dc16c1e..dfa6f087c8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -277,10 +277,12 @@ RSS hash
 Supports RSS hashing on RX.
 
 * **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = 
``RTE_ETH_MQ_RX_RSS_FLAG``.
-* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
+* **[uses] user config**: ``rss_conf.rss_hf``.
 * **[uses] rte_eth_rxconf,rte_eth_rxmode**: 
``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
+* **[related]  API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update``
+  ``rte_eth_dev_rss_hash_conf_get()``.
 
 
 .. _nic_features_inner_rss:
@@ -288,7 +290,7 @@ Supports RSS hashing on RX.
 Inner RSS
 -
 
-Supports RX RSS hashing on Inner headers.
+Supports RX RSS hashing on Inner headers by rte_flow API.
 
 * **[uses]rte_flow_action_rss**: ``level``.
 * **[uses]rte_eth_rxconf,rte_eth_rxmode**: 
``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
@@ -303,9 +305,24 @@ RSS key update
 Supports configuration of Receive Side Scaling (RSS) hash computation. Updating
 Receive Side Scaling (RSS) hash key.
 
-* **[implements] eth_dev_ops**: ``rss_hash_update``, ``rss_hash_conf_get``.
+* **[implements] eth_dev_ops**: ``dev_configure``, ``rss_hash_update``, 
``rss_hash_conf_get``.
+* **[uses] user config**: ``rss_conf.rss_key``, ``rss_conf.rss_key_len``
 * **[provides]   rte_eth_dev_info**: ``hash_key_size``.
-* **[related]API**: ``rte_eth_dev_rss_hash_update()``,
+* **[related]API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update()``,
+  ``rte_eth_dev_rss_hash_conf_get()``.
+
+.. _nic_features_rss_hash_algo_update:
+
+RSS hash algorithm update
+--
+
+Supports configuration of Receive Side Scaling (RSS) hash algorithm. Updating
+RSS hash algorithm.
+
+* **[implements] eth_dev_ops**: ``dev_configure``, ``rss_hash_update``, 
``rss_hash_conf_get``.
+* **[uses] user config**: ``rss_conf.algorithm``
+* **[provides]   rte_eth_dev_info**: ``rss_algo_capa``.
+* **[related]API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update()``,
   ``rte_eth_dev_rss_hash_conf_get()``.
 
 
-- 
2.33.0
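
[Editor's note: a hedged sketch of the new RSS hash algorithm usage described
above. The 'algorithm' field and 'rss_algo_capa' come from commit 34ff088cc241;
treat the exact capability check as something to verify in rte_ethdev.h.]

#include <rte_ethdev.h>

/* Switch the RSS hash algorithm to symmetric Toeplitz (added in 23.11),
 * keeping the currently configured hash types and key. */
static int
set_symmetric_toeplitz(uint16_t port_id)
{
    struct rte_eth_rss_conf rss_conf = { 0 };
    int ret;

    /* rss_key is left NULL, so only rss_hf/algorithm are retrieved. */
    ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
    if (ret != 0)
        return ret;

    rss_conf.algorithm = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ;
    return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}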



[PATCH 4/5] doc/features: add Traffic Manager features

2023-11-23 Thread Huisong Li
Add Traffic Manager features.

Fixes: 5d109deffa87 ("ethdev: add traffic management API")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 13 +
 1 file changed, 13 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a5026a0aa8..69fbf737b8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -761,6 +761,19 @@ Supports congestion management.
   ``rte_eth_cman_config_set()``, ``rte_eth_cman_config_get()``.
 
 
+.. _nic_features_traffic_manager:
+
+Traffic manager
+-
+
+Supports Traffic manager.
+
+* **[implements] rte_tm_ops**: ``capabilities_get``, ``shaper_profile_add``,
+  ``hierarchy_commit`` and so on.
+* **[related]API**: ``rte_tm_capabilities_get()``, 
``rte_tm_shaper_profile_add()``,
+  ``rte_tm_hierarchy_commit()`` and so on.
+
+
 .. _nic_features_fw_version:
 
 FW version
-- 
2.33.0
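
[Editor's note: a minimal, hedged sketch of the Traffic Manager API usage this
entry documents. Only the capability query is shown; a real setup would go on
to add shaper profiles, nodes and then call rte_tm_hierarchy_commit().]

#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

/* Query Traffic Manager capabilities before building a hierarchy;
 * rte_tm_error carries the failure cause on error. */
static int
query_tm_caps(uint16_t port_id)
{
    struct rte_tm_capabilities cap;
    struct rte_tm_error error;
    int ret;

    memset(&cap, 0, sizeof(cap));
    ret = rte_tm_capabilities_get(port_id, &cap, &error);
    if (ret != 0)
        return ret; /* e.g. the PMD does not implement rte_tm_ops */

    printf("TM levels: %u, max nodes: %u\n", cap.n_levels_max, cap.n_nodes_max);
    return 0;
}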



[PATCH 3/5] doc/features: add config interface for speed capabilities

2023-11-23 Thread Huisong Li
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1c0cf2cea7..a5026a0aa8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -31,6 +31,7 @@ Speed capabilities
 Supports getting the speed capabilities that the current device is capable of.
 
 * **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
+* **[uses] user config**: ``dev_conf.link_speeds``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
-- 
2.33.0



[PATCH 0/5] doc/features: fix some features and add new features

2023-11-23 Thread Huisong Li
This series adds the configuration interface for some features and adds
several new features that have already been supported.

Huisong Li (5):
  doc/features: add RSS hash algorithm feature
  doc/features: add link up/down feature
  doc/features: add config interface for speed capabilities
  doc/features: add Traffic Manager features
  doc/features: add dump device private information feature

 doc/guides/nics/features.rst | 56 
 1 file changed, 51 insertions(+), 5 deletions(-)

-- 
2.33.0



[PATCH 5/5] doc/features: add dump device private information feature

2023-11-23 Thread Huisong Li
Add dump device private information feature.

Fixes: edcf22c6d389 ("ethdev: introduce dump API")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 12 
 1 file changed, 12 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 69fbf737b8..bae1a1faab 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -820,6 +820,18 @@ registers and register size).
 * **[related]API**: ``rte_eth_dev_get_reg_info()``.
 
 
+.. _nic_features_device_private_info_dump:
+
+Device private information dump
+--
+
+Supports retrieving device private information to a file. Provided data and
+the order depends on PMD.
+
+* **[mplements] eth_dev_ops**: ``eth_dev_priv_dump``.
+* **[related]API**: ``rte_eth_dev_priv_dump()``.
+
+
 .. _nic_features_led:
 
 LED
-- 
2.33.0
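
[Editor's note: a short, hedged sketch of the API this entry documents; the
output format is PMD-specific.]

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Dump the PMD's private state to a file for debugging. */
static int
dump_port_private_info(uint16_t port_id)
{
    FILE *f = fopen("/tmp/port_priv.txt", "w");
    int ret;

    if (f == NULL)
        return -errno;
    ret = rte_eth_dev_priv_dump(port_id, f);
    fclose(f);
    return ret; /* -ENOTSUP if eth_dev_priv_dump is not implemented */
}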



Re: [PATCH 5/5] doc/features: add dump device private information feature

2023-11-23 Thread Thomas Monjalon
23/11/2023 14:59, Huisong Li:
> +.. _nic_features_device_private_info_dump:
> +
> +Device private information dump
> +--

Why underlining is too short?

> +
> +Supports retrieving device private information to a file. Provided data and
> +the order depends on PMD.
> +
> +* **[mplements] eth_dev_ops**: ``eth_dev_priv_dump``.

looks like a typo here

> +* **[related]API**: ``rte_eth_dev_priv_dump()``.





[Bug 1330] rte_thread_* wrappers missing in DPDK

2023-11-23 Thread bugzilla
https://bugs.dpdk.org/show_bug.cgi?id=1330

Bug ID: 1330
   Summary: rte_thread_* wrappers missing in DPDK
   Product: DPDK
   Version: 23.11
  Hardware: All
OS: Linux
Status: UNCONFIRMED
  Severity: normal
  Priority: Normal
 Component: other
  Assignee: dev@dpdk.org
  Reporter: vo...@cesnet.cz
  Target Milestone: ---

In DPDK 23.11, rte_ctrl_thread_create has been replaced by
rte_thread_create_control, encouraging Linux users to switch from the pthread_t
API to the rte_thread API.

The problem is that the rte_thread API does not currently provide wrappers for
all pthread functions. There are no equivalent functions for
pthread_timedjoin_np, pthread_getname_np and pthread_cancel.

Would it be possible to add support for these functions as well?

-- 
You are receiving this mail because:
You are the assignee for the bug.
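
[Editor's note: the migration the report refers to looks roughly like the
sketch below, written against the 23.11 rte_thread API as I read rte_thread.h;
double-check the exact signatures there.]

#include <rte_thread.h>

/* rte_thread_func returns uint32_t instead of void *. */
static uint32_t
ctrl_loop(void *arg)
{
    (void)arg;
    /* ... control-plane work ... */
    return 0;
}

static int
start_ctrl_thread(void)
{
    rte_thread_t tid;
    uint32_t retval;
    int ret;

    /* 23.11 replacement for rte_ctrl_thread_create(). */
    ret = rte_thread_create_control(&tid, "app-ctrl", ctrl_loop, NULL);
    if (ret != 0)
        return ret;

    /* A plain join exists; the timed/named/cancel pthread_*_np helpers the
     * report mentions have no rte_thread equivalent yet. */
    return rte_thread_join(tid, &retval);
}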

Re: [RFC PATCH 1/5] bus: new driver to accept shared memory over unix socket

2023-11-23 Thread Jerin Jacob
On Fri, Sep 22, 2023 at 1:49 PM Bruce Richardson
 wrote:
>
> Add a new driver to DPDK which supports taking in memory e.g. hugepage
> memory via a unix socket connection and maps it into the DPDK process
> replacing the current socket memory as the default memory for use by
> future requests.
>
> Signed-off-by: Bruce Richardson 

Thanks Bruce for this work. IMO, this will open up a lot of use cases,
like CPU-based offload in a different process.

> +
> +enum shared_mem_msg_type {
> +   MSG_TYPE_ACK = 0,
> +   MSG_TYPE_MMAP_BASE_ADDR,
> +   MSG_TYPE_MEMPOOL_OFFSET,
> +   MSG_TYPE_RX_RING_OFFSET,
> +   MSG_TYPE_TX_RING_OFFSET,
> +   MSG_TYPE_START,
> +   MSG_TYPE_GET_MAC,
> +   MSG_TYPE_REPORT_MAC,
> +};

In order to cater to different use cases, IMO, drivers/bus/shared_mem/
should be generic and act only
as a transport for communicating with the other process. That would
translate to the following:
1) drivers/bus/shared_mem/ provides a means to register a message type and
its callback.
2) The consumers (drivers/mempool/shared_mem and drivers/net/sharedmem)
register the callback. The definition of the callback and the message type
should be in the consumer.

Also, we may rename bus/sharedmem to bus/socket or so, to limit
the scope of the bus
driver to a communication mechanism for talking to a different process. That
way different DPDK drivers
can be based on the socket bus in the future.
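
[Editor's note: to illustrate the suggestion, a hypothetical sketch only --
none of the names below exist in DPDK; MSG_TYPE_REPORT_MAC is taken from the
RFC's enum above.]

#include <stddef.h>
#include <stdint.h>

enum { MSG_TYPE_REPORT_MAC = 7 }; /* from the RFC's shared_mem_msg_type */

/* Hypothetical: the bus driver only transports messages; consumers register
 * the message types they own and the callback invoked on arrival. */
typedef int (*shared_mem_msg_cb_t)(uint16_t msg_type,
        const void *payload, size_t len, void *cb_arg);

int rte_shared_mem_bus_register_msg(uint16_t msg_type,
        shared_mem_msg_cb_t cb, void *cb_arg);

/* A consumer such as drivers/net/sharedmem would then do: */
static int
handle_mac_report(uint16_t msg_type, const void *payload, size_t len,
        void *cb_arg)
{
    /* copy the reported MAC address into the port's private data */
    (void)msg_type; (void)payload; (void)len; (void)cb_arg;
    return 0;
}

static int
sharedmem_net_init(void)
{
    return rte_shared_mem_bus_register_msg(MSG_TYPE_REPORT_MAC,
            handle_mac_report, NULL);
}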


[PATCH v8 00/21] dts: docstrings update

2023-11-23 Thread Juraj Linkeš
The first commit makes changes to the code. These code changes mainly
change the structure of the code so that the actual API docs generation
works. There are also some code changes which get reflected in the
documentation, such as making functions/methods/attributes private or
public.

The rest of the commits deal with the actual docstring documentation
(from which the API docs are generated). The format of the docstrings
is the Google format [0] with PEP257 [1] and some guidelines captured
in the last commit of this group covering what the Google format
doesn't.
The docstring updates are split into many commits to make review
possible. When accepted, they may be squashed.
The docstrings have been composed in anticipation of [2], adhering to
maximum line length of 100. We don't have a tool for automatic docstring
formatting, hence the usage of 100 right away to save time.

NOTE: The logger.py module is not fully documented, as it's being
refactored and the refactor will be submitted in the near future.
Documenting it now seems unnecessary.

[0] https://google.github.io/styleguide/pyguide.html#s3.8.4-comments-in-classes
[1] https://peps.python.org/pep-0257/
[2] https://patches.dpdk.org/project/dpdk/list/?series=29844

v7:
Split the series into docstrings and api docs generation and addressed
comments.

v8:
Addressed review comments, all of which were pretty minor - small
grammatical changes, a little bit of rewording to remove confusion here
and there, additional explanations and so on.

Juraj Linkeš (21):
  dts: code adjustments for doc generation
  dts: add docstring checker
  dts: add basic developer docs
  dts: exceptions docstring update
  dts: settings docstring update
  dts: logger and utils docstring update
  dts: dts runner and main docstring update
  dts: test suite docstring update
  dts: test result docstring update
  dts: config docstring update
  dts: remote session docstring update
  dts: interactive remote session docstring update
  dts: port and virtual device docstring update
  dts: cpu docstring update
  dts: os session docstring update
  dts: posix and linux sessions docstring update
  dts: node docstring update
  dts: sut and tg nodes docstring update
  dts: base traffic generators docstring update
  dts: scapy tg docstring update
  dts: test suites docstring update

 doc/guides/tools/dts.rst  |  73 +++
 dts/framework/__init__.py |  12 +-
 dts/framework/config/__init__.py  | 375 +---
 dts/framework/config/types.py | 132 ++
 dts/framework/dts.py  | 162 +--
 dts/framework/exception.py| 156 ---
 dts/framework/logger.py   |  72 ++-
 dts/framework/remote_session/__init__.py  |  80 ++--
 .../interactive_remote_session.py |  36 +-
 .../remote_session/interactive_shell.py   | 150 +++
 dts/framework/remote_session/os_session.py| 284 
 dts/framework/remote_session/python_shell.py  |  32 ++
 .../remote_session/remote/__init__.py |  27 --
 .../remote/interactive_shell.py   | 131 --
 .../remote_session/remote/python_shell.py |  12 -
 .../remote_session/remote/remote_session.py   | 168 ---
 .../remote_session/remote/testpmd_shell.py|  45 --
 .../remote_session/remote_session.py  | 230 ++
 .../{remote => }/ssh_session.py   |  28 +-
 dts/framework/remote_session/testpmd_shell.py |  83 
 dts/framework/settings.py | 188 ++--
 dts/framework/test_result.py  | 301 ++---
 dts/framework/test_suite.py   | 236 +++---
 dts/framework/testbed_model/__init__.py   |  29 +-
 dts/framework/testbed_model/{hw => }/cpu.py   | 209 ++---
 dts/framework/testbed_model/hw/__init__.py|  27 --
 dts/framework/testbed_model/hw/port.py|  60 ---
 .../testbed_model/hw/virtual_device.py|  16 -
 .../linux_session.py  |  70 ++-
 dts/framework/testbed_model/node.py   | 214 ++---
 dts/framework/testbed_model/os_session.py | 422 ++
 dts/framework/testbed_model/port.py   |  93 
 .../posix_session.py  |  85 +++-
 dts/framework/testbed_model/sut_node.py   | 238 ++
 dts/framework/testbed_model/tg_node.py|  69 ++-
 .../testbed_model/traffic_generator.py|  72 ---
 .../traffic_generator/__init__.py |  43 ++
 .../capturing_traffic_generator.py|  49 +-
 .../{ => traffic_generator}/scapy.py  | 110 +++--
 .../traffic_generator/traffic_generator.py|  85 
 dts/framework/testbed_model/virtual_device.py |  29 ++
 dts/framework/utils.py| 122 ++---
 dts/main.py   |  19 +-
 dts/poetry.lock   |  12 +-
 dts/pyproject.toml|   6 +-
 dts/tes

[PATCH v8 01/21] dts: code adjustments for doc generation

2023-11-23 Thread Juraj Linkeš
The standard Python tool for generating API documentation, Sphinx,
imports modules one-by-one when generating the documentation. This
requires code changes:
* properly guarding argument parsing in the if __name__ == '__main__'
  block,
* the logger used by DTS runner underwent the same treatment so that it
  doesn't create log files outside of a DTS run,
* however, DTS uses the arguments to construct an object holding global
  variables. The defaults for the global variables needed to be moved
  out of argument parsing and defined elsewhere,
* importing the remote_session module from framework resulted in
  circular imports because of one module trying to import another
  module. This is fixed by reorganizing the code,
* some code reorganization was done because the resulting structure
  makes more sense, improving documentation clarity.

There are some other changes which are documentation related:
* added missing type annotations so they appear in the generated docs,
* reordered arguments in some methods,
* removed superfluous arguments and attributes,
* changed some functions/methods/attributes from public to private and vice-versa.

All of the above appear in the generated documentation and, with them,
the documentation is improved.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/config/__init__.py  |  8 +-
 dts/framework/dts.py  | 31 +--
 dts/framework/exception.py| 54 +---
 dts/framework/remote_session/__init__.py  | 41 +
 .../interactive_remote_session.py |  0
 .../{remote => }/interactive_shell.py |  0
 .../{remote => }/python_shell.py  |  0
 .../remote_session/remote/__init__.py | 27 --
 .../{remote => }/remote_session.py|  0
 .../{remote => }/ssh_session.py   | 12 +--
 .../{remote => }/testpmd_shell.py |  0
 dts/framework/settings.py | 85 +++
 dts/framework/test_result.py  |  4 +-
 dts/framework/test_suite.py   |  7 +-
 dts/framework/testbed_model/__init__.py   | 12 +--
 dts/framework/testbed_model/{hw => }/cpu.py   | 13 +++
 dts/framework/testbed_model/hw/__init__.py| 27 --
 .../linux_session.py  |  6 +-
 dts/framework/testbed_model/node.py   | 23 +++--
 .../os_session.py | 22 ++---
 dts/framework/testbed_model/{hw => }/port.py  |  0
 .../posix_session.py  |  4 +-
 dts/framework/testbed_model/sut_node.py   |  8 +-
 dts/framework/testbed_model/tg_node.py| 29 +--
 .../traffic_generator/__init__.py | 23 +
 .../capturing_traffic_generator.py|  4 +-
 .../{ => traffic_generator}/scapy.py  | 19 ++---
 .../traffic_generator.py  | 14 ++-
 .../testbed_model/{hw => }/virtual_device.py  |  0
 dts/framework/utils.py| 40 +++--
 dts/main.py   |  9 +-
 31 files changed, 244 insertions(+), 278 deletions(-)
 rename dts/framework/remote_session/{remote => }/interactive_remote_session.py 
(100%)
 rename dts/framework/remote_session/{remote => }/interactive_shell.py (100%)
 rename dts/framework/remote_session/{remote => }/python_shell.py (100%)
 delete mode 100644 dts/framework/remote_session/remote/__init__.py
 rename dts/framework/remote_session/{remote => }/remote_session.py (100%)
 rename dts/framework/remote_session/{remote => }/ssh_session.py (91%)
 rename dts/framework/remote_session/{remote => }/testpmd_shell.py (100%)
 rename dts/framework/testbed_model/{hw => }/cpu.py (95%)
 delete mode 100644 dts/framework/testbed_model/hw/__init__.py
 rename dts/framework/{remote_session => testbed_model}/linux_session.py (97%)
 rename dts/framework/{remote_session => testbed_model}/os_session.py (95%)
 rename dts/framework/testbed_model/{hw => }/port.py (100%)
 rename dts/framework/{remote_session => testbed_model}/posix_session.py (98%)
 create mode 100644 dts/framework/testbed_model/traffic_generator/__init__.py
 rename dts/framework/testbed_model/{ => 
traffic_generator}/capturing_traffic_generator.py (98%)
 rename dts/framework/testbed_model/{ => traffic_generator}/scapy.py (95%)
 rename dts/framework/testbed_model/{ => 
traffic_generator}/traffic_generator.py (81%)
 rename dts/framework/testbed_model/{hw => }/virtual_device.py (100%)

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index 9b32cf0532..ef25a463c0 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -17,6 +17,7 @@
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
 
@@ -89,7 +90,7 @@ class TrafficGeneratorConfig:
 traffic_generator_type: TrafficGeneratorType
 
 @staticmethod
-def from_dict(d: dict):
+def from_dict(d:
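
The commit message's first point, guarding argument parsing so that Sphinx can
import the module without side effects, follows a common pattern; a minimal
standalone sketch (illustrative names only, not the actual DTS code) is:

    import argparse


    def _get_parser() -> argparse.ArgumentParser:
        # Building the parser has no side effects at import time.
        parser = argparse.ArgumentParser(description="example runner")
        parser.add_argument("--config-file", default="conf.yaml")
        return parser


    def main() -> None:
        # Arguments are parsed only when the script is actually executed,
        # so documentation tools can import this module safely.
        args = _get_parser().parse_args()
        print(f"using config file {args.config_file}")


    if __name__ == "__main__":
        main()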

[PATCH v8 02/21] dts: add docstring checker

2023-11-23 Thread Juraj Linkeš
Python docstrings are the in-code way to document the code. The
docstring checker of choice is pydocstyle which we're executing from
Pylama, but the current latest versions are not compatible due to [0],
so pin the pydocstyle version to the latest working version.

[0] https://github.com/klen/pylama/issues/232

Signed-off-by: Juraj Linkeš 
---
 dts/poetry.lock| 12 ++--
 dts/pyproject.toml |  6 +-
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/dts/poetry.lock b/dts/poetry.lock
index f7b3b6d602..a734fa71f0 100644
--- a/dts/poetry.lock
+++ b/dts/poetry.lock
@@ -489,20 +489,20 @@ files = [
 
 [[package]]
 name = "pydocstyle"
-version = "6.3.0"
+version = "6.1.1"
 description = "Python docstring style checker"
 optional = false
 python-versions = ">=3.6"
 files = [
-{file = "pydocstyle-6.3.0-py3-none-any.whl", hash = 
"sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"},
-{file = "pydocstyle-6.3.0.tar.gz", hash = 
"sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"},
+{file = "pydocstyle-6.1.1-py3-none-any.whl", hash = 
"sha256:6987826d6775056839940041beef5c08cc7e3d71d63149b48e36727f70144dc4"},
+{file = "pydocstyle-6.1.1.tar.gz", hash = 
"sha256:1d41b7c459ba0ee6c345f2eb9ae827cab14a7533a88c5c6f7e94923f72df92dc"},
 ]
 
 [package.dependencies]
-snowballstemmer = ">=2.2.0"
+snowballstemmer = "*"
 
 [package.extras]
-toml = ["tomli (>=1.2.3)"]
+toml = ["toml"]
 
 [[package]]
 name = "pyflakes"
@@ -837,4 +837,4 @@ jsonschema = ">=4,<5"
 [metadata]
 lock-version = "2.0"
 python-versions = "^3.10"
-content-hash = 
"0b1e4a1cb8323e17e5ee5951c97e74bde6e60d0413d7b25b1803d5b2bab39639"
+content-hash = 
"3501e97b3dadc19fe8ae179fe21b1edd2488001da9a8e86ff2bca0b86b99b89b"
diff --git a/dts/pyproject.toml b/dts/pyproject.toml
index 980ac3c7db..37a692d655 100644
--- a/dts/pyproject.toml
+++ b/dts/pyproject.toml
@@ -25,6 +25,7 @@ PyYAML = "^6.0"
 types-PyYAML = "^6.0.8"
 fabric = "^2.7.1"
 scapy = "^2.5.0"
+pydocstyle = "6.1.1"
 
 [tool.poetry.group.dev.dependencies]
 mypy = "^0.961"
@@ -39,10 +40,13 @@ requires = ["poetry-core>=1.0.0"]
 build-backend = "poetry.core.masonry.api"
 
 [tool.pylama]
-linters = "mccabe,pycodestyle,pyflakes"
+linters = "mccabe,pycodestyle,pydocstyle,pyflakes"
 format = "pylint"
 max_line_length = 100
 
+[tool.pylama.linter.pydocstyle]
+convention = "google"
+
 [tool.mypy]
 python_version = "3.10"
 enable_error_code = ["ignore-without-code"]
-- 
2.34.1



[PATCH v8 03/21] dts: add basic developer docs

2023-11-23 Thread Juraj Linkeš
Expand the framework contribution guidelines and add how to document the
code with Python docstrings.

Signed-off-by: Juraj Linkeš 
---
 doc/guides/tools/dts.rst | 73 
 1 file changed, 73 insertions(+)

diff --git a/doc/guides/tools/dts.rst b/doc/guides/tools/dts.rst
index 32c18ee472..cd771a428c 100644
--- a/doc/guides/tools/dts.rst
+++ b/doc/guides/tools/dts.rst
@@ -264,6 +264,65 @@ which be changed with the ``--output-dir`` command line 
argument.
 The results contain basic statistics of passed/failed test cases and DPDK 
version.
 
 
+Contributing to DTS
+---
+
+There are two areas of contribution: The DTS framework and DTS test suites.
+
+The framework contains the logic needed to run test cases, such as connecting 
to nodes,
+running DPDK apps and collecting results.
+
+The test cases call APIs from the framework to test their scenarios. Adding 
test cases may
+require adding code to the framework as well.
+
+
+Framework Coding Guidelines
+~~~
+
+When adding code to the DTS framework, pay attention to the rest of the code
+and try not to divert much from it. The :ref:`DTS developer tools 
` will issue
+warnings when some of the basics are not met.
+
+The code must be properly documented with docstrings. The style must conform to
+the `Google style 
`_.
+See an example of the style
+`here 
`_.
+For cases which are not covered by the Google style, refer
+to `PEP 257 `_. There are some cases which 
are not covered by
+the two style guides, where we deviate or where some additional clarification 
is helpful:
+
+   * The __init__() methods of classes are documented separately from the 
docstring of the class
+ itself.
+   * The docstrings of implemented abstract methods should refer to the 
superclass's definition
+ if there's no deviation.
+   * Instance variables/attributes should be documented in the docstring of 
the class
+ in the ``Attributes:`` section.
+   * The dataclass.dataclass decorator changes how the attributes are 
processed. The dataclass
+ attributes which result in instance variables/attributes should also be 
recorded
+ in the ``Attributes:`` section.
+   * Class variables/attributes, on the other hand, should be documented with 
``#:`` above
+ the type annotated line. The description may be omitted if the meaning is 
obvious.
+   * The Enum and TypedDict also process the attributes in particular ways and 
should be documented
+ with ``#:`` as well. This is mainly so that the autogenerated docs 
contain the assigned value.
+   * When referencing a parameter of a function or a method in their 
docstring, don't use
+ any articles and put the parameter into single backticks. This mimics the 
style of
+ `Python's documentation `_.
+   * When specifying a value, use double backticks::
+
+def foo(greet: bool) -> None:
+"""Demonstration of single and double backticks.
+
+`greet` controls whether ``Hello World`` is printed.
+
+Args:
+   greet: Whether to print the ``Hello World`` message.
+"""
+if greet:
+   print(f"Hello World")
+
+   * The docstring maximum line length is the same as the code maximum line 
length.
+
+
 How To Write a Test Suite
 -
 
@@ -293,6 +352,18 @@ There are four types of methods that comprise a test suite:
| These methods don't need to be implemented if there's no need for them in 
a test suite.
 In that case, nothing will happen when they're executed.
 
+#. **Configuration, traffic and other logic**
+
+   The ``TestSuite`` class contains a variety of methods for anything that
+   a test suite setup, a teardown, or a test case may need to do.
+
+   The test suites also frequently use a DPDK app, such as testpmd, in 
interactive mode
+   and use the interactive shell instances directly.
+
+   These are the two main ways to call the framework logic in test suites. If 
there's any
+   functionality or logic missing from the framework, it should be implemented 
so that
+   the test suites can use one of these two ways.
+
 #. **Test case verification**
 
Test case verification should be done with the ``verify`` method, which 
records the result.
@@ -308,6 +379,8 @@ There are four types of methods that comprise a test suite:
and used by the test suite via the ``sut_node`` field.
 
 
+.. _dts_dev_tools:
+
 DTS Developer Tools
 ---
 
-- 
2.34.1
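
As a supplement to the guidelines above, a small made-up class combining the
``Attributes:`` section, the ``#:`` class variable comments and a documented
``__init__`` could look like this (not taken from the DTS code):

    from enum import Enum, auto


    class Flavor(Enum):
        """Example enum; the assigned values show up in the generated docs."""

        #:
        VANILLA = auto()
        #:
        CHOCOLATE = auto()


    class IceCream:
        """An example class.

        Attributes:
            flavor: The flavor of this ice cream.
            scoops: How many scoops were served.
        """

        #: The largest portion the example shop serves.
        MAX_SCOOPS = 3

        def __init__(self, flavor: Flavor, scoops: int = 1):
            """Initialize the portion with `flavor` and `scoops`.

            Args:
                flavor: The flavor to serve.
                scoops: The number of scoops, at most :attr:`MAX_SCOOPS`.
            """
            self.flavor = flavor
            self.scoops = min(scoops, self.MAX_SCOOPS)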



[PATCH v8 04/21] dts: exceptions docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/__init__.py  |  12 -
 dts/framework/exception.py | 106 +
 2 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/dts/framework/__init__.py b/dts/framework/__init__.py
index d551ad4bf0..662e6ccad2 100644
--- a/dts/framework/__init__.py
+++ b/dts/framework/__init__.py
@@ -1,3 +1,13 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2022 PANTHEON.tech s.r.o.
+# Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
+
+"""Libraries and utilities for running DPDK Test Suite (DTS).
+
+The various modules in the DTS framework offer:
+
+* Connections to nodes, both interactive and non-interactive,
+* A straightforward way to add support for different operating systems of 
remote nodes,
+* Test suite setup, execution and teardown, along with test case setup, 
execution and teardown,
+* Pre-test suite setup and post-test suite teardown.
+"""
diff --git a/dts/framework/exception.py b/dts/framework/exception.py
index 151e4d3aa9..658eee2c38 100644
--- a/dts/framework/exception.py
+++ b/dts/framework/exception.py
@@ -3,8 +3,10 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-User-defined exceptions used across the framework.
+"""DTS exceptions.
+
+The exceptions all have different severities expressed as an integer.
+The highest severity of all raised exceptions is used as the exit code of DTS.
 """
 
 from enum import IntEnum, unique
@@ -13,59 +15,79 @@
 
 @unique
 class ErrorSeverity(IntEnum):
-"""
-The severity of errors that occur during DTS execution.
+"""The severity of errors that occur during DTS execution.
+
 All exceptions are caught and the most severe error is used as return code.
 """
 
+#:
 NO_ERR = 0
+#:
 GENERIC_ERR = 1
+#:
 CONFIG_ERR = 2
+#:
 REMOTE_CMD_EXEC_ERR = 3
+#:
 SSH_ERR = 4
+#:
 DPDK_BUILD_ERR = 10
+#:
 TESTCASE_VERIFY_ERR = 20
+#:
 BLOCKING_TESTSUITE_ERR = 25
 
 
 class DTSError(Exception):
-"""
-The base exception from which all DTS exceptions are derived.
-Stores error severity.
+"""The base exception from which all DTS exceptions are subclassed.
+
+Do not use this exception, only use subclassed exceptions.
 """
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR
 
 
 class SSHTimeoutError(DTSError):
-"""
-Command execution timeout.
-"""
+"""The SSH execution of a command timed out."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _command: str
 
 def __init__(self, command: str):
+"""Define the meaning of the first argument.
+
+Args:
+command: The executed command.
+"""
 self._command = command
 
 def __str__(self) -> str:
-return f"TIMEOUT on {self._command}"
+"""Add some context to the string representation."""
+return f"{self._command} execution timed out."
 
 
 class SSHConnectionError(DTSError):
-"""
-SSH connection error.
-"""
+"""An unsuccessful SSH connection."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _host: str
 _errors: list[str]
 
 def __init__(self, host: str, errors: list[str] | None = None):
+"""Define the meaning of the first two arguments.
+
+Args:
+host: The hostname to which we're trying to connect.
+errors: Any errors that occurred during the connection attempt.
+"""
 self._host = host
 self._errors = [] if errors is None else errors
 
 def __str__(self) -> str:
+"""Include the errors in the string representation."""
 message = f"Error trying to connect with {self._host}."
 if self._errors:
 message += f" Errors encountered while retrying: {', 
'.join(self._errors)}"
@@ -74,76 +96,92 @@ def __str__(self) -> str:
 
 
 class SSHSessionDeadError(DTSError):
-"""
-SSH session is not alive.
-It can no longer be used.
-"""
+"""The SSH session is no longer alive."""
 
+#:
 severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR
 _host: str
 
 def __init__(self, host: str):
+"""Define the meaning of the first argument.
+
+Args:
+host: The hostname of the disconnected node.
+"""
 self._host = host
 
 def __str__(self) -> str:
-return f"SSH session with {self._host} has died"
+"""Add some context to the string representation."""
+return f"SSH session with {self._host} has died."
 
 
 class ConfigurationError(DTSError):
-"""
-Raised when an invalid configuration is encountered.
-"""
+"""An invalid configuration."""
 
+#:
 severity: ClassVar[ErrorSeverity] = 
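
A condensed, standalone sketch of the severity pattern shown in this patch;
the real classes live in dts/framework/exception.py and carry more detail:

    from enum import IntEnum, unique
    from typing import ClassVar


    @unique
    class ErrorSeverity(IntEnum):
        NO_ERR = 0
        GENERIC_ERR = 1
        SSH_ERR = 4


    class DTSError(Exception):
        """Base class; only subclasses are meant to be raised."""

        #: Overridden by subclasses to reflect how severe the error is.
        severity: ClassVar[ErrorSeverity] = ErrorSeverity.GENERIC_ERR


    class SSHTimeoutError(DTSError):
        """The SSH execution of a command timed out."""

        severity: ClassVar[ErrorSeverity] = ErrorSeverity.SSH_ERR

        def __init__(self, command: str):
            self._command = command

        def __str__(self) -> str:
            return f"{self._command} execution timed out."


    try:
        raise SSHTimeoutError("dpdk-testpmd")
    except DTSError as e:
        # The runner keeps the highest severity seen and uses it as the exit code.
        print(f"{e} (severity {e.severity.value})")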

[PATCH v8 05/21] dts: settings docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/settings.py | 103 +-
 1 file changed, 102 insertions(+), 1 deletion(-)

diff --git a/dts/framework/settings.py b/dts/framework/settings.py
index 25b5dcff22..41f98e8519 100644
--- a/dts/framework/settings.py
+++ b/dts/framework/settings.py
@@ -3,6 +3,72 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022 University of New Hampshire
 
+"""Environment variables and command line arguments parsing.
+
+This is a simple module utilizing the built-in argparse module to parse 
command line arguments,
+augment them with values from environment variables and make them available 
across the framework.
+
+The command line value takes precedence, followed by the environment variable 
value,
+followed by the default value defined in this module.
+
+The command line arguments along with the supported environment variables are:
+
+.. option:: --config-file
+.. envvar:: DTS_CFG_FILE
+
+The path to the YAML test run configuration file.
+
+.. option:: --output-dir, --output
+.. envvar:: DTS_OUTPUT_DIR
+
+The directory where DTS logs and results are saved.
+
+.. option:: --compile-timeout
+.. envvar:: DTS_COMPILE_TIMEOUT
+
+The timeout for compiling DPDK.
+
+.. option:: -t, --timeout
+.. envvar:: DTS_TIMEOUT
+
+The timeout for all DTS operations except for compiling DPDK.
+
+.. option:: -v, --verbose
+.. envvar:: DTS_VERBOSE
+
+Set to any value to enable logging everything to the console.
+
+.. option:: -s, --skip-setup
+.. envvar:: DTS_SKIP_SETUP
+
+Set to any value to skip building DPDK.
+
+.. option:: --tarball, --snapshot, --git-ref
+.. envvar:: DTS_DPDK_TARBALL
+
+The path to a DPDK tarball, git commit ID, tag ID or tree ID to test.
+
+.. option:: --test-cases
+.. envvar:: DTS_TESTCASES
+
+A comma-separated list of test cases to execute. Unknown test cases will 
be silently ignored.
+
+.. option:: --re-run, --re_run
+.. envvar:: DTS_RERUN
+
+Re-run each test case this many times in case of a failure.
+
+The module provides one key module-level variable:
+
+Attributes:
+SETTINGS: The module level variable storing framework-wide DTS settings.
+
+Typical usage example::
+
+  from framework.settings import SETTINGS
+  foo = SETTINGS.foo
+"""
+
 import argparse
 import os
 from collections.abc import Callable, Iterable, Sequence
@@ -16,6 +82,23 @@
 
 
 def _env_arg(env_var: str) -> Any:
+"""A helper method augmenting the argparse Action with environment 
variables.
+
+If the supplied environment variable is defined, then the default value
+of the argument is modified. This satisfies the priority order of
+command line argument > environment variable > default value.
+
+Arguments with no values (flags) should be defined using the const keyword 
argument
+(True or False). When the argument is specified, it will be set to const, 
if not specified,
+the default will be stored (possibly modified by the corresponding 
environment variable).
+
+Other arguments work the same as default argparse arguments, that is using
+the default 'store' action.
+
+Returns:
+  The modified argparse.Action.
+"""
+
 class _EnvironmentArgument(argparse.Action):
 def __init__(
 self,
@@ -68,14 +151,28 @@ def __call__(
 
 @dataclass(slots=True)
 class Settings:
+"""Default framework-wide user settings.
+
+The defaults may be modified at the start of the run.
+"""
+
+#:
 config_file_path: Path = Path(__file__).parent.parent.joinpath("conf.yaml")
+#:
 output_dir: str = "output"
+#:
 timeout: float = 15
+#:
 verbose: bool = False
+#:
 skip_setup: bool = False
+#:
 dpdk_tarball_path: Path | str = "dpdk.tar.xz"
+#:
 compile_timeout: float = 1200
+#:
 test_cases: list[str] = field(default_factory=list)
+#:
 re_run: int = 0
 
 
@@ -166,7 +263,7 @@ def _get_parser() -> argparse.ArgumentParser:
 action=_env_arg("DTS_RERUN"),
 default=SETTINGS.re_run,
 type=int,
-help="[DTS_RERUN] Re-run each test case the specified amount of times "
+help="[DTS_RERUN] Re-run each test case the specified number of times "
 "if a test failure occurs",
 )
 
@@ -174,6 +271,10 @@ def _get_parser() -> argparse.ArgumentParser:
 
 
 def get_settings() -> Settings:
+"""Create new settings with inputs from the user.
+
+The inputs are taken from the command line and from environment variables.
+"""
 parsed_args = _get_parser().parse_args()
 return Settings(
 config_file_path=parsed_args.config_file,
-- 
2.34.1
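
The precedence described in the module docstring (command line > environment
variable > default) boils down to seeding the argparse default from the
environment; a standalone sketch, simplified from the custom action in
dts/framework/settings.py:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--output-dir",
        # DTS_OUTPUT_DIR replaces the hard-coded default if set,
        # but an explicit --output-dir on the command line still wins.
        default=os.environ.get("DTS_OUTPUT_DIR", "output"),
        help="[DTS_OUTPUT_DIR] Directory where DTS logs and results are saved.",
    )
    args = parser.parse_args([])  # empty argv just for demonstration
    print(args.output_dir)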



[PATCH v8 06/21] dts: logger and utils docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/logger.py | 72 ++---
 dts/framework/utils.py  | 88 +
 2 files changed, 113 insertions(+), 47 deletions(-)

diff --git a/dts/framework/logger.py b/dts/framework/logger.py
index bb2991e994..cfa6e8cd72 100644
--- a/dts/framework/logger.py
+++ b/dts/framework/logger.py
@@ -3,9 +3,9 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-DTS logger module with several log level. DTS framework and TestSuite logs
-are saved in different log files.
+"""DTS logger module.
+
+DTS framework and TestSuite logs are saved in different log files.
 """
 
 import logging
@@ -18,19 +18,21 @@
 stream_fmt = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 
 
-class LoggerDictType(TypedDict):
-logger: "DTSLOG"
-name: str
-node: str
-
+class DTSLOG(logging.LoggerAdapter):
+"""DTS logger adapter class for framework and testsuites.
 
-# List for saving all using loggers
-Loggers: list[LoggerDictType] = []
+The :option:`--verbose` command line argument and the 
:envvar:`DTS_VERBOSE` environment
+variable control the verbosity of output. If enabled, all messages will be 
emitted to the
+console.
 
+The :option:`--output` command line argument and the 
:envvar:`DTS_OUTPUT_DIR` environment
+variable modify the directory where the logs will be stored.
 
-class DTSLOG(logging.LoggerAdapter):
-"""
-DTS log class for framework and testsuite.
+Attributes:
+node: The additional identifier. Currently unused.
+sh: The handler which emits logs to console.
+fh: The handler which emits logs to a file.
+verbose_fh: Just as fh, but logs with a different, more verbose, 
format.
 """
 
 _logger: logging.Logger
@@ -40,6 +42,15 @@ class DTSLOG(logging.LoggerAdapter):
 verbose_fh: logging.FileHandler
 
 def __init__(self, logger: logging.Logger, node: str = "suite"):
+"""Extend the constructor with additional handlers.
+
+One handler logs to the console, the other one to a file, with either 
a regular or verbose
+format.
+
+Args:
+logger: The logger from which to create the logger adapter.
+node: An additional identifier. Currently unused.
+"""
 self._logger = logger
 # 1 means log everything, this will be used by file handlers if their 
level
 # is not set
@@ -92,26 +103,43 @@ def __init__(self, logger: logging.Logger, node: str = 
"suite"):
 super(DTSLOG, self).__init__(self._logger, dict(node=self.node))
 
 def logger_exit(self) -> None:
-"""
-Remove stream handler and logfile handler.
-"""
+"""Remove the stream handler and the logfile handler."""
 for handler in (self.sh, self.fh, self.verbose_fh):
 handler.flush()
 self._logger.removeHandler(handler)
 
 
+class _LoggerDictType(TypedDict):
+logger: DTSLOG
+name: str
+node: str
+
+
+# List for saving all loggers in use
+_Loggers: list[_LoggerDictType] = []
+
+
 def getLogger(name: str, node: str = "suite") -> DTSLOG:
+"""Get DTS logger adapter identified by name and node.
+
+An existing logger will be returned if one with the exact name and node 
already exists.
+A new one will be created and stored otherwise.
+
+Args:
+name: The name of the logger.
+node: An additional identifier for the logger.
+
+Returns:
+A logger uniquely identified by both name and node.
 """
-Get logger handler and if there's no handler for specified Node will 
create one.
-"""
-global Loggers
+global _Loggers
 # return saved logger
-logger: LoggerDictType
-for logger in Loggers:
+logger: _LoggerDictType
+for logger in _Loggers:
 if logger["name"] == name and logger["node"] == node:
 return logger["logger"]
 
 # return new logger
 dts_logger: DTSLOG = DTSLOG(logging.getLogger(name), node)
-Loggers.append({"logger": dts_logger, "name": name, "node": node})
+_Loggers.append({"logger": dts_logger, "name": name, "node": node})
 return dts_logger
diff --git a/dts/framework/utils.py b/dts/framework/utils.py
index a0f2173949..cc5e458cc8 100644
--- a/dts/framework/utils.py
+++ b/dts/framework/utils.py
@@ -3,6 +3,16 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Various utility classes and functions.
+
+These are used in multiple modules across the framework. They're here because
+they provide some non-specific functionality, greatly simplify imports or just 
don't
+fit elsewhere.
+
+Attributes:
+REGEX_FOR_PCI_ADDRESS: The regex representing a PCI address, e.g. 
``:00:08.0``.
+"""
+
 import atexit
 import json
 import os
@@ -1
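
A standalone sketch of the logger caching shown above: one adapter per
(name, node) pair, reused on subsequent calls. The real DTSLOG adapter also
attaches console and file handlers, which are omitted here:

    import logging

    _loggers: dict[tuple[str, str], logging.LoggerAdapter] = {}


    def get_logger(name: str, node: str = "suite") -> logging.LoggerAdapter:
        key = (name, node)
        if key not in _loggers:
            _loggers[key] = logging.LoggerAdapter(logging.getLogger(name), {"node": node})
        return _loggers[key]


    assert get_logger("sut_node") is get_logger("sut_node")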

[PATCH v8 07/21] dts: dts runner and main docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/dts.py | 131 ---
 dts/main.py  |  10 ++--
 2 files changed, 116 insertions(+), 25 deletions(-)

diff --git a/dts/framework/dts.py b/dts/framework/dts.py
index 356368ef10..e16d4578a0 100644
--- a/dts/framework/dts.py
+++ b/dts/framework/dts.py
@@ -3,6 +3,33 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+r"""Test suite runner module.
+
+A DTS run is split into stages:
+
+#. Execution stage,
+#. Build target stage,
+#. Test suite stage,
+#. Test case stage.
+
+The module is responsible for running tests on testbeds defined in the test 
run configuration.
+Each setup or teardown of each stage is recorded in a 
:class:`~.test_result.DTSResult` or
+one of its subclasses. The test case results are also recorded.
+
+If an error occurs, the current stage is aborted, the error is recorded and 
the run continues in
+the next iteration of the same stage. The return code is the highest 
`severity` of all
+:class:`~.exception.DTSError`\s.
+
+Example:
+An error occurs in a build target setup. The current build target is 
aborted and the run
+continues with the next build target. If the errored build target was the 
last one in the given
+execution, the next execution begins.
+
+Attributes:
+dts_logger: The logger instance used in this module.
+result: The top level result used in the module.
+"""
+
 import sys
 
 from .config import (
@@ -23,9 +50,38 @@
 
 
 def run_all() -> None:
-"""
-The main process of DTS. Runs all build targets in all executions from the 
main
-config file.
+"""Run all build targets in all executions from the test run configuration.
+
+Before running test suites, executions and build targets are first set up.
+The executions and build targets defined in the test run configuration are 
iterated over.
+The executions define which tests to run and where to run them and build 
targets define
+the DPDK build setup.
+
+The tests suites are set up for each execution/build target tuple and each 
scheduled
+test case within the test suite is set up, executed and torn down. After 
all test cases
+have been executed, the test suite is torn down and the next build target 
will be tested.
+
+All the nested steps look like this:
+
+#. Execution setup
+
+#. Build target setup
+
+#. Test suite setup
+
+#. Test case setup
+#. Test case logic
+#. Test case teardown
+
+#. Test suite teardown
+
+#. Build target teardown
+
+#. Execution teardown
+
+The test cases are filtered according to the specification in the test run 
configuration and
+the :option:`--test-cases` command line argument or
+the :envvar:`DTS_TESTCASES` environment variable.
 """
 global dts_logger
 global result
@@ -87,6 +143,8 @@ def run_all() -> None:
 
 
 def _check_dts_python_version() -> None:
+"""Check the required Python version - v3.10."""
+
 def RED(text: str) -> str:
 return f"\u001B[31;1m{str(text)}\u001B[0m"
 
@@ -109,9 +167,16 @@ def _run_execution(
 execution: ExecutionConfiguration,
 result: DTSResult,
 ) -> None:
-"""
-Run the given execution. This involves running the execution setup as well 
as
-running all build targets in the given execution.
+"""Run the given execution.
+
+This involves running the execution setup as well as running all build 
targets
+in the given execution. After that, execution teardown is run.
+
+Args:
+sut_node: The execution's SUT node.
+tg_node: The execution's TG node.
+execution: An execution's test run configuration.
+result: The top level result object.
 """
 dts_logger.info(f"Running execution with SUT 
'{execution.system_under_test_node.name}'.")
 execution_result = result.add_execution(sut_node.config)
@@ -144,8 +209,18 @@ def _run_build_target(
 execution: ExecutionConfiguration,
 execution_result: ExecutionResult,
 ) -> None:
-"""
-Run the given build target.
+"""Run the given build target.
+
+This involves running the build target setup as well as running all test 
suites
+in the given execution the build target is defined in.
+After that, build target teardown is run.
+
+Args:
+sut_node: The execution's SUT node.
+tg_node: The execution's TG node.
+build_target: A build target's test run configuration.
+execution: The build target's execution's test run configuration.
+execution_result: The execution level result object associated with 
the execution.
 """
 dts_logger.info(f"Running build target '{build_target.name}'.")
 build_target_result = execution_
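
The nesting described in the module docstring can be pictured as a few nested
loops where a failure aborts only the current stage; an illustrative, heavily
simplified skeleton (not the actual runner code):

    executions = [
        {"build_targets": ["gcc-static"], "test_suites": ["hello_world"]},
    ]

    for execution in executions:
        # execution setup: connect to the SUT and TG nodes
        for build_target in execution["build_targets"]:
            # build target setup: configure and build DPDK
            for test_suite in execution["test_suites"]:
                try:
                    print(f"running {test_suite} on {build_target}")
                    # test case setup, logic and teardown happen here
                except Exception as err:
                    # an error aborts only the current stage; the run continues
                    print(f"{test_suite} errored: {err}")
            # build target teardown
        # execution teardown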

[PATCH v8 08/21] dts: test suite docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/test_suite.py | 231 +++-
 1 file changed, 175 insertions(+), 56 deletions(-)

diff --git a/dts/framework/test_suite.py b/dts/framework/test_suite.py
index f9e66e814a..dfb391ffbd 100644
--- a/dts/framework/test_suite.py
+++ b/dts/framework/test_suite.py
@@ -2,8 +2,19 @@
 # Copyright(c) 2010-2014 Intel Corporation
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Base class for creating DTS test cases.
+"""Features common to all test suites.
+
+The module defines the :class:`TestSuite` class which doesn't contain any test 
cases, and as such
+must be extended by subclasses which add test cases. The :class:`TestSuite` 
contains the basics
+needed by subclasses:
+
+* Test suite and test case execution flow,
+* Testbed (SUT, TG) configuration,
+* Packet sending and verification,
+* Test case verification.
+
+The module also defines a function, :func:`get_test_suites`,
+for gathering test suites from a Python module.
 """
 
 import importlib
@@ -11,7 +22,7 @@
 import re
 from ipaddress import IPv4Interface, IPv6Interface, ip_interface
 from types import MethodType
-from typing import Any, Union
+from typing import Any, ClassVar, Union
 
 from scapy.layers.inet import IP  # type: ignore[import]
 from scapy.layers.l2 import Ether  # type: ignore[import]
@@ -31,25 +42,44 @@
 
 
 class TestSuite(object):
-"""
-The base TestSuite class provides methods for handling basic flow of a 
test suite:
-* test case filtering and collection
-* test suite setup/cleanup
-* test setup/cleanup
-* test case execution
-* error handling and results storage
-Test cases are implemented by derived classes. Test cases are all methods
-starting with test_, further divided into performance test cases
-(starting with test_perf_) and functional test cases (all other test 
cases).
-By default, all test cases will be executed. A list of testcase str names
-may be specified in conf.yaml or on the command line
-to filter which test cases to run.
-The methods named [set_up|tear_down]_[suite|test_case] should be overridden
-in derived classes if the appropriate suite/test case fixtures are needed.
+"""The base class with methods for handling the basic flow of a test suite.
+
+* Test case filtering and collection,
+* Test suite setup/cleanup,
+* Test setup/cleanup,
+* Test case execution,
+* Error handling and results storage.
+
+Test cases are implemented by subclasses. Test cases are all methods 
starting with ``test_``,
+further divided into performance test cases (starting with ``test_perf_``)
+and functional test cases (all other test cases).
+
+By default, all test cases will be executed. A list of testcase names may 
be specified
+in the YAML test run configuration file and in the :option:`--test-cases` 
command line argument
+or in the :envvar:`DTS_TESTCASES` environment variable to filter which 
test cases to run.
+The union of both lists will be used. Any unknown test cases from the 
latter lists
+will be silently ignored.
+
+If the :option:`--re-run` command line argument or the :envvar:`DTS_RERUN` 
environment variable
+is set, in case of a test case failure, the test case will be executed 
again until it passes
+or it fails that many times in addition to the first failure.
+
+The methods named ``[set_up|tear_down]_[suite|test_case]`` should be 
overridden in subclasses
+if the appropriate test suite/test case fixtures are needed.
+
+The test suite is aware of the testbed (the SUT and TG) it's running on. 
From this, it can
+properly choose the IP addresses and other configuration that must be 
tailored to the testbed.
+
+Attributes:
+sut_node: The SUT node where the test suite is running.
+tg_node: The TG node where the test suite is running.
 """
 
 sut_node: SutNode
-is_blocking = False
+tg_node: TGNode
+#: Whether the test suite is blocking. A failure of a blocking test suite
+#: will block the execution of all subsequent test suites in the current 
build target.
+is_blocking: ClassVar[bool] = False
 _logger: DTSLOG
 _test_cases_to_run: list[str]
 _func: bool
@@ -72,6 +102,20 @@ def __init__(
 func: bool,
 build_target_result: BuildTargetResult,
 ):
+"""Initialize the test suite testbed information and basic 
configuration.
+
+Process what test cases to run, create the associated
+:class:`~.test_result.TestSuiteResult`, find links between ports
+and set up default IP addresses to be used when configuring them.
+
+Args:
+sut_node: The SUT node where the test suite will run.
+tg_node: The TG node where the test suite will run.
+test_cases: The list of t
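
To tie the conventions above together, a hypothetical minimal test suite could
look as follows; the module path and the exact ``verify`` signature are
assumptions made for illustration:

    from framework.test_suite import TestSuite


    class TestHelloWorld(TestSuite):
        """Illustrative suite with a single functional test case."""

        def set_up_suite(self) -> None:
            """One-time preparation shared by all test cases in this suite."""
            self._greeting = "Hello World"

        def test_greeting_is_expected(self) -> None:
            """Methods starting with ``test_`` are collected as functional test cases."""
            self.verify(self._greeting == "Hello World", "unexpected greeting")

        def tear_down_suite(self) -> None:
            """One-time cleanup after all test cases have run."""
            self._greeting = ""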

[PATCH v8 09/21] dts: test result docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/test_result.py | 297 ---
 1 file changed, 239 insertions(+), 58 deletions(-)

diff --git a/dts/framework/test_result.py b/dts/framework/test_result.py
index 57090feb04..4467749a9d 100644
--- a/dts/framework/test_result.py
+++ b/dts/framework/test_result.py
@@ -2,8 +2,25 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-Generic result container and reporters
+r"""Record and process DTS results.
+
+The results are recorded in a hierarchical manner:
+
+* :class:`DTSResult` contains
+* :class:`ExecutionResult` contains
+* :class:`BuildTargetResult` contains
+* :class:`TestSuiteResult` contains
+* :class:`TestCaseResult`
+
+Each result may contain multiple lower level results, e.g. there are multiple
+:class:`TestSuiteResult`\s in a :class:`BuildTargetResult`.
+The results have common parts, such as setup and teardown results, captured in 
:class:`BaseResult`,
+which also defines some common behaviors in its methods.
+
+Each result class has its own idiosyncrasies which they implement in 
overridden methods.
+
+The :option:`--output` command line argument and the :envvar:`DTS_OUTPUT_DIR` 
environment
+variable modify the directory where the files with results will be stored.
 """
 
 import os.path
@@ -26,26 +43,34 @@
 
 
 class Result(Enum):
-"""
-An Enum defining the possible states that
-a setup, a teardown or a test case may end up in.
-"""
+"""The possible states that a setup, a teardown or a test case may end up 
in."""
 
+#:
 PASS = auto()
+#:
 FAIL = auto()
+#:
 ERROR = auto()
+#:
 SKIP = auto()
 
 def __bool__(self) -> bool:
+"""Only PASS is True."""
 return self is self.PASS
 
 
 class FixtureResult(object):
-"""
-A record that stored the result of a setup or a teardown.
-The default is FAIL because immediately after creating the object
-the setup of the corresponding stage will be executed, which also 
guarantees
-the execution of teardown.
+"""A record that stores the result of a setup or a teardown.
+
+:attr:`~Result.FAIL` is a sensible default since it prevents false 
positives (which could happen
+if the default was :attr:`~Result.PASS`).
+
+Preventing false positives or other false results is preferable since a 
failure
+is most likely to be investigated (the other false results may not be 
investigated at all).
+
+Attributes:
+result: The associated result.
+error: The error in case of a failure.
 """
 
 result: Result
@@ -56,21 +81,37 @@ def __init__(
 result: Result = Result.FAIL,
 error: Exception | None = None,
 ):
+"""Initialize the constructor with the fixture result and store a 
possible error.
+
+Args:
+result: The result to store.
+error: The error which happened when a failure occurred.
+"""
 self.result = result
 self.error = error
 
 def __bool__(self) -> bool:
+"""A wrapper around the stored :class:`Result`."""
 return bool(self.result)
 
 
 class Statistics(dict):
-"""
-A helper class used to store the number of test cases by its result
-along a few other basic information.
-Using a dict provides a convenient way to format the data.
+"""How many test cases ended in which result state along some other basic 
information.
+
+Subclassing :class:`dict` provides a convenient way to format the data.
+
+The data are stored in the following keys:
+
+* **PASS RATE** (:class:`int`) -- The FAIL/PASS ratio of all test cases.
+* **DPDK VERSION** (:class:`str`) -- The tested DPDK version.
 """
 
 def __init__(self, dpdk_version: str | None):
+"""Extend the constructor with keys in which the data are stored.
+
+Args:
+dpdk_version: The version of tested DPDK.
+"""
 super(Statistics, self).__init__()
 for result in Result:
 self[result.name] = 0
@@ -78,8 +119,17 @@ def __init__(self, dpdk_version: str | None):
 self["DPDK VERSION"] = dpdk_version
 
 def __iadd__(self, other: Result) -> "Statistics":
-"""
-Add a Result to the final count.
+"""Add a Result to the final count.
+
+Example:
+stats: Statistics = Statistics()  # empty Statistics
+stats += Result.PASS  # add a Result to `stats`
+
+Args:
+other: The Result to add to this statistics object.
+
+Returns:
+The modified statistics object.
 """
 self[other.name] += 1
 self["PASS RATE"] = (
@@ -88,9 +138,7 @@ def __iadd__(self, other: Result) -> "Statistics":
 return self
 
 def __str__(self) -> str:
-"""
-Provide a str
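
Two small idioms from this patch, condensed into a standalone sketch: an Enum
where only PASS is truthy, and a dict subclass that counts results via
``__iadd__``:

    from enum import Enum, auto


    class Result(Enum):
        PASS = auto()
        FAIL = auto()

        def __bool__(self) -> bool:
            # Only PASS evaluates as True, so ``if result:`` reads naturally.
            return self is self.PASS


    class Statistics(dict):
        def __init__(self) -> None:
            super().__init__()
            for result in Result:
                self[result.name] = 0

        def __iadd__(self, other: Result) -> "Statistics":
            self[other.name] += 1
            return self


    stats = Statistics()
    stats += Result.PASS
    stats += Result.FAIL
    print(stats)  # {'PASS': 1, 'FAIL': 1}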

[PATCH v8 10/21] dts: config docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/config/__init__.py | 369 ++-
 dts/framework/config/types.py| 132 +++
 2 files changed, 444 insertions(+), 57 deletions(-)
 create mode 100644 dts/framework/config/types.py

diff --git a/dts/framework/config/__init__.py b/dts/framework/config/__init__.py
index ef25a463c0..62eded7f04 100644
--- a/dts/framework/config/__init__.py
+++ b/dts/framework/config/__init__.py
@@ -3,8 +3,34 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-Yaml config parsing methods
+"""Testbed configuration and test suite specification.
+
+This package offers classes that hold real-time information about the testbed, 
hold test run
+configuration describing the tested testbed and a loader function, 
:func:`load_config`, which loads
+the YAML test run configuration file
+and validates it according to :download:`the schema `.
+
+The YAML test run configuration file is parsed into a dictionary, parts of 
which are used throughout
+this package. The allowed keys and types inside this dictionary are defined in
+the :doc:`types ` module.
+
+The test run configuration has two main sections:
+
+* The :class:`ExecutionConfiguration` which defines what tests are going 
to be run
+  and how DPDK will be built. It also references the testbed where these 
tests and DPDK
+  are going to be run,
+* The nodes of the testbed are defined in the other section,
+  a :class:`list` of :class:`NodeConfiguration` objects.
+
+The real-time information about testbed is supposed to be gathered at runtime.
+
+The classes defined in this package make heavy use of :mod:`dataclasses`.
+All of them use slots and are frozen:
+
+* Slots enables some optimizations, by pre-allocating space for the defined
+  attributes in the underlying data structure,
+* Frozen makes the object immutable. This enables further optimizations,
+  and makes it thread safe should we ever want to move in that direction.
 """
 
 import json
@@ -12,11 +38,20 @@
 import pathlib
 from dataclasses import dataclass
 from enum import auto, unique
-from typing import Any, TypedDict, Union
+from typing import Union
 
 import warlock  # type: ignore[import]
 import yaml
 
+from framework.config.types import (
+BuildTargetConfigDict,
+ConfigurationDict,
+ExecutionConfigDict,
+NodeConfigDict,
+PortConfigDict,
+TestSuiteConfigDict,
+TrafficGeneratorConfigDict,
+)
 from framework.exception import ConfigurationError
 from framework.settings import SETTINGS
 from framework.utils import StrEnum
@@ -24,55 +59,97 @@
 
 @unique
 class Architecture(StrEnum):
+r"""The supported architectures of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 i686 = auto()
+#:
 x86_64 = auto()
+#:
 x86_32 = auto()
+#:
 arm64 = auto()
+#:
 ppc64le = auto()
 
 
 @unique
 class OS(StrEnum):
+r"""The supported operating systems of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 linux = auto()
+#:
 freebsd = auto()
+#:
 windows = auto()
 
 
 @unique
 class CPUType(StrEnum):
+r"""The supported CPUs of :class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 native = auto()
+#:
 armv8a = auto()
+#:
 dpaa2 = auto()
+#:
 thunderx = auto()
+#:
 xgene1 = auto()
 
 
 @unique
 class Compiler(StrEnum):
+r"""The supported compilers of 
:class:`~framework.testbed_model.node.Node`\s."""
+
+#:
 gcc = auto()
+#:
 clang = auto()
+#:
 icc = auto()
+#:
 msvc = auto()
 
 
 @unique
 class TrafficGeneratorType(StrEnum):
+"""The supported traffic generators."""
+
+#:
 SCAPY = auto()
 
 
-# Slots enables some optimizations, by pre-allocating space for the defined
-# attributes in the underlying data structure.
-#
-# Frozen makes the object immutable. This enables further optimizations,
-# and makes it thread safe should we every want to move in that direction.
 @dataclass(slots=True, frozen=True)
 class HugepageConfiguration:
+r"""The hugepage configuration of 
:class:`~framework.testbed_model.node.Node`\s.
+
+Attributes:
+amount: The number of hugepages.
+force_first_numa: If :data:`True`, the hugepages will be configured on 
the first NUMA node.
+"""
+
 amount: int
 force_first_numa: bool
 
 
 @dataclass(slots=True, frozen=True)
 class PortConfig:
+r"""The port configuration of 
:class:`~framework.testbed_model.node.Node`\s.
+
+Attributes:
+node: The :class:`~framework.testbed_model.node.Node` where this port 
exists.
+pci: The PCI address of the port.
+os_driver_for_dpdk: The operating system driver name for use with DPDK.
+os_driver: The operating system driver name when the operating system 
controls the port.
+ 
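
The frozen, slotted dataclass style described in the package docstring, echoed
in a standalone example modelled on HugepageConfiguration (``slots=True``
requires Python 3.10, which DTS already mandates):

    from dataclasses import dataclass


    @dataclass(slots=True, frozen=True)
    class HugepageExample:
        """Example configuration record.

        Attributes:
            amount: The number of hugepages.
            force_first_numa: Whether to configure hugepages on the first NUMA node.
        """

        amount: int
        force_first_numa: bool


    cfg = HugepageExample(amount=256, force_first_numa=False)
    # cfg.amount = 512  # would raise dataclasses.FrozenInstanceError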

[PATCH v8 11/21] dts: remote session docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/remote_session/__init__.py  |  39 +-
 .../remote_session/remote_session.py  | 130 +-
 dts/framework/remote_session/ssh_session.py   |  16 +--
 3 files changed, 137 insertions(+), 48 deletions(-)

diff --git a/dts/framework/remote_session/__init__.py 
b/dts/framework/remote_session/__init__.py
index 5e7ddb2b05..51a01d6b5e 100644
--- a/dts/framework/remote_session/__init__.py
+++ b/dts/framework/remote_session/__init__.py
@@ -2,12 +2,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
-"""
-The package provides modules for managing remote connections to a remote host 
(node),
-differentiated by OS.
-The package provides a factory function, create_session, that returns the 
appropriate
-remote connection based on the passed configuration. The differences are in the
-underlying transport protocol (e.g. SSH) and remote OS (e.g. Linux).
+"""Remote interactive and non-interactive sessions.
+
+This package provides modules for managing remote connections to a remote host 
(node).
+
+The non-interactive sessions send commands and return their output and exit 
code.
+
+The interactive sessions open an interactive shell which is continuously open,
+allowing it to send and receive data within that particular shell.
 """
 
 # pylama:ignore=W0611
@@ -26,10 +28,35 @@
 def create_remote_session(
 node_config: NodeConfiguration, name: str, logger: DTSLOG
 ) -> RemoteSession:
+"""Factory for non-interactive remote sessions.
+
+The function returns an SSH session, but will be extended if support
+for other protocols is added.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+name: The name of the session.
+logger: The logger instance this session will use.
+
+Returns:
+The SSH remote session.
+"""
 return SSHSession(node_config, name, logger)
 
 
 def create_interactive_session(
 node_config: NodeConfiguration, logger: DTSLOG
 ) -> InteractiveRemoteSession:
+"""Factory for interactive remote sessions.
+
+The function returns an interactive SSH session, but will be extended if 
support
+for other protocols is added.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+logger: The logger instance this session will use.
+
+Returns:
+The interactive SSH remote session.
+"""
 return InteractiveRemoteSession(node_config, logger)
diff --git a/dts/framework/remote_session/remote_session.py 
b/dts/framework/remote_session/remote_session.py
index 719f7d1ef7..2059f9a981 100644
--- a/dts/framework/remote_session/remote_session.py
+++ b/dts/framework/remote_session/remote_session.py
@@ -3,6 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
+"""Base remote session.
+
+This module contains the abstract base class for remote sessions and defines
+the structure of the result of a command execution.
+"""
+
+
 import dataclasses
 from abc import ABC, abstractmethod
 from pathlib import PurePath
@@ -15,8 +22,14 @@
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class CommandResult:
-"""
-The result of remote execution of a command.
+"""The result of remote execution of a command.
+
+Attributes:
+name: The name of the session that executed the command.
+command: The executed command.
+stdout: The standard output the command produced.
+stderr: The standard error output the command produced.
+return_code: The return code the command exited with.
 """
 
 name: str
@@ -26,6 +39,7 @@ class CommandResult:
 return_code: int
 
 def __str__(self) -> str:
+"""Format the command outputs."""
 return (
 f"stdout: '{self.stdout}'\n"
 f"stderr: '{self.stderr}'\n"
@@ -34,13 +48,24 @@ def __str__(self) -> str:
 
 
 class RemoteSession(ABC):
-"""
-The base class for defining which methods must be implemented in order to 
connect
-to a remote host (node) and maintain a remote session. The derived classes 
are
-supposed to implement/use some underlying transport protocol (e.g. SSH) to
-implement the methods. On top of that, it provides some basic services 
common to
-all derived classes, such as keeping history and logging what's being 
executed
-on the remote node.
+"""Non-interactive remote session.
+
+The abstract methods must be implemented in order to connect to a remote 
host (node)
+and maintain a remote session.
+The subclasses must use (or implement) some underlying transport protocol 
(e.g. SSH)
+to implement the methods. On top of that, it provides some basic services 
common to all
+subclasses, such as keeping history and logging what's being executed on 
the 
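
A short usage sketch of the CommandResult record defined in this patch,
reproduced standalone with made-up values (the real class also defines
``__str__`` for formatted output):

    from dataclasses import dataclass


    @dataclass(slots=True, frozen=True)
    class CommandResult:
        name: str
        command: str
        stdout: str
        stderr: str
        return_code: int


    result = CommandResult(
        name="sut", command="uname -r", stdout="6.5.0", stderr="", return_code=0
    )
    if result.return_code != 0:
        raise RuntimeError(f"command '{result.command}' failed: {result.stderr}")
    print(result.stdout)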

[PATCH v8 12/21] dts: interactive remote session docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../interactive_remote_session.py | 36 +++
 .../remote_session/interactive_shell.py   | 99 +++
 dts/framework/remote_session/python_shell.py  | 26 -
 dts/framework/remote_session/testpmd_shell.py | 58 +--
 4 files changed, 149 insertions(+), 70 deletions(-)

diff --git a/dts/framework/remote_session/interactive_remote_session.py 
b/dts/framework/remote_session/interactive_remote_session.py
index 098ded1bb0..1cc82e3377 100644
--- a/dts/framework/remote_session/interactive_remote_session.py
+++ b/dts/framework/remote_session/interactive_remote_session.py
@@ -22,27 +22,23 @@
 class InteractiveRemoteSession:
 """SSH connection dedicated to interactive applications.
 
-This connection is created using paramiko and is a persistent connection 
to the
-host. This class defines methods for connecting to the node and configures 
this
-connection to send "keep alive" packets every 30 seconds. Because paramiko 
attempts
-to use SSH keys to establish a connection first, providing a password is 
optional.
-This session is utilized by InteractiveShells and cannot be interacted with
-directly.
-
-Arguments:
-node_config: Configuration class for the node you are connecting to.
-_logger: Desired logger for this session to use.
+The connection is created using `paramiko 
`_
+and is a persistent connection to the host. This class defines the methods 
for connecting
+to the node and configures the connection to send "keep alive" packets 
every 30 seconds.
+Because paramiko attempts to use SSH keys to establish a connection first, 
providing
+a password is optional. This session is utilized by InteractiveShells
+and cannot be interacted with directly.
 
 Attributes:
-hostname: Hostname that will be used to initialize a connection to the 
node.
-ip: A subsection of hostname that removes the port for the connection 
if there
+hostname: The hostname that will be used to initialize a connection to 
the node.
+ip: A subsection of `hostname` that removes the port for the 
connection if there
 is one. If there is no port, this will be the same as hostname.
-port: Port to use for the ssh connection. This will be extracted from 
the
-hostname if there is a port included, otherwise it will default to 
22.
+port: Port to use for the ssh connection. This will be extracted from 
`hostname`
+if there is a port included, otherwise it will default to ``22``.
 username: User to connect to the node with.
 password: Password of the user connecting to the host. This will 
default to an
 empty string if a password is not provided.
-session: Underlying paramiko connection.
+session: The underlying paramiko connection.
 
 Raises:
 SSHConnectionError: There is an error creating the SSH connection.
@@ -58,9 +54,15 @@ class InteractiveRemoteSession:
 _node_config: NodeConfiguration
 _transport: Transport | None
 
-def __init__(self, node_config: NodeConfiguration, _logger: DTSLOG) -> 
None:
+def __init__(self, node_config: NodeConfiguration, logger: DTSLOG) -> None:
+"""Connect to the node during initialization.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+logger: The logger instance this session will use.
+"""
 self._node_config = node_config
-self._logger = _logger
+self._logger = logger
 self.hostname = node_config.hostname
 self.username = node_config.user
 self.password = node_config.password if node_config.password else ""
diff --git a/dts/framework/remote_session/interactive_shell.py 
b/dts/framework/remote_session/interactive_shell.py
index 4db19fb9b3..b158f963b6 100644
--- a/dts/framework/remote_session/interactive_shell.py
+++ b/dts/framework/remote_session/interactive_shell.py
@@ -3,18 +3,20 @@
 
 """Common functionality for interactive shell handling.
 
-This base class, InteractiveShell, is meant to be extended by other classes 
that
-contain functionality specific to that shell type. These derived classes will 
often
-modify things like the prompt to expect or the arguments to pass into the 
application,
-but still utilize the same method for sending a command and collecting output. 
How
-this output is handled however is often application specific. If an 
application needs
-elevated privileges to start it is expected that the method for gaining those
-privileges is provided when initializing the class.
+The base class, :class:`InteractiveShell`, is meant to be extended by 
subclasses that contain
+functionality specific to that shell type. These subclasses will often modify 
things like
+the p

[PATCH v8 13/21] dts: port and virtual device docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/__init__.py   | 17 --
 dts/framework/testbed_model/port.py   | 53 +++
 dts/framework/testbed_model/virtual_device.py | 17 +-
 3 files changed, 72 insertions(+), 15 deletions(-)

diff --git a/dts/framework/testbed_model/__init__.py 
b/dts/framework/testbed_model/__init__.py
index 8ced05653b..6086512ca2 100644
--- a/dts/framework/testbed_model/__init__.py
+++ b/dts/framework/testbed_model/__init__.py
@@ -2,9 +2,20 @@
 # Copyright(c) 2022-2023 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
-This package contains the classes used to model the physical traffic generator,
-system under test and any other components that need to be interacted with.
+"""Testbed modelling.
+
+This package defines the testbed elements DTS works with:
+
+* A system under test node: :class:`~.sut_node.SutNode`,
+* A traffic generator node: :class:`~.tg_node.TGNode`,
+* The ports of network interface cards (NICs) present on nodes: 
:class:`~.port.Port`,
+* The logical cores of CPUs present on nodes: :class:`~.cpu.LogicalCore`,
+* The virtual devices that can be created on nodes: 
:class:`~.virtual_device.VirtualDevice`,
+* The operating systems running on nodes: 
:class:`~.linux_session.LinuxSession`
+  and :class:`~.posix_session.PosixSession`.
+
+DTS needs to be able to connect to nodes and understand some of the hardware 
present on these nodes
+to properly build and test DPDK.
 """
 
 # pylama:ignore=W0611
diff --git a/dts/framework/testbed_model/port.py 
b/dts/framework/testbed_model/port.py
index 680c29bfe3..817405bea4 100644
--- a/dts/framework/testbed_model/port.py
+++ b/dts/framework/testbed_model/port.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""NIC port model.
+
+Basic port information, such as location (the port are identified by their PCI 
address on a node),
+drivers and address.
+"""
+
+
 from dataclasses import dataclass
 
 from framework.config import PortConfig
@@ -9,24 +16,35 @@
 
 @dataclass(slots=True, frozen=True)
 class PortIdentifier:
+"""The port identifier.
+
+Attributes:
+node: The node where the port resides.
+pci: The PCI address of the port on `node`.
+"""
+
 node: str
 pci: str
 
 
 @dataclass(slots=True)
 class Port:
-"""
-identifier: The PCI address of the port on a node.
-
-os_driver: The driver used by this port when the OS is controlling it.
-Example: i40e
-os_driver_for_dpdk: The driver the device must be bound to for DPDK to use 
it,
-Example: vfio-pci.
+"""Physical port on a node.
 
-Note: os_driver and os_driver_for_dpdk may be the same thing.
-Example: mlx5_core
+The ports are identified by the node they're on and their PCI addresses. 
The port on the other
+side of the connection is also captured here.
+Each port is serviced by a driver, which may be different for the 
operating system (`os_driver`)
+and for DPDK (`os_driver_for_dpdk`). For some devices, they are the same, 
e.g.: ``mlx5_core``.
 
-peer: The identifier of a port this port is connected with.
+Attributes:
+identifier: The PCI address of the port on a node.
+os_driver: The operating system driver name when the operating system 
controls the port,
+e.g.: ``i40e``.
+os_driver_for_dpdk: The operating system driver name for use with 
DPDK, e.g.: ``vfio-pci``.
+peer: The identifier of a port this port is connected with.
+The `peer` is on a different node.
+mac_address: The MAC address of the port.
+logical_name: The logical name of the port. Must be discovered.
 """
 
 identifier: PortIdentifier
@@ -37,6 +55,12 @@ class Port:
 logical_name: str = ""
 
 def __init__(self, node_name: str, config: PortConfig):
+"""Initialize the port from `node_name` and `config`.
+
+Args:
+node_name: The name of the port's node.
+config: The test run configuration of the port.
+"""
 self.identifier = PortIdentifier(
 node=node_name,
 pci=config.pci,
@@ -47,14 +71,23 @@ def __init__(self, node_name: str, config: PortConfig):
 
 @property
 def node(self) -> str:
+"""The node where the port resides."""
 return self.identifier.node
 
 @property
 def pci(self) -> str:
+"""The PCI address of the port."""
 return self.identifier.pci
 
 
 @dataclass(slots=True, frozen=True)
 class PortLink:
+"""The physical, cabled connection between the ports.
+
+Attributes:
+sut_port: The port on the SUT node connected to `tg_port`.
+tg_port: The port on the TG node connected to `sut_port`.
+"""
+
 sut_port: Port
 tg_port: Port

[PATCH v8 14/21] dts: cpu docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/cpu.py | 196 +
 1 file changed, 144 insertions(+), 52 deletions(-)

diff --git a/dts/framework/testbed_model/cpu.py 
b/dts/framework/testbed_model/cpu.py
index 1b392689f5..9e33b2825d 100644
--- a/dts/framework/testbed_model/cpu.py
+++ b/dts/framework/testbed_model/cpu.py
@@ -1,6 +1,22 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""CPU core representation and filtering.
+
+This module provides a unified representation of logical CPU cores along
+with filtering capabilities.
+
+When symmetric multiprocessing (SMP or multithreading) is enabled on a server,
+the physical CPU cores are split into logical CPU cores with different IDs.
+
+:class:`LogicalCoreCountFilter` filters by the number of logical cores. It's 
possible to specify
+the socket from which to filter the number of logical cores. It's also 
possible to not use all
+logical CPU cores from each physical core (e.g. only the first logical core of 
each physical core).
+
+:class:`LogicalCoreListFilter` filters by logical core IDs. This mostly checks 
that
+the logical cores are actually present on the server.
+"""
+
 import dataclasses
 from abc import ABC, abstractmethod
 from collections.abc import Iterable, ValuesView
@@ -11,9 +27,17 @@
 
 @dataclass(slots=True, frozen=True)
 class LogicalCore(object):
-"""
-Representation of a CPU core. A physical core is represented in OS
-by multiple logical cores (lcores) if CPU multithreading is enabled.
+"""Representation of a logical CPU core.
+
+A physical core is represented in OS by multiple logical cores (lcores)
+if CPU multithreading is enabled. When multithreading is disabled, their 
IDs are the same.
+
+Attributes:
+lcore: The logical core ID of a CPU core. It's the same as `core` with
+disabled multithreading.
+core: The physical core ID of a CPU core.
+socket: The physical socket ID where the CPU resides.
+node: The NUMA node ID where the CPU resides.
 """
 
 lcore: int
@@ -22,27 +46,36 @@ class LogicalCore(object):
 node: int
 
 def __int__(self) -> int:
+"""The CPU is best represented by the logical core, as that's what we 
configure in EAL."""
 return self.lcore
 
 
 class LogicalCoreList(object):
-"""
-Convert these options into a list of logical core ids.
-lcore_list=[LogicalCore1, LogicalCore2] - a list of LogicalCores
-lcore_list=[0,1,2,3] - a list of int indices
-lcore_list=['0','1','2-3'] - a list of str indices; ranges are supported
-lcore_list='0,1,2-3' - a comma delimited str of indices; ranges are 
supported
-
-The class creates a unified format used across the framework and allows
-the user to use either a str representation (using str(instance) or 
directly
-in f-strings) or a list representation (by accessing instance.lcore_list).
-Empty lcore_list is allowed.
+r"""A unified way to store :class:`LogicalCore`\s.
+
+Create a unified format used across the framework and allow the user to use
+either a :class:`str` representation (using ``str(instance)`` or directly 
in f-strings)
+or a :class:`list` representation (by accessing the `lcore_list` property,
+which stores logical core IDs).
 """
 
 _lcore_list: list[int]
 _lcore_str: str
 
 def __init__(self, lcore_list: list[int] | list[str] | list[LogicalCore] | 
str):
+"""Process `lcore_list`, then sort.
+
+There are four supported logical core list formats::
+
+lcore_list=[LogicalCore1, LogicalCore2]  # a list of LogicalCores
+lcore_list=[0,1,2,3]# a list of int indices
+lcore_list=['0','1','2-3']  # a list of str indices; ranges are 
supported
+lcore_list='0,1,2-3'# a comma delimited str of indices; 
ranges are supported
+
+Args:
+lcore_list: Various ways to represent multiple logical cores.
+Empty `lcore_list` is allowed.
+"""
 self._lcore_list = []
 if isinstance(lcore_list, str):
 lcore_list = lcore_list.split(",")
@@ -58,6 +91,7 @@ def __init__(self, lcore_list: list[int] | list[str] | 
list[LogicalCore] | str):
 
 @property
 def lcore_list(self) -> list[int]:
+"""The logical core IDs."""
 return self._lcore_list
 
 def _get_consecutive_lcores_range(self, lcore_ids_list: list[int]) -> 
list[str]:
@@ -83,28 +117,30 @@ def _get_consecutive_lcores_range(self, lcore_ids_list: 
list[int]) -> list[str]:
 return formatted_core_list
 
 def __str__(self) -> str:
+"""The consecutive ranges of logical core IDs."""
 return self._lcore_str
 
 
 @dataclasses.dataclass(slots=True, frozen=True)
 class LogicalCoreCount(object):
-"""
-Define the number of

[PATCH v8 15/21] dts: os session docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/os_session.py | 272 --
 1 file changed, 205 insertions(+), 67 deletions(-)

diff --git a/dts/framework/testbed_model/os_session.py 
b/dts/framework/testbed_model/os_session.py
index 76e595a518..cfdbd1c4bd 100644
--- a/dts/framework/testbed_model/os_session.py
+++ b/dts/framework/testbed_model/os_session.py
@@ -2,6 +2,26 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""OS-aware remote session.
+
+DPDK supports multiple different operating systems, meaning it can run on 
these different operating
+systems. This module defines the common API that OS-unaware layers use and 
translates the API into
+OS-aware calls/utility usage.
+
+Note:
+Running commands with administrative privileges requires OS awareness. 
This is the only layer
+that's aware of OS differences, so this is where non-privileged command 
get converted
+to privileged commands.
+
+Example:
+A user wishes to remove a directory on a remote 
:class:`~.sut_node.SutNode`.
+The :class:`~.sut_node.SutNode` object isn't aware what OS the node is 
running - it delegates
+the OS translation logic to :attr:`~.node.Node.main_session`. The SUT node 
calls
+:meth:`~OSSession.remove_remote_dir` with a generic, OS-unaware path and
+the :attr:`~.node.Node.main_session` translates that to ``rm -rf`` if the 
node's OS is Linux
+and other commands for other OSs. It also translates the path to match the 
underlying OS.
+"""
+
 from abc import ABC, abstractmethod
 from collections.abc import Iterable
 from ipaddress import IPv4Interface, IPv6Interface
@@ -28,10 +48,16 @@
 
 
 class OSSession(ABC):
-"""
-The OS classes create a DTS node remote session and implement OS specific
+"""OS-unaware to OS-aware translation API definition.
+
+The OSSession classes create a remote session to a DTS node and implement 
OS specific
 behavior. There a few control methods implemented by the base class, the 
rest need
-to be implemented by derived classes.
+to be implemented by subclasses.
+
+Attributes:
+name: The name of the session.
+remote_session: The remote session maintaining the connection to the 
node.
+interactive_session: The interactive remote session maintaining the 
connection to the node.
 """
 
 _config: NodeConfiguration
@@ -46,6 +72,15 @@ def __init__(
 name: str,
 logger: DTSLOG,
 ):
+"""Initialize the OS-aware session.
+
+Connect to the node right away and also create an interactive remote 
session.
+
+Args:
+node_config: The test run configuration of the node to connect to.
+name: The name of the session.
+logger: The logger instance this session will use.
+"""
 self._config = node_config
 self.name = name
 self._logger = logger
@@ -53,15 +88,15 @@ def __init__(
 self.interactive_session = create_interactive_session(node_config, 
logger)
 
 def close(self, force: bool = False) -> None:
-"""
-Close the remote session.
+"""Close the underlying remote session.
+
+Args:
+force: Force the closure of the connection.
 """
 self.remote_session.close(force)
 
 def is_alive(self) -> bool:
-"""
-Check whether the remote session is still responding.
-"""
+"""Check whether the underlying remote session is still responding."""
 return self.remote_session.is_alive()
 
 def send_command(
@@ -72,10 +107,23 @@ def send_command(
 verify: bool = False,
 env: dict | None = None,
 ) -> CommandResult:
-"""
-An all-purpose API in case the command to be executed is already
-OS-agnostic, such as when the path to the executed command has been
-constructed beforehand.
+"""An all-purpose API for OS-agnostic commands.
+
+This can be used for an execution of a portable command that's 
executed the same way
+on all operating systems, such as Python.
+
+The :option:`--timeout` command line argument and the 
:envvar:`DTS_TIMEOUT`
+environment variable configure the timeout of command execution.
+
+Args:
+command: The command to execute.
+timeout: Wait at most this long in seconds for `command` execution 
to complete.
+privileged: Whether to run the command with administrative 
privileges.
+verify: If :data:`True`, will check the exit code of the command.
+env: A dictionary with environment variables to be used with the 
command execution.
+
+Raises:
+RemoteCommandExecutionError: If verify is :data:`True` and the 
command failed.
 """
 if privileged:
 command = sel

[PATCH v8 16/21] dts: posix and linux sessions docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/linux_session.py | 64 +++-
 dts/framework/testbed_model/posix_session.py | 81 +---
 2 files changed, 114 insertions(+), 31 deletions(-)

diff --git a/dts/framework/testbed_model/linux_session.py 
b/dts/framework/testbed_model/linux_session.py
index 055765ba2d..0ab59cef85 100644
--- a/dts/framework/testbed_model/linux_session.py
+++ b/dts/framework/testbed_model/linux_session.py
@@ -2,6 +2,13 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""Linux OS translator.
+
+Translate OS-unaware calls into Linux calls/utilities. Most of Linux 
distributions are mostly
+compliant with POSIX standards, so this module only implements the parts that 
aren't.
+This intermediate module implements the common parts of mostly POSIX compliant 
distributions.
+"""
+
 import json
 from ipaddress import IPv4Interface, IPv6Interface
 from typing import TypedDict, Union
@@ -17,43 +24,52 @@
 
 
 class LshwConfigurationOutput(TypedDict):
+"""The relevant parts of ``lshw``'s ``configuration`` section."""
+
+#:
 link: str
 
 
 class LshwOutput(TypedDict):
-"""
-A model of the relevant information from json lshw output, e.g.:
-{
-...
-"businfo" : "pci@:08:00.0",
-"logicalname" : "enp8s0",
-"version" : "00",
-"serial" : "52:54:00:59:e1:ac",
-...
-"configuration" : {
-  ...
-  "link" : "yes",
-  ...
-},
-...
+"""A model of the relevant information from ``lshw``'s json output.
+
+Example:
+::
+
+{
+...
+"businfo" : "pci@:08:00.0",
+"logicalname" : "enp8s0",
+"version" : "00",
+"serial" : "52:54:00:59:e1:ac",
+...
+"configuration" : {
+  ...
+  "link" : "yes",
+  ...
+},
+...
 """
 
+#:
 businfo: str
+#:
 logicalname: NotRequired[str]
+#:
 serial: NotRequired[str]
+#:
 configuration: LshwConfigurationOutput
 
 
 class LinuxSession(PosixSession):
-"""
-The implementation of non-Posix compliant parts of Linux remote sessions.
-"""
+"""The implementation of non-Posix compliant parts of Linux."""
 
 @staticmethod
 def _get_privileged_command(command: str) -> str:
 return f"sudo -- sh -c '{command}'"
 
 def get_remote_cpus(self, use_first_core: bool) -> list[LogicalCore]:
+"""Overrides :meth:`~.os_session.OSSession.get_remote_cpus`."""
 cpu_info = self.send_command("lscpu -p=CPU,CORE,SOCKET,NODE|grep -v 
\\#").stdout
 lcores = []
 for cpu_line in cpu_info.splitlines():
@@ -65,18 +81,20 @@ def get_remote_cpus(self, use_first_core: bool) -> 
list[LogicalCore]:
 return lcores
 
 def get_dpdk_file_prefix(self, dpdk_prefix: str) -> str:
+"""Overrides :meth:`~.os_session.OSSession.get_dpdk_file_prefix`."""
 return dpdk_prefix
 
-def setup_hugepages(self, hugepage_amount: int, force_first_numa: bool) -> 
None:
+def setup_hugepages(self, hugepage_count: int, force_first_numa: bool) -> 
None:
+"""Overrides :meth:`~.os_session.OSSession.setup_hugepages`."""
 self._logger.info("Getting Hugepage information.")
 hugepage_size = self._get_hugepage_size()
 hugepages_total = self._get_hugepages_total()
 self._numa_nodes = self._get_numa_nodes()
 
-if force_first_numa or hugepages_total != hugepage_amount:
+if force_first_numa or hugepages_total != hugepage_count:
 # when forcing numa, we need to clear existing hugepages regardless
 # of size, so they can be moved to the first numa node
-self._configure_huge_pages(hugepage_amount, hugepage_size, 
force_first_numa)
+self._configure_huge_pages(hugepage_count, hugepage_size, 
force_first_numa)
 else:
 self._logger.info("Hugepages already configured.")
 self._mount_huge_pages()
@@ -132,6 +150,7 @@ def _configure_huge_pages(self, amount: int, size: int, 
force_first_numa: bool)
 self.send_command(f"echo {amount} | tee {hugepage_config_path}", 
privileged=True)
 
 def update_ports(self, ports: list[Port]) -> None:
+"""Overrides :meth:`~.os_session.OSSession.update_ports`."""
 self._logger.debug("Gathering port info.")
 for port in ports:
 assert port.node == self.name, "Attempted to gather port info on 
the wrong node"
@@ -161,6 +180,7 @@ def _update_port_attr(self, port: Port, attr_value: str | 
None, attr_name: str)
 )
 
 def configure_port_state(self, port: Port, enable: bool) -> None:
+"""Overrides :meth:`~.os_session.OSSession.configure_port_state`."""
 state = "up" if enable else "down"
 self.send_com

[PATCH v8 17/21] dts: node docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/node.py | 191 +++-
 1 file changed, 131 insertions(+), 60 deletions(-)

diff --git a/dts/framework/testbed_model/node.py 
b/dts/framework/testbed_model/node.py
index b313b5ad54..6eecbdfd6a 100644
--- a/dts/framework/testbed_model/node.py
+++ b/dts/framework/testbed_model/node.py
@@ -3,8 +3,13 @@
 # Copyright(c) 2022-2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2022-2023 University of New Hampshire
 
-"""
-A node is a generic host that DTS connects to and manages.
+"""Common functionality for node management.
+
+A node is any host/server DTS connects to.
+
+The base class, :class:`Node`, provides functionality common to all nodes and 
is supposed
+to be extended by subclasses with functionalities specific to each node type.
+The :func:`~Node.skip_setup` decorator can be used without subclassing.
 """
 
 from abc import ABC
@@ -35,10 +40,22 @@
 
 
 class Node(ABC):
-"""
-Basic class for node management. This class implements methods that
-manage a node, such as information gathering (of CPU/PCI/NIC) and
-environment setup.
+"""The base class for node management.
+
+It shouldn't be instantiated, but rather subclassed.
+It implements common methods to manage any node:
+
+* Connection to the node,
+* Hugepages setup.
+
+Attributes:
+main_session: The primary OS-aware remote session used to communicate 
with the node.
+config: The node configuration.
+name: The name of the node.
+lcores: The list of logical cores that DTS can use on the node.
+It's derived from logical cores present on the node and the test 
run configuration.
+ports: The ports of this node specified in the test run configuration.
+virtual_devices: The virtual devices used on the node.
 """
 
 main_session: OSSession
@@ -52,6 +69,17 @@ class Node(ABC):
 virtual_devices: list[VirtualDevice]
 
 def __init__(self, node_config: NodeConfiguration):
+"""Connect to the node and gather info during initialization.
+
+Extra gathered information:
+
+* The list of available logical CPUs. This is then filtered by
+  the ``lcores`` configuration in the YAML test run configuration file,
+* Information about ports from the YAML test run configuration file.
+
+Args:
+node_config: The node's test run configuration.
+"""
 self.config = node_config
 self.name = node_config.name
 self._logger = getLogger(self.name)
@@ -60,7 +88,7 @@ def __init__(self, node_config: NodeConfiguration):
 self._logger.info(f"Connected to node: {self.name}")
 
 self._get_remote_cpus()
-# filter the node lcores according to user config
+# filter the node lcores according to the test run configuration
 self.lcores = LogicalCoreListFilter(
 self.lcores, LogicalCoreList(self.config.lcores)
 ).filter()
@@ -76,9 +104,14 @@ def _init_ports(self) -> None:
 self.configure_port_state(port)
 
 def set_up_execution(self, execution_config: ExecutionConfiguration) -> 
None:
-"""
-Perform the execution setup that will be done for each execution
-this node is part of.
+"""Execution setup steps.
+
+Configure hugepages and call :meth:`_set_up_execution` where
+the rest of the configuration steps (if any) are implemented.
+
+Args:
+execution_config: The execution test run configuration according 
to which
+the setup steps will be taken.
 """
 self._setup_hugepages()
 self._set_up_execution(execution_config)
@@ -87,54 +120,70 @@ def set_up_execution(self, execution_config: 
ExecutionConfiguration) -> None:
 self.virtual_devices.append(VirtualDevice(vdev))
 
 def _set_up_execution(self, execution_config: ExecutionConfiguration) -> 
None:
-"""
-This method exists to be optionally overwritten by derived classes and
-is not decorated so that the derived class doesn't have to use the 
decorator.
+"""Optional additional execution setup steps for subclasses.
+
+Subclasses should override this if they need to add additional 
execution setup steps.
 """
 
 def tear_down_execution(self) -> None:
-"""
-Perform the execution teardown that will be done after each execution
-this node is part of concludes.
+"""Execution teardown steps.
+
+There are currently no common execution teardown steps common to all 
DTS node types.
 """
 self.virtual_devices = []
 self._tear_down_execution()
 
 def _tear_down_execution(self) -> None:
-"""
-This method exists to be optionally overwritten by derived classes and
-is not decorate

[PATCH v8 18/21] dts: sut and tg nodes docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/framework/testbed_model/sut_node.py | 230 
 dts/framework/testbed_model/tg_node.py  |  42 +++--
 2 files changed, 176 insertions(+), 96 deletions(-)

diff --git a/dts/framework/testbed_model/sut_node.py 
b/dts/framework/testbed_model/sut_node.py
index 5ce9446dba..c4acea38d1 100644
--- a/dts/framework/testbed_model/sut_node.py
+++ b/dts/framework/testbed_model/sut_node.py
@@ -3,6 +3,14 @@
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 # Copyright(c) 2023 University of New Hampshire
 
+"""System under test (DPDK + hardware) node.
+
+A system under test (SUT) is the combination of DPDK
+and the hardware we're testing with DPDK (NICs, crypto and other devices).
+An SUT node is where this SUT runs.
+"""
+
+
 import os
 import tarfile
 import time
@@ -26,6 +34,11 @@
 
 
 class EalParameters(object):
+"""The environment abstraction layer parameters.
+
+The string representation can be created by converting the instance to a 
string.
+"""
+
 def __init__(
 self,
 lcore_list: LogicalCoreList,
@@ -35,21 +48,23 @@ def __init__(
 vdevs: list[VirtualDevice],
 other_eal_param: str,
 ):
-"""
-Generate eal parameters character string;
-:param lcore_list: the list of logical cores to use.
-:param memory_channels: the number of memory channels to use.
-:param prefix: set file prefix string, eg:
-prefix='vf'
-:param no_pci: switch of disable PCI bus eg:
-no_pci=True
-:param vdevs: virtual device list, eg:
-vdevs=[
-VirtualDevice('net_ring0'),
-VirtualDevice('net_ring1')
-]
-:param other_eal_param: user defined DPDK eal parameters, eg:
-other_eal_param='--single-file-segments'
+"""Initialize the parameters according to inputs.
+
+Process the parameters into the format used on the command line.
+
+Args:
+lcore_list: The list of logical cores to use.
+memory_channels: The number of memory channels to use.
+prefix: Set the file prefix string with which to start DPDK, e.g.: 
``prefix='vf'``.
+no_pci: Switch to disable PCI bus e.g.: ``no_pci=True``.
+vdevs: Virtual devices, e.g.::
+
+vdevs=[
+VirtualDevice('net_ring0'),
+VirtualDevice('net_ring1')
+]
+other_eal_param: user defined DPDK EAL parameters, e.g.:
+``other_eal_param='--single-file-segments'``
 """
 self._lcore_list = f"-l {lcore_list}"
 self._memory_channels = f"-n {memory_channels}"
@@ -61,6 +76,7 @@ def __init__(
 self._other_eal_param = other_eal_param
 
 def __str__(self) -> str:
+"""Create the EAL string."""
 return (
 f"{self._lcore_list} "
 f"{self._memory_channels} "
@@ -72,11 +88,21 @@ def __str__(self) -> str:
 
 
 class SutNode(Node):
-"""
-A class for managing connections to the System under Test, providing
-methods that retrieve the necessary information about the node (such as
-CPU, memory and NIC details) and configuration capabilities.
-Another key capability is building DPDK according to given build target.
+"""The system under test node.
+
+The SUT node extends :class:`Node` with DPDK specific features:
+
+* DPDK build,
+* Gathering of DPDK build info,
+* The running of DPDK apps, interactively or one-time execution,
+* DPDK apps cleanup.
+
+The :option:`--tarball` command line argument and the 
:envvar:`DTS_DPDK_TARBALL`
+environment variable configure the path to the DPDK tarball
+or the git commit ID, tag ID or tree ID to test.
+
+Attributes:
+config: The SUT node configuration
 """
 
 config: SutNodeConfiguration
@@ -94,6 +120,11 @@ class SutNode(Node):
 _path_to_devbind_script: PurePath | None
 
 def __init__(self, node_config: SutNodeConfiguration):
+"""Extend the constructor with SUT node specifics.
+
+Args:
+node_config: The SUT node's test run configuration.
+"""
 super(SutNode, self).__init__(node_config)
 self._dpdk_prefix_list = []
 self._build_target_config = None
@@ -113,6 +144,12 @@ def __init__(self, node_config: SutNodeConfiguration):
 
 @property
 def _remote_dpdk_dir(self) -> PurePath:
+"""The remote DPDK dir.
+
+This internal property should be set after extracting the DPDK 
tarball. If it's not set,
+that implies the DPDK setup step has been skipped, in which case we 
can guess where
+a previous build was located.
+"""
 if self.__remote_dpdk_dir is 

[PATCH v8 19/21] dts: base traffic generators docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../traffic_generator/__init__.py | 22 -
 .../capturing_traffic_generator.py| 45 +++
 .../traffic_generator/traffic_generator.py| 33 --
 3 files changed, 67 insertions(+), 33 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/__init__.py 
b/dts/framework/testbed_model/traffic_generator/__init__.py
index 52888d03fa..11e2bd7d97 100644
--- a/dts/framework/testbed_model/traffic_generator/__init__.py
+++ b/dts/framework/testbed_model/traffic_generator/__init__.py
@@ -1,6 +1,19 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
+"""DTS traffic generators.
+
+A traffic generator is capable of generating traffic and then monitor 
returning traffic.
+All traffic generators must count the number of received packets. Some may 
additionally capture
+individual packets.
+
+A traffic generator may be software running on generic hardware or it could be 
specialized hardware.
+
+The traffic generators that only count the number of received packets are 
suitable only for
+performance testing. In functional testing, we need to be able to dissect each 
arrived packet
+and a capturing traffic generator is required.
+"""
+
 from framework.config import ScapyTrafficGeneratorConfig, TrafficGeneratorType
 from framework.exception import ConfigurationError
 from framework.testbed_model.node import Node
@@ -12,8 +25,15 @@
 def create_traffic_generator(
 tg_node: Node, traffic_generator_config: ScapyTrafficGeneratorConfig
 ) -> CapturingTrafficGenerator:
-"""A factory function for creating traffic generator object from user 
config."""
+"""The factory function for creating traffic generator objects from the 
test run configuration.
+
+Args:
+tg_node: The traffic generator node where the created traffic 
generator will be running.
+traffic_generator_config: The traffic generator config.
 
+Returns:
+A traffic generator capable of capturing received packets.
+"""
 match traffic_generator_config.traffic_generator_type:
 case TrafficGeneratorType.SCAPY:
 return ScapyTrafficGenerator(tg_node, traffic_generator_config)
diff --git 
a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py 
b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
index 1fc7f98c05..0246590333 100644
--- 
a/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
+++ 
b/dts/framework/testbed_model/traffic_generator/capturing_traffic_generator.py
@@ -23,19 +23,21 @@
 
 
 def _get_default_capture_name() -> str:
-"""
-This is the function used for the default implementation of capture names.
-"""
 return str(uuid.uuid4())
 
 
 class CapturingTrafficGenerator(TrafficGenerator):
 """Capture packets after sending traffic.
 
-A mixin interface which enables a packet generator to declare that it can 
capture
+The intermediary interface which enables a packet generator to declare 
that it can capture
 packets and return them to the user.
 
+Similarly to :class:`~.traffic_generator.TrafficGenerator`, this class 
exposes
+the public methods specific to capturing traffic generators and defines a 
private method
+that must implement the traffic generation and capturing logic in 
subclasses.
+
 The methods of capturing traffic generators obey the following workflow:
+
 1. send packets
 2. capture packets
 3. write the capture to a .pcap file
@@ -44,6 +46,7 @@ class CapturingTrafficGenerator(TrafficGenerator):
 
 @property
 def is_capturing(self) -> bool:
+"""This traffic generator can capture traffic."""
 return True
 
 def send_packet_and_capture(
@@ -54,11 +57,12 @@ def send_packet_and_capture(
 duration: float,
 capture_name: str = _get_default_capture_name(),
 ) -> list[Packet]:
-"""Send a packet, return received traffic.
+"""Send `packet` and capture received traffic.
+
+Send `packet` on `send_port` and then return all traffic captured
+on `receive_port` for the given `duration`.
 
-Send a packet on the send_port and then return all traffic captured
-on the receive_port for the given duration. Also record the captured 
traffic
-in a pcap file.
+The captured traffic is recorded in the `capture_name`.pcap file.
 
 Args:
 packet: The packet to send.
@@ -68,7 +72,7 @@ def send_packet_and_capture(
 capture_name: The name of the .pcap file where to store the 
capture.
 
 Returns:
- A list of received packets. May be empty if no packets are 
captured.
+ The received packets. May be empty if no packets are captured.
 """
 return self.send_packets_and_captur

[PATCH v8 20/21] dts: scapy tg docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 .../testbed_model/traffic_generator/scapy.py  | 91 +++
 1 file changed, 54 insertions(+), 37 deletions(-)

diff --git a/dts/framework/testbed_model/traffic_generator/scapy.py 
b/dts/framework/testbed_model/traffic_generator/scapy.py
index c88cf28369..30ea3914ee 100644
--- a/dts/framework/testbed_model/traffic_generator/scapy.py
+++ b/dts/framework/testbed_model/traffic_generator/scapy.py
@@ -2,14 +2,15 @@
 # Copyright(c) 2022 University of New Hampshire
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""Scapy traffic generator.
+"""The Scapy traffic generator.
 
-Traffic generator used for functional testing, implemented using the Scapy 
library.
+A traffic generator used for functional testing, implemented with
+`the Scapy library `_.
 The traffic generator uses an XML-RPC server to run Scapy on the remote TG 
node.
 
-The XML-RPC server runs in an interactive remote SSH session running Python 
console,
-where we start the server. The communication with the server is facilitated 
with
-a local server proxy.
+The traffic generator uses the :mod:`xmlrpc.server` module to run an XML-RPC 
server
+in an interactive remote Python SSH session. The communication with the server 
is facilitated
+with a local server proxy from the :mod:`xmlrpc.client` module.
 """
 
 import inspect
@@ -69,20 +70,20 @@ def scapy_send_packets_and_capture(
 recv_iface: str,
 duration: float,
 ) -> list[bytes]:
-"""RPC function to send and capture packets.
+"""The RPC function to send and capture packets.
 
-The function is meant to be executed on the remote TG node.
+The function is meant to be executed on the remote TG node via the server 
proxy.
 
 Args:
 xmlrpc_packets: The packets to send. These need to be converted to
-xmlrpc.client.Binary before sending to the remote server.
+:class:`~xmlrpc.client.Binary` objects before sending to the 
remote server.
 send_iface: The logical name of the egress interface.
 recv_iface: The logical name of the ingress interface.
 duration: Capture for this amount of time, in seconds.
 
 Returns:
 A list of bytes. Each item in the list represents one packet, which 
needs
-to be converted back upon transfer from the remote node.
+to be converted back upon transfer from the remote node.
 """
 scapy_packets = [scapy.all.Packet(packet.data) for packet in 
xmlrpc_packets]
 sniffer = scapy.all.AsyncSniffer(
@@ -96,19 +97,15 @@ def scapy_send_packets_and_capture(
 
 
 def scapy_send_packets(xmlrpc_packets: list[xmlrpc.client.Binary], send_iface: 
str) -> None:
-"""RPC function to send packets.
+"""The RPC function to send packets.
 
-The function is meant to be executed on the remote TG node.
-It doesn't return anything, only sends packets.
+The function is meant to be executed on the remote TG node via the server 
proxy.
+It only sends `xmlrpc_packets`, without capturing them.
 
 Args:
 xmlrpc_packets: The packets to send. These need to be converted to
-xmlrpc.client.Binary before sending to the remote server.
+:class:`~xmlrpc.client.Binary` objects before sending to the 
remote server.
 send_iface: The logical name of the egress interface.
-
-Returns:
-A list of bytes. Each item in the list represents one packet, which 
needs
-to be converted back upon transfer from the remote node.
 """
 scapy_packets = [scapy.all.Packet(packet.data) for packet in 
xmlrpc_packets]
 scapy.all.sendp(scapy_packets, iface=send_iface, realtime=True, 
verbose=True)
@@ -128,11 +125,19 @@ def scapy_send_packets(xmlrpc_packets: 
list[xmlrpc.client.Binary], send_iface: s
 
 
 class QuittableXMLRPCServer(SimpleXMLRPCServer):
-"""Basic XML-RPC server that may be extended
-by functions serializable by the marshal module.
+"""Basic XML-RPC server.
+
+The server may be augmented by functions serializable by the 
:mod:`marshal` module.
 """
 
 def __init__(self, *args, **kwargs):
+"""Extend the XML-RPC server initialization.
+
+Args:
+args: The positional arguments that will be passed to the 
superclass's constructor.
+kwargs: The keyword arguments that will be passed to the 
superclass's constructor.
+The `allow_none` argument will be set to :data:`True`.
+"""
 kwargs["allow_none"] = True
 super().__init__(*args, **kwargs)
 self.register_introspection_functions()
@@ -140,13 +145,12 @@ def __init__(self, *args, **kwargs):
 self.register_function(self.add_rpc_function)
 
 def quit(self) -> None:
+"""Quit the server."""
 self._BaseServer__shutdown_request = True
 return None
 
 def add_rpc_function(s

[PATCH v8 21/21] dts: test suites docstring update

2023-11-23 Thread Juraj Linkeš
Format according to the Google format and PEP257, with slight
deviations.

Signed-off-by: Juraj Linkeš 
---
 dts/tests/TestSuite_hello_world.py | 16 +---
 dts/tests/TestSuite_os_udp.py  | 20 ++
 dts/tests/TestSuite_smoke_tests.py | 61 --
 3 files changed, 72 insertions(+), 25 deletions(-)

diff --git a/dts/tests/TestSuite_hello_world.py 
b/dts/tests/TestSuite_hello_world.py
index 768ba1cfa8..fd7ff1534d 100644
--- a/dts/tests/TestSuite_hello_world.py
+++ b/dts/tests/TestSuite_hello_world.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2010-2014 Intel Corporation
 
-"""
+"""The DPDK hello world app test suite.
+
 Run the helloworld example app and verify it prints a message for each used 
core.
 No other EAL parameters apart from cores are used.
 """
@@ -15,22 +16,25 @@
 
 
 class TestHelloWorld(TestSuite):
+"""DPDK hello world app test suite."""
+
 def set_up_suite(self) -> None:
-"""
+"""Set up the test suite.
+
 Setup:
 Build the app we're about to test - helloworld.
 """
 self.app_helloworld_path = self.sut_node.build_dpdk_app("helloworld")
 
 def test_hello_world_single_core(self) -> None:
-"""
+"""Single core test case.
+
 Steps:
 Run the helloworld app on the first usable logical core.
 Verify:
 The app prints a message from the used core:
 "hello from core "
 """
-
 # get the first usable core
 lcore_amount = LogicalCoreCount(1, 1, 1)
 lcores = LogicalCoreCountFilter(self.sut_node.lcores, 
lcore_amount).filter()
@@ -42,14 +46,14 @@ def test_hello_world_single_core(self) -> None:
 )
 
 def test_hello_world_all_cores(self) -> None:
-"""
+"""All cores test case.
+
 Steps:
 Run the helloworld app on all usable logical cores.
 Verify:
 The app prints a message from all used cores:
 "hello from core "
 """
-
 # get the maximum logical core number
 eal_para = self.sut_node.create_eal_parameters(
 lcore_filter_specifier=LogicalCoreList(self.sut_node.lcores)
diff --git a/dts/tests/TestSuite_os_udp.py b/dts/tests/TestSuite_os_udp.py
index bf6b93deb5..2cf29d37bb 100644
--- a/dts/tests/TestSuite_os_udp.py
+++ b/dts/tests/TestSuite_os_udp.py
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 PANTHEON.tech s.r.o.
 
-"""
+"""Basic IPv4 OS routing test suite.
+
 Configure SUT node to route traffic from if1 to if2.
 Send a packet to the SUT node, verify it comes back on the second port on the 
TG node.
 """
@@ -13,24 +14,26 @@
 
 
 class TestOSUdp(TestSuite):
+"""IPv4 UDP OS routing test suite."""
+
 def set_up_suite(self) -> None:
-"""
+"""Set up the test suite.
+
 Setup:
-Configure SUT ports and SUT to route traffic from if1 to if2.
+Bind the SUT ports to the OS driver, configure the ports and 
configure the SUT
+to route traffic from if1 to if2.
 """
-
-# This test uses kernel drivers
 self.sut_node.bind_ports_to_driver(for_dpdk=False)
 self.configure_testbed_ipv4()
 
 def test_os_udp(self) -> None:
-"""
+"""Basic UDP IPv4 traffic test case.
+
 Steps:
 Send a UDP packet.
 Verify:
 The packet with proper addresses arrives at the other TG port.
 """
-
 packet = Ether() / IP() / UDP()
 
 received_packets = self.send_packet_and_capture(packet)
@@ -40,7 +43,8 @@ def test_os_udp(self) -> None:
 self.verify_packets(expected_packet, received_packets)
 
 def tear_down_suite(self) -> None:
-"""
+"""Tear down the test suite.
+
 Teardown:
 Remove the SUT port configuration configured in setup.
 """
diff --git a/dts/tests/TestSuite_smoke_tests.py 
b/dts/tests/TestSuite_smoke_tests.py
index 8958f58dac..5e2bac14bd 100644
--- a/dts/tests/TestSuite_smoke_tests.py
+++ b/dts/tests/TestSuite_smoke_tests.py
@@ -1,6 +1,17 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2023 University of New Hampshire
 
+"""Smoke test suite.
+
+Smoke tests are a class of tests which are used for validating a minimal set 
of important features.
+These are the most important features without which (or when they're faulty) 
the software wouldn't
+work properly. Thus, if any failure occurs while testing these features,
+there isn't that much of a reason to continue testing, as the software is 
fundamentally broken.
+
+These tests don't have to include only DPDK tests, as the reason for failures 
could be
+in the infrastructure (a faulty link between NICs or a misconfiguration).
+"""
+
 import re
 
 from framework.config import PortConfig
@@ -11,23 +22,39 @@
 
 
 class SmokeTests(TestSuite):
+"""DPDK and infrastructure smoke test suit

Re: [PATCH 24.03 1/4] arg_parser: new library for command line parsing

2023-11-23 Thread Bruce Richardson
On Wed, Nov 22, 2023 at 04:45:47PM +, Euan Bourke wrote:
> Add a new library to make it easier for eal and other libraries to parse
> command line arguments.
> 
> The first function in this library is one to parse a corelist string into an
> array of individual core ids. The function will then return the total number
> of cores described in the corelist
> 
> Signed-off-by: Euan Bourke 

Thanks for the patchset. Some comments inline below.

/Bruce

> ---
>  .mailmap|   1 +
>  MAINTAINERS |   5 ++
>  doc/api/doxy-api-index.md   |   3 +-
>  doc/api/doxy-api.conf.in|   1 +
>  lib/arg_parser/arg_parser.c | 113 
>  lib/arg_parser/meson.build  |   7 ++
>  lib/arg_parser/rte_arg_parser.h |  66 +++
>  lib/arg_parser/version.map  |  10 +++
>  lib/meson.build |   1 +
>  9 files changed, 206 insertions(+), 1 deletion(-)
>  create mode 100644 lib/arg_parser/arg_parser.c
>  create mode 100644 lib/arg_parser/meson.build
>  create mode 100644 lib/arg_parser/rte_arg_parser.h
>  create mode 100644 lib/arg_parser/version.map
> 
> diff --git a/.mailmap b/.mailmap
> index 72b216df9c..c1a4bf85f6 100644
> --- a/.mailmap
> +++ b/.mailmap
> @@ -379,6 +379,7 @@ Eric Zhang 
>  Erik Gabriel Carrillo 
>  Erik Ziegenbalg 
>  Erlu Chen 
> +Euan Bourke 
>  Eugenio Pérez 
>  Eugeny Parshutin 
>  Evan Swanson 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cf2af0d3a4..ce81877ce0 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1753,6 +1753,11 @@ M: Nithin Dabilpuram 
>  M: Pavan Nikhilesh 
>  F: lib/node/
>  
> +Argument parsing
> +M: Bruce Richardson 
> +M: Euan Bourke 
> +F: lib/arg_parser/
> +
>  
>  Test Applications
>  -
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a6a768bd7c..f711010140 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -221,7 +221,8 @@ The public API headers are grouped by topics:
>[config file](@ref rte_cfgfile.h),
>[key/value args](@ref rte_kvargs.h),
>[string](@ref rte_string_fns.h),
> -  [thread](@ref rte_thread.h)
> +  [thread](@ref rte_thread.h),
> +  [argument parsing](@ref rte_arg_parser.h)
>  
>  - **debug**:
>[jobstats](@ref rte_jobstats.h),
> diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
> index e94c9e4e46..05718ba6ed 100644
> --- a/doc/api/doxy-api.conf.in
> +++ b/doc/api/doxy-api.conf.in
> @@ -28,6 +28,7 @@ INPUT   = 
> @TOPDIR@/doc/api/doxy-api-index.md \
>@TOPDIR@/lib/eal/include \
>@TOPDIR@/lib/eal/include/generic \
>@TOPDIR@/lib/acl \
> +  @TOPDIR@/lib/arg_parser \
>@TOPDIR@/lib/bbdev \
>@TOPDIR@/lib/bitratestats \
>@TOPDIR@/lib/bpf \
> diff --git a/lib/arg_parser/arg_parser.c b/lib/arg_parser/arg_parser.c
> new file mode 100644
> index 00..45acaf5631
> --- /dev/null
> +++ b/lib/arg_parser/arg_parser.c
> @@ -0,0 +1,113 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +
> +#include "errno.h"
> +#include "stdlib.h"
> +#include "ctype.h"
> +#include "string.h"
> +#include "stdbool.h"
> +
> +#include 
> +#include 
> +
> +
> +struct core_bits {
> + uint8_t bits[(UINT16_MAX + 1)/CHAR_BIT];
> + uint16_t max_bit_set;
> + uint16_t min_bit_set;
> + uint32_t total_bits_set;
> +};
> +
> +static inline bool
> +get_core_bit(struct core_bits *mask, uint16_t idx)
> +{
> + return !!(mask->bits[idx/8] & (1 << (idx % 8)));

Very minor nit, whitespace around the "/" in idx/8.

> +}
> +
> +static inline void
> +set_core_bit(struct core_bits *mask, uint16_t idx)
> +{
> + if (get_core_bit(mask, idx) == 0) {

The function would be simpler flipping the comparison, since we do nothing
if the bit is already set.

if (get_core_bit(mask, idx))
return;

Thereafter you can unconditionally set the bit, and increment
total_bits_set, before branching for total_bits_set == 1, and the min/max
comparison in the else leg of that.
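
As a sketch only (not the final code, and assuming the same struct core_bits
fields as in the patch), the restructured function could look like:

	static inline void
	set_core_bit(struct core_bits *mask, uint16_t idx)
	{
		if (get_core_bit(mask, idx))
			return;

		/* bit not previously set: record it and update the bookkeeping */
		mask->bits[idx / 8] |= 1 << (idx % 8);
		mask->total_bits_set++;

		if (mask->total_bits_set == 1) {
			mask->min_bit_set = idx;
			mask->max_bit_set = idx;
		} else {
			if (idx > mask->max_bit_set)
				mask->max_bit_set = idx;
			if (idx < mask->min_bit_set)
				mask->min_bit_set = idx;
		}
	}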

> + mask->total_bits_set++;
> +
> + /* If its the first bit, assign min and max that value */
> + if (mask->total_bits_set == 1) {
> + mask->min_bit_set = idx;
> + mask->max_bit_set = idx;
> + }
> + }
> +
> + mask->bits[idx/8] |= 1 << (idx % 8);
> +
> + if (idx > mask->max_bit_set)
> + mask->max_bit_set = idx;
> +
> + if (idx < mask->min_bit_set)
> + mask->min_bit_set = idx;
> +}
> +
> +static inline void
> +corebits_to_array(struct core_bits *mask, uint16_t *cores, size_t max_cores)
> +{
> + uint32_t count = 0;
> + for (uint32_t i = mask->min_bit_set; i <= mask->max_bit_set && count < 
> max_cores; i++) {
> + if

[PATCH 0/1] rebase iova fixes

2023-11-23 Thread christian . ehrhardt
From: Christian Ehrhardt 

Testing 23.11 has shown that [1] is still needed: all tests
on ppc64 with no-huge still fail because iova=pa breaks when
physical addresses are not available.

This is the rebase of [1] to 23.11 and I've added the matching
change to the documentation as well.

David Wilder (1):
  eal/linux: force iova-mode va without pa available

 doc/guides/prog_guide/env_abstraction_layer.rst |  9 ++---
 lib/eal/linux/eal.c | 14 --
 2 files changed, 14 insertions(+), 9 deletions(-)

-- 
2.34.1



[PATCH 1/1] eal/linux: force iova-mode va without pa available

2023-11-23 Thread christian . ehrhardt
From: David Wilder 

When using the --no-huge option, physical addresses are not guaranteed
to be persistent.

This change effectively makes "--no-huge" the same as
"--no-huge --iova-mode=va".

When --no-huge is used (or under any other condition that makes physical
addresses unavailable), setting --iova-mode=pa will have no effect.

Signed-off-by: Christian Ehrhardt 
---
 doc/guides/prog_guide/env_abstraction_layer.rst |  9 ++---
 lib/eal/linux/eal.c | 14 --
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst 
b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6debf54efb..20c7355e0f 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -559,9 +559,12 @@ IOVA Mode is selected by considering what the current 
usable Devices on the
 system require and/or support.
 
 On FreeBSD, RTE_IOVA_PA is always the default. On Linux, the IOVA mode is
-detected based on a 2-step heuristic detailed below.
+detected based on a heuristic detailed below.
 
-For the first step, EAL asks each bus its requirement in terms of IOVA mode
+For the first step, if no Physical Addresses are available RTE_IOVA_VA is
+selected.
+
+Then EAL asks each bus its requirement in terms of IOVA mode
 and decides on a preferred IOVA mode.
 
 - if all buses report RTE_IOVA_PA, then the preferred IOVA mode is RTE_IOVA_PA,
@@ -575,7 +578,7 @@ and decides on a preferred IOVA mode.
 If the buses have expressed no preference on which IOVA mode to pick, then a
 default is selected using the following logic:
 
-- if physical addresses are not available, RTE_IOVA_VA mode is used
+- if enable_iova_as_pa was not set at build RTE_IOVA_VA mode is used
 - if /sys/kernel/iommu_groups is not empty, RTE_IOVA_VA mode is used
 - otherwise, RTE_IOVA_PA mode is used
 
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 57da058cec..7d0eedef57 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1067,6 +1067,14 @@ rte_eal_init(int argc, char **argv)
 
phys_addrs = rte_eal_using_phys_addrs() != 0;
 
+   if (!phys_addrs) {
+   /* if we have no access to physical addresses,
+* pick IOVA as VA mode.
+*/
+   iova_mode = RTE_IOVA_VA;
+   RTE_LOG(INFO, EAL, "Physical addresses are unavailable, 
selecting IOVA as VA mode.\n");
+   }
+
/* if no EAL option "--iova-mode=", use bus IOVA scheme */
if (internal_conf->iova_mode == RTE_IOVA_DC) {
/* autodetect the IOVA mapping mode */
@@ -1078,12 +1086,6 @@ rte_eal_init(int argc, char **argv)
if (!RTE_IOVA_IN_MBUF) {
iova_mode = RTE_IOVA_VA;
RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced 
by build option.\n");
-   } else if (!phys_addrs) {
-   /* if we have no access to physical addresses,
-* pick IOVA as VA mode.
-*/
-   iova_mode = RTE_IOVA_VA;
-   RTE_LOG(DEBUG, EAL, "Physical addresses are 
unavailable, selecting IOVA as VA mode.\n");
} else if (is_iommu_enabled()) {
/* we have an IOMMU, pick IOVA as VA mode */
iova_mode = RTE_IOVA_VA;
-- 
2.34.1



Re: [PATCH 24.03 2/4] arg_parser: add new coremask parsing API

2023-11-23 Thread Bruce Richardson
On Wed, Nov 22, 2023 at 04:45:48PM +, Euan Bourke wrote:
> Add new coremask parsing API. This API behaves similarly to the corelist 
> parsing
> API, parsing the coremask string, filling its values into the cores array.
> 

General tip - commit log messages are generally wrapped at 72 characters.

> The API also returns a 'count' which corresponds to the total number of cores
> in the coremask string.
> 
> Signed-off-by: Euan Bourke 

Again, some review comments inline below.

/Bruce

> ---
>  lib/arg_parser/arg_parser.c | 58 +
>  lib/arg_parser/rte_arg_parser.h | 33 +++
>  lib/arg_parser/version.map  |  1 +
>  3 files changed, 92 insertions(+)
> 
> diff --git a/lib/arg_parser/arg_parser.c b/lib/arg_parser/arg_parser.c
> index 45acaf5631..58be94b67d 100644
> --- a/lib/arg_parser/arg_parser.c
> +++ b/lib/arg_parser/arg_parser.c
> @@ -11,6 +11,9 @@
>  #include 
>  #include 
>  
> +#define BITS_PER_HEX 4
> +#define MAX_COREMASK_SIZE ((UINT16_MAX+1)/BITS_PER_HEX)
> +

Whitespace around "/" operator here, and below in bits definition (which I
missed on review of first patch).

>  
>  struct core_bits {
>   uint8_t bits[(UINT16_MAX + 1)/CHAR_BIT];
> @@ -57,6 +60,15 @@ corebits_to_array(struct core_bits *mask, uint16_t *cores, 
> size_t max_cores)
>   }
>  }
>  
> +static int xdigit2val(unsigned char c)
> +{
> + if (isdigit(c))
> + return c - '0';
> + else if (isupper(c))
> + return c - 'A' + 10;
> + else
> + return c - 'a' + 10;
> +}
>  
>  int
>  rte_parse_corelist(const char *corelist, uint16_t *cores, uint32_t cores_len)
> @@ -111,3 +123,49 @@ rte_parse_corelist(const char *corelist, uint16_t 
> *cores, uint32_t cores_len)
>  
>   return total_count;
>  }
> +
> +int
> +rte_parse_coremask(const char *coremask, uint16_t *cores, uint32_t cores_len)
> +{
> + struct core_bits *mask = malloc(sizeof(struct core_bits));

Check return value from malloc. Need to do so in patch 1 also.
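
For example (sketch only, keeping the existing convention of returning -1 on
error):

	struct core_bits *mask = malloc(sizeof(struct core_bits));
	if (mask == NULL)
		return -1;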

> + memset(mask, 0, sizeof(struct core_bits));
> +
> + /* Remove all blank characters ahead and after .
> +  * Remove 0x/0X if exists.
> +  */
> + while (isblank(*coremask))
> + coremask++;
> + if (coremask[0] == '0' && ((coremask[1] == 'x')
> + || (coremask[1] == 'X')))

Nit: this can all fit on one line, as it's <100 chars long.
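
i.e., joined into a single line (sketch):

	if (coremask[0] == '0' && ((coremask[1] == 'x') || (coremask[1] == 'X')))
		coremask += 2;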

> + coremask += 2;
> +
> + int32_t i = strlen(coremask);
> + while ((i > 0) && isblank(coremask[i - 1]))
> + i--;
> + if (i == 0 || i > MAX_COREMASK_SIZE)
> + return -1;
> +
> + uint32_t idx = 0;
> + uint8_t j;
> + int val;
> +

You can define "val" inside the for loop as it's not needed outside it.
Since we use the C11 standard, you can avoid declaring j here too, and
just do "for (uint8_t j = 0; )".

> + for (i = i - 1; i >= 0; i--) {
> + char c = coremask[i];
> +
> + if (isxdigit(c) == 0)
> + return -1;
> +
> + val = xdigit2val(c);
> +
> + for (j = 0; j < BITS_PER_HEX; j++, idx++) {
> + if ((1 << j) & val)
> + set_core_bit(mask, idx);
> + }
> + }
> +
> + corebits_to_array(mask, cores, cores_len);
> + uint32_t total_count = mask->total_bits_set;
> + free(mask);
> +
> + return total_count;
> +}
> diff --git a/lib/arg_parser/rte_arg_parser.h b/lib/arg_parser/rte_arg_parser.h
> index 1b12bf451f..b149b37755 100644
> --- a/lib/arg_parser/rte_arg_parser.h
> +++ b/lib/arg_parser/rte_arg_parser.h
> @@ -58,6 +58,39 @@ __rte_experimental
>  int
>  rte_parse_corelist(const char *corelist, uint16_t *cores, uint32_t 
> cores_len);
>  
> +/**
> + * Convert a string describing a bitmask of core ids into an array of core 
> ids.
> + *
> + * On success, the passed array is filled with the core ids present in the
> + * bitmask up to the "cores_len", and the number of elements added into the 
> array is returned.
> + * For example, passing a 0xA "coremask" results in an array of [1, 3]
> + * and would return 2.
> + * 
> + * Like the snprintf function for strings, if the length of the input array 
> is
> + * insufficient to hold the number of cores in the "coresmask", the input 
> array is
> + * filled to capacity and the return value is the number of elements which 
> would
> + * be returned if the array had been big enough.
> + * Function can also be called with a NULL array and 0 "cores_len" to find 
> out
> + * the "cores_len" required.
> + *
> + * @param coremask
> + *   A string containing a bitmask of core ids.
> + * @param cores
> + *   An array where to store the core ids.
> + *   Array can be NULL if "cores_len" is 0.
> + * @param cores_len
> + *   The length of the "cores" array.
> + *   If the size is smaller than that needed to hold all cores from 
> "coremask",
> + *   only "cores_len" elements will be written to the array.
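
To make the snprintf-style contract concrete, a possible call sequence based
on the documented semantics would be (sketch only, untested):

	int n = rte_parse_coremask("0xA", NULL, 0);   /* probe: returns 2 */
	if (n < 0)
		return -1;                            /* malformed coremask */
	uint16_t *cores = malloc(n * sizeof(*cores));
	if (cores == NULL)
		return -1;
	rte_parse_coremask("0xA", cores, n);          /* fills cores = {1, 3} */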

[PATCH] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread Ciara Power
Currently, when using IPsec-mb 1.4+, the process ID is obtained for each
job in a burst with a call to getpid().
This system call uses too many CPU cycles, and is unnecessary per job.

Instead, set the process ID value per lcore.
This is read when processing the burst, instead of per job.
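
In outline, the per-lcore caching pattern applied below is (sketch only;
aesni_mb_get_pid is a hypothetical helper name, the patch open-codes the same
check in the dequeue path):

	RTE_DEFINE_PER_LCORE(pid_t, pid);

	static inline pid_t
	aesni_mb_get_pid(void)
	{
		/* getpid() is called at most once per lcore, then cached */
		if (!RTE_PER_LCORE(pid))
			RTE_PER_LCORE(pid) = getpid();
		return RTE_PER_LCORE(pid);
	}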

Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session ID")
Cc: sta...@dpdk.org

Signed-off-by: Ciara Power 
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 22 ++
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 7f61065939..e63ba23a11 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -6,6 +6,8 @@
 
 #include "pmd_aesni_mb_priv.h"
 
+RTE_DEFINE_PER_LCORE(pid_t, pid);
+
 struct aesni_mb_op_buf_data {
struct rte_mbuf *m;
uint32_t offset;
@@ -846,6 +848,7 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
sess->session_id = imb_set_session(mb_mgr, &sess->template_job);
sess->pid = getpid();
+   RTE_PER_LCORE(pid) = sess->pid;
 #endif
 
return 0;
@@ -1503,7 +1506,7 @@ aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, 
IMB_JOB *job,
 static inline int
 set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
struct rte_crypto_op *op, uint8_t *digest_idx,
-   IMB_MGR *mb_mgr)
+   IMB_MGR *mb_mgr, pid_t pid)
 {
struct rte_mbuf *m_src = op->sym->m_src, *m_dst;
struct aesni_mb_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
@@ -1517,6 +1520,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint8_t sgl = 0;
uint8_t lb_sgl = 0;
 
+#if IMB_VERSION(1, 3, 0) >= IMB_VERSION_NUM
+   (void) pid;
+#endif
+
session = ipsec_mb_get_session_private(qp, op);
if (session == NULL) {
op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
@@ -1527,7 +1534,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session->template_job.cipher_mode;
 
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
-   if (session->pid != getpid()) {
+   if (session->pid != pid) {
memcpy(job, &session->template_job, sizeof(IMB_JOB));
imb_set_session(mb_mgr, job);
} else if (job->session_id != session->session_id)
@@ -2136,6 +2143,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
int retval, processed_jobs = 0;
uint16_t i, nb_jobs;
IMB_JOB *jobs[IMB_MAX_BURST_SIZE] = {NULL};
+   pid_t pid;
 
if (unlikely(nb_ops == 0 || mb_mgr == NULL))
return 0;
@@ -2174,6 +2182,11 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
}
}
 
+   if (!RTE_PER_LCORE(pid))
+   RTE_PER_LCORE(pid) = getpid();
+
+   pid = RTE_PER_LCORE(pid);
+
/*
 * Get the next operations to process from ingress queue.
 * There is no need to return the job to the IMB_MGR
@@ -2192,7 +2205,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
   &digest_idx);
else
retval = set_mb_job_params(job, qp, op,
-  &digest_idx, mb_mgr);
+  &digest_idx, mb_mgr, 
pid);
 
if (unlikely(retval != 0)) {
qp->stats.dequeue_err_count++;
@@ -2315,6 +2328,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
struct rte_crypto_op *op;
IMB_JOB *job;
int retval, processed_jobs = 0;
+   pid_t pid = 0;
 
if (unlikely(nb_ops == 0 || mb_mgr == NULL))
return 0;
@@ -2351,7 +2365,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
&digest_idx);
else
retval = set_mb_job_params(job, qp, op,
-   &digest_idx, mb_mgr);
+   &digest_idx, mb_mgr, pid);
 
if (unlikely(retval != 0)) {
qp->stats.dequeue_err_count++;
-- 
2.25.1



RE: [PATCH] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Power, Ciara 
> Sent: Thursday, November 23, 2023 5:07 PM
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; Ji, Kai ; De Lara Guarch, Pablo
> ; Power, Ciara ;
> sta...@dpdk.org
> Subject: [PATCH] crypto/ipsec_mb: fix getting process ID per job
> 
> Currently, when using IPsec-mb 1.4+, the process ID is obtained for each job 
> in
> a burst with a call to getpid().
> This system call uses too many CPU cycles, and is unnecessary per job.
> 
> Instead, set the process ID value per lcore.
> This is read when processing the burst, instead of per job.
> 
> Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session
> ID")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ciara Power 

Acked-by: Pablo de Lara 

Re: [PATCH] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread Ji, Kai
Acked-by: Kai Ji 


From: Power, Ciara 
Sent: 23 November 2023 17:07
To: dev@dpdk.org 
Cc: tho...@monjalon.net ; Ji, Kai ; De 
Lara Guarch, Pablo ; Power, Ciara 
; sta...@dpdk.org 
Subject: [PATCH] crypto/ipsec_mb: fix getting process ID per job

Currently, when using IPsec-mb 1.4+, the process ID is obtained for each
job in a burst with a call to getpid().
This system call uses too many CPU cycles, and is unnecessary per job.

Instead, set the process ID value per lcore.
This is read when processing the burst, instead of per job.

Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session ID")
Cc: sta...@dpdk.org

Signed-off-by: Ciara Power 
---

--
2.25.1



[PATCH v2] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread Ciara Power
Currently, when using IPsec-mb 1.4+, the process ID is obtained for each
job in a burst with a call to getpid().
This system call uses too many CPU cycles, and is unnecessary per job.

Instead, set the process ID value per lcore.
This is read when processing the burst, instead of per job.

Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session ID")
Cc: sta...@dpdk.org

Signed-off-by: Ciara Power 
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 22 ++
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c 
b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index ece9cfd5ed..4de4866cf3 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -6,6 +6,8 @@
 
 #include "pmd_aesni_mb_priv.h"
 
+RTE_DEFINE_PER_LCORE(pid_t, pid);
+
 struct aesni_mb_op_buf_data {
struct rte_mbuf *m;
uint32_t offset;
@@ -846,6 +848,7 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
sess->session_id = imb_set_session(mb_mgr, &sess->template_job);
sess->pid = getpid();
+   RTE_PER_LCORE(pid) = sess->pid;
 #endif
 
return 0;
@@ -1503,7 +1506,7 @@ aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, 
IMB_JOB *job,
 static inline int
 set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
struct rte_crypto_op *op, uint8_t *digest_idx,
-   IMB_MGR *mb_mgr)
+   IMB_MGR *mb_mgr, pid_t pid)
 {
struct rte_mbuf *m_src = op->sym->m_src, *m_dst;
struct aesni_mb_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
@@ -1517,6 +1520,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
uint8_t sgl = 0;
uint8_t lb_sgl = 0;
 
+#if IMB_VERSION(1, 3, 0) >= IMB_VERSION_NUM
+   (void) pid;
+#endif
+
session = ipsec_mb_get_session_private(qp, op);
if (session == NULL) {
op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
@@ -1527,7 +1534,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
session->template_job.cipher_mode;
 
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
-   if (session->pid != getpid()) {
+   if (session->pid != pid) {
memcpy(job, &session->template_job, sizeof(IMB_JOB));
imb_set_session(mb_mgr, job);
} else if (job->session_id != session->session_id)
@@ -2136,6 +2143,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
int retval, processed_jobs = 0;
uint16_t i, nb_jobs;
IMB_JOB *jobs[IMB_MAX_BURST_SIZE] = {NULL};
+   pid_t pid;
 
if (unlikely(nb_ops == 0 || mb_mgr == NULL))
return 0;
@@ -2176,6 +2184,11 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
continue;
}
 
+   if (!RTE_PER_LCORE(pid))
+   RTE_PER_LCORE(pid) = getpid();
+
+   pid = RTE_PER_LCORE(pid);
+
/*
 * Get the next operations to process from ingress queue.
 * There is no need to return the job to the IMB_MGR
@@ -2194,7 +2207,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
   &digest_idx);
else
retval = set_mb_job_params(job, qp, op,
-  &digest_idx, mb_mgr);
+  &digest_idx, mb_mgr, 
pid);
 
if (unlikely(retval != 0)) {
qp->stats.dequeue_err_count++;
@@ -2317,6 +2330,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
struct rte_crypto_op *op;
IMB_JOB *job;
int retval, processed_jobs = 0;
+   pid_t pid = 0;
 
if (unlikely(nb_ops == 0 || mb_mgr == NULL))
return 0;
@@ -2353,7 +2367,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct 
rte_crypto_op **ops,
&digest_idx);
else
retval = set_mb_job_params(job, qp, op,
-   &digest_idx, mb_mgr);
+   &digest_idx, mb_mgr, pid);
 
if (unlikely(retval != 0)) {
qp->stats.dequeue_err_count++;
-- 
2.25.1
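
For context, here is a minimal, illustrative sketch of the per-lcore caching
idea described in the commit message above (not the driver code itself; the
variable and function names are placeholders):

#include <unistd.h>
#include <rte_per_lcore.h>

/* One private copy of the cached PID per lcore (worker thread). */
static RTE_DEFINE_PER_LCORE(pid_t, cached_pid);

/* Return the process ID without issuing a getpid() syscall per job. */
static inline pid_t
example_get_pid(void)
{
        /* First call on this lcore pays the syscall cost once. */
        if (!RTE_PER_LCORE(cached_pid))
                RTE_PER_LCORE(cached_pid) = getpid();

        return RTE_PER_LCORE(cached_pid);
}

The cached value is then passed down to the per-job path, as the patch does
with the extra pid parameter of set_mb_job_params().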



Re: [PATCH v2] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread Ji, Kai
Acked-by: Kai Ji 



From: Power, Ciara 
Sent: 23 November 2023 17:15
To: dev@dpdk.org 
Cc: tho...@monjalon.net ; Ji, Kai ; De 
Lara Guarch, Pablo ; Power, Ciara 
; sta...@dpdk.org 
Subject: [PATCH v2] crypto/ipsec_mb: fix getting process ID per job

Currently, when using IPsec-mb 1.4+, the process ID is obtained for each
job in a burst with a call to getpid().
This system call uses too many CPU cycles, and is unnecessary per job.

Instead, set the process ID value per lcore.
This is read when processing the burst, instead of per job.

Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session ID")
Cc: sta...@dpdk.org

Signed-off-by: Ciara Power 




RE: [PATCH 3/5] doc: fix some ordered lists

2023-11-23 Thread Dariusz Sosnowski
Hi,

> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index
> 45379960f0..39a8c5d7b4 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -2326,19 +2326,18 @@ This command performs:
> 
>  #. Call the regular ``port attach`` function with updated identifier.
> 
> -For example, to attach a port whose PCI address is ``:0a:00.0`` -and its
> socket path is ``/var/run/import_ipc_socket``:
> +   For example, to attach a port whose PCI address is ``:0a:00.0``
> +   and its socket path is ``/var/run/import_ipc_socket``:
> 
> -.. code-block:: console
> -
> -   testpmd> mlx5 port attach :0a:00.0
> socket=/var/run/import_ipc_socket
> -   testpmd: MLX5 socket path is /var/run/import_ipc_socket
> -   testpmd: Attach port with extra devargs
> :0a:00.0,cmd_fd=40,pd_handle=1
> -   Attaching a new port...
> -   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: :0a:00.0 (socket
> 0)
> -   Port 0 is attached. Now total ports is 1
> -   Done
> +   .. code-block:: console
> 
> +  testpmd> mlx5 port attach :0a:00.0
> socket=/var/run/import_ipc_socket
> +  testpmd: MLX5 socket path is /var/run/import_ipc_socket
> +  testpmd: Attach port with extra devargs
> :0a:00.0,cmd_fd=40,pd_handle=1
> +  Attaching a new port...
> +  EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: :0a:00.0 
> (socket
> 0)
> +  Port 0 is attached. Now total ports is 1
> +  Done
> 
>  port map external Rx queue
>  ~~
The preceding list explains what the "mlx5 port attach" command does, and the
following section provides a usage example.
I don't think this section should be part of that list.

Best regards,
Dariusz Sosnowski


RE: [PATCH 5/5] doc: use ordered lists

2023-11-23 Thread Dariusz Sosnowski
Hi,

> -Original Message-
> From: David Marchand 
> Sent: Thursday, November 23, 2023 12:44
> Subject: [PATCH 5/5] doc: use ordered lists
> 
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand 
> ---
>  doc/guides/eventdevs/dlb2.rst | 29 ++-
>  doc/guides/eventdevs/dpaa.rst |  2 +-
>  .../linux_gsg/nic_perf_intel_platform.rst | 10 ++--
>  doc/guides/nics/cnxk.rst  |  4 +-
>  doc/guides/nics/dpaa2.rst | 19 +++
>  doc/guides/nics/enetc.rst |  6 +--
>  doc/guides/nics/enetfec.rst   | 12 ++---
>  doc/guides/nics/i40e.rst  | 16 +++---
>  doc/guides/nics/mlx4.rst  | 32 ++--
>  doc/guides/nics/mlx5.rst  | 18 +++
>  doc/guides/nics/mvpp2.rst | 49 ++-
>  doc/guides/nics/pfe.rst   |  8 +--
>  doc/guides/nics/tap.rst   | 14 +++---
>  doc/guides/platform/bluefield.rst |  4 +-
>  doc/guides/platform/cnxk.rst  | 26 +-
>  doc/guides/platform/dpaa.rst  | 14 +++---
>  doc/guides/platform/dpaa2.rst | 20 
>  doc/guides/platform/mlx5.rst  | 14 +++---
>  doc/guides/platform/octeontx.rst  | 22 -
>  .../prog_guide/env_abstraction_layer.rst  | 10 ++--
>  doc/guides/prog_guide/graph_lib.rst   | 39 ---
>  doc/guides/prog_guide/rawdev.rst  | 28 ++-
>  doc/guides/prog_guide/rte_flow.rst| 12 ++---
>  doc/guides/prog_guide/stack_lib.rst   |  8 +--
>  doc/guides/prog_guide/trace_lib.rst   | 12 ++---
>  doc/guides/rawdevs/ifpga.rst  |  5 +-
>  doc/guides/sample_app_ug/ip_pipeline.rst  |  4 +-
>  doc/guides/sample_app_ug/pipeline.rst |  4 +-
>  doc/guides/sample_app_ug/vdpa.rst | 26 +-
>  doc/guides/windows_gsg/run_apps.rst   |  8 +--
>  30 files changed, 250 insertions(+), 225 deletions(-)
Looks good to me. Thank you.

Acked-by: Dariusz Sosnowski 

Best regards,
Dariusz Sosnowski


RE: [PATCH v2] crypto/ipsec_mb: fix getting process ID per job

2023-11-23 Thread De Lara Guarch, Pablo



> -Original Message-
> From: Power, Ciara 
> Sent: Thursday, November 23, 2023 5:16 PM
> To: dev@dpdk.org
> Cc: tho...@monjalon.net; Ji, Kai ; De Lara Guarch, Pablo
> ; Power, Ciara ;
> sta...@dpdk.org
> Subject: [PATCH v2] crypto/ipsec_mb: fix getting process ID per job
> 
> Currently, when using IPsec-mb 1.4+, the process ID is obtained for each job 
> in
> a burst with a call to getpid().
> This system call uses too many CPU cycles, and is unnecessary per job.
> 
> Instead, set the process ID value per lcore.
> This is read when processing the burst, instead of per job.
> 
> Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session
> ID")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Ciara Power 

Acked-by: Pablo de Lara 


[PATCH] maintainers: add mlx5 driver platform guides

2023-11-23 Thread Dariusz Sosnowski
Add NVIDIA's platform specific guides to files maintained by networking
mlx5 driver maintainers.

Signed-off-by: Dariusz Sosnowski 
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index b07dfbcd39..f33adb3a65 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -877,6 +877,8 @@ F: drivers/net/mlx5/
 F: buildtools/options-ibverbs-static.sh
 F: doc/guides/nics/mlx5.rst
 F: doc/guides/nics/features/mlx5.ini
+F: doc/guides/platform/bluefield.rst
+F: doc/guides/platform/mlx5.rst
 
 Microsoft mana
 M: Long Li 
-- 
2.25.1



RE: [PATCH v4 07/10] net/mlx5: replace zero length array with flex array

2023-11-23 Thread Dariusz Sosnowski
Hi,

> -Original Message-
> From: Stephen Hemminger 
> Sent: Monday, November 20, 2023 18:07
> Subject: [PATCH v4 07/10] net/mlx5: replace zero length array with flex array
> 
> Zero length arrays are GNU extension. Replace with standard flex array.
> 
> Signed-off-by: Stephen Hemminger 
> Reviewed-by: Tyler Retzlaff 
> ---
>  drivers/common/mlx5/mlx5_prm.h | 2 +-
>  drivers/net/mlx5/mlx5.h| 4 ++--
>  drivers/net/mlx5/mlx5_flow.h   | 2 +-
>  drivers/net/mlx5/mlx5_tx.h | 3 ++-
>  4 files changed, 6 insertions(+), 5 deletions(-)
Look good to me. Thank you.

Acked-by: Dariusz Sosnowski 

Best regards,
Dariusz Sosnowski


Re: [PATCH] maintainers: add mlx5 driver platform guides

2023-11-23 Thread Thomas Monjalon
23/11/2023 18:38, Dariusz Sosnowski:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -877,6 +877,8 @@ F: drivers/net/mlx5/
>  F: buildtools/options-ibverbs-static.sh
>  F: doc/guides/nics/mlx5.rst
>  F: doc/guides/nics/features/mlx5.ini
> +F: doc/guides/platform/bluefield.rst
> +F: doc/guides/platform/mlx5.rst

As drivers/common/mlx5/ is listed before drivers/net/mlx5/
we should list doc/guides/platform/mlx5.rst before doc/guides/nics/mlx5.rst
Probably that bluefield.rst should be in the middle between common and NIC docs.





[PATCH 1/3] net/octeon_ep: optimize Rx and Tx routines

2023-11-23 Thread pbhagavatula
From: Pavan Nikhilesh 

Preset the mbuf rearm data to avoid writing multiple fields in the fast path,
and increase the maximum number of outstanding Tx instructions from 128 to 256.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/octeon_ep/cnxk_ep_rx.c| 12 
 drivers/net/octeon_ep/otx_ep_common.h |  3 +++
 drivers/net/octeon_ep/otx_ep_rxtx.c   | 27 +++
 drivers/net/octeon_ep/otx_ep_rxtx.h   |  2 +-
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/drivers/net/octeon_ep/cnxk_ep_rx.c 
b/drivers/net/octeon_ep/cnxk_ep_rx.c
index 74f0011283..75bb7225d2 100644
--- a/drivers/net/octeon_ep/cnxk_ep_rx.c
+++ b/drivers/net/octeon_ep/cnxk_ep_rx.c
@@ -93,7 +93,7 @@ cnxk_ep_check_rx_pkts(struct otx_ep_droq *droq)
new_pkts = val - droq->pkts_sent_ism_prev;
droq->pkts_sent_ism_prev = val;
 
-   if (val > (uint32_t)(1 << 31)) {
+   if (val > RTE_BIT32(31)) {
/* Only subtract the packet count in the HW counter
 * when count above halfway to saturation.
 */
@@ -128,7 +128,6 @@ cnxk_ep_process_pkts_scalar(struct rte_mbuf **rx_pkts, 
struct otx_ep_droq *droq,
 {
struct rte_mbuf **recv_buf_list = droq->recv_buf_list;
uint32_t bytes_rsvd = 0, read_idx = droq->read_idx;
-   uint16_t port_id = droq->otx_ep_dev->port_id;
uint16_t nb_desc = droq->nb_desc;
uint16_t pkts;
 
@@ -137,14 +136,19 @@ cnxk_ep_process_pkts_scalar(struct rte_mbuf **rx_pkts, 
struct otx_ep_droq *droq,
struct rte_mbuf *mbuf;
uint16_t pkt_len;
 
+   rte_prefetch0(recv_buf_list[otx_ep_incr_index(read_idx, 2, 
nb_desc)]);
+   
rte_prefetch0(rte_pktmbuf_mtod(recv_buf_list[otx_ep_incr_index(read_idx,
+  
2, nb_desc)],
+ void *));
+
mbuf = recv_buf_list[read_idx];
info = rte_pktmbuf_mtod(mbuf, struct otx_ep_droq_info *);
read_idx = otx_ep_incr_index(read_idx, 1, nb_desc);
pkt_len = rte_bswap16(info->length >> 48);
-   mbuf->data_off += OTX_EP_INFO_SIZE;
mbuf->pkt_len = pkt_len;
mbuf->data_len = pkt_len;
-   mbuf->port = port_id;
+
+   *(uint64_t *)&mbuf->rearm_data = droq->rearm_data;
rx_pkts[pkts] = mbuf;
bytes_rsvd += pkt_len;
}
diff --git a/drivers/net/octeon_ep/otx_ep_common.h 
b/drivers/net/octeon_ep/otx_ep_common.h
index 82e57520d3..299b5122d8 100644
--- a/drivers/net/octeon_ep/otx_ep_common.h
+++ b/drivers/net/octeon_ep/otx_ep_common.h
@@ -365,6 +365,9 @@ struct otx_ep_droq {
/* receive buffer list contains mbuf ptr list */
struct rte_mbuf **recv_buf_list;
 
+   /* Packet re-arm data. */
+   uint64_t rearm_data;
+
/* Packets pending to be processed */
uint64_t pkts_pending;
 
diff --git a/drivers/net/octeon_ep/otx_ep_rxtx.c 
b/drivers/net/octeon_ep/otx_ep_rxtx.c
index c421ef0a1c..40c4a16a38 100644
--- a/drivers/net/octeon_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeon_ep/otx_ep_rxtx.c
@@ -284,6 +284,32 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
return 0;
 }
 
+static inline uint64_t
+otx_ep_set_rearm_data(struct otx_ep_device *otx_ep)
+{
+   uint16_t port_id = otx_ep->port_id;
+   struct rte_mbuf mb_def;
+   uint64_t *tmp;
+
+   RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
+   RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) - offsetof(struct 
rte_mbuf, data_off) !=
+2);
+   RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) - offsetof(struct 
rte_mbuf, data_off) !=
+4);
+   RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) - offsetof(struct 
rte_mbuf, data_off) !=
+6);
+   mb_def.nb_segs = 1;
+   mb_def.data_off = RTE_PKTMBUF_HEADROOM + OTX_EP_INFO_SIZE;
+   mb_def.port = port_id;
+   rte_mbuf_refcnt_set(&mb_def, 1);
+
+   /* Prevent compiler reordering: rearm_data covers previous fields */
+   rte_compiler_barrier();
+   tmp = (uint64_t *)&mb_def.rearm_data;
+
+   return *tmp;
+}
+
 /* OQ initialization */
 static int
 otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -340,6 +366,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t 
q_no,
goto init_droq_fail;
 
droq->refill_threshold = c_refill_threshold;
+   droq->rearm_data = otx_ep_set_rearm_data(otx_ep);
 
/* Set up OQ registers */
ret = otx_ep->fn_list.setup_oq_regs(otx_ep, q_no);
diff --git a/drivers/net/octeon_ep/otx_ep_rxtx.h 
b/drivers/net/octeon_ep/otx_ep_rxtx.h
index cb68ef3b41..b159c32cae 100644
--- a/drivers/net/octeon_ep/otx_ep_rxtx.h
+++ b/drivers/net/octeon_ep/otx_ep_rxtx.h
@@ -17,7 +17,7 @@
 
 #define OTX_EP_FSZ 28
 #define OTX2_EP_FSZ 24
-#define OTX_EP_MAX_INS
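
As background for the rearm-data change above, this is a hedged sketch (names
are illustrative, not the driver code) of the technique: build a single 64-bit
template at queue setup that covers the mbuf data_off, refcnt, nb_segs and
port fields, so the Rx fast path re-arms each mbuf with one store instead of
four separate field writes.

#include <rte_mbuf.h>

/* Setup time: build the 64-bit template once per Rx queue. */
static uint64_t
example_build_rearm_data(uint16_t port_id, uint16_t hdr_size)
{
        struct rte_mbuf mb_def;

        mb_def.nb_segs = 1;
        mb_def.data_off = RTE_PKTMBUF_HEADROOM + hdr_size;
        mb_def.port = port_id;
        rte_mbuf_refcnt_set(&mb_def, 1);

        /* rearm_data is a marker that starts exactly at data_off. */
        return *(uint64_t *)&mb_def.rearm_data;
}

/* Fast path: one 64-bit store initializes all four fields. */
static inline void
example_rearm_mbuf(struct rte_mbuf *m, uint64_t rearm_data)
{
        *(uint64_t *)&m->rearm_data = rearm_data;
}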

[PATCH 2/3] net/octeon_ep: use SSE instructions for Rx routine

2023-11-23 Thread pbhagavatula
From: Pavan Nikhilesh 

Optimize Rx routine to use SSE instructions.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/octeon_ep/cnxk_ep_rx.c | 159 +--
 drivers/net/octeon_ep/cnxk_ep_rx.h | 167 +
 drivers/net/octeon_ep/cnxk_ep_rx_sse.c | 124 ++
 drivers/net/octeon_ep/meson.build  |  11 ++
 drivers/net/octeon_ep/otx_ep_ethdev.c  |   7 ++
 drivers/net/octeon_ep/otx_ep_rxtx.h|  10 ++
 6 files changed, 320 insertions(+), 158 deletions(-)
 create mode 100644 drivers/net/octeon_ep/cnxk_ep_rx.h
 create mode 100644 drivers/net/octeon_ep/cnxk_ep_rx_sse.c

diff --git a/drivers/net/octeon_ep/cnxk_ep_rx.c 
b/drivers/net/octeon_ep/cnxk_ep_rx.c
index 75bb7225d2..f3e4fb27d1 100644
--- a/drivers/net/octeon_ep/cnxk_ep_rx.c
+++ b/drivers/net/octeon_ep/cnxk_ep_rx.c
@@ -2,164 +2,7 @@
  * Copyright(C) 2023 Marvell.
  */
 
-#include "otx_ep_common.h"
-#include "otx2_ep_vf.h"
-#include "otx_ep_rxtx.h"
-
-static inline int
-cnxk_ep_rx_refill_mbuf(struct otx_ep_droq *droq, uint32_t count)
-{
-   struct otx_ep_droq_desc *desc_ring = droq->desc_ring;
-   struct rte_mbuf **recv_buf_list = droq->recv_buf_list;
-   uint32_t refill_idx = droq->refill_idx;
-   struct rte_mbuf *buf;
-   uint32_t i;
-   int rc;
-
-   rc = rte_pktmbuf_alloc_bulk(droq->mpool, &recv_buf_list[refill_idx], 
count);
-   if (unlikely(rc)) {
-   droq->stats.rx_alloc_failure++;
-   return rc;
-   }
-
-   for (i = 0; i < count; i++) {
-   buf = recv_buf_list[refill_idx];
-   desc_ring[refill_idx].buffer_ptr = 
rte_mbuf_data_iova_default(buf);
-   refill_idx++;
-   }
-
-   droq->refill_idx = otx_ep_incr_index(droq->refill_idx, count, 
droq->nb_desc);
-   droq->refill_count -= count;
-
-   return 0;
-}
-
-static inline void
-cnxk_ep_rx_refill(struct otx_ep_droq *droq)
-{
-   uint32_t desc_refilled = 0, count;
-   uint32_t nb_desc = droq->nb_desc;
-   uint32_t refill_idx = droq->refill_idx;
-   int rc;
-
-   if (unlikely(droq->read_idx == refill_idx))
-   return;
-
-   if (refill_idx < droq->read_idx) {
-   count = droq->read_idx - refill_idx;
-   rc = cnxk_ep_rx_refill_mbuf(droq, count);
-   if (unlikely(rc)) {
-   droq->stats.rx_alloc_failure++;
-   return;
-   }
-   desc_refilled = count;
-   } else {
-   count = nb_desc - refill_idx;
-   rc = cnxk_ep_rx_refill_mbuf(droq, count);
-   if (unlikely(rc)) {
-   droq->stats.rx_alloc_failure++;
-   return;
-   }
-
-   desc_refilled = count;
-   count = droq->read_idx;
-   rc = cnxk_ep_rx_refill_mbuf(droq, count);
-   if (unlikely(rc)) {
-   droq->stats.rx_alloc_failure++;
-   return;
-   }
-   desc_refilled += count;
-   }
-
-   /* Flush the droq descriptor data to memory to be sure
-* that when we update the credits the data in memory is
-* accurate.
-*/
-   rte_io_wmb();
-   rte_write32(desc_refilled, droq->pkts_credit_reg);
-}
-
-static inline uint32_t
-cnxk_ep_check_rx_pkts(struct otx_ep_droq *droq)
-{
-   uint32_t new_pkts;
-   uint32_t val;
-
-   /* Batch subtractions from the HW counter to reduce PCIe traffic
-* This adds an extra local variable, but almost halves the
-* number of PCIe writes.
-*/
-   val = __atomic_load_n(droq->pkts_sent_ism, __ATOMIC_RELAXED);
-   new_pkts = val - droq->pkts_sent_ism_prev;
-   droq->pkts_sent_ism_prev = val;
-
-   if (val > RTE_BIT32(31)) {
-   /* Only subtract the packet count in the HW counter
-* when count above halfway to saturation.
-*/
-   rte_write64((uint64_t)val, droq->pkts_sent_reg);
-   rte_mb();
-
-   rte_write64(OTX2_SDP_REQUEST_ISM, droq->pkts_sent_reg);
-   while (__atomic_load_n(droq->pkts_sent_ism, __ATOMIC_RELAXED) 
>= val) {
-   rte_write64(OTX2_SDP_REQUEST_ISM, droq->pkts_sent_reg);
-   rte_mb();
-   }
-
-   droq->pkts_sent_ism_prev = 0;
-   }
-   rte_write64(OTX2_SDP_REQUEST_ISM, droq->pkts_sent_reg);
-   droq->pkts_pending += new_pkts;
-
-   return new_pkts;
-}
-
-static inline int16_t __rte_hot
-cnxk_ep_rx_pkts_to_process(struct otx_ep_droq *droq, uint16_t nb_pkts)
-{
-   if (droq->pkts_pending < nb_pkts)
-   cnxk_ep_check_rx_pkts(droq);
-
-   return RTE_MIN(nb_pkts, droq->pkts_pending);
-}
-
-static __rte_always_inline void
-cnxk_ep_process_pkts_scalar(struct rte_mbuf **rx_pkts, struct otx_ep_droq 
*droq, uint16_t new_pkts)
-{
-   struct rte_mbuf **rec

[PATCH 3/3] net/octeon_ep: use AVX2 instructions for Rx

2023-11-23 Thread pbhagavatula
From: Pavan Nikhilesh 

Optimize Rx routine to use AVX2 instructions when underlying
architecture supports it.

Signed-off-by: Pavan Nikhilesh 
---
 drivers/net/octeon_ep/cnxk_ep_rx_avx.c | 117 +
 drivers/net/octeon_ep/meson.build  |  12 +++
 drivers/net/octeon_ep/otx_ep_ethdev.c  |  10 +++
 drivers/net/octeon_ep/otx_ep_rxtx.h|  10 +++
 4 files changed, 149 insertions(+)
 create mode 100644 drivers/net/octeon_ep/cnxk_ep_rx_avx.c

diff --git a/drivers/net/octeon_ep/cnxk_ep_rx_avx.c 
b/drivers/net/octeon_ep/cnxk_ep_rx_avx.c
new file mode 100644
index 00..cbd797f98b
--- /dev/null
+++ b/drivers/net/octeon_ep/cnxk_ep_rx_avx.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Marvell.
+ */
+
+#include "cnxk_ep_rx.h"
+
+static __rte_always_inline void
+cnxk_ep_process_pkts_vec_avx(struct rte_mbuf **rx_pkts, struct otx_ep_droq 
*droq, uint16_t new_pkts)
+{
+   struct rte_mbuf **recv_buf_list = droq->recv_buf_list;
+   uint32_t bytes_rsvd = 0, read_idx = droq->read_idx;
+   const uint64_t rearm_data = droq->rearm_data;
+   struct rte_mbuf *m[CNXK_EP_OQ_DESC_PER_LOOP_AVX];
+   uint32_t pidx[CNXK_EP_OQ_DESC_PER_LOOP_AVX];
+   uint32_t idx[CNXK_EP_OQ_DESC_PER_LOOP_AVX];
+   uint16_t nb_desc = droq->nb_desc;
+   uint16_t pkts = 0;
+   uint8_t i;
+
+   idx[0] = read_idx;
+   while (pkts < new_pkts) {
+   __m256i data[CNXK_EP_OQ_DESC_PER_LOOP_AVX];
+   /* mask to shuffle from desc. to mbuf (2 descriptors)*/
+   const __m256i mask =
+   _mm256_set_epi8(0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 20, 
21, 0xFF, 0xFF, 20,
+   21, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 
0xFF, 0xFF, 0xFF,
+   0xFF, 0xFF, 0xFF, 7, 6, 5, 4, 3, 2, 1, 
0);
+
+   for (i = 1; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   idx[i] = otx_ep_incr_index(idx[i - 1], 1, nb_desc);
+
+   if (new_pkts - pkts > 8) {
+   pidx[0] = otx_ep_incr_index(idx[i - 1], 1, nb_desc);
+   for (i = 1; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   pidx[i] = otx_ep_incr_index(pidx[i - 1], 1, 
nb_desc);
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++) {
+   rte_prefetch0(recv_buf_list[pidx[i]]);
+   
rte_prefetch0(rte_pktmbuf_mtod(recv_buf_list[pidx[i]], void *));
+   }
+   }
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   m[i] = recv_buf_list[idx[i]];
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   data[i] = _mm256_set_epi64x(0,
+   rte_pktmbuf_mtod(m[i], struct otx_ep_droq_info 
*)->length >> 16,
+   0, rearm_data);
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++) {
+   data[i] = _mm256_shuffle_epi8(data[i], mask);
+   bytes_rsvd += _mm256_extract_epi16(data[i], 10);
+   }
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   _mm256_storeu_si256((__m256i *)&m[i]->rearm_data, 
data[i]);
+
+   for (i = 0; i < CNXK_EP_OQ_DESC_PER_LOOP_AVX; i++)
+   rx_pkts[pkts++] = m[i];
+   idx[0] = otx_ep_incr_index(idx[i - 1], 1, nb_desc);
+   }
+   droq->read_idx = idx[0];
+
+   droq->refill_count += new_pkts;
+   droq->pkts_pending -= new_pkts;
+   /* Stats */
+   droq->stats.pkts_received += new_pkts;
+   droq->stats.bytes_received += bytes_rsvd;
+}
+
+uint16_t __rte_noinline __rte_hot
+cnxk_ep_recv_pkts_avx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t 
nb_pkts)
+{
+   struct otx_ep_droq *droq = (struct otx_ep_droq *)rx_queue;
+   uint16_t new_pkts, vpkts;
+
+   new_pkts = cnxk_ep_rx_pkts_to_process(droq, nb_pkts);
+   vpkts = RTE_ALIGN_FLOOR(new_pkts, CNXK_EP_OQ_DESC_PER_LOOP_AVX);
+   cnxk_ep_process_pkts_vec_avx(rx_pkts, droq, vpkts);
+   cnxk_ep_process_pkts_scalar(&rx_pkts[vpkts], droq, new_pkts - vpkts);
+
+   /* Refill RX buffers */
+   if (droq->refill_count >= DROQ_REFILL_THRESHOLD)
+   cnxk_ep_rx_refill(droq);
+
+   return new_pkts;
+}
+
+uint16_t __rte_noinline __rte_hot
+cn9k_ep_recv_pkts_avx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t 
nb_pkts)
+{
+   struct otx_ep_droq *droq = (struct otx_ep_droq *)rx_queue;
+   uint16_t new_pkts, vpkts;
+
+   new_pkts = cnxk_ep_rx_pkts_to_process(droq, nb_pkts);
+   vpkts = RTE_ALIGN_FLOOR(new_pkts, CNXK_EP_OQ_DESC_PER_LOOP_AVX);
+   cnxk_ep_process_pkts_vec_avx(rx_pkts, droq, vpkts);
+   cnxk_ep_process_pkts_scalar(&rx_pkts[vpkts], droq, new_pkts - vpkts);
+
+   /* Refill RX buffe
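
The build and dispatch glue is truncated above; conceptually, the Rx burst
function is selected at runtime from the CPU flags and the allowed SIMD width.
A hedged sketch of that dispatch pattern follows (the scalar and SSE symbol
names are assumptions for illustration; only cnxk_ep_recv_pkts_avx comes from
the patch):

#include <ethdev_driver.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

static void
example_set_rx_function(struct rte_eth_dev *eth_dev)
{
        /* Default to the scalar routine. */
        eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts;

#ifdef RTE_ARCH_X86
        if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2) &&
            rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128)
                eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts_sse;

        if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) &&
            rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
                eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts_avx;
#endif
}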

RE: [EXT] Re: [PATCH] raw/cnxk_bphy: switch to dynamic logging

2023-11-23 Thread Tomasz Duszynski



>-Original Message-
>From: Stephen Hemminger 
>Sent: Wednesday, November 15, 2023 1:16 AM
>To: Tomasz Duszynski 
>Cc: dev@dpdk.org; Jakub Palider ; Jerin Jacob 
>Kollanukkaran
>
>Subject: [EXT] Re: [PATCH] raw/cnxk_bphy: switch to dynamic logging
>
>External Email
>
>--
>On Tue, 14 Nov 2023 09:04:46 +0100
>Tomasz Duszynski  wrote:
>
>> Dynamically allocated log type is a standard approach among all drivers.
>> Switch to it.
>>
>> Signed-off-by: Tomasz Duszynski 
>> ---
>>  drivers/raw/cnxk_bphy/cnxk_bphy.c  | 32 +-
>>  drivers/raw/cnxk_bphy/cnxk_bphy_cgx.c  |  4 ++-
>>  drivers/raw/cnxk_bphy/cnxk_bphy_cgx.h  |  7 +
>>  drivers/raw/cnxk_bphy/cnxk_bphy_cgx_test.c | 32
>> ++
>>  4 files changed, 44 insertions(+), 31 deletions(-)
>
>Good to see but there seems to be a lot of other places calling plt_err(), 
>plt_info(),
>plt_warn(), plt_print()

Yeah, apparently my script did not catch everything. Thanks for pointing this 
out. 


Re: [PATCH 5/5] doc/features: add dump device private information feature

2023-11-23 Thread lihuisong (C)



On 2023/11/23 22:18, Thomas Monjalon wrote:

23/11/2023 14:59, Huisong Li:

+.. _nic_features_device_private_info_dump:
+
+Device private information dump
+--

Why is the underlining too short?

You are right.
I also got a warning email about this, as you said. 😁
Will fix it in v2.



+
+Supports retrieving device private information to a file. Provided data and
+the order depends on PMD.
+
+* **[mplements] eth_dev_ops**: ``eth_dev_priv_dump``.

looks like a typo here

Ack.



+* **[related]API**: ``rte_eth_dev_priv_dump()``.



.


[PATCH 0/6] doc/features: fix some features and add new features

2023-11-23 Thread Huisong Li
This series fixes the configuration interface description for the RSS feature
and adds documentation for several features that are already supported.

---
 -v2:
   - fix the short underline warning.
   - add loopback mode feature.

Huisong Li (6):
  doc/features: add RSS hash algorithm feature
  doc/features: add link up/down feature
  doc/features: add features for link speeds
  doc/features: add Traffic Manager features
  doc/features: add dump device private information feature
  doc/features: add feature for loopback mode

 doc/guides/nics/features.rst | 78 ++--
 1 file changed, 74 insertions(+), 4 deletions(-)

-- 
2.33.0



[PATCH 3/6] doc/features: add features for link speeds

2023-11-23 Thread Huisong Li
Add features for link speeds.

Fixes: 82113036e4e5 ("ethdev: redesign link speed config")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f14962a6c3..d13e43ae81 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -34,6 +34,17 @@ Supports getting the speed capabilities that the current 
device is capable of.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
+.. _nic_features_link_speeds_config:
+
+Link speed configuration
+------------------------
+
+Supports configuring fixed speed and link autonegotiation.
+
+* **[uses] user config**: ``dev_conf.link_speeds:RTE_ETH_LINK_SPEED_*``.
+* **[related]  API**: ``rte_eth_dev_configure()``.
+
+
 .. _nic_features_link_status:
 
 Link status
-- 
2.33.0
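
As a usage illustration of the feature documented above (an application-side
sketch, not part of the patch), forcing a fixed 10G link through
rte_eth_dev_configure() could look roughly like this:

#include <rte_ethdev.h>

static int
example_configure_fixed_10g(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = { 0 };

        /* Disable autonegotiation and request a fixed 10G link. */
        conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}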



[PATCH 2/6] doc/features: add link up/down feature

2023-11-23 Thread Huisong Li
Add link up/down feature for features.rst.

Fixes: 915e67837586 ("ethdev: API for link up and down")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 4 
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 0d38c5c525..f14962a6c3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -45,6 +45,10 @@ Supports getting the link speed, duplex mode and link state 
(up/down).
 * **[implements] rte_eth_dev_data**: ``dev_link``.
 * **[related]API**: ``rte_eth_link_get()``, ``rte_eth_link_get_nowait()``.
 
+Set the link of an Ethernet device up or down.
+
+* **[implements] eth_dev_ops**: ``dev_set_link_up``, ``dev_set_link_down``.
+* **[related]API**: ``rte_eth_dev_set_link_up()``, 
``rte_eth_dev_set_link_down()``.
 
 .. _nic_features_link_status_event:
 
-- 
2.33.0



[PATCH 1/6] doc/features: add RSS hash algorithm feature

2023-11-23 Thread Huisong Li
Add the RSS hash algorithm feature introduced in 23.11 and fix the
description of some RSS features.

Fixes: 34ff088cc241 ("ethdev: set and query RSS hash algorithm")

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1a1dc16c1e..0d38c5c525 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -277,10 +277,12 @@ RSS hash
 Supports RSS hashing on RX.
 
 * **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = 
``RTE_ETH_MQ_RX_RSS_FLAG``.
-* **[uses] user config**: ``dev_conf.rx_adv_conf.rss_conf``.
+* **[uses] user config**: ``rss_conf.rss_hf``.
 * **[uses] rte_eth_rxconf,rte_eth_rxmode**: 
``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_RSS_HASH``, ``mbuf.rss``.
+* **[related]  API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update``
+  ``rte_eth_dev_rss_hash_conf_get()``.
 
 
 .. _nic_features_inner_rss:
@@ -288,7 +290,7 @@ Supports RSS hashing on RX.
 Inner RSS
 ---------
 
-Supports RX RSS hashing on Inner headers.
+Supports RX RSS hashing on Inner headers by rte_flow API.
 
 * **[uses]rte_flow_action_rss**: ``level``.
 * **[uses]rte_eth_rxconf,rte_eth_rxmode**: 
``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
@@ -303,9 +305,25 @@ RSS key update
 Supports configuration of Receive Side Scaling (RSS) hash computation. Updating
 Receive Side Scaling (RSS) hash key.
 
-* **[implements] eth_dev_ops**: ``rss_hash_update``, ``rss_hash_conf_get``.
+* **[implements] eth_dev_ops**: ``dev_configure``, ``rss_hash_update``, 
``rss_hash_conf_get``.
+* **[uses] user config**: ``rss_conf.rss_key``, ``rss_conf.rss_key_len``
 * **[provides]   rte_eth_dev_info**: ``hash_key_size``.
-* **[related]API**: ``rte_eth_dev_rss_hash_update()``,
+* **[related]API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update()``,
+  ``rte_eth_dev_rss_hash_conf_get()``.
+
+
+.. _nic_features_rss_hash_algo_update:
+
+RSS hash algorithm update
+-------------------------
+
+Supports configuration of Receive Side Scaling (RSS) hash algorithm. Updating
+RSS hash algorithm.
+
+* **[implements] eth_dev_ops**: ``dev_configure``, ``rss_hash_update``, 
``rss_hash_conf_get``.
+* **[uses] user config**: ``rss_conf.algorithm``
+* **[provides]   rte_eth_dev_info**: ``rss_algo_capa``.
+* **[related]API**: ``rte_eth_dev_configure``, 
``rte_eth_dev_rss_hash_update()``,
   ``rte_eth_dev_rss_hash_conf_get()``.
 
 
-- 
2.33.0
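
For illustration only (an application-side sketch under assumptions, not part
of the patch), selecting the RSS hash algorithm through the 23.11 API
referenced above could look like this; the choice of symmetric Toeplitz is
arbitrary and should match what dev_info.rss_algo_capa reports:

#include <rte_ethdev.h>

static int
example_set_rss_algorithm(uint16_t port_id)
{
        struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };
        int ret;

        /* Read back the current RSS configuration (hash types, etc.). */
        ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
        if (ret != 0)
                return ret;

        /* Keep the configured hash types, change only the algorithm. */
        rss_conf.algorithm = RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ;

        return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}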



[PATCH 4/6] doc/features: add Traffic Manager features

2023-11-23 Thread Huisong Li
Add Traffic Manager features.

Fixes: 5d109deffa87 ("ethdev: add traffic management API")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 13 +
 1 file changed, 13 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index d13e43ae81..ef061759c7 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -773,6 +773,19 @@ Supports congestion management.
   ``rte_eth_cman_config_set()``, ``rte_eth_cman_config_get()``.
 
 
+.. _nic_features_traffic_manager:
+
+Traffic manager
+---------------
+
+Supports Traffic manager.
+
+* **[implements] rte_tm_ops**: ``capabilities_get``, ``shaper_profile_add``,
+  ``hierarchy_commit`` and so on.
+* **[related]API**: ``rte_tm_capabilities_get()``, 
``rte_tm_shaper_profile_add()``,
+  ``rte_tm_hierarchy_commit()`` and so on.
+
+
 .. _nic_features_fw_version:
 
 FW version
-- 
2.33.0
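
For illustration (an application-side sketch under assumptions, not part of
the patch), a minimal use of the rte_tm API named above queries capabilities,
adds a shaper profile and commits the hierarchy; the rte_tm_node_add() calls
that build the actual hierarchy are omitted:

#include <rte_tm.h>

static int
example_tm_minimal(uint16_t port_id)
{
        struct rte_tm_capabilities cap;
        struct rte_tm_shaper_params sp = { 0 };
        struct rte_tm_error err;
        int ret;

        ret = rte_tm_capabilities_get(port_id, &cap, &err);
        if (ret != 0)
                return ret;

        /* 1 Gbit/s committed rate, expressed in bytes per second. */
        sp.committed.rate = 1000000000ULL / 8;
        sp.committed.size = 4096;
        sp.peak.rate = sp.committed.rate;
        sp.peak.size = 4096;

        ret = rte_tm_shaper_profile_add(port_id, 0 /* profile id */, &sp, &err);
        if (ret != 0)
                return ret;

        /* ... rte_tm_node_add() calls to build the hierarchy go here ... */

        return rte_tm_hierarchy_commit(port_id, 1 /* clear on fail */, &err);
}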



[PATCH 5/6] doc/features: add dump device private information feature

2023-11-23 Thread Huisong Li
Add dump device private information feature.

Fixes: edcf22c6d389 ("ethdev: introduce dump API")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 12 
 1 file changed, 12 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index ef061759c7..c5c4dbf745 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -832,6 +832,18 @@ registers and register size).
 * **[related]API**: ``rte_eth_dev_get_reg_info()``.
 
 
+.. _nic_features_device_private_info_dump:
+
+Device private information dump
+-------------------------------
+
+Supports retrieving device private information to a file. The provided data
+and its order depend on the PMD.
+
+* **[implements] eth_dev_ops**: ``eth_dev_priv_dump``.
+* **[related]API**: ``rte_eth_dev_priv_dump()``.
+
+
 .. _nic_features_led:
 
 LED
-- 
2.33.0
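
Usage of the API referenced above is a one-liner; for illustration (an
application-side sketch, not part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

static int
example_dump_port_private_info(uint16_t port_id)
{
        /* Write the PMD-specific internal state of the port to stdout. */
        return rte_eth_dev_priv_dump(port_id, stdout);
}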



[PATCH 6/6] doc/features: add feature for loopback mode

2023-11-23 Thread Huisong Li
Add feature for loopback mode.

Fixes: db0359256170 ("ixgbe: add Tx->Rx loopback mode for 82599")
Cc: sta...@dpdk.org

Signed-off-by: Huisong Li 
---
 doc/guides/nics/features.rst | 12 
 1 file changed, 12 insertions(+)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index c5c4dbf745..caf1258554 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -45,6 +45,18 @@ Supports configurating fixed speed and link autonegotiation.
 * **[related]  API**: ``rte_eth_dev_configure()``.
 
 
+.. _nic_features_loopback:
+
+Loopback configuration
+----------------------
+
+Supports configuring loopback mode. The default value 0 disables loopback
+mode; any other value is defined by the given Ethernet controller.
+
+* **[uses] user config**: ``dev_conf.lpbk_mode``.
+* **[related]  API**: ``rte_eth_dev_configure()``.
+
+
 .. _nic_features_link_status:
 
 Link status
-- 
2.33.0
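
For illustration (an application-side sketch, not part of the patch), enabling
a controller-defined loopback mode at configure time looks roughly like this:

#include <rte_ethdev.h>

static int
example_enable_loopback(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = { 0 };

        /* 0 disables loopback; the meaning of non-zero values is PMD-defined. */
        conf.lpbk_mode = 1;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}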



Re: [PATCH 0/6] doc/features: fix some features and add new features

2023-11-23 Thread fengchengwen
LGTM
Series-acked-by: Chengwen Feng 

On 2023/11/24 11:12, Huisong Li wrote:
> This series fixes the configuration interface description for the RSS feature
> and adds documentation for several features that are already supported.
> 
> ---
>  -v2:
>- fix the short underline warning.
>- add loopback mode feature.
> 
> Huisong Li (6):
>   doc/features: add RSS hash algorithm feature
>   doc/features: add link up/down feature
>   doc/features: add features for link speeds
>   doc/features: add Traffic Manager features
>   doc/features: add dump device private information feature
>   doc/features: add feature for loopback mode
> 
>  doc/guides/nics/features.rst | 78 ++--
>  1 file changed, 74 insertions(+), 4 deletions(-)
> 


[PATCH] lib/ethdev: modified the definition of 'NVGRE_ENCAP'

2023-11-23 Thread Sunyang Wu
Fix the incorrect definition of 'NVGRE_ENCAP' and correct the erroneous
comment of 'rte_flow_action_nvgre_encap'.

Fixes: c2beb1d ("ethdev: add missing items/actions to flow object converter")
Fixes: 3850cf0 ("ethdev: add tunnel encap/decap actions")
Cc: sta...@dpdk.org

Signed-off-by: Joey Xing 
Signed-off-by: Sunyang Wu 
---
 lib/ethdev/rte_flow.c | 2 +-
 lib/ethdev/rte_flow.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 549e329558..04348e0243 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -216,7 +216,7 @@ static const struct rte_flow_desc_data 
rte_flow_desc_action[] = {
   sizeof(struct rte_flow_action_of_push_mpls)),
MK_FLOW_ACTION(VXLAN_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
MK_FLOW_ACTION(VXLAN_DECAP, 0),
-   MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_vxlan_encap)),
+   MK_FLOW_ACTION(NVGRE_ENCAP, sizeof(struct rte_flow_action_nvgre_encap)),
MK_FLOW_ACTION(NVGRE_DECAP, 0),
MK_FLOW_ACTION(RAW_ENCAP, sizeof(struct rte_flow_action_raw_encap)),
MK_FLOW_ACTION(RAW_DECAP, sizeof(struct rte_flow_action_raw_decap)),
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index affdc8121b..4cdc1f1d8f 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3471,7 +3471,7 @@ struct rte_flow_action_vxlan_encap {
  */
 struct rte_flow_action_nvgre_encap {
/**
-* Encapsulating vxlan tunnel definition
+* Encapsulating nvgre tunnel definition
 * (terminated by the END pattern item).
 */
struct rte_flow_item *definition;
-- 
2.19.0.rc0.windows.1
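
For context on the structure whose size was wrong in the descriptor table, a
hedged sketch of how an application typically fills rte_flow_action_nvgre_encap
(the pattern values are placeholders left zeroed for brevity):

#include <rte_flow.h>

static struct rte_flow_item_eth   encap_eth;
static struct rte_flow_item_ipv4  encap_ipv4;
static struct rte_flow_item_nvgre encap_nvgre;

/* Tunnel definition terminated by an END item, as the API requires. */
static struct rte_flow_item encap_pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH,   .spec = &encap_eth },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,  .spec = &encap_ipv4 },
        { .type = RTE_FLOW_ITEM_TYPE_NVGRE, .spec = &encap_nvgre },
        { .type = RTE_FLOW_ITEM_TYPE_END },
};

static const struct rte_flow_action_nvgre_encap encap_conf = {
        .definition = encap_pattern,
};

static const struct rte_flow_action encap_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP, .conf = &encap_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};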



Re: [PATCH 1/1] eal/linux: force iova-mode va without pa available

2023-11-23 Thread Christian Ehrhardt
On Thu, Nov 23, 2023 at 4:52 PM  wrote:

> From: David Wilder 
>
> When using --no-huge option physical address are not guaranteed
> to be persistent.
>
> This change effectively makes "--no-huge" the same as
> "--no-huge --iova-mode=va".
>
> When --no-huge is used (or any other condition making physical
> addresses unavailable) setting --iova-mode=pa will have no effect.
>
> Signed-off-by: Christian Ehrhardt 
> ---
>  doc/guides/prog_guide/env_abstraction_layer.rst |  9 ++---
>  lib/eal/linux/eal.c | 14 --
>  2 files changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst
> b/doc/guides/prog_guide/env_abstraction_layer.rst
> index 6debf54efb..20c7355e0f 100644
> --- a/doc/guides/prog_guide/env_abstraction_layer.rst
> +++ b/doc/guides/prog_guide/env_abstraction_layer.rst
> @@ -559,9 +559,12 @@ IOVA Mode is selected by considering what the current
> usable Devices on the
>  system require and/or support.
>
>  On FreeBSD, RTE_IOVA_PA is always the default. On Linux, the IOVA mode is
> -detected based on a 2-step heuristic detailed below.
> +detected based on a heuristic detailed below.
>
> -For the first step, EAL asks each bus its requirement in terms of IOVA
> mode
> +For the first step, if no Physical Addresses are available RTE_IOVA_VA is
> +selected.
> +
> +Then EAL asks each bus its requirement in terms of IOVA mode
>  and decides on a preferred IOVA mode.
>
>  - if all buses report RTE_IOVA_PA, then the preferred IOVA mode is
> RTE_IOVA_PA,
> @@ -575,7 +578,7 @@ and decides on a preferred IOVA mode.
>  If the buses have expressed no preference on which IOVA mode to pick,
> then a
>  default is selected using the following logic:
>
> -- if physical addresses are not available, RTE_IOVA_VA mode is used
> +- if enable_iova_as_pa was not set at build RTE_IOVA_VA mode is used
>  - if /sys/kernel/iommu_groups is not empty, RTE_IOVA_VA mode is used
>  - otherwise, RTE_IOVA_PA mode is used
>
> diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
> index 57da058cec..7d0eedef57 100644
> --- a/lib/eal/linux/eal.c
> +++ b/lib/eal/linux/eal.c
> @@ -1067,6 +1067,14 @@ rte_eal_init(int argc, char **argv)
>
> phys_addrs = rte_eal_using_phys_addrs() != 0;
>
> +   if (!phys_addrs) {
> +   /* if we have no access to physical addresses,
> +* pick IOVA as VA mode.
> +*/
> +   iova_mode = RTE_IOVA_VA;
>

^^ this won't work, thanks for trying to rush it :-/
I'll fix and test it before v2
But since it wasn't my patch initially I'm happy if anyone else wants to
take over ...


> +   RTE_LOG(INFO, EAL, "Physical addresses are unavailable,
> selecting IOVA as VA mode.\n");
> +   }
> +
> /* if no EAL option "--iova-mode=", use bus IOVA scheme */
> if (internal_conf->iova_mode == RTE_IOVA_DC) {
> /* autodetect the IOVA mapping mode */
> @@ -1078,12 +1086,6 @@ rte_eal_init(int argc, char **argv)
> if (!RTE_IOVA_IN_MBUF) {
> iova_mode = RTE_IOVA_VA;
> RTE_LOG(DEBUG, EAL, "IOVA as VA mode is
> forced by build option.\n");
> -   } else if (!phys_addrs) {
> -   /* if we have no access to physical
> addresses,
> -* pick IOVA as VA mode.
> -*/
> -   iova_mode = RTE_IOVA_VA;
> -   RTE_LOG(DEBUG, EAL, "Physical addresses
> are unavailable, selecting IOVA as VA mode.\n");
> } else if (is_iommu_enabled()) {
> /* we have an IOMMU, pick IOVA as VA mode
> */
> iova_mode = RTE_IOVA_VA;
> --
> 2.34.1
>
>

-- 
Christian Ehrhardt
Director of Engineering, Ubuntu Server
Canonical Ltd


[PATCH] net/ice: fix tso tunnel setting to not take effect

2023-11-23 Thread Kaiwen Deng
The Tx offload capabilities of the ICE ethdev do not include
tunnel TSO, so the tunnel TSO setting does not take effect.

This commit adds the tunnel TSO capabilities in ice_dev_info_get().

Fixes: 295968d17407 ("ethdev: add namespace")
Cc: sta...@dpdk.org

Signed-off-by: Kaiwen Deng 
---
 drivers/net/ice/ice_ethdev.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3ccba4db80..fbc957fcd8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3876,7 +3876,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
+   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
 
-- 
2.34.1
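
For illustration of what the added capability bits enable (an application-side
sketch, not part of the patch), an application would typically check
tx_offload_capa and then request the offload at configure time:

#include <errno.h>
#include <rte_ethdev.h>

static int
example_enable_vxlan_tso(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = { 0 };
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        /* Without the capability bit, requesting the offload would fail. */
        if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
                return -ENOTSUP;

        conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
                               RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}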



[PATCH] net/i40e: remove redundant judgment in fdir parse

2023-11-23 Thread Kaiwen Deng
if (eth_spec && eth_mask &&
   next_type == RTE_FLOW_ITEM_TYPE_END) {
...
if (next_type == RTE_FLOW_ITEM_TYPE_VLAN || ...) {
...
}
...
}

Clearly, that condition in the inner "if" is always "false".

This commit removes the redundant check.

Fixes: 7d83c152a207 ("net/i40e: parse flow director filter")
Cc: sta...@dpdk.org

Signed-off-by: Kaiwen Deng 
---
 drivers/net/i40e/i40e_flow.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 877e49151e..92165c8422 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1708,8 +1708,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
 
ether_type = 
rte_be_to_cpu_16(eth_spec->hdr.ether_type);
 
-   if (next_type == RTE_FLOW_ITEM_TYPE_VLAN ||
-   ether_type == RTE_ETHER_TYPE_IPV4 ||
+   if (ether_type == RTE_ETHER_TYPE_IPV4 ||
ether_type == RTE_ETHER_TYPE_IPV6 ||
ether_type == i40e_get_outer_vlan(dev)) {
rte_flow_error_set(error, EINVAL,
-- 
2.34.1



RE: [EXT] [PATCH 1/2] node: forward packet from ethdev_rx node

2023-11-23 Thread Sunil Kumar Kori
> -Original Message-
> From: Rakesh Kudurumalla 
> Sent: Thursday, November 23, 2023 11:46 AM
> To: Nithin Kumar Dabilpuram ; Pavan
> Nikhilesh Bhagavatula 
> Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran ;
> Rakesh Kudurumalla 
> Subject: [EXT] [PATCH 1/2] node: forward packet from ethdev_rx node
> 
> External Email
> 
> --
> By default, all packets received on the ethdev_rx node are forwarded to the
> pkt_cls node. This patch provides library support to add a new node as the
> next node of ethdev_rx and to forward packets to that node from the rx node.
> 
IMO, a subject such as "node: add API to update ethdev_rx next node" is more
suitable to reflect the implementation.
Please also run check-git-log.sh to make sure the subject is aligned.

> Signed-off-by: Rakesh Kudurumalla 
> ---
>  lib/node/ethdev_ctrl.c  | 40
> +
>  lib/node/rte_node_eth_api.h | 17 
>  lib/node/version.map|  1 +
>  3 files changed, 58 insertions(+)
> 
> diff --git a/lib/node/ethdev_ctrl.c b/lib/node/ethdev_ctrl.c index
> d564b80e37..d64fc33655 100644
> --- a/lib/node/ethdev_ctrl.c
> +++ b/lib/node/ethdev_ctrl.c
> @@ -129,3 +129,43 @@ rte_node_eth_config(struct
> rte_node_ethdev_config *conf, uint16_t nb_confs,
>   ctrl.nb_graphs = nb_graphs;
>   return 0;
>  }
> +
> +int
> +rte_node_ethdev_rx_next_update(rte_node_t id, const char *edge_name) {
> + struct ethdev_rx_node_main *data;
> + ethdev_rx_node_elem_t *elem;
> + char **next_nodes;
> + int rc = -EINVAL;
> + uint32_t count;
> + uint16_t i = 0;
> +
> + if (id == RTE_EDGE_ID_INVALID)
> + return id;
Return type and returned value are mismatched.

> +
Add a NULL check for edge_name too. 

> + count = rte_node_edge_get(id, NULL);
This API itself returns an error if id is invalid. Use the returned value
instead of the above check.

> + next_nodes = malloc(count);
> + if (next_nodes == NULL)
> + return rc;
> +
> + count = rte_node_edge_get(id, next_nodes);
> +
> + while (next_nodes[i] != NULL) {
> + if (strcmp(edge_name, next_nodes[i]) == 0) {
> + data = ethdev_rx_get_node_data_get();
> + elem = data->head;
> + while (elem->next != data->head) {
> + if (elem->nid == id) {
> + elem->ctx.cls_next = i;
> + rc = 0;
> + goto found;
> + }
> + elem = elem->next;
> + }
> + }
> + i++;
> + }
> +found:
Cosmetic: use an "exit" label instead of "found".

> + free(next_nodes);
> + return rc;
> +}
> diff --git a/lib/node/rte_node_eth_api.h b/lib/node/rte_node_eth_api.h index
> eaae50772d..66cea2d31e 100644
> --- a/lib/node/rte_node_eth_api.h
> +++ b/lib/node/rte_node_eth_api.h
> @@ -57,6 +57,23 @@ struct rte_node_ethdev_config {
>   */
>  int rte_node_eth_config(struct rte_node_ethdev_config *cfg,
>   uint16_t cnt, uint16_t nb_graphs);
> +
> +/**
> + * Update ethdev rx next node.
> + *
> + * @param id
> + *   Node id whose edge is to be updated.
> + * @param edge_name
> + *   Name of the next node.
> + *
> + * @return
> + *   RTE_EDGE_ID_INVALID if id is invalid
> + *   EINVAL if edge name doesn't exist
> + *   0 on successful initialization.
> + */
Does it make sense to document the error codes like below?
-EINVAL: either of the input parameters is invalid.
-ENODATA: edge_name is not found.
-ENOMEM: memory allocation failed.
0: on success.

What are your thoughts on this?

> +__rte_experimental
> +int rte_node_ethdev_rx_next_update(rte_node_t id, const char
> +*edge_name);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/node/version.map b/lib/node/version.map index
> 99ffcdd414..07abc3a79f 100644
> --- a/lib/node/version.map
> +++ b/lib/node/version.map
> @@ -16,6 +16,7 @@ EXPERIMENTAL {
>   rte_node_ip6_route_add;
> 
>   # added in 23.11
> + rte_node_ethdev_rx_next_update;
>   rte_node_ip4_reassembly_configure;
>   rte_node_udp4_dst_port_add;
>   rte_node_udp4_usr_node_add;
> --
> 2.25.1
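
For context, a hedged usage sketch of the API being reviewed here (the custom
node name and the "ethdev_rx-<port>-<queue>" clone naming are assumptions, and
the API itself may still change following the comments above):

#include <errno.h>
#include <rte_graph.h>
#include <rte_node_eth_api.h>

static int
example_redirect_rx(void)
{
        rte_node_t rx_id;

        /* Look up the per port/queue clone of the ethdev_rx node. */
        rx_id = rte_node_from_name("ethdev_rx-0-0");
        if (rx_id == RTE_NODE_ID_INVALID)
                return -ENOENT;

        /*
         * Proposed API: make this rx node deliver packets to "my_custom_node",
         * a placeholder name that must already be one of its next edges.
         */
        return rte_node_ethdev_rx_next_update(rx_id, "my_custom_node");
}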



Re: [PATCH] net/ice: fix tso tunnel setting to not take effect

2023-11-23 Thread lihuisong (C)

please add Bugzilla ID

On 2023/11/24 14:44, Kaiwen Deng wrote:

The Tx offload capabilities of the ICE ethdev do not include
tunnel TSO, so the tunnel TSO setting does not take effect.

This commit adds the tunnel TSO capabilities in ice_dev_info_get().

Fixes: 295968d17407 ("ethdev: add namespace")
Cc: sta...@dpdk.org

Signed-off-by: Kaiwen Deng 
---
  drivers/net/ice/ice_ethdev.c | 6 +-
  1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3ccba4db80..fbc957fcd8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3876,7 +3876,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct 
rte_eth_dev_info *dev_info)
RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
+   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+   RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
}
  

correct,
Reviewed-by: Huisong Li 


Re: [PATCH] raw/cnxk_bphy: switch to dynamic logging

2023-11-23 Thread David Marchand
Hello,

On Tue, Nov 14, 2023 at 9:05 AM Tomasz Duszynski  wrote:
[snip]
> @@ -15,6 +16,11 @@
>  #include "cnxk_bphy_irq.h"
>  #include "rte_pmd_bphy.h"
>
> +extern int bphy_rawdev_logtype;
> +
> +#define BPHY_LOG(level, fmt, args...) \
> +   rte_log(RTE_LOG_ ## level, bphy_rawdev_logtype, "%s(): " fmt "\n", 
> __func__, ##args)
> +
>  static const struct rte_pci_id pci_bphy_map[] = {
> {RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_BPHY)},
> {
> @@ -81,7 +87,7 @@ bphy_rawdev_selftest(uint16_t dev_id)
> goto err_desc;
> if (descs != 1) {
> ret = -ENODEV;
> -   plt_err("Wrong number of descs reported\n");
> +   BPHY_LOG(ERR, "Wrong number of descs reported\n");

I think it is the only occurrence in this patch; please remove the trailing
\n since BPHY_LOG appends one.
Thanks.


-- 
David Marchand
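
For reference, the dynamic-logging pattern this series converges on looks
roughly like the sketch below (illustrative, not the patch itself): the log
type is registered once and every message goes through a macro that appends a
single newline, which is why trailing \n in call sites becomes redundant.

#include <rte_log.h>

/* The log type name is supplied by the build system for this component. */
RTE_LOG_REGISTER_DEFAULT(bphy_rawdev_logtype, NOTICE);

#define BPHY_LOG(level, fmt, args...) \
        rte_log(RTE_LOG_ ## level, bphy_rawdev_logtype, \
                "%s(): " fmt "\n", __func__, ##args)

/* Call sites then omit the trailing newline: */
static void
example_log_usage(int descs)
{
        if (descs != 1)
                BPHY_LOG(ERR, "Wrong number of descs reported");
}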