Hi, I reworked this patch as part of a new patchset, and some comments
below.
On the comment about the offload capabilities, i.e. what happens if a
PMD supports SECURITY and TSO but not both at the same time: if
this is the case, both should not be set, and also this can
theoretical
26/08/2021 12:50, Radha Mohan Chintakuntla:
> Removing the rawdev-based octeontx2-dma driver as the dependent
> common/octeontx2 will soon be going away. Also a new DMA driver will
> be coming in its place once the rte_dmadev library is in.
>
> Signed-off-by: Radha Mohan Chintakuntla
This pa
On 10/18/2021 5:04 PM, Ali Alnubani wrote:
-Original Message-
From: dev On Behalf Of Ferruh Yigit
Sent: Wednesday, October 13, 2021 11:16 PM
To: Konstantin Ananyev ; dev@dpdk.org;
jer...@marvell.com; Ajit Khaparde ; Raslan
Darawsheh ; Andrew Rybchenko
; Qi Zhang ;
Honnappa Nagarahalli
C
In the current DPDK framework, all Rx queues are pre-loaded with mbufs for
incoming packets. When the number of representors scales out in a switch
domain, the memory consumption becomes significant. Furthermore,
polling all ports leads to high cache miss, high latency and low
throughput.
This patch introd
In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
store incoming packets. For some PMDs, when the number of representors scales
out in a switch domain, the memory consumption becomes significant.
Polling all ports also leads to high cache miss, high latency and low
throughput.
This pat
Dump the device capability and Rx domain ID if the shared Rx queue is supported
by the device.
Signed-off-by: Xueming Li
---
app/test-pmd/config.c | 4
1 file changed, 4 insertions(+)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96e..c0616dcd2fd 100644
--- a/app/test-pmd/co
Adds "--rxq-share=X" parameter to enable shared RxQ, share if device
supports, otherwise fallback to standard RxQ.
Share group number grows per X ports. X defaults to MAX, implies all
ports join share group 1. Queue ID is mapped equally with shared Rx
queue ID.
Forwarding engine "shared-rxq" shou
With a shared Rx queue, polling any member port returns mbufs for
all members. This patch dumps mbuf->port for each packet.
Signed-off-by: Xueming Li
---
app/test-pmd/util.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 51506e49404..e9
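As a rough illustration of the idea (not the actual testpmd change), dumping the
originating port of each received mbuf could look like this minimal C sketch:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Minimal sketch: poll one member port of a shared Rx queue and print
 * the originating port recorded in each mbuf. */
static void
dump_rx_ports(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[32];
    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
        /* With a shared RxQ, pkts[i]->port may differ from port_id. */
        printf("pkt %u received on port %u\n", i, pkts[i]->port);
        rte_pktmbuf_free(pkts[i]);
    }
}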
A shared Rx queue must be polled on the same core. This patch checks and stops
forwarding if a shared RxQ is scheduled on multiple
cores.
It is suggested to use the same number of Rx queues and polling cores.
Signed-off-by: Xueming Li
---
app/test-pmd/config.c | 103 +
To support shared Rx queue, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates stream statistics and simply frees the packets.
Signed-off-by: Xueming Li
---
app/test-pmd/meson.build| 1 +
app/test-pmd/share
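A hedged sketch of the grouping idea only (not the actual "shared-rxq" engine
code): receive a burst, account each packet to its originating member port,
then free it.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch of the grouping idea: per-member statistics keyed by mbuf->port,
 * packets are simply freed afterwards. Not the real forwarding engine. */
static void
shared_rxq_fwd_sketch(uint16_t poll_port, uint16_t queue_id,
        uint64_t rx_stats[RTE_MAX_ETHPORTS])
{
    struct rte_mbuf *pkts[64];
    uint16_t nb_rx = rte_eth_rx_burst(poll_port, queue_id, pkts, 64);
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
        rx_stats[pkts[i]->port]++; /* group/update stats by source port */
        rte_pktmbuf_free(pkts[i]); /* engine simply frees the packets */
    }
}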
The core count has a direct impact on the time needed to complete unit
tests.
Currently, the core list used for unit test is enforced to "all cores on
the system" with no way for (CI) users to adapt it.
On the other hand, EAL default behavior (when no -c/-l option gets passed)
is to start threads on
Sorry, I forgot to reply to the original thread; resent.
Please ignore this series.
On Mon, 2021-10-18 at 20:08 +0800, Xueming Li wrote:
> In current DPDK framework, all Rx queues is pre-loaded with mbufs for
> incoming packets. When number of representors scale out in a switch
> domain, the m
After yellow color actions in the metering policy were supported,
the RSS could be used for both green and yellow colors, and only the
queues attribute could be different.
When specifying the attributes of an RSS, some fields can be ignored
and some default values will be used in the PMD. For example, t
The RSS configuration in a policy action container was a pointer
inside a union, and the pointer area could be used as another fate
action. In the current implementation, the RSS of the green color
was handled prior to that of the yellow color. There was a high possibility
the pointer was considered as the R
mlx5_rxq_start() allocates rxq_ctrl->obj and frees it on failure,
but did not set it to NULL. Later, mlx5_rxq_release() could not recognize
that this object was already freed and attempted to release its resources,
resulting in a crash:
Configuring Port 0 (socket 0)
mlx5_common: Failed to create
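The general pattern behind the fix (a generic sketch, not the mlx5 code itself)
is to clear the pointer after freeing it, so a later release path can tell the
object is already gone:

#include <stdlib.h>

/* Generic sketch of the "free and NULL" pattern; stand-in for rxq_ctrl->obj. */
struct ctrl_sketch {
    void *obj; /* queue object, NULL when not allocated */
};

static int
obj_setup_sketch(struct ctrl_sketch *ctrl, int later_step_fails)
{
    ctrl->obj = malloc(128); /* stands in for HW object creation */
    if (ctrl->obj == NULL)
        return -1;
    if (later_step_fails) {
        free(ctrl->obj);
        ctrl->obj = NULL; /* without this, the release below double-frees */
        return -1;
    }
    return 0;
}

static void
obj_release_sketch(struct ctrl_sketch *ctrl)
{
    if (ctrl->obj != NULL) { /* NULL means nothing left to release */
        free(ctrl->obj);
        ctrl->obj = NULL;
    }
}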
On 10/18/2021 2:48 PM, Ferruh Yigit wrote:
There is confusion about setting the max Rx packet length; this patch aims to
clarify it.
'rte_eth_dev_configure()' API accepts max Rx packet size via
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also 'rte_eth_dev_set_mtu()' A
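For reference, the usual MTU to maximum-frame-size conversion uses the standard
ethdev constants; a hedged sketch (drivers may add extra overhead such as VLAN
tags):

#include <stdint.h>
#include <rte_ether.h>

/* Illustrative only: the basic MTU <-> Rx frame size relationship. */
static inline uint32_t
mtu_to_frame_size(uint16_t mtu)
{
    return (uint32_t)mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
}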
On Mon, Oct 18, 2021 at 07:01:36PM +0200, David Marchand wrote:
> Cores count has a direct impact on the time needed to complete unit
> tests.
>
> Currently, the core list used for unit test is enforced to "all cores on
> the system" with no way for (CI) users to adapt it.
> On the other hand, EAL
The RSS configuration in a policy action container was a pointer
inside a union, and the pointer area could be used as another fate
action. In the current implementation, the RSS of the green color
was handled prior to that of the yellow color. There was a high possibility
the pointer was considered as the R
On 9/22/21 2:52 PM, Dharmik Thakkar wrote:
Convert rte_atomic usages to compiler atomic built-ins
for stats sync
Signed-off-by: Dharmik Thakkar
Reviewed-by: Joyce Kong
Reviewed-by: Ruifeng Wang
---
Tested-by: David Christensen
Hi Akhil, Nipun,
I do ack the series formally. Thanks for the updates and welcome to the new
bbdev PMD!
Acked-by: Nicolas Chautru
> -Original Message-
> From: Akhil Goyal
> Sent: Monday, October 18, 2021 8:23 AM
> To: nipun.gu...@nxp.com; dev@dpdk.org; Chautru, Nicolas
>
> Cc: david.mar
> Hi Akhil, Nipun,
>
> I do ack the series formally. Thanks for the updates and welcome to the new
> bbdev PMD!
>
> Acked-by: Nicolas Chautru
After yellow color actions in the metering policy were supported,
the RSS could be used for both green and yellow colors, and only the
queues attribute could be different.
When specifying the attributes of an RSS, some fields can be ignored
and some default values will be used in the PMD. For example, t
On 10/15/2021 7:49 AM, Andrew Rybchenko wrote:
Prepare driver infrastructure and add support for transfer rules
proxy. The API allows finding out the right entry point for transfer
rules (which require corresponding privileges).
Viacheslav Galaktionov (3):
common/sfc_efx/base: support unprivileg
On 10/15/2021 5:26 PM, Michal Krawczyk wrote:
Hi,
this version updates the driver to version 2.5.0. It mainly focuses on
fixing the offload flags. Other features included in this patchset
are:
* NUMA aware allocations for the queue specific structures
* New watchdog - check for miss
Move struct rte_intr_handle to be an internal structure to
avoid any ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of a maximum of 512
MSI-X interrupts that can be defined for
Move the interrupt handle structure definition inside the C file
to make its fields totally opaque to the outside world.
Dynamically allocate the efds and elist arrays of the intr_handle
structure, based on the size provided by the user, e.g. the size can be
the number of MSI-X interrupts supported by a PCI device.
Signed-off-by:
Prototype/implement get/set APIs for interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly and should instead use these get/set APIs to access/manipulate
them.
The internal interrupt header, i.e. rte_eal_interrupt.h, is rearranged,
as APIs defined are moved to rte_i
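A minimal sketch of the accessor pattern being proposed; the struct layout and
function names below are illustrative, not the final EAL API:

#include <stddef.h>

/* Illustrative get/set accessors over an otherwise opaque handle. */
struct intr_handle_sketch {
    int fd;
    int type;
};

static int
intr_fd_set_sketch(struct intr_handle_sketch *h, int fd)
{
    if (h == NULL)
        return -1; /* validation lives in the accessor, not the caller */
    h->fd = fd;
    return 0;
}

static int
intr_fd_get_sketch(const struct intr_handle_sketch *h)
{
    return (h != NULL) ? h->fd : -1;
}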
Implement an alarm cleanup routine where the memory allocated
for the interrupt instance can be freed.
Signed-off-by: Harman Kalra
---
lib/eal/common/eal_private.h | 11 +++
lib/eal/freebsd/eal.c| 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++
lib/eal/linux/eal.c | 1 +
Update the interrupt testsuite to make use of the interrupt
handle get/set APIs.
Signed-off-by: Harman Kalra
---
app/test/test_interrupts.c | 162 ++---
1 file changed, 97 insertions(+), 65 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interr
Make changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.
Signed-off-by: Harman Kalra
---
lib/eal/freebsd/eal_interrupts.c | 92 ++
lib/eal/linux/eal_interrupts.c
Implement a new API to get whether the DPDK memory management
APIs are initialized.
One use case of this API is while allocating an interrupt
instance: if the malloc APIs are ready, memory for interrupt handles
should be allocated via rte_malloc_* APIs, else glibc malloc APIs
are used. E.g. Alarm
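The allocation pattern described above might look roughly as follows, assuming
the rte_malloc_is_ready() helper proposed in this series (not an existing
upstream API):

#include <stdbool.h>
#include <stdlib.h>
#include <rte_malloc.h>

/* rte_malloc_is_ready() is the helper proposed in this series. */
extern bool rte_malloc_is_ready(void);

static void *
intr_instance_alloc_sketch(size_t size)
{
    if (rte_malloc_is_ready())
        return rte_zmalloc(NULL, size, 0); /* DPDK heap is available */
    return calloc(1, size); /* early init (e.g. alarm) falls back to glibc */
}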
As discussed in the last release's deprecation notice,
the crypto and security session frameworks are reworked
to reduce the need for two mempool objects and
remove the requirement to expose the rte_security_session
and rte_cryptodev_sym_session structures.
The design methodology is explained in the patch descript
As per the current design, rte_security_session_create()
unnecessarily uses 2 mempool objects for a single session.
Also, the structure rte_security_session is not directly used
by the application; it may cause ABI breakage if the structure
is modified in the future.
To address these two issues, the API will now
The rte_security_session struct is now hidden in the library.
The application can access the opaque data and fast_mdata
using the set/get APIs introduced in this patch.
Signed-off-by: Akhil Goyal
---
doc/guides/prog_guide/rte_security.rst | 11 ++
doc/guides/rel_notes/deprecation.rst | 4 --
doc/
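Conceptually, the accessors keep the session layout private to the library; a
hedged sketch of the pattern with hypothetical names and layout:

#include <stdint.h>

/* Hypothetical hidden layout and accessors; the real names live in
 * rte_security once this patch is merged. */
struct sec_session_sketch {
    uint64_t opaque_data; /* app-owned cookie */
    uint64_t fast_mdata;  /* fast-path metadata used by the PMD */
};

static inline void
sess_opaque_data_set(struct sec_session_sketch *s, uint64_t v)
{
    s->opaque_data = v;
}

static inline uint64_t
sess_opaque_data_get(const struct sec_session_sketch *s)
{
    return s->opaque_data;
}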
Align the security session create and destroy
as per the recent changes in rte_security
and use the fast mdata field introduced in
rte_security. Enable compilation of the cnxk driver.
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 64 ++
Some PMDs need the session physical address, which can be passed
to the hardware. But since security_session_create
does not allow the PMD to get the mempool object, the PMD cannot
call rte_mempool_virt2iova().
Hence the library layer needs to calculate the IOVA for the session
private data and pass it to the PMD.
Added support for rte_security_op.session_get_size()
in all the PMDs which support rte_security sessions and where the
op was not supported.
Signed-off-by: Akhil Goyal
---
drivers/crypto/caam_jr/caam_jr.c| 6 ++
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 7 +++
drivers/crypt
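Conceptually, the library can derive the private-data IOVA from the session
object itself, roughly as sketched below (the offset handling is illustrative,
not the exact patch code):

#include <stddef.h>
#include <rte_memory.h>
#include <rte_mempool.h>

/* Sketch: translate the session VA to IOVA and add the private-data
 * offset before handing it to the PMD. */
static rte_iova_t
session_priv_iova_sketch(void *sess, size_t priv_data_offset)
{
    rte_iova_t sess_iova = rte_mempool_virt2iova(sess);

    if (sess_iova == RTE_BAD_IOVA)
        return RTE_BAD_IOVA;
    return sess_iova + priv_data_offset;
}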
On 9/29/2021 1:13 PM, Leyi Rong wrote:
The common header file for vectorization is included in multiple files,
and so must use macros for the current compilation unit, rather than the
compiler-capability flag set for the whole driver. With the current,
incorrect, macro, the AVX512 or AVX2 flags m
The structure rte_cryptodev_sym_session is moved to internal
headers which are not visible to applications.
The only field which should be used by the app is opaque_data.
This field can now be accessed via the set/get APIs added in this
patch.
Subsequent changes in the app and lib are made to compile the code.
Si
Some PMDs need the session physical address, which can be passed
to the hardware. But since sym_session_configure
does not allow the PMD to get the mempool object, the PMD cannot
call rte_mempool_virt2iova().
Hence the library layer needs to calculate the IOVA for the session
private data and pass it to the PMD.
Si
> > Can somebody from Intel look into this?
> > We don’t have intel-ipsec-mb library access, so cannot reproduce the issue.
>
> I see Intel reply on the issue so we are probably good.
>
> However, on access to intel-ipsec-mb topic, sources are available on github.
> Compiling is relatively easy (
1. Introduction and Retrospective
Nowadays networks are evolving fast and wide, network
structures are getting more and more complicated, and new
application areas are emerging. To address these challenges,
new network protocols are continuously being developed,
considered by technical
1. Introduction and Retrospective
Nowadays networks are evolving fast and wide, network
structures are getting more and more complicated, and new
application areas are emerging. To address these challenges,
new network protocols are continuously being developed,
considered by technical
From: Gregory Etelson
The RTE flow API provides the RAW item type for packet patterns of variable
length. The RAW item structure has fixed-size members that describe the
variable pattern length and methods to process it.
There is a new RTE flow item with variable length coming: the flex
item. In order
From: Gregory Etelson
testpmd flow creation is constructed from these procedures:
1. receive a string with the flow rule description;
2. parse the input string and build flow parameters: port_id value,
flow attributes, items array, actions array;
3. create a flow rule from the flow rule parameters.
From: Gregory Etelson
The RTE flex item API was introduced in the
"ethdev: introduce configurable flexible item" patch.
The API allows a DPDK application to define a parser for a custom
network header in port hardware and offload flows that will match
the custom header elements.
Signed-off-by: Gregory Etelso
From: Gregory Etelson
Testpmd interactive mode provides a CLI to configure application
commands. Testpmd reads CLI commands and parameters from STDIN, and
converts the input into C objects with an internal parser.
The patch adds a jansson dependency to testpmd.
With jansson, testpmd can read input in JSON fo
From: Gregory Etelson
Network port hardware is shipped with a fixed number of
supported network protocols. If an application must work with a
protocol that is not included in the port hardware by default, it
can try to add the new protocol to the port hardware.
A flex item or flex parser is port infrastruc
2021-10-19 01:07 (UTC+0530), Harman Kalra:
[...]
> +struct rte_intr_handle *rte_intr_instance_alloc(void)
> +{
> + struct rte_intr_handle *intr_handle;
> + bool mem_allocator;
This name is not very descriptive; what would "mem_allocator is false" mean?
How about "is_rte_memory"?
> +
> +
MLX5 hardware has its own internal IOMMU where the PMD registers the memory.
On the data path, the PMD translates a VA into a key consumed by the device
IOMMU. It is impractical for the PMD to register all allocated memory
because of the increased lookup cost both in HW and SW. Most often mbuf
memory comes from me
Data path performance can benefit if the PMD knows which memory it will
need to handle in advance, before the first mbuf is sent to the PMD.
It is impractical, however, to consider all allocated memory for this
purpose. Most often mbuf memory comes from mempools that can come and
go. PMD can enumer
A mempool is a generic allocator that is not necessarily used
for device I/O operations, nor is its memory necessarily used for DMA.
Add the MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in or
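A consumer of the flag, e.g. a driver deciding whether to DMA-map a mempool,
could skip such pools roughly as below (assuming the MEMPOOL_F_NON_IO flag
added by this patch):

#include <rte_mempool.h>

/* Sketch: skip pools that are marked as not used for device I/O. */
static int
maybe_register_mempool_sketch(struct rte_mempool *mp)
{
    if (mp->flags & MEMPOOL_F_NON_IO)
        return 0; /* nothing to map for the device */
    /* driver-specific MR registration would go here */
    return 1;
}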
Add an internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
The implementation deals with multi-process, so that class drivers don't
need to. Each protection domain has its own database. Memory regions
can be shared within a data
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a n
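Installing such a hook builds on the mempool event callback API proposed
earlier in this series; registration might look roughly like this (the PD
handle and callback body are illustrative):

#include <rte_mempool.h>

/* Illustrative per-PD callback; the registration API is the one
 * proposed earlier in this series. */
static void
pd_mempool_event_cb(enum rte_mempool_event event,
        struct rte_mempool *mp, void *user_data)
{
    void *pd = user_data; /* protection-domain handle (illustrative) */

    switch (event) {
    case RTE_MEMPOOL_EVENT_READY:
        /* register MRs for all memory of mp under this PD */
        break;
    case RTE_MEMPOOL_EVENT_DESTROY:
        /* unregister the MRs of mp for this PD */
        break;
    }
    (void)pd;
    (void)mp;
}

static int
pd_install_cb_sketch(void *pd)
{
    return rte_mempool_event_callback_register(pd_mempool_event_cb, pd);
}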
> -Original Message-
> From: Zhang, Qi Z
> Sent: 18 October 2021 16:06
> To: Dmitry Kozlyuk ; dev@dpdk.org
> Cc: Ori Kam ; NBU-Contact-Thomas Monjalon
> ; Yigit, Ferruh ; Andrew
> Rybchenko
> Subject: RE: [dpdk-dev] [PATCH v2 1/5] ethdev: add capability to keep flow
> rules on restart
On Tue, 19 Oct 2021 01:07:02 +0530
Harman Kalra wrote:
> + /* Detect if DPDK malloc APIs are ready to be used. */
> + mem_allocator = rte_malloc_is_ready();
> + if (mem_allocator)
> + intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
> +
From: Pavan Nikhilesh
Mark all the driver-specific functions as internal and remove the
`rte` prefix from `struct rte_eventdev_ops`.
Remove the experimental tag from internal functions.
Remove `eventdev_pmd.h` from non-internal header files.
Signed-off-by: Pavan Nikhilesh
Acked-by: Hemant Agrawal
---
v5
From: Pavan Nikhilesh
Create rte_eventdev_core.h and move all the internal data structures
to this file. These structures are mostly used by drivers, but they
need to be in the public header file as they are accessed by datapath
inline functions for performance reasons.
The accessibility of these
From: Pavan Nikhilesh
Allocate max space for the internal port, port config, queue config and
link map arrays.
Introduce a new macro RTE_EVENT_MAX_PORTS_PER_DEV and set it to the max
possible value.
This simplifies the port and queue reconfiguration scenarios and will
also allow inline functions to refer point
From: Pavan Nikhilesh
Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
Signed-off-by: Pavan Nikhilesh
Acked-by: Ray Kinsella
---
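The flat-array idea can be illustrated generically as follows (names and
fields are illustrative, not the exact eventdev fp_ops layout):

#include <stdint.h>

#define SKETCH_MAX_DEVS 64

/* Generic sketch: fast-path function pointers live in a small public
 * struct, while the full device struct can stay private. */
struct fp_ops_sketch {
    uint16_t (*enqueue)(void *port, const void *events, uint16_t nb);
    uint16_t (*dequeue)(void *port, void *events, uint16_t nb);
    void *data; /* per-device data used by the fast path */
};

/* One flat array indexed by device ID, filled at probe time. */
static struct fp_ops_sketch fp_ops_array[SKETCH_MAX_DEVS];

static inline uint16_t
enqueue_burst_sketch(uint8_t dev_id, void *port, const void *ev, uint16_t nb)
{
    return fp_ops_array[dev_id].enqueue(port, ev, nb);
}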
From: Pavan Nikhilesh
Invoke the event_dev_probing_finish() function at the end of probing;
this function sets the function pointers in the fp_ops flat array.
Signed-off-by: Pavan Nikhilesh
Acked-by: Hemant Agrawal
---
drivers/event/dpaa/dpaa_eventdev.c | 4 +++-
drivers/event/dpaa2/dpaa2
From: Pavan Nikhilesh
Use new driver interface for the fastpath enqueue/dequeue inline
functions.
Signed-off-by: Pavan Nikhilesh
Acked-by: Jay Jayatheerthan
Acked-by: Abhinandan Gujjar
---
lib/eventdev/rte_event_crypto_adapter.h | 15 +---
lib/eventdev/rte_event_eth_tx_adapter.h | 15 +++
From: Pavan Nikhilesh
Move rte_eventdev, rte_eventdev_data structures to eventdev_pmd.h.
Signed-off-by: Pavan Nikhilesh
Acked-by: Harman Kalra
---
drivers/event/dlb2/dlb2_inline_fns.h | 2 +
drivers/event/dsw/dsw_evdev.h | 2 +
drivers/event/octeontx/timvf_worker.h | 2 +
drive
From: Pavan Nikhilesh
Hide rte_event_timer_adapter_pmd.h file as it is an internal file.
Remove rte_ prefix from rte_event_timer_adapter_ops structure.
Signed-off-by: Pavan Nikhilesh
---
drivers/event/cnxk/cnxk_tim_evdev.c | 5 ++--
drivers/event/cnxk/cnxk_tim_evdev.h | 2
From: Pavan Nikhilesh
Remove rte_ prefix from rte_eth_event_enqueue_buffer,
rte_event_eth_rx_adapter and rte_event_crypto_adapter
as they are only used in rte_event_eth_rx_adapter.c and
rte_event_crypto_adapter.c
Signed-off-by: Pavan Nikhilesh
Acked-by: Jay Jayatheerthan
Acked-by: Abhinandan G
From: Pavan Nikhilesh
Rearrange the fields in the rte_event_timer data structure to remove holes.
Also, remove the use of volatile from rte_event_timer.
Signed-off-by: Pavan Nikhilesh
---
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/eventdev/rte_event_timer_adapter.h | 4 ++--
2 files changed, 5 in
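Field reordering to remove padding holes is a generic C technique; a simplified
illustration (not the actual rte_event_timer layout):

#include <stdint.h>

/* 32 bytes on LP64: a 7-byte hole follows each uint8_t field. */
struct holey_sketch {
    uint8_t  state_a;
    uint64_t tick;
    uint8_t  state_b;
    uint64_t cookie;
};

/* 24 bytes: 64-bit fields first, small fields packed at the end. */
struct reordered_sketch {
    uint64_t tick;
    uint64_t cookie;
    uint8_t  state_a;
    uint8_t  state_b;
};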
From: Pavan Nikhilesh
Move the memory used by timer adapters to hugepages.
Allocate memory on the first adapter create or lookup to address
both primary and secondary process use cases.
This will prevent TLB misses, if any, and aligns with the memory structure
of other subsystems.
Signed-off-by: Pavan Nikhile
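The "reserve on first use, look up otherwise" pattern for hugepage-backed
shared memory typically relies on memzones; a hedged sketch (zone name and
size handling are illustrative):

#include <stddef.h>
#include <rte_eal.h>
#include <rte_memzone.h>

/* Sketch: the primary process reserves the zone once, secondaries look it up. */
static void *
adapter_shared_mem_sketch(size_t size)
{
    const char *name = "sketch_timer_adapter_data"; /* illustrative name */
    const struct rte_memzone *mz = rte_memzone_lookup(name);

    if (mz == NULL && rte_eal_process_type() == RTE_PROC_PRIMARY)
        mz = rte_memzone_reserve(name, size, SOCKET_ID_ANY, 0);
    return (mz != NULL) ? mz->addr : NULL;
}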
From: Pavan Nikhilesh
Promote event vector configuration APIs to stable.
Signed-off-by: Pavan Nikhilesh
Acked-by: Jay Jayatheerthan
Acked-by: Ray Kinsella
---
doc/guides/rel_notes/release_21_11.rst | 2 ++
lib/eventdev/rte_event_eth_rx_adapter.h | 1 -
lib/eventdev/rte_eventdev.h
From: Pavan Nikhilesh
Slowpath trace APIs are only used in rte_eventdev.c, so make them
internal.
Signed-off-by: Pavan Nikhilesh
Acked-by: Jay Jayatheerthan
Acked-by: Abhinandan Gujjar
---
lib/eventdev/{rte_eventdev_trace.h => eventdev_trace.h} | 0
lib/eventdev/eventdev_trace_points.c
From: Pavan Nikhilesh
Mark rte_trace global variables as internal, i.e. remove them
from the experimental section of the version map.
Some of them are used in inline APIs; mark those as global.
Signed-off-by: Pavan Nikhilesh
Acked-by: Ray Kinsella
---
lib/eventdev/version.map | 71 ++--
On Mon, Oct 18, 2021 at 9:47 AM Ferruh Yigit wrote:
>
> On 10/18/2021 5:04 PM, Ali Alnubani wrote:
> >> -Original Message-
> >> From: dev On Behalf Of Ferruh Yigit
> >> Sent: Wednesday, October 13, 2021 11:16 PM
> >> To: Konstantin Ananyev ; dev@dpdk.org;
> >> jer...@marvell.com; Ajit Kha
> -Original Message-
> From: Yigit, Ferruh
> Sent: Saturday, October 16, 2021 4:50 AM
> To: Zhang, AlvinX ; Xing, Beilei
> ; Guo, Junfeng
> Cc: dev@dpdk.org; sta...@dpdk.org; Zhang, Qi Z
> Subject: Re: [dpdk-stable] [PATCH] net/i40e: fix IPv6 fragment RSS offload
> type
> in flow
>
>
On Mon, Oct 18, 2021 at 6:00 AM Xueming Li wrote:
>
> In current DPDK framework, each Rx queue is pre-loaded with mbufs to
> save incoming packets. For some PMDs, when number of representors scale
> out in a switch domain, the memory consumption became significant.
> Polling all ports also leads t
> -Original Message-
> From: Dmitry Kozlyuk
> Sent: Tuesday, October 19, 2021 6:51 AM
> To: Zhang, Qi Z ; dev@dpdk.org
> Cc: Ori Kam ; NBU-Contact-Thomas Monjalon
> ; Yigit, Ferruh ; Andrew
> Rybchenko
> Subject: RE: [dpdk-dev] [PATCH v2 1/5] ethdev: add capability to keep flow
> rules
On Mon, 18 Oct 2021 10:40:00 +
"Ananyev, Konstantin" wrote:
> > On Fri, 15 Oct 2021 10:30:02 +0100
> > Vladimir Medvedkin wrote:
> >
> > > + m[i * 8 + j] = (rss_key[i] << j)|
> > > + (uint8_t)((uint16_t)(rss_key[i + 1]) >>
> > > +
> -Original Message-
> From: Zhang, Qi Z
> Sent: Tuesday, October 19, 2021 8:03 AM
> To: Yigit, Ferruh ; Zhang, AlvinX
> ; Xing, Beilei ; Guo, Junfeng
>
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [dpdk-stable] [PATCH] net/i40e: fix IPv6 fragment RSS offload
> type in
> flow
>
>
> -Original Message-
> From: Zhang, AlvinX
> Sent: Tuesday, October 19, 2021 9:32 AM
> To: Zhang, Qi Z ; Yigit, Ferruh
> ;
> Xing, Beilei ; Guo, Junfeng
> Cc: dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [dpdk-stable] [PATCH] net/i40e: fix IPv6 fragment RSS offload
> type
> in flow
>
To keep flow format uniform with ice, this patch adds support for
this RSS rule:
flow create 0 ingress pattern eth / ipv6 / ipv6_frag_ext / end \
actions rss types ipv6-frag end queues end queues end / end
Fixes: ef4c16fd9148 ("net/i40e: refactor RSS flow")
Cc: sta...@dpdk.org
Signed-off-
> -Original Message-
> From: Ananyev, Konstantin
> Sent: Monday, October 18, 2021 18:16
> To: Li, Xiaoyun ; Stephen Hemminger
>
> Cc: Yigit, Ferruh ; dev@dpdk.org; sta...@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] app/testpmd: fix l4 sw csum over multi
> segments
>
>
> > > > +
Hi Vijay,
> -Original Message-
> From: Vijay Kumar Srivastava
> Sent: Monday, October 18, 2021 6:06 PM
> To: Xia, Chenbo ; dev@dpdk.org
> Cc: maxime.coque...@redhat.com; andrew.rybche...@oktetlabs.ru; Harpreet Singh
> Anand ; Praveen Kumar Jain
> Subject: RE: [PATCH 02/10] vdpa/sfc: add
For test-pmd:
- removing Xiaoyun Li
- adding Yuying Zhang
Signed-off-by: Yuying Zhang
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index ed8becce85..f13bf425b0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1635,7 +1635,7 @
Support IAVF PPPoL2TPv2oUDP RSS Hash. Required to distribute packets
based on inner IP src+dest address and TCP/UDP src+dest port.
---
v5: update release notes.
v4:
* update commit log.
* redefine PPP protocol header.
* delete l2tpv2_encap.
v3:
* add testpmd match ppp and l2tpv2 protocol heade
Added flow pattern items and header formats of L2TPv2 and PPP.
Signed-off-by: Wenjun Wu
Signed-off-by: Jie Wang
---
doc/guides/prog_guide/rte_flow.rst | 25 +++
doc/guides/rel_notes/release_21_11.rst | 4 +
lib/ethdev/rte_flow.c | 2 +
lib/ethdev/rte_flow.h
Add support for PPP over L2TPv2 over UDP protocol RSS Hash based
on inner IP src/dst address and TCP/UDP src/dst port.
Patterns are listed below:
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/udp
eth/ipv4(6)/udp/l2tpv2/ppp/ipv4(6)/tcp
Signed-off-by: Wenjun Wu
Signed-off-b
Add support for test-pmd to parse the L2TPv2 and PPP protocol patterns.
Signed-off-by: Wenjun Wu
Signed-off-by: Jie Wang
---
app/test-pmd/cmdline_flow.c | 251
1 file changed, 251 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow
> -Original Message-
> From: Zhang, Yuying
> Sent: Tuesday, October 19, 2021 14:17
> To: dev@dpdk.org; Li, Xiaoyun
> Cc: Zhang, Yuying
> Subject: [PATCH v1] maintainers: update for driver testing tool
>
> For test-pmd:
> - removing Xiaoyun Li
> - adding Yuying Zhang
>
> Signed-
This patchset fixes the FreeBSD build error reported in
https://bugs.dpdk.org/show_bug.cgi?id=788.
It also splits AVX-specific code into a new xxx_common_avx.h header file.
---
v2:
- Decouple i40e_rxtx_common_avx.h/ice_rxtx_common_avx.h from
i40e_rxtx_vec_common.h/ice_rxtx_vec_common.h
Leyi Rong (2):
The common header file for vectorization is included in multiple files,
and so must use macros for the current compilation unit, rather than the
compiler-capability flag set for the whole driver. With the current,
incorrect, macro, the AVX512 or AVX2 flags may be set when compiling up
SSE code, lea
The common header file for vectorization is included in multiple files,
and so must use macros for the current compilation unit, rather than the
compiler-capability flag set for the whole driver. With the current,
incorrect, macro, the AVX512 or AVX2 flags may be set when compiling up
SSE code, lea
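The distinction being fixed is between compiler-capability macros for the
current compilation unit and a driver-wide build flag; a schematic sketch
using the compiler's own predefined macros:

/* Schematic only: pick a code path per compilation unit via the
 * compiler's predefined macros, not a flag set for the whole driver. */
static inline int
vector_width_for_this_unit(void)
{
#if defined(__AVX512F__)
    return 512; /* this unit really is built with AVX-512 enabled */
#elif defined(__AVX2__)
    return 256; /* this unit really is built with AVX2 enabled */
#else
    return 128; /* SSE path: other units' AVX flags do not apply here */
#endif
}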
> -Original Message-
> From: Li, Miao
> Sent: Monday, October 18, 2021 10:17 PM
> To: dev@dpdk.org
> Cc: Xia, Chenbo ; maxime.coque...@redhat.com; Li, Miao
>
> Subject: [PATCH v7 2/5] vhost: implement rte_power_monitor API
>
> This patch defines rte_vhost_power_monitor_cond which is used
> -Original Message-
> From: Li, Miao
> Sent: Monday, October 18, 2021 10:17 PM
> To: dev@dpdk.org
> Cc: Xia, Chenbo ; maxime.coque...@redhat.com; Li, Miao
>
> Subject: [PATCH v7 3/5] net/vhost: implement rte_power_monitor API
>
> This patch implements rte_power_monitor API in vhost PMD
> -Original Message-
> From: Li, Miao
> Sent: Monday, October 18, 2021 10:17 PM
> To: dev@dpdk.org
> Cc: Xia, Chenbo ; maxime.coque...@redhat.com; Li, Miao
>
> Subject: [PATCH v7 5/5] examples/l3fwd-power: support virtio/vhost
>
> In l3fwd-power, there is default port configuration which
> -Original Message-
> From: Maxime Coquelin
> Sent: Monday, October 18, 2021 6:21 PM
> To: dev@dpdk.org; Xia, Chenbo ; amore...@redhat.com;
> david.march...@redhat.com; andrew.rybche...@oktetlabs.ru; Yigit, Ferruh
> ; michae...@nvidia.com; viachesl...@nvidia.com; Li,
> Xiaoyun
> Cc: neli
> -Original Message-
> From: Peng, ZhihongX
> Sent: Friday, October 15, 2021 11:11 PM
> To: david.march...@redhat.com; Burakov, Anatoly
> ; Ananyev, Konstantin
> ; step...@networkplumber.org;
> Dumitrescu, Cristian ; Mcnamara, John
>
> Cc: dev@dpdk.org; Lin, Xueqin ; Peng, ZhihongX
>
> S
On Mon, 2021-10-18 at 17:21 -0700, Ajit Khaparde wrote:
> On Mon, Oct 18, 2021 at 6:00 AM Xueming Li wrote:
> >
> > In current DPDK framework, each Rx queue is pre-loaded with mbufs to
> > save incoming packets. For some PMDs, when number of representors scale
> > out in a switch domain, the memo
> -Original Message-
> From: Peng, ZhihongX
> Sent: Friday, October 15, 2021 11:11 PM
> To: david.march...@redhat.com; Burakov, Anatoly
> ; Ananyev, Konstantin
> ; step...@networkplumber.org;
> Dumitrescu, Cristian ; Mcnamara, John
>
> Cc: dev@dpdk.org; Lin, Xueqin ; Peng, ZhihongX
> ; st
On 10/18/21 3:59 PM, Xueming Li wrote:
> In current DPDK framework, each Rx queue is pre-loaded with mbufs to
> save incoming packets. For some PMDs, when number of representors scale
> out in a switch domain, the memory consumption became significant.
> Polling all ports also leads to high cache m
Hi Olivier,
On 8/1/2021 11:06 AM, Eli Britstein wrote:
On 7/30/2021 2:10 PM, Olivier Matz wrote:
Hi Eli,
On Thu, Jul 29, 2021 at 10:13:45AM +0300, Eli Britstein wrote:
On 7/28/2021 6:28 PM, Olivier Matz wrote: