From: Dmitry Kozlyuk
Hugepage allocation from the system takes time, resulting in slow
startup or sporadic delays later. Most of the time is spent in the
kernel zero-filling memory for security reasons, which may be
irrelevant in a controlled environment. The bottleneck is memory
access speed, so for ...
From: Dmitry Kozlyuk
get_hugepage_dir() searched for a hugetlbfs mount with a given page size
using handcrafted parsing of /proc/mounts, mixing traversal logic with
selection of the needed entry. Separate the code that enumerates hugetlbfs
mounts into eal_hugepage_mount_walk(), taking a callback that can ins...
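A callback-driven walk over /proc/mounts could look roughly like the sketch
below; walk_hugetlbfs_mounts() and mount_cb are illustrative names under
assumed semantics, not the actual EAL internals from the patch.

/*
 * Sketch: enumerate hugetlbfs mounts and hand each one to a callback,
 * which can inspect the entry and stop the walk by returning non-zero.
 */
#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Callback: return non-zero to stop the walk early. */
typedef int (mount_cb)(const char *path, const char *options, void *arg);

static int
walk_hugetlbfs_mounts(mount_cb *cb, void *arg)
{
	FILE *f = setmntent("/proc/mounts", "r");
	struct mntent *m;
	int ret = 0;

	if (f == NULL)
		return -1;
	while ((m = getmntent(f)) != NULL) {
		if (strcmp(m->mnt_type, "hugetlbfs") != 0)
			continue;
		ret = cb(m->mnt_dir, m->mnt_opts, arg);
		if (ret != 0)
			break; /* callback selected an entry or failed */
	}
	endmntent(f);
	return ret;
}

/* Example callback: print each hugetlbfs mount point and its options. */
static int
print_mount(const char *path, const char *options, void *arg)
{
	(void)arg;
	printf("hugetlbfs at %s (%s)\n", path, options);
	return 0; /* keep walking */
}

int
main(void)
{
	return walk_hugetlbfs_mounts(print_mount, NULL) < 0;
}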
From: Viacheslav Ovsiienko
The primary DPDK process launch might take a long time if the initially
allocated memory is large. In practice, allocating 1 TB of memory
over 1 GB hugepages on Linux takes tens of seconds. Fast restart
is highly desired for some applications, and the launch delay presents
a ...
From: Dmitry Kozlyuk
Memory allocator performance is crucial to applications that deal
with large amounts of memory or allocate frequently. DPDK allocator
performance is affected by EAL options, the API used and, not least,
the allocation size. The new autotest is intended to be run with
different EAL options.
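A rough idea of what such a measurement does, as a standalone sketch (not
the actual autotest code): time rte_malloc()/rte_free() for a given size
and report cycles per allocation.

#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_cycles.h>

#define ITERATIONS 1000

static void
measure_alloc(size_t size)
{
	uint64_t start, cycles = 0;
	void *p;
	int i;

	for (i = 0; i < ITERATIONS; i++) {
		start = rte_rdtsc();
		p = rte_malloc(NULL, size, 0);
		cycles += rte_rdtsc() - start;
		if (p == NULL) {
			printf("allocation of %zu bytes failed\n", size);
			return;
		}
		rte_free(p);
	}
	printf("size %zu: %.1f cycles/alloc\n",
	       size, (double)cycles / ITERATIONS);
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;
	measure_alloc(64);
	measure_alloc(1 << 20); /* results vary with EAL options */
	return 0;
}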
From: Dmitry Kozlyuk
MLX5 hardware has an internal IOMMU where the PMD registers the memory.
On the data path, the PMD translates a VA into a key consumed by the
device IOMMU. It is impractical for the PMD to register all allocated
memory because of the increased lookup cost both in HW and SW. Most
often mb...
From: Dmitry Kozlyuk
Performance of MLX5 PMDs of different classes can benefit if the PMD knows
in advance which memory it will need to handle, before the first mbuf
is sent to the PMD. It is impractical, however, to consider
all allocated memory for this purpose. Most often mbuf memory comes
from mem...
From: Dmitry Kozlyuk
Mempool is a generic allocator that is not necessarily used for device
IO operations, nor is its memory necessarily used for DMA. Add the
MEMPOOL_F_NON_IO flag to mark such mempools.
Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
doc/guides/rel_notes/release_21_11.rst | 3 +++
lib/mempoo
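A hedged sketch of how a driver could consume such a flag; the flag name
is taken from this patch, and register_mempool_for_dma() is a hypothetical
stand-in, not a real DPDK or mlx5 function.

#include <rte_mempool.h>

/* Stub standing in for real MR/DMA registration logic (hypothetical). */
static int
register_mempool_for_dma(struct rte_mempool *mp)
{
	(void)mp;
	return 0;
}

static int
maybe_register_mempool(struct rte_mempool *mp)
{
	/* MEMPOOL_F_NON_IO: flag name as introduced by this patch. */
	if (mp->flags & MEMPOOL_F_NON_IO)
		return 0; /* objects never used for device IO (DMA) */
	return register_mempool_for_dma(mp);
}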
From: Dmitry Kozlyuk
Add an internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
The implementation deals with multi-process, so that class drivers don't
need to. Each protection domain has its own database. Memory regions
can ...
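As an illustration only, per-chunk registration could be built on the
existing rte_mempool_mem_iter() walker; create_mr() and struct pd_ctx
below are hypothetical stand-ins for the per-PD MR database described
in the patch.

#include <stdio.h>
#include <rte_mempool.h>

struct pd_ctx { void *pd; };	/* hypothetical per-PD database handle */

static void
create_mr(struct pd_ctx *ctx, void *addr, size_t len)
{
	/* Real code would call the device API and store the MR per PD. */
	(void)ctx;
	printf("register MR: addr=%p len=%zu\n", addr, len);
}

static void
register_chunk(struct rte_mempool *mp, void *opaque,
	       struct rte_mempool_memhdr *memhdr, unsigned int mem_idx)
{
	(void)mp;
	(void)mem_idx;
	create_mr(opaque, memhdr->addr, memhdr->len);
}

static void
register_mempool(struct pd_ctx *ctx, struct rte_mempool *mp)
{
	/* Walk every memory chunk backing the mempool. */
	rte_mempool_mem_iter(mp, register_chunk, ctx);
}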
From: Dmitry Kozlyuk
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can b...
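A sketch of that lifecycle, assuming the internal mempool event callback
API added earlier in this series; the per-PD helpers are hypothetical.

#include <rte_mempool.h>

struct pd_ctx;	/* hypothetical per-PD MR database */
void register_mempool(struct pd_ctx *ctx, struct rte_mempool *mp);	/* hypothetical */
void unregister_all(struct pd_ctx *ctx);				/* hypothetical */

static void
mempool_event_cb(enum rte_mempool_event event, struct rte_mempool *mp,
		 void *user_data)
{
	struct pd_ctx *ctx = user_data;

	if (event == RTE_MEMPOOL_EVENT_READY)
		register_mempool(ctx, mp);
	/* RTE_MEMPOOL_EVENT_DESTROY would trigger unregistration instead. */
}

/* Called when the first port in the PD starts. */
static int
pd_start(struct pd_ctx *ctx)
{
	return rte_mempool_event_callback_register(mempool_event_cb, ctx);
}

/* Called when the last port in the PD closes. */
static void
pd_stop(struct pd_ctx *ctx)
{
	rte_mempool_event_callback_unregister(mempool_event_cb, ctx);
	unregister_all(ctx);
}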
From: Dmitry Kozlyuk
It is unspecified whether flow rules and indirect actions are kept
when a port is stopped, possibly reconfigured, and started again.
Vendors approach the topic differently, e.g. the mlx5 and i40e PMDs
disagree on whether flow rules can be kept, and the mlx5 PMD would keep
indirect act...
From: Dmitry Kozlyuk
Currently, it is not specified what happens to the flow rules when
the device is stopped, possibly reconfigured, then started.
If flow rules were kept, it could be convenient for application
developers, because they wouldn't need to save and restore them.
However, due to the ...
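For illustration, an application could use the capability bit proposed in
this series to decide whether rules must be re-created after a restart;
RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP is the name from this series, and
restore_flow_rules() is a hypothetical application helper.

#include <stdint.h>
#include <rte_ethdev.h>

int restore_flow_rules(uint16_t port_id);	/* hypothetical app helper */

static int
restart_port(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_stop(port_id);
	if (ret != 0)
		return ret;
	/* ... possibly reconfigure the port here ... */
	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (!(info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP))
		return restore_flow_rules(port_id);
	return 0;	/* rules survived the restart */
}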
From: Dmitry Kozlyuk
rte_flow_action_handle_create() did not mention what happens
with an indirect action when a device is stopped, possibly reconfigured,
and started again. It is natural for some indirect actions to be
persistent, like counters and meters; keeping others just saves
application t...
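As a minimal illustration, an indirect counter action is created with the
existing rte_flow_action_handle_create(); whether the returned handle
stays valid across a stop/start cycle is exactly what this patch
documents.

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow_action_handle *
create_shared_counter(uint16_t port_id, struct rte_flow_error *error)
{
	const struct rte_flow_indir_action_conf conf = {
		.ingress = 1,
	};
	const struct rte_flow_action_count count_conf = { 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = &count_conf,
	};

	/* The handle can later be referenced from flow rules as an
	 * indirect action. */
	return rte_flow_action_handle_create(port_id, &conf, &action, error);
}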
From: Dmitry Kozlyuk
The maximum available flow priority was discovered using the Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use the DevX engine. Make priority
discovery an engine method and implement it for DevX using its API.
Cc: sta...
From: Dmitry Kozlyuk
Drop queue creation and destruction were not implemented for the DevX
flow engine, and the Verbs engine methods were used as a workaround.
Implement these methods for DevX so that there is a valid queue ID
that can be used regardless of the queue configuration via the API.
Cc: sta...@dpdk.or
From: Dmitry Kozlyuk
The MLX5 PMD uses reference counting to manage RX queue resources.
After port stop, shared RSS actions kept references to RX queues,
preventing resource release. As a result, the internal PMD mempool
for such queues was exhausted after a number of port restarts.
Diagnostic messag...