Hi Keith,
On Mon, Dec 18, 2017 at 06:43:35PM +, Wiles, Keith wrote:
>
>
> > On Dec 18, 2017, at 11:59 AM, Adrien Mazarguil
> > wrote:
>
> >> Not to criticize style, but a few blank lines could help in
> >> readability for these files IMHO. Unless blank lines are illegal
> >> :-)
> >
> >
-Original Message-
> Date: Mon, 18 Dec 2017 16:33:53 +
> From: "Eads, Gage"
> To: Jerin Jacob
> CC: "Gujjar, Abhinandan S" , "dev@dpdk.org"
> , "Vangati, Narender" , "Rao,
> Nikhil" , "hemant.agra...@nxp.com"
> , "Doherty, Declan" ,
> "nidadavolu.mur...@cavium.com" ,
> "nithin.da
Hi,
On Tue, Dec 19, 2017 at 07:42:20AM -0800, Xiao Wang wrote:
> @@ -336,6 +337,10 @@ struct rte_uio_pci_dev {
> struct pci_dev *dev = udev->pdev;
> int err;
>
> + atomic_inc(&udev->refcnt);
> + if (atomic_read(&udev->refcnt) > 1)
The "inc and read" should be atomic, otherwise the check is racy.
In some cases, one device is accessed by different processes via
different BARs, so one uio device may be opened by more than one
process. For this case we need to enable the interrupt only once, and
call pci_clear_master only when the last process has closed the device.
Fixes: 5f6ff30dc507 ("igb_uio: fix interrupt enabl
Thanks for pointing it out. Fix it in v3.
BRs,
Xiao
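The race Tiwei points out can be closed by doing the increment and the read as one indivisible operation (atomic_inc_return() in the kernel). A minimal userspace sketch of the intended open/release logic, using C11 atomics and hypothetical names in place of the igb_uio internals:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the "enable once" open-count logic under discussion, using
 * C11 atomics in place of the kernel's atomic_t. atomic_fetch_add()
 * increments and returns the *old* value in one indivisible step, so
 * two concurrent openers can never both observe themselves as first. */
static atomic_int refcnt = 0;
static int irq_enabled = 0;

static void uio_open(void)
{
    /* fetch_add returns the value before the increment */
    if (atomic_fetch_add(&refcnt, 1) == 0)
        irq_enabled = 1;            /* first opener: enable interrupts */
}

static void uio_release(void)
{
    /* fetch_sub returns the value before the decrement */
    if (atomic_fetch_sub(&refcnt, 1) == 1)
        irq_enabled = 0;            /* last closer: pci_clear_master */
}
```

With the separate inc-then-read pattern from the patch, two processes could both read refcnt > 1 (or both see it as 1) between the two operations; folding them into one atomic call removes that window.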
> -Original Message-
> From: Bie, Tiwei
> Sent: Tuesday, December 19, 2017 5:05 PM
> To: Wang, Xiao W
> Cc: Yigit, Ferruh ; dev@dpdk.org;
> step...@networkplumber.org
> Subject: Re: [dpdk-dev] [PATCH v2] igb_uio: allow multi-process acc
IPSec Multi-buffer library v0.48 has been released,
which includes, among other features, support for AES-CCM.
Signed-off-by: Pablo de Lara
---
doc/guides/cryptodevs/aesni_gcm.rst | 4 ++--
doc/guides/cryptodevs/aesni_mb.rst | 9 +
2 files changed, 7 insertions(+), 6 deletions(-)
diff
Hi,
On Mon, Dec 18, 2017 at 02:38:55PM +0300, Andrew Rybchenko wrote:
> On 12/18/2017 01:53 PM, Igor Ryzhov wrote:
> >
> > On Mon, Dec 18, 2017 at 1:35 PM, Andrew Rybchenko
> > mailto:arybche...@solarflare.com>> wrote:
> >
> > On 12/14/2017 08:15 PM, Olivier Matz wrote:
> >
> > From
On 12/19/2017 12:29 PM, Olivier MATZ wrote:
Hi,
On Mon, Dec 18, 2017 at 02:38:55PM +0300, Andrew Rybchenko wrote:
On 12/18/2017 01:53 PM, Igor Ryzhov wrote:
On Mon, Dec 18, 2017 at 1:35 PM, Andrew Rybchenko
mailto:arybche...@solarflare.com>> wrote:
On 12/14/2017 08:15 PM, Olivier Matz wr
On Mon, Dec 18, 2017 at 09:23:41PM +0100, Adrien Mazarguil wrote:
> On Mon, Dec 18, 2017 at 10:34:12AM -0800, Stephen Hemminger wrote:
> > On Mon, 18 Dec 2017 17:46:23 +0100
> > Adrien Mazarguil wrote:
> >
> > > +static int
> > > +hyperv_iface_is_netvsc(const struct if_nameindex *iface)
> > > +{
On Mon, Dec 18, 2017 at 01:17:51PM -0800, Stephen Hemminger wrote:
> On Mon, 18 Dec 2017 20:54:16 +0100
> Thomas Monjalon wrote:
>
> > > > +#endif /* RTE_LIBRTE_HYPERV_DEBUG */
> > > > +
> > > > +#define DEBUG(...) PMD_DRV_LOG(DEBUG, __VA_ARGS__)
> > > > +#define INFO(...) PMD_DRV_LOG(INFO, __VA_
On Tue, Dec 19, 2017 at 10:59:26AM +0530, Hemant Agrawal wrote:
> On 12/18/2017 10:00 PM, Thomas Monjalon wrote:
> > 18/12/2017 16:52, Bruce Richardson:
> > > On Mon, Dec 18, 2017 at 06:09:02PM +0530, Hemant Agrawal wrote:
> > > > --- a/GNUmakefile
> > > > +++ b/GNUmakefile
> > > > @@ -1,33 +1,6 @@
On Mon, Dec 18, 2017 at 03:59:46PM -0800, Stephen Hemminger wrote:
> On Mon, 18 Dec 2017 17:46:23 +0100
> Adrien Mazarguil wrote:
>
> > +static int
> > +ether_addr_from_str(struct ether_addr *eth_addr, const char *str)
> > +{
> > + static const uint8_t conv[0x100] = {
> > + ['0'] = 0x
Hi Tiwei,
> Hi Ning,
>
> On Thu, Dec 14, 2017 at 07:38:14PM +0800, Ning Li wrote:
> > When using virtio_user as an exception path, we need to specify a
> > MAC address for the tap port.
>
> Is this a fix? Did you meet any issue? If so, please describe
> the issue and add a fixline.
Specify the MAC a
On Mon, Dec 18, 2017 at 01:05:14PM -0500, Aaron Conole wrote:
> Luca Boccassi writes:
>
> > On Tue, 2017-12-12 at 17:14 +, Bruce Richardson wrote:
> >> On Tue, Dec 12, 2017 at 04:59:34PM +, Bruce Richardson wrote:
> >> > This patchset changes the meson+ninja build system to always create
Hi Beilei,
On Tue, Nov 28, 2017 at 06:12:55PM +0800, Beilei Xing wrote:
> Add support of PPPoE and L2TP in rte_net_get_ptype().
>
> Signed-off-by: Beilei Xing
> ---
> lib/librte_mbuf/rte_mbuf_ptype.c | 2 ++
> lib/librte_mbuf/rte_mbuf_ptype.h | 26 ++
> 2 files changed,
Signed-off-by: Hemant Agrawal
---
GNUmakefile | 32 ++--
Makefile| 32 ++--
2 files changed, 4 insertions(+), 60 deletions(-)
diff --git a/GNUmakefile b/GNUmakefile
index 45b7fbb..594f8cb 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -1
On Tue, Dec 19, 2017 at 09:53:27AM +, Bruce Richardson wrote:
> On Mon, Dec 18, 2017 at 09:23:41PM +0100, Adrien Mazarguil wrote:
> > On Mon, Dec 18, 2017 at 10:34:12AM -0800, Stephen Hemminger wrote:
> > > On Mon, 18 Dec 2017 17:46:23 +0100
> > > Adrien Mazarguil wrote:
> > >
>
> > > > +sta
The DPDK uses the Open Source BSD-3-Clause license for the core libraries
and drivers. The kernel components are naturally GPLv2 licensed.
Many of the files in the DPDK source code contain the full text of the
applicable license. For example, most of the BSD-3-Clause files contain a
full copy of t
Signed-off-by: Hemant Agrawal
---
config/defconfig_arm64-dpaa-linuxapp-gcc | 34 +++-
doc/guides/cryptodevs/dpaa_sec.rst| 31 +++---
doc/guides/nics/dpaa.rst | 31 +++---
doc/guides/prog_guide/rte_security.r
On Mon, Dec 18, 2017 at 01:18:33PM -0800, Ferruh Yigit wrote:
> On 12/18/2017 1:06 PM, Ferruh Yigit wrote:
> > Signed-off-by: Ferruh Yigit
>
> <...>
>
> > + * SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> > + * Copyright(c) 2014 6W
On Tue, Dec 19, 2017 at 03:44:39PM +0530, Hemant Agrawal wrote:
> Signed-off-by: Hemant Agrawal
> ---
Acked-by: Bruce Richardson
FYI: if doing a V6, you can drop the "All rights reserved" from the
copyright line. If no V6, it's fine as-is (or it can be removed on
apply).
Hi Hemant,
On Wed, Dec 06, 2017 at 06:01:12PM +0530, Hemant Agrawal wrote:
> This is required for the optimizations w.r.t. hw mempools.
> They will use different kinds of optimizations if the buffers
> are from a single contiguous memzone.
>
> Signed-off-by: Hemant Agrawal
> ---
> lib/librte_mempoo
> From: Pavan Nikhilesh [mailto:pbhagavat...@caviumnetworks.com]
> Add service core configuration for Rx adapter. The configuration picks
> the least used service core to run the service on.
>
> Signed-off-by: Pavan Nikhilesh
> ---
> app/test-eventdev/evt_common.h | 41 ---
Hi Xueming,
On Wed, Nov 15, 2017 at 11:45:43PM +0800, Xueming Li wrote:
> Add echo option to echo commandline to screen when running loaded
> scripts from file.
>
> Signed-off-by: Xueming Li
> ---
> lib/librte_cmdline/cmdline_socket.c | 5 +++--
> lib/librte_cmdline/cmdline_socket.h | 3 ++-
>
Hi Olivier,
On 12/19/2017 3:54 PM, Olivier MATZ wrote:
Hi Hemant,
On Wed, Dec 06, 2017 at 06:01:12PM +0530, Hemant Agrawal wrote:
This is required for the optimizations w.r.t. hw mempools.
They will use different kinds of optimizations if the buffers
are from a single contiguous memzone.
Signed-o
Hi Tiwei,
On Mon, Dec 18, 2017 at 10:50:47AM +0800, Tiwei Bie wrote:
> On Thu, Dec 14, 2017 at 03:32:13PM +0100, Olivier Matz wrote:
> > From: Samuel Gauthier
> >
> > On arm32, we were always selecting the simple handler, but it is only
> > available if neon is present.
> >
> > This is due to a
Any flags added to the project args are automatically added to all builds,
both native and cross-compiled. This is not what we want for the -march
flag as a valid -march for the cross-compile is not valid for pmdinfogen
which is a native-build tool.
Instead we store the march flag as a variable, a
The detection of pcap as a dependency involves invoking pcap-config to get
parameters - something not possible in a cross-compilation environment.
Therefore we need to just look for the presence of the library in a
cross-compilation environment and assume if the library is present we can
compile an
While meson has built-in support for cross-compilation, there can be things
inside the meson.build files which cause problems in a cross-compile
environment. This patchset fixes a number of issues found when doing test
builds for arm architecture.
NOTE: this patchset only contains the fixes made t
Add some skeleton files to enable compiling for ARM target. This has been
tested by doing a cross-compile for armv8-a type using the linaro gcc
toolchain.
meson arm-build --cross-file aarch64_cross.txt
ninja -C arm-build
where aarch64_cross.txt contained the following
[bi
On Tue, Dec 19, 2017 at 04:16:33PM +0530, Hemant Agrawal wrote:
> Hi Olivier,
>
> On 12/19/2017 3:54 PM, Olivier MATZ wrote:
> > Hi Hemant,
> >
> > On Wed, Dec 06, 2017 at 06:01:12PM +0530, Hemant Agrawal wrote:
> > > This is required for the optimizations w.r.t hw mempools.
> > > They will use d
Move get_virtual_area out of linuxapp EAL memory and make it
common to EAL, so that other code could reserve virtual areas
as well.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/eal_common_memory.c | 70 ++
lib/librte_eal/common/eal_private.h | 29 +++
This patchset introduces a prototype implementation of dynamic memory allocation
for DPDK. It is intended to start a conversation and build consensus on the best
way to implement this functionality. The patchset works well enough to pass all
unit tests, and to work with traffic forwarding, provided
At the moment, we always rely on scanning everything for every
socket up until RTE_MAX_NUMA_NODES and checking if there's a memseg
associated with each socket if we want to find out how many sockets
we actually have. This becomes a problem when we may have memory on a
socket but it's not allocated ye
Down the line, we will need to do everything from the heap as any
alloc or free may trigger alloc/free OS memory, which would involve
growing/shrinking heap.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/malloc_elem.c | 16 ++--
lib/librte_eal/common/malloc_heap.c | 36 +++
This does not change the public API, as this API is not meant to be
called directly.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/malloc_heap.c | 7 ++-
lib/librte_eal/common/malloc_heap.h | 2 +-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/lib/librte_eal/commo
As we are preparing for dynamic memory allocation, we need to be
able to handle holes in our malloc heap, hence we're switching to
a doubly linked list, and preparing infrastructure to support it.
Since our heap is now aware of where its first and last elements
are, there is no longer any need to have a
For non-legacy memory init mode, instead of looking at generic
sysfs path, look at sysfs paths pertaining to each NUMA node
for hugepage counts. Note that per-NUMA node path does not
provide information regarding reserved pages, so we might not
get the best info from these paths, but this saves us
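For reference, the per-node sysfs layout being described can be sketched like this (the helper name is illustrative, not the patch's code; on Linux the per-node hugepage directories expose nr_hugepages and free_hugepages but, unlike the global /sys/kernel/mm/hugepages/ path, no resv_hugepages entry, hence the caveat about reserved pages):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the per-NUMA-node sysfs path for a given hugepage size.
 * Example for node 0, 2 MB pages:
 *   /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages */
static void numa_hugepage_path(char *buf, size_t len,
                               int node, unsigned int sz_kb)
{
    snprintf(buf, len,
             "/sys/devices/system/node/node%d/hugepages/"
             "hugepages-%ukB/nr_hugepages", node, sz_kb);
}
```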
For now, this option does nothing, but it will be useful in
dynamic memory allocation down the line. Currently, DPDK stores
all pages as separate files in hugetlbfs. This option will allow
storing all pages in one file (one file per socket, per page size).
Signed-off-by: Anatoly Burakov
---
lib/
We need this function to join newly allocated segments with the heap.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/malloc_elem.c | 6 +++---
lib/librte_eal/common/malloc_elem.h | 3 +++
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/lib/librte_eal/common/malloc_elem.c
This will be helpful down the line when we implement support for
allocating physically contiguous memory. We can no longer guarantee
physically contiguous memory unless we're in IOVA_AS_VA mode, but
we can certainly try and see if we succeed. In addition, this would
be useful for e.g. PMD's who may
This adds a "--legacy-mem" command-line switch. It will be used to
go back to the old memory behavior, one where we can't dynamically
allocate/free memory (the downside), but one where the user can
get physically contiguous memory, like before (the upside).
For now, nothing but the legacy behavior
This isn't used anywhere yet, but the support is now there. Also,
adding cleanup to allocation procedures, so that if we fail to
allocate everything we asked for, we can free all of it back.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/eal_memalloc.h | 3 +
lib/librte_eal/lin
rte_fbarray is a simple resizable array, not unlike vectors in
higher-level languages. Rationale for its existence is the
following: since we are going to map memory page-by-page, there
could be quite a lot of memory segments to keep track of (for
smaller page sizes, page count can easily reach tho
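The resizable-array idea behind rte_fbarray can be illustrated with a toy in-memory version (assumed semantics only; the real structure is file-backed and shared between processes): capacity is reserved up front and a bitmap tracks which slots are in use, so index lookups stay O(1) however many segments are registered.

```c
#include <assert.h>
#include <stdint.h>

#define FB_LEN 64               /* capacity reserved up front */

struct fbarray {
    uint64_t used_mask;         /* bit i set => slot i occupied */
    int entries[FB_LEN];
};

/* Store value in the first free slot; returns its index, or -1 if full. */
static int fb_set(struct fbarray *a, int value)
{
    for (int i = 0; i < FB_LEN; i++) {
        if (!(a->used_mask & (UINT64_C(1) << i))) {
            a->used_mask |= UINT64_C(1) << i;
            a->entries[i] = value;
            return i;
        }
    }
    return -1;
}

/* Release a slot so it can be reused. */
static void fb_free(struct fbarray *a, int idx)
{
    a->used_mask &= ~(UINT64_C(1) << idx);
}
```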
This adds a new set of _contig API's to rte_memzone.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/eal_common_memzone.c | 44
lib/librte_eal/common/include/rte_memzone.h | 158
2 files changed, 202 insertions(+)
diff --git a/lib/librte_eal/comm
Nothing uses that code yet. The bulk of it is copied from old
memory allocation stuff (eal_memory.c). We provide an API to
allocate either one page or multiple pages, guaranteeing that
we'll get contiguous VA for all of the pages that we requested.
Signed-off-by: Anatoly Burakov
---
lib/librte_e
We greatly expand the memzone list, which makes some operations faster.
Plus, it's there, so we might as well use it.
As part of this commit, a potential memory leak is fixed (when we
allocate a memzone but there's no room in config, we don't free it
back), and there's a compile fix for ENA driver.
Add a new (non-legacy) memory init path for EAL. It uses the
new dynamic allocation facilities, although it's only being run
at startup.
If no -m or --socket-mem switches were specified, the new init
will not allocate anything, whereas if those switches were passed,
appropriate amounts of pages wo
This adds a new set of _contig API's to rte_malloc.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/include/rte_malloc.h | 181 +
lib/librte_eal/common/rte_malloc.c | 63 ++
2 files changed, 244 insertions(+)
diff --git a/lib/librte_eal/comm
This set of changes enables rte_malloc to allocate and free memory
as needed. The way it works is, first malloc checks if there is
enough memory already allocated to satisfy user's request. If there
isn't, we try and allocate more memory. The reverse happens with
free - we free an element, check it
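The allocate-then-grow flow described in that cover letter can be modeled with a toy heap (illustrative names only, not the real rte_malloc internals): an allocation first tries the space already reserved, and only on failure asks the "OS" for another fixed-size chunk, mirroring how the patchset maps additional hugepages on demand; freeing may let whole chunks be returned.

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK 4096              /* stands in for one hugepage */

struct heap {
    size_t reserved;            /* total space obtained from the OS */
    size_t used;                /* space handed out to callers */
    int grow_count;             /* how many times we had to grow */
};

/* Stands in for mapping one more hugepage; returns 0 on success. */
static int heap_grow(struct heap *h)
{
    h->reserved += CHUNK;
    h->grow_count++;
    return 0;
}

/* Try existing space first; grow only if the request doesn't fit. */
static int malloc_from_heap(struct heap *h, size_t len)
{
    while (h->reserved - h->used < len)
        if (heap_grow(h) != 0)
            return -1;
    h->used += len;
    return 0;
}

/* Freeing may leave a whole chunk unused; give it back to the "OS". */
static void free_to_heap(struct heap *h, size_t len)
{
    h->used -= len;
    while (h->reserved >= CHUNK && h->reserved - h->used >= CHUNK) {
        h->reserved -= CHUNK;   /* stands in for unmapping a page */
        h->grow_count--;
    }
}
```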
No major changes, just add some checks in a few key places, and
a new parameter to pass around.
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/common/eal_common_memzone.c | 16 +++--
lib/librte_eal/common/malloc_elem.c| 105 +++--
lib/librte_eal/common/malloc_
Before, we were aggregating multiple pages into one memseg, so the
number of memsegs was small. Now, each page gets its own memseg,
so the list of memsegs is huge. To accommodate the new memseg list
size and to keep the under-the-hood workings sane, the memseg list
is now not just a single list, bu
Signed-off-by: Anatoly Burakov
---
lib/librte_eal/linuxapp/eal/eal_memalloc.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 13172a0..8b3f219 100755
--- a/lib/librte_eal/linuxapp/eal/e
Currently it is not possible to use memory that is not owned by DPDK to
perform DMA. This scenario might arise in vhost applications (like
SPDK) where the guest sends its own memory table. To fill this gap,
provide an API to allow registering an arbitrary address in the VFIO container.
Signed-off-by: Pawel Wod
If a user has specified that the zone should have contiguous memory,
use the new _contig allocation API's instead of normal ones.
Otherwise, account for the fact that unless we're in IOVA_AS_VA
mode, we cannot guarantee that the pages would be physically
contiguous, so we calculate the memzone size
Hi Zhiyong,
On 11/30/2017 10:46 AM, Zhiyong Yang wrote:
Vhostpci PMD is a new type of driver that works in the guest OS and is
able to drive the vhostpci modern PCI device, which is a new virtio device.
The following link describes the vhostpci design:
An initial device design is presented at KVM Fo
On 19-Dec-17 11:04 AM, Anatoly Burakov wrote:
This patchset introduces a prototype implementation of dynamic memory allocation
for DPDK. It is intended to start a conversation and build consensus on the best
way to implement this functionality. The patchset works well enough to pass all
unit test
On 19-Dec-17 11:04 AM, Anatoly Burakov wrote:
Move get_virtual_area out of linuxapp EAL memory and make it
common to EAL, so that other code could reserve virtual areas
as well.
Signed-off-by: Anatoly Burakov
---
My apologies, this patchset was sent erroneously. Please look at the
version ta