Hi Bruce,
For the heck of it I tried building the DPDK .a and .h files with rte_config.h 
changes to these parameters.

1st build- change RTE_MAX_MEMSEG_PER_TYPE:
/*-orig: #define RTE_MAX_MEMSEG_PER_TYPE 32768 */
#define RTE_MAX_MEMSEG_PER_TYPE 16384         <<<< reduced by 1/2

Then I ran ninja clean and ninja, copied the .h and .a files to our 
application build environment, and rebuilt our application.
Htop showed our application process VIRT value did not change, still 36.6G.
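
As a quick cross-check of the htop numbers, VIRT and RES can also be read 
straight from /proc (a generic Linux sketch, not DPDK-specific; it uses the 
shell's own PID here, substitute the application's PID):

```shell
#!/bin/sh
# VmSize is the virtual address-space reservation (htop's VIRT) and
# VmRSS the resident memory (htop's RES), both in /proc/<pid>/status.
PID=$$   # placeholder: use the DPDK application's PID instead
VIRT_KB=$(awk '/^VmSize:/ {print $2}' "/proc/$PID/status")
RES_KB=$(awk '/^VmRSS:/ {print $2}' "/proc/$PID/status")
echo "VIRT=${VIRT_KB} kB RES=${RES_KB} kB"
```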

2nd build- change RTE_MAX_MEMSEG_LISTS and RTE_MAX_MEMSEG_PER_TYPE together:
/*-orig: #define RTE_MAX_MEMSEG_LISTS 128 */
#define RTE_MAX_MEMSEG_LISTS 64                    <<<< reduced by 1/2
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
/*-orig: #define RTE_MAX_MEMSEG_PER_TYPE 32768 */
#define RTE_MAX_MEMSEG_PER_TYPE 16384    <<<< reduced by 1/2
#define RTE_MAX_MEM_MB_PER_TYPE 65536

Htop showed our application process VIRT value did not change, still 36.6G.  
RES value also did not change.
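
One possible explanation (my assumption; the actual sizing logic lives in 
EAL's memory init and may differ between versions) is that the per-type 
reservation is capped by whichever limit is smaller, and with 1G hugepages 
RTE_MAX_MEM_MB_PER_TYPE is the binding one, so halving 
RTE_MAX_MEMSEG_PER_TYPE alone changes nothing:

```shell
#!/bin/sh
# Assumed sizing rule (simplified): per-type VA reservation is roughly
#   min(RTE_MAX_MEM_MB_PER_TYPE, RTE_MAX_MEMSEG_PER_TYPE * hugepage_size_MB)
HUGEPAGE_MB=1024            # 1G hugepages
MEMSEG_PER_TYPE=16384       # the halved value from the 2nd build
MEM_MB_PER_TYPE=65536       # left at its default

seg_limit_mb=$((MEMSEG_PER_TYPE * HUGEPAGE_MB))   # 16777216 MB
if [ "$seg_limit_mb" -lt "$MEM_MB_PER_TYPE" ]; then
  cap_mb=$seg_limit_mb
else
  cap_mb=$MEM_MB_PER_TYPE   # the MB cap binds, not the segment count
fi
echo "effective per-type cap: ${cap_mb} MB"
```

If that assumption holds, RTE_MAX_MEM_MB_PER_TYPE (and the global 
RTE_MAX_MEM_MB) would be the values to reduce, not the segment counts.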

I also tried the same as above but with --legacy-mem removed from the EAL init 
arguments, and VIRT jumped up to 99G.  So --legacy-mem is a must for us.
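
For reference, a minimal sketch of an EAL argument list along these lines 
(the application name, lcore list, and --socket-mem size are placeholders, 
not our real values):

```shell
#!/bin/sh
# Sketch: with --legacy-mem, EAL pre-maps only the boot-time hugepages
# instead of reserving large dynamic VA ranges (the 99G VIRT case).
EAL_ARGS="-l 0-3 --socket-mem 2048 --legacy-mem"
echo "$EAL_ARGS"
# ./myapp $EAL_ARGS -- <application args>   # actual launch, not run here
```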

Our design is 5-6 years mature, and we had to upgrade our DPDK version to 
support the Intel E810 NIC.  Our boot-up (not grub, but an early script) first 
allocates the 2 x 1G hugepages on Oracle Linux 9 and 1024 x 2M on CentOS 7.  
We do not change hugepages once we boot up and go into DPDK mode.
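
That early-boot reservation can be sketched like this (paths and counts 
assumed; the hugepages-1048576kB directory only exists on kernels/CPUs with 
1G page support, and the writes require root):

```shell
#!/bin/sh
# Sketch of an early-boot hugepage reservation script.
SYS=/sys/kernel/mm/hugepages
NR_1G=2        # 2 x 1G, as on the Oracle Linux 9 systems
NR_2M=1024     # 1024 x 2M, as on the CentOS 7 systems

reserve() {                      # reserve <size-subdir> <count>
  f="$SYS/$1/nr_hugepages"
  echo "reserving $2 pages via $f"
  if [ -w "$f" ]; then           # skipped when not root / not supported
    echo "$2" > "$f"
  fi
}
reserve hugepages-1048576kB "$NR_1G"
reserve hugepages-2048kB    "$NR_2M"
```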


I was hoping that changing the MEMSEG values in rte_config.h would be a 
viable path.


Regards,
Ed


-----Original Message-----
From: Morten Brørup <m...@smartsharesystems.com> 
Sent: Friday, May 3, 2024 2:58 PM
To: Bruce Richardson <bruce.richard...@intel.com>; Lombardo, Ed 
<ed.lomba...@netscout.com>
Cc: Dmitry Kozlyuk <dmitry.kozl...@gmail.com>; dev@dpdk.org; 
anatoly.bura...@intel.com
Subject: RE: Need help with reducing VIRT memory


> From: Bruce Richardson [mailto:bruce.richard...@intel.com]
> Sent: Friday, 3 May 2024 17.52
> 
> On Fri, May 03, 2024 at 04:27:39PM +0100, Bruce Richardson wrote:
> > On Fri, May 03, 2024 at 02:48:12PM +0000, Lombardo, Ed wrote:
> > > Hi Dmitry,
> > > I am not clear on the DPDK memory layout and how to tweak these
> > > #define values.
> > >
> > > #define RTE_MAX_MEMSEG_PER_LIST 8192
> > > #define RTE_MAX_MEM_MB_PER_LIST 32768
> > > #define RTE_MAX_MEMSEG_PER_TYPE 32768
> > > #define RTE_MAX_MEM_MB_PER_TYPE 65536
> > >
> > > I want to limit how much DPDK grabs for memory, but grab only what it
> > > absolutely needs for our application.
> >
> > Hi,
> >
> > This is what DPDK does. What is being shown in the VIRT figures is
> > the address space reservation DPDK has made, but not what memory it
> > actually uses. Only sufficient hugepage memory to meet the demands of
> > your app should be mapped by DPDK; the rest is unused address space
> > that is not taking up any actual memory.
> >

On DPDK 17.11, DPDK allocates the amount of hugemem specified on the command 
line (--socket-mem <megabytes>).
E.g. if a hardware appliance with 8 GB RAM running Linux has been boot-time 
configured with 6 GB of hugemem (e.g. 4 x 1 GB gigantic hugepages and 
1024 x 2 MB hugepages), and DPDK allocates all 6 GB of hugemem at EAL init, no 
hugepages will be available for other applications, regardless of whether DPDK 
actually uses this memory or not.
This is especially relevant for "embedded" systems with only FLASH and RAM, and 
no swap.

I don't know if other DPDK versions behave differently. I haven't looked into 
this in detail.
I certainly hope recent DPDK versions don't assume swap is available and 
blindly allocate obscene amounts of memory.

Configuring overcommitted hugepages might help with the problem of DPDK 
allocating all available hugepages:

Instead of reserving a fixed number of hugepages at boot time by setting 
nr_hugepages, set nr_overcommit_hugepages to allow applications to dynamically 
allocate hugepages; the number of hugepages in the system then grows and 
shrinks with the amount of hugemem allocated by applications. Just set it to a 
sufficiently large number.
Please note that the Linux kernel does not support overcommitting 1 GB gigantic 
hugepages, only 2 MB hugepages.
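
A sketch of that suggestion (the count is a placeholder; as noted, only the 
2 MB page size supports overcommit, and the write requires root):

```shell
#!/bin/sh
# Sketch: allow up to 4096 additional 2M hugepages to be allocated on
# demand, instead of reserving them all up front via nr_hugepages.
OVERCOMMIT_2M=4096   # "sufficiently large" is workload-dependent
F=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages
echo "setting nr_overcommit_hugepages to ${OVERCOMMIT_2M}"
if [ -w "$F" ]; then   # skipped when not root
  echo "$OVERCOMMIT_2M" > "$F"
fi
```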

> By way of illustration, here is the memory output for a testpmd 
> process on my system. I got this by running "top -b -p <testpmd-PID>"
> 
>     PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+
> COMMAND
> 2336969 bruce     20   0  256.2g  26432  19712 S  93.8   0.0   5:28.13
> dpdk-testpmd
> 
> If we look at the memory relevant columns, indeed VIRT shows a huge 
> value - 256G in my case. However, the actual RAM used by testpmd is 
> given in the "RES" (resident??) column, showing that testpmd actually 
> is only using 26,432kB of memory in this instance, of which 19,712kB 
> is shared memory (mostly hugepages). In fact, testpmd actually has 
> even more hugepage memory than that mapped into it, but they must not 
> be actually in use.
> [Anatoly,
> can you confirm that this would be the case when using vfio-pci i.e. 
> no physical addresses to query?]
> 
> Regards,
> /Bruce
