Hi, I am running DPDK 2.0.0 on a RHEL 6.4 VM. I have 512 2 MB hugepages specified in my GRUB configuration file.
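For reference, the reservation is just the standard hugepage parameters appended to the kernel line in /boot/grub/grub.conf; the relevant part looks roughly like the line below (the exact entry may differ, and hugepagesz/default_hugepagesz can be omitted since 2 MB is the x86_64 default):

    default_hugepagesz=2M hugepagesz=2M hugepages=512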
When the EAL maps hugepages for my application, it creates them only for socket 0, which is not what I want; I would like to use sockets 1 and 2. I tried providing the --socket-mem parameter like so: --socket-mem=0,256,256, but to no avail. I see that the mappings created for the hugepages in /proc/self/numa_maps are all N0=1, which explains why hugepage_info reports socket_id = 0.

This is what I get when I run numactl --hardware:

    available: 1 nodes (0)
    node 0 cpus: 0 1 2 3
    node 0 size: 8191 MB
    node 0 free: 5076 MB
    node distances:
    node   0
      0:  10

Here is my lscpu output:

    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                4
    On-line CPU(s) list:   0-3
    Thread(s) per core:    1
    Core(s) per socket:    1
    Socket(s):             4
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 37
    Stepping:              1
    CPU MHz:               2194.711
    BogoMIPS:              4389.42
    Hypervisor vendor:     VMware
    Virtualization type:   full
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              20480K
    NUMA node0 CPU(s):     0-3

I would like to understand why hugepage mapping happens only on socket 0, and how I can make it map to the other sockets as well. Your assistance is much appreciated.

Thanks,

Michal Dorsett
Developer, Strategic IP Group
Desk: +972 962 4350
Mobile: +972 50 771 6689
Verint Cyber Intelligence
www.verint.com
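P.S. For completeness, here is a minimal sketch of the checks that show what the guest actually exposes. The sysfs paths are the kernel's standard per-node hugepage accounting; the PID, binary name and core mask in the EAL example at the end are only placeholders, not my actual invocation.

    # NUMA nodes the guest kernel exposes (currently only node0 shows up)
    ls -d /sys/devices/system/node/node*

    # 2 MB hugepages reserved on each node
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

    # NUMA placement of a running EAL process's hugepage mappings
    # (the N<node>=<pages> fields mentioned above)
    grep huge /proc/<pid-of-dpdk-app>/numa_maps

    # Example EAL invocation once more than one node is visible to the guest:
    # ./my_dpdk_app -c 0xf -n 4 --socket-mem 0,256,256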