> On Sep 12, 2016, at 6:18 AM, Keren Hochman <keren.hochman at lightcyber.com> 
> wrote:
> 
> Hi,
> I tried to run 2 instances of testpmd from the same machine but received a
> message: "Cannot get hugepage information" when I tried to run the second
> instance. Is there a way to disable hugepages or allow two instances to
> access it? Thanks. keren

When running two or more DPDK application instances, you need to make sure the 
resources are split up correctly. You did not supply the command lines being 
used, but I will try to describe how it is done.

First, memory (hugepages) must be allocated to each instance using 
--socket-mem 128,128, or --socket-mem 128 if you only have one socket in your 
system. Make sure you have enough hugepages reserved (vm.nr_hugepages in 
/etc/sysctl.conf) for both instances. The --socket-mem values are megabytes 
per socket: --socket-mem 128,128 gives 256 MB to one instance, and if the 
second instance uses --socket-mem 256,256 it gets 512 MB, which means the 
system needs enough hugepages to back 256 MB + 512 MB in total.
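
A minimal sketch of what that could look like with 2 MB hugepages (the 1024 
page count and the memory sizes are example values, not requirements):

  # Reserve 1024 x 2 MB hugepages (2 GB) persistently; size this for all instances.
  echo "vm.nr_hugepages = 1024" | sudo tee -a /etc/sysctl.conf
  sudo sysctl -p

  # Instance 1: 128 MB per socket; instance 2: 256 MB per socket.
  sudo ./testpmd --socket-mem 128,128 ...
  sudo ./testpmd --socket-mem 256,256 ...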

Next, the hugepage files in the /dev/hugepages directory must have different 
prefixes, by passing the --file-prefix option with a different prefix to each 
instance. If you have already run a DPDK instance once without the option, 
please 'sudo rm -fr /dev/hugepages/*' to release the current hugepage files.
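
For example (the pmd1/pmd2 prefix names are just illustrations):

  # Run only while no DPDK application is using the hugepages.
  sudo rm -fr /dev/hugepages/*

  sudo ./testpmd --file-prefix pmd1 ...   # instance 1
  sudo ./testpmd --file-prefix pmd2 ...   # instance 2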

Next, you need to blacklist ports using the -b option on the command line, 
giving the PCI addresses of the ports not used by that instance. Each instance 
needs to blacklist the ports the other one is using. This seems the easiest 
approach to me, but you could look into using the whitelist (-w) option as well.
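
A sketch with two ports (the PCI addresses 0000:03:00.0 and 0000:03:00.1 are 
made up; substitute your own, e.g. from lspci):

  # Instance 1 keeps 0000:03:00.0, so it blacklists the other port.
  sudo ./testpmd -b 0000:03:00.1 ...
  # Instance 2 does the reverse.
  sudo ./testpmd -b 0000:03:00.0 ...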

Next, make sure you allocate different cores to each instance using the -c 
(core mask) or -l (core list) option; the -l option is a bit easier to read, IMO.
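
For instance, on an 8-core machine (the core numbers are examples):

  sudo ./testpmd -l 0-3 ...   # instance 1 on cores 0-3
  sudo ./testpmd -l 4-7 ...   # instance 2 on cores 4-7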

Next, use --proc-type auto in both instances just to be safe; it may be 
optional.
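
Putting it all together, the two command lines might look like this (memory 
sizes, prefixes, core lists, and PCI addresses are all example values):

  sudo ./testpmd -l 0-3 --socket-mem 128,128 --file-prefix pmd1 \
       --proc-type auto -b 0000:03:00.1 -- -i
  sudo ./testpmd -l 4-7 --socket-mem 256,256 --file-prefix pmd2 \
       --proc-type auto -b 0000:03:00.0 -- -i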

I hope this helps. You can also pull down Pktgen and look at the 
pktgen-master.sh and pktgen-slave.sh scripts and modify them for your needs: 
http://dpdk.org/download

Regards,
Keith