Let me add a little more color to what Gilles stated.

First, you should probably upgrade to the latest v4.1.x release: v4.1.4.  It 
has a bunch of bug fixes compared to v4.1.0.

Second, you should know that it is relatively uncommon to run HPC/MPI apps 
inside VMs because the virtualization infrastructure will -- by definition -- 
decrease your overall performance.  This is usually counter to the goal of 
writing/running HPC applications.  If you do run HPC/MPI applications in VMs, 
it is strongly recommended that you pin the VM's virtual cores to dedicated 
physical cores to minimize the performance loss.
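
For example, with a libvirt/KVM guest (assuming that is your hypervisor; the 
domain name "hpc-vm" below is just a placeholder), each vCPU can be pinned to 
its own physical core, roughly like this:

    # pin vCPU 0 of guest "hpc-vm" to physical core 0, vCPU 1 to core 1, etc.
    virsh vcpupin hpc-vm 0 0
    virsh vcpupin hpc-vm 1 1

Other hypervisors have equivalent CPU-affinity controls.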

By default, Open MPI maps MPI processes by core when deciding how many 
processes to place on each machine (and also deciding how to bind them).  For 
example, Open MPI looks at a machine and sees that it has N cores, and (by 
default) maps N MPI processes to that machine.  You can change Open MPI's 
defaults to map by hardware thread ("Hyperthread" in Intel parlance) instead 
of by core, but conventional wisdom is that math-heavy processes don't 
perform well with the limited resources of a single hardware thread, and 
benefit from the full resources of the core (this depends on your specific app, 
of course -- YMMV).  Intel's and AMD's hardware threads have gotten better over 
the years, but I think they still represent a division of resources in the 
core, and will likely still be performance-detrimental to at least some classes 
of HPC applications.  It's a surprisingly complicated topic.
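
If you do want to experiment with mapping/binding by hardware thread, a sketch 
of the relevant mpirun options looks something like this ("./my_app" is a 
placeholder for your application):

    # default-ish behavior: one process per core, bound to cores
    mpirun --map-by core --bind-to core ./my_app

    # treat each hardware thread as a slot and bind to individual hwthreads
    mpirun --use-hwthread-cpus --map-by hwthread --bind-to hwthread ./my_app

Note that with --use-hwthread-cpus, the default process count per node becomes 
the number of hardware threads rather than the number of cores.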

In the v4.x series, note that you can use "mpirun --report-bindings ..." to see 
exactly where Open MPI thinks it has bound each process.  Note that this 
binding occurs before each MPI process starts; it's nothing that the 
application itself needs to do.
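
For example ("./my_app" is again a placeholder):

    mpirun --report-bindings -np 4 ./my_app

Each process's binding is reported before your application's output appears.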

--
Jeff Squyres
jsquy...@cisco.com
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Gilles Gouaillardet 
via users <users@lists.open-mpi.org>
Sent: Tuesday, September 13, 2022 9:07 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Gilles Gouaillardet <gilles.gouaillar...@gmail.com>
Subject: Re: [OMPI users] Hardware topology influence

Lucas,

the number of MPI tasks started by mpirun is either:
 - explicitly passed via the command line (e.g. mpirun -np 2306 ...), or
 - equal to the number of available slots, and this value is either
     a) retrieved from the resource manager (such as a SLURM allocation)
     b) explicitly set in a machine file (e.g. mpirun -machinefile 
<machinefile> ...) or on the command line
          (e.g. mpirun --host host0:96,host1:96 ...) (see the example below)
     c) if none of the above is set, the number of cores detected on the system
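
A minimal machine file sketch for case b) (host names, slot counts, and the 
application binary are placeholders):

    $ cat my_hostfile
    host0 slots=96
    host1 slots=96
    $ mpirun -machinefile my_hostfile ./my_app

With no -np, no resource manager, and no machine file, mpirun falls back to 
case c) and launches one process per detected core.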

Cheers,

Gilles

On Tue, Sep 13, 2022 at 9:23 PM Lucas Chaloyard via users 
<users@lists.open-mpi.org<mailto:users@lists.open-mpi.org>> wrote:
Hello,

I'm working as a research intern in a lab where we're studying virtualization, 
and I've been working with several benchmarks using Open MPI 4.1.0 (ASKAP, GPAW, 
and Incompact3d from the Phoronix Test Suite).

To briefly explain my experiments, I'm running those benchmarks on several 
virtual machines using different topologies.
During one experiment I've been comparing these two topologies:
- Topology1: 96 vCPUs divided into 96 sockets containing 1 thread each
- Topology2: 96 vCPUs divided into 48 sockets containing 2 threads each (i.e., 
using hyperthreading)

For the ASKAP benchmark:
- While using Topology2, 2306 processes are created by the application to do 
its work.
- While using Topology1, 4612 processes are created by the application to do 
its work.
The same thing happens when running the GPAW and Incompact3d benchmarks.

What I've been wondering (and looking for) is: does Open MPI take the topology 
into account and reduce the number of processes created to do its work, in 
order to avoid the use of hyperthreading?
Or is that something done by the application itself?

I was looking at the source code, trying to find how and when the information 
about the MPI_COMM_WORLD communicator is filled in, to see if the 'num_procs' 
field depends on the topology, but I haven't had any luck so far.

Respectfully, Chaloyard Lucas.
