Your configure options look fine.
Getting 1 process assigned to each core (irrespective of HT on or off):
--map-by core --bind-to core
This will tight-pack the processes, i.e., they will be placed on each
successive core. If you want to balance the load across the allocation (if the
#procs < #cores) ...
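A minimal sketch of the two cases, assuming Open MPI 1.8-style options (the
process counts and the executable name ./my_app are placeholders):

  # tight-pack: successive ranks are bound to successive cores
  mpirun -np 16 --map-by core --bind-to core ./my_app

  # spread ranks across sockets instead, still binding each rank to one core
  mpirun -np 8 --map-by socket --bind-to core ./my_app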
I’m afraid not. The MPI job would not be very happy to suddenly lose some nodes
during execution, and relocating MPI processes during execution is something we
don’t currently support.
There is work underway to integrate the RM more fully into that procedure so it
could tell the MPI job to chec...
Can the RM ask for deallocation of some nodes?
For example, mpirun asks the RM which resources are available (say node1,
node2, node3) and spawns orted on those nodes. After some time during the
run, can the RM ask to deassign node3, or to reassign the jobs on node3 to
node4?
Cheers,
Federico R
Thanks for the responses.
The idea is to bind one process per processor. The actual problem that
prompted the investigation is that a job run with 1.4.2 finishes in 59 minutes,
while the same job in 1.6.4 and 1.8.4 takes 79 minutes on the same machine,
same compiler, etc. In trying to track down the ...
Actually, I believe from the cmd line that the questioner wanted each process
to be bound to a single core.
From your output, I’m guessing you have hyperthreads enabled on your system,
yes? In that case, the 1.4 series is likely to be binding each process to a
single HT because it isn’t sophisticated enough to ...
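As a hedged illustration of how to see what each version actually does (the
executable name ./my_app is a placeholder), mpirun's --report-bindings option
prints where every rank is bound at launch, and the 1.8 series lets you pick
the binding target explicitly:

  # report the binding of each rank; one rank per core
  mpirun -np 16 --report-bindings --bind-to core ./my_app

  # bind each rank to a single hyperthread instead (1.8-series option)
  mpirun -np 16 --report-bindings --bind-to hwthread ./my_app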
Bug: it should be "span,pe=2".
2015-04-10 15:28 GMT+02:00 Nick Papior Andersen:
> I guess you want process #1 to have core 0 and core 1 bound to it, and
> process #2 to have core 2 and core 3 bound?
>
> I can do this with the following (I do it with 1.8.4; I do not think it works
> with 1.6.x):
> --map-by ppr:4:socket:span:pe=2
I guess you want process #1 to have core 0 and core 1 bound to it, and
process #2 to have core 2 and core 3 bound?
I can do this with the following (I do it with 1.8.4; I do not think it works
with 1.6.x):
--map-by ppr:4:socket:span:pe=2
ppr = processes per resource.
socket = the resource
span = load balance the processes across the allocation
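Putting that together, a sketch of the full command under one plausible
reading of the "span,pe=2" correction above (./my_app and the total rank
count are placeholders):

  # 4 ranks per socket, 2 processing elements (cores) bound to each rank,
  # load balanced across the allocation; --report-bindings shows the result
  mpirun -np 8 --map-by ppr:4:socket:span,pe=2 --report-bindings ./my_app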
We can't seem to get "processor affinity" using 1.6.4 or newer OpenMPI.
Note this is a 2-socket machine with 8 cores per socket.
We had compiled OpenMPI 1.4.2 with the following configure options:
===
export CC=/apps/share/