Let me try to clarify. If you launch a job that has only 1 or 2 processes in it 
(total), then we bind to core by default. This is done because a job that small 
is almost always some kind of benchmark.

If there are more than 2 processes in the job (total), then we default to 
binding to NUMA domains (if NUMA domains are present; otherwise, to socket) 
across the entire job.
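
For example, a quick way to see what the default policy gives you is to add 
--report-bindings (the process counts and the application name here are just 
placeholders):

  # 2 processes total: bound to core by default
  mpirun -np 2 --report-bindings ./my_app

  # more than 2 processes total: bound to NUMA domain
  # (or to socket, if no NUMA domains are present) by default
  mpirun -np 16 --report-bindings ./my_app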

You can always override these behaviors.
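
If you want to override, here is a sketch of an explicit --bind-to on the 
mpirun command line (again, the process count and application name are 
placeholders):

  # force core binding even for a larger job
  mpirun -np 16 --bind-to core ./my_app

  # or turn binding off entirely
  mpirun -np 16 --bind-to none ./my_app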

> On Apr 9, 2017, at 3:45 PM, Reuti <re...@staff.uni-marburg.de> wrote:
> 
>>> But I can't see binding by core for a number of processes <= 2. Does that 
>>> mean 2 per node or 2 overall for the `mpiexec`? 
>> 
>> It’s 2 processes overall
> 
> With a round-robin allocation in the cluster, this might not be what was 
> intended (binding only one or two cores per exechost)?
> 
> Obviously the default changes (from --bind-to core to --bind-to socket) 
> depending on whether I compiled Open MPI with or without libnuma (I only 
> wanted to get rid of the warning in the output; now it works). But I could 
> also use "--bind-to core" without libnuma and it worked; I just got the 
> additional warning that the memory couldn't be bound.
> 
> BTW: I always had to use -ldl when using `mpicc`. Now that I compiled in 
> libnuma, that necessity is gone.
> 
> -- Reuti
