I believe that --map-by core with PE > 1 may have worked at some point in the 
past, but the docs should probably be updated. I took a (very brief) look at 
the code, and re-enabling that particular option would be difficult and not 
really necessary, since one can reproduce the desired pattern with the mapping 
options that are currently supported (see the sketch below).
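
To be concrete, here is the kind of substitute I have in mind (a sketch only, 
not something I have re-tested): per the reasoning quoted below, any mapping 
object that contains enough cores to satisfy PE=N will do, so something like

$ mpirun --map-by socket:PE=2 --report-bindings -n 2 /bin/true

should bind each proc to 2 cores, with the numa and ppr:2:numa variants in the 
quoted message controlling whether successive ranks are spread across NUMA 
regions or packed into the same one.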

> On Nov 21, 2017, at 5:34 AM, Noam Bernstein <noam.bernst...@nrl.navy.mil> wrote:
> 
>> 
>> On Nov 20, 2017, at 7:02 PM, r...@open-mpi.org wrote:
>> 
>> So there are two options here that will work and hopefully provide you with 
>> the desired pattern:
>> 
>> * if you want the procs to go in different NUMA regions:
>> $ mpirun --map-by numa:PE=2 --report-bindings -n 2 /bin/true
>> [rhc001:131460] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]]: [BB/BB/../../../../../../../../../..][../../../../../../../../../../../..]
>> [rhc001:131460] MCW rank 1 bound to socket 1[core 12[hwt 0-1]], socket 1[core 13[hwt 0-1]]: [../../../../../../../../../../../..][BB/BB/../../../../../../../../../..]
>> 
>> * if you want the procs to go in the same NUMA region:
>> $ mpirun --map-by ppr:2:numa:PE=2 --report-bindings -n 2 /bin/true
>> [rhc001:131559] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket 0[core 1[hwt 0-1]]: [BB/BB/../../../../../../../../../..][../../../../../../../../../../../..]
>> [rhc001:131559] MCW rank 1 bound to socket 0[core 2[hwt 0-1]], socket 0[core 3[hwt 0-1]]: [../../BB/BB/../../../../../../../..][../../../../../../../../../../../..]
>> 
>> Reason: the level you are mapping by (e.g., NUMA) must have enough cores in 
>> it to meet your PE=N directive. If you map by core, then there is only one 
>> core in that object.
> 
> Makes sense.  I’ll try that.  However, if I understand your explanation 
> correctly, the docs should probably be changed, because they seem to be 
> suggesting something that will never work.  In fact, would ":PE=N" with N > 1 
> ever work for "--map-by core"?  I guess maybe if you have hyperthreading on, 
> but I’d still argue it’s an unhelpful example, given how rarely 
> hyperthreading is used in HPC.
> 
>                                                               Noam
> 
> 
> Noam Bernstein, Ph.D.
> Center for Materials Physics and Technology
> U.S. Naval Research Laboratory
> T +1 202 404 8628  F +1 202 404 7546
> https://www.nrl.navy.mil

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
