In the OMPI v1.6 series, you can use the processor affinity options.  And you 
can use --report-bindings to show exactly where processes were bound.  For 
example:

-----
% mpirun -np 4 --bind-to-core --report-bindings -bycore uptime
[svbu-mpi056:18904] MCW rank 0 bound to socket 0[core 0]: [B . . .][. . . .]
[svbu-mpi056:18904] MCW rank 1 bound to socket 0[core 1]: [. B . .][. . . .]
[svbu-mpi056:18904] MCW rank 2 bound to socket 0[core 2]: [. . B .][. . . .]
[svbu-mpi056:18904] MCW rank 3 bound to socket 0[core 3]: [. . . B][. . . .]
 05:06:13 up 7 days,  6:57,  1 user,  load average: 0.29, 0.10, 0.03
 05:06:13 up 7 days,  6:57,  1 user,  load average: 0.29, 0.10, 0.03
 05:06:13 up 7 days,  6:57,  1 user,  load average: 0.29, 0.10, 0.03
 05:06:13 up 7 days,  6:57,  1 user,  load average: 0.29, 0.10, 0.03
% 
-----

I bound each process to a single core, and mapped them on a round-robin basis 
by core.  Hence, all 4 processes ended up on their own cores on a single 
processor socket.
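If you want to double-check the binding from inside a rank (independent of what --report-bindings prints), you can ask the OS which CPUs the process is allowed to run on. A minimal sketch using Python's os.sched_getaffinity, which is Linux-only (the filename "check_affinity.py" is just an example):

```python
import os

# Query the set of CPUs the current process is allowed to run on (Linux-only).
# Under "mpirun --bind-to-core" each rank should see exactly one CPU here;
# with no binding, it should see every CPU on the node.
allowed = os.sched_getaffinity(0)
print(f"PID {os.getpid()} may run on CPUs: {sorted(allowed)}")
```

Running this via "mpirun -np 4 --bind-to-core python check_affinity.py" should print a single, distinct CPU number per rank.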

The --report-bindings output shows that this particular machine has 2 sockets, 
each with 4 cores.
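For the case you describe (confining 8 processes to one socket of a multi-socket node), the same approach should work on the v1.6 series: mapping round-robin by core fills the cores of socket 0 before spilling onto socket 1. A sketch, assuming your application is ./my_app (a placeholder name):

```shell
# Map ranks round-robin by core and bind each to its own core (OMPI v1.6 flags).
# On a node with 8-core sockets, ranks 0-7 land on the 8 cores of socket 0.
mpirun -np 8 --bind-to-core -bycore --report-bindings ./my_app
```

Check the --report-bindings output to confirm that all 8 ranks show bindings on socket 0 only.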



On Aug 30, 2012, at 5:37 AM, Zbigniew Koza wrote:

> Hi,
> 
> consider this specification:
> 
> "Curie fat consists in 360 nodes which contains 4 eight cores CPU Nehalem-EX 
> clocked at 2.27 GHz, let 32 cores / node and 11520 cores for the full fat 
> configuration"
> 
> Suppose I would like to run some performance tests just on a single processor 
> rather than 4 of them.
> Is there a way to do this?
> I'm afraid that specifying that I need 1 cluster node with 8 MPI processes
> will result in the OS distributing these 8 processes among the 4
> processors forming the node, and this is not what I'm after.
> 
> Z Koza
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/