For the web archives...

Brock and I talked about this in person at SC.  The conversation was much more 
involved than this seemingly-simple question implied.  :-)

The short version is:

- numactl does both memory and processor binding
- hwloc is the new numactl :-)
  - e.g., see the hwloc-bind(1) command
- OMPI does both memory and processor binding
- OMPI 1.5.5 will have an MCA parameter for process-wide memory binding policy
- Torque cpusets probably do what is desired: restrict MPI processes to a 
subset of the processors on a given server (e.g., if multiple Torque jobs are 
running on the same server)
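
For the archives, here's a quick sketch of how the tools above compare on the 
command line.  The numactl/hwloc-bind/mpirun commands are real; the exact node 
and socket numbers are illustrative, and you should check your local man pages 
for the options available in your installed versions:

```shell
# numactl: bind a process to NUMA node 0's CPUs and memory
numactl --cpunodebind=0 --membind=0 ./my_app

# hwloc-bind: roughly the same idea, using hwloc's topology objects
# (binds the process's CPUs to the first socket)
hwloc-bind socket:0 -- ./my_app

# Open MPI 1.5.x: bind each MPI process to a core at launch
mpirun --bind-to-core -np 4 ./my_app
```

The point being: hwloc-bind and mpirun's binding options cover the same 
territory as numactl, so you generally pick one layer to do the binding 
rather than stacking them.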


On Nov 9, 2011, at 1:46 PM, Brock Palen wrote:

> Question,
> If we are using torque with TM with cpusets enabled for pinning should we not 
> enable numactl?  Would they conflict with each other?
> 
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
> 
> 
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/