On Aug 24, 2005, at 10:27 PM, Troy Benjegerdes wrote:

Processor affinity is now implemented. You must ask for it via the MCA
param "mpi_paffinity_alone".  If this parameter is set to a nonzero
value, OMPI will assume that its job is alone on the nodes that it is
running on, and, if you have not oversubscribed the node, will bind MPI
processes to processors, starting with processor ID 0 (i.e.,
effectively binding MPI processes to the processor number equivalent
to their relative VPID on that node).
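
A quick way to check that the binding took effect is to have each rank print
the CPUs it is allowed to run on. A minimal sketch, assuming Linux and glibc's
sched_getaffinity(); the program name and the mpirun invocation in the comment
are only examples, not anything shipped with OMPI:

#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <mpi.h>

/* Each rank reports its allowed CPU set so the effect of
 * mpi_paffinity_alone can be verified, e.g.:
 *   mpirun -mca mpi_paffinity_alone 1 -np 2 ./paffinity_check
 */
int main(int argc, char **argv)
{
    int rank;
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        printf("rank %d bound to CPUs:", rank);
        for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu) {
            if (CPU_ISSET(cpu, &mask)) {
                printf(" %d", cpu);
            }
        }
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}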

Any thoughts on how to support NUMA with something like this? On the
dual Opteron w/DDR IB systems I've got, I'm seeing a big performance
difference that primarily depends on which node the memory is on.

I take it from this that you have activated the processor affinity stuff? I'm not well-versed in how Opterons work, but don't they allocate memory on a first-touch basis? I.e., a page ends up local to the processor that first uses it, so malloc() effectively returns memory local to the calling process? If so, the processor affinity stuff is invoked way at the beginning of time, before 99% of the mallocs in OMPI happen, so that *should* be taken care of naturally...
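
To make the first-touch behavior concrete, here is a minimal sketch (assuming
Linux with libnuma installed, linked with -lnuma; nothing OMPI-specific):
allocate a buffer, touch it after the process has been bound, and ask the
kernel which node the pages actually landed on:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <numa.h>
#include <numaif.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    size_t len = 64 * 1024 * 1024;
    char  *buf = malloc(len);

    /* First touch: the pages are physically allocated here, on the
     * NUMA node local to the processor this process is bound to. */
    memset(buf, 0, len);

    /* Ask the kernel where the first page actually ended up. */
    int node = -1;
    if (get_mempolicy(&node, NULL, 0, buf, MPOL_F_NODE | MPOL_F_ADDR) == 0) {
        printf("first page of buffer is on NUMA node %d\n", node);
    }

    free(buf);
    return 0;
}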

Are you seeing something different?

I'm also working on a memory affinity framework, but that's really for explicit shared memory operations on NUMA machines (e.g., shared memory collectives, where we want to control the physical location of pages in an mmap'ed chunk of memory that is shared between multiple processes).
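
Roughly the kind of page placement that framework is after, as a sketch only
(assuming Linux with libnuma, linked with -lnuma; the two-node split and the
sizes are purely illustrative, not the actual OMPI code):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <numa.h>
#include <numaif.h>

int main(void)
{
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA machine with at least 2 nodes\n");
        return 1;
    }

    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    size_t len  = 1024 * page;

    /* Shared anonymous region, as would be handed to several processes
     * participating in a shared memory collective. */
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Put the first half of the region on node 0 and the second half
     * on node 1 (illustrative placement only). */
    unsigned long node0 = 1UL << 0;
    unsigned long node1 = 1UL << 1;
    mbind(region,           len / 2, MPOL_BIND, &node0, sizeof(node0) * 8, 0);
    mbind(region + len / 2, len / 2, MPOL_BIND, &node1, sizeof(node1) * 8, 0);

    /* Touching the pages now allocates them according to the policy. */
    memset(region, 0, len);

    munmap(region, len);
    return 0;
}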

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
