On 12/10/2010 09:19 AM, Richard Treumann wrote:
It seems to me the MPI_Get_processor_name description is too ambiguous
to make this 100% portable. I assume most MPI implementations simply
use the hostname, so all processes on the same host will return the
same string. The suggestion would work then.
However, it would also be reasonable for an MPI that did processor
binding to return "hostname.socket#.core#" so that every rank would have
a unique processor name.
Fair enough. However, I think it is a lot more stable than grabbing
information from the bowels of the runtime environment. Of course, one
could just call the appropriate system call to get the hostname, if you
are on the right type of OS/architecture :-).
The extension idea is a bit at odds with the idea that MPI is an
architecture independent API. That does not rule out the option if
there is a good use case but it does raise the bar just a bit.
Yeah, that is kind of the rub, isn't it? There are enough architectural
differences out there that it might be difficult to come to an agreement
on the elements of locality you should focus on. It would be nice if
there were some sort of distance value assigned to each peer a process
has. Of course, then you still have the problem of figuring out what
distance you really want to base your grouping on.
--td
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
From: Ralph Castain <r...@open-mpi.org>
To: Open MPI Users <us...@open-mpi.org>
Date: 12/10/2010 08:00 AM
Subject: Re: [OMPI users] Method for worker to determine its "rank"
on a single machine?
Sent by: users-boun...@open-mpi.org
------------------------------------------------------------------------
Ick - I agree that's portable, but truly ugly.
Would it make sense to implement this as an MPI extension, and then
perhaps propose something to the Forum for this purpose?
I just hate to see such a complex, time-consuming method when the info
is already available on every process.
On Dec 10, 2010, at 3:36 AM, Terry Dontje wrote:
A more portable way of doing what you want below is to gather each
process's processor name, as given by MPI_Get_processor_name, have the
root that gathers this data assign a unique number to each distinct name,
and then scatter those numbers back so each process can use its number as
the color in an MPI_Comm_split call. Once you've done that, you can call
MPI_Comm_size on the new communicator to find how many processes are on
the node, and you can send to all the other processes on that node using
that communicator.
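A minimal sketch of that recipe, using only standard MPI calls, might look
like the following (the variable and communicator names are just
illustrative, not anything a particular MPI provides):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Zero the buffer so the unused tail compares cleanly after the gather. */
    memset(name, 0, sizeof(name));
    MPI_Get_processor_name(name, &len);

    /* Root gathers every process's processor name. */
    char *all = NULL;
    if (rank == 0)
        all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Gather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
               all, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* Root assigns the same number (color) to identical names... */
    int *colors = NULL;
    if (rank == 0) {
        colors = malloc(size * sizeof(int));
        int ncolors = 0;
        for (int i = 0; i < size; i++) {
            colors[i] = ncolors;
            for (int j = 0; j < i; j++) {
                if (strcmp(all + j * MPI_MAX_PROCESSOR_NAME,
                           all + i * MPI_MAX_PROCESSOR_NAME) == 0) {
                    colors[i] = colors[j];
                    break;
                }
            }
            if (colors[i] == ncolors)
                ncolors++;
        }
    }

    /* ...and scatters one color back to each process. */
    int color;
    MPI_Scatter(colors, 1, MPI_INT, &color, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Processes that share a processor name end up in the same communicator. */
    MPI_Comm node_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &node_comm);

    int local_rank, local_size;
    MPI_Comm_rank(node_comm, &local_rank);
    MPI_Comm_size(node_comm, &local_size);
    printf("world rank %d: local rank %d of %d on %s\n",
           rank, local_rank, local_size, name);

    MPI_Comm_free(&node_comm);
    if (rank == 0) { free(all); free(colors); }
    MPI_Finalize();
    return 0;
}

Using the world rank as the key in MPI_Comm_split keeps the per-node
ordering consistent with the ordering in MPI_COMM_WORLD.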
Good luck,
--td
On 12/09/2010 08:18 PM, Ralph Castain wrote:
The answer is yes - sort of...
In Open MPI, every process has information not only about its own local
rank, but also about the local rank of all its peers, regardless of what
node they are on. We use that info internally for a variety of things.
Now the "sort of". That info isn't exposed via an MPI API at this
time. If that doesn't matter, then I can tell you how to get it - it's
pretty trivial to do.
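Ralph doesn't spell out the mechanism here, but one likely route (an Open
MPI implementation detail rather than anything in the MPI standard, so the
variable names below are an assumption about how the job was launched) is
to read the environment variables that Open MPI's mpirun exports to each
process:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Exported by Open MPI's launcher; not available under other MPIs. */
    const char *lrank = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    const char *lsize = getenv("OMPI_COMM_WORLD_LOCAL_SIZE");

    if (lrank && lsize)
        printf("local rank %s of %s on this node\n", lrank, lsize);
    else
        printf("not launched by Open MPI's mpirun, or too old a version\n");
    return 0;
}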
On Dec 9, 2010, at 6:14 PM, David Mathog wrote:
Is it possible through MPI for a worker to determine:
1. how many MPI processes are running on the local machine
2. within that set its own "local rank"
?
For instance, a quad core with 4 processes might be hosting ranks 10,
14, 15, 20, in which case the "local ranks" would be 1, 2, 3, 4. The idea
is to use this information so that a program can selectively access
different local resources. Simple example: on this 4-worker machine
reside telephone directories for Los Angeles, San Diego, San Jose, and
Sacramento. Each worker is to open one database and search it when the
master sends a request. With the "local rank" number this would be as
easy as naming the databases file1, file2, file3, and file4. Without it
the 4 processes would have to communicate with each other somehow to
sort out which is to use which database. And that could get ugly fast,
especially if they don't all start at the same time.
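Assuming a worker has obtained its per-node rank by one of the routes
above, the database selection described here reduces to something like this
hypothetical helper (the file names follow the file1 ... file4 example):

#include <stdio.h>

/* Map a zero-based local rank onto file1, file2, file3, file4. */
FILE *open_local_database(int local_rank)
{
    char path[64];
    snprintf(path, sizeof(path), "file%d", local_rank + 1);
    return fopen(path, "r");
}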
Thanks,
David Mathog
mathog@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
--
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.dontje@oracle.com
--
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.dontje@oracle.com