Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-14 Thread Jeff Squyres
On Dec 10, 2010, at 11:00 AM, Prentice Bisbal wrote:

>> Would it make sense to implement this as an MPI extension, and then
>> perhaps propose something to the Forum for this purpose?
> 
> I think that makes sense. As core and socket counts go up, I imagine the need 
> for this information will become more common as programmers try to explicitly 
> keep codes on a single socket or node.

Something along these lines has come up as an MPI-3 proposal (from Oak Ridge 
and Sandia, IIRC).  It hasn't gotten a lot of discussion yet, but concerns like 
Dick's were already brought up.  

There seems to be general desire for this kind of functionality both within the 
Forum and among users, so we'll see where it goes...

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Terry Dontje

On 12/10/2010 03:24 PM, David Mathog wrote:
> Ashley Pittman wrote:
>
>> For a much simpler approach you could also use these two environment
>> variables, this is on my current system which is 1.5 based, YMMV of course.
>>
>> OMPI_COMM_WORLD_LOCAL_RANK
>> OMPI_COMM_WORLD_LOCAL_SIZE

However, that doesn't really tell you which MPI_COMM_WORLD ranks are on
the same node as you, I believe.

--td

That is simpler.  It works on OMPI 1.4.3 too:

cat >/usr/common/bin/dumpenv.sh <<EOD
#!/bin/bash
set
EOD

Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Ralph Castain
Sorry - guess I had misunderstood. Yes, if all you want is the local rank of 
your own process, then this will work.

My suggestion was if you wanted the list of local procs, or to know the local 
rank of your peers.


On Dec 10, 2010, at 1:24 PM, David Mathog wrote:

> Ashley Pittman wrote:
>
>> For a much simpler approach you could also use these two environment
>> variables, this is on my current system which is 1.5 based, YMMV of course.
>>
>> OMPI_COMM_WORLD_LOCAL_RANK
>> OMPI_COMM_WORLD_LOCAL_SIZE
> 
> That is simpler.  It works on OMPI 1.4.3 too:
>
> cat >/usr/common/bin/dumpenv.sh <<EOD
> #!/bin/bash
> set
> EOD
> mpirun -np 4 --host monkey01 --mca plm_rsh_agent rsh \
>   /usr/common/bin/dumpenv.sh | grep LOCAL_RANK
> OMPI_COMM_WORLD_LOCAL_RANK=0
> OMPI_COMM_WORLD_LOCAL_RANK=1
> OMPI_COMM_WORLD_LOCAL_RANK=2
> OMPI_COMM_WORLD_LOCAL_RANK=3
> 
> Thanks,
> 
> David Mathog
> mat...@caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread David Mathog
Ashley Pittman wrote:

> For a much simpler approach you could also use these two environment
> variables, this is on my current system which is 1.5 based, YMMV of course.
>
> OMPI_COMM_WORLD_LOCAL_RANK
> OMPI_COMM_WORLD_LOCAL_SIZE

That is simpler.  It works on OMPI 1.4.3 too:

cat >/usr/common/bin/dumpenv.sh <<EOD
#!/bin/bash
set
EOD
mpirun -np 4 --host monkey01 --mca plm_rsh_agent rsh \
  /usr/common/bin/dumpenv.sh | grep LOCAL_RANK
OMPI_COMM_WORLD_LOCAL_RANK=0
OMPI_COMM_WORLD_LOCAL_RANK=1
OMPI_COMM_WORLD_LOCAL_RANK=2
OMPI_COMM_WORLD_LOCAL_RANK=3

Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Ashley Pittman

For a much simpler approach you could also use these two environment variables, 
this is on my current system which is 1.5 based, YMMV of course.

OMPI_COMM_WORLD_LOCAL_RANK
OMPI_COMM_WORLD_LOCAL_SIZE

Actually orte seems to set both OMPI_COMM_WORLD_LOCAL_RANK and 
OMPI_COMM_WORLD_NODE_RANK, I can't see any difference between the two.

Ashley.

On 10 Dec 2010, at 18:25, Ralph Castain wrote:
> 
> So if you wanted to get your own local rank, you would call:
> 
> my_local_rank = orte_ess.proc_get_local_rank(ORTE_PROC_MY_NAME);

-- 

Ashley Pittman, Bath, UK.

Padb - A parallel job inspection tool for cluster computing
http://padb.pittman.org.uk




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Gus Correa

Hi David

For what it is worth, the method suggested by
Terry Dontje and Richard Troutmann is what is used in several 
generations of climate coupled models that we've been using for the

past 8+ years.

The goals are slightly different from yours:
they cut across logical boundaries
(i.e. whose atmosphere, whose ocean, etc.),
whereas you want to cut across physical boundaries
(i.e. belonging to the same computer,
as diffuse as the notion of "same computer" can be these days).

The variants of this technique that I know of
are slightly different from Terry's suggestion:
they don't split the (MPI_COMM_WORLD) communicator,
but create additional sub-communicators instead.
However, the idea is the same.

The upside of this technique, as Terry and Richard point out,
is portability.
These models have been run on IBM Blue Gene machines using the IBM MPI,
on Kraken and Jaguar (Cray XT5 or XT6?) using whatever MPI they
have there, and I can even run them on our modest Beowulf clusters,
using OpenMPI or MVAPICH2, or even MPICH2.
All MPI calls are completely standard, hence the code is portable.
If the code had calls to the "orte" layer
(or to "P4" in the old days of MPICH) for instance, it wouldn't be.

If portability, especially portability across MPI variants, is important
to you, you may consider implementing the functionality you need
this way.

And to the MPI insiders/developers, a plea from a mere user:
Whatever you take to the Forum,
please keep this functionality (creating new communicators, splitting 
old ones, getting processor name, etc) in the standard,

although the extensions suggested by Ralph Castain and Eugene Loh
would be certainly welcome.

Cheers,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-

David Mathog wrote:
>> The answer is yes - sort of...
>>
>> In OpenMPI, every process has information about not only its own local
>> rank, but the local rank of all its peers regardless of what node they
>> are on. We use that info internally for a variety of things.
>>
>> Now the "sort of". That info isn't exposed via an MPI API at this
>> time. If that doesn't matter, then I can tell you how to get it - it's
>> pretty trivial to do.
>
> Please tell me how to do this using the internal information.
>
> For now I will use that to write these functions (which might at some
> point correspond to standard functions, or not)
>
> my_MPI_Local_size(MPI_Comm comm, int *lmax, int *lactual)
> my_MPI_Local_rank(MPI_Comm comm, int *lrank)
>
> These will return N for lmax, a value M in 1->N for lactual, and a value
> in 1->M for lrank, for any worker on a machine corresponding to a
> hostfile line like:
>
> node123.cluster slots=N
>
> As usual, this could get complicated.  There are probably race
> conditions on lactual vs. lrank as the workers start, but I'm guessing
> the lrank to lmax relationship won't have that problem.  Similarly, the
> meaning of "local" is pretty abstract. For now all that is intended is
> "a group of equivalent cores within a single enclosure, where
> communication between them is strictly internal to the enclosure, and
> where all have equivalent access to the local disks and the network
> interface(s)".  Other ways to define "local" might make more sense on
> more complex hardware.
>
> Another function that logically belongs with these is:
>
> my_MPI_Local_list(MPI_Comm comm, int *llist, int *lactual)
>
> I don't need it now, but can imagine applications that would.  This
> would return the (current) lactual value and the corresponding list of
> rank numbers of all the local workers.  The array llist must be of size
> lmax.
>
> Thanks,
>
> David Mathog
> mat...@caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Ralph Castain
There are no race conditions in this data. It is determined by mpirun prior to 
launch, so all procs receive the data during MPI_Init and it remains static 
throughout the life of the job. It isn't dynamically updated at this time (will 
change in later versions), so it won't tell you if a process is sitting in 
finalize, for example.

First, you have to configure OMPI --with-devel-headers to get access to the 
required functions.

If you look at the file orte/mca/ess/ess.h, you'll see functions like

orte_ess.proc_get_local_rank(orte_process_name_t *name)

You can call that function with any process name. In the ORTE world, process 
names are a struct of two fields: a jobid that is common to all processes in 
your application, and a vpid that is the MPI rank. We also have a defined var 
for your own name to make life a little easier.

So if you wanted to get your own local rank, you would call:

#include "orte/types.h"
#include "orte/runtime/orte_globals.h"
#include "orte/mca/ess/ess.h"

my_local_rank = orte_ess.proc_get_local_rank(ORTE_PROC_MY_NAME);

To get the local rank of some other process in the job, you would call:

#include "orte/types.h"
#include "orte/runtime/orte_globals.h"
#include "orte/mca/ess/ess.h"

orte_process_name_t name;

name.jobid = ORTE_PROC_MY_NAME->jobid;
name.vpid = <vpid (MPI rank) of the process you are asking about>;

his_local_rank = orte_ess.proc_get_local_rank(&name);

The node rank only differs from the local rank when a comm_spawn has been 
executed. If you need that capability, I can explain the difference - for now, 
you can ignore that function.

I don't currently provide the max number of local procs to each process or a 
list of local procs, but can certainly do so - nobody had a use for it before. 
Or you can construct those pieces of info fairly easily from data you do have. 
What you would do is loop over the get_proc_locality call:

#include "opal/mca/paffinity/paffinity.h"
#include "orte/types.h"
#include "orte/runtime/orte_globals.h"
#include "orte/mca/ess/ess.h"

orte_vpid_t v;
orte_process_name_t name;

name.jobid = ORTE_PROC_MY_NAME->jobid;

for (v = 0; v < orte_process_info.num_procs; v++) {
    name.vpid = v;
    if (OPAL_PROC_ON_NODE & orte_ess.proc_get_locality(&name)) {
        /* the proc is on your node - do whatever with it */
    }
}

HTH
Ralph


On Dec 10, 2010, at 9:49 AM, David Mathog wrote:

>> The answer is yes - sort of...
>>
>> In OpenMPI, every process has information about not only its own local
>> rank, but the local rank of all its peers regardless of what node they
>> are on. We use that info internally for a variety of things.
>>
>> Now the "sort of". That info isn't exposed via an MPI API at this
>> time. If that doesn't matter, then I can tell you how to get it - it's
>> pretty trivial to do.
> 
> Please tell me how to do this using the internal information.  
> 
> For now I will use that to write these functions (which might at some
> point correspond to standard functions, or not) 
> 
> my_MPI_Local_size(MPI_Comm comm, int *lmax, int *lactual)
> my_MPI_Local_rank(MPI_Comm comm, int *lrank)
> 
> These will return N for lmax, a value M in 1->N for lactual, and a value
> in 1->M for lrank, for any worker on a machine corresponding to a
> hostfile line like:
> 
> node123.cluster slots=N
> 
> As usual, this could get complicated.  There are probably race
> conditions on lactual vs. lrank as the workers start, but I'm guessing
> the lrank to lmax relationship won't have that problem.  Similarly, the
> meaning of "local" is pretty abstract. For now all that is intended is
> "a group of equivalent cores within a single enclosure, where
> communication between them is strictly internal to the enclosure, and
> where all have equivalent access to the local disks and the network
> interface(s)".  Other ways to define "local" might make more sense on
> more complex hardware. 
> 
> Another function that logically belongs with these is:
> 
> my_MPI_Local_list(MPI_Comm comm, int *llist, int *lactual)
> 
> I don't need it now, but can imagine applications that would.  This
> would return the (current)  lactual value and the corresponding list of
> rank numbers of all the local workers.  The array llist must be of size
> lmax.
> 
> 
> Thanks,
> 
> David Mathog
> mat...@caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread David Mathog
> The answer is yes - sort of...
>
> In OpenMPI, every process has information about not only its own local
> rank, but the local rank of all its peers regardless of what node they
> are on. We use that info internally for a variety of things.
>
> Now the "sort of". That info isn't exposed via an MPI API at this
> time. If that doesn't matter, then I can tell you how to get it - it's
> pretty trivial to do.

Please tell me how to do this using the internal information.  

For now I will use that to write these functions (which might at some
point correspond to standard functions, or not) 

my_MPI_Local_size(MPI_Comm comm, int *lmax, int *lactual)
my_MPI_Local_rank(MPI_Comm comm, int *lrank)

These will return N for lmax, a value M in 1->N for lactual, and a value
in 1->M for lrank, for any worker on a machine corresponding to a
hostfile line like:

node123.cluster slots=N

As usual, this could get complicated.  There are probably race
conditions on lactual vs. lrank as the workers start, but I'm guessing
the lrank to lmax relationship won't have that problem.  Similarly, the
meaning of "local" is pretty abstract. For now all that is intended is
"a group of equivalent cores within a single enclosure, where
communication between them is strictly internal to the enclosure, and
where all have equivalent access to the local disks and the network
interface(s)".  Other ways to define "local" might make more sense on
more complex hardware. 

Another function that logically belongs with these is:

my_MPI_Local_list(MPI_Comm comm, int *llist, int *lactual)

I don't need it now, but can imagine applications that would.  This
would return the (current)  lactual value and the corresponding list of
rank numbers of all the local workers.  The array llist must be of size
lmax.
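Assuming a node-local communicator has already been constructed (for example with the MPI_Comm_split technique discussed elsewhere in this thread), the proposed functions might be sketched as below. The signatures are simplified: lmax is omitted, since the hostfile slot count is not portably visible through plain MPI, and the my_MPI_* names are the poster's proposal, not standard MPI.

```c
#include <mpi.h>

/* lrank: this process's 1-based position among the processes on its node,
 * matching the 1->M convention above.  node_comm must contain exactly the
 * ranks on the local node. */
int my_MPI_Local_rank(MPI_Comm node_comm, int *lrank)
{
    int r, err = MPI_Comm_rank(node_comm, &r);
    *lrank = r + 1;
    return err;
}

/* lactual: M, the number of processes currently running on this node. */
int my_MPI_Local_size(MPI_Comm node_comm, int *lactual)
{
    return MPI_Comm_size(node_comm, lactual);
}

/* llist: MPI_COMM_WORLD ranks of all local workers, in local-rank order.
 * The caller must provide room for at least M entries. */
int my_MPI_Local_list(MPI_Comm node_comm, int *llist, int *lactual)
{
    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(node_comm, lactual);
    return MPI_Allgather(&wrank, 1, MPI_INT, llist, 1, MPI_INT, node_comm);
}
```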


Thanks,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech


Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Prentice Bisbal



On 12/10/2010 07:55 AM, Ralph Castain wrote:
> Ick - I agree that's portable, but truly ugly.
>
> Would it make sense to implement this as an MPI extension, and then
> perhaps propose something to the Forum for this purpose?

I think that makes sense. As core and socket counts go up, I imagine the 
need for this information will become more common as programmers try to 
explicitly keep codes on a single socket or node.

Prentice

> Just hate to see such a complex, time-consuming method when the info is
> already available on every process.
>
> On Dec 10, 2010, at 3:36 AM, Terry Dontje wrote:
>
>> A more portable way of doing what you want below is to gather each
>> processes processor_name given by MPI_Get_processor_name, have the
>> root who gets this data assign unique numbers to each name and then
>> scatter that info to the processes and have them use that as the color
>> to a MPI_Comm_split call. Once you've done that you can do a
>> MPI_Comm_size to find how many are on the node and be able to send to
>> all the other processes on that node using the new communicator.
>>
>> Good luck,
>>
>> --td
>> On 12/09/2010 08:18 PM, Ralph Castain wrote:
>>> The answer is yes - sort of...
>>>
>>> In OpenMPI, every process has information about not only its own local rank,
>>> but the local rank of all its peers regardless of what node they are on. We
>>> use that info internally for a variety of things.
>>>
>>> Now the "sort of". That info isn't exposed via an MPI API at this time. If
>>> that doesn't matter, then I can tell you how to get it - it's pretty trivial
>>> to do.
>>>
>>> On Dec 9, 2010, at 6:14 PM, David Mathog wrote:
>>>
>>>> Is it possible through MPI for a worker to determine:
>>>>
>>>>   1. how many MPI processes are running on the local machine
>>>>   2. within that set its own "local rank"
>>>>
>>>> ?
>>>>
>>>> For instance, a quad core with 4 processes might be hosting ranks 10,
>>>> 14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
>>>> being to use this information so that a program could selectively access
>>>> different local resources.  Simple example: on this 4 worker machine
>>>> reside telephone directories for Los Angeles, San Diego, San Jose, and
>>>> Sacramento.  Each worker is to open one database and search it when the
>>>> master sends a request.  With the "local rank" number this would be as
>>>> easy as naming the databases file1, file2, file3, and file4.  Without it
>>>> the 4 processes would have to communicate with each other somehow to
>>>> sort out which is to use which database.  And that could get ugly fast,
>>>> especially if they don't all start at the same time.
>>>>
>>>> Thanks,
>>>>
>>>> David Mathog
>>>> mat...@caltech.edu
>>>> Manager, Sequence Analysis Facility, Biology Division, Caltech





Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Eugene Loh




Terry Dontje wrote:
> On 12/10/2010 09:19 AM, Richard Treumann wrote:
>> It seems to me the MPI_Get_processor_name description is too ambiguous
>> to make this 100% portable.  I assume most MPI implementations simply
>> use the hostname so all processes on the same host will return the same
>> string.  The suggestion would work then.
>>
>> However, it would also be reasonable for an MPI that did processor
>> binding to return "hostname.socket#.core#" so every rank would have a
>> unique processor name.
>
> Fair enough.  However, I think it is a lot more stable than grabbing
> information from the bowels of the runtime environment.  Of course one
> could just call the appropriate system call to get the hostname, if you
> are on the right type of OS/Architecture :-).
>
>> The extension idea is a bit at odds with the idea that MPI is an
>> architecture independent API.  That does not rule out the option if
>> there is a good use case but it does raise the bar just a bit.
>
> Yeah, that is kind of the rub, isn't it.  There are enough architectural
> differences out there that it might be difficult to come to an agreement
> on the elements of locality you should focus on.  It would be nice if
> there was some sort of distance value that would be assigned to each
> peer a process has.  Of course then you still have the problem of trying
> to figure out what distance you really want to base your grouping on.

Similar issues arise within a node (e.g., hwloc, shared caches, sockets,
boards, etc.) as outside a node (same/different hosts, number of switch
hops, number of torus hops, etc.).  Lots of potential complexity, but
the main difference inside/outside a node is that nodal boundaries
present "hard" process-migration boundaries.




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Terry Dontje

On 12/10/2010 09:19 AM, Richard Treumann wrote:
> It seems to me the MPI_Get_processor_name description is too ambiguous
> to make this 100% portable.  I assume most MPI implementations simply
> use the hostname so all processes on the same host will return the
> same string.  The suggestion would work then.
>
> However, it would also be reasonable for an MPI that did processor
> binding to return "hostname.socket#.core#" so every rank would have a
> unique processor name.

Fair enough.  However, I think it is a lot more stable than grabbing 
information from the bowels of the runtime environment.  Of course one 
could just call the appropriate system call to get the hostname, if you 
are on the right type of OS/Architecture :-).

> The extension idea is a bit at odds with the idea that MPI is an
> architecture independent API.  That does not rule out the option if
> there is a good use case but it does raise the bar just a bit.

Yeah, that is kind of the rub, isn't it.  There are enough architectural 
differences out there that it might be difficult to come to an agreement 
on the elements of locality you should focus on.  It would be nice if 
there was some sort of distance value that would be assigned to each 
peer a process has.  Of course then you still have the problem of trying 
to figure out what distance you really want to base your grouping on.


--td

Dick Treumann  -  MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363



From:   Ralph Castain 
To: Open MPI Users 
Date:   12/10/2010 08:00 AM
Subject:	Re: [OMPI users] Method for worker to determine its "rank" 
on a single machine?

Sent by:users-boun...@open-mpi.org






Ick - I agree that's portable, but truly ugly.

Would it make sense to implement this as an MPI extension, and then 
perhaps propose something to the Forum for this purpose?


Just hate to see such a complex, time-consuming method when the info 
is already available on every process.


On Dec 10, 2010, at 3:36 AM, Terry Dontje wrote:

A more portable way of doing what you want below is to gather each 
processes processor_name given by MPI_Get_processor_name, have the 
root who gets this data assign unique numbers to each name and then 
scatter that info to the processes and have them use that as the color 
to a MPI_Comm_split call.  Once you've done that you can do a 
MPI_Comm_size to find how many are on the node and be able to send to 
all the other processes on that node using the new communicator.


Good luck,

--td
On 12/09/2010 08:18 PM, Ralph Castain wrote:
The answer is yes - sort of...

In OpenMPI, every process has information about not only its own local 
rank, but the local rank of all its peers regardless of what node they 
are on. We use that info internally for a variety of things.


Now the "sort of". That info isn't exposed via an MPI API at this 
time. If that doesn't matter, then I can tell you how to get it - it's 
pretty trivial to do.



On Dec 9, 2010, at 6:14 PM, David Mathog wrote:


Is it possible through MPI for a worker to determine:

1. how many MPI processes are running on the local machine
2. within that set its own "local rank"

?

For instance, a quad core with 4 processes might be hosting ranks 10,
14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
being to use this information so that a program could selectively access
different local resources.  Simple example: on this 4 worker machine
reside telephone directories for Los Angeles, San Diego, San Jose, and
Sacramento.  Each worker is to open one database and search it when the
master sends a request.  With the "local rank" number this would be as
easy as naming the databases file1, file2, file3, and file4.  Without it
the 4 processes would have to communicate with each other somehow to
sort out which is to use which database.  And that could get ugly fast,
especially if they don't all start at the same time.

Thanks,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech



--

Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle *- Performance Technologies*
95 Network Drive, Burlington, MA 01803
Email terry.don...@oracle.com




Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Richard Treumann
It seems to me the MPI_Get_processor_name description is too ambiguous to 
make this 100% portable.  I assume most MPI implementations simply use the 
hostname so all processes on the same host will return the same string. 
The suggestion would work then.

However, it would also be reasonable for an MPI that did processor 
binding to return "hostname.socket#.core#" so every rank would have a 
unique processor name.

The extension idea is a bit at odds with the idea that MPI is an 
architecture independent API.  That does not rule out the option if there 
is a good use case but it does raise the bar just a bit.


Dick Treumann  -  MPI Team 
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363




From:	Ralph Castain
To:	Open MPI Users
Date:	12/10/2010 08:00 AM
Subject:	Re: [OMPI users] Method for worker to determine its "rank" on a single machine?
Sent by:	users-boun...@open-mpi.org



Ick - I agree that's portable, but truly ugly.

Would it make sense to implement this as an MPI extension, and then 
perhaps propose something to the Forum for this purpose?

Just hate to see such a complex, time-consuming method when the info is 
already available on every process.

On Dec 10, 2010, at 3:36 AM, Terry Dontje wrote:

A more portable way of doing what you want below is to gather each 
processes processor_name given by MPI_Get_processor_name, have the root 
who gets this data assign unique numbers to each name and then scatter 
that info to the processes and have them use that as the color to a 
MPI_Comm_split call.  Once you've done that you can do a MPI_Comm_size to 
find how many are on the node and be able to send to all the other 
processes on that node using the new communicator. 

Good luck,

--td
On 12/09/2010 08:18 PM, Ralph Castain wrote: 
The answer is yes - sort of...

In OpenMPI, every process has information about not only its own local 
rank, but the local rank of all its peers regardless of what node they are 
on. We use that info internally for a variety of things.

Now the "sort of". That info isn't exposed via an MPI API at this time. If 
that doesn't matter, then I can tell you how to get it - it's pretty 
trivial to do.


On Dec 9, 2010, at 6:14 PM, David Mathog wrote:


Is it possible through MPI for a worker to determine:

 1. how many MPI processes are running on the local machine
 2. within that set its own "local rank"

? 

For instance, a quad core with 4 processes might be hosting ranks 10,
14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
being to use this information so that a program could selectively access
different local resources.  Simple example: on this 4 worker machine
reside telephone directories for Los Angeles, San Diego, San Jose, and
Sacramento.  Each worker is to open one database and search it when the
master sends a request.  With the "local rank" number this would be as
easy as naming the databases file1, file2, file3, and file4.  Without it
the 4 processes would have to communicate with each other somehow to
sort out which is to use which database.  And that could get ugly fast,
especially if they don't all start at the same time.

Thanks,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech



-- 

Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.don...@oracle.com






Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Ralph Castain
Ick - I agree that's portable, but truly ugly.

Would it make sense to implement this as an MPI extension, and then perhaps 
propose something to the Forum for this purpose?

Just hate to see such a complex, time-consuming method when the info is already 
available on every process.

On Dec 10, 2010, at 3:36 AM, Terry Dontje wrote:

> A more portable way of doing what you want below is to gather each processes 
> processor_name given by MPI_Get_processor_name, have the root who gets 
> this data assign unique numbers to each name and then scatter that info to 
> the processes and have them use that as the color to a MPI_Comm_split call.  
> Once you've done that you can do a MPI_Comm_size to find how many are on the 
> node and be able to send to all the other processes on that node using the 
> new communicator.  
> 
> Good luck,
> 
> --td
> On 12/09/2010 08:18 PM, Ralph Castain wrote:
>> 
>> The answer is yes - sort of...
>> 
>> In OpenMPI, every process has information about not only its own local rank, 
>> but the local rank of all its peers regardless of what node they are on. We 
>> use that info internally for a variety of things.
>> 
>> Now the "sort of". That info isn't exposed via an MPI API at this time. If 
>> that doesn't matter, then I can tell you how to get it - it's pretty trivial 
>> to do.
>> 
>> 
>> On Dec 9, 2010, at 6:14 PM, David Mathog wrote:
>> 
>>> Is it possible through MPI for a worker to determine:
>>> 
>>>  1. how many MPI processes are running on the local machine
>>>  2. within that set its own "local rank"
>>> 
>>> ?  
>>> 
>>> For instance, a quad core with 4 processes might be hosting ranks 10,
>>> 14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
>>> being to use this information so that a program could selectively access
>>> different local resources.  Simple example: on this 4 worker machine
>>> reside telephone directories for Los Angeles, San Diego, San Jose, and
>>> Sacramento.  Each worker is to open one database and search it when the
>>> master sends a request.  With the "local rank" number this would be as
>>> easy as naming the databases file1, file2, file3, and file4.  Without it
>>> the 4 processes would have to communicate with each other somehow to
>>> sort out which is to use which database.  And that could get ugly fast,
>>> especially if they don't all start at the same time.
>>> 
>>> Thanks,
>>> 
>>> David Mathog
>>> mat...@caltech.edu
>>> Manager, Sequence Analysis Facility, Biology Division, Caltech
> 
> 
> -- 
> 
> Terry D. Dontje | Principal Software Engineer
> Developer Tools Engineering | +1.781.442.2631
> Oracle - Performance Technologies
> 95 Network Drive, Burlington, MA 01803
> Email terry.don...@oracle.com
> 
> 
> 



Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-10 Thread Terry Dontje
A more portable way of doing what you want below is to gather each 
processes processor_name given by MPI_Get_processor_name, have the root 
who gets this data assign unique numbers to each name and then scatter 
that info to the processes and have them use that as the color to a 
MPI_Comm_split call.  Once you've done that you can do a MPI_Comm_size 
to find how many are on the node and be able to send to all the other 
processes on that node using the new communicator.


Good luck,

--td
On 12/09/2010 08:18 PM, Ralph Castain wrote:

The answer is yes - sort of...

In OpenMPI, every process has information about not only its own local rank, 
but the local rank of all its peers regardless of what node they are on. We use 
that info internally for a variety of things.

Now the "sort of". That info isn't exposed via an MPI API at this time. If that 
doesn't matter, then I can tell you how to get it - it's pretty trivial to do.


On Dec 9, 2010, at 6:14 PM, David Mathog wrote:


Is it possible through MPI for a worker to determine:

  1. how many MPI processes are running on the local machine
  2. within that set its own "local rank"

?

For instance, a quad core with 4 processes might be hosting ranks 10,
14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
being to use this information so that a program could selectively access
different local resources.  Simple example: on this 4 worker machine
reside telephone directories for Los Angeles, San Diego, San Jose, and
Sacramento.  Each worker is to open one database and search it when the
master sends a request.  With the "local rank" number this would be as
easy as naming the databases file1, file2, file3, and file4.  Without it
the 4 processes would have to communicate with each other somehow to
sort out which is to use which database.  And that could get ugly fast,
especially if they don't all start at the same time.

Thanks,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech



--
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.don...@oracle.com





Re: [OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-09 Thread Ralph Castain
The answer is yes - sort of...

In OpenMPI, every process has information about not only its own local rank, 
but the local rank of all its peers regardless of what node they are on. We use 
that info internally for a variety of things.

Now the "sort of". That info isn't exposed via an MPI API at this time. If that 
doesn't matter, then I can tell you how to get it - it's pretty trivial to do.


On Dec 9, 2010, at 6:14 PM, David Mathog wrote:

> Is it possible through MPI for a worker to determine:
> 
>  1. how many MPI processes are running on the local machine
>  2. within that set its own "local rank"
> 
> ?  
> 
> For instance, a quad core with 4 processes might be hosting ranks 10,
> 14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
> being to use this information so that a program could selectively access
> different local resources.  Simple example: on this 4 worker machine
> reside telephone directories for Los Angeles, San Diego, San Jose, and
> Sacramento.  Each worker is to open one database and search it when the
> master sends a request.  With the "local rank" number this would be as
> easy as naming the databases file1, file2, file3, and file4.  Without it
> the 4 processes would have to communicate with each other somehow to
> sort out which is to use which database.  And that could get ugly fast,
> especially if they don't all start at the same time.
> 
> Thanks,
> 
> David Mathog
> mat...@caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech




[OMPI users] Method for worker to determine its "rank" on a single machine?

2010-12-09 Thread David Mathog
Is it possible through MPI for a worker to determine:

  1. how many MPI processes are running on the local machine
  2. within that set its own "local rank"

?  

For instance, a quad core with 4 processes might be hosting ranks 10,
14, 15, 20, in which case the "local ranks" would be 1,2,3,4.  The idea
being to use this information so that a program could selectively access
different local resources.  Simple example: on this 4 worker machine
reside telephone directories for Los Angeles, San Diego, San Jose, and
Sacramento.  Each worker is to open one database and search it when the
master sends a request.  With the "local rank" number this would be as
easy as naming the databases file1, file2, file3, and file4.  Without it
the 4 processes would have to communicate with each other somehow to
sort out which is to use which database.  And that could get ugly fast,
especially if they don't all start at the same time.

Thanks,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
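The simplest answer the thread arrives at — Open MPI's OMPI_COMM_WORLD_LOCAL_RANK environment variable, which the thread reports working on 1.4.3 and 1.5 — maps directly onto the database-naming scheme in this question.  A minimal sketch (the file1..file4 naming follows the question; note the variable is Open MPI-specific, not standard MPI, and the local rank it reports is 0-based):

```python
import os


def database_for_local_rank(env=None):
    """Pick a per-node database file from Open MPI's local-rank variable.

    Open MPI's mpirun sets OMPI_COMM_WORLD_LOCAL_RANK in each process's
    environment; local rank 0 opens file1, local rank 1 opens file2, etc.
    """
    if env is None:
        env = os.environ
    local_rank = int(env["OMPI_COMM_WORLD_LOCAL_RANK"])
    return "file%d" % (local_rank + 1)


# Simulated environment for the second process launched on a node:
print(database_for_local_rank({"OMPI_COMM_WORLD_LOCAL_RANK": "1"}))  # file2
```

Since the launcher sets the variable per process, each worker finds its own database without any communication with its node-mates, sidestepping the start-time ordering problem raised above.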