On 7/11/08 7:50 AM, "Ashley Pittman" <apitt...@concurrent-thinking.com>
wrote:

> On Fri, 2008-07-11 at 07:42 -0600, Ralph H Castain wrote:
>> 
>> 
>> On 7/11/08 7:32 AM, "Ashley Pittman" <apitt...@concurrent-thinking.com>
>> wrote:
>> 
>>> On Fri, 2008-07-11 at 07:20 -0600, Ralph H Castain wrote:
>>>> This variable is only for internal use and has no applicability to a user.
>>>> Basically, it is used by the local daemon to tell an application process
>>>> its
>>>> rank when launched.
>>>> 
>>>> Note that it disappears in v1.3...so I wouldn't recommend looking for it.
>>>> Is
>>>> there something you are trying to do with it?
>>> 
>>> Recently on this list I recommended somebody use it for their needs.
>>> 
>>> http://www.open-mpi.org/community/lists/users/2008/06/5983.php
>> 
>> Ah - yeah, that one slid by me. I'll address it directly.
> 
> I was actually quite surprised that openmpi didn't have a command
> option for this; it's quite a common thing to use.

Nobody asked... ;-)

>  
>>>> Reason I ask: some folks wanted to know things like the MPI rank prior to
>>>> calling MPI_Init, so we added a few MPI envar's that are available from
>>>> beginning of process execution, if that is what you are looking for.
>>> 
>>> It's also essential for Valgrind support, which can use it to name
>>> logfiles according to rank using the
>>> --log-file=valgrind.out.%q{OMPI_MCA_ns_nds_vpid} option.
>> 
>> Well, it won't hurt for now - but it won't work with 1.3 or beyond. It's
>> always risky to depend upon a code's internal variables as developers feel
>> free to change those as circumstances dictate since users aren't supposed to
>> be affected.
>> 
>> I believe this is partly what motivated the creation of the MPI envars - to
>> create a vehicle that -would- be guaranteed stable for just these purposes.
>> The concern was that users were doing things that accessed internal envars
>> which we changed from version to version. The new envars will remain fixed.
> 
> Absolutely, these are useful time and time again so should be part of
> the API and hence stable.  Care to mention what they are and I'll add it
> to my note as something to change when upgrading to 1.3 (we are looking
> at testing a snapshot in the near future).

Surely:

OMPI_COMM_WORLD_SIZE            #procs in the job
OMPI_COMM_WORLD_LOCAL_SIZE      #procs in this job that are sharing the node
OMPI_UNIVERSE_SIZE              total #slots allocated to this user
                                (across all nodes)
OMPI_COMM_WORLD_RANK            proc's rank
OMPI_COMM_WORLD_LOCAL_RANK      local rank on node - lowest rank'd proc on
                                the node is given local_rank=0

If there are others that would be useful, now is definitely the time to
speak up!

> 
> Ashley Pittman.
> 

