On 7/11/08 8:33 AM, "Ashley Pittman" <apitt...@concurrent-thinking.com>
wrote:

> On Fri, 2008-07-11 at 08:01 -0600, Ralph H Castain wrote:
>>>> I believe this is partly what motivated the creation of the MPI envars - to
>>>> create a vehicle that -would- be guaranteed stable for just these purposes.
>>>> The concern was that users were doing things that accessed internal envars
>>>> which we changed from version to version. The new envars will remain fixed.
>>> 
>>> Absolutely, these are useful time and time again, so they should be part
>>> of the API and hence stable.  Care to mention what they are? I'll add them
>>> to my note as something to change when upgrading to 1.3 (we are looking
>>> at testing a snapshot in the near future).
>> 
>> Surely:
>> 
>> OMPI_COMM_WORLD_SIZE            #procs in the job
>> OMPI_COMM_WORLD_LOCAL_SIZE      #procs in this job that are sharing the node
>> OMPI_UNIVERSE_SIZE              total #slots allocated to this user
>>                                 (across all nodes)
>> OMPI_COMM_WORLD_RANK            proc's rank
>> OMPI_COMM_WORLD_LOCAL_RANK      local rank on node - lowest rank'd proc on
>>                                 the node is given local_rank=0
>> 
>> If there are others that would be useful, now is definitely the time to
>> speak up!
> 
> The only other one I'd like to see is some kind of global identifier for
> the job, but as far as I can tell Open MPI doesn't have such a concept.

Not really - of course, many environments have a jobid that they assign at
the time of allocation. We could create a unified identifier from that so a
consistent name is always available, but the problem is that not all
environments provide one (e.g., rsh). To guarantee that the variable is
always there, we would have to make something up in those cases.

<shrug> could easily be done I suppose - let me raise the question
internally and see the response.
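
In case it helps, here is a minimal sketch (just an illustration, not code
shipped with Open MPI) of how a launched process could read the variables
listed above with plain getenv(); the -1 fallback is only there to mark a
variable that isn't set:

/* Minimal sketch: read the OMPI_* variables described earlier in this
 * thread.  The -1 default is purely illustrative for an unset variable. */
#include <stdio.h>
#include <stdlib.h>

static int env_int(const char *name)
{
    const char *val = getenv(name);
    return val ? atoi(val) : -1;   /* -1 = "not provided" in this sketch */
}

int main(void)
{
    printf("world size : %d\n", env_int("OMPI_COMM_WORLD_SIZE"));
    printf("local size : %d\n", env_int("OMPI_COMM_WORLD_LOCAL_SIZE"));
    printf("universe   : %d\n", env_int("OMPI_UNIVERSE_SIZE"));
    printf("rank       : %d\n", env_int("OMPI_COMM_WORLD_RANK"));
    printf("local rank : %d\n", env_int("OMPI_COMM_WORLD_LOCAL_RANK"));
    return 0;
}

Run it under mpirun and each process should print its own values, assuming
the 1.3-style names above.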

Thanks!
Ralph

> 
> Ashley Pittman.
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users

