On Nov 6, 2013, at 16:26 , Nathan Hjelm <hje...@lanl.gov> wrote:

> On Wed, Nov 06, 2013 at 02:06:15AM +0000, Jeff Squyres (jsquyres) wrote:
>> On Nov 5, 2013, at 2:59 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
>>
>>> I have a question regarding the extension of this concept to multi-BTL
>>> runs. Granted we will have to have a local indexing of BTL (I'm not
>>> concerned about this). But how do we ensure the naming is globally
>>> consistent (in the sense that all processes in the job will agree that
>>> usnic0 is index 0) even when we have a heterogeneous environment?
>>
>> The MPI_T pvars are local-only. So even if index 0 is usnic_0 in proc A,
>> but index 0 is usnic_3 in proc B, it shouldn't matter. More specifically:
>> these values only have meaning within the process from which they were
>> gathered.
>>
>> I guess I'm trying to say that there's no need to ensure globally
>> consistent ordering between processes. ...unless I'm missing something?
>
> There is no need to ensure global consistency unless you declare the pvar
> to have a global scope (MCA_BASE_VAR_SCOPE_GROUP, MCA_BASE_VAR_SCOPE_GROUP_EQ,
> MCA_BASE_VAR_SCOPE_ALL, or MCA_BASE_VAR_SCOPE_ALL_EQ.)
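[To make the "local-only" point above concrete, here is a minimal sketch, not from the original thread, that enumerates the MPI_T pvars exposed by the calling process. The indices it prints have meaning only inside that process; a tool comparing two processes has to match pvars by name rather than by index.]

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided, num_pvar, i;

        MPI_Init(&argc, &argv);
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_pvar_get_num(&num_pvar);
        for (i = 0; i < num_pvar; ++i) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &dtype, &enumtype, desc, &desc_len,
                                &bind, &readonly, &continuous, &atomic);
            /* The same pvar name may sit at a different index in another
               process, so cross-process comparisons must key on the name. */
            printf("pvar %d: %s\n", i, name);
        }

        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }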
They clearly can’t be of any _EQ scope. After reading the entire chapter in MPI 3.1, I’m not sure how the defined scope applies to the naming, or to the relationship between their values.

That being said, the consistency I was looking for is somewhat different. What I really want is a way, based not on the physical naming but on some logical naming, that would allow a tool to “globally” make sense of the information exposed. Separate, per-node information about the local usnic0 provides little insight into any communication inconsistencies. Knowing that there are many pending sends on my local usnic0 is interesting, but being able to link that information with the number of pending receives (or other MPI_T values) on the peers sharing the same network layer would be far more valuable, giving better insight into what is going on on each network layer.

George.

> -Nathan
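[For illustration only, the following sketch shows one way a tool could attempt the kind of cross-process correlation George describes: each rank looks a pvar up by name and the readings are gathered on rank 0, where they can be lined up against pending receives or other pvars from the peers. The pvar name "btl_usnic_pending_sends" is hypothetical, as is the assumption that it is a single unsigned long long counter.]

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mpi.h>

    /* Find the local index of a pvar by name; returns -1 if not present. */
    static int find_pvar_by_name(const char *wanted)
    {
        int num_pvar;
        MPI_T_pvar_get_num(&num_pvar);
        for (int i = 0; i < num_pvar; ++i) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, var_class, bind, readonly, continuous, atomic;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;
            MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                                &dtype, &enumtype, desc, &desc_len,
                                &bind, &readonly, &continuous, &atomic);
            if (0 == strcmp(name, wanted)) return i;
        }
        return -1;
    }

    int main(int argc, char **argv)
    {
        int provided, rank, size;
        MPI_Init(&argc, &argv);
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        unsigned long long local = 0, *all = NULL;
        /* Hypothetical pvar name; assumes count == 1 and an unsigned
           long long datatype, which a real tool would check via
           MPI_T_pvar_get_info and the count from handle_alloc. */
        int idx = find_pvar_by_name("btl_usnic_pending_sends");
        if (idx >= 0) {
            MPI_T_pvar_session session;
            MPI_T_pvar_handle handle;
            int count;
            MPI_T_pvar_session_create(&session);
            MPI_T_pvar_handle_alloc(session, idx, NULL, &handle, &count);
            MPI_T_pvar_read(session, handle, &local);
            MPI_T_pvar_handle_free(session, &handle);
            MPI_T_pvar_session_free(&session);
        }

        /* Gather everyone's reading so rank 0 can line up pending sends
           here with pending receives (or other pvars) on the peers. */
        if (0 == rank) all = malloc(size * sizeof(*all));
        MPI_Gather(&local, 1, MPI_UNSIGNED_LONG_LONG, all, 1,
                   MPI_UNSIGNED_LONG_LONG, 0, MPI_COMM_WORLD);
        if (0 == rank) {
            for (int i = 0; i < size; ++i)
                printf("rank %d: %llu pending sends\n", i, all[i]);
            free(all);
        }

        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }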