On Oct 17, 2011, at 8:55 PM, George Bosilca wrote:

> 
> On Oct 17, 2011, at 18:23 , Ralph Castain wrote:
> 
>> 
>> On Oct 17, 2011, at 4:14 PM, George Bosilca wrote:
>> 
>>> Ralph,
>>> 
>>> I have a concern about the code below (snippet from the commit 25303). You 
>>> moved the setting of proc_flags and proc_name in a function named set_arch. 
>>> As the name and the lengthy comment above it indicate, the scope of this 
>>> particular function is to set the architecture of the remote process, not 
>>> the locality flag or the process name.
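(For context, a minimal sketch of the separation being asked for here; the
helper name ompi_proc_set_locality is hypothetical, not a function that
exists in the tree:)

    /* Hypothetical refactoring: leave the arch-setting function limited to
     * architecture, and give locality/hostname their own helper, built from
     * the same orte_ess calls shown in the diff below. */
    static int ompi_proc_set_locality(ompi_proc_t *proc)
    {
        /* locality flag for the remote process */
        proc->proc_flags = orte_ess.proc_get_locality(&proc->proc_name);
        /* name of the node the process is running on */
        proc->proc_hostname = orte_ess.proc_get_hostname(&proc->proc_name);
        return OMPI_SUCCESS;
    }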
>> 
>> I agree. However, there is no harm in moving those settings to that function.
> 
> Ralph,
> 
> It depends on your definition of harm. The many developers who have worked 
> on the OPAL and OMPI layers have tried to follow standard coding practices 
> as much as possible. Side effects like the one you just introduced are not 
> tolerated, and they have been promptly addressed in the past [at least in 
> the OPAL and OMPI layers].

You have a strange definition of side effect, and a very different 
recollection of past practice than anything I have encountered.

> 
> For the sake of future developers, I would really appreciate it if you 
> would avoid transgressing these community-friendly rules.
> 
>> Indeed, there is a significant advantage in doing so, as it allows the data 
>> to be exchanged during the modex instead of mandating that ORTE provide it.
>> 
>> I agree that the function name is now somewhat inaccurate, but chose not to 
>> change it as (a) I couldn't think of a better alternative, and (b) it seemed 
>> a trivial issue. If it bothers you or others, however, please feel free to 
>> change the name of the function.
> 
> george.
> 
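(For reference, a minimal sketch of what "exchanged during the modex" means
above. The send side is the ompi_modex_send_key_value call visible in the
diff below; the receive-side call and its signature are assumed here purely
for illustration:)

    /* each process publishes its own architecture into the modex during
     * startup, so any peer can pull it later without ORTE having to
     * precompute and deliver it */
    ret = ompi_modex_send_key_value("OMPI_ARCH", &proc->proc_arch,
                                    OPAL_UINT32);
    if (OMPI_SUCCESS != ret) {
        return ret;
    }

    /* assumed receive-side counterpart (illustrative signature only):
     * fetch the value a given peer published under the same key */
    uint32_t remote_arch;
    ret = ompi_modex_recv_key_value("OMPI_ARCH", proc,
                                    &remote_arch, OPAL_UINT32);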
>> 
>> 
>>> 
>>> george.
>>> 
>>> 
>>> Index: ompi/proc/proc.c
>>> ===================================================================
>>> --- ompi/proc/proc.c        (revision 25302)
>>> +++ ompi/proc/proc.c        (working copy)
>>> @@ -119,11 +119,6 @@
>>>           if (OMPI_SUCCESS != (ret = ompi_modex_send_key_value("OMPI_ARCH", &proc->proc_arch, OPAL_UINT32))) {
>>>               return ret;
>>>           }
>>> -        } else {
>>> -            /* get the locality information */
>>> -            proc->proc_flags = orte_ess.proc_get_locality(&proc->proc_name);
>>> -            /* get the name of the node it is on */
>>> -            proc->proc_hostname = orte_ess.proc_get_hostname(&proc->proc_name);
>>>       }
>>>   }
>>> 
>>> @@ -149,8 +144,8 @@
>>>   OPAL_THREAD_LOCK(&ompi_proc_lock);
>>> 
>>>   for( item  = opal_list_get_first(&ompi_proc_list);
>>> -        item != opal_list_get_end(&ompi_proc_list);
>>> -        item  = opal_list_get_next(item)) {
>>> +         item != opal_list_get_end(&ompi_proc_list);
>>> +         item  = opal_list_get_next(item)) {
>>>       proc = (ompi_proc_t*)item;
>>> 
>>>       if (proc->proc_name.vpid != ORTE_PROC_MY_NAME->vpid) {
>>> @@ -177,6 +172,10 @@
>>>               OPAL_THREAD_UNLOCK(&ompi_proc_lock);
>>>               return ret;
>>>           }
>>> +            /* get the locality information */
>>> +            proc->proc_flags = orte_ess.proc_get_locality(&proc->proc_name);
>>> +            /* get the name of the node it is on */
>>> +            proc->proc_hostname = orte_ess.proc_get_hostname(&proc->proc_name);
>>>       }
>>>   }
>>>   OPAL_THREAD_UNLOCK(&ompi_proc_lock);
>>> 
>>> 
>>> 
>>> 
>>> On Oct 17, 2011, at 16:51 , r...@osl.iu.edu wrote:
>>> 
>>>> Author: rhc
>>>> Date: 2011-10-17 16:51:22 EDT (Mon, 17 Oct 2011)
>>>> New Revision: 25303
>>>> URL: https://svn.open-mpi.org/trac/ompi/changeset/25303
>>>> 
>>>> Log:
>>>> 
>>>> Complete the implementation of PMI support. Ensure we support both mpirun 
>>>> and direct launch within the same configuration, to avoid requiring 
>>>> separate builds. Add support for generic PMI, not just under SLURM. Add 
>>>> publish/subscribe support, although SLURM's PMI implementation will just 
>>>> return an error, as it hasn't been implemented there yet.
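(For context, the new pubsub/pmi component sits on top of the standard PMI-1
name-publishing entry points from <pmi.h>; a minimal standalone sketch, with
the service name and port string invented for illustration:)

    #include <stdio.h>
    #include <pmi.h>

    int main(void)
    {
        int rc, spawned;
        char port[256];

        /* attach to the PMI server (srun's, under SLURM) */
        if (PMI_SUCCESS != PMI_Init(&spawned)) {
            return 1;
        }

        /* publish a service-name -> port mapping; SLURM's PMI library
         * currently just returns an error here, as the log notes */
        rc = PMI_Publish_name("my-service", "tcp://10.0.0.1:5000");
        if (PMI_SUCCESS != rc) {
            fprintf(stderr, "PMI_Publish_name failed\n");
        }

        /* the lookup a peer would perform to find that port */
        rc = PMI_Lookup_name("my-service", port);

        PMI_Finalize();
        return 0;
    }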
>>>> 
>>>> 
>>>> 
>>>> Added:
>>>> trunk/ompi/mca/pubsub/pmi/   (props changed)
>>>> trunk/ompi/mca/pubsub/pmi/Makefile.am
>>>> trunk/ompi/mca/pubsub/pmi/configure.m4
>>>> trunk/ompi/mca/pubsub/pmi/pubsub_pmi.c
>>>> trunk/ompi/mca/pubsub/pmi/pubsub_pmi.h
>>>> trunk/ompi/mca/pubsub/pmi/pubsub_pmi_component.c
>>>> trunk/orte/mca/ess/pmi/   (props changed)
>>>> trunk/orte/mca/ess/pmi/Makefile.am
>>>> trunk/orte/mca/ess/pmi/configure.m4
>>>> trunk/orte/mca/ess/pmi/ess_pmi.h
>>>> trunk/orte/mca/ess/pmi/ess_pmi_component.c
>>>> trunk/orte/mca/ess/pmi/ess_pmi_module.c
>>>> Text files modified:
>>>> trunk/ompi/proc/proc.c                             |    13
>>>> trunk/orte/config/orte_check_pmi.m4                |     3
>>>> trunk/orte/mca/ess/slurmd/Makefile.am              |    10
>>>> trunk/orte/mca/ess/slurmd/configure.m4             |    16 -
>>>> trunk/orte/mca/ess/slurmd/ess_slurmd_component.c   |    16 -
>>>> trunk/orte/mca/ess/slurmd/ess_slurmd_module.c      |   158 +++++---------
>>>> trunk/orte/mca/grpcomm/pmi/grpcomm_pmi_component.c |    43 +++
>>>> trunk/orte/mca/grpcomm/pmi/grpcomm_pmi_module.c    |   414 +++++++++++++++++++++++++++------------
>>>> trunk/orte/util/nidmap.c                           |     3
>>>> 9 files changed, 396 insertions(+), 280 deletions(-)
>>>> 
>>>> 
>>>> Diff not shown due to size (69184 bytes).
>>>> To see the diff, run the following command:
>>>> 
>>>>    svn diff -r 25302:25303 --no-diff-deleted
>>>> 
>>>> _______________________________________________
>>>> svn mailing list
>>>> s...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/svn
>>> 
>>> 
>>> _______________________________________________
>>> devel mailing list
>>> de...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/devel

