Ralph,
On Fri, Sep 12, 2014 at 10:54 AM, Ralph Castain wrote:
> The design is supposed to be that each node knows precisely how many
> daemons are involved in each collective, and who is going to talk to them.
ok, but the design does not ensure that things will happen in the right
order:
-
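To illustrate the kind of race I have in mind, here is a minimal sketch
(hypothetical names, not the actual ORTE grpcomm code) of a daemon that
buffers contributions arriving before the launch message has told it how
many daemons participate in the collective:

#include <stdbool.h>

typedef struct coll_tracker {
    int coll_id;            /* identifier of the collective */
    int ndaemons_expected;  /* valid only once 'defined' is true */
    int nreceived;          /* contributions seen so far */
    bool defined;           /* has the launch message been processed? */
} coll_tracker_t;

/* hypothetical handler: a peer daemon's contribution arrives */
static void contribution_recvd(coll_tracker_t *trk)
{
    trk->nreceived++;
    if (!trk->defined) {
        /* the launch message has not been processed yet, so we do not
         * know how many daemons are involved - hold the contribution
         * instead of dropping it */
        return;
    }
    if (trk->nreceived == trk->ndaemons_expected) {
        /* every expected daemon has reported - release the collective */
    }
}

/* hypothetical handler: the launch message defines the collective */
static void collective_defined(coll_tracker_t *trk, int ndaemons)
{
    trk->ndaemons_expected = ndaemons;
    trk->defined = true;
    if (trk->nreceived == trk->ndaemons_expected) {
        /* contributions that arrived early already satisfy the count */
    }
}

The point is only that, without buffering (or without a guaranteed ordering
between the launch message and the first contribution), a contribution that
arrives "too early" has nowhere to go.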
On Sep 12, 2014, at 5:45 AM, Gilles Gouaillardet
wrote:
> Ralph,
>
> On Fri, Sep 12, 2014 at 10:54 AM, Ralph Castain wrote:
> The design is supposed to be that each node knows precisely how many daemons
> are involved in each collective, and who is going to talk to them.
>
> ok, but the design does not ensure that things will happen in the right
> order:
Let me know if Nadia can help here, Ralph.
Josh
On Fri, Sep 12, 2014 at 9:31 AM, Ralph Castain wrote:
>
> On Sep 12, 2014, at 5:45 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
> Ralph,
>
> On Fri, Sep 12, 2014 at 10:54 AM, Ralph Castain wrote:
>
>> The design is supposed to be that each node knows precisely how many
>> daemons are involved in each collective, and who is going to talk to them.
bbenton -> bbenton
On Wed, Sep 10, 2014 at 5:46 AM, Jeff Squyres (jsquyres) wrote:
> As the next step of the planned migration to Github, I need to know:
>
> - Your Github ID (so that you can be added to the new OMPI git repo)
> - Your SVN ID (so that I can map SVN->Github IDs, and therefore map
Hi Folks,
So, I've got a testbed Cray system with no batch scheduler; we just use native
ALPS as both the resource manager and the job launcher for the ORTE daemons.
What I'm noticing is that the mpirun command with the -host option, or any other
mpirun-level way of specifying the nodes to r
Odd - I'm pretty sure it does indeed build the -L argument...and indeed, it
does:
for (nnode=0; nnode < map->nodes->size; nnode++) {
    if (NULL == (node =
                 (orte_node_t*)opal_pointer_array_get_item(map->nodes, nnode))) {
        continue;
    }
    /* if the daemon already
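(The snippet is cut off above, but for the sake of discussion, here is a rough
sketch of how the remainder of that loop could assemble the comma-separated
node list handed to aprun as "-L <nodelist>" - a hypothetical helper, not the
actual plm/alps code:)

#define _GNU_SOURCE   /* for asprintf */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* append one node name to a growing comma-separated list (sketch only) */
static char *append_node(char *nodelist, const char *nodename)
{
    char *tmp;
    if (NULL == nodelist) {
        return strdup(nodename);
    }
    if (asprintf(&tmp, "%s,%s", nodelist, nodename) < 0) {
        return nodelist;   /* allocation failed - keep what we have */
    }
    free(nodelist);
    return tmp;
}

Inside the loop, each node that still needs a daemon would get its node->name
appended this way, and the resulting string passed to aprun via -L.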