Folks,
currently, the default mapping policy on master is different from the one in v2.x.
My preliminary question is: when will the master mapping policy land
in a release branch?
v2.0.0? v2.x? v3.0.0?
here are some commands and their output (both n0 and n1 have 16 cores
each, mpirun runs on n0)
You are welcome to raise the question of default mapping behavior on master yet
again, but please do so on a separate thread so we can make sense of it.
Note that I will not be making more modifications to that behavior, so if
someone feels strongly that they want it to change, please go ahead and ...
Thanks Nathan,
sorry for the confusion; what I observed was a consequence of something
else ...
mpirun --host n0,n1 -np 4 a.out
/* n0 and n1 have 16 cores each */
runs 4 instances of a.out on n0 (and nothing on n1)
If I run with -np 32, then 16 tasks run on each node.
With v2.x, the --ov...
Hi Folks,
The last known blocker for 2.0.0 will hopefully be resolved this week,
which means it's time to fill in the users' migration guide.
If you have a feature that went into the 2.0.x release stream that's
important, please add a short description of the feature to the
migration guide ...
add_procs is always called at least once. This is how we set up shared
memory communication. It will then be invoked on-demand for non-local
peers with the reachability argument set to NULL (because the bitmask
doesn't provide any benefit when adding only 1 peer).
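For illustration, a BTL's add_procs ends up shaped roughly like the sketch
below (hypothetical example code, not the literal tcp/vader implementation);
the point is simply that the reachability bitmap has to be guarded because it
can legitimately be NULL in the on-demand case.

#include "opal/class/opal_bitmap.h"
#include "opal/constants.h"
#include "opal/mca/btl/btl.h"

/* hypothetical add_procs sketch (not real ompi code): the reachability
 * bitmap may be NULL when the BML adds a single peer on demand */
static int example_btl_add_procs(struct mca_btl_base_module_t *btl,
                                 size_t nprocs,
                                 struct opal_proc_t **procs,
                                 struct mca_btl_base_endpoint_t **peers,
                                 opal_bitmap_t *reachable)
{
    for (size_t i = 0; i < nprocs; ++i) {
        peers[i] = NULL;  /* placeholder: a real BTL creates or looks up an endpoint for procs[i] here */

        /* only record reachability when the caller actually passed a bitmap */
        if (NULL != reachable) {
            opal_bitmap_set_bit(reachable, (int) i);
        }
    }
    (void) btl; (void) procs;
    return OPAL_SUCCESS;
}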
-Nathan
On Tue, May 17, 2016 at ...
Thanks -- filed: https://github.com/open-mpi/ompi/pull/1671
> On May 14, 2016, at 11:38 PM, dpchoudh . wrote:
>
> In the file ompi/mca/bml/r2/bml_r2.c, it seems like the function name is
> incorrect in some error messages (seems like a case of an unchecked
> copy-paste) in:
>
> 1. Function ...
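As a generic illustration of that hazard (not the actual bml_r2.c code): a
hard-coded function name in an error string goes stale when the block is
copied, whereas __func__ always reports the enclosing function.

#include <stdio.h>

/* copy-paste hazard: the hard-coded name no longer matches the function */
static int delete_peers(void)
{
    fprintf(stderr, "add_peers: failed\n");  /* stale name left over from a copy-paste */
    return -1;
}

/* using __func__ keeps the reported name correct wherever the block lands */
static int delete_peers_fixed(void)
{
    fprintf(stderr, "%s: failed\n", __func__);
    return -1;
}

int main(void)
{
    delete_peers();
    delete_peers_fixed();
    return 0;
}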
Sounds like something has been broken - what Jeff describes is the intended
behavior
> On May 16, 2016, at 8:00 AM, Gilles Gouaillardet wrote:
>
> Jeff,
>
> this is not what I observed
> (tcp btl, 2 to 4 nodes with one task per node, cutoff=0)
> the add_procs of the tcp btl is invoked once ...
Jeff,
this is not what I observed
(tcp btl, 2 to 4 nodes with one task per node, cutoff=0)
the add_procs of the tcp btl is invoked once with the 4 tasks.
I checked the sources and found that the cutoff only controls whether the modex
is invoked once for all processes at init, or on demand.
Cheers,
Gilles
On Monday, Ma...
We changed the way BTL add_procs is invoked on master and v2.x for scalability
reasons.
In short: add_procs is only invoked the first time you talk to a given peer.
The cutoff switch is an override to that -- if the size of COMM_WORLD is less
than the cutoff, we revert to the old behavior of ca...
it seems I misunderstood some things ...
add_procs is always invoked, regardless of the cutoff value.
The cutoff only controls whether process info is retrieved via the modex on
demand or at init time.
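To illustrate my understanding, a tiny sketch of the decision (the names below
are hypothetical, not actual ompi symbols):

#include <stdbool.h>
#include <stddef.h>

/* hypothetical sketch of the cutoff semantics: below the cutoff, modex data
 * for every peer is fetched at init (old style); at or above it, a peer's
 * data is fetched on demand, the first time we talk to it.
 * add_procs itself is invoked in both cases. */
static bool fetch_modex_at_init(size_t world_size, size_t add_procs_cutoff)
{
    return world_size < add_procs_cutoff;
}

int main(void)
{
    /* e.g. a 4-task job with a cutoff of 32 would fetch everything at init */
    return fetch_modex_at_init(4, 32) ? 0 : 1;
}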
Someone please correct me and/or elaborate if needed.
Cheers,
Gilles
On Monday, May 16, 2016, Gilles Gouaillard...
I cannot reproduce this behavior.
Note that mca_btl_tcp_add_procs is invoked once per tcp BTL module (e.g. once
per physical NIC), so you might want to explicitly select one NIC:
mpirun --mca btl_tcp_if_include xxx ...
My printf output is the same regardless of the mpi_add_procs_cutoff value.
Cheers, ...