Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
Brian --

Do we need to change the nightly snapshot URL from

https://www.open-mpi.org/nightly/v3.x/

to 

https://www.open-mpi.org/nightly/v3.0/ ?



> On May 30, 2017, at 11:37 PM, Barrett, Brian via devel 
>  wrote:
> 
> We have now created a v3.0.x branch based on today’s v3.x branch.  I’ve reset 
> all outstanding v3.x PRs to the v3.0.x branch.  No one has permissions to 
> pull into the v3.x branch, although I’ve left it in place for a couple of 
> weeks so that people can slowly update their local git repositories.  The 
> nightly tarballs are still building, but everything else should be set up.  
> Yell when you figure out what I missed.
> 
> Brian
> 
>> On May 30, 2017, at 2:44 PM, Barrett, Brian  wrote:
>> 
>> For various reasons, the rename didn’t happen on Saturday.  It will, 
>> instead, happen tonight at 6:30pm PDT.  Sorry for the delay!
>> 
>> Brian
>> 
>>> On May 23, 2017, at 9:38 PM, Barrett, Brian  wrote:
>>> 
>>> All -
>>> 
>>> Per the discussion on today’s telecon, we’re going to rename the branch 
>>> Saturday (5/27) morning (Pacific time).  We’ll branch v3.0.x from v3.x and 
>>> update all the nightly builds and web pages.  I’m going to push through all 
>>> the PRs on 3.x which are currently outstanding, but please be careful about 
>>> pushing together complex PRs on 3.x for the rest of the week.  If something 
>>> is submitted before Saturday and doesn’t make it due to reviews, you’ll 
>>> have to resubmit.
>>> 
>>> Brian
>>> 
 On May 5, 2017, at 4:21 PM, r...@open-mpi.org wrote:
 
 +1 Go for it :-)
 
> On May 5, 2017, at 2:34 PM, Barrett, Brian via devel 
>  wrote:
> 
> To be clear, we’d do the move all at once on Saturday morning.  Things 
> that would change:
> 
> 1) nightly tarballs would be renamed from openmpi-v3.x-<date>-<git hash>.tar.gz to openmpi-v3.0.x-<date>-<git hash>.tar.gz
> 2) nightly tarballs would build from v3.0.x, not v3.x branch
> 3) PRs would need to be filed against v3.0.x
> 4) Both https://www.open-mpi.org/nightly/v3.x/ and 
> https://www.open-mpi.org/nightly/v3.0.x/ would work for searching for new 
> nightly tarballs
> 
> At some point in the future (say, two weeks), (4) would change, and only 
> https://www.open-mpi.org/nightly/v3.0.x/ would work.  Otherwise, we need 
> to have a coordinated name switch, which seems way harder than it needs 
> to be.  MTT, for example, requires a configured directory for nightlies, 
> but as long as the latest_tarball.txt is formatted correctly, everything 
> else works fine.
> 
> Brian
> 
>> On May 5, 2017, at 2:26 PM, Paul Hargrove  wrote:
>> 
>> As a maintainer of non-MTT scripts that need to know the layout of the 
>> directories containing nightly and RC tarballs, I also think that all the 
>> changes should be done soon (and all together, not spread over months).
>> 
>> -Paul
>> 
>> On Fri, May 5, 2017 at 2:16 PM, George Bosilca  
>> wrote:
>> If we rebranch from master for every "major" release it makes sense to 
>> rename the branch. In the long term renaming seems like the way to go, 
>> and thus the pain of altering everything that depends on the naming will 
>> exist at some point. I'm in favor of doing it ASAP (but I have no stake 
>> in the game as UTK does not have an MTT).
>> 
>>   George.
>> 
>> 
>> 
>> On Fri, May 5, 2017 at 1:53 PM, Barrett, Brian via devel 
>>  wrote:
>> Hi everyone -
>> 
>> We’ve been having discussions among the release managers about the 
>> choice of naming the branch for Open MPI 3.0.0 as v3.x (as opposed to 
>> v3.0.x).  Because the current plan is that each “major” release (in the 
>> sense of the three release points from master per year, not necessarily 
>> one that increases the major version number) is to rebranch off 
>> of master, there’s a feeling that we should have named the branch 
>> v3.0.x, and then named the next one 3.1.x, and so on.  If that’s the 
>> case, we should consider renaming the branch and all the things that 
>> depend on the branch (web site, which Jeff has already half-done; MTT 
>> testing; etc.).  The disadvantage is that renaming will require everyone 
>> who’s configured MTT to update their test configs.
>> 
>> The first question is should we rename the branch?  While there would be 
>> some ugly, there’s nothing that really breaks long term if we don’t.  
>> Jeff has stronger feelings than I have here.
>> 
>> If we are going to rename the branch from v3.x to v3.0.x, my proposal 
>> would be that we do it next Saturday evening (May 13th).  I’d create a 
>> new branch from the current state of v3.x and then delete the old 
>> branch.  We’d try to push all the PRs Friday so that there were no 
>> outstanding PRs that would have to be reopened.  We’d then bug everyone 
>> to upda
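
In git terms, the rename Brian describes boils down to roughly the following
on the shared repository (a sketch only; the remote name "origin" is an
assumption, the branch names are the ones discussed above):

  # Rough sketch of the server-side rename; "origin" is an assumption.
  git fetch origin
  git push origin origin/v3.x:refs/heads/v3.0.x   # create v3.0.x at the current v3.x tip
  git push origin --delete v3.x                   # later, once everyone has moved over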

Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
On May 31, 2017, at 9:59 AM, Jeff Squyres (jsquyres)  wrote:
> 
> https://www.open-mpi.org/nightly/v3.0/

Ah, you actually already moved it to:

https://www.open-mpi.org/nightly/v3.0.x/

Got it; thanks.

-- 
Jeff Squyres
jsquy...@cisco.com
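
For scripts outside MTT (like the non-MTT scripts Paul mentioned), resolving
the newest nightly under the renamed directory might look roughly like this;
it is only a sketch and assumes latest_tarball.txt simply names the newest
tarball, which should be checked against the real format:

  # Minimal sketch for non-MTT scripts; assumes latest_tarball.txt names the
  # newest tarball on its first line (the real format may differ).
  BASE=https://www.open-mpi.org/nightly/v3.0.x
  TARBALL=$(curl -fsSL "$BASE/latest_tarball.txt" | head -n 1)
  test -n "$TARBALL" || { echo "could not determine latest nightly" >&2; exit 1; }
  curl -fsSL -o "$TARBALL" "$BASE/$TARBALL"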



[OMPI devel] PMIX busted

2017-05-31 Thread George Bosilca
I have problems compiling the current master. Does anyone else have similar
issues?

  George.


  CC   base/ptl_base_frame.lo
In file included from
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/threads/thread_usage.h:31:0,
 from
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/threads/mutex.h:32,
 from
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/threads/threads.h:37,
 from
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/client/pmix_client_ops.h:18,
 from
../../../../../../../../../../opal/mca/pmix/pmix2x/pmix/src/mca/ptl/base/ptl_base_frame.c:45:
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/atomics/sys/atomic.h:80:34:
warning: "PMIX_C_GCC_INLINE_ASSEMBLY" is not defined [-Wundef]
 #define PMIX_GCC_INLINE_ASSEMBLY PMIX_C_GCC_INLINE_ASSEMBLY
  ^
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/atomics/sys/atomic.h:115:6:
note: in expansion of macro 'PMIX_GCC_INLINE_ASSEMBLY'
 #if !PMIX_GCC_INLINE_ASSEMBLY
  ^
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/atomics/sys/atomic.h:153:7:
warning: "PMIX_ASSEMBLY_BUILTIN" is not defined [-Wundef]
 #elif PMIX_ASSEMBLY_BUILTIN == PMIX_BUILTIN_SYNC
   ^
/Users/bosilca/unstable/ompi/trunk/ompi/opal/mca/pmix/pmix2x/pmix/src/atomics/sys/atomic.h:155:7:
warning: "PMIX_ASSEMBLY_BUILTIN" is not defined [-Wundef]
 #elif PMIX_ASSEMBLY_BUILTIN == PMIX_BUILTIN_GCC
   ^

Re: [OMPI devel] PMIX busted

2017-05-31 Thread r...@open-mpi.org
No - I just rebuilt it myself, and I don’t see any relevant MTT build failures. 
Did you rerun autogen?
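
For reference, a clean regenerate-and-rebuild from the top of the source tree
looks roughly like this (a sketch; the git clean path comes from the compile
paths in George's report, the prefix and -j values are illustrative):

  # Sketch of a from-scratch regenerate-and-rebuild after pulling master.
  git clean -dfx opal/mca/pmix/pmix2x    # drop stale generated files in the embedded PMIx tree
  ./autogen.pl                           # regenerate the build system
  ./configure --prefix=$HOME/ompi/install
  make -j 8 all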


> On May 31, 2017, at 7:02 AM, George Bosilca  wrote:
> 
> I have problems compiling the current master. Does anyone else have similar issues?
> 
>   George.
> 
> [...]


Re: [OMPI devel] PMIX busted

2017-05-31 Thread George Bosilca
After removing all leftover files and redoing the autogen, things went back
to normal. Sorry for the noise.

  George.



On Wed, May 31, 2017 at 10:06 AM, r...@open-mpi.org  wrote:

> No - I just rebuilt it myself, and I don’t see any relevant MTT build
> failures. Did you rerun autogen?
>
> > [...]

Re: [OMPI devel] PMIX busted

2017-05-31 Thread r...@open-mpi.org
Sorry for the hassle...

> On May 31, 2017, at 7:31 AM, George Bosilca  wrote:
> 
> After removing all leftover files and redoing the autogen, things went back to 
> normal. Sorry for the noise.
> 
>   George.
> [...]


Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Jeff Squyres (jsquyres)
On May 30, 2017, at 11:37 PM, Barrett, Brian via devel 
 wrote:
> 
> We have now created a v3.0.x branch based on today’s v3.x branch.  I’ve reset 
> all outstanding v3.x PRs to the v3.0.x branch.  No one has permissions to 
> pull into the v3.x branch, although I’ve left it in place for a couple of 
> weeks so that people can slowly update their local git repositories.  

A thought on this point...

I'm kinda in favor of ripping off the band aid and deleting the 
old/stale/now-unwritable v3.x branch in order to force everyone to update to 
the new branch name ASAP.

Thoughts?
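
From a developer checkout, the effect of deleting the old branch upstream is
roughly this (a sketch; the remote name "origin" is an assumption):

  # Once v3.x is deleted upstream, a pruning fetch drops the stale remote ref
  # and local work has to move over to v3.0.x.
  git fetch --prune origin     # origin/v3.x disappears after the deletion
  git checkout v3.0.x          # creates a local v3.0.x tracking origin/v3.0.x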

-- 
Jeff Squyres
jsquy...@cisco.com


Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread r...@open-mpi.org

> On May 31, 2017, at 7:48 AM, Jeff Squyres (jsquyres)  
> wrote:
> 
> On May 30, 2017, at 11:37 PM, Barrett, Brian via devel 
>  wrote:
>> 
>> We have now created a v3.0.x branch based on today’s v3.x branch.  I’ve 
>> reset all outstanding v3.x PRs to the v3.0.x branch.  No one has permissions 
>> to pull into the v3.x branch, although I’ve left it in place for a couple of 
>> weeks so that people can slowly update their local git repositories.  
> 
> A thought on this point...
> 
> I'm kinda in favor of ripping off the band aid and deleting the 
> old/stale/now-unwritable v3.x branch in order to force everyone to update to 
> the new branch name ASAP.
> 
> Thoughts?

FWIW: Brian very kindly already re-pointed all the existing PRs to the new 
branch.



Re: [OMPI devel] Open MPI 3.x branch naming

2017-05-31 Thread Barrett, Brian via devel

> On May 31, 2017, at 7:52 AM, r...@open-mpi.org wrote:
> 
>> On May 31, 2017, at 7:48 AM, Jeff Squyres (jsquyres)  
>> wrote:
>> 
>> On May 30, 2017, at 11:37 PM, Barrett, Brian via devel 
>>  wrote:
>>> 
>>> We have now created a v3.0.x branch based on today’s v3.x branch.  I’ve 
>>> reset all outstanding v3.x PRs to the v3.0.x branch.  No one has 
>>> permissions to pull into the v3.x branch, although I’ve left it in place 
>>> for a couple of weeks so that people can slowly update their local git 
>>> repositories.  
>> 
>> A thought on this point...
>> 
>> I'm kinda in favor of ripping off the band aid and deleting the 
>> old/stale/now-unwritable v3.x branch in order to force everyone to update to 
>> the new branch name ASAP.
>> 
>> Thoughts?
> 
> FWIW: Brian very kindly already re-pointed all the existing PRs to the new 
> branch.

Yes, I should have noted that in my original email.  That solves the existing 
PR problem, but everyone still has a bit of work to do if they had local 
changes to a branch based on v3.x.  In theory, it shouldn’t be much work to 
clean all that up.  But theory and practice don’t always match when using git 
:).
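
For someone with local work based on v3.x, that cleanup might look roughly
like this (a sketch; "my-feature" is an illustrative branch name, and it
relies on origin/v3.x still existing for the next couple of weeks):

  # Sketch of moving a local feature branch from v3.x to v3.0.x.
  git fetch origin                                          # pick up the new v3.0.x branch
  git rebase --onto origin/v3.0.x origin/v3.x my-feature    # replay local commits onto v3.0.x
  git branch --set-upstream-to=origin/v3.0.x my-feature     # track the new branch from now on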

Brian

[OMPI devel] mapper issue with heterogeneous topologies

2017-05-31 Thread Gilles Gouaillardet

Hi Ralph,


This is a follow-up on Siegmar's post that started at
https://www.mail-archive.com/users@lists.open-mpi.org/msg31177.html

mpiexec -np 3 --host loki:2,exin hello_1_mpi
--
There are not enough slots available in the system to satisfy the 3 slots
that were requested by the application:
   hello_1_mpi

Either request fewer slots for your application, or make more slots available
for use.
--



loki is a physical machine with 2 NUMA nodes, 2 sockets, ...

*but* exin is a virtual machine with *no* NUMA nodes, 2 sockets, ...

My guess is that mpirun is able to find some NUMA objects on 'loki', so it
uses the default mapping policy (aka --map-by numa). Unfortunately exin has
no NUMA objects, and mpirun fails with an error message that is hard to
interpret.

As a workaround, it is possible to

mpirun --map-by socket

So if I understand and remember correctly, mpirun should make the decision
to map by numa *after* it receives the topology from exin, and not before.

Does that make sense?

Can you please take care of that?


FWIW, I ran

lstopo --of xml > /tmp/topo.xml

on two nodes, manually removed the NUMANode and Bridge objects from the
topology of the second node, and then ran

mpirun --mca hwloc_base_topo_file /tmp/topo.xml --host n0:2,n1 -np 3 hostname

in order to reproduce the issue.
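
Collected in one place, the reproduction looks roughly like this (a sketch;
n0/n1 are the node names from the command above, and the edit on the second
node is done by hand on the generated XML):

  # Sketch of the reproduction; the NUMANode/Bridge removal on n1 is a manual edit.
  ssh n0 'lstopo --of xml > /tmp/topo.xml'    # full topology, NUMANode objects present
  ssh n1 'lstopo --of xml > /tmp/topo.xml'    # then hand-edit to drop NUMANode and Bridge objects
  mpirun --mca hwloc_base_topo_file /tmp/topo.xml --host n0:2,n1 -np 3 hostname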


Cheers,


Gilles



Re: [OMPI devel] mapper issue with heterogeneous topologies

2017-05-31 Thread r...@open-mpi.org
I don’t believe we check topologies prior to making that decision - this is why 
we provide map-by options. Seems to me that this oddball setup has a simple 
solution - all he has to do is set a mapping policy for that environment. Can 
even be done in the default mca param file.

I wouldn’t modify the code for these corner cases as it is just as likely to 
introduce errors.
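
For completeness, setting that default might look like this (a sketch;
rmaps_base_mapping_policy is the MCA parameter behind --map-by, and the file
locations assume a standard installation layout):

  # Sketch of setting the mapping policy in the default MCA param file;
  # <prefix> is the installation prefix.
  echo "rmaps_base_mapping_policy = socket" >> <prefix>/etc/openmpi-mca-params.conf
  # or per user:
  echo "rmaps_base_mapping_policy = socket" >> ~/.openmpi/mca-params.conf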

> On May 31, 2017, at 5:46 PM, Gilles Gouaillardet  wrote:
> 
> Hi Ralph,
> 
> This is a follow-up on Siegmar's post that started at
> https://www.mail-archive.com/users@lists.open-mpi.org/msg31177.html
> 
> [...]

___
devel mailing list
devel@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/devel