Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread r...@open-mpi.org
> On Sep 5, 2016, at 11:25 AM, George Bosilca wrote:
>
> Thanks for all these suggestions. I could get the expected bindings by 1) removing the vm and 2) adding hetero. This is far from an ideal setting, as now I have to make my own machinefile for every single run, or spawn daemons on ...

Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread George Bosilca
Indeed. As indicated on the other thread, if I add the novm and hetero options and specify both the --bind-to and --map-by, I get the expected behavior. Thanks, George.

On Mon, Sep 5, 2016 at 2:14 PM, r...@open-mpi.org wrote:
> I didn’t define the default behaviors - I just implemented what everyone ...
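For context, a sketch of the kind of command line being described, assuming the Open MPI 1.8/2.x option spellings --novm and --hetero-nodes; the executable and host list are placeholders, not taken from the thread:

    # Sketch: skip the allocation-spanning VM, allow nodes to differ in
    # topology, and make mapping/binding explicit.
    # ./a.out and myhosts are hypothetical.
    mpirun --novm --hetero-nodes --map-by socket --bind-to core \
           --hostfile myhosts -np 16 ./a.out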

Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread George Bosilca
Thanks for all these suggestions. I could get the expected bindings by 1) removing the vm and 2) adding hetero. This is far from an ideal setting, as now I have to make my own machinefile for every single run, or spawn daemons on all the machines on the cluster. Wouldn't it be useful to make the d...

Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread r...@open-mpi.org
I didn’t define the default behaviors - I just implemented what everyone said they wanted, as eventually captured in a Google spreadsheet Jeff posted (which was available and discussed for weeks before being implemented). So the defaults are:
* if np <= 2, we map-by core, bind-to core
* if np > 2, we ma...
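To illustrate those defaults (a sketch, not from the thread; ./a.out is a placeholder), --report-bindings prints where each rank actually lands:

    # np <= 2: expect the per-core mapping and binding default.
    mpirun --report-bindings -np 2 ./a.out
    # np > 2: expect the coarser socket-level default.
    mpirun --report-bindings -np 8 ./a.out
    # Either default can be overridden explicitly:
    mpirun --map-by core --bind-to core --report-bindings -np 8 ./a.out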

Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread George Bosilca
On Sat, Sep 3, 2016 at 10:34 AM, r...@open-mpi.org wrote:
> Interesting - well, it looks like ORTE is working correctly. The map is what you would expect, and so is planned binding.
>
> What this tells us is that we are indeed binding (so far as ORTE is concerned) to the correct places. Rank...
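One way to cross-check ORTE's planned binding against what the OS actually enforces (a sketch assuming the hwloc command-line tools are installed; the PID is a placeholder, and this is not something done in the thread itself):

    # Each launched process prints the cpuset it is really bound to;
    # compare this with the --report-bindings output.
    mpirun --report-bindings -np 4 hwloc-bind --get
    # The kernel's view of an already-running rank:
    grep Cpus_allowed_list /proc/<pid>/status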

Re: [OMPI devel] Hanging tests

2016-09-05 Thread Gilles Gouaillardet
OK, I will double-check tomorrow that this was the very same hang I fixed earlier. Cheers, Gilles.

On Monday, September 5, 2016, r...@open-mpi.org wrote:
> I was just looking at the overnight MTT report, and these were present going back a long way in both branches. They are in the Intel test suite ...

Re: [OMPI devel] Hanging tests

2016-09-05 Thread r...@open-mpi.org
I was just looking at the overnight MTT report, and these were present going back a long way in both branches. They are in the Intel test suite. If you have already addressed them, then thanks!

> On Sep 5, 2016, at 7:48 AM, Gilles Gouaillardet wrote:
>
> Ralph,
>
> I fixed a hang earlier ...

Re: [OMPI devel] Hanging tests

2016-09-05 Thread Gilles Gouaillardet
Ralph,

I fixed a hang earlier today in master, and the PR for v2.x is at https://github.com/open-mpi/ompi-release/pull/1368
Can you please make sure you are running the latest master? Which test suite do these tests come from? I will have a look tomorrow if the hang is still there.

Cheers, Gi...

[OMPI devel] Hanging tests

2016-09-05 Thread r...@open-mpi.org
Hey folks,

All of the tests that involve either ISsend_ator, SSend_ator, ISsend_rtoa, or SSend_rtoa are hanging on master and v2.x. Does anyone know what these tests do, and why we never seem to pass them? Do we care?

Ralph
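A generic way to reproduce a suspected hang with a bounded wait (a sketch; the test name comes from the message above, while the path, process count, and timeout are assumptions):

    # Turn a hang into a timeout so the run cannot wedge the session.
    timeout 300 mpirun -np 4 ./ISsend_ator; echo "exit: $?"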

Re: [OMPI devel] Performance analysis proposal

2016-09-05 Thread Christoph Niethammer
Hi, I have now done some measurements on our system. The results do not seem as stable as the ones already reported for short messages, but they show generally the same behavior, with a peak around 1 kB. Vader performs much better in 2.x and master than in 1.10 in the threaded case. Thanks for the info about the ...
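For anyone reproducing these numbers, a sketch of how the shared-memory transport is typically pinned down for such runs; the benchmark binary is a placeholder and the exact peak will vary by system:

    # Restrict point-to-point transport to vader (plus self for loopback)
    # so the shared-memory path is what gets measured.
    mpirun --mca btl vader,self -np 2 ./latency_benchmark
    # In the 1.10 series the older "sm" BTL can be forced the same way
    # for an apples-to-apples comparison:
    mpirun --mca btl sm,self -np 2 ./latency_benchmark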