Re: [OMPI devel] fortran calling MPI_* instead of PMPI_*

2015-08-31 Thread Gilles Gouaillardet
Jeff,

I filed PR #845: https://github.com/open-mpi/ompi/pull/845

Could you please have a look?

Cheers,
Gilles

On 8/30/2015 9:20 PM, Gilles Gouaillardet wrote:
> ok, will do. Basically, I simply have to #include "ompi/mpi/c/profile/defines.h" if configure set the WANT_MPI_PROFILING macro (since …

Re: [OMPI devel] fortran calling MPI_* instead of PMPI_*

2015-08-31 Thread Jeff Squyres (jsquyres)
Sweet. Let's follow up on that PR. Thanks!

> On Aug 31, 2015, at 3:10 AM, Gilles Gouaillardet wrote:
>
> Jeff,
>
> I filed PR #845: https://github.com/open-mpi/ompi/pull/845
>
> Could you please have a look?
>
> Cheers,
> Gilles
>
> On 8/30/2015 9:20 PM, Gilles Gouaillardet wrote:
>> ok, …

[OMPI devel] Status update: PMIx on master

2015-08-31 Thread Ralph Castain
Hi folks,

Per last week's telecon, I committed the PR to bring PMIx into master. As discussed, things are generally working okay; we had a little cleanup to do once the code was exposed to different environments, but nothing too horrible (thanks Gilles!).

First, a quick status update. We know …

Re: [OMPI devel] Status update: PMIx on master

2015-08-31 Thread Howard Pritchard
Hi Ralph,

Thanks for getting this in! I verified for master/HEAD today that, modulo the caveats about spawn/pub/sub etc., job launches on Cray using aprun or srun work as expected, so some of the MTT failures over the weekend should go away with runs this week.

Thanks,
Howard

2015-08-31 …

[OMPI devel] Dual rail IB card problem

2015-08-31 Thread Rolf vandeVaart
There was a problem reported on the users list about Open MPI always picking one Mellanox card when there were two in the machine.

http://www.open-mpi.org/community/lists/users/2015/08/27507.php

We dug a little deeper, and I think this has to do with how hwloc is figuring out where one of the …

Re: [OMPI devel] Dual rail IB card problem

2015-08-31 Thread Atchley, Scott
What is the output of /sbin/lspci -tv?

On Aug 31, 2015, at 4:06 PM, Rolf vandeVaart wrote:
> There was a problem reported on the users list about Open MPI always picking one Mellanox card when there were two in the machine.
>
> http://www.open-mpi.org/community/lists/users/2015/08/27507.php

Re: [OMPI devel] Dual rail IB card problem

2015-08-31 Thread Brice Goglin
The locality of mlx4_0, as reported by lstopo, is "near the entire machine" (while mlx4_1 is reported near NUMA node #3). I would vote for buggy PCI-NUMA affinity being reported by the BIOS. But I am not very familiar with 4x E5-4600 machines, so please make sure this PCI slot is really attached to a …

Re: [OMPI devel] Dual rail IB card problem

2015-08-31 Thread Gilles Gouaillardet
Brice,

As a side note, what is the rationale for defining the distance as a floating point number? I remember I had to fix a bug in ompi a while ago /* e.g. replace if (d1 == d2) with if (fabs(d1 - d2) < epsilon) */

Cheers,
Gilles

On 9/1/2015 5:28 AM, Brice Goglin wrote:
> The locality of mlx4_0 as …

[OMPI devel] Problem running from ompi master

2015-08-31 Thread Cabral, Matias A
Hi,

Before submitting a pull request, I decided to test some changes on the ompi master branch, but I'm facing an unrelated runtime error with the ess pmi component not being found. I confirmed PATH and LD_LIBRARY_PATH are set correctly, and also that mca_ess_pmi.so is where it should be. Any suggestions?

Thanks,

Regards, …

Re: [OMPI devel] Problem running from ompi master

2015-08-31 Thread Gilles Gouaillardet
Hi,

This part has been revamped recently. As a first step, I would recommend you make a fresh install: remove the install directory (and the build directory if you use VPATH), then re-run configure && make && make install. That should hopefully fix the issue.

Cheers,
Gilles

On 9/1/2015 9:35 AM, Cabral, Mat…