Hello
It's a float because we normalize to 1 on the diagonal (some AMD
machines have values like 10 on the diagonal and 16 or 22 otherwise, so
you get 1.0, 1.6 or 2.2 after normalization), and also because some users
wanted to specify their own distance matrix.
I'd like to clean up the distance API.
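For reference, here's a minimal sketch of that normalization in plain C
(the raw latency matrix is made up, and this is not the actual hwloc code):

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        /* Made-up SLIT-style raw latencies: 10 on the diagonal,
         * 16 or 22 elsewhere, as on the AMD machines mentioned above. */
        unsigned raw[N][N] = {
            { 10, 16, 16, 22 },
            { 16, 10, 22, 16 },
            { 16, 22, 10, 16 },
            { 22, 16, 16, 10 },
        };
        float dist[N][N];
        unsigned i, j;

        /* Divide each row by its diagonal entry so the diagonal
         * becomes 1.0; 16 -> 1.6 and 22 -> 2.2. */
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                dist[i][j] = (float) raw[i][j] / (float) raw[i][i];

        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++)
                printf("%.1f ", dist[i][j]);
            printf("\n");
        }
        return 0;
    }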
On 01/09/2015 15:59, marcin.krotkiewski wrote:
> Dear Rolf and Brice,
>
> Thank you very much for your help. I have now moved the 'dubious' IB
> card from Slot 1 to Slot 5. It is now reported by hwloc as bound to a
> separate NUMA node. In this case OpenMPI works as could be expected:
>
> - NUM
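In case it helps double-check such bindings programmatically, a rough
sketch against the hwloc 1.x API (the PCI bus ID is a placeholder, error
handling omitted):

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_obj_t dev, anc;

        hwloc_topology_init(&topo);
        /* hwloc 1.x: request discovery of I/O devices such as the HCA. */
        hwloc_topology_set_flags(topo, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
        hwloc_topology_load(topo);

        /* 0000:81:00.0 is a placeholder; substitute the card's bus ID. */
        dev = hwloc_get_pcidev_by_busid(topo, 0, 0x81, 0x00, 0);
        if (dev) {
            /* First non-I/O ancestor: the NUMA node the card sits under,
             * or the whole machine if it isn't bound to one node. */
            anc = hwloc_get_non_io_ancestor_obj(topo, dev);
            printf("device sits under %s #%u\n",
                   hwloc_obj_type_string(anc->type), anc->os_index);
        }
        hwloc_topology_destroy(topo);
        return 0;
    }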
Hi Gilles,
I deleted everything, re-cloned and re-built (without my patch), but still see
the same issue. The only option I'm using with configure is --prefix. I even
tried building with --enable-mpirun-prefix-by-default, and also passing the
prefix at runtime (mpirun --prefix=/...), but I al
What system is this on? CentOS7? Are you doing a VPATH build, or doing the
build in the repo location?
Also, I assume you remembered to run autogen.pl before configure, yes?
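For comparison, here's the sequence I'd expect for an in-tree build (the
install path below is just a placeholder):

    shell$ ./autogen.pl
    shell$ ./configure --prefix=/path/to/install
    shell$ make -j8 all
    shell$ make install
    shell$ mpirun --prefix /path/to/install -np 2 ./a.out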
> On Sep 1, 2015, at 10:11 AM, Cabral, Matias A
> wrote:
>
> Hi Gilles,
>
> I deleted everything, re-cloned and r
Hi Ralph,
RHEL 7.0, building in the repo location.
Yes, running autogen.pl to generate configure.
I suspect this is unrelated, but I saw this while running make install:
WARNING! Common symbols found:
btl_openib_lex.o: 0008 C btl_openib_ini_yyleng
btl_openib_lex.o: 00
Those are just warnings with no functional impact - just a reminder to
developers about cleanup we promised to do over time.
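Background, for the curious: a "common" symbol is just an uninitialized
file-scope variable - a tentative definition in C terms. A tiny
illustration, unrelated to the actual openib code:

    /* common.c */
    int leng;        /* tentative definition: the "C" (common) in the
                      * nm-style output above, with the -fcommon default */
    int count = 0;   /* initialized: lands in .data, no warning */

Giving each such global an initializer, or declaring it extern and
defining it in exactly one file, is the cleanup being referred to.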
I’ll try to reproduce this here.
> On Sep 1, 2015, at 11:11 AM, Cabral, Matias A
> wrote:
>
> Hi Ralph,
>
> RHEL 7.0, building in the repo location.
>
> Yes,
Hmmm…I cannot reproduce the problem. I configured the same way on a CentOS7
box, and everything runs just fine.
It has to be something in your library path, I think. Are you by chance adding
the prefix location to the end of the ld path instead of the beginning? Or some
oddity in your autotool
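If you're setting the path by hand, the first form below is what you
want; the second lets a stale libmpi elsewhere on the path win (the
install path is a placeholder):

    shell$ export LD_LIBRARY_PATH=/path/to/install/lib:$LD_LIBRARY_PATH  # prefix searched first
    shell$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/install/lib  # prefix searched last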
Hi all,
The OpenMPI FAQ says:
https://www.open-mpi.org/faq/?category=slurm#slurm-direct-srun-mpi-apps
# Yes, if you have configured OMPI --with-pmi=foo, where foo is
# the path to the directory where pmi.h/pmi2.h is located.
# Slurm (> 2.6, > 14.03) installs PMI-2 support by default.
However, w
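(Concretely, the FAQ's advice would look something like the lines below;
the PMI include path is site-specific, and /usr/include/slurm is only a
guess for where a Slurm install puts pmi.h/pmi2.h:)

    shell$ ./configure --prefix=/path/to/install --with-pmi=/usr/include/slurm
    shell$ make install
    shell$ srun -n 4 ./a.out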