Oleg, I can build the latest master branch of OpenMPI in WSL.
I can give it a try with 3.1.2 if that would be any help to you.
uname -a
Linux Johns-Spectre 4.4.0-17134-Microsoft #285-Microsoft Thu Aug 30
17:31:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
apt-get upgrade
apt-get install gfortran
wget https:
Yeah, there’s no good answer here from an “automatically do the right thing”
point of view. The reachable:netlink component (which is used for the TCP BTL)
only works with libnl-3 because libnl-1 is a real pain to deal with if you’re
trying to parse route behaviors. It will do the right thing
Oleg, I have a Windows 10 system and could help by testing this also.
But I have to say - it will be quicker just to install VirtualBox and
a CentOS VM. Or an Ubuntu VM.
You can then set up a small test network of VMs using the VirtualBox
HostOnly network for tests of your MPI code.
On Wed, 19 Sep
Alan --
Sorry for the delay.
I agree with Gilles: Brian's commit had to do with "reachable" plugins in Open
MPI -- they do not appear to be the problem here.
From the config.log you sent, it looks like configure aborted because you
requested UCX support (via --with-ucx) but configure wasn't
I can't say that we've tried to build on WSL; the fact that it fails is
probably not entirely surprising. :-(
I looked at your logs, and although I see the compile failure, I don't see any
reason *why* it failed. Here's the relevant fail from the tar_openmpi_fail
file:
-
5523 Making al
Yeah, it's a bit terrible, but we didn't reliably reproduce this problem for
many months, either. :-\
As George noted, it's been ported to all the release branches but is not yet in
an official release. Until an official release (4.0.0 just had an rc; it will
be released soon, and 3.0.3 will
On further investigation removing the "preconnect_all" option does change the
problem at least. Without "preconnect_all" I no longer see:
--
At least one pair of MPI processes are unable to reach each other for
MPI communicat
I can't speculate on why you did not notice the memory issue before; for
months we (the developers) didn't notice it either, and our testing
infrastructure didn't catch this bug despite running millions of tests. The
root cause of the bug was a memory ordering issue, and these are really
tricky
Hi George
thanks for your answer. I was previously using OpenMPI 3.1.2 and also had
this problem. However, using --enable-debug --enable-mem-debug at
configuration time, I was unable to reproduce the failure, and it was quite
difficult for me to trace the problem. Maybe I have not run enough