Re: [OMPI devel] Bug report: using static build

2014-07-12 Thread Ralph Castain
Hmmm... --enable-static works just fine (at least, for the 1.8.2rc) on CentOS, so this may be a Debian thing. We have some protection in there for getpwuid on the Cray, where static builds take similar exception to it, but that requires you configure with --disable-getpwuid. I was unaware of p

[OMPI devel] Bug report: using static build

2014-07-12 Thread Andrey Gursky
Dear developers and subscribers, I'm not aware of any information on how the open-mpi static build is validated. Is there any documentation about it? For now I have tested the static build on Debian Jessie (testing) amd64 with openmpi-1.6.5 and openmpi-1.8.1. There are a few issues with it. - openmpi doesn

Re: [OMPI devel] Bug report: non-blocking allreduce with user-defined operation gives segfault

2014-04-24 Thread Rupert Nash
Hi George, Having looked again, you're correct that the two 2buf reductions are wrong. For now, I've updated my patch to nbc.c to copy buf1 into buf3 and then do buf3 OP= buf2 (see below). Patching ompi_3buff_op_reduce to cope with user-defined operations is certainly possible, but I don't r
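
A minimal sketch of the workaround described above, assuming a contiguous datatype; the buffer and function names (buf1/buf2/buf3, user_fn) are illustrative and the real change lives inside nbc.c:

#include <mpi.h>
#include <cstring>

// Emulate "buf3 = buf1; buf3 op= buf2" using only the in-place calling
// convention that user-defined MPI operations support. Illustrative names;
// assumes a contiguous datatype so count * extent equals the buffer size.
void reduce_into(void *buf1, void *buf2, void *buf3,
                 int count, MPI_Datatype dtype, MPI_User_function *user_fn)
{
    MPI_Aint lb, extent;
    MPI_Type_get_extent(dtype, &lb, &extent);

    // Step 1: buf3 = buf1.
    std::memcpy(buf3, buf1, (size_t)count * (size_t)extent);

    // Step 2: buf3 op= buf2. A user op computes inoutvec = invec op inoutvec,
    // so operand order only matters for non-commutative operations.
    user_fn(buf2, buf3, &count, &dtype);
}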

Re: [OMPI devel] Bug report: non-blocking allreduce with user-defined operation gives segfault

2014-04-23 Thread George Bosilca
Rupert, You are right, the code for any non-blocking reduce was not built with user-level ops in mind. However, I'm not sure about your patch. One reason is that ompi_3buff is doing target = source1 op source2, while ompi_2buf is doing target op= source (notice the op=). Thus you can't replace omp
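
A tiny illustration, with plain ints and made-up variable names, of the two calling conventions being contrasted here (the real code operates on typed buffers inside Open MPI's op framework):

#include <cstdio>

int main()
{
    int source1 = 2, source2 = 3, source = 3;

    // ompi_3buff-style: target = source1 op source2 -- the target's previous
    // contents are never read.
    int target = source1 + source2;

    // ompi_2buf-style: target op= source -- the target is updated in place,
    // so its previous contents matter.
    int target2 = 2;
    target2 += source;

    std::printf("%d %d\n", target, target2);   // prints 5 5
    return 0;
}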

[OMPI devel] Bug report: non-blocking allreduce with user-defined operation gives segfault

2014-04-23 Thread Rupert Nash
Hello devel list, I've been trying to use a non-blocking MPI_Iallreduce in a CFD application I'm working on, but it kept segfaulting on me. I have reduced it to a simple test case - see the gist here for the full code: https://gist.github.com/rupertnash/1182 Build and run with:
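
A hedged reconstruction of the kind of test case being described (the actual code is in the linked gist; the operation, buffer sizes, and values below are illustrative):

#include <mpi.h>
#include <iostream>

// User-defined reduction: element-wise sum of doubles.
// Per the MPI convention, inoutvec = invec op inoutvec.
void my_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *)
{
    double *in = static_cast<double *>(invec);
    double *inout = static_cast<double *>(inoutvec);
    for (int i = 0; i < *len; ++i)
        inout[i] += in[i];
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    MPI_Op op;
    MPI_Op_create(&my_sum, 1 /* commutative */, &op);

    double send[4] = {1.0, 2.0, 3.0, 4.0};
    double recv[4] = {0.0};

    // Non-blocking all-reduce with the user-defined op (requires an MPI
    // implementation that provides MPI_Iallreduce).
    MPI_Request req;
    MPI_Iallreduce(send, recv, 4, MPI_DOUBLE, op, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        std::cout << "recv[0] = " << recv[0] << std::endl;

    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}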

Re: [OMPI devel] Bug Report cxx/constants.h

2012-08-15 Thread Jeff Squyres
Weird -- it never caused an issue for my clang version. Shrug. Thanks for the heads up! I committed it to the SVN trunk; it will be included in v1.6.1. On Aug 15, 2012, at 9:08 AM, John T. Foster wrote: > In the release version of open-mpi 1.6 > > cxx/constants.h header file on line 54 ther

[OMPI devel] Bug Report cxx/constants.h

2012-08-15 Thread John T. Foster
In the release version of open-mpi 1.6, the cxx/constants.h header file has an extra semi-colon at the end of line 54. This causes the clang compiler to fail on a Mac when the header is included. JTF -- John T. Foster Assistant Professor Mechanical Engineering Department The Universi
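
A hypothetical illustration only (not the actual contents of constants.h) of the kind of construct being reported: a stray extra semicolon after a namespace-scope declaration is an empty declaration in C++03, which some clang versions and flag combinations reject while others merely warn, as the follow-up above notes.

// Hypothetical example -- not the real line 54 of cxx/constants.h.
namespace MPI {
    static const int EXAMPLE_CONSTANT = 0;;   // the second ';' is the stray one
}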

Re: [OMPI devel] bug report-

2011-08-09 Thread Shiqing Fan
Now I see the problem. The Open MPI binaries were built with the Microsoft cl compiler, which uses different name-mangling conventions, so the symbols couldn't be resolved by the g++ compiler. I've started work on native MinGW compiler support; some projects can already be built via gcc or g++, but it's not finished

Re: [OMPI devel] bug report-

2011-08-09 Thread Shiqing Fan
Hi, Which command did you use to compile your code? I tried the following code on my Windows 7 machine with the compile command "mpicxx hello.cpp": hello.cpp === # include "mpi.h" using namespace std; int main ( int argc, char *argv[] ) { int rank, size; MPI::Ini
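
A hedged completion of the truncated hello.cpp fragment above, using the MPI-2 C++ bindings that the preserved portion starts with (the output text is illustrative):

#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    int rank, size;
    MPI::Init(argc, argv);                 // C++ bindings, as in the fragment
    rank = MPI::COMM_WORLD.Get_rank();
    size = MPI::COMM_WORLD.Get_size();
    cout << "Hello from rank " << rank << " of " << size << endl;
    MPI::Finalize();
    return 0;
}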

Re: [OMPI devel] bug report-

2011-08-09 Thread Jeff Squyres
If all processes are coming out to be rank 0, it *may* mean that you're dynamically linking to the wrong MPI library at run time, or have some other kind of MPI implementation/version mismatch. At least, it can mean this in a POSIX environment. I don't rightly know what it means in a Windows

Re: [OMPI devel] bug report-

2011-08-09 Thread Shiqing Fan
Hi Renyong, If the same problem occurs under Linux, then the Boost.MPI library might have compatibility issues with Open MPI, but that still needs to be verified. However, I'm also puzzled as to why the simple code didn't work for you. My only guess is that the environment is messed up by different MPI im

Re: [OMPI devel] bug report-

2011-08-09 Thread Shiqing Fan
Hi, The code works for me under the MinGW console with the pre-compiled installer. Could you try "which mpicc" to ensure that the correct Open MPI commands are in the path? For building Open MPI yourself with CMake, you have to configure it in the GUI and then generate the sln files by pressing

Re: [OMPI devel] bug report-

2011-08-08 Thread Shiqing Fan
Hi, I've never tried Boost.MPI with Open MPI on Windows. Does it work without the Boost.MPI library? Did you run your test under MinGW? Regards, Shiqing On 2011-08-08 5:31 PM, renyong.yang wrote: My runtime environment is Windows 7, with the unstable OpenMPI_v1.5.3-2_win32.exe

[OMPI devel] bug report-

2011-08-08 Thread renyong.yang
My runtime environment is Windows 7, with the unstable OpenMPI_v1.5.3-2_win32.exe release for Windows, together with the Microsoft Compute Cluster Pack. Additionally, I'm using the Boost.MPI library v1.47 compiled by min

Re: [OMPI devel] Bug report: single processor MPI_Allgatherv

2008-01-19 Thread George Bosilca
Daniel, Thanks for the fix. It is indeed the right solution. I'll make sure it gets into the trunk asap. Thanks, george. On Jan 18, 2008, at 4:57 PM, Daniel G. Hyams wrote: Sorry to reply to my own mail, but the bug only affects MPI_Allgatherv. In this changeset: https://svn.open-

Re: [OMPI devel] Bug report: single processor MPI_Allgatherv

2008-01-18 Thread Daniel G. Hyams
Sorry to reply to my own mail, but the bug only affects MPI_Allgatherv. In this changeset: https://svn.open-mpi.org/trac/ompi/changeset/16360 In coll_self_allgatherv.c, the "extent" variable is never used. So the fix is just to multiply "extent" by disps[0], on line 50. I've verified that
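
A hedged sketch of what the fix amounts to (names are illustrative; the actual change is a one-liner in coll_self_allgatherv.c): the receive location must be offset by disps[0] elements, i.e. disps[0] scaled by the datatype extent, rather than by disps[0] bytes.

#include <mpi.h>

// Compute where the single process's contribution should land in the receive
// buffer. Illustrative helper only; the real code lives in coll_self_allgatherv.c.
void *recv_location(void *rbuf, const int *disps, MPI_Datatype rdtype)
{
    MPI_Aint lb, extent;
    MPI_Type_get_extent(rdtype, &lb, &extent);
    return (char *)rbuf + (MPI_Aint)disps[0] * extent;   // not "+ disps[0]"
}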

[OMPI devel] Bug report: single processor MPI_Allgatherv

2008-01-18 Thread Daniel G. Hyams
I don't think that the displacements (disps) are being handled correctly in MPI_Allgatherv for the single-process case. The disps are being handled as byte offsets instead of 'item' offsets... they need to be multiplied by the size, in bytes, of the MPI_Datatype being sent. This bug seems to b
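
A hedged sketch of a single-process test that exercises the behaviour described above (values and sizes are illustrative): with a non-zero displacement, the gathered data must land disps[0] items, not disps[0] bytes, into the receive buffer.

#include <mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    double send[2] = {1.0, 2.0};
    double recv[4] = {0.0};
    int recvcounts[1] = {2};
    int disps[1] = {2};          // a displacement of 2 doubles (16 bytes)

    // Run with a single process, e.g. "mpirun -np 1 ./a.out", so that the
    // single-process code path discussed above is exercised.
    MPI_Allgatherv(send, 2, MPI_DOUBLE,
                   recv, recvcounts, disps, MPI_DOUBLE, MPI_COMM_WORLD);

    std::cout << "recv = " << recv[0] << " " << recv[1] << " "
              << recv[2] << " " << recv[3]
              << "  (expected 0 0 1 2)" << std::endl;

    MPI_Finalize();
    return 0;
}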