[OMPI users] building boost.mpi with openmpi: mpi.jam

2011-06-11 Thread MM
Hello. Boost: 1.46.1, Open MPI: 1.5.3, WinXP 64-bit. For Open MPI mailing list users: Boost ships with a Boost.MPI library, a C++-native library that wraps any available MPI-1 implementation. Boost libraries can be built with bjam, a tool that is part of Boost's build system. It comes wit…
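For context, the usual way to point Boost.Build at an MPI installation is a `using mpi ;` line in `user-config.jam`. The sketch below is illustrative only; the toolset and the compiler-wrapper path are assumptions and must match the local install:

```jam
# user-config.jam -- minimal sketch; adjust paths/toolset for your setup.
# With no argument, Boost.Build tries to auto-detect mpicc/mpic++ on PATH:
using mpi ;
# Or point it at a specific wrapper explicitly (path is a placeholder):
# using mpi : "C:/openmpi-1.5.3/bin/mpic++.exe" ;
```

A typical build invocation would then be something like `bjam --with-mpi toolset=msvc address-model=64 stage` (flags here are an assumption based on the WinXP 64-bit setup mentioned above).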

Re: [OMPI users] mpi.h:: OMPI_HAVE_FORTRAN_LOGICAL / INTEGER / REAL are set to 0 (zero)

2011-06-11 Thread Shiqing Fan
Hi, I ran a few tests to reproduce the problem and found that it is caused by using a single CMake cache to handle the Fortran settings, so in some cases part of the cache can be refreshed. I'm considering adding a second cache to make it more convenient for users. So the current so…
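Until a fix lands, the usual workaround for a stale CMake cache is to wipe it and re-configure with the Fortran compiler stated explicitly. This is a generic CMake sketch, not a confirmed fix from the thread; the compiler name and source path are placeholders:

```shell
# From the build directory: discard the stale cache so Fortran
# detection runs again from scratch (on Windows use: del CMakeCache.txt).
rm -f CMakeCache.txt
# Re-configure, naming the Fortran compiler explicitly so the
# OMPI_HAVE_FORTRAN_* settings are detected consistently:
cmake .. -DCMAKE_Fortran_COMPILER=ifort
```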

Re: [OMPI users] Deadlock with barrier und RMA

2011-06-11 Thread Constantinos Makassikis
On Sat, Jun 11, 2011 at 5:17 PM, Ole Kliemann <ole-ompi-2...@mail.plastictree.net> wrote:
> On Sat, Jun 11, 2011 at 07:24:24AM -0600, Ralph Castain wrote:
> > Oh my - that is such an old version! Any reason for using it instead of something more recent?
> I'm using the cluster of the universi…

Re: [OMPI users] Deadlock with barrier und RMA

2011-06-11 Thread Ole Kliemann
On Sat, Jun 11, 2011 at 07:24:24AM -0600, Ralph Castain wrote:
> Oh my - that is such an old version! Any reason for using it instead of something more recent?
I'm using the cluster of the university where I work and I'm not the admin, so I'm going with what is installed there. It's the first…

Re: [OMPI users] Deadlock with barrier und RMA

2011-06-11 Thread Ralph Castain
Oh my - that is such an old version! Any reason for using it instead of something more recent?
On Jun 11, 2011, at 8:43 AM, Ole Kliemann wrote:
> Hi everyone!
> I'm trying to use MPI on a cluster running OpenMPI 1.2.4 and starting processes through PBSPro_11.0.2.110766. I've been running i…

[OMPI users] Deadlock with barrier und RMA

2011-06-11 Thread Ole Kliemann
Hi everyone! I'm trying to use MPI on a cluster running OpenMPI 1.2.4, starting processes through PBSPro_11.0.2.110766. I've been running into a couple of performance and deadlock problems and would like to check whether I'm making a mistake. One of the deadlocks I managed to boil down to the attach…
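The attachment itself is not preserved in this digest, but the general shape of a barrier-plus-RMA deadlock on old MPI implementations can be sketched as follows. This is an illustrative reconstruction, not the poster's actual code: the window layout and the passive-target access are assumptions, and whether it actually hangs depends on how aggressively the implementation progresses one-sided operations.

```c
/* Sketch: passive-target RMA racing against a collective barrier.
 * On implementations (such as old Open MPI 1.2.x) where passive-target
 * RMA only makes progress while the target is inside certain MPI calls,
 * rank 0 can stall in the lock/put epoch while rank 1 already sits in
 * MPI_Barrier, and the program hangs. Compile with mpicc, run with
 * mpirun -np 2. */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int through the window. */
    MPI_Win_create(&value, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 0) {
        int one = 1;
        /* Passive-target epoch: write into rank 1's window. */
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
        MPI_Put(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        MPI_Win_unlock(1, win);
    }

    /* Rank 1 may reach this barrier before rank 0's epoch completes;
     * if the library cannot progress the Put here, nobody advances. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

On a standard-conforming implementation with asynchronous progress this completes normally, which is exactly why such bugs tend to surface only on particular (often older) MPI versions.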