Okay, I can reproduce this problem. Frankly, I don't think this ever worked
with OMPI, and I'm not sure how the choice of BTL makes a difference.
The program is crashing in the communicator definition, which involves a
communication over our internal out-of-band messaging system. That system has …

Dear OpenMPI Users,
I successfully installed OpenMPI on some FreeBSD machines and I can
run MPI programs on the cluster. Yippie!
But I'm not patient enough to write my own MPI-based routines. So I
thought maybe I could ask here for suggestions. I am primarily
interested in general linear algebra …

Reopening this thread. While searching for another problem, I ran across this
one in a different context. It turns out there really is a bug here that needs
to be addressed.
I'll try to tackle it this weekend; I'll update you when it's done.
On Jun 25, 2010, at 7:23 AM, Philippe wrote:
> Hi,
>
> I'm trying …

Jeff Squyres wrote:
> On Jul 17, 2010, at 4:22 AM, Anton Shterenlikht wrote:
> > Is loop vectorisation/unrolling safe for MPI logic?
> > I presume it is, but are there situations where
> > loop vectorisation could e.g. violate the order
> > of execution of MPI calls?
> I *assume* that the intel compiler will not unroll loops that contain MPI calls …

On Jul 17, 2010, at 8:13 AM, David Zhang wrote:
> collective calls return once they receive a reply from everyone in the
> communicator that the message has been received (this is done under the
> hood). Thus, since only one process in the communicator calls Bcast, that
> process will hang indefinitely waiting for a reply from the other processes
> on the same communicator.

On Sat, Jul 17, 2010 at 07:50:30AM -0400, Jeff Squyres wrote:
> On Jul 17, 2010, at 4:22 AM, Anton Shterenlikht wrote:
>
> > Is loop vectorisation/unrolling safe for MPI logic?
> > I presume it is, but are there situations where
> > loop vectorisation could e.g. violate the order
> > of execution of MPI calls?

collective calls return once they receive a reply from everyone in the
communicator that the message has been received (this is done under the
hood). Thus, since only one process in the communicator calls Bcast, that
process will hang indefinitely waiting for a reply from the other processes on
the same communicator.
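
To make the failure mode concrete, here is a minimal sketch (my own, not code
from this thread) in which only one rank enters the collective:

program bcast_hang
  use mpi
  implicit none
  integer :: rank, ierr, var

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  var = 42
  ! Broken on purpose: MPI_Bcast is a collective and must be called by
  ! every rank in the communicator. Since only rank 0 enters it here,
  ! the program is erroneous and, on two or more processes, rank 0
  ! typically blocks in the broadcast forever.
  if (rank .eq. 0) then
     call MPI_Bcast(var, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
  end if

  call MPI_Finalize(ierr)
end program bcast_hang
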
On Sat, Jul 17, 2010 at 07:49:21AM -0400, Jeff Squyres wrote:
> On Jul 17, 2010, at 4:13 AM, Anton Shterenlikht wrote:
>
> > Sorry, just to be absolutely clear, are you saying
> > that even though only one process in the communicator
> > is calling Bcast, the call will be made on all
> > processes?

On Jul 17, 2010, at 4:22 AM, Anton Shterenlikht wrote:
> Is loop vectorisation/unrolling safe for MPI logic?
> I presume it is, but are there situations where
> loop vectorisation could e.g. violate the order
> of execution of MPI calls?
I *assume* that the intel compiler will not unroll loops that contain MPI calls …
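
For intuition, here is a purely illustrative subroutine (my own sketch; the
name, destination rank, and tags are made up): a loop whose body contains an
opaque external call such as MPI_Send cannot be vectorised, because the
compiler must assume the call has side effects, whereas a call-free arithmetic
loop is a legitimate candidate:

subroutine demo(a, n, comm)
  use mpi
  implicit none
  integer, intent(in) :: n, comm
  double precision, intent(inout) :: a(n)
  integer :: i, ierr

  ! MPI_Send is opaque to the compiler, so it must preserve the exact
  ! iteration order of this loop; the messages go out in order.
  do i = 1, n
     call MPI_Send(a(i), 1, MPI_DOUBLE_PRECISION, 0, i, comm, ierr)
  end do

  ! Plain arithmetic with no calls: vectorising or reordering this loop
  ! cannot affect any MPI semantics.
  do i = 1, n
     a(i) = 2.0d0*a(i) + 1.0d0
  end do
end subroutine demo
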
On Jul 17, 2010, at 4:13 AM, Anton Shterenlikht wrote:
> Sorry, just to be absolutely clear, are you saying
> that even though only one process in the communicator
> is calling Bcast, the call will be made on all
> processes?
MPI does not magically cause all processes to call MPI_Bcast behind the scenes …
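
In other words, each process must execute its own MPI_Bcast call. A minimal
sketch of the usual pattern (root 0 and the value 42 are just placeholders):

program bcast_ok
  use mpi
  implicit none
  integer :: rank, ierr, var

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  if (rank .eq. 0) var = 42   ! only the root holds the value beforehand

  ! Every rank makes the identical call with the same root; rank 0
  ! sends, and all other ranks receive the value into var.
  call MPI_Bcast(var, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program bcast_ok
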
I'm using mpif90 with the Intel 10 Fortran compiler:
% mpif90 -compile_info
ifort -I/usr/mpi/qlogic//include/mpich/intel10/x86_64 -c
-I/usr/mpi/qlogic//include
If I don't specify any compiler options, the
compiler vectorises some loops:
% mpif90 p-grains1.f90
p-grains1.f90(123): (col. 1) remark: LOOP WAS VECTORIZED.

On Fri, Jul 16, 2010 at 05:20:53PM -0400, Prentice Bisbal wrote:
>
>
> Eugene Loh wrote:
> > Anton Shterenlikht wrote:
> >
> >> Will this bit of code work:
> >>
> >> if (rank .eq. ) then
> >>
> >> *change var*
> >>
> >> call MPI_Bcast(var, 1, MPI_INTEGER, rank, &
> >>
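
For what it's worth, the usual resolution of the question quoted above is that
the root argument must be a value every rank agrees on, and the MPI_Bcast call
must sit outside the if so that every rank executes it. A sketch under that
assumption (root 0 is a placeholder):

program bcast_root
  use mpi
  implicit none
  integer, parameter :: root = 0   ! placeholder; must be identical on all ranks
  integer :: rank, ierr, var

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  var = 0
  if (rank .eq. root) var = var + 1   ! *change var* on the root only

  ! Outside the if: all ranks call MPI_Bcast with the same root value.
  ! Passing the local variable 'rank' as the root would give each
  ! process a different root, which is erroneous.
  call MPI_Bcast(var, 1, MPI_INTEGER, root, MPI_COMM_WORLD, ierr)

  call MPI_Finalize(ierr)
end program bcast_root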