Hi,
I'm trying to do an MPI_ALLREDUCE with quadruple precision real and
MPI_SUM, and Open MPI does not give me the correct answer (vartemp
is equal to vartored instead of 2*vartored). Switching to double precision
real works fine.
My version of Open MPI is 1.2.7 and it has been compiled with ifort.
I dabble in Fortran but am not an expert -- is REAL(kind=16) the same
as REAL*16? MPI_REAL16 should be a 16 byte REAL; I'm not 100% sure
that REAL(kind=16) is the same thing...?
On Oct 23, 2008, at 7:37 AM, Julien Devriendt wrote:
> Hi,
> I'm trying to do an MPI_ALLREDUCE with quadruple precision real and
> MPI_SUM and Open MPI does not give me the correct answer [...]
I think the KINDs are compiler dependent. For Sun Studio Fortran,
REAL*16 and REAL(16) are the same thing. For Intel, maybe it's
different. I don't know. Try running this program:
      program kindcheck
        implicit none
        double precision xDP
        real(16) x16
        real*16 xSTAR16
        write(6,*) kind(xDP), kind(x16), kind(xSTAR16), kind(1.0_16)
      end program kindcheck
Yes it is: REAL(kind=16) = REAL*16 = 16 byte REAL in Fortran, or a
long double in C; that is why I thought MPI_REAL16 should work.
On Mon, 27 Oct 2008, Jeff Squyres wrote:
> I dabble in Fortran but am not an expert -- is REAL(kind=16) the same as
> REAL*16? MPI_REAL16 should be a 16 byte REAL; I'm not 100% sure [...]
Thanks for your suggestions.
I tried them all (declaring my variables as REAL*16 or REAL(16)) to no
avail. I still get the wrong answer with my call to MPI_ALLREDUCE.
> I think the KINDs are compiler dependent. For Sun Studio Fortran, REAL*16
> and REAL(16) are the same thing. For Intel, maybe it's different. [...]
Sorry, forgot to mention that running your sample program with ifort
produces the expected result:
8 16 16 16
> Thanks for your suggestions.
> I tried them all (declaring my variables as REAL*16 or REAL(16)) to no avail.
> I still get the wrong answer with my call to MPI_ALLREDUCE.
> [...]
I assume you've confirmed that point to point communication works
happily with quad prec on your machine? How about one-way reductions?
On Tue, 2008-10-28 at 08:47, Julien Devriendt wrote:
> Thanks for your suggestions.
> I tried them all (declaring my variables as REAL*16 or REAL(16)) to no avail. [...]
Yes, point to point communication is OK with quad prec., and one-way
reductions as well. I also tried my sample code on another platform
(which sports AMD Opterons instead of Intel CPUs) with the same compilers,
and get the same *wrong* results with the call to MPI_ALLREDUCE in quad
prec, so it does not look like a platform-specific problem.
Something odd is definitely going on here. I'm able to replicate your
problem with the Intel compiler suite, but I can't quite figure out
why -- it all works properly if I convert the app to C (and still use
the MPI_REAL16 datatype with long double data).
George and I are investigating; I'll follow up when we know more.