Thomas,
The struct idea makes perfect sense. Since you apparently have multiple
local_tlr_lookup elements, the current approach will certainly not work: as
you mentioned, the allocatable arrays do not have the same relative
displacements, and this prevents a single derived datatype from being
constructed correctly.
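One way to deal with that (a rough sketch, not tested: tlr_entry and the
module holding it are placeholder names, and wtlr is assumed to be a
contiguous 3x3 double precision array) is to build a fresh datatype for
every element from absolute addresses and communicate relative to
MPI_BOTTOM:

subroutine sync_tlr(local_tlr_lookup, root)
  use mpi
  use tlr_types, only: tlr_entry   ! hypothetical module holding the type
  implicit none
  type(tlr_entry), intent(inout) :: local_tlr_lookup(:)
  integer, intent(in) :: root
  integer :: i, ierr, newtype, blocklen(1), types(1)
  integer(kind=mpi_address_kind) :: disp(1)

  do i = 1, size(local_tlr_lookup)
     ! Address of the first element of the allocated data, not of the
     ! allocatable component itself.
     call mpi_get_address(local_tlr_lookup(i)%wtlr(1,1), disp(1), ierr)
     blocklen(1) = 9                      ! 3x3 contiguous matrix
     types(1) = mpi_double_precision
     call mpi_type_create_struct(1, blocklen, disp, types, newtype, ierr)
     call mpi_type_commit(newtype, ierr)
     ! disp holds an absolute address, so the buffer is MPI_BOTTOM.
     call mpi_bcast(mpi_bottom, 1, newtype, root, mpi_comm_world, ierr)
     call mpi_type_free(newtype, ierr)
  end do
end subroutine sync_tlr

Creating and freeing a datatype per element costs a few extra calls, but it
avoids relying on displacements that differ from element to element.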
Hi George,
Thanks for taking the time to look at my question! wtlr was a typo when I
was stripping things down for a smaller example... TLR should be a 3x3
matrix (long range dipole dipole tensor).
I'm trying to split up the computation of anywhere between 30k and 15M
individual dipole-dipole tensors across the nodes.
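The split itself is meant to be a simple block distribution of tensor
indices over the ranks, something like this (a sketch; ntensors and the
bounds here are illustrative, inside a program unit that has 'use mpi'):

  integer :: rank, nprocs, chunk, first, last, i, ierr
  integer :: ntensors   ! number of tensors, set elsewhere (30k .. 15M)

  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)
  chunk = (ntensors + nprocs - 1) / nprocs   ! ceiling division
  first = rank * chunk + 1
  last  = min(ntensors, first + chunk - 1)
  do i = first, last
     ! compute local_tlr_lookup(i)%wtlr for this rank's slice
  end do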
Thomas,
What exactly is 'local_tlr_lookup(1)%wtlr'?
I think the problem is that your MPI derived datatype uses the address of
the allocatable arrays instead of the address of the first element of
these arrays. As an example, instead of doing

call mpi_get_address(local_tlr_lookup(1)%wtlr, ...)

you should do

call mpi_get_address(local_tlr_lookup(1)%wtlr(1,1), ...)
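Spelled out, the usual pattern is something like this (a sketch; base,
addr, and disp are illustrative names, not from your code):

  integer(kind=mpi_address_kind) :: base, addr, disp(1)
  integer :: ierr

  ! Base address of the structure element itself.
  call mpi_get_address(local_tlr_lookup(1), base, ierr)
  ! Address of the first element of the allocated 3x3 array.
  call mpi_get_address(local_tlr_lookup(1)%wtlr(1,1), addr, ierr)
  ! Displacement of the data relative to the structure
  ! (or mpi_aint_diff(addr, base) with MPI-3).
  disp(1) = addr - base

The resulting displacement is only valid for the particular element whose
addresses were taken, which matters once there is more than one
local_tlr_lookup entry.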
Hi All,
I'm trying to parallelize my code by distributing the computation of
various elements of a lookup table and then syncing that lookup table
across all nodes. To make the code easier to read, and to keep track of
everything more easily, I've decided to use a derived data type in Fortran.
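Roughly, the type is along these lines (a sketch: only the wtlr component
and the local_tlr_lookup array appear in the thread; the type name and
precision are assumptions):

  type tlr_entry                                 ! placeholder name
     double precision, allocatable :: wtlr(:,:)  ! 3x3 long-range dipole-dipole tensor
  end type tlr_entry

  type(tlr_entry), allocatable :: local_tlr_lookup(:)

  allocate(local_tlr_lookup(ntensors))
  do i = 1, ntensors
     allocate(local_tlr_lookup(i)%wtlr(3,3))
  end do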
If it's the same with multiple released versions of Open MPI, it sounds like a
problem with your compiler, I'm afraid.
To be honest, I didn't try to make a representative va args test program that
uses it the same way OMPI does - I just whipped up a quick va args test.
Meaning: OMPI may well be exercising va args in a way my quick test did not.
On 12-03-2015 20:44, Jeff Squyres (jsquyres) wrote:
Gah; my mistake -- that va_end(fmt) should be va_end(list).
It works for me with gcc 4.9.1 and icc:
Intel(R) C Intel(R) 64 Compiler XE for applications running on Intel(R) 64,
Version 15.0.2.164 Build 20150121
Hi Jeff,
I've run some more tests.